\section{Introduction}
Globally optimizing a black-box function $f$, finding
\begin{equation}
\label{eq:bogoal}
x^* = \argmin_{x \in [0,1]^d} \ f(x),
\end{equation}
is a common problem in recommender systems \citep{Vanchinathan2014},
hyperparameter tuning \citep{Snoek2012}, inventory management
\citep{Hong2006}, and engineering \citep{Randall1981}. Here we explore a
robot pushing problem \citep{Kaelbling2017}. In such settings, $f$ is an
expensive to evaluate computer simulation, so one must carefully design an
experiment to effectively learn the function \citep{Sacks1989} and isolate its
local or global minima. Optimization via modeling and design has a rich
history in statistics \citep{box2007response,myers2016response}. Its modern
instantiation is known as Bayesian optimization
\citep[BO;][]{mockus1978application}.
In BO, one fits a flexible, non-linear and nonparametric response-surface model
to a limited campaign of example runs, obtaining a so-called {\em surrogate}
$\hat{f}$ \citep{Gramacy2020}. Based on that fit
-- and in particular its predictive equations describing uncertainty for
$f(x)$ at new locations $x$ -- one then devises a criterion, a so-called
\emph{acquisition function} \citep{Shahriari2016}, targeting desirable
qualities, such as $x^\star$ that minimizes $f$. One must choose a
surrogate family for $f$, pair it with a fitting scheme for
$\hat{f}$, and choose a criterion to solve for acquisitions.
BO is an example
of {\em active learning (AL)} where one attempts to sustain a virtuous
cycle of learning and data collection. Many good solutions exist in this
context, and we shall not provide an in-depth review here.
Perhaps the most common surrogate for BO
is the Gaussian Process
\citep[GP;][]{Sacks1989}. For a modern review, see
\citet{williams2006gaussian} or \citet{Gramacy2020}. The most popular
acquisition function is {\em expected improvement (EI)}
\citep{Jones1998}. EI balances exploration and exploitation by
suggesting locations with either high variance or low mean, or both. Greater
detail is provided in Section \ref{sec:REI}. EI is highly effective and has
desirable theoretical properties.
\begin{figure}[ht!]
\centering
\includegraphics[width=8.75cm,trim=0 45 0 60]{Adv_Init.pdf}
\caption{Surface for a 1d multimodal function shown with different
adversary levels.}
\label{fig:1D-Example}
\end{figure}
Our goal in this paper is to extend BO via GPs and EI to a richer, more
challenging class of optimization problems. In situations where there are a
multitude of competing, roughly equivalent local solutions to
Eq.~(\ref{eq:bogoal}), a practitioner may naturally express a preference for
ones which enjoy a wider {\em domain of attraction}; i.e., those whose
troughs are larger. For example, consider $f(x)$ defined in
Eq.~(\ref{eq:ryan}), provided later in Section \ref{sec:empcomp}, as
characterized by the black line in Figure \ref{fig:1D-Example}. It has three
local minima. The one around $x=0.9$ is significantly higher than the other two,
and the one around 0.55 has a narrow domain of attraction but lower objective
value compared to the one around 0.15. These features mean the robust minimum
is the local minimum around $x=0.15$.
Although BO via GPs/EI would likely explore both
domains of attraction to a certain extent {\em eventually}, it will in the near
term (i.e., for smaller simulation budgets) focus on the deeper/narrower one,
providing lower resolution on the solution than many practitioners prefer.
Finding/exploring the {\em robust} global solution space requires a
modification to both the problem and the BO strategy. Here we borrow a
framework first introduced in the math programming literature on robust
optimization \citep[e.g.,][]{Menickelly2020}: an {\em adversary}. Adversarial
reasoning is popular in the wider reinforcement learning literature
\citep{Huang2011}. To our knowledge the mathematical programming notion of an
adversary has never been deployed for BO, where it is perhaps best intuited as
a form of penalty on ``sharp'' local minima. Specifically, let $x^\alpha$
denote the $\alpha$-{\em neighborhood} of an input $x$. There are many ways
to make this precise depending on context. If $x$ is one-dimensional, then
$x^\alpha = [x-\alpha, x+\alpha] \equiv [x \pm \alpha]$ is sensible. In
higher dimension, one can generalize to an $\alpha$-ball or hyper-rectangle.
More specifics will come later. Relative to that $\alpha$-neighborhood, an
adversary $g(x,\alpha)$ and robust minimum $x^r$ may be defined
as follows:
\begin{equation}
\mbox{Let} \quad g(x, \alpha) =
\max_{x' \in x^\alpha} \ f(x') \quad \quad \mbox{so that} \quad \quad
x^r = \argmin_{x \in [0,1]^d} \ g(x, \alpha). \label{eq:robust}
\end{equation}
Figure \ref{fig:1D-Example} provides some examples for the $f$ introduced
earlier. Observe that the larger $\alpha$ is, the more ``flattened'' $g(x,
\alpha)$ is compared to $f(x)$. In particular, note how the penalization is
more severe in the sharper minimum compared to the shallow trough which is
only slightly higher than the original, unpenalized function.
The conventional, local approach to finding the robust solution $x^r$ is best
understood as an embellishment of classical local schemes inspired
by Newton's method \citep[e.g.,][]{Cheney1959}. First, extract derivative (and
adversarial) information nearby through finite differencing and evaluation at
the boundary of $x^\alpha$; then take small steps that descend the
(adversarial) surface. Such schemes offer tidy local convergence
guarantees but are profligate with evaluations. When $f$ is expensive, this
approach is infeasible. We believe the idea of building an adversary can be
ported to the BO framework, making better use of limited simulation resources
toward global optimization while still favoring wider domains of attraction.
The crux of our idea is as follows. A GP $\hat{f}_n$, that would typically be
fit to a collection of $n$ evaluations of $f$ in a BO framework, can be used to
define the adversarial realization of those same values following the fitted
analog of $g$: $\hat{g}(x_i, \alpha) = \max_{x \in x_i^\alpha} \hat{f}(x)$. A
second GP $\hat{f}^\alpha_n$ can be fit to those $\hat{g}(x_i,
\alpha)$-values, $i=1,\dots,n$. We call $\hat{f}^\alpha_n$ the {\em adversarial
surrogate}. One may then use $\hat{f}^\alpha_n$ as one would an ordinary
surrogate, $\hat{f}_n$, for example with EI guiding acquisitions. We call this
{\em robust expected improvement (REI)}. There are, of course, myriad details
and variations -- simplifications and embellishments -- that we are glossing
over here, and that we shall be more precise about in due course. The most
interesting of these may be how we suggest dealing with a practitioner's
natural reluctance to commit to a particular choice of $\alpha$ {\em a priori}.
Before discussing details, it is worth remarking that the term ``robust''
has many definitions across the statistical and optimization literature(s).
Our use of this term here is more similar to some than others. We do not mean
robust to outliers \citep{Martinez-Cantin2009} as may arise when noisy,
leptokurtic simulators $f$ are involved \citep{Beland2017} -- though we shall
have some thoughts on this setup later. Some refer to robustness as the choice
of random initialization of a local optimizer \citep{Taddy2009}. However our
emphasis is global. Our definition in Eq.~(\ref{eq:robust}), and its BO
implementation, bears some similarity to \citet{Oliveira2019} and to so-called
``unscented BO'' \citep{Nogueira2016}, where robustness over noisy or
imprecisely specified {\em inputs} is considered. However, our adversary
entertains a worst-case scenario \citep[e.g.,][]{Bogunovic2018}, rather than the
stochastic/expected-case. \citet{Marzat2016} also consider worst-case
robustness, but focus on so-called ``minimax'' problems where some dimensions
are perfectly controlled ($\alpha = 0$) and others are completely uncontrolled
($\alpha = 1$). Our definition of robustness -- $x^r$ in
Eq.~(\ref{eq:robust}) -- emphasizes a worst-case within the
$\alpha$ window. Nonetheless, these methods and their test cases make for
interesting empirical comparisons, as we demonstrate.
The rest of this paper continues as follows. Section \ref{sec:review} reviews
the bare essentials required to understand our proposed methodology. Section
\ref{sec:methods} introduces our proposed robust BO setup, via adversaries,
surrogate modeling and REI, with variations introduced and illustrations along
the way. Section \ref{sec:empirical} showcases empirical results via
comparison to those variations and to similar methodology from the recent BO
literature. Section \ref{sec:rob} then ports that benchmarking framework to
a robot pushing problem. Finally, Section \ref{sec:conc} concludes with a
brief discussion.
\section{Review}
\label{sec:review}
Here we review the basic elements of BO: GPs and EI.
\subsection {Gaussian process regression}
\label{sec:gp}
A GP may be used to model smooth relationships between inputs and outputs for
expensive black-box simulations as follows. Let $x_i \in [0, 1]^d$ represent
$d$-dimensional inputs and $y_i = f(x_i)$ be outputs, $i=1,\dots, n$. We
presume $y_i$ are deterministic realizations of $f$ at $x_i$, though the GP/BO
framework may easily be extended to noisy outputs \citep[see, e.g.,][Section
5.2.2 and 7.2.4]{Gramacy2020}. Collect inputs into $X_n$, an $n \times d$
matrix with rows $x_i^\top$, and outputs into column $n$-vector $Y_n$. These
are the training data $D_n = (X_n, Y_n)$. Then, model as $Y_n \sim
\mathcal{N}_n(\mu(X_n), \Sigma(X_n, X_n))$, i.e., presume outputs follow an
$n$-dimensional multivariate normal distribution (MVN).
Often, $\mu(X_n) = 0$ is sufficient for coded inputs and outputs (i.e., after
centering and normalization), moving all of the modeling ``action'' into the
covariance structure $\Sigma(\cdot, \cdot)$, which is defined through the choice
of a positive-definite, distance-based {\em kernel}. Here, we prefer a
squared exponential kernel, but others such as the Mat\'{e}rn \citep{Stein1999,
Abrahamsen1997} are common. This choice is not material to our presentation;
both specify that the function $f$, via $x_i$ and $y_i$, varies smoothly,
with correlation increasing as inputs get closer together. Details can be found in
\citet[][Section 5.3.3]{Gramacy2020}. Specifically, we fill the covariance
structure $\Sigma_{ij} \equiv \Sigma(X_n, X_n)_{ij}$ via
\begin{equation}
\label{eq:cov}
\Sigma_{ij} = \tau^2 \left(\textrm{exp}\left(-\frac{||x_i - x_j||^2}
{\theta}\right) + \epsilon\mathbb{I}_{\{i=j\}}\right).
\end{equation}
The structure in Eq.~(\ref{eq:cov}) is hyperparameterized by $\tau^2$ and
$\theta$. Let $\hat{\tau}^2$ and $\hat{\theta}$ be hyperparameter estimates
obtained by maximizing the MVN log likelihood. Scale $\tau^2$
captures the vertical spread between the peaks and valleys of $f$. Lengthscale $\theta$
captures how quickly the function changes direction. Larger $\theta$ means the
correlation decays more slowly, leading to flatter functions. Observe that
$\epsilon$-jitter \citep{neal1998regression} is added to the diagonal to
ensure numerical stability when decomposing $\Sigma$.
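To make the kernel concrete, the base {\sf R} sketch below fills a covariance
matrix following Eq.~(\ref{eq:cov}) for inputs stored row-wise in matrices.
It is for exposition only; our actual implementation uses \texttt{laGP}
subroutines [Section \ref{sec:implement}], and the helper name \texttt{cov.se}
is our own.
\begin{verbatim}
## squared exponential covariance, cf. eq:cov; base R, no packages
cov.se <- function(X1, X2, tau2 = 1, theta = 0.25, eps = 1e-8) {
  ## squared Euclidean distances between rows of X1 and rows of X2
  D2 <- outer(rowSums(X1^2), rowSums(X2^2), "+") - 2 * X1 %*% t(X2)
  K <- tau2 * exp(-pmax(D2, 0) / theta)
  if (identical(X1, X2))            ## epsilon-jitter on the diagonal
    K <- K + tau2 * eps * diag(nrow(X1))
  K
}
\end{verbatim}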
Working with MVNs lends a degree of analytic tractability to many statistical
operations, in particular for conditioning \citep[e.g.,][]{Kalpic2011} as
required for prediction. Let $\mathcal{X}$, an $n_p \times d$ matrix, store
inputs for ``testing.'' Then, the conditional distribution for
$Y(\mathcal{X})$ given $D_n = (X_n, Y_n)$ is also MVN and has a convenient
closed form:
\begin{align}
\label{eq:GPpreds}
&Y(\mathcal{X}) \mid D_n \sim \mathcal{N}_{n_p}(\mu_n(\mathcal{X}),
\Sigma_n(\mathcal{X}))\\
\mbox{where } \quad &\mu_n(\mathcal{X}) = \Sigma(\mathcal{X},
X_n)\Sigma^{-1}(X_n, X_n)Y_n\nonumber \\
\mbox{and } \quad &\Sigma_n(\mathcal{X}) = \Sigma(\mathcal{X}, \mathcal{X})
- \Sigma(\mathcal{X}, X_n)\Sigma^{-1}(X_n, X_n)\Sigma(X_n,
\mathcal{X}), \nonumber
\end{align}
where $\Sigma(\cdot, \cdot)$ extends $\Sigma_{ij} \equiv \Sigma(x_i, x_j)$ in
Eq.~\eqref{eq:cov} to the rows of $\mathcal{X}$ and between $\mathcal{X}$ and
$X_n$. The diagonal of the $n_p \times n_p$ matrix $\Sigma_n(\mathcal{X})$
provides pointwise predictive variances which may be denoted as
$\sigma_n^2(\mathcal{X})$. Later, we shall use $\text{GP}_{\hat{\theta}}(D_n)$
to indicate a fitted GP surrogate $\hat{f}_n$ to data $D_n = (X_n, Y_n)$
emitting predictive equations $(\mu_n(\cdot), \sigma_n^2(\cdot))$ as in
Eq.~(\ref{eq:GPpreds}), conditioned on estimated hyperparameters
$\hat{\theta}_n$. Note that we are streamlining the notation here
somewhat and subsuming $\hat{\tau}^2$ into $\hat{\theta}$. For more
information on GPs, see \cite[Chapter 5]{Gramacy2020} or \cite{Santner2018}.
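The predictive equations (\ref{eq:GPpreds}) are similarly compact in {\sf R},
assuming the \texttt{cov.se} sketch above; a direct \texttt{solve} is used
here for clarity, though a Cholesky decomposition would be preferred
numerically.
\begin{verbatim}
## predictive mean and pointwise variance, cf. eq:GPpreds
gp.pred <- function(XX, X, Y, tau2 = 1, theta = 0.25, eps = 1e-8) {
  Ki <- solve(cov.se(X, X, tau2, theta, eps))   ## Sigma(X_n, X_n)^{-1}
  kX <- cov.se(XX, X, tau2, theta, eps)         ## Sigma(XX, X_n)
  mu <- drop(kX %*% Ki %*% Y)                   ## mu_n(XX)
  S  <- cov.se(XX, XX, tau2, theta, eps) - kX %*% Ki %*% t(kX)
  list(mean = mu, s2 = pmax(diag(S), 0))        ## sigma^2_n(XX), pointwise
}
\end{verbatim}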
\subsection {Expected improvement}
\label{sec:ei}
Bayesian optimization (BO) seeks a global minimum (\ref{eq:bogoal}) under a
limited experimental budget of $N$ runs. The idea is to proceed sequentially,
$n = n_0, \dots, N$, and in each iteration $n$ make a {\em greedy} selection
of the next, $n+1^\mathrm{st}$ run, $x_{n+1}$, based on solving an acquisition
function tied to a surrogate fit to data $D_n$. The initial $n_0$-sized
design could be space-filling, for example with a Latin hypercube sample (LHS)
or maximin design \citep[Chapter 17]{Dean2015} or, as some have argued
\citep{zhang2018distance}, purely at random. GPs have emerged as the
canonical surrogate model. Although there are many acquisition functions in
the literature tailored to BO via GPs, expected improvement (EI) is perhaps the
most popular. EI may be described as follows.
Let $f^{\min}_n = \min(y_1, \dots, y_{n})$ denote the ``best observed
value'' (BOV) found so far, after the first $n$ acquisitions ($n_0$ of which
are space-filling/random). Then, define the {\em improvement} at input
location $x$ as $I(x) = \max\{0, f^{\min}_n - Y(x)\}$. $I(x)$ is a random
variable, inheriting its distribution from $Y(x) \mid D_n$. If $Y(x)$ is
Gaussian, as it is under $\text{GP}_{\hat{\theta}}(D_n)$ via
Eq.~(\ref{eq:GPpreds}), then the expectation of $I(x)$ over the distribution
of $Y(x) \mid D_n$ has a closed form:
\begin{equation}
\label{eq:ei}
\mathrm{EI}_n(x) = \mathbb{E}\{I(x) \vert D_n\} = (f^{\min}_n -
\mu_{n}(x))\Phi\left(\frac{f_{n}^{\min}
- \mu_{n}(x)}{\sigma_{n}(x)}\right) +
\sigma_{n}(x)\phi\left(\frac{f_n^{\min} -
\mu_{n}(x)}{\sigma_{n}(x)}\right),
\end{equation}
where $\Phi$ and $\phi$ are the standard Gaussian CDF and PDF, respectively.
Notice how EI is a weighted combination of mean $\mu_n(x)$ and uncertainty
$\sigma_n(x)$, trading off ``exploitation and exploration.'' The first term of
the sum is high when $\mu_n(x)$ is much lower than $f_n^{\min}$,
while the second term is high when the GP has high uncertainty at $x$.
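As an illustrative sketch, EI in Eq.~(\ref{eq:ei}) may be coded on top of the
\texttt{gp.pred} helper above; in our experiments we instead use the add-on
code of \citet[][Chapter 7.2.2]{Gramacy2020}, and the names below are our own.
\begin{verbatim}
## expected improvement, cf. eq:ei, over rows of XX
ei <- function(XX, X, Y, theta = 0.25) {
  p <- gp.pred(XX, X, Y, theta = theta)
  fmin <- min(Y)                                ## best observed value (BOV)
  s <- sqrt(p$s2)
  z <- (fmin - p$mean) / s
  out <- (fmin - p$mean) * pnorm(z) + s * dnorm(z)
  out[s <= 0] <- 0                              ## no uncertainty, no improvement
  out
}
## e.g., acquisition over a dense 1d candidate grid:
## xgrid <- matrix(seq(0, 1, length.out = 1000), ncol = 1)
## xnew <- xgrid[which.max(ei(xgrid, X, Y)), , drop = FALSE]
\end{verbatim}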
\begin{figure}[ht!]
\includegraphics[width=8cm,trim=0 25 0 50]{EI_Init.pdf}
\includegraphics[width=8cm,trim=0 25 0 50]{EI_Nums.pdf}
\caption{Left: EI-based (EGO) acquisition with $n_0 = 10$ initial points as
black squares and 10 acquisitions as blue circles using EGO on
the 1D example function from Eq.~(\ref{eq:ryan}) with initial GP fit in
blue. Right: EI using the initial design at different locations of $x$.}
\label{fig:EI-Example}
\end{figure}
The {\em right} panel of Figure \ref{fig:EI-Example} shows an EI surface
implied by the GP surrogate (blue) shown in the {\em left} panel. This fit is
derived from $n=10$ training data points (black squares) sampled from our
$f(x)$ from Figure \ref{fig:1D-Example}. We will discuss the blue circles
momentarily. Observe how EI is high in the troughs of the two local minima, near
$x = 0.15$ and $x = 0.55$, but away from the training data. The total volume
(area under the curve) of EI is larger around the left, shallower trough, but
the maximal EI location is in the right, spiky trough. This ($x$ near 0.55) is
where EI recommends choosing the next, $n+1^\mathrm{st}$ acquisition.
Operationalizing that process, i.e., numerically solving for the next
acquisition involves its own, ``inner'' optimization: $x_{n+1} = \argmax_{x
\in [0,1]^d} \; \mathrm{EI}_n(x)$. This can be challenging because EI is
multi-modal. Observe that while $f$ has only two deep local minima, EI, as shown
for $n=10$ in Figure \ref{fig:EI-Example}, has three local maxima. So in some
sense the inner optimization is harder than the ``outer'' one. As $n$ grows,
the number of local EI maxima can grow. A numerical optimizer such as BFGS
\citep{Byrd1995} is easily stuck, necessitating a multi-start scheme
\citep{Burden1989}. Another common approach is to use a discrete space-filling
set of candidates, replacing a continuous search for $x \in [0,1]^d$ with a
discrete one for $x \in
\mathcal{X}_{\mathrm{cand}}$. Hybrid and smart-candidate schemes are also
popular \citep{Scott2011, Gramacy2022}. In general, we can afford a
comprehensive effort toward solving the ``inner'' optimization because the
objective, derived from $\text{GP}_{\hat{\theta}}(D_n)$, is cheap -- especially
compared to the black-box $f(\cdot)$. Repeated application of EI toward
selecting $x_{n+1}$ is known as the {\em efficient global optimization}
algorithm \citep[EGO,][]{Jones1998}.
The blue circles in Figure \ref{fig:EI-Example} indicate how ten further
acquisitions play out, i.e., $10 = n_0, \dots, n, \dots, N=20$ in
EGO. Notice that the resulting training data set $D_N$, combining both black
and blue points, concentrates acquisitions in the ``tip'' of the spiky, right
trough. The rest of the space, including the shallower left trough, is more
uniformly (and sparsely) sampled. Our goal is to concentrate more acquisition
effort in the shallower trough.
\section{Proposed methodology}
\label{sec:methods}
Here we extend the EGO algorithm by incorporating adversarial thinking. The
goal is to find $x^r$ from Eq.~(\ref{eq:robust}) for some $\alpha$. We begin
by presuming a fixed, known $\alpha$ selected by a practitioner.
\subsection {The adversarial surrogate}
\label{sec:adv}
If $\hat{f}_n(x)$ is a surrogate for $f(x)$, then one may analogously notate
$\hat{f}_n^\alpha(x)$ as a surrogate for $g(x, \alpha)$, an adversarial
surrogate. We envision several ways in which $\hat{f}_n^\alpha(x)$ could be
defined in terms of $\hat{f}_n(x)$, but not many which are tractable to work
with analytically or numerically. For example, suppose $Y^\alpha(x) =
\max_{x' \in x^\alpha} Y(x')$, where $Y(x') \sim \mathcal{N}(\mu_n(x'),
\sigma^2_n(x'))$, a random variable whose distribution follows the spirit
of Eq.~(\ref{eq:robust}), but uses the predictive equations of
Eq.~(\ref{eq:GPpreds}). The distribution of $Y^\alpha(x)$ could be a version of
$\hat{f}_n^\alpha(x)$, at least notionally. However a closed form remains
elusive.
Instead, it is rather easier to define $\hat{f}_n^\alpha(x)$ as an ordinary
surrogate trained on data derived through adversarial reasoning on the
original surrogate $\hat{f}_n(x)$. Let $Y_n^\alpha$ denote these {\em
adversarial responses}, where each $y_i^\alpha$, for $i=1,\dots,n$, follows
\begin{equation}
\label{eq:advy}
y_i^\alpha = \max_{x \in x^\alpha_i} \; \mu_n(x) \quad \mbox{where
$x_i^\alpha$ is the $\alpha$-neighborhood of the $i^\mathrm{th}$ entry of
$X_n$, as usual.}
\end{equation}
There are many sensible choices for how to find $y_i^\alpha$ numerically in
practice. Newton-based optimizers, e.g., BFGS \citep{Byrd1995}, could
leverage closed form derivatives for $\mu_n(x)$, or utilize finite
differencing. A simpler option that works well is to instead take
$y_i^\alpha = \max_{x \in \mathcal{B}_\alpha(x_i)} \mu_{n}(x)$, where
$\mathcal{B}_{\alpha}(x)$ is the discrete set of points on the corners of the box
extending $\alpha$ units from $x$ in each coordinate direction. Details for our own
implementation are deferred to Section \ref{sec:implement}.
Given $Y_n^\alpha$, an adversarial surrogate may be built by
modeling {\em adversarial data} $D_n^\alpha = (X_n, Y_n^\alpha)$, i.e., the
adversarial responses paired with their original inputs, as a GP. Let
$\hat{\theta}_\alpha$ denote hyperparameter estimates for GP surrogate
$\hat{f}_n^\alpha(x)$. Fill in the covariance matrix following
Eq.~(\ref{eq:cov}), $\Sigma^\alpha(\cdot, \cdot)$. Finally, define the
{\em adversarial surrogate} as
\begin{equation}
\label{eq:advsurr}
\hat{f}_n^\alpha(x) \equiv \text{GP}_{\hat{\theta}_\alpha}(D_n^\alpha) \rightarrow (\mu_n^\alpha(\cdot), \sigma_n^{2\alpha}(\cdot))
\end{equation}
via novel hyperparameter estimates $\hat{\theta}_\alpha$ using $Y_n^\alpha$
rather than $Y_n$ with predictive equations akin to (\ref{eq:GPpreds}). Since
the $Y_n^\alpha$ are the original surrogate's ($\hat{f}_n$) estimate of
adversarial response values according to $g(\cdot, \alpha)$, $\hat{f}_n^\alpha$ may serve
as a surrogate for $g(x,\alpha)$ from the left half of Eq.~(\ref{eq:robust}).
\begin{figure}[ht!]
\centering
\includegraphics[width=8cm,trim=0 20 0 50]{Adv_Surr.pdf}
\includegraphics[width=8cm,trim=0 20 0 50]{REI_Nums.pdf}
\caption{Left: The same black and blue lines from Figure
\ref{fig:EI-Example}. In red, $Y_n^\alpha$ are shown as the points with
$\hat{f}_n^\alpha(x)$ as the curve. Right: Robust expected improvement for
the current design.}
\label{fig:Adv_Surr}
\end{figure}
As a valid GP, $\hat{f}_n^\alpha$ can be used in any way another surrogate
might be deployed downstream. For example, $\hat{f}_n^\alpha$ may be used to
acquire new runs via EI, but hold that thought for a moment. To illustrate
$\hat{f}_n^\alpha$, the left panel of Figure \ref{fig:Adv_Surr} augments the
analogous panel in Figure \ref{fig:EI-Example} to include a visual of
$\hat{f}_n^\alpha$ via $\mu_n^\alpha$ and error-bars $\mu_n^\alpha \pm
\sigma_n^\alpha$. Perhaps the most notable takeaway from this graph is that
the new, red lines are much higher near the sharp minimum (near $x=0.55$), but
less dramatically elevated for the wider, dull trough near $x=0.2$.
Also observe in Figure \ref{fig:Adv_Surr} that, whereas for the most part
$\mu_n(x) \leq \mu^\alpha_n(x)$, this reverses for a small portion of the input
domain near $x=0.2$. That would never happen when comparing $f(x)$ to $g(x,
\alpha)$ via Eq.~(\ref{eq:robust}). This happens for our
surrogates -- original $\hat{f}_n$ and adversarial $\hat{f}_n^\alpha$ -- for
two reasons. One is
that both are stationary processes because their covariance structure
(\ref{eq:cov}) uses only relative distances, resulting in a compromise
surface between sharp and dull. Although this is
clearly a mismatch to our data-generating mechanism, we have not found any
downsides in our empirical work. The possibility of improved
performance for non-stationary surrogates is entertained in Section
\ref{sec:conc}. The other reason is that the adversarial data
$Y_n^\alpha$, whether via a stationary or (speculatively) a non-stationary
surrogate $\hat{f}_n$, provide a low-resolution view of the true adversary
$g(x, \alpha)$ when $n$ is small. Consequently, $\hat{f}_n^\alpha$ is a
crude approximation to $g(x, \alpha)$, but that will improve with
more acquisitions (larger $n$). What is most important is the EI surface(s)
that the surrogate(s) imply. These are shown in the right panel of Figure
\ref{fig:Adv_Surr}, and discussed next.
\subsection {Robust expected improvement}
\label{sec:REI}
{\em Robust expected improvement} (REI), $\mathrm{EI}_n^\alpha(\cdot)$, is the
analog of EI in Eq.~(\ref{eq:ei}) except using $(\mu_n^\alpha(\cdot),
\sigma_n^{2\alpha}(\cdot))$, towards solving the mathematical program given
in the right half of Eq.~(\ref{eq:robust}). Let $f_n^{\alpha,\min} =
\min(y_1^\alpha, \dots, y_{n}^\alpha)$ denote the best estimated adversarial
response (BEAR). Then, let
\begin{equation}
\label{eq:advei}
\text{EI}^\alpha_n(x) = \mathbb{E}\{I(x) \mid D_n^\alpha\} =
(f_{n}^{\alpha,\min} - \mu_{n}^\alpha(x))\Phi\!\left(\frac{f_{n}^{\alpha,\min} - \mu_{n}^\alpha(x)}{\sigma_{n}^\alpha(x)}\right) +
\sigma_{n}^\alpha(x)\phi\!\left(\frac{f_{n}^{\alpha,\min} -
\mu_{n}^\alpha(x)}{\sigma_{n}^\alpha(x)}\right).
\end{equation}
The right panel of Figure \ref{fig:Adv_Surr} shows the REI surface for the same
initial setup as Figure \ref{fig:EI-Example}. We see only one peak,
located at the shallower, wider trough compared to three peaks in the EI
surface [Figure \ref{fig:EI-Example}]. Generally speaking, REI surfaces have
fewer local maxima compared to EI surfaces because $\hat{f}_n^\alpha(x)$
smooths over the peaked regions. Thus the inner optimization of REI often has
fewer local optima, generally requiring fewer multi-starts.
\bigskip
\begin{algorithm}[H]
\DontPrintSemicolon
\textbf{input} $D_{n} = (X_{n}, Y_{n})$, $x$, and $\alpha$\\
$\hat{f}_n(x) = \text{GP}_{\hat{\theta}}(D_n)$ \tcp*{with predictive moments
in Eq.~(\ref{eq:GPpreds})}
\For{i = 1, $\dots$, $n$} {
$y_i^\alpha$ = ${\max_{x' \in x^\alpha_i}} \ \mu_n(x')$
\tcp*{adversarial responses, (\ref{eq:advy})}
}
$D_n^\alpha = (X_n, Y_n^\alpha) = (X_n, \{y_i^\alpha\}_{i=1}^n)$\;
$\hat{f}^\alpha_n(x) = \text{GP}_{\hat{\theta}_\alpha}(D_n^\alpha)$
\tcp*{the adversarial surrogate, (\ref{eq:advsurr})}
\Return $(\text{EI}_n^\alpha(x))$ \tcp*{EI using
$\hat{f}^\alpha_n(x)$ from the previous line, (\ref{eq:advei})}
\caption{Robust Expected Improvement}
\label{alg:REI_Fixed}
\end{algorithm}
\noindent
\bigskip
REI is summarized succinctly in Eq.~(\ref{eq:advei}) above, but it is
important to appreciate that it is a result of a multi-step process.
Alg.~\ref{alg:REI_Fixed} provides the details: fit an ordinary surrogate, which
is used to create adversarial data, in turn defining the adversarial surrogate
upon which EI is evaluated. The algorithm is specified for a particular
reference location $x$, used toward solving $x_{n+1} = \argmax_{x \in [0,1]^d}
\; \text{EI}_n^\alpha(x)$. It may be applied identically for any $x$. In a
numerical solver it may be advantageous to cache quantities unchanged in $x$.
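To fix ideas, the {\sf R} sketch below transliterates
Alg.~\ref{alg:REI_Fixed} over a matrix \texttt{XX} of reference locations,
re-using the \texttt{gp.pred} sketch from Section \ref{sec:gp}. For
simplicity it evaluates the adversary over box corners only and re-uses the
original lengthscale $\theta$ rather than re-estimating
$\hat{\theta}_\alpha$; see Section \ref{sec:implement} for the refinements we
actually deploy. It is written for clarity, not efficiency.
\begin{verbatim}
## robust EI, cf. alg:REI_Fixed, at reference locations XX (a matrix)
rei <- function(XX, X, Y, alpha, theta = 0.25) {
  n <- nrow(X); d <- ncol(X)
  corners <- as.matrix(expand.grid(rep(list(c(-alpha, alpha)), d)))
  Ya <- rep(NA, n)
  for (i in 1:n) {                            ## adversarial responses, eq:advy
    Xi <- pmin(pmax(sweep(corners, 2, X[i, ], "+"), 0), 1)
    Ya[i] <- max(gp.pred(Xi, X, Y, theta = theta)$mean)
  }
  pa <- gp.pred(XX, X, Ya, theta = theta)     ## adversarial surrogate, eq:advsurr
  fmin <- min(Ya)                             ## BEAR
  s <- sqrt(pa$s2)
  z <- (fmin - pa$mean) / s
  out <- (fmin - pa$mean) * pnorm(z) + s * dnorm(z)   ## REI, eq:advei
  out[s <= 0] <- 0
  out
}
\end{verbatim}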
Just as repeated acquisition via EI, over $n=n_0,\dots,N - 1$, is known as EGO,
we dub the repeated application of REI toward finding a robust minimum
{\em robust efficient global optimization} (REGO). REGO involves a loop over
Alg.~\ref{alg:REI_Fixed}, with updates $D_n
\rightarrow D_{n+1}$ after each acquisition. The final data set, $D_N = (X_N,
Y_N)$ provides insight into both $f(x)$ and $g(x,\alpha)$ and their minima.
Whereas EGO would report BOV $f_N^{\min}$, and/or the corresponding element
$x^\star_{\mathrm{bov}}$ of $X_N$, REGO would report the BEAR
$f_N^{\alpha,\min}$, the same quantity used to define REI in
Eq.~(\ref{eq:advei}), and/or input $x^\star_{\mathrm{bear}}$.
\subsubsection*{Post hoc adversarial surrogate}
To quantify the advantages of REI/REGO over EI/EGO in our
empirical work of Sections \ref{sec:empirical}--\ref{sec:rob}, we consider
a {\em post hoc adversarial surrogate}. This is the surrogate constructed after
running all acquisitions, $n_0 + 1,\dots,N$ via EI/EGO, then at $N$ fitting an
adversarial surrogate Eq.~(\ref{eq:advsurr}), and extracting the BEAR
$f_N^{\alpha, \min}$ rather than the BOV. In other words, the last step is
faithful to the adversarial goal, whereas active learning aspects ignore it
and proceed as usual. Whereas the BOV from EGO can be a poor approximation to
the robust optimum $f(x^r)$ of Eq.~(\ref{eq:robust}), the BEAR from a post hoc
adversarial surrogate can potentially be better. Comparing two BEAR solutions,
one from REGO and one from a post hoc EGO surrogate, allows us to separate
the value of REI acquisitions from that of post hoc adversarial surrogates,
$\hat{f}_N^\alpha$.
\begin{figure}[ht!]
\centering
\includegraphics[width=9.8cm,trim=0 43 0 30]{EI_Fin.pdf}
\includegraphics[width=7cm,trim=2 22 0 35]{REI_Fin.pdf}
\caption{Left: Surrogate and post hoc surrogate fits after EGO acquisitions
($N=20$ and $n_0 = 10$) on Eq.~(\ref{eq:ryan}). Right: Same except
REGO where BOV and BEAR are the same.}
\label{fig:posthoc}
\end{figure}
To illustrate on our running example, consider the outcome of EGO,
from the left panel of Figure \ref{fig:EI-Example}, recreated in
the left panel of Figure \ref{fig:posthoc}. In both, the union of black and
blue points comprise $D_N$. In Figure \ref{fig:posthoc} the dashed blue
curve provides the post hoc adversarial surrogate, $\hat{f}_N^{\alpha}$, from
these runs, with the BOV and BEAR indicated as magenta diamonds and triangles,
respectively. Notice that for EGO, the BOV is around the global,
peaked minimum, but the BEAR captures the shallow, robust minimum. The
analogous REGO run, via REI acquisitions (red circles) and adversarial
surrogate (red lines) is shown in the right panel of Figure \ref{fig:posthoc}.
Here BEAR and BOV estimates are identical since REGO does not explore the
peaked minimum.
Looking at both panels of Figure \ref{fig:posthoc}, the BEAR offers a robust
solution for both EGO and REGO. However, EGO puts only one of its acquisitions
in the wide trough compared to four from REGO. Consequently, REGO's BEAR
offers a more accurate estimate for $x^r$. With only $N=20$ points in 1d, any
sensible acquisition strategy should yield a decent meta-model for $f(x)$ and
$g(x, \alpha)$, so we may have reached the limits of the utility of this
illustrative example. In Sections \ref{sec:empirical}--\ref{sec:rob}
we intend to provide more compelling evidence that REGO is better at targeting
robust minima. Before turning to that empirical study, we introduce two REI
variations that feature in some of those examples.
\subsubsection*{Unknown and vector-valued $\alpha$}
Until this point we have presumed fixed, scalar $\alpha$, perhaps specified by
a practitioner. In the case of inputs $x$ coded to the unit cube $[0,1]^d$, a
choice of $\alpha = 0.05$, say, represents a robustness specification of 5\%
in each coordinate direction (totaling 10\% of each dimension), i.e., where
$x^\alpha \equiv [ x_j \pm \alpha : j=1,\dots, d]$. This intuitive
relationship can be helpful in choosing $\alpha$, however one may naturally
wish to be ``robust'' to this specification. Suppose instead we had an
$\alpha_{\max}$ in mind, where $0 \leq \alpha \leq
\alpha_{\max}$. One option is to simply run REGO with $\alpha \leftarrow
\alpha_{\max}$, i.e., basing all REI acquisitions on $\alpha_{\max}$ instead.
Although we do not show this to save space, performance (via BEAR) of
$\alpha_{\max}$ relative to a nominal $\alpha$ rapidly deteriorates as the gap
between $\alpha_{\max}$ and $\alpha$ widens.
A better method is to base REI acquisitions on {\em all} $\alpha$-values
between zero and $\alpha_{\max}$ by integrating over them:
$\overline{\mathrm{EI}}^{\alpha_{\max}}_n = \int_0^{\alpha_{\max}}
\text{EI}^\alpha_n(x) \; d F(\alpha)$, say over uniform measure $F$. We are
not aware of an analytically tractable solution, in part because embedded in
$\text{EI}^\alpha_n(x)$ are complicated processes such as determining
adversarial data, and surrogates fit thereupon. But quadrature is relatively
straightforward, either via Monte Carlo (MC) with $\alpha^{(t)} \sim F$, for
$t=1,\dots,T$, or over a grid $\{0 = \alpha^{(1)}, \dots,
\alpha^{(T)} = \alpha_{\max}\}$. Define
\begin{equation}
\overline{\mathrm{EI}}^{\alpha_{\max}}_n \approx \frac{1}{T} \sum_{t=1}^T
\text{EI}^{\alpha^{(t)}}_n(x)
\quad \mbox{ for the former,} \quad \mbox{or} \quad
\overline{\mathrm{EI}}^{\alpha_{\max}}_n \approx \frac{1}{T} \sum_{t=1}^T
\text{EI}^{\alpha^{(t)}}_n(x) F(\alpha^{(t)}),
\label{eq:sumei}
\end{equation}
for the latter. Both may be implemented by looping over
Alg.~\ref{alg:REI_Fixed} with different $\alpha^{(t)}$ values, and then
averaging to take $x_{n+1} = \argmax_{x\in[0,1]^d}
\overline{\text{EI}}^{\alpha_{\max}}_n$. Both approximations
improve for larger $T$.
With random $\alpha^{(t)} \sim F$, even a single draw
($T=1$) suffices as an unbiased estimator -- though clearly with high
variance. This has the advantage of a simpler implementation, and faster
execution as no loop over Alg.~\ref{alg:REI_Fixed} is required. In our
empirical work to follow, we refer to this simpler option as the ``rand''
approximation, and the others (MC or grid-based) as ``sum''. It is
remarkable how similarly these options perform, relative to one another and to
a nominal REI with a fixed $\alpha$-value. One could impose $\alpha_{\min} >
0$ similarly, though we do not entertain this variation here.
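In code, the ``sum'' approximation of Eq.~(\ref{eq:sumei}) amounts to looping
the \texttt{rei} sketch above over a grid of $\alpha$-values and averaging;
the ``rand'' variation simply passes a single fresh draw
$\alpha \sim \mathrm{Unif}(0, \alpha_{\max})$ instead. The sketch below
assumes the hypothetical helpers introduced earlier.
\begin{verbatim}
## "sum" approximation to the alpha-averaged REI, cf. eq:sumei;
## XX is a matrix of candidate locations
rei.sum <- function(XX, X, Y, alpha.max = 0.2, T = 5, theta = 0.25) {
  alphas <- seq(0, alpha.max, length.out = T)   ## grid of alpha values
  eis <- sapply(alphas, function(a) rei(XX, X, Y, alpha = a, theta = theta))
  rowMeans(eis)                                 ## average the T REI surfaces
}
## "rand" variation: rei(XX, X, Y, alpha = runif(1, 0, alpha.max))
\end{verbatim}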
As a somewhat orthogonal consideration, it may be desirable to entertain
different levels of robustness -- different $\alpha$-values $\{\alpha_1,
\dots, \alpha_d\}$ -- in each of the $d$ coordinate directions. The only change
this imparts on the description above is that now $x^\alpha = [ x_j \pm
\alpha_j : j=1,\dots, d]$, a hyperrectangle rather than
hypercube. Other quantities such as $g(x, \alpha)$ and
$\text{EI}_n^\alpha(x)$ remain as defined earlier with the understanding of a
vectorized $\alpha = \{\alpha_1, \dots, \alpha_d\}$ under the
hood. Using vectorized $\alpha_{\max}$ in Eq.~(\ref{eq:sumei}) requires
draws from a multivariate $F$, which may still be uniform, or a higher
dimensional grid of $\alpha^{(t)}$, recognizing that we are now approximating a
higher dimensional integral.
\section{Implementation and benchmarking}
\label{sec:empirical}
Here we describe our implementation and its variations, the
methods of our main competitors, and our evaluation metrics, ultimately
providing a suite of empirical comparisons.
\subsection{Implementation details}
\label{sec:implement}
Our main methods (REI/REGO, variations and special cases) are coded in
\textsf{R} \citep{R2021}. These codes may be found, alongside those of our
comparators and all empirical work in this paper, in our public Git
repository:
\url{https://bitbucket.org/gramacylab/ropt/}.
GP surrogate
modeling for our new methods -- our competitors may leverage different
subroutines/libraries -- is provided by the \texttt{laGP} package
\citep{Gramacy2016}, on CRAN. Those subroutines, which are primarily
implemented in {\sf C}, leverage squared exponential covariance structure
(\ref{eq:cov}), and provide analytic $\hat{\tau}^2$ conditional on lengthscale
$\theta$. In our experiments, $\theta$ is fixed to appropriate values in
order to control MC variation that otherwise arises in
repeated MLE-updating of $\hat{\theta}$, especially in small-$n$ cases. More
details are provided as we introduce our test cases, with further discussion in
Section \ref{sec:conc}. Throughout we use $\epsilon = 10^{-8}$ in
Eq.~(\ref{eq:cov}), as appropriate for deterministic blackbox objective
function evaluations. For ordinary EI calculations we use add-on code
provided by \citet[][Chapter 7.2.2]{Gramacy2020}. All empirical work was
conducted on an 8-core hyperthreaded Intel i9-9900K CPU at 3.60 GHz with Intel
MKL linear algebra subroutines.
Section \ref{sec:adv} introduced several possibilities for solving
$y_i^\alpha$, whose definition is provided in Eq.~(\ref{eq:advy}). Although
numerical optimization, e.g. BFGS, is a gold standard, in most cases we
found this to be overkill, resulting in high runtimes for all $(N - n_0)$
acquisitions with no real improvements in accuracy over the following, far
simpler alternative. We prefer quickly optimizing over the discrete set
$\mathcal{B}_\alpha(x)$ comprising the corners of a box extending $\alpha$-units out
in each coordinate direction from $x$, or its vectorized analog as described
in Section \ref{sec:REI}. This ``cornering'' alternative occasionally yields
$y_i^\alpha < y_i$, which is undesirable, but this is easily mitigated by
augmenting the set $\mathcal{B}_\alpha(x)$ to contain a small number of
intermediate, grid locations in each coordinate direction. Using an odd
number of such intermediate points ensures $y_i^\alpha \geq y_i$. We use
three additional grid points per input in our higher-dimensional exercises,
and none in lower dimensional ones. We find that a $d$-dimensional grid
formed from the outer product of the coordinate-wise values (e.g., a $5^2=25$-point grid
for $d=2$ with three intermediate points) facilitates a nice compromise between computational thrift and
accuracy of adversarial $y_i^\alpha$-values compared to the cumbersome
BFGS-based alternative.
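A sketch of the discrete neighborhood behind this ``cornering'' scheme is
below; the helper name, the clipping to $[0,1]^d$, and the recycling of a
scalar or vectorized $\alpha$ are our own conventions.
\begin{verbatim}
## discrete alpha-neighborhood for "cornering"; m intermediate values per
## coordinate (odd m keeps the center point, ensuring y_i^alpha >= y_i)
neighborhood <- function(x, alpha, m = 3) {
  d <- length(x)
  alpha <- rep(alpha, length.out = d)         ## scalar or per-coordinate alpha
  grids <- lapply(1:d, function(j) seq(-alpha[j], alpha[j], length.out = m + 2))
  Xa <- sweep(as.matrix(expand.grid(grids)), 2, x, "+")
  pmin(pmax(Xa, 0), 1)                        ## clip to the unit cube
}
## adversarial response: max(gp.pred(neighborhood(X[i,], alpha), X, Y)$mean)
\end{verbatim}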
In our test problems, which are introduced momentarily and utilize inputs
coded to $[0,1]^d$, we compare each of the three variations of REGO described
in Section \ref{sec:methods}. These comprise: (1) REI with known $\alpha$
(Alg.~\ref{alg:REI_Fixed}); (2) a new draw of $\alpha\sim
\mathrm{Unif}(0,\alpha_{\max})$ at each acquisition; and (3) averaging over a
sequence of $\alpha \in [0,\alpha_{\max}]$ (\ref{eq:sumei}). For all examples,
we set $\alpha_{\max} = 0.2$ and average over five equally spaced values. In
figures, these methods are denoted as ``known'', ``rand'' and ``sum'',
respectively. After each acquisition, we use the post hoc adversarial
surrogate to find the BEAR operating conditions: $x^*_{\mathrm{bear}}$ and
$f_{N}^{\alpha,\mathrm{min}}$ in order to track progress.
As representatives from the standard BO literature, we compare against the
following ``strawmen'': EGO \citep{Jones1998}, where acquisitions are based on
EI (\ref{eq:ei}); and ``EY'' \citep{Gramacy2020}, which works similarly to EGO,
except acquisitions are selected by minimizing $\mathbb{E}[Y(x) \mid D_n]$:
$x^{\text{EY}}_{\text{new}} = \argmin_{x \in [0, 1]^d} \mu_n(x)$. In other
words, EY acquires the point that the surrogate predicts has the lowest mean.
Since it does not incorporate uncertainty ($\sigma_n(x)$), repeated
EY acquisition often stagnates in one region -- usually a local minimum
-- rather than exploring new areas like EI does.
In our
figures, these comparators are indicated as ``ego''
and ``ey'', respectively, for EGO and EY, with progress measured by BOV:
$x^*_{\mathrm{bov}}$ and $f_{N}^{\mathrm{min}}$. Finally, we consider the post
hoc adversary with progress measured by BEAR for EGO (``egoph'') and uniform
random sampling (``unif''). In the figures, REI methods are solid curves with
robust competitors dashed and regular BO dotted.
Our final comparator is {\tt StableOPT} from \cite{Bogunovic2018}. For
completeness, we offer the following by way of a high-level overview; details
are left to their paper. \citeauthor{Bogunovic2018} assume a fixed, known
$\alpha$, although we see no reason why our extensions for unknown $\alpha$
could not be adapted to their method as well. Their algorithm relies on
confidence bounds to narrow in on $x^r$. Let $\mathrm{ucb}_n(x) = \mu_n(x) +
2\sigma_n(x)$ denote the upper 95\% confidence bound at $x$ for a fitted
surrogate $\hat{f}_n$, and similarly $\mathrm{lcb}_n(x) = \mu_n(x) -
2\sigma_n(x)$ for the analogous lower bound. Then we may translate their
algorithm into our notation, shown in Alg.~\ref{alg:stableopt}, furnishing the
$n^{\mathrm{th}}$ acquisition. Similar to REGO, this may then be wrapped in a
loop for multiple acquisitions. We could not find any public software for {\tt
StableOPT}, but it was relatively easy to implement in {\sf R}; see our public
Git repo.
\bigskip
\begin{algorithm}[H]
\DontPrintSemicolon
\textbf{input} $D_{n - 1} = (X_{n - 1}, Y_{n - 1})$ and $\alpha$\\
$\hat{f}_{n - 1}(x) = \mathrm{GP}_{\hat{\theta}}(D_{n - 1})$ \tcp*{with
predictive moments in Eq.~(\ref{eq:GPpreds})}
$\tilde{x}_n = \argmin_{x \in [0, 1]^d} \max_{a \in [-\alpha, \alpha]}
\mathrm{lcb}_{n - 1}(x + a)$ \tcp*{where we think $x^r$ is}
$a_n = \argmax_{a \in [-\alpha, \alpha]} \mathrm{ucb}_{n - 1}(\tilde{x}_n +
a)$ \tcp*{worst point within $\tilde{x}_n^\alpha$}
\Return $(\tilde{x}_n + a_n, f(\tilde{x}_n + a_n))$ \tcp*{sample from $f$
and return}
\caption{{\tt StableOPT}}
\label{alg:stableopt}
\end{algorithm}
\noindent
\bigskip
Rather than acquiring new runs nearby likely $x^r$, {\tt StableOPT} samples
the worst point within $\tilde{x}_n^\alpha$. Consequently, its final $X_N$
does not contain any points thought to be $x^r$. \citeauthor{Bogunovic2018}
recommend selecting $x^*_{\mathrm{bear}}$ from all $\tilde{x}_n$ rather than
the actual points sampled, all $\tilde{x}_n + a_n$, using notation introduced
in Alg.~\ref{alg:stableopt}. We generally think it is a mistake to report an
answer at an untried input location. So in our experiments we
calculate $x^*_{\mathrm{bear}}$ for {\tt StableOPT} as derived from a final,
post hoc adversary calculation [Section \ref{sec:REI}]. In figures, this
comparator is denoted as ``stable''.
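For completeness, a candidate-based {\sf R} sketch of
Alg.~\ref{alg:stableopt} follows, re-using the \texttt{gp.pred} and
\texttt{neighborhood} helpers from earlier sketches and approximating the
inner maximizations over $\tilde{x}_n^\alpha$ on that discrete set; our
actual re-implementation is in the Git repository cited above.
\begin{verbatim}
## one StableOPT acquisition, cf. alg:stableopt, over a candidate grid Xcand
stableopt.acq <- function(Xcand, X, Y, alpha, theta = 0.25) {
  worst.lcb <- apply(Xcand, 1, function(x) {  ## max of lcb over each x^alpha
    p <- gp.pred(neighborhood(x, alpha), X, Y, theta = theta)
    max(p$mean - 2 * sqrt(p$s2))
  })
  xtilde <- Xcand[which.min(worst.lcb), ]     ## where we think x^r is
  Na <- neighborhood(xtilde, alpha)           ## its alpha-neighborhood
  p <- gp.pred(Na, X, Y, theta = theta)
  Na[which.max(p$mean + 2 * sqrt(p$s2)), ]    ## worst (ucb) point; evaluate f here
}
\end{verbatim}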
The general flow of our benchmarking exercises coming next
[Section \ref{sec:empcomp}] is as follows. Each optimization, say in
input dimension $d$, is seeded with a novel LHS of size $n_0 = 5 + 5d$ that is
shared for each comparator. Acquisitions, $n=n_0+1,\dots, N$, separate for
each comparator up to a total budget $N$ (different for each problem), are
accumulated and progress is tracked along the way. Then this is repeated, for
a total of 1,000 MC repetitions. To simplify notation, let
$x^*_{\mathrm{b, n}}$ be either $x^*_{\mathrm{bear}}$ or $x^*_{\mathrm{bov}}$,
depending on if using BEAR or BOV, after the $n^{\mathrm{th}}$ acquisition. We
utilize the following two metrics to compare BEAR or BOV across methods using
the true adversary:
\begin{equation}
\label{eq:MCmetrics}
r(x^*_{\mathrm{b, n}}) = g(x^*_{\mathrm{b, n}}, \alpha) - g(x^r, \alpha)
\quad \quad \quad \quad
d(x^*_{\mathrm{b, n}}) = \vert \vert x^*_{\mathrm{b, n}} - x^r\vert \vert,
\end{equation}
where $x^r$ is the $x$ location of the true robust minimum. The first metric
is similar to the concept of regret from decision theory \citep{Blum2007,
Kaelbling1996} at the suggested $x^*_{\mathrm{b, n}}$. Regret measures how
much you lose out by running at $x^*_{\mathrm{b, n}}$ compared to $x^r$.
Regret is always nonnegative since by definition, $x^r = \argmin_{x
\in [0, 1]^d} g(x, \alpha)$. The second metric is the distance from
$x^*_{\mathrm{b, n}}$ to $x^r$. For both metrics, lower is better with 0 being
the theoretical minimum if a method correctly identifies exactly $x^r$ as the
BEAR or BOV.
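In code these metrics are one-liners, given a means of evaluating the true
adversary; the helper \texttt{g.true} below is hypothetical, and could, e.g.,
maximize $f$ over a fine grid in $x^\alpha$.
\begin{verbatim}
## progress metrics, cf. eq:MCmetrics
regret <- function(xb, xr, alpha, g.true)     ## r(x*_b): always nonnegative
  g.true(xb, alpha) - g.true(xr, alpha)
dist.xr <- function(xb, xr) sqrt(sum((xb - xr)^2))   ## d(x*_b)
\end{verbatim}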
\subsection{Empirical comparisons}
\label{sec:empcomp}
\subsubsection*{One-dimensional examples}
The left panel of Figure \ref{fig:rkhs} shows the 1d RKHS function used by
\cite{Assael2014}, and an adversary with $\alpha = 0.03$.
\begin{figure}[ht!]
\centering
\includegraphics[width=5.84cm,trim=5 20 10 50]{rkhs_fun.pdf}
\includegraphics[width=5.84cm,trim=5 20 10 50]{rkhs_X.pdf}
\includegraphics[width=5.84cm,trim=5 20 0 50]{rkhs_Y.pdf}
\caption{Left: The RKHS function and the robust surface with $\alpha =
0.03$. Middle: $d(x^*_{\mathrm{b, n}})$ after each acquisition. Right:
$r(x^*_{\mathrm{b, n}})$ after each acquisition.}
\label{fig:rkhs}
\end{figure}
Observe how $f$ has a smooth region for low $x$ values and a wiggly region for
high $x$. With $\alpha=0.03$, $x^r=0.312$ whereas $x^*=0.822$. The adversary is
bumped up substantially nearby $x^*$ because the surface is so wiggly there.
Here we consider a final budget of $N=40$ with $\theta = 0.01$. Results are
provided in the middle and right panels of Figure \ref{fig:rkhs}. Observe that
REGO with known $\alpha$ performs best as it quickly gets close to $x^r$,
and more-or-less stays there. EGO with post hoc adversary does well at the
beginning, but after about twenty acquisitions it explores around $x^*$
more, retarding progress (explaining the ``bump'') toward $x^r$.
``Sum''-based REI has a somewhat slighter bump. Averaging over
smaller $\alpha$-values favors exploring the wiggly region. {\tt
StableOPT} caps out at a worse solution than any of the REI-based methods.
Those based on BOV, like ``ego'' and ``ey'', fare worst of all, even worse than
acquiring ``unif''ormly at random.
Figure \ref{fig:ryan} shows a 1d test function of our own creation, first
depicted in Figure \ref{fig:1D-Example}, defined as:
\begin{equation}
\label{eq:ryan}
f(x) =
\begin{cases}
3.5(x - 0.15)^2 + \log(1.3) & x < 0.4\\
\log(1 + \vert 2(x - 0.55) \vert) & 0.4 \leq x < 0.7\\
\frac{1}{20}\sin(25x - 17.5) + \log(1.3) & 0.7 \leq x.
\end{cases}
\end{equation}
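For reproducibility, a direct {\sf R} encoding of Eq.~(\ref{eq:ryan}) is
below; the function name is our own.
\begin{verbatim}
## the 1d test function of eq:ryan, vectorized over x in [0,1]
f.ryan <- function(x) {
  ifelse(x < 0.4, 3.5 * (x - 0.15)^2 + log(1.3),
    ifelse(x < 0.7, log(1 + abs(2 * (x - 0.55))),
      sin(25 * x - 17.5) / 20 + log(1.3)))
}
\end{verbatim}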
In this problem, $x^*=0.55$ and $x^r=0.15$ using $\alpha = 0.075$. All GP
surrogates used $\theta = 0.25$. We see a similar story here as for our first
test problem. The only notable difference here is that EGO does not get
drawn into the peaky region, likely because it is less pronounced. EGO favors
sampling around $x^*$ initially, but eventually explores the rest of the input
space, including around $x^r$. Interestingly, REGO with known $\alpha$
performs a little worse than either of the unknown $\alpha$ methods.
\begin{figure}[ht!]
\centering
\includegraphics[width=5.84cm,trim=5 10 10 50]{ryan_fun.pdf}
\includegraphics[width=5.84cm,trim=5 10 10 50]{ryan_X.pdf}
\includegraphics[width=5.84cm,trim=5 10 0 50]{ryan_Y.pdf}
\caption{Left: The Figure \ref{fig:1D-Example} function and the robust
surface with $\alpha = 0.075$. Middle: $d(x^*_{\mathrm{b, n}})$ after
each acquisition. Right: $r(x^*_{\mathrm{b, n}})$ after each
acquisition.}
\label{fig:ryan}
\end{figure}
\subsubsection*{Two-dimensional examples}
\begin{figure}[ht!]
\includegraphics[width=6.6cm,trim=0 20 0 30]{bertsima_2d_fun.pdf}
\includegraphics[width=6.6cm,trim=0 20 0 30]{robust_bertsima_2d_fun.pdf}
\includegraphics[width=6.6cm,trim=0 20 0 30]{bertsima_2d_X.pdf}
\includegraphics[width=6.6cm,trim=0 20 0 30]{bertsima_2d_Y.pdf}
\caption{Top: The Bertsimas function on the left and the robust surface
with $\alpha = 0.15$ on the right. Bottom: $d(x^*_{\mathrm{b, n}})$
(left) and $r(x^*_{\mathrm{b, n}})$ (right) after
each acquisition.}
\label{fig:bertsima2d}
\end{figure}
The top-left panel of Figure \ref{fig:bertsima2d} shows a test problem
from \cite{Bertsimas2010}, defined as:
\begin{align}
\label{eq:bert}
f(x_1, x_2) = -2x_1^6 &+ 12.2x_1^5 - 21.2x_1^4 + 6.4x_1^3 + 4.7x_1^2 -
6.2x_1 - x_2^6 + 11x_2^5 - 43.3x_2^4\\
&+74.8x_2^3 - 56.9x_2^2 + 10x_2 + 4.1x_1x_2 + 0.1x_1^2x_2^2 - 0.4x_1x_2^2 -
0.4x_1^2x_2.\nonumber
\end{align}
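A minimal {\sf R} encoding of Eq.~(\ref{eq:bert}) is below, mapping coded
inputs in $[0,1]^2$ back to the native ranges described next; the function
name and the decoding step are our own.
\begin{verbatim}
## the (negated) test function of eq:bert on coded inputs in [0,1]^2
f.bert <- function(x1, x2) {
  x1 <- -0.95 + x1 * (3.2 - (-0.95))          ## decode to [-0.95, 3.2]
  x2 <- -0.45 + x2 * (4.4 - (-0.45))          ## decode to [-0.45, 4.4]
  -2*x1^6 + 12.2*x1^5 - 21.2*x1^4 + 6.4*x1^3 + 4.7*x1^2 - 6.2*x1 -
    x2^6 + 11*x2^5 - 43.3*x2^4 + 74.8*x2^3 - 56.9*x2^2 + 10*x2 +
    4.1*x1*x2 + 0.1*x1^2*x2^2 - 0.4*x1*x2^2 - 0.4*x1^2*x2
}
\end{verbatim}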
We negated this function compared to \citeauthor{Bertsimas2010}, who were
interested in maximization. Originally, it was defined on $x_1 \in [-0.95,
3.2]$ and $x_2 \in [-0.45, 4.4]$ with $x^*=(2.8, 4.0)$. We coded inputs to
$[0, 1]^2$ so that the true minimum is at $(0.918, 0.908)$. In the scaled
space, $x^r = (0.2673, 0.2146)$ using $\alpha = 0.15$. We used $\theta=1.1$
and $N=90$. The bottom row of panels in Figure \ref{fig:bertsima2d} show that
non-fixed $\alpha$ for REGO can give superior performance in early
acquisitions. {\tt StableOPT} performs worse with this problem because the
objective surface near $x^r$ is relatively more peaked, say compared to the 1d
RKHS example.
Regret is trending toward 0 when using BEAR for acquisitions, with REGO-based
methods leading the charge. EGO with post hoc adversary performs much worse
for this problem. EGO-based acquisitions heavily cluster near $x^*$
which thwarts consistent identification of $x^r$ even with a post hoc surrogate.
\begin{figure}[ht!]
\centering
\includegraphics[width=6.6cm,trim=0 10 0 50]{bertsima_1d_fun.pdf}
\includegraphics[width=6.6cm,trim=0 10 0 50]{robust_bertsima_1d_fun.pdf}
\includegraphics[width=6.6cm,trim=0 10 0 50]{bertsima_1d_X.pdf}
\includegraphics[width=6.6cm,trim=0 10 0 50]{bertsima_1d_Y.pdf}
\caption{Top: The Bertsimas function on the left with the robust surface
with $\alpha = (0.2, 0)$ on the right. Bottom: $d(x^*_{\mathrm{b, n}})$
(left) and $r(x^*_{\mathrm{b, n}})$ (right)
after each acquisition.}
\label{fig:bertsima1d}
\end{figure}
Figure \ref{fig:bertsima1d} considers the same test problem (\ref{eq:bert}),
except this time we use $\alpha = (0.2, 0)$, meaning no robustness required in
$x_2$. This moves $x^r$ to $(0.412, 0.915)$ in the scaled space. In this
problem, the robust surface is fairly flat meaning that when an algorithm
finds $x_2 = 0.915$, $x_1 \in [0.35, 0.75]$ gives a similar robust output. For
that reason, all of the methods perform worse when trying to pin down the
exact $x_1$ location, so under metric $d(x^*_{\mathrm{b}})$ from
Eq.~(\ref{eq:MCmetrics}) all methods appear to be doing worse than under
$r(x^*_{\mathrm{b}})$, which goes to 0 relatively quickly. A shallower robust
minimum favors {\tt StableOPT}. Since that comparator never actually
evaluates at $x^r$, on this problem it suffices to find $x_1 \in [0.35,
0.75]$, which it does quite easily. Knowing true $\alpha$ for REI helps
considerably. This makes sense because omitting an entire dimension from
robust consideration is quite informative. Nevertheless, alternatives using
random and aggregate $\alpha$-values perform well.
\subsubsection*{Higher dimension}
Our final set of test functions comes from the Rosenbrock family
\citep{Dixon1979},
\begin{equation}
\label{eq:rosenbrock}
\sum_{i = 1}^{d - 1} \left(100(x_{i + 1} - x_i^2)^2 + (x_i - 1)^2\right),
\end{equation}
defined in arbitrary dimension $d$. Originally defined on $[-2.48, 2.48]^d$,
we again scale inputs to $[0, 1]^d$. Although our focus here will be on $d=4$,
visualization is easier in 2d.
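For concreteness, a minimal {\sf R} encoding on coded inputs follows; the
decoding to the native range and the function name are our own.
\begin{verbatim}
## Rosenbrock test function, cf. eq:rosenbrock, on coded inputs in [0,1]^d
f.rosen <- function(x) {
  x <- -2.48 + 4.96 * x                       ## decode to [-2.48, 2.48]^d
  d <- length(x)
  sum(100 * (x[2:d] - x[1:(d - 1)]^2)^2 + (x[1:(d - 1)] - 1)^2)
}
\end{verbatim}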
\begin{figure}[ht!]
\centering
\includegraphics[width=5.5cm,trim=0 63 0
50,clip=true]{rosenbrock_2d_fun.pdf}
\includegraphics[width=5.5cm,trim=0 63 0 50,clip=true]{rosenbrock_2d_X.pdf}
\includegraphics[width=5.5cm,trim=0 63 0 50,clip=true]{rosenbrock_2d_Y.pdf}
\includegraphics[width=5.5cm,trim=0 10 0 50]{robust_rosenbrock_2d_fun.pdf}
\includegraphics[width=5.5cm,trim=0 10 0 50]{rosenbrock_4d_X.pdf}
\includegraphics[width=5.5cm,trim=0 10 0 50]{rosenbrock_4d_Y.pdf}
\caption{Left: 2d log Rosenbrock function on the top and the robust
surface with $\alpha = 0.1$ on the bottom. Top:
$d(x^*_{\mathrm{b, n}})$ in the middle and $r(x^*_{\mathrm{b, n}})$
on the right for 2d. Bottom: similarly in 4d.}
\label{fig:rosenbrock2d}
\end{figure}
The top-left panel of Figure \ref{fig:rosenbrock2d} shows a visual with
outputs on the log scale for a 2d case. Here $x^*=(0.75, 0.75)$ and
$x^r=(0.503, 0.525)$ when $\alpha = 0.1$. A 2d visual of this adversary is
in the bottom-left panel. In 2d, we set $\theta=0.9$ and in 4d, we use
$\theta = 0.05$. Results for 2d are in the top row (middle and right
panels), and for 4d are in the bottom row, respectively. REGO shines in both
cases because $x^*$ is in a peaked region and $x^r$ is in a shallow
one. EGO with post hoc adversary does well early on, but stops improving
much after about 150 acquisitions. {\tt StableOPT} has the same issues of
sampling around $x^r$ that we have seen throughout -- never actually sampling
it.
\subsection{Supplementary empirical analysis}
\label{sec:supp}
An instructive, qualitative way to evaluate each acquisition algorithm is to
inspect the final collection of samples (at $N$), to see visually if things
look better for robust variations. Figure \ref{fig:samps} shows the
final samples of one representative MC iteration for EGO, REGO with random
$\alpha$ and {\tt StableOPT} for 2d Rosenbrock (\ref{eq:rosenbrock})
(left panel) and both Bertsimas (\ref{eq:bert}) variations (middle
and right panels).
Consider Rosenbrock first. Here, EGO has most of its acquisitions in a mass
around $x^*$ with some acquisitions dispersed throughout the rest of the
space. This is exactly what EGO is designed to do: target the global minimum,
but still explore other areas. REGO has a similar amount of space-fillingness,
but the target cluster is focused on $x^r$ rather than $x^*$. On the other
hand, {\tt StableOPT} has almost no exploration points. Nearly all of its
acquisitions are on the perimeter of a bounding box around $x^r$. While {\tt
StableOPT} does a great job of picking out where $x^r$ is, intuitively we
do not need 70+ acquisitions all right next to each other. Some of those
acquisitions could better facilitate learning of the surface by exploring
elsewhere.
\begin{figure}[ht!]
\includegraphics[width=5.2cm,trim=0 40 75 45,clip]{Rosenbrock_Example.pdf}
\includegraphics[width=5.2cm,trim=0 40 75 45,clip]{Bertsima_2d_Example.pdf}
\includegraphics[width=6.3cm,trim=0 40 0 45,clip]{Bertsima_1d_Example.pdf}
\caption{Sample acquisitions for EGO, REGO with random $\alpha$ and
{\tt StableOPT} for 2d Rosenbrock with $\alpha = 0.1$ (left), and Bertsimas
functions with $\alpha = 0.15$ (middle), and $\alpha = (0.2, 0)$ (right).}
\label{fig:samps}
\end{figure}
Moving to the Bertsimas panels of the figure, similar behavior may be
observed. REGO and EGO have some space-filling points, but mostly target $x^r$
for REGO and $x^*$ for EGO. {\tt StableOPT} again puts almost all of its
acquisitions near $x^r$ with relatively little exploration. But the main
takeaway from the Bertsimas plots is that, since REGO does not require setting
$\alpha$ beforehand, it gives sensible designs for multiple $\alpha$ values
(the blue points are exactly the same in both panels).
Looking more closely at the REGO design, observe
that all three minima (global and robust with $\alpha = 0.15$ and $\alpha =
(0.2, 0)$) have many acquisitions around them. This shows the power of REGO,
capturing all levels of $\alpha$ and allowing the user to delay specifying
$\alpha$ until after experimental design.
\begin{figure}[ht!]
\centering
\vbox{\vspace{-1.5cm} \includegraphics[width=5.5cm,trim=0 65 0
30,clip=true]{rkhs_Time.pdf}
\includegraphics[width=5.5cm,trim=0 65 0 30,clip=true]{ryan_Time.pdf}
\includegraphics[width=5.5cm,trim=0 65 0
30,clip=true]{bertsima_1d_Time.pdf}}
\includegraphics[width=5.5cm,trim=0 0 0 30]{bertsima_2d_Time.pdf}
\includegraphics[width=5.5cm,trim=0 0 0 30]{rosenbrock_2d_Time.pdf}
\includegraphics[width=5.5cm,trim=0 0 0 30]{rosenbrock_4d_Time.pdf}
\caption{Cumulative timings for each method/test function.}
\label{fig:times}
\end{figure}
Timings for each method/problem are in Figure \ref{fig:times}.
Comparators ``egoph'' and ``ego'' report identical timings because they
involve the same EI acquisition function. As you might expect, ``unif''
and EY have the lowest times because they do less work. For the more
competitive methods, REGO is generally a little slower than EGO, but often
faster than {\tt StableOPT}. None of these timings are substantial compared to
black-box evaluations typically involved in BO
enterprises.
\section{Robot pushing}
\label{sec:rob}
Here we consider a real world example to measure the effectiveness of REI.
The robot pushing simulator
\citep[][https://github.com/zi-w/Max-value-Entropy-Search]{Kaelbling2017} has
been used previously to test ordinary BO \citep{Wang2017a, Wang2017b} and robust
\citep{Bogunovic2018} BO methodology. The simulator models a robot hand, with up
to 14 tunable parameters, pushing a box with the goal of minimizing
distance to a target location.
Following \cite{Wang2017b}, we consider varying four of the tunable parameters,
as detailed in Table \ref{tab:push_vars}.
\begin{table}[ht!]
\centering
\begin{tabular}{|c | c | c|}
\hline
Parameter & Role & Range \\
\hline\hline
$r_x$ & initial x-location & $[-5, 5]$\\
\hline
$r_y$ & initial y-location & $[-5, 5]$\\
\hline
$r_\theta$ & initial angle & $[-\frac{\pi}{2},
\frac{\pi}{2}]$\\
\hline
$t_r$ & pushing strength & $[1, 30]$\\
\hline
\end{tabular}
\caption{Parameters for the robot pushing simulator.}
\label{tab:push_vars}
\end{table}
\noindent This simulator is coded in \textsf{Python} and uses an engine
called \texttt{Box2D} \citep{Catto2011} to simulate the physics of pushing.
Also following \cite{Wang2017b}, we consider two cases: one with a
fixed hand angle, always facing the box, determined to be $r_\theta =
\arctan(\frac{r_y}{r_x})$; and the other allowing for all 4 parameters to
vary. These create 3d and 4d problems that we call ``push3'' and ``push4'',
respectively.
We consider two further adaptations. First, we de-noised the simulator, so
that it is deterministic, which is more in line with our previous examples.
Second, rather than look at a single target location, we take the minimum
distance to two geographically distinct target locations, under squared and
unsquared distances respectively. We do this in order to manifest a version
of the problem that would require robust analysis. Having the box pushed the
full distance toward either target, minimizing the objective, yields an
output of 0 since $0^2 = 0$. However, the minimum around the unsquared target
will be shallower because the unsquared surface increases slower when the
distance to the target is greater than 1. Robust BO prefers exploring the
unsquared minimum while an ordinary, non-robust method would show no
preference. Similarly, a BOV performance metric is indifferent to the target
locations, while BEAR would favor the unsquared target.
For both ``push3'' and ``push4'' variations, we somewhat arbitrarily fixed the
target locations at $(-3, 3)$ and $(3, -3)$ with the latter being the target
with squared distance outputs. Since the unsquared location is in the top-left
quadrant, the optimal robust location involves starting the robot hand in the
bottom-right so that it pushes the box up and left. Furthermore, because the
robot cannot perfectly control the initial hand location, if it is close to the
origin, a minor change in $r_x$ or $r_y$ leads to the hand pushing away from
the target location. For ``push3'' we use $\alpha = 0.1$, $x^r = (5, -5, 25.4)$,
meaning the robot hand starts as far in the bottom-right as possible and
pushes quite hard. The true robust value $g(x^r, 0.1) = 1.37$ is lower than
that obtained by analogously pushing toward the squared target location, $g((-5, 5, 25.4), 0.1)
= 1.88$; the gap is due solely to squaring, since $1.37^2 \approx 1.88$.
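To make this construction concrete, the following sketch (in \textsf{Python}, the simulator's implementation language) assembles the de-noised, two-target output from the final box position returned by the simulator. The helper name, the use of Euclidean distance, and the fixed-angle mapping are illustrative assumptions rather than the simulator's actual API.
\begin{verbatim}
import numpy as np

def two_target_objective(box_xy):
    """Combine distances to the two targets: squared for (3, -3),
    un-squared for (-3, 3); the robust minimum sits near the latter."""
    box_xy = np.asarray(box_xy, dtype=float)
    d_sq = np.sum((box_xy - np.array([3.0, -3.0]))**2)      # squared distance
    d_un = np.linalg.norm(box_xy - np.array([-3.0, 3.0]))   # plain distance
    return min(d_sq, d_un)

# Pushing the box all the way to either target yields 0 (since 0^2 = 0);
# within distance 1 the squared basin is deeper, but beyond distance 1
# it rises faster, making it the more "peaked" of the two minima:
for miss in (0.0, 0.5, 2.0):
    near_squared  = two_target_objective([3.0 + miss, -3.0])
    near_unsquared = two_target_objective([-3.0 - miss, 3.0])
    print(miss, near_squared, near_unsquared)   # 2.0 -> 4.0 vs 2.0

# For the 3d "push3" variant the hand angle is fixed to face the box:
r_x, r_y = 5.0, -5.0
r_theta = np.arctan(r_y / r_x)    # -pi/4, i.e. about -0.79 radians
\end{verbatim}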
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.47,trim=0 10 0 90]{push3_X.pdf}
\includegraphics[scale=0.47,trim=25 10 0 90]{push3_Y.pdf}
\includegraphics[scale=0.47,trim=25 10 0 90]{push3_Time.pdf}
\caption{Results for ``push3'' $\alpha = 0.1$:
$d(x^*_{\mathrm{b, n}})$ (left), $r(x^*_{\mathrm{b,
n}})$ (middle) and cumulative time (right).}
\label{fig:push3}
\end{figure}
We compare each of the methods from Section \ref{sec:empcomp} in a similar
fashion by initializing with an LHS of size $n_0 = 10$ and acquiring forty
more points ($N=50$), repeating for 1,000 MC samples. Figure \ref{fig:push3}
summarizes our results. Observe that all of the REI variations outperform the
others by minimizing regret faster and converging at a lower value. EI with
post hoc adversary and {\tt StableOPT} perform fairly well but suffer from
drawbacks similar to those described in Section \ref{sec:empcomp}. They cannot
accommodate squared and unsquared target locations differently.
\begin{figure}[ht!]
\centering
\includegraphics[width=5.8cm,trim=25 10 0 0]{push3_REI.pdf}
\includegraphics[width=5.8cm,trim=25 10 0 0]{push3_stable.pdf}
\includegraphics[width=5.8cm,trim=25 10 0 0]{push3_EI.pdf}
\caption{Distribution of $r_x$ and $r_y$ from $x^*_{\mathrm{bear}}$ for the
push3 problem after the final acquisition for ``rand'' REI using $\alpha =
0.1$ (left), {\tt StableOPT} (middle), and EI (right).}
\label{fig:push3_hists}
\end{figure}
It is worth noting that, while REI performs well, the left panel of Figure
\ref{fig:push3} suggests that none of the methods consistently find $x^r$
exactly, which would correspond to $d(x^*_{\mathrm{b, n}}) = 0$; distances are
not converging to zero. To dig a little deeper, Figure \ref{fig:push3_hists}
explores $r_x$ and $r_y$ from $x^*_{\mathrm{bear}}$ for ``rand'' REI, {\tt
StableOPT} and EI with post hoc adversary. Notice that it is rare for any of
these comparators to push toward the ``wrong'' target location, i.e., finding
$(-5, 5)$ rather than $(5, -5)$ which would result in large $d(x^*_{\mathrm{b,
n}})$ and low $r(x^*_{\mathrm{b, n}})$. However, EI is attracted to the peaked
minimum more often than either of the other methods; REI exhibits this behavior
slightly more often than {\tt StableOPT}. {\tt StableOPT} and EI also
recommend $x^*_{\mathrm{bear}}$ in the bottom-right quadrant, but not all the
way to $(5, -5)$, and this happens more often than with REI. Only
occasionally does REI miss entirely by finding the global minimum, leading to
large $d(x^*_{\mathrm{b, n}})$. On the other hand, {\tt StableOPT} identifies
the correct area slightly more often, but struggles to pinpoint $x^r$. We
conclude that both REI and {\tt StableOPT} perform well -- much better than
ordinary EI -- and any differences are largely a matter of taste or tailoring
to specific use cases, modulo computational considerations (right panel).
For ``push4'', the location of the robust minimum is the same in that the
robot hand starts in the bottom-right, pushing to the top-left. Thus $x^r =
(5, -5, -0.79, 25.7)$ when using $\alpha = 0.1$. Here $g(x^r, 0.1) = 1.64$
compared to $g((-5, 5, -0.79, 25.7), 0.1) = 2.71$ by pushing to the squared
target location. We again compared the methods from Section \ref{sec:empcomp}
with 1,000 MC iterations, but this time with an initial LHS of size $n_0 = 20$
and eighty acquisitions ($N=100$). Results are presented in Figure
\ref{fig:push4}. Note that increasing the dimension makes every method do
worse, but relative comparisons between methods are similar. Here {\tt
StableOPT}'s performance is better, on par with REI variations, again modulo
computing time (right panel).
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.47,trim=0 10 0 50]{push4_X.pdf}
\includegraphics[scale=0.47,trim=25 10 0 50]{push4_Y.pdf}
\includegraphics[scale=0.47,trim=25 10 0 50]{push4_Time.pdf}
\caption{Results for ``push4'' $\alpha = 0.1$:
$d(x^*_{\mathrm{b, n}})$ (left), $r(x^*_{\mathrm{b,
n}})$ (middle) and cumulative time (right).}
\label{fig:push4}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=5.8cm,trim=25 15 0 30]{push4_REI.pdf}
\includegraphics[width=5.8cm,trim=25 15 0 30]{push4_stable.pdf}
\includegraphics[width=5.8cm,trim=25 15 0 30]{push4_EI.pdf}
\caption{Distribution of $r_x$ and $r_y$ from $x^*_{\mathrm{bear}}$ for the
push4 problem after the final acquisition for random REI using $\alpha =
0.1$ (left), {\tt StableOPT} (middle), and EI (right).}
\label{fig:push4_hists}
\end{figure}
Figure \ref{fig:push4_hists} demonstrates the added difficulty of the
``push4'' problem, as indicated by additional spread in the optimal $(r_x,
r_y)$ compared to Figure \ref{fig:push3_hists}. This is because a slightly
misspecified starting location can be compensated for by adjusting the hand
angle. Incorporating that angle introduces a slew of local minima at each
$(r_x, r_y)$. For example, if we set $r_x = 3$ rather than 5 as is optimal,
and check the regret for ``push3'' and ``push4'', we get $1.1$ and $0.7$,
respectively, by changing $r_\theta$ to $-0.69$. We may also adjust $t_r =
24.2$ in both cases to account for the fact that the hand is closer to the box.
This phenomenon is not unique to the example we chose. Any slight
misspecification of $r_x$ and $r_y$ can be compensated for with a commensurate
change in $r_\theta$.
\section{Discussion}
\label{sec:conc}
We introduced a robust expected improvement (REI) criterion for Bayesian
optimization (BO) that provides good experimental designs for targeting robust
minima. REI is doubly robust, in a sense,
because it is able to accomplish this feat even in cases where one does not
know the proper level of robustness, $\alpha$, to accommodate. In fact, we
have shown that in some cases, not fixing the $\alpha$ robustness level
beforehand leads to faster discovery of the robust minimum. Ordinary BO
methods, e.g., EI, can miss the robust target entirely. However, by
blending EI with a (surrogate-modeled) adversary, we are able to get the best
of both worlds. Even when designs are not targeted to find a robust minimum,
the adversarial surrogate can be applied post hoc to similar effect.
Despite this good empirical performance, the idea of robustness undermines one
of the key assumptions of GP regression: stationarity. If one optimum is
peaked and another is shallow, then that surface is technically nonstationary.
When there are multiple such regimes it can be difficult to
pin down lengthscale(s) $\theta$, globally in the input space. The underlying
REI acquisition scheme may perform even better when equipped with a
non-stationary surrogate model such as the treed GP \citep{Gramacy2007} or a
deep GP \citep{Damianou2013, Sauer2022}. We surmise that non-GP-based
surrogates, e.g., neural networks \citep{Shahriari2020}, support vector
machines \citep{Shi2020} or random forests \citep{Dasari2019} could be
substituted without a fundamental change to the underlying REI
criterion.
Throughout we assumed deterministic $f(x)$. Noise can be
accommodated by estimating a nugget parameter. The idea of robustness can be
extended to protect against output noise \citep{Beland2017}, in addition to
the ``input uncertainty'' regime we studied here. In that setting, it can be
helpful to entertain a heteroskedastic (GP) surrogate
\citep{binois2018practical,binois2018replication, R-hetGP} when the noise level
is changing in the input space.
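For instance, a minimal sketch of a noisy GP fit with an estimated nugget is shown below, using {\tt scikit-learn} as a stand-in for the surrogate software used in our experiments; the kernel choices and data are placeholders.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(size=(40, 1))
y = np.sin(6 * X[:, 0]) + rng.normal(scale=0.1, size=40)   # noisy observations

# WhiteKernel plays the role of a nugget; its noise level is estimated by MLE.
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
mu, sd = gp.predict(np.linspace(0, 1, 5).reshape(-1, 1), return_std=True)
print(gp.kernel_)   # fitted lengthscale and estimated noise (nugget) level
\end{verbatim}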
\subsection*{Funding}
This work was supported by the U.S. Department of Energy, Office of Science,
Office of Advanced Scientific Computing Research and Office of High Energy
Physics, Scientific Discovery through Advanced Computing (SciDAC) program under
Award Number 0000231018.
\bibliographystyle{jasa}
|
{
"arxiv_id": "2302.08595",
"language": "en",
"timestamp": "2023-02-22T02:01:37",
"url": "https://arxiv.org/abs/2302.08595",
"yymm": "2302"
} | \section{Introduction}
Frequency-domain learning has recently drawn attention from the computer vision community. Xu et al. \cite{2dfdl} propose a generic frequency-domain learning framework that shows a superior tradeoff between inference accuracy and input data size in various 2D computer vision tasks by learning from high-resolution inputs in the frequency domain. They point out that 2D CNN models have a stationary spectral bias toward a few low-frequency channels of image information in the YCbCr color space, especially the low-frequency channels of intensity (color channel Y). These observations are consistent with the theory of JPEG compression and the human visual system's bias toward low-frequency components. By removing the noisy high-frequency channels and learning from effectively higher-resolution images in the frequency domain (e.g., 2x the original image in the spatial domain), frequency-domain learning achieves improved accuracy while the input data size in the frequency domain is equivalent to that of low-resolution input data in the spatial domain. However, no study on frequency-domain learning has been conducted for 3D CNNs with 3D volumetric data, so whether these insights extend to 3D CNNs and 3D data perception tasks remains an open question. We hypothesize that 3D CNN models also have a spectral bias toward a limited number of critical frequency components in various 3D data perception tasks, and we conduct this study to validate our hypothesis.
3D CNNs in 3D data perception tasks represent 3D meshes or 3D points as a dense, regular 3D voxel grid \cite{3dsurvey}\cite{voxnet}. As 3D volumetric data and 2D images are both regular grid representations, we restrict the scope of our study to volumetric-based 3D CNN models in order to extend frequency-domain learning from 2D images to 3D space. Inspired by Xu et al. \cite{2dfdl}, we first study and analyze the spectral bias of 3D CNN models on 3D frequency-domain volumetric data by training existing 3D CNN models (e.g., VoxNet \cite{voxnet} and VRN \cite{vrn}) with a learning-based channel selection module that takes reshaped frequency-domain data as input. Our study applies limited modifications to the existing 3D CNN models (e.g., removing the downsampling layers in the first two blocks of VRN \cite{vrn} to fit the frequency-domain input data size). The spectral bias analysis shows that only a limited number of frequency channels, especially the DC channel, are highly informative for the 3D data perception tasks. Consequently, we statically select only the critical frequency channels with a high probability of being activated for training and inference.
A key benefit of frequency-domain learning is the ability to learn from a higher-resolution 3D representation (equivalently in the spatial domain) at an effectively reduced input data size in the frequency domain, thus providing a better tradeoff between inference accuracy and the actual input data size. Experiment results show that frequency-domain learning can significantly reduce the size of volumetric-based 3D inputs (based on the spectral bias) while achieving accuracy comparable to conventional spatial-domain learning approaches. Specifically, frequency-domain learning is able to reduce the input data size by 98\% in 3D shape classification while limiting the average accuracy drop to within 2\%, and by 98\% in 3D point cloud semantic segmentation with a 1.48\% mean-class accuracy improvement while limiting the mean-class IoU loss to within 1.55\%. The experiment results also show that by learning from a higher-resolution 3D representation (i.e., 2x the original 3D representation in the spatial domain), frequency-domain learning improves the mean-class accuracy and mean-class IoU by 3.04\% and 0.63\%, respectively, while achieving an 87.5\% input data size reduction in 3D point cloud semantic segmentation.
Another benefit of frequency-domain learning with reduced input data sizes is the reduction in computational complexity and memory requirements of 3D CNN models. Our experiment results show that by learning from a 2x higher-resolution 3D representation in the frequency domain at an effectively reduced input data size, frequency-domain learning can reduce about 9\% of the floating-point operations (FLOPs) and 20\% of the GPU memory footprint required for inference compared to directly learning in the spatial domain at the same resolution. For 3D computer vision tasks like point cloud semantic segmentation, the high-resolution data has superiority in representing large-scale point clouds. But, the computation resource and memory footprint requirements also increase cubically \cite{3dsurvey}. Hence, selecting the critical and pruning the trivial frequency channels based on the learned spectral bias of 3D CNNs as a data pre-processing stage in the frequency-domain learning can potentially alleviate the large requirements for computation resources and GPU memory footprints in volumetric-based 3D vision tasks and avoid complicated modification to the existing 3D CNN models.
The contributions of this paper are as follows:
\begin{itemize}
\item To the best of our knowledge, this is the first work that studies frequency-domain learning for 3D CNN models on 3D volumetric data. We obtain the spectral bias of 3D CNNs by training the existing 3D CNN models with a learning-based channel selection module. By analyzing the spectral bias of the 3D CNN models, we reveal that the DC components and low-frequency components already carry a significant amount of information for 3D shape classification tasks, and the top 3D frequency channels that are most informative to the point cloud semantic segmentation are more distributed across the spectrum.
\item We show that the learned spectral bias of 3D CNN models can inform static channel selection to help existing 3D CNN models significantly reduce input data size with no or little accuracy degradation in shape classification and point cloud semantic segmentation. Specifically, frequency-domain learning can reduce the input data size by 98\% in shape classification, while achieving a 0.93\% accuracy improvement on VoxNet with ModelNet and limiting the accuracy drop within 2\% on VRN with ShapeNet. In addition, frequency-domain learning can also reduce the input data size by 98\% in point cloud semantic segmentation, while achieving a 1.48\% mean-class accuracy improvement on the S3DIS dataset and limiting the mean-class IoU drop within 1.55\%.
\item We investigate the impact of static channel selection in frequency-domain learning on the computational complexity and the memory footprint requirements for 3D CNN models. Our experiments show that frequency-domain learning achieves 3.04\% mean-class accuracy improvements and 0.63\% mean-class IoU improvements in learning from high-resolution data with a 9\% FLOPs decrease and a 20\% GPU memory footprint decrease compared to directly learning from high-resolution data in spatial domain.
\end{itemize}
\section{Related Works}
\subsection{Frequency-Domain Learning}
2D frequency-domain learning has made remarkable success in image-based computer vision tasks.
\cite{jpeglearning} approximately decodes JPEG images and trains a modified ResNet \cite{resnet} on the decoded DCT coefficients for image classification, obtaining improved classification accuracy. \cite{jpegdetection} further extends the approach of \cite{jpeglearning} to object detection on frequency-domain features. \cite{2dfdl} reshapes the decoded DCT coefficients into a channel-wise representation and reveals the spectral bias of 2D CNNs. Based on this spectral bias, they propose a frequency channel selection method for existing 2D CNN models to reduce the bandwidth required between CPUs and GPUs. \cite{2dfdl} implements a 2D CNN-based gate module to estimate the probability of each frequency channel being activated by jointly training the gate module with the 2D CNN models. The gate module activates channels by sampling a Bernoulli distribution based on the activation probabilities generated by the CNN layers for each channel, and \cite{2dfdl} utilizes the Gumbel-Softmax reparameterization trick to backpropagate through the discrete sampling.
In this paper, we extend the frequency-domain learning method proposed by \cite{2dfdl} from 2D space to 3D space to study the spectral bias of 3D CNN models. The spectral bias observed in our experiments motivates our static frequency channel selection method for 3D volumetric data.
\subsection{Volumetric-Based 3D Vision Methods}
Volumetric-based 3D CNN methods represent point clouds as a regular 3D grid and then apply a 3D convolutional neural network to the volumetric grid representation. \cite{voxnet} first applies a 3D CNN to data in a volumetric occupancy grid representation. The three proposed occupancy models for voxel quantization achieve similar accuracy in shape classification, and the binary hit model is widely used by subsequent volumetric-based methods. Qi et al. \cite{subvolume} further explore the power of 3D volumetric representations and 3D CNNs by using auxiliary training with subvolume supervision. Inspired by high-performance 2D CNNs (e.g., InceptionNet \cite{inception} and ResNet \cite{resnet}), \cite{vrn} proposes a VRN block with residual connections over 3D convolutional layers to make 3D CNNs deeper and achieves state-of-the-art performance in shape classification. For point cloud segmentation using volumetric data, \cite{pointseg1} first applies a 3D convolutional neural network to label 3D points, where each point is labeled with the same class as the voxel it belongs to. \cite{segcloud} improves point cloud segmentation accuracy by adding a 3D interpolation layer that allows training the 3D CNN with a point-wise loss and a deeper network architecture.
In this paper, we utilize VoxNet \cite{voxnet} and the simplest VRN \cite{vrn} as our baseline methods. \cite{vrn} shows that an ensemble of VRNs can further improve classification accuracy; to speed up the training process, we do not use an ensemble of different VRN models. We make limited modifications to VoxNet and VRN to fit the input size of the frequency-domain data while keeping the network architecture unchanged. As the state-of-the-art volumetric-based fully convolutional point cloud segmentation method proposed by \cite{segcloud} is not open-sourced, we implement an encoder-decoder 3D CNN based on the high-performance 2D CNN network of \cite{rangenet} with a 3D interpolation layer \cite{segcloud} as the baseline method for point cloud segmentation.
\section{Methodology}
\subsection{Overview}
In this paper, our 3D frequency-domain learning pipeline includes 3D data pre-processing, network modifications to existing 3D CNN models, 3D CNN model spectral bias analysis, and static frequency channels selection method. As illustrated in Fig. 1, given a piece of volumetric data, it is reshaped in the 3D discrete cosine transform (DCT) domain, and the regrouped channel-wise frequency-domain volumetric features are fed into 3D CNN models for inference. The 3D CNN models are modified slightly, e.g., removing the downsampling layers in the first two VRN blocks of the VRN \cite{vrn} shape classification model, to take the 3D frequency-domain data as inputs. Then, we analyze the spectral bias of 3D CNN models from the frequency domain learning in shape classification and point cloud segmentation and verify that the reserved frequency channels contain important and sufficient information to obtain comparable classification and segmentation accuracy with models trained by full-size spatial-domain data. According to the spectral bias, we statically select specific frequency channels based on a binary channel selection map for training and inference.
\subsection{3D Data Pre-processing for Frequency-domain Learning}
We represent the 3D data by a volumetric grid, e.g., a grid of $32 \times 32 \times 32$, and use the hit grid model proposed by \cite{voxnet} to quantify each voxel. A voxel has a binary state to indicate whether any 3D points occupy it. ``1" means occupied by at least one 3D point, and ``0" means free. Although the hit grid model loses the geometric information between closed points, many prior works \cite{voxnet}\cite{pointseg1}\cite{segcloud} show that it is enough to represent the surface and shape of objects. Except for point coordinate attributes, if a point has other attributes like mapped color information, a voxel is quantified by the average attribute value of all points that the voxel contains.
Then, the quantized volumetric data are converted to the frequency domain by a $4 \times 4 \times 4$ 3D DCT transformation. Each piece of quantized volumetric data is divided into sub-blocks of size $4 \times 4 \times 4$. Each sub-block is converted to the frequency domain by applying a $4 \times 4 \times 4$ 3D DCT, and one frequency-domain sub-block has 64 frequency components. The 64 components are ordered and indexed by a 3D zigzag map. All components of the same frequency across sub-blocks are grouped into one channel. Therefore, for a piece of volumetric data of size $32 \times 32 \times 32$, its corresponding frequency-domain data is of size $64 \times N_{attributes} \times 8 \times 8 \times 8$ after the above pre-processing, where $N_{attributes}$ is the number of attributes a voxel has. For example, one voxel in the S3DIS dataset has four attributes, i.e., the binary occupation state and three-channel color information; its corresponding frequency-domain data has $256$ channels. As one channel contains frequency components from all sub-blocks, the spatial information between sub-blocks is retained within each channel. Lastly, each frequency channel is normalized by the mean and variance over all training samples.
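A minimal sketch of this pre-processing step for a single binary-occupancy grid, using SciPy's type-II DCT, is given below. The zigzag ordering is omitted for brevity; channels are simply indexed by flattened $(u, v, w)$ frequency coordinates, which is an implementation convenience rather than the exact ordering described above.
\begin{verbatim}
import numpy as np
from scipy.fft import dctn

def voxel_to_freq_channels(vox, block=4):
    """vox: (32, 32, 32) occupancy grid -> (64, 8, 8, 8) channel-wise tensor."""
    n = vox.shape[0] // block                      # 8 sub-blocks per axis
    out = np.zeros((block**3, n, n, n), dtype=np.float32)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                sub = vox[i*block:(i+1)*block,
                          j*block:(j+1)*block,
                          k*block:(k+1)*block]
                coef = dctn(sub, type=2, norm="ortho")   # 4x4x4 3D DCT
                out[:, i, j, k] = coef.reshape(-1)       # one value per channel
    return out

vox = (np.random.rand(32, 32, 32) > 0.9).astype(np.float32)  # toy grid
channels = voxel_to_freq_channels(vox)
# per-channel normalization (here computed from this one sample only)
channels = (channels - channels.mean(axis=(1, 2, 3), keepdims=True)) / \
           (channels.std(axis=(1, 2, 3), keepdims=True) + 1e-6)
print(channels.shape)   # (64, 8, 8, 8); channel 0 is the DC component
\end{verbatim}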
\subsection{3D Frequency Channel Selection and 3D CNN Network Modifications}
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{gatemodule.png}
\end{center}
\caption{Architecture of the 3D frequency channel selection gate module. For a 3D convolutional layer $3DConv(n, k, s)$, $n$ is the number of output channels, $k$ is the kernel size, and $s$ is the stride size. For an average pooling layer $Pool(k, s)$, $k$ is the kernel size, and $s$ is the stride size.}
\label{fig:short}
\end{figure}
As we hypothesize that 3D CNN models are also more sensitive to a few critical frequency channels, we extend the learning-based frequency channel selection method of \cite{2dfdl} to 3D CNN models to reveal the spectral bias. In this paper, we replace all 2D CNN layers and pooling layers of the gate module with 3D CNN layers and 3D pooling layers to fit the 3D frequency-domain inputs. For a frequency-domain input of size $C \times 8 \times 8 \times 8$, a global average pooling layer first converts it to $C \times 1 \times 1 \times 1$. Then, two subsequent 3D CNN layers with kernel size 1 generate an activation probability and a non-activation probability for each channel. After sampling a Bernoulli distribution, the output of the gate module is a binary state for each frequency channel: ``1" means the frequency channel is activated as input to the following CNN layers, and ``0" means the channel is muted. Fig. 2 shows the architecture of our 3D frequency channel selection gate module. For a channel $x_{i}$, $F(x_{i}) \in \{0, 1\}$ is its output of the gate module. The channel $x_{i}^{\prime}$ fed to the following CNN layers is the element-wise product of $x_{i}$ and $F(x_{i})$, as shown in Equation (1).
\begin{equation}
x_{i}^{\prime} = x_{i} \odot F(x_{i})
\end{equation}
$\odot$ is the element-wise product. The number of selected channels is a regularization term of the loss function, and a hyperparameter $\lambda$ weights it. Therefore, for any tasks in this paper, the loss function is expressed as
\begin{equation}
L = L_{accuracy} + \lambda \times \sum_{i}^{C} F(x_{i}),
\end{equation}
where $L_{accuracy}$ is the loss function of the 3D vision task, e.g., the cross-entropy loss for shape classification. During joint training with the 3D vision tasks, we explore the spectral bias by modifying the weight of the regularization term to control the number of activated frequency channels.
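A compact PyTorch sketch of the gate module and the regularized loss in Equation (2) follows. The Gumbel-Softmax relaxation with hard sampling stands in for the Bernoulli sampling with straight-through gradients, and the hidden layer width is an illustrative assumption.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyGate3D(nn.Module):
    """Per-channel on/off gate for inputs of shape (B, C, 8, 8, 8)."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)               # (B, C, 1, 1, 1)
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=1)
        self.conv2 = nn.Conv3d(channels, 2 * channels, kernel_size=1)

    def forward(self, x, tau=1.0):
        b, c = x.shape[:2]
        h = F.relu(self.conv1(self.pool(x)))
        logits = self.conv2(h).view(b, c, 2)              # (off, on) per channel
        gate = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]
        return x * gate.view(b, c, 1, 1, 1), gate          # Equation (1)

def gated_loss(task_loss, gate, lam=1.0):
    # task loss plus lambda * (number of activated channels), Equation (2)
    return task_loss + lam * gate.sum(dim=1).mean()

gate_mod = FrequencyGate3D(64)
x = torch.randn(2, 64, 8, 8, 8)
x_gated, g = gate_mod(x)
loss = gated_loss(torch.tensor(0.7), g, lam=2.0)
\end{verbatim}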
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\linewidth]{VoxNet.png}
\end{center}
\caption{Architecture of the spatial-domain VoxNet and frequency-domain VoxNet with a gate module. The dotted lines represent the operations performed during the joint-training with the gate module to explore the 3D spectral bias. For the static channel selection method, the network training and inference start from the Input B, and the Input B has the size of $n \times 8 \times 8 \times 8$, where $n$ is the number of selected channels.}
\label{fig:short}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{frequency_vrn.png}
\end{center}
\caption{Architecture of the frequency-domain VRN. The dotted lines represent the operations performed during the joint-training with the gate module to explore the 3D spectral bias. For the static channel selection method, the network training and inference start from the Input B, and the Input B has the size of $n \times 8 \times 8 \times 8$, where $n$ is the number of selected channels.}
\label{fig:short}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\linewidth]{baseline.png}
\end{center}
\caption{Architectures of point cloud segmentation methods.The dotted lines represent the operations performed during the joint-training with the gate module to explore the 3D spectral bias. The dash lines are skip connections from encoder blocks to decoder blocks.}
\label{fig:short}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{encoderdecoder.png}
\end{center}
\caption{Architecture of the encoder block and the decoder block. The dotted lines show residual connections. The dash lines represent the skip connections from the encoder to the decoder. }
\label{fig:short}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\linewidth]{heatmap_shape_classification.png}
\end{center}
\caption{Heat maps of the top selected frequency channels in shape classification datasets using VRN and VoxNet. The heat map value indicates the likelihood that a channel is activated. $N_{channels}$ is the average number of channels selected per sample during dynamic training. We show the top $32$ frequency channels when $\lambda = 0.5$, the top $16$ when $\lambda = 1$, the top $8$ when $\lambda = 2$, and the top $2$ when $\lambda = 3$.}
\label{fig:short}
\end{figure*}
As the size of input data in the frequency domain is smaller than that of its corresponding data in the spatial domain, we avoid downsampling the frequency-domain inputs and keep the network architecture unchanged. For 3D CNN models with few layers, we expand the number of filters in the hidden layers to keep an amount of features comparable to the spatial-domain case. For example, VoxNet \cite{voxnet}, one of the baselines we use for shape classification, has two 3D convolutional layers and one pooling layer; we set the stride of the two convolutional layers to 1 and the number of filters to 128. Fig. 3 compares the architectures of our spatial-domain VoxNet and frequency-domain VoxNet, which have the same number of layers but different numbers of filters in the 3D CNN layers. For deeper networks, i.e., VRN \cite{vrn}, we remove the downsampling blocks in the early stage until the output of a hidden layer for spatial-domain input is close in size to the corresponding frequency-domain data, and keep the following blocks the same as in the network for spatial-domain inputs. The VRN network \cite{vrn} has four blocks, each containing a VRN block and a downsampling block. The output of the second block is of size $8 \times 8 \times 8$, which matches the size of the frequency-domain inputs. Hence, we remove the downsampling part and double the number of filters in the first two blocks to keep a similar amount of features at each hidden layer. Fig. 4 presents the architecture of our frequency-domain VRN. In the original VRN architecture, Block 1 and Block 2 also have downsampling (DS) blocks.
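For concreteness, a PyTorch sketch of the frequency-domain VoxNet variant described above is shown below; the kernel sizes, activation functions, and classifier head are assumptions where the text does not pin them down.
\begin{verbatim}
import torch
import torch.nn as nn

class FrequencyVoxNet(nn.Module):
    """VoxNet-style classifier on (B, C, 8, 8, 8) frequency-channel inputs."""
    def __init__(self, in_channels=64, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 128, 3, stride=1, padding=1), nn.LeakyReLU(),
            nn.Conv3d(128, 128, 3, stride=1, padding=1), nn.LeakyReLU(),
            nn.MaxPool3d(2),                         # 8^3 -> 4^3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 4 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = FrequencyVoxNet()(torch.randn(2, 64, 8, 8, 8))   # shape (2, 10)
\end{verbatim}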
In terms of point cloud segmentation, we implement an encoder-decoder-style fully convolutional network as our baseline method. Fig. 5 shows the architectures of our spatial-domain and frequency-domain point cloud segmentation methods. The encoder part has four fully convolutional encoder blocks and yields a $16\times$ downsampling, and the decoder part has four fully convolutional decoder blocks. Inspired by encoder-decoder-style networks with skip connections in 2D image segmentation tasks, we add four skip connections from the encoder part to the decoder part to provide texture information lost during downsampling; the dashed lines in Fig. 5 show these four skip connections. Fig. 6 presents the architecture of an encoder block and a decoder block. An encoder block first downsamples the input with a 3D max pooling layer and extracts features with three 3D CNN layer blocks with $kernel \: size = 3$ and $stride = 1$. One 3D CNN layer block contains one 3D CNN layer, one Leaky ReLU layer, and one batch normalization layer. A decoder block first upsamples the input feature maps with a 3D transposed CNN layer with $kernel \: size = 4$ and $stride = 2$. Then, the upsampled feature maps are merged with feature maps from the encoder part by element-wise addition, and two 3D CNN blocks follow the 3D transposed CNN layer to extract features. The dotted line in Fig. 6 stands for a residual connection inspired by ResNet \cite{resnet}. For frequency-domain learning, we remove the max pooling layers in the first two encoder blocks to avoid downsampling and upsample the outputs of the first two encoder blocks to provide the skip connections for the decoder. The trilinear interpolation layer transfers the voxel-level predictions to point-level predictions, which reduces the false predictions caused by the low-resolution 3D volumetric grid.
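The following PyTorch sketch illustrates the encoder and decoder blocks described above; the channel widths and the $1 \times 1 \times 1$ projection used for the residual connection are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv3d(c_in, c_out, 3, stride=1, padding=1),
                         nn.LeakyReLU(0.1), nn.BatchNorm3d(c_out))

class EncoderBlock(nn.Module):
    """Downsample by 2, then three conv blocks with a residual connection."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.pool = nn.MaxPool3d(2)
        self.convs = nn.Sequential(conv_block(c_in, c_out),
                                   conv_block(c_out, c_out),
                                   conv_block(c_out, c_out))
        self.skip = nn.Conv3d(c_in, c_out, 1)    # channel-matching projection
    def forward(self, x):
        x = self.pool(x)
        return self.convs(x) + self.skip(x)

class DecoderBlock(nn.Module):
    """Upsample by 2, merge the encoder skip by addition, then two conv blocks."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.up = nn.ConvTranspose3d(c_in, c_out, kernel_size=4, stride=2,
                                     padding=1)
        self.convs = nn.Sequential(conv_block(c_out, c_out),
                                   conv_block(c_out, c_out))
    def forward(self, x, skip):
        return self.convs(self.up(x) + skip)

enc, dec = EncoderBlock(4, 32), DecoderBlock(32, 4)
x = torch.randn(1, 4, 80, 80, 80)
e = enc(x)                                       # (1, 32, 40, 40, 40)
d = dec(e, torch.zeros(1, 4, 80, 80, 80))        # back to (1, 4, 80, 80, 80)
\end{verbatim}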
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\linewidth]{heatmap_point_segmentation.png}
\end{center}
\caption{Heat maps of top selected frequency channels in S3DIS dataset by using the encoder-decoder style baseline method. $N_{channels}$ means the average number of selected channels from each attribute by a sample during dynamic training.}
\label{fig:short}
\end{figure*}
\subsection{Static Frequency Channel Selection for 3D Volumetric Data}
\subsubsection{Spectral Bias of 3D CNN models}
As shown in Fig. 7, we plot heat maps to demonstrate the spectral bias of 3D CNN models in shape classification. Each heat map shows the likelihood that a frequency channel is activated by the gate module during inference. Elements in a heat map stand for frequency channels, and the axes indicate the coordinates of the frequency channels, e.g., the coordinate of the DC frequency channel is (0, 0, 0). The color of an element indicates its activation probability. $\lambda$ is the hyperparameter in Equation (2), and $N_{channels}$ is the average number of selected channels per sample. As $\lambda$ increases, $N_{channels}$ decreases because the method tends to activate fewer frequency channels to reduce the loss in joint training. We only show the top frequency components in each heat map for visualization.
Among the shape classification datasets, ModelNet \cite{modelnet} and ShapeNet \cite{shapenet} contain CAD models of man-made objects, and the Sydney Urban Objects dataset contains point clouds of outdoor objects. Their heat maps indicate that, in most cases, the low-frequency channels, especially the DC component channel, have higher probabilities of being selected when $\lambda = 3$. As the number of activated channels decreases, only the DC component retains an activation probability above $99\%$. In the exceptional case of VoxNet on the Sydney Urban Objects dataset, no frequency channel has a noticeably higher probability of being selected as the average number of selected channels decreases, which indicates that the spectral bias varies across CNN models and datasets. In general, however, most of the activated frequency channels are low-frequency channels or have a low-frequency attribute along at least one spatial dimension. Compared to VoxNet, with its simple architecture, the deeper 3D CNN model, i.e., VRN, selects fewer frequency channels under the same $\lambda$ on the same dataset. By comparing heat maps for the same dataset and method, we notice that the frequency channels with the top activation probabilities are stable across different values of $\lambda$.
The spectral bias for the point cloud segmentation task on S3DIS, shown in Fig. 8, offers different insights from the spectral bias in shape classification. The top 3D frequency channels that are most informative for point cloud semantic segmentation are more distributed across the spectrum. The activation likelihood of the top frequency channels in the $\lambda = 0.5$ heat maps decreases as the number of activated channels decreases; no dominant frequency channel, like the DC channel in shape classification, has a high likelihood (e.g., over 80\%) of being activated, and most frequency channels have similarly low activation probabilities when $\lambda = 4$. However, most of the top selected channels have at least a low-frequency attribute along one spatial dimension under each condition, and the results of our subsequent experiments show that retaining the frequency channels with the top probabilities is enough to obtain accuracy and IoU comparable to spatial-domain approaches with full-size data in the point cloud segmentation task.
\subsubsection{Static Frequency Channel Selection}
Based on the above observations, we statically select the top frequency channels based on the rank of the probabilities to be activated and train the models from scratch with the selected frequency channels to explore whether a deterministic frequency channel selection method is generic for all samples.
Although the top frequency channels are stable under different $\lambda$, some frequency channels, e.g., the DC frequency channel in the spectral bias of VRN on the Sydney Urban Objects dataset, show dominance when $\lambda$ is large enough. In our method, we select the top $n$ frequency channels from the spectral bias at $\lambda = 2$ for static channel selection training, where $n$ should be smaller than the average number of selected channels in the spectral bias. We then train the frequency-domain networks without the gate module on the $n$ selected channels. For inference, we use a deterministic binary channel selection map for all test samples, as shown in Fig. 1, to select the frequency channels of each test sample, where '1' means a channel is selected and '0' means it is muted. Our experiment results verify our hypothesis that models trained on statically selected frequency channels achieve results comparable to those obtained with full-size data in the spatial domain. Fig. 9 shows the trade-off between accuracy and normalized input size at inference for the pre-trained models. The models trained on selected frequency channels achieve comparable results with a smaller normalized input size, which demonstrates that most frequency channels, especially high-frequency channels, are redundant in 3D shape classification and 3D point cloud segmentation.
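A sketch of the deterministic selection step is shown below: channels are ranked by their learned activation probabilities (a placeholder array here) and the top $n$ are kept via a binary map applied identically to every sample.
\begin{verbatim}
import numpy as np

def select_channels(freq_data, activation_prob, n):
    """freq_data: (C, 8, 8, 8); keep the n channels with highest probability."""
    keep = np.argsort(activation_prob)[::-1][:n]
    mask = np.zeros(len(activation_prob), dtype=bool)
    mask[keep] = True                       # binary channel-selection map
    return freq_data[mask], mask

probs = np.random.rand(64)                  # placeholder for learned probabilities
data = np.random.randn(64, 8, 8, 8).astype(np.float32)
selected, mask = select_channels(data, probs, n=4)
print(selected.shape)                       # (4, 8, 8, 8)
\end{verbatim}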
\section{Experiments}
\subsection{3D Shape Classification}
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{ijcv_acc_data_size.png}
\end{center}
\caption{Tradeoff between accuracy and normalized input data size. The blue dotted lines in the four panels show the accuracy or mean IoU of the models trained on full-size spatial-domain volumetric data. 'Joint training' shows the results of the models with the gate module trained on full-size frequency-domain data. 'Static channel selection' shows the results of the models trained on selected frequency channels.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsubsection{Experiment Setup}
We explore 3D frequency-domain learning in 3D shape classification using the ModelNet \cite{modelnet}, ShapeNet \cite{shapenet}, and Sydney Urban Objects datasets. ModelNet \cite{modelnet} contains CAD models of common objects at size $32 \times 32 \times 32$. Each sample is augmented by rotation in 12 directions, so our observations of spectral bias are rotation invariant. We utilize a subset with 10 categories and a subset with 40 categories separately. ShapeNet \cite{shapenet} includes about 51,300 unique 3D models in 55 categories. It provides volumetric data of size $128 \times 128 \times 128$, which we downsample to $32 \times 32 \times 32$. Both surface volumetric data and filled-in volumetric data are used in our experiments. The Sydney Urban Objects dataset contains 631 urban objects in 26 categories, cropped from LiDAR scans. We follow the provided dataset split, which covers 15 classes and drops some classes with fewer samples. We represent each object by a volumetric grid of $32 \times 32 \times 32$.
We train VoxNet for 32 epochs on the three datasets mentioned above and decay the learning rate by 0.8 every 4 epochs. We use the stochastic gradient descent (SGD) optimizer with an initial learning rate of $0.001$, a momentum of $0.9$, and a weight decay of $0.001$. We use the simplest VRN \cite{vrn} network rather than an ensemble of different VRN models. We train the VRN for 50 epochs on the same datasets except ModelNet-10, as \cite{modelnet} claims that the proposed network does not obtain considerable accuracy on ModelNet-10 due to a lack of data. The initial learning rate of SGD is $0.002$, and the learning rate decays by $0.1$ when the loss on a validation set stops decreasing. The other settings are kept the same as for VoxNet \cite{voxnet}. The loss function of both networks is the cross-entropy loss with the regularization term mentioned in Section 3.2.
\subsubsection{Experiment Results}
We train the original VoxNet and VRN with volumetric data of size $1 \times 32 \times 32 \times 32$. Then, we train the frequency-domain VoxNet \cite{voxnet} and VRN \cite{vrn} with the gate module using the 64-channel frequency-domain volumetric data to explore how classification accuracy varies with dynamically selected frequency channels, comparing against the results of the original methods. We plot heat maps to demonstrate the probability statistics of activated frequency channels (see Fig. 7). The heat maps show the activation probability of the top $n$ frequency channels. When $\lambda = 3$ on the 3D shape classification datasets, at most two frequency channels have an activation probability over 80\%. Then, we utilize the static frequency channel selection method mentioned in Section 3.4 to train the frequency-domain VoxNet \cite{voxnet} and VRN \cite{vrn} with selected frequency channels. In most instances, when only one frequency channel is selected, it is the DC frequency component channel.
\begin{table}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{|l|c|c|}
\hline
VoxNet & Normalized Input Size & Acc \\
\hline\hline
Voxel(binary) & $ 1.0 $ & 90.07\\
DCT $N_{channels} = 64$ & 1.0 & 89.29\\
DCT $N_{channels} = 4$ & 0.063 & 89.49 \\
DCT $N_{channels} = 2$ & 0.031 & 90.23\\
DCT $N_{channels} = 1$ & 0.016 & \textbf{91.00}\\
\hline
\end{tabular}}
\end{center}
\caption{VoxNet classification results on ModelNet-10. ``Voxel(binary)" means the full-size data in the spatial domain by using the binary hit grid occupancy model to quantify voxels, and ``DCT" means data in the frequency domain. ``$N_{channels}$" is the number of activated channels. }
\end{table}
\begin{table}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{|l||l|l|l|}
\hline
Network & Data & Normalized Input Size & Acc \\ \hline\hline
VoxNet & Voxel(binary) & 1.0 & 85.59\\ \cline{2-4}
&DCT $N_{channels}=64$ & 1.0 & 85.55 \\ \cline{2-4}
&DCT $N_{channels}=4$ & 0.063 & \textbf{85.33}\\ \cline{2-4}
&DCT $N_{channels}=2$ & 0.031 & 84.21\\ \cline{2-4}
&DCT $N_{channels}=1$ & 0.016 & 83.17 \\ \hline
VRN & Voxel(binary) & 1.0 & 88.11 \\ \cline{2-4}
& DCT $N_{channels}=4$ & 0.063 & \textbf{85.13} \\ \cline{2-4}
& DCT $N_{channels}=2$ & 0.031 & 84.56 \\ \cline{2-4}
& DCT $N_{channels}=1$ & 0.016 & 83.42 \\ \hline
\end{tabular}}
\end{center}
\caption{Classification results on ModelNet-40.}
\end{table}
The accuracy on ModelNet-10 exceeds the baseline trained on full-size spatial-domain data by up to $0.93\%$, and the accuracy on ShapeNet surface volumetric data exceeds the baseline trained on full-size spatial-domain data by up to $0.58\%$. The results are consistent with our observed spectral bias that the DC component channel contains most of the important information. Note that in some cases (i.e., ModelNet-10 and ShapeNet), the accuracy drops when the number of activated channels increases because of noisy information in the high-frequency channels. Additionally, note that the accuracy of the experiment with the 64-channel full-size frequency-domain data is lower than that with full-size data in the spatial domain. We conjecture that this accuracy gap between full-size frequency-domain and spatial-domain data reflects a drawback of CNN models in dealing with frequency-domain data. For experiments performed using the VRN \cite{vrn} network (see Tables 2-5), the overall accuracy is higher than that of VoxNet \cite{voxnet} on data in the frequency domain. The models trained on one selected channel achieve accuracy comparable to full-size data in the spatial domain at a much smaller input size, e.g., drops of only $2.04\%$ on the Sydney Urban Objects dataset and $2.03\%$ on ShapeNet surface volumetric data with a normalized input size of only 0.0156. When 2 frequency channels are selected, compared to full-size data in the spatial domain, the accuracy of the dimensionality-reduced data drops about $1.96\%$ on ShapeNet surface volumetric data and $0.32\%$ on Sydney Urban Objects with a 0.031 normalized input size.
\begin{table}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{|l||l|l|l|}
\hline
Network & Data & Normalized Input Size & Acc \\ \hline\hline
VoxNet & Voxel(binary) & 1.0 & 46.97\\ \cline{2-4}
&DCT $N_{channels}=4$ & 0.063 & \textbf{44.68} \\ \cline{2-4}
&DCT $N_{channels}=2$ & 0.031 & 43.30\\ \cline{2-4}
&DCT $N_{channels}=1$ & 0.016 & 43.08\\ \hline
VRN & Voxel(binary) & 1.0 & 48.86\\ \cline{2-4}
& DCT $N_{channels}=4$ & 0.063 & 47.48\\ \cline{2-4}
& DCT $N_{channels}=2$ & 0.031 & \textbf{48.54} \\ \cline{2-4}
& DCT $N_{channels}=1$ & 0.016 & 46.82 \\ \hline
\end{tabular}}
\end{center}
\caption{Classification results on Sydney Urban Objects.}
\end{table}
\begin{table}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{|l||l|l|l|}
\hline
Network & Data & Normalized Input Size & Acc \\ \hline\hline
VoxNet & Voxel(binary) & 1.0 & 81.58\\ \cline{2-4}
&DCT $N_{channels}=4$ & 0.063 & \textbf{82.01} \\ \cline{2-4}
&DCT $N_{channels}=2$ & 0.031& 81.93\\ \cline{2-4}
&DCT $N_{channels}=1$ & 0.016 & 82.16\\ \hline
VRN & Voxel(binary) & 1.0& 85.59\\ \cline{2-4}
& DCT $N_{channels}=4$ & 0.063 & \textbf{83.85}\\ \cline{2-4}
& DCT $N_{channels}=2$ & 0.031& 83.63 \\ \cline{2-4}
& DCT $N_{channels}=1$ & 0.016 & 83.56 \\ \hline
\end{tabular}}
\end{center}
\caption{Classification results on ShapeNet surface volumetric data.}
\end{table}
\begin{table}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{|l||l|l|l|}
\hline
Network & Data & Normalized Input Size & Acc \\ \hline\hline
VoxNet &Voxel(binary) & 1.0 & 82.10\\ \cline{2-4}
&DCT $N_{channels}=4$ & 0.063 & \textbf{82.26} \\ \cline{2-4}
&DCT $N_{channels}=2$ & 0.031 & 81.42\\ \cline{2-4}
&DCT $N_{channels}=1$ & 0.016& 81.48\\ \hline
VRN & Voxel(binary) & 1.0 & 86.62\\ \cline{2-4}
& DCT $N_{channels}=4$ & 0.063 & \textbf{83.36}\\ \cline{2-4}
& DCT $N_{channels}=2$ & 0.031 & 82.64 \\ \cline{2-4}
& DCT $N_{channels}=1$ & 0.016 & 82.26 \\ \hline
\end{tabular}}
\end{center}
\caption{Classification results on ShapeNet filled-in volumetric data.}
\end{table}
\subsection{Point Cloud Segmentation}
\subsubsection{Experiment Setup}
We explore 3D frequency-domain learning in 3D point cloud segmentation using the Stanford Large-Scale 3D Indoor Spaces dataset (S3DIS) \cite{S3DIS}, which contains large-scale 3D point clouds for six reconstructed areas. The dataset covers 14 common indoor categories, 13 of which are used for evaluation. We use the same dataset settings as \cite{segcloud}, which tests on the fifth area and trains on the rest.
One area of S3DIS \cite{S3DIS} contains multiple scenes. To keep a higher resolution of volumetric data, we divide each scene into several $5m \times 5m \times 5m$ sub-blocks. Each sub-block is voxelized into a grid of size $80 \times 80 \times 80$, so a voxel has a resolution of $0.0625$ m. For the experiments with higher spatial-domain resolution, we voxelize each sub-block into a grid of size $160 \times 160 \times 160$, giving a voxel resolution of $0.03125$ m. Each voxel has four attributes: the binary occupation state, Y, Cb, and Cr. The color attribute of a voxel is the average value of each color channel over all points in that voxel. We train the encoder-decoder-style fully convolutional model for 200 epochs and decay the learning rate by 0.1 every 50 epochs. We use an Adam optimizer with an initial learning rate of 0.001 and a weight decay of 0.001. The loss function is a point-wise cross-entropy loss with class weights.
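A sketch of the sub-block voxelization with averaged color attributes follows; point coordinates are assumed to be expressed in meters relative to the sub-block origin, and the RGB-to-YCbCr conversion is omitted.
\begin{verbatim}
import numpy as np

def voxelize_block(points, colors, grid=80, extent=5.0):
    """points: (N, 3) in [0, extent); colors: (N, 3) YCbCr.
    Returns a (4, grid, grid, grid) tensor: occupancy + mean color per voxel."""
    vox = np.zeros((4, grid, grid, grid), dtype=np.float32)
    counts = np.zeros((grid, grid, grid), dtype=np.float32)
    idx = np.clip((points / extent * grid).astype(int), 0, grid - 1)
    for (i, j, k), c in zip(idx, colors):
        vox[0, i, j, k] = 1.0               # binary occupancy (hit model)
        vox[1:, i, j, k] += c
        counts[i, j, k] += 1
    occupied = counts > 0
    vox[1:, occupied] /= counts[occupied]   # average color over points in a voxel
    return vox

pts = np.random.rand(1000, 3) * 5.0
cols = np.random.rand(1000, 3)
print(voxelize_block(pts, cols).shape)      # (4, 80, 80, 80)
\end{verbatim}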
\subsubsection{Experiment Results}
\begin{table*}[t]
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|l||l|l|l|}
\hline
& door & floor & wall & ceiling & clutter & chair & board & bookcase & beam & table &window & sofa & column & Acc & mAcc & mIoU \\
\hline\hline
Voxel + RGB & \textbf{15.87} & \textbf{94.47} & \textbf{53.43} & 73.26 & \textbf{17.38} & \textbf{25.23} & 7.39 & \textbf{37.49} & 0.26 & \textbf{31.22} & 31.68 & 2.22 & 10.09 & \textbf{87.33} & 42.69 & \textbf{30.77}\\
\hline
DCT $N_{channels} = 4$ & 12.75 & 94.09 & 34.65 & 75.69 & 14.41 & 19.23 &
11.50 & 36.56 & 0.00 & 26.71 & 38.79 & \textbf{4.72} &
\textbf{10.71} & 84.73 & 44.17 & 29.22 \\
\hline
DCT $N_{channels} = 2$ & 11.90 & 93.31 & 30.69 & \textbf{76.97} & 10.27 & 23.93 & \textbf{12.23} & 33.92 & 0.00 & 30.87 & \textbf{39.60} & 1.45 & 9.63 & 84.15 & \textbf{45.02} & 28.83\\
\hline
\end{tabular}
}
\end{center}
\caption{Point segmentation results of the model trained by full-size data in the spatial domain and selected frequency channels}
\end{table*}
\begin{table}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{|l|c|c|c|c|}
\hline
& Normalized Input Size & ACC & mACC & mIoU \\
\hline\hline
Voxel+RGB & $ 1.0 $ & 87.33 & 42.69 & 30.77\\
LR DCT $N_{channels} = 2$ & 0.008 & 84.15 & 45.02 & 28.83\\
LR DCT $N_{channels} = 4$ & 0.016 & 84.73 & 44.17 & 29.22 \\
HR DCT $N_{channels} = 2$ & 0.063 & 85.01 & 43.67 & 29.47\\
HR DCT $N_{channels} = 4$ & 0.125 & 86.28 & 45.73 & 31.40\\
\hline
\end{tabular}
}
\end{center}
\caption{Point cloud segmentation results of models trained by selected high-resolution frequency channels. "HR" is the high-resolution data. "LR" is the low-resolution data.}
\end{table}
We show quantitative results for models trained on full-size data in the spatial domain and on the selected frequency channels in Table 6. We evaluate the models by the overall classification accuracy over all points, the mean classification accuracy over all classes (mAcc), and the mean intersection over union over all classes (mIoU). For the model trained on two selected frequency channels, we select one frequency channel from the voxel binary occupancy state attribute and one from the Y attribute, and no channel is selected from the Cb and Cr attributes. For the other models trained on selected frequency channels, the channels are selected evenly from all attributes. In most cases, the models trained on selected frequency channels outperform the baseline using full-size spatial-domain data by up to $2.33\%$ in terms of mAcc and lose at most $2.65\%$ on mIoU. The results for models trained on frequency channels selected from high-resolution transformed data (see Table 7) show that the frequency-domain learning model outperforms the full-size spatial-domain learning model by $0.63\%$ on mIoU and $3.04\%$ on mAcc with an 87.5\% smaller normalized input size than the full-size low-resolution spatial-domain input. Figure 10 shows inference visualizations of point cloud segmentation with models trained on low-resolution frequency channels.
As the frequency-domain input size is smaller than the corresponding spatial-domain input size, the memory footprint and FLOPs are reduced in the first two encoder blocks of our point cloud segmentation baseline. For the high-resolution spatial-domain input of size $160 \times 160 \times 160$, inference requires about 1127 GFLOPs; the model taking frequency-domain input needs about 1025 GFLOPs, so frequency-domain learning reduces FLOPs by about 9\%. On the other hand, the GPU memory footprint required for a spatial-domain input of size $160 \times 160 \times 160$ is about 8047 MB at inference, and the corresponding frequency-domain input saves about 1632 MB, amounting to a reduction of about 20\% in the GPU memory footprint requirement.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{Segmentation_dimensionality_reduction.png}
\end{center}
\caption{Examples of point cloud segmentation results on low-resolution data, i.e. $80 \times 80 \times 80$, of S3DIS.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\section{Conclusion}
In this paper, we study frequency-domain learning for volumetric-based 3D data perception. Our 3D frequency-domain learning utilizes existing 3D CNN models with few modifications to reveal the spectral bias of 3D CNN models. Experiment results for the static frequency channel selection method demonstrate that only frequency channels with high activation probability, e.g., over 90\%, contain the key information for volumetric-based 3D data perception. Experiment results show that frequency-domain learning is able to reduce the input data size by 98\% in 3D shape classification while limiting the average accuracy drop to within 2\%, and by 98\% in 3D point cloud semantic segmentation with a 1.48\% mean-class accuracy improvement while limiting the mean-class IoU loss to within 1.55\%. Our experiment results also indicate that 3D frequency-domain learning can effectively reduce the input data size when learning from high-resolution volumetric data, which potentially alleviates the large computation resource and GPU memory footprint requirements of volumetric-based 3D vision tasks and avoids complicated modifications to existing 3D CNN models.
|
{
"arxiv_id": "2302.08680",
"language": "en",
"timestamp": "2023-02-20T02:06:23",
"url": "https://arxiv.org/abs/2302.08680",
"yymm": "2302"
} | \section{Introduction}
In recent studies, estimates of the average Research $\&$ Development cost per new drug range from less than $\$1$ billion to more than $\$2$ billion per drug. Creating a new drug is not just incredibly expensive but also time-consuming: the full research, development, and approval process can last from 12 to 15 years. Meanwhile, the world has been facing new threats from epidemic diseases caused by new and constantly evolving viruses (e.g., COVID-19 caused by SARS-CoV-2). Therefore, it is essential to reuse/repurpose existing drugs and discover new ways of combining drugs (i.e., polypharmacy) in order to treat never-seen-before diseases. Most approaches to tackling drug repurposing problems estimate drug side effects and drug responses to cell lines to provide important information about the use of those drugs. In particular, polypharmacy is the practice of treating complex diseases or co-existing health conditions by using combinations of multiple medications \citep{bansal2014community}. Understanding drug-drug interactions (DDIs) is pivotal in predicting the potential side effects of multiple co-administered medications. It is, however, infeasible to conduct clinical testing of all drug combinations due to the tremendous number of relations between drug pairs or between drugs and their targets (e.g., proteins).
\vskip 1em
Machine learning models have been widely applied to provide tractable solutions in predicting potential drug interactions. Data-driven techniques (e.g., machine learning or deep learning) have proved their capabilities for dealing with complex patterns in data. Such methods have produced remarkable results in various fields, such as computer vision, natural language processing, and audio processing. Therefore, it is reasonable to employ machine learning approaches to investigate the interactions among complex combinations of medications. Previous work on DDIs \citep{gottlieb2012indi, vilar2012drug, cheng2014machine} used various hand-crafted drug features (e.g., chemical properties or Morgan fingerprints) to compute the similarity of each drug pair, and drug-drug interactions can be predicted based on the drug-pair proximity scores. Nevertheless, using fixed drug representations can result in sub-optimal outcomes since they do not sufficiently capture the complex interrelations of drugs. Recent approaches rely on graph neural networks (GNNs) \citep{battaglia2018relational} to directly learn drug and protein representations from structural data (i.e. graphs) augmented with additional information for node features. \cite{10.1093/bioinformatics/bty294} proposed a graph autoencoder to predict polypharmacy side effects on a multimodal graph consisting of drug and protein nodes with various edge types. Similarly, \cite{yin2022deepdrug} introduced DeepDrug combining graph convolutional neural networks (GCN) \citep{Kipf:2017tc} and convolutional neural networks (CNN)\citep{lecun1995convolutional} to learn the structural and sequential representations of drugs and proteins. In addition, \cite{wang2022deepdds} suggested using several graph attention (GAT) \citep{velickovic2018graph} layers to learn on hand-crafted features of drugs to predict drug-pair interactions. Nyamabo
et al. \cite{nyamabo2021ssi} use multiple GAT layers to compute the drug representations and aggregate them using co-attention to produce final predictions. These mentioned GNN-based models effectively learn the latent representations of drugs and proteins, which can be used in several downstream tasks.
\vskip 1em
We hypothesize that a multimodal network can provide more comprehensive information for learning drug representations than the aforementioned methods. Rather than only considering pair-wise associations of drugs, neural networks operating on multimodal graphs can acknowledge the connections of multiple drugs under a specific interaction type, resulting in better modeling of drug interactions (e.g., polypharmacy side effects in \citep{Zitnik2017}). In other words, each interaction type is modeled as a graph of drug nodes and their pair-wise relations, and the aggregation of their vectors can be regarded as an informative representation of this interaction. Predicting drug-drug or drug-target interactions can be regarded as a link prediction task on graphs. We argue that latent spaces produced by the aforementioned approaches are disjoint; hence, they are not capable of generating new links on some graph benchmarks. Therefore, we suggest using deep graph generative models to make the latent spaces continuous; as a result, they are superior to traditional graph autoencoders (GAE) in predicting new links on biomedical multimodal graphs. Instead of yielding deterministic latent representations, variational graph autoencoders (VGAE) \citep{kipf2016variational} use a probabilistic approach to compute the latent variables. In this work, we aim to use VGAE to learn the representations of drugs and proteins on a multimodal graph; then, we predict several node-pair interactions (e.g., drug-pair polypharmacy side effects, drug-cell line response, etc.) using the learned representations. We demonstrate that using VGAE can attain superior performance in predicting drug-drug interactions on multimodal networks, outperforming GAE-based architectures and hand-crafted feature-based methods. In addition, we leverage drug molecular representations (i.e. Morgan fingerprints) concatenated with drug latent representations before the decoding stage to further enhance the model's performance in predicting drug-pair polypharmacy side effects. Experiments are conducted on three different biomedical datasets. Our proposed approach shows promising quantitative results, compared with other methods. Moreover, visualizations are presented to provide some findings in modeling drug interactions by multimodal graphs and deep generative models.
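To make the idea concrete, the sketch below shows a minimal, single-relation variational graph encoder on a dense, symmetrically normalized adjacency matrix, with the drug latent vectors concatenated to fixed fingerprint features before a simple bilinear decoder. It is an illustrative simplification under assumed layer sizes, not the full multimodal architecture studied in this work.
\begin{verbatim}
import torch
import torch.nn as nn

class TinyVGAE(nn.Module):
    """Minimal VGAE-style encoder/decoder on a dense normalized adjacency."""
    def __init__(self, n_feat, n_hid, n_latent, n_fp):
        super().__init__()
        self.w1 = nn.Linear(n_feat, n_hid)
        self.w_mu = nn.Linear(n_hid, n_latent)
        self.w_logvar = nn.Linear(n_hid, n_latent)
        self.decoder = nn.Bilinear(n_latent + n_fp, n_latent + n_fp, 1)

    def encode(self, a_norm, x):
        h = torch.relu(a_norm @ self.w1(x))          # one graph-convolution step
        mu, logvar = self.w_mu(a_norm @ h), self.w_logvar(a_norm @ h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return z, mu, logvar

    def decode_pair(self, z, fp, i, j):
        # concatenate latent vector with Morgan-fingerprint features per drug
        zi = torch.cat([z[i], fp[i]]); zj = torch.cat([z[j], fp[j]])
        return torch.sigmoid(self.decoder(zi, zj))   # interaction probability

n = 6
adj = torch.eye(n) + torch.rand(n, n).round()
adj = ((adj + adj.t()) > 0).float()
deg = adj.sum(1)
a_norm = adj / torch.sqrt(deg[:, None] * deg[None, :])
model = TinyVGAE(n_feat=16, n_hid=32, n_latent=8, n_fp=10)
z, mu, logvar = model.encode(a_norm, torch.randn(n, 16))
p = model.decode_pair(z, torch.rand(n, 10), 0, 1)
\end{verbatim}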
\section{Related Work}
\subsection{Link Prediction with Graph Neural Networks}
Graph neural networks (GNNs) are deep learning models that learn to generate representations on graph-structured data. Most GNN-based approaches use message-passing paradigms \citep{4700287, pmlr-v70-gilmer17a} wherein each node aggregates vectorized information from its neighbors and updates the messages to produce new node representations. \cite{Kipf:2017tc} introduced a scalable method that approximates first-order spectral graph convolutions, which are equivalent to one-hop neighborhood message-passing neural networks. In addition, \cite{velickovic2018graph} proposed graph attention networks (GATs) that use soft attention mechanisms to weigh the messages propagated by the neighbors. \cite{NIPS2017_5dd9db5e} introduced GraphSAGE, which efficiently uses node attribute information to generate representations on large graphs in inductive settings. Link prediction is one of the key problems in graph learning. In link-level tasks, the network consists of nodes and an incomplete set of edges between them, and partial information on the graph is used to infer the missing edges. The task has a wide range of variants and applications, such as knowledge graph completion \citep{10.5555/2886521.2886624}, link prediction on citation networks \citep{bojchevski2018deep}, and predicting protein-protein interactions \citep{Zitnik2017}.
\subsection{Deep Generative Model on Graphs}
Deep generative models aim to generate realistic samples that satisfy the properties of real-world data. Different from traditional methods that rely on hand-crafted features, data-driven approaches can be categorized into variational autoencoders (VAEs) \citep{kingma2013auto}, generative adversarial networks (GANs) \citep{NIPS2014_5ca3e9b1}, and autoregressive models. In the field of deep generative models on graphs, several works \citep{pmlr-v70-ingraham17a, NEURIPS2020_41d80bfc} aim at adding stochasticity among latents to make the models more robust to complicated data distributions. \cite{graphvae} proposed a regularized graph VAE model wherein the decoder outputs probabilistic graphs. In addition, \cite{netgan} introduced stochastic neural networks trained with a Wasserstein GAN objective to learn the distribution of random walks on graphs. For autoregressive methods, \cite{gran} proposed an attention-based GNN to generate graph nodes and associated edges after many decision steps in a generation process.
\subsection{Drug-Drug Interaction Modeling}
Previous studies predicted drug-drug interactions by measuring the relevance between two graphical representations of drug chemicals (i.e. 2D graphs). In particular, \cite{nyamabo2021ssi} introduce a deep neural network composed of four stacked GAT layers and a co-attention layer to compute the interaction relevance of two drug chemical substructures. \cite{mrgnn} propose to learn drug representations in a multi-resolution architecture. However, concatenations of drug representations calculated by GNN-based models are insufficient to predict drug-pair interactions, as they do not capture joint drug-drug information, i.e., the interrelationships among medications. To overcome this problem, \cite{co_attention} use a co-attention mechanism that allows their message-passing-based neural networks to exchange substructure information between two drugs, resulting in better representations of individual drugs. Additionally, \cite{gognn} train a model to capture graph-graph interactions (i.e. interactions between drugs at the chemical structure level) through dual attention mechanisms operating in the view of a graph consisting of multiple graphs. \cite{10.1093/bioinformatics/bty294} model a multimodal network consisting of medical node entities (i.e. drugs and proteins) with their relations to extract informative knowledge for predicting drug-drug interactions.
\section{Method}
\subsection{Problem Setup}
Given a multimodal graph, our task is to predict whether two nodes of the same or different types are connected under a specific edge type. This can be regarded as link-level prediction on graphs. In our work, the graph $G$ consists of two node types (e.g., drug and protein), and each side effect is represented as an edge type. In addition to polypharmacy interactions, $G$ also involves edges representing drug-protein and protein-protein interactions. Formally, let $G = (V, E, X)$, where $V = V_d \cup V_p$ is a union of two node sets of different types (i.e. $V_d$ is the set of drug nodes and $V_p$ is the set of protein nodes), $E$ is a set of edges, and $X = X_d \oplus X_p$ is a concatenated matrix denoting the node features of the different node types. Each edge in $E$ is a triplet $(v_i, e, v_j)$ in which node $v_i$ interacts with node $v_j$ under a specific edge type $e$. In the case of a binary classification setting, we perform negative sampling to generate non-edge labels to make the model robust to negative examples. Thus, the objective is to learn a function $f: E \rightarrow T$, where $f$ predicts the value of a particular triplet $(v_i, e, v_j)$; $T$ can be either $\{0,1\}$ or $\mathbb{R}$.
\subsection{Model Architecture}
\label{sec:sub2}
Our proposed model has three main components: an encoder, a latent encoder, and a decoder.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\includegraphics[width =\textwidth]{Figures/multi_modal_graph.pdf}
\caption{}
\label{fig:graph}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.5\textwidth}
\centering
\includegraphics[width =\textwidth]{Figures/encoder_9.pdf}
\caption{}
\label{fig:encoder}
\end{subfigure}
\vskip 1.2em
\caption{Overview of a biomedical multimodal graph and one graph convolution layer in our framework. (a) A graph consists of two node types (e.g., red nodes are protein and green are drug nodes) and five edge types $\{e_i\}$. (b) A layer of the encoder in which the representation of the source node $c$ is computed by aggregating its neighbors' information under different edge types. Black rectangles denote aggregation, whereas white rectangles indicate neural networks that share parameters across the nodes. $\{(W^1_{e_i}, W^2_{e_i})\}$ denote trainable weight matrices of different edge types $\{e_i\}$. $h_c$ can be moved to either a successive convolution layer or the latent encoder.}
\label{fig:2}
\end{figure}
\subsubsection*{Encoder}
\label{latent_encoder} a two-layer graph convolutional neural network that runs a message-passing scheme \citep{pmlr-v70-gilmer17a} on $G$ and produces node embeddings for each node type (e.g., drugs and proteins). Each layer in the network has the following form:
\[h_i = \phi \big(\sum_{e}\sum_{j \in \mathcal{N}^i_e \cup \{i\}} W_{e}\frac{1}{\sqrt{c_i c_j}} x_j\big)\]
where $\mathcal{N}^i_e$ denotes the neighbor set of node $x_i$ under the edge type $e$.
$W_{e} \in \mathbb{R} ^ {d_k \times d}$ is an edge-type-specific transformation matrix that maps $x_i \in \mathbb{R} ^ {d_i}$ and its neighbors $x_j \in \mathbb{R} ^ {d_j}$ into a $d_k$-dimensional vector space, resulting in $h_i \in \mathbb{R}^{d_k}$. It is worth noting that $d_i$ and $d_j$ are not necessarily the same because $x_i$ and $x_j$ can be nodes of two different node types (i.e. $d_i = d_j$ when $x_i$ and $x_j$ are of the same node type). $c_i$ and $c_j$ indicate the degrees of nodes $i$ and $j$ in the network. Also, $\phi$ denotes a nonlinear activation function; we use the rectified linear unit (ReLU) in our experiments. Figure \ref{fig:2} illustrates an overview of the encoder in our framework.
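For concreteness, the following is a minimal PyTorch sketch of one such edge-type-aware convolution layer. The class name, the edge-index format, and the use of a single feature dimension for all node types are simplifying assumptions for illustration rather than the exact implementation used in this work.
\begin{verbatim}
import torch
import torch.nn as nn

class RelationalGCNLayer(nn.Module):
    """One layer: h_i = ReLU( sum_e sum_{j in N_e(i) U {i}} W_e x_j / sqrt(c_i c_j) )."""
    def __init__(self, in_dim, out_dim, num_edge_types):
        super().__init__()
        # One transformation matrix W_e per edge type.
        self.weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_edge_types)])
        self.act = nn.ReLU()

    def forward(self, x, edge_indices):
        # x: [num_nodes, in_dim]; edge_indices[e]: LongTensor [2, num_edges_e] of
        # (source, destination) node index pairs for edge type e.
        num_nodes = x.size(0)
        loops = torch.arange(num_nodes, device=x.device)
        out = torch.zeros(num_nodes, self.weights[0].out_features, device=x.device)
        for e, edge_index in enumerate(edge_indices):
            src = torch.cat([edge_index[0], loops])   # add self-loops
            dst = torch.cat([edge_index[1], loops])
            deg = torch.zeros(num_nodes, device=x.device)
            deg.scatter_add_(0, dst, torch.ones_like(dst, dtype=x.dtype))
            norm = (deg[src] * deg[dst]).rsqrt()      # 1 / sqrt(c_i c_j)
            msg = self.weights[e](x[src]) * norm.unsqueeze(-1)
            out.index_add_(0, dst, msg)               # sum messages at each destination
        return self.act(out)
\end{verbatim}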
\thickmuskip=0mu
\subsubsection*{Latent Encoder}
For each node type $v$, there are two multilayer perceptrons (MLPs) receiving the node embeddings from the encoder and computing the predicted mean $\mu$ and logarithm of the standard deviation $\log \sigma$ of the posterior latent distribution:
\[q_v (Z_v\mid X, E) = \prod_{i = 1}^{|V_v|}q_v(z^i_{v} \mid X, E)\]
$q_v(z^i_v \mid X, E) = \mathcal{N}(z^i_v \mid \mu^i_v, \textrm{diag}((\sigma^i_v)^2))$ denotes the posterior distribution of a node of a specific node type. Here, $\mu_v$ and $\log \sigma _v$ are computed as follows:
\[\mu_v = W^2_{\mu_v} \tanh(W^1_{\mu_v} h_v)\]
\[\log \sigma_v = W^2_{\sigma_v} \tanh(W^1_{\sigma_v} h_v)\]
where $W^i_{\mu_v} \in \mathbb{R} ^ {d_k \times d}$, $W^i_{\sigma_v} \in \mathbb{R} ^ {d_k \times d}$ are the weight matrices, $\mu_v$ and $\log \sigma_v$ are the matrices of mean vector $\mu^i_v$ and logarithm of standard deviation vector $\log \sigma^i_v$, respectively. Figure \ref{fig:3} demonstrates how we integrate molecular structures of drug nodes with their latent representation to improve the performance in side effect link prediction.
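As an illustration, a minimal PyTorch sketch of the node-type-specific latent encoder is given below; the single-hidden-layer tanh networks mirror the equations above, while the class and variable names and the explicit reparameterization step are assumptions made for clarity.
\begin{verbatim}
import torch
import torch.nn as nn

class LatentEncoder(nn.Module):
    """Maps node embeddings h_v to mu_v and log sigma_v, then samples z_v."""
    def __init__(self, emb_dim, hidden_dim, latent_dim):
        super().__init__()
        self.mu_net = nn.Sequential(
            nn.Linear(emb_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, latent_dim))
        self.logsigma_net = nn.Sequential(
            nn.Linear(emb_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, latent_dim))

    def forward(self, h):
        mu = self.mu_net(h)
        log_sigma = self.logsigma_net(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
        z = mu + torch.exp(log_sigma) * torch.randn_like(mu)
        return z, mu, log_sigma
\end{verbatim}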
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{Figures/latent_encoder_1.pdf}
\caption{Overview of the latent encoder. Morgan fingerprints are concatenated with latent vector $z_c$ to improve the performance of polypharmacy side effects prediction. $\{W_\mu, W_\sigma, W_m$\} are trainable weight matrices. $\oplus$ denotes concatenation.}
\label{fig:3}
\end{figure}
\subsubsection*{Decoder}
In our experiments, we use two types of decoders, tensor factorization and a multilayer perceptron (MLP), for different purposes. A tensor-based decoder is used in the polypharmacy side effect experiments because, in addition to making predictions, the on-diagonal entries of the tensor can be used as representations of the drug-pair side effects. A multilayer perceptron (MLP) is used in the two remaining experiments, where it provides better performance.
\paragraph{Tensor Factorization:}
a tensor factorization using latent embeddings to predict the node interactions on $G$. We follow the approach proposed in \cite{10.1093/bioinformatics/bty294} to design the decoder:
\begin{equation*}
g(v_i, e, v_j) =
\begin{cases}
z^T_i D_e R D_e z_j & \text{if $v_i$ and $v_j$ are drugs} \\
z^T_i M_e z_j & \text{if $v_i$ and $v_j$ are a protein and a drug, or vice versa.}
\end{cases}
\end{equation*}
where $D_e, R, M_e \in \mathbb{R} ^ {d \times d}$ are learnable parameters. $R$ denotes the global matrix representing all drug-drug interactions among all polypharmacy side effects; $M_e$ is an edge-type-specific matrix modeling drug-protein and protein-protein relations.
Also, $D_e$ is a diagonal matrix, and its on-diagonal entries model the significance factors of $z_i$ and $z_j$ in multiple dimensions under the side effect type $e$.
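The following is a minimal PyTorch sketch of the drug-drug branch of this decoder, storing each $D_e$ as a per-edge-type diagonal vector; parameter initialization, batching, and variable names are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class TensorDecoder(nn.Module):
    """Scores g(v_i, e, v_j) = z_i^T D_e R D_e z_j for drug-drug edge types."""
    def __init__(self, latent_dim, num_side_effects):
        super().__init__()
        self.R = nn.Parameter(torch.randn(latent_dim, latent_dim))        # global drug-drug matrix
        self.d = nn.Parameter(torch.randn(num_side_effects, latent_dim))  # diagonals of each D_e

    def forward(self, z_i, z_j, e):
        # z_i, z_j: [batch, latent_dim]; e: [batch] edge-type (side effect) indices.
        d_e = self.d[e]                   # [batch, latent_dim]
        left = z_i * d_e                  # z_i^T D_e
        right = z_j * d_e                 # D_e z_j
        scores = torch.einsum('bi,ij,bj->b', left, self.R, right)
        # The edge probability is obtained afterwards as sigmoid(scores).
        return scores
\end{verbatim}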
\paragraph{Multilayer perceptron:} a multilayer perceptron (MLP) can also be used to predict the links between nodes as follows:
\begin{equation*}
g(v_i, e, v_j) = \texttt{MLP}(\texttt{concat}(z_i, z_j))
\end{equation*}
where $z_i$ and $z_j$ are two drug vectors sampled from the latent posterior distribution.
\vskip 0.5em
\noindent
Finally, the probability of edge $(v_i, e, v_j)$ is calculated via a sigmoid function $\sigma$.
\[p_e(v_i, v_j) = \sigma(g(v_i, e, v_j))\]
\section{Experiments}
\label{sec:3}
\subsection{Polypharmacy side effects}
We conduct experiments on the dataset introduced in \cite{10.1093/bioinformatics/bty294}. It is a multimodal graph consisting of 19,085 protein and 645 drug nodes. There are three types of interaction: drug-drug, drug-protein, and protein-protein. In particular, the dataset contains 964 commonly occurring polypharmacy side effects, resulting in 964 edge types of drug-drug interactions. Protein-protein and drug-protein interactions are regarded as two additional edge types; therefore, the multimodal graph has 966 edge types in total.
\vskip 1em
We randomly split the edges into training, validation, and testing sets with a ratio of $8 : 1 : 1$. Then, the edges in the training edge set are randomly divided into $80\%$ edges for message passing and $20\%$ edges for supervision. We use the Adam optimizer to minimize the loss function $L$ shown in Equation \ref{eq:1} for 300 epochs with a learning rate of $0.001$. We run the experiments with six different random seeds.
\begin{equation}
L = \sum_{(v_i, e, v_j) \in E}-\log p_e(v_i, v_j) - \mathbb{E}_{v_n \sim P_e(v_i)}[ \log (1 - p_e(v_i, v_n))] - \sum_{v} \lambda_v \mathbb{D}_{KL}(q_v(Z_v\mid X, E)\mid p_v(Z_v))
\label{eq:1}
\end{equation}
where $p_v(Z_v)$ denotes the prior distribution over the latent space of node type $v$. The first term of $L$ is the cross-entropy loss over the probabilities of positive and negative edges, where negative edges are sampled by choosing a random node $v_n$ for each node $v_i$ under a specific edge type $e$, while the second term is the weighted sum of the KL-divergences for each node type $v$; $\{\lambda_v\}$ are hyper-parameters. In our experiments, $\lambda_d$ and $\lambda_p$ are both set to $0.9$ for drug and protein nodes, respectively. Finally, we use one-hot representations indicating node indices as features for both drug and protein nodes.
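A minimal sketch of how this objective can be assembled from the decoder probabilities is shown below. The tensor shapes, the uniform negative-sampling scheme, and the sign convention (the KL regularizer is added to the reconstruction term, as is conventional when the loss is minimized) are assumptions for illustration, not the exact training code.
\begin{verbatim}
import torch

def vgae_loss(pos_prob, neg_prob, mu_by_type, log_sigma_by_type, lambdas):
    # Cross-entropy over positive edges and sampled negative (non-)edges.
    recon = -(torch.log(pos_prob + 1e-12).mean()
              + torch.log(1.0 - neg_prob + 1e-12).mean())
    # KL divergence of each node type's posterior N(mu, sigma^2) from N(0, I),
    # weighted by the per-node-type hyperparameters lambda_v.
    kl = 0.0
    for v, lam in lambdas.items():
        mu, log_sigma = mu_by_type[v], log_sigma_by_type[v]
        kl = kl + lam * (-0.5 * torch.sum(
            1 + 2 * log_sigma - mu.pow(2) - torch.exp(2 * log_sigma)))
    return recon + kl
\end{verbatim}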
\subsection{DrugBank Drug-Drug Interaction}
We run experiments on the DrugBank dataset used in \citep{drug_food}. In this dataset, 191,400 drug-pair interactions among 1,704 drugs are categorized into 86 types and split into training, validation, and testing subsets with a ratio of 6:2:2, respectively. We use the SMILES string representations of the drug nodes to compute their 2,048-dimensional Morgan fingerprint feature vectors, which encode the molecular structures of the medications.
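For reference, a small sketch of computing such fingerprints with RDKit is given below; the radius of 2 is a common default and an assumption here, since the text does not specify it, and the example SMILES string (chlorpromazine) is included only for illustration.
\begin{verbatim}
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

def morgan_fingerprint(smiles, n_bits=2048, radius=2):
    """Return a binary Morgan fingerprint vector for one drug SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("Could not parse SMILES: " + smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros(n_bits, dtype=np.float32)
    for idx in fp.GetOnBits():
        arr[idx] = 1.0
    return arr

# Example: feature vector for chlorpromazine (illustrative SMILES).
x_cpz = morgan_fingerprint("CN(C)CCCN1c2ccccc2Sc2ccc(Cl)cc21")
\end{verbatim}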
\vskip 1em
In this task, we build a multimodal network in which drugs are nodes and their interactions are edge types, which is similar to the polypharmacy prediction task. The nodes are attributed with their Morgan fingerprint representations. To the best of our knowledge, our work is the first attempt to approach the DrugBank dataset in this way. Morgan fingerprint vectors are good molecular features of individual drugs, and aggregating drug information under different edge types can produce richer representations of drug interrelationships. In our experiments, we construct the multimodal network using drug pairs from the training set. Our VGAE model learns to predict the interactions of a drug pair under different edge types by minimizing the loss function in Equation \ref{eq:1}. In this task, we use a multilayer perceptron as the decoder since the main objective is to predict different interactions between two medications rather than modeling their co-occurring effects.
\subsection{Anticancer drug response}
In addition to drug-pair polypharmacy side effects, we evaluate our approach for the anticancer drug response prediction problem; the task is to predict the interactions between the drug and cell line on a multimodal network.
Integrating information about drugs and cell lines is an effective approach to calculating anticancer drug responses using computational methods; various types of information, such as drug chemical structures and gene expression profiles, can be exploited to provide more accurate predictions. In this work, we evaluate the performance of VGAE on the CCLE dataset proposed in \citep{ahmadi2020adrml}, which contains comprehensive information on drugs and cell lines. Three types of pairwise similarities are provided for each modality, resulting in nine combinations used in computation over drug and cell line pairs; in addition, pairwise logarithms of the half-maximal inhibitory concentration (IC50) scores between drugs and cell lines are given as the values to be predicted. The combinations are detailed in Table \ref{tab:3}.
To apply VGAE to predicting the IC50 scores of drug-cell line pairs, we construct an undirected multimodal graph consisting of drug and cell line nodes, with the similarity scores indicating the edge weights under three types of relation: drug-drug, cell line-cell line, and drug-cell line. The problem is regarded as a link prediction task in which a number of edges connecting drug and cell line nodes are masked, and the objective is to compute real values $w_e$ denoting the weights of these missing links. The implementation of VGAE is almost the same as detailed in Section \ref{sec:sub2}, with modifications of the decoding stage: the representations of two nodes are concatenated before being passed to a simple multilayer perceptron with two hidden layers of size 16, ReLU nonlinearity, and a linear layer on top that predicts a final score denoting the edge weight between them.
We randomly split the edges with a ratio of 7:1:2 for training, validation, and testing purposes, respectively. In this task, the models are trained to reconstruct a weighted adjacency matrix and are regularized by the KL-divergence of the node latent distributions, with hyperparameters $\lambda_d$ and $\lambda_c$ denoting the coefficients for drug and cell line nodes, respectively; $\lambda_d = \lambda_c = 0.001$ in our experiments. We train the models for 500 epochs with a learning rate of 0.01 to minimize the loss function $L$ as follows:
\begin{equation}
L = \sum_{(v_i, e, v_j) \in E} (\widehat{s_e}(v_i, v_j) - s_e(v_i, v_j))^2
- \sum_{v} \lambda_v \mathbb{D}_{KL}(q_v(Z_v\mid X, E)\mid p_v(Z_v))
\label{eq:2}
\end{equation}
where $\widehat{s_e}(v_i, v_j)$ indicates the predicted edge weight between nodes $v_i$ and $v_j$, whereas $s_e(v_i, v_j)$ is the corresponding ground truth. Finally, experiments are run with 20 different random seeds.
\section{Comparative Results}
\subsection{Polypharmacy side effects prediction}
We compare the performance of VGAE to alternative approaches. In addition to VGAE, we also implement a graph autoencoder and train the model in the same setting detailed in Sec. \ref{sec:3}. The baseline results are taken from \citep{10.1093/bioinformatics/bty294} and include RESCAL Tensor Factorization \citep{nickel2011three}, DEDICOM Tensor Factorization \citep{papalexakis2016tensors}, DeepWalk Neural Embeddings \citep{perozzi2014deepwalk, zong2017deep}, Concatenated Drug Features, and Decagon \citep{10.1093/bioinformatics/bty294}. It is worth noting that Decagon is also a GAE-based model with two GCN layers, yet \cite{10.1093/bioinformatics/bty294} use side information (e.g., side effects of individual drugs) as additional features for drug nodes. By contrast, our VGAE and GAE are trained on one-hot representations for drug and protein nodes. To examine the effects of Morgan fingerprints in drug-drug interaction decoding, we augment the Morgan fingerprint information into the latent vectors of drug nodes before passing them to the tensor-based decoder to produce final predictions.
\vskip 1em
Table \ref{tab:2} shows the results of all approaches in predicting the polypharmacy side effects. The models are evaluated on three metrics: the area under the ROC curve (AUROC), the area under the precision-recall curve (AUPRC), and average precision at 50 (AP@50). The scores reveal that VGAE without augmenting Morgan fingerprints outperforms traditional tensor-based approaches by a large margin, with gains of up to $24.8 \%$ (AUROC), $32.0 \%$ (AUPRC), and $40.3 \%$ (AP@50). We also compare VGAE with two machine-learning-based methods. The model achieves a $25.8 \%$ gain over DeepWalk neural embeddings and $18.0 \%$ over Concatenated drug features in AP@50 scores. Furthermore, albeit trained on one-hot feature vectors, vanilla VGAE can achieve competitive performance with Decagon, outperforming the baseline by $6.0\%$ (AP@50). This indicates the effectiveness of VGAE in recommending potential side effects in a featureless drug-protein multimodal network. Finally, VGAE with additional molecular information (i.e. VGAE + MFs) at the decoding stage provides the best performance across the approaches. The method achieves the highest scores on all three metrics and outperforms the others by a large margin. The results reveal that using molecular fingerprints is a simple yet effective solution for predicting drug-drug interactions, as suggested in \citep{long500molecular}.
\begin{table}[ht]
\centering
\caption{Average performance on polypharmacy link prediction across all side effects}
\vskip 1.2em
\begin{tabular}{cccc}
\toprule
Method & AUROC & AUPRC & AP@50 \\
\midrule
RESCAL & 0.693 & 0.613 & 0.476 \\
DEDICOM & 0.705 & 0.637 & 0.567 \\
DeepWalk & 0.761 & 0.737 & 0.658 \\
Concatenation & 0.793 & 0.764 & 0.712 \\
Decagon & 0.872 & 0.832 & 0.803 \\
GAE & 0.893 $\pm$ 0.002 & 0.862 $\pm$ 0.003 & 0.819 $\pm$ 0.006 \\
\midrule
VGAE (ours) & 0.905 $\pm$ 0.001 & 0.880 $\pm$ 0.001 & 0.853 $\pm$ 0.005 \\
VGAE + MFs (ours) & \textbf{0.944 $\pm$ 0.005} & \textbf{0.926 $\pm$ 0.005} & \textbf{0.920 $\pm$ 0.004} \\
\bottomrule
\end{tabular}
\label{tab:2}
\end{table}
\paragraph{Visualization of side effect embeddings}
\begin{figure}
\centering
\captionsetup{justification=centering}
\includegraphics[width = 0.65\textwidth]{Figures/plot_side_effect_bold-1.png}
\caption{Polypharmacy side effect embeddings}
\label{fig:1}
\end{figure}
To further demonstrate the capability of VGAE in learning to model drug-pair polypharmacy side effects, we plot their representations in two dimensions using t-SNE \cite{van2008visualizing}. The embedding of each side effect is derived by taking the on-diagonal entries of $D_e$ to form a $d$-dimensional vector, which is then projected into 2D space. Figure \ref{fig:1} illustrates a vector space in which the side effect representations establish a clustering structure.
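A minimal sketch of this projection step with scikit-learn is shown below; the file name of the saved side-effect vectors, the perplexity, and the other t-SNE settings are illustrative assumptions.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# d_diagonals: [num_side_effects, latent_dim] array of the on-diagonal entries of
# each D_e learned by the tensor decoder (hypothetical saved array).
d_diagonals = np.load("side_effect_diagonals.npy")

coords_2d = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(d_diagonals)
plt.scatter(coords_2d[:, 0], coords_2d[:, 1], s=5)
plt.show()
\end{verbatim}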
\begin{figure}
\centering
\includegraphics[scale=0.6]{Figures/atc_diagram_1.pdf}
\caption{Fourteen groups at Level 1 of the Anatomical Therapeutic Chemical classification system}
\label{fig:atc}
\end{figure}
\paragraph{Visualization of drug embeddings}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{Figures/drug_tnse.png}
\caption{Visualization of drug embeddings indicates how drugs are clustered into groups}
\label{fig:latent}
\end{figure}
We run a spectral clustering algorithm to find a clustering structure in the latent drug space produced by the proposed model. Specifically, the number of clusters is determined based on the Anatomical Therapeutic Chemical (ATC) classification system \cite{atc}, in which medications are divided into groups at five different levels. At level 1, drugs are classified into 14 pharmacological groups. Figure \ref{fig:atc} illustrates the drug classification at level 1 of ATC. In our framework, therapeutic knowledge of drugs is extracted from a biomedical multimodal graph, and Morgan fingerprints represent their chemical structures. Therefore, we hypothesize that drug latent vectors combining both pharmacological and molecular-structure information can yield informative drug clusterings that are useful for classification systems (e.g., ATC) in determining the functions of new medications.
The clustering latent space shown in Figure \ref{fig:latent} demonstrates a promising result, suggesting new directions for further work in learning to cluster drugs or diseases by exploring their joint representations on multimodal networks.
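A small sketch of the clustering step with scikit-learn is given below; the affinity choice, the file name of the saved latent vectors, and other settings are assumptions made for illustration.
\begin{verbatim}
import numpy as np
from sklearn.cluster import SpectralClustering

# z_drugs: [num_drugs, latent_dim] matrix of drug latent vectors (hypothetical file).
z_drugs = np.load("drug_latent_vectors.npy")

# 14 clusters, matching the number of level-1 ATC pharmacological groups.
labels = SpectralClustering(n_clusters=14, affinity="nearest_neighbors",
                            random_state=0).fit_predict(z_drugs)
\end{verbatim}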
\subsection{DrugBank Drug-Drug Interaction}
\begin{table}[ht]
\centering
\caption{\centering Comparisons on the DrugBank dataset}
\vskip 1.2em
\begin{tabular}{cccc}
\toprule
Method & ACC & AUROC & AUPRC \\
\midrule
DeepDDI & 0.9315 & \textbf{0.9976} & 0.9319 \\
MR-GNN & 0.9406 & \textbf{0.9976} & 0.9410 \\
GoGNN & 0.8478 & 0.9163 & 0.9001 \\
MHCADDI & 0.7850 & 0.8633 & 0.8938 \\
SSI-DDI & 0.9447 & 0.9838 & 0.9814 \\
\midrule
VGAE (ours) & \textbf{0.9788} & 0.9938 & \textbf{0.9914} \\
\bottomrule
\end{tabular}
\label{tab:6}
\end{table}
We report the experimental results on three metrics: accuracy (ACC), the area under the ROC curve (AUROC), and the area under the precision-recall curve (AUPRC). The baselines are taken from \cite{nyamabo2021ssi}, including DeepDDI \cite{drug_food}, MR-GNN \cite{mrgnn}, GoGNN \cite{gognn}, MHCADDI \cite{co_attention}, and SSI-DDI \cite{nyamabo2021ssi}. Table \ref{tab:6} summarizes the results of the different methods evaluated on the DrugBank dataset. According to the table, our proposed method surpasses the competitors in ACC and AUPRC and achieves comparable scores in AUROC. VGAE attains the highest accuracy (ACC) at $97.88\%$, outperforming SSI-DDI by $3.5\%$. Similarly, the AUPRC produced by VGAE is higher than that of SSI-DDI by $1.0\%$.
\subsection{Anticancer drug response prediction}
In this task, the models are evaluated using four criteria: root mean square error (RMSE), coefficient of determination ($R^2$), Pearson correlation coefficient (PCC), and fitness, which is computed as:
\[\mathrm{fitness} \,=\, R^2 + \mathrm{PCC} - \mathrm{RMSE}\]
Table \ref{tab:3} shows the experimental results of drug response prediction on nine combinations of pairwise similarities of drugs and cell lines. Drug pairwise similarities are calculated via three different properties, including chemical structures (Chem), target protein (Target), and KEGG pathways (KEGG). For cell lines, their similarities are determined by mutation (Mutation), gene expressions (GE), and copy number variation (CNV).
In particular, VGAE trained on copy number variation and KEGG pathways as the similarity scores for cell lines and drugs achieves the best performance, yielding the lowest RMSE at 0.46 $\pm$ 0.02 and the highest scores in $R^2$ = 0.67 $\pm$ 0.03, PCC = 0.85 $\pm$ 0.01, and fitness = 1.05 $\pm$ 0.06. Moreover, Table \ref{tab:4} compares VGAE with other baselines taken from \citep{ahmadi2020adrml}, including CDRscan
\citep{chang2018cancer}, CDCN \citep{wei2019comprehensive}, SRMF \citep{suphavilai2018predicting}, CaDRRes \citep{wang2017improved}, ADRML \citep{ahmadi2020adrml}, and
k-nearest neighbors (KNN). The results reveal that, compared with the baselines, VGAE achieves competitive performance, especially in the fitness score, which summarizes the effectiveness of the approaches in the drug response prediction task.
\begin{table}
\centering
\caption{Performance of VGAE on various types of similarities of drugs and cell lines}
\vskip 1em
\begin{tabular}{cccccc}
\toprule
Cell & Drug & RMSE $\downarrow$ & $R^2$ $\uparrow$ & PCC $\uparrow$ & fitness $\uparrow$ \\
\midrule
Mutation & Chem & 0.48 $\pm$ 0.02 & 0.64 $\pm$ 0.04 & 0.84 $\pm$ 0.01 & 1.00 $\pm$ 0.06 \\
GE & Chem & 0.47 $\pm$ 0.01 & 0.66 $\pm$ 0.03 & 0.85 $\pm$ 0.00 & 1.03 $\pm$ 0.05 \\
CNV & Chem & 0.47 $\pm$ 0.02 & 0.65 $\pm$ 0.03 & 0.84 $\pm$ 0.01 & 1.02 $\pm$ 0.05 \\
Mutation & Target & 0.48 $\pm$ 0.02 & 0.66 $\pm$ 0.03 & 0.84 $\pm$ 0.01 & 1.01 $\pm$ 0.05 \\
GE & Target & 0.47 $\pm$ 0.02 & 0.66 $\pm$ 0.03 & 0.84 $\pm$ 0.01 & 1.03 $\pm$ 0.05 \\
CNV & Target & 0.48 $\pm$ 0.01 & 0.65 $\pm$ 0.03 & 0.84 $\pm$ 0.01 & 1.02 $\pm$ 0.05 \\
Mutation & KEGG & 0.47 $\pm$ 0.02 & 0.66 $\pm$ 0.03 & 0.85 $\pm$ 0.01 & 1.03 $\pm$ 0.06 \\
GE & KEGG & 0.47 $\pm$ 0.02 & 0.65 $\pm$ 0.04 & 0.84 $\pm$ 0.01 & 1.03 $\pm$ 0.08 \\
CNV & KEGG & \textbf{0.46 $\pm$ 0.02} & \textbf{0.67 $\pm$ 0.03} & \textbf{0.85 $\pm$ 0.01} & \textbf{1.05 $\pm$ 0.06} \\
\bottomrule
\end{tabular}
\label{tab:3}
\end{table}
\begin{table}
\centering
\caption{Methods' performance in predicting anticancer drug response}
\vskip 1em
\begin{tabular}{ccccc}
\toprule
Method & RMSE $\downarrow$ & $R^2$ $\uparrow$ & PCC $\uparrow$ & fitness $\uparrow$ \\
\midrule
ADRML & 0.49 & \textbf{0.68} & \textbf{0.85} & 1.04 \\
CDRscan & 0.76 & 0.67 & 0.83 & 0.74\\
CDCN & 0.48 & 0.67 & 0.83 & 1.02 \\
SRMF & \textbf{0.25} & 0.40 & 0.80 & 0.95 \\
CaDRRes & 0.53 & 0.31 & 0.52 & 0.3 \\
KNN & 0.56 & 0.57 & 0.78 & 0.79 \\
\midrule
VGAE (ours) & 0.46 $\pm$ 0.02 & 0.67 $\pm$ 0.03 & \textbf{0.85 $\pm$ 0.01} & \textbf{1.05 $\pm$ 0.06} \\
\bottomrule
\end{tabular}
\label{tab:4}
\end{table}
\newpage
\section{Conclusion}
In this work, we evaluate the effectiveness of variational graph autoencoders in predicting potential polypharmacy side effects on multimodal networks. The results reveal that VGAE trained on one-hot feature vectors outperforms other approaches. Moreover, augmenting Morgan fingerprints before the decoding stage helps boost the performance of VGAE. This suggests further examination of the use of molecular fingerprints in drug-drug interaction problems.
|
{
"arxiv_id": "2302.08620",
"language": "en",
"timestamp": "2023-02-28T02:12:36",
"url": "https://arxiv.org/abs/2302.08620",
"yymm": "2302"
} | \section{Introduction}
Developing in-silico methodologies capable of consistent and reliable binding free energy estimates would be a major breakthrough for drug design and other areas of chemical research.\cite{Jorgensen2004,cournia2020rigorous,griego2021acceleration,xu2022slow} With several advanced simulation software packages now routinely used in industry and academia to model binding affinities of protein-drug complexes,\cite{abel2017advancing,ganguly2022amber} the field has made significant strides toward this goal. Alchemical methods have emerged as the industry standard partly because they can target the changes of binding affinities upon specific chemical modifications of the ligands directly.\cite{cournia2017relative,armacost2020novel,schindler2020large} Theoretical and methodological aspects of free energy are continuously refined and improved.\cite{Mey2020Best,lee2020alchemical,macchiagodena2020virtual,khuttan2021alchemical} However, many challenges remain about the quality of potential energy models\cite{qiu2021Parsleydevelopment} and the correct representation of all of the relevant conformations of the molecular systems.\cite{mobley2012let}
The validation of computational predictions with respect to experimental binding affinities has given the community an understanding of the pitfalls of the models, with indications of ways in which they can be avoided.\cite{wang2015accurate,schindler2020large} Blinded validations, such as the Statistical Assessment of the Modeling of Proteins and Ligands (SAMPL),\cite{geballe2010sampl2} have played a particularly important role in this process.\cite{mobley2014blind,GallicchioSAMPL4,azimi2022application,amezcua2022overview} Because computational predictions are formulated without prior knowledge of experimental results, the evaluation of the models' relative performance is free of implicit biases and reflects more faithfully the expected reliability of the computational models in actual research and discovery settings.
Many SAMPL challenges include host-guest systems, which are considered more straightforward and more approachable, both theoretically and experimentally, than macromolecular systems for testing the reliability of free energy prediction tools.\cite{pal2016SAMPL5,yin2017overview} In this work, we report our findings in tackling the SAMPL9 bCD challenge set, which includes the binding of five phenothiazine-based drugs\cite{guerrero2008complexation} to the $\beta$-cyclodextrin host and its methylated derivative.\cite{SAMPL9bCDrepo} Molecular complexes of $\beta$-cyclodextrin (bCD) are well-known and are used in a variety of biomedical and food science applications.\cite{bertrand1989substituent} They are extensively modeled\cite{chen2004calculation,Wickstrom2013,henriksen2017evaluating,he2019role,rizzi2020sampl6,wu2021alchemical} and thus provide a familiar testing ground for computational models. However, as we will show, the binding equilibrium between phenothiazines and cyclodextrin hosts is far from straightforward and requires deploying the most advanced computational tools and methods in our arsenal. As also discussed in later sections, handling conformational heterogeneity in the form of chirality and multiple binding poses has been the greatest challenge in our computational protocol.
This paper is organized as follows. We first describe the above molecular systems in detail to illustrate how these exist in equilibrium as a mixture of many conformations, each with its distinct binding characteristics. We then review the Alchemical Transfer Method (ATM)\cite{wu2021alchemical,azimi2022relative} and the FFEngine bespoke force field parameterization used here to estimate the binding free energies of the cyclodextrin complexes. We describe the extensive alchemical process involving absolute as well as relative binding free energy calculations to obtain the binding constants of each pose in the host-guest systems and how these constants are combined\cite{Gallicchio2011adv} to yield values comparable to the experimental readouts. Reaching convergence for some complexes involving slow intramolecular degrees of freedom required advanced metadynamics-based conformational sampling strategies,\cite{bussi2018metadynamics,eastman2017openmm} which we incorporated into the alchemical binding free energy calculations. This significant intellectual and computational effort resulted in converged binding free energy estimates with a very good experimental agreement for the bCD host. The effort also illustrates the major challenges inherent in modeling complex molecular binding phenomena as well as the theories and technologies available to tackle these challenges.
\section{\label{sec:molsys}Molecular Systems}
The bCD SAMPL9 challenge concerned the binding of five phenothiazine drugs (Figure \ref{fig:guests})\cite{dahl1986phenothiazine} to $\beta$-cyclodextrin (hereafter bCD) and a modified $\beta$-cyclodextrin (hereafter mCD) (Figure \ref{fig:hosts}). The guests\cite{guerrero2008complexation} share a 3-ring phenothiazine scaffold with a variable alkylamine sidechain on the nitrogen atom of the central ring. Unlike the other guests, PMT's alkylamine sidechain is branched at a chiral center. The CPZ, TDZ, and TFP guests also have a variable aromatic substituent on one of the phenyl rings of the phenothiazine scaffold.
The $\beta$-cyclodextrin host (Figure \ref{fig:hosts}) is a cyclic oligosaccharide of seven D-glucose monomers forming a binding cavity with a wide opening surrounded by secondary hydroxyl groups (top in Figure \ref{fig:hosts}) and a narrower opening (bottom) surrounded by primary hydroxyl groups. Hence, we will refer to the wide opening as the secondary face of bCD and the narrow opening as the primary face. The second host, heptakis-2,6-dimethyl-$\beta$-cyclodextrin (mCD, Figure \ref{fig:hosts}), is a derivative of $\beta$-cyclodextrin in which all of the primary hydroxyl groups and half of the secondary ones are methylated, affecting the size, accessibility, and hydrophobicity of the binding cavity. Although mCD does not have secondary and primary hydroxyl groups, for simplicity, we will refer to the two openings of mCD as secondary and primary faces by analogy with bCD. Being composed of chiral monomers, bCD and mCD are themselves chiral molecules with potentially different affinities for the enantiomers of optically active guests.\cite{rekharsky2000chiral}
\begin{figure*}
\centering
\includegraphics{pics/guests.pdf}
\caption{The phenothiazine molecular guests included in the SAMPL9 $\beta$-cyclodextrin challenge.}
\label{fig:guests}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.7]{pics/hosts.pdf}
\caption{The $\beta$-cyclodextrin (left) and the heptakis-2,6-dimethyl-$\beta$-cyclodextrin (right) molecular hosts included in the SAMPL9 $\beta$-cyclodextrin challenge. The top face of $\beta$-cyclodextrin is surrounded by secondary hydroxyl groups and the bottom face by primary hydroxyl groups. The corresponding faces of heptakis-2,6-dimethyl-$\beta$-cyclodextrin are partially or totally methylated relative to $\beta$-cyclodextrin.}
\label{fig:hosts}
\end{figure}
\subsection{Multiple Chemical Species of the Guests}
The amine group of the alkylamine sidechain is expected to be largely protonated in solution and in the host-guest complex at the pH of the experiment. However, the two tautomers of the protonated piperazine group of the TFP guest are likely to exist at appreciable concentrations and can contribute to host binding to different extents. Similarly, in the case of TDZ, protonation of the alkyl nitrogen produces two enantiomers that can interact differently with the cyclodextrin hosts. Moreover, rather than being planar, the phenothiazine ring system is bent at the connecting central ring, with conformations of both positive and negative bend present in equal amounts in solution (Figure \ref{fig:pheno-chirality}). As illustrated for TDZ in Figure \ref{fig:pheno-chirality}, when an aromatic substituent is present, the species with positive and negative bend are conformational enantiomers, each with the potential to interact differently with the cyclodextrin hosts.\cite{rekharsky2000chiral,quinton2018confining}
The experimental binding assay reports an average over the contributions of the various chemical species of the guests. However, because interconversions between species cannot occur in molecular mechanics simulations or occur too slowly relative to the molecular dynamics timescales, to obtain a binding affinity estimate comparable with the experimental observations, it is necessary to model the binding of each relevant species individually and combine the results.\cite{Jayachandran2006} In this work, we have considered the two conformational enantiomers for each guest (including those of PMZ and PMT, whose unsubstituted phenothiazine scaffolds make the two forms equivalent, to test for convergence), plus the two chiral forms of the protonated alkyl nitrogen of TDZ and the two forms of TFP protonated at the distal and proximal alkyl nitrogens, for a total of 14 guest species. We labeled the seven species with R chirality of the phenothiazine scaffold as PMZ1R, CPZ1R, PMT1R, TDZNR1R, TDZNS1R, TFP1aR, and TFP1bR, where the first part of the label identifies the compound, followed by the net charge ($+1$ for all the species considered), with the `a' and `b' labels identifying the distal and proximal protonated forms of TFP, respectively, and the `NR' and `NS' labels distinguishing the R and S chiral forms of the protonated alkyl nitrogen of TDZ. The last letter identifies the chirality of the phenothiazine scaffold, so that the seven species with S chirality are named PMZ1S, CPZ1S, etc.
\begin{figure}
\centering
\includegraphics[scale=0.6]{pics/phenothiazine-chirality.pdf}
\caption{Illustration of the two conformational enantiomers of the TDZ guest. Similarly to the CPZ and TFP guests, chirality is induced by the aromatic substituent (a thioether here). The unsubstituted guests PMZ and PMT do not possess conformational chirality.}
\label{fig:pheno-chirality}
\end{figure}
\subsection{Multiple Binding Poses}
The guests included in the SAMPL9 challenge bind the cyclodextrin hosts in four distinct binding modes (Figure \ref{fig:poses}). To identify the binding modes, we will refer to the narrow opening of the $\beta$-cyclodextrin circled by primary hydroxyl groups as the primary face of the host (the bottom opening in Figure \ref{fig:hosts}). Similarly, the wider opening (top in Figure \ref{fig:hosts}) surrounded by secondary hydroxyl groups will be referred to as the secondary face of the hosts. The guests can bind the cyclodextrin hosts with the alkylamine sidechain pointing towards the host's secondary (denoted by `s') or primary (denoted by `p') faces (Figure \ref{fig:poses}). Each of these poses is further classified in terms of the position of the aromatic substituent, which can be at either the secondary or primary faces of the host. Hence the binding modes of the guest/cyclodextrin complexes are labeled `ss', `sp', `ps', and `pp', where the first letter refers to the position of the alkylamine sidechain and the second to the position of the aromatic substituent (Figure \ref{fig:poses}).
The binding mode labels are combined with the labels discussed above that identify the chemical form of the guest to obtain the labels for each form of the guest in each binding mode. For example, the guest PMT with $+1$ charge with R chirality in the `ss' binding mode is labeled as PMT1Rss.
For the purpose of the alchemical calculations, the binding modes are defined geometrically in terms of the polar angle $\theta$ and the azimuthal angle $\psi$ illustrated in Figure \ref{fig:posedef}. $\theta$ is the angle between the molecular axes of the host and the guest and determines the orientation of the alkylamine sidechain relative to the host. The molecular axis of the cyclodextrin host (labeled $z$ in Figure \ref{fig:posedef}) is oriented from the primary to the secondary face, going through the centroid of the atoms lining the faces (see Computational Details). The molecular axis of the guest runs from the sulfur atom to the nitrogen atom of the central phenothiazine ring. The angle $\psi$ describes the rotation around the molecular axis of the guest and determines the position of the aromatic substituent. The `sp' binding mode, for example, is defined by $0 \le \theta \le 90^\circ$ and $90^\circ \le \psi \le 180^\circ$ (see Figure \ref{fig:poses} and Computational Details).
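To make the pose assignment concrete, the following is a small numpy sketch of how a snapshot could be classified from the two angles; the atom selections, the sign conventions, and the assumption that $\psi$ is supplied precomputed are simplifications relative to the exact definitions given in Computational Details.
\begin{verbatim}
import numpy as np

def angle_between(u, v):
    """Angle in degrees between two 3D vectors."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def classify_pose(host_primary_centroid, host_secondary_centroid,
                  guest_S, guest_N, psi):
    """Assign 'ss', 'sp', 'ps', or 'pp' from the polar angle theta and twist psi."""
    z_axis = host_secondary_centroid - host_primary_centroid  # primary -> secondary face
    guest_axis = guest_N - guest_S                             # S -> N of the central ring
    theta = angle_between(z_axis, guest_axis)
    # psi (degrees) is the twist about the guest axis, assumed computed separately.
    side_chain = 's' if theta <= 90.0 else 'p'
    substituent = 's' if psi <= 90.0 else 'p'
    return side_chain + substituent
\end{verbatim}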
\begin{figure}
\centering
\includegraphics[scale=0.5]{pics/SAMPL9-poses.pdf}
\caption{Illustration of the classification of the four binding poses of the phenothiazine/cyclodextrin complexes based on the polar and twist angles introduced in Figure \ref{fig:posedef}. Poses are labeled as `ss', `sp', etc.\ where `s' refers to the secondary face of the host and `p' to the primary face of the host. The first letter of the label refers to the orientation of the alkylamine sidechain that can protrude out from the secondary face (poses `ss' and `sp') or from the primary face of the host (poses `ps' and `pp'). Similarly, the second letter refers to the position of the aromatic substituent protruding out from either the primary or secondary faces of the host. }
\label{fig:poses}
\end{figure}
\section{\label{sec:methods}Theory and Methods}
\subsection{Design of the Alchemical Process }
The alchemical calculations aim to estimate the guests' absolute binding free energies (ABFEs) to each host. Direct alchemical ABFE calculations failed to reach convergence for these systems partly due to the relatively large sizes of the guests and partly because of the slow conformational reorganization of the cyclodextrin hosts from a closed apo state to an open guest-bound state.\cite{wickstrom2016parameterization,he2019role} To overcome these obstacles, we adopted a step-wise process made of a series of relative binding free energy calculations (RBFE) starting from the ABFE of a small guest that could be reliably estimated. Specifically, we obtained the ABFE of trans-4-methylcyclohexanol (Figure \ref{fig:otherguests})--the G1 guest of the SAMPL7 bCD challenge\cite{amezcua2021sampl7}--for the secondary and primary binding modes to each host. We defined the `G1s' binding mode of the G1 guest as the one in which the hydroxyl group points toward the secondary face of the cyclodextrin host, while it points to the opposite face in the `G1p' mode (Figure \ref{fig:g1}).
Each binding mode of the complex with G1 was then alchemically converted to an intermediary phenothiazine guest (N-methylphenothiazine, or MTZ, in Figure \ref{fig:otherguests}), similar to the SAMPL9 phenothiazine derivatives with a small methyl group replacing the large alkylamine sidechains. Even though MTZ does not have conformational chirality (Figure \ref{fig:pheno-chirality}), we treated its S and R enantiomers individually to test the convergence of the RBFE estimates for each binding pose. We used atom indexes to distinguish the S and R enantiomers of these symmetric guests. Calculations were conducted to obtain the RBFEs from the G1s to the MTZRsp, MTZRss, MTZSsp, and MTZSss binding poses of the complexes of MTZ with bCD and mCD, and from G1p to the MTZRps, MTZRpp, MTZSps, and MTZSpp of the same complexes, all independently and from different starting conformations. The MTZRsp, MTZRss, MTZSsp, and MTZSss complexes are equivalent and should yield the same RBFE values within uncertainty. Similarly, the MTZRps, MTZRpp, MTZSps, and MTZSpp should yield equivalent RBFEs but distinguishable from those of the MTZRsp, MTZRss, MTZSsp, MTZSss group.
Next, RBFEs were obtained for each complex of MTZ to the corresponding complex of PMZ. For example, the MTZRsp binding poses of the MTZ complexes with bCD and mCD were converted to the PMZ1Rsp binding poses of the corresponding complexes between PMZ and the hosts. Finally, the RBFEs between each pose of PMZ and the corresponding binding poses of the other guests were obtained. During this process, we monitored convergence by looking at the discrepancy between the RBFEs corresponding to the equivalent symmetric poses of the achiral PMZ and PMT guests. The overall alchemical process to obtain the ABFEs of the SAMPL9 phenothiazine guests is summarized in Figure \ref{fig:alchemical-process}.
\begin{figure}
\centering
\includegraphics[scale=0.9]{pics/otherguests.pdf}
\caption{The structures and abbreviations of the molecular guests used as intermediate compounds in the alchemical process. }
\label{fig:otherguests}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{pics/SAMPL9-posedef.pdf}
\caption{Illustration of the geometrical definition of the binding poses of the phenothiazine/cyclodextrin complexes. The definition is based on the polar ($\theta$) and twist ($\psi$) angles of the molecular axis of the guest with respect to the coordinate frame of the host, which includes the $z$-axis that runs from the primary to the secondary face of the host. See Computational Details for the specific definition of the guests' and hosts' coordinate frames.}
\label{fig:posedef}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.7]{pics/SAMPL9-G1.pdf}
\caption{The `s' and `p' binding modes of the G1/$\beta$-cyclodextrin complex. The `s' mode, in which the hydroxyl group points towards the secondary face of the host, is used as a starting species for the `ss' and `sp' binding modes of the phenothiazine/cyclodextrin complexes. The `p' mode, which points towards the primary face, is the starting species for the `ps' and `pp' modes (Figure \ref{fig:poses}).}
\label{fig:g1}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.85]{alchemical-process-2-1-1.pdf}
\caption{The map of relative binding free energy calculations to obtain the binding free energies of each pose of each guest starting from the absolute binding free energy of the corresponding poses of the G1 guest. Nodes of the same color contribute to the binding free energy estimate of one of the five guests: PMZ (yellow), PMT (green), CPZ (cyan), TDZ (violet), and TFP (purple).}
\label{fig:alchemical-process}
\end{figure*}
\subsubsection{Free Energy of Binding for Complexes with Multiple Binding Modes }
The observed binding constant $K_b$ of the complex $RL$ of a receptor $R$ with a ligand $L$ present in forms or poses $L_i$, $i = 1, 2, \ldots$ is the weighted average of the binding constant $K_b(i)$ for each form with weights equal to the relative population $P_0(i)$ of each form in solution\cite{Jayachandran2006,Gallicchio2011adv}
\begin{equation}
K_b = \sum_i P_0(i) K_b(i)
\label{eq:combination-formula}
\end{equation}
Statistical mechanics-based derivations of this formula, which we refer to as the Free Energy Combination Formula, are available.\cite{Jayachandran2006,Gallicchio2011adv} The Free Energy Combination Formula can also be derived using elementary notions as follows: $K_b = C^\circ \frac{[RL]}{[R][L]} = \sum_i C^\circ \frac{[RL_i]}{[R][L]} = \sum_i \frac{[L_i]}{[L]} C^\circ \frac{[RL_i]}{[R][L_i]}$, where $C^\circ \frac{[RL_i]}{[R][L_i]} = K_b(i)$ and $\frac{[L_i]}{[L]} = P_0(i)$. Here $C^\circ = 1$ mol/L, $[RL] = \sum_i [RL_i]$ is the total molar concentration of the complex and $[RL_i]$ is the concentration of mode $i$ of the complex. Similar definitions apply to the concentrations of the ligand $[L]$ and $[L_i]$, and $P_0(i) = [L_i]/[L]$ is the population of mode $i$ of the ligand in solution.
Moreover, as also shown in the Appendix, the fractional contribution of binding mode $i$ to the overall binding constant is the population $P_1(i)$ of mode $i$ of the complex:\cite{Gallicchio2011adv}
\begin{equation}
P_1(i) = \frac{ P_0(i) K_b(i)}{K_b}
\label{eq:p1}
\end{equation}
Below we used this property to infer the probability of occurrence of each mode of the host-guest complexes.
In this specific application, the binding modes refer to the `ss', `sp', etc. orientations of the R and S enantiomers of each guest. We obtained the mode-specific binding constants $K_b(i)$ individually for each binding mode. In the corresponding alchemical simulations, the orientation of the ligand in the binding site is set by restraining potentials based on the $\theta$ and $\psi$ angles (see Figure \ref{fig:posedef} and Computational Details). These orientations are equally likely in solution. We also assume an equal likelihood of the R and S conformational enantiomers of the guests in solution, leading to $P_0(i) = 1/8$ for each pose of each guest. The TDZ and TFP guests have twice as many poses due to the point chirality and multiple protonation states of their alkylamine sidechains, which we also assume to be equally likely in solution. Hence, we set $P_0(i) = 1/16$ for each state of the TDZ and TFP guests.
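The following is a small sketch of how the pose-specific free energies can be combined through Eq.\ (\ref{eq:combination-formula}) and Eq.\ (\ref{eq:p1}); the numerical values in the example are placeholders for illustration only, not results from this work.
\begin{verbatim}
import numpy as np

KB = 0.0019872  # Boltzmann constant in kcal/mol/K

def combine_poses(dG_poses, temperature=300.0, p0=None):
    """Overall Delta G_b and complex populations from pose-specific Delta G_b(i)."""
    kT = KB * temperature
    dG_poses = np.asarray(dG_poses, dtype=float)
    if p0 is None:
        p0 = np.full(len(dG_poses), 1.0 / len(dG_poses))  # equal populations in solution
    kb_i = np.exp(-dG_poses / kT)          # pose-specific binding constants K_b(i)
    kb = np.sum(p0 * kb_i)                 # K_b = sum_i P_0(i) K_b(i)
    p1 = p0 * kb_i / kb                    # population of each pose in the complex
    return -kT * np.log(kb), p1

# Placeholder pose-specific values (kcal/mol) for the 8 poses of a hypothetical guest.
dG_overall, pose_populations = combine_poses(
    [-4.1, -3.2, -2.5, -1.0, -3.9, -3.0, -2.2, -0.8])
\end{verbatim}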
\subsection{The Alchemical Transfer Method}
The Alchemical Transfer Method (ATM) is based on a coordinate displacement perturbation of the ligand between the receptor-binding site and the explicit solvent bulk and a thermodynamic cycle connected by a symmetric intermediate in which the ligand interacts with the receptor and solvent environments with equal strength.\cite{wu2021alchemical,azimi2022application} The perturbation energy $u$ for transferring the ligand from the solution to the binding site is incorporated into a $\lambda$-dependent alchemical potential energy function of the form
\begin{equation}
U_\lambda(x) = U_0(x) + W_\lambda(u)
\label{eq:alch-func}
\end{equation}
where $x$ represents the system's coordinates, $U_0(x)$ is the potential energy function that describes the unbound state, and $W_\lambda$ is the softplus alchemical potential\cite{khuttan2021alchemical,wu2021alchemical,azimi2022application} such that the system's potential energy function is transformed from that of the unbound state to that of the bound state as $\lambda$ goes from $0$ to $1$. This alchemical process computes the excess component of the free energy of binding. The ideal term $-k_B T \ln C^\circ V_{\rm site}$, where $C^\circ = 1$ mol/L and $V_{\rm site}$ is the volume of the binding site, is added in post-processing to obtain the standard free energy of binding.
In this work, we used the strategy above to compute the absolute binding free energy (ABFE) of the G1 guest in two different poses. The binding free energies of the phenothiazine guests are obtained by a series of relative binding free energy (RBFE) calculations starting from G1 (Figure \ref{fig:alchemical-process}). The alchemical RBFE implementation of the ATM method\cite{azimi2022relative} is similar to the ABFE implementation except that one ligand of the pair under investigation is transferred from the solution to the binding site while the other ligand is transferred in the opposite direction. The receptor and the two ligands are placed in a single solvated simulation box in such a way that one ligand is bound to the receptor and the other is placed in an arbitrary position in the solvent bulk. Molecular dynamics simulations are then conducted with a $\lambda$-dependent alchemical potential energy function that connects, in an alchemical sense, the state of the system where one guest is bound to the receptor and the other in solution, to the state where the positions of the two guests are reversed. The free energy change of this process yields the relative binding free energy of the two guests. To enhance convergence, the two ligands are kept in approximate alignment to prevent the one in solution from reorienting away from the orientation of the bound pose. We have shown mathematically that the alignment restraints implemented in ATM do not bias the binding free energy estimates.\cite{azimi2022relative}
In this work, we employed metadynamics conformational sampling to obtain converged RBFE estimates for the PMT guest. Well-tempered metadynamics\cite{barducci2008well} is a well-established enhanced sampling technique to sample rare events during MD simulations when they are separated from other metastable states by large energy barriers. The technique uses a bias potential, $U_{\rm bias}$, that lowers energy barriers along a slow degree of freedom. In this work, the metadynamics biasing potential is obtained along a dihedral angle, $\varphi$, of the alkylamine sidechain of PMT (see Computational Details) from a simulation in a water solution, using OpenMM's well-tempered metadynamics implementation by Peter Eastman.\cite{eastman2017openmm} The alchemical binding free energy calculation is then conducted with the biasing potential, $U_{\rm bias}(\varphi)$, added to the alchemical potential energy function in Eq.\ (\ref{eq:alch-func}). The resulting binding free energy estimate is then unbiased using a book-ending approach\cite{hudson2019use} by computing the free energy differences of the system without the biasing potential from samples collected with the biasing potential at the endpoints of the alchemical path. In this work, we used a simple unidirectional exponential averaging formula
\[
-k_B T \ln \langle \exp(U_{\rm bias}/k_B T) \rangle_{\rm bias}
\]
to evaluate the free energy corrections for unbiasing. Due to the larger excursions of the dihedral angle with metadynamics, the unbiased ensemble is a subset of the biased ensemble and the exponential averaging estimator converges quickly in this case.
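A minimal sketch of evaluating this correction from the bias energies sampled at an alchemical endpoint is shown below; the array name is a placeholder, and a log-sum-exp form is used only for numerical stability of the exponential average.
\begin{verbatim}
import numpy as np

def unbias_correction(u_bias, temperature=300.0):
    """-kT ln < exp(U_bias / kT) >_bias, with U_bias in kcal/mol."""
    kT = 0.0019872 * temperature           # kcal/mol
    x = np.asarray(u_bias, dtype=float) / kT
    # log of the average of exp(x), computed stably with log-sum-exp
    log_avg = np.logaddexp.reduce(x) - np.log(len(x))
    return -kT * log_avg

# u_bias_samples: metadynamics bias energies U_bias(phi) evaluated on endpoint
# snapshots (placeholder array); correction = unbias_correction(u_bias_samples)
\end{verbatim}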
\subsection{Force Field Parametrization}
Force field parameters were assigned to the hosts and the guests using the in-house FFEngine workflow developed at Roivant Discovery. FFEngine is a workflow for the bespoke parametrization of ligands with the Amber force field functional form.\cite{wang2006automatic} The partial charges were derived from GFN2-xTB/BCC with pre-charges from the semi-empirical QM method GFN2-xTB,\cite{bannwarth2019gfn2} and bond charge correction (BCC) parameters fitted to the HF/6-31G* electrostatic potential (ESP) with the COSMO implicit solvation model from a dataset of 50,000 drug-like compounds. The ESP with an implicit solvation model was deemed necessary for these highly polar and charged host-guest systems even though it is expected to yield a fixed-charge model slightly more polarized than the default GAFF2/Amber one.
\subsection{Computational Details }
The guests were manually docked to the hosts using Maestro (Schr\"{o}dinger, Inc.) in each of the four binding poses. A flat-bottom harmonic positional restraint with a force constant $k_c = 25$ kcal/mol/\AA$^2$ and a tolerance of $4$ \AA\ was applied to define the binding site region.\cite{Gilson:Given:Bush:McCammon:97,Gallicchio2011adv} For this purpose, the centroid of the guest was taken as the center of the central ring of the phenothiazine core, and the centroid of the cyclodextrin host was defined as the center of the oxygen atoms forming the ether linkages between the sugar monomers. Boresch-style\cite{Boresch:Karplus:2003} orientational restraints were imposed to keep each complex in the chosen binding pose. These were implemented as flat-bottom restraints acting on the $\theta$ and $\psi$ angles in Figure \ref{fig:posedef} with a force constant of $k_a = 0.05$ kcal/mol/deg$^2$, and centers and tolerances tailored for each pose. For example, the orientational restraints for the `sp' pose are centered on $\theta = 0^\circ$ and $\psi = 180^\circ$ with $\pm 90^\circ$ tolerances for both. The $\theta$ angle is defined as the angle between the $z$-axis of the host, defined as the axis going through the centroid of the oxygen atoms of the primary hydroxyl groups and the centroid of the oxygen atoms of the secondary hydroxyl groups, and the molecular axis of the guest, defined as the axis going through the S and N atoms of the central ring of the phenothiazine core. The $\psi$ angle is defined as the dihedral angle between the plane formed by the C1-N-S triplet of atoms of the phenothiazine core of the guest and the plane spanned by the $z$-axis of the host and the molecular axis of the guest, where C1 represents the carbon atom of the phenothiazine core of the guest bearing the aromatic substituent. Very loose flat-bottom harmonic positional restraints ($4$ \AA\ tolerance and $25$ kcal/mol/\AA$^2$ force constant) were applied to the ether-linkage oxygen atoms of the cyclodextrins to keep the hosts from wandering freely in the simulation box. The ATM displacement vector was set to $(-30,0,0)$ \AA.
Force field parameters were assigned as described above. In RBFE calculations, the second ligand in the pair was placed in the solvent by translating it along the displacement vector. The resulting system was then solvated using tleap\cite{AmberTools} in a TIP3P box with a 10 \AA\ solvent buffer and sodium and chloride counterions to balance the negative and positive charges, respectively. The systems were energy minimized, thermalized, and relaxed in the NPT ensemble at 300 K and 1 atm pressure. Annealing of the systems to $\lambda = 1/2$ in the NVT ensemble followed to obtain initial structures to initiate the parallel replica exchange ATM calculations. Alchemical calculations were conducted with the OpenMM 7.7 MD engine on the CUDA platform with a 2 fs time-step, using the AToM-OpenMM software.\cite{AToM-OpenMM} Asynchronous Hamiltonian Replica Exchange\cite{gallicchio2015asynchronous} in $\lambda$ space was attempted every 20 ps and binding energy samples were collected at the same frequency. The ATM-RBFE employed 22 $\lambda$ states distributed from $\lambda = 0$ to $\lambda = 1$ using the non-linear alchemical soft-plus potential and the soft-core perturbation energy with parameters $u_{\rm max} = 200$ kcal/mol, $u_c = 100$ kcal/mol, and $a=1/16$.\cite{khuttan2021alchemical} The input files of the simulations are provided in our lab's GitHub repository (https://github.com\-/GallicchioLab\-/SAMPL9-bCD). We collected approximately 20 ns trajectories for each replica, corresponding to approximately 440 ns for each of the 64 alchemical steps for each host (Figure \ref{fig:alchemical-process}). Overall, we simulated the systems for over $6$ $\mu$s. UWHAM\cite{Tan2012} was used to compute binding free energies and the corresponding uncertainties after discarding the first half of the trajectory.
To obtain the torsional flattening biasing potential, we simulated the PMT guest in solution with metadynamics over the (C-N-C-C) alkylamine sidechain torsional angle highlighted in Figure \ref{fig:pmf-torsion}. A well-tempered metadynamics bias factor of 8 was used, with a Gaussian height of 0.3 kcal/mol and width of $10^\circ$.\cite{barducci2008well} The bias potential was collected for 20 ns, updating it every 100 ps. The resulting potential of mean force is shown in Figure \ref{fig:pmf-torsion}. The metadynamics-derived biasing potential was used in all the RBFE calculations involving the PMT guest to accelerate the sampling of the slow torsional degree of freedom in question.
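How such a well-tempered bias is accumulated on a periodic torsion can be sketched as follows (a toy illustration with the parameters quoted above, not the production implementation):
\begin{verbatim}
import numpy as np

KB = 0.0019872                     # kcal/mol/K
TEMP, GAMMA = 300.0, 8.0           # temperature and well-tempered bias factor
HEIGHT0 = 0.3                      # initial Gaussian height, kcal/mol
WIDTH = np.deg2rad(10.0)           # Gaussian width
KB_DT = KB * TEMP * (GAMMA - 1.0)  # k_B * DeltaT entering the tempering rule

def wrap(dphi):
    """Minimum-image difference for a periodic torsion (radians)."""
    return (dphi + np.pi) % (2.0 * np.pi) - np.pi

class WellTemperedBias:
    """Accumulates Gaussian hills on one torsion; V(phi) flattens its PMF."""
    def __init__(self):
        self.centers, self.heights = [], []

    def value(self, phi):
        if not self.centers:
            return 0.0
        d = wrap(phi - np.asarray(self.centers))
        return float(np.sum(np.asarray(self.heights)
                            * np.exp(-d**2 / (2.0 * WIDTH**2))))

    def deposit(self, phi):
        # hills shrink where the bias is already large (well-tempered rule)
        h = HEIGHT0 * np.exp(-self.value(phi) / KB_DT)
        self.centers.append(phi)
        self.heights.append(h)
\end{verbatim}
Here \texttt{deposit} would be called every 100 ps with the current (C-N-C-C) torsion value; after 20 ns the accumulated bias yields the PMF of Figure \ref{fig:pmf-torsion} up to the well-tempered scale factor $\gamma/(\gamma-1)$, and the metadynamics-derived bias was then reused as the flattening potential in the RBFE calculations, as described above.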
\section{Results}
\subsection{Binding Free Energy Predictions }
The calculated binding free energies of the cyclodextrin/phenothiazine complexes, obtained by combining the pose-specific binding free energies, are listed in Table \ref{tab:dgvsexpt} and compared to the experimental measurements. We provide the results of each individual free energy calculation in the Supplementary Information. The second column of Table \ref{tab:dgvsexpt} reports the blinded computational predictions submitted to the SAMPL9 organizers, and the third column the revised predictions obtained subsequently after correcting setup errors and resolving unconverged calculations. Specifically, we uncovered cases where binding poses were misidentified and where chiral centers of the ligands and the hosts were inverted during energy minimization because of close initial atomic overlaps. As discussed below, in the initial predictions we were also unable to obtain consistent binding free energy estimates for symmetric poses.
The blinded and revised predictions for the bCD complexes agree with the experiments. The revised predictions, in particular, are all within 1.5 kcal/mol of the experimental measurements and within 1 kcal/mol for four of the five bCD complexes. Although the range of the binding affinities is small, some trends are reproduced and the weakest binder (PMT) is correctly identified. The quality of the predictions for the mCD host is not as good, and it worsened upon revision. The experiments show that the phenothiazine guests bind slightly more strongly to mCD than to bCD. However, except for CPZ, the calculations predict significantly weaker binding to mCD relative to bCD. The computed free energies of the mCD complexes are on average over 2 kcal/mol less favorable than the experimental ones. The revised prediction for the mCD-TDZ complex is particularly poor and fails to identify this complex as the most stable in the set. While a detailed investigation of the sources of the poor predictions for mCD has not been carried out, our model may not have identified the best binding poses for this more flexible host. mCD is also more hydrophobic, and the energy model may overpredict the reorganization free energy required to go from the apo to the holo conformational ensemble of this host.
\begin{table*}
\caption{\label{tab:dgvsexpt}The binding free energy predictions submitted to the SAMPL9 challenge compared to the revised predictions and the experimental measurements.}
\begin{center}
\sisetup{separate-uncertainty}
\begin{tabular}{l S[table-format = 3.2(2)] S[table-format = 3.2(2)] S[table-format = 3.2]}
Name & \multicolumn{1}{c}{$\Delta G_b$(SAMPL9)$^{a,b,c}$} & \multicolumn{1}{c}{$\Delta G_b$(ATM)$^{a,b,d}$} & \multicolumn{1}{c}{$\Delta G_b$(expt)$^{a,e}$} \\ [0.5ex]
\hline
bCD-TDZ & -4.28(90) & -4.56(47) & -5.73 \\
bCD-TFP & \multicolumn{1}{S[table-format = 3.2(3)]}{-6.51(111)} & -5.42(99) & -5.09 \\
bCD-PMZ & -3.73(48) & -4.03(45) & -5.00 \\
bCD-PMT & -2.53(70) & -3.00(47) & -4.50 \\
bCD-CPZ & -7.28(92) & -4.64(70) & -5.45 \\
\hline
mCD-TDZ & \multicolumn{1}{S[table-format = 3.2(3)]}{-5.16(140)} & -2.96(68) & -6.50 \\
mCD-TFP & -4.14(62) & -3.98(70) & -5.57 \\
mCD-PMZ & -2.37(54) & -2.34(55) & -5.08 \\
mCD-PMT & -1.80(99) & -1.58(60) & -5.39 \\
mCD-CPZ & -5.22(90) & -5.13(88) & -5.43 \\
\hline
\end{tabular}
\begin{flushleft}\small
$^a$In kcal/mol. $^b$Errors are reported as twice the standard deviation. $^c$Blinded computational predictions submitted to the SAMPL9 challenge organizers. $^d$Revised ATM computational predictions. $^e$Provided by the SAMPL9 organizers: https://www.samplchallenges.org.
\end{flushleft}
\end{center}
\end{table*}
\subsection{Binding Mode Analysis}
We used the binding mode-specific binding constants we obtained (see Supplementary Information) to infer the population of each binding mode for each complex shown in Figure \ref{fig:bmode-populations}. The results indicate that the complexes visit many poses with appreciable probability. The only exceptions are TFP binding to bCD and CPZ binding to mCD which are predicted to exist with over 75\% probability in only one pose (`sp' in the R configuration in the case of the TFP-bCD complex and `sp' in the S configuration in the case of the CPZ-mCD complex). In general, the guests bind the hosts preferentially in the `sp' and `ss' modes with the alkylamine sidechain placed near the primary face of the hosts (Figure \ref{fig:poses}). This trend is less pronounced for the complexes between PMT, PMZ, and TDZ with bCD, which occur in the `sp'/`ss' and `pp'/`ps' modes with similar frequency, and it is more pronounced for all complexes with mCD which strongly prefer the alkylamine sidechain towards the primary face. Unlike the alkylamine sidechain, the aromatic substituent of the CPZ, TDZ, and TFP guests is preferentially placed towards the secondary face of the cyclodextrin hosts. This is evidenced by the higher probability of the `sp' binding modes (red and green bars in Figure \ref{fig:bmode-populations}) over the `ss' binding modes (blue and yellow).
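For concreteness, the conversion from pose-specific binding constants to the populations plotted in Figure \ref{fig:bmode-populations} can be sketched as follows (hypothetical numbers, not our analysis scripts; the solution-state weights entering Eq.\ (\ref{eq:combination-formula}) are taken as uniform here, as appropriate for symmetric poses):
\begin{verbatim}
import numpy as np

KB_T = 0.596  # kcal/mol at ~300 K

# Hypothetical pose-specific binding free energies (kcal/mol) for one complex.
dG_pose = {"spR": -5.0, "ssR": -4.2, "spS": -4.6, "ssS": -3.9,
           "ppR": -3.1, "psR": -3.4, "ppS": -2.8, "psS": -3.0}
poses = list(dG_pose)

# Pose-specific binding constants, K_i proportional to exp(-dG_i / kT).
K = np.array([np.exp(-dG_pose[p] / KB_T) for p in poses])

# Solution-state populations of the guest conformers entering the
# combination formula; uniform here by symmetry.
P0 = np.full(len(poses), 1.0 / len(poses))

K_total = np.sum(P0 * K)              # combined binding constant
pop_bound = P0 * K / K_total          # pose populations in the complex
dG_total = -KB_T * np.log(K_total)

for p, w in zip(poses, pop_bound):
    print(f"{p}: {100 * w:4.1f} %")
print(f"combined dG_b = {dG_total:.2f} kcal/mol")
\end{verbatim}
For guests whose solution conformers are not equally populated, the weights \texttt{P0} would instead be the solution-state populations of the corresponding conformational states.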
Reassuringly, the calculations predict that the populations of the symmetric binding modes of the complexes with the PMZ and PMT guests are more evenly distributed than for the other complexes. Lacking an aromatic substituent (Figure \ref{fig:guests}), the PMZ and PMT guests do not display conformational chirality (Figure \ref{fig:pheno-chirality}). Hence, their `ssS', `spS', `ssR', and `spR' binding modes are chemically equivalent and should have the same population. Similarly, the binding modes `psS', `ppS', `psR', and `ppR' of these guests are mutually equivalent. Still, they are distinguishable from the `ssS', `spS', `ssR', `spR' group by the position of the alkylamine sidechain (Figure \ref{fig:poses}). We used these equivalences to assess the level of convergence of the binding free energy estimates. Although redundant for symmetric poses, we simulated each binding mode of these guests individually, starting from different initial configurations, and checked how much the pose-specific binding free energies varied within each symmetric group. For example, the computed populations of the `ssS', `spS', `ssR', and `spR' poses of the PMZ-bCD complex vary in a narrow range between $7.5$ and $15.9$\%, indicating good convergence. However, the corresponding populations for the complex with mCD are not as consistent (the `ssS' mode is predicted to be significantly less populated, at 4\%, than the other modes, at 20--40\%), reflecting poorer convergence.
The pose-specific binding free energy estimates probe the chiral binding specificity of the hosts. Except for the TFP guest that is predicted to bind predominantly in the R chiral form (88\% population), bCD shows little chiral preference. mCD displays a slightly stronger chiral specificity, with CPZ predicted to bind predominantly in the S form and TFP in the R form.
\begin{figure*}
\centering
\includegraphics[scale=0.50]{binding_mode_bCD.pdf} \includegraphics[scale=0.50]{binding-mode-mCD.png}
\caption{\label{fig:bmode-populations} Binding mode populations of the complexes with bCD (left) and mCD (right).}
\end{figure*}
\subsection{Enhanced Conformational Sampling of the PMT Guest }
As discussed above, the `ssS', `spS', `ssR', and `spR' binding poses of the PMT guest, which lacks an aromatic substituent, are chemically indistinguishable and should yield equivalent pose-specific ABFE estimates. Similarly, the `psS', `ppS', `psR', and `ppR' should yield the same binding free energy within statistical uncertainty. Yet, in our first attempt submitted to SAMPL, our predictions did not achieve the expected consistency (Table \ref{tab:dgpmt1}, second column). In Table \ref{tab:dgpmt1} we show the binding free energy estimates for each PMT binding pose relative to the same pose of PMZ, whose poses are equivalent in the same way as for PMT. For instance, while the four top poses for bCD are expected to yield the same RBFEs, the actual estimates show a scatter of more than $4$ kcal/mol. The other groups of equivalent binding poses of bCD and mCD also show significant scatter, which is indicative of poor convergence.
Analysis of the molecular dynamics trajectories later revealed that the observed scatter of the relative binding free energy estimates was due to conformational trapping of the PMT guest in its starting conformation, which was set randomly during system setup. Simulations with PMT trapped in some conformations overestimated the RBFE, while those trapped in others underestimated it. We pinpointed the conformational trapping to the branched alkylamine side chain of PMT, which showed hindered rotation around one of its torsional angles owing to a large free energy barrier separating the gauche(+) and gauche(-) conformers (Figure \ref{fig:pmf-torsion}). The variation in the trapped conformer from one alchemical calculation to another broke the symmetry between equivalent poses and caused the scatter in the observed RBFEs.
To correct these inconsistencies, we modified our alchemical binding protocol to include a metadynamics-derived flattening biasing potential that reduced the magnitude of the free energy barrier separating the conformers of the alkylamine sidechain of PMT (see Methods and Computational Details). We confirmed that the biasing potential successfully induced rapid conformational transitions between these conformers, which were rarely achieved with the conventional ATM protocol. Consequently, integrating metadynamics-enhanced sampling with ATM (ATM-MetaD) produced much better convergence of the binding free energy estimates of symmetric poses starting from different initial conformers (Table \ref{tab:dgpmt1}). For example, the large discrepancy of the RBFE estimates between the `spS' and `spR' binding poses was reduced to less than 1 kcal/mol with ATM-MetaD, in closer agreement with the statistical uncertainties. With only one exception, similarly improved convergence was achieved for the other groups of equivalent binding poses of bCD and mCD, whose estimates fall within a 1 kcal/mol range of each other (Table \ref{tab:dgpmt1}).
\begin{figure}
\centering
\includegraphics[scale=0.25]{pics/pmt1-pmf-torsion.png}
\caption{The potential of mean force (PMF) in water solution along the highlighted torsional angle, $\varphi$, of PMT1 computed by well-tempered metadynamics sampling.\cite{barducci2008well} The PMF identifies two major gauche conformational states at positive and negative angles around $60^\circ$ and $120^\circ$, separated by a large free energy barrier at $180^\circ$ of more than 7 kcal/mol from the global minimum at $-60^\circ$. The free energy barrier is sufficiently high that interconversions between the two stable conformational states are not observed on the time scale of the alchemical calculations without the metadynamics landscape-flattening potential.\label{fig:pmf-torsion}}
\end{figure}
\begin{table}
\caption{\label{tab:dgpmt1} Relative binding free energy estimates of the binding poses of PMT relative to the same binding pose of PMZ for the two cyclodextrin hosts bCD and mCD and with and without metadynamics enhanced sampling.}
\begin{center}
\sisetup{separate-uncertainty}
\begin{tabular}{l S[table-format = 3.2(2)] S[table-format = 3.2(2)]}
Pose & \multicolumn{1}{c}{$\Delta\Delta G_b$(ATM)$^{a,b,c}$} & \multicolumn{1}{c}{$\Delta\Delta G_b$(ATM+MetaD)$^{a,b,d}$} \\ [0.5ex]
\hline
\multicolumn{3}{c}{bCD} \\
spS & 3.94(39) & 0.44(25) \\
ssS & 2.80(39) & 0.58(25) \\
spR & 0.28(34) & 1.12(24) \\
ssR & 4.62(45) & 1.65(25) \\ \hline
psS & 1.98(36) & 1.49(24) \\
ppS & 2.03(39) & 1.10(24) \\
psR & 0.77(29) & 1.46(24) \\
ppR & 0.28(39) & 0.57(24) \\
\hline\hline
\multicolumn{3}{c}{mCD} \\
spS & 1.93(37) & 0.96(25) \\
ssS & 0.57(44) & 0.11(26) \\
spR & 1.59(41) & 0.30(25) \\
ssR & 0.25(42) & 1.90(25) \\ \hline
psS & 1.26(39) & -0.14(24) \\
ppS & 0.20(42) & 0.95(25) \\
psR & 1.54(41) & -0.41(25) \\
ppR & 0.10(38) & 0.10(24) \\
\end{tabular}
\begin{flushleft}\small
$^a$In kcal/mol. $^b$Errors are reported as twice the standard deviation. $^c$Estimates from the blinded computational predictions submitted to the SAMPL9 challenge organizers. $^d$Revised ATM estimates with metadynamics-enhanced conformational sampling.
\end{flushleft}
\end{center}
\end{table}
\section{Discussion }
Molecular binding equilibria are central to applications ranging from pharmaceutical drug design to chemical engineering. Obtaining reliable estimates of binding affinities by molecular modeling is one of the holy grails of computational science. Enabled by recent developments in free energy theories and models and by the increase in computing power, rigorous dynamical free energy models of molecular recognition, which represent the conformational diversity of molecules at atomic resolution, increasingly complement early static structure-based virtual screening tools such as molecular docking. However, many challenges still remain before free energy models reach the level of usability and performance required for their wide application in chemical research. By offering a platform to assess and validate computational models against high-quality experimental datasets in an unbiased fashion, the SAMPL series of blinded challenges have significantly contributed to the advancement of free energy models.\cite{mobley2017predicting} By participating in SAMPL challenges we have refined and improved our models against host-guest and protein-ligand datasets\cite{Gallicchio2012a,Gallicchio2014octacid,GallicchioSAMPL4,deng2016large,pal2016SAMPL5,azimi2022application} and built an appreciation for the complexities of molecular recognition phenomena and the level of detail required to model them accurately.
The present SAMPL9 bCD challenge highlights the importance of the proper treatment of conformational heterogeneity to obtain reliable quantitative descriptions of binding equilibria. We undertook this challenge with the mindset that host-guest systems are simpler surrogates of more challenging and conformationally diverse protein-ligand complexes and, hence, more suitable to assess computational methodologies. However, while working on setting up the molecular simulations, we realized that the phenothiazine/cyclodextrin complexes were far more chemically and conformationally diverse than expected. The majority of the guests exist in solution as mixtures of enantiomers related by nitrogen inversion (Figure \ref{fig:pheno-chirality}) which are distinctly recognized by the chiral hosts. As a result, one enantiomer could be significantly enriched relative to the other when bound to the host. In addition, each enantiomer binds the host in four generally distinct binding orientations that differ in the placement of the alkylamine sidechain and the aromatic substituent relative to the host (Figure \ref{fig:poses}). While in the experimental setting the guests and the complexes rapidly transition from one pose to another, this level of conformational heterogeneity poses serious challenges for standard molecular dynamics conformational sampling algorithms which are generally limited to one binding pose.
When facing these complexities, it is tempting to limit the modeling to the most important binding pose. While it is true that often the most favorable pose contributes the most to the binding affinity and that neglecting minor poses results in small errors, binding pose selection remains an unresolved issue. Clearly, the identification of the major pose cannot be carried out by binding free energy analysis of each pose because that is precisely what one seeks to avoid in such a scenario. Whichever approach is adopted, it must be capable of identifying the most stable pose of each complex among many competing poses. The present results illustrate this challenge. For example, the populations derived from our free energy analysis (Figure \ref{fig:bmode-populations}) indicate that the `spR' binding pose is often one of the most populated (red in Figure \ref{fig:bmode-populations}). However, CPZ is predicted to strongly prefer the `spS' pose when binding to mCD (orange in Figure \ref{fig:bmode-populations}B), and limiting the modeling to the `spR' pose would result in a gross underestimation of the binding affinity. Similarly, the TDZ-bCD complex is predicted to exist in a variety of poses (Figure \ref{fig:bmode-populations}A), including, for instance, the `psR' pose with the alkylamine sidechain pointing towards the primary face of bCD, with the `spR' pose contributing only a small fraction of the observed binding affinity. Clearly, at least in this case, limiting the modeling to one carefully selected predetermined pose would lead to significant mispredictions for individual complexes.
To obtain an estimate of the observed binding constants, in this work, we opted to compute the binding free energies of all of the relevant binding poses of the system and to integrate them using the free energy combination formula [Eq.\ (\ref{eq:combination-formula})]. The combination formula requires the populations of the conformational states of the guest in solution that, in this case, are easily obtained by symmetry arguments. Still, the work involved 64 individual alchemical free energy calculations (Figure \ref{fig:alchemical-process}) and hundreds of nanoseconds of simulation on GPU devices. While we attempted to automate the process as much as possible, setup errors were made and it is likely that some yet undiscovered defects are still affecting our revised results. We assessed the convergence of the pose-specific binding free energy estimates by monitoring the consistency between the results for symmetric poses. As a result of this assessment, we realized that one guest (PMT) was affected by slow conformational reorganization that required metadynamics treatment. This best-effort attempt resulted in good quantitative predictions for the complexes with $\beta$-cyclodextrin but, for largely unknown reasons, failed to properly describe the binding free energies of the complexes with its methylated derivative.
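For reference, a pose combination of this type has, schematically, the form
\begin{equation*}
K_b \;=\; \sum_i P^{(0)}_i\, K^{(i)}_b ,
\end{equation*}
where $K^{(i)}_b$ is the binding constant restricted to binding pose $i$ and $P^{(0)}_i$ is the population of the corresponding conformational state of the guest in solution; up to notation, this is the content of Eq.\ (\ref{eq:combination-formula}). For the symmetric poses discussed above, the $P^{(0)}_i$ are equal by symmetry, which is what makes the pose-specific estimates directly comparable.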
\section{Conclusions}
In this work, we describe our effort to obtain alchemical computational estimates of the binding constants of a set of phenothiazine guests to cyclodextrin hosts as part of the SAMPL9 bCD challenge using the Alchemical Transfer Method. The free energy modeling of these systems proved significantly more challenging than expected due to the multiple conformational states of the guests and the multiple binding poses of the complexes which had to be treated individually. Overall, 64 individual alchemical calculations were employed to obtain binding free energy estimates comparable to the experimental observations. The predictions were quantitative for the $\beta$-cyclodextrin host but failed to accurately describe the observed binding affinities to the methylated derivative. The work shows that even simple molecular systems can require extensive modeling efforts to treat conformational heterogeneity appropriately and it illustrates the role that multiple binding poses can play in protein-ligand binding prediction and, ultimately, drug design.
\section{Acknowledgements}
We acknowledge support from the National Science Foundation (NSF CAREER 1750511 to E.G.). Molecular simulations were conducted on Roivant's computational cluster and on the Expanse GPU supercomputer at the San Diego Supercomputing Center supported by NSF XSEDE award TG-MCB150001. We are grateful to Mike Gilson for providing experimental data for the SAMPL9 bCD challenge. We appreciate the National Institutes of Health for its support of the SAMPL project via R01GM124270 to David L. Mobley.
\section{Data Availability}
Input files of the AToM-OpenMM simulations are available on the GitHub repository {\tt github.com/\-GallicchioLab/\-SAMPL9-bCD}. The AToM-OpenMM software is freely available for download on GitHub.\cite{AToM-OpenMM} A detailed list of the results and their analysis are provided in the Supplementary Information. Simulation MD trajectories are available from the corresponding author upon reasonable request.
\section{Supplementary information}
Spreadsheets \href{https://docs.google.com/spreadsheets/d/1ftXvRjq36rh_LcUNihMgoCRSC1s_9XK-XTu3ZCDVmPE}{SAMPL9-bCDpc-FFEngine} and \href{https://docs.google.com/spreadsheets/d/1GOFku4EadRT9fpTbrfSbj7HYDC-6kx8Smq5JaLD6c8s}{SAMPL9-mCDpc-FFEngine} containing: (i) the absolute binding free energy of the host--G1 complex in the `s' and `p' binding modes, (ii) the relative binding free energies between G1 in the `p' and `s' poses and MTZ in the `ss', `sp', etc. binding modes, (iii) the relative binding free energy between MTZ and PMZ in each of the eight binding modes, (iv) the relative binding free energies between PMZ and the other guests in each of the eight binding poses, (v) the binding mode specific binding constants for each complex in each binding mode, and (vi) the calculated populations of each binding mode for each complex.
\section{Abstract}
In this work, we present a novel physics-based data-driven framework for reduced-order modeling of laser ignition in a model rocket combustor based on parameterized neural ordinary differential equations (PNODE). Deep neural networks are embedded as functions of high-dimensional parameters of laser ignition to predict various terms in a 0D flow model including the heat source function, pre-exponential factors, and activation energy. Using the governing equations of a 0D flow model, our PNODE needs only a limited number of training samples and predicts trajectories of various quantities such as temperature, pressure, and mass fractions of species while satisfying physical constraints. We validate our physics-based PNODE on solution snapshots of high-fidelity Computational Fluid Dynamics (CFD) simulations of laser-induced ignition in a prototype rocket combustor. We compare the performance of our physics-based PNODE with that of kernel ridge regression and fully connected neural networks. Our results show that our physics-based PNODE provides solutions with lower mean absolute errors of average temperature over time, thus improving the prediction of successful laser ignition with high-dimensional parameters.
\section{Introduction}
\subsection{Background and Related Work.}
Computational Fluid Dynamics (CFD) simulations are a critical tool in the design of combustion systems, as they provide predictions of crucial quantities that are often not accessible in experiments. However, such high-fidelity simulations are computationally expensive because they require the solution of partial differential equations with complex combustion chemistry spanning a wide range of spatiotemporal scales. This becomes an obstacle to the optimization, control, and design of combustion systems, including scramjets, rocket combustors, and gas turbines, where repeated experiments or simulations of ignition are often needed to evaluate ignition sensitivity and uncertainty under various operational conditions. Therefore, many studies in the past have focused on building predictive and scalable surrogate models for combustion that can help improve the development of efficient engine systems.
One possible approach is the use of thermodynamical models without spatial information, which are also known as zero-dimensional models \cite{tang2022review}. Predicting average combustion variables that are crucial to understanding the combustion dynamics, zero-dimensional single-zone models provide real-time simulations with significantly lower computational cost while also maintaining reasonable accuracy for model-based control of combustion engines. First introduced as a mean value model by \citeauthor{hendricks1990mean}, a zero-dimensional single-zone model was constructed based on energy conservation laws and fitted to spark ignition engines that showed 2\% error over the entire operating range \cite{hendricks1990mean}. The use of a reaction-based 0D diesel combustion model for the turbocharged engine was considered by \citeauthor{men2019reaction} to predict the combustion rate, the temperature and the pressure of the engine \cite{men2019reaction}. \citeauthor{bengtsson2004modeling} applied a 0D model with eight reactions to homogeneous charge compression ignition (HCCI) engines for control analysis \cite{bengtsson2004modeling}. However, the combustion parameters of the 0D model often need to be re-calibrated under different operational conditions, and the calibration of chemical reaction parameters, such as reaction rate and activation energy, requires a deep understanding of chemical kinetics and combustion dynamics \cite{sun2005modeling}.
Data-driven reduced-order modeling (ROM), where fast solvers are constructed based on data from high-fidelity simulations, has become another attractive alternative for the optimization and quantification of uncertainties in combustion systems. In particular, machine learning-based surrogate models, which are known for their ability to learn highly complex and non-linear mappings in physical or chemical processes, have been widely applied to build reduced-order models for Direct Numerical Simulations (DNS) of fluid turbulence in recent years \cite{zhou2022machine, ihme2022combustion}.
Many attempts have been made to use machine learning-based models as surrogate models for chemical reactions in combustion processes, which, when coupled with high-fidelity CFD simulations, require significant computational resources that are challenging to satisfy. To predict the evolution of reactive species and physical quantities as a function of species composition, \citeauthor{blasco1998modelling} trained an artificial neural network (ANN) on simulation data generated from a reduced-order model of the methane/air combustion system. The performance of ANNs demonstrated encouraging savings in computational time and memory \cite{blasco1998modelling}. \citeauthor{sharma2020deep} predicted the evolution of reacting species in a hydrogen-oxidation reaction with an ANN and integrated it into a real-world reacting flow CFD \cite{sharma2020deep}. \citeauthor{wan2020chemistry} used deep neural networks, which take species mass fractions and temperature as input, to predict the chemical source term from DNS of a turbulent non-premixed syngas oxy-flame interacting with a cooled wall \cite{wan2020chemistry}. Their results show that ANNs coupled with numerical simulations can achieve a computational speedup of 25 compared to DNS with a detailed chemistry mechanism \cite{wan2020chemistry}.
Furthermore, many works have recently started applying machine learning-based models to high-dimensional data from either real experiments or full-scale high-fidelity simulations. \citeauthor{an2020deep} replaced a conventional numerical solver with a deep learning-based solver consisting of convolutional neural networks to predict flow fields from hydrogen-fueled turbulent combustion simulations \cite{an2020deep}. \citeauthor{emami2012laminar} replaced numerical integration with artificial neural networks (ANN) in a laminar flamelet model to predict mean reactive scalars in a non-premixed methane/hydrogen/nitrogen flame. Neural networks were shown to provide accurate predictions of the mean species concentrations and temperature while having a computational cost approximately one order of magnitude lower than traditional integration schemes \cite{emami2012laminar}. However, because purely data-driven models impose no physical constraints, their solutions need not satisfy conservation laws and can therefore be non-physical. Furthermore, in those previous studies, deep learning models are trained on paired samples containing the current and next states. Therefore, when coupled with numerical integration or other iterative schemes, these deep learning models often diverge from the true solutions due to the accumulation of errors in the early stages of the algorithm.
\subsection{Neural Ordinary Differential Equations}
Deep neural networks have been known for their capability to learn highly complex and non-linear functions with a relatively low computational cost. With sufficient training data and a suitable model structure, neural networks can approximate arbitrary functions with arbitrary precision \cite{hornik1989multilayer}. For dynamical systems, a novel class of deep learning models known as neural ordinary differential equations has become popular recently, where deep neural networks are used to predict the derivative of quantities of interest given the current state variables \cite{chen2018neural}. More precisely, a neural ODE is a parameterized ODE system of the following form:
\begin{equation}
\frac{du}{dt} = F(u(t), t;\theta),
\label{eq:neuralode}
\end{equation}
where $u$ is the state variable of the system, $t$ is time, $F$ is a deep neural network, and $\theta$ are model parameters such as weights and biases. Gradients of the loss function with respect to model parameters are computed through automatic differentiation or the adjoint sensitivity method, which allows the neural ODE to be trained with high computational efficiency.
Instead of computing the loss function based on paired samples of consecutive states, neural ODEs calculate the loss function directly from the solutions of the ODE, thus avoiding the error accumulation that occurs when numerical integration is performed separately. Previous studies in combustion systems have attempted to use neural ODEs to learn chemical reactions from homogeneous reactors \cite{owoyele2022chemnode, dikeman2022stiffness}. \citeauthor{owoyele2022chemnode} proposed a deep learning framework based on neural ODEs to learn the homogeneous auto-ignition of a hydrogen-air mixture with varying initial temperature and species composition \cite{owoyele2022chemnode}. \citeauthor{dikeman2022stiffness} combined an auto-encoder neural network with a neural ODE to compute the solution in the latent space instead of the original stiff system of chemical kinetics \cite{dikeman2022stiffness}. However, since a neural ODE learns only a single, fixed ODE from a given training data set \cite{dupont2019augmented, chalvidal2020neural}, only initial conditions are varied during the neural ODE training phase in the aforementioned studies. For instance, when a neural ODE learns the following simple equation:
\begin{equation}
\frac{dy}{dx} = \alpha x
\end{equation}
with $y(0) = \beta$, only the value of $\beta$ is varied in the training data. When the value of $\alpha$ changes, a new neural ODE needs to be trained on a different data set. To model a wide range of ODE systems simultaneously, \citeauthor{lee2021parameterized} extended the original NODE framework and introduced parameterized neural ordinary differential equations (PNODE) that take additional parameters as input to the deep neural networks \cite{lee2021parameterized}. Parameterized neural ordinary differential equations then describe the following system:
\begin{equation}
\frac{du}{dt} = F(u(t,\eta), t, \eta;\theta),
\label{eq:pnode}
\end{equation}
where $\eta$ represents problem-dependent parameters for the dynamical systems. This allows us to learn a parameterized family of ODEs instead of a single ODE. Using a similar optimization procedure (\textit{i.e.}, automatic differentiation) and the same set of model parameters, PNODE is capable of predicting solutions of dynamical systems under different conditions depending on the input parameter $\eta$.
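As a concrete, toy illustration of this idea, the following PyTorch sketch defines a vector field that takes $\eta$ as an extra input and integrates it with a fixed-step RK4 scheme (the names and the use of PyTorch are ours for illustration only; the models in this work are implemented with a different stack, described later):
\begin{verbatim}
import torch
import torch.nn as nn

class PNODEFunc(nn.Module):
    """Right-hand side F(u, t, eta; theta) of a parameterized neural ODE."""
    def __init__(self, state_dim, param_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1 + param_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, u, t, eta):
        # eta is an input, so one set of weights represents a family of ODEs
        t_feat = torch.full_like(u[..., :1], float(t))
        return self.net(torch.cat([u, t_feat, eta], dim=-1))

def rk4_integrate(func, u0, ts, eta):
    """Fixed-step fourth-order Runge-Kutta through the time grid ts."""
    us = [u0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h, u = t1 - t0, us[-1]
        k1 = func(u, t0, eta)
        k2 = func(u + 0.5 * h * k1, t0 + 0.5 * h, eta)
        k3 = func(u + 0.5 * h * k2, t0 + 0.5 * h, eta)
        k4 = func(u + h * k3, t1, eta)
        us.append(u + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4))
    return torch.stack(us)
\end{verbatim}
Training fits one set of weights $\theta$ to trajectories generated at many values of $\eta$ simultaneously, which is what distinguishes a PNODE from a standard neural ODE.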
In previous data-driven approaches based on neural ODEs, the source function $F$ in \cref{eq:neuralode} is a deep neural network $F_{\theta}$ with weights $\theta$ that only learns to predict the time derivative of the state variables from input and output data. To learn combustion processes and chemical reactions with a wider range of parameters, large amounts of combustion data are often required for such purely data-driven neural ODEs, which can be challenging if training data are obtained from expensive high-fidelity simulations or physical experiments. Therefore, previous applications of neural ODEs in combustion studies have focused on simulations of simplified chemical kinetics in homogeneous or 0D reactors, with data sets that vary only two or three parameters, such as mass inflow, initial temperature, and species composition.
In this work, we consider the use of neural ODE for combustion studies from a different perspective. Instead of learning the source function directly with deep neural networks in NODE, we consider using existing 0D models as source functions and learn important parameters such as the heat source and activation energy in 0D models with neural networks. More precisely, to incorporate physical knowledge into our PNODE, we construct $F$ based on a 0D flow model $F_{0D}$ and embed deep neural networks into our 0D flow model to model terms such as the heat source and activation energy. Let $\dot{Q}(t, \eta; \theta)$ be the heat source function, $A_f(t,\eta; \theta)$ be the pre-exponential factor, and $E(\eta; \theta)$ be the activation energy function. These are represented by deep neural networks and are used as inputs to the 0D flow model $F$. Then we have
\[\frac{du}{dt} = F\left(u(t,\eta), t, \eta;\theta\right) = F_{0D}\left(u(t, \eta), \dot{Q}(t, \eta; \theta), A_f(t, \eta; \theta), E(\eta; \theta)\right)\]
The prediction $\hat{u}(t_i)$ can be obtained by solving the ODE system using a numerical solver:
\[\hat{u} = (\hat{u}(t_0), \dots, \hat{u}(t_n)) = \textrm{ODESolve}(u(t_0), F_{0D}, t_1,...,t_n)\]
To optimize the parameters in the neural networks that model the heat source, pre-exponential factor, and activation energy, we minimize the root mean squared loss between observations and predictions. This is accomplished using \emph{reverse mode automatic differentiation} applied to the numerical ODE solver. The exact formulation of each component in $F_{0D}$ is described in \cref{sec:method}. To the best of our knowledge, our work is the first application to consider the use of 0D models with neural ODEs for combustion systems. With additional information from the 0D model, our physics-based PNODE differs from previous neural ODE approaches in that it learns parameters from a higher dimensional space and directly from high-fidelity simulations.
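Schematically, the wiring described above can be sketched as follows (a forward-evaluation sketch with hypothetical names; SciPy's \texttt{solve\_ivp} stands in for the ODESolve step, and \texttt{f0d} denotes the 0D model right-hand side described in the Methodology section):
\begin{verbatim}
from scipy.integrate import solve_ivp

def make_rhs(eta, q_net, af_net, e_net, f0d):
    """du/dt = F_0D(u, Qdot(t, eta), A_f(t, eta), E(eta)) for a fixed eta."""
    E = e_net(eta)                    # activation energy depends on eta only
    def rhs(t, u):
        return f0d(u, q_net(t, eta), af_net(t, eta), E)
    return rhs

def predict_trajectory(u0, t_obs, eta, q_net, af_net, e_net, f0d):
    """ODESolve: integrate the 0D model over the observation times."""
    sol = solve_ivp(make_rhs(eta, q_net, af_net, e_net, f0d),
                    (t_obs[0], t_obs[-1]), u0, t_eval=t_obs)
    return sol.y.T                    # rows: times, columns: state variables
\end{verbatim}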
\section{Our contribution}
In this work, we propose a novel physics-based PNODE framework that directly learns from high-fidelity CFD instead of 0D simulations with simple chemical kinetics. We parameterize our neural ODEs with additional parameters to account for CFD simulations with different experimental conditions such as mass inflow, initial species composition, and geometry of the combustor chamber. Physical knowledge is incorporated into our neural ODEs, as we only use deep neural networks to predict the heat source term and chemical kinetics in our 0D flow model, instead of the entire source term $F$ in \cref{eq:pnode}. In particular, our physics-based PNODE framework allows us to build a reduced-order surrogate model for laser ignition in a rocket combustor that
\begin{itemize}
\item provides solutions satisfying \emph{physical constraints},
\item learns combustion with \emph{high-dimensional} input parameters,
\item needs \emph{less training samples} than purely data-driven approaches,
\item matches the accuracy of \emph{high-fidelity} simulations.
\end{itemize}
The general workflow is illustrated in \cref{fig:motivation}. We validate our approach on high-fidelity simulation data describing laser-induced ignition of a prototype rocket combustor. We compare the performance of our PNODE-based 0D model with conventional interpolation methods such as kernel ridge regression and neural networks.
The remainder of this paper is organized as follows. \Cref{sec:method} describes our overall methodology for physics-based PNODEs. \Cref{sec:0dmodel} describes the overall framework of our PNODE-based 0D model. \Cref{sec:neuralnetworks} reviews the structure of deep neural networks, the choice of hyper-parameters, and optimization pipelines. \Cref{sec:experiments} shows the numerical benchmarks of the PNODE-based models on high-fidelity simulations of laser-induced ignition of a rocket combustor.
\begin{figure}
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node (laser_parameter) [values, align=center] {Parameters of \\ Laser Ignition};
\node (experiments) [simulation, right of=laser_parameter, xshift = 2.5cm, align=center] {Experiments or High-\\fidelity Simulations};
\node (pnode) [simulation, below of=experiments] {Physics-based\\ PNODEs};
\node (output) [values, right of=experiments, xshift = 2.5cm, align=center] {Average Temperature,\\ Pressure, and Mass\\ Fractions of Species};
\draw [arrow] (laser_parameter) -- (experiments);
\draw [arrow] (laser_parameter) |- (pnode);
\draw [arrow] (pnode) -| node[anchor = south east, align = center, xshift = -0.4cm]{Lower Cost} (output);
\draw [arrow] (experiments) -- (output);
\draw [arrow] (output) -- (9.0, 2.0) -- node[anchor=south]{Improved Design} (0.0, 2.0) -- (laser_parameter);
\end{tikzpicture}
\end{center}
\caption{Motivation for constructing a physics-based PNODE model. Time traces of volume-averaged temperature, pressure, and mass fractions of species provide crucial information that indicates ignition success or failure. Instead of computing volume-averaged quantities from high-fidelity simulations or physical experiments, we construct a PNODE-based 0D flow model that provides volume-averaged time traces directly with a lower computational cost.}
\label{fig:motivation}
\end{figure}
\section{Methodology} \label{sec:method}
\subsection{0D flow model} \label{sec:0dmodel}
Given a parameter $\eta$, we wish to use a 0D model to describe the ignition process of the target combustion system by predicting the evolution of average quantities such as the temperature, pressure, and mass fractions of species. We formulate our 0D flow model based on the Continuously Stirred Tanked Reactor (CSTR) model with fixed volume from the Cantera library \cite{cantera}. CSTR is a simplified reactor with several inlets, several outlets, and a constant volume, and the reacting flow in the chamber is considered spatially homogeneous.
\begin{figure}
\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (229,35.67) -- (229,80) ;
\draw (170.48,80) -- (207.5,80) -- (229,80) ;
\draw (229,188.95) -- (229,237.86) ;
\draw (229,99) -- (169.75,99) ;
\draw (229,36) -- (460,36) ;
\draw (460,36.09) -- (460,189) ;
\draw (460,237) -- (229,237) ;
\draw (460,210) -- (460,237) ;
\draw (460,189) -- (519.43,189) ;
\draw (460,210.13) -- (519.43,210) ;
\draw (229,189) -- (229,99) ;
\draw (290,20.25) -- (400.5,20.25) ;
\draw (345.25,20.25) -- (345.25,170) ;
\draw (320.5,170) -- (370.5,170) ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (288,80.5) .. controls (288,66.69) and (310.39,55.5) .. (338,55.5) -- (338,70.5) .. controls (310.39,70.5) and (288,81.69) .. (288,95.5) ;\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (288,95.5) .. controls (288,105.75) and (300.34,114.56) .. (318,118.42) -- (318,123.42) -- (338,113) -- (318,98.42) -- (318,103.42) .. controls (300.34,99.56) and (288,90.75) .. (288,80.5)(288,95.5) -- (288,80.5) ;
\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (402.85,95.07) .. controls (403.36,108.87) and (381.4,120.88) .. (353.81,121.9) -- (353.25,106.91) .. controls (380.85,105.89) and (402.81,93.88) .. (402.3,80.08) ;\draw [fill={rgb, 255:red, 255; green, 255; blue, 255 } ,fill opacity=1 ] (402.3,80.08) .. controls (401.92,69.84) and (389.26,61.49) .. (371.48,58.28) -- (371.29,53.29) -- (351.69,64.44) -- (372.21,78.27) -- (372.03,73.27) .. controls (389.82,76.48) and (402.47,84.83) .. (402.85,95.07)(402.3,80.08) -- (402.85,95.07) ;
\draw (250.5,170.86) .. controls (250.44,163.61) and (266.06,157.6) .. (285.39,157.44) .. controls (304.72,157.27) and (320.44,163) .. (320.5,170.25) .. controls (320.56,177.5) and (304.95,183.51) .. (285.62,183.68) .. controls (266.29,183.84) and (250.57,178.11) .. (250.5,170.86) -- cycle ;
\draw (370.5,170.25) .. controls (370.5,163) and (386.17,157.13) .. (405.5,157.13) .. controls (424.83,157.13) and (440.5,163) .. (440.5,170.25) .. controls (440.5,177.5) and (424.83,183.37) .. (405.5,183.37) .. controls (386.17,183.37) and (370.5,177.5) .. (370.5,170.25) -- cycle ;
\draw[arrow] (109.43,88) -- node[anchor=south]{$\dot{m}_{\text{in}}$} (156,88) ;
\draw[arrow] (534.43,200) -- node[anchor=south]{$\dot{m}_{\text{out}}$} (581,200);
\draw (290,217.31) node [align=left] {\begin{minipage}[lt]{68pt}\setlength\topsep{0pt}
$P,V,T,Y_i$
\end{minipage}};
\end{tikzpicture}
\caption{Illustration of a Continuously Stirred Tanked Reactor (CSTR) with one inlet and one outlet. $P$ is volume-averaged pressure, $V$ is volume, $T$ is volume-averaged temperature, and $Y_i$ is the mass fraction of species $i$ in the reactor. Gases inside the reactor are perfectly mixed. This is a simplified model for high-fidelity simulations, where fuels are injected into the chamber through the inlet at rate $\dot{m}_{\text{in}}$ and gases inside the reactor leave through the outlet at rate $\dot{m}_{\text{out}}$.}
\end{figure}
\subsubsection{Mass conservation}
In a CSTR reactor, the total mass of the system is conserved. Therefore, we have
\begin{equation}
\frac{dm}{dt} = \sum_{in} \dot{m}_{in} - \sum_{out} \dot{m}_{out},
\label{eq:mass_conservation}
\end{equation}
where $m$ is the total mass of all species in the reactor, $\dot{m}_{in}$ is the mass inflow of an inlet and $\dot{m}_{out}$ is the mass outflow of an outlet.
\subsubsection{Species conservation}
In reacting flows, species are created and destroyed over time by chemical reactions. The rate of creation of species $k$ is given as:
\begin{equation}
\dot{m}_{k,\text{gen}} = V\dot{\omega}_{k}W_{k},
\label{eq:creationrate}
\end{equation}
where $W_k$ is the molecular weight of species $k$ and $\dot{\omega}_k$ is the net molar production rate of species $k$ per unit volume. The rate of change of a species' mass is given by
\begin{equation}
\frac{d(mY_k)}{dt} = \sum_{in} \dot{m}_{in}Y_{k, \text{in}} - \sum_\text{out} \dot{m}_\text{out} Y_{k,\text{out}} + \dot{m}_{k,\text{gen}},
\label{eq:speciesmass}
\end{equation}
where $Y_k$ is the mass fraction of species $k$ in the reactor, $Y_{k, in}$ is the mass fraction of species $k$ in the inflow, and $Y_{k, out}$ is the mass fraction of species $k$ in the outflow. Combining \cref{eq:creationrate,eq:speciesmass}, we obtain
\begin{equation}
m\frac{dY_k}{dt} = \sum_{in} \dot{m}_{in}(Y_{k,in}-Y_k) - \sum_{out} \dot{m}_{out}(Y_k - Y_{k, out}) + V \dot{\omega}_{k} W_k.
\label{eq:species}
\end{equation}
\subsubsection{Chemical reactions}
Suppose that for reaction $j$ we have
\begin{equation}
\sum_{k=1}^N \nu'_{kj} \mathcal{M}_k \leftrightharpoons \sum_{k=1}^N \nu''_{kj}\mathcal{M}_k,
\label{eq:reactions}
\end{equation}
where $\mathcal{M}_k$ is a symbol for species $k$, $\nu'_{kj}$ and $\nu''_{kj}$ are the stoichiometric coefficients of species $k$ in reaction $j$. Let $\nu_{kj} = \nu''_{kj} - \nu'_{kj}$. Then the source term $\dot{\omega}_{k}$ can be written as the sum of the source terms $\dot{\omega}_{k,j}$ from reaction $j$:
\begin{equation}
\dot{\omega}_k = \sum_{j} \dot{\omega}_{k, j}
= \sum_{j=1}^M \nu_{kj}\mathcal{Q}_j,
\label{eq:omegasum}
\end{equation}
where
\begin{equation}
\mathcal{Q}_j = K_{fj}\prod_{k=1}^N [X_k]^{\nu'_{kj}}-K_{rj}\prod_{k=1}^{N}[X_k]^{\nu''_{kj}},
\label{eq:qj}
\end{equation}
and
\begin{equation}
K_{fj} = A_{fj}T^{\beta_j}\exp{\left(-\frac{E_{j}}{RT}\right)}, K_{rj} = \frac{K_{fj}}{\left(\frac{p_a}{RT}\right)^{\sum_{k=1}^N\nu_{kj}}\exp{\left( \frac{\Delta S^0_j}{R} - \frac{\Delta H^0_j}{RT} \right)}},
\label{eq:forwardconstant}
\end{equation}
where $\mathcal{Q}_j$ is the rate of progress of reaction $j$, $X_k$ is the molar concentration of species $k$, $K_{fj}$ and $K_{rj}$ are the forward and reverse rate constants of reaction $j$, and $A_{fj}$, $\beta_j$ and $E_j$ are the pre-exponential factor, temperature exponent and activation energy of reaction $j$, respectively. $\Delta S^0_j$ and $\Delta H^0_j$ are the standard entropy and enthalpy changes of reaction $j$, respectively \cite{poinsot2005theoretical}.
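The rate expressions above translate directly into a few lines of code; the following helper (illustrative only, with unit consistency left to the caller; in practice these quantities come from the chemistry library) evaluates $K_{fj}$, $K_{rj}$, and $\mathcal{Q}_j$ for one reversible reaction and assembles the net production rates:
\begin{verbatim}
import numpy as np

R = 8.314  # gas constant (any consistent unit system works)

def rate_of_progress(T, X, nu_f, nu_r, A_f, beta, E, dS0, dH0, p_a=101325.0):
    """Rate of progress Q_j of one reversible reaction.

    T          : temperature
    X          : array of molar concentrations [X_k]
    nu_f, nu_r : forward / reverse stoichiometric coefficients nu'_kj, nu''_kj
    A_f, beta, E : Arrhenius parameters of the forward rate constant
    dS0, dH0     : standard entropy and enthalpy changes of the reaction
    """
    k_f = A_f * T**beta * np.exp(-E / (R * T))
    K_eq = (p_a / (R * T)) ** np.sum(nu_r - nu_f) \
           * np.exp(dS0 / R - dH0 / (R * T))
    k_r = k_f / K_eq
    return k_f * np.prod(X**nu_f) - k_r * np.prod(X**nu_r)

def net_production_rates(Q, nu_f, nu_r):
    """Molar production rate of each species, sum_j (nu''_kj - nu'_kj) Q_j.

    Q is the vector of rates of progress; nu_f, nu_r have shape
    (n_reactions, n_species).
    """
    return (nu_r - nu_f).T @ np.asarray(Q)
\end{verbatim}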
\subsubsection{Energy conservation}
For a CSTR reactor with fixed volume, the internal energy can be expressed by writing the first law of thermodynamics for an open system:
\begin{equation}
\frac{dU}{dt} = - \dot{Q} + \sum_{in} \dot{m}_{in}h_{in} - \sum_{out} h\dot{m}_{out},
\label{eq:thermo}
\end{equation}
where $\dot{Q}$ is the heat source, $h$ is the enthalpy per unit mass of the homogeneous gas in the reactor and $h_{in}$ is the enthalpy of the mass inflow. We can describe the evolution of the average temperature by expressing the internal energy $U$ in terms of the species mass fractions $Y_k$ and temperature $T$:
\begin{equation}
U = m\sum_{k}Y_ku_k(T).
\label{eq:enthalpy}
\end{equation}
So that
\begin{equation}
\frac{dU}{dt} = u\frac{dm}{dt} + mc_v\frac{dT}{dt} + m \sum_k u_k \frac{dY_k}{dt}.
\label{eq:dudt}
\end{equation}
From \cref{eq:dudt,eq:thermo} we have
\begin{align*}
mc_v \frac{dT}{dt} & = \frac{dU}{dt} - u \frac{dm}{dt} - m\sum_{k}u_k \frac{dY_k}{dt}, \\
& = -\dot{Q} + \sum_{in} \dot{m}_{in}h_{in} - \sum_{out} h\dot{m}_{out} - u \frac{dm}{dt} - m\sum_{k}u_k \frac{dY_k}{dt}.
\end{align*}
Next, using \cref{eq:mass_conservation,eq:species} we have
\[
mc_v \frac{dT}{dt} = -\dot{Q} + \sum_{in} \dot{m}_{in}\left(h_{in} - \sum_{k} u_k Y_{k,in}\right) - \sum_{out} \dot{m}_{out}\left(h - \sum_{k} u_k Y_k - \sum_{k} u_k \left(Y_k - Y_{k, out}\right)\right) - \sum_k V\dot{\omega}_{k}W_ku_k
\]
Finally, using the identity $hm = U + pV$ and \cref{eq:enthalpy}, we have
\begin{equation}
mc_v \frac{dT}{dt} = -\dot{Q} + \sum_{in} \dot{m}_{in}\left(h_{in} - \sum_{k} u_k Y_{k,in}\right) - \sum_{out} \dot{m}_{out}\left(\frac{pV}{m} - \sum_{k} u_k \left(Y_k - Y_{k, out}\right)\right) - \sum_k V\dot{\omega}_{k}W_ku_k.
\label{eq:energy}
\end{equation}
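Putting the pieces together, the right-hand side of the 0D model can be sketched as follows (a schematic with a single inlet and outlet and the well-mixed assumption $Y_{k,out} = Y_k$, under which the $(Y_k - Y_{k,out})$ terms vanish; all names are illustrative, and every quantity other than the state is assumed to be evaluated at the current state by the caller):
\begin{verbatim}
import numpy as np

def cstr_rhs(state, mdot_in, Y_in, h_in, mdot_out, Qdot, omega, W, u, cv, p, V):
    """Right-hand side of the 0D (CSTR) flow model.

    state = (m, Y, T): total mass, species mass fractions, temperature.
    omega : molar net production rates of the species (from the kinetics)
    W, u  : molecular weights and specific internal energies u_k(T)
    cv    : mixture specific heat at constant volume
    Qdot  : heat source term (sign convention as in the text)
    """
    m, Y, T = state

    # Mass conservation (single inlet / single outlet)
    dm_dt = mdot_in - mdot_out

    # Species conservation; outflow composition equals reactor composition
    dY_dt = (mdot_in * (Y_in - Y) + V * omega * W) / m

    # Energy conservation, written for the temperature
    dT_dt = (-Qdot
             + mdot_in * (h_in - np.dot(u, Y_in))
             - mdot_out * (p * V / m)
             - V * np.dot(omega * W, u)) / (m * cv)

    return dm_dt, dY_dt, dT_dt
\end{verbatim}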
\subsection{Physics-based parameterized neural ordinary differential equations} \label{sec:neuralnetworks}
Using information from the parameters $\eta = (\eta_1, ..., \eta_p)$ of the laser ignition as input, we compute the heat source $\dot{Q}$ in \cref{eq:energy}, the pre-exponential factor $A_{f,j}$, and the activation energy $E_j$ in \cref{eq:forwardconstant} with fully connected feed-forward neural networks $F_{\theta}(t, \eta) = (\dot{Q}, E_{j},A_{fj})$ as shown in \cref{fig:neuralnetwork}, where $\theta$ are model parameters.
\begin{figure}
\begin{center}
\begin{tikzpicture}[x=2.2cm,y=1.4cm]
\message{^^JNeural network, shifted}
\readlist\Nnod{4,5,5,5,4}
\readlist\Nstr{p,m,m,m,k}
\readlist\Cstr{\strut \eta,a^{(\prev)},a^{(\prev)},a^{(\prev)},Y}
\readlist\Zstr{\strut T, P}
\def\yshift{0.5}
\message{^^J Layer}
\foreachitem \N \in \Nnod{
\def\lay{\Ncnt}
\pgfmathsetmacro\prev{int(\Ncnt-1)}
\message{\lay,}
\foreach \i [evaluate={\c=int(\i==\N); \y=\N/2-\i-\c*\yshift;
\index=(\i<\N?int(\i):"\Nstr[\lay]");
\x=\lay; \n=\nstyle; \s=int(\i-1); \z=int(\lay); \u=(\i<\N?int(\i-2):"\Nstr[\lay]")}] in {1,...,\N}{
\ifnum\lay>1
\ifnum\lay=5
\ifnum\i<3
\node[node \n] (N\lay-\i) at (\x,\y) {$\Zstr[\i]$};
[\else
\node[node \n] (N\lay-\i) at (\x,\y) {$\Cstr[\lay]_{\u}$};
]
\fi
[\else
\node[node \n] (N\lay-\i) at (\x,\y) {$\Cstr[\lay]_{\index}$};
]
\fi
[\else
\ifnum\i=1
\node[node \n] (N\lay-\i) at (\x,\y) {$t$};
[\else
\node[node \n] (N\lay-\i) at (\x,\y) {$\Cstr[\z]_{\index}$};
]
\fi
]
\fi
\ifnum\lay>1
\foreach \j in {1,...,\Nnod[\prev]}{
\draw[connect,white,line width=1.2] (N\prev-\j) -- (N\lay-\i);
\draw[connect] (N\prev-\j) -- (N\lay-\i);
}
\fi
}
\path (N\lay-\N) --++ (0,1+\yshift) node[midway,scale=1.5] {$\vdots$};
}
\node[above=5,align=center,mygreen!60!black] at (N1-1.90) {input\\[-0.2em]layer};
\node[above=2,align=center,myblue!60!black] at (N3-1.90) {hidden layers};
\node[above=10,align=center,myred!60!black] at (N\Nnodlen-1.90) {output\\[-0.2em]layer};
\end{tikzpicture}
\end{center}
\caption{Structure of a fully connected neural network $F_{\theta}$ with three hidden layers. Here $a_{j}^{(i)}$ refers to the $j$th neuron in the $i$th hidden layer of our deep neural network. In each forward pass, our deep neural network takes laser parameters $\eta_1,...,\eta_p$, time $t$ as input and predicts the volume-averaged temperature $T$, pressure $P$, and mass fractions $Y_i$ at time $t$ as output.}
\label{fig:neuralnetwork}
\end{figure}
To ensure the training stability of our physics-based PNODE, we also normalize the total heat deposited by our heat source function $\dot{Q}$ and multiply it by another neural network $C_{\xi}$ that learns the total amount of heat injected into the system separately. That is, we define
\begin{equation}
\tilde{\dot{Q}}(t, \eta) = C_{\xi}(\eta)
\frac{\dot{Q}(t,\eta)}{\int_{t_\text{start}}^{t_\text{end}}
\dot{Q}(t,\eta) dt},
\label{eq:normalization}
\end{equation}
and we replace \cref{eq:energy} with the following equation:
\begin{equation}
mc_v \frac{dT}{dt} = -\tilde{\dot{Q}}(t, \eta)
+ \sum_{in} \dot{m}_{in}
\Big(h_{in} - \sum_{k} u_k Y_{k,in}\Big)
- \sum_{out} \dot{m}_{out}\Big(\frac{pV}{m} - \sum_{k} u_k \big(Y_k - Y_{k, out}\big)\Big)
- \sum_k V\dot{\omega}_{k}W_ku_k,
\label{eq:parametrized_energy}
\end{equation}
where $C_{\xi}(\eta)$ is another deep neural network with model parameter $\xi$.
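The normalization in \cref{eq:normalization} amounts to rescaling the network output so that its time integral is controlled by the separate scalar network; a small sketch (hypothetical names, trapezoidal quadrature on a time grid) is:
\begin{verbatim}
import numpy as np

def normalized_heat_source(q_net, c_net, eta, t_grid):
    """Rescaled heat source of the normalization equation.

    q_net(t, eta) -> raw heat-source profile,
    c_net(eta)    -> total deposited heat learned separately.
    """
    q_raw = np.array([q_net(t, eta) for t in t_grid])
    total = np.trapz(q_raw, t_grid)   # integral of Qdot over [t_start, t_end]
    return c_net(eta) * q_raw / total
\end{verbatim}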
We recall the advantage of this neural-network parameterization of the 0D flow model. While only scalar parameters are calibrated in conventional 0D models, our physics-based PNODE framework uses deep neural networks that are optimized automatically against an arbitrary loss function. Additionally, deep neural networks have been shown to be a powerful tool for learning complex and nonlinear chemical reactions in combustion studies\cite{christo1996integrated, christo1996artificial}. The temperature $T(i,j)$ and the mass fraction of oxygen $Y_{\ce{O2}}(i,j)$ over time are used to compute the weighted squared error that serves as our loss function. That is, our loss function is defined as
\begin{equation}
L = \sum_{i = 1}^{n} \sum_{j = 1}^{m} (T(i, j) - T_\text{true}(i,j))^2 + \alpha (Y_{\ce{O2}}(i,j) - Y_{\ce{O2}, \text{true}}(i, j))^2,
\label{eq:loss}
\end{equation}
where $\alpha$ is a hyper-parameter that sets the relative weight of the temperature and oxygen mass fraction errors, and $T(i,j)$ and $Y_{\ce{O2}}(i,j)$ are the predictions of the temperature and mass fraction of oxygen, respectively, at observation time $t_j$ for sample $i$. Note that since only a one-step chemical mechanism (described in \cref{sec:setup}) is used in our 0D flow model and in the high-fidelity simulations, it is sufficient to include the mass fraction of oxygen among the species in the loss function.
The gradient of the loss function $L$ with respect to the weights $\theta$ is computed via automatic differentiation using ADCME \cite{xu2020adcme}. In our physics-based PNODE, we use a fourth-order Runge-Kutta method to perform the numerical integration. We use L-BFGS-B as the optimizer for the weights of the neural networks in ADCME. The overall optimization pipeline is illustrated in \cref{fig:optimization}.
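The training loop can be sketched with the following self-contained toy (a PyTorch stand-in for the ADCME/L-BFGS-B pipeline actually used; a single network predicts the heat source inside a stripped-down energy equation, chemistry and flow terms are omitted, and all names and numbers are illustrative):
\begin{verbatim}
import torch
import torch.nn as nn

m_cv = 1.0
q_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def rhs(T, t, eta):
    q = q_net(torch.stack([torch.as_tensor(t, dtype=torch.float32), eta]))
    return -q.squeeze() / m_cv      # sign convention as in the text

def rk4(T0, ts, eta):
    Ts = [T0]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h, T = t1 - t0, Ts[-1]
        k1 = rhs(T, t0, eta); k2 = rhs(T + 0.5*h*k1, t0 + 0.5*h, eta)
        k3 = rhs(T + 0.5*h*k2, t0 + 0.5*h, eta); k4 = rhs(T + h*k3, t1, eta)
        Ts.append(T + h/6.0*(k1 + 2*k2 + 2*k3 + k4))
    return torch.stack(Ts)

# Synthetic "observations": two laser parameters eta with different
# temperature rises, standing in for volume-averaged CFD time traces.
ts = [0.01 * i for i in range(21)]
data = [(torch.tensor(0.5), torch.linspace(300., 320., 21)),
        (torch.tensor(2.0), torch.linspace(300., 380., 21))]

opt = torch.optim.LBFGS(q_net.parameters(), max_iter=200,
                        line_search_fn="strong_wolfe")

def closure():
    opt.zero_grad()
    loss = sum(((rk4(torch.tensor(300.), ts, eta) - obs)**2).sum()
               for eta, obs in data)
    loss.backward()   # reverse-mode autodiff through the RK4 integration
    return loss

opt.step(closure)
\end{verbatim}
The full pipeline differs only in scale: the networks also output the pre-exponential factor and activation energy, the right-hand side is the complete 0D flow model, and the loss is the weighted temperature/oxygen error of \cref{eq:loss} summed over all training samples.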
\begin{figure}
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node (laser_parameter) [values, align=center] {Parameters of \\ Laser Ignition};
\node (in1) [model, below of=laser_parameter, align=center] {Deep\\ Neural Networks};
\node (pnode) [values, right of=in1, xshift = 2.5cm, yshift = -1cm] {Chemical Kinetics};
\node (experiments) [values, right of=laser_parameter, xshift = 2.5cm, yshift = -1cm, align=center] {Heat Source};
\node (output) [model, right of=experiments, xshift = 2.5cm, yshift = -1cm, align=center] {0D Flow Model};
\node (condition) [values, above of=output, align=center] {Initial and\\ Boundary\\ Conditions} ;
\node (prediction) [values, below of=output] {Prediction} ;
\node (observation) [values, below of=prediction] {Observation} ;
\node (loss_function) [loss, left of=observation, xshift = -2.5cm, yshift = 1cm] {Loss Function} ;
\draw [arrow] (in1) -- (experiments);
\draw [arrow] (laser_parameter) -- (in1);
\draw [arrow] (in1) -- (pnode);
\draw [arrow] (pnode) -- (output);
\draw [arrow] (experiments) -- (output);
\draw [arrow] (condition) -- (output);
\draw [arrow] (output) -- node[anchor = west]{Numerical Integration} (prediction);
\draw [arrow] (observation) -- (loss_function);
\draw [arrow] (prediction) -- (loss_function);
\draw [arrow] (loss_function) -| node[xshift = 1.5cm, yshift = 0.5cm, align = center] {Automatic \\ Differentiation}(in1);
\end{tikzpicture}
\end{center}
\caption{Optimization pipeline of the physics-based PNODE framework. Parameters of laser ignition are passed into deep neural networks to predict the heat source and chemical kinetics for our 0D flow model. Predictions of volume-average temperature, pressure, and mass fractions of species over time are obtained through numerical integration. We update the weights in our deep neural networks by leveraging \emph{reverse mode automatic differentiation}.}
\label{fig:optimization}
\end{figure}
\section{Numerical experiments} \label{sec:experiments}
We validate our approach on data generated from high-fidelity simulations of a planar jet diffusion flame based on the Hypersonics Task-Based Research (HTR) solver \cite{di2020htr}. In the rocket combustor, a gaseous O2 jet, along with a CH4 coflow, is injected into a chamber filled with gaseous CH4. The jet is ignited by intense, short-duration heating at a specific location. \Cref{fig:combustor} illustrates the setup of the rocket combustor in our 2D high-fidelity simulations. In this work, we consider six parameters in total for laser-induced ignition, which are described in \cref{tab:parameters}. We use a one-step chemical mechanism, which has 5 species and 1 global reaction, as the chemical scheme in our 0D flow model \cite{franzelli2012large}. To model ignition in the high-fidelity simulations, we adjusted the initial values of the Arrhenius reaction parameters of our 0D flow model before training. In particular, before optimization we set the pre-exponential factors to 121.75 and the activation energies to $1.68 \times 10^7$. Note that to fit data from the high-fidelity 2D simulations, we set both the pre-exponential factor and the activation energy to values smaller than the defaults. This is because the volume-averaged temperature during ignition is significantly lower than that produced by the default 0D reaction parameters, and it increases at a much slower rate after successful ignition. We also set $\alpha$ in \cref{eq:loss} to $1 \times 10^7$.
\subsection{Simulation setup} \label{sec:setup}
To validate our approach, we used a direct numerical simulation of a two-dimensional planar jet diffusion flame as a high-fidelity reference. Although simplified compared to full-scale rocket combustors, for the present objectives this configuration bears a sufficient physical resemblance to such a system while requiring relatively low computational resources, and it is taken as the high-fidelity model compared to the 0D model (\cref{sec:0dmodel}). A schematic of the simulation domain is shown in \cref{fig:combustor}. The combustion chamber initially contains 100\% methane at 350K and 50662.5Pa, and pure oxygen is injected at 350K with Reynolds number $\mathrm{Re}\equiv U_{\ce{O2}}d/\nu_{\ce{O2}}=400$ and Mach number $\mathrm{Ma}\equiv U_{\ce{O2}}/a_{\ce{O2}}=0.1$, where $U_{\ce{O2}}$ is the injection velocity, $d$ is the jet diameter, $\nu_{\ce{O2}}$ is the kinematic viscosity of the injected oxygen, and $a_{\ce{O2}}$ is its speed of sound. A coflow of methane at a lower velocity $U_{\ce{CH4}}/U_{\ce{O2}}=0.001$ accompanies the oxygen jet.
The compressible Navier--Stokes equations for a multicomponent ideal gas are solved with four chemical species (\ce{CH4}, \ce{O2}, \ce{H2O}, \ce{CO2}) and a single irreversible reaction for methane--oxygen combustion \cite{franzelli2012large}:
\[
\ce{CH4 + 2 O2 -> 2 H2O + CO2}.
\]
Characteristic boundary conditions are used at inflow and outflow boundaries, and an isothermal no-slip condition is used on the walls. Standard methods for calculating transport and thermodynamic properties are used. The simulations are conducted using HTR \cite{di2020htr}.
Shortly after the injection of methane and oxygen begins, a focused deposition of energy is deployed near the leading tip of the oxygen jet, as shown in \cref{fig:combustor}. This is modeled as an energy source $\dot{Q}_L$ in the governing equation for total energy,
\[
\dot{Q}_L = B \frac{1}{2\pi\sigma_r^2}
\exp\Big[-\frac{(x-x_0)^2+(y-y_0)^2}{2\sigma_r^2}\Big]
\frac{2}{\sigma_t\sqrt{2\pi}}
\exp\Big[-\frac{4(t-t_0)^2}{2\sigma_t^2}\Big]\,,
\]
where $B$ is the amplitude, $\sigma_r$ is the radius of the energy kernel, $\sigma_t$ is the duration of the energy pulse, $(x_0,y_0)$ is the focal location, and $t_0$ is the time of energy deposition. This produces a kernel of hot gas, seen in the \cref{fig:combustor} inset. Successful ignition depends on the parameters of the energy deposition as well as the local composition and flow conditions as the hot kernel cools and advects with the flow.
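A minimal sketch of this space--time Gaussian source term, written directly from the expression above, is given below; the numerical values passed in the example call are placeholders for illustration and are not the values used in the HTR simulations.
\begin{verbatim}
import numpy as np

def laser_source(x, y, t, B, sigma_r, sigma_t, x0, y0, t0):
    """Energy deposition rate Q_L(x, y, t) as written above: a Gaussian
    kernel of radius sigma_r centred at (x0, y0), modulated in time by a
    Gaussian pulse of duration sigma_t centred at t0, with amplitude B."""
    spatial = (1.0 / (2.0 * np.pi * sigma_r**2)) * np.exp(
        -((x - x0)**2 + (y - y0)**2) / (2.0 * sigma_r**2))
    temporal = (2.0 / (sigma_t * np.sqrt(2.0 * np.pi))) * np.exp(
        -4.0 * (t - t0)**2 / (2.0 * sigma_t**2))
    return B * spatial * temporal

# Placeholder parameter values, for illustration only.
print(laser_source(x=3.0, y=0.5, t=1.2e-4, B=0.05,
                   sigma_r=0.2, sigma_t=2.0e-5,
                   x0=3.0, y0=0.5, t0=1.2e-4))
\end{verbatim}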
\begin{figure}
\begin{tikzpicture}
\centering
\node (at) at (-3, 3) {\includegraphics[width=4cm, height=4cm]{Snapshot_Temperature.pdf}};
\node (et) at (5, 3) {\includegraphics[width=8cm, height=4cm]{Snapshot_Concentration.pdf}};
\node[draw=red, dashed, minimum width=1.1cm, minimum height=1.2cm, text centered] (bt) at (3,3) {};
\node[yshift = -0.3cm, xshift = -0.1cm] (ct) at (at.north) {};
\node[yshift = 0.5cm, xshift = -0.1cm] (dt) at (at.south) {};
\draw[arrow, dashed, red] (bt.north) -- (ct);
\draw[arrow, dashed, red] (bt.south) -- (dt);
\draw [decorate, decoration = {calligraphic brace}] (7.96,1) -- node[anchor=north]{60.0 (jet thickness)} (1.5,1);
\draw [decorate, decoration = {calligraphic brace}] (9,4.75) -- node[anchor=south, rotate = 270]{20.0 (jet thickness)} (9,1.25);
\draw[arrow] (9.75, 4) -- node[anchor=south] {Outflow} (11, 4);
\draw[arrow] (9.75, 2) -- node[anchor=south] {Outflow} (11, 2);
\draw[arrow] (-0.75, 4) -- node[anchor=south] {CH4 co-flow} (1.25, 4);
\draw[arrow] (-0.75, 3) -- node[anchor=south] {O2 jet} (1.25, 3);
\draw[arrow] (-0.75, 2) -- node[anchor=south] {CH4 co-flow} (1.25, 2);
\node[text = white] at (4.75,4.65) {Wall};
\node[text = white] at (4.75,1.5) {Wall};
\node at (4.75,5.1) {Mass Fraction of Oxygen};
\node at (-3.25,5.1) {Temperature (F)};
\node[draw = red, circle] (rc) at (-3.8, 2.6) {};
\draw (rc) node[anchor=south, text = white, above=5pt]{Ignition};
\draw[dashed, red] (rc) -- node[anchor = south, text = black, below = 15pt] {$x$}(-3.8, 1.3);
\draw[dashed, red] (rc) -- node[anchor = east, text = black, left = 8pt] {$y$} (-4.8, 2.6);
\end{tikzpicture}
\caption{Setup of a 2D laser-induced ignition in a rocket combustor. The snapshot was taken during laser deployment. Pure oxygen is injected at 350K into a two-dimensional chamber, along with a methane co-flow. Here $x$ and $y$ are the two-dimensional coordinates of laser-induced ignition inside the chamber.}
\label{fig:combustor}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tabular}{@{\hskip1pt}c@{\hskip1pt}c@{\hskip1pt}c@{\hskip1pt}c}
\text{\scriptsize $Y_{\text{Oxygen}}$, t=0.0e+00s}
&
\text{\scriptsize $Y_{\text{Oxygen}}$, t=2.0e-04s}
&
\text{\scriptsize Temperature, t=0.0e+00s}
&
\text{\scriptsize Temperature, t=2.0e-04s}
\\
\includegraphics[scale=0.15]{Evolution/MassFractionofO2,t=0.0e+00s.pdf}
&
\includegraphics[scale=0.15]{Evolution/MassFractionofO2,t=2.0e-04s.pdf}
&
\includegraphics[scale=0.15]{Evolution/Temperature,t=0.0e+00s.pdf}
&
\includegraphics[scale=0.15]{Evolution/Temperature,t=2.0e-04s.pdf}
\\
\text{\scriptsize $Y_{\text{Oxygen}}$, t=4.0e-05s}
&
\text{\scriptsize $Y_{\text{Oxygen}}$, t=2.4e-04s}
&
\text{\scriptsize Temperature, t=4.0e-05s}
&
\text{\scriptsize Temperature, t=2.4e-04s}
\\
\includegraphics[scale=0.15]{Evolution/MassFractionofO2,t=4.0e-05s.pdf}
&
\includegraphics[scale=0.15]{Evolution/MassFractionofO2,t=2.4e-04s.pdf}
&
\includegraphics[scale=0.15]{Evolution/Temperature,t=4.0e-05s.pdf}
&
\includegraphics[scale=0.15]{Evolution/Temperature,t=2.4e-04s.pdf}
\\
\text{\scriptsize $Y_{\text{Oxygen}}$, t=8.0e-05s}
&
\text{\scriptsize $Y_{\text{Oxygen}}$, t=2.8e-04s}
&
\text{\scriptsize Temperature, t=8.0e-05s}
&
\text{\scriptsize Temperature, t=2.8e-04s}
\\
\includegraphics[scale=0.15]{Evolution/MassFractionofO2,t=8.0e-05s.pdf}
&
\includegraphics[scale=0.15]{Evolution/MassFractionofO2,t=2.8e-04s.pdf}
&
\includegraphics[scale=0.15]{Evolution/Temperature,t=8.0e-05s.pdf}
&
\includegraphics[scale=0.15]{Evolution/Temperature,t=2.8e-04s.pdf}
\\
\text{\scriptsize $Y_{\text{Oxygen}}$, t=1.2e-04s}
&
\text{\scriptsize $Y_{\text{Oxygen}}$, t=3.2e-04s}
&
\text{\scriptsize Temperature, t=1.2e-04s}
&
\text{\scriptsize Temperature, t=3.2e-04s}
\\
\includegraphics[scale=0.15]{Evolution/MassFractionofO2,t=1.2e-04s.pdf}
&
\includegraphics[scale=0.15]{Evolution/MassFractionofO2,t=3.2e-04s.pdf}
&
\includegraphics[scale=0.15]{Evolution/Temperature,t=1.2e-04s.pdf}
&
\includegraphics[scale=0.15]{Evolution/Temperature,t=3.2e-04s.pdf}
\\
\text{\scriptsize $Y_{\text{Oxygen}}$, t=1.6e-04s}
&
\text{\scriptsize $Y_{\text{Oxygen}}$, t=3.6e-04s}
&
\text{\scriptsize Temperature, t=1.6e-04s}
&
\text{\scriptsize Temperature, t=3.6e-04s}
\\
\includegraphics[scale=0.15]{Evolution/MassFractionofO2,t=1.6e-04s.pdf}
&
\includegraphics[scale=0.15]{Evolution/MassFractionofO2,t=3.6e-04s.pdf}
&
\includegraphics[scale=0.15]{Evolution/Temperature,t=1.6e-04s.pdf}
&
\includegraphics[scale=0.15]{Evolution/Temperature,t=3.6e-04s.pdf}
\end{tabular}
\caption{Evolution of the temperature and mass fraction of oxygen in the 2D chamber with successful ignition. Snapshots of temperature and mass fraction of oxygen are taken every 40 microseconds. Oxygen and methane are constantly injected into the reactor from the left side. A laser is deployed inside the chamber to induce ignition around 120 microseconds after the simulation starts. This triggers combustion mechanisms, gradually increasing the average temperature of the chamber as more and more methane is ignited in the engine.}
\label{fig:snapshots}
\end{figure}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ll,}
\toprule
\hdr{l}{Parameter} & \hdr{l}{Definition} & \hdr{c}{Range} \\
\midrule
x & x coordinate of the center of the heat kernel & [0, \, 7.0] \\
y & y coordinate of the center of the heat kernel & [0, \, 1.0]\\
amplitude & amount of energy deposited by the heat kernel & [0, \, 0.08] \\
radius & spatial radius of the heat kernel & [0, \, 0.5] \\
duration & duration of the heating & [0, \, 1.0] \\
MaF & Mach number of the co-flowing CH4 jet & [0, \, 0.02]\\
\bottomrule
\end{tabular}
\caption{Description of combustion parameters}
\label{tab:parameters}
\end{center}
\end{table}
\subsection{Planar jet diffusion simulation with fixed combustion parameters}
We first show the performance of our physics-based PNODE model in learning a single trajectory from a planar jet diffusion simulation with successful ignition, which is shown in \cref{fig:snapshots}. \Cref{fig:sing_dynamic_temp,fig:sing_dynamic_o2} show the evolution of temperature and the mass fraction of oxygen over time. In this case, we choose $2 \times 10^{-5} \ \rm s$ as the time step of our numerical integration for our physics-based PNODE and choose a neural network with 2 hidden layers and 50 neurons in each layer as $F_{\theta}$. With 20
observations (\textit{i.e.,} points in simulation chosen to compute the mean squared errors) of the temperature and the mass fraction of oxygen, our physics-based PNODE can predict the evolution of both the average temperature and the mass fraction of oxygen with high accuracy. \Cref{fig:sing_dynamic_mass} shows the prediction of the total mass of the system over time. Note that with knowledge of the mass inflow and mass outflow, our physics-based PNODE can recover the change in total mass over time. \Cref{fig:sing_dynamic_heat} shows the predicted heat source from the neural network as a function of time. We observe that the heat source in our 0D flow model represents the laser energy deposited. The predicted temperature from the PNODE continues to rise due to the energy released from chemical reactions after the peak of the heat source function, showing that our physics-based PNODE is able to learn not only the impact of the laser beam but also the chemical reactions from the combustion system.
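To make the forward pass of the physics-based PNODE concrete, the sketch below advances an assumed 0D state vector with explicit Euler steps of $2 \times 10^{-5}$ s, where a small tanh network with two hidden layers of 50 neurons (as above) supplies the learned contribution to the right-hand side. The state layout, the randomly initialised weights, the placeholder physics term, and the laser-parameter vector are illustrative assumptions and do not reproduce the actual implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Random tanh MLP, an illustrative stand-in for the trained network."""
    params = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return forward

# Assumed 0D state layout: [T, p, Y_CH4, Y_O2, Y_H2O, Y_CO2]
state = np.array([350.0, 50662.5, 1.0, 0.0, 0.0, 0.0])
eta = np.array([3.0, 0.5, 0.05, 0.2, 0.5, 0.01])   # laser parameters (placeholders)

# Two hidden layers with 50 neurons each, matching the text.
F_theta = make_mlp([1 + eta.size, 50, 50, state.size])

def rhs(state, t, eta):
    """dstate/dt = known 0D physics (placeholder zero term) + learned term."""
    physics = np.zeros_like(state)               # placeholder for the 0D model terms
    learned = F_theta(np.concatenate(([t], eta)))
    return physics + learned

dt, t = 2e-5, 0.0
for _ in range(20):                              # 20 explicit Euler steps (4e-4 s)
    state = state + dt * rhs(state, t, eta)
    t += dt
print(state)
\end{verbatim}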
\begin{figure}[htbp]
\centering
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{single_dynamics/Temperature.pdf}
\caption{Prediction of average temperature by physics-based PNODE. The temperature initially stays at 350K and then rises gradually to 3500K after successful ignition. The evolution of volume-averaged temperature from PNODE matches closely with observations from the simulation.}
\label{fig:sing_dynamic_temp}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{single_dynamics/o2.pdf}
\caption{Prediction of mass fraction of \ce{O2} by physics-based PNODE. The mass fraction of oxygen gradually increases from 0 to 12\%, with a temporary decrease during laser ignition that triggers combustion mechanisms. Similarly, the prediction from PNODE matches closely with observations.}
\label{fig:sing_dynamic_o2}
\end{minipage}
\label{fig:single_dynamics_1}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{single_dynamics/totalmass.pdf}
\caption{Prediction of total mass by physics-based PNODE. The overall mass inside the reactor constantly decreases due to mass outflow from the outlet. Since our physics-based PNODE preserves physical laws, the total mass matches perfectly with observations.}
\label{fig:sing_dynamic_mass}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{single_dynamics/heatsource.pdf}
\caption{Heat source $\dot{Q}$ predicted by deep neural networks to represent the deposition of laser energy. Our physics-based PNODE calibrates the amount of heat deposited into the reactor so that the 0D model is able to simulate successful ignition from high-fidelity simulations.}
\label{fig:sing_dynamic_heat}
\end{minipage}
\label{fig:single_dynamics_2}
\end{figure}
\subsection{Planar jet diffusion simulations with varying \texorpdfstring{$y$}{y} coordinate and amplitude of heat kernel}
In this section, we consider data generated from planar jet diffusion simulations with only two varying parameters: the $y$ coordinate of the location of the laser in the chamber and the amplitude of the laser beam. A total of 143 data points are generated uniformly at random from the selected intervals for the $y$ location and the amplitude of the laser. The generated final data points are shown in \cref{fig:2d_simulation_final_temp,fig:2d_simulation_final_o2}, which show the distribution of the temperature and the mass fraction of \ce{O2}, respectively, at the end of the simulation for different $y$ locations and amplitudes. \Cref{fig:2d_simulation_temp_evolution,fig:2d_simulation_o2_evolution} show the evolution of the average temperature and the mass fraction of \ce{O2}, respectively, over time for all simulations in the training data. We observe sharp transitions in temperature near the boundary of ignition success (\textit{i.e.,} the boundary of the subset of laser parameters for which the final temperature is above 1,000K) as the two laser parameters vary.
We randomly selected 100 of the 143 samples to train our PNODE-based 0D model with the neural network hyperparameters shown in \cref{tab:hyperparameters}. Since we are using a simple feed-forward neural network, we manually optimize the structure of our PNODE based on its performance on a validation set of 20 samples randomly chosen from the 100 training samples. The performance of our PNODE on seen parameters (training data) is shown in \cref{fig:2d_simulation_temp_train,fig:2d_simulation_o2_train}, which show the mean relative error of temperature and mass fraction of oxygen over time. The prediction of our neural ODE model is highly accurate on all training data points, with a mean relative error of temperature below 1.6\% and a mean relative error of mass fraction of oxygen below 13.3\%. We also test the performance of our PNODE-based 0D model on the remaining 43 unseen samples from our data set. The results are shown in \cref{fig:2d_simulation_temp_test,fig:2d_simulation_o2_test}. Our PNODE-based model is able to predict ignition success or failure accurately for all points far away from the boundary of the ignition area. Since the final temperature transitions sharply from 350K to above 1000K near the boundary of the area of successful ignition, a false prediction of either ignition or non-ignition leads to large mean relative errors in temperature and mass fraction of oxygen. Hence, our physics-based PNODE shows larger errors for some of the test data points close to the boundary of the ignition area. Nevertheless, our PNODE-based model is still able to capture the sharp transition from the non-ignition region to the ignition region, which is often difficult to learn with conventional interpolation methods.
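For clarity, one plausible reading of the mean relative error reported per trajectory is the time average of the pointwise relative error, as in the short sketch below; the exact definition used to produce the figures may differ, and the numbers in the example are made up.
\begin{verbatim}
import numpy as np

def mean_relative_error(predicted, observed, eps=1e-12):
    """Time-averaged relative error |pred - obs| / |obs| for one trajectory."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean(np.abs(predicted - observed) / (np.abs(observed) + eps)))

# Illustrative example with made-up temperature histories (K).
obs  = [350.0, 360.0, 900.0, 1300.0, 1400.0]
pred = [350.0, 358.0, 870.0, 1320.0, 1390.0]
print(f"mean relative error: {100 * mean_relative_error(pred, obs):.2f}%")
\end{verbatim}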
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ll}
\toprule
Hyperparameter & Value \\ \midrule
Number of hidden layers & 2 \\
Number of neurons in each layer & 300\\
Activation function & tanh \\
Optimization algorithm & L-BFGS-B \\
\bottomrule
\end{tabular}
\caption{Hyperparameters for PNODE}
\label{tab:hyperparameters}
\end{center}
\end{table}
\begin{figure}[htbp]
\centering
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{2d_benchmark/final_temp.pdf}
\caption{Final average temperature with different $y$ and amplitude of laser beam. We observe that there are sharp transitions near the boundary of successful ignition. The final temperature is equal to 350K in the non-ignition area but increases drastically to around 1400K once the parameters cross the S-shaped boundary as shown in the figure.}
\label{fig:2d_simulation_final_temp}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{2d_benchmark/final_o2.pdf}
\caption{Final mass fraction of \ce{O2} with different $y$ and amplitude of laser beam. Similarly, we observe that the average mass fraction displays a binary nature. The final values are either 0.04 or 0.06 depending on the success of laser ignition. The parameters that lead to ignition can be separated from other parameters by an S-shaped curve.}
\label{fig:2d_simulation_final_o2}
\end{minipage}
\label{fig:2d_simulation_1}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{2d_benchmark/temperature_over_time.pdf}
\caption{Evolution of average temperature in training data. The average temperature is initially at 350K. With successful ignition, the temperature gradually increases to values above 1000K. We observe that most ignitions happen between 100 and 150 microseconds after the simulation starts.}
\label{fig:2d_simulation_temp_evolution}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{2d_benchmark/mass_fraction_o2.pdf}
\caption{Evolution of mass fraction of \ce{O2} in training data. The mass fraction of \ce{O2} increases linearly at first due to the injection of oxygen into the chamber. Successful ignition triggers chemical reactions that consume the oxygen, leading to a decrease in the mass fraction of \ce{O2}.}
\label{fig:2d_simulation_o2_evolution}
\end{minipage}
\label{fig:2d_simulation_2}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{2d_benchmark/train_error_temp.pdf}
\caption{Mean relative error of temperature on training data. We observe that errors for data points in the non-ignition region are around 0.1\%, while errors for the ignition region are around 1.6\%. The observed difference is most likely attributable to the wider range of final temperatures (1000K to 1400K) for successful ignition cases, in contrast to the final temperature of 350K shared by all non-ignition cases in the training data.}
\label{fig:2d_simulation_temp_train}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{2d_benchmark/train_error_o2.pdf}
\caption{Mean relative error of mass fraction of \ce{O2} on training data. We observe that the error ranges from 8.9\% to 13.3\% across all the training data points. The evolution of the mass fraction of oxygen is more difficult for PNODE to learn, as it is continuously increasing due to the injection of the fuel. The mean relative errors for successful ignition cases are around 11.2\%, which is slightly lower than the values for non-ignition cases.}
\label{fig:2d_simulation_o2_train}
\end{minipage}
\label{fig:2d_simulation_3}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{2d_benchmark/test_error_temp.pdf}
\caption{Mean relative error of temperature on test data. Most of the errors from our physics-based PNODE are less than 5\%. Due to sharp transitions of final temperature from 350K to 1400K, failure in predicting ignition success leads to large relative errors. Hence, mean relative errors of around 31\% occur near the boundary of the ignition area.}
\label{fig:2d_simulation_temp_test}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{2d_benchmark/test_error_o2.pdf}
\caption{Mean relative error of mass fraction of \ce{O2} on test data. Most of the errors are less than 15\%. Similar to errors in final temperature, there are some errors larger than 20\% near the boundary of the ignition region because of failure in predicting ignition success. The overall errors are also higher than errors in temperature, as it is more difficult for PNODE to predict the mass fraction of oxygen.}
\label{fig:2d_simulation_o2_test}
\end{minipage}
\label{fig:2d_simulation_4}
\end{figure}
\subsection{Planar jet diffusion simulations with six varying combustion parameters}
We now consider planar jet diffusion simulations with all six combustion parameters varying. Training data are generated by sampling uniformly at random within the selected intervals for each of the six parameters specified in \cref{tab:parameters}. To illustrate both the expressiveness and robustness of our PNODE, we use the same hyperparameters as in \cref{tab:hyperparameters} and train our PNODE on data sets of two different sizes generated by this random uniform sampling: one with 100 samples and one with 4,000 samples.
We test the performance of our physics-based PNODE on test data that represent cross sections of two selected dimensions of our six-dimensional parameter space. More precisely, after choosing two of the six parameters for testing, we collect test data on a 30 by 30 grid for those two parameters while fixing the values of the remaining four parameters. In particular, we choose the following three pairs of parameters for the test data:
\begin{itemize}
\item radius and amplitude
\item duration and amplitude
\item y coordinate and amplitude
\end{itemize}
We also compare the results of our PNODE with kernel ridge regression and neural networks. In each data set, 20\% of the samples are used as validation data. To illustrate the improvement from our physics-based PNODE, we deliberately choose the same network structure for the neural network as the one embedded in our 0D flow model. For kernel ridge regression, we use a radial basis function kernel with $\alpha = 2$ and $\gamma = 1$ from the scikit-learn package, manually optimized based on its performance on the validation set. The kernel ridge regression and the classical neural network interpolate the temperature at the end of the simulations in the six-dimensional parameter space. That is, they take the combustion parameters $\eta$ as input and predict the final temperature $T$ as output, which is a significantly simpler task than predicting the evolution of the system, including temperature, pressure, and mass fractions of species. We tested the performance of the three models on an additional 100 test samples based on the final temperature. The mean absolute errors of the PNODE, kernel ridge regression, and neural network are shown in \cref{tab:benchmark}. The physics-based PNODE provides the most accurate predictions with both 100 and 4,000 training samples.
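A minimal sketch of the kernel ridge regression baseline is given below, using scikit-learn's \texttt{KernelRidge} with the RBF kernel, $\alpha = 2$ and $\gamma = 1$ as stated above; the training and test arrays are random placeholders standing in for the six combustion parameters and the final temperatures.
\begin{verbatim}
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Placeholder data: 100 samples of the six combustion parameters (inputs)
# and the corresponding final average temperature in K (output).
X_train = rng.uniform(size=(100, 6))
y_train = np.where(X_train[:, 2] > 0.5, 1400.0, 350.0)   # toy ignition map

krr = KernelRidge(kernel="rbf", alpha=2.0, gamma=1.0)
krr.fit(X_train, y_train)

X_test = rng.uniform(size=(10, 6))
print(krr.predict(X_test))
\end{verbatim}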
\begin{table}[htbp]
\begin{center}
\begin{tabular}{ldd}
\toprule
& \hdr{c}{One hundred samples} & \hdr{c}{Four thousand samples} \\
\midrule
Physics-based PNODE & \bd{200.35} & \bd{149.17} \\
Neural Network & 501.33 & 397.65\\
Kernel Ridge Regression & 273.59 & 225.48 \\
\bottomrule
\end{tabular}
\caption{Mean absolute error of prediction on the final average temperature by three models in the test data}
\label{tab:benchmark}
\end{center}
\end{table}
The prediction of the three models for the cross-section data with 100 training samples is shown in \cref{fig:amplitude_duration,fig:amplitude_radius,fig:y_amplitude}. The prediction with 4,000 training samples is shown in \cref{fig:amplitude_duration_4000,fig:amplitude_radius_4000,fig:y_amplitude_4000}. With only 100 training data points to interpolate in six-dimensional parameter space, both neural networks and kernel ridge regression predict solutions that transition smoothly from non-ignition to ignition in all three cross sections, while our PNODE can capture the sharp transitions near the boundary of the ignition area and also provides an accurate prediction for final temperature for most cases with successful ignition. With the advantage of physics-based PNODE, our 0D flow model can capture the non-linearity and complexity of the ignition map for planar jet diffusion simulations, despite the limited size of training data. With 4,000 training samples, all three models improve their predictions of the ignition area as shown in three cross sections. However, the neural network and kernel ridge regression still predict linear transitions across the boundary of successful ignition, while physics-based PNODE can predict an almost vertical jump from non-ignition to ignition, which is consistent with our ground-truth data. This further shows that physics-based PNODE, with sufficient training data, is capable of matching the accuracy of high-fidelity simulations in predicting the region of successful ignition, even when learning combustion from a high-dimensional parameter space.
In this study, we aim to distinguish the performance of the physics-based PNODE, neural networks, and kernel ridge regression in predicting final temperatures in combustion systems. To achieve this, we applied Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to 300 additional test data points and analyzed the error distribution of the three methods. The test data were separated into two data sets: ignited and non-ignited. We calculated the distances between data points based on the six combustion parameters and divided the points into three categories: core, boundary, and noise points.
Core points have many close neighbors in the same data set, making them easier to predict than other points. Boundary points, on the other hand, are located at the margin of two clusters, most of which are often close to both the ignition and non-ignition data sets. Lastly, we have noise data points that are far away from both data sets. The error distribution of the three models on different types of data points for the ignited case and non-ignited case are shown in \cref{fig:DBSCAN_ignited,fig:DBSCAN_nonignited}. Our results indicate that physics-based PNODE provides consistently accurate predictions of final temperature for both core and boundary points, with most absolute errors less than 200K.
On the contrary, the performance of neural networks and kernel ridge regression was significantly worse on boundary points, as it is more challenging to distinguish ignited cases from non-ignited cases near the boundary. All three methods have low accuracy for noise points, as they are difficult to predict due to a lack of nearby data points. Our analysis showed that a large number of errors from neural networks and kernel ridge regression fall within the 400K to 1000K range, indicating a smooth transition from ignition to non-ignition regions in terms of final temperature. This is inconsistent with the actual ignition process, as final temperatures from high-fidelity simulations or experiments typically display a binary nature with sharp transitions from non-ignition to ignition regions.
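A sketch of this classification using scikit-learn's \texttt{DBSCAN} is given below: points labeled $-1$ are treated as noise, points listed in \texttt{core\_sample\_indices\_} as core points, and the remaining clustered points as boundary points. The neighborhood radius \texttt{eps}, \texttt{min\_samples}, and the placeholder parameter array are assumptions for illustration.
\begin{verbatim}
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
params = rng.uniform(size=(300, 6))          # placeholder: six combustion parameters

# eps and min_samples are illustrative; in practice they are tuned to the data.
db = DBSCAN(eps=0.4, min_samples=5).fit(params)

labels = db.labels_
core_mask = np.zeros(len(params), dtype=bool)
core_mask[db.core_sample_indices_] = True

noise_mask = labels == -1
boundary_mask = ~core_mask & ~noise_mask     # clustered but not core

print("core:", core_mask.sum(),
      "boundary:", boundary_mask.sum(),
      "noise:", noise_mask.sum())
\end{verbatim}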
\begin{figure}[htbp]
\centering
\begin{tabular}{ll}
\includegraphics[scale=0.55]{6d_benchmark/pjd_amp_dur.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/krr_amp_dur.pdf}
\\
\includegraphics[scale=0.55]{6d_benchmark/vnn_amp_dur.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/neural_ode_amp_dur.pdf}
\end{tabular}
\caption{Prediction of the final temperature by the three models with 100 training samples on simulations with varying amplitude and duration of the laser beam. Top left: ground truth by Planar Jet Diffusion Simulations. Top right: prediction by kernel ridge regression. Bottom left: prediction by neural networks. Bottom right: prediction by physics-based PNODE. We observe that, with only 100 training data points, our physics-based PNODE provides the most accurate prediction of final temperature among all three methods, with a sharp boundary between ignition and non-ignition regions. On the contrary, kernel ridge regression and neural networks predict the final temperature of many data points to be between 400K and 1000K, which is inconsistent with our high-fidelity simulations.}
\label{fig:amplitude_duration}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tabular}{ll}
\includegraphics[scale=0.55]{6d_benchmark/pjd_4000_amp_dur.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/krr_4000_amp_dur.pdf}
\\
\includegraphics[scale=0.55]{6d_benchmark/vnn_4000_amp_dur.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/neural_ode_4000_amp_dur.pdf}
\end{tabular}
\caption{Prediction of the final temperature by the three models with 4,000 training samples on simulations with varying amplitude and duration of the laser beam. Top left: ground truth by Planar Jet Diffusion Simulations. Top right: prediction by kernel ridge regression. Bottom left: prediction by neural networks. Bottom right: prediction by physics-based PNODE. With 4,000 training samples, all three methods improve their predictions of the final temperature. However, kernel ridge regression and neural networks still predict smooth transitions across the boundary of successful ignition, with the final temperature of many data points between 400K and 1000K, while our physics-based PNODE predicts an almost vertical jump from non-ignition to ignition, which is consistent with our high-fidelity simulations.}
\label{fig:amplitude_duration_4000}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tabular}{ll}
\includegraphics[scale=0.55]{6d_benchmark/pjd_amp_radi.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/krr_amp_radi.pdf}
\\
\includegraphics[scale=0.55]{6d_benchmark/vnn_amp_radi.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/neural_ode_amp_radi.pdf}
\end{tabular}
\caption{Prediction of the final temperature by the three models with 100 training samples on simulations with varying amplitude and radius of the laser beam. Top left: ground truth by Planar Jet Diffusion Simulations. Top right: prediction by kernel ridge regression. Bottom left: prediction by neural networks. Bottom right: prediction by physics-based PNODE. We observe that, with only 100 training data points, our physics-based PNODE provides the most accurate prediction of final temperature among all three methods, with a sharp boundary between ignition and non-ignition regions. On the contrary, kernel ridge regression and neural networks predict the final temperature of many data points to be between 400K and 1000K, which is inconsistent with our high-fidelity simulations.}
\label{fig:amplitude_radius}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tabular}{ll}
\includegraphics[scale=0.55]{6d_benchmark/pjd_4000_amp_radi.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/krr_4000_amp_radi.pdf}
\\
\includegraphics[scale=0.55]{6d_benchmark/vnn_4000_amp_radi.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/neural_ode_4000_amp_radi.pdf}
\end{tabular}
\caption{Prediction of the final temperature by the three models with 4,000 training samples on simulations with varying amplitude and radius of the laser beam. Top left: ground truth by Planar Jet Diffusion Simulations. Top right: prediction by kernel ridge regression. Bottom left: prediction by neural networks. Bottom right: prediction by physics-based PNODE. With 4,000 training samples, all three methods improve their predictions of the final temperature. However, kernel ridge regression and neural networks still predict smooth transitions across the boundary of successful ignition, with the final temperature of many data points between 400K and 1000K, while our physics-based PNODE predicts an almost vertical jump from non-ignition to ignition, which is consistent with our high-fidelity simulations.}
\label{fig:amplitude_radius_4000}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tabular}{ll}
\includegraphics[scale=0.55]{6d_benchmark/pjd_y_amp.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/krr_y_amp.pdf}
\\
\includegraphics[scale=0.55]{6d_benchmark/vnn_y_amp.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/neural_ode_y_amp.pdf}
\end{tabular}
\caption{Prediction of the final temperature by the three models with 100 training samples on simulations with varying amplitude and y coordinate of the laser beam. Top left: ground truth by Planar Jet Diffusion Simulations. Top right: prediction by kernel ridge regression. Bottom left: prediction by neural networks. Bottom right: prediction by physics-based PNODE. We observe that, with only 100 training data points, our physics-based PNODE provides the most accurate prediction of final temperature among all three methods, with a sharp boundary between ignition and non-ignition regions. On the contrary, kernel ridge regression and neural networks predict the final temperature of many data points to be between 400K and 1000K, which is inconsistent with our high-fidelity simulations.}
\label{fig:y_amplitude}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tabular}{ll}
\includegraphics[scale=0.55]{6d_benchmark/pjd_4000_y_amp.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/krr_4000_y_amp.pdf}
\\
\includegraphics[scale=0.55]{6d_benchmark/vnn_4000_y_amp.pdf}
&
\includegraphics[scale=0.55]{6d_benchmark/neural_ode_4000_y_amp.pdf}
\end{tabular}
\caption{Prediction of the final temperature by the three models with 4,000 training samples on simulations with varying amplitude and y coordinate of the laser beam. Top left: ground truth by Planar Jet Diffusion Simulations. Top right: prediction by kernel ridge regression. Bottom left: prediction by neural networks. Bottom right: prediction by physics-based PNODE. With 4000 data points, all three methods improve their predictions on volume-average final temperature across all data points. We observe that neural networks and kernel ridge regression still predict linear transitions across the boundary of successful ignition, while physics-based PNODE can predict an almost vertical jump from non-ignition to ignition, which is consistent with our high-fidelity simulations.}
\label{fig:y_amplitude_4000}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tabular}{lll}
\includegraphics[scale=0.35]{DBSCAN/Physics-based_PNODE_Core_ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Physics-based_PNODE_Boundary_ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Physics-based_PNODE_Noise_ignited.pdf}
\\
\includegraphics[scale=0.35]{DBSCAN/Neural_Network_Core_ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Neural_Network_Boundary_ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Neural_Network_Noise_ignited.pdf}
\\
\includegraphics[scale=0.35]{DBSCAN/Kernel_Ridge_Regression_Core_ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Kernel_Ridge_Regression_Boundary_ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Kernel_Ridge_Regression_Noise_ignited.pdf}
\end{tabular}
\caption{Error distribution of physics-based PNODE, neural networks, and kernel ridge regression on ignited data sets. Our results indicate that the neural networks and kernel ridge regression provide less accurate predictions than physics-based PNODE, especially on boundary points, as it is more challenging to distinguish ignited cases from non-ignited cases near the boundary of successful ignition.}
\label{fig:DBSCAN_ignited}
\end{figure}
\begin{figure}[htbp]
\centering
\begin{tabular}{lll}
\includegraphics[scale=0.35]{DBSCAN/Physics-based_PNODE_Core_non-ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Physics-based_PNODE_Boundary_non-ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Physics-based_PNODE_Noise_non-ignited.pdf}
\\
\includegraphics[scale=0.35]{DBSCAN/Neural_Network_Core_non-ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Neural_Network_Boundary_non-ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Neural_Network_Noise_non-ignited.pdf}
\\
\includegraphics[scale=0.35]{DBSCAN/Kernel_Ridge_Regression_Core_non-ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Kernel_Ridge_Regression_Boundary_non-ignited.pdf}
&
\includegraphics[scale=0.35]{DBSCAN/Kernel_Ridge_Regression_Noise_non-ignited.pdf}
\end{tabular}
\caption{Error distribution of physics-based PNODE, neural networks and kernel ridge regression on non-ignited data sets. Our results indicate that neural networks and kernel ridge regression provide less accurate predictions than physics-based PNODE, especially on boundary points, as it is more challenging to distinguish ignited cases from non-ignited cases near the boundary of successful ignition.}
\label{fig:DBSCAN_nonignited}
\end{figure}
\section{Conclusion}
In this work, we proposed a novel hybrid model that combines a 0D reacting flow model and deep neural networks based on physics-based PNODEs. By embedding deep neural networks into the 0D model as the heat source function and Arrhenius reaction parameters, we were able to build a reduced-order surrogate model for high-fidelity Computational Fluid Dynamics simulations of combustion processes. Furthermore, our PNODE-based model is capable of providing, with a limited number of training samples, physically constrained solutions that describe the highly complex parameter space of combustion systems. We validated our approach with high-fidelity planar jet diffusion simulations using the HTR solver with six varying combustion parameters. The performance of our PNODE-based 0D model was compared with that of two widely used approaches, namely kernel ridge regression and classical neural networks. Our results show that our PNODE-based 0D model is able to predict, with only a limited number of data points, sharp transitions near the boundary of successful ignition, as well as the evolving chemistry of the combustion system.
\printbibliography
\end{document}
\section{Introduction} \label{sec:intro}
Most wall-bounded flows found in engineering, transportation, energy production, and other industrial applications, as well as in nature, are turbulent at high Reynolds number; for example, the Reynolds number of the turbulent flow over the wing of a heavy transport aircraft is of the order of $10^8$. Such flows are to this day unattainable by direct numerical simulation and must rely on experimental investigation for understanding via empirical measurements \citep{talamelli2009ciclope}. An important turbulent boundary layer (TBL) encountered in numerous engineering applications, including diffusers, turbine blades, aircraft wings, wind turbine blades and marine vessel flows, is one that develops through stagnation to flow reversal at the wall, either because it has developed over a sufficient streamwise domain in an adverse pressure gradient (APG) environment \citep{Oswatitsch1958,Perry75}, or because the boundary layer is subjected to an abrupt increase in the applied pressure gradient, with the detached TBL separating from the surface and consisting of vortices within the separation bubble \citep{Chong-etal-1998}. Flow separation in TBLs is undesirable, as it may lead to catastrophic consequences and reduced performance due to a significantly altered mean pressure distribution on the surface \citep{kitsios2016direct}, as well as large pressure fluctuations \citep{FRICKE1971113} that can be an important source of fatigue failure. Therefore, for such flows subjected to an APG environment, remaining attached to the curved surface is crucial to the efficiency of many engineering systems \citep{kitsios2017direct}. Much research has been conducted on the design of aerodynamic and hydrodynamic surfaces and on controls to prevent or delay flow separation, such as the boundary layer trip devices on the lower surfaces of a glider and the vortex generators on aircraft wings. However, many of the solutions to boundary layer separation in APG environments are based on trial-and-error, with limited detailed measurements that have shed little light on the fluid physics of the APG-TBL. In order to enable the design of more efficient active or passive techniques to prevent, delay or reduce flow separation, significant new understanding is required, based on well-defined experiments measured with diagnostic tools that elucidate the statistical and spatial structure of the APG-TBL with sufficient spatial resolution, especially in the near-wall region of an APG-TBL on the verge of separation.
Few numerical and experimental investigations have studied the spatial structures, specifically the structure of coherent motions, in an APG-TBL near separation \citep{skaare1994turbulent,krogstad1995influence,Chong-etal-1998,cheng2015large,drozdz2016study,kitsios2017direct,drozdz2017experimental,eich2020large}.
The non-equilibrium turbulent boundary layer near separation, whose turbulent structure depends on the upstream history conditions, was investigated using hot-wire anemometry by \citet{drozdz2017experimental}. Their study showed that, for the same APG, the near-wall momentum increases with increasing Reynolds number at incipient detachment, with an observed increase in the turbulent kinetic energy. It was found that the outer scaling laws are no longer applicable in this region. The outer inflection point in the mean streamwise velocity profile was found to correspond to the outer peak location in the Reynolds stresses and the zero-crossing values of the skewness. The wall-normal position of this inflection point was found to vary with the upstream flow history conditions. The inflection point is the result of interaction between the large and small scales, which causes enhanced convection velocities of the near-wall small scales. At incipient detachment, the turbulence activity drops near the wall and is maximum in the outer region \citep{drozdz2016study}.
\citet{eich2020large} showed that the dynamics of the separation line are strongly modulated in space and time by the low-frequency large-scale motions (LSMs). Using conditional sampling, they observed that high-momentum LSMs are able to shift the separation point downstream, whereas low-momentum LSMs resulted in the opposite.
\citet{drozdz2021effect} studied a wide range of Reynolds numbers, $Re_{\delta_2}$ ranging from 4,900 to 14,600, in an APG-TBL approaching separation using hot-wire anemometry. This study found that the separation point is shifted downstream with increasing Reynolds number due to an increase in the near-wall convection velocity. In addition, it was observed that with increasing Reynolds number the interactions between the large and small scales, {\em i.e.} the amplitude modulation, are enhanced.
Streamwise vortices in wall-bounded flows are known to transfer high momentum from the outer region to the near-wall region, and vice versa for low momentum \citep{10.1017/s0022112097008562}. This mechanism is used in the application of vortex generators to control APG-TBL flow separation, where the vortex generators reduce separation by redirecting the reversed flow towards the mean flow direction \citep{logdberg2006vortex}.
\citet{chan2022investigation} used micro-vortex generators to produce large-scale counter-rotating vortex pairs, which contribute to the streaks by transferring high-momentum fluid from the outer region to the near-wall region. This produces high-speed regions with their centres at the vortex pairs and low-speed regions on the outer sides of the vortex pairs. Local skin friction decreases by up to 15\% in the low-speed region, whereas an increase in the skin friction is observed over the high-speed region. If the contributions of high- and low-momentum motions to skin friction can be accurately predicted, then the design and implementation of such flow-control methodologies could be made more efficient.
Investigating the structure of wall-bounded turbulence at high Reynolds numbers, in a region that is representative of a well-defined self-similar APG-TBL, requires a facility which has been designed specifically to produce this flow. The facility requires optical access to enable high-spatial-resolution (HSR) two-component -- two-dimensional (2C-2D) particle image velocimetry (PIV) over a large streamwise extent of the TBL, {\em i.e.} typically more than $2\delta$, to enable the investigation of ejections and sweeps, spanwise vortices, and uniform momentum zones \citep{thavamani2020characterisation}. The requirement for HSR often means that a single camera cannot capture the entire wall-normal range of a wall-bounded turbulent flow over the required streamwise domain, so that multiple cameras must be used for HSR 2C-2D PIV. \citet{de2014high} and \citet{cuvier2017extensive} used an array of cameras to obtain large-field measurements of a ZPG-TBL and an APG-TBL, respectively. However, the recent development of large CCD sensors has enabled the acquisition of PIV image pairs with sufficient spatial resolution while capturing the complete wall-normal range of the boundary layer over a streamwise domain of several boundary layer thicknesses. \citet{sun2021distortion} used a 47 MPx CCD camera to acquire the complete wall-normal range of a ZPG-TBL at $Re_\tau = 2,386$. Their measurement domain size was $2.5\delta \times 1.5\delta$ with a wall-normal resolution of 2.7 viscous units.
This paper describes the facility used to establish the self-similar APG-TBL and provides the details of the PIV experimental set-up and the PIV analysis. The characteristics of the strong APG-TBL are documented, including the determination from HSR 2C-2D PIV of the boundary layer parameters: the relevant length and velocity scales, wall-shear stress, mean skin friction, pressure gradient parameter and shape factor, as well as the spatial variation of the similarity coefficients necessary to establish that the conditions of self-similarity are satisfied in the experimental facility. First- and second-order statistics scaled with inner and outer units and the wall-normal distribution of the quadrant decomposition of the Reynolds shear stress are presented and compared with the ZPG-TBL and the mild APG-TBL results. Additionally, the profiles of the third-order moments, skewness, and flatness are presented and used to analyse the effect of a strong APG on the structure of a turbulent boundary layer. Finally, mean skin friction is studied in detail using the RD decomposition \citep{renard2016theoretical} to analyse the contributions of the turbulent kinetic energy, as well as the APG, to mean skin friction. Furthermore, a conditional decomposition of mean skin friction is also presented to demonstrate the contributions of ejections and sweeps to the generation of mean skin friction.
\section{Experimental method}
\subsection{LTRAC APG-TBL wind tunnel}
For the experimental study of a self-similar high-Reynolds-number APG-TBL flow near separation, a wind tunnel facility is used to develop such a flow. The flow passes through a contraction, following a blower and several flow-straightening stages, to sufficiently thin the boundary layer on the walls. Following a trip device at the entrance of the test section, the turbulent boundary layer is required to develop over a sufficient length of a flat plate to obtain the maximum possible thickness and achieve high Reynolds numbers. In order to impose an APG on the boundary layer, the cross-section of the tunnel should diverge in the streamwise direction. This can be achieved by making the roof follow a curved path. If the roof structure is rigid, that allows only a single fixed value of the imposed APG at each streamwise location. A flexible curved roof can impose different magnitudes of APG on the flow, which is essential to fine-tune the APG required to produce a self-similar turbulent boundary layer over a considerable length in the streamwise direction.
The second requirement is to bring the self-similar APG-TBL flow near separation.
According to \citet{dengel1990experimental}, the mean skin friction coefficient $C_f$ may be as low as $3.5 \times 10^{-4}$ before separation. So, in the measured domain of interest (DOI), $C_f$ should be within an order of magnitude of this value. To achieve that, the approach of \citet{skaare1994turbulent} can be adopted, which imposes a strong APG at the beginning of the test section such that $dP/dx > 0$ and $d^2P/dx^2> 0$, in order to bring the flow close to separation. Then, to keep $C_f$ constant in the DOI, the APG in the downstream region is relaxed so that $dP/dx > 0$ and $d^2P/dx^2< 0$ and a stable equilibrium boundary layer is produced over a considerable length of the streamwise domain.
A new test section satisfying the above requirements was built on an existing wind tunnel facility in the Laboratory of Turbulence Research in Aerospace and Combustion (LTRAC), Monash University, Australia, which is reported by \citet{amili2012measuring}. A 3D model of the wind tunnel is shown in figure \ref{fig:3D_model_WT}. This is an open-type wind tunnel that consists of an inlet, a centrifugal fan, several flow straighteners, a contraction, a tripping device, a test section, and an outlet. The centrifugal fan is powered by a three-phase 5.5 kW motor. The rotational speed of the motor is adjusted to control the airflow rate. A settling chamber connected to the blower consists of honeycombs and screens which act as flow straighteners. The contraction has an aspect ratio of 10:1 with the smaller open face measuring 1000 mm $\times$ 100 mm. Based on its location and purpose, the new wind tunnel is referred to as the LTRAC APG-TBL wind tunnel.
\begin{figure*}[!t]
\centering
\begin{overpic}[percent,grid=false,tics=10,width=1.05\textwidth]{WT14-000-MASTER-Assem1-profile4_2.pdf}
\put(62,19){\color{blue}\rotatebox{-15}{\tiny 4.5 m}}
\put(40,24.5){\color{blue}\vector(-1,0.265){0}}
\put(40,24.5){\color{blue}\vector(1,-0.265){37}}
\put(31.8,32){\color{blue}\rotatebox{45}{\tiny 1 m}}
\put(30,31){\color{blue}\vector(-1,-1){0}}
\put(30,31){\color{blue}\vector(1,1){4.3}}
\put(30.4,26){\color{blue}\rotatebox{90}{\tiny 1 m}}
\put(30,31){\color{blue}\vector(0,1){0}}
\put(30,31){\color{blue}\vector(0,-1){8}}
\end{overpic}
\caption{A 3D model of the LTRAC APG-TBL wind tunnel.}
\label{fig:3D_model_WT}
\end{figure*}
The LTRAC APG-TBL wind tunnel is 4.5-m long, 1-m wide and 0.62-m high. The floor and sidewalls of the test section are made of clear glass fixed in aluminium frames. This allows optical access to the flow in the wall-normal as well as the spanwise directions. The side windows, when closed, are sealed with the base floor to prevent any leakage of air that may carry the seeding particles. When opened, each window is supported by two gas springs. The provision of the windows allows easy cleaning of the accumulated seeding particles from the glass to allow a clear view for the PIV cameras and to avoid irregular scattering of the laser light from the accumulated particles. The floor of the test section was levelled to the $xz$ plane.
The roof of the test section is a 4.5 mm thick polycarbonate sheet held in place using eight lifting stations, which themselves are supported on the top frame. Each lifting station has two vertically hanging screw rods that can be wound in and out of the two bearings fixed on the top frame to change the roof height at a particular streamwise location. The two winding wheels on the top are interconnected using a chain and sprocket mechanism to apply uniform lifting on both lateral ends.
The extraction of the wind tunnel consists of a 90$\degree$ elbow duct, a lofted connection from rectangular to circular shape, and a pipe of 300 mm diameter. The extraction parts are made of aluminium sheet metal.
The aluminium parts of the window and floor frames, the top frame, and the lifting stations are black anodised to avoid reflections from the laser sheet. The 90$\degree$ elbow duct of the extraction is painted black from the inside for the same purpose.
\subsection{Development of the self-similar APG-TBL flow near separation}
\subsubsection{Importance of self-similarity}
\noindent Continuous changes in pressure gradients on a curved surface make an APG-TBL flow difficult to study, as they are affected by the history of upstream conditions \citep{kitsios2017direct}. To study the effect of pressure gradient on turbulent statistics, it is important to make the APG-TBL independent of the influence of the upstream conditions. This is achieved by making the flow self-similar. A self-similar APG-TBL is the simplest canonical form of flow to investigate \citep{kitsios2017direct}, and is defined as the flow in which the relative contribution of each term in the governing equations does not depend on the streamwise location \citep{simpson1977features, mellor1966equilibrium}.
\subsubsection{The roof profile for self-similarity}
As the roof of the LTRAC wind tunnel can be adjusted to impose a variable pressure gradient, an optimum roof profile that enables the flow to be self-similar in a domain spanning several boundary layer thicknesses in the streamwise direction is desired. Wall tufts were fixed in a matrix of 50 $\times$ 15 on the ceiling and 50 $\times$ 5 on the floor to visualise any flow separation while adjusting the roof profile.
An estimate of the roof profile which satisfies the self-similarity conditions is shown in figure \ref{fig:roof_profile}. This is governed by the equation:
\begin{equation}
\begin{gathered}
h= 100 - 5.1798 x + 51.9825 x^2 - 13.0411 x^3 + \\ 7.2 \times 10 ^{-12} x^4 + 0.5559 x^5- 0.0719 x^6
\end{gathered}
\end{equation}
where the height $h$ and the streamwise position $x$ are in millimetres and metres, respectively. To achieve this profile, a strong APG with $dP/dx > 0$ and $d^2P/dx^2> 0$ was imposed in the region where $x<0.95$ m, to bring the flow close to separation. Beyond $x=0.95$ m, the APG was relaxed so that $dP/dx > 0$, $d^2P/dx^2< 0$ and $C_f$ is constant, and a stable equilibrium boundary layer is produced.
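A short sketch evaluating this roof-profile polynomial is given below, assuming, as stated above, that $h$ is in millimetres and $x$ in metres; it is included only as a convenience for reproducing the profile.
\begin{verbatim}
import numpy as np

def roof_height_mm(x_m):
    """Roof height h (mm) as a function of streamwise position x (m),
    using the sixth-order polynomial fit given above."""
    c = [100.0, -5.1798, 51.9825, -13.0411, 7.2e-12, 0.5559, -0.0719]
    return sum(ci * x_m**i for i, ci in enumerate(c))

for x in np.linspace(0.0, 4.5, 10):
    print(f"x = {x:4.2f} m  ->  h = {roof_height_mm(x):7.2f} mm")
\end{verbatim}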
\begin{figure*}
\centering
\setlength{\figH}{0.35\textwidth}
\setlength{\figW}{0.95\textwidth}
\input{"profile5_for_thesis2.tikz"}
\caption{Roof profile of the LTRAC APG-TBL wind tunnel is shown by the dashed blue line. The red circles show the locations of the lifting stations on the roof, whereas the black solid line represents the floor.}
\label{fig:roof_profile}
\end{figure*}
\subsection{Experimental setup for 2C-2D PIV measurements}
A NewWave Solo 200XT Nd:YAG pulsed laser was used to illuminate the seeding particles in the 2C-2D PIV experiments. It is a dual-cavity laser that provides two pulses, each with a pulse width of 3$-$5 ns and maximum energy of 200 mJ at the maximum frequency of 10 Hz. The beam diameter at the exit of the laser head is approximately 6 mm. A laser arm was used to direct the laser beam from the laser head to the laser optics set-up.
For 2C-2D PIV, the laser beam was expanded using a cylindrical lens of $f = -100$ mm (where $f$ is the focal length) installed 1,300 mm upstream of the FOV. The thickness of the laser sheet is reduced to a minimum at the middle of the FOV using an $f=1,500$ mm focal-length cylindrical lens, installed 1,500 mm upstream of the middle of the FOV.
The laser sheet formed by the $f=-100$ mm lens expands slowly before entering the wind tunnel from underneath the glass floor near the end of the test section and is reflected towards the FOV using a large mirror with the reflecting surface measuring 150 mm $\times$ 38 mm. All optics are mounted on an optomechanical cage system, except for the large mirror, which is placed inside the tunnel and supported on the tunnel floor. The laser sheet thickness was measured using a low-cost in situ method presented in \citet{Shehzad2019}. To measure the laser sheet thickness, an automated system has been developed which consists of a low-cost CCD camera, a Raspberry Pi to control the camera, and software that calculates the thickness of the laser sheet falling on the bare CCD array in real time. The thickness measured at the waist, which is in the middle of the FOV is approximately 500 $\mu$m. The thin laser sheet allows high-quality 2C-2D PIV measurements, provided that the seeding and camera focus are properly set. The laser sheet is aligned parallel to the calibration target using a novel alignment method presented by \citet{shehzad2021assuring}.
Smoke particles are used as seeding in the current 2C-2D PIV experiments. A Compact ViCount smoke machine is used to create the smoke from a water-glycol mixture. The maximum aerosol output of the smoke machine is 166 mg/sec. It is operated using compressed nitrogen gas at a maximum operating pressure of 820 kPa. Pressure can be varied to change the amount of smoke output. The vapour density of the water-glycol mixture is approximately equal to the vapour density of water. The smoke is mixed with air at the inlet of the centrifugal fan.
Two high-spatial-resolution CCD cameras (Imperx B6640) are used side-by-side to image a large field of view (FOV) of $3.3\delta \times 1.1\delta$ in the $xy$ plane, where $\delta$ is the boundary layer thickness at the middle of the field of view. Each camera has a large sensor that measures 36.17 mm $\times$ 24.11 mm and contains 6,600 $\times$ 4,400 pixels with a square pixel size of 5.5 $\mu$m. The cameras are used with Zeiss lenses of 100 mm focal length set at the f-stop number of 2.8. At a spatial resolution of 40 $\mu$m/pixel and with an overlap of 5 mm between two cameras, a combined FOV that measures 523 mm $\times$ 178 mm is achieved.
The exposure and bit-depth for both cameras are set to 186 $\mu$s and 12 bits, respectively. The cameras are controlled by an external signal provided by a signal synchroniser and operated in the double exposure mode at a frequency of 2 Hz which enables the acquisition of two PIV image pairs every second. A BeagleBone Black (BBB) platform is used as the signal synchroniser. This control device is enclosed in a MED025 box which has 20 BNC ports for 14 outputs and 6 inputs. The device was developed at LTRAC \cite[see][]{fedrizzi2015application}. It is connected to a computer via a USB 2.0 cable and can be controlled using a Python program in a web-based Cloud9 IDE to generate synchronised signals at multiple BNC interfaces. The elapsed time between the two laser pulses $\Delta t$ is set at 100 $\mu$s, so that the maximum particle image displacement on the sensor is less than 15 pixels at the spatial resolution of 40 $\mu$m/pixel and the inflow velocity of 15.6 m/s.
A 40-mm wide strip of 40-grit sandpaper, affixed on top of a 60-mm wide and 1.2-mm thick sheet metal, is used as the boundary layer trip device. It is placed at the entrance of the wind tunnel test section and this location is taken as $x=0$.
\subsection{Digital 2C-2D PIV analysis}
PIV images of the strong APG-TBL were analysed using multigrid/multipass cross-correlation digital PIV (MCCDPIV) \citep{soria1996investigation} with an in-house parallel code. The parameters of the PIV analysis along with the experimental setup and the flow conditions are presented in table \ref{tab:piv_parameters_viscous}. The values of the field of view (FOV), grid spacing and the interrogation window (IW) size are presented in viscous units as well as in pixels. A thin laser sheet together with the sufficient seeding of microparticles leads to the use of a small IW size. The IWs have an overlap of 50\% in the streamwise direction and 75\% in the wall-normal direction. The latter leads to the wall-normal spatial resolution of less than a viscous unit.
The MCCDPIV results were validated using the normalised median test \citep{westerweel2005universal} with a threshold value of $2.0$ and the maximum allowable velocity of 0.21$\times$IW$_x$ where IW$_x$ represents the size of the IW in the streamwise direction. The subpixel accuracy of the PIV displacement was obtained using a Gaussian function fit \citep{willert1991digital, soria1996investigation}.
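For illustration only, a minimal Python/NumPy sketch of the two post-processing ingredients mentioned above is given below: the three-point Gaussian sub-pixel peak fit and the normalised median test with a threshold of 2.0. The function names and the $3\times3$ neighbourhood are illustrative assumptions and do not reproduce the in-house MCCDPIV implementation.
\begin{verbatim}
import numpy as np

def gaussian_subpixel(cm1, c0, cp1):
    # Three-point Gaussian peak fit along one direction.
    # cm1, c0, cp1: correlation values at the integer peak and its neighbours.
    num = np.log(cm1) - np.log(cp1)
    den = 2.0 * (np.log(cm1) - 2.0 * np.log(c0) + np.log(cp1))
    return num / den          # sub-pixel offset from the integer peak

def normalised_median_test(u, threshold=2.0, eps=0.1):
    # Flag spurious vectors in one component of a 2D displacement field.
    flags = np.zeros_like(u, dtype=bool)
    ny, nx = u.shape
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            nb = np.delete(u[j-1:j+2, i-1:i+2].flatten(), 4)  # 3x3, centre excluded
            med = np.median(nb)
            res = np.median(np.abs(nb - med))                 # median residual
            flags[j, i] = np.abs(u[j, i] - med) / (res + eps) > threshold
    return flags
\end{verbatim}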
\begin{table}
\begin{center}
\caption{Parameters related to the experimental setup and the PIV analysis.}
\begin{tabular}{lc}
\hline \hline\noalign{\medskip}
Measurement (Units) & Value \\
\noalign{\smallskip}\hline\noalign{\medskip}
Inflow Velocity (m/s) & 15.5 \vspace{0.0em}\\
Viscous length scale $l^+$ ($\mu$m) & 175\textsuperscript{1} \vspace{0.0em}\\
Magnification $M$ ($\mu$m/pixel) & 40.65 \vspace{0.0em} \\
f-stop number & 2.8 \vspace{0.0em}\\
Camera lens focal length $f$ (mm) & 100 \vspace{0.5em}\\
FOV ($l^+ \times l^+$) & $3,076 \times 1,052$ \\
(pixels $\times$ pixels) & $12,880 \times 4,400$ \\
(mm $\times$ mm) & $523 \times 178$ \\
($\delta \times \delta$) & $3.3 \times 1.06$ \vspace{0.5em} \\
Grid spacing ($l^+ \times l^+$) & $3.82 \times 0.96$ \\
(pixels $\times$ pixels) & $16 \times 4$ \vspace{0.5em} \\
IW size ($l^+ \times l^+$) & $7.65 \times 2.87$ \\
(pixels $\times$ pixels) & $32 \times 12$ \vspace{0.5em} \\
Max velocity ratio filter (Threshold) & 0.21 \vspace{0.0em} \\
Normalised median filter (Threshold) & 2.0 \vspace{0.0em} \\
Time interval between frames $\Delta t$ ($\mu$s) & 100 \vspace{0.0em} \\
Velocity field acquisition frequency ($Hz$) & 2 \vspace{0.0em} \\
Number of samples & 25,000 \vspace{0.0em} \\
Vector field size & $813 \times 1,091$ \vspace{0.0em} \\
\noalign{\smallskip}\hline\noalign{\medskip}
\label{tab:piv_parameters_viscous}
\end{tabular}
\end{center}
\end{table}
\footnotetext{\textsuperscript{1}This is at the middle of the FOV at $x_o$.}
\section{Boundary layer parameters}
\noindent A strong APG-TBL has a negative $\partial V_\infty/\partial x$ in the free stream far from the wall. Hence, $\partial U/\partial y$ must also be negative in the far field for the spanwise vorticity to vanish there. It follows that the $U$ profile must have a maximum when plotted against $y$. Since the mean streamwise velocity profile does not approach a constant value in the wall-normal direction, the classical definitions of the displacement and momentum thicknesses ($\delta_1$ and $\delta_2$) are inappropriate. Therefore, the appropriate velocity scale (the edge velocity $U_e$) and length scales (the boundary layer thickness $\delta$, $\delta_1$, and $\delta_2$) are used as defined in \citet{kitsios2017direct,spalart1993experimental}. This velocity scale was first proposed by \citet{lighthill1963boundary} and is given as
\begin{equation}
U_e(x) = U_\Omega (x, y_\Omega),
\end{equation}
\noindent where
\begin{equation}
U_\Omega (x,y) = - \int_{0}^{y} \Omega_z (x, \widetilde{y}) d\widetilde{y},
\end{equation}
\noindent where $\Omega_z$ is the mean spanwise vorticity, $y_\Omega$ is the wall-normal position at which $\Omega_z$ is 0.2\% of the mean vorticity at the wall, and $\delta = y_\Omega$. The integral length scales are given by
\begin{equation}
\delta_1(x) = \frac{-1}{U_e} \int_{0}^{y_\Omega} y \Omega_z (x, y) dy, \,\,\,\,\, \text{ and}
\end{equation}
\begin{equation}
\delta_2(x) = \frac{-2}{{U_e}^2} \int_{0}^{y_\Omega} y U_\Omega \Omega_z (x, y) dy - \delta_1(x).
\end{equation}
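As a concrete illustration of these vorticity-based definitions, the following Python sketch evaluates $U_e$, $\delta$, $\delta_1$, and $\delta_2$ from a measured mean spanwise vorticity profile at one $x$ station by numerical integration; the function and variable names are ours and the sketch assumes a wall-resolved profile starting at $y=0$.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def vorticity_based_scales(y, omega_z):
    # y: wall-normal coordinates (m), omega_z: mean spanwise vorticity (1/s)
    U_Omega = -cumulative_trapezoid(omega_z, y, initial=0.0)
    # y_Omega: first point where |Omega_z| drops to 0.2% of its wall value
    idx = np.argmax(np.abs(omega_z) <= 0.002 * np.abs(omega_z[0]))
    delta = y[idx]                                   # delta = y_Omega
    Ue = U_Omega[idx]                                # edge velocity
    s = slice(0, idx + 1)
    delta1 = -np.trapz(y[s] * omega_z[s], y[s]) / Ue
    delta2 = (-2.0 / Ue**2) * np.trapz(y[s] * U_Omega[s] * omega_z[s], y[s]) - delta1
    return Ue, delta, delta1, delta2
\end{verbatim}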
\begin{table}[!b]
\begin{center}
\caption{Turbulent boundary layer parameters at the middle of the FOV of the strong APG-TBL.}
\label{tab:bl_parameters}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{cccccccccc}
\hline \hline\noalign{\medskip}
$U_e$(m/s) &$\delta $ (mm) &$\delta_1 $ (mm) &$\delta_2 $ (mm) &$H$ & $u_\tau$ (m/s) & $Re_{\delta_1}$ &$Re_{\delta_2}$ & $Re_{\tau}$ & $\beta$\\
\noalign{\smallskip}\hline\noalign{\medskip}
5.43 &158.37 &62.04 &25.51 &2.42 & 0.084 & 22,920 &9,660 & 940 &30.24 \\ \noalign{\smallskip}\hline\noalign{\medskip}
\end{tabular}
}
\end{center}
\vspace{-0.2em}
\end{table}
The measured length and velocity scales in the middle of the FOV for strong APG-TBL are presented in table \ref{tab:bl_parameters}. The current APG-TBL has $\beta=30.24$ compared to $\beta=2.27$ of the APG-TBL measured in the LMFL wind tunnel. Therefore, the former is referred to as `strong APG-TBL' and the latter as `mild APG-TBL'.
\begin{figure}[!b]
\begin{center}
\begin{tabular}{c}
\begin{overpic}[width=0.4\textwidth]{BLT_length_scales.pdf} \put(25,68){(a)} \end{overpic} \\
\begin{overpic}[width=0.4\textwidth]{Ue.pdf} \put(25,68){(b)} \end{overpic}
\end{tabular}
\end{center}
\vspace*{-0.2in}\caption{Streamwise variation of the length and velocity scales: (a) boundary layer thickness $\delta$, displacement thickness $\delta_1$, momentum thickness $\delta_2$, and (b) the edge velocity $U_e$.
\label{fig:lengt_and_vel_scales}}
\end{figure}
The streamwise dependence of these scales is presented in figure \ref{fig:lengt_and_vel_scales}. $\delta$, $\delta_1$, and $\delta_2$ vary linearly with $x$ over the first 70\% of the streamwise ($x$) domain. Information on vorticity is not available near the edge of the boundary layer in the latter 30\% of the $x$ domain because of the limited FOV in the wall-normal direction. It is presumed that the length scales also vary linearly in this part, and the linear fits have been extrapolated to cover the full $x$ range. This assumption will be tested later through the collapse of the profiles of the mean streamwise velocity and the Reynolds stresses, scaled using the extrapolated outer variables $U_e$ and $\delta_1$ following the scaling in \citet{kitsios2017direct}. If the wall-normal profiles of the turbulent statistics in the latter 30\% of the $x$ domain, scaled using the extrapolated outer variables, are consistent with the profiles from the first 70\% of the $x$ domain, this supports the assumption that the boundary layer grows linearly with the streamwise distance throughout the $x$ domain.
\begin{figure}[!t]
\begin{center}
\begin{tabular}{c}
\begin{overpic}[width=0.43\textwidth]{friction_velocity.pdf} \put(25,62){(a)} \end{overpic}\\
\begin{overpic}[width=0.43\textwidth]{tw.pdf} \put(25,62){(b)} \end{overpic}
\end{tabular}
\end{center}
\vspace*{-0.2in}\caption{Streamwise variation of the (a) friction velocity, $u_\tau$ and, (b) the wall-shear stress $\tau_w$.
\label{fig:ut_and_tw}}
\end{figure}
Since the present data of the strong APG-TBL have a high spatial resolution in the wall-normal direction, the wall shear stress and the friction velocity are estimated at every $x$ location from the mean streamwise velocity profile within the viscous sublayer. Figure \ref{fig:ut_and_tw} shows the streamwise variation of the measured friction velocity $u_\tau$ and the wall-shear stress $\tau_w$. The decay of these quantities with $x$ is weakly quadratic.
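A minimal sketch of such a sublayer fit is given below; it assumes nominal air properties and a sublayer limit of $y^+ < 5$, and iterates between the fitted slope and the sublayer extent. It is an illustration of the approach, not the exact fitting procedure used here.
\begin{verbatim}
import numpy as np

def wall_shear_from_sublayer(y, U, nu=1.5e-5, rho=1.2, yplus_max=5.0):
    # Fit U = (tau_w / mu) * y through the origin using only sublayer points.
    slope = np.polyfit(y[:3], U[:3], 1)[0]          # crude first guess
    for _ in range(3):                              # slope -> u_tau -> sublayer mask
        u_tau = np.sqrt(nu * slope)
        mask = y * u_tau / nu < yplus_max
        slope = np.sum(U[mask] * y[mask]) / np.sum(y[mask]**2)  # zero-intercept LSQ
    u_tau = np.sqrt(nu * slope)
    tau_w = rho * nu * slope
    return u_tau, tau_w
\end{verbatim}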
\begin{figure}
\begin{center}
\begin{tabular}{c}
\begin{overpic}[width=0.4\textwidth]{Cf.pdf} \put(25,62){(a)} \end{overpic} \\
\begin{overpic}[width=0.4\textwidth]{Beta.pdf} \put(21,43){(b)} \end{overpic} \\
\begin{overpic}[width=0.4\textwidth]{H.pdf} \put(23,62){(c)} \end{overpic} \\
\begin{overpic}[width=0.4\textwidth]{Beta_vs_h_x_.pdf} \put(27,42){(d)} \end{overpic} \\
\end{tabular}
\end{center}
\vspace*{-0.2in}\caption{Streamwise variation of (a) mean skin friction coefficient $C_f$, (b) Clauser pressure gradient parameter $\beta$, and (c) shape factor $H$. $H=2.35$ is the empirical value for an equilibrium APG-TBL reported by \citet{mellor1966equilibrium}. (d) shows $\beta$ as a function of the roof height $h$ within the measurement domain.
\label{fig:other_parameters}}
\end{figure}
The mean skin friction coefficient of a turbulent boundary layer is defined here as $C_f = \tau_w / (0.5 \rho U_e ^2) = u_\tau ^2 / (0.5 U_e ^2)$, with the local edge velocity $U_e$ used as the velocity scale. Figure \ref{fig:other_parameters}(a) shows the streamwise variation of $C_f$. It has an approximately constant value of \num{4.8e-4}, which is comparable to the constant value of $\num{5.7e-4}$ for an equilibrium turbulent boundary layer near separation reported by \citet{skaare1994turbulent}. As reported by \citet{dengel1990experimental}, $C_f$ may be as low as \num{3.5e-4} before reaching the separation point.
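As a consistency check, substituting the mid-FOV values of table \ref{tab:bl_parameters} into this definition gives
\begin{equation*}
C_f = \frac{2 u_\tau^2}{U_e^2} = \frac{2\,(0.084)^2}{(5.43)^2} \approx \num{4.8e-4},
\end{equation*}
in agreement with the value plotted in figure \ref{fig:other_parameters}(a).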
Figure \ref{fig:other_parameters}(b) shows that $\beta$ linearly increases from 25.88 to 34.14 in the measured $x$ domain. Its streamwise average of approximately 30 is higher than the $\beta \approx 20$ in the equilibrium region of the strong APG-TBL reported in \citet{skaare1994turbulent} and less than $\beta=39$ in the DNS of a strong APG-TBL reported in \citet{kitsios2017direct}.
Figure \ref{fig:other_parameters}(c) shows that the shape factor $H$ is approximately constant at about 2.45. This is close to the empirical value of $H = 2.35$ for an equilibrium APG-TBL reported by \citet{mellor1966equilibrium}, and is also in agreement with the $H$ of the strong APG-TBL of \citet{kitsios2017direct}. Figure \ref{fig:other_parameters}(d) presents $\beta$ as a function of the roof height $h$. As expected, $\beta$ increases approximately linearly with $h$ within the measurement domain.
\section{Conditions of self-similarity}
For an APG-TBL flow to be self-similar, the mean skin friction coefficient $C_f$ and the shape factor $H$ should remain constant along the streamwise direction. This is in contrast to a ZPG-TBL, in which both $C_f$ and $H$ decrease slowly in the downstream direction \citep{townsend1980structure}.
Much of the theoretical work to derive the conditions of self-similarity and develop velocity and length scales to collapse the profiles of turbulent statistics at different streamwise locations has been carried out by \citet{townsend1980structure, mellor1966equilibrium, durbin1992scaling, perry1995wall, castillo2004similarity}. The Reynolds-averaged Navier-Stokes (RANS) equations of continuity, streamwise momentum, and wall-normal momentum, along with some similarity ansatz, can be used to achieve conditions of self-similarity as first presented by \citet{so1993boundary} and summarised by \citet{kitsios2017direct}. Based on these conditions, one can determine that the following quantities must be independent of $x$ for a flow to be self-similar.
\begin{equation}
C_{uu} = R_{uu}/U_e^2
\end{equation}
\begin{equation}
C_{vv} = R_{vv}/U_e^2
\end{equation}
\begin{equation}
C_{uv} = R_{uv}/(U_e^2 \delta ^ \prime _1)
\end{equation}
\begin{equation}
C_\nu = \nu/(U_e \delta _1 \delta ' _1)
\end{equation}
\begin{equation}
\Lambda = - \delta_1 U '_e / (U_e \delta '_1) = \delta_1 (P') / (U_e^2 \delta'_1) = (U_p / U_e)^2 / \delta'_1
\label{eq:Lambda}
\end{equation}
\noindent where $\Lambda$ is a pressure gradient parameter defined by \citet{castillo2004similarity}; $R_{uu}$, $R_{vv}$ and $R_{uv}$ are functions determined at each $x$ location from the integrals of $\overline{u'u'}$, $\overline{v'v'}$ and $\overline{u'v'}$, respectively, from $\zeta=0$ to $\zeta=\delta /L_0$, where $L_0(x) \equiv \delta_1(x)U_e(x)/U_0(x)$; $P'$ is the streamwise pressure gradient; $\delta_1'$ is the streamwise gradient of the displacement thickness; and $ U_p = \sqrt{\delta_1 P'}$ is the pressure velocity.
\begin{figure*}
\begin{center}
\begin{overpic}[width=0.8\textwidth]{similarity_variables.pdf}
\put(9,60){(a)}
\put(59,60){(b)}
\put(9,30){(c)}
\put(59,30){(d)}
\end{overpic}
\end{center}
\vspace*{-0.2in}\caption{Streamwise distribution of the similarity coefficients $C_{uu}$, $C_{vv}$, $C_{uv}$, and $C_\nu$.
\label{fig:similarity_variables}}
\end{figure*}
If $C_{uu}$, $C_{vv}$, $C_{uv}$, and $C_\nu$ are constant along the $x$ direction, the profiles of $\langle uu \rangle$, $\langle vv \rangle$, $\langle uv \rangle$, and $\nu \partial^2 U / \partial y^2$, respectively, show self-similarity in both the inner and outer regions. However, if only $C_\nu$ is not independent of $x$, it implies that the scaling only applies to the outer region of the turbulent boundary layer \citep{kitsios2017direct}.
The streamwise profiles of the similarity coefficients $C_{uu}$, $C_{vv}$, $C_{uv}$, and $C_\nu$ are shown in figure \ref{fig:similarity_variables} and their streamwise averages as well as standard deviations are given in table \ref{tab:similarity_variables_mean_SD}. The standard deviations of these coefficients over the measured streamwise domain size of $3.3\delta$ are comparable to corresponding standard deviations reported in \citet{kitsios2017direct}.
\begin{table}[!b]
\begin{center}
\caption{Streamwise average and standard deviation of $\beta$ and the similarity coefficients within the $x$ domain.}
\label{tab:similarity_variables_mean_SD}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccccc}
\hline \hline\noalign{\medskip}
& $\beta$ & $C_{uu}$ & $C_{vv}$ & $C_{uv}$ & $C_\nu$ \\
\noalign{\smallskip}\hline\noalign{\medskip}
Average & 30.134 & \num{0.001643} & \num{0.0005647} & \num{0.01488} & \num{0.00165} \\
Standard deviation & 2.352 & \num{0.0001911} & \num{7.196e-05} & \num{0.001827} &\num{7.256e-05} \\
\noalign{\smallskip}\hline\noalign{\medskip}
\end{tabular}}
\end{center}
\end{table}
The integration of $\Lambda \delta'_1/\delta_1 = -U'_e/U_e$, obtained by rearranging equation \ref{eq:Lambda}, leads to the relationship $U_e \propto \delta_1^{-\Lambda}$. According to \citet{kitsios2017direct}, if a boundary layer grows linearly, as in the present case, then $U_e \propto (K x) ^{-\Lambda} \propto x^ {-\Lambda} \equiv x^m$ with $m = -\Lambda$. Figure \ref{fig:Lambda} shows the streamwise variation of $\Lambda$. In the middle of the FOV, the exponent $m$ is calculated using $\delta'_1 = 0.02651$, $P' = 2.8613$, $U_e = 5.4334$, and $\delta_1 = 0.06204$. The calculated value of $m$ is $-0.23$, which is the value expected for an APG-TBL with incipient separation \citep{kitsios2017direct}.
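For reference, substituting the mid-FOV values quoted above into equation \ref{eq:Lambda} gives
\begin{equation*}
\Lambda = \frac{\delta_1 P'}{U_e^2\,\delta'_1} = \frac{0.06204 \times 2.8613}{(5.4334)^2 \times 0.02651} \approx 0.23,
\qquad m = -\Lambda \approx -0.23 .
\end{equation*}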
\begin{figure}
\begin{center}
\begin{overpic}[width=0.45\textwidth]{Lambda.pdf} \end{overpic}
\end{center}
\vspace*{-0.2in}\caption{Streamwise variation of $\Lambda$.
\label{fig:Lambda}}
\end{figure}
\section{First- and second-order statistics}
\label{sec:first_second_order_stats}
Figure \ref{fig:MVP_Re_stresses_inner}(a) shows the profiles of the mean streamwise velocity scaled with the inner variables $u_\tau$ and $\nu / u_\tau$ at several equidistant streamwise positions. Here, a very good collapse is observed from the wall to the log layer. The slight spread in the wake region is the effect of the increasing Reynolds number along the streamwise direction. Figure \ref{fig:MVP_Re_stresses_inner}(b) shows the inner-scaled Reynolds stress profiles at the same streamwise locations as in figure \ref{fig:MVP_Re_stresses_inner}(a). The collapse in the inner region and the effect of the increasing Reynolds number in the outer region observed in the Reynolds stress profiles are similar to those of the mean velocity profiles. \textcolor{black}{This shows that the outer-layer statistics do not scale with viscous units in a strong APG-TBL, even if it is self-similar.}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\begin{overpic}[height=2.6in,width=0.45\textwidth]{MVP_inner.pdf} \put(60,70){(a)} \end{overpic} \\
\begin{overpic}[height=2.6in,width=0.475\textwidth]{Re_stresses_inner.pdf} \put(30,65){(b)} \end{overpic} \\
\end{tabular}
\end{center}
\vspace*{-0.2in}\caption{(a) Inner-scaled mean streamwise velocity profiles at several equidistant streamwise locations, where $x_m = x - x_o$ and $\delta$ is the boundary layer thickness at the middle of the FOV. (b) Reynolds stress profiles at the same locations as in (a).
\label{fig:MVP_Re_stresses_inner}}
\end{figure}
\textcolor{black}{To assess the self-similar nature of strong APG-TBL, the profiles of the mean streamwise velocity at several equidistant streamwise locations, scaled with the outer variables, $U_e$ and $\delta_1$, have been illustrated in figure \ref{fig:MVP_Re_stresses_outer}(a). Here, a very good collapse is observed in the whole boundary layer. This demonstrates the self-similar nature of the APG-TBL flow within the measurement domain.}
Reynolds stresses at the same streamwise locations as in \ref{fig:MVP_Re_stresses_outer}(a), scaled with the outer variables, are shown in figure \ref{fig:MVP_Re_stresses_outer}(b). The outer peaks in all the profiles for all Reynolds stresses are located slightly past $y=\delta_1$. These profiles also show a good collapse for each Reynolds stress, except in the region of the maximum shear stress, where they are slightly distant from each other. This is in agreement with the findings of \citet{kitsios2017direct} who also observed a spread among the peaks of the Reynolds stress profiles at several equally spaced $x$ locations.
Since the profiles of the mean velocity and the Reynolds stresses in the last 30\% of the DOI, scaled using the extrapolated length scale $\delta_1$ and the velocity scale $U_e$, are consistent with the profiles from the first 70\% of the DOI, the assumption that the boundary layer grows linearly over the whole DOI is supported, and the use of the extrapolated length and velocity scales is appropriate.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\begin{overpic}[height=2.6in,width=0.45\textwidth]{MVP_outer.pdf} \put(20,70){(a)} \end{overpic} \\
\begin{overpic}[height=2.6in,width=0.475\textwidth]{Re_stresses_outer.pdf} \put(30,65){(b)} \end{overpic} \\
\end{tabular}
\end{center}
\vspace*{-0.2in}\caption{(a) Mean streamwise velocity profiles, and (b) Reynolds stress profiles at the same streamwise locations as in figure \ref{fig:MVP_Re_stresses_inner}(a), scaled with the outer variables.
\label{fig:MVP_Re_stresses_outer}}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\begin{overpic}[height=2.05in,width=0.4\textwidth]{MVP_inner_allTBL2.pdf} \put(80,25){(a)} \end{overpic} \\
\begin{overpic}[height=2.05in,width=0.4\textwidth]{Mean_Shear_Rate_allTBL2.pdf} \put(80,63){(b)} \end{overpic} \\
\begin{overpic}[height=2.05in,width=0.4\textwidth]{Mean_Shear_Rate_outer_allTBL2.pdf} \put(80,63){(c)}\end{overpic} \\
\begin{overpic}[height=2.05in,width=0.4\textwidth]{Mean_Defect_velocity_outer_allTBL.pdf} \put(80,63){(d)} \end{overpic} \\
\end{tabular}
\end{center}
\vspace*{-0.2in}\caption{(a) Mean streamwise velocity of all TBLs scaled with the viscous units, (b) diagnostic function $\Xi$ scaled with the viscous units, (c) mean shear rate scaled with the outer variables, (d) velocity defect profiles. All profiles are in the middle of the corresponding FOVs except the defect profiles of the strong APG-TBL, which are shown by the dash-dotted lines at six equidistant streamwise locations.
\label{fig:U_MeanshearRate_defectU_inner_allTBL}}
\end{figure}
To assess the effect of the APG on the turbulent statistics, the profiles of the strong APG-TBL at the streamwise middle of the FOV are compared with the corresponding profiles of the ZPG-TBL and the mild APG-TBL. Figure \ref{fig:U_MeanshearRate_defectU_inner_allTBL}(a) shows the mean streamwise velocity for all TBLs scaled with the inner variables. In the ZPG-TBL and the mild APG-TBL, the outer-layer measurements are consistent with the inner-layer HSR measurements. The profile of the strong APG-TBL of \citet{skaare1994turbulent} ($\beta \approx 20$) is also included (referred to as the SK profile). As the pressure gradient increases, the extent of the wake region increases and the width of the log-layer decreases. These measurements follow the same trend as the $U^+$ vs. $y^+$ profiles from the DNS of the four TBLs with $\beta$ values ranging from 0 to 9 reported by \citet{lee2017large}. However, their profiles in the log layer are shifted downward, with the degree of the shift increasing with increasing APG. This is not observed in the present TBLs, including the SK profile, which may be due to the experimental uncertainty in the measurement of the boundary layer parameters of the current TBLs.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\begin{overpic}[width=0.4\textwidth]{MVP_outer_allTBL2.pdf} \put(22,63){(a)} \end{overpic} \\
\begin{overpic}[width=0.4\textwidth]{uu_outer_allTBL3.pdf} \put(24,63){(b)} \end{overpic} \\
\begin{overpic}[width=0.4\textwidth]{vv_outer_allTBL3.pdf} \put(22,60){(c)} \end{overpic} \\
\begin{overpic}[width=0.4\textwidth]{uv_outer_allTBL3.pdf} \put(22,30){(d)} \end{overpic} \\
\end{tabular}
\end{center}
\vspace*{-0.2in}\caption{Mean streamwise velocity and the Reynolds stress profiles, scaled with the outer variables, for all TBLs at the middle of the FOVs.
\label{fig:stats_comparison_allTBL}}
\end{figure}
Profiles of the diagnostic function $\Xi = y^+ (dU^+ / dy^+)$ for all TBLs are shown in figure \ref{fig:U_MeanshearRate_defectU_inner_allTBL}(b). Near the wall, $\Xi$ is observed to be almost invariant with the pressure gradient because of the law of the wall. The peaks in the outer region are caused by the inflections in the wake of the velocity profiles. The $\Xi$ profiles also indicate that with an increase in $\beta$, the log-region slope ($1/\kappa$) increases and the log region moves closer to the wall. These findings are also consistent with those reported in \citet{lee2017large}. Figure \ref{fig:U_MeanshearRate_defectU_inner_allTBL}(c) shows the mean shear rate normalised by the outer variables. Below $y = 0.3 \delta_1$, the strength of the mean shear rate is reduced with increasing APG; further out, the opposite is the case. Figure \ref{fig:U_MeanshearRate_defectU_inner_allTBL}(d) shows the mean velocity defect profiles of all TBLs, scaled with the outer variables. Here, the profiles of the strong APG-TBL at six equally spaced streamwise locations (as in figure \ref{fig:MVP_Re_stresses_inner}(a)) are shown as dash-dotted lines. \textcolor{black}{The outer scaling shows a remarkable collapse of the defect profiles, demonstrating the self-similar nature of the flow. The similarity among these profiles is noticeably higher than that presented by \citet{lee2017large} for various APG-TBLs using the local friction velocity and the defect thickness $\Delta = \delta_1 (2/C_f)^{1/2}$ \citep{skote1998direct} for scaling. Near the wall, the defect velocity normalised with the local $U_e$ has larger values for larger $\beta$. The ZPG-TBL has the longest wall-normal defect region, which decreases in extent with increasing APG.}
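For completeness, the diagnostic function used above is straightforward to evaluate from an inner-scaled velocity profile; a minimal sketch (the function name is ours) is
\begin{verbatim}
import numpy as np

def diagnostic_function(y_plus, U_plus):
    # Xi = y+ dU+/dy+; in a log region U+ = (1/kappa) ln(y+) + B,
    # so Xi plateaus at 1/kappa.
    return y_plus * np.gradient(U_plus, y_plus)
\end{verbatim}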
In figure \ref{fig:stats_comparison_allTBL}(a), the mean velocity profiles of all TBLs scaled with the outer variables $\delta_1$ and $U_e$ are shown. Similarly, the Reynolds stress profiles of these data sets, also scaled with the outer variables, are shown in figures \ref{fig:stats_comparison_allTBL}(b-d). The DNS profiles of the strong APG-TBL ($\beta = 39$) of \citet{kitsios2017direct} are included for comparison. The present profiles of the mean streamwise velocity for different $\beta$ are qualitatively similar to the corresponding ZPG, mild APG, and strong APG profiles reported in \citet{kitsios2017direct}. However, since the $\beta$ values of their mild and strong APG cases do not exactly match the $\beta$ values of the current cases, quantitative agreement between the compared profiles is not expected.
The $\overline{u'u'}$ profile of the ZPG-TBL has only an inner peak and no outer peak. The profile of the mild APG-TBL has an outer peak that is as strong as the inner peak and is located around $y=1.3\delta_1$. As the APG increases, the outer peak becomes stronger and moves closer to $y=\delta_1$, while the inner peak becomes progressively weaker. Similarly, the outer peaks in the $\overline{v' v'}$ and $-\overline{u' v'}$ profiles also become stronger with increasing APG. This indicates the movement of the turbulent activity away from the wall and a reduced area of influence of the near-wall length and velocity scales with an increase in APG. The $\overline{v' v'}$ and $-\overline{u' v'}$ profiles of the ZPG-TBL exhibit plateaus in the outer region.
Interestingly, at the wall-normal position of $y \approx 0.15 \delta_1$, the $\overline{u'u'}$ and $-\overline{u'v'}$ profiles exhibit similar values among all TBLs when scaled with the outer variables.
\subsection{Quadrant decomposition of the Reynolds shear stress}
\label{sec:overall_quadrant_analysis}
Ejections and sweeps are studied using the quadrant splitting method proposed by \citet{wallace1972wall} who used it in a turbulent channel flow at $Re_\tau = 187$. The four quadrants are: $Q1(+u', +v')$, $Q2 (-u', +v')$ (ejections), $Q3 (-u', -v')$, and $Q4 (+u', -v')$ (sweeps). This segmentation allows the study of the contributions of ejections and sweeps to Reynolds shear stress across the boundary layer.
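A minimal sketch of this quadrant splitting, applied to the velocity fluctuation samples at a single $(x, y)$ point, is given below; the function name and the normalisation by the overall $\overline{u'v'}$ (so that the four contributions sum to one) follow the description above, but the code is an illustration rather than the analysis code used here.
\begin{verbatim}
import numpy as np

def quadrant_contributions(u_p, v_p):
    # u_p, v_p: fluctuating velocity samples at one (x, y) point
    uv = u_p * v_p
    total = np.mean(uv)                        # overall Reynolds shear stress
    quads = {"Q1": (u_p > 0) & (v_p > 0),
             "Q2": (u_p < 0) & (v_p > 0),      # ejections
             "Q3": (u_p < 0) & (v_p < 0),
             "Q4": (u_p > 0) & (v_p < 0)}      # sweeps
    return {q: np.sum(uv[m]) / (uv.size * total) for q, m in quads.items()}
\end{verbatim}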
Figure \ref{fig:quadrant_analysis} presents the contribution of the quadrants to the Reynolds shear stress $\overline{{u'v'}}$, where $\overline{u'v'}_Q$ is the contribution of an individual quadrant $Q$, normalised by the overall $\overline{{u'v'}}$. As expected, Q2 and Q4 events have positive contributions, while Q1 and Q3 events have negative contributions from the wall to the boundary layer edge in all TBLs. In the ZPG-TBL, very close to the wall, Q4 contributes considerably more than Q2. Beyond the cross-over point at $y\approx 0.04\delta_1$, the contribution of Q2 is larger than that of Q4. Above $y = 0.2\delta_1$, the contributions of both ejections and sweeps are nearly constant. In the same region, Q1 and Q3 have similar contributions, which are nearly 50\% of the Q4 contribution but with the opposite sign. The distribution of the quadrant contributions is qualitatively similar to the quadrant contributions in a turbulent channel flow presented in \citet{wallace1972wall}.
In the mild APG-TBL, contributions of Q2 and Q3 are similar to their counterparts in the ZPG-TBL, but are up to 80\% more intense near the wall. Contrary to the ZPG-TBL, Q4 contributes more than Q2 from the wall to $y \approx \delta_1$ in the mild APG-TBL. Similarly, Q1 contributes more than Q3 in the same region.
There is no significant increase in the Q2 and Q3 contributions due to the strong APG, but the Q4 and Q1 contributions are stronger than in the mild APG-TBL from the wall to $y \approx \delta_1$. In this region, the contributions of sweeps are dominant over the ejections. These observations are consistent with the observations of \citet{skaare1994turbulent} who report that the sweeps are the dominant motions in a strong APG-TBL, whereas in the ZPG-TBL both the ejections and sweeps are equally important when analysed using the quadrant-splitting of \citet{wallace1972wall}.
\textcolor{black}{While the contributions of both Q1 and Q4 are amplified under the APG, the increase in the Q4 contribution is greater than that in Q1. This demonstrates that when an APG is imposed, the outer peak in the Reynolds shear stress profile is formed due to the energisation of sweep motions of high-momentum fluid in the outer region of the boundary layer.}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{uv_quad_split_allTBL2.pdf}
\end{center}
\vspace*{-0.2in}\caption{Quadrant contributions to the Reynolds shear stress in (a) ZPG-TBL, (b) mild APG-TBL, and (c) strong APG-TBL.
\label{fig:quadrant_analysis}}
\end{figure*}
\section{Turbulence production}
\label{sec:turbulence_production}
The terms that significantly contribute to the turbulence production in the current TBLs are $-\overline{u^\prime v^\prime} \partial U/\partial y$ and $\overline{{v^\prime}^2} \partial U/\partial y$. These terms appear in the transport equations for $\overline{{u^\prime}^2}$ and $-\overline{u^\prime v^\prime}$, respectively. Figure \ref{fig:turbulence_production} shows the profiles of these terms scaled with $U_e$ and $\delta_1$. Because the turbulence production terms are prominent in the near-wall region of the ZPG-TBL and the mild APG-TBL, only the near-wall HSR measurements of these TBLs have been included. In both TBLs, the term $-\overline{u^\prime v^\prime} \partial U/\partial y$ is the dominant contributor, while $\overline{{v^\prime}^2} \partial U/\partial y$ contributes about 50\% of the other term. Furthermore, the maximum values of these terms are about 3\% and 11\% stronger in the mild APG-TBL than in the ZPG-TBL, respectively. Also, the peaks in the APG-TBL are closer to the wall than in the ZPG-TBL. No significant outer peaks are found in either TBL within the range of the HSR data. However, the contribution of $\overline{{v^\prime}^2} \partial U/\partial y$ is greater than that of $-\overline{u^\prime v^\prime} \partial U/\partial y$ in the outer region.
\begin{figure}[!b]
\begin{center}
\begin{overpic}[width=0.45\textwidth]{turbulence_production.pdf} \end{overpic}
\end{center}
\vspace*{-0.2in}\caption{Profiles of the dominant turbulence production terms $-\overline{u^\prime v^\prime} \partial U/\partial y$ and $\overline{{v^\prime}^2} \partial U/\partial y$. Black symbols: ZPG-TBL, blue symbols: mild APG-TBL, and red symbols: strong APG-TBL.
\label{fig:turbulence_production}}
\end{figure}
\textcolor{black}{In contrast, the strong APG-TBL shows weaker inner peaks and stronger outer peaks, demonstrating that with a strong increase in APG, the turbulence production is shifted from the near-wall region towards the outer region.}
Also, the overall contribution of $-\overline{u^\prime v^\prime} \partial U/\partial y$ is reduced and that of $\overline{{v^\prime}^2} \partial U/\partial y$ is enhanced. Compared to the peak in the $-\overline{u^\prime v^\prime} \partial U/\partial y$ profile of the ZPG-TBL, the weak inner peaks are about 75\% smaller, and the stronger outer peaks are 39\% and 22\% smaller for the profiles of $-\overline{u^\prime v^\prime} \partial U/\partial y$ and $\overline{{v^\prime}^2} \partial U/\partial y$, respectively. \textcolor{black}{For the strong APG-TBL, the wall-normal location of the outer peaks in the turbulence production terms matches the outer peak locations in the Reynolds shear stress profiles and the outer inflection point in the velocity profile (shown in figure \ref{fig:stats_comparison_allTBL}(a)).} This is consistent with the observation of \citet{skaare1994turbulent} in a strong APG-TBL at $\beta \approx 20$, who also found two distinct peaks in the inner and outer regions for each of the above-plotted terms. They could not resolve the locations of the inner peaks because their hotwire data did not have sufficient spatial resolution in the wall-normal direction. However, the outer peaks were found at the locations of the maximum stress values, {\em i.e.} $y/\delta = 0.45$, which corresponds to $y = 1.69\delta_1$. As the APG increases, the outer peaks move closer to the displacement thickness height. Since the current TBL has a stronger APG ($\beta \approx 30$), its peak locations are at $y = 1.3\delta_1$. This shows the consistency of the current measurements with the observations made in previous studies.
\section{Third-order moments}
Figure \ref{fig:third_order_moments} shows the outer-scaled profiles of the third-order moments: $\overline{{u'}^3}$, $\overline{{u'}^2 v'}$, $\overline{{v'}^3}$, and $-\overline{u' {v'}^2}$. The first term represents the turbulent transport of $\overline{{u'}^2}$ in the streamwise direction. The following three quantities represent the turbulent transport of $\overline{{u'}^2}$ and $\overline{{v'}^2}$, and the turbulent work of $\overline{u'v'}$, along the wall-normal direction, respectively. For the strong APG-TBL, the third-order statistics are shown at the middle of the FOV. The figure also includes the strong APG-TBL profiles at several equidistant streamwise locations within the FOV, shown by the dot-dashed lines. These profiles collapse with the outer scaling, except at the maxima, where their distribution is similar to the distributions at the peak locations in the corresponding Reynolds stress profiles. The maximum magnitude of $\overline{{u'}^3}$ is 3.4, 6.4, and 4.8 times larger than the maximum magnitudes of $\overline{{u'}^2 v'}$, $\overline{{v'}^3}$, and $-\overline{u' {v'}^2}$, respectively. The locations of the zero-crossings of the third-order moment profiles are approximately the same as the locations of the outer peaks in the corresponding Reynolds stress profiles. Below these locations, energy is transported downward from the Reynolds stresses $\overline{{u'}^2}$, $\overline{{v'}^2}$, and $-\overline{u'v'}$; beyond these locations, the energy is diffused towards the boundary layer edge. These observations are consistent with the findings of \citet{skaare1994turbulent} for a strong APG-TBL at $\beta \approx 20$ and $Re_{\delta_2} = 44,420$.
\begin{figure}
\begin{center}
\begin{overpic}[width=0.45\textwidth]{third_order_moments_allTBL.pdf} \end{overpic}
\end{center}
\vspace*{-0.2in}\caption{Third-order moments scaled with the outer variables.
\label{fig:third_order_moments}}
\end{figure}
The zero crossings of the third-order moments in the mild APG-TBL are at $y \approx \delta_1$. There is an inner peak in the $\overline{u'^3}$ profile that is almost as strong as the inner peak in the strong APG-TBL, but slightly closer to the wall. The outer negative peak, however, is about five times weaker than in the strong APG-TBL and lies further away from the wall. There is no outer positive peak visible in the mild APG-TBL. In the profiles of $\overline{{u'}^2 v'}$, $\overline{{v'}^3}$, and $-\overline{u' {v'}^2}$ as well, the outer positive peaks are about five times weaker than in the strong APG-TBL. \textcolor{black}{This demonstrates that a decrease in the APG reduces the energy diffusion towards the boundary layer edge from the locations of the outer peaks in the Reynolds stress profiles, and this phenomenon is shifted closer to the boundary layer edge.}
In the ZPG-TBL, the inner positive peak in the $\overline{{u'}^3}$ profile is stronger than in both APG-TBLs and its zero crossing is at the location of the inner peak in the $\overline{{u'}^2}$ profile. Beyond this point, $\overline{{u'}^3}$ fluctuates between positive and negative values. Contrary to the APG-TBLs, the profiles of $\overline{{u'}^2 v'}$, $\overline{{v'}^3}$, and $-\overline{u' {v'}^2}$ in the ZPG-TBL have positive values beyond their zero crossings in the inner region. The outer negative peak in the $\overline{{u'}^3}$ profile and the outer positive peaks in the profiles of $\overline{{u'}^2 v'}$, $\overline{{v'}^3}$, and $-\overline{u' {v'}^2}$ are about 20 times weaker than in the strong APG-TBL and are found further from the wall than in the mild APG-TBL.
Another interesting observation is that the wall-normal profiles of the third-order moments of all TBLs meet at a single point, $y \approx 1.2 \delta_1$. The significance of this point is yet to be explored.
\section{Skewness and Flatness}
\begin{figure*}
\begin{center}
\begin{overpic}[width=0.8\textwidth]{Skenwss_flatness_allTBL_outer.pdf} \end{overpic}
\end{center}
\vspace*{-0.2in}\caption{Skewness ($S_u$, $S_v$) and flatness ($F_u$, $F_v$) profiles for the ZPG-TBL and the strong APG-TBL.}
\label{fig:skewness_flatness}
\end{figure*}
Skewness $S$ is a measure of the asymmetry of the probability density function (PDF) of a variable, whereas flatness $F$ is a measure of the weight of the tails of the PDF, whose deviation from the Gaussian value of 3 indicates a non-Gaussian distribution. For a variable $p$, these quantities are defined as:
\begin{equation}
S_p = \frac{\overline{p^3}}{(\overline{p^2})^{3/2}} \,\, , \,\,\,\,\,
F_p = \frac{\overline{p^4}}{(\overline{p^2})^2}
\end{equation}
$S_p = 0$ is expected for a symmetric PDF of $p$. In the logarithmic region of wall-bounded turbulent flows, the velocity fluctuations ($u'$ and $v'$) often have $S \approx 0$ and $F \approx 3$, consistent with a Gaussian distribution \citep{lin2007assessment}.
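A minimal sketch of how these factors are evaluated from the velocity fluctuation samples at one wall-normal location is given below (the function name is ours).
\begin{verbatim}
import numpy as np

def skewness_flatness(p):
    # p: samples of a fluctuating quantity at one (x, y) point
    p = p - np.mean(p)                 # work with zero-mean fluctuations
    var = np.mean(p**2)
    S = np.mean(p**3) / var**1.5       # skewness
    F = np.mean(p**4) / var**2         # flatness (Gaussian reference: S=0, F=3)
    return S, F
\end{verbatim}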
Figure \ref{fig:skewness_flatness} shows the skewness and flatness factors of the streamwise and wall-normal velocity fluctuations in the ZPG-TBL and the strong APG-TBL. These factors exhibit a plateau in the logarithmic region in both TBLs. In this region, the ZPG-TBL has $S_u \approx 0$ and $F_u \approx 3$, which shows that $u'$ has a nearly Gaussian distribution. $S_u > 0$ near the wall, while it has a negative peak in the outer layer. \textcolor{black}{This shows the asymmetric distributions of $u'$ in the near-wall and outer layers. For $S_v$, this trend is the opposite, but the outer peak is not as strong as in $S_u$. In the ZPG-TBL, the first zero crossing in $S_u$ is around $0.04 \delta_1$ and in $S_v$ around $0.08\delta_1$.}
\begin{figure*}
\begin{center}
\begin{overpic}[width=0.8\textwidth]{Skenwss_vs_flatness_allTBL.pdf} \put(10,12){(a)} \put(61,12){(b)} \end{overpic}
\end{center}
\vspace*{-0.2in}\caption{Skewness ($S_u$, $S_v$) vs. flatness ($F_u$, $F_v$) profiles of the ZPG-TBL and the strong APG-TBL. The black and red stars show the locations of the maximum values in the corresponding Reynolds stress profiles of the ZPG-TBL and the strong APG-TBL, respectively. The dashed orange line represents the formulation $F_u = 2.65 + 1.45 S_u^2$ reported by \citet{orlu2016high} as a curve fit to their $F_u$ vs. $S_u$ profile of a ZPG-TBL.
\label{fig:skewness_vs_flatness}}
\end{figure*}
In the strong APG-TBL, the near-wall distribution of $S_u$ is similar to, but up to 15\% larger than, that of the ZPG-TBL, while the negative peak in the outer layer is almost two times stronger. This means that in the outer region, the strong APG-TBL has streamwise fluctuations more skewed towards the smaller values than the ZPG-TBL. The plateau in the logarithmic region is at $S_u\approx 0.4$ and the zero crossing lies far from the wall, in the outer region. The amplitude of $S_u$ in the logarithmic region increases with increasing APG, which pushes the zero-crossing away from the wall. The amplitude of $F_u$ is also observed to increase with APG. Similar distributions of the $S_u$ and $F_u$ profiles for the ZPG- and APG-TBLs are reported in \citet{mathis2015inner} and \citet{vila2020experimental}. This shows that the energy of large-scale motions in the log-layer increases with increasing APG.
\citet{hutchins2007large} showed that the large-scale structures of the outer region modulate the near-wall small scales when they superimpose on the near-wall region in turbulent boundary layers. \citet{mathis2009large} quantified this modulation by applying the Hilbert transform to the small-scale components of the streamwise velocity fluctuations. They found that the wall-normal profile of the amplitude modulation (AM) correlation coefficient between the large scales of the outer region and the envelope of the near-wall small scale has a strong resemblance to the skewness profile of $u'$. \citet{schlatter2010quantifying} and \citet{mathis2011relationship} assessed this relationship and showed that the Reynolds number trend in the skewness profiles of $u'$ is strongly related to the AM effect. Later, several studies used the dominant components of the skewness term in the study of the amplitude modulation of small scales by the large-scale structures \citep{bernardini2011inner,eitel2014simulation}. The skewness profiles in the current study demonstrate that the amplitude modulation of the near-wall small scales increases with increasing APG. This has also been indicated by \citet{mathis2009large,bernardini2011inner,eitel2014simulation}.
In the log-layer of the ZPG-TBL, $S_v \approx 0$ and $F_v \approx 3$, which shows that, similar to $u'$, the PDF of $v'$ also has a nearly Gaussian distribution in this region. In the strong APG-TBL, $S_v$ has a predominantly negative distribution from the wall to the outer region. The outer-layer peak in the strong APG-TBL case is found around $y=2.4\delta_1$, compared to $y=6.4\delta_1$ in the ZPG-TBL case.
$F_v$ in the strong APG-TBL shows a larger inner peak than in the ZPG-TBL and a relatively smaller outer peak. In the log layer, $F_v$ in the strong APG-TBL is greater than in the ZPG-TBL. \citet{skaare1994turbulent} reported that $F_v$ closely follows $F_u$ in their strong APG-TBL ($\beta \approx 20$), which is not the case in the current strong APG-TBL ($\beta \approx 30$). Nevertheless, $F_u$ is slightly below $F_v$, as also observed by \citet{skaare1994turbulent}, indicating that $u'$ has a narrower range than $v'$. However, in the wake region, the peak in $F_u$ is almost 10 times stronger than in $F_v$.
The correlation between the zero crossing of the skewness, the minimum flatness and the locations of the maxima in the corresponding Reynolds stress profiles can be visualised by plotting the flatness as a function of the skewness, as shown in figure \ref{fig:skewness_vs_flatness}. In $F_u$ vs. $S_u$, there is a good collapse between the ZPG-TBL and the strong APG-TBL for $S_u<0$, while for $S_u>0$ (near-wall region) the profiles are slightly distant from each other. In $F_v$ vs. $S_v$, the ZPG-TBL and the strong APG-TBL profiles collapse in the wake region, which is not the case near the wall. The dashed orange line is a representation of the parabola $F_u = 2.65 + 1.45 S_u^2$, which was reported in \citet{orlu2016high} as the best fit to their $F_u$ vs. $S_u$ profile for the ZPG-TBL. This also fits the current ZPG-TBL profile in the range of $-1<S_u<+1$.
The black and red star markers show the locations of the maximum values in the corresponding Reynolds stress profiles for the ZPG-TBL and the strong APG-TBL, respectively. The collapse among the location of the peak in the Reynolds streamwise stress and the locations of zero skewness and minimum flatness was previously observed by \citet{eitel2014simulation} for a ZPG-TBL and later confirmed for a ZPG-TBL by \citet{orlu2016high} and for several mild APG-TBLs by \citet{vila2020experimental}. A similar collapse is observed in the current ZPG-TBL and strong APG-TBL for the streamwise velocity fluctuations at $S_u \sim 0$ and $F_{u_{min}} \sim 3$. \textcolor{black}{This shows that at the location of the outer peak in the Reynolds streamwise stress, the streamwise velocity fluctuations have a nearly Gaussian distribution and are not skewed towards larger or smaller values. Interestingly, the wall-normal velocity fluctuations show a similar collapse among the locations of the peak in the Reynolds stress, the near-zero skewness and the minimum flatness.}
\section{Decomposition of mean skin friction}
In order to apply various flow control techniques to an APG-TBL flow to prevent or delay flow separation, the contributions of coherent structures to the skin friction distribution, and hence drag, need to be understood. \citet{fukagata2002contribution} proposed a method to decompose mean skin friction $C_f$ into three components (hereby referred to as FIK decomposition).
The three components of the FIK decomposition are given in equation \ref{eq:FIK}.
\begin{equation}
\label{eq:FIK}
\begin{gathered}
C_{f_{FIK}} = \underbrace{ \frac{4(1-\delta_1/\delta_\Omega)}{Re_{\delta_1}} }_{C_{f_1}} \,\,\,+\,\,\,
\underbrace{ 4 \int_{0}^{1} \frac{- \overline{u' v'}}{U_e^2} \bigg(1 - \frac{y}{\delta_\Omega} \bigg) d\bigg(\frac{y}{\delta_\Omega} \bigg)}_{C_{f_2}} \,\,\,+ \\
\underbrace{ 2 \int_{0}^{1} - \bigg(1 - \frac{y}{\delta_\Omega} \bigg)^2 \bigg(\frac{1}{\rho} \frac{\partial \overline{P}}{\partial x} + \overline{I_x} + \frac{\partial U}{\partial t} \bigg) d\bigg(\frac{y}{\delta_\Omega} \bigg)}_{C_{f_3}}
\end{gathered}
\end{equation}
\noindent where $\delta_\Omega$ represents the wall-normal distance at which the mean spanwise
vorticity $\Omega_z$ is 0.2\% of the mean vorticity at the wall and
$$\overline{I_x} = \partial U^2 / \partial x \,\, + \partial(UV)/\partial y \,\, - \nu \partial^2 U/\partial x^2 \,\, + \partial(\overline{u'u'})/\partial x \, .$$
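A sketch of how the first two FIK terms can be evaluated from a single wall-normal profile, using the edge velocity as the velocity scale following equation \ref{eq:FIK}, is given below; the third term requires streamwise and temporal derivatives and is omitted here, and the function name, inputs and nominal viscosity are ours.
\begin{verbatim}
import numpy as np

def fik_first_two_terms(y, uv, Ue, delta1, delta_Omega, nu=1.5e-5):
    # y: wall-normal coordinates, uv: Reynolds shear stress u'v',
    # Ue: edge velocity, delta1: displacement thickness,
    # delta_Omega: vorticity-based boundary layer thickness
    Re_delta1 = Ue * delta1 / nu
    eta = y / delta_Omega                       # integration variable y/delta_Omega
    mask = eta <= 1.0
    Cf1 = 4.0 * (1.0 - delta1 / delta_Omega) / Re_delta1
    Cf2 = 4.0 * np.trapz((-uv[mask] / Ue**2) * (1.0 - eta[mask]), eta[mask])
    return Cf1, Cf2
\end{verbatim}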
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{RD_decomposition_premultiplied_integrand_filter_check.pdf}
\end{center}
\vspace*{-0.2in}\caption{Premultiplied integrands of (a) $C_{f_b}$ and (b) $C_{f_c}$. The red lines represent the profiles computed using the Gaussian filtered $\overline{u'v'}$ and $U$ in the wall-normal direction, whereas the grey lines represent the profiles computed without any filtering.
\label{fig:RD_decomposition_premultiplied_filter_check}}
\end{figure*}
The FIK decomposition is based on the contribution from the Reynolds shear stress, where the stress is weighted by a linear ($1-y/\delta$) function. The linear nature of this weighting has no simple explanation in terms of physical processes \citep{renard2016theoretical}. The identity is obtained by a complex derivation which involves three successive integrations (by parts) of a momentum budget in a wall reference frame. In this decomposition, the role of the wake region in skin friction generation is larger than that of the log-layer \citep{renard2016theoretical}.
More recently, a theoretical decomposition of mean skin friction has been devised by \citet{renard2016theoretical}. It is referred to here as the RD decomposition and its components as the RD components. The RD decomposition is based on the turbulent-kinetic-energy budget and is calculated by a single integration of the local energy budget over all wall-normal distances. This technique incorporates the turbulent fluctuations even from above the boundary layer edge ($y>\delta$) and suggests that $C_f$ at high Reynolds numbers is dominated by turbulent kinetic energy production in the outer region, which is consistent with the well-established importance of the logarithmic region. Flow control strategies based on the RD decomposition would therefore target the turbulent kinetic energy, rather than the Reynolds shear stress targeted on the basis of the FIK decomposition \citep{renard2016theoretical}. The RD decomposition can provide further insight into the role of the logarithmic region in the generation of mean skin friction in high-Reynolds-number APG-TBL flows, for flow control strategies that modify the turbulent dynamics in the logarithmic layer.
The RD decomposition is presented as
\begin{equation}
\begin{gathered}
C_{f_{RD}} = \underbrace{\frac{2}{U_e ^3} \int_{0}^{\infty} \nu \bigg(\frac{\partial U}{\partial y}\bigg) ^2 dy }_{C_{f_a}} \,\,\, + \,\,\,
\underbrace{\frac{2}{U_e ^3} \int_{0}^{\infty} - \overline{u' v'} \,\, \frac{\partial U}{\partial y} dy }_{C_{f_b}} \,\,\, + \,\,\, \\
\underbrace{\frac{2}{U_e ^3} \int_{0}^{\infty} (U - U_e) \frac{\partial}{\partial y} \bigg( \frac{\tau}{\rho} \bigg) dy }_{C_{f_c}}
\end{gathered}
\end{equation}
\noindent where $$ \frac{\tau}{\rho} = \nu \frac{\partial U}{\partial y} - \overline{u' v'}. $$
$C_{f_b}$ is explicitly dependent on the turbulent fluctuations and is characterised by an integral of the positive turbulence production ($-\overline{u' v'} \,\, \partial U /\partial y$) normalised with the free-stream or edge velocity. $C_{f_a}$, $C_{f_b}$, and $C_{f_c}$ are also referred to as RD components. The premultiplied integrands of these contributions are
\begin{equation}
C_{{f_a}^*} = y \frac{2\nu}{U_e ^3} \bigg(\frac{\partial U}{\partial y}\bigg) ^2,
\end{equation}
\begin{equation}
C_{{f_b}^*} = y \frac{-2 \overline{ u' v' }}{U_e ^3} \frac{\partial U}{\partial y}, \text{ and}
\end{equation}
\begin{equation}
C_{{f_c}^*} = y \frac{2 (U - U_e)}{U_e ^3} \frac{\partial}{\partial y} \bigg( \frac{\tau}{\rho} \bigg) .
\end{equation}
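A minimal sketch of how the three RD components can be evaluated from wall-normal profiles by trapezoidal integration is given below; the nominal kinematic viscosity and the function name are assumptions of the sketch.
\begin{verbatim}
import numpy as np

def rd_components(y, U, uv, Ue, nu=1.5e-5):
    # y: wall-normal coordinates, U: mean streamwise velocity,
    # uv: Reynolds shear stress u'v', Ue: edge velocity
    dUdy = np.gradient(U, y)
    tau_over_rho = nu * dUdy - uv
    Cfa = 2.0 / Ue**3 * np.trapz(nu * dUdy**2, y)
    Cfb = 2.0 / Ue**3 * np.trapz(-uv * dUdy, y)
    Cfc = 2.0 / Ue**3 * np.trapz((U - Ue) * np.gradient(tau_over_rho, y), y)
    return Cfa, Cfb, Cfc               # C_f ~ Cfa + Cfb + Cfc
\end{verbatim}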
Regarding the application of the RD decomposition to experimental measurements of a turbulent boundary layer, data noise is the major problem, especially in the calculation of $C_{f_c}$, which involves a single wall-normal differentiation of $\overline{u' v'}$ and a double differentiation of $U$. Figure \ref{fig:RD_decomposition_premultiplied_filter_check} presents the premultiplied integrands $C_{{f_b}^*}$ and $C_{{f_c}^*}$ for the strong APG-TBL, plotted against the wall-normal distance normalised by the displacement thickness. The unfiltered $C_{{f_c}^*}$ profile, shown in grey, is very noisy, in contrast to the $C_{{f_b}^*}$ profile, which involves only a single differentiation of $U$. In order to reduce the noise in the experimental measurements, especially with regard to the differentiation of the second-order statistics, Gaussian filtering in the wall-normal direction is used, which smooths the $C_{{f_c}^*}$ profile to an extent where it can be used to calculate $C_{{f_c}}$. The red lines in figure \ref{fig:RD_decomposition_premultiplied_filter_check} represent the profiles computed after Gaussian filtering of $\overline{u' v'}$ and $U$ using a kernel size of 20 viscous units ($\sim$ 21 data points) in the wall-normal direction. In order to assess whether the Gaussian filtering introduces a bias, $C_{{f_b}^*}$ is also plotted using the filtered $\overline{u' v'}$ and $U$. Since the red line collapses very well onto the grey line, the filtering does not change the component that is measurable without filtering. Hence, it is deduced that the filtering introduces only a negligible bias in the calculation of $C_{{f_b}}$ and $C_{{f_c}}$, while reducing the measurement error by smoothing the $C_{{f_c}^*}$ profile.
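A sketch of the wall-normal smoothing step is given below; the mapping of the quoted kernel width to the Gaussian standard deviation is an assumption of this illustration, not the exact filter used to produce the figures.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def filter_wall_normal(profile, dy_plus, kernel_plus=20.0):
    # profile: wall-normal profile (e.g. u'v' or U) on a uniform grid,
    # dy_plus: grid spacing in viscous units, kernel_plus: kernel width
    # in viscous units; assume the kernel width spans about 4 sigma.
    sigma = (kernel_plus / dy_plus) / 4.0
    return gaussian_filter1d(profile, sigma=sigma, mode="nearest")
\end{verbatim}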
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{RD_decomposition_premultiplied_integrand.pdf}
\end{center}
\vspace*{-0.2in}\caption{Premultiplied integrands of the RD decomposition of mean skin friction. The grey lines in the background are at various equidistant locations along $x$ in the measured domain, while the red dots represent the profiles in the middle of the $x$ domain.
\label{fig:RD_decomposition_premultiplied}}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{RD_decomposition_x_Re_tau.pdf}
\end{center}
\vspace*{-0.2in}\caption{RD decomposition of mean skin friction in the strong APG-TBL. The symbols show the components plotted against $(x-x_o)/\delta$, whereas the lines show them plotted against $Re_\tau$.
\label{fig:RD_decomposition}}
\end{figure}
Figure \ref{fig:RD_decomposition_premultiplied} shows the premultiplied integrands of the RD components of $C_f$. The grey lines in the background are at various equidistant streamwise locations within the measured domain, while the red dots represent the profiles at the streamwise middle of the domain. \textcolor{black}{As can be seen, the decomposition using Gaussian filtering reduces the noise to a reasonable level. The wall-normal location of the outer peak in the $C_{{f_b}^*}$ profile and the zero-crossing of the $C_{{f_c}^*}$ profile coincide with the wall-normal location of the outer peak in the Reynolds shear stress profile. This is consistent with the observation of \citet{senthil2020analysis} for a strong APG-TBL at $\beta=39$. \citet{senthil2020analysis} also showed that $C_{f_b}$ is enhanced by 256\% whereas $C_{f_c}$ is reduced by 1,400\% when $\beta$ changes from 0 to 39. The values of $C_{f_b}$ and $C_{f_c}$ for the current strong APG-TBL ($\beta \approx 30$) are comparable to those reported by \citet{senthil2020analysis} for the $\beta=39$ case. This demonstrates that the role of the outer layer in skin friction generation is enhanced when a strong APG is imposed on a TBL.}
Figure \ref{fig:RD_decomposition} shows the streamwise variation of the calculated RD components in the strong APG-TBL. $C_{f_a}$ decreases linearly with $x$ and $Re_\tau$ within the measured domain. $C_{f_b}$ and $-C_{f_c}$ increase almost linearly, with $C_{f_b}$ slightly larger than $-C_{f_c}$; hence, most of the contribution of $C_{f_b}$ is balanced out by the negative values of $C_{f_c}$. \textcolor{black}{This shows that the positive contribution of the Reynolds shear stress to mean skin friction is reduced by the negative contribution of the strong APG. The streamwise average and the standard deviation of the RD components of $C_f$ are given in table \ref{tab:mean_SD_RD_components}. The values of these components are comparable to those of \citet{senthil2020analysis}, who report the streamwise averaged values of $C_f = \num{0.000377}$, $C_{f_a} = \num{0.000064}$, $C_{f_b} = \num{0.004370}$, and $C_{f_c} = \num{-0.004057}$ for the strong self-similar APG-TBL ($\beta = 39$) of \citet{kitsios2017direct}. Since the current TBL has a weaker APG ($\beta \approx 30$), its $C_f = \num{0.000482}$ is slightly larger than the value reported by \citet{senthil2020analysis}.}
\begin{table}
\begin{center}
\caption{Streamwise average and standard deviation of the RD components of the $C_f$.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccc}
\hline \hline\noalign{\medskip}
&Mean $\times 10^3$ & Standard deviation $\times 10^3 $ & Mean/
Mean($C_{f_{RD}}$) \\
\noalign{\smallskip}\hline\noalign{\medskip}
$C_{f_a}$ & 0.0226 & 0.0008 & 0.0465 \\
$C_{f_b}$ & 4.7259 & 0.1901 & 9.7830 \\
$C_{f_c}$ & $-4.2656$ & 0.1783 & $-8.8298$ \\
\noalign{\smallskip}\hline\noalign{\medskip}
\label{tab:mean_SD_RD_components}
\end{tabular}
}
\end{center}
\vspace{-0.2em}
\end{table}
\subsection{RD decomposition conditioned on the quadrant contributions}
\label{sec:RD_decomposition_quadrants}
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{RD_decomposition_quadSplit.pdf}
\end{center}
\vspace*{-0.2in}\caption{RD decomposition of mean skin friction in the strong APG-TBL, conditioned on the four quadrants.
\label{fig:RD_decomposition_quadSplit}}
\end{figure*}
In order to assess the contributions of the four quadrants to the distribution of $C_{f_b}$, $C_{f_c}$, and $C_{f_{RD}}$, the contributions of the four quadrants to the Reynolds shear stress are used in the RD decomposition. The results for the strong APG-TBL are presented in figure \ref{fig:RD_decomposition_quadSplit}. Note that the same mean streamwise velocity is used in the conditional and unconditional RD decomposition. Therefore, $C_{f_a}$ is the same for all quadrants as in figure \ref{fig:RD_decomposition}(a) and need not be presented again. It is observed that the Q2 and Q4 events are the positive contributors to $C_{f_b}$, $-C_{f_c}$ and $C_{f_{RD}}$. On the other hand, the contributions of Q1 and Q3 are always negative. Q1 and Q3 contribute almost equally to $C_{f_b}$, whereas Q3 contributes more to $-C_{f_c}$ than Q1. In the case of $C_{f_{RD}}$, Q1 contributes more than Q3. Overall, the positive and major contributions of Q2 and Q4 show that in a turbulent boundary layer, mean skin friction is mainly associated with the Q2 and Q4 events. This is consistent with the findings of \citet{chan2021skin}, who studied the modification of the skin friction by a large-eddy breakup device in a ZPG-TBL. It is also in agreement with the observations of \citet{fuaad2019enhanced} who, in their DNS study on drag reduction in a channel flow over superhydrophobic surfaces, conclude that a drop in ejections and sweeps reduces the turbulent contribution to the wall shear stress.
\begin{figure}[!t]
\begin{center}
\begin{overpic}[width=0.45\textwidth]{turbulence_production_quadSplit.pdf} \end{overpic}
\end{center}
\vspace*{-0.2in}\caption{Profile of $-\overline{u^\prime v^\prime} \partial U/\partial y$ in the strong APG-TBL, conditioned on ejections and sweeps, compared with the unconditioned overall profile.
\label{fig:turbulence_production_quadSplit}}
\end{figure}
\textcolor{black}{Ejections (Q2 events) always contribute more to $C_{f_b}$, $-C_{f_c}$ and $C_{f_{RD}}$ than sweeps (Q4 events). Figure \ref{fig:turbulence_production_quadSplit} demonstrates that ejections have a larger overall wall-normal contribution to the turbulence production term $-\overline{u^\prime v^\prime} \partial U/\partial y$ than sweeps. Since $C_f$ at high Reynolds numbers is dominated by the turbulence production within the logarithmic layer, the larger contribution of ejections than sweeps to $C_f$ is expected.}
\section{Conclusion}
A self-similar APG-TBL flow is developed in the LTRAC wind tunnel and the flow velocity is measured using 2C-2D PIV with a field of view spanning $3.3\delta \times 1.1 \delta$, while achieving a wall-normal spatial resolution of less than one viscous length. The similarity coefficients $C_{uu}$, $C_{vv}$, $C_{uv}$, and $C_\nu$ for the current strong APG-TBL ($\beta \sim 30$) are not strictly constant along the $x$ direction, but their standard deviations are very small and comparable to those reported in \citet{kitsios2017direct} for a self-similar APG-TBL flow at $\beta=39$ and a Reynolds number comparable to the current case.
It has been observed that the outer-layer statistics do not scale with viscous units in a strong APG-TBL, even if it is self-similar. In contrast, the profiles of the mean streamwise velocity and Reynolds stresses in a self-similar strong APG-TBL collapse when they are scaled with the outer variables, $U_e$ and $\delta_1$. The outer scaling also shows a remarkable collapse of the defect profiles, demonstrating the self-similar nature of the flow. This collapse is better than that achieved using the local friction velocity and the defect thickness $\Delta = \delta_1 (2/C_f)^{1/2}$ \citep{skote1998direct} in various APG-TBLs \citep{lee2017large}. Near the wall, the defect velocity normalised with the local $U_e$ has larger values for larger $\beta$. The ZPG-TBL has the longest wall-normal defect region, which decreases in extent with increasing APG.
When the streamwise and wall-normal velocity fluctuations are observed using the quadrant analysis of \citet{wallace1972wall}, it is found that the contributions of both Q1 and Q4 are amplified under APG and the increase in the Q4 contribution is larger than in the Q1. This demonstrates that when an APG is imposed, the outer peak in the Reynolds shear stress profile is formed due to the energisation of sweep motions of high-momentum fluid in the outer region of the boundary layer.
The profiles of the dominant terms of turbulence production ($-\overline{u^\prime v^\prime} \partial U/\partial y$ and $\overline{{v^\prime}^2} \partial U/\partial y$) show weaker inner peaks and stronger outer peaks in the strong APG-TBL, instead of only inner peaks in the ZPG-TBL. This demonstrates that with a strong increase in APG, the turbulence production is shifted away from the near-wall region towards the outer region. For the strong APG-TBL, the wall-normal location of the outer peaks in the turbulence production terms matches the outer peak locations in the Reynolds shear stress profiles and the outer inflection point in the mean streamwise velocity profile. This location matches with the zero-crossings of profiles of the third-order moments $\overline{u'^3}$, $\overline{{u'}^2 v'}$, $\overline{{v'}^3}$, and $-\overline{u' {v'}^2}$ in the strong APG-TBL. The outer positive peaks in the profiles of $\overline{{u'}^2 v'}$, $\overline{{v'}^3}$, and $-\overline{u' {v'}^2}$ demonstrate that a decrease in APG results in the reduced energy diffusion towards the boundary layer edge from the locations of the outer peaks in the Reynolds stress profiles, and this phenomenon is shifted closer to the boundary layer edge.
The profiles of the skewness and flatness factors show that the streamwise and wall-normal velocity fluctuations have a nearly Gaussian distribution in the log-layer of the ZPG-TBL, while having asymmetric distributions in the near-wall and wake regions. In the near-wall region, the streamwise velocity fluctuations are skewed towards larger values, while the wall-normal velocity fluctuations are skewed towards smaller values. This trend is reversed in the wake region. In the strong APG-TBL, however, the log region shows skewness towards larger values of the streamwise velocity fluctuations and smaller values of the wall-normal fluctuations, and the zero-crossings of the skewness profiles move further from the wall. The movement of the zero-crossings of the skewness profiles away from the wall shows that the amplitude modulation of the near-wall small scales increases with increasing APG, according to the interrelation between the skewness of $u'$ and the amplitude modulation coefficient demonstrated by \citet{schlatter2010quantifying} and \citet{mathis2011relationship}. The extent of asymmetry in the near-wall and wake regions increases with increasing APG. Furthermore, in the strong APG-TBL, the locations of the zero-crossings in the skewness profiles match the locations of the maxima in the profiles of the Reynolds stress and turbulence production, demonstrating that the distributions of the streamwise and wall-normal velocity fluctuations are symmetric at these locations.
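The skewness and flatness profiles discussed above can be formed directly from the PIV fluctuation fields; the short sketch below illustrates one way of doing so. The array layout and variable names are assumptions for illustration only.
\begin{verbatim}
import numpy as np

def skewness_flatness(q_prime):
    """Skewness and flatness profiles of a fluctuation field q'(snapshot, y)."""
    sigma = q_prime.std(axis=0)
    skew = (q_prime**3).mean(axis=0) / sigma**3
    flat = (q_prime**4).mean(axis=0) / sigma**4
    return skew, flat

# Example: locate the indices bracketing the zero-crossing of the skewness
# of u', which the text compares with the outer Reynolds-stress peak.
# S_u, F_u = skewness_flatness(u_prime)
# i0 = np.where(np.diff(np.sign(S_u)))[0]
\end{verbatim}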
The locations of the maximum Reynolds streamwise stress, the zero skewness and the minimum flatness collapse for the ZPG-TBL, as has been reported in several previous studies. The current study observes the same collapse in the strong APG-TBL. Interestingly, the wall-normal velocity fluctuations also exhibit a collapse among the locations of the peak in the Reynolds stress, the near-zero skewness and the minimum flatness in both the ZPG-TBL and the strong APG-TBL.
With regards to the application of the identity of \citet{renard2016theoretical} for the decomposition of mean skin friction into three components (referred to as the RD components) in TBLs measured using 2C-2D PIV, it is shown that the noise in the computation of the third component, which involves the second derivative of $U$ and the first derivative of $-\overline{u'v'}$, is considerably reduced by Gaussian filtering over 21 viscous units in the wall-normal direction. It is observed that the wall-normal locations of the outer peaks in the profiles of the premultiplied integrands of $C_{{f_b}}$ and $C_{{f_c}}$ coincide with the wall-normal location of the outer peak in the Reynolds shear stress profile in the strong APG-TBL. This is consistent with the observations of \citet{senthil2020analysis} for a strong APG-TBL at $\beta=39$.
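As an illustration of the noise-reduction step mentioned above, the sketch below applies a wall-normal Gaussian filter to the mean profiles before differentiation, as required for the third RD component. The interpretation of the 21 viscous units as the filter width in wall units (converted here to a kernel standard deviation in grid points), the assumption of a uniform wall-normal grid, and the variable names are all illustrative assumptions rather than the exact processing used in this study.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_derivatives(y, U, uv, l_visc, width_plus=21.0):
    """Gaussian-filter U(y) and <u'v'>(y) before differentiation.

    y, U, uv : wall-normal coordinate, mean velocity and <u'v'> profiles
    l_visc   : viscous length, used to convert the filter width from wall
               units to grid points (uniform grid assumed).
    Returns the smoothed d2U/dy2 and d(-<u'v'>)/dy entering the third
    RD component.
    """
    dy = np.mean(np.diff(y))
    sigma = width_plus * l_visc / dy        # filter width in grid points
    U_s = gaussian_filter1d(U, sigma)
    uv_s = gaussian_filter1d(uv, sigma)
    d2U_dy2 = np.gradient(np.gradient(U_s, y), y)
    dmuv_dy = np.gradient(-uv_s, y)
    return d2U_dy2, dmuv_dy
\end{verbatim}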
The contribution of the turbulent kinetic energy in skin-friction generation (represented by $C_{{f_b}}$) is enhanced with increasing APG because of the emergence of the outer peaks and the diminishing of the inner peaks in the turbulence production profiles. This is matched by the enhanced negative contribution due to APG (represented by $C_{{f_c}}$), and the overall $C_f$ decreases with an increase in APG. In the strong APG-TBL, $C_{f_a}$ decreases whereas the $C_{{f_b}}$ and $-C_{{f_c}}$ increase almost linearly in the streamwise direction.
The findings are in agreement with the observations of \citet{senthil2020analysis} who performed a similar analysis using the DNS data of a strong APG-TBL ($\beta=39$) of \citet{kitsios2017direct}.
Using the quadrant analysis of \citet{wallace1972wall}, ejections are shown to always contribute more than sweeps to $C_{f_b}$, $-C_{f_c}$ and the overall $C_{f}$ of the RD decomposition. Since $C_f$ at high Reynolds numbers is dominated by turbulence production in the outer region, and ejections have a larger overall contribution to turbulence production than sweeps, the larger contribution of ejections to $C_f$ is expected.
The methodology of \citet{skaare1994turbulent}, in which a boundary layer is brought to the verge of separation and the APG is then relaxed, is found to be effective at making the flow self-similar. In the future, such a boundary layer flow could be produced with a similar methodology in a larger wind tunnel facility, where the flow is allowed to develop over a longer streamwise range and the second-order statistics are converged. The convergence of the Reynolds stress profiles at the location of the outer peaks in the strong APG-TBL may be checked as a function of the number of statistically independent velocity fields as well as the streamwise distance from the trip wire, within the self-similar region.
A thorough analysis of the importance of the displacement thickness height in these boundary layers is warranted, since the following quantities show similar values around the displacement thickness height.
\begin{itemize}
\item The outer-scaled Reynolds streamwise and shear stresses in the three TBLs.
\item The relative contributions of ejections and sweeps to the Reynolds shear stress and the turbulence production term $-\overline{u'v'} \partial U/\partial y$ in the mild and strong APG-TBLs.
\item The outer-scaled third-order moments $\overline{{u'}^3}$, $\overline{{u'}^2 v'}$, $\overline{{v'}^3}$, and $-\overline{u' {v'}^2}$ in the three TBLs.
\item The skewness and flatness profiles.
\end{itemize}
As the APG increases, the outer peaks in the Reynolds stress profiles move from the outer region towards the displacement thickness height. The reason for this shift in turbulent activity is yet to be investigated.
\section*{Acknowledgements}
The authors would like to acknowledge the support of this research by the Australian Government through an Australian Research Council Discovery grant. Muhammad Shehzad acknowledges the Punjab Educational Endowment Fund
(PEEF), Punjab, Pakistan for funding his PhD research. The research also benefited from computational resources provided by the Pawsey Supercomputing Centre and the NCI Facility, both supported by the Australian Government, with access provided via an NCMAS grant.
|
{
"arxiv_id": "2302.08603",
"language": "en",
"timestamp": "2023-02-24T02:18:16",
"url": "https://arxiv.org/abs/2302.08603",
"yymm": "2302"
} | \section{Introduction}
The Gross--Neveu (GN) model is a relativistic field theory of four-fermi interactions. Four-fermi interactions are fundamental and fruitful in providing models of superconductivity such as BCS theory, giving an effective, or low-energy, theory of nucleons and mesons via the NJL model, and providing a dual to the quantum Sine-Gordon model via the Thirring model. However, BCS theory is non-relativistic, the Thirring model is nonrenormalizable in \(3+1d\) \cite{banerjee1997explicit}, and the NJL model is nonrenormalizable in \(3+1d\) and not UV complete. Nonetheless, given the successful history of four-fermi interactions, we would like to be able to write down a renormalizable and UV complete theory of such interactions to act as a toy model for QCD in \(3+1d\).
To do so, we turn to a large-\(N\) Gross-Neveu-inspired model in \(3+1d\). In \(1+1d\) the GN model is a well-known toy model for QCD, exhibiting asymptotic freedom and a negative \(\beta\)-function, and in \(2+1d\) the GN model mimics planar superconductors. Indeed, in \(1+1d\) the large-\(N\) GN model is likewise asymptotically free and renormalizable \cite{moshe2003quantum}\cite{gracey1990three}\cite{gross1974dynamical}. However, in \(1+1d\) the running coupling contains a pole when approaching the IR, demanding an IR cutoff \cite{moshe2003quantum}. Further, when working in \(3+1d\), four-fermi interactions cannot be renormalized via a running coupling (in the standard sense). Instead, a term containing a fractional power of the fermionic fields must be removed from the Lagrangian \cite{parisi1975theory}. Moshe et al. \cite{moshe2003quantum} claim that the non-renormalizability of four-fermi interactions in \(3+1d\) is due to the algebra of the \(\gamma\)-matrices, which causes an infinite number of four-fermion interactions to mix under renormalization. Moshe et al. \cite{moshe2003quantum} also claim that the GN model is renormalizable in \(1+1d\) because the symmetry of the model is effectively \(O(N)\). However, we note that the renormalizability of GN in \(1+1d\) can also be seen as an artifact of the theory's marginal coupling. With this, our strategy for renormalization starts by writing down a marginal GN-like model in \(3+1d\) with fractional interactions, which we claim is related to renormalization schemes suggested by Parisi \cite{parisi1975theory}. In doing so, our theory has many features of the GN model in \(1+1d\), such as renormalizability, a negative \(\beta\)-function, and a pole in the IR. Next, we make use of the Ai, Bender, and Sarkar (ABS) conjecture \cite{bender2022pt} by implementing a \(\mathcal{PT}\)-symmetric extension of the theory to ``see around" the pole found in the IR. Here, ``seeing around" the pole means that 1) we can see into the UV using the standard theory and into the IR using the \(\mathcal{PT}\)-symmetric theory, and 2) poles in the running of the coupling do not affect the theory's observables, and both theories have identical observables. Points 1) and 2) will be fully illustrated in section \ref{section 3}. Together, the strategies of fractional interactions and \(\mathcal{PT}\)-symmetric extensions yield two fully renormalized fermionic theories in \(3+1d\) with identical physics, which together give a theory that is both UV and IR complete.
A brief discussion is warranted around the topic of large-\(N\) theories with fractional interactions. Indeed, standard perturbation theory is not possible with fractional interaction terms \cite{peskin2018introduction}. However, in large-\(N\) theories \(\frac{1}{N}\) acts as a coupling-independent expansion parameter. The coupling independence of the \(\frac{1}{N}\) expansion gives way to renormalizable and solvable theories without the use of traditional perturbation theory \cite{moshe2003quantum}\cite{romatschke2019finite}. The \(N\)-field expansion parameter as a way to tackle nonrenormalizable theories was first recognized by Parisi in the 1970s \cite{parisi1975theory}. More recently, many papers have taken advantage of the large-\(N\) expansion to present theories which are analytically solvable for all couplings, generate transport coefficients, and calculate viscosities
\cite{pinto2020three}\cite{romatschke2019finite}\cite{romatschke2019thermal}\cite{romatschke2021shear}\cite{Weiner:2022vwa}.
Further, some large-\(N\) theories are both solvable and renormalizable with fractional interactions in arbitrary dimensions \cite{grable2022theremal}.
Next, we address the strategy of studying \(\mathcal{PT}\)-symmetric versions of related Hermitian theories. It has been known for several decades that \(\mathcal{PT}\)-symmetric extensions of Hermitian theories, which are themselves non-Hermitian, still possess a positive, discrete, and real eigenspectrum \cite{bender1999pt}. Recently, however, Ai, Bender, and Sarkar gave a recipe for formulating \(\mathcal{PT}\)-symmetric field theories from cousin Hermitian theories via analytically continuing the coupling \cite{bender2022pt}:
\begin{equation}
\ln Z^{\mathcal{PT}} (g) = \frac{1}{2}\ln Z_{\text{Herm}}(\lambda \rightarrow{-g +i0^+}) + \frac{1}{2}\ln Z_{\text{Herm}}(\lambda \rightarrow{-g -i0^+}).
\end{equation}
Although \cite{bender2022pt} only explicitly covers scalar theories, other works \cite{BenderFermi}\cite{Mavromatos:2021hpe} have shown the ability to generate \(\mathcal{PT}\)-symmetric fermionic theories. In particular, \cite{BenderFermi} has shown the existence of a \(\mathcal{PT}\)-symmetric GN theory in \(3+1d\). However, we note that the product \(\Bar{\Psi}\Psi\) is Hermitian, and therefore any term of the form \(g\big(\Bar{\Psi}\Psi\big)^\epsilon\) for any \(\epsilon\in \mathbb{R}\) is also Hermitian and is likewise \(\mathcal{PT}\)-symmetric. If we look at a theory with interaction term \(-g\big(\Bar{\Psi}\Psi\big)^\epsilon\), the problem is not that the Lagrangian is non-Hermitian, but that it is not asymptotically free or bounded from below. Indeed, the fact that the Lagrangian \(L(g)\) \textit{is} asymptotically free and maps to the local limit of a Yukawa model is Gross and Neveu's original motivation for defining the coupling to be positive \cite{gross1974dynamical}. However, in recent work, Ai, Bender, and Sarkar conjecture that \(\mathcal{PT}\)-symmetric theories which inherit ``the wrong coupling sign" are indeed well defined via analytic continuation. Regardless of whether this conjecture holds, the fact remains that large-\(N\) theories are immune to Landau poles, as their perturbative expansion is in \(\frac{1}{N}\) and not \(\lambda\). Further, a running coupling renormalization scheme absorbs any overall constants or signs in front of the coupling, leaving the theory invariant to such changes.
Or, put more boldly, large-\(N\) theories are immune to the sickness of Hamiltonians which are unbounded from below. Further, Romatschke \cite{Romatschke2022} has recently shown that the \(\mathcal{PT}\)-symmetric analog of a large-\(N\) \(\phi^4\) theory in \(3+1d\) can be used to see through the pole found in the UV. This is due to the novel fact that flipping the sign of the coupling also flips the sign of the \(\beta\)-function during renormalization. Thus the \(\beta\)-function of the standard \(\phi^4\)-theory ends at a pole, and the \(\beta\)-function of the cousin \(\mathcal{PT}\)-symmetric theory begins at that pole and then exhibits asymptotic freedom into the UV. The catch is that both theories generate the same thermodynamics, and therefore one can now follow the running coupling through the pole into the UV with the use of the \(\mathcal{PT}\)-symmetric theory.
\section{A Large-$N$ Fermionic Model}
Consider a marginal \(U(N)\) fermionic theory in \(d\) dimensions, where \(\psi_f = (\psi_1, \psi_2, \ldots, \psi_N)\), in the limit \(N\rightarrow \infty\), with the Euclidean action
\begin{equation}\label{eq1}
A^{(d)} = \int d^dx \bigg[\Bar{\psi}_f\left(\slashed{\partial}+m_b\right) \psi_f + \bigg(\frac{\alpha}{N}\bigg)^\frac{1}{d-1}\big(\Bar{\psi}_f\psi_f\big)^{\frac{d}{d-1}}\bigg],
\end{equation}
where \(\psi_f(x)\) with \(f = 1,2,\ldots,N\) are \(N\) four-component Dirac spinors, and \(m_b\) is the bare mass. The sign convention of the coupling follows \cite{diab2016and} and \cite{parisi1975theory}, and allows us to smoothly connect to a Yukawa-Gross-Neveu model in \(d=4-\epsilon\) dimensions. To calculate the partition function \(Z^{(d)}(T) = \int \mathcal{D} \Bar{\psi}\mathcal{D}\psi e^{-A^{(d)}}\) of a \(U(N)\) model we multiply by a convenient factor of 1 \cite{grable2022theremal},
\begin{equation}\label{eq2}
1= \int\mathcal{D}\sigma\mathcal{D}\xi e^{\int_x\int_0^\beta i\xi(\sigma - \frac{\Bar{\psi}_f\psi_f}{N})},
\end{equation}
and let the Euclidean dimension of time be compactified to a thermal circle of radius \(\beta\) \cite{laine2016basics}.
For simplicity we let \( \xi \rightarrow iN\times \xi\) and \(\sigma \rightarrow \frac{\sigma}{\alpha}\) to evenly distribute the coupling in \(Z(T)\). Next, we take saddle point integrals around the mean-field values of \(\xi\) and \(\sigma\) (\({\xi_0}\) and \(\sigma_0\)), which are exact in the large-\(N\) limit. Applying the saddle point condition for the \(\sigma\) field and keeping only the positive solution in terms of \(\xi_0\) (which is equivalent to keeping only positive effective mass terms), \(Z(T)\) is given as
\begin{equation}\label{eq3}
\begin{split}
Z^{(d)}(T) = \int \mathcal{D}\bar{\psi} \mathcal{D}{\psi} d\xi_0 \exp \bigg[-N\int_x dx \Bigg( \Bar{\psi} \big(\slashed{\partial} +m_b + \xi_0 \big)\psi - \frac{\xi_0^{d}}{ \alpha}\bigg(\frac{1}{d}\bigg)\bigg(\frac{d-1}{d}\bigg)^{d-1}\Bigg)\bigg].
\end{split}
\end{equation}
Since \(\xi_0\) is a constant, the integral over \(\xi_0\) can be found via saddle point integration and is exact in the large-\(N\) limit. Further, at finite temperature, the \(\slashed \partial\)-operator can be diagonalized in terms of its fermionic Matsubara frequencies \(\omega^f_n = 2\pi T (n+\frac{1}{2})\) with \(n \in \mathbb{Z}\). After diagonalizing \(\slashed \partial\) and integrating over \(\xi_0\), we let \(\xi_0\rightarrow m\), noting that \(\xi_0\) acts as the effective mass. Up to an overall constant, the partition function is now
\begin{equation}\label{eq5}
Z^{(d)}(T)=\exp\bigg[ N\beta V\Bigg( p^{(d)}_{\text{free}}\big(T,m+m_b\big) + \frac{m^{d}}{ \alpha}\bigg(\frac{1}{d}\bigg)\bigg(\frac{d-1}{d}\bigg)^{d-1}\Bigg)\bigg],
\end{equation}
where \(\beta V\) is the spacetime volume and \(N\) is still the number of fermionic fields. Further, \(p^{(d)}_{\text{free}} \big(T,m\big)\) is the pressure of a free fermionic field theory at temperature \(T\) in \(d\) dimensions, which is given by,
\begin{equation}\label{eq6}
p^{(d)}_{\text{free}}(T,m)= d\int \frac{d^{d-1}p}{(2\pi)^{d-1}}\bigg[\frac{E_p}{2} + T\ln (1 + e^{-\beta E_p})\bigg].
\end{equation}
The partition function in \(3+1d\) is then
\begin{equation}
Z= e^{N\beta V \big(4J(T,m+m_b) +\frac{m^4}{4 \alpha}(\frac{3}{4})^3\big)}.
\end{equation}
where, in \(3+1d\),
\begin{equation}
J(T,m) = \int \frac{d^3p}{(2\pi)^3} \bigg[\frac{E_p}{2} + T\ln(1+ e^{-\beta E_p})\bigg],
\end{equation}
and \(E_p = \sqrt{\vec{p}^{\,2}+(m+m_b)^2}\). Using dimensional regularization by perturbing around the dimension parameter \(d\), such that \(d= 4-2\epsilon\), \(J(T,m)\) evaluates to
\begin{equation}
J(T,m)=-\frac{m^4}{64\pi^2}\left[\frac{1}{\epsilon}+\ln\left(\frac{\Bar{\mu}^2e^{3/2}}{m^2}\right)\right]-\frac{m^2T^2}{2\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right),
\end{equation}
where \(K_2(\cdot)\) denotes the modified Bessel function of the second kind.
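The thermal part of \(J(T,m)\) can be cross-checked numerically: truncating the Bessel sum and comparing it with direct quadrature of the \(T\ln(1+e^{-\beta E_p})\) integral gives agreement to high accuracy. The sketch below is such a check; the truncation order and integration cut-off are arbitrary numerical choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

def J_thermal_sum(T, m, nmax=50):
    """Thermal part of J(T,m) from the truncated Bessel sum."""
    n = np.arange(1, nmax + 1)
    return -(m**2 * T**2 / (2 * np.pi**2)) * np.sum(
        (-1.0)**n / n**2 * kn(2, n * m / T))

def J_thermal_int(T, m):
    """Same quantity from direct quadrature of the momentum integral."""
    f = lambda p: p**2 / (2 * np.pi**2) * T * np.log1p(
        np.exp(-np.sqrt(p**2 + m**2) / T))
    return quad(f, 0.0, 50.0 * T + 10.0 * m)[0]

# e.g. J_thermal_sum(0.3, 1.0) and J_thermal_int(0.3, 1.0) agree to high
# accuracy (arbitrary units), confirming the Bessel-sum representation.
\end{verbatim}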
Introducing a renormalization scale \(\Bar{\mu}\) in the \(\overline{MS}\) scheme, the pressure per fermionic component is
\footnote{
We note that the pressure in the form of equation \eqref{eq10} is reminiscent of the usual GN model, where the second and third terms represent a gas of quasiparticles of effective mass \(m\).}
\begin{equation}\label{eq10}
p^{(4)}(T,m)=\frac{ (m-m_b)^4}{4\alpha}\bigg(\frac{3}{4}\bigg)^3-\frac{m^4}{16\pi^2}\left[\frac{1}{\epsilon}+\ln\left(\frac{\Bar{\mu}^2e^{3/2}}{m^2}\right)\right]-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right).
\end{equation}
The pressure can then be non-perturbatively renormalized by absorbing \(\epsilon\) into the coupling such that
\begin{equation}
\frac{m^4}{4\alpha}\bigg(\frac{3}{4}\bigg)^3- \frac{m^4}{16\pi^2}\frac{1}{\epsilon} = \frac{m^4}{4\alpha_R(\Bar{\mu})}
\end{equation}
where \(\alpha_R\) is the renormalized coupling. With this, the zero temperature piece of the pressure is
\begin{equation}
p^{(4)}(T=0,m) = \frac{m^4}{4\alpha_R(\bar{\mu})}-\frac{m^4}{16\pi^2}\ln\left(\frac{\Bar{\mu}^2e^{3/2}}{m^2}\right),
\end{equation}
and has a corresponding negative \(\beta\)-function of
\begin{equation}
\beta(\alpha_R)\equiv\frac{\partial \alpha_R(\Bar{\mu})}{\partial \ln(\Bar{\mu})} = -\frac{\alpha^2_R(\bar{\mu})}{2\pi^2}.
\end{equation}
Integrating this \(\beta\)-function, the running coupling develops a pole at a cut-off scale \(\Lambda_{c}\),
\begin{equation}
\alpha_R(\Bar{\mu}) =\frac{4\pi^2}{\ln\left(\frac{\Bar{\mu}^2}{\Lambda^2_{c}}\right)}.
\end{equation}
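As a quick symbolic cross-check (assuming SymPy is available), one can verify that the quoted running coupling indeed solves the \(\beta\)-function equation above:
\begin{verbatim}
import sympy as sp

mu, Lam = sp.symbols("mubar Lambda_c", positive=True)
alpha_R = 4 * sp.pi**2 / sp.log(mu**2 / Lam**2)

# d alpha_R / d ln(mubar) = mu * d alpha_R / d mu
beta = sp.simplify(mu * sp.diff(alpha_R, mu))
assert sp.simplify(beta + alpha_R**2 / (2 * sp.pi**2)) == 0
\end{verbatim}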
We note that our result for \(\beta(\alpha_R)\) is \(-\beta(\lambda_R)\) where \(\beta(\lambda_R)\) is the \(\beta\)-function for a large-\(N\) scalar with quartic interactions given by \cite{Romatschke2022}. The renormalized pressure in terms of \(\alpha_R(\Bar{\mu})\) now reads
\begin{align}\label{eq15}
p^{(4)}(m,T)=&\frac{ -4 m^3 m_b+6 m^2 m_b^2+m_b^4-4 m m_b^3}{4\alpha}\bigg(\frac{3}{4}\bigg)^3 \\
&-\frac{m^4}{16\pi^2}\ln\left(\frac{\Lambda_c^2 e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right). \nonumber
\end{align}
However, \(\frac{1}{\alpha}\) diverges as \(\epsilon\rightarrow 0\); therefore the remaining terms which mix bare and effective masses must be renormalized. To do this, let
\begin{equation}
\frac{m_b}{\alpha}\bigg(\frac{3}{4}\bigg)^3 = \frac{m_R(\Bar{\mu})}{\alpha_R(\Bar{\mu})}.
\end{equation}
In demanding \(\frac{\partial p(m)}{\partial \ln (\Bar{\mu})}=0\) we have the expression
\begin{align}
p(m,T)=&-m^3\frac{m_R(\Bar{\mu})}{\alpha_R(\Bar{\mu})}+\frac{6 m^2 m_b^2+m_b^4-4 m m_b^3}{4\alpha}\bigg(\frac{3}{4}\bigg)^3 \\
&-\frac{m^4}{16\pi^2}\ln\left(\frac{\Lambda_c^2 e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right)\nonumber.
\end{align}
Now, looking at higher powers of \(m_b\), we note
\begin{equation}
\begin{split}
&\frac{m_b^2}{\alpha}=\bigg(\frac{m_R(\Bar{\mu})}{\alpha_R(\Bar{\mu})}\bigg)^2 \alpha
= \text{Const}^2\frac{1}{\frac{1}{\alpha_R(\Bar{\mu})} + \frac{1}{4\pi^2\epsilon}}.
\end{split}
\end{equation}
Thus, higher powers of the bare mass \(m_b\) vanish under our renormalization scheme as \(\epsilon \rightarrow 0\), and the renormalized pressure becomes
\begin{equation}
\begin{split}
&p(m,T)=-m^3\frac{ m_R(\Bar{\mu})}{\alpha_R(\Bar{\mu})}-\frac{m^4}{16\pi^2}\ln\left(\frac{\Lambda_c^2 e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right).
\end{split}
\end{equation}
The condition \(\frac{\partial p_\text{ren}}{\partial\ln\Bar{\mu}}=0\) is fulfilled for
\begin{equation}
m_R(\Bar{\mu}) = \frac{\text{const}}{\ln\big(\frac{\Bar{\mu}^2}{\Lambda^2}\big)}
\end{equation}
and we can define the constant $M\equiv \frac{m_R}{\alpha_R}$ as the renormalized bare mass. Finally, the renormalized pressure is given as,
\begin{equation}
p(m,T)= -m^3 M-\frac{m^4}{16\pi^2}\ln\left(\frac{\Lambda_c^2 e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right).
\end{equation}
\section{The $\mathcal{PT}$-Symmetric Extension at Large \(N\)}\label{section 3}
We now consider the \(\mathcal{PT}\)-symmetric version of equation \eqref{eq5} by analytically continuing $\alpha\rightarrow -g\pm i0^+$ such that
\begin{equation}
\ln Z^{\mathcal{PT}} (g) = \frac{1}{2}\ln Z_{\text{Herm}}(\alpha \rightarrow{-g +i0^+}) + \frac{1}{2}\ln Z_{\text{Herm}}(\alpha \rightarrow{-g -i0^+}),
\end{equation}
as conjectured by \cite{bender2022pt}. The analytically continued partition function is now
\begin{equation}\label{eq4}
Z^{(d)}(T)=\exp\bigg[ N\beta V\Bigg( p^{(d)}_{\text{free}}\big(T,m\big) - \frac{m^{d}}{ (g\pm i0^+)}\bigg(\frac{1}{d}\bigg)\bigg(\frac{d-1}{d}\bigg)^{d-1}\Bigg)\bigg],
\end{equation}
where in this expression we have set the bare mass \(m_b =0\) for simplicity, since we know from the previous section how to reinsert the renormalized bare mass when needed. Under dimensional regularization, the pressure of the \(\mathcal{PT}\)-symmetric theory is given as
\begin{equation}
p(m,T)=-\frac{m^4}{4g}\bigg(\frac{3}{4}\bigg)^3-\frac{m^4}{16\pi^2}\left[\frac{1}{\epsilon}+\ln\left(\frac{\Bar{\mu}^2e^{3/2}}{m^2}\right)\right]-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right),
\end{equation}
and is non-perturbatively renormalized through the coupling parameter by letting
\begin{equation}
\frac{m^4}{4g_R(\Bar{\mu})}=\frac{m^4}{4g}\bigg(\frac{3}{4}\bigg)^3+\frac{m^4}{16\pi^2\epsilon},
\end{equation}
where \(g_R(\Bar{\mu})\) is the renormalized coupling. Now, as a consequence of analytically continuing the coupling, the \(\beta\)-function is positive and decaying into the IR,
\begin{equation}
\begin{split}
&\beta(g_R) = \frac{g^2_R(\bar{\mu})}{2\pi^2}, \quad\text{and}\quad
g_R(\Bar{\mu})=-\frac{4\pi^2}{\ln\left(\frac{\Bar{\mu}^2}{\Lambda^2_{c}}\right)} \qquad \bar{\mu}<\Lambda_c,
\end{split}
\end{equation}
yielding a theory that is well-defined in the low momentum limit. Re-inserting the renormalized bare mass \(M\) and likewise re-inserting the analytic continuation of the coupling, the pressure becomes
\begin{equation}
p(m,T) = -m^3 M -\frac{m^4}{16\pi^2}\ln\left(\frac{(\Lambda_c^2-i0^+)e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right).
\end{equation}
We note that the analytic continuation of the coupling gets removed via the coupling renormalization scheme. With this, the only physical parameter that remains in place of the coupling is the cut-off \(\Lambda_c\), which now carries the analytic continuation.
By taking the real part of the analytically continued partition function the renormalized pressure is given as
\begin{equation}\label{eq22}
p(m,T)=-m^3 M-\frac{m^4}{16\pi^2}\ln\left(\frac{\Lambda_c^2 e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right).
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth]{pt_mass_pressure.png}
\caption{The above figure gives the pressure per component of a large-\(N\) theory of interacting fermions as a function of the effective mass, $m$. The plot is at a specific temperature \(T=0.06\Lambda_c\) and mass scale \(M = -0.01\Lambda_c\). We see that a non-zero solution to the saddle point equation (\ref{eq24}), around $m\approx 0.48\Lambda_c$, is thermodynamically favored.}
\label{fig: plot1}
\end{figure}
Likewise the entropy density is calculated by the thermodynamic relation $s\equiv\frac{\partial p}{\partial T}$, to give
\begin{equation}
s(m,T)= -\frac{2m^3}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^nK_3(n\beta m)}{n},
\end{equation}
where $m$ is the solution to the gap equation.
Note that the renormalized pressure of the original theory, equation \eqref{eq15} after mass renormalization, is identical to equation \eqref{eq22}; thus the thermodynamics of the original theory and the \(\mathcal{PT}\)-symmetric theory are identical. In this guise, we can view the coupling in the IR through the lens of the \(\mathcal{PT}\)-symmetric extension, and likewise the coupling in the UV can be viewed through the lens of the original theory. In this way, we can see through the pole in the running coupling, and the theory becomes well-defined in both the IR and UV limits.
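A simple numerical illustration of this statement is sketched below (in units of the cut-off): the Hermitian coupling \(\alpha_R\) is positive above \(\Lambda_c\) and decays into the UV, while the \(\mathcal{PT}\)-symmetric coupling \(g_R\) is positive below \(\Lambda_c\) and decays into the IR, so the pair of theories covers all scales on either side of the pole. The grids chosen here are arbitrary.
\begin{verbatim}
import numpy as np

Lambda_c = 1.0                            # work in units of the cut-off
mu_uv = np.linspace(1.05, 10.0, 200)      # mubar > Lambda_c: Hermitian branch
mu_ir = np.linspace(0.05, 0.95, 200)      # mubar < Lambda_c: PT-symmetric branch

alpha_R = 4 * np.pi**2 / np.log(mu_uv**2 / Lambda_c**2)   # decays into the UV
g_R = -4 * np.pi**2 / np.log(mu_ir**2 / Lambda_c**2)      # decays into the IR

# Both branches are positive and blow up as mubar -> Lambda_c; together they
# cover the whole range of scales, which is the sense in which the two
# theories "see through" the pole.
\end{verbatim}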
\section{Solving the Gap Equations of the \(\mathcal{PT}\)-Symmetric Model}
In both models, we have taken saddle point integrals around the auxiliary fields, and we have yet to consider gap equations of either theory. Generally, the partition function can be written in the large $N$ limit as
\begin{equation}
\lim_{N\gg 1}Z\left(\lambda\rightarrow -g+i0^+\right) = \sum_{i=1}^K e^{N\beta V p(m_i)},
\end{equation}
where $K$ enumerates the saddle points $m_i$ which extremize the pressure. Taking the derivative of the dimensionally regularized pressure with respect to \(m\) (the effective mass generated by the auxiliary field), the saddle point condition of the \(\mathcal{PT}\)-symmetric theory is given by
\begin{equation}\label{eq24}
0=\frac{d p}{d m}=-3 M m^2-\frac{m^3 \left(1+\log \left(\frac{\Lambda_c ^2-i0^+}{m^2}\right)\right)}{4 \pi ^2}+\frac{2m^2 T}{\pi^2}\sum _{n=1}^{\infty } \frac{(-1)^n K_1(m n \beta )}{n}.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth]{pt_pressure1.png}
\caption{This plot shows that the $m_0=0$ solution to the gap equation becomes thermodynamically favored around \(T=0.139\Lambda_c\).}
\label{fig: plot2}
\end{figure}
We first consider the analytic gap equations at zero temperature and zero bare mass \(T=M=0\). With this, the gap equation is given by
\begin{equation}
m^3\log \left( \frac{\Lambda_c^2\, e}{m^2} \right)=0,
\end{equation}
and has two solutions:
\begin{equation}\label{26}
m_0(T=0)=0,\quad\text{and} \quad m_1(T=0)=\sqrt{e}\Lambda_c.
\end{equation}
We note that \(m_1\) depends on the scale \(\Lambda_c\) and \(\Lambda_c\) depends on the coupling \(\alpha_R\). Thus, \(m_1\) is explicitly related to the GN-model, and must be physical. Now the partition function evaluates to
\begin{equation}
\lim_{N\gg 1}Z\left(\lambda=-g,T=0\right) = e^{-N\beta V \Lambda_c^4 e^2/32\pi^2} + 1.
\end{equation}
We note that the zero-temperature gap equations of our fermionic theory are not only equivalent for the Hermitian and \(\mathcal{PT}\)-symmetric theories, but they also match the zero-temperature gap solutions of the \(\phi^4\) scalar theory in \(3+1d\) given in \cite{Romatschke2022}.
Next, we consider re-inserting the bare mass term. The zero-temperature solutions to our gap equations are now given by
\begin{equation}
m = \frac{6 \pi ^2 M}{W\left(-6 \pi ^2 \sqrt{\frac{M^2}{e\Lambda_c ^2}}\right)}\quad \& \quad m= \frac{6 \pi ^2 M}{W\left(6 \pi ^2 \sqrt{\frac{M^2}{e\Lambda_c ^2}}\right)}.
\end{equation}
Here \(W(\cdot)\) is the Lambert \(W\)-function. Letting \(M=-0.01\Lambda_c\), we find three solutions to the zero-temperature gap equation, at \(m_0=0\), \(m_1=0.47945\Lambda_c\), and \(m_2=0.74319\Lambda_c\), shown in figure (\ref{fig: plot1}). At finite temperatures, the in-medium mass solutions to (\ref{eq24}) can be found numerically. The pressure can then be evaluated at these solutions to determine which ones are thermodynamically favored and which ones are meta-stable. With this, the pressure is greatest for the temperature-dependent solution $m_1(T)$ up until \(T_c\approx 0.139\Lambda_c\), when the \(m_0=0\) solution becomes thermodynamically favored, as shown in figure \ref{fig: plot2}. When we evaluate the entropy density, we find a first-order phase transition, signaled by a discontinuity at $T_c$ in figure \ref{fig: plot3}.
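A minimal numerical sketch of these statements is given below: it reproduces the quoted zero-temperature roots and indicates how the favored phase is identified at finite temperature. The bracketing intervals, the truncation of the thermal Bessel sums, and the use of a small but non-zero mass for the symmetric branch are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import kn

Lam, M = 1.0, -0.01          # units of Lambda_c; mass scale used in the text
N_SUM = np.arange(1, 41)     # truncation of the thermal Bessel sums

def pressure(m, T):
    p = -M * m**3 - m**4 / (16 * np.pi**2) * np.log(Lam**2 * np.e**1.5 / m**2)
    if T > 0:
        p -= 2 * m**2 * T**2 / np.pi**2 * np.sum(
            (-1.0)**N_SUM / N_SUM**2 * kn(2, N_SUM * m / T))
    return p

def dp_dm(m, T):
    d = -3 * M * m**2 - m**3 * (1 + np.log(Lam**2 / m**2)) / (4 * np.pi**2)
    if T > 0:
        d += 2 * m**2 * T / np.pi**2 * np.sum(
            (-1.0)**N_SUM / N_SUM * kn(1, N_SUM * m / T))
    return d

# Zero-temperature roots: reproduces m1 ~ 0.479 and m2 ~ 0.743 Lambda_c.
m1 = brentq(dp_dm, 0.2, 0.6, args=(0.0,))
m2 = brentq(dp_dm, 0.6, 1.2, args=(0.0,))

# At finite T the same root finder yields the in-medium masses; comparing
# pressure(root, T) with the near-massless branch pressure(1e-6, T) locates
# the first-order transition, near T ~ 0.139 in these units.
\end{verbatim}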
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth]{pt_entropy.png}
\caption{The entropy density has a discontinuity at $T=T_c$, signaling a first-order phase transition.}
\label{fig: plot3}
\end{figure}
\section{Conclusion}
In this paper, we constructed two theories of fermionic interactions which share the same physical observables. The overlap of these theories gives a combined theory which is UV and IR complete, renormalizable, and fully solvable for all couplings. This theory is shown to have stable, meta-stable, and unstable phases, with a meta-stable state in the pressure and a first-order phase transition around \(T\approx 0.139\Lambda_c\). Generally, we found that at large \(N\) the gap equations and the pressures of our theories were invariant to sign changes in the coupling, as anticipated.
This work motivates several paths for future work. As mentioned in the introduction, the NJL model has a rich history, motivating the study of a model with marginal pseudo-scalar four-fermi interactions of the form $(\Bar{\psi}\gamma_5\psi)^2$. As the NJL model contains a continuous chiral symmetry instead of only a discrete one, it provides (pseudo-)Goldstone bosons to describe, for example, pions. If our model could be generalized to include a pseudo-scalar four-fermion channel, perhaps one could move beyond the drawbacks of the standard NJL model, namely its nonrenormalizability and a cutoff of about 700 MeV.
Further, we conjecture that similar methods can be used to add deformations of relevant fermionic operators to the marginal theory in \(3+1d\). Likewise, one could extend our theory (or a relevant version of our theory) to a finite chemical potential and look for a tricritical point that is phenomenologically similar to that of QCD.
Working in lower dimensions we also speculate that the benign nature of the poles found at large-\(N\) will hold in \(1+1d\). One could show this explicitly by looking at theories with changes in signs of the couplings. If so, we speculate such results could be used to experimentally see past critical scales associated with \(1+1d\) relativistic fermion interactions and spin chains.
Further, novel changes in the sign of the coupling at large-\(N\) can be explored in making UV and IR complete large-\(N\) theories for QED and electroweak theory, and likewise large-\(N\) toy models for QCD. This and many other papers have shown how the coupling independent expansion parameter of \(\frac{1}{N}\) can lead to fully solvable theories for all couplings, making the general topic of large-\(N\) theories a fascinating and fruitful subject with ample room for exploration.
\section{Acknowledgements}
We thank Paul Romatschke, Scott Lawrence, Ryan Weller, and Marcus Pinto for the abundance of insightful feedback and discussion. This work was supported by the DOE, award number DE-SC0017905.
\end{spacing}
\bibliographystyle{plain}
\section{Introduction}
\end{center}
The Gross–Neveu (GN) model is a relativistic field theory of four-fermi interactions. Four-fermi interactions are fundamental and fruitful in: providing models of superconductivity such as in BCS theory, giving an effective, or low energy, theory of nucleons and mesons via the NJL-model, and providing a dual to the quantum Sine-Gordon model via the Thirring model. However, BCS theory is non-relativistic, the Thirring model is nonrenormalizable in \(3+1d\) \cite{banerjee1997explicit}, and the NJL-model is nonrenormalizable in \(3+1d\), not UV complete. Nonetheless, given the successful history of four-fermi interactions, we would like to be able to write down a renormalizable and UV complete theory of such interactions to act as a toy model for QCD in \(3+1d\).
To do so we turn to a large-N Gross-Neveu-inspired model in \(3+1d\). In \(1+1d\) the GN-model is a well-known toy model for QCD exhibiting asymptotic freedom and a negative \(\beta\)-function, and in \(2+1d\) the GN-model mimics planar superconductors. Indeed, in \(1+1d\) the large-\(N\) GN model is likewise asymptotically free and renormalizable \cite{moshe2003quantum}\cite{gracey1990three}\cite{gross1974dynamical}. However, in \(1+1d\) the running coupling contains a pole when approaching the IR, demanding an IR cutoff \cite{moshe2003quantum}. Further, when working in \(3+1d\), four-fermi interactions cannot be renormalizable via a running coupling (in the standard sense). Instead, a term containing a fractional power of the fermionic fields must be removed from the Lagrangian \cite{parisi1975theory}. Further, Moshe et al. \cite{moshe2003quantum} claims the non-renormalizability of four-fermi interactions in \(3+1d\) is due to the algebra of the \(\gamma\)-matrices which causes an infinite number of four-fermion interactions to mix under renormalization. Moshe et al. \cite{moshe2003quantum} also claims the GN model is renormalizable in \(1+1d\) because the symmetry of the model is effectively \(O(N)\). However, we note that the renormalizability of GN in \(1+1d\) can also be seen as an artifact of the theory's marginal coupling. With this, our strategy for renormalization starts by writing down a marginal GN-like model in \(3+1d\) with fractional interactions, which we claim is related to renormalization schemes suggested by Parisi \cite{parisi1975theory}. In doing so, our theory has many features of the GN model in \(1+1d\), such as renormalizability, a negative \(\beta\)-function, and a pole in the IR. Next, we make use of the Ai, Bender, and Sarkar (ABS) conjecture \cite{bender2022pt} by implementing a \(\mathcal{PT}\)-symmetric extension of the theory to ``see around" the pole found in the IR. Where ``see around" the pole means: 1) we can see into the UV using the standard theory, and we can see into the IR using the \(\mathcal{PT}\)-symmetric theory, and 2) poles in the running of the coupling do not affect the theories observables and both theories have identical observables. Points 1) and 2) will be fully illustrated in section \ref{section 3}. Together the strategies of fractional interactions and \(\mathcal{PT}\)-symmetric extensions yield two fully renormalized fermionic theories in \(3+1d\) with identical physics, which together give a complete theory which is UV and IR complete.
A brief discussion is warranted around the topic of large-\(N\) theories with fractional interactions. Indeed, standard perturbation theory is not possible with fractional interaction terms \cite{peskin2018introduction}. However, in large-\(N\) theories \(\frac{1}{N}\) acts as a coupling independent expansion parameter. The coupling independence of \(N\) gives way to renormalizable and solvable theories without the use of traditional perturbation theory \cite{moshe2003quantum}\cite{romatschke2019finite}. Recognition of the \(N\)-field expansion parameter as a way to tackle nonrenormalizable fields was first recognized by Parisi in the 1970s \cite{parisi1975theory}. However, many recent papers have taken advantage of the large-\(N\) expansion parameters to present theories which are analytically solvable for all couplings, generate transport coefficients, and calculate viscosities
\cite{pinto2020three}\cite{romatschke2019finite}\cite{romatschke2019thermal}\cite{romatschke2021shear}\cite{Weiner:2022vwa}.
Further, some large-\(N\) theories are both solvable and renormalizable with fractional interactions in arbitrary dimensions \cite{grable2022theremal}.
Next, we address the strategy of studying \(\mathcal{PT}\)-symmetric versions of related Hermitian theories. It has been known for several decades that \(\mathcal{PT}\)-symmetric extensions of Hermitian theories, which are themselves non-Hermitian, still possess a positive, discrete, and real eigenspectrum \cite{bender1999pt}. However recently Ai, Bender, and Sarkar gave a recipe for formulating \(\mathcal{PT}\)-symmetric field theories from cousin Hermitian theories via analytically continuing the coupling \cite{bender2022pt}:
\begin{equation}
\ln Z^{\mathcal{PT}} (g) = \frac{1}{2}\ln Z_{\text{Herm}}(\lambda \rightarrow{-g +i0^+}) + \frac{1}{2}\ln Z_{\text{Herm}}(\lambda \rightarrow{-g -i0^+}).
\end{equation}
Although \cite{bender2022pt} only explicitly covers scalar theories other works \cite{BenderFermi}\cite{Mavromatos:2021hpe} have shown the ability to generate \(\mathcal{PT}\)-symmetric fermionic theories. Particularly \cite{BenderFermi} has shown the existence of \(\mathcal{PT}\)-symmetric GN theory in \(3+1d\). However, we note that the product of \(\Bar{\Psi}\Psi\) is Hermitian, and therefore any product of the form \(g\big(\Bar{\Psi}\Psi\big)^\epsilon\) for any \(\epsilon\in \mathbb{R}\) is also Hermitian and is likewise \(\mathcal{PT}\)-symmetric. If we look at theory with interaction terms \(-g\big(\Bar{\Psi}\Psi\big)^\epsilon\) the problem is not that the Lagrangian is non-Hermitian, but that it is not asymptotically free or bounded from below. Indeed, the fact that the Lagrangian \(L(g)\) \textit{is} asymptotically free and maps to the local limit of a Yukawa model is Gross and Neveu's original motivation for defining the coupling to be positive \cite{gross1974dynamical}. However, in recent work, Ai, Bender, and Sarkar conjecture that \(\mathcal{PT}\)-symmetric theories which inherit ``the wrong coupling sign" are indeed well defined via analytic continuation. Regardless of whether this conjecture holds, the fact remains that large-\(N\) theories are immune to Landua poles, as their perturbative expansion is around \(\frac{1}{N}\) and not \(\lambda\). Further, a running coupling renormalization scheme absorbs any overall constants or signs in front of the coupling, leaving the theory invariant to such changes.
Or, put more boldly, Large-N theories are immune to the sickness of Hamiltonians which are unbounded from below. Further, Romatschke \cite{Romatschke2022} has recently shown that the \(\mathcal{PT}\)-symmetric analog of a large-\(N\) \(\phi^4\) theory in \(3+1d\) can be used to see through the pole found in the UV. This is due to the novel fact that flipping the sign of the coupling also flips the sign of the \(\beta\)-function during renormalization. Thus the \(\beta\)-function of the standard \(\phi^4\)-theory ends at a pole and the \(\beta\)-function of the cousin \(\mathcal{PT}\)-symmetric theory begins at that pole and then exhibits asymptotic freedom into the UV. The catch is that both theories generate the same thermodynamics, and therefore one can now follow the running coupling through the pole into UV with the use of \(\mathcal{PT}\)-symmetric theory.
\begin{center}
\item\section{A Large-$N$ Fermionic Model}
\end{center}
Consider a marginal \(U(N)\) Fermionic theory in \(d\)-dimensions where \(\psi_f = (\psi_1, \psi_2 ... \psi_N\)) in the limit \(N\rightarrow \infty\), with the Euclidean action
\begin{equation}\label{eq1}
A^{(d)} = \int d^dx \bigg[\Bar{\psi}_f\left(\slashed{\partial}+m_b\right) \psi_f + \bigg(\frac{\alpha}{N}\bigg)^\frac{1}{d-1}\big(\Bar{\psi}_f\psi_f )^{\frac{d}{d-1}}\bigg],
\end{equation}
where \(\psi_f(x)\) with \(f = 1,2,...N\) are \(N\times 4\)-component Dirac spinors, and \(m_b\) is the bare mass. The sign convention of the coupling follows \cite{diab2016and} and \cite{parisi1975theory}, and allows us to smoothly connect to a Yukawa-Gross-Neveu model in \(d=4-\epsilon\) dimensions. To calculate the partition function \(Z^{(d)}(T) = \int \mathcal{D} \Bar{\psi}\mathcal{D}\psi e^{-A^{(d)}}\) of an \(U(N)\) model we multiply by a convenient factor of 1 \cite{grable2022theremal},
\begin{equation}\label{eq2}
1= \int\mathcal{D}\sigma\mathcal{D}\xi e^{\int_x\int_0^\beta i\xi(\sigma - \frac{\Bar{\psi}_f\psi_f}{N})},
\end{equation}
and let the Euclidean dimension of time be compactified to a thermal circle of radius \(\beta\) \cite{laine2016basics}.
For simplicity we let \( \xi \rightarrow iN\times \xi\), and \(\sigma \rightarrow \frac{\sigma}{\alpha}\) to evenly distribute the coupling in \(Z(T)\). Next, we take saddle point integrals around the mean-field values of \(\xi\) and \(\sigma\) (\({\xi_0}\) and \(\sigma_0\)), which are exact in the large N limit. Applying the saddle point condition for the \(\sigma\) field and keeping only the positive solution in terms of \(\xi_0\) (which is equivalent to keeping only positive effective mass terms) \(Z(T)\) is given as,
\begin{equation}\label{eq3}
\begin{split}
Z^{(d)}(T) = \int \mathcal{D}\bar{\psi} \mathcal{D}{\psi} d\xi_0 \exp \bigg[-N\int_x dx \Bigg( \Bar{\psi} \big(\slashed{\partial} +m_b + \xi_0 \big)\psi - \frac{\xi_0^{d}}{ \alpha}\bigg(\frac{1}{d}\bigg)\bigg(\frac{d-1}{d}\bigg)^{d-1}\Bigg)\bigg].
\end{split}
\end{equation}
Since \(\xi_0\) is a constant, the integral over \(\xi_0\) can be found via saddle point integration and is exact in the large-\(N\) limit. Further, at finite temperature, the \(\slashed \partial\)-operator can be diagonalized in terms of its fermionic Matsubara frequencies \(\omega^f_n = 2\pi T (n+\frac{1}{2})\) with \(n \in \mathbb{Z}\). After diagonalizing \(\slashed \partial\) and integrating \(\xi_0\) we let \(\xi_0\rightarrow m\), in noting \(\xi_0\) acts as the effective mass. Up to an overall constant, the partition function is now,
\begin{equation}\label{eq5}
Z^{(d)}(T)=\exp\bigg[ N\beta V\Bigg( p^{(d)}_{\text{free}}\big(T,m+m_b\big) + \frac{m^{d}}{ \alpha}\bigg(\frac{1}{d}\bigg)\bigg(\frac{d-1}{d}\bigg)^{d-1}\Bigg)\bigg],
\end{equation}
where \(\beta V\) is the spacetime volume and \(N\) is still the number of fermionic fields. Further, \(p^{(d)}_{\text{free}} \big(T,m\big)\) is the pressure of a free fermionic field theory at temperature \(T\) in \(d\) dimensions, which is given by,
\begin{equation}\label{eq6}
p^{(d)}_{\text{free}}(T,m)= d\int \frac{d^{d-1}p}{(2\pi)^{d-1}}\bigg[\frac{E_p}{2} + T\ln (1 + e^{-\beta E_p})\bigg].
\end{equation}
The partition function in \(3+1d\) is then
\begin{equation}
Z= e^{N\beta V \big(4J(T,m+m_b) +\frac{m^4}{4 \alpha}(\frac{3}{4})^3\big)}.
\end{equation}
Where in \(3+1d\)
\begin{equation}
J(T,m) = \int d^3p \bigg[\frac{E_p}{2} + T\ln(1+ e^{-\beta E_p})\bigg],
\end{equation}
and \(E_p = \sqrt{\Vec{p^2}+(m+m_b)^2}\). Using dimensional regularization by perturbing around the dimension parameter \(d\), such that \(d= 4-2\epsilon\), \(J(T,m)\) evaluates to
\begin{equation}
J(T,m)=-\frac{m^4}{64\pi^2}\left[\frac{1}{\epsilon}+\ln\left(\frac{\Bar{\mu}^2e^{3/2}}{m^2}\right)\right]-\frac{m^2T^2}{2\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right),
\end{equation}
where \(K_2(\cdot)\) denotes the modified Bessel function of the second kind.
In adding a renormalization scale \(\Bar{\mu}\), in the \(\overline{MS}\) scheme the pressure per fermionic component is
\footnote{
We note the pressure in the form of equation \eqref{eq10} is reminiscent of the usual GN-model where the the 2nd and 3rd terms represent a gas of quasiparticles of an effective mass \(m\).}
\begin{equation}\label{eq10}
p^{(4)}(T,m)=\frac{ (m-m_b)^4}{4\alpha}\bigg(\frac{3}{4}\bigg)^3-\frac{m^4}{16\pi^2}\left[\frac{1}{\epsilon}+\ln\left(\frac{\Bar{\mu}^2e^{3/2}}{m^2}\right)\right]-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right).
\end{equation}
Next, the pressure can then be non-perturbatively renormalized by absorbing \(\epsilon\) into the coupling such that
\begin{equation}
\frac{m^4}{4\alpha}\bigg(\frac{3}{4}\bigg)^3- \frac{m^4}{16\pi^2}\frac{1}{\epsilon} = \frac{m^4}{4\alpha_R(\Bar{\mu})}
\end{equation}
where \(\alpha_R\) is the renormalized coupling. With this, the zero temperature piece of the pressure is
\begin{equation}
p^{(4)}(T=0,m) = \frac{m^4}{4\alpha_R(\bar{\mu})}-\frac{m^4}{16\pi^2}\ln\left(\frac{\Bar{\mu}^2e^{3/2}}{m^2}\right),
\end{equation}
and has a corresponding negative \(\beta\)-function of
\begin{equation}
\beta(\alpha_R)\equiv\frac{\partial \alpha_R(\Bar{\mu})}{\partial \ln(\Bar{\mu})} = -\frac{\alpha^2_R(\bar{\mu})}{2\pi^2}.
\end{equation}
Upon integration, there is a pole at some cut-off \(\Lambda_{c}\),
\begin{equation}
\alpha_R(\Bar{\mu}) =\frac{4\pi^2}{\ln\left(\frac{\Bar{\mu}^2}{\Lambda^2_{c}}\right)}.
\end{equation}
We note that our result for \(\beta(\alpha_R)\) is \(-\beta(\lambda_R)\) where \(\beta(\lambda_R)\) is the \(\beta\)-function for a large-\(N\) scalar with quartic interactions given by \cite{Romatschke2022}. The renormalized pressure in terms of \(\alpha_R(\Bar{\mu})\) now reads
\begin{align}\label{eq15}
p^{(4)}(m,T)=&\frac{ -4 m^3 m_0+6 m^2 m_0^2+m_0^4-4 m m_0^3}{4\alpha}\bigg(\frac{3}{4}\bigg)^3 \\
&-\frac{m^4}{16\pi^2}\ln\left(\frac{\Lambda_c^2 e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right). \nonumber
\end{align}
However \(\frac{1}{\alpha}\) diverges as \(\epsilon\rightarrow 0\), therefore the remaining term which mixes bare and effective masses must be renormalized. To do this let
\begin{equation}
\frac{m_b}{\alpha}\bigg(\frac{3}{4}\bigg)^3 = \frac{m_R(\Bar{\mu})}{\alpha_R(\Bar{\mu})}.
\end{equation}
In demanding \(\frac{\partial p(m)}{\partial \ln (\Bar{\mu})}=0\) we have the expression
\begin{align}
p(m,T)=&-m^3\frac{m_R(\Bar{\mu})}{\alpha_R(\Bar{\mu})}+\frac{6 m^2 m_0^2+m_0^4-4 m m_0^3}{4\alpha}\bigg(\frac{3}{4}\bigg)^3 \\
&-\frac{m^4}{16\pi^2}\ln\left(\frac{\Lambda_c^2 e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right)\nonumber.
\end{align}
Now looking at higher powers of \(m_0\) we note
\begin{equation}
\begin{split}
&\frac{m_b^2}{\alpha}=\bigg(\frac{m_R(\Bar{\mu})}{\alpha_R(\Bar{\mu})}\bigg)^2 \alpha
= \text{Const}^2\frac{1}{\frac{1}{\alpha_R(\Bar{\mu})} + \frac{1}{4\pi^2\epsilon}}.
\end{split}
\end{equation}
Thus higher powers of the bare mass \(m_0\) are vanishing under our renormalization scheme as \(\epsilon \rightarrow 0\) and the renormalized pressure becomes
\begin{equation}
\begin{split}
&p(m,T)=-m^3\frac{ m_R(\Bar{\mu})}{\alpha_R(\Bar{\mu})}-\frac{m^4}{16\pi^2}\ln\left(\frac{\Lambda_c^2 e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right).
\end{split}
\end{equation}
The condition \(\frac{\partial p_\text{ren}}{\partial\ln\Bar{\mu}}=0\) is fulfilled for
\begin{equation}
m_R(\Bar{\mu}) = \frac{\text{const}}{\ln\big(\frac{\Bar{\mu}^2}{\Lambda^2}\big)}
\end{equation}
and we can define the constant $M\equiv \frac{m_R}{\alpha_R}$ as the renormalized bare mass. Finally, the renormalized pressure is given as,
\begin{equation}
p(m,T)= -m^3 M-\frac{m^4}{16\pi^2}\ln\left(\frac{\Lambda_c^2 e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right).
\end{equation}
\section{The $\mathcal{PT}$ Symmetric extension at Large-\(N\)}\label{section 3}
We now consider the \(\mathcal{PT}\)-symmetric version of equation \eqref{eq5} by analytically continuing $\alpha\rightarrow -g\pm i0^+$ such that
\begin{equation}
\ln Z^{\mathcal{PT}} (g) = \frac{1}{2}\ln Z_{\text{Herm}}(\alpha \rightarrow{-g +i0^+}) + \frac{1}{2}\ln Z_{\text{Herm}}(\lambda \rightarrow{-g -i0^+}),
\end{equation}
as conjectured by \cite{bender2022pt}. The analytically continued partition function is now
\begin{equation}\label{eq4}
Z^{(d)}(T)=\exp\bigg[ N\beta V\Bigg( p^{(d)}_{\text{free}}\big(T,m\big) - \frac{m^{d}}{ (g\pm i0^+)}\bigg(\frac{1}{d}\bigg)\bigg(\frac{d-1}{d}\bigg)^{d-1}\Bigg)\bigg],
\end{equation}
where in this expression we have let the bare mass \(m_0 =0\) for simplicity, provided we know how to reinsert the renormalized bare mass when needed from the previous section. Under dimensional regularization the pressure of the \(\mathcal{PT}\)-symmetric theory is given as
\begin{equation}
p(m,T)=-\frac{m^4}{4g}\bigg(\frac{3}{4}\bigg)^3-\frac{m^4}{16\pi^2}\left[\frac{1}{\epsilon}+\ln\left(\frac{\Bar{\mu}^2e^{3/2}}{m^2}\right)\right]-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right),
\end{equation}
and is non-perturbatively renormalized through the coupling parameter by letting
\begin{equation}
\frac{m^4}{4g_R(\Bar{\mu})}=\frac{m^4}{4g}\bigg(\frac{3}{4}\bigg)^3+\frac{m^4}{16\pi^2\epsilon},
\end{equation}
were \(g_R(\Bar{\mu})\) is the renormalized coupling. Now as a consequence of analytically continuing the coupling the \(\beta\)-function is positive and decaying into the IR,
\begin{equation}
\begin{split}
&\beta(g_R) = \frac{g^2_R(\bar{\mu})}{2\pi^2}, \quad\text{and}\quad
g_R(\Bar{\mu})=-\frac{4\pi^2}{\ln\left(\frac{\Bar{\mu}^2}{\Lambda^2_{c}}\right)} \qquad \bar{\mu}<\Lambda_c,
\end{split}
\end{equation}
yielding a theory that is well-defined in the low momentum limit. In re-inserting the renormalized bare mass \(m_0^{'}\) and likewise re-inserting the analytic continuation of the coupling the pressure becomes
\begin{equation}
p(m,T) = -m^3 M -\frac{m^4}{16\pi^2}\ln\left(\frac{(\Lambda_c^2-i0^+)e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right).
\end{equation}
We note that the analytic continuation of the coupling gets removed via the coupling renormalization scheme. With this, the only physical parameter that remains in place of the coupling is the cut-off \(\Lambda\), which now carries the analytic continuation.
By taking the real part of the analytically continued partition function the renormalized pressure is given as
\begin{equation}\label{eq22}
p(m,T)=-m^3 M-\frac{m^4}{16\pi^2}\ln\left(\frac{\Lambda_c^2 e^{3/2}}{m^2}\right)-\frac{2m^2T^2}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}K_2\left(n\beta m\right).
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth]{pt_mass_pressure.png}
\caption{\label{graph}The above figure gives the pressure per component of a Large-\(N\) theory of interacting fermions as a function of the effective mass, $m$. The plot is at a specific temperature \(T=.06\Lambda_c\) and mass scale \(M = -.01\Lambda_c\). We see that a non-zero solution to the saddle point equation (\ref{eq24}), around $m\approx 0.48\Lambda_c$, is thermodynamically favored.}
\label{fig: plot1}
\end{figure}
Likewise the entropy density is calculated by the thermodynamic relation $s\equiv\frac{\partial p}{\partial T}$, to give
\begin{equation}
s(m,T)= -\frac{2m^3}{\pi^2}\sum_{n=1}^{\infty}\frac{(-1)^nK_3(n\beta m)}{n},
\end{equation}
where $m$ is the solution to the gap equation.
Note that equation \eqref{eq15} is identical to equation \eqref{eq22}, thus the thermodynamics of the original theory, and the \(\mathcal{PT}\)-symmetric theory are identical. In this guise, we can view the coupling in the IR through the lens of the \(\mathcal{PT}\)-symmetric extension, and likewise the coupling in the UV can be viewed through the lens of the original theory. In this, we can see through the pole in running coupling and the theory becomes well-defined in both the IR and UV limits.
\section{Solving the Gap Equations of the \(\mathcal{PT}\)-Symmetric Model}
In both models, we have taken saddle point integrals around the auxiliary fields, and we have yet to consider gap equations of either theory. Generally, the partition function can be written in the large $N$ limit as
\begin{equation}
\lim_{N\gg 1}Z\left(\lambda\rightarrow -g+i0^+\right) = \sum_{i=1}^K e^{N\beta V p(m_i)},
\end{equation}
where $K$ enumerates the saddle points at each $m_i$ which minimizes the pressure. In taking the derivative of the dimensionally regulated pressure with respect to \(m\) (which is generated by the auxiliary field) the saddle point condition of the \(\mathcal{PT}\)-symmetric theory is given by
\begin{equation}\label{eq24}
0=\frac{d p}{d m}=-3 M m^2-\frac{m^3 \left(1+\log \left(\frac{\Lambda_c ^2-i0^+}{m^2}\right)\right)}{4 \pi ^2}+\frac{2m^2 T}{\pi^2}\sum _{n=1}^{\infty } \frac{(-1)^n K_1(m n \beta )}{n}.
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth]{pt_pressure1.png}
\caption{\label{graph} This plot shows the $m_0=0$ solution to the gap equation becomes thermodynamically favored around \(T=0.139\Lambda_c\). }
\label{fig: plot2}
\end{figure}
We first consider the analytic gap equations at zero temperature and zero bare mass \(T=M=0\). With this, the gap equation is given by
\begin{equation}
m^3\log \left( \frac{\Lambda_c^2 e^1}{m^2} \right)=0,
\end{equation}
and has two solutions:
\begin{equation}\label{26}
m_0(T=0)=0,\quad\text{and} \quad m_1(T=0)=\sqrt{e}\Lambda_c.
\end{equation}
We note that \(m_1\) depends on the scale \(\Lambda_c\) and \(\Lambda_c\) depends on the coupling \(\alpha_R\). Thus, \(m_1\) is explicitly related to the GN-model, and must be physical. Now the partition function evaluates to
\begin{equation}
\lim_{N\gg 1}Z\left(\lambda=-g,T=0\right) = e^{-N\beta V \Lambda_c^4 e^2/32\pi^2} + 1.
\end{equation}
We note that zero temperature gap equations of our fermionic theory are equivalent for not only both Hermitian and \(\mathcal{PT}\)-symmetric theories, but they also match the zero temperature gap solutions of \(\phi\)-four scalar theory in \(3+1d\) given by \cite{Romatschke2022}.
Next, we consider re-inserting the bare mass term. The zero-temperature solutions to our gap equations are now given by
\begin{equation}
m = \frac{6 \pi ^2 M}{W\left(-6 \pi ^2 \sqrt{\frac{M^2}{e\Lambda_c ^2}}\right)}\quad \& \quad m= \frac{6 \pi ^2 M}{W\left(6 \pi ^2 \sqrt{\frac{M^2}{e\Lambda_c ^2}}\right)}.
\end{equation}
Where \(W(\cdot)\) is the Lambert \(W\)-functuion. In letting \(M=-.01\Lambda_c\) we find three solutions to the zero temperature gap equation at \(m_0=0\), \(m_1=0.47945\Lambda_c\), and \(m_2=0.74319\Lambda_c\), shown in figure (\ref{fig: plot1}). At finite temperatures, the in-medium mass solutions to (\ref{eq24}) can be solved numerically. The pressure can then be evaluated at these solutions to determine which ones are thermodynamically favored and which ones are meta-stable. With this, the pressure is greatest for the temperature-dependent solution $m_1(T)$ up until \(T_c\approx 0.139\Lambda_c\) when the \(m_0=0\) solution becomes thermodynamically favored as shown in figure \ref{fig: plot2}. When we evaluate the entropy density we find a first-order phase transition which can be seen by a discontinuity at $T_c$ in figure \ref{fig: plot3}.
\begin{figure}[h]
\centering
\includegraphics[width=.9\textwidth]{pt_entropy.png}
\caption{The entropy density has a discontinuity at $T=T_c$ signaling a first-order phase transition.}
\label{fig: plot3}
\end{figure}
\section{Conclusion}
In this paper, we constructed two theories of fermionic interactions which share the same physical observables. The overlap of these theories gives a combined theory which is UV and IR complete, renormalizable, and fully solvable for all couplings. This theory is shown to have stable, meta-stable, and unstable phases, which are separated by a first-order phase transition. Further, our theory contains a meta-stable state in the pressure and a first-order phase transition around \(T=0.139\Lambda_c\). Generally, we found that at large \(N\) the gap equations and the pressures of our theories were invariant under sign changes of the coupling, as anticipated.
This work motivates several paths for future work. As mentioned in the introduction, the NJL model has a rich history, motivating the study of a model with marginal pseudo-scalar four-fermi interactions of the form $(\Bar{\psi}\gamma_5\psi)^2$. Because the NJL model contains a continuous chiral symmetry instead of only a discrete one, it provides (pseudo-)Goldstone bosons to describe pions, for example. If our model could be generalized to include a pseudo-scalar four-fermion channel, perhaps one could move beyond the drawbacks of the standard NJL model, namely its nonrenormalizability and its cutoff of about 700 MeV.
Further, we conjecture that similar methods can be used to add deformations of relevant fermionic operators to the marginal theory in \(3+1d\). Likewise, one could extend our theory (or a relevant version of it) to a finite chemical potential and look for a tricritical point that is phenomenologically similar to that of QCD.
Working in lower dimensions, we also speculate that the benign nature of the poles found at large-\(N\) will hold in \(1+1d\). One could show this explicitly by looking at theories with sign changes of the couplings. If so, we speculate such results could be used to experimentally see past critical scales associated with \(1+1d\) relativistic fermion interactions and spin chains.
Further, novel changes in the sign of the coupling at large-\(N\) can be explored in constructing UV and IR complete large-\(N\) theories for QED and electroweak theory, and likewise large-\(N\) toy models for QCD. This paper and many others have shown how the coupling-independent expansion parameter of \(\frac{1}{N}\) can lead to fully solvable theories for all couplings, making the general topic of large-\(N\) theories a fascinating and fruitful subject with ample room for exploration.
\section{Acknowledgements}
We thank Paul Romatschke, Scott Lawrence, Ryan Weller, and Marcus Pinto for the abundance of insightful feedback and discussion. This work was supported by the DOE, award number DE-SC0017905.
\end{spacing}
\bibliographystyle{plain}
|
{
"arxiv_id": "2302.08668",
"language": "en",
"timestamp": "2023-02-20T02:05:54",
"url": "https://arxiv.org/abs/2302.08668",
"yymm": "2302"
} | \section{Introduction}
When a novel and dangerous disease unfolds, governments often
implement a wide range of non-pharmaceutical interventions (NPIs) to decrease the burden on health care
services~\cite{flaxman2020estimating,fong2020nonpharmaceutical,markel2007nonpharmaceutical}. These
interventions include, for example, travel bans, quarantine measures,
and school closures. Epidemiological studies have shown that the
spread of contagious diseases depends on multiple factors, including
the network of face-to-face
contacts~\cite{de2004sexual,susswein2020characterizing,luke2007network}. Therefore,
studying the effect of different network structures on the spread of epidemics
becomes essential to develop more effective interventions.
In the last few years, several mathematical models have been proposed
to study NPIs in complex
networks~\cite{wang2015coupled,wang2021literature,kojaku2021effectiveness,rizi2022epidemic}. For
example, St-Onge et al.~\cite{st2021social,st2021master} have recently
explored a susceptible-infected-susceptible (SIS) model on networks
with cliques (defined as groups where all members are connected to
each other) and proposed a mitigation strategy that consists in
reducing the maximum clique size. They found that the total fraction of
infected people decreases as the maximum clique size is reduced. Another NPI that
has been extensively studied in the field of complex networks is the
rewiring strategy in which susceptible individuals protect themselves by
breaking their links with infected contacts and creating new ones with
non-infectious people~\cite{gross2006epidemic}. Interestingly, recent
work has shown that this strategy can lead to an explosive epidemic
for a susceptible-infected-recovered (SIR)
model~\cite{durrett2022susceptible,ball2022epidemics}.
Several works have also explored the effect of different quarantine
strategies on the spread of
epidemics~\cite{strona2018rapid,vyasarayani2020complete,hasegawa2017efficiency}. For
example, Hasegawa and Nemoto~\cite{hasegawa2017efficiency}
investigated a susceptible-infected-recovered-quarantined (SIRQ) model
with a ``prompt quarantine strategy'' that works as follows. At each
time step, after individuals become infected, they are immediately
detected with probability $f$, and then the detected ones and their contacts are
placed under quarantine. In that work, they showed (for networks without cliques) that
the probability of an epidemic and the proportion of recovered
people undergo a continuous phase transition. On the other hand,
very recently, B\"orner et al.~\cite{borner2022explosive} studied an
SIRQ model with a quarantine strategy that becomes less effective over
time. More specifically, they considered the case in which the rate at
which individuals are quarantined decreases as the total number of
infected people increases. For a mean-field model (corresponding to a
homogeneously well-mixed population), they showed that the proportion
of recovered people at the final stage could exhibit a discontinuous
transition. However, they also observed that the probability of an
epidemic vanishes continuously around this transition point, so their
model exhibits features of both continuous and discontinuous phase
transitions.
Following the line of research on non-pharmaceutical interventions, in
this manuscript, we investigate an SIRQ model with a prompt quarantine
strategy on random networks with cliques. On the one hand, numerical
simulations show that the probability of an epidemic ($\Pi$) vanishes
continuously at a transition point $f=f_c$ (i.e., the probability of a small outbreak, $1-\Pi$, goes to one at $f=f_c$). However, numerical simulations also reveal that
the fraction of recovered people ($R$) is abruptly suppressed
around $f=f_c$, so our model displays features of both continuous and
discontinuous phase transitions as in~\cite{borner2022explosive}. Note that this result is markedly
different from the case without cliques, where only a continuous phase
transition was observed~\cite{hasegawa2017efficiency}, as mentioned
above. Finally, we find that our model exhibits the phenomenon of
backward bifurcation. In order to elucidate the origin of these
results, we explore the spread dynamics close to the transition point,
and numerical simulations suggest that the quarantine strategy becomes
less effective over time, which may explain why our model exhibits the
same behavior as in~\cite{borner2022explosive}.
This manuscript
is organized as follows. In Sec.~\ref{Sec.Model}, we describe the details of our model. In
Secs.~\ref{Sec.FinS}-\ref{sec.prob}, we investigate the final stage of
an epidemic and the probability of small outbreaks ($1-\Pi$) when only one person is infected at the beginning of the
outbreak. In the
following section, we explore the final stage of an epidemic when a large
proportion of the population is infected at the beginning of the
spreading process. Finally, we display our conclusions.
\section{Model description}\label{Sec.Model}
\subsection{Network with cliques}\label{Sec.Bip}
Networks with cliques can be represented as bipartite networks (as illustrated in Fig.~\ref{fig.Bip}). In this work, we will focus on bipartite networks that are locally tree-like because they have two main advantages. Firstly,
they can be easily generated by using a version of the configuration
model~\cite{molloy1998size,molloy1995critical}, and secondly, they
simplify the analytical treatment, as explained in~\cite{karrer2010random}.
To generate these networks, we apply the following steps:
\begin{itemize}
\item Step 1) We create two disjoint sets, denoted by $I$ and $C$, where
$I$ corresponds to the set of individuals and $C$ represents the set of
cliques. The total numbers of individuals and cliques are denoted by
$N_I$ and $N_C$, respectively.
\item Step 2) We randomly assign a number $k_I$ of cliques (or
``stubs'') to every person according to a probability distribution
$P(k_I)$. Similarly, we assign a number $k_C$ of individuals (or
``stubs'') to every clique according to a probability distribution
$P(k_C)$. Initially, each stub is unmatched. We denote the total
number of stubs in sets $I$ and $C$, by $\mathcal{S}_I$ and
$\mathcal{S}_C$, respectively. In the limit of large network sizes,
the relation $\mathcal{S}_C=\mathcal{S}_I$ holds (as explained in~\cite{newman2002spread}). Additionally, in this limit, we have
that $\mathcal{S}_I=\langle k_I\rangle N_I$ and
$\mathcal{S}_C=\langle k_C\rangle N_C$, where $\langle k_I\rangle
=\sum_{k_I} k_IP(k_I)$ and $\langle k_C\rangle =\sum_{k_C} k_C
P(k_C)$.
\item Step 3) In practice, for finite networks, if $|\mathcal{S}_C-\mathcal{S}_I|<0.01
\langle k_I\rangle N_I$ then we proceed as follows. We randomly
choose one stub from each set and join them together to make a
complete link (but avoiding multiple connections between individuals
and cliques). This procedure is repeated until one of these sets is
empty. On the other hand, if $|\mathcal{S}_C-\mathcal{S}_I|>0.01
\langle k_I\rangle N_I$, our algorithm returns to Step 1.
\item Step 4) Finally, we eliminate those stubs that remained
unmatched from the previous step, and project the set of cliques
onto the set of individuals, as illustrated in Fig.~\ref{fig.Bip}.
\end{itemize}
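The following Python sketch (our own illustration, not the generator used for the results below) implements the random-regular special case of this construction, where every individual belongs to $k_I$ cliques and every clique has $k_C$ members; for brevity it silently merges the rare duplicate individual--clique pairs instead of re-matching them as in Step 3.
\begin{verbatim}
# Sketch: configuration-model construction of a bipartite network with
# cliques for the random-regular case (k_I stubs per individual, k_C stubs
# per clique), returning the clique membership sets.
import random
from collections import defaultdict

def rr_clique_network(n_individuals=7000, k_I=3, k_C=7, seed=0):
    rng = random.Random(seed)
    n_cliques = n_individuals * k_I // k_C     # balances the two stub counts
    ind_stubs = [i for i in range(n_individuals) for _ in range(k_I)]
    cli_stubs = [c for c in range(n_cliques) for _ in range(k_C)]
    rng.shuffle(ind_stubs)
    rng.shuffle(cli_stubs)
    members = defaultdict(set)                 # clique id -> set of member ids
    for i, c in zip(ind_stubs, cli_stubs):
        members[c].add(i)                      # duplicate pairs merge silently
    return members
\end{verbatim}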
\begin{figure}[H]
\vspace{0.5cm}
\begin{center}
\begin{overpic}[scale=1.5]{Fig01.eps}
\put(0,65){(a)}
\put(60,65){(b)}
\end{overpic}
\vspace{1cm}
\end{center}
\caption{(Color online) Illustration of a bipartite network (panel a) and its
projection (panel b). Each blue node represents a clique and each
light blue node represents an individual.}\label{fig.Bip}
\end{figure}
\subsection{Susceptible-infected-recovered-quarantined model}\label{Sec.SIRQ}
Let us first introduce the susceptible-infected-recovered model (SIR),
and some definitions.
The SIR model splits the population into three compartments called
susceptible ($S$), infected ($I$), and recovered ($R$). Here, the symbols $S$,
$I$, and $R$ refer to both the state of an individual and the proportion of
the population in each compartment, where $S+I+R=1$. For a
discrete-time SIR model, all individuals synchronously update their
states according to the following rules. At each time step, $t\to t+1$,
every infected individual
\begin{enumerate}
\item transmits the disease to each susceptible
neighbor with probability $\beta$,
\item recovers from the disease after
being infected for $t_r$ time steps (which is called the recovery
time) and becomes permanently immune. In this manuscript, we will use $t_r=1$.
\end{enumerate}
Typically, the spreading process starts with a single infected individual,
called the ``index-case'', and the rest of the population is
susceptible. The disease then spreads through the population until the
system reaches a final stage with only susceptible and recovered
individuals. If the disease dies out after a few time steps and only
an insignificant fraction of the population has become infected, then
such an event is defined as a small outbreak. Conversely, the outbreak
turns into an epidemic if the fraction of recovered people is
macroscopic at the final stage. In the last few years, several works
have also studied the case in which a macroscopic fraction $I_0$ of
the population is infected at the beginning of the spreading
process~\cite{radicchi2020epidemic,miller2014epidemics,krapivsky2021infection,machado2022effect,hasegawa2016outbreaks}. This
case is usually referred to as a non-trivial or large initial
condition.
A widely used measure to predict whether a disease will develop into a small outbreak or an epidemic is
the basic reproduction number $R_0$, defined as the average number of
secondary cases infected by the
index-case~\cite{brauer2008mathematical}. For a value of $R_0$ less
than one, the probability of a disease becoming an epidemic is known
to be zero ($\Pi=0$), while for $R_0$ greater than one, this
probability is positive ($\Pi>0$). Finally, around $R_0=1$, there is a
second-order phase transition where many quantities behave as
power-laws~\cite{stauffer2014introduction,grassberger1983critical}. For
example, at $R_0=1$ the probability distribution of the number of
recovered individuals for small outbreaks, denoted by $P(s)$, decays
algebraically as $P(s)\sim s^{-(\tau-1)}$, where $\tau$ is called the
Fisher exponent~\cite{stauffer2014introduction}.
As explained in the Introduction, an extension of the SIR model has
been proposed in~\cite{hasegawa2017efficiency}, which introduces
a $Q$ compartment in order to study the effect of a prompt quarantine
strategy on the epidemic spread. In this model, the states of the
nodes were updated asynchronously. However, in our work, we will
consider a synchronous version of that model in order to simplify the
analytical study. More precisely, our model works as follows: at time
$t$,
\begin{enumerate}
\item Each infected individual is detected and isolated with probability $f$,
i.e., detected individuals move to the $Q$ compartment.
\item Next, all the neighbors of the individuals who were
isolated in the previous step, also move to the $Q$ compartment.
\item After that, those infected individuals who have not been isolated, will transmit the disease to each
susceptible neighbor with probability $\beta$.
\item Finally, individuals who are still in the $I$ compartment and
have been infected for $t_r$ time steps, will recover. Likewise,
people who have been infected and then quarantined will move to the
$R$ compartment after $t_r$ time steps, so $R$ represents the
proportion of the population ever infected.
\end{enumerate}
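As an illustration of these rules, the Python sketch below (our own, assuming $t_r=1$ and the clique-membership representation from the network sketch in Sec.~\ref{Sec.Bip}) performs one synchronous update; quarantined susceptible individuals are labelled \texttt{Q}, and quarantined infected individuals are sent directly to \texttt{R}, which for $t_r=1$ is equivalent to rule 4.
\begin{verbatim}
# Sketch: one synchronous SIRQ update with t_r = 1.  `state` maps individual
# id -> "S", "I", "R", or "Q"; `cliques` maps clique id -> set of member ids;
# `rng` is a random.Random instance.
import random

def sirq_step(state, cliques, beta, f, rng):
    detected = {i for i, s in state.items() if s == "I" and rng.random() < f}
    quarantined = set(detected)
    for members in cliques.values():            # rule 2: quarantine whole cliques
        if members & detected:
            quarantined |= members
    new = dict(state)
    for i in quarantined:
        if state[i] == "I":
            new[i] = "R"                        # quarantined infected count as ever infected
        elif state[i] == "S":
            new[i] = "Q"
    for members in cliques.values():            # rule 3: transmission
        spreaders = [i for i in members
                     if state[i] == "I" and i not in quarantined]
        for j in members:
            if new[j] == "S" and any(rng.random() < beta for _ in spreaders):
                new[j] = "I"
    for i, s in state.items():                  # rule 4: recovery after t_r = 1
        if s == "I" and i not in quarantined:
            new[i] = "R"
    return new

# Example: state = sirq_step(state, cliques, beta=1.0, f=0.4, rng=random.Random(1))
\end{verbatim}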
Similar to the standard SIR model, at the final stage of the SIRQ
model, the population consists solely of susceptible, recovered and
quarantined individuals.
Note that, according to the rules of our model, it is sufficient to
detect a single infected person in a clique to quarantine the entire
clique. Therefore, larger (smaller) cliques have a higher (lower)
probability of being quarantined. From another perspective, our model could be seen as a spreading process in higher-order networks~\cite{battiston2020networks,battiston2021physics}
because the transition from a susceptible to a quarantined state is
not caused by pairwise interactions but rather by group
interactions. Typically, in models with higher-order structures, nodes become ``infected'' through group interactions, and after that, they transmit the ``infection'' to other nodes. However, it should be noted that in our model, quarantined individuals are removed from the system, so they cannot transmit their state to the rest of the population, unlike other contagion models with higher-order structures.
In the following sections, we will study our SIRQ model on networks
with cliques.
\section{Results}\label{Sec.Res}
\subsection{Final stage}\label{Sec.FinS}
In this section, we investigate the final stage of the SIRQ model for
random regular (RR) networks with cliques, defined as networks in
which every clique has $k_C$ members and every individual belongs to
$k_I$ cliques. We will show numerical results for RR networks with
$k_I=3$ and $k_C=7$, and focus only on the case
where a single individual is infected at the beginning of the dynamic
process. In Appendix~\ref{Sec.AppAddit}, we present additional results
for networks in which $k_C$ and $k_I$ follow other
probability distributions.
In Fig.~\ref{fig.Pinf}a, we show a scatter-plot of $R$ vs. $\beta$ for
several values of the probability of detection $f$. For low values of
$f$, we observe that the transition from an epidemic-free phase to an
epidemic phase is continuous. However, for $f\gtrsim 0.35$, we see
that as $\beta$ increases, another phase transition exists above which
the fraction of recovered individuals is abruptly suppressed. This
transition is also observed in other network topologies (see
Appendix~\ref{Sec.AppAddit}), especially in networks containing larger
cliques. In Sec.~\ref{Sec.back}, we will show that
around this transition point, a backward bifurcation occurs.
\begin{figure}[H]
\vspace{0.5cm}
\begin{center}
\begin{overpic}[scale=0.25]{Fig02a.eps}
\put(80,20){(a)}
\end{overpic}
\vspace{0.5cm}
\vspace{0.5cm}
\begin{overpic}[scale=0.25]{Fig02b.eps}
\put(80,20){(b)}
\end{overpic}
\vspace{0.0cm}
\begin{overpic}[scale=0.25]{Fig02c.eps}
\put(20,15){(c)}
\end{overpic}
\hspace{0.5cm}
\begin{overpic}[scale=0.25]{Fig02d.eps}
\put(20,20){(d)}
\end{overpic}
\vspace{1cm}
\vspace{0.0cm}
\end{center}
\caption{(Color online) Panel a: Scatter-plot of the fraction of
recovered people at the final stage, $R$, as a function of
$\beta$ for a RR network with $k_C=7$, $k_I=3$, $N_I=10^6$, and
different values of the probability of detection $f$. Results were
obtained from $10^3$ stochastic realizations. Panel b: $\langle s\rangle$ against $\beta$ for
$f=0.4$ and several values of $N_I$. Results were averaged over
$10^5$ realizations. The vertical arrow indicates the peak position $\beta_c$ of $\langle s\rangle$ for $N_I=8\cdot 10^5$. In the inset, we show the height of the peak of
$\langle s \rangle$, which we call $\langle s \rangle_{max}$ (in
log-log scale) for the same values of $N_I$ as in the main plot. The
dashed line corresponds to a power-law fit with an exponent of 0.46. Panel
c: distribution $P(s)$ for $\beta=0.78$, $f=0.4$, and $N_I=10^6$,
obtained from $3\cdot 10^5$ stochastic realizations (symbols). The solid
black line is a guide to the eye, and the dashed red line is a
power-law function with an exponent equal to $\tau-1=1.5$. Panel d: Probability
of a small outbreak, $1-\Pi$, against $\beta$ for the same parameter values
as in panel a. Results were averaged over $10^5$ stochastic
realizations.}\label{fig.Pinf}
\end{figure}
To delve deeper into the nature of the transition point at which $R$
is abruptly suppressed, we will study how small outbreaks behave
around this point. Here, we consider that a small outbreak occurs when the fraction of the recovered people is below 1\% at the final stage. Fig.~\ref{fig.Pinf}b shows the average number of
recovered individuals for small outbreaks $\langle s\rangle$ vs. $\beta$ for
$f=0.4$. Interestingly, we note that $\langle s\rangle$ exhibits a
peak around a value of $\beta$ that we call $\beta_c$, which roughly
corresponds to the point at which $R$ is abruptly suppressed (see
Fig.~\ref{fig.Pinf}a). Furthermore, the height of this peak increases
with $N_I$ as a power-law (see the inset of Fig.~\ref{fig.Pinf}b ),
which is a typical finite-size effect of a second-order phase
transition~\cite{stauffer2014introduction}. On the other hand,
Fig.~\ref{fig.Pinf}c shows the probability distribution of the number
of recovered individuals for outbreaks at $\beta=\beta_c$. It can be
seen that $P(s)$ decays as a power-law. Finally, in
Fig.~\ref{fig.Pinf}d, we display the probability of a small outbreak,
$1-\Pi$, as a function of $\beta$ (note that $\Pi$ is the
probability that an epidemic occurs), and we get that $1-\Pi$ goes
continuously to one around $\beta=\beta_c$, which again is a feature of other epidemic and percolation models in random networks
with a continuous phase
transition~\cite{kenah2007second,meyers2006predicting}. Therefore, if
we take together the results of Figs.\ref{fig.Pinf}b-d, they all
suggest that quantities associated with small outbreaks will exhibit properties of a continuous phase transition.
To provide a broader picture of the effect of our strategy on networks
with cliques, in Fig.~\ref{fig.Phase}, we show the heat-map of $R$ when an epidemic occurs in the plane
$\beta-f$. From this figure, we observe that there is a minimum detection
probability $f^*$, above which the system is always in an
epidemic-free phase. On the other hand, we also find that in the
region $\beta\lesssim 1$, an abrupt color change occurs around
$f\approx 0.4$, which indicates that the system undergoes a
discontinuous transition in that region of the parameter space.
\begin{figure}[H]
\vspace{0.5cm}
\begin{center}
\begin{overpic}[scale=0.25]{Fig03.eps}
\put(70,20){}
\end{overpic}
\vspace{1cm}
\vspace{0.0cm}
\end{center}
\caption{(Color online) Heat-map of $R$ in the plane $\beta-f$ for RR networks with cliques (with $k_C=7$ and $k_I=3$), obtained from numerical
simulations. To compute $R$, we took into account only
those realizations in which an epidemic occurs ($R>1$\%). Darker
colors represent a low value of $R$ (black corresponds to $R=0$) and
brighter colors a higher value of $R$ (yellow corresponds to
$R=1$). Simulation results were averaged over $10^3$ stochastic
realizations with $N_I=10^5$. The dashed white line was obtained from
Eq.~(\ref{eq.R0}) for $R_0=1$, and the point $f^*=0.51$ corresponds to
the value of $f$ above which the system is in an epidemic-free phase
for any value of $\beta$.}\label{fig.Phase}
\end{figure}
Next, we will compute the
basic reproduction number, $R_0$. As mentioned in
Sec.~\ref{Sec.SIRQ}, $R_0$ is a widely used quantity to predict
whether a disease outbreak will become an epidemic or die out
quickly, and typically, around $R_0=1$, a second-order phase transition occurs. In order to estimate $R_0$, we adapt the approach
proposed in~\cite{miller2009spread}, leading us to the
following expression for RR networks with cliques:
\begin{eqnarray}\label{eq.R0}
R_0&=&\frac{\epsilon_1+\epsilon_2}{\beta(k_C-1)},
\end{eqnarray}
with
\begin{eqnarray}
\epsilon_1&=&(k_C-1)(1-\beta)\left[(\beta(1-f)+(1-\beta))^{k_C-2}-(\beta(1-\beta)(1-f)+(1-\beta))^{k_C-2}\right],\\
\epsilon_2&=&(1-f)(1-\beta f)^{k_C-2}(k_I-1)(k_C-1)^2\beta^2.
\end{eqnarray}
In Eq.~(\ref{eq.R0}):
\begin{enumerate}
\item the denominator is the average number of individuals (within a clique) who
are infected by the index-case. We refer to these individuals as the ``first generation''.
\item the numerator corresponds to the average number of people who
are infected by the first generation. In Appendix~\ref{Sec.AppR0},
we explain how to derive the expressions of $\epsilon_1$ and
$\epsilon_2$.
\end{enumerate}
In Fig.~\ref{fig.Phase}, we plot the set of points ($\beta_c,f_c$) that satisfy the constraint $R_0=1$. In particular, for $\beta_c=1$, it
can be easily obtained from Eq.~(\ref{eq.R0}) that $f_c$ is given by:
\begin{eqnarray}\label{eq.fcR0}
f_c=1-\left(\frac{1}{(k_I-1)(k_C-1)}\right)^{\frac{1}{k_C-1}}.
\end{eqnarray}
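As a quick numerical illustration (ours, not part of the original analysis), the Python sketch below evaluates $R_0$ from Eq.~(\ref{eq.R0}) and the closed-form threshold in Eq.~(\ref{eq.fcR0}); for $k_I=3$ and $k_C=7$ it gives $f_c\approx 0.339$, consistent with the value used later in Sec.~\ref{Sec.back}.
\begin{verbatim}
# Sketch: basic reproduction number of Eq. (eq.R0) for RR networks with
# cliques, and the detection threshold f_c at beta = 1 from Eq. (eq.fcR0).
def R0(beta, f, kC=7, kI=3):
    eps1 = (kC - 1) * (1 - beta) * (
        (beta * (1 - f) + (1 - beta)) ** (kC - 2)
        - (beta * (1 - beta) * (1 - f) + (1 - beta)) ** (kC - 2))
    eps2 = (1 - f) * (1 - beta * f) ** (kC - 2) * (kI - 1) * (kC - 1) ** 2 * beta ** 2
    return (eps1 + eps2) / (beta * (kC - 1))

f_c = 1.0 - (1.0 / ((3 - 1) * (7 - 1))) ** (1.0 / (7 - 1))
print(round(f_c, 4))           # 0.3391
print(round(R0(1.0, f_c), 4))  # 1.0, i.e. on the R_0 = 1 boundary
\end{verbatim}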
Remarkably, from Fig.~\ref{fig.Phase}, we can see that the predicted
curve agrees well with the entire boundary between the
epidemic and epidemic-free phases, including in the region where a
discontinuous transition occurs. In Sec.~\ref{Sec.back}, we will
see that this result is consistent with a backward bifurcation phenomenon around
$R_0=1$.
In summary, in this section we found that the
SIRQ model on networks with cliques has a discontinuous phase
transition, but at the same time, several quantities (specifically, $\langle s \rangle$,
$1-\Pi$, and $P(s)$) display the same features of a
continuous phase transition. We note, however, that the results shown in this section were
obtained from simulations in finite networks and from approximate
formulas. In the following section, we will demonstrate that in the
thermodynamic limit ($N_I\to \infty$), the probability of a small outbreak
($1-\Pi$) goes continuously to one at the transition point for $\beta=1$.
\subsection{Probability of a small outbreak}\label{sec.prob}
In this section, we will describe the SIRQ model as a forward
branching process~\cite{valdez2020epidemic,kenah2007second}
to calculate the probability of a small outbreak $1-\Pi$ and the transition
point $f=f_c$ for $\beta=1$. Here, we focus only on RR networks with cliques,
but in Appendix~\ref{Sec.AppPi}, we compute these quantities for
other network structures.
Branching theory has been extensively applied to the study of many
processes on random networks, including cascading
failures~\cite{buldyrev2010catastrophic,valdez2020cascading}, disease
transmission~\cite{newman2002spread,pastor2015epidemic,
wang2018critical}, random
percolation~\cite{dong2021optimal,cohen2002structural}, k-core
percolation~\cite{baxter2011heterogeneous,di2019insights} and fractional
percolation~\cite{shang2014vulnerability,valdez2022emergent}. For
an SIR model, this theory was first applied to networks without
cliques to calculate the behavior of various quantities as a function
of $\beta$~\cite{newman2002spread,pastor2015epidemic}. Later on,
multiple works used branching theory to study the SIR model on
networks with
cliques~\cite{mann2021random,mann2021exact,allard2012bond,gleeson2009bond}. However,
their calculations were usually more complex because they required an
exhaustive enumeration of transmission events occurring within a clique with at
least one infected person. But, for $\beta=1$, these calculations can be
substantially simplified. This is because when
individuals become infected (in a clique composed of susceptible
members), then at the next time step, they will transmit the disease to
the rest of the clique members with probability one, unless an
intervention strategy is applied. Therefore, in what follows, we will
focus only on the case $\beta=1$.
To compute the probability of a small outbreak
$1-\Pi$, we first need to calculate the probability
$\phi$ that an infected individual (reached through a link) will not
generate an epidemic~\cite{kenah2007second,meyers2006predicting}. By using the branching process approach, it can
be found that $\phi$ is the solution of the following self-consistent
equation:
\begin{eqnarray}\label{eq.phi}
\phi&=&\left[ ((1-f)\phi)^{k_C-1}+1-(1-f)^{k_C-1}\right]^{k_I-1}.
\end{eqnarray}
The l.h.s of this equation is the probability that an infected
individual ``$j$'' reached through a link, does not cause an
epidemic. On the other hand, the r.h.s. is the probability
that an infected individual ``$j$'' transmits the disease, but none of
the $k_I-1$ outgoing cliques will be able to cause an epidemic. This is
because one of the following two events happens to every clique:
\begin{enumerate}
\item with probability $1-(1-f)^{k_C-1}$, at least one member (other than ``$j$'') is detected, so the whole clique is placed under quarantine,
\item with probability $((1-f)\phi)^{k_C-1}$, none of its members are detected, and none of them go on to generate an epidemic.
\end{enumerate}
After solving Eq.~(\ref{eq.phi}), the probability $1-\Pi$ that an
index-case does not cause an epidemic can be obtained from the following equation,
\begin{eqnarray}\label{eq.noPi}
1-\Pi&=&f+(1-f)\left[((1-f)\phi)^{k_C-1}+1-(1-f)^{k_C-1}\right]^{k_I},
\end{eqnarray}
where, on the r.h.s.:
\begin{enumerate}
\item the first term corresponds to the probability that the index-case is detected
\item the second term corresponds to the scenario where the index-case is
not detected and transmits the disease, but none of the $k_I$
outgoing cliques will be able to generate an epidemic, similarly as
in Eq.~(\ref{eq.phi}).
\end{enumerate}
It is worth noting that Eqs.~(\ref{eq.phi})-(\ref{eq.noPi}) are valid only if the initial fraction of index-cases is
infinitesimal.
Another quantity of interest that can be calculated in the limit of large network
sizes is the critical threshold $f_c$ at which a phase transition
occurs. To derive $f_c$, we take derivatives of both sides of Eq.~(\ref{eq.phi}) at
$\phi=1$ and obtain:
\begin{eqnarray}\label{eq.fcBranch}
f_c=1-\left(\frac{1}{(k_I-1)(k_C-1)}\right)^{\frac{1}{k_C-1}},
\end{eqnarray}
which has the same expression as in Eq.~(\ref{eq.fcR0}).
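For completeness, the theory above can be evaluated with a few lines of code; the sketch below (our own illustration) solves Eq.~(\ref{eq.phi}) by fixed-point iteration and then returns $1-\Pi$ from Eq.~(\ref{eq.noPi}), with the starting value and iteration settings chosen arbitrarily.
\begin{verbatim}
# Sketch: probability of a small outbreak for beta = 1 on RR networks with
# cliques, from the self-consistent equation for phi and Eq. (eq.noPi).
def small_outbreak_probability(f, kC=7, kI=3, tol=1e-12, max_iter=100000):
    phi = 0.5                                  # arbitrary starting point in (0, 1)
    for _ in range(max_iter):
        new_phi = (((1 - f) * phi) ** (kC - 1)
                   + 1 - (1 - f) ** (kC - 1)) ** (kI - 1)
        if abs(new_phi - phi) < tol:
            break
        phi = new_phi
    return f + (1 - f) * (((1 - f) * phi) ** (kC - 1)
                          + 1 - (1 - f) ** (kC - 1)) ** kI

print(small_outbreak_probability(0.2))   # below f_c: 1 - Pi < 1
print(small_outbreak_probability(0.4))   # above f_c ~ 0.339: 1 - Pi -> 1
\end{verbatim}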
To verify the validity of our theoretical analysis, we performed
numerical simulations of the SIRQ model on RR networks with
cliques. In Fig.~\ref{fig.TheorT1}a, we show the mean size of small
outbreaks $\langle s\rangle$ vs. $f$ for
different network sizes ($N_I$). It can be seen that as $N_I$
increases, the peak position of $\langle s\rangle$ (that we
call $f_c(N_I)$) converges to the critical threshold $f_c$ predicted
by Eq.~(\ref{eq.fcBranch}). On the other hand, in
Fig.~\ref{fig.TheorT1}b, we display the probability of a small outbreak,
$1-\Pi$, obtained from our simulations and theoretical predictions [see
Eqs.~(\ref{eq.phi})-(\ref{eq.noPi})]. As seen in this figure, the agreement
between theory and simulation is excellent. In addition, we observe
that $1-\Pi$ goes continuously to one (i.e., $\Pi\to 0$) at the critical
threshold $f=f_c$ predicted by Eq.~(\ref{eq.fcBranch}). Thus, our findings in this section provide further evidence that small outbreaks display features of a continuous phase
transition around $f=f_c$, as noted in the previous
section.
In the next section, we will investigate the effect of non-trivial
initial conditions on the final stage of the propagation process and
discuss the mechanism leading to the discontinuous transition observed
in Sec.~\ref{Sec.FinS}.
\begin{figure}[H]
\vspace{0.5cm}
\begin{center}
\begin{overpic}[scale=0.25]{Fig04a.eps}
\put(80,55){}
\end{overpic}
\hspace{1cm}
\begin{overpic}[scale=0.25]{Fig04b.eps}
\put(80,55){}
\end{overpic}
\vspace{1cm}
\end{center}
\caption{(Color online) Panel a: $\langle s\rangle$ vs.
$f$ for $\beta=1$ and RR networks with cliques ($k_C=7$ and $k_I=3$)
for different network sizes (from bottom to top:
$N_I=1.25\cdot 10^4,\;2.5\cdot 10^4,\;5\cdot 10^4,\;10^5$, and $2\cdot 10^5$). Symbols
correspond to simulation results averaged over $10^5$ stochastic realizations. The
vertical dashed line indicates the predicted value of $f_c$ obtained
from Eq.~(\ref{eq.fcBranch}). In the inset, we show the peak position of $\langle s \rangle$ (estimated from the main
plot), called $f_c(N_I)$, as a function of $N_I$ in a linear-log scale. The dashed line
corresponds to our theoretical prediction of $f_c$. Dotted lines are
a guide to the eye. Panel b: Probability of a small outbreak
($1-\Pi$) vs. $f$ for $\beta=1$
and RR networks with cliques ($k_C=7$ and $k_I=3$). The line
corresponds to the theory given by
Eqs.~(\ref{eq.phi})-(\ref{eq.noPi}), and symbols are
simulation results averaged over $10^5$ realizations with $N_I=10^6$.}\label{fig.TheorT1}
\end{figure}
\subsection{Backward bifurcation}\label{Sec.back}
In previous sections, we focused our attention only on the case where a
single index-case was infected at the beginning of the outbreak. Here, we will study the effect of a non-trivial initial condition
on the final stage of the propagation process. To this end, we conduct
numerical simulations in which the fraction of infected individuals
at $t=0$ (denoted by $I_0$) is macroscopic. In particular, we are
interested only in the case where $f>f_c$ (i.e., $R_0<1$) because for $f<f_c$
(i.e., $R_0>1$), we have already found that an epidemic can take off
even from a single index-case (see Sec.~\ref{sec.prob}).
Fig.~\ref{fig.I0} shows a scatter-plot of the proportion of recovered
people $R$ at the final stage as a function of $I_0$ for
$\beta=1$ and $f=0.40$ (which is greater than $f_c=0.3391$, see
Sec.~\ref{sec.prob}), and for several network sizes
$N_I$. Additionally, in the inset, we plot the average value of $R$
vs. $I_0$ for the same parameter values used in the main
plot. Interestingly, we obtain that $R$ has an abrupt jump around
$I_0\approx 2.5\times 10^{-3}\equiv I_0^*$. Therefore, our numerical
simulations reveal that the final fraction of recovered people strongly
depends on the initial fraction of infected individuals for
$R_0<1$. In the language of bifurcation theory, these findings imply
that our model undergoes a backward
bifurcation~\cite{gumel2012causes}, i.e., the final fraction of
recovered people is bistable for $R_0<1$ ($f>f_c$). In
Appendix~\ref{sec.appBif}, we present additional results showing that
the system is also bistable for other values of $f$ and network topologies.
Previous studies have shown that this type of bifurcation can be
caused by multiple mechanisms, such as exogenous re-infection and
the use of an imperfect vaccine against
infection~\cite{gumel2012causes}. On the other hand, very recently,
B{\"o}rner et al.~\cite{borner2022explosive} proposed a mean-field
SIRQ model to explore different quarantine measures whose
effectiveness decreases over time. Although not explicitly mentioned
in that work, it can be seen that their model is sensitive to initial
conditions for $R_0<1$. Thus, a backward bifurcation phenomenon can
also be caused by a quarantine measure that becomes less effective
over time. Additionally, in~\cite{borner2022explosive}, it was
shown that a discontinuous epidemic phase transition occurs, and the
probability of an epidemic vanishes around the transition point.
To explain why our model is sensitive to initial conditions
for $R_0<1$, we will next measure the time evolution of $\langle
n\rangle$ for several values of $I_0$, where $\langle n \rangle$ is
defined as the average number of members (either in a susceptible or
infected state) in a clique. In particular, for RR networks with
cliques, the inequality $\langle n\rangle \leq k_C$ holds. From
Fig.~\ref{fig.evoln}, we can clearly see that $\langle n\rangle$ is a
decreasing function with time, or in other words, cliques become
smaller as the population moves into the $Q$ and $R$
compartments. This leads us to the conclusion that the effectiveness
of our strategy diminishes over time (as in~\cite{borner2022explosive}) because, as indicated in
Sec.~\ref{Sec.SIRQ}, smaller cliques are less likely to be placed
under quarantine. Therefore, based on what was observed in~\cite{borner2022explosive}, we conjecture that a decrease in $\langle n \rangle$ over
time could explain why our model displays an abrupt transition and a
backward bifurcation diagram, as seen in Secs.~\ref{Sec.FinS}
and~\ref{Sec.back}, respectively.
\begin{figure}[H]
\vspace{0.5cm}
\begin{center}
\begin{overpic}[scale=0.35]{Fig05.eps}
\put(80,55){}
\end{overpic}
\vspace{0.5cm}
\vspace{0.0cm}
\end{center}
\caption{(Color online) Scatter-plot of $R$ vs. $I_0$
obtained from numerical simulations for $\beta=1$ and $f=0.40$ in a
RR network with $k_C=7$, $k_I=3$, and different network sizes
$N_I$. Inset: Average value of $R$ as a function of $I_0$ for the
same parameter values used in the main plot. Numerical results were averaged
over $10^4$ stochastic realizations. The vertical arrow indicates
the value of $I_0^*$ around which $R$ undergoes a phase
transition.}\label{fig.I0}
\end{figure}
\begin{figure}[H]
\vspace{0.5cm}
\begin{center}
\begin{overpic}[scale=0.25]{Fig06a.eps}
\put(80,55){(a)}
\end{overpic}
\vspace{0.7cm}
\hspace{0.5cm}
\begin{overpic}[scale=0.25]{Fig06b.eps}
\put(80,55){(b)}
\end{overpic}
\vspace{0.5cm}
\begin{overpic}[scale=0.25]{Fig06c.eps}
\put(80,55){(c)}
\end{overpic}
\vspace{0.0cm}
\end{center}
\caption{(Color online) Time evolution of the average number of
individuals (either in a susceptible or infected state) in a clique,
denoted by $\langle n \rangle$, for $f=0.40$, $\beta=1$, and
several initial conditions: $I_0=1.5\cdot 10^{-3}$ (panel a),
$I_0=2.5\cdot 10^{-3}$ (panel b), and $I_0=3.5\cdot 10^{-3}$ (panel
c). We generated 500 simulation trajectories (light blue lines) on
RR networks with $k_C=7$, $k_I=3$, and $N_I=10^6$. Box plots show the
5th, 25th, 50th, 75th and 95th percentile values of $\langle
n\rangle$.}\label{fig.evoln}
\end{figure}
\section{Conclusions}
In summary, in this manuscript, we have investigated an SIRQ model
with a prompt quarantine measure on networks with cliques. Numerical
simulations revealed that epidemics could be abruptly suppressed at
a critical threshold $f_c$, especially on networks with larger cliques
(as shown in Appendix~\ref{Sec.AppAddit}). In contrast, we observed that small outbreaks exhibit properties of a continuous
phase transition around $f_c$. Furthermore, using branching theory, we demonstrated
that the probability of a small outbreak goes continuously to one at
$f=f_c$ for $\beta=1$. Therefore, these results indicate that our model can exhibit features of both
continuous and discontinuous transitions. Next, we explored the impact
of a macroscopic fraction of infected population at the beginning of
the epidemic outbreak, and found that for $R_0<1$, a backward
bifurcation phenomenon emerges. Finally, numerical simulations showed
that the quarantine measure becomes less effective over time, which
could explain why our model exhibits an abrupt transition and a
backward bifurcation phenomenon.
Several lines of research can be derived from this work. For example,
one question that remains open is whether the fraction of
recovered people (in the event of an epidemic) can be predicted by
branching theory since in this manuscript we have only used this theory to study small outbreaks. On the other hand, our model could be extended to
include a time-lag between infection and detection. Another relevant
modification would be to allow quarantined individuals to return to
the network after a certain period of time (especially those who were
susceptible) because it is unrealistic to assume that they will remain
isolated until the end of an epidemic outbreak. Lastly, our model could
be studied in higher-order networks with simplicial complexes. It is known that simplicial contagion
models can lead to explosive epidemic transitions~\cite{battiston2021physics,battiston2020networks}, so it would be interesting to
investigate how they compete with a prompt quarantine measure. We
will explore some of these extensions in a forthcoming work.
\section*{Acknowledgement}
We thank UNMdP (EXA 956/20), FONCyT (PICT 1422/2019) and CONICET, Argentina, for financial support. We also thank Dr. C. E. La Rocca and Lic. I. A. Perez for valuable discussions.
|
{
"arxiv_id": "2302.08626",
"language": "en",
"timestamp": "2023-02-20T02:04:03",
"url": "https://arxiv.org/abs/2302.08626",
"yymm": "2302"
} | \section{Introduction}
The attention mechanism has revolutionized the application of neural networks in numerous areas such as computer vision and natural language processing. Different attention mechanisms such as Additive Attention \cite{38ed090f8de94fb3b0b46b86f9133623}, Multiplicative Attention \cite{luong-etal-2015-effective}, and Key-Value Attention \cite{DBLP:journals/corr/DanilukRWR17} have been introduced in the past. Among these, perhaps the most frequently used one is the Dot-Product Attention \cite{NIPS2017_3f5ee243} that was introduced for transformers. Hereon, any mention of attention refers to this Dot-Product Attention.
In the recent past, the attention module has been analyzed by various works, primarily in attempts to improve its squared complexity and aid its feasibility for long sequences~\cite{tay2020efficient}. Additionally, other works have analyzed potential redundancies in components like the multiple attention heads~\cite{michel2019sixteen,behnke-heafield-2020-losing,voita2019analyzing}.
While these variants have shown promising evidence, the original transformer seems to perform best when compared across various model sizes~\cite{tay2022scaling}. In this work, we analyze the attention module and attempt to dissect it to better understand its components.
The attention mechanism includes three linear transformations, namely the \textit{query}, \textit{key}, and \textit{value} transformations, which are affine transformations with respective bias terms. In this work, we study the role of these bias terms, and mathematically show that the bias term for the \textit{key} linear transformation does not have any role in the attention function and could be omitted altogether. This result was also independently reported in \cite{DBLP:journals/corr/abs-2006-16362}. We next verify this result numerically, and show that replacing these bias vectors with arbitrary and random vectors does not result in any significant difference\footnote{The difference is non-zero due to numerical errors. See \Cref{sec:change_b_k} for details.} in the output of transformers.
Another implication of this result is in BitFit \cite{ben-zaken-etal-2022-bitfit} where only bias terms in a transformer-based language model are fine-tuned on downstream tasks for parameter-efficient training. We show that by freezing the \textit{key} bias parameters in attention, we could reduce the number of trainable parameters in BitFit by over 11\% with no impact on the performance of the final model on downstream tasks.
\section{Notation}
\label{sec:notation}
In the attention mechanism, an input vector ${\bm{h}} \in \mathbb{R}^{d}$ attends to a set of $n$ vectors (the subject of attention), which are represented as columns of matrix ${\bm{C}} \in \mathbb{R}^{d\times n}$.
Within the attention mechanism, first a query vector ${\bm{q}} \in \mathbb{R}^{d}$ is constructed based on ${\bm{h}}$ using a linear transformation, i.e., ${\bm{q}} = {\bm{W}}_q {\bm{h}} + {\bm{b}}_q$, where ${\bm{W}}_q \in \mathbb{R}^{d \times d}$ and ${\bm{b}}_q \in \mathbb{R}^{d}$ are the respective weight and bias parameters. Also, a set of key vectors that establish a key matrix ${\bm{K}} \in \mathbb{R}^{d\times n}$ is constructed by applying another linear transformation on ${\bm{C}}$, i.e., ${\bm{K}} = {\bm{W}}_k{\bm{C}}+ {\bm{b}}_k \mathbbm{1}^T$, where $\mathbbm{1} \in \mathbb{R}^{n}$ is a vector of 1s. Next, a score distribution between the query and the keys is created by applying softmax ($\sigma(.)$) on the product of these two linear transformations:
\begin{align*}
&\sigma\left(({\bm{W}}_q{\bm{h}}+{\bm{b}}_q)^T\left({\bm{W}}_k{\bm{C}}+ {\bm{b}}_k\mathbbm{1}^T\right)\right) = \sigma\left({\bm{q}} ^T{\bm{K}}\right),
\end{align*}
which is referred to as attention distribution. On the other hand, similar to the process of creating the set of key vectors (${\bm{K}}$), a set of value vectors, constituting matrix ${\bm{V}}$, are created by applying another linear transformation on ${\bm{C}}$, i.e., ${\bm{V}} = ({\bm{W}}_v{\bm{C}}+ {\bm{b}}_v\mathbbm{1}^T)$. Finally, attention is computed by multiplying the value vectors and the attention distribution:
\begin{align}
\label{eq:attn_simple}
\text{Attn}&^{{\bm{C}}}({\bm{h}})= {\bm{V}}\sigma\left({\bm{q}} ^T {\bm{K}}\right)^T,
\end{align}
which could be thought of as a convex combination of value vectors (columns of ${\bm{V}}$).\footnote{Note that we omit writing the scaling factor $\frac{1}{\sqrt{d}}$ employed on the dot-product, for brevity.}
\section{Dissecting Attention}
\label{sec:attn_dissec}
To better understand the inter-workings of attention, we take a deeper look into the interactions of the different elements that form the attention mechanism. First, we expand ${\bm{V}}$ in \Cref{eq:attn_simple}:
\begin{align}
&\text{Attn}^{ {\bm{C}}}( {\bm{h}}) = {\bm{V}}\sigma\left( {\bm{q}} ^T {\bm{K}}\right)^T \nonumber\\
&\quad =( {\bm{W}}_v {\bm{C}}+ {\bm{b}}_v\mathbbm{1}^T)\sigma\left( {\bm{q}} ^T {\bm{K}}\right)^T \nonumber\\
&\quad ={\bm{W}}_v {\bm{C}} \sigma\left( {\bm{q}} ^T {\bm{K}}\right)^T + {\bm{b}}_v\mathbbm{1}^T\sigma\left( {\bm{q}} ^T {\bm{K}}\right)^T.
\label{eq:attn-open-1}
\end{align}
Since $ {\bm{b}}_v\mathbbm{1}^T$ is a matrix with identical columns ${\bm{b}}_v$, any convex combination of its columns, including the one using $\sigma\left( {\bm{q}} ^T {\bm{K}}\right)$ as weights, is essentially equal to ${\bm{b}}_v$, i.e.,
\begin{align*}
{\bm{b}}_v\mathbbm{1}^T\sigma\left( {\bm{q}} ^T {\bm{K}}\right)^T = {\bm{b}}_v.
\end{align*}
Therefore, from \Cref{eq:attn-open-1}, we can write the attention function as,
\begin{align}
\text{Attn}&^{ {\bm{C}}}( {\bm{h}})={\bm{W}}_v {\bm{C}}\sigma\left( {\bm{q}} ^T {\bm{K}}\right)^T + {\bm{b}}_v .
\label{eq:attn-open-V}
\end{align}
Next, we expand $ {\bm{K}}$ in \Cref{eq:attn-open-V}:
\begin{align}
&\text{Attn}^{ {\bm{C}}}( {\bm{h}})= {\bm{W}}_v {\bm{C}} \sigma\left( {\bm{q}} ^T {\bm{K}}\right)^T + {\bm{b}}_v\nonumber\\
&\quad ={\bm{W}}_v {\bm{C}}\sigma\left( {\bm{q}} ^T( {\bm{W}}_k {\bm{C}}+ {\bm{b}}_k\mathbbm{1}^T)\right)^T + {\bm{b}}_v \nonumber\\
&\quad ={\bm{W}}_v {\bm{C}}\sigma\left( {\bm{q}} ^T {\bm{W}}_k {\bm{C}}+ {\bm{q}} ^T {\bm{b}}_k\mathbbm{1}^T\right)^T + {\bm{b}}_v . \label{eq:attn-open-2}
\end{align}
Using the fact that $ {\bm{b}}_k\mathbbm{1}^T$ is a matrix with identical columns, it follows that $ {\bm{q}}^T {\bm{b}}_k\mathbbm{1}^T$ is a vector with equal elements, where all elements are equal to $ {\bm{q}}^T {\bm{b}}_k$. On the other hand, softmax is invariant under translation by the same value in each coordinate. In other words, $\forall\delta \in \mathbb{R}, \forall {\bm{z}} \in \mathbb{R}^n$,
$\sigma( {\bm{z}}+\text{\boldmath$\delta$}) = \sigma( {\bm{z}})$, where $\text{\boldmath$\delta$}=\delta\mathbbm{1}$.
As a result, and from \Cref{eq:attn-open-2} we can conclude that,
\begin{align*}
\text{Attn}^{ {\bm{C}}}( {\bm{h}})= {\bm{W}}_v {\bm{C}}\sigma\left( {\bm{q}} ^T {\bm{W}}_k {\bm{C}}\right)^T + {\bm{b}}_v.
\end{align*}
Next, in the equation above we replace $ {\bm{q}}$ with its original linear transformation, resulting in,
\begin{align}
\text{At}&\text{tn}^{ {\bm{C}}}( {\bm{h}})= \nonumber \\
&{\bm{W}}_v {\bm{C}}\sigma\left(( {\bm{W}}_q {\bm{h}}+ {\bm{b}}_q) ^T {\bm{W}}_k {\bm{C}}\right)^T + {\bm{b}}_v \label{eq:attn_rewritten}.
\end{align}
This rewriting of attention highlights the different roles that $ {\bm{b}}_q$, $ {\bm{b}}_k$, and $ {\bm{b}}_v$ play in the attention function. From \Cref{eq:attn_rewritten} it is clear that $ {\bm{b}}_k$ plays \textbf{no role} in the attention function and is in fact redundant. $ {\bm{b}}_v$, on the other hand, plays a very important role in attention since it is one of the two terms whose sum constitutes the attention function. Finally, $ {\bm{b}}_q$ plays a role in creating the attention distribution along with the other parameters of attention, namely $ {\bm{W}}_q$ and $ {\bm{W}}_k$.
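As a quick numerical illustration of this observation (our own sketch, with random toy matrices rather than a trained model), one can check that replacing ${\bm{b}}_k$ by an arbitrary vector leaves the attention output of \Cref{eq:attn_simple} unchanged up to floating-point error.
\begin{verbatim}
# Sketch: the attention output above is unchanged when b_k is replaced by
# an arbitrary vector (differences are at floating-point precision).
import torch

torch.manual_seed(0)
d, n = 16, 10
h, C = torch.randn(d), torch.randn(d, n)
Wq, Wk, Wv = torch.randn(d, d), torch.randn(d, d), torch.randn(d, d)
bq, bv = torch.randn(d), torch.randn(d)

def attn(bk):
    q = Wq @ h + bq
    K = Wk @ C + bk.unsqueeze(1)            # b_k added to every column of K
    V = Wv @ C + bv.unsqueeze(1)
    return V @ torch.softmax(q @ K, dim=0)  # convex combination of columns of V

out_zero = attn(torch.zeros(d))
out_rand = attn(10.0 * torch.randn(d))
print(torch.max(torch.abs(out_zero - out_rand)))  # tiny, numerical error only
\end{verbatim}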
\section{Numerical Analysis}
\label{sec:comp}
In \Cref{sec:attn_dissec}, we mathematically show that the bias term of the \textit{key} linear transformation within the attention mechanism, i.e., $ {\bm{b}}_k$, is redundant in the attention function and can be removed. In this section, we verify these results numerically. We also discuss some computational gains that could be achieved due to this result.
\begin{table*}[ht!]
\small
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{|l||cccc||cccc||cccc|}
\hline
\multirow{2}{*}{}&
\multicolumn{4}{c||}{${\bm{b}}_k$} &
\multicolumn{4}{c||}{${\bm{b}}_q$} &
\multicolumn{4}{c|}{${\bm{b}}_v$} \\
\cline{2-13}
& \textbf{0} & \textbf{1} & \textbf{10} & \textbf{[-5, 5]} &\textbf{0} & \textbf{1} & \textbf{10} & \textbf{[-5, 5]} & \textbf{0} & \textbf{1} & \textbf{10} & \textbf{[-5, 5]}\\
\hline
\hline
\rule{0pt}{10pt}RoBERTa-base & $10^{-4}$ & $10^{-4}$ & $10^{-4}$ & $10^{-4}$ & $10$ & $10$ & $10^{2}$ & $10^{2}$ & $10$ & $10$ & $10^{2}$ & $10^{2}$ \\
\rule{0pt}{10pt}RoBERTa-large & $10^{-5}$ & $10^{-5}$ & $10^{-5}$ & $10^{-5}$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $10$ & $10$ \\
\hline
\rule{0pt}{10pt}BART-base & $10^{-5}$ & $10^{-5}$ & $10^{-5}$ & $10^{-5}$ & $10$ & $1$ & $10$ & $10$ & $1$ & $10$ & $10$ & $10$ \\
\rule{0pt}{10pt}BART-large & $10^{-5}$ & $10^{-5}$ & $10^{-5}$ & $10^{-5}$ & $10$ & $1$ & $10$ & $10$ & $1$ & $10$ & $10$ & $10$ \\
\hline
\end{tabular}
\caption{Tolerance level at which the final hidden state of the models before and after changing values of attention bias terms ${\bm{b}}_k$, ${\bm{b}}_q$, or ${\bm{b}}_v$ for 100 sentences are equal. The elements of bias vectors are set to one of 0 (equivalent to removing), 1, 10, or a random value between -5 and 5. The models are not sensitive to even drastic changes to ${\bm{b}}_k$.}
\label{tbl:bias-change}
\end{table*}
\subsection{Changing \texorpdfstring{$ {\bm{b}}_k$}{} in Pre-Trained Language Models}
\label{sec:change_b_k}
We examine the sensitivity of transformer-based pre-trained language models to changes to attention bias terms, i.e., $ {\bm{b}}_k$, $ {\bm{b}}_q$, and $ {\bm{b}}_v$, for all attention modules within the model. The idea behind this analysis is that since we show that $ {\bm{b}}_k$ is redundant in theory, changing its pre-trained values to arbitrary values should not impact the output of the models in practice. On the other hand, for $ {\bm{b}}_q$ and $ {\bm{b}}_v$, which are not redundant in the attention function, changing their values should result in significant changes in model outputs.
For this analysis, we take the first 100 sentences from the Wikipedia page for ``Machine Learning''\footnote{https://en.wikipedia.org/wiki/Machine_learning}. The length of these sentences ranges from 1 (``Overview'') to 75 tokens (counted using spaCy\footnote{\texttt{en_core_web_sm} package. https://spacy.io/}).
Next, we feed these sentences to several pre-trained language models and get their last layer's hidden state for each sentence. We represent this hidden state matrix for sentence $i$ as ${\bm{H}}_i \in \mathbb{R}^{d \times s_i}$, where $s_i$ is the length of sentence $i$.
We also feed these sentences to the same pre-trained language models but with changes applied to their $ {\bm{b}}_k$, ${\bm{b}}_q$, or $ {\bm{b}}_v$ (details are forthcoming) and represent the corresponding hidden state matrix as ${\bm{H}}'_i$, which is also in $\mathbb{R}^{d \times s_i}$.
We then compare ${\bm{H}}_i$ and ${\bm{H}}'_i$ across all $i$s for each of the models:
\begin{equation*}
\label{eq:tol}
x^{\star} := \inf\left\{ x\in\mathbb{Z} \mid \max_{i} \| {\bm{H}}_i - {\bm{H}}'_i \|_{\text{max}} \le 10^x \right\},
\end{equation*}
where $\| . \|_{\text{max}}$ is matrix max norm
\footnote{https://en.wikipedia.org/wiki/Matrix_norm\#Max_norm}, $\mathbb{Z}$ is the set of all integers, and $i\in\{1, \dots, 100\}$.
In other words, the \textit{tolerance level} (i.e., $10^{x^{\star}}$) at which ${\bm{H}}_i$ and ${\bm{H}}'_i$ are equal across all sentences is calculated.
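Concretely, this comparison can be implemented as follows (a sketch with hypothetical inputs; \texttt{H} and \texttt{H\_prime} stand for lists of the per-sentence hidden-state matrices).
\begin{verbatim}
# Sketch: tolerance level 10**x_star at which all pairs of hidden states
# agree, following the definition above.
import math
import torch

def tolerance_level(H, H_prime):
    worst = max(torch.max(torch.abs(a - b)).item()
                for a, b in zip(H, H_prime))
    return 10 ** math.ceil(math.log10(worst))
\end{verbatim}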
We run this analysis for base and large sizes of both RoBERTa \cite{Liu2019RoBERTaAR} and BART \cite{lewis-etal-2020-bart} using Huggingface.
We set the values of the elements of bias vectors ${\bm{b}}_k$, ${\bm{b}}_q$, or ${\bm{b}}_v$ to 0, 1, 10, and random numbers uniformly sampled from $[-5, 5]$. In other words, ${\bm{b}}_k$, ${\bm{b}}_q$, or ${\bm{b}}_v$ is set to a vector of zeros, ones, tens, or a random vector. This is done for all attention modules (e.g., both self- and cross-attentions in BART) within the models. A small python script for this is shown in the Appendix. The tolerance levels ($10^{x^{\star}}$) for this analysis are reported in Table \ref{tbl:bias-change}.
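While the exact script is given in the Appendix, a minimal version of this kind of intervention might look like the sketch below; the parameter-name substrings are assumptions about the Huggingface naming conventions for RoBERTa- and BART-style models and would need to be adapted for other architectures.
\begin{verbatim}
# Sketch: overwrite the attention key biases of a pre-trained model.
# The name suffixes below are assumptions about Huggingface parameter
# naming (e.g. "key.bias" for RoBERTa, "k_proj.bias" for BART).
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("roberta-base")
with torch.no_grad():
    for name, param in model.named_parameters():
        if name.endswith("key.bias") or name.endswith("k_proj.bias"):
            param.copy_(10.0 * torch.ones_like(param))   # e.g. all elements = 10
\end{verbatim}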
For both BART-base and -large, as well as for RoBERTa-large, we see the models are not sensitive to the values of ${\bm{b}}_k$ at $10^{-5}$ tolerance level. For RoBERTa-base this tolerance level is $10^{-4}$. On the other hand, for the other two attention bias terms ${\bm{b}}_q$ and ${\bm{b}}_v$, the models are very sensitive to changes to these bias terms. For instance, for RoBERTa-base, changing the elements of ${\bm{b}}_q$ or ${\bm{b}}_v$ to random values in $[-5,5]$, results in the elements of the final hidden states to change up to 100 in value. This study shows that ${\bm{b}}_k$ does not have any significant impact on the output of the models.
From the conclusion of \Cref{sec:attn_dissec} that ${\bm{b}}_k$ is redundant in the computations of attention, one might expect that the tolerance levels reported in Table \ref{tbl:bias-change} under ${\bm{b}}_k$ should be much lower. This discrepancy is simply due to numerical errors associated with calculating softmax within the attention function. For example, in theory, softmax of $[0.1,0.2]$ is equal to the softmax of $[5.1, 5.2]$, since softmax is invariant under translation by the same value in each coordinate. However, numerically this equality only holds at $10^{-9}$ tolerance level\footnote{Calculated using the implementation of softmax in the nn module of PyTorch version 1.10.0}.
These errors propagated across hundreds of dimensions (instead of 2 in this example), and numerous transformer layers would lead to tolerance levels of $10^{-4}$ and $10^{-5}$ that are reported in \Cref{tbl:bias-change}.
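The following snippet (ours) reproduces this kind of check; the exact tolerance observed will depend on the data type and PyTorch version.
\begin{verbatim}
# Sketch: softmax is translation-invariant in exact arithmetic, but only up
# to a small numerical tolerance in floating point.
import torch
import torch.nn.functional as F

a = F.softmax(torch.tensor([0.1, 0.2]), dim=0)
b = F.softmax(torch.tensor([5.1, 5.2]), dim=0)
print(torch.max(torch.abs(a - b)).item())   # small but non-zero
\end{verbatim}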
We conduct a similar experiment on a text classification task and measure how changes in ${\bm{b}}_k$, ${\bm{b}}_q$, and ${\bm{b}}_v$ impact the accuracy of models. For this purpose, for three GLUE \cite{DBLP:journals/corr/abs-1804-07461} tasks, namely SST-2, MNLI, and QQP, we take pre-trained RoBERTa-large based models and change the values of ${\bm{b}}_k$, ${\bm{b}}_q$, and ${\bm{b}}_v$ to random values uniformly sampled from $[-5, 5]$; we report the accuracy of the models on the validation set before and after these changes in \Cref{tbl:glue}. From the numbers, it is immediately clear that applying these drastic changes to ${\bm{b}}_k$ results in no change in the accuracy of the models. On the other hand, these changes to ${\bm{b}}_q$ and ${\bm{b}}_v$ result in very large degradation in the performance of the models. It is also evident that changes in ${\bm{b}}_v$ result in much larger degradation than changes in ${\bm{b}}_q$, which supports the evidence in \Cref{sec:attn_dissec} about the role of ${\bm{b}}_v$.
\begin{table}[t!]
\footnotesize
\centering
\begin{tabular}{|l||c|c|c|c|}
\hline
& & \multicolumn{3}{c|}{\textbf{[-5,5]}} \\
\hline
& Original & ${\bm{b}}_k$ & ${\bm{b}}_q$ & ${\bm{b}}_v$ \\
\hline
SST-2\tablefootnote{https://huggingface.co/philschmid/roberta-large-sst2} & 0.9644 & 0.9644 & 0.7007 & 0.4908 \\
MNLI\tablefootnote{https://huggingface.co/roberta-large-mnli} & 0.9060 & 0.9060 & 0.4101 & 0.3182 \\
QQP\tablefootnote{https://huggingface.co/howey/roberta-large-qqp} & 0.9214 & 0.9214 & 0.6434 & 0.3682 \\
\hline
\end{tabular}
\caption{Accuracy of GLUE \cite{DBLP:journals/corr/abs-1804-07461} tasks as the values of ${\bm{b}}_k$, ${\bm{b}}_q$, and ${\bm{b}}_v$ of trained models, which are based on RoBERTa-Large, are set to uniform random values between -5 and 5.
}
\label{tbl:glue}
\end{table}
\subsection{Pre-Training of Language Models without \texorpdfstring{$ {\bm{b}}_k$}{}}
We train two transformer-based language models, GPT-2 \cite{radford2019language} and RoBERTa, from scratch both with and without $ {\bm{b}}_k$, and compare the language modeling performance of the models on a test set. We use Huggingface with the original hyper-parameters for training these models. Both of these models are trained on the wikitext-103-v1 \cite{DBLP:journals/corr/MerityXBS16} dataset. We train each of the GPT-2 (small) and RoBERTa (base) models from scratch with three different random seeds for 50,000 steps, and we report the loss of the final model on the test set averaged over three runs in Table \ref{tbl:lm}. Note that there is no statistically significant difference between the two settings at a p-value $\leq$ 0.05 in an unpaired two-tailed T-test.
\begin{table}[t!]
\small
\centering
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|l|c|c|}
\hline
& \textbf{GPT-2} & \textbf{RoBERTa-base} \\
\hline
Original & 2.9251 & 5.8890 \\
\hline
No $ {\bm{b}}_k$ & 2.9250 & 5.8909 \\
\hline
\end{tabular}
\caption{Average loss (on test set) of GPT-2 (small) and RoBERTa-base compared to their variants without $ {\bm{b}}_k$, trained from scratch. Numbers are average over 3 runs with different random seed, and no statistically significant difference is observed.}
\label{tbl:lm}
\end{table}
\subsection{BitFit without \texorpdfstring{$ {\bm{b}}_k$}{}}
\label{sec:bitfit}
One place where removing the redundant $ {\bm{b}}_k$ could result in significant savings in computation is in BitFit \cite{ben-zaken-etal-2022-bitfit}, where a pre-trained transformer-based language model is fine-tuned for downstream tasks such as text classification, summarization, etc., by freezing all the trainable parameters of the model except for the bias terms within its different modules. We study this setting in detail in the remainder of this section.
Next, we study the effect of freezing the ${\bm{b}}_k$ vectors across all transformer layers in BitFit. Normally, the ${\bm{b}}_k$ vectors are among the fine-tuned parameters in BitFit, but since we have shown that ${\bm{b}}_k$ is redundant in the attention function, we study what happens when these vectors are not fine-tuned. In this section, all models are fine-tuned using the exact hyper-parameters used in \cite{DBLP:journals/corr/abs-2110-04366} for the corresponding models.
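The parameter selection itself is simple; the following is a minimal sketch (assuming a Huggingface RoBERTa-style model named \texttt{model}; the optimizer and training loop are omitted):
\begin{verbatim}
# Sketch: BitFit-style freezing -- only bias terms (optionally excluding b_k)
# and the task head remain trainable.
def mark_bitfit_parameters(model, tune_key_bias=False, head_prefix="classifier"):
    for name, param in model.named_parameters():
        is_bias = name.endswith(".bias")
        is_key_bias = "attention.self.key.bias" in name
        is_head = name.startswith(head_prefix)
        param.requires_grad = is_head or (is_bias and (tune_key_bias or not is_key_bias))

mark_bitfit_parameters(model, tune_key_bias=False)  # freeze b_k as well
\end{verbatim}
For BART the attention projections are named differently (e.g., \texttt{k\_proj.bias}), so the \texttt{is\_key\_bias} test would have to be adjusted accordingly.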
Table \ref{tbl:xsum} shows the results of BitFit for the summarization task on the XSUM \cite{narayan-etal-2018-dont} dataset, using BART-large with and without fine-tuning ${\bm{b}}_k$. Freezing ${\bm{b}}_k$ in BitFit results in an 11.1\% decrease in the number of trainable parameters. Each model is fine-tuned three times with different random seeds. The numbers reported in \Cref{tbl:xsum} are the different ROUGE metrics averaged over the three runs. According to a two-tailed T-test, there is no statistically significant difference between the ROUGE metrics at p-value $\leq$ 0.05.
\begin{table}[t!]
\footnotesize
\centering
\begin{tabular}{|l|l|l|l|l|l|}
\hline
& & \multicolumn{4}{c|}{\textbf{Average Rouge}} \\
\hline
& & R-1 & R-2 & R-L & R-Sum \\
\hline
\multirow{2}{*}{BART-large} & Orig. & 40.66 & 17.32 & 32.25 & 32.25 \\
& No ${\bm{b}}_k$ & 40.67 & 17.33 & 32.26 & 32.26 \\
\hline
\end{tabular}
\caption{BitFit on BART-large with/without ${\bm{b}}_k$ on XSUM. The second row has 11.11\% fewer trainable parameters than the first row.
No statistically significant difference in the ROUGE metrics according to a two-tailed T-test at p-value 0.05.
}
\label{tbl:xsum}
\end{table}
We conduct a similar experiment for a text classification problem, namely SST-2 from the GLUE benchmark, using RoBERTa. The main difference from the summarization setting is the new classification layers that need to be added to RoBERTa, whose parameters are fine-tuned along with the bias parameters in BitFit. As a result, the savings in the number of trainable parameters from freezing ${\bm{b}}_k$ are smaller than in the summarization setting with BART. For RoBERTa-base and RoBERTa-large, freezing ${\bm{b}}_k$ during fine-tuning with BitFit results in 1.3\% and 1.9\% savings in trainable parameters, respectively.
Table \ref{tbl:sst} shows the average accuracy (over five runs with different random seeds) of the fine-tuned models on the evaluation set for SST-2. According to a two-tailed T-test, at p-value $\leq$ 0.05 there is no statistically significant difference between the BitFit results with and without ${\bm{b}}_k$ for both base and large sizes of RoBERTa.
\begin{table}[ht!]
\footnotesize
\centering
\begin{tabular}{|l|l|c|}
\hline
& & \textbf{Eval. Accuracy} \\
\hline
\multirow{2}{*}{RoBERTa-base} & Original & 94.98 \\
& No ${\bm{b}}_k$ & 94.92 \\
\hline
\multirow{2}{*}{RoBERTa-large} & Original & 95.83 \\
& No ${\bm{b}}_k$ & 95.85 \\
\hline
\end{tabular}
\caption{Average accuracy over 5 runs of BitFit on RoBERTa-base and -large with and without ${\bm{b}}_k$ on SST-2. No statistically significant difference in accuracy is observed according to a two-tailed T-test at p-value 0.05.}
\label{tbl:sst}
\end{table}
\section{Implications for Transformers}
As shown in the previous sections, the bias term of the \textit{key} linear transformation, i.e., ${\bm{b}}_k$ in the attention function, is redundant. In the context of transformers, if we consider one transformer layer, ${\bm{b}}_k$ constitutes less than 0.01\% of the parameters of the layer. As a result, removing ${\bm{b}}_k$ from the transformer architecture, either during training or from a pre-trained model at inference time, does not result in significant savings in computation.
However, we argue that this finding is important for the following reasons. (1) The small size of these redundant parameters within one of the most widely used neural network architectures does not change the fact that they are redundant, and we argue that this redundancy, however small, should be addressed. (2) This small redundancy appears in thousands of transformer-based models that are invoked millions of times a day across academia and industry; from this aggregate perspective, the redundancy translates into a significant amount of avoidable computation. (3) Recent works such as $(IA)^3$ \cite{https://doi.org/10.48550/arxiv.2205.05638} show how a small set of parameters (of the same size as ${\bm{b}}_k$) can be used to efficiently adapt large language models to downstream tasks, often rivaling fully fine-tuned variants. From this angle, the redundant ${\bm{b}}_k$ parameters could be repurposed to improve the performance of models. (4) Finally, in some recent works the bias terms of ``dense kernels'' and ``layer norms'' are dropped from the architecture based on the observed positive impact of doing so on training stability. Our analysis reveals that, from a theoretical standpoint, an additional bias (${\bm{b}}_k$) could be dropped from these architectures as well.
\section{Conclusions}
\label{sec:conc}
In this work, we analyze the attention function predominant in present-day transformer architectures and find that
the biases in its linear transformations play different roles. Our analysis reveals that ${\bm{b}}_v$ is an important component of the attention computation, whereas
the bias of the key linear transformation, ${\bm{b}}_k$, is completely redundant. We also confirm numerically that removing ${\bm{b}}_k$ does not significantly change the outcome of transformers.
While our analysis has focused on softmax-based (scaled) dot-product attention, recent works have demonstrated how attention can be generalized through the kernel lens. This has led to innovations in kernel designs that perform equivalently or better,
and it would be interesting to explore the role of bias terms in these proposed kernels.
\label{sec:refs}
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.08599",
"language": "en",
"timestamp": "2023-02-20T02:02:47",
"url": "https://arxiv.org/abs/2302.08599",
"yymm": "2302"
} | \section{Some concentration inequalities}
\subsection{Concentration for independent non-identically distributed exponential random variables}\label{appendix_concentration_for_nonid_exp_rvs}
\begin{restatable}{lemma}{lemmaWeightedExpChernoff}\label{Lemma_weighted_exp_chernoff}
Let $\mathbf{u}\in\ensuremath{\mathbb{R}}^n_+$ be a vector with $\|\mathbf{u}\|_1=n$ and let $\mathbf{Z}$ be a random vector with independent $\Exp(1)$ components. Then for any $t\in[0,1)$, we have
\begin{equation}
\mathbb{P}(\mathbf{u}\cdot \mathbf{Z} \le tn) \le (te^{1-t})^n \prod_{i=1}^n u_i^{-1} \le (te)^n \prod_{i=1}^n u_i^{-1}.
\end{equation}
In fact, the upper bound given by the second inequality holds trivially when $t\ge 1$ and is invariant under simultaneous scaling of $u$ and $t$.
Further, when $1/K \le u_i\le K$ for some constant $K\ge 1$, we have
\begin{equation}
\mathbb{P}(\mathbf{u}\cdot \mathbf{Z} \le tn) \ge e^{-\ensuremath{O}(n^{2/3})} (te^{1-Kt})^n \prod_{i=1}^n u_i^{-1}.
\end{equation}
For $t=o(1)$, this lower bound becomes
\[
e^{-\ensuremath{O}(n^{2/3})+(1-K)tn} (te^{1-t})^n \prod_{i=1}^n u_i^{-1} = e^{o(n)} (te)^n \prod_{i=1}^n u_i^{-1},
\]
indicating that the upper bound is tight up to a factor of $e^{o(n)}$.
In particular, when $t=\ensuremath{O}(n^{-1/3})$, the gap is $e^{\ensuremath{O}(n^{2/3})}$.
\end{restatable}
\begin{proof}
First we establish the upper bound. Directly applying Chernoff's method on $\mathbf{u}\cdot \mathbf{Z}$, we have
\begin{equation}
\mathbb{P}(\mathbf{u}\cdot \mathbf{Z} \le tn) \le \inf_{\lambda \ge 0} \frac{\ensuremath{\mathbb{E}}[\exp(-\lambda \mathbf{u}\cdot \mathbf{Z})]}{\exp(-\lambda tn)} = \inf_{\lambda\ge 0} e^{\lambda tn} \prod_{i=1}^n \frac{1}{1+\lambda u_i}.
\end{equation}
Taking $\lambda=1/t - 1$ (which is the minimizer when $\mathbf{u}=\mathbf{1}$) gives
\begin{equation}\label{Eqn_lemma_weighted_exp_chernoff_3}
\mathbb{P}(\mathbf{u}\cdot \mathbf{Z} \le tn) \le e^{n-tn} \prod_{i=1}^n \frac{t}{t+(1-t)u_i}.
\end{equation}
Notice that $\mathbf{u} \mapsto \sum_{i=1}^n \log u_i$ is a concave function on $\ensuremath{\mathbb{R}}_+^n$, and hence
\[
\prod_{i=1}^n (t+(1-t)u_i) \ge \left(\prod_{i=1}^n u_i\right)^{1-t} \ge \prod_{i=1}^n u_i
\]
since $\prod_{i=1}^n u_i \le \big(n^{-1}\sum_{i=1}^n u_i\big)^n=1$.
Plugging the above inequality into \eqref{Eqn_lemma_weighted_exp_chernoff_3} gives the desired upper bound.
Now we establish the tightness of the bound under the additional assumption that $1/K \le u_i \le K$ for all $i\in[n]$.
Consider independent random variables $W_i\sim \Exp(u_i R/t)$ for $i=1,\cdots,n$ with $R=1+n^{-1/3}$, so that by Chebyshev's inequality
\[
q_n := \mathbb{P}(\mathbf{u}\cdot \mathbf{W}\le tn) = \mathbb{P}_{T\sim \Gamma(n,1)}(T \le nR) \ge 1 - n^{-1/3}.
\]
For convenience, we similarly write
\[
p_n := \mathbb{P}(\mathbf{u}\cdot \mathbf{Z}\le tn)
\]
and write the (joint) distributions of $\mathbf{Z}$ and $\mathbf{W}$ as $P_n = \Exp(1)^{\otimes n}$ and $Q_n = \bigotimes_{i=1}^n \Exp(u_i R/t)$, respectively. Applying the data processing inequality to the channel $\mathcal{C}$ that maps $\mathbf{\zeta}\in\ensuremath{\mathbb{R}}^n$ to $\mathbbm{1}\{\mathbf{u}\cdot \mathbf{\zeta} \le tn\}$ gives
\begin{multline}\label{Eqn_lemma_weighted_exp_chernoff_tightness_data_proc_ineq}
D(Q_n\|P_n) \ge D(\mathcal{C}(Q_n) \| \mathcal{C}(P_n)) = q_n \log\frac{q_n}{p_n} + (1-q_n) \log\frac{1-q_n}{1-p_n} \\
= - q_n \log p_n + (1-q_n) \log(1-p_n) + (q_n\log q_n + (1-q_n)\log(1-q_n)),
\end{multline}
where $D(\cdot\|\cdot)$ denotes the Kullback-Leibler (KL) divergence between two probability distributions. A direct computation gives
\begin{align}
D(Q_n\|P_n) &= \sum_{i=1}^n \left(\frac{t}{Ru_i} - 1 - \log\frac{t}{Ru_i}\right) \nonumber\\
&\le \sum_{i=1}^n \left(\frac{Kt}{R} - 1 - \log\frac{t}{Ru_i}\right) \nonumber\\
&= -n + R^{-1} Ktn - n \log t + n\log R + \sum_{i=1}^n \log u_i \\
&\le -n + Ktn - n \log t + n^{2/3} + \sum_{i=1}^n \log u_i.\label{Eqn_lemma_weighted_exp_chernoff_tightness_compute_kl}
\end{align}
Combining this with \eqref{Eqn_lemma_weighted_exp_chernoff_tightness_data_proc_ineq} and letting $n\to\infty$ gives
\begin{equation}
-n + Ktn - n \log t + n^{2/3} + \sum_{i=1}^n \log u_i \ge - (1-O(n^{-1/3}))\log p_n + o(1),
\end{equation}
where we used the fact that
$\log(1-p_n)\to 0$ (due to our upper bound). Exponentiating both sides gives the desired lower bound for $p_n$.
\end{proof}
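Although not needed for the arguments that follow, the upper bound in Lemma~\ref{Lemma_weighted_exp_chernoff} is easy to sanity-check numerically for small $n$; the following sketch (with illustrative values of $n$, $t$, and $\mathbf{u}$ chosen purely for this check) compares the empirical tail probability with the bound $(te)^n\prod_{i=1}^n u_i^{-1}$:
\begin{verbatim}
# Monte Carlo check of  P(u . Z <= t n) <= (t e)^n * prod_i u_i^{-1}
# for a small n (illustration only; not part of the proof).
import numpy as np

rng = np.random.default_rng(0)
n, t = 8, 0.3
u = rng.uniform(0.5, 2.0, size=n)
u *= n / u.sum()                             # enforce ||u||_1 = n

Z = rng.exponential(1.0, size=(500_000, n))  # each row is an Exp(1)^n sample
empirical = np.mean(Z @ u <= t * n)
bound = (t * np.e) ** n / np.prod(u)
print(empirical, bound)                      # empirical should not exceed bound
\end{verbatim}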
As a consequence, we have the following lemma.
\begin{restatable}{lemma}{lemmaWgtExpCondConcentration}\label{Lemma_wgt_exp_cond_concentration}
Let $\mathbf{u},\mathbf{v}\in\ensuremath{\mathbb{R}}^n_+$ be two vectors with bounded components, namely $\|\mathbf{u}\|_1=\|\mathbf{v}\|_1=n$ and $1/K \le u_i,v_i\le K$ for some fixed $K\ge 1$. For independent $Z_1,\cdots,Z_n\sim \Exp(1)$, we have
\begin{equation}\label{Eqn_main_Lemma_wgt_exp_cond_concentration}
\mathbb{P}\left(\left|\frac{\mathbf{u}\cdot \mathbf{Z}}{t \mathbf{u}\cdot \mathbf{v}^{-1}} - 1\right| >\zeta \;\middle|\; \mathbf{v}\cdot \mathbf{Z} < tn\right) \le \exp(-\Theta(n\zeta^2))
\end{equation}
for $t=o(1)$ and any fixed constant $\zeta>0$, where $\mathbf{v}^{-1}$ denotes the component-wise inverse of vector $\mathbf{v}$.
\end{restatable}
Notice that this result is invariant under simultaneous scaling of the vectors $\mathbf{u}$, $\mathbf{v}$, and $t$. Essentially, we only need $tn/\|\mathbf{v}\|_1 = o(1)$ and bounded ratios among the entries of $\mathbf{u}$ and $\mathbf{v}$. Further, the result remains unchanged if $Z_i\sim\Exp(c_i)$ independently with $c_i$'s bounded on some $[1/K', K']$; the $c_i$'s can simply be absorbed into $\mathbf{u}$ and $\mathbf{v}$.
\begin{proof}
We first prove the concentration bound for the lower tail.
Writing
\begin{multline}
\mathbb{P}(u\cdot x < (1-\zeta) tu\cdot v^{-1} | v\cdot x < tn) = \frac{\mathbb{P}(u\cdot x < (1-\zeta) tu\cdot v^{-1}, v\cdot x < tn)}{\mathbb{P}(v\cdot x < tn)} \\
\le \frac{\mathbb{P}((\lambda u+(1-\lambda)v)\cdot x < (1-\zeta)\lambda t u\cdot v^{-1} + (1-\lambda)tn)}{\mathbb{P}(v\cdot x < tn)}
\end{multline}
for some $\lambda > 0$ to be determined later, Lemma~\ref{Lemma_weighted_exp_chernoff} bounds the numerator by
\[
t^n \left(1-\lambda + \frac{(1-\zeta)\lambda u\cdot v^{-1}}{n}\right)^n e^{n - (1-\zeta)\lambda t u\cdot v^{-1} - (1-\lambda)tn} \prod_{i=1}^n \frac{1}{\lambda u_i + (1-\lambda)v_i}.
\]
For lower bounding the denominator $\mathbb{P}(v\cdot x<tn)$, Lemma~\ref{Lemma_weighted_exp_chernoff} indicates that for $t=o(1)$, the denominator is well approximated by $t^n e^{n-tn} \prod_i v_i^{-1}$, up to an error of $e^{o(n)}$. Taking the ratio between the two quantities gives
\begin{equation}\label{Eqn_proof_wgt_exp_cond_concentration_lower_p_ratio_up_to_e^o(n)_1}
\left(1-\lambda + \frac{(1-\zeta)\lambda u\cdot v^{-1}}{n}\right)^n e^{\lambda tn-(1-\zeta)\lambda t u\cdot v^{-1}} \prod_{i=1}^n \frac{1}{\lambda u_i v_i^{-1} + 1-\lambda}
\end{equation}
Focus on the quantity $\prod_{i=1}^n (\lambda u_i v_i^{-1} + 1-\lambda)$. We use the following claim for a bound on the gap between the arithmetic and geometric means.
\begin{claim}
For $z\in\ensuremath{\mathbb{R}}_+^n$ with $\bar{z} = n^{-1}\sum_{i=1}^n z_i$, the function $f:[0,1]\to\ensuremath{\mathbb{R}}$ given by $f(\alpha) = \sum_{i=1}^n \log(\bar{z} + \alpha (z_i-\bar{z}))$ is concave (indeed, the function $z\mapsto \sum_{i=1}^n \log z_i$ is concave on $\ensuremath{\mathbb{R}}_+^n$) with a maximum at $\alpha=0$, since $f'(0) = \bar{z}^{-1}\sum_{i=1}^n (z_i-\bar{z}) = 0$. Hence,
\begin{equation}
0\le f(0) - f(1) \le -f'(1) = -\sum_{i=1}^n \frac{z_i-\bar{z}}{z_i} = -n + \bar{z}\sum_{i=1}^n\frac{1}{z_i}.
\end{equation}
Exponentiating both sides gives
\begin{equation}
\bar{z}^n \prod_{i=1}^n z_i^{-1} \le \exp\left( -n + \bar{z}\sum_{i=1}^n\frac{1}{z_i} \right).
\end{equation}
\end{claim}
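(As a quick sanity check of the claim, not needed for the proof: with $n=2$ and $\mathbf{z}=(1,3)$, we have $\bar{z}^2/(z_1 z_2) = 4/3 \approx 1.33 \le \exp(-2 + 2(1+\tfrac{1}{3})) = e^{2/3} \approx 1.95$.)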
Applying the above claim to the product in \eqref{Eqn_proof_wgt_exp_cond_concentration_lower_p_ratio_up_to_e^o(n)_1} with $z_i = \lambda u_i v_i^{-1} + 1-\lambda$, we obtain
\begin{equation}
\prod_{i=1}^n \frac{1}{\lambda u_i v_i^{-1} + 1-\lambda} \le \left(1-\lambda + \frac{\lambda u\cdot v^{-1}}{n}\right)^{-n} \exp\left( -n +\sum_{i=1}^n\frac{1-\lambda + \lambda u\cdot v^{-1}/n}{1-\lambda + \lambda u_i v_i^{-1}}\right)
\end{equation}
Hence, the conditional probability of interest is upper bounded, up to $e^{o(n)}$, by the following expression
\begin{equation}
\left(1-\lambda + \frac{(1-\zeta)\lambda u\cdot v^{-1}}{n}\right)^n e^{\lambda tn-(1-\zeta)\lambda t u\cdot v^{-1}} \left(1-\lambda + \frac{\lambda u\cdot v^{-1}}{n}\right)^{-n} \exp\left( -n +\sum_{i=1}^n\frac{1-\lambda + \lambda u\cdot v^{-1}/n}{1-\lambda + \lambda u_i v_i^{-1}}\right).
\end{equation}
Denote the negative logarithm of the $n$-th root of the quantity above by $\underline{\chi}(\lambda)$. That is,
\begin{equation}
\mathbb{P}(u\cdot x < (1-\zeta) tu\cdot v^{-1} | v\cdot x < tn)
\le \inf_{\lambda>0}e^{o(n) - n \underline{\chi}(\lambda)} = \exp\Big(o(n) - n \sup_{\lambda>0}\underline{\chi}(\lambda)\Big)
\end{equation}
where, for $\lambda > 0$,
\begin{multline}
\underline{\chi}(\lambda) := -\log\left(1-\lambda + \frac{(1-\zeta)\lambda u\cdot v^{-1}}{n}\right) - \lambda t + \frac{(1-\zeta)\lambda t u\cdot v^{-1}}{n} + \log\left(1-\lambda + \frac{\lambda u\cdot v^{-1}}{n}\right) \\
+ 1 - \frac{1}{n}\sum_{i=1}^n\frac{1-\lambda + \lambda u\cdot v^{-1}/n}{1-\lambda + \lambda u_i v_i^{-1}}.\label{Eqn_proof_wgt_exp_cond_concentration_lower_neg_log_prob_def}
\end{multline}
The $o(n)$ factor is of lower order, and it suffices to show that there exists some $\lambda$ such that $\underline{\chi}(\lambda)=\Theta(\zeta^2)$.
For $\lambda$ sufficiently small (e.g., $\lambda \le K^{-2}/2$, recalling that $u_i,v_i\in[1/K,K]$), we may approximate the logarithm function near its zero and obtain
\begin{equation}
\log\left(1-\lambda + \frac{\lambda u\cdot v^{-1}}{n}\right) \ge \frac{\lambda u\cdot v^{-1}}{n} -\lambda - \left(\frac{\lambda u\cdot v^{-1}}{n} -\lambda\right)^2.
\end{equation}
Then the two log terms in \eqref{Eqn_proof_wgt_exp_cond_concentration_lower_neg_log_prob_def} combined can be bounded below by
\begin{equation}
\lambda - \frac{(1-\zeta)\lambda u\cdot v^{-1}}{n} + \frac{\lambda u\cdot v^{-1}}{n} -\lambda - \left(\frac{\lambda u\cdot v^{-1}}{n} -\lambda\right)^2 = \zeta \lambda \frac{u\cdot v^{-1}}{n} - \lambda^2 \left(\frac{u\cdot v^{-1}}{n}-1\right)^2.
\end{equation}
With $u_i,v_i\in[1/K,K]$, a naive lower bound is the following
\begin{equation}\label{Eqn_proof_wgt_exp_cond_concentration_lower_neg_log_prob_lower_bound}
\underline{\chi}(\lambda) \ge \zeta \lambda K^{-2} - \lambda^2 K^4 - \lambda t + (1-\zeta)\lambda t K^{-2}
+ 1 - \frac{(2-2\lambda+K^2\lambda+K^{-2}\lambda)^2}{4(1-\lambda+K^2\lambda)(1-\lambda+K^{-2}\lambda)},
\end{equation}
where the summation at the end of \eqref{Eqn_proof_wgt_exp_cond_concentration_lower_neg_log_prob_def} is bounded using Schweitzer's inequality \cite{schweitzer1914egy} for the ratio between arithmetic and harmonic means, stating
\[
\frac{1}{n}\sum_{i=1}^n\frac{\bar{z}}{z_i} \le \frac{(a+b)^2}{4ab}
\]
for $z\in\ensuremath{\mathbb{R}}^n$ with bounded components $0< a\le z_i\le b$. Further, we observe
\begin{equation}
1 - \frac{(2-2\lambda+K^2\lambda+K^{-2}\lambda)^2}{4(1-\lambda+K^2\lambda)(1-\lambda+K^{-2}\lambda)} = -\frac{(4K^2(K^{-2}-1)^2+(K-K^{-1})^4)}{4(1-\lambda+K^2\lambda)(1-\lambda+K^{-2}\lambda)}\lambda^2 \ge - 3K^4\lambda^2.
\end{equation}
Taking $\lambda = \zeta K^{-6}/8$ in \eqref{Eqn_proof_wgt_exp_cond_concentration_lower_neg_log_prob_lower_bound} yields
\begin{equation}
\underline{\chi}\left(\frac{1}{8}\zeta K^{-6}\right) \ge \frac{1}{16}\zeta^2 K^{-8} - \frac{1}{8}t\zeta K^{-6} \ge \Theta(\zeta^2),
\end{equation}
hence finishing our proof for the lower tail.
The proof for the upper tail follows a similar structure.
Writing
\begin{multline*}
\mathbb{P}(u\cdot x > (1+\zeta) tu\cdot v^{-1} | v\cdot x < tn) = \frac{\mathbb{P}(u\cdot x > (1+\zeta) tu\cdot v^{-1}, v\cdot x < tn)}{\mathbb{P}(v\cdot x < tn)} \\
\le \frac{\mathbb{P}((-\lambda u+ (1+\lambda)v)\cdot x < -(1+\zeta)\lambda t u\cdot v^{-1} + (1+\lambda) tn)}{\mathbb{P}(v\cdot x < tn)}
\end{multline*}
for some $0 < \lambda < K^{-2}$ (so that $-\lambda u+ (1+\lambda)v\in\ensuremath{\mathbb{R}}_+^n$) to be determined later, the upper and lower bounds of Lemma~\ref{Lemma_weighted_exp_chernoff} together imply that the ratio is, up to $e^{o(n)}$,
\begin{equation}\label{Eqn_proof_wgt_exp_cond_concentration_upper_p_ratio_up_to_e^o(n)_1}
\left(1+\lambda - \frac{(1+\zeta)\lambda u\cdot v^{-1}}{n}\right)^n e^{(1+\zeta)\lambda t u\cdot v^{-1} - \lambda tn} \prod_{i=1}^n \frac{1}{1+\lambda -\lambda u_i v_i^{-1}}
\end{equation}
As in the proof of lower tail bound, the product term can be bounded by
\begin{equation}
\prod_{i=1}^n \frac{1}{1+\lambda-\lambda u_i v_i^{-1}} \le
\left(1+\lambda - \frac{\lambda u\cdot v^{-1}}{n}\right)^{-n} \exp\left( -n +\sum_{i=1}^n\frac{1+\lambda - \lambda u\cdot v^{-1}/n}{1+\lambda - \lambda u_i v_i^{-1}}\right),
\end{equation}
giving an upper bound, again up to $e^{o(n)}$, of
\begin{equation}
\left(1+\lambda - \frac{(1+\zeta)\lambda u\cdot v^{-1}}{n}\right)^n e^{(1+\zeta)\lambda t u\cdot v^{-1}-\lambda tn} \left(1+\lambda - \frac{\lambda u\cdot v^{-1}}{n}\right)^{-n} \exp\left( -n +\sum_{i=1}^n\frac{1+\lambda - \lambda u\cdot v^{-1}/n}{1+\lambda - \lambda u_i v_i^{-1}}\right)
\end{equation}
for the conditional probability of interest.
Denote the negative logarithm of the $n$-th root of the quantity above by $\overline{\chi}(\lambda)$. That is,
\begin{equation}
\mathbb{P}(u\cdot x > (1+\zeta) tu\cdot v^{-1} | v\cdot x < tn)
\le \inf_{0<\lambda<K^{-2}} e^{o(n) - n \overline{\chi}(\lambda)}
\end{equation}
where, for $\lambda \in(0,K^{-2})$,
\begin{multline}
\overline{\chi}(\lambda) := -\log\left(1+\lambda - \frac{(1+\zeta)\lambda u\cdot v^{-1}}{n}\right) + \lambda t -\frac{(1+\zeta)\lambda t u\cdot v^{-1}}{n} + \log\left(1+\lambda - \frac{\lambda u\cdot v^{-1}}{n}\right) \\
+ 1 - \frac{1}{n}\sum_{i=1}^n\frac{1+\lambda - \lambda u\cdot v^{-1}/n}{1+\lambda - \lambda u_i v_i^{-1}}.
\end{multline}
Again, it suffices to prove that for some choice of $\lambda$ we have $\overline{\chi}(\lambda)=\Theta(\zeta^2)$.
With similar arithmetic as in the proof for the lower tail, we observe that for $\lambda$ sufficiently small (e.g., $\lambda \le K^{-2}/2$)
\begin{equation}\label{Eqn_proof_wgt_exp_cond_concentration_upper_neg_log_prob_lower_bound}
\overline{\chi}(\lambda) \ge \zeta \lambda K^{-2} + \lambda t - (1+\zeta)\lambda t K^2 - 4 \lambda^2 K^4.
\end{equation}
Again, taking $\lambda = \zeta K^{-6}/8$ in \eqref{Eqn_proof_wgt_exp_cond_concentration_upper_neg_log_prob_lower_bound} gives the desired lower bound of $\Theta(\zeta^2)$ for $\sup_{0<\lambda<K^{-2}}\overline{\chi}(\lambda)$ and thus finishes our proof.
\end{proof}
\subsection{A generalized DKW inequality for independent and nearly identically distributed random variables}
\begin{lemma}\label{Lemma_dkw_non_identical}
Let $X_i$, $i=1,\cdots,n$ be independent random variables each with (non-identical) distribution function $G_i$, and assume that there exists a constant $\delta>0$ and a distribution $F$ such that $\|G_i-F\|_\infty \leq \delta$ uniformly across all $i=1,\cdots,n$. Let $\hat{G}$ be the empirical distribution function of $\{X_i\}_{i=1}^n$. Then
\begin{equation}\label{Eqn_dkw_non_identical_lemma}
\mathbb{P}(\|\hat{G}-F\|_\infty > 2\delta + \epsilon) < 4\exp(-2n\epsilon^2/9).
\end{equation}
\end{lemma}
\begin{proof}
Let $U_i = G_i(X_i)$ so that $U_1,\cdots,U_n$ are i.i.d. uniform on $[0,1]$, and denote their empirical distribution function by $\hat{J}$. Let $Y_i=F^{-1}(U_i)=F^{-1}(G_i(X_i))$ so that $Y_1,\cdots,Y_n$ are i.i.d. each with distribution function $F$, and denote their empirical distribution function by $\hat{F}$. Notice that
\begin{align*}
\|\hat{G}-F\|_\infty &= \sup_{x\in\mathbb{R}} \left| n^{-1} \sum_{i=1}^n I_{(-\infty,x)}(X_i) - F(x)\right|\\
&= \sup_{x\in\mathbb{R}} \left| n^{-1} \sum_{i=1}^n I_{(-\infty,x)}(Y_i) - F(x) + n^{-1} \sum_{i=1}^n \left(I_{(-\infty,x)}(Y_i) - I_{(-\infty,x)}(X_i)\right) \right|\\
&\leq \sup_{x\in\mathbb{R}} \left| n^{-1} \sum_{i=1}^n I_{(-\infty,x)}(Y_i) - F(x)\right| + \sup_{x\in\mathbb{R}}\left(n^{-1} \sum_{i=1}^n \left|I_{(-\infty,x)}(Y_i) - I_{(-\infty,x)}(X_i) \right|\right)\\
&= \|\hat{F}-F\|_\infty + \sup_{x\in\mathbb{R}}A(x).
\end{align*}
From the classic result of DKW inequality \cite{dvoretzky1956asymptotic} applied to $\hat{F}$ and $F$, we know that
\begin{equation}\label{Eqn_proof_dkw_non_identical_bound_1}
\mathbb{P}(\|\hat{F}-F\|_\infty > \epsilon/3) < 2\exp(-2n\epsilon^2/9).
\end{equation}
For the second supremum in the sum above, we now consider $U_i$, $i=1,\cdots,n$ as the underlying random variables. Each term in the summation in $A$ contributes 1 to the sum if and only if
\[F^{-1}(U_i)=Y_i<x\leq X_i=G_i^{-1}(U_i) \;\text{ or }\; G_i^{-1}(U_i)=X_i<x\leq Y_i=F^{-1}(U_i),\]
or alternatively
\[F(x)\wedge G_i(x) \leq U_i \leq F(x)\vee G_i(x),\]
where $\wedge$ and $\vee$ denote the operators of taking the minimum and maximum, respectively. (We may safely ignore the case where the two sides are equal, as it happens with probability zero.) Hence,
\begin{align*}
A(x) &= n^{-1} \sum_{i=1}^n \left|I_{(-\infty,x)}(Y_i) - I_{(-\infty,x)}(X_i) \right|\\
&= n^{-1} \sum_{i=1}^n I_{(F(x)\wedge G_i(x),F(x)\vee G_i(x))}(U_i)\\
&\leq n^{-1} \sum_{i=1}^n I_{(\bigwedge_j G_j(x)\wedge F(x),\bigvee_j G_j(x)\vee F(x))}(U_i)\\
&= \hat{J}(M(x)) - \hat{J}(m(x)),
\end{align*}
where $M$ and $m$ denote the maximum and minimum across $F$ and $G_i$, $i=1,\cdots,n$, respectively. By our assumption that $\|G_i-F\|_\infty \leq \delta$ across all $i$, we have that
\[
0\leq M(x) - m(x)\leq 2\delta
\]
for all $x\in\mathbb{R}$. Noticing that the true distribution function $J$ of $U_i$, $i=1,\cdots,n$ is the identity function on $[0,1]$, we have
\begin{align*}
A(x) &= \hat{J}(M(x)) - \hat{J}(m(x))\\
&\leq \left|\hat{J}(M(x)) - J(M(x))\right| + \left|\hat{J}(m(x)) - J(m(x))\right| + \left|J(M(x)) - J(m(x))\right|\\
&\leq 2\|\hat{J}-J\|_\infty + 2\delta
\end{align*}
on $\mathbb{R}$ uniformly. Therefore, applying DKW inequality again, we see that
\begin{equation}\label{Eqn_proof_dkw_non_identical_bound_2}
\mathbb{P}(\sup A > 2\delta + 2\epsilon/3) \leq \mathbb{P}(\|\hat{J}-J\|_\infty > \epsilon/3) < 2\exp(-2n\epsilon^2/9).
\end{equation}
Combining \eqref{Eqn_proof_dkw_non_identical_bound_1} and \eqref{Eqn_proof_dkw_non_identical_bound_2} yields the desired bound in \eqref{Eqn_dkw_non_identical_lemma}.
\end{proof}
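As with Lemma~\ref{Lemma_weighted_exp_chernoff}, the bound in \eqref{Eqn_dkw_non_identical_lemma} can be sanity-checked numerically. The sketch below uses $G_i = \Exp(c_i)$ with rates $c_i$ close to $1$ and $F = \Exp(1)$ (these choices, and all constants, are ours and purely illustrative); it estimates how often $\|\hat{G}-F\|_\infty$ exceeds $2\delta+\epsilon$ and compares with the right-hand side:
\begin{verbatim}
# Monte Carlo check of  P(||G_hat - F||_inf > 2*delta + eps) < 4*exp(-2*n*eps^2/9)
# with G_i = Exp(c_i), c_i near 1, and F = Exp(1)  (illustration only).
import numpy as np

rng = np.random.default_rng(1)
n, trials, eps = 2000, 200, 0.1
rates = rng.uniform(0.95, 1.05, size=n)
# sup_x |G_i(x) - F(x)| <= |c_i - 1| / (e * min(c_i, 1)), hence:
delta = np.max(np.abs(rates - 1)) / (np.e * rates.min())

exceed = 0
for _ in range(trials):
    x = np.sort(rng.exponential(1.0 / rates))   # X_i ~ Exp(c_i), pooled and sorted
    F = 1.0 - np.exp(-x)                        # reference CDF at the order statistics
    i = np.arange(1, n + 1)
    D = max(np.max(i / n - F), np.max(F - (i - 1) / n))   # ||G_hat - F||_inf
    exceed += D > 2 * delta + eps

print(exceed / trials, 4 * np.exp(-2 * n * eps**2 / 9))
\end{verbatim}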
\section{Additional proofs}\label{appendix_extra_proofs}
\subsection{Proof of Corollary~\ref{Cor_subexp_num_stable_match}}
In this section, we prove Corollary~\ref{Cor_subexp_num_stable_match}, which is restated below for convenience. We will assume Proposition~\ref{Prop_EqXY_bound}, whose proof is deferred to Appendix~\ref{Append_proof_prop_EqXY}.
\corSubExpNumOfStableMatch*
\begin{proof}
The last part is simply Corollary~\ref{Cor_Rstar_likely}.
For the first part, observe that for each $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$ with $|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}$ and partial matching $\mu'$ between $\mathcal{M}'$ and $\mathcal{W}'$,
\begin{multline}
\ensuremath{\mathbb{P}}(\mu'\text{ is stable and satisfies }\mathcal{R}^*) \le e^{o(n)} \ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}^*}(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})] \le \\
e^{o(n)} \ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot\mathbbm{1}_{\mathcal{R}_1}(\mathbf{Y}_{\mathcal{W}'})] \le e^{o_\delta(n)} \frac{(\delta n)!}{n!}\prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} b_{\mu'(i),i}
\end{multline}
by Proposition~\ref{Prop_EqXY_bound}. Summing over $\mathcal{M}'$, $\mathcal{W}'$, and $\mu'$ bounds the expected number of such stable partial matchings above by
\begin{align}
\ensuremath{\mathbb{E}}[N_\delta] &\le \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{bijection}}} e^{o_\delta(n)} \frac{(\delta n)!}{n!}\prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} b_{\mu'(i),i} \nonumber \\
&\labelrel={Step1_num_stable} \frac{1}{\floor{\delta n}!} \sum_{\substack{\mu:\mathcal{M}\to\mathcal{W}\\\text{bijection}}} \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M}\\|\mathcal{M}'|=n-\floor{\delta n}}} e^{o_\delta(n)} \frac{(\delta n)!}{n!}\prod_{i\in\mathcal{M}'} a_{i,\mu(i)} b_{\mu(i),i} \nonumber \\
&\labelrel\le{Step2_num_stable} \sum_{\substack{\mu:\mathcal{M}\to\mathcal{W}\\\text{bijection}}} \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M}\\|\mathcal{M}'|=n-\floor{\delta n}}} e^{o_\delta(n)} \frac{1}{n!}\cdot C^{2\floor{\delta n}} \prod_{i\in\mathcal{M}} a_{i,\mu(i)} b_{\mu(i),i} \nonumber \\
&\labelrel\le{Step3_num_stable} e^{o_\delta(n)} \binom{n}{\floor{\delta n}} \cdot \frac{1}{n!} \Perm(\mathbf{A}\circ \mathbf{B}^\top) \nonumber \\
&\labelrel\le{Step4_num_stable} e^{o_\delta(n)}, \nonumber
\end{align}
where in \eqref{Step1_num_stable} we use an alternative counting of partial matchings by counting sub-matchings of size $n-\floor{\delta n}$ in full matchings and then deduplicate by a factor of $\floor{\delta n}!$; in \eqref{Step2_num_stable} we use the boundedness assumption on the components of $\mathbf{A}$ and $\mathbf{B}$; in \eqref{Step3_num_stable} we merge $C^{2\floor{\delta n}}$ into $e^{o_\delta(n)}$; and finally in \eqref{Step4_num_stable} we merge $\binom{n}{\floor{\delta n}}=\exp(h(\delta) n + o(n))$ into $e^{o_\delta(n)}$ and bound the permanent term using $n^n \Perm(\mathbf{M}) = O(n!)$, which follows from the moderate deviation property of $\mathbf{M}$ \citep[Sec.~3]{mccullagh2014asymptotic}.
\end{proof}
\subsection{Proof of Lemma~\ref{Prop_eigenvec_of_M_high_prob}}\label{Append_proof_prop_eigenvec_of_M}
In this section, we prove Lemma~\ref{Prop_eigenvec_of_M_high_prob}, which is restated below for convenience.
\PropEigenVecOfMHighProf*
To prepare for the proof of Lemma~\ref{Prop_eigenvec_of_M_high_prob}, let us denote the expectation in \eqref{Eqn_Prop_eigenvec_of_M_Expectation_is_small} by $E$, and express it as
\begin{align}
E &= \int_{0}^\infty \ensuremath{\mathbb{P}}\big(q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}\backslash\Omega_{\text{eig}}(\zeta)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) > s\big) \ensuremath{\,d} s \nonumber\\
&= \int_0^1 \ensuremath{\mathbb{P}}\big(\exp(-n\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'}) > s, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \in \mathcal{R} \backslash \Omega_{\text{eig}}(\zeta)\big) \ensuremath{\,d} s \nonumber \\
&= \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \in \mathcal{R}_2 \backslash \Omega_{\text{eig}}(\zeta), \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t \nonumber \\
&= \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_{\text{eig}}(\zeta), \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t,
\end{align}
where $\bar{t} := t \wedge (c_2(\log n)^{1/8})$.
If we can find two families of regions $\Omega_1(\zeta;s),\Omega_2(\zeta;s)\subseteq\ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n$ such that $\Omega_{\text{eig}}(\zeta) \supseteq \Omega_1(\Theta(\zeta);s)\cap\Omega_2(\Theta(\zeta);s)$ for all $0<s<c_2(\log n)^{1/8}$, by union bound and relaxing the requirement that $\mathbf{X}_{\mathcal{M}'}$ (resp. $\mathbf{Y}_{\mathcal{W}'}$) is in $\mathcal{R}_1$, we will obtain
\begin{align}
E
&\le \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_1(\Theta(\zeta);\bar{t}), \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t \nonumber \\
&\qquad + \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_2(\Theta(\zeta);\bar{t}), \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t \nonumber\\
&\le \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_1(\Theta(\zeta);\bar{t}), \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t \nonumber \\
&\qquad + \int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, (\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_2(\Theta(\zeta);\bar{t}), \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t. \nonumber
\end{align}
Rewriting the probabilities through conditioning and further relaxing the requirement gives
\begin{align}
E
&\le \int_0^\infty \ensuremath{\mathbb{P}}\big((\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_1(\Theta(\zeta);\bar{t}) \big| \mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1\big) \nonumber \\
&\qquad\qquad \cdot \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t \nonumber \\
&\qquad + \int_0^\infty \ensuremath{\mathbb{P}}\big((\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_2(\Theta(\zeta);\bar{t}) \big| \mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \nonumber \\
&\qquad\qquad \cdot \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t. %
\label{Eqn_decompose_E_exp_qxy_on_Omega_eig_zeta}
\end{align}
Due to the symmetry between the two integrals, it then suffices to bound one of the two integrals (e.g., the latter) by showing
\begin{equation}\label{Eqn_unif_concentrat_MX_cond_on_XE}
\sup_{0 < t < c_2(\log n)^{1/8}} \ensuremath{\mathbb{P}}\big((\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \notin \Omega_2(\Theta(\zeta);t) \big| \mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, \mathbf{X}_{\mathcal{M}'}\in\mathcal{R}_1\big) \le \exp(-\Theta(\zeta^2 n))
\end{equation}
and
\begin{equation}\label{Eqn_EexpXMY_over_Y_in_R1}
\int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t
\le e^{o(n)+o_\delta(n)} \frac{(\delta n)!}{n!} \prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} b_{\mu'(i),i},
\end{equation}
from which the desired upper bound immediately follows. Recognizing that
\begin{equation*}
\int_0^\infty \ensuremath{\mathbb{P}}\big(\mathbf{X}_{\mathcal{M}'}^\top \mathbf{M} \mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1\big) \cdot n e^{-nt}\ensuremath{\,d} t = \ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\cdot\mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\cdot\mathbbm{1}_{\mathcal{R}_1}(\mathbf{Y}_{\mathcal{W}'})\big],
\end{equation*}
we reduce \eqref{Eqn_EexpXMY_over_Y_in_R1} to Proposition~\ref{Prop_EqXY_bound}. Our road map is to first find the desirable choices for $\Omega_1$ and $\Omega_2$ and establish \eqref{Eqn_unif_concentrat_MX_cond_on_XE}, and then prove Proposition~\ref{Prop_EqXY_bound} in Appendix~\ref{Append_proof_prop_EqXY}. Note that Proposition~\ref{Prop_EqXY_bound} is in fact independent of our choice of $\Omega_1$ and $\Omega_2$, but we will develop useful intermediate results to prepare for its proof.
Concretely, we consider events $\Omega_1$ and $\Omega_2$ as follows:
\begin{equation}
\Omega_1(\zeta;t) := \left\{(\mathbf{x},\mathbf{y})\in\ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \max_{i\in[n]}\left|(1-\delta)\frac{\mathbf{M}_{i,\cdot}\cdot \mathbf{y}}{t \mathbf{M}_{i,\cdot} \cdot (\mathbf{M}^\top\mathbf{x})^{-1}_{\mathcal{W}'}} - 1\right| \le \zeta\right\},
\end{equation}
\begin{equation}
\Omega_2(\zeta;t) := \left\{(\mathbf{x},\mathbf{y})\in\ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \max_{j\in[n]}\left|(1-\delta)\frac{\mathbf{M}_{\cdot,j}\cdot \mathbf{x}}{t \mathbf{M}_{\cdot,j} \cdot (\mathbf{M}\mathbf{y})^{-1}_{\mathcal{M}'}} - 1\right| \le \zeta\right\},
\end{equation}
where $\mathbf{M}_{i,\cdot}$ and $\mathbf{M}_{\cdot,j}$ denote the $i$-th row and the $j$-th column of $\mathbf{M}$, respectively; inverse is applied coordinate-wise on vectors; and $\mathbf{v}_{S}$ denotes the $n$-dimensional vector obtained by zeroing out the $i$-th component $v_i$ of $\mathbf{v}\in\ensuremath{\mathbb{R}}_+^n$ for all $i\in[n]\backslash S$ (with this operation performed after coordinate-wise inverse). We first verify the following lemma.
\begin{lemma}\label{lemma_Omega1_and_Omega2_suggests_Oeig}
There exist absolute constants $\zeta_0,\delta_0 > 0$ and $k_1, k_2 > 0$ such that for all $\zeta\in(0,\zeta_0)$, $\delta\in(0,\delta_0)$, and $t>0$ we have
\begin{equation}
\Omega_1(\zeta;t)\cap\Omega_2(\zeta;t) \subseteq \Omega_{\text{eig}}(k_1\delta + k_2\zeta).
\end{equation}
\end{lemma}
\begin{proof}
Let $\mathbf{d} = \mathbf{M}^\top \mathbf{x}$ and $\mathbf{e} = \mathbf{M}\mathbf{y}$.
Under the event that $(\mathbf{x},\mathbf{y})\in\Omega_1(\zeta;t)\cap\Omega_2(\zeta;t)$,
we have
\begin{align}
\frac{1}{d_j} &= \frac{1}{\mathbf{M}_{\cdot,j}\cdot \mathbf{x}} \labelrel\le{Step_use_Omega2_Lemma_Omega12_gives_Oeig} \frac{1-\delta}{(1-\zeta) t\mathbf{M}_{\cdot,j} \cdot \mathbf{e}^{-1}_{\mathcal{M}'}} \labelrel\le{Step_bdd_comp_Lemma_Omega12_gives_Oeig} \frac{1-\delta}{(1-\zeta) t(1+2C^2\delta)\mathbf{M}_{\cdot,j} \cdot \mathbf{e}^{-1}} \nonumber \\
&\labelrel\le{Step_Jensen_Lemma_Omega12_gives_Oeig} \frac{1-\delta}{(1-\zeta) t (1+2C^2\delta)} \sum_{i=1}^n m_{ij} e_i = \frac{1-\delta}{(1-\zeta) t (1+2C^2\delta)} \sum_{i=1}^n m_{ij}\mathbf{M}_{i,\cdot}\cdot \mathbf{y} \nonumber \\
&\labelrel\le{Step_use_Omega1_Lemma_Omega12_gives_Oeig} \frac{1-\delta}{(1-\zeta) t (1+2C^2\delta)} \sum_{i=1}^n m_{ij} \left( \frac{1+\zeta}{1-\delta} t \mathbf{M}_{i,\cdot} \cdot \mathbf{d}^{-1}_{\mathcal{W}'}\right) \le \frac{1+\zeta}{(1-\zeta)(1+2C^2\delta)} (\mathbf{M}^\top\mathbf{M}\mathbf{d}^{-1})_j,
\end{align}
where \eqref{Step_use_Omega2_Lemma_Omega12_gives_Oeig} uses the definition of $\Omega_2(\zeta;t)$; \eqref{Step_bdd_comp_Lemma_Omega12_gives_Oeig} uses the fact that $\mathbf{M}$ and $\mathbf{e}$ both have bounded ratios (at most $C$) among their entries and assumed $\delta < 1/2$; \eqref{Step_Jensen_Lemma_Omega12_gives_Oeig} is due to Jensen's inequality (or equivalently, harmonic-mean-arithmetic-mean inequality); and \eqref{Step_use_Omega1_Lemma_Omega12_gives_Oeig} uses the definition of $\Omega_1(\zeta;t)$.
Recall our assumption that $\mathbf{M}$ has entries bounded on $[1/(Cn), C/n]$. It is straightforward to verify that for any vector $\mathbf{v}\in\ensuremath{\mathbb{R}}_+^n$ with $\bar{v}=\frac{1}{n}\sum_{i=1}^n v_i$, we have $\max_{i\in[n]} (\mathbf{M}\mathbf{v})_i - \bar{v} \le \max_{i\in[n]} v_i - \frac{1}{C n} \cdot n(\max_{i\in[n]} v_i - \bar{v}) - \bar{v} = (1-C^{-1}) (\max_{i\in[n]} v_i - \bar{v})$. In the case of $\mathbf{v} = \mathbf{d}^{-1}$, this implies that
\begin{multline}\label{Eqn_chain_bound_d_i_star_and_harmonic_mean}
(1-C^{-1})^2 \big(d_{i^*}^{-1} - \bar{d}_{(H)}^{-1}\big) \ge \max_{i\in[n]}(\mathbf{M}^\top\mathbf{M}\mathbf{d}^{-1})_i - \bar{d}_{(H)}^{-1} \\
\ge (\mathbf{M}^\top\mathbf{M}\mathbf{d}^{-1})_{i^*} - \bar{d}_{(H)}^{-1} \ge (1+2C^2\delta) \frac{1-\zeta}{1+\zeta}d_{i^*}^{-1} - \bar{d}_{(H)}^{-1},
\end{multline}
where $i^*=\argmin_{i\in[n]} d_i$ and $\bar{d}_{(H)} = \big(n^{-1}\sum_{i=1}^n d_i^{-1}\big)^{-1}$ is the harmonic mean of $d_1,\ldots,d_n$. Solving \eqref{Eqn_chain_bound_d_i_star_and_harmonic_mean} gives
\begin{equation}
\frac{ d_{i^*}^{-1} - \bar{d}_{(H)}^{-1} }{ \bar{d}_{(H)}^{-1} } \le \Theta(\delta) + \frac{2\zeta}{1 - \zeta - (1-C^{-2})^2(1+\zeta)} \le \Theta(\delta + \zeta)
\end{equation}
with hidden constants independent of $\delta$ and $\zeta$, granted that $\zeta$ is sufficiently small. Hence, for all but $\sqrt{\delta+\zeta} n$ indices $i\in[n]$, we have $1-\Theta(\sqrt{\delta+\zeta}) \le \frac{\bar{d}_{(H)}}{d_i} \le 1 + \Theta(\delta+\zeta)$, implying that $(\mathbf{x},\mathbf{y})\in\Omega_{\text{eig}}(\Theta(\delta+\zeta))$.
\end{proof}
Let $\mathbf{D} = \mathbf{M}^\top \mathbf{X}_{\mathcal{M}'}$ and $\mathbf{E} = \mathbf{M}\mathbf{Y}_{\mathcal{W}'}$. Note that $\mathbf{D}$ and $\mathbf{E}$ both have bounded ratios among their components due to the bounded ratio assumption on $\mathbf{M}$, and in addition $\|\mathbf{D}\|_1=\|\mathbf{X}_{\mathcal{M}'}\|_1$ and $\|\mathbf{E}\|_1=\|\mathbf{Y}_{\mathcal{W}'}\|_1$. By Lemma~\ref{Lemma_wgt_exp_cond_concentration}, whenever $t\le c_2(\log n)^{1/8}$, we have for each column $\mathbf{M}_{\cdot,j}$ of $\mathbf{M}$, $j=1,\ldots,n$,
\begin{equation}
\mathbb{P}\left(\left|(1-\delta)\frac{\mathbf{M}_{\cdot,j}\cdot \mathbf{X}_{\mathcal{M}'}}{t \mathbf{M}_{\cdot,j} \cdot \mathbf{E}^{-1}_{\mathcal{M}'}} - 1\right| >\zeta \;\middle|\; \mathbf{X}_{\mathcal{M}'} \cdot \mathbf{E} < t, \|\mathbf{E}\|_1\ge \underline{c}_1 \log n\right) \le \exp(-\Theta(n\zeta^2)),
\end{equation}
where we note that the effective dimension of $\mathbf{X}_{\mathcal{M}'}$ is $n-\floor{\delta n}$ instead of $n$.
By a union bound over $j\in[n]$, this gives
\begin{equation}
\mathbb{P}\left(\max_{j\in[n]}\left|(1-\delta)\frac{\mathbf{M}_{\cdot,j}\cdot \mathbf{X}_{\mathcal{M}'}}{t \mathbf{M}_{\cdot,j} \cdot \mathbf{E}^{-1}_{\mathcal{M}'}} - 1\right| >\zeta, \|\mathbf{E}\|_1\ge \underline{c}_1 \log n \;\middle|\; \mathbf{X}_{\mathcal{M}'} \cdot \mathbf{E} < t \right) \le \exp(-\Theta(n\zeta^2)),
\end{equation}
which is simply \eqref{Eqn_unif_concentrat_MX_cond_on_XE}.
\subsection{Proof of Corollary~\ref{Cor_no_stable_outside_Oeigz}}\label{Append_proof_no_stable_outside_oeigz}
We now prove Corollary~\ref{Cor_no_stable_outside_Oeigz}, restated below.
\CorNoStableOutsideOeigz*
\begin{proof}
Summing over all partial matchings with size $n-\floor{\delta n}$ gives
\begin{multline}\label{Eqn_sum_expectation_Omega_zeta}
\sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{ bijection}}} \ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot \mathbbm{1}_{\mathcal{R}\backslash\Omega_{\text{eig}}(\zeta)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big] \\
\le \exp(o_\delta(n)-\Theta(\zeta^2 n)) \cdot \frac{(\delta n)!}{n!} \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{bijection}}} \prod_{i\in\mathcal{M}'} (n m_{i,\mu'(i)}).
\end{multline}
To bound the summation, %
notice that
\begin{align}
\sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{bijection}}} \prod_{i\in\mathcal{M}'} (n m_{i,\mu'(i)})
&= \frac{1}{(\floor{\delta n})!} \sum_{\substack{\mu:\mathcal{M}\to\mathcal{W}\\\text{bijection}}} \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M}\\|\mathcal{M}'|=n-\floor{\delta n}}} \prod_{i\in\mathcal{M}'} (n m_{i,\mu(i)}) \nonumber\\
&\le \frac{1}{(\floor{\delta n})!} \sum_{\substack{\mu:\mathcal{M}\to\mathcal{W}\\\text{bijection}}} \binom{n}{\floor{\delta n}} C^{\floor{\delta n}} \prod_{i\in\mathcal{M}} (n m_{i,\mu(i)}) \nonumber\\
&= \frac{e^{o_\delta(n)}}{(\delta n)!} n^n \Perm(\mathbf{M}).\label{Eqn_sum_expectation_Omega_zeta_summation_bound}
\end{align}
Under the assumption that the bistochastic matrix $\mathbf{M}$ is of moderate deviation (cf. \cite[Section~3]{mccullagh2014asymptotic}), we know that $n^n \Perm(\mathbf{M}) = O( n! )$. Hence, the quantity in \eqref{Eqn_sum_expectation_Omega_zeta} is bounded by $\exp(o_\delta(n)-\Theta(\zeta^2 n))$. Invoking Lemma~\ref{Lemma_reduction_to_q} finishes the proof.
\end{proof}
\subsection{Proof of Proposition~\ref{Prop_EqXY_bound}}\label{Append_proof_prop_EqXY}
In this section, we present the proof of Proposition~\ref{Prop_EqXY_bound}, restated below.
\propEqXYBound*
Denote the target expectation by $E$ and express it as an integral of tail probability
\begin{equation}\label{Eqn_proof_prop_6_6_goal}
E = \int_0^\infty \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M}\mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'} \in \mathcal{R}_1) \cdot ne^{-nt} \ensuremath{\,d} t,
\end{equation}
where $\bar{t} = t \wedge (c_2(\log n)^{1/8})$. It suffices to upper bound probabilities of the form $\ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M}\mathbf{Y}_{\mathcal{W}'} < \bar{t}, \mathbf{Y}_{\mathcal{W}'} \in \mathcal{R}_1)$ for all $t\in(0,c_2(\log n)^{1/8})$. We will go one step further and prove a stronger result by relaxing the $\mathbf{Y}_{\mathcal{W}'} \in \mathcal{R}_1$ condition, which will eventually translate to a bound on $\ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})]$.
\begin{lemma}\label{Lemma_x_y_not_both_small_cond_on_XMY_and_R1}
There exists a positive constant $\beta$ such that, for any $t\in(0, c_2(\log n)^{1/8})$,
\begin{equation}
\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn}, \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn} \middle| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \le 0.1
\end{equation}
for $n$ sufficiently large.
\end{lemma}
\begin{proof}
Let $p$ denote the target probability. We have
\begin{align}
p &= \ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn} \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}, \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \nonumber \\
&\qquad\cdot \ensuremath{\mathbb{P}}\left(\|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn} \middle| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \nonumber \\
&\le \ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn} \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}, \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \nonumber \\
&= \frac{\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn}, \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}\right)}{\ensuremath{\mathbb{P}}\left(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}\right)} \nonumber \\
&\le \frac{\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn} \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}\right)}{\ensuremath{\mathbb{P}}\left(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \beta \sqrt{tn}\right)} \nonumber \\
&\le \frac{\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn}\right)}{\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \sqrt{tn}/(C\beta)\right)} = \ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn} \middle| \|\mathbf{X}_{\mathcal{M}'}\|_1 \le (C\beta)^{-1}\sqrt{tn}\right),
\end{align}
where the last inequality follows from the independence between $\mathbf{X}$ and $\mathbf{Y}$ and the fact that $n\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'}\le C\|\mathbf{X}_{\mathcal{M}'}\|_1 \|\mathbf{Y}_{\mathcal{W}'}\|_1$.
By choosing $\beta = (2C)^{-1/2}$, the upper bound becomes
\begin{equation*}
\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \beta \sqrt{tn} \middle| \|\mathbf{X}_{\mathcal{M}'}\|_1 \le 2\beta\sqrt{tn}\right).
\end{equation*}
A direct invocation of Lemma~\ref{Lemma_wgt_exp_cond_concentration} implies an $\exp(-\Theta(n))$ upper bound for this probability.
\end{proof}
\begin{lemma}\label{Lemma_high_prob_small_xnorm_cond_XMY}
There exists a positive constant $\gamma$ such that, for any $t\in(0, c_2(\log n)^{1/8})$,
\begin{equation}\label{Eqn_Lemma_high_prob_small_xnorm_cond_XMY_goal}
\ensuremath{\mathbb{P}}\left(\|\mathbf{Y}_{\mathcal{W}'}\|_1 \ge \gamma n (\log n)^{-7/8}
\middle| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \le 0.1
\end{equation}
\end{lemma}
\begin{proof}
To free ourselves from always carrying the notation for the partial matching, let us observe that, once we relinquish the condition on the bistochasticity of $\mathbf{M}$, it becomes irrelevant that $\mu'$ is a partial matching between $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$ (instead of a complete one between $\mathcal{M}$ and $\mathcal{W}$), since the difference between the market size $|\mathcal{M}|=|\mathcal{W}|=n$ and $|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}$ does not affect the final asymptotics in the lemma. Hence, it suffices to establish a version of \eqref{Eqn_Lemma_high_prob_small_xnorm_cond_XMY_goal} with $\mathbf{X}_{\mathcal{M}'}$ and $\mathbf{Y}_{\mathcal{W}'}$ replaced by non-truncated value vectors $\mathbf{X}$ and $\mathbf{Y}$ in a complete (instead of partial) matching $\mu$, as long as we do not rely on the bistochasticity of $\mathbf{M}$.
In the simplified notation, let $\mathbf{Z} = \mathbf{a} \circ \mathbf{X}$ and $\mathbf{W} = \mathbf{b} \circ \mathbf{Y}$ with $\mathbf{a}=(a_{i,\mu(i)})_{i\in[n]}$ and $\mathbf{b}=(b_{j,\mu^{-1}(j)})_{j\in[n]}$, so that $\mathbf{Z},\mathbf{W}\sim \Exp(1)^n$ and are independent. Moreover, let $R=\|\mathbf{Z}\|_1$ and $\mathbf{U}=R^{-1}\mathbf{Z}$ so that, as is well known, $R\sim\Gamma(n, 1)$, $\mathbf{U}\sim\Unif(\Delta_{n-1})$, and $R$ and $\mathbf{U}$ are independent. Similarly, let $S=\|\mathbf{W}\|_1$, and $\mathbf{V}=S^{-1}\mathbf{W}$. Then
\begin{equation*}
\mathbf{X}^\top \mathbf{M}\mathbf{Y} = \mathbf{Z}^\top \diag(\mathbf{a}^{-1})\mathbf{M} \diag(\mathbf{b}^{-1}) \mathbf{W} = RS \mathbf{U}^\top \tilde{\mathbf{M}} \mathbf{V},
\end{equation*}
where $\tilde{\mathbf{M}}:=\diag(\mathbf{a}^{-1})\mathbf{M} \diag(\mathbf{b}^{-1})$ again has entries bounded on $[1/(C^2n),C^2/n]$. Since $\|\mathbf{Y}\|_1 = \Theta(S)$, it suffices to find a positive constant $\gamma$ such that
\begin{equation}
\ensuremath{\mathbb{P}}\left(S \ge \gamma n (\log n)^{-7/8}
\middle| RS \mathbf{U}^\top \tilde{\mathbf{M}} \mathbf{V} < t\right) < 0.1
\end{equation}
for all $n$ sufficiently large and $t\in(0, c_2(\log n)^{1/8})$.
Note that $1/(C^2n)\le \mathbf{U}^\top \tilde{\mathbf{M}} \mathbf{V} \le C^2/n$ a.s. By conditioning on all possible values of $\mathbf{U}^\top \tilde{\mathbf{M}} \mathbf{V}$, it suffices to show that for all $t'\in(0, c_2C^2(\log n)^{1/8})$
\begin{equation}
\ensuremath{\mathbb{P}}\left(S \ge \gamma n (\log n)^{-7/8}
\middle| RS < t' n\right) < 0.1 \label{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_reduced_goal_RS}
\end{equation}
asymptotically.
First, we write $\ensuremath{\mathbb{P}}(S \ge \gamma n (\log n)^{-7/8}, RS < t' n)$ as
\begin{equation*}
\ensuremath{\mathbb{P}}\big(S \ge \gamma n (\log n)^{-7/8}, RS < t' n\big) = \int_{\gamma n (\log n)^{-7/8}}^\infty G(t'n/s) g(s) ds,
\end{equation*}
where $g(x)=\frac{x^{n-1}e^{-x}}{(n-1)!}$ is the probability density function of $\Gamma(n,1)$ and $G$ is the corresponding CDF. Since $t'n/s \ll n$, we may use Lemma~\ref{Lemma_weighted_exp_chernoff} to upper bound $G(t'n/s)$, giving
\begin{align}
\ensuremath{\mathbb{P}}\big(S \ge \gamma n (\log n)^{-7/8}, RS < t' n\big) &\le \int_{\gamma n (\log n)^{-7/8}}^\infty \left(\frac{t'e}{s}\right)^n \frac{s^{n-1}e^{-s}}{(n-1)!} ds \nonumber \\
&= \frac{(t'e)^n}{(n-1)!} \int_{\gamma n (\log n)^{-7/8}}^\infty \frac{e^{-s}}{s} ds \nonumber \\
&\le \frac{(t'e)^n}{(n-1)!} e^{-\gamma n (\log n)^{-7/8}}. \label{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_upper_bound_numerator}
\end{align}
Next, we lower bound $\ensuremath{\mathbb{P}}(RS < t' n)$ by
\begin{equation*}
\ensuremath{\mathbb{P}}\left(n^{1/2} \le S \le n^{2/3}, RS < t' n\right) = \int_{n^{1/2}}^{n^{2/3}} G(t'n/s) g(s) ds.
\end{equation*}
Note that $t'n/s \le \ensuremath{O}(n^{2/3})$ for all $s\in[n^{1/2},n^{2/3}]$. Using the lower bound in Lemma~\ref{Lemma_weighted_exp_chernoff}, we have
\begin{align}
\ensuremath{\mathbb{P}}\left(n^{1/2} \le S \le n^{2/3}, RS < t' n\right) &\ge e^{-\ensuremath{O}(n^{2/3})} \int_{n^{1/2}}^{n^{2/3}} \left(\frac{t'e}{s}\right)^n \frac{s^{n-1}e^{-s}}{(n-1)!} ds \nonumber \\
&= e^{-\ensuremath{O}(n^{2/3})} \frac{(t'e)^n}{(n-1)!} \int_{n^{1/2}}^{n^{2/3}} \frac{e^{-s}}{s} ds \nonumber \\
&\ge \frac{(t'e)^n}{(n-1)!} n^{-2/3} e^{-\ensuremath{O}(n^{2/3})}. \label{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_lower_bound_denominator}
\end{align}
Comparing \eqref{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_upper_bound_numerator} with \eqref{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_lower_bound_denominator} establishes \eqref{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_reduced_goal_RS} and hence finishes the proof.
\end{proof}
\begin{remark}
Note that this lemma should be treated only as a technical result about the typical behavior of $\mathbf{X}_{\mathcal{M}'}$ and $\mathbf{Y}_{\mathcal{W}'}$ when $q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})=\exp(-n\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M}\mathbf{Y}_{\mathcal{W}'})$ is large, and should not be confused with any attempt to bound the number of stable (partial) matchings with women's total values in a certain range. For example, one might hope to replace $\gamma n (\log n)^{-7/8}$ with $\gamma n(\log n)^{-1}$ in the proof to conclude that stable matchings with $\|\mathbf{Y}_\delta\|_1\in[n^{1/2},n^{2/3}]$ are over $e^{n^{2/3}}$ times more common than those with $\|\mathbf{Y}_\delta\|_1\ge \Omega(n(\log n)^{-1})$. This, however, is generally not true as we know in the classic case with uniformly random preferences. To see why this fact is not contradictory to our proof, recall from Proposition~\ref{Prop_ratio_p_q_high_prob} (see Section~\ref{sec_prep_proof} and Appendix~\ref{Append_weak_regular_scores}) that $q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})$ is only a good approximation to $p_{\mu'}(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})$ when, among other conditions, $\|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \Theta(n(\log n)^{-7/8})$; even then, the approximation is only valid up to an $e^{o(n)}$ factor. As the ratio between \eqref{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_upper_bound_numerator} and \eqref{Eqn_proof_Lemma_high_prob_small_xnorm_cond_XMY_lower_bound_denominator} is only $e^{o(n)}$, the quality of approximation is insufficient for us to rule out the possibility for a (partial) stable matching to have $\|\mathbf{Y}_{\mathcal{W}'}\|_1\ge\Theta(n(\log n)^{-1})$: the man-optimal stable matching obtained from the man-proposing deferred acceptance algorithm will be such an example.
\end{remark}
\begin{corollary}\label{Cor_x_not_small_when_y_large_cond_on_XMY_and_R1}
There exists a positive constant $\gamma'$ such that, for any $t\in(0, c_2(\log n)^{1/8})$,
\begin{equation}
\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \gamma' t (\log n)^{7/8}, \|\mathbf{Y}_{\mathcal{W}'}\|_1 \ge \beta \sqrt{tn}
\middle| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \le 0.2
\end{equation}
for $n$ sufficiently large, where $\beta$ is the constant appearing in Lemma~\ref{Lemma_x_y_not_both_small_cond_on_XMY_and_R1}.
\end{corollary}
\begin{proof}
Note that $\|\mathbf{M}\mathbf{Y}_{\mathcal{W}'}\|_1 = \|\mathbf{Y}_{\mathcal{W}'}\|_1 \gtrsim \sqrt{tn} \gg t$ for $t\lesssim (\log n)^{1/8}$. For any $\mathbf{y}$ supported on coordinates indexed by $\mathcal{W}'$ with $t \ll \|\mathbf{y}\|_1 \le \gamma n (\log n)^{-7/8}$, Lemma~\ref{Lemma_wgt_exp_cond_concentration} implies
\begin{equation}
\ensuremath{\mathbb{P}}\Big(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le 0.9 \frac{t}{n-\floor{\delta n}}\big\|(\mathbf{M}\mathbf{y})_{\mathcal{M}'}^{-1}\big\|_1 \Big| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, \mathbf{Y}_{\mathcal{W}'}=\mathbf{y}\Big) \le e^{-\Theta(n)}.
\end{equation}
Plugging in $\big\|(\mathbf{M}\mathbf{y})_{\mathcal{M}'}^{-1}\big\|_1 \ge (n-\floor{\delta n}) \frac{n}{C\|\mathbf{y}\|_1} \ge \frac{n-\floor{\delta n}}{C\gamma}(\log n)^{7/8}$ gives
\begin{equation}
\ensuremath{\mathbb{P}}\big(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le 0.9(C\gamma)^{-1} t(\log n)^{7/8} \big| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, \mathbf{Y}_{\mathcal{W}'}=\mathbf{y}\big) \le e^{-\Theta(n)}.
\end{equation}
Marginalizing over all relevant values of $\mathbf{y}$ implies
\begin{equation}
\ensuremath{\mathbb{P}}\big(\|\mathbf{X}_{\mathcal{M}'}\|_1 \le \gamma' t(\log n)^{7/8}, \beta \sqrt{tn} \le \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \gamma n (\log n)^{-7/8} \big| \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\big) \le e^{-\Theta(n)}
\end{equation}
with $\gamma'=0.9(C\gamma)^{-1}$. Combining this with Lemma~\ref{Lemma_high_prob_small_xnorm_cond_XMY} completes the proof.
\end{proof}
\begin{corollary}
For any $t\in(0, c_2(\log n)^{1/8})$ and $n$ sufficiently large,
\begin{equation}
\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1 \wedge \|\mathbf{Y}_{\mathcal{W}'}\|_1 \ge \gamma' t (\log n)^{7/8}, \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right) \ge \frac{1}{2} \ensuremath{\mathbb{P}}\left( \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t\right).
\end{equation}
\end{corollary}
\begin{proof}
This is a direct consequence of Lemma~\ref{Lemma_x_y_not_both_small_cond_on_XMY_and_R1} and Corollary~\ref{Cor_x_not_small_when_y_large_cond_on_XMY_and_R1}.
\end{proof}
We are now ready to present the proof of Proposition~\ref{Prop_EqXY_bound}.
\begin{proof}[Proof of Proposition~\ref{Prop_EqXY_bound}]
Define events
\begin{equation*}
\begin{aligned}[c]
A_1(t)&: \|\mathbf{X}_{\mathcal{M}'}\|_1 \ge \gamma' t(\log n)^{1/8},\\
B_1(t)&: (\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in\Omega_1(\zeta;t),
\end{aligned}
\qquad
\begin{aligned}[c]
A_2(t)&: \|\mathbf{Y}_{\mathcal{W}'}\|_1 \ge \gamma' t(\log n)^{1/8},\\
B_2(t)&: (\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in\Omega_2(\zeta;t),
\end{aligned}
\end{equation*}
where $\Omega_1$ and $\Omega_2$ are defined in Appendix~\ref{Append_proof_prop_eigenvec_of_M}, and $\zeta$ is to be specified later.
We have
\begin{align}
\frac{1}{2}\ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t) &\le \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t), A_2(t)) \nonumber \\
&\le \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t), A_2(t), B_1(t), B_2(t)) \nonumber \\
&\qquad + \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t), B_1(t)^c) + \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_2(t), B_2(t)^c) \nonumber \\
&\le \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t), A_2(t), B_1(t), B_2(t)) \nonumber \\
&\qquad + \ensuremath{\mathbb{P}}(B_1(t)^c | \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t)) \cdot \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t) \nonumber \\
&\qquad + \ensuremath{\mathbb{P}}(B_2(t)^c | \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_2(t)) \cdot \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t).
\end{align}
For any fixed $\delta>0$ and $\zeta=o_\delta(1)$, $\ensuremath{\mathbb{P}}(B_1(t)^c | \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t)) \to 0$ by Lemma~\ref{Lemma_wgt_exp_cond_concentration}; in particular, we may assume that $\ensuremath{\mathbb{P}}(B_1(t)^c | \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t))$ (and by symmetry $\ensuremath{\mathbb{P}}(B_2(t)^c | \mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_2(t))$) is at most $1/8$.
Thus,
\begin{equation}
\ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t) \le 4 \ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t, A_1(t), A_2(t), B_1(t), B_2(t)).
\end{equation}
By Lemma~\ref{lemma_Omega1_and_Omega2_suggests_Oeig}, $B_1(t)$ and $B_2(t)$ together imply that $n\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} = (1 + o_{\delta}(1)) \|\mathbf{X}_{\mathcal{M}'}\|_1\|\mathbf{Y}_{\mathcal{W}'}\|_1$. Further, along with the events $\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} \mathbf{Y}_{\mathcal{W}'} < t$, $A_1(t)$, and $A_2(t)$, they imply $t(\log n)^{1/8} \lesssim \|\mathbf{Y}_{\mathcal{W}'}\|_1 \lesssim n(\log n)^{-1/8}$. Hence,
\begin{align}
\ensuremath{\mathbb{P}}(\mathbf{X}_{\mathcal{M}'}^\top\mathbf{M} &\mathbf{Y}_{\mathcal{W}'} < t, A_1(t), A_2(t), B_1(t), B_2(t)) \nonumber \\
&\le \ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1\|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \frac{nt}{1+o_{\delta,\zeta}(1)}, t(\log n)^{1/8} \le \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \Theta(n(\log n)^{-1/8})\right) \nonumber \\
&\le \ensuremath{\mathbb{E}}\left[\ensuremath{\mathbb{P}}\left(\|\mathbf{X}_{\mathcal{M}'}\|_1\|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \frac{nt}{1+o_{\delta,\zeta}(1)}, t(\log n)^{1/8} \le \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \Theta(n(\log n)^{-1/8}) \middle| \|\mathbf{Y}_{\mathcal{W}'}\|_1\right)\right] \nonumber \\
&\le e^{o_{\delta}(n)} \bigg(\frac{ent}{n-\floor{\delta n}}\bigg)^{n-\floor{\delta n}} \prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} \nonumber \\
&\qquad \cdot \ensuremath{\mathbb{E}}\left[\|\mathbf{Y}_{\mathcal{W}'}\|_1^{-n+\floor{\delta n}}; t(\log n)^{1/8} \le \|\mathbf{Y}_{\mathcal{W}'}\|_1 \le \Theta(n(\log n)^{-1/8})\right]. \label{Eqn_proof_prop_EqXY_P_XMY_and_A1A2B1B2}
\end{align}
It is straightforward, albeit a bit tedious, to explicitly bound the expectation term in \eqref{Eqn_proof_prop_EqXY_P_XMY_and_A1A2B1B2} by
\begin{equation*}
(\Theta(\log n) - \log t) e^{n-\floor{\delta n}} \prod_{i\in\mathcal{M}'} b_{\mu'(i),i},
\end{equation*}
again using Lemma~\ref{Lemma_weighted_exp_chernoff}. Carrying out the integral over $t$ in \eqref{Eqn_proof_prop_6_6_goal} finishes the proof.
\end{proof}
\subsection{Proof of Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr}}\label{Proof_prop_Oempe_likely}
In this section, we present the proof of Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr}, restated below.
\propOempeLikelyForHappinessEmpDist*
\begin{proof}
In light of Proposition~\ref{Prop_EqXY_bound}, it suffices to show that for all $\mathbf{y} \in \text{Proj}_y(\mathcal{R}\cap\Omega_{\text{eig}}(\zeta))$, we have
\begin{equation}\label{Eqn_want_ratio_EOempe_EqXY}
\frac{\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \cap \Omega_{\text{eig}}(\zeta)\backslash\Omega_{\text{emp}}(\eps)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'} = \mathbf{y} \big]}{\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'} = \mathbf{y} \big]} \le \exp(-\Theta(\ensuremath{\epsilon}^2 n)).
\end{equation}
It then follows that
\begin{align}
\ensuremath{\mathbb{E}}\big[q(&\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \cap \Omega_{\text{eig}}(\zeta)\backslash\Omega_{\text{emp}}(\eps)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big] \nonumber \\
&= \ensuremath{\mathbb{E}}\big[\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \cap \Omega_{\text{eig}}(\zeta)\backslash\Omega_{\text{emp}}(\eps)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big| \mathbf{Y}_{\mathcal{W}'}\big]\big] \nonumber \\
&\le \ensuremath{\mathbb{E}}\big[\exp(-\Theta(\ensuremath{\epsilon}^2 n)) \cdot \ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'}] \cdot \mathbbm{1}_{\text{Proj}_y(\mathcal{R}\cap\Omega_{\text{eig}}(\zeta))}(\mathbf{Y}_{\mathcal{W}'})\big] \nonumber \\
&\le \exp(-\Theta(\ensuremath{\epsilon}^2 n)) \cdot \ensuremath{\mathbb{E}}\big[\ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'}] \cdot \mathbbm{1}_{\mathcal{R}_1}(\mathbf{Y}_{\mathcal{W}'})\big] \nonumber \\
&\le \exp(-\Theta(\ensuremath{\epsilon}^2 n)) \cdot \ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_1}(\mathbf{Y}_{\mathcal{W}'})],
\end{align}
and Proposition~\ref{Prop_EqXY_bound} immediately implies the desired bound.
To show \eqref{Eqn_want_ratio_EOempe_EqXY}, notice that the quotient on the left-hand side is simply
\begin{equation}\label{Eqn_proof_distr_happi_equiv_emp_tail_bound_for_nearly_iid_X}
\ensuremath{\mathbb{P}}_{\mathbf{X}\sim\bigotimes_{i=1}^n \Exp(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i)}\big((\mathbf{X}_{\mathcal{M}'},\mathbf{y})\in\mathcal{R}\cap\Omega_{\text{eig}}(\zeta)\backslash\Omega_{\text{emp}}(\eps)\big).
\end{equation}
Recall that $(\mathbf{X}_{\mathcal{M}'})_i = X_i$ for $i \in \mathcal{M}'$ and $(\mathbf{X}_{\mathcal{M}'})_i = 0$ for $i\notin\mathcal{M}'$. For any $\mathbf{y} \in \text{Proj}_y(\mathcal{R}\cap\Omega_{\text{eig}}(\zeta))$, there must exist $\hat y\in\ensuremath{\mathbb{R}}_+$ such that for all but at most $\sqrt{\zeta} n$ indices $i\in[n]$ we have $|(\mathbf{M} \mathbf{y})_i-\hat y| \le \sqrt{\zeta} \hat y$. In other words, under the distribution $\mathbf{X}\sim\bigotimes_{i=1}^n \Exp(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i)$, for all but at most $(\delta + \sqrt{\zeta}) n$ indices $i\in[n]$, we have $n\hat y X_i\sim \Exp\big(\lambda_i\big)$ for some
\begin{equation*}
\lambda_i = \frac{a_{i,\mu'(i)}}{n\hat y}+\frac{(\mathbf{M}\mathbf{y})_i}{\hat y} = 1 + \Theta(\sqrt{\zeta}) + \Theta(1/\log n),
\end{equation*}
where we used the fact that $n\hat y\ge\Theta(\|\mathbf{y}\|_1) \ge \Theta(\log n)$ as implied by $\mathbf{y}\in\text{Proj}_y(\mathcal{R}\cap\Omega_{\text{eig}}(\zeta))$. The generalized Dvoretzky–Kiefer–Wolfowitz (DKW) inequality (see Lemma~\ref{Lemma_dkw_non_identical}) for independent and nearly-identically distributed random variables implies that the probability \eqref{Eqn_proof_distr_happi_equiv_emp_tail_bound_for_nearly_iid_X} is upper bounded by
\begin{equation}
\ensuremath{\mathbb{P}}_{\mathbf{X}\sim\bigotimes_{i=1}^n \Exp(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i)}\big(\big\|\hat{\mathcal{F}}(\mathbf{x}) - F_{n\hat y}\big\|_\infty > \ensuremath{\epsilon} + \Theta(\delta + \sqrt{\zeta})\big) \le \exp(-\Theta(\ensuremath{\epsilon}^2 n)),
\end{equation}
which finishes the proof.
\end{proof}
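For orientation, the classical Dvoretzky--Kiefer--Wolfowitz inequality (with Massart's constant) states that if $\hat{F}_m$ is the empirical CDF of $m$ i.i.d.\ samples from a distribution with CDF $F$, then $\ensuremath{\mathbb{P}}(\|\hat{F}_m - F\|_\infty > \ensuremath{\epsilon}) \le 2e^{-2m\ensuremath{\epsilon}^2}$. The nearly-identically-distributed variant of Lemma~\ref{Lemma_dkw_non_identical} is needed above because the at most $(\delta+\sqrt{\zeta})n$ exceptional coordinates can move the empirical CDF by at most $\delta+\sqrt{\zeta}$ in sup norm, and the rates $\lambda_i = 1+\Theta(\sqrt{\zeta})+\Theta(1/\log n)$ shift each individual CDF by at most $\Theta(\sqrt{\zeta}+1/\log n)$; this is the source of the additional $\Theta(\delta+\sqrt{\zeta})$ term in the deviation threshold.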
\subsection{Proof of Theorem~\ref{Thm_main_rank_dist_body}}\label{Append_proof_thm_main_rank}
Heuristically, we would expect the rank $R_i$ of a man to be proportional to his value $X_i(\mu)$. We will see below that this is approximately the case when $x_i\ll 1$. There are, however, going to be some $x_i$ of constant order, making it hard for us to say anything exact about $R_i$. But as we will soon see, for all but an $o(1)$ fraction of the $n$ men, we will indeed have $x_i = o(1)$. As we are concerned with the empirical distribution, such a small fraction becomes negligible in the limit and can simply be ignored. This heuristic is formalized in the results below.
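As a quick, purely heuristic sanity check (ignoring the conditioning on stability, which is handled carefully in the remainder of this section): for a man $m_i$ with value $x_i$ satisfying $n^{-1}\ll x_i\ll 1$ and unconstrained scores $X_{ij}\sim\Exp(a_{ij})$ for the other women, his expected rank is
\begin{equation*}
1+\sum_{j\ne \mu(i)}\ensuremath{\mathbb{P}}(X_{ij}\le x_i) = 1+\sum_{j\ne \mu(i)}\big(1-e^{-a_{ij}x_i}\big) = (1+o(1))\, x_i\sum_{j=1}^n a_{ij},
\end{equation*}
which is exactly the normalization $x_i w_i$ with $w_i = \sum_{j=1}^n a_{ij}$ used in the proof of Theorem~\ref{Thm_main_rank_dist_body} below.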
\begin{lemma}\label{Lemma_probable_happiness_majority}
Fix any $\delta > 0$. Let $\mu'$ be a partial matching of size $n-\floor{\delta n}$ on $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$. For any $0<\xi<\rho < 1$, consider $\Omega_{\text{tail}}(\xi,\rho)$ defined as
\begin{equation}
\bigg\{(\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \sum_{i=1}^n \mathbbm{1}\Big\{n x_i (\mathbf{M}\mathbf{y})_i\notin (F^{-1}(\xi/2)/2, F^{-1}(1-\xi/2))\Big\} \le \floor{\delta n} + \rho (n-\floor{\delta n})\bigg\}.
\end{equation}
Then
\begin{equation}\label{Eqn_Lemma_probable_happiness_majority_main_bound}
\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \backslash\Omega_{\text{tail}}(\xi,\rho)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big] \le \exp(o_\delta(n)-\Theta(D(\rho\|\xi)n)) \cdot \frac{(\delta n)!}{n!} \prod_{i\in\mathcal{M}'}a_{i,\mu'(i)}b_{\mu'(i),i},
\end{equation}
where $D(q\|p)$ denotes the KL divergence from $\Bern(p)$ to $\Bern(q)$.
\end{lemma}
\begin{proof}
The proof entirely mirrors that of Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr}. It suffices to show that for all $\mathbf{y}\in\text{Proj}_y(\mathcal{R})$ we have
\begin{equation}\label{Eqn_want_ratio_EOtailab_EqXY}
\frac{\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \backslash\Omega_{\text{tail}}(\xi,\rho)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'} = \mathbf{y} \big]}{\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \;|\; \mathbf{Y}_{\mathcal{W}'} = \mathbf{y} \big]} \le \exp(-\Theta(D(\rho\|\xi)n)).
\end{equation}
The quotient is simply
\begin{equation}\label{Eqn_proof_probable_happiness_majority_rewrite_x_distr_cond_on_y}
\ensuremath{\mathbb{P}}_{\mathbf{X}\sim\bigotimes_{i=1}^n \Exp(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i)}\big((\mathbf{X}_{\mathcal{M}'},\mathbf{y})\in\mathcal{R}\backslash\Omega_{\text{tail}}(\xi,\rho)\big) \le \ensuremath{\mathbb{P}}\big((\mathbf{X}_{\mathcal{M}'},\mathbf{y})\notin\Omega_{\text{tail}}(\xi,\rho)\big),
\end{equation}
where we take $\mathbf{X}\sim\bigotimes_{i=1}^n \Exp(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i)$ for the rest of this proof.
Note that under this specified distribution, $\Big(\big(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i\big)X_i\Big)_{i\in\mathcal{M}'}$ are $n-\floor{\delta n}$ i.i.d. samples from $\Exp(1)$, each falling outside the interval $(F^{-1}(\xi/2), F^{-1}(1-\xi/2))$ with probability precisely $\xi$. Hence,
\begin{multline}\label{Eqn_proof_probable_happiness_majority_hoeffding_for_renormalized_x}
\ensuremath{\mathbb{P}}\bigg(\sum_{i\in\mathcal{M}'} \mathbbm{1}\Big\{X_i\big(a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i\big)\notin (F^{-1}(\xi/2), F^{-1}(1-\xi/2))\Big\} > \rho (n-\floor{\delta n})\bigg) \\
= \ensuremath{\mathbb{P}}_{Z\sim\Binom(n-\floor{\delta n}, \xi)}(Z > \rho(n-\floor{\delta n})) \le \exp(-D(\rho\|\xi)(n-\floor{\delta n}))
\end{multline}
by the Hoeffding bound for the binomial distribution. Since $n(\mathbf{M}\mathbf{y})_i \le a_{i,\mu'(i)}+n(\mathbf{M}\mathbf{y})_i \le 2n(\mathbf{M}\mathbf{y})_i$ across all $i\in\mathcal{M}'$ for $\mathbf{y}\in\mathcal{R}_1$ and $n$ sufficiently large, the probability \eqref{Eqn_proof_probable_happiness_majority_hoeffding_for_renormalized_x} upper bounds $\ensuremath{\mathbb{P}}\big((\mathbf{X}_{\mathcal{M}'},\mathbf{y})\notin\Omega_{\text{tail}}(\xi,\rho)\big)$. This establishes \eqref{Eqn_want_ratio_EOtailab_EqXY} and concludes the proof.
\end{proof}
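For completeness, the binomial tail bound invoked in \eqref{Eqn_proof_probable_happiness_majority_hoeffding_for_renormalized_x} is the standard Chernoff--Hoeffding bound: for $Z\sim\Binom(m,\xi)$ and $\rho\in(\xi,1)$,
\begin{equation*}
\ensuremath{\mathbb{P}}(Z \ge \rho m) \le \inf_{\lambda > 0} e^{-\lambda\rho m}\,\ensuremath{\mathbb{E}}\big[e^{\lambda Z}\big] = \inf_{\lambda>0} \exp\Big(-m\big(\lambda\rho - \ln(1-\xi+\xi e^{\lambda})\big)\Big) = \exp\big(-m\,D(\rho\|\xi)\big),
\end{equation*}
where the infimum is attained at $e^{\lambda} = \frac{\rho(1-\xi)}{\xi(1-\rho)}$.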
By fixing some small $\delta,\rho$ and choosing $\xi$ sufficiently small, we can make $D(\rho\|\xi)$ arbitrarily large and obtain the following Corollary.
\begin{corollary}\label{Cor_no_stable_match_tilde_Otailab}
For any $0 < \delta,\rho < 1/2$, there exists a choice of $\xi > 0$ such that %
\begin{equation}
\ensuremath{\mathbb{P}}(\exists \mu\in\mathcal{S}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin\tilde{\Omega}_{\text{tail}}(\xi,\rho)) \lesssim e^{-n^c}
\end{equation}
asymptotically as $n\to\infty$,
where $\tilde{\Omega}_{\text{tail}}(\xi,\rho)$ is defined as
\begin{equation}
\bigg\{ (\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \sum_{i=1}^n \mathbbm{1}\Big\{x_i\notin \Big(F^{-1}(\xi/2)\frac{(\log n)^{7/8}}{C^2\overline{c}_1 n}, 2F^{-1}(1-\xi/2)\frac{C^2}{\underline{c}_1 \log n}\Big)\Big\} \le (\delta + \rho) n \bigg\}.
\end{equation}
That is, with high probability, no stable matching $\mu$ has more than a $(\delta+\rho)$ fraction of the men's post-truncation values outside an interval $(\Theta(n^{-1}(\log n)^{7/8}), \Theta(1/\log n))$.
\end{corollary}
\begin{proof}
Observe that $\mathcal{R}\backslash\tilde{\Omega}_{\text{tail}}(\xi,\rho) \subseteq \mathcal{R}\backslash\Omega_{\text{tail}}(\xi,\rho)$ by our definition of $\mathcal{R}$ and the boundedness assumption on the entries of $\mathbf{M}$. Again, invoking Lemma~\ref{Lemma_reduction_to_q} using inequality \eqref{Eqn_Lemma_probable_happiness_majority_main_bound} in Lemma~\ref{Lemma_probable_happiness_majority} yields the $\Theta(e^{-n^c})$ upper bound on $\ensuremath{\mathbb{P}}((\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin\tilde{\Omega}_{\text{tail}}(\xi,\rho))$.
\end{proof}
\begin{remark}
Recall that in a market with uniform preferences, the man-optimal (and woman-pessimal) stable matching realizes an average rank of $\Theta(\log n)$ for men and $\Theta(n/\log n)$ for women. Under the heuristic that values multiplied by $n$ roughly correspond to ranks (which we will formalize below), Lemma~\ref{Lemma_probable_happiness_majority} nicely matches our expectation that even in the most extreme cases, few individuals will achieve a rank better (smaller) than $\Theta((\log n)^{7/8})$ or worse (larger) than $\Theta(n/\log n)$. The lower bound can be refined to $\Theta(\log n/\log \log n)$ with a more careful analysis.
\end{remark}
Now let us consider a specific partial matching $\mu'$ of size $n-\floor{\delta n}$ between $\mathcal{M}'$ and $\mathcal{W}'$ and condition on $\mu'$ being stable with value vectors $(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})=(\mathbf{x},\mathbf{y})\in\tilde{\Omega}_{\text{tail}}(\xi,\rho)$. That is, there exists a subset $\bar{\mathcal{M}}'\subseteq\mathcal{M}'$ with $|\bar{\mathcal{M}}'|\ge (1-\delta-\rho)n$ such that $\Theta(n^{-1}(\log n)^{7/8}) \le (\mathbf{X}_{\mathcal{M}'})_i \le \Theta(1/\log n)$ for all $i\in\bar{\mathcal{M}}'$. By symmetry, we may further assume that there exists $\bar{\mathcal{W}}'\subseteq\mathcal{W}'$ with $|\bar{\mathcal{W}}'|\ge (1-\delta-\rho)n$ such that $\Theta(n^{-1}(\log n)^{7/8}) \le (\mathbf{Y}_{\mathcal{W}'})_j \le \Theta(1/\log n)$ for all $j\in\bar{\mathcal{W}}'$. We want to show that, for $i\in\bar{\mathcal{M}}'$, the \emph{pre-truncation} rank $R_i$ of man $m_i$ (i.e., over the entire market, including the $\floor{\delta n}$ women outside $\mathcal{W}'$) is well characterized by his value $X_{i,\mu'(i)}$ in the matching, up to an appropriate scaling. From now on, we will consider some $i\in\bar{\mathcal{M}}'$ with value $X_{i,\mu'(i)}=x_i$, and write
\begin{equation}\label{Eqn_def_eqn_rank_i}
R_i = 1 + \sum_{j\ne \mu'(i)}\mathbbm{1}_{[0,x_i]}(X_{ij}).
\end{equation}
The condition that $\mu'$ is stable requires $(X_{ij}, Y_{ji}) \notin [0, x_i]\times[0,y_j]$ for all $j\in\mathcal{W}'\backslash\{\mu'(i)\}$. Thus, for all $j\in\mathcal{W}'\backslash\{\mu'(i)\}$,
\begin{multline}
\ensuremath{\mathbb{P}}(X_{ij} \le x_i | \mu'\text{ stable}, (\mathbf{X}_{\mathcal{M}'})_i=x_i, (\mathbf{Y}_{\mathcal{W}'})_j=y_j) = \frac{\ensuremath{\mathbb{P}}(X_{ij} \le x_i,Y_{ji} > y_j)}{1 - \ensuremath{\mathbb{P}}(X_{ij} \le x_i,Y_{ji} \le y_j)} \\
= \frac{(1-e^{-a_{ij}x_i})e^{-b_{ji}y_j}}{1-(1-e^{-a_{ij}x_i})(1-e^{-b_{ji}y_j})},
\end{multline}
and for all $j\in\mathcal{W}\backslash\mathcal{W}'$ (so $(\mathbf{Y}_{\mathcal{W}'})_j=0$),
\begin{equation}
\ensuremath{\mathbb{P}}(X_{ij} \le x_i | \mu'\text{ stable}, (\mathbf{X}_{\mathcal{M}'})_i=x_i) = 1-e^{-a_{ij}x_i}.
\end{equation}
Define
\begin{equation*}
p_{ij} = \begin{cases}
1 & \quad \text{ when }j=\mu'(i), \\
\frac{(1-e^{-a_{ij}x_i})e^{-b_{ji}y_j}}{1-(1-e^{-a_{ij}x_i})(1-e^{-b_{ji}y_j})} & \quad \text{ when } j\in\mathcal{W}'\backslash\{\mu'(i)\}, \\
1-e^{-a_{ij}x_i} & \quad\text{ when } j\in\mathcal{W}\backslash\mathcal{W}',
\end{cases}
\end{equation*}
and $I_{ij}\sim \Bern(p_{ij})$ independently for $j\in[n]$ so that $R_i = \sum_{j=1}^n I_{ij}$ conditional on $(\mathbf{X}_{\mathcal{M}'})_i = X_i=x_i$.
Note that for any $j\in \bar{\mathcal{W}}'\backslash\{\mu'(i)\}$, we always have
\begin{equation}\label{Eqn_relate_rank_to_happi_p_ij_upperbound}
p_{ij} \le 1-e^{-a_{ij}x_i} \le a_{ij}x_i
\end{equation}
and
\begin{multline}\label{Eqn_relate_rank_to_happi_p_ij_lowerbound}
p_{ij} \ge \frac{(1-e^{-a_{ij}x_i})e^{-b_{ji}y_j}}{1-(1-e^{-a_{ij}x_i})(1-e^{-b_{ji}y_j})} \ge (1-e^{-a_{ij}x_i})e^{-b_{ji}y_j} \\
\ge e^{-\Theta(\frac{1}{\log n})}\bigg(1 - \Theta\Big(\frac{1}{\log n}\Big)\bigg) a_{ij}x_i = (1-o(1))a_{ij}x_i.
\end{multline}
For $j \notin \bar{\mathcal{W}}'\cup\{\mu'(i)\}$, $p_{ij}$ admits the same upper bound \eqref{Eqn_relate_rank_to_happi_p_ij_upperbound} and the trivial lower bound of zero.
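For completeness, the first inequality in \eqref{Eqn_relate_rank_to_happi_p_ij_upperbound} can be checked directly for $j\in\mathcal{W}'\backslash\{\mu'(i)\}$: writing $u=1-e^{-a_{ij}x_i}$ and $v=1-e^{-b_{ji}y_j}$, and multiplying through by $1-uv>0$,
\begin{equation*}
\frac{u(1-v)}{1-uv}\le u \ \Longleftrightarrow\ u(1-v)\le u(1-uv) \ \Longleftrightarrow\ uv(1-u)\ge 0,
\end{equation*}
which always holds since $u,v\in[0,1]$; the second inequality in \eqref{Eqn_relate_rank_to_happi_p_ij_upperbound} is simply $1-e^{-z}\le z$.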
Hence, conditional on
\begin{equation}\label{Eqn_condition_stable_with_vx_vy}
\mu'\text{ stable and }(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})=(\mathbf{x},\mathbf{y})\in\tilde{\Omega}_{\text{tail}}(\xi,\rho)\cap \tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon}) \cap\mathcal{R} \tag{$\dagger$}
\end{equation}
for any (fixed) $\xi,\rho,\ensuremath{\epsilon}> 0$,
we have the stochastic dominance
\begin{equation}
1 + \sum_{j\in \bar{\mathcal{W}}'\backslash\{\mu'(i)\}} \underline{I}_{ij} \preceq R_i \preceq 1 + \sum_{j\ne \mu'(i)} \overline{I}_{ij},
\end{equation}
where $\underline{I}_{ij}\sim\Bern((1-o(1))a_{ij}x_i)$ and $\overline{I}_{ij}\sim\Bern(a_{ij}x_i)$. Since $i\in\bar{\mathcal{M}}'$ by our assumption and thus $\Theta((\log n)^{7/8}/n)\le x_i \le \Theta(1/\log n)$, the expectation of $R_i/x_i$ can be upper bounded by
\begin{equation}
\ensuremath{\mathbb{E}}\bigg[\frac{R_i}{x_i} \bigg| \eqref{Eqn_condition_stable_with_vx_vy}\bigg] \le \frac{1}{x_i} + \sum_{j\ne\mu'(i)}a_{ij} = (1+o(1))\sum_{j=1}^n a_{ij}
\end{equation}
and lower bounded by
\begin{equation}
\ensuremath{\mathbb{E}}\bigg[\frac{R_i}{x_i} \bigg| \eqref{Eqn_condition_stable_with_vx_vy}\bigg] \ge (1-o(1))\sum_{j\in\bar{\mathcal{W}}'}a_{ij} \ge (1-\Theta(\delta))\sum_{j=1}^n a_{ij}.
\end{equation}
Similarly, we may bound the variance of $R_i/x_i$ by
\begin{equation}
{\rm Var}\bigg(\frac{R_i}{x_i} \bigg| \eqref{Eqn_condition_stable_with_vx_vy}\bigg) \le \sum_{j\ne\mu'(i)}a_{ij} (1-a_{ij}x_i) \le \sum_{j=1}^n a_{ij}.
\end{equation}
Hence, we have
\begin{equation}\label{Eqn_final_E_Var_bound_for_rank_happiness_ratio}
1-\Theta(\delta) \le \ensuremath{\mathbb{E}}\bigg[\frac{R_i}{x_i \sum_{j=1}^n a_{ij}} \bigg| \eqref{Eqn_condition_stable_with_vx_vy}\bigg] \le 1+o(1) \enspace \text{ and } \enspace {\rm Var}\bigg(\frac{R_i}{x_i \sum_{j=1}^n a_{ij}} \bigg| \eqref{Eqn_condition_stable_with_vx_vy}\bigg) \le \Theta(n^{-1}),
\end{equation}
with these quantities conditionally independent for all $i\in\bar{\mathcal{M}}'$ and the hidden constants depending only on $C$, implying concentration of $R_i$ around $x_i\sum_{j=1}^n a_{ij}$ in the following sense.
\begin{proposition}\label{Prop_most_have_good_rank_happi_ratio}
Conditional on \eqref{Eqn_condition_stable_with_vx_vy}, for any fixed $\theta > 0$ and $\delta,\rho,\gamma\in(0,1/2)$, we have
\begin{equation}
\ensuremath{\mathbb{P}}\left(\sum_{i=1}^n \mathbbm{1}_{(\theta + \Theta(\delta), \infty)}\bigg(\bigg|\frac{R_i}{x_i\sum_{j=1}^n a_{ij}} - 1\bigg|\bigg) \ge (\delta+\rho+\gamma)n \middle| \eqref{Eqn_condition_stable_with_vx_vy}\right)
\lesssim \ensuremath{\mathbb{P}}_{N\sim\Poi(\Theta(\theta^{-2}))}(N\ge \gamma n)
\le e^{-\Theta(\gamma n)}.
\end{equation}
\end{proposition}
\begin{proof}
By Chebyshev's inequality and \eqref{Eqn_final_E_Var_bound_for_rank_happiness_ratio}, $\ensuremath{\mathbb{P}}\big(\big|\frac{R_i}{x_i\sum_{j=1}^n a_{ij}} - \ensuremath{\mathbb{E}}\big[\frac{R_i}{x_i \sum_{j=1}^n a_{ij}} \big| \eqref{Eqn_condition_stable_with_vx_vy}\big]\big| \ge \theta \big| \eqref{Eqn_condition_stable_with_vx_vy}\big) \le \Theta\big((n\theta^2)^{-1}\big)$ for all $i\in\bar{\mathcal{M}}'$. Hence, by conditional independence of the ranks, $\sum_{i\in\bar{\mathcal{M}}'} \mathbbm{1}_{(\theta + \Theta(\delta), \infty)}\big(\big|\frac{R_i}{x_i\sum_{j=1}^n a_{ij}} - 1\big|\big)$ is stochastically dominated by $\Binom\big(n, \Theta\big((n\theta^2)^{-1}\big)\big)$, which converges to $\Poi(\Theta(\theta^{-2}))$ in total variation. The Proposition follows from the well-known tail bound for $N\sim\Poi(\lambda)$ that $\ensuremath{\mathbb{P}}(N\ge \lambda + t) \le \exp\big(-\frac{t^2}{2(\lambda+t)}\big)$, which implies $\ensuremath{\mathbb{P}}_{N\sim\Poi(\Theta(\theta^{-2}))}(N\ge \gamma n) \le \exp\big(-\frac{(\gamma n - \Theta(\theta^{-2}))^2}{2\gamma n}\big) \lesssim \exp(-\gamma n/2)$.
\end{proof}
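For concreteness, the final estimate uses only elementary algebra: writing $\lambda = \Theta(\theta^{-2})$ (a constant) and $t = \gamma n - \lambda$, which is positive for $n$ sufficiently large,
\begin{equation*}
\frac{t^2}{2(\lambda+t)} = \frac{(\gamma n-\lambda)^2}{2\gamma n} = \frac{\gamma n}{2} - \lambda + \frac{\lambda^2}{2\gamma n} \ge \frac{\gamma n}{2} - \lambda,
\end{equation*}
so the Poisson tail is at most $e^{\lambda}\exp(-\gamma n/2) \lesssim \exp(-\gamma n/2)$.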
\begin{proof}[Proof of Theorem~\ref{Thm_main_rank_dist_body}]
Note that in Corollary~\ref{Cor_no_stable_match_tilde_Otailab}, $\rho$ can be chosen arbitrarily small once $\delta$ is fixed. In particular, we may always guarantee $\rho \le \delta$. Similarly, we may assume $\theta \le \delta$ in Proposition~\ref{Prop_most_have_good_rank_happi_ratio}. Thus,
\begin{equation}
\ensuremath{\mathbb{P}}\left(\sum_{i=1}^n \mathbbm{1}_{(\Theta(\delta), \infty)}\bigg(\bigg|\frac{R_i}{x_i w_i} - 1\bigg|\bigg) \ge (2\delta+\gamma)n \middle| \eqref{Eqn_condition_stable_with_vx_vy}\right)
\le e^{-\Theta(\gamma n)},
\end{equation}
where $w_i = \sum_{j=1}^n a_{ij}$ is the fitness value of man $m_i$.
Marginalizing over all pairs of relevant value vectors $(\mathbf{x},\mathbf{y})\in\tilde{\Omega}_{\text{tail}}(\xi,\rho)\cap\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon})\cap\mathcal{R}$ in the condition \eqref{Eqn_condition_stable_with_vx_vy}, we obtain
\begin{equation}
\ensuremath{\mathbb{P}}\left(\mathcal{E}_{\text{ratio}}(\delta,\gamma) \middle| (\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in\tilde{\Omega}_{\text{tail}}(\xi,\rho)\cap\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon})\cap\mathcal{R}\right)
\le e^{-\Theta(\gamma n)},
\end{equation}
where $\mathcal{E}_{\text{ratio}}(\delta,\gamma)$ denotes the undesirable event that $\sum_{i=1}^n \mathbbm{1}_{(\Theta(\delta), \infty)}\big(\big|\frac{R_i}{(\mathbf{X}_{\mathcal{M}'})_i w_i} - 1\big|\big) \ge (2\delta+\gamma)n$ for the partial matching $\mu'$.
By Proposition~\ref{Prop_EqXY_bound},
\begin{multline}
\ensuremath{\mathbb{P}}(\mu'\text{ stable}, (\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in\tilde{\Omega}_{\text{tail}}(\xi,\rho)\cap\tilde{\Omega}_{\text{emp}}(\eps)\cap\mathcal{R}) \le \ensuremath{\mathbb{P}}(\mu'\text{ stable}, (\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in\mathcal{R}) \\
\le e^{o(n)+o_\delta(n)} \frac{(\delta n)!}{n!}\prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} b_{\mu'(i),i}.
\end{multline}
By choosing $\gamma=\gamma(\delta)=o_\delta(1)$ sufficiently large (relative to $\delta$) and following a computation similar to that in Lemma~\ref{Lemma_reduction_to_q} and Corollary~\ref{Cor_subexp_num_stable_match}, we can ensure that with probability $1-\Theta(e^{-n^c})$ there exists no stable partial matching $\mu'$ where both $\mathcal{E}_{\text{ratio}}(\delta,\gamma)$ and $(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in \tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon}_0(\delta))\cap\mathcal{R}$ happen, where the function $\ensuremath{\epsilon}_0$ is defined in the proof of Theorem~\ref{Thm_main_happiness_dist_body}. Notice that by repeated uses of the triangle inequality,
\begin{equation}
(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'})\in \tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon}_0(\delta)), \mathcal{E}_{\text{ratio}}(\delta,\gamma)^c \enspace \Longrightarrow \enspace \|\hat{\mathcal{F}}(\mathbf{w}^{-1}\circ\mathbf{R}(\mu'))-F_\lambda\|_\infty \le \Theta(\delta) + \gamma(\delta) + \ensuremath{\epsilon}_0(\delta) = o_\delta(1)
\end{equation}
for the choice of $\lambda = \|\mathbf{Y}_{\mathcal{W}'}\|_1$. Combining this with \eqref{Eqn_proof_Thm_main_happiness_dist_tOemp} and Proposition~\ref{Prop_R_likely}, we conclude that with probability $1-\Theta(e^{-n^c})$, all stable matchings $\mu\in\mathcal{S}$ induce $\delta$-truncated stable partial matchings $\mu_\delta$ with $\|\hat{\mathcal{F}}(\mathbf{w}^{-1}\circ\mathbf{R}(\mu_\delta))-F_{\lambda(\mu)}\|_\infty = o_\delta(1)$, where $\lambda(\mu)=\|\mathbf{Y}_{\delta}(\mu)\|_1$. The $\delta$-truncation affects the distance by at most $\delta$, which can be absorbed into the $o_\delta(1)$ upper bound. Thus, by choosing $\delta$ sufficiently small relative to any fixed $\ensuremath{\epsilon} > 0$, we complete our proof of Theorem~\ref{Thm_main_rank_dist_body}.
\end{proof}
\subsection{Proofs of Theorem~\ref{Thm_dist_body_approx_stable} and Corollary~\ref{Cor_body_imbalance}}
\ThmMainApproxStable*
\begin{proof}
There are at most $\exp(2 h_b(\alpha) n + O(\ln n))$ sub-markets of size at least $(1-\alpha) n$ (one for each choice of the men and women to remove), where $h_b(p) = -p \log p - (1-p) \log (1-p)$ is the binary entropy function. Under Assumption~\ref{Assumption_C_bounded} for the whole market, each such sub-market also satisfies Assumption~\ref{Assumption_C_bounded}. Fix any $\ensuremath{\epsilon} > 0$. By Theorem~\ref{Thm_main_happiness_dist_body}, each such sub-market contains a stable matching whose men's empirical distribution of value deviates from the family of exponential distributions by at least $\ensuremath{\epsilon}/2$ in Kolmogorov-Smirnov distance with probability at most $\exp(-n^c)$ for any fixed $c\in(0,1/2)$. Whenever $\alpha < n^{-\eta}$ for some $\eta > 1/2$, we have $h_b(\alpha) n = O(n^{1-\eta}\ln n)$, which is $o(n^{c})$ for any fixed $c > 1-\eta$. Choosing $c \in (1-\eta, 1/2)$ and applying a union bound over all relevant sub-markets gives the first part of \eqref{Eqn_happiness_dist_approx_stable}, since the additional $\alpha$ fraction of the market affects the empirical distribution by at most $\alpha\to 0$. The second part follows analogously from Theorem~\ref{Thm_main_rank_dist_body}.
\end{proof}
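For completeness, the entropy estimate used above follows from the elementary bound $h_b(\alpha)\le\alpha\ln\frac{1}{\alpha}+\alpha$ (valid since $(1-\alpha)\ln\frac{1}{1-\alpha}\le\alpha$): for $\alpha < n^{-\eta}$ and $n$ large enough that $n^{-\eta} < 1/e$,
\begin{equation*}
h_b(\alpha)\, n \le n\,\alpha\Big(\ln\frac{1}{\alpha}+1\Big) \le n^{1-\eta}\big(\eta\ln n + 1\big) = o(n^{c})
\end{equation*}
for any fixed $c > 1-\eta$, so the union bound over at most $\exp(2h_b(\alpha)n+O(\ln n))$ sub-markets costs only a factor $e^{o(n^{c})}$ against the $\exp(-n^{c})$ failure probability of each.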
\CorImbalanceMarket*
\begin{proof}
The proof is entirely the same as the proof of Theorem~\ref{Thm_dist_body_approx_stable}. The union bound now covers all sub-markets of size $n-k$, that is, those consisting of all the men and a subset of the women. The rest is the same.
\end{proof}
\section{Proofs of typical behaviors of scores in stable matching}\label{Append_weak_regular_scores}
Recall that for a matching $\mu$ with (latent) value vectors $\mathbf{X}(\mu)$ and $\mathbf{Y}(\mu)$, we define $\mathbf{U}(\mu) = F(\hat{\mathbf{X}}(\mu))$ and $\mathbf{V}(\mu) = F(\hat{\mathbf{Y}}(\mu))$ with the standard exponential CDF $F(z) = 1-e^{-z}$ applied component-wise to the renormalized values $\hat{X}_i(\mu) = X_i(\mu) a_{i,\mu(i)}$ and $\hat{Y}_i(\mu) = Y_i(\mu) b_{i,\mu^{-1}(i)}$ for $i\in[n]$, so that $U_i,V_i\sim\Unif([0,1])$ and are mutually independent due to the way the score matrices are generated.
\begin{lemma}\label{Lemma_DA_prop_num}
For any $c \in (0,1/2)$, there exists a constant $\theta_1> 0$ (depending on $c$ and $C$) such that in a random instance of the market, at least $\theta_1 n\ln n$ proposals are made during man-proposing deferred acceptance with probability $1 - \exp(-n^c)$.
\end{lemma}
\begin{proof}
This result follows from a standard analysis of the deferred acceptance algorithm executed as in \cite[Section~3]{ashlagi2020tiered}. In \cite{ashlagi2020tiered}, the preference model involves tiers, where fitness values among different tiers differ by at most a constant factor. It turns out that this bounded ratio of fitness is the only thing used in the proofs, and is also satisfied by our matching market under Assumption~\ref{Assumption_C_bounded}. The main steps of analysis are as follows.
\begin{enumerate}
\item Consider (man-proposing) deferred acceptance \emph{with re-proposals}, where each time man $i$ proposes, his proposal will go to woman $j$ with probability proportional to $a_{ij}$, independent of all previous proposals (and their acceptance/rejection). The total number $T$ of proposals in this process is equal in distribution to the number of draws in the coupon collector problem, and from standard concentration bounds we can show $\ensuremath{\mathbb{P}}(T \ge k n^{1+c}) \le e^{-n^c-2}$ for $k$ sufficiently large and $\ensuremath{\mathbb{P}}(T < \alpha n\ln n) \le \exp(-n^c-2)$ for $\alpha>0$ sufficiently small (see also \cite{doerr2020probabilistic}). The details mirror Appendices~A and B in \cite{ashlagi2020tiered}.
\item Analogous to Lemmas~3.5 and 3.6 in \cite{ashlagi2020tiered}, we can show that, with probability $1-\exp(-n^c-1)$, no single man makes more than $\ell n^{2c}$ proposals during deferred acceptance with re-proposal for $\ell$ sufficiently large.
\item Conditional on $T\ge \alpha n\ln n$ and the maximum number of proposals any man makes being at most $\ell n^{2c}$, the fraction of re-proposals should be no greater than $C\ell n^{2c-1}$ in expectation, since each proposal will be a duplicate of a previous proposal independently with probability at most $\frac{C\ell n^{2c}}{n}$. It follows immediately from binomial concentration that the (conditional) probability that the number of repeated proposals exceeds $T/2$ is exponentially small. Hence, the actual number of proposals during deferred acceptance (without re-proposals) is at least $\frac{\alpha}{2} n\ln n$ with probability $1-\exp(-n^c)$.
\end{enumerate}%
\end{proof}
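As a point of reference for Step~1 (not needed for the proof itself): in the special case of uniform fitness values ($a_{ij}\equiv 1$), deferred acceptance with re-proposals terminates exactly when every woman has received at least one proposal, and each proposal goes to a uniformly random woman, so $T$ is the classical coupon collector count with
\begin{equation*}
\ensuremath{\mathbb{E}}[T] = n\sum_{j=1}^{n}\frac{1}{j} = n\ln n + O(n).
\end{equation*}
Under Assumption~\ref{Assumption_C_bounded}, the proposal probabilities differ from the uniform $1/n$ by at most a factor of $C^2$, which is precisely the bounded-ratio property that the cited concentration arguments rely on.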
\begin{lemma}\label{Lemma_U1_ge_ln_n}
For any $c \in (0,1/2)$, there exists a constant $\theta_2> 0$ (depending on $c$ and $C$) such that,
with probability $1-\exp(-n^c)$,
there exist no stable matchings $\mu$ where $\|\mathbf{U}(\mu)\|_1 \le \theta_2\ln n$.
\end{lemma}
\begin{proof}
Note that $U_i(\mu) = 1-e^{-a_{i,\mu(i)} X_i(\mu)} \in [C^{-1}F(X_i(\mu)),C F(X_i(\mu))]$ since $a_{i,\mu(i)}\in[1/C, C]$. For any stable matching $\mu$, $\mathbf{X}(\mu) \succeq \mathbf{X}(\mu_\text{MOSM})$, where $\mu_\text{MOSM}$ is the man-optimal stable matching obtained from man-proposing deferred acceptance, in which all $n$ men are simultaneously matched with their best possible stable partners (and thus attain their best possible values), and hence
\begin{equation}
\|\mathbf{U}(\mu)\|_1 \ge \frac{1}{C} \|F(\mathbf{X}(\mu))\|_1 \ge \frac{1}{C} \|F(\mathbf{X}(\mu_\text{MOSM}))\|_1 \ge \frac{1}{C^2} \|\mathbf{U}(\mu_\text{MOSM})\|_1.
\end{equation}
Thus, it suffices to consider the event where $\|\mathbf{U}(\mu_\text{MOSM})\|_1 \le \theta_2 C^2 \ln n$.
Without loss of generality, we may assume that $\mu_\text{MOSM}$ matches man $i$ with woman $i$ for $i\in[n]$. Denote by $R_i\in[n]$ and $X_i=X_{ii}\in\ensuremath{\mathbb{R}}_+$ the (random) rank of his partner and the latent value in $\mu_\text{MOSM}$ for man $i\in[n]$, and let $U_i = F(a_{ii}X_i)$. By definition, $X_i\sim\Exp(a_{ii})$ and $R_i = \sum_{j\ne i} \mathbbm{1}\{X_{ij}/a_{ij} < X_i\}$.
We condition on a specific execution of the man-proposing deferred acceptance algorithm, i.e., on the sequence of proposals, which specifies the rank $R_i$ and an ordering over his top $R_i$ most preferred partners for each man. Notice that the specific values $X_{ij}$ only affect the execution through the ordering of proposals, and hence conditional on a particular ordering, the values of the men are independent. Further, the value $X_i$ conditional on $R_i$ and an ordering $w_{j_1} \succeq_{m_i} w_{j_2} \succeq_{m_i} \cdots \succeq_{m_i} w_{j_{R_i}}$ is equal in distribution to $\tilde{X}_{(R_i)}$, i.e., the $R_i$-th order statistic of $(\tilde{X}_1,\ldots,\tilde{X}_n)$ with $\tilde{X}_j \sim \Exp(a_{ij})$, conditional on $\tilde{X}_{(k)} = \tilde{X}_{j_k}$ for all $k\in[R_i]$. By the representation of exponential order statistics given in \cite{nevzorov1986representations}, under such conditions,
\begin{equation}
\tilde{X}_{(R_i)} \overset{d}{=} \sum_{t=1}^{R_i} \frac{Z_{i,t}}{\sum_{k=t}^n a_{i,j_k}}, %
\end{equation}
where $Z_{i,t}\sim\Exp(1)$ are independently sampled for $t\in[n]$. Conditional on $R_i$ and the sequence $j_1,\ldots,j_{R_i}$, we have
\begin{multline}
U_i = F(a_{ii} X_i) \overset{d}{=} F\bigg(\sum_{t=1}^{R_i} \frac{a_{ii} Z_{i,t}}{\sum_{k=t}^n a_{i,j_k}} \bigg) \ge F\bigg( \frac{1}{n}\sum_{t=1}^{R_i} \frac{Z_{i,t}}{C^2} \bigg) \\
\ge \frac{1}{n}\sum_{t=1}^{R_i}F(Z_{i,t}/C^2) \ge \frac{1}{C^2 n} \sum_{t=1}^{R_i} F(Z_{i,t}) = \frac{1}{C^2 n} \sum_{t=1}^{R_i} W_{i,t},
\end{multline}
where the second to last inequality follows from Jensen's inequality (applied to the concave function $F$), the last inequality uses $F(z/C^2)\ge F(z)/C^2$ (again by concavity of $F$ together with $F(0)=0$), and $W_{i,t}=F(Z_{i,t}) \sim \Unif([0,1])$ independently.
Thus, conditional on $\mathbf{R}$, we have $C^2 n\|\mathbf{U}\|_1 \succeq \sum_{i=1}^n \sum_{t=1}^{R_i} W_{i,t}$, independent of the specific ordering (and the identity) of the proposals made. Therefore, we may marginalize over this ordering to get
\begin{equation}
\ensuremath{\mathbb{P}}\bigg(\|\mathbf{U}\|_1 \le \theta_2 C^2 \ln n\bigg| \mathbf{R}\bigg) \le \ensuremath{\mathbb{P}}\left( \sum_{i=1}^n \sum_{t=1}^{R_i} W_{i,t} \le \theta_2 C^4 n \ln n \middle| \mathbf{R}\right).
\end{equation}
Whenever $\|\mathbf{R}\|_1 \ge \theta_1 n \ln n$, the probability above is at most $\exp(-\Theta(n\ln n)) \ll \exp(-n^c)$ by Hoeffding's inequality, provided that we choose $\theta_2 < \frac{\theta_1}{2C^4}$,
and our proof is complete once we marginalize over all possible realizations of $\mathbf{R}$ with $\|\mathbf{R}\|_1\ge \theta_1 n\ln n$; by Lemma~\ref{Lemma_DA_prop_num}, this event fails with probability at most $\exp(-n^c)$ (after adjusting $\theta_1$ by a constant factor if necessary, since $\|\mathbf{R}\|_1$ differs from the total number of proposals made during deferred acceptance by at most $n$).
\end{proof}
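For concreteness, the Hoeffding step can be spelled out as follows (a direct computation under the choices already made). Write $m=\|\mathbf{R}\|_1\ge\theta_1 n\ln n$ and $c_0=\frac12-\frac{\theta_2C^4}{\theta_1}>0$. The $W_{i,t}$ are $m$ independent $\Unif([0,1])$ random variables with total mean $m/2$, and $\theta_2 C^4 n \ln n \le \frac{\theta_2C^4}{\theta_1}\,m=\big(\tfrac12-c_0\big)m$, so Hoeffding's inequality gives
\begin{equation*}
\ensuremath{\mathbb{P}}\left( \sum_{i=1}^n \sum_{t=1}^{R_i} W_{i,t} \le \theta_2 C^4 n \ln n \,\middle|\, \mathbf{R}\right) \le \exp\big(-2c_0^2 m\big) \le \exp\big(-2c_0^2\theta_1\, n\ln n\big) = \exp(-\Theta(n\ln n)).
\end{equation*}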
\begin{lemma}\label{Lemma_U1V1_le_O_n_ln}
For any $\kappa \ge 0$ and $\theta_3 > 0$, the expected number of stable matchings $\mu$ with $\|\mathbf{U}(\mu)\|_1 \|\mathbf{V}(\mu)\|_1 \ge \theta_3 n(\ln n)^{1/8}$ is upper bounded by $\exp(-\kappa n)$.
In particular, with high probability, no such stable matching exists.
\end{lemma}
\begin{proof}
It suffices to show that the probability that any fixed matching $\mu$ is stable and satisfies $\|\mathbf{U}(\mu)\|_1 \|\mathbf{V}(\mu)\|_1 \ge \theta_3 n(\ln n)^{1/8}$ is upper bounded by $o(e^{-\kappa n}/n!)$ for any $\theta_3>0$. Write $I=[0,1]$ for the unit interval. Let
\begin{equation*}
\Omega = \left\{(\mathbf{u},\mathbf{v})\in I^n\times I^n : \|\mathbf{u}\|_1\|\mathbf{v}\|_1 > \theta_3 n(\ln n)^{1/8}\right\}
\end{equation*}
and let
\begin{equation}
P := \ensuremath{\mathbb{P}}(\mu\in\mathcal{S}, (\mathbf{U}(\mu),\mathbf{V}(\mu))\in \Omega) = \int_{\ensuremath{\mathbb{R}}^n_+\times \ensuremath{\mathbb{R}}^n_+} p(\vx,\vy) \cdot \mathbbm{1}_{\Omega}(F(\hat{\mathbf{x}}), F(\hat{\mathbf{y}})) \cdot \prod_{i=1}^n f(\hat{x}_i)f(\hat{y}_i) \ensuremath{\,d}\hat{\mathbf{x}} \ensuremath{\,d}\hat{\mathbf{y}},
\end{equation}
where $f(t)=e^{-t}$ denotes the standard exponential density function. We apply the simple bound \eqref{Eqn_naive_bound_pxy} on $p(\vx,\vy)$ to obtain
\begin{align}
P &\le \int_{\ensuremath{\mathbb{R}}^n_+\times \ensuremath{\mathbb{R}}^n_+} \prod_{i\ne j} \Big(1 - \big(1-e^{-\hat{x}_i/C^2}\big)\big(1-e^{-\hat{y}_j/C^2}\big) \Big) \cdot \mathbbm{1}_{\Omega}(F(\hat{\mathbf{x}}), F(\hat{\mathbf{y}})) \cdot \prod_{i=1}^n f(\hat{x}_i)f(\hat{y}_i) \ensuremath{\,d}\hat{\mathbf{x}} \ensuremath{\,d}\hat{\mathbf{y}} \nonumber\\
&= \int_{I^n\times I^n} \prod_{i\ne j} \Big(1 - \big(1-(1-u_i)^{1/C^2}\big)\big(1-(1-v_j)^{1/C^2}\big) \Big) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}, \mathbf{v}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&\le \int_{I^n\times I^n} \prod_{i\ne j} \Big(1 - \frac{1}{C^4} u_iv_j \Big) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}, \mathbf{v}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&\le \int_{I^n\times I^n} \exp\Big(-\frac{1}{C^4}(\|\mathbf{u}\|_1\|\mathbf{v}\|_1 - \mathbf{u}\cdot\mathbf{v})\Big) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}, \mathbf{v}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&\le \int_{I^n\times I^n} \exp\Big(\frac{1}{C^4}(n - \|\mathbf{u}\|_1\|\mathbf{v}\|_1)\Big) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}, \mathbf{v}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v}\label{Eqn_Lemma_weak_reg_U_apply_naive_bound},
\end{align}
where we use the basic facts that $1-z^{1/C^2}\ge (1-z)/C^2$ for all $z\in[0,1]$ and $C\ge 1$ and that $1+z\le e^z$ for all $z\in\ensuremath{\mathbb{R}}$.
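For completeness, the first of these facts follows by integration: for $a=1/C^2\in(0,1]$ and $z\in[0,1]$,
\begin{equation*}
1-z^{a}=\int_z^1 a\,t^{a-1}\ensuremath{\,d} t \ \ge\ \int_z^1 a\ensuremath{\,d} t = a(1-z),
\end{equation*}
since $t^{a-1}\ge 1$ for $t\in(0,1]$ when $a\le 1$.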
Let $s = \|\mathbf{u}\|_1$ and $t = \|\mathbf{v}\|_1$. It is well known (e.g., \cite[Ch.~I,~Sec.~9]{feller1971introduction}) that the probability density of $S=\|\mathbf{U}\|_1$ with $U_1,\ldots,U_n$ independent samples from $\Unif(I)$ is bounded above by $\frac{s^{n-1}}{(n-1)!}$. Hence,
\begin{equation}
P \le \int_{s,t\in[0,n]\,:\, st\ge \theta_3 n(\ln n)^{1/8}} e^{(n-st)/C^4} \cdot \frac{n^2(st)^{n-1}}{(n!)^2} \ensuremath{\,d} s \ensuremath{\,d} t.
\end{equation}
Note that when $st \ge n (\ln n)^2$, we have $\exp\big((n-st)/C^4\big) \le \exp\big((n-n(\ln n)^2)/C^4\big) = o(\exp(-\kappa n))/n!$ and therefore the region $\{s,t\in[0,n]\,:\, st\ge n(\ln n)^2\}$ contributes a negligible amount to the integral. Hence,
\begin{align}\label{Eqn_Lemma_weak_reg_U_p2_final_bound}
P &\le \frac{o(e^{-\kappa n})}{n!} + \int_{s,t\in[0,n]\,:\, \theta_3 n(\ln n)^{1/8} \le st\le n(\ln n)^2} e^{(n-st)/C^4} \cdot \frac{n^2(st)^{n-1}}{(n!)^2} \ensuremath{\,d} s \ensuremath{\,d} t \nonumber\\
&\labelrel\le{Rel_proof_lemma_u1v1_1} \frac{o(e^{-\kappa n})}{n!} + n^2\cdot e^{(n-\theta_3 n(\ln n)^{1/8})/C^4} \cdot \frac{n^2(n(\ln n)^2)^{n-1}}{(n!)^2} \nonumber\\
&\labelrel\le{Rel_proof_lemma_u1v1_2} \frac{o(e^{-\kappa n})}{n!} + \frac{1}{n!} \cdot \exp\Big(\frac{1}{C^4}(n-\theta_3 n(\ln n)^{1/8}) + 3\ln n + 2(n-1)\ln\ln n + n\Big) = \frac{o(e^{-\kappa n})}{n!},
\end{align}
where in step \eqref{Rel_proof_lemma_u1v1_1} we upper bound the integral by the product of the Lebesgue measure of its domain (bounded by $n^2$) and the supremum of its integrand, and in step \eqref{Rel_proof_lemma_u1v1_2} we invoke Stirling's approximation.
\end{proof}
\begin{corollary}\label{Cor_V1_le_n_over_ln}
For any constant $c\in(0,1/2)$,
there exists a constant $\theta_4> 0$ such that,
with probability $1-\exp(-n^c)$,
there exist no stable matchings $\mu$ with $\|\mathbf{U}(\mu)\|_1 \ge \frac{\theta_4 n}{(\ln n)^{7/8}}$.
\end{corollary}
\begin{proof}
This follows immediately from Lemma~\ref{Lemma_U1_ge_ln_n} and \ref{Lemma_U1V1_le_O_n_ln}.
\end{proof}
\begin{proposition}\label{Prop_characterize_low_prob_events}
Let $\Omega\subset I^n$ be a (sequence of) regions in the $n$-dimensional hypercube. For $k\in\ensuremath{\mathbb{Z}}_+$, define interval $I_k = (2^{-k}n, 2^{-k+1}n]$. If
\begin{equation}\label{Eqn_prop_characterize_low_prob_events_main_assumpt}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in \Omega, \|\mathbf{W}\|_1\in I_k) \le \frac{g(n)}{e^{6n}C^{8n}}
\end{equation}
for some function $g(n)$ and uniformly for all $k\in\ensuremath{\mathbb{Z}}_+$, we can guarantee that the expected number of stable matchings $\mu$ with value vector $\mathbf{X}(\mu)\in\ensuremath{\mathbb{R}}^n$ satisfying $\mathbf{U}(\mu) \in \Omega$ is upper bounded by $g(n)+e^{-\Theta(n^2)}$; %
in particular, with high probability, no such stable matchings exist if $g(n) = o(1)$.
\end{proposition}
\end{proposition}
\begin{proof}
We focus on a fixed matching $\mu$; by a union bound, it suffices to show that
\begin{equation}
\ensuremath{\mathbb{P}}(\mu\in\mathcal{S}, \mathbf{U}(\mu)\in\Omega) \le \frac{g(n)+ e^{-\Theta(n^2)}}{n!}
\end{equation}
under the condition of \eqref{Eqn_prop_characterize_low_prob_events_main_assumpt}.
The same chain of reasoning as in \eqref{Eqn_Lemma_weak_reg_U_apply_naive_bound} gives
\begin{align}
P := \ensuremath{\mathbb{P}}(\mu\in\mathcal{S}, \mathbf{U}(\mu) \in \Omega) &\le \int_{I^n\times I^n} \exp\Big(\frac{1}{C^4}(n - \|\mathbf{u}\|_1\|\mathbf{v}\|_1)\Big) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&\le e^n \int_{I^n\times I^n} \exp( - \|\mathbf{u}\|_1\|\mathbf{v}\|_1/C^4) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v}
\end{align}
since $C\ge 1$.
Observing that $\|\mathbf{U}(\mu)\|_1=0$ and $\|\mathbf{V}(\mu)\|_1=0$ are both probability zero events, we may split the domain of the integral above into sub-regions according to which intervals $\|\mathbf{U}(\mu)\|_1$ and $\|\mathbf{V}(\mu)\|_1$ fall into, and then bound the value of the integral within each sub-region. That is, with the help of the monotone convergence theorem to interchange summation with integral,
\begin{align}
P &\le e^n \sum_{k,\ell=1}^\infty \int_{I^n\times I^n}\exp( - \|\mathbf{u}\|_1\|\mathbf{v}\|_1/C^4) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}) \mathbbm{1}_{I_k}(\|\mathbf{u}\|_1) \mathbbm{1}_{I_\ell}(\|\mathbf{v}\|_1) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&\le e^n \sum_{k,\ell=1}^\infty \int_{I^n\times I^n}\exp( - 2^{-k-\ell}n^2/C^4) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}) \mathbbm{1}_{I_k}(\|\mathbf{u}\|_1) \mathbbm{1}_{I_\ell}(\|\mathbf{v}\|_1) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v}.
\end{align}
For all $k\in\ensuremath{\mathbb{Z}}_+$, we have $e^{2n}\exp(-2^k\|\mathbf{u}\|_1)\ge 1$ whenever $\mathbf{u}\in I^n$ with $\|\mathbf{u}\|_1\in I_k$. Thus,
\begin{align}
P &\le e^{5n} \sum_{k,\ell=1}^\infty \int_{I^n\times I^n} \exp( - 2^{-k-\ell}n^2/C^4) \cdot \mathbbm{1}_{\Omega}(\mathbf{u}) \mathbbm{1}_{I_k}(\|\mathbf{u}\|_1) \mathbbm{1}_{I_\ell}(\|\mathbf{v}\|_1) \cdot \exp(-2^k\|\mathbf{u}\|_1 - 2^\ell\|\mathbf{v}\|_1) \ensuremath{\,d}\mathbf{u} \ensuremath{\,d}\mathbf{v} \nonumber\\
&= e^{5n} \sum_{k,\ell=1}^\infty 2^{-(k+\ell)n} \exp( - 2^{-k-\ell}n^2/C^4) \ensuremath{\mathbb{E}}_{\mathbf{U}\sim\Exp(2^k)^{\otimes n},\mathbf{V}\sim\Exp(2^\ell)^{\otimes n}}\left[ \mathbbm{1}_{\Omega}(\mathbf{U})\mathbbm{1}_{I^n}(\mathbf{V})\mathbbm{1}_{I_k}(\|\mathbf{U}\|_1) \mathbbm{1}_{I_\ell}(\|\mathbf{V}\|_1)\right].
\end{align}
Observe that all the terms with $k+\ell \ge n$ combined contribute at most
\begin{multline}
e^{5n}\sum_{k+\ell \ge n} 2^{-(k+\ell)n} = e^{5n} \left( \sum_{k=1}^{n-1} 2^{-kn} \sum_{\ell=n-k}^\infty 2^{-\ell n} + \sum_{k=n}^\infty 2^{-kn} \sum_{\ell=1}^\infty 2^{-\ell n} \right) \\
= e^{5n}\cdot 2^{-n^2-n} \cdot\frac{n(1-2^{-n})+1}{(1-2^{-n})^2} = \frac{e^{-\Theta(n^2)}}{n!},
\end{multline}
which is negligible. Therefore, we only need to consider $\ensuremath{O}(n^2)$ terms and
\begin{align}
P
&\le \frac{e^{-\Theta(n^2)}}{n!} + n^2 e^{5n} \max_{k,\ell\in\ensuremath{\mathbb{Z}}_+} \frac{\ensuremath{\mathbb{E}}_{\mathbf{U}\sim\Exp(2^k)^{\otimes n},\mathbf{V}\sim\Exp(2^\ell)^{\otimes n}}\left[ \mathbbm{1}_{\Omega}(\mathbf{U}) \mathbbm{1}_{I^n}(\mathbf{V}) \mathbbm{1}_{I_k}(\|\mathbf{U}\|_1) \mathbbm{1}_{I_\ell}(\|\mathbf{V}\|_1)\right]}{2^{(k+\ell)n} \exp( 2^{-k-\ell}n^2/C^4 )} \nonumber\\
&\le \frac{e^{-\Theta(n^2)}}{n!} + e^{6n} \max_{k,\ell\in\ensuremath{\mathbb{Z}}_+} \frac{\ensuremath{\mathbb{P}}_{\mathbf{U}\sim\Exp(2^k)^{\otimes n}}(\mathbf{U}\in \Omega, \|\mathbf{U}\|_1\in I_k)}{2^{(k+\ell)n} \exp( 2^{-k-\ell}n^2/C^4 )}.
\end{align}
Let $\alpha = 2^{-(k+\ell)}$. When $\alpha\le C^8/n$, the denominator of the second term can be bounded as $\alpha^{-n}e^{\alpha n^2/C^4} \ge n^n/C^{8n} \ge n!/C^{8n}$; when $\alpha > C^8/n$, let $\tau = n\alpha > C^8$ and we have $\alpha^{-n}e^{\alpha n^2/C^4} = \frac{n^n}{\tau^n} e^{n\tau/C^4} = n^n \exp\big(n(\tau/C^4 - \ln \tau)\big) \ge n^n \exp\big(n(C^4 - 8\ln C)\big) \ge n!$. Thus, the denominator is bounded below by $n!/C^{8n}$ and
\begin{equation}\label{Eqn_proof_X1_O_V1_last_reusable_bound}
P \le \frac{e^{-\Theta(n^2)}}{n!} + \frac{e^{6n} C^{8n}}{n!} \max_{k\in\ensuremath{\mathbb{Z}}_+} \ensuremath{\mathbb{P}}_{\mathbf{U}\sim\Exp(2^k)^{\otimes n}}(\mathbf{U}\in \Omega, \|\mathbf{U}\|_1\in I_k) \le \frac{1}{n!}(e^{-\Theta(n^2)} + g(n)).
\end{equation}
The claim follows immediately.
\end{proof}
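In the applications of Proposition~\ref{Prop_characterize_low_prob_events} below we take $g(n)=e^{-\kappa n}$, so that it suffices to verify, uniformly over $k\in\ensuremath{\mathbb{Z}}_+$, a bound of the form
\begin{equation*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}\big(\mathbf{W}\in \Omega,\ \|\mathbf{W}\|_1\in I_k\big) \le e^{-(\kappa+6)n}C^{-8n};
\end{equation*}
this is exactly the estimate established in the proofs of Lemmas~\ref{Lemma_bdd_x_delta_infty_norm}, \ref{Lemma_Xdelta2sq_le_O_V1sq}, and \ref{Lemma_Xdelta1_ge_U1}.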
\begin{lemma}\label{Lemma_bdd_x_delta_infty_norm}
For any fixed $\delta > 0$ and $\kappa > 0$, there exists a constant $\theta_5 > 0$ (depending on $\delta,\kappa$, and $C$) such that, with probability $1-\exp(-\kappa n)$,
there exist no stable matchings $\mu$ with $\|\mathbf{X}_\delta(\mu)\|_\infty > \theta_5$. (Recall that $\mathbf{X}_\delta(\mu)$ is the value vector of the $(1-\delta)$-partial matching obtained from $\mu$ that excludes the least happy $\delta/2$ fraction of men and women.)
\end{lemma}
\begin{proof}
Since $\hat{\mathbf{X}}$ and $\mathbf{X}$ differ by at most a factor of $C$ component-wise, we have
\begin{equation}\label{Eqn_proof_Lemma_bdd_x_delta_infty_norm_translate_to_U_quantile}
\|\mathbf{X}_\delta(\mu)\|_\infty \le X_{(n-\floor{\delta n/2})}(\mu) \le C \hat{X}_{(n-\floor{\delta n/2})}(\mu) = -C\log\big(1-U_{(n-\floor{\delta n/2})}(\mu)\big).
\end{equation}
Thus, it suffices to bound the upper $\delta/2$ quantile $U_{(n-\floor{\delta n/2})}(\mu)$ away from $1$.
Let $\Omega = \{\mathbf{u}\in I^n:\mathbf{u}_{(n-\floor{\delta n/2})} > 1-e^{-s}\}$ for some $s\ge 1$ that we will specify later. Then $\mathbf{W}\in\Omega$ implies that $\sum_{i=1}^n \mathbbm{1}_{(1-e^{-s},1]}(W_i) \ge \delta n/2$. For $W_i\sim\Exp(2^k)$, we have
\begin{equation*}
\ensuremath{\mathbb{P}}(W_i\in (1-e^{-s},1]) = \int_{1-e^{-s}}^1 2^k e^{-2^k t} dt \le e^{-s}\cdot 2^ke^{-2^{k-1}} \le e^{-s}.
\end{equation*}
Thus, $\sum_{i=1}^n \mathbbm{1}_{(1-e^{-s},1]}(W_i)$ is stochastically dominated by a $\Binom(n, e^{-s})$ random variable, and as a result
\begin{multline*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega, \|\mathbf{W}\|_1\in I_k) \le \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega) \\
\le \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}\left(\sum_{i=1}^n \mathbbm{1}_{(1-e^{-s},1]}(W_i) \ge \frac{\delta n}{2}\right)
\le \ensuremath{\mathbb{P}}_{Z\sim\Binom(n, e^{-s})}\bigg(Z\ge \frac{\delta n}{2}\bigg) \le \exp(-n D(\delta/2 \| e^{-s})).
\end{multline*}
Since $D(\delta/2 \| z)\to\infty$ as $z\to 0$, it suffices to take $s$ sufficiently large to guarantee $D(\delta/2\|e^{-s}) > \kappa + 6 + 8 \ln C$. Proposition~\ref{Prop_characterize_low_prob_events} then guarantees with probability $1-\exp(-\kappa n)$ that no stable matchings $\mu$ have $\mathbf{U}(\mu) \in \Omega$, and as a result of \eqref{Eqn_proof_Lemma_bdd_x_delta_infty_norm_translate_to_U_quantile}, with at least the desired probability, no stable matchings $\mu$ should have $\|\mathbf{X}_\delta(\mu)\|_\infty > \theta_5 := C s$.
\end{proof}
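One admissible choice of $s$ can be made explicit. Using $D(q\|p) = q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p} \ge q\ln\frac{q}{p} + (1-q)\ln(1-q) \ge q\ln\frac{q}{p} - \frac{1}{e}$, the choice
\begin{equation*}
s = \frac{2}{\delta}\big(\kappa + 7 + 8\ln C\big) + \ln\frac{2}{\delta}
\qquad\text{gives}\qquad
D\big(\delta/2\,\big\|\,e^{-s}\big) \ge \frac{\delta}{2}\Big(s - \ln\frac{2}{\delta}\Big) - \frac{1}{e} > \kappa + 6 + 8\ln C.
\end{equation*}
The same computation, with $s$ replaced by $\sqrt{\gamma}$, applies to the analogous condition in the proof of Lemma~\ref{Lemma_Xdelta2sq_le_O_V1sq} below.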
\begin{lemma}\label{Lemma_X1_le_O_U1}
For any $\delta > 0$ and $\kappa > 0$, there exists an absolute constant $\theta_6$ (again depending on $\delta,\kappa$, and $C$) such that, with probability $1-\exp(-\kappa n)$, there exist no stable matchings $\mu$ with $\|\mathbf{X}_\delta(\mu)\|_1 \ge \theta_6 \|\mathbf{U}(\mu)\|_1$.
\end{lemma}
\begin{proof}
Take $\theta_6 = C \sup_{0<x\le C\theta_5} \frac{x}{F(x)} = \frac{C^2\theta_5}{1-e^{-C\theta_5}}$, where $\theta_5$ is the constant in Lemma~\ref{Lemma_bdd_x_delta_infty_norm}. Assume that $\|\mathbf{X}_\delta(\mu)\|_\infty \le \theta_5$ for all stable matchings $\mu$ since the probability otherwise is at most $\exp(-\kappa n)$ as desired. Note that for each $i$ in the support of $\mathbf{X}_\delta(\mu)$ (i.e., $(X_\delta)_i(\mu) > 0$), we have
\begin{equation}
\hat{X}_i(\mu) \le C X_i(\mu) = C (X_\delta)_i(\mu) \le C\theta_5,
\end{equation}
and subsequently
\begin{equation}
U_i(\mu) = F(\hat{X}_i(\mu)) \ge \frac{C\hat{X}_i(\mu)}{\theta_6} \ge \frac{X_i(\mu)}{\theta_6} = \frac{(X_\delta)_i(\mu)}{\theta_6},
\end{equation}
and this final inequality is trivial for any $i$ not in the support of $\mathbf{X}_\delta(\mu)$. The claim then follows immediately.
\end{proof}
\begin{lemma}\label{Lemma_Xdelta2sq_le_O_V1sq}
For any fixed $\delta > 0$ and $\kappa > 0$, there exists an absolute constant $\theta_7$ (depending on $\delta,\kappa$, and $C$) such that, with probability $1-\exp(-\kappa n)$,
there exist no stable matchings $\mu$ with $\|\mathbf{X}_\delta(\mu)\|_2^2 \ge \theta_7 \|\mathbf{U}(\mu)\|_1^2 /n$.
\end{lemma}
\begin{proof}
Taking advantage of Lemma~\ref{Lemma_bdd_x_delta_infty_norm}, let us assume that $X_{(n-\floor{\delta n/2})}(\mu) \le \theta_5$ is satisfied simultaneously by all stable matchings $\mu$ (see the proof, in particular \eqref{Eqn_proof_Lemma_bdd_x_delta_infty_norm_translate_to_U_quantile}, for more details); the event otherwise has probability bounded by $\exp(-\kappa n)$.
Notice that
\begin{equation*}
\|\mathbf{X}_\delta(\mu)\|_2^2 \le \sum_{i=1}^{n-\floor{\delta n/2}} X_{(i)}(\mu)^2 \le C^2 \sum_{i=1}^{n-\floor{\delta n/2}} \hat{X}_{(i)}(\mu)^2 \le \theta_6' \sum_{i=1}^{n-\floor{\delta n/2}} U_{(i)}(\mu)^2,
\end{equation*}
where the $(i)$ subscript denotes the $i$-th (lower) order statistic (and in particular, $\hat{X}_{(i)}(\mu)$ is the $i$-th smallest entry of $\hat{\mathbf{X}}(\mu)$) with $\theta_6' = C^2 \left(\frac{\theta_5}{F(\theta_5)}\right)^2$. Now it suffices to compare $\sum_{i=1}^{n-\floor{\delta n/2}} U_{(i)}(\mu)^2$ with $\|\mathbf{U}(\mu)\|_1^2 /n$.
Consider $\Omega := \{\mathbf{w} \in I^n : \sum_{i=1}^{n-\floor{\delta n/2}} w_{(i)}^2 \ge \gamma \|\mathbf{w}\|_1^2/n\}$ for some $\gamma\in\ensuremath{\mathbb{R}}_+$ to be specified. By Proposition~\ref{Prop_characterize_low_prob_events}, it suffices to show that for some appropriate value of $\gamma$ we have
\begin{equation*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega, \|\mathbf{W}\|_1\in I_k) \le e^{-(\kappa+6)n}C^{-8n}
\end{equation*}
for all $k\in\ensuremath{\mathbb{Z}}_+$. Observe that
\begin{align*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega, \|\mathbf{W}\|_1\in I_k) &= \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}\bigg(\sum_{i=1}^{n-\floor{\delta n/2}} W_{(i)}^2 \ge \frac{\gamma \|\mathbf{W}\|_1^2}{n}, 2^{-k}n<\|\mathbf{W}\|_1\le 2^{-k+1}n\bigg) \\
&\le \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}\bigg(\sum_{i=1}^{n-\floor{\delta n/2}} W_{(i)}^2 \ge \gamma 2^{-2k} n\bigg) \\
&\le \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(1)^{\otimes n}}\bigg(\sum_{i=1}^{n-\floor{\delta n/2}} W_{(i)}^2 \ge \gamma n\bigg) \\
&\le \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(1)^{\otimes n}}(W_{(n-\floor{\delta n/2})} \ge \sqrt{\gamma}) \\
&\le \ensuremath{\mathbb{P}}_{Z\sim\Binom(n, e^{-\sqrt{\gamma}})}\left(Z \ge \frac{\delta n}{2}\right).
\end{align*}
By the large deviation bound for the binomial distribution, choosing $\gamma$ sufficiently large such that $D(\delta/2 \| e^{-\sqrt{\gamma}}) > \kappa + 6 + 8\ln C$ ensures that this probability is $o(e^{-(\kappa+6)n}C^{-8n})$. This finishes the proof with the choice of $\theta_7 = \theta_6' \gamma$.
\end{proof}
\begin{remark}
This is the only part of our analysis that relies on the $\delta$-truncation of values. Without the truncation, $\frac{1}{n}\|\mathbf{W}\|_2^2$ would concentrate poorly -- in fact not even having a finite mean -- for $\mathbf{W}\sim\Exp(1)^{\otimes n}$.
\end{remark}
\begin{corollary}\label{Cor_X_delta_truncate_at_C_over_2_is_small}
For any constant $c \in (0, 1/2)$, there exists a constant $\theta_8> 0$ such that,
with probability $1-\exp(-n^c)$,
there exist no stable matchings $\mu$ with $\sum_{i=1}^n (X_\delta)_i \mathbbm{1}_{[2/C,\infty)}\big((X_\delta)_i\big) \ge \theta_8 \|\mathbf{U}\|_1/(\ln n)^{7/8}$.
\end{corollary}
\begin{proof}
Notice that
\begin{equation*}
\|\mathbf{X}_\delta\|_2^2 \ge \sum_{i=1}^n (X_\delta)_i^2 \mathbbm{1}_{[2/C,\infty)}\big((X_\delta)_i\big) \ge \frac{2}{C} \sum_{i=1}^n (X_\delta)_i \mathbbm{1}_{[2/C,\infty)}\big((X_\delta)_i\big).
\end{equation*}
The statement then follows from Lemma~\ref{Lemma_Xdelta2sq_le_O_V1sq} and Corollary~\ref{Cor_V1_le_n_over_ln}.
\end{proof}
\begin{lemma}\label{Lemma_Xdelta1_ge_U1}
For any fixed $\delta > 0$ and $\kappa > 0$, there exists an absolute constant $\theta_9$ (depending on $\delta,\kappa$, and $C$) such that, with probability $1-\exp(-\kappa n)$,
there exist no stable matchings $\mu$ with $\|\mathbf{X}_\delta(\mu)\|_1 \le \theta_9 \|\mathbf{U}(\mu)\|_1$.
\end{lemma}
\begin{proof}
Since $U_i(\mu) = F(a_{i,\mu(i)}X_i(\mu)) \le a_{i,\mu(i)}X_i(\mu) \le C X_i(\mu)$ for every $i$, we have $\|\mathbf{X}_\delta(\mu)\|_1 \ge \frac{1}{C}\|\mathbf{U}_\delta(\mu)\|_1$. Thus, it suffices to consider the condition $\|\mathbf{U}_\delta(\mu)\|_1 \le C\theta_9 \|\mathbf{U}(\mu)\|_1$.
Consider $\Omega := \{\mathbf{w} \in I^n : \exists S\subseteq[n], |S|=n-\floor{\delta n},\sum_{i\in S} w_i \le \alpha \|\mathbf{w}\|_1\}$ for some $\alpha\in\ensuremath{\mathbb{R}}_+$ to be specified. By union bound, for any $k\in\ensuremath{\mathbb{Z}}_+$,
\begin{align*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega, \|\mathbf{W}\|_1\in I_k) &\le \binom{n}{\floor{\delta n}} \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}\bigg(\sum_{i=1}^{n-\floor{\delta n}} W_i \le \alpha \|\mathbf{W}\|_1\le 2^{-k+1}\alpha n\bigg) \\
&\le \binom{n}{\floor{\delta n}} \ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(1)^{\otimes n}}\bigg(\sum_{i=1}^{n-\floor{\delta n}} W_i \le 2\alpha n\bigg) \\
&\le \exp(h(\delta) n + o(n)) \cdot \bigg(\frac{2\alpha e}{1-\delta}\bigg)^{n-\floor{\delta n}},
\end{align*}
where in the last step we use Stirling's approximation to bound the first factor and standard (lower) concentration of $\Exp(1)$ to bound the probability term (e.g., see Lemma~\ref{Lemma_weighted_exp_chernoff}).
For $\alpha$ sufficiently small, e.g., any fixed $\alpha < \frac{1-\delta}{2e}\exp\big(-\frac{\kappa+7+8\ln C+h(\delta)}{1-\delta}\big)$, we have, for $n$ sufficiently large,
\begin{equation*}
\ensuremath{\mathbb{P}}_{\mathbf{W}\sim\Exp(2^k)^{\otimes n}}(\mathbf{W}\in\Omega, \|\mathbf{W}\|_1\in I_k) \le e^{-(\kappa+6)n}C^{-8n}
\end{equation*}
for all $k\in\ensuremath{\mathbb{Z}}_+$. Invoking Proposition~\ref{Prop_characterize_low_prob_events} concludes the proof with $\theta_9 = \alpha/C$.
\end{proof}
\begin{corollary}\label{Cor_Xdelta1_ge_ln_n}
For any constant $c \in (0, 1/2)$, there exists a constant $\theta_{10}> 0$ such that,
with probability $1-\exp(-n^c)$,
there exist no stable matchings $\mu$ with $\|\mathbf{X}_\delta(\mu)\|_1 \le \theta_{10} \ln n$.
\end{corollary}
\begin{proof}
This follows from Lemmas~\ref{Lemma_U1_ge_ln_n} and \ref{Lemma_Xdelta1_ge_U1}, with $\theta_{10} = \theta_2\theta_9$.
\end{proof}
The following corollary combines the preceding results into a description of the typical behavior of value vectors in stable matchings.
\begin{corollary}\label{Cor_Rstar_likely}
Define $\mathcal{R}^\star(\mu)\subseteq \ensuremath{\mathbb{R}}_+^n \times \ensuremath{\mathbb{R}}_+^n$, in the context of a matching $\mu$, to be the set of all pairs of vectors $(\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}_+^n \times \ensuremath{\mathbb{R}}_+^n$ that satisfy all of the following conditions:
\begin{align}
\theta_2 \ln n \le \|\mathbf{u}\|_1&,\|\mathbf{v}\|_1 \le \frac{\theta_4 n}{(\ln n)^{7/8}}, \label{Eqn_def_Rstar_1} \\
\|\mathbf{u}\|_1\|\mathbf{v}\|_1 &\le \theta_3 n(\ln n)^{1/8}, \label{Eqn_def_Rstar_2} \\
\|\mathbf{x}_\delta\|_1 \le \theta_6 \|\mathbf{u}\|_1
&\text{ and } \|\mathbf{y}_\delta\|_1 \le \theta_6 \|\mathbf{v}\|_1, \label{Eqn_def_Rstar_3} \\
\|\mathbf{x}_\delta\|_2^2 \le \frac{\theta_7 \|\mathbf{u}\|_1^2}{n}
&\text{ and } \|\mathbf{y}_\delta\|_2^2 \le \frac{\theta_7 \|\mathbf{v}\|_1^2}{n}, \label{Eqn_def_Rstar_4} \\
\sum_{i=1}^n (x_\delta)_i \mathbbm{1}_{[2/C,\infty)}\big((x_\delta)_i\big) \le \frac{\theta_8 \|\mathbf{u}\|_1}{(\ln n)^{7/8}}
&\text{ and } \sum_{i=1}^n (y_\delta)_i \mathbbm{1}_{[2/C,\infty)}\big((y_\delta)_i\big) \le \frac{\theta_8 \|\mathbf{v}\|_1}{(\ln n)^{7/8}}, \label{Eqn_def_Rstar_5} \\
\|\mathbf{x}_\delta\|_1,\|\mathbf{y}_\delta\|_1 &\ge \theta_{10} \ln n, \label{Eqn_def_Rstar_6}
\end{align}
where $u_i = F(a_{i,\mu(i)}x_i)$ and $v_j = F(b_{j,\mu^{-1}(j)}y_j)$ for $i,j\in[n]$; $\mathbf{x}_\delta$ and $\mathbf{y}_\delta$ denote the $\delta$-truncated versions of $\mathbf{x}$ and $\mathbf{y}$;
$\theta_2,\theta_3, \theta_6, \theta_7, \theta_4, \theta_8,\theta_{10}\in\ensuremath{\mathbb{R}}_+$ are absolute constants (independent of $\mu$) chosen appropriately as in Lemmas~\ref{Lemma_U1_ge_ln_n}, \ref{Lemma_U1V1_le_O_n_ln},
\ref{Lemma_X1_le_O_U1}, \ref{Lemma_Xdelta2sq_le_O_V1sq}, and Corollaries \ref{Cor_V1_le_n_over_ln}, \ref{Cor_X_delta_truncate_at_C_over_2_is_small}, and \ref{Cor_Xdelta1_ge_ln_n}. Then, for any $c\in(0,1/2)$, with probability $1-\exp(-n^c)$, $(\mathbf{X}(\mu),\mathbf{Y}(\mu)) \in \mathcal{R}^\star(\mu)$ for all stable matchings $\mu$.
\end{corollary}
The proof follows by combining the aforementioned lemmas and corollaries and is therefore omitted.
\propRatioPQHighProbeon*
\begin{proof}
Note that $1-e^{-tx} \ge tx - \frac{t^2 x^2}{2}$ for all $x, t \ge 0$. In particular, $1-e^{-tx} \ge \left(tx - \frac{t^2 x^2}{2}\right) \mathbbm{1}_{[0, 2/t]}(x) \ge 0$. Using this to bound $p(\mathbf{x},\mathbf{y})$ gives
\begin{align}
p(\mathbf{x},\mathbf{y}) &= \prod_{\substack{i\ne j}} \left(1 - \big(1-e^{-a_{ij}x_i}\big)\big(1-e^{-b_{ji}y_j}\big) \right) \nonumber\\
&\le \prod_{\substack{i\ne j}} \left(1 - \mathbbm{1}_{[0, 2/a_{ij}]}(x_i) \mathbbm{1}_{[0, 2/b_{ji}]}(y_j) \bigg(a_{ij}x_i - \frac{a_{ij}^2}{2}x_i^2\bigg)\bigg(b_{ji}y_j-\frac{b_{ji}^2}{2}y_j^2\bigg) \right) \nonumber\\
&\le \exp\left( -\sum_{i\ne j} \mathbbm{1}_{[0, 2/C]}(x_i) \mathbbm{1}_{[0, 2/C]}(y_j) \bigg(a_{ij}x_i - \frac{a_{ij}^2}{2}x_i^2\bigg)\bigg(b_{ji}y_j-\frac{b_{ji}^2}{2}y_j^2\bigg) \right).
\end{align}
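In the expansion below we use the elementary bound, obtained by discarding the nonnegative quartic term and the harmless factor $\tfrac{1}{2}$,
\begin{equation*}
\bigg(a_{ij}x_i - \frac{a_{ij}^2}{2}x_i^2\bigg)\bigg(b_{ji}y_j-\frac{b_{ji}^2}{2}y_j^2\bigg)
\ge a_{ij}b_{ji} x_i y_j - \big(a_{ij}^2 b_{ji} x_i^2 y_j + a_{ij} b_{ji}^2 x_i y_j^2\big),
\end{equation*}
together with $1-\mathbbm{1}_{[0, 2/C]}(x_i)\mathbbm{1}_{[0, 2/C]}(y_j) \le \mathbbm{1}_{(2/C,\infty)}(x_i)+\mathbbm{1}_{(2/C,\infty)}(y_j)$.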
Taking logarithms and expanding the product accordingly gives
\begin{align}
\ln p(\mathbf{x},\mathbf{y}) & \le - \sum_{i\ne j} \Bigg( a_{ij}b_{ji} x_i y_j - \big(\mathbbm{1}_{(2/C,\infty)}(x_i) + \mathbbm{1}_{(2/C,\infty)}(y_j)\big) a_{ij} b_{ji} x_i y_j \nonumber \\
&\qquad\qquad - \mathbbm{1}_{[0, 2/C]}(x_i) \mathbbm{1}_{[0, 2/C]}(y_j) \bigg( a_{ij}^2 b_{ji} x_i^2 y_j + a_{ij} b_{ji}^2 x_i y_j^2
\bigg) \Bigg) \nonumber \\
&\le
-\sum_{i, j=1}^n a_{ij}b_{ji} x_i y_j
+ \sum_{i=1}^n C^2 x_i y_i \nonumber \\
&\qquad\qquad + \sum_{i, j=1}^n \Bigg( C^2 \big(\mathbbm{1}_{(2/C,\infty)}(x_i) + \mathbbm{1}_{(2/C,\infty)}(y_j)\big) x_i y_j + C^3 \bigg( x_i^2 y_j + x_i y_j^2 \bigg) \Bigg). %
\end{align}
Notice that $-\ln q(\mathbf{x},\mathbf{y}) = \sum_{i, j=1}^n a_{ij}b_{ji}\frac{x_i}{a_{ii}}\frac{y_j}{b_{jj}}$. Thus,
\begin{multline}\label{Eqn_proof_ratio_p_q_high_prob_diff_pq}
\ln \frac{p(\mathbf{x},\mathbf{y})}{q(\mathbf{x},\mathbf{y})} \le C^2 \mathbf{x}^\top \mathbf{y} + C^2\left(\|\mathbf{x}\|_1 \sum_{j=1}^n\mathbbm{1}_{(2/C,\infty)}(y_j) y_j + \|\mathbf{y}\|_1 \sum_{i=1}^n\mathbbm{1}_{(2/C,\infty)}(x_i) x_i\right) \\
+ C^3 \left(\|\mathbf{x}\|_2^2 \|\mathbf{y}\|_1 + \|\mathbf{x}\|_1 \|\mathbf{y}\|_2^2\right).
\end{multline}
In light of Corollary~\ref{Cor_Rstar_likely}, it suffices to upper bound $\ln\frac{p(\mathbf{x}_\delta,\mathbf{y}_\delta)}{q(\mathbf{x}_\delta,\mathbf{y}_\delta)}$ by $cn/(\ln n)^{1/2}$ for all $(\mathbf{x},\mathbf{y})\in\mathcal{R}^\star(\mu)$ and for all $\mu$. To simplify notation, we will make the dependency on $\mu$ implicit in the rest of the proof.
By Cauchy-Schwarz inequality, the first term in \eqref{Eqn_proof_ratio_p_q_high_prob_diff_pq}, up to a factor of $C^2$, is at most
\begin{equation*}
\|\mathbf{x}_\delta\|_2 \|\mathbf{y}_\delta\|_2 \le \frac{\theta_7\|\mathbf{u}\|_1 \|\mathbf{v}\|_1}{n} \le \theta_3\theta_7(\ln n)^{1/8} = o\left(\frac{n}{(\ln n)^{1/2}}\right)
\end{equation*}
by \eqref{Eqn_def_Rstar_4} and \eqref{Eqn_def_Rstar_2}.%
The middle term in \eqref{Eqn_proof_ratio_p_q_high_prob_diff_pq}, up to a factor of $2C^2$, is at most%
\begin{equation*}
\|\mathbf{x}_\delta\|_1 \sum_{j=1}^n\mathbbm{1}_{(2/C,\infty)}((y_\delta)_j) (y_\delta)_j \le \theta_6\|\mathbf{u}\|_1 \cdot \frac{\theta_8\|\mathbf{v}\|_1}{(\ln n)^{7/8}} \le \theta_3\theta_6\theta_8\frac{n}{(\ln n)^{3/4}}
\end{equation*}
by \eqref{Eqn_def_Rstar_3}, \eqref{Eqn_def_Rstar_5}, and \eqref{Eqn_def_Rstar_2}.
Finally, the last term, up to a factor of $2C^3$, is upper bounded by
\begin{multline*}
\|\mathbf{x}_\delta\|_2^2 \|\mathbf{y}_\delta\|_1 = \frac{\|\mathbf{x}_\delta\|_2^2}{\|\mathbf{u}\|_1^2} \frac{\|\mathbf{y}_\delta\|_1}{\|\mathbf{v}\|_1} \frac{1}{\|\mathbf{v}\|_1} (\|\mathbf{u}\|_1\|\mathbf{v}\|_1)^2 \\
\le \frac{\theta_7}{n} \cdot \theta_6 \cdot \frac{1}{\theta_2\ln n} \cdot \theta_3^2 n^2(\ln n)^{1/4} = \frac{\theta_7\theta_6\theta_3^2}{\theta_2}\frac{n}{(\ln n)^{3/4}}
\end{multline*}
due to \eqref{Eqn_def_Rstar_4}, \eqref{Eqn_def_Rstar_3}, \eqref{Eqn_def_Rstar_1}, and \eqref{Eqn_def_Rstar_2}. Putting these together gives the proposition.
\end{proof}
\section{Generalization to approximately stable matchings}\label{Sec_approx_stable}
Exact stability is arguably an overly stringent requirement. Our
analysis can be further extended to characterize matchings that are approximately stable in the following sense.
\begin{definition}
We say a matching $\mu$ between $\mathcal{M}$ and $\mathcal{W}$ is $\alpha$-stable for some $0 < \alpha < 1$ if there exists a sub-market of size at least $(1-\alpha) n$ on which $\mu$ is stable; that is, there exist subsets $\mathcal{M}'\subseteq \mathcal{M}$ and $\mathcal{W}' \subseteq \mathcal{W}$ both with cardinality $|\mathcal{M}'|=|\mathcal{W}'| \ge (1-\alpha) n$ such that $\mu(\mathcal{M}') = \mu(\mathcal{W}')$ and the partial matching induced by $\mu$ between $\mathcal{M}'$ and $\mathcal{W}'$ is stable (within this sub-market). We refer to the stable sub-matching between $\mathcal{M}'$ and $\mathcal{W}'$ as the \emph{stable part} of $\mu$.
Denote the set of $\alpha$-stable matchings by $\mathcal{S}_\alpha$.
\end{definition}
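A simple sufficient check for $\alpha$-stability, useful for building intuition (it plays no role in the proofs), repeatedly finds a blocking pair among the surviving agents and deletes the two matched couples it touches; if at most $\alpha n$ couples are deleted, the surviving sub-market witnesses the definition above. The following minimal Python sketch implements this greedy certificate under the latent-value representation of the model (the interface and names are ours):
\begin{verbatim}
import numpy as np

def alpha_stable_certificate(X, Y, mu, alpha):
    """Greedy *sufficient* check that the matching mu is alpha-stable.

    X[i, j]: latent value of man i for woman j (smaller is better).
    Y[j, i]: latent value of woman j for man i.
    mu:      integer array with mu[i] = index of m_i's partner.

    A pair (i, j) with j != mu[i] blocks iff X[i, j] < X[i, mu[i]] and
    Y[j, i] < Y[j, mu_inv[j]].  Each time a blocking pair is found, the two
    matched couples it touches are removed.  If at most alpha*n couples are
    removed before no blocking pair remains, the survivors certify
    alpha-stability; otherwise the check is inconclusive."""
    n = len(mu)
    mu_inv = np.empty(n, dtype=int)
    mu_inv[mu] = np.arange(n)
    alive = np.ones(n, dtype=bool)      # couples indexed by the man's index
    removed, changed = 0, True
    while changed and removed <= alpha * n:
        changed = False
        for i in np.flatnonzero(alive):
            for j in mu[alive]:
                if j == mu[i]:
                    continue
                if X[i, j] < X[i, mu[i]] and Y[j, i] < Y[j, mu_inv[j]]:
                    alive[i] = False            # drop couple (m_i, w_{mu(i)})
                    alive[mu_inv[j]] = False    # drop couple (m_{mu^-1(j)}, w_j)
                    removed += 2
                    changed = True
                    break
            if changed:
                break
    return (not changed) and removed <= alpha * n
\end{verbatim}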
A simple adaptation of our previous results and proofs for fully stable matchings gives the following theorem, analogous to Theorems~\ref{Thm_main_happiness_dist_body} and \ref{Thm_main_rank_dist_body}.
\begin{theorem}\label{Thm_dist_body_approx_stable}
Assume $\alpha = \alpha(n)$ is such that $h(\alpha) > n^{-\eta}$ for some constant $\eta > 0$. Then, for any fixed $\ensuremath{\epsilon} > 0$,
\begin{equation}\label{Eqn_happiness_dist_approx_stable}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}_\alpha} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty,
\end{equation}
and
\begin{equation}\label{Eqn_rank_dist_approx_stable}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}_\alpha} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty,
\end{equation}
where $\phi_i = \sum_{j=1}^n a_{ij}$.
\end{theorem}
The proof is straightforward and is deferred to Appendix~\ref{appendix_extra_proofs}. The key observation is that, if the entire market satisfies the contiguity assumption, then each of its sub-markets is also contiguous, and we can apply a union bound over all sub-markets of size $(1-\alpha)n$. This argument goes through even when the market is imbalanced.
As an immediate corollary, we have the following result for slightly imbalanced markets.
\begin{corollary}\label{Cor_body_imbalance}
Consider a market consisting of $n-k$ men and $n$ women, where $h_b(k/n) > n^{-\eta}$ for some constant $\eta > 0$. Assume that the contiguity condition holds as in Assumption~\ref{Assumption_C_bounded}, i.e., the pairwise scores are bounded in $[1/C, C]$. Then, for any fixed $\ensuremath{\epsilon} > 0$,
\begin{equation}\label{Eqn_happiness_dist_imbalance}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty,
\end{equation}
and
\begin{equation}\label{Eqn_rank_dist_imbalance}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty,
\end{equation}
where $\phi_i = \sum_{j=1}^n a_{ij}$.
\end{corollary}
This follows directly from Theorem~\ref{Thm_dist_body_approx_stable}: we can backfill the market with $k$ additional men whose scores with all women are equal to one, so that any stable matching in the original imbalanced market extends to a $\frac{k}{n}$-stable matching in the extended market.
\begin{remark}
The results will not hold if there is a linear imbalance in the market, i.e., $k \propto n$. This is because in such markets the men achieve a constant average rank \cite{ashlagi2017unbalanced}, and therefore the convergence to the exponential distribution is impossible.
\end{remark}
One should be able to further strengthen these results through a more careful analysis; we leave such refinements for future work. %
\section{Conclusion}
We studied the welfare structure of stable outcomes in large two-sided matching markets with logit-based preferences. Under a contiguity condition that prevents agents from disproportionately favoring or disfavoring other agents, we characterize the outcomes of stable and almost stable matchings in terms of the empirical distributions of latent values and ranks.
In particular, our results suggest that the welfare of an agent in a stable matching can be decomposed into three parts: a global parameter that determines the trade-off between the two sides, a personal intrinsic fitness computed from the systematic scores, and an exogenous factor behaving as a standard exponential random variable. In other words, given the market structure (i.e., the systematic scores), the average rank (or value) of the men (or women) is essentially a sufficient statistic for the outcome distribution.
\section{Empirical distribution of values and ranks}\label{sec_distribution}
\subsection{Empirical distribution of values}
Knowing the eigenspace property of the value vectors allows us to characterize the empirical distribution of values.
\begin{restatable}{lemma}{propOempeLikelyForHappinessEmpDist}\label{Prop_Oempe_likely_for_happiness_emp_distr}
Fix any $\delta,\zeta > 0$. Let $\mu'$ be a partial matching of size $n-\floor{\delta n}$ on $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$. For any $\ensuremath{\epsilon} > 0$, consider
\begin{equation}\label{Eqn_Prop_Oempe_likely_for_happiness_oempe_def}
\Omega_{\text{emp}}(\eps) := \left\{(\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \exists \lambda\in\ensuremath{\mathbb{R}}_+, \big\|\hat{\mathcal{F}}(\mathbf{x}) - F_\lambda\|_\infty \le \ensuremath{\epsilon} + \Theta(\delta + \sqrt{\zeta})\right\}.
\end{equation}
Then
\begin{equation}
\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \cap \Omega_{\text{eig}}(\zeta)\backslash\Omega_{\text{emp}}(\eps)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big] \le \exp(o_\delta(n)-\Theta(\ensuremath{\epsilon}^2 n)) \cdot \frac{(\delta n)!}{n!} \prod_{i\in\mathcal{M}'}a_{i,\mu'(i)}b_{\mu'(i),i},
\end{equation}
where again the implicit constants are uniform over all $\mathcal{M}',\mathcal{W}'$, and $\mu'$.
\end{restatable}
The proof formalizes the intuition that, conditional on stability of $\mu'$ and $\mathbf{Y}_{\mathcal{W}'}=\mathbf{y}$, the value $X_i$ for $i\in\mathcal{M}'$ should behave approximately as $\Exp(\lambda_i)$ for some $\lambda_i = (1+ \Theta(\delta+\sqrt{\zeta})) \|\mathbf{y}\|_1$.
The full proof is deferred to Appendix~\ref{Proof_prop_Oempe_likely}.
Hence, instead of looking for the optimal $\lambda$ that minimizes $\|\hat{\mathcal{F}}(\mathbf{X}_{\mathcal{M}'})-\mathcal{F}(\Exp(\lambda))\|_\infty$ in the definition \eqref{Eqn_Prop_Oempe_likely_for_happiness_oempe_def} of $\Omega_{\text{emp}}(\eps)$, we may simply choose $\lambda = \|\mathbf{y}\|_1$, which only differs from the right choice by at most a tolerable $\Theta(\sqrt{\zeta}+\delta)$ factor. In other words, if we define
\begin{equation*}
\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon}) := \left\{(\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \big\|\hat{\mathcal{F}}(\mathbf{x}) - \mathcal{F}(\Exp(\|\mathbf{y}\|_1))\|_\infty \le \ensuremath{\epsilon} + \Theta(\delta + \sqrt{\zeta})\right\},
\end{equation*}
albeit with a worse implicit constant in $\Theta(\delta+\sqrt{\zeta})$,
the same conclusion holds as in Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr} with $\Omega_{\text{emp}}(\eps)$ replaced by $\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon})$; that is,
\begin{equation}\label{Eqn_tilde_Oempe_likely_for_happi_emp_distr}
\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R} \cap \Omega_{\text{eig}}(\zeta)\backslash\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon})}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big] \le \exp(o_\delta(n)-\Theta(\ensuremath{\epsilon}^2 n)) \cdot \frac{(\delta n)!}{n!} \prod_{i\in\mathcal{M}'}a_{i,\mu'(i)}b_{\mu'(i),i}.
\end{equation}
Using Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr}, we now prove our first main theorem about the uniform limit of the empirical distribution of the men's (or women's) values in stable matchings.
\begin{theorem}[Empirical distribution of value]\label{Thm_main_happiness_dist_body}
Fix any $\ensuremath{\epsilon}>0$ and $c\in(0,1/2)$. Then
\begin{equation}\label{Eqn_happiness_dist_main_body_thm_whp}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty > \ensuremath{\epsilon}\bigg) \lesssim e^{-n^c}
\end{equation}
asymptotically as $n\to\infty$.
In particular, the infimum over $\lambda$ in \eqref{Eqn_happiness_dist_main_body_thm_whp} can be replaced with the choice of $\lambda=\|\mathbf{Y}_\delta(\mu)\|_1$ for $\delta$ sufficiently small.
\end{theorem}
\begin{proof}
Plugging \eqref{Eqn_tilde_Oempe_likely_for_happi_emp_distr} into Lemma~\ref{Lemma_reduction_to_q} and repeating the same arithmetic as in \eqref{Eqn_sum_expectation_Omega_zeta} and \eqref{Eqn_sum_expectation_Omega_zeta_summation_bound} immediately give
\begin{equation}
\ensuremath{\mathbb{P}}(\exists \mu\in\mathcal{S}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in \Omega_{\text{eig}}(\zeta)\backslash\tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon})) \le e^{-n^c} + \exp(o_\delta(n) - \Theta(\ensuremath{\epsilon}^2 n)) \lesssim e^{-n^c},
\end{equation}
granted that $\ensuremath{\epsilon} \ge \ensuremath{\epsilon}_0(\delta)$, where the function $\ensuremath{\epsilon}_0(\delta)\to 0$ as $\delta\to 0$.
Corollary~\ref{Cor_no_stable_outside_Oeigz} implies that with probability at least $1-\Theta(e^{-n^c})$ there exists no stable matching $\mu$ with $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin \Omega_{\text{eig}}(\zeta)$, and hence
\begin{equation}\label{Eqn_proof_Thm_main_happiness_dist_tOemp}
\ensuremath{\mathbb{P}}(\exists \mu\in\mathcal{S}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin \tilde{\Omega}_{\text{emp}}(\ensuremath{\epsilon}/3)) \lesssim e^{-n^c},
\end{equation}
granted that $\ensuremath{\epsilon}_0(\delta) < \ensuremath{\epsilon} / 3$.
By choosing $\delta$ (and hence also $\zeta=\zeta(\delta)$) sufficiently small so that the $\Theta(\delta+\sqrt{\zeta})$ term in the definition of $\tilde{\Omega}_{\text{emp}}$ is upper bounded by $\ensuremath{\epsilon}/3$, we ensure
\begin{equation}
\ensuremath{\mathbb{P}}(\exists \mu\in\mathcal{S}, \big\|\hat{\mathcal{F}}(\mathbf{X}_\delta(\mu)) - \mathcal{F}(\Exp(\|\mathbf{Y}_\delta(\mu)\|_1))\|_\infty \ge 2\ensuremath{\epsilon}/3) \lesssim e^{-n^c}.
\end{equation}
By further restricting $\delta$ to be sufficiently small, we may absorb the difference caused by the $\delta$-truncation on $\mathbf{X}(\mu)$ into an extra term of $\Theta(\delta)\le \ensuremath{\epsilon}/3$, since $\|\hat{\mathcal{F}}(\mathbf{X}_\delta(\mu))-\hat{\mathcal{F}}(\mathbf{X}(\mu))\|_\infty \le \delta$. The theorem follows immediately.
\end{proof}
With essentially the same analysis as in Lemma~\ref{Prop_Oempe_likely_for_happiness_emp_distr} and Theorem~\ref{Thm_main_happiness_dist_body}, except for replacing the DKW inequality with Bernstein's inequality for the empirical average, we can also deduce the following result. The proof is omitted.
\begin{proposition}
For any fixed $\ensuremath{\epsilon},\delta > 0$ and $0< c < 1/2$,
\begin{equation}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} |n^{-1}\|\mathbf{X}_\delta(\mu)\|_1 \|\mathbf{Y}_\delta(\mu)\|_1 - 1| > \ensuremath{\epsilon}\bigg) \lesssim e^{-n^c}.
\end{equation}
\end{proposition}
The effect of the $\delta$-truncation is nontrivial to remove because the sum of values can be sensitive to outliers, in particular given the heavy tail of the exponential distribution. We believe, however, that a refined analysis should suggest that $\sup_{\mu\in\mathcal{S}}\big|n^{-1}\|\mathbf{X}(\mu)\|_1\|\mathbf{Y}(\mu)\|_1 - 1\big| \overset{p}{\to} 0$. This is the analogue of the ``law of hyperbola'' in \citet{pittel1992likely}.
\subsection{Empirical distribution of ranks}
Based on the previous discussion of the empirical distribution of values, we now extend the result to ranks and prove our second main theorem.
\begin{theorem}
[Empirical distribution of ranks]\label{Thm_main_rank_dist_body}
For any fixed $\ensuremath{\epsilon}>0$ and $c\in(0,1/2)$,
\begin{equation}\label{Eqn_rank_dist_main_thm_body_whp}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty > \ensuremath{\epsilon}\bigg) \lesssim e^{-n^c}
\end{equation}
asymptotically as $n\to\infty$,
where $\bm\phi$ is the men's fitness vector.
As in Theorem~\ref{Thm_main_happiness_dist_body}, the infimum over $\lambda$ in \eqref{Eqn_rank_dist_main_thm_body_whp} can be replaced with the choice of $\lambda=\|\mathbf{Y}_\delta(\mu)\|_1$ for $\delta$ sufficiently small.
\end{theorem}
Heuristically, we expect the rank $R_i(\mu)$ of a man to be proportional to his value $X_i(\mu)$ once we condition on the stability of $\mu$ and on the values $X_i(\mu)=x_i$ and $\mathbf{Y}(\mu)=\mathbf{y}$. Indeed, a woman $\ensuremath{\mathsf{w}}_j$ with $j\ne\mu(i)$ stands ahead of $\ensuremath{\mathsf{w}}_{\mu(i)}$ on the preference list of man $\ensuremath{\mathsf{m}}_i$ exactly when $X_{ij}<x_i$, which under stability forces $Y_{ji}>y_j$. As $X_{ij}$ and $Y_{ji}$ jointly follow the product distribution $\Exp(a_{ij})\otimes\Exp(b_{ji})$ conditional on the event that $X_{ij}<x_i$ and $Y_{ji}<y_j$ do not simultaneously happen, the conditional probability that $\ensuremath{\mathsf{w}}_j\succeq_{\ensuremath{\mathsf{m}}_i} \ensuremath{\mathsf{w}}_{\mu(i)}$ is $\frac{(1-e^{-a_{ij}x_i})e^{-b_{ji}y_j}}{1-(1-e^{-a_{ij}x_i})(1-e^{-b_{ji}y_j})} \approx a_{ij} x_i$. Summing over $j\ne \mu(i)$, we expect the rank $R_i(\mu)$ to be, in expectation, close to $x_i \sum_{j\ne \mu(i)} a_{ij} \approx x_i \phi_i$; further, as a sum of independent Bernoulli random variables, $R_i(\mu)$ should concentrate around its expectation, leading to $R_i(\mu) \approx x_i \phi_i$ simultaneously for most $i\in [n]$.
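Purely as a numerical illustration of this heuristic (it is not used in the proof), one can compare realized ranks against the prediction $x_i\phi_i$. The sketch below assumes the latent-value matrix $\mathbf{X}$, the balanced-form score matrix $\mathbf{A}$, and a matching \texttt{mu} (e.g., computed by deferred acceptance); all names are ours:
\begin{verbatim}
import numpy as np

def ranks_vs_fitness(X, A, mu):
    """Compare men's realized ranks with the heuristic prediction x_i * phi_i.

    X[i, j]: latent value of man i for woman j (smaller is better).
    A:       balanced-form score matrix, so phi_i = sum_j a[i, j].
    mu:      integer array with mu[i] = index of m_i's partner."""
    n = len(mu)
    x = X[np.arange(n), mu]                      # realized values x_i = X[i, mu(i)]
    ranks = 1 + (X < x[:, None]).sum(axis=1)     # rank = 1 + #{women m_i strictly prefers}
    phi = A.sum(axis=1)                          # intrinsic fitness phi_i
    return ranks, x * phi                        # observed ranks vs. heuristic x_i * phi_i
\end{verbatim}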
This intuition is formalized in the proof, which is given in Appendix~\ref{Append_proof_thm_main_rank}.
\section{Introduction}
This paper is concerned with the welfare in random two-sided matching markets. In a two-sided matching market there are two kinds of agents, where each agent has preferences over potential partners of the other kind. We assume that the outcome is stable \citep{gale1962college}, meaning that there is no blocking pair of agents who would rather match with each other than with their assigned partners.
A large literature initiated by \citet{gale1962college} has deepened our understanding of two-sided matching markets, generating a blend of rich theory and market designs.\footnote{See, e.g., \citet{roth1992two,roth2018marketplaces}.} Less understood, however, are the welfare properties of typical markets. We study the welfare structure in matching markets when agents have latent preferences generated according to observed characteristics. Specifically, we are interested in the empirical welfare distribution of agents on each side of the market under stable outcomes as well as the relation between the outcomes of each side of the market.%
We study this question in large randomly generated markets, which allow for both vertical and horizontal differentiation. The model assumes that every agent has an observed personal score for every other agent in the market, and her preferences follow a logit model based on these scores. We impose that no agent is a priori overwhelmingly more desirable than any other agent. We find that the observed characteristics alone determine the empirical welfare distribution on each side of the market. Moreover, the joint surplus in the market is fixed, and the average welfare of one side of the market is a sufficient statistic to determine the empirical welfare distribution on both sides of the market.
The model we consider has an equal number of men and women. For every man $\ensuremath{\mathsf{m}}_i$ and every woman $\ensuremath{\mathsf{w}}_j$, we are given non-negative scores $a_{ij}$ and $b_{ji}$, which %
can be viewed as generated from observed characteristics.
Each man and each woman have strict preference rankings generated independently and proportionally to these latent scores, as in the Logit model.\footnote{Numerous empirical papers that study two-sided matching market assume agents' preferences follow a logit model (see e.g., \cite{agarwal2018demand,hitsch2010matching}).} Equivalently, each man $\ensuremath{\mathsf{m}}_i$ has a latent value from matching with woman $\ensuremath{\mathsf{w}}_j$ that is distributed exponentially with rate $a_{ij}$ (smaller values are considered better).\footnote{One can view the utility of an agent for her match to be the negative of the corresponding latent value.} Women's latent values for men are generated similarly.\footnote{Special cases of this general model are markets with uniformly random preferences \citep{knuth1990stable,pittel1989average,knuth1997stable,pittel1992likely, ashlagi2017unbalanced} or when agents have common public scores \citep{mauras2021two,ashlagi2020tiered}.}
We identify an intrinsic fitness for each agent that represents her relative competitiveness in the market, independent of the realized stable outcome. For every pair of agents on opposing sides of the market, we can obtain a mutual score of the pair’s match. If we write these scores in a matrix, the intrinsic fitness values
correspond to scaling coefficients that make the mutual matrix bi-stochastic.\footnote{This representation is valid since preferences are invariant under such transformations.} Intuitively, this bi-stochastic mutual matrix can be thought of as consisting of {\em a-priori} probabilities of each pair matching. In particular, this representation captures the interactions between the sides of the market. We exploit this representation to further analyze typical realized outcomes in the market.
We find that the welfare, or the ranks of the agents, when scaled by their intrinsic fitness, have an approximately exponential empirical distribution on each side of the market. Moreover, the average welfare of agents on one side of the market is sufficient to determine the average on the other side. Overall, each agent’s welfare can be seen as determined by a global parameter, her intrinsic fitness, and an extrinsic factor with exponential distribution across the population. This characterization holds with high probability in every stable matching. In fact, this structure extends to matchings that are only approximately stable, which can tolerate a vanishing fraction of blocking pairs.
At its core, since our proof needs to apply to all stable matchings (and even to nearly-stable matchings), it is a union bound argument. We use inequalities derived from the integral formula for the probability that a given matching is stable, first introduced by \citet{knuth1976mariages}. The heterogeneity of preferences brings great difficulty, which we overcome with a truncation technique to accommodate heavy tails of agents' outcomes and a fixed-point argument on the eigenspace of the characterizing matrix of the market. The exponential empirical distribution part of the result holds intuitively because there are not too many stable matchings in expectation, and the exponential distribution has the highest entropy among all distributions on the non-negative reals with a given mean.
Closely related to our work is the remarkable paper \cite{menzel2015large}, which finds that the joint surplus in the market is unique. The focus in \cite{menzel2015large} is on analyzing the matching rates between agents of different types, rather than the rankings and agents' welfare. Menzel's preference model is more general.\footnote{We note that both his and our model assume that the ratio between any two systematic scores is bounded.} Menzel establishes that, at the limit, agents choose partners according to a logit model from opportunity sets, while we consider large markets and assume agents' preferences are logit based. There are several other key differences. First, his model requires many agents of each type (with the same characteristics), while every agent in our model may have different characteristics. Second, while in our model every agent is matched, he assumes agents have a non-negligible outside option resulting in a large number of unmatched agents\footnote{\citet{menzel2015large} identifies how to scale the market under this assumption to capture realistic outcomes.}; this assumption allows him to apply a fixed point contraction argument and establish the uniqueness and characterization result.\footnote{Technically, such substantial outside options keep rejection chains short and prevent them from cycling.}
\subsection{Literature}
The analysis of random two-sided markets goes back to \citet{knuth1990stable,pittel1989average,pittel1992likely}, who consider markets with uniformly random complete preference lists. These papers establish the number of stable matchings as well as the average ranks on each side. A key finding is that the product of the average ranks of agents on the two sides of the market is approximately the size of the market \citep{pittel1992likely}, implying that stable matchings essentially lie on a hyperbola. Our results generalize these findings to random logit markets. We also extend them to describe the distributional outcomes in the market.
Several papers consider markets with uniformly drawn preferences with an unequal number of agents on each side of market \citep{ashlagi2017unbalanced,pittel2019likely,cai2019short}.
A key finding is that there is an essentially unique stable matching and agents on the short side have a substantial advantage. We believe that similar findings hold in random logit markets. Since our results hold for approximately stable matchings, our findings extend to the imbalanced case as long as the imbalance is not too large.%
Our paper further contributes to the above literature by also considering outcomes that are only approximately stable.
Several papers study random markets in which (at least on one side) agents' preferences are generated proportionally to public scores. \citet{immorlica2015incentives,kojima2009incentives,ashlagi2014stability} look at the size of the core.\footnote{They further consider the related issue of strategizing under stable matching mechanisms.} Their analysis relies on a certain market structure (keeping preference lists short), which leaves many agents unmatched. %
\cite{gimbert2019popularity} and \citet{ashlagi2020tiered} assume agents have complete preference lists and their focus is on the size of the core or agents' average rank.
\subsection{Notations}
Denote $[n] = \{1,\ldots,n\}$. Boldface letters denote vectors (lower case) and matrices (upper case), e.g., $\mathbf{x} = (x_i)_{i\in[n]}$ and $\mathbf{A} = (a_{ij})_{i\in[n],j\in [m]}$, and capital letters denote random variables.
For two identically shaped matrices (or vectors) $\mathbf{M}$ and $\mathbf{N}$, $\mathbf{M}\circ \mathbf{N}$ denotes their Hadamard (entry-wise) product. For a vector $\mathbf{x}\in\ensuremath{\mathbb{R}}^n$ with non-zero entries, denote its coordinate-wise inverse by $\mathbf{x}^{-1}$. $\diag(\mathbf{x})$ denotes the diagonal matrix whose $i$-th entry on the diagonal is $x_i$.
$\Exp(\lambda)$ and $\Poi(\lambda)$ denote, respectively, the exponential distribution and the Poisson distribution with rate $\lambda$. We denote the probability density function (pdf) and cumulative distribution function (CDF) of $\Exp(\lambda)$ by $f_\lambda$ and $F_\lambda$, respectively. %
$\Bern(p)$ denotes the Bernoulli distribution with success probability $p\in[0,1]$. %
For distributions $\mathcal{D}_1$ and $\mathcal{D}_2$ over space $\mathcal{X}$, $\mathcal{D}_1\otimes \mathcal{D}_2$ denotes their product distribution over $\mathcal{X}^2$.
$\hat{\mathcal{F}}(\mathbf{x})$ denotes the empirical distribution function for the components of a vector $\mathbf{x}$, treated as a function from $\ensuremath{\mathbb{R}}$ to $[0,1]$. $\mathcal{F}(\mathcal{D})$ denotes the CDF of a distribution $\mathcal{D}$ on $\ensuremath{\mathbb{R}}$.
For real-valued random variables $X$ and $Y$, $X\preceq Y$ denotes stochastic domination of $X$ by $Y$.
We use the standard $O(\cdot)$, $o(\cdot)$, $\Omega(\cdot)$, and $\Theta(\cdot)$ notations to hide constant factors. For functions $f,g:\ensuremath{\mathbb{N}}\to\ensuremath{\mathbb{R}}_+$, we say $f = O(g)$ (resp. $\Omega(g)$) if there exists an absolute constant $K\in(0,\infty)$ such that $f \le K g$ (resp. $f \ge K g$) for $n$ sufficiently large; $f=o(g)$ if $f/g \to 0$ as $n\to\infty$; and $f=\Theta(g)$ if $f=O(g)$ and $f=\Omega(g)$. We say $f = o_\alpha(g)$ if $f/g\to 0$ as $\alpha\to 0$ (uniformly over all other parameters, such as $n$). For example, $\sqrt{\ensuremath{\epsilon}} = o_\ensuremath{\epsilon}(1)$.
\section{Main results}
We denote the (random) set of stable matchings by $\mathcal{S}$. Recall that for a matching $\mu$, $\mathbf{X}(\mu)$ and $\mathbf{R}(\mu)$ denote men's value and rank vectors, respectively, under $\mu$. Denote by $\hat{\mathcal{F}}(\mathbf{v})$ the empirical distribution of the components of a vector $\mathbf{v}$ (viewed as a function from $\ensuremath{\mathbb{R}}$ to $[0,1]$), and $F_\lambda$ denotes the CDF of $\Exp(\lambda)$.
\begin{theorem}[Empirical distribution of values]\label{Thm_main_happiness_dist}
For any fixed $\ensuremath{\epsilon}>0$,
\begin{equation}\label{Eqn_happiness_dist_main_thm_whp}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty.
\end{equation}
That is, with high probability, in all stable matchings simultaneously, the empirical distribution of the men's values is arbitrarily close to some exponential distribution $\Exp(\lambda)$ in Kolmogorov-Smirnov norm, where the parameter $\lambda$ depends on the specific stable matching. In particular, the infimum over $\lambda$ in \eqref{Eqn_happiness_dist_main_thm_whp} can be replaced with the choice of $\lambda$ that can be computed from the women's value vector $\mathbf{Y}(\mu)$.
\end{theorem}
\begin{theorem}
[Empirical distribution of ranks]\label{Thm_main_rank_dist}
For any fixed $\ensuremath{\epsilon}>0$,
\begin{equation}\label{Eqn_rank_dist_main_thm_whp}
\ensuremath{\mathbb{P}}\bigg(\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty \le \ensuremath{\epsilon}\bigg) \to 1 \text{ as } n\to\infty,
\end{equation}
where $\bm{\phi}$ is the fitness vector (and can be computed as $\phi_i = \sum_{j=1}^n a_{ij}$).
That is, with high probability, in all stable matchings simultaneously, the empirical distribution of rescaled ranks of the men is arbitrarily close to some exponential distribution $\Exp(\lambda)$ in Kolmogorov-Smirnov norm, where the parameter $\lambda$ depends on the specific stable matching (yet the scaling doesn't). Again, we may replace the infimum with the choice of $\lambda$ that can be computed from the women's latent value vector $\mathbf{Y}(\mu)$.%
\end{theorem}
\subsection{Discussion}
The results characterize outcomes of all stable matchings. A slight refinement of Theorem~\ref{Thm_main_happiness_dist} will imply that the average value of one side of the market is essentially sufficient to determine the average value of the other side. Roughly, for a given stable matching $\mu$, the value of $\lambda$ in \eqref{Eqn_happiness_dist_main_thm_whp} and \eqref{Eqn_rank_dist_main_thm_whp} is approximately the sum of the women's values in $\mu$.\footnote{Technically, the choice of $\lambda$ can be taken as the sum of the values of women after excluding a small fraction $\delta$ of the women who are the least satisfied (those with the highest latent values) under the matching $\mu$. This truncation, which is also done for technical reasons, avoids outliers and in fact shows that the predictions still hold under even weaker notions of stability. We believe that such trimming is unnecessary with a more careful analysis.} This suggests that the average value of men is approximately $1/\lambda \approx 1/\|\mathbf{Y}(\mu)\|_1$. Therefore, multiplying the average values of the two sides of the market gives approximately $1/n$ simultaneously in all stable matchings with high probability. While we will establish such an approximation, we believe that, with a refined analysis, one should be able to show $\sup_{\mu\in\mathcal{S}}\big|n^{-1}\|\mathbf{X}(\mu)\|_1\|\mathbf{Y}(\mu)\|_1 - 1\big| \overset{p}{\to} 0$. %
Moreover, the average value of men is also sufficient to predict the empirical value distribution on each side of the market. For example, if we find that $30\%$ of the men have value $h$ or higher, then we should expect about $9\%$ to have value $2h$ or higher, since for an exponential distribution $\ensuremath{\mathbb{P}}(X\ge 2h)=\ensuremath{\mathbb{P}}(X\ge h)^2$. %
Theorem \ref{Thm_main_rank_dist} is similar but with respect to ranks; it implies that the product of the average scaled ranks of men and women should be asymptotically $n$, and the average rank on each side determines the empirical rank distributions.
Observe that the scaling in \eqref{Eqn_rank_dist_main_thm_whp} is consistent with the intuition of $\phi_i=\sum_{j=1}^n a_{ij}$ being the average fitness of $\ensuremath{\mathsf{m}}_i$. Within a stable matching, a more popular man should, on average, achieve a better (smaller) rank than a less popular one.
For instance, in a market with bounded public scores (Example~\ref{Ex_public_scores}), each man receives a number of proposals roughly inversely proportional to his fitness during the woman-proposing deferred acceptance algorithm,
implying that his rank is proportional to $\phi_i$ in the woman optimal stable matching.
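For illustration only, the following is a minimal Python sketch of woman-proposing deferred acceptance (it is not used in our analysis); \texttt{men\_prefs} and \texttt{women\_prefs} are complete preference lists, best first, e.g., those induced by the latent values of the model, and the function returns the woman-optimal stable matching together with the men's ranks:
\begin{verbatim}
import numpy as np
from collections import deque

def woman_proposing_da(men_prefs, women_prefs):
    """Woman-proposing deferred acceptance with complete lists.

    men_prefs[i] / women_prefs[j]: index arrays listing partners, best first.
    Returns (mu, ranks): mu[i] is the woman matched to man i in the
    woman-optimal stable matching; ranks[i] is the 1-indexed position of
    mu[i] on m_i's list."""
    n = len(men_prefs)
    rank_m = np.empty((n, n), dtype=int)     # rank_m[i, j]: position of w_j on m_i's list
    for i in range(n):
        rank_m[i, men_prefs[i]] = np.arange(n)
    next_prop = np.zeros(n, dtype=int)       # next man each woman will propose to
    held = np.full(n, -1, dtype=int)         # held[i]: woman currently held by m_i
    free_women = deque(range(n))
    while free_women:
        j = free_women.popleft()
        i = women_prefs[j][next_prop[j]]     # w_j proposes to her next choice
        next_prop[j] += 1
        if held[i] == -1:
            held[i] = j
        elif rank_m[i, j] < rank_m[i, held[i]]:
            free_women.append(held[i])       # m_i trades up; previous proposer is freed
            held[i] = j
        else:
            free_women.append(j)             # proposal rejected; w_j remains free
    mu = held
    ranks = rank_m[np.arange(n), mu] + 1
    return mu, ranks
\end{verbatim}
One can, for instance, feed in preference lists generated from public scores and inspect how the returned ranks vary with $\phi_i$.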
The proof of Theorem~\ref{Thm_main_happiness_dist} also offers evidence that the number of stable matchings should essentially be sub-exponential. This is formally stated in Corollary~\ref{Cor_subexp_num_stable_match}.
\subsection{Results for approximately stable matchings}
The proofs suggest that the characterization further extends to matchings that are only approximately stable in the following sense.
\begin{definition}
We say a matching $\mu$ between $\mathcal{M}$ and $\mathcal{W}$ is $\alpha$-stable for some $0 < \alpha < 1$ if there exists a sub-market of size at least $(1-\alpha) n$ on which $\mu$ is stable; that is, there exist subsets $\mathcal{M}'\subseteq \mathcal{M}$ and $\mathcal{W}' \subseteq \mathcal{W}$ both with cardinality $|\mathcal{M}'|=|\mathcal{W}'| \ge (1-\alpha) n$ such that $\mu(\mathcal{M}') = \mu(\mathcal{W}')$ and the partial matching induced by $\mu$ between $\mathcal{M}'$ and $\mathcal{W}'$ is stable (within this sub-market). We refer to the stable sub-matching between $\mathcal{M}'$ and $\mathcal{W}'$ as the \emph{stable part} of $\mu$.
Denote the set of $\alpha$-stable matchings by $\mathcal{S}_\alpha$.
\end{definition}
The following theorem can be derived from the quantitative versions of Theorems~\ref{Thm_main_happiness_dist} and \ref{Thm_main_rank_dist}, which will be presented in Section~\ref{sec_distribution}.
\begin{restatable}{theorem}{ThmMainApproxStable}\label{Thm_dist_body_approx_stable}
Assume $\alpha < n^{-\eta}$ for some constant $\eta > 1/2$. Then, as $n\to\infty$,
\begin{equation}\label{Eqn_happiness_dist_approx_stable}
\max_{\mu\in\mathcal{S}_\alpha} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty \overset{p}{\to} 0 \quad\text{ and }\quad \max_{\mu\in\mathcal{S}_\alpha} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty \overset{p}{\to} 0.
\end{equation}
\end{restatable}
The approximately exponential empirical distribution applies to any matching that is stable except for $o(\sqrt{n})$ agents. The key observation is that, if the entire market satisfies the contiguity assumption, then each sub-market of it is also contiguous, and we can apply union bound over all sub-markets of size $(1-\alpha)n$.
As a corollary, we have the following result for slightly imbalanced markets.
\begin{restatable}{corollary}{CorImbalanceMarket}\label{Cor_body_imbalance}
Consider a market consisting of $n-k$ men and $n$ women, where $k < n^{\beta}$ for some constant $\beta < 1/2$. Assume that the contiguity condition holds as in Assumption~\ref{Assumption_C_bounded}. Then, as $n\to\infty$,
\begin{equation}\label{Eqn_happiness_dist_imbalance}
\max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\mathbf{X}(\mu))-F_\lambda\|_\infty \overset{p}{\to} 0 \quad\text{ and }\quad \max_{\mu\in\mathcal{S}} \inf_{\lambda\in\ensuremath{\mathbb{R}}_+} \|\hat{\mathcal{F}}(\bm{\phi}^{-1}\circ\mathbf{R}(\mu))-F_\lambda\|_\infty \overset{p}{\to} 0.
\end{equation}
\end{restatable}
\begin{remark}
The results will not hold if there is a linear imbalance in the market, i.e., $k \propto n$. This is because in such markets the men achieve a constant average rank \cite{ashlagi2017unbalanced}, and therefore the convergence to the exponential distribution is impossible.
\end{remark}
These results are not necessarily tight and one may weaken the constraints on $\alpha$ and $k$ with a more careful analysis. Exploring other notions of approximate stability is left for future work. %
\section{Model}
We study two-sided matching markets with randomly generated preferences. Next we formalize the model, how preferences are generated and key assumptions.
\paragraph{Setup.} A matching market consists of two sets of agents, referred to as men $\mathcal{M}$ and women $\mathcal{W}$. Unless specified otherwise, we assume that $|\mathcal{M}| = |\mathcal{W}| = n$, men are labeled $\ensuremath{\mathsf{m}}_1,\ldots, \ensuremath{\mathsf{m}}_n$ and women are labeled $\ensuremath{\mathsf{w}}_1,\ldots, \ensuremath{\mathsf{w}}_n$.
Each man $\ensuremath{\mathsf{m}}_i$ has a complete strict preference list $\succ_{\ensuremath{\mathsf{m}}_i}$ over the set of women and each woman $\ensuremath{\mathsf{w}}_j$ has a complete strict preference list $\succ_{\ensuremath{\mathsf{w}}_j}$ over the set of men. A \emph{matching} is a bijection $\mu : \mathcal{M}\to\mathcal{W}$. To simplify the notation, men and women will be represented using the set of integers $[n]=\{1,2,\ldots,n\}$ and we write $\mu:[n]\to[n]$ so that $\mu(i)=j$ and $\mu^{-1}(j)=i$ means that $\ensuremath{\mathsf{m}}_i$ is matched with $\ensuremath{\mathsf{w}}_j$ in $\mu$. The \emph{rank} for man $\ensuremath{\mathsf{m}}_i$, denoted by $R_i(\mu)$, is the position of $\mu(i)$ on $\ensuremath{\mathsf{m}}_i$'s preference list (e.g., if an agent is matched to the second agent on her list, her rank is two). Write $\mathbf{R}(\mu):=(R_i(\mu))_{i\in[n]}$ for the men's rank vector in matching $\mu$.
The matching $\mu$ is \emph{unstable} if there is a pair of a man $\ensuremath{\mathsf{m}}_i$ and a woman $\ensuremath{\mathsf{w}}_j$ such that $\ensuremath{\mathsf{w}}_j \succ_{\ensuremath{\mathsf{m}}_i} \ensuremath{\mathsf{w}}_{\mu(i)}$ and $\ensuremath{\mathsf{m}}_i \succ_{\ensuremath{\mathsf{w}}_j} \ensuremath{\mathsf{m}}_{\mu^{-1}(j)}$. A matching is said to be \emph{stable} otherwise. It is well known that the set of stable matchings is not empty.
\paragraph{Logit-based random markets: the canonical form.} We consider markets in which complete preferences are randomly generated as follows. %
For each man $\ensuremath{\mathsf{m}}_i$, we are given a stochastic vector $\hat{\mathbf{a}}_i = (\hat{a}_{ij})_{j\in[n]} \in\ensuremath{\mathbb{R}}^n_+$. Then, $\ensuremath{\mathsf{m}}_i$'s preference list is generated from a logit model based on $\hat{\mathbf{a}}_i$. In particular, let $\mathcal{D}_i$ be the distribution on $\mathcal{W}$ that places on $\ensuremath{\mathsf{w}}_j$ a probability proportional to $\hat{a}_{ij}$; then $\ensuremath{\mathsf{m}}_i$ samples his favorite partner from $\mathcal{D}_i$, and then repeatedly samples from it without replacement for his next favorite partner until his list is complete.
Similarly, each woman $\ensuremath{\mathsf{w}}_j$'s preference list is generated from a logit model based on a given stochastic vector $\hat{\mathbf{b}}_j = (\hat{b}_{ji})_{i\in[n]}$.
Denote by $\hat{\mathbf{A}} = (\hat{a}_{ij})_{i,j\in[n]}$ and $\hat{\mathbf{B}} = (\hat{b}_{ji})_{j,i\in[n]}$ the row-stochastic matrices. We refer to this matrix representation of the preference model as the \emph{canonical form} and to $\hat{a}_{ij}$ (resp. $\hat{b}_{ji}$) as the \emph{canonical score} that $\ensuremath{\mathsf{m}}_i$ (resp. $\ensuremath{\mathsf{w}}_j$) assigns to $\ensuremath{\mathsf{w}}_j$ (resp. $\ensuremath{\mathsf{m}}_i$).
This model captures the multinomial logit (MNL) choice model, in which scores are closely related to the systematic utilities for agents over matches. The special case in which $\hat{a}_{ij} = \hat{b}_{ji} = 1/n$ for all $i,j\in[n]$ corresponds to the uniformly random preference model.
\paragraph{Mutual matrix and intrinsic fitness: the balanced form.}
While the canonical form is a useful way to describe the market, it will be helpful for the analysis to describe it using an alternative scaling scheme, which we refer to as the \emph{balanced form}.
Observe that multiplying any row of $\hat{\mathbf{A}}$ or $\hat{\mathbf{B}}$ by a positive constant does not change the behavior of the market.
We look for scaling vectors $\bm{\phi},\bm{\psi}\in\ensuremath{\mathbb{R}}^n_+$ for the rows of $\hat\mathbf{A}$ and $\hat\mathbf{B}$ such that $\mathbf{M} = n^{-1} \mathbf{A} \circ \mathbf{B}^\top$ is bistochastic\footnote{The nonnegative matrix $\mathbf{M}$ is bistochastic if the sum of entries in each row and each column is one.}, where $\mathbf{A} = \diag(\bm{\phi}) \hat\mathbf{A}$ and $\mathbf{B} = \diag(\bm{\psi}) \hat\mathbf{B}$. As is shown by \citet[Theorem~1]{sinkhorn1964relationship}, such a bistochastic matrix $\mathbf{M}$ always uniquely exists, and the scaling vectors $\bm{\phi}$ and $\bm{\psi}$ are unique up to constant rescaling. That is, $\bm{\phi}$ and $\bm{\psi}$ jointly solve
\begin{equation}
\frac{1}{n} \diag(\bm{\phi}) (\hat{\mathbf{A}} \circ \hat{\mathbf{B}}^\top) \diag(\bm{\psi}) \mathbf{1} = \mathbf{1} \quad\text{ and }\quad \frac{1}{n} \diag(\bm{\psi}) (\hat{\mathbf{B}} \circ \hat{\mathbf{A}}^\top) \diag(\bm{\phi}) \mathbf{1} = \mathbf{1},
\end{equation}
where $\mathbf{1}$ is the vector consisting of all $1$'s.
The matrix $\mathbf{M}$ will be referred to as the \emph{mutual matrix}. %
In the remainder of the paper we assume without loss of generality that the market is described in the balanced form, using $\mathbf{A},\mathbf{B}$ and the mutual matrix $\mathbf{M}$.
The bistochasticity constraint incurs the following relationship: if $\hat{b}_{ji}$'s increase (resp. decrease) by a factor of $\alpha$ simultaneously for all $j\in[n]$, the scaling factor $\phi_i$, and hence all $a_{ij}$'s for $j\in[n]$, must decrease (resp. increase) by the same factor to maintain bistochasticity of $\mathbf{M}$. In other words, a uniform increase (resp. decrease) of $\ensuremath{\mathsf{m}}_i$'s popularity among the women will lead to a proportional decrease (resp. increase) in $\phi_i$. Thus, we can view the $\phi_i$ as reflecting the ``average popularity'' of man $\ensuremath{\mathsf{m}}_i$ among the women: Loosely speaking, the smaller $\sum_{j=1}^n a_{ij}$ is, the more popular $\ensuremath{\mathsf{m}}_i$ is (reflected by larger values of $b_{ji}$'s).
We refer to the vector $\bm{\phi}$ and $\bm{\psi}$ as the men's and women's \emph{intrinsic fitness} vector, respectively (and note that a smaller intrinsic fitness value means the agent is more competitive). Note that since $\hat{\mathbf{A}} = \diag(\bm{\phi})^{-1} \mathbf{A}$ is row-stochastic, we conveniently have $\phi_i = \sum_{j=1}^n a_{ij}$, and similarly $\psi_j = \sum_{i=1}^n b_{ji}$ in the balanced form.
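Numerically, the balanced form can be obtained by Sinkhorn iteration. The following minimal sketch is illustrative only; it assumes the canonical-form matrices are given as row-stochastic arrays \texttt{A\_hat} and \texttt{B\_hat}, with \texttt{B\_hat[j, i]} holding $\hat b_{ji}$, and all names are ours:
\begin{verbatim}
import numpy as np

def balanced_form(A_hat, B_hat, iters=1000, tol=1e-12):
    """Sinkhorn scaling: find phi, psi so that
    M = diag(phi) (A_hat o B_hat^T) diag(psi) / n is (approximately) bistochastic.
    A_hat[i, j] and B_hat[j, i] are the (row-stochastic) canonical scores."""
    n = A_hat.shape[0]
    K = A_hat * B_hat.T                  # mutual canonical scores a_hat[i,j] * b_hat[j,i]
    phi, psi = np.ones(n), np.ones(n)
    for _ in range(iters):
        phi_new = n / (K @ psi)          # force row sums of diag(phi) K diag(psi) to n
        psi_new = n / (K.T @ phi_new)    # force column sums to n
        done = np.max(np.abs(phi_new - phi)) + np.max(np.abs(psi_new - psi)) < tol
        phi, psi = phi_new, psi_new
        if done:
            break
    A = phi[:, None] * A_hat             # balanced-form scores a[i,j] = phi[i] * a_hat[i,j]
    B = psi[:, None] * B_hat             # balanced-form scores b[j,i] = psi[j] * b_hat[j,i]
    M = A * B.T / n                      # mutual matrix
    return A, B, M, phi, psi
\end{verbatim}
Since $\hat{\mathbf{A}}$ is row-stochastic, the returned \texttt{phi} coincides with the row sums of \texttt{A}, matching the identity $\phi_i=\sum_j a_{ij}$ noted above.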
\begin{example}
[Markets with public scores]\label{Ex_public_scores}
We say a matching market has public scores when $\hat{\mathbf{a}}_i=\hat{\mathbf{a}}\in\ensuremath{\mathbb{R}}_+^n$ for all $i\in[n]$ and $\hat{\mathbf{b}}_j=\hat{\mathbf{b}}\in\ensuremath{\mathbb{R}}_+^n$ for all $j\in[n]$. In other words, agents on the same side of the market share an identical preference distribution. %
The fitness vectors are simply $\bm{\phi} = \hat{\mathbf{b}}^{-1}$ and $\bm{\psi} = \hat{\mathbf{a}}^{-1}$, where the inverse is taken component-wise. The mutual matrix is $\mathbf{M} = \mathbf{J} := (n^{-1})_{i,j\in[n]}$ in this case.
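Indeed, with these choices $a_{ij} = \phi_i \hat{a}_{j} = \hat{a}_j/\hat{b}_i$ and $b_{ji} = \psi_j \hat{b}_{i} = \hat{b}_i/\hat{a}_j$, so that $m_{ij} = \frac{1}{n}a_{ij}b_{ji} = \frac{1}{n}$ for every pair $(i,j)$, which is exactly $\mathbf{M}=\mathbf{J}$.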
\end{example}
\paragraph{Latent values.}
The logit-based preference model can be generated equivalently in the following way.
Let $\mathbf{X},\mathbf{Y}\in\ensuremath{\mathbb{R}}_+^{n\times n}$ be two random matrices with independent entries $X_{ij}$ (resp. $Y_{ji}$) sampled from $\Exp(a_{ij})$ (resp. $\Exp(b_{ji})$). The preference profile is then derived from $\mathbf{X}$ and $\mathbf{Y}$ as follows:
\[
\ensuremath{\mathsf{w}}_{j_1}\succeq_{\ensuremath{\mathsf{m}}_i}\ensuremath{\mathsf{w}}_{j_2} \quad\Longleftrightarrow\quad X_{ij_1} < X_{ij_2},
\]
\[
\ensuremath{\mathsf{m}}_{i_1} \succeq_{\ensuremath{\mathsf{w}}_j} \ensuremath{\mathsf{m}}_{i_2} \quad\Longleftrightarrow\quad Y_{ji_1} < Y_{ji_2}.
\]
We refer to each $X_{ij}$ (resp. $Y_{ji}$) for $i,j\in[n]$ as the \emph{latent value} (or simply {\em value}) of $\ensuremath{\mathsf{m}}_i$ (resp. $\ensuremath{\mathsf{w}}_j$) if matched with $\ensuremath{\mathsf{w}}_j$ (resp. $\ensuremath{\mathsf{m}}_i$).
Note that for every agent, a lower rank implies a lower latent value (and therefore lower values of rank and latent value are better).
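For concreteness, the following minimal sketch (illustrative only; the names are ours) draws the latent values and reads off the induced preference lists; sorting a row of $\mathbf{X}$ reproduces exactly the logit sampling-without-replacement procedure described above:
\begin{verbatim}
import numpy as np

def sample_preferences(A, B, seed=0):
    """Draw X[i, j] ~ Exp(a[i, j]) and Y[j, i] ~ Exp(b[j, i]) (smaller is
    better) and return the induced preference lists, best first."""
    rng = np.random.default_rng(seed)
    X = rng.exponential(scale=1.0 / A)    # numpy's scale is the mean, i.e. 1 / rate
    Y = rng.exponential(scale=1.0 / B)
    men_prefs = np.argsort(X, axis=1)     # men_prefs[i]: women ordered by m_i, best first
    women_prefs = np.argsort(Y, axis=1)   # women_prefs[j]: men ordered by w_j, best first
    return X, Y, men_prefs, women_prefs
\end{verbatim}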
\paragraph{Regularity assumption.}
We study the asymptotic behavior of two-sided matching markets as the market size grows large. Informally, we restrict attention to contiguous markets, in the sense that, ex ante, no agent finds any other agent (on the opposite side of the market) disproportionately favorable or unfavorable to other agents. The condition is formalized as follows.
A matrix $\mathbf{L}\in\ensuremath{\mathbb{R}}^{n\times n}$ with non-negative entries is called \emph{$C$-bounded} for some constant $C\ge 1$ if $\ell_{ij}\in[1/C,C]$ for all $1\le i,j\le n$. When $\mathbf{L}$ is (bi-)stochastic, we will abuse notation and say $\mathbf{L}$ is $C$-bounded if $n\mathbf{L}$ satisfies the definition above.
\begin{assumption}[Contiguity]\label{Assumption_C_bounded}
We assume that, by choosing an appropriate scaling of $\bm{\phi}$ and $\bm{\psi}$ in the balanced form,
there exist absolute constants $C\in[1,\infty)$ and $n_0<\infty$ such that $\mathbf{A}$, $\mathbf{B}$, and $n\mathbf{M}=\mathbf{A}\circ \mathbf{B}^\top$ are all $C$-bounded for all $n\ge n_0$; that is, there exists $C\in[1,\infty)$ such that
\begin{equation}\label{Eqn_assumpt_C_bound_main}
\frac{1}{C} \le \min_{i,j\in[n]} \min\{a_{ij}, b_{ji}, nm_{ij}\} \le \max_{i,j\in[n]} \max\{a_{ij}, b_{ji}, nm_{ij}\} \le C \quad\text{ for all }\; n > n_0.
\end{equation}
\end{assumption}
\begin{remark}
It is easy to verify that Assumption~\ref{Assumption_C_bounded} holds when no agent finds any potential partner disproportionately favorable or unfavorable based on their canonical scores:
If $\hat\mathbf{A}$ and $\hat\mathbf{B}$ are $C$-bounded, then there exists a choice of $\bm{\phi}$ and $\bm{\psi}$ with all entries in $[n/C^2, nC^2]$ in the balanced form; further, $\mathbf{M} $ is $C^4$-bounded.
Thus, Assumption~\ref{Assumption_C_bounded} is equivalent to the existence of an absolute upper bound on the ratio between pairs of entries within the same row of $\mathbf{A}$ or $\mathbf{B}$; that is
\begin{equation}
\limsup_{n\to\infty} \max_{i,j_1,j_2\in[n]} \frac{a_{ij_1}}{a_{ij_2}} < \infty \qquad\text{ and }\qquad \limsup_{n\to\infty} \max_{j,i_1,i_2\in[n]} \frac{b_{ji_1}}{b_{ji_2}} < \infty.
\end{equation}
This condition is agnostic to scaling of the matrices and hence easy to certify. However, the lower and upper bounds in \eqref{Eqn_assumpt_C_bound_main} are more convenient in our later analysis, where the constant $C$ will make an appearance (although often made implicit in the results).
\end{remark}
\begin{remark}
Assumption~\ref{Assumption_C_bounded} offers a strong contiguity condition on the market, in that the attractiveness among all pairs of men and women vary at most by an (arbitrarily large) constant factor as the market grows. We expect the results to hold under a weaker assumption, which can be described through the spectral gap of the matrix $\mathbf{M}$. Recall that, as a bistochastic matrix, $\mathbf{M}$ has a largest eigenvalue of $1$ and all other eigenvalues of magnitude at most $1$. We may think of the market as contiguous in this weaker sense if the spectral gap of $\mathbf{M}$, given by $1-|\lambda_{\max}(\mathbf{M}-\mathbf{J})|$, is bounded away from zero as the market grows. The spectral gap is a common and powerful notion when studying the structure of networks and communities.\footnote{In our model of the matching market, the spectral gap of $\mathbf{M}$ describes the extent to which the market interconnects globally (contiguity) or decomposes into multiple sub-markets (modularity). A larger spectral gap means that the market is more cohesive, with more uniform or homogeneous preferences. For instance, the uniform market with $\mathbf{M}=\mathbf{J}$ has a unit spectral gap, the maximum possible value. On the other hand, a smaller spectral gap means that the market is more clustered, with a clearer boundary between communities and poorly mixed preferences. For instance, any block-diagonal bistochastic matrix (with more than one blocks) has a zero spectral gap, and corresponds to a market that decomposes into two or more independent sub-markets --- one cannot hope to have a uniform structure result in such markets.} We impose Assumption \ref{Assumption_C_bounded} as it simplifies substantially the analysis and exposition.
\end{remark}
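For completeness, the spectral gap appearing in this remark is straightforward to compute numerically; a minimal sketch (illustrative only):
\begin{verbatim}
import numpy as np

def spectral_gap(M):
    """Spectral gap 1 - |lambda_max(M - J)| of a bistochastic mutual matrix M,
    where J is the all-(1/n) matrix."""
    n = M.shape[0]
    J = np.full((n, n), 1.0 / n)
    return 1.0 - np.max(np.abs(np.linalg.eigvals(M - J)))
\end{verbatim}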
\subsection{Estimating the (unconditional) stability probability}\label{Subsec_uncond_stable_prob}
Using the concentration inequalities given in Lemmas~\ref{Lemma_weighted_exp_chernoff} and \ref{Lemma_wgt_exp_cond_concentration}, we derive the following upper bound, which essentially characterizes the (approximate) probability that a partial matching of size $n-\floor{\delta n}$ is stable together with a typical value vector for the women (i.e., $\mathbf{Y}_{\mathcal{W}'}\in\mathcal{R}_1$).
\begin{restatable}{proposition}{propEqXYBound}\label{Prop_EqXY_bound}
For a fixed partial matching $\mu'$ on $\mathcal{M}'$ and $\mathcal{W}'$ of size $n-\floor{\delta n}$,
\begin{equation}\label{Eqn_prop_EqXY_target_bound}
\ensuremath{\mathbb{E}}[q(\mathbf{X}_{\mathcal{M}'},\mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}_2}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot\mathbbm{1}_{\mathcal{R}_1}(\mathbf{Y}_{\mathcal{W}'})] \le e^{o(n)+o_\delta(n)} \frac{(\delta n)!}{n!}\prod_{i\in\mathcal{M}'} a_{i,\mu'(i)} b_{\mu'(i),i}.
\end{equation}
\end{restatable}
The proof of Proposition~\ref{Prop_EqXY_bound} will be deferred to Appendix~\ref{appendix_extra_proofs}, where we will develop intermediate results that characterize the typical behavior of $\mathbf{X}_{\mathcal{M}'}$ and $\mathbf{Y}_{\mathcal{W}'}$ relative to each other (see Appendix~\ref{Append_proof_prop_eigenvec_of_M}).
Proposition~\ref{Prop_EqXY_bound} provides evidence that, heuristically, the expected number of stable partial matchings should be sub-exponential.
\begin{restatable}[Number of stable partial matchings]{corollary}{corSubExpNumOfStableMatch}
\label{Cor_subexp_num_stable_match}
Fix any $\delta > 0$ and $c\in(0,1/2)$. Let $N_\delta$ denote the number of stable partial matchings of size $n-\floor{\delta n}$ satisfying the condition in Corollary~\ref{Cor_Rstar_likely} (i.e., $\mathcal{R}^\star$) in a random instance of the matching market. Then, $\ensuremath{\mathbb{E}}[N_\delta] \le \exp(o_\delta(n))$ granted that $n$ is sufficiently large. Further, with probability at least $1-e^{-n^c}$, the condition $\mathcal{R}^\star$ is satisfied by all $\delta$-truncated stable matchings.
\end{restatable}
\begin{remark}
Corollary~\ref{Cor_subexp_num_stable_match} falls short of establishing a sub-exponential bound for the expected number of stable matchings in two aspects.
\begin{itemize}
\item While stable matchings that violate $\mathcal{R}^\star$ (when truncated) will not exist with high probability, we have not yet proved a bound for the expected number of such stable matchings. We believe that this can be overcome with a refined analysis of deferred acceptance, which should lead to stronger results than Lemma~\ref{Lemma_DA_prop_num}. Note that all high probability results in Appendix~\ref{Append_weak_regular_scores} after this lemma come with an upper bound on the expected number of stable matchings under various conditions.
\item In general, it is possible to have multiple, in the worst case $\floor{\delta n}!$, stable matchings that produce the same $\delta$-truncated stable partial matching.
\end{itemize}
We believe that a sub-exponential bound for the number of stable matchings is possible with a more refined analysis.
\end{remark}
\subsection{Opportunity sets and an eigenspace property for the value vectors}\label{Subsec_proximity_eigensubsp}
Our next result states that the value vectors in stable matchings are not only controlled in terms of their first and second moments, but are also, in a suitable sense, ``close'' to a constant vector $t\mathbf{1}$ for some $t\in\ensuremath{\mathbb{R}}_+$; such constant vectors are eigenvectors of $\mathbf{M}$ corresponding to its maximal eigenvalue $\lambda_1(\mathbf{M})=1$.
Let us fix the women's values to be $\mathbf{Y}_{\mathcal{W}'}=\mathbf{y}$ and consider the implication for the men's outcome in any (partial) matching $\mu'$. For a man $\ensuremath{\mathsf{m}}_i$ with value $x_i$, the expected number of blocking pairs between him and the women, conditional on $x_i$ and $\mathbf{y}$, is
\begin{equation*}
\sum_{j\ne \mu'(i)} (1-e^{-a_{ij} x_i}) (1-e^{-b_{ji} y_j}) \approx \sum_{j=1}^n a_{ij} b_{ji} x_i y_j = n (\mathbf{M} \mathbf{y})_i x_i.
\end{equation*}
The next result suggests that, in a typical market, the burden of avoiding blocking pairs falls roughly equally on the men in the sense that the entries of $\mathbf{M} \mathbf{Y}_{\mathcal{W}'}$ are largely the same.
\begin{restatable}{lemma}{PropEigenVecOfMHighProf}\label{Prop_eigenvec_of_M_high_prob}
Let $\mu'$ be a partial matching of size $n-\floor{\delta n}$ on $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$. Fix any $\zeta > 0$, and let
\begin{equation}
\Omega_{\text{eig}}(\zeta) := \left\{(\mathbf{x},\mathbf{y})\in \ensuremath{\mathbb{R}}^n\times\ensuremath{\mathbb{R}}^n : \exists t\in\ensuremath{\mathbb{R}}_+, \sum_{i=1}^n \mathbbm{1}\left\{|(\mathbf{M} \mathbf{y})_i-t| \ge \sqrt{\zeta} t \right\} \le \sqrt{\zeta} n\right\}.
\end{equation}
Then
\begin{equation}\label{Eqn_Prop_eigenvec_of_M_Expectation_is_small}
\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'}) \cdot \mathbbm{1}_{\mathcal{R}\backslash\Omega_{\text{eig}}(\Theta(\delta)+\zeta)}(\mathbf{X}_{\mathcal{M}'}, \mathbf{Y}_{\mathcal{W}'})\big]
\le %
\exp(o_\delta(n)-\Theta(\zeta^2 n)) \cdot \frac{(\delta n)!}{n!} \prod_{i\in\mathcal{M}'}a_{i,\mu'(i)}b_{\mu'(i),i},
\end{equation}
with the implicit constants uniform over all $\mathcal{M}',\mathcal{W}'$, and $\mu'$.
\end{restatable}
The proof of Lemma~\ref{Prop_eigenvec_of_M_high_prob} is deferred to Appendix~\ref{Append_proof_prop_eigenvec_of_M}.
Let us record an immediate corollary of this lemma; its proof is similar to that of Lemma~\ref{Lemma_reduction_to_q} and is deferred to Appendix~\ref{Append_proof_no_stable_outside_oeigz}.
\begin{restatable}{corollary}{CorNoStableOutsideOeigz}\label{Cor_no_stable_outside_Oeigz}
For $\delta > 0$ sufficiently small, there exists a choice of $\zeta =\zeta(\delta) > 0$ such that $\zeta\to 0$ as $\delta \to 0$ and that
\begin{equation}
\ensuremath{\mathbb{P}}(\exists \mu\text{ stable}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin\Omega_{\text{eig}}(\zeta)) \lesssim e^{-n^c}
\end{equation}
asymptotically as $n\to\infty$.
\end{restatable}
Corollary~\ref{Cor_no_stable_outside_Oeigz} roughly states that, in a contiguous market, conditioning on the women's outcomes in a stable matching has an almost even impact on the men's values.
\section{Intuition and Proof Ideas \label{sec_prep_proof}}
This section offers intuition and the key ideas behind the proofs.
\subsection{Intuition}
Let us start by providing a high-level intuition of both why the result is true and how we should expect the proof to go. The actual proof does not follow the intuition exactly due to some technical difficulties that need to be overcome. It is possible that one can find a proof that follows the intuition below more directly.
At a very high level, the result follows from a {\em union bound}. There are $n!$ potential matchings. Based on {\em a-priori} preferences, for each matching $\mu$, one could compute the probability $P_\mu$ that $\mu$ is stable under the realized preferences. A union bound argument just establishes that
\begin{equation}\label{eq:int1}
\sum_{\text{$\mu$ does not satisfy the conditions of \eqref{Eqn_happiness_dist_main_thm_whp}}} P_\mu = \exp(-\Omega_\epsilon(n))
\end{equation}
Establishing \eqref{eq:int1} directly appears to be difficult. A more approachable statement is a universal bound on the number of stable matchings overall:
\begin{equation}\label{eq:int2}
\sum_{\mu} P_\mu = \exp(o(n)).
\end{equation}
That is, in expectation, there are not so many stable matchings.
Consider the triplet of random variables $(X,Y,\mu)$, where $X$ and $Y$ are preferences sampled according to the model, and $\mu$ is a uniformly random matching. Let $\mathcal{S}$ be the event that $\mu$ is stable under the preference profile $(X,Y)$. If there are few stable matchings overall, as \eqref{eq:int2} implies, then we have
\begin{equation}
\label{eq:int3}
\ensuremath{\mathbb{P}}(\mathcal{S})=\exp(o(n))\cdot n!^{-1}.
\end{equation}
Another way of uniformly sampling from $\mathcal{S}$ is as follows.
First, sample $(X_1,Y,\mu)\in_U \mathcal{S}$. Then resample $X_2$ conditioned on $(X_2,Y,\mu)\in \mathcal{S}$. The triple $(X_2,Y,\mu)$ is a uniform element of $\mathcal{S}$. Note that for a fixed $(Y,\mu)$ the marginal distribution of $X_2$ conditioned on
$(X_2,Y,\mu)\in\mathcal{S}$ is fairly simple to reason about: each member should prefer the pairing assigned to them by $\mu$ to all other potential blocking matches. In such a resampling, as we shall see, the empirical exponential distribution appears naturally from large deviations theory.
Suppose we prove that for all $(Y,\mu)$,
\begin{equation}
\label{eq:int4}
\ensuremath{\mathbb{P}}_{X_2:(X_2,Y,\mu)\in\mathcal{S}}(\text{$X_2$ does not satisfy the conditions of \eqref{Eqn_happiness_dist_main_thm_whp}} ) = \exp(-\Omega_\epsilon(n))
\end{equation}
Putting these together, we would get
\begin{multline*}
\ensuremath{\mathbb{P}}\big((X_2,Y,\mu)\in \mathcal{S} \wedge(\text{$X_2$ does not satisfy the conditions of \eqref{Eqn_happiness_dist_main_thm_whp}} )\big) \\ = \ensuremath{\mathbb{P}}((X_1,Y,\mu)\in \mathcal{S}) \cdot \exp(-\Omega_\epsilon(n)) = n!^{-1}\cdot \exp(-\Omega_\epsilon(n)),
\end{multline*}
implying \eqref{eq:int1} (together with a similar statement about $Y$).
A certain amount of technical work is needed to make the above blueprint go through. In particular, since our bounds need to be
pretty tight, we need to worry about tail events. We end up having to perform the above resampling trick multiple times.
\paragraph{Why do we need the boundedness assumption \ref{Assumption_C_bounded}?} It is worth noting that while the boundedness assumption might not be the weakest assumption under which our results hold, some assumptions on the market are inevitable. In particular, if the market can be split into two independent balanced markets $A$ and $B$, then there is no connection between the fortunes of the men in market $A$ and the fortunes of the men in market $B$, and the empirical distribution of values on each side will be a mixture of two exponential distributions.
Things will get even more complicated if markets $A$ and $B$ are not entirely independent, but are connected by a small number of agents. It is still possible that some version of Theorem~\ref{Thm_main_happiness_dist} holds, but it will need to depend on the eigenspaces corresponding to large eigenvalues of the matrix.
It is worth noting that even \eqref{eq:int2} fails to hold when we do not have the boundedness assumption. Consider a market consisting of $n/2$ small markets with just $2$ men and $2$ women in each. Under uniform preferences, the expected number of stable matchings within each small market is $9/8$; thus
\begin{equation*}
\sum_{\mu} P_\mu = (9/8)^{n/2} \neq \exp(o(n)).
\end{equation*}
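The $9/8$ figure is easy to verify by brute force. The following sketch (ours, purely illustrative and not part of the argument) enumerates all $16$ preference profiles of a single $2\times 2$ market and averages the number of stable matchings.
\begin{verbatim}
# Illustrative check: in a 2x2 market with uniformly random strict preferences,
# the expected number of stable matchings is 9/8.
from itertools import permutations, product

def is_stable(mu, men_pref, women_pref):
    # mu[m] = woman matched to man m; (m, w) blocks if both strictly
    # prefer each other to their assigned partners.
    inv = {w: m for m, w in enumerate(mu)}
    for m in range(2):
        for w in range(2):
            if w == mu[m]:
                continue
            if men_pref[m].index(w) < men_pref[m].index(mu[m]) and \
               women_pref[w].index(m) < women_pref[w].index(inv[w]):
                return False
    return True

orders = list(permutations(range(2)))          # the two possible strict rankings
total = profiles = 0
for men_pref in product(orders, repeat=2):
    for women_pref in product(orders, repeat=2):
        profiles += 1
        total += sum(is_stable(mu, men_pref, women_pref)
                     for mu in permutations(range(2)))
print(total / profiles)                        # 1.125 = 9/8
\end{verbatim}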
\subsection{Proof sketch}
\paragraph{Case with uniformly random preferences.}
Let us first look at the classic case where agents' preferences are generated independently and uniformly at random, i.e.,
all canonical scores equal $1/n$. %
The proof in this case is more straightforward due to the symmetry among agents and the established high probability bound on the number of stable matchings \citep{pittel1989average}. We nevertheless keep the discussion informal in this section.
For a given matching $\mu$, we study the conditional distribution of the value vectors $\mathbf{X}(\mu),\mathbf{Y}(\mu)\in\ensuremath{\mathbb{R}}^n$ conditional on stability of $\mu$. Conditional also on the women's value vector $\mathbf{Y}(\mu)=\mathbf{y}$, man $\ensuremath{\mathsf{m}}_i$'s value $X_i(\mu)$ must satisfy $X_i(\mu) < X_{ij}$ for each $j\ne \mu(i)$ with $Y_{ji} < y_j$ in order not to form a blocking pair. Since $X_{ij}$ and $Y_{ji}$ are i.i.d. samples from $\Exp(1)$ for each $j\ne \mu(i)$, one should expect $X_i(\mu)$ to be effectively less than the minimum of roughly $\sum_{j\ne \mu(i)} (1 - e^{-y_j}) \approx \|\mathbf{y}\|_1$ independent $\Exp(1)$ random variables. Such a constraint acts independently on each $X_i$ (conditional on $\mathbf{Y}(\mu)=\mathbf{y}$), and therefore in the posterior distribution one should expect $\mathbf{X}(\mu)$ to behave like i.i.d. samples from $\Exp(\|\mathbf{y}\|_1)$.
Concretely, write $\mathbf{U}(\mu)=\mathbf{u}=F(\mathbf{x})$ and $\mathbf{V}(\mu)=\mathbf{v}=F(\mathbf{y})$, where we recall that $F(z) = 1-e^{-z}$ is the CDF of $\Exp(1)$ and is applied component-wise to the value vectors. Conditional on $\mathbf{X}(\mu)=\mathbf{x}$ and $\mathbf{Y}(\mu)=\mathbf{y}$, the probability that $\mu$ is stable is $\prod_{i,j:j\ne \mu(i)} (1-u_iv_j)$ \citep{pittel1989average}. With a crude first order approximation (namely, $1-z\approx e^{-z}$), this expression can be approximated by
\begin{equation}\label{Eqn_approx_intuition}
\prod_{\substack{i,j\in[n]\\j\ne \mu(i)}} (1-u_iv_j)\approx \exp\left(-\sum_{i,j\in[n]}x_i y_j\right)=\exp(-\|\mathbf{x}\|_1\|\mathbf{y}\|_1),
\end{equation}
where we also put in the terms $x_i y_{\mu(i)}$ for $i\in [n]$ despite their absence in the original product. By Bayes' rule, conditional on $\mathbf{Y}(\mu)=\mathbf{y}$ and on $\mu$ being stable, the distribution of $\mathbf{X}(\mu)$ is approximately $p(\mathbf{x}|\mu\in\mathcal{S}, \mathbf{Y}=\mathbf{y})\propto \exp(-\|\mathbf{x}\|_1\|\mathbf{y}\|_1)\cdot\prod_{i=1}^n e^{-x_i} = \prod_{i=1}^n \exp(-(1+\|\mathbf{y}\|_1)x_i)$. Note that this is the joint density of the $n$-fold product of $\Exp(1+\|\mathbf{y}\|_1)$. Our main theorems in the case with uniformly random preferences follow directly from the convergence of empirical measures.
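This heuristic is easy to probe numerically. The sketch below (an illustration of ours, not part of the proof) runs men-proposing deferred acceptance on a market with i.i.d.\ $\Exp(1)$ latent values (the scale is irrelevant for the rankings) and compares the quartiles of the men's matched values, rescaled by their mean, with those of $\Exp(1)$; they should roughly agree.
\begin{verbatim}
import numpy as np

def men_proposing_da(X, Y):
    """Men-proposing deferred acceptance; smaller latent value = more preferred.
    X[i, j]: man i's value for woman j; Y[j, i]: woman j's value for man i."""
    n = X.shape[0]
    pref = np.argsort(X, axis=1)            # each man's women, best first
    nxt = np.zeros(n, dtype=int)            # next index into pref[i] to propose to
    holds = -np.ones(n, dtype=int)          # holds[j] = man currently held by woman j
    free = list(range(n))
    while free:
        i = free.pop()
        j = pref[i, nxt[i]]
        nxt[i] += 1
        if holds[j] < 0:
            holds[j] = i
        elif Y[j, i] < Y[j, holds[j]]:      # woman j prefers the new proposer
            free.append(holds[j])
            holds[j] = i
        else:
            free.append(i)
    mu = np.empty(n, dtype=int)
    mu[holds] = np.arange(n)                # mu[i] = woman matched to man i
    return mu

rng = np.random.default_rng(1)
n = 2000
X = rng.exponential(1.0, size=(n, n))
Y = rng.exponential(1.0, size=(n, n))
mu = men_proposing_da(X, Y)
vals = X[np.arange(n), mu]                  # men's values in this stable matching
print(np.quantile(vals / vals.mean(), [0.25, 0.5, 0.75]))
print(-np.log([0.75, 0.5, 0.25]))           # Exp(1) quartiles: 0.29, 0.69, 1.39
\end{verbatim}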
\paragraph{The general case.}
The entire result can be viewed abstractly through the following lens: for any matching $\mu$, we expect the value vectors $(\mathbf{X}(\mu),\mathbf{Y}(\mu))$ to behave ``nicely'' with very high probability conditional on stability of $\mu$, so that even after a union bound over all stable matchings (which we show separately to be ``rare'') it is still unlikely to see any ``bad'' value vectors. This requires a careful analysis of the conditional distribution of values (given stability) $\mathcal{D}_\mu\in\Delta(\ensuremath{\mathbb{R}}^n_+\times\ensuremath{\mathbb{R}}^n_+)$, which depends on both the (unconditional) preference distribution (the ``prior'') and the conditional probability that $\mu$ is stable given a pair of value vectors $\mathbf{X}(\mu)=\mathbf{x}$ and $\mathbf{Y}(\mu)=\mathbf{y}$, which we will denote by $p_\mu(\mathbf{x},\mathbf{y})$. We will define the ultimate ``nice'' event to be the $\ensuremath{\epsilon}$-proximity of the empirical distribution of values (or rescaled ranks) to some exponential distribution, but unsurprisingly it is hard to analyze this event directly from $\mathcal{D}_\mu$, which is itself complicated, in a single step. Instead, we will follow a ``layer-by-layer peeling'' of the desirable events. Namely, we will find a nested sequence of subsets $\ensuremath{\mathbb{R}}^n_+\times \ensuremath{\mathbb{R}}^n_+ = \Omega_0 \supseteq \Omega_1 \supseteq \cdots\supseteq \Omega_K$ representing events on the joint value vectors of a stable matching, with $\Omega_K$ the desired event that the empirical distribution of values for men is $\ensuremath{\epsilon}$-close to some exponential distribution. Step by step, we will show that a stable matching, conditional on its value vectors lying in $\Omega_i$, must have value vectors in $\Omega_{i+1}$ with very high probability. Here is the roadmap to establishing these increasingly ``nice'' events:
\begin{enumerate}[listparindent=1cm, label=(\alph*)] \item\label{Step_1_in_proof_sketch} As a first step, we approximate $p_\mu(\mathbf{x},\mathbf{y})$, the likelihood of value vectors $\mathbf{x}$ and $\mathbf{y}$ in a stable matching $\mu$, by the function $q(\mathbf{x},\mathbf{y})=\exp(-n\mathbf{x}^\top\mathbf{M}\mathbf{y})$. That is, the log likelihood of value vectors in a stable matching is approximately bilinear in the two vectors. To establish this, we identify a weak regularity condition on the value vectors of all stable matchings in terms of the first and second moments and extremal quantiles of the value vectors, under which the approximation holds (see Section~\ref{subsec_reg_of_happiness_moment_conditions}). Such a condition is met by all stable matchings with high probability (see Appendix~\ref{Append_weak_regular_scores} for details). The proof primarily consists of standard analysis of the deferred acceptance algorithm and careful use of first- and second-order approximation of $p_\mu(\mathbf{x},\mathbf{y})$. Here we use the fact that the men-proposing and women-proposing deferred acceptance algorithms output the extremal outcomes with respect to the two sides' values among all possible stable matchings.
\item\label{Step_2_in_proof_sketch} In the expression for $q(\mathbf{x},\mathbf{y})$, the value vectors relate through the matching matrix $\mathbf{M}$. However, we show next that, in stable matchings, we can further simplify things by approximately factoring $n\mathbf{X}(\mu)^\top\mathbf{M}\mathbf{Y}(\mu)$ into a product $\|\mathbf{X}(\mu)\|_1\|\mathbf{Y}(\mu)\|_1$ of sums of values on the two sides. More specifically, both $\mathbf{M}\mathbf{Y}(\mu)$ and $\mathbf{M}^\top\mathbf{X}(\mu)$ lie near the maximal eigenspace of $\mathbf{M}$, which is the span of $\mathbf{1}$ under Assumption~\ref{Assumption_C_bounded} (see Section~\ref{Subsec_proximity_eigensubsp}). The proof uses a fixed point argument to deduce that $\mathbf{M}\mathbf{Y}(\mu)$ depends almost deterministically on $\mathbf{M}^\top\mathbf{X}(\mu)$ and, symmetrically, $\mathbf{M}^\top\mathbf{X}(\mu)$ on $\mathbf{M}\mathbf{Y}(\mu)$, which forces both quantities to lie near the eigenspace.
Along the way, we also deduce an upper bound for the (unconditional) probability for $\mu$ to be stable (see Section~\ref{Subsec_uncond_stable_prob}), suggesting a sub-exponential upper bound on the typical number of stable matchings (Corollary~\ref{Cor_subexp_num_stable_match}).
\item\label{Step_3_in_proof_sketch} Under the previous event, the men's values behave approximately like i.i.d. exponential samples with rate $\|\mathbf{Y}(\mu)\|_1$ conditional on stability of $\mu$ and on $\mathbf{Y}(\mu)$ -- in fact, they are conditionally independent and nearly identically distributed. The result on the empirical distribution of men's values
follows immediately from a concentration inequality of Dvoretzky–Kiefer–Wolfowitz (DKW) type, generalized for nearly identically distributed independent random variables (Lemma~\ref{Lemma_dkw_non_identical}).%
\item Finally, we translate values into ranks. Using the classic first- and second-moment method, we show that for the majority of the agents in the market, the rescaled rank (based on one's own scores) lies close to the value. This implies Theorem~\ref{Thm_main_rank_dist}.
\end{enumerate}
There is one caveat, however: In \ref{Step_1_in_proof_sketch}, a second order expansion of $p_\mu(\mathbf{x},\mathbf{y})$ is required in order to justify the approximation with $q(\mathbf{x},\mathbf{y})$. As a result, we need to control the second-order behavior of the values, i.e., $\|\mathbf{X}(\mu)\|_2^2=\sum_{i=1}^n X_i(\mu)^2$, in any stable matching $\mu$. However, the second moment cannot be easily controlled due to the heavy tail of $\Exp(1)$ (indeed, the moment generating function for $X^2$ does not exist for $X\sim\Exp(1)$). To resolve this issue, we perform a truncation in the upper $\delta/2$-quantile of the values on each side. By choosing $\delta$ sufficiently small, we can ensure that the truncation only affects the empirical distribution by an arbitrarily small amount in $\ell^\infty$ norm. The price we pay is that, in \ref{Step_2_in_proof_sketch} and \ref{Step_3_in_proof_sketch}, we will have to deal with not just all stable matchings, but all \emph{partial} matchings that are stable on some $(1-\delta)$-fraction of the market. See Section~\ref{Subsec_partial_match_and_truncation} for the technical definition of truncated and partial matchings.
\section{Preliminaries}
\subsection{Probability of stability and its approximation}
For each matching $\mu$, define the function $p_\mu: \ensuremath{\mathbb{R}}^n_+\times\ensuremath{\mathbb{R}}^n_+ \to [0,1]$ to be the probability that $\mu$ is stable given the values of the men and women in $\mu$. That is,
\begin{equation}\label{Eqn_def_pmu}
p_\mu(\mathbf{x},\mathbf{y}) = \ensuremath{\mathbb{P}}(\mu\in\mathcal{S} | \mathbf{X}(\mu)=\mathbf{x},\mathbf{Y}(\mu)=\mathbf{y}).
\end{equation}
Just like the integral formula used in \citet{knuth1976mariages} and \citet{pittel1989average,pittel1992likely} to study matching markets with uniformly random preferences, the probability of a matching $\mu$ being stable can be similarly characterized by an integral
\begin{multline}\label{Eqn_integral_formula_orig}
\ensuremath{\mathbb{P}}(\mu\in\mathcal{S}) = \ensuremath{\mathbb{E}}_{\mathbf{X}\sim\bigotimes_{i=1}^n\Exp(a_{i,\mu(i)}),\mathbf{Y}\sim\bigotimes_{i=1}^n\Exp(b_{i,\mu^{-1}(i)})}[p_\mu(\mathbf{X},\mathbf{Y})] \\
= \int_{\ensuremath{\mathbb{R}}_+^n\times \ensuremath{\mathbb{R}}_+^n} p_\mu(\mathbf{x},\mathbf{y}) \prod_{i=1}^n f_{a_{i,\mu(i)}}(x_i)f_{b_{i,\mu^{-1}(i)}}(y_i) \ensuremath{\,d}\mathbf{x} \ensuremath{\,d}\mathbf{y}.
\end{multline}
The function $p_\mu$ can be further expressed in closed form. Condition on the value vectors $\mathbf{X}(\mu)=\mathbf{x}$ and $\mathbf{Y}(\mu)=\mathbf{y}$ and sample the rest of the values $X_{ij}$ and $Y_{ji}$ for all $j\ne \mu(i)$. Each pair $(i,j)$ with $j\ne \mu(i)$ forms a blocking pair when $X_{ij} < x_i$ and $Y_{ji} < y_j$, an event that happens with probability $(1-\exp(-a_{ij}x_i))(1-\exp(-b_{ji}y_j))$. For $\mu$ to be stable, there must be no blocking pairs and thus
\begin{equation}
p_\mu(\mathbf{x},\mathbf{y}) = \prod_{\substack{i,j\in[n]\\j\ne \mu(i)}} \left(1 - \big(1-e^{-a_{ij}x_i}\big)\big(1-e^{-b_{ji}y_j}\big) \right).
\end{equation}
Under Assumption~\ref{Assumption_C_bounded}, i.e., $\max_{i,j_1,j_2} a_{ij_1}/a_{ij_2} \le C^2$ and $\max_{j,i_1,i_2} b_{ji_1}/b_{ji_2} \le C^2$, we observe a simple upper bound
\begin{equation}\label{Eqn_naive_bound_pxy}
p_\mu(\mathbf{x},\mathbf{y}) \le \prod_{\mu(i)\ne j} \Big(1 - \big(1-e^{-\hat{x}_i/C^2}\big)\big(1-e^{-\hat{y}_j/C^2}\big) \Big),
\end{equation}
where $\hat{x}_i = x_i a_{i,\mu(i)}$ and $\hat{y}_j = y_j b_{j,\mu^{-1}(j)}$ for $i,j\in[n]$ are the renormalized values (thus named because they have unit mean).
This bound is fairly conservative and crude for our final purpose, but will prove useful for establishing preliminary results.
To further simplify the analysis, we recognize that, through first order approximation,
\begin{equation}\label{Eqn_goal_approx_p_with_q}
p_\mu(\mathbf{x},\mathbf{y}) \approx \prod_{\substack{i,j\in[n]\\j\ne \mu(i)}}\left(1-a_{ij}b_{ji} x_i y_j\right) \le \exp\Big(-\sum_{\substack{i,j\in[n]\\j\ne \mu(i)}} a_{ij}b_{ji} x_i y_j\Big) \approx \exp(-n \mathbf{x}^\top \mathbf{M} \mathbf{y}).
\end{equation}
Define the function
\begin{equation*}
q(\mathbf{x},\mathbf{y}) := \exp(-n \mathbf{x}^\top \mathbf{M} \mathbf{y}) = \exp\bigg(-n \sum_{i,j=1}^n m_{ij}x_i y_j\bigg).
\end{equation*}
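As a quick numerical illustration (ours, on a synthetic instance), one can compare $\log p_\mu$ and $\log q$ directly; here we take $m_{ij}=a_{ij}b_{ji}/n$, consistent with the first-order expansion above, scores of order $1/n$ as placeholders, and $\mu$ the identity matching so the matched pairs are the diagonal.
\begin{verbatim}
# Numerical sanity check (illustrative): exact log p_mu(x, y) from the
# closed-form product versus log q(x, y) = -n x^T M y with m_ij = a_ij*b_ji/n.
import numpy as np

rng = np.random.default_rng(0)
n = 200
a = rng.uniform(0.5 / n, 2.0 / n, size=(n, n))   # placeholder scores of order 1/n
b = rng.uniform(0.5 / n, 2.0 / n, size=(n, n))
x = rng.exponential(1.0, size=n)                 # men's values
y = rng.exponential(1.0, size=n)                 # women's values

block = (1 - np.exp(-a * x[:, None])) * (1 - np.exp(-b.T * y[None, :]))
np.fill_diagonal(block, 0.0)                     # exclude matched pairs j = mu(i)
log_p = np.log1p(-block).sum()                   # exact closed-form log p_mu

M = a * b.T / n
log_q = -n * x @ M @ y                           # log of the approximation q
print(log_p, log_q)                              # the two agree closely here
\end{verbatim}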
In the next section we discuss conditions under which the function $q(\mathbf{x},\mathbf{y})$ offers a good approximation for $p_\mu(\mathbf{x},\mathbf{y})$.
\subsection{Partial matchings and truncation} \label{Subsec_partial_match_and_truncation}
In order to also study approximately stable matchings, as well as for technical reasons, we need to consider matchings that are stable on a significant subset of the market. We first formalize general partial matchings and then describe a
particular way to form stable partial matchings. %
Let $\mathcal{M}'\subseteq \mathcal{M}$ and $\mathcal{W}'\subseteq \mathcal{W}$ be subsets of the men and women with cardinality $|\mathcal{M}'|=|\mathcal{W}'|= n'$. A {\em partial matching} $\mu':\mathcal{M}'\to\mathcal{W}'$ is a bijection between $\mathcal{M}'$ and $\mathcal{W}'$. Denote the values of men among $\mathcal{M}'$ and women among $\mathcal{W}'$ in the partial matching $\mu'$ by $\mathbf{X}_{\mathcal{M}'}(\mu')$ and $\mathbf{Y}_{\mathcal{W}'}(\mu')$, respectively. While it may be natural to view $\mathbf{X}_{\mathcal{M}'}(\mu')$ and $\mathbf{Y}_{\mathcal{W}'}(\mu')$ as $n'$-dimensional vectors, we choose to view them as $n$-dimensional vectors where components corresponding to men in $\mathcal{M}\backslash\mathcal{M}'$ and women in $\mathcal{W}\backslash\mathcal{W}'$ are zero (recall that since small is better, zero is the best possible latent value). Therefore, conditional on $\mathbf{X}_{\mathcal{M}'}(\mu')=\mathbf{x}'$ and $\mathbf{Y}_{\mathcal{W}'}(\mu')=\mathbf{y}'$ for $\mathbf{x}',\mathbf{y}'\in\ensuremath{\mathbb{R}}^n$ supported on $\mathcal{M}'$ and $\mathcal{W}'$, respectively, the probability that $\mu'$ is stable (as a matching between $\mathcal{M}'$ and $\mathcal{W}'$) is simply $p_{\mu'}(\mathbf{x}',\mathbf{y}')$.%
Given a full stable matching $\mu$ and any $\delta > 0$, we define the following routine to construct a stable partial matching of size $n-\floor{\delta n}$: Let $\bar{\mathcal{M}}_{\mu,\delta/2}\subseteq \mathcal{M}$ be the subset of $\floor{\delta n / 2}$ men with the largest values (i.e., the least happy men) in $\mu$, and similarly let $\bar{\mathcal{W}}_{\mu,\delta/2}\subseteq \mathcal{W}$ be the set of $\floor{\delta n / 2}$ least happy women. Construct $\mathcal{M}'_{\mu,\delta}\subseteq \mathcal{M} \backslash (\bar{\mathcal{M}}_{\mu,\delta/2} \cup \mu(\bar{\mathcal{W}}_{\mu,\delta/2}))$ of cardinality $n-\floor{\delta n}$. This is always possible because $|\bar{\mathcal{M}}_{\mu,\delta/2} \cup \mu(\bar{\mathcal{W}}_{\mu,\delta/2})| \le 2 \floor{\delta n / 2} \le \floor{\delta n}$, and in fact there can be multiple ways to choose $\mathcal{M}'_{\mu,\delta}$. The specific way $\mathcal{M}'_{\mu,\delta}$ is chosen (when $|\mathcal{M} \backslash (\bar{\mathcal{M}}_{\mu,\delta/2} \cup \mu(\bar{\mathcal{W}}_{\mu,\delta/2}))| > n-\floor{\delta n}$) is irrelevant to our discussion, but it may be helpful to assume that the choice is made based on some canonical ordering of the men so there is no extra randomness. Let $\mu_\delta:\mathcal{M}'_{\mu,\delta}\to\mu(\mathcal{M}'_{\mu,\delta})$ be the partial matching induced by $\mu$ on $\mathcal{M}'_{\mu,\delta}$ and their partners. Define the \emph{$\delta$-truncated value} vectors for $\mu$ to be $\mathbf{X}_\delta(\mu):= \mathbf{X}_{\mathcal{M}'_{\mu,\delta}}(\mu_\delta)$ and $\mathbf{Y}_\delta(\mu):= \mathbf{Y}_{\mu(\mathcal{M}'_{\mu,\delta})}(\mu_\delta)$.
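For concreteness, the truncation routine can be written out as follows (a sketch of ours; the helper name and the use of plain index order for the canonical choice are our own conventions, not the paper's).
\begin{verbatim}
# Sketch of the delta-truncation: drop the floor(delta*n/2) least happy agents
# (largest values) on each side, remove their partners as needed, keep exactly
# n - floor(delta*n) men by a canonical (index) order, and zero out the rest.
import numpy as np

def delta_truncate(x, y, mu, delta):
    """x[i], y[j]: values of man i and woman j in the full matching mu (mu[i] = j)."""
    n = len(x)
    k = int(np.floor(delta * n / 2))
    worst_men = set(np.argsort(x)[n - k:])            # least happy men
    worst_women = set(np.argsort(y)[n - k:])          # least happy women
    removed = worst_men | {i for i in range(n) if mu[i] in worst_women}
    keep = [i for i in range(n) if i not in removed][: n - int(np.floor(delta * n))]
    x_d, y_d = np.zeros(n), np.zeros(n)
    for i in keep:                                    # zero entries mean "dropped"
        x_d[i] = x[i]
        y_d[mu[i]] = y[mu[i]]
    return x_d, y_d, keep
\end{verbatim}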
\section{Regularity of values in stable matchings}\label{sec_reg_of_happiness}
In this section, we establish several (high probability) properties of stable matchings.
\subsection{Moment behavior and approximation of the conditional stable probability}\label{subsec_reg_of_happiness_moment_conditions}
We first consider a set of events in the value space, which can be thought of as regularity conditions for the approximation \eqref{Eqn_goal_approx_p_with_q} of $p_\mu(\mathbf{x},\mathbf{y})$ by $q(\mathbf{x},\mathbf{y})$. Define
\begin{equation}\label{Eqn_def_underR1}
\underline{\mathcal{R}}_1 = \{\mathbf{u}\in\ensuremath{\mathbb{R}}_+^n : \|\mathbf{u}\|_1 \ge \underline{c}_1 \log n\},
\end{equation}
\begin{equation}\label{Eqn_def_overR1}
\overline{\mathcal{R}}_1 = \{\mathbf{u}\in\ensuremath{\mathbb{R}}_+^n : \|\mathbf{u}\|_1 \le \overline{c}_1 n (\log n)^{-7/8}\},
\end{equation}
\begin{equation}\label{Eqn_def_R2}
\mathcal{R}_{2} = \{(\mathbf{u},\mathbf{v})\in\ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n : \mathbf{u}^\top\mathbf{M}\mathbf{v} \le c_2 (\log n)^{1/8}\},
\end{equation}
where $\underline{c}_1,\overline{c}_1,c_2\in\ensuremath{\mathbb{R}}_+$ are constants to be specified later.
Let
\begin{equation*}
\mathcal{R}_1 = \underline{\mathcal{R}}_1\cap\overline{\mathcal{R}}_1 \enspace\text{ and }\enspace \mathcal{R} = (\mathcal{R}_1 \times \ensuremath{\mathbb{R}}_+^n) \cap (\ensuremath{\mathbb{R}}_+^n \times \mathcal{R}_1) \cap \mathcal{R}_2 = \{(\mathbf{x},\mathbf{y})\in\mathcal{R}_2: \mathbf{x},\mathbf{y}\in \mathcal{R}_1 \}.
\end{equation*}
The region $\mathcal{R}$ should capture the typical behavior of $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))$ for any stable matching $\mu$.
\begin{proposition}\label{Prop_R_likely}
For any fixed $\delta > 0$ and $c \in (0,1/2)$, the constants $\underline{c}_1,\overline{c}_1$, and $c_2$ in \eqref{Eqn_def_underR1}-\eqref{Eqn_def_R2} can be appropriately chosen such that
\begin{equation}
\ensuremath{\mathbb{P}}(\exists \mu\in\mathcal{S}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin\mathcal{R}) \lesssim e^{-n^c}
\end{equation}
asymptotically as $n\to\infty$.
\end{proposition}
\begin{remark}
It is helpful to compare the bounds \eqref{Eqn_def_underR1}-\eqref{Eqn_def_R2} to classic results in the setting with uniform preferences (cf. \citealp{pittel1989average,pittel1992likely}): namely, the optimal average rank is $\Theta(\log n)$, the pessimal average rank is $\Theta(n/\log n)$, and the product of the average ranks on the two sides is asymptotic to $n$ in all stable matchings. Here, due to heterogeneity of preferences, we pay a small price of an extra constant or $(\log n)^{1/8}$ factor.
\end{remark}
We will defer the proof to Appendix~\ref{Append_weak_regular_scores}. %
In fact, we will establish even finer control over the truncated value vectors in stable matchings. For a matching $\mu$, we define $\mathbf{U}(\mu) = F(\hat{\mathbf{X}}(\mu))$ and $\mathbf{V}(\mu) = F(\hat{\mathbf{Y}}(\mu))$, where the (standard exponential CDF) function $F(z)=1-e^{-z}$ is applied coordinate-wise to the renormalized value vectors. By relating $\mathbf{X}$ and $\mathbf{Y}$ to $\mathbf{U}$ and $\mathbf{V}$, we will specify a subregion $\mathcal{R}^\star \subseteq \mathcal{R}$ in which $p_\mu(\mathbf{x},\mathbf{y})$ can be well approximated by $q(\mathbf{x},\mathbf{y})$.\footnote{Technically, $\mathcal{R}^\star$ has to be defined in the context of a matching $\mu$, as are $\mathbf{U}$ and $\mathbf{V}$. Here we drop the dependency for convenience. See Corollary~\ref{Cor_Rstar_likely} for the formal definition of $\mathcal{R}^\star(\mu)$.} We will see in Corollary~\ref{Cor_Rstar_likely} that with high probability no stable matching $\mu$ has $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))$ outside $\mathcal{R}^\star$, from which Proposition~\ref{Prop_R_likely} follows.
The conditions for $\mathcal{R}^\star$ are sufficiently strong to bound the functions $p_\mu$ and $q$ within an $\exp(o(n))$ factor of each other. This is formalized as follows.
\begin{restatable}{proposition}{propRatioPQHighProbeon}\label{Prop_ratio_p_q_high_prob}
For any $\delta>0$ and $c \in (0,1/2)$, there exists an absolute constant $\theta\in(0,\infty)$ such that the probability that a matching $\mu$ is stable with value vectors \emph{not} satisfying
\begin{equation}\label{Eqn_prop_ratio_p_q_high_tag}
\frac{p_\mu(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))}{q(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))} \le \exp\left(\frac{\theta n}{(\log n)^{1/2}}\right)\tag{$\star$}
\end{equation}
is at most $\frac{\exp(-n^c)}{n!}$.
In other words, with high probability, there exist no stable matchings $\mu$ whose post-truncation value vectors $\mathbf{X}_\delta(\mu)$ and $\mathbf{Y}_\delta(\mu)$ violate \eqref{Eqn_prop_ratio_p_q_high_tag}.
\end{restatable}
Again, the proof of Proposition~\ref{Prop_ratio_p_q_high_prob}
is deferred to Appendix~\ref{Append_weak_regular_scores}.
The reason for using $\delta$-truncated value vectors is that, when approximating $p_\mu(\mathbf{x},\mathbf{y})$ to second order, there will be terms involving $\|\mathbf{x}\|_2^2$ and $\|\mathbf{y}\|_2^2$, which are hard to control due to the heavy tail of the exponential distribution.%
\footnote{Note that the moment generating function does not exist for $X^2$ where $X\sim\Exp(1)$, so the classic Hoeffding- or Bernstein-type bounds fail to apply.} On the other hand, changing the values of a $\delta$ fraction of the agents should affect the empirical CDF by at most $\delta$ in $\ell^\infty$ distance. Therefore, %
it suffices to show that for small enough $\delta$ all stable partial matchings of size $n-\floor{\delta n}$ have values and ranks empirically distributed close to some exponential distribution.
The function $p_\mu(\mathbf{x},\mathbf{y})$
cannot be approximated globally by $q(\mathbf{x},\mathbf{y}) = \exp(-n\mathbf{x}^\top \mathbf{M}\mathbf{y})$.
However, we can find a region in $\ensuremath{\mathbb{R}}_+^n\times\ensuremath{\mathbb{R}}_+^n$ where $p_\mu(\mathbf{x},\mathbf{y})$ and $q(\mathbf{x},\mathbf{y})$ are close (uniformly for all stable matchings) in the sense that \eqref{Eqn_prop_ratio_p_q_high_tag} holds; meanwhile, Proposition~\ref{Prop_ratio_p_q_high_prob} states that with high probability, no stable matching will ever have $\delta$-truncated value vectors outside this region.
\subsection{A key reduction lemma}
Proposition~\ref{Prop_ratio_p_q_high_prob} allows us to study high probability behaviors in stable partial matchings obtained from truncating stable (full) matchings, pretending that the conditional probability of stability were given by $q(\mathbf{x},\mathbf{y})$. Concretely, consider a fixed constant $\delta > 0$ and a region $\Omega \subseteq \ensuremath{\mathbb{R}}_+^n \times \ensuremath{\mathbb{R}}_+^n$ that defines an event on the (truncated) value vectors. If there exists a stable matching $\mu$ whose truncated value vectors $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in\Omega$, the induced partial matching $\mu_\delta$ of size $n-\floor{\delta n}$ between $\mathcal{M}'_{\mu,\delta}$ and $\mu(\mathcal{M}'_{\mu,\delta})$ must also be stable with value vectors $\mathbf{X}_{\mathcal{M}'_{\mu,\delta}}(\mu_\delta) = \mathbf{X}_\delta(\mu)$ and $\mathbf{Y}_{\mu(\mathcal{M}'_{\mu,\delta})}(\mu_\delta) = \mathbf{Y}_\delta(\mu)$. Thus, we will end up either having a stable matching whose truncated value vectors violate \eqref{Eqn_prop_ratio_p_q_high_tag}, or a stable partial matching of size $n-\floor{\delta n}$ whose value vectors (already truncated) lie in $\Omega\cap\mathcal{R}^\star$. By Proposition~\ref{Prop_ratio_p_q_high_prob}, the former event happens with probability $o(1)$. Therefore, we may focus on the second event, where a stable partial matching of size $n-\floor{\delta n}$ exists with value vectors in $\Omega\cap\mathcal{R}^\star$. This is summarized by the following lemma, which will be a major tool in the remainder of the proof.%
\begin{lemma}\label{Lemma_reduction_to_q}
Let $\delta > 0$, $c \in (0,1/2)$, and $\Omega \subseteq \ensuremath{\mathbb{R}}_+^n \times \ensuremath{\mathbb{R}}_+^n$. Then,
\begin{multline}
\ensuremath{\mathbb{P}}(\exists \mu\text{ stable}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in\Omega) \le e^{-n^c} + \exp\left(\Theta\Big(\frac{n}{(\log n)^{1/2}}\Big)\right) \cdot \\
\sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{bijection}}} \ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot \mathbbm{1}_{\mathcal{R}\cap\Omega}(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\big].
\end{multline}
\end{lemma}
\begin{proof}
Note that
\begin{multline}\label{Eqn_proof_reduction_to_q_1}
\ensuremath{\mathbb{P}}(\exists \mu\text{ stable}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in\Omega) \le \ensuremath{\mathbb{P}}(\exists \mu\text{ stable}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\notin\mathcal{R}^\star) \\
+ \ensuremath{\mathbb{P}}(\exists \mu\text{ stable}, (\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in\mathcal{R}^\star\cap\Omega),
\end{multline}
where the first term is at most $e^{-n^c}$ by Corollary~\ref{Cor_Rstar_likely}.
Let $\mathcal{E}$ denote the event that there exists a {\em stable} partial matching $\mu'$ between $\mathcal{M}'\subseteq\mathcal{M}$ and $\mathcal{W}'\subseteq\mathcal{W}$ with $|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}$ where $(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\in\mathcal{R}^\star\cap\Omega$. Clearly, the existence of a stable matching $\mu$ with $(\mathbf{X}_\delta(\mu),\mathbf{Y}_\delta(\mu))\in\mathcal{R}^\star\cap\Omega$ implies $\mathcal{E}$.
Thus, by a union bound, the second term in \eqref{Eqn_proof_reduction_to_q_1} is bounded by
\begin{equation}
\ensuremath{\mathbb{P}}(\mathcal{E})
\le \sum_{\substack{\mathcal{M}'\subseteq\mathcal{M},\mathcal{W}'\subseteq\mathcal{W}\\|\mathcal{M}'|=|\mathcal{W}'|=n-\floor{\delta n}}}\sum_{\substack{\mu':\mathcal{M}'\to\mathcal{W}'\\\text{bijection}}} \ensuremath{\mathbb{P}}(\mu'\text{ stable}, (\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\in\mathcal{R}^\star\cap\Omega).
\end{equation}
For each $\mathcal{M}',\mathcal{W}'$, and $\mu'$ in the summation, we compute the above probability by conditioning on $\mathbf{X}_{\mathcal{M}'}(\mu')$ and $\mathbf{Y}_{\mathcal{W}'}(\mu')$ as
\begin{align}
\ensuremath{\mathbb{P}}(\mu'\text{ stable}&, (\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\in\mathcal{R}^\star\cap\Omega) \nonumber\\
&= \ensuremath{\mathbb{E}}\big[\ensuremath{\mathbb{P}}\big(\mu'\text{ stable} \big| \mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')\big); (\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\in\mathcal{R}^\star\cap\Omega\big] \nonumber\\
&= \ensuremath{\mathbb{E}}\big[p_{\mu'}(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot\mathbbm{1}_{\mathcal{R}^\star\cap\Omega}(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\big] \nonumber\\
&\le \ensuremath{\mathbb{E}}\left[\exp\left(\Theta\Big(\frac{n}{(\log n)^{1/2}}\Big)\right)q(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot\mathbbm{1}_{\mathcal{R}^\star\cap\Omega}(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\right] \nonumber\\
&\le \exp\left(\Theta\Big(\frac{n}{(\log n)^{1/2}}\Big)\right) \ensuremath{\mathbb{E}}\left[q(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot\mathbbm{1}_{\mathcal{R}\cap\Omega}(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\right],
\end{align}
which completes the proof.
\end{proof}
\begin{remark}
The choice of the constant $c\in(0,1/2)$ affects the implicit constants in defining $\mathcal{R}$ and $\mathcal{R}^\star$. Once we have a target convergence of $e^{-n^c}$ for some $c\in(0,1/2)$, we will assume that $c$ is fixed in the rest of our discussion unless otherwise mentioned.
\end{remark}
Lemma~\ref{Lemma_reduction_to_q} will be a key tool in the proof to establish further likely behaviors of value (and rank) vectors. It will be a recurring theme where we first identify a likely region $\Omega_\text{likely}$ for truncated value vectors of stable (full) matchings to fall in ($\mathcal{R}$ to start with), then rule out a bad event $\Omega_\text{bad}$ within $\Omega_\text{likely}$ by showing $\Omega_\text{likely}\cap\Omega_\text{bad}$ is unlikely for value vectors of any stable \emph{partial} matching, and apply Lemma~\ref{Lemma_reduction_to_q} to conclude that $\Omega_\text{bad}$ is unlikely for truncated value vectors of any stable matching and that $\Omega_\text{likely}\backslash\Omega_\text{bad}$ can be used as the likely region moving forward.
Based on Lemma~\ref{Lemma_reduction_to_q}, it now suffices to consider partial matchings $\mu'$ of size $n-\floor{\delta n}$ between $\mathcal{M}'$ and $\mathcal{W}'$ and upper bound $\ensuremath{\mathbb{E}}\big[q(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu')) \cdot \mathbbm{1}_\Omega(\mathbf{X}_{\mathcal{M}'}(\mu'), \mathbf{Y}_{\mathcal{W}'}(\mu'))\big]$.
From now on, we fix %
$\mathcal{M}'$, $\mathcal{W}'$, and $\mu'$, and make the dependency of $\mathbf{X}_{\mathcal{M}'}$ and $\mathbf{Y}_{\mathcal{W}'}$ on $\mu'$ implicit when the context is clear.
|
{
"arxiv_id": "2302.08583",
"language": "en",
"timestamp": "2023-02-20T02:02:08",
"url": "https://arxiv.org/abs/2302.08583",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
End-to-end (E2E) models have achieved strong performance for automatic speech recognition (ASR) \cite{chiu2018state, jain2019rnn, sainath2020streaming, li2020developing, zeyer2020new} by directly mapping the speech signal into word sequences. However, even when trained with a large amount of audio-transcript pairs, E2E models still perform poorly when evaluated on utterances including words that appear infrequently in the training data (rare words) \cite{sainath2021efficient}. Moreover, the supervised speech obtained via human transcription is expensive. To overcome this, utilizing knowledge from large-scale unpaired text during training or inference is a promising solution since unpaired text is orders of magnitude more plentiful than audio-transcript pairs and covers a much larger vocabulary of words.
Language model (LM) fusion is a common approach to improve E2E ASR by using unpaired text.
An external LM is first trained with unpaired text. In shallow fusion \cite{hannun2014deep,gulcehre2015on}, a log-linear interpolation between the E2E model score and the LM score is computed at each step of the beam search.
To improve shallow fusion, internal LM estimation-based fusion \cite{mcdermott2019density, variani2020hybrid, meng2021ilme, zeyer2021librispeech, kanda2016maximum, meng2021minimum} was proposed to estimate an internal LM (ILM) score and subtract it from the shallow fusion score.
However, all these methods require an external LM during inference, increasing decoding time and computational cost.
To overcome this, various research has looked at incorporating unpaired text into the training stage of E2E models. One intuitive solution is to synthesize speech from unpaired text and use it to train the E2E model \cite{zhao2019shallow,rosenberg2019speech,deng2021improving,zheng2021using}.
However, training a text-to-speech (TTS) model and synthesizing speech are both computationally expensive.
To circumvent this, modality matching approaches \cite{bapna2021slam,tang2022unified,thomas2022towards,chen2022maestro,sainath2022joist} were proposed to map unpaired text to a latent space shared by speech and text, and then use latent embeddings to train the E2E model.
Alternatively, the decoder (and joint network for a transducer model) of an E2E model behaves like an ILM when we zero out the encoder output \cite{variani2020hybrid, meng2021ilme, meng2021ilmt}. To achieve fast text-only adaptation, unpaired text is injected into the decoder of a well-trained E2E model \cite{pylkkonen2021fast, meng2021ilma, chen2022factorized, meng2022modular, gao2021pre}.
These methods take one extra adaptation step of fine-tuning ILM of the E2E model
using text-only data to minimize a cross-entropy loss after one or two stages of E2E training.
In addition, Kullback-Leibler divergence (KLD) \cite{kullback1951information} regularization is performed to maintain the source-domain ASR performance.
The novel contributions of this work are: (1) We propose a joint E2E model and ILM training ({JEIT}{}) that simplifies decoder text injection by combining it into a single-stage E2E training. {JEIT}{} outperforms text-only adaptation without KLD regularization.
(2) We further propose a combined {JEIT}{} and JOIST \cite{sainath2022joist} training (CJJT) and demonstrate that decoder text-injection via ILM is complementary to encoder text-injection (via JOIST) and that the improvements are additive.
(3) We show that all text-injection methods can facilitate a more effective LM fusion.
(4) We validate our methods on Google's large-scale streaming production task where {JEIT}{} and CJJT offer up to 10.2\% and 16.4\% relative reductions in WER, respectively, on rare-word test sets without affecting voice search performance.
\section{Related Work}
\label{sec:format}
\subsection{Hybrid Autoregressive Transducer (HAT)}
\label{sec:hat}
An E2E model estimates the posterior distribution $P(\mathbf{Y} |
\mathbf{X};\theta_\text{E2E})$ over sequences of output labels $\mathbf{Y}=\{y_1, \ldots,
y_U\}$ given a sequence of input speech features $\mathbf{X}=\{\mathbf{x}_1,
\ldots, \mathbf{x}_T\}$, where $y_u \in \mathcal{V}, u = 1, \ldots, U$, and $\mathbf{x}_t
\in \mathbbm{R}^{d_x}, t = 1, \ldots, T$. $\mathcal{V}$ is the set of all possible labels, e.g., word pieces. $y_0$ is the start-of-sentence token.
HAT \cite{variani2020hybrid} consists of an acoustic encoder, a label decoder and a joint network. In Fig \ref{fig:jeit_hat}, the encoder transforms input speech features $\mathbf{X}$ into acoustic embedding vectors $\mathbf{F} = \{\mathbf{f}_1, \ldots, \mathbf{f}_T\}, \; \mathbf{f}_t \in \mathbbm{R}^{d_f}$, i.e., $\mathbf{F} = \text{Encoder}(\mathbf{X})$. The label decoder takes in previous labels to generate the current label embedding $\mathbf{g}^\text{L}_u \in \mathbbm{R}^{d^\text{L}_g}$, i.e., $\mathbf{g}^\text{L}_u = \text{LabelDecoder}(\mathbf{Y}_{0:u - 1})$.
The joint network combines the acoustic and label embeddings to compute a blank distribution
\begin{align}
b_{t, u} = \text{Sigmoid}[\mathbf{w}^\intercal \phi(\mathbf{W}_1 \mathbf{f}_t + \mathbf{W}_2 \mathbf{g}^\text{L}_u)], \label{eqn:blank_posterior}
\end{align}
where $\mathbf{w} \in \mathbbm{R}^{d_h}$ is a vector, $\mathbf{W}_1 \in \mathbbm{R}^{d_h\times d_f}$ and $\mathbf{W}_2 \in \mathbbm{R}^{d_h\times d^\text{L}_g}$ are projection matrices.
$\phi(\cdot)$ is a non-linear function.
The label posteriors given previous speech features and labels are computed as
\begin{align}
P(y_u|\mathbf{X}_{1:t}, \mathbf{Y}_{0:u - 1}) = \text{Softmax}[\mathbf{W} \phi(\mathbf{W}_1 \mathbf{f}_t + \mathbf{W}_2 \mathbf{g}^\text{L}_u)], \label{eqn:label_posterior}
\end{align}
where $\mathbf{W} \in \mathbbm{R}^{|\mathcal{V}|\times d_h}$ is a projection matrix.
The label posterior given the alignment history is therefore $(1 - b_{t,u}) P(y_u|\mathbf{X}_{1:t}, \mathbf{Y}_{0:u - 1})$.
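A minimal sketch (ours, with $\phi=\tanh$ as a stand-in for the unspecified non-linearity, and all dimensions and weights as placeholders) of how Eqs.~\eqref{eqn:blank_posterior} and \eqref{eqn:label_posterior} combine the two embeddings:
\begin{verbatim}
# Illustrative numpy sketch of the HAT joint network: a shared hidden vector
# feeds a sigmoid blank gate and a softmax over labels (not the production code).
import numpy as np

def hat_joint(f_t, g_u, W1, W2, w, W, phi=np.tanh):
    h = phi(W1 @ f_t + W2 @ g_u)               # shared hidden representation
    b_tu = 1.0 / (1.0 + np.exp(-(w @ h)))      # blank posterior
    logits = W @ h
    p = np.exp(logits - logits.max())
    p /= p.sum()                               # label posteriors
    return b_tu, (1.0 - b_tu) * p              # label posterior given the alignment

d_f, d_g, d_h, V = 4, 3, 5, 6                  # toy dimensions
rng = np.random.default_rng(0)
blank, labels = hat_joint(rng.normal(size=d_f), rng.normal(size=d_g),
                          rng.normal(size=(d_h, d_f)), rng.normal(size=(d_h, d_g)),
                          rng.normal(size=d_h), rng.normal(size=(V, d_h)))
\end{verbatim}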
\subsection{Modular HAT (MHAT)}
To achieve more robust text-only adaptation, we proposed MHAT in \cite{meng2022modular} to structurally separate the ILM score prediction from the acoustic model score or blank score predictions.
As in Fig. \ref{fig:jeit_mhat}, MHAT introduces a blank decoder that takes in the same previous labels as the label decoder to generate the current label embedding as follows
\begin{align}
\mathbf{g}^\text{B}_u = \text{BlankDecoder}(\mathbf{Y}_{0:u - 1}),
\end{align}
where $\mathbf{g}^\text{B}_u \in \mathbbm{R}^{d^\text{B}_g}$ is the label embedding produced by the blank decoder.
The blank posterior $b_{t, u}$ is obtained by Eq. \eqref{eqn:blank_posterior} using $\mathbf{g}^\text{B}_u$.
$\mathbf{f}_t$ and $\mathbf{g}^\text{L}_u$ are projected and then normalized to be $|\mathcal{V}|$-dimensional (dim) vectors of AM log probabilities $\mathbf{a}_t$ and ILM log probabilities $\mathbf{l}_u$, respectively
\begin{align}
\mathbf{a}_t = \text{LogSoftmax}(\mathbf{W}_3 \mathbf{f}_t), \;\;
\mathbf{l}_u = \text{LogSoftmax}(\mathbf{W}_4 \mathbf{g}^\text{L}_u), \label{eqn:ilm_mhat}
\end{align}
where $\mathbf{W}_3 \in \mathbbm{R}^{|\mathcal{V}|\times d_f}$ and $\mathbf{W}_4 \in \mathbbm{R}^{|\mathcal{V}|\times d^\text{L}_g}$ are projection matrices.
$\mathbf{a}_t$ and $\mathbf{l}_u$ are added and then normalized to compute label posteriors
\begin{align}
P(y_u|\mathbf{X}_{1:t}, \mathbf{Y}_{0:u - 1}) = \text{Softmax}\left(\mathbf{a}_t + \mathbf{l}_u \right). \label{eqn:label_posterior_2}
\end{align}
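For comparison, the corresponding MHAT label scoring from Eqs.~\eqref{eqn:ilm_mhat} and \eqref{eqn:label_posterior_2}, in the same illustrative style as above, makes the ILM branch an explicit standalone term:
\begin{verbatim}
# Illustrative numpy sketch of MHAT label scoring: AM and ILM log-probabilities
# are computed separately and only added before the final normalization, which
# is what lets the ILM branch be trained (or adapted) on text alone.
import numpy as np

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def mhat_label_posterior(f_t, g_u, W3, W4):
    a_t = log_softmax(W3 @ f_t)     # AM log-probs from the acoustic embedding
    l_u = log_softmax(W4 @ g_u)     # ILM log-probs from the label-decoder embedding
    z = a_t + l_u
    return np.exp(z - z.max()) / np.exp(z - z.max()).sum()
\end{verbatim}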
\subsection{ILM Training (ILMT) and ILM adaptation (ILMA)}
ILMT \cite{variani2020hybrid, meng2021ilmt} minimizes an additional ILM loss during E2E model training. While the E2E loss is computed with audio-transcript pairs, the ILM loss is derived from only the training transcript.
ILMT aims to encourage the ILM to also behave like a standalone neural LM such that (1) accurate ILM scores can be estimated to improve ILME-based fusion \cite{meng2021ilmt} and (2) the ILM can be further adapted to text-only data \cite{meng2021ilma}. ILMT makes no use of unpaired text and does not improve the ASR performance on either source-domain or rare-word test sets \cite{meng2021ilmt}. Unlike ILMT, {JEIT}{} injects \emph{unpaired} text into the ILM during E2E training with the goal of improving rare-word recognition.
ILMA \cite{meng2021ilma} performs fast text-only adaptation of an E2E model to improve rare-word ASR.
In ILMA, we first conduct ILMT of the E2E model and then fine-tune the ILM to minimize a cross-entropy ILM loss using unpaired text.
To prevent the source-domain ASR performance from degrading, we minimize an additional KLD between the output distributions of the unadapted and adapted ILMs during ILMA.
To simplify ILMA, {JEIT}{} combines the two stages of ILMT and ILMA into a single training stage and obviates the need for KLD regularization.
\subsection{Joint Speech and Text Modeling (JOIST)}
JOIST \cite{sainath2022joist} incorporates unpaired text into E2E training and significantly improves rare-word recognition.
It injects unpaired text through the encoder so that text data can benefit the entire E2E model.
In JOIST, unpaired text is first tokenized to word-piece or phoneme sequences and is then upsampled by replicating each token a fixed or random number of times. The upsampled text is masked and then fed into a text encoder to generate token embeddings which
are further passed to the decoder input or a layer of the encoder.
JOIST minimizes a weighted sum of two E2E losses derived from audio-transcript pairs $\mathcal{D}_\text{P}$ and unpaired text $\mathcal{D}_\text{UP}$, respectively
\begin{align}
& \hspace{-0pt} \mathcal{L}_{\text{JOIST}}(\mathcal{D}_\text{P}, \mathcal{D}_\text{UP}) = \mathcal{L}_{\text{E2E}}(\mathcal{D}_\text{P}; \theta_\text{E2E}) +
\alpha \mathcal{L}_{\text{E2E}}(\mathcal{D}_\text{UP}; \theta_\text{E2E}), \label{eqn:ilmt}
\end{align}
where the two E2E losses are defined as
\begin{align}
\hspace{-2pt}\mathcal{L}_{\text{E2E}}(\mathcal{D}_\text{P}; \theta_\text{E2E}) & = -\sum_{(\mathbf{X}, \mathbf{Y}) \in \mathcal{D}_\text{P}} \log P(\mathbf{Y}|\mathbf{X};\theta_\text{E2E}), \\
\hspace{-0pt}\mathcal{L}_{\text{E2E}}(\mathcal{D}_\text{UP};\theta_\text{E2E}) & = - \hspace{-0pt}\sum_{\mathbf{Y} \in \mathcal{D}_\text{UP}} \log P\left(\mathbf{Y}|F(\mathbf{Y});\theta_\text{E2E}\right), \label{eqn:e2e_loss}
\end{align}
where $F(\cdot)$ is a function that tokenizes, upsamples and masks an unpaired sentence in $\mathcal{D}_\text{UP}$, and $\alpha>0$ is the weight of the unpaired E2E loss.
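A rough sketch (ours) of the text-feature function $F(\cdot)$ is shown below; the tokenizer, repetition range and mask rate are illustrative placeholders, not JOIST's actual settings.
\begin{verbatim}
# Tokenize an unpaired sentence, upsample each token a random number of times,
# and mask a fraction of the resulting positions (all settings illustrative).
import random

MASK = "<mask>"

def text_features(sentence, tokenize, repeat=(1, 3), mask_prob=0.15, seed=0):
    rng = random.Random(seed)
    tokens = tokenize(sentence)                      # word pieces or phonemes
    upsampled = [tok for tok in tokens
                 for _ in range(rng.randint(*repeat))]
    return [MASK if rng.random() < mask_prob else tok for tok in upsampled]

# e.g. text_features("navigate to shoreline amphitheatre", str.split)
\end{verbatim}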
In this work, we incorporate ILM loss into JOIST to further improve ASR performance.
\section{Joint E2E and ILM Training (JEIT)}
The ILM probability can be estimated from the E2E model output after zeroing out the encoder output \cite{variani2020hybrid, meng2021ilme}.
The ILM consists of the decoder and the joint network in HAT, the label decoder and the output projection $\mathbf{W}_4$ in MHAT, and the decoder in an AED model.
\begin{figure}[htpb!]
\hspace{8pt}
\includegraphics[width=0.8\columnwidth]{jeit_hat.png}
\vspace{-10pt}
\caption{JEIT of HAT. Blue and red arrows represent the forward propagation path of audio-transcript pairs ($\mathbf{X}^\text{P}_{1:t}$, $\mathbf{Y}^\text{P}_{0:u-1}$) and unpaired text $\mathbf{Y}^\text{UP}_{0:u-1}$, respectively.}
\label{fig:jeit_hat}
\end{figure}
Our goal is to improve the ASR accuracy on rare-word test sets by making use of large-scale \emph{unpaired text} while maintaining the WER on the source-domain task (e.g., voice search).
In this work, we propose {JEIT}{}, a joint training of E2E model and ILM that injects unpaired text into ILM during E2E training.
\begin{figure}[htpb!]
\hspace{0pt}
\includegraphics[width=0.9\columnwidth]{jeit_mhat.png}
\vspace{-10pt}
\caption{{JEIT}{} of MHAT. Blue and red arrows represent the forward propagation path of audio-transcript pairs ($\mathbf{X}^\text{P}_{1:t}$, $\mathbf{Y}^\text{P}_{0:u-1}$) and unpaired text $\mathbf{Y}^\text{UP}_{0:u-1}$, respectively.}
\label{fig:jeit_mhat}
\end{figure}
As shown in Figs. \ref{fig:jeit_hat} and \ref{fig:jeit_mhat}, ILM is trained with \emph{unpaired} text to minimize an ILM loss while the entire E2E model is trained with audio-transcript pairs to minimize an E2E loss. The ILM loss minimization makes ILM a strong neural LM in the target domain while the E2E loss serves as a regularization to ensure ILM can work well with the other E2E model components to predict accurate E2E scores.
Specifically, {JEIT}{} minimizes a weighted sum of the E2E loss and ILM loss below
\begin{align}
& \hspace{-0pt} \mathcal{L}_{\text{{JEIT}{}}}(\mathcal{D}_\text{P}, \mathcal{D}_\text{UP}) = \mathcal{L}_{\text{E2E}}(\mathcal{D}_\text{P};\theta_\text{E2E}) +
\beta \mathcal{L}_{\text{ILM}}(\mathcal{D}_\text{UP};\theta_\text{ILM}), \label{eqn:jeit_loss}
\end{align}
where $\beta > 0$ is the weight of the ILM loss. The ILM loss is the summed negative log probability that the ILM assigns to the label sequences in the unpaired text $\mathcal{D}_\text{UP}$, as follows
\begin{align}
\hspace{-2pt}\mathcal{L}_{\text{ILM}}(\mathcal{D}_\text{UP};\theta_\text{ILM})
=-\sum_{\mathbf{Y} \in \mathcal{D}_\text{UP}} \sum^{U}_{u=1}\log P(y_u|\mathbf{Y}_{0:u-1};\theta_\text{ILM}),
\label{eqn:ilm_loss_train}
\end{align}
where $\theta_\text{ILM} \subseteq \theta_\text{E2E}$ denotes ILM parameters.
Compared to text-only adaptation \cite{pylkkonen2021fast, meng2021ilma, chen2022factorized, meng2022modular}, {JEIT}{} significantly simplifies the entire learning process: 1) {JEIT}{} reduces two steps of audio-transcript training and unpaired text adaptation to one step of joint training, decreasing the computational cost and training/adaptation time. 2) {JEIT}{} avoids the need for KLD regularization of the ILM output distribution.
To improve rare-word recognition, {JEIT}{} injects unpaired text into the label decoder of an E2E model to minimize $\mathcal{L}_{\text{E2E}}(\mathcal{D}_\text{P};\theta_\text{E2E})$ and $\mathcal{L}_{\text{ILM}}(\mathcal{D}_\text{UP};\theta_\text{ILM})$ while JOIST injects it through the encoder to minimize $\mathcal{L}_{\text{E2E}}(\mathcal{D}_\text{P};\theta_\text{E2E})$ and $\mathcal{L}_{\text{E2E}}(\mathcal{D}_\text{UP};\theta_\text{E2E})$. To benefit from both methods, we propose a combined {JEIT}{} and JOIST training (CJJT) to minimize a weighted sum of an E2E loss derived from audio-transcript pairs, an E2E loss derived from unpaired text and an ILM loss derived from unpaired text as follows
\begin{align}
\hspace{-0pt} \mathcal{L}_{\text{CJJT}}(\mathcal{D}_\text{P}, \mathcal{D}_\text{UP}) &= \mathcal{L}_{\text{E2E}}(\mathcal{D}_\text{P};\theta_\text{E2E}) + \alpha \mathcal{L}_{\text{E2E}}(\mathcal{D}_\text{UP};\theta_\text{E2E}) \nonumber \\
& + \beta \mathcal{L}_{\text{ILM}}(\mathcal{D}_\text{UP};\theta_\text{ILM}). \label{eqn:cjjt_loss}
\end{align}
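The JOIST, {JEIT}{} and CJJT objectives differ only in which per-batch losses are mixed; schematically (a sketch of ours, with the component losses treated as given scalars):
\begin{verbatim}
# JOIST mixes paired and unpaired E2E losses; JEIT mixes the paired E2E loss
# with the ILM cross-entropy on unpaired text; CJJT mixes all three.
def joist_loss(l_e2e_paired, l_e2e_text, alpha):
    return l_e2e_paired + alpha * l_e2e_text

def jeit_loss(l_e2e_paired, l_ilm_text, beta):
    return l_e2e_paired + beta * l_ilm_text

def cjjt_loss(l_e2e_paired, l_e2e_text, l_ilm_text, alpha, beta):
    return l_e2e_paired + alpha * l_e2e_text + beta * l_ilm_text
\end{verbatim}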
We show in the experiments that {JEIT}{} and JOIST are complementary to each other and CJJT achieves better ASR performance than either method alone.
During inference, we can integrate an external LM into the E2E model after {JEIT}{} or CJJT to further improve rare-word ASR. We show that LM fusion is complementary to both {JEIT}{} and CJJT even if the external LM is trained with the same unpaired text $\mathcal{D}_\text{UP}$ as in {JEIT}{}.
\section{Experiments}
\subsection{Dataset}
\label{sec:production_data}
We use $\sim$650M multi-domain English audio-transcript pairs as supervised training data \cite{sainath2022joist}. It covers multiple domains including Voice Search, Dictation, YouTube and Telephony. YouTube transcripts are generated in a semi-supervised fashion \cite{liao2013large} while other data is anonymized and hand-transcribed \cite{googleai}.
In addition, multi-condition training \cite{kim2017generation}, random 8kHz down-sampling \cite{li2012improving} and SpecAug \cite{park2019specaugment} are applied to augment and diversify the data.
The unpaired text used in training or adaptation consists of 100B anonymized sentences across the domains of Maps, Google Play, Web, and YouTube, and is more than two orders of magnitude larger than the audio-transcript pairs. The external LM is trained with 50\% transcripts of the paired data and 50\% unpaired text to ensure that the quality on the base Voice Search task does not degrade.
We evaluate our models on a Voice Search (VS) test set containing $\sim$12K anonymized and hand-transcribed voice search utterances with an average duration of 5.5 s. To evaluate ASR performance on long-tail words, we construct a rare-word test set for each of the 4 domains: Maps, Google Play, Web and YouTube (YT). All rare-word test sets include rare proper nouns that appear fewer than 5 times in the training set and are synthesized by a TTS system \cite{gonzalvo2016recent}.
Our goal is to improve the ASR accuracy on 4 rare-word test sets without degrading the WER on Voice Search.
\vspace{-1pt}
\subsection{Modeling}
\label{sec:model}
We train HAT and MHAT with 2-pass cascaded encoders and separate decoders as in \cite{ding2022unified, narayanan2021cascaded}. They share the same front-end and encoder architecture.
Specifically, 128-dim log Mel filterbanks are extracted from speech signal and are subsampled to form a 512-dim feature every 30 ms.
Each speech feature is appended with a 16-dim domain ID \cite{narayanan2019recognizing}.
The causal encoder is a 7-layer conformer with causal convolution and left-context attention.
The non-causal encoder is a 10-layer conformer with right-context attention that processes 900 ms of speech into the future. Each conformer layer uses a 512-dim 8-head self-attention and a convolution kernel of size 15.
The causal and non-causal decoders of HAT or MHAT decode using the outputs of the causal and non-causal encoders, respectively. The label decoders of HAT and MHAT are 2-layer LSTMs with 2048 hidden units in each layer.
In HAT, the label decoder output passes through a 640-dim feedforward joint network before being projected to 4096 output units representing word pieces \cite{schuster2012japanese}. In MHAT, the label decoder output is directly projected to the output layer of the same size.
ILMs of HAT and MHAT have 30.7M and 30M parameters, respectively. The blank decoder of MHAT is a 320-dim $V^2$ embedding decoder \cite{botros2021tied, ghodsi2020rnn} with a look-up table shared between the last 2 tokens and has 1.5M parameters.
Overall, HAT and MHAT have in total 205M and 210M model parameters, respectively. We report only the 2nd pass WER in this paper. We train baselines with only audio-transcript pairs and show their WERs in Table \ref{table:ilma_jeit}.
Moreover, we train a 12-layer conformer LM with 384-dim self-attention and 3072-dim feedforward layer \cite{sainath2021efficient}. The external LM has left attention context of 31 and has in total 70M parameters.
\subsection{ILMA of HAT and MHAT}
We first train an ILMT \cite{meng2021ilmt} model with an ILM loss weight of 0.1 and use it as the seed for ILMA \cite{meng2021ilma}.
For both ILMA and {JEIT}{}, we adopt minibatch sizes of 4,096 and 32,768 for paired audio-transcript data and unpaired text, respectively. During ILMA, a KLD regularization with a weight of $0.5$ is applied for both HAT and MHAT.
In Fig. \ref{fig:ilma}, we plot the WERs of HAT ILMA and MHAT ILMA with respect to the number of training steps. The WER of HAT ILMA increases sharply after reaching its best value at 5K training steps, while the WER of MHAT gradually decreases until after the 200K step. This is because, without a structural factorization, HAT is not able to work with an increasingly stronger ILM and loses its ability to perform E2E ASR. This shows that MHAT is superior to HAT for ILMA because its structurally independent ILM
allows MHAT to constantly improve its ASR capability as ILM becomes stronger. We list the best WERs of ILMA in Table \ref{table:ilma_jeit}.
\begin{figure}[htb]
\vspace{-2pt}
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[width=4.48cm]{ilma_hat.png}}
\label{fig:ilma_hat}
\centerline{(a) HAT ILMA}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[width=4.48cm]{ilma_mhat.png}}
\centerline{(b) MHAT ILMA}\medskip
\label{fig:ilma_mhat}
\end{minipage}
\vspace{-13pt}
\caption{WERs (\%) of ILMA at different number of training steps (K) for HAT and MHAT with LSTM label decoders.}
\label{fig:ilma}
\end{figure}
\vspace{-5pt}
\subsection{{JEIT}{} of HAT and MHAT}
We perform {JEIT}{} with the same minibatch sizes as ILMA, using ILM loss weights $\beta$ of 0.2 and 4.0 for HAT and MHAT, respectively. The significantly larger optimal ILM loss weight for MHAT reflects its advantage over HAT due to factorization: MHAT can work with an increasingly stronger ILM to perform better ASR while HAT cannot.
In Table \ref{table:ilma_jeit}, MHAT {JEIT}{} performs the best among all methods, achieving 4.8\%--10.2\% relative WER reduction from the baseline HAT on rare-word test sets.
For MHAT, {JEIT}{} gets better WERs than ILMA on all test sets.
MHAT {JEIT}{} consistently outperforms HAT {JEIT}{}, with up to 3.5\% relative WER reduction.
As {JEIT}{} training proceeds, the WERs of both HAT and MHAT decrease continuously without any sudden increase.
This implies that the E2E loss in {JEIT}{} serves as a much better regularizer than the KLD in ILMA.
Overall, we show for the first time that joint training of ILM is better than adaptation.
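Read from the description above, the joint objective can be summarized as the following one-line sketch (illustrative; the exact implementation may differ): the E2E loss on paired audio-transcript data plus an ILM cross-entropy loss on unpaired text weighted by $\beta$, with no KLD term.
\begin{verbatim}
def jeit_loss(e2e_loss_paired, ilm_loss_unpaired_text, beta):
    # beta = 0.2 for HAT and 4.0 for MHAT in the experiments above
    return e2e_loss_paired + beta * ilm_loss_unpaired_text
\end{verbatim}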
\begin{table}[h]
\centering
\setlength{\tabcolsep}{3.8pt}
\begin{tabular}[c]{c|c|c|c|c|c|c}
\hline
\hline
\multirow{2}{*}{\begin{tabular}{@{}c@{}} Model \end{tabular}} &
\multirow{2}{*}{\begin{tabular}{@{}c@{}} Exp \end{tabular}} & \multirow{2}{*}{\begin{tabular}{@{}c@{}} VS \hspace{0pt} \end{tabular}} & \multicolumn{4}{c}{Rare Words} \\
\hhline{~~~----}
& & & \hspace{-1pt} Maps \hspace{-1pt} & \hspace{1pt} Play \hspace{0pt} & \hspace{0pt} Web \hspace{0pt} & \hspace{1pt} YT \hspace{1pt} \\
\hline
\multirow{4}{*}{\begin{tabular}{@{}c@{}} HAT \end{tabular}} &
Base & \textbf{6.1} & 14.0 & 37.3 & 21.6 & 24.6 \\
& ILMT & 6.1 & 14.1 & 37.6 & 21.7 & 24.8 \\
& ILMA & 6.2 & 13.6 & 36.3 & 20.8 & 24.2 \\
& {JEIT}{} & 6.3 & 13.3 & 36.8 & 20.1 & 23.2 \\
\hline
\multirow{4}{*}{\begin{tabular}{@{}c@{}} MHAT \end{tabular}} &
Base & 6.2 & 14.1 & 37.4 & 21.4 & 24.6 \\
& ILMT & 6.2 & 14.3 & 37.8 & 21.8 & 25.0 \\
& ILMA & 6.3 & 13.3 & 35.5 & 20.3 & 23.5 \\
& {JEIT}{} & 6.2 & \textbf{13.2} & \textbf{35.5} & \textbf{19.4} & \textbf{22.6} \\
\hline
\hline
\end{tabular}
\caption{WERs (\%) of HAT, MHAT with LSTM label decoders using various training or adaptation methods.
Baseline and ILMT \cite{meng2021ilmt} models are trained with 650M multi-domain (MD) audio-transcript pairs. The same paired data and 100B MD unpaired sentences are used for ILMA \cite{meng2021ilma} and the proposed {JEIT}{}.}
\label{table:ilma_jeit}
\vspace{-10 pt}
\end{table}
\subsection{{JEIT}{} with Different Decoders}
We vary the type and size of the MHAT label decoders while keeping the cascaded encoders of Section \ref{sec:model} unchanged.
Besides LSTM, we explore simpler and smaller label decoders: $V^2$ embedding and $V^4$ embedding \cite{botros2021tied}, which have 640-dim embeddings and condition on the last 2 and 4 tokens, respectively. Each previous token has a separate look-up table. The same blank decoder as in Section \ref{sec:model} is used.
MHATs with $V^2$ and $V^4$ embedding decoders have 8.6M, 14.6M parameters for their ILMs and have in total 169M, 182M parameters, respectively.
In Tables \ref{table:jeit_decoder} and \ref{table:ilma_jeit}, {JEIT}{} of MHATs with $V^2$ embedding, $V^4$ embedding and LSTM decoders achieves 1.6\%--4.1\%, 1.6\%--4.9\% and 4.3\%--8.8\% relative WER reductions from the baseline MHAT on Maps, Play, Web and YT, respectively.
For all 3 decoders, {JEIT}{} obtains no WER degradation on rare-word test sets. This shows that {JEIT}{} is beneficial to label decoders of various types and sizes. The effectiveness of {JEIT}{} increases as the ILM size grows and also as the label decoder's conditioning history extends.
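As a rough illustration of these embedding label decoders, a $V^N$ decoder keeps one look-up table per position in the $N$-token history and projects the combined embeddings; the sketch below is ours, and the exact combination used in the cited decoder may differ.
\begin{verbatim}
import torch
import torch.nn as nn

class EmbeddingDecoder(nn.Module):
    # Illustrative V^N embedding label decoder: one look-up table per history
    # position; embeddings are summed here and projected (a simplification).
    def __init__(self, vocab_size=4096, context=2, embed_dim=640, out_dim=640):
        super().__init__()
        self.tables = nn.ModuleList(
            nn.Embedding(vocab_size, embed_dim) for _ in range(context))
        self.proj = nn.Linear(embed_dim, out_dim)

    def forward(self, history):  # history: (batch, context) previous tokens
        embs = [tab(history[:, i]) for i, tab in enumerate(self.tables)]
        return self.proj(torch.stack(embs, dim=0).sum(dim=0))
\end{verbatim}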
\begin{table}[h]
\centering
\setlength{\tabcolsep}{3.0pt}
\begin{tabular}[c]{c|c|c|c|c|c|c|c}
\hline
\hline
\multirow{2}{*}{\begin{tabular}{@{}c@{}} Label \\ Dec \end{tabular}} &
\multirow{2}{*}{\begin{tabular}{@{}c@{}} Params \\ (M) \end{tabular}} &
\multirow{2}{*}{\begin{tabular}{@{}c@{}} Exp \end{tabular}} & \multirow{2}{*}{\begin{tabular}{@{}c@{}} VS \hspace{0pt} \end{tabular}} & \multicolumn{4}{c}{Rare Words} \\
\hhline{~~~~----}
& & & & \hspace{-1pt} Maps \hspace{-1pt} & \hspace{1pt} Play \hspace{0pt} & \hspace{0pt} Web \hspace{0pt} & \hspace{1pt} YT \hspace{1pt} \\
\hline
\multirow{2}{*}{\begin{tabular}{@{}c@{}} $V^2$ \\ Embed \end{tabular}} &
\multirow{2}{*}{\begin{tabular}{@{}c@{}} 8.6 \end{tabular}} &
Base & 6.2 & 14.5 & 37.9 & 21.9 & 24.9 \\
& & {JEIT}{} & 6.3 & 13.9 & \textbf{36.6} & 21.3 & 24.5 \\
\hline
\multirow{2}{*}{\begin{tabular}{@{}c@{}} $V^4$ \\ Embed \end{tabular}} &
\multirow{2}{*}{\begin{tabular}{@{}c@{}} 14.6 \end{tabular}} &
Base & 6.3 & 14.4 & 37.5 & 22.1 & 25.0 \\
& & {JEIT}{} & \textbf{6.2} & \textbf{13.7} & 36.9 & \textbf{21.1} & \textbf{24.3} \\
\hline
\hline
\end{tabular}
\caption{WERs (\%) of MHATs with $V^2$ embedding, $V^4$ embedding decoders.
Cascaded encoders and blank decoders of the two MHATs are the same as those of the MHAT with LSTM label decoder in Table \ref{table:ilma_jeit}.}
\label{table:jeit_decoder}
\end{table}
\vspace{-10 pt}
\subsection{Combining {JEIT}{} with Other Text Injection Methods}
We train JOIST MHAT with phoneme-based unpaired text following the setup in \cite{sainath2022joist}.
The text encoder output is fed to the 3rd conformer layer of the causal encoder. The unpaired E2E loss weight $\alpha$ is 0.25.
We conduct combined {JEIT}{} and JOIST training (CJJT) with an ILM loss weight $\beta$ of 1.5. We subtract ILM scores during LM fusion.
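Under our reading of the setup above, the CJJT objective combines three terms; the sketch below is illustrative (names are ours, not from any released code).
\begin{verbatim}
def cjjt_loss(e2e_paired, joist_e2e_unpaired_text, ilm_unpaired_text,
              alpha=0.25, beta=1.5):
    # paired E2E loss + JOIST text-injection loss into the encoder
    # + JEIT ILM loss on unpaired text
    return e2e_paired + alpha * joist_e2e_unpaired_text + beta * ilm_unpaired_text
\end{verbatim}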
\begin{table}[h]
\centering
\setlength{\tabcolsep}{2.5pt}
\begin{tabular}[c]{c|c|c|c|c|c|c}
\hline
\hline
\multirow{2}{*}{\begin{tabular}{@{}c@{}} Exp \end{tabular}} &
\multirow{2}{*}{\begin{tabular}{@{}c@{}} Params \\ (M) \end{tabular}} &
\multirow{2}{*}{\begin{tabular}{@{}c@{}} VS \hspace{0pt} \end{tabular}} & \multicolumn{4}{c}{Rare Words} \\
\hhline{~~~----}
& & & \hspace{-1pt} Maps \hspace{-1pt} & \hspace{1pt} Play \hspace{0pt} & \hspace{0pt} Web \hspace{0pt} & \hspace{1pt} YT \hspace{1pt} \\
\hline
Base &
\multirow{4}{*}{\begin{tabular}{@{}c@{}} 210 \end{tabular}} &
6.2 & 14.1 & 37.4 & 21.4 & 24.6 \\
{JEIT}{} &
& 6.2 & 13.2 & 35.5 & 19.4 & 22.6 \\
\hhline{-~-----}
JOIST &
& 6.2 & 12.4 & 34.7 & 18.5 & 22.1 \\
{JEIT}{} + JOIST (CJJT) &
& 6.2 & 12.0 & 33.8 & 17.9 & 21.3 \\
\hline
Base + LM &
\multirow{4}{*}{\begin{tabular}{@{}c@{}} 280 \end{tabular}}
& 6.0 & 11.8 & 34.4 & 17.9 & 23.2 \\
{JEIT}{} + LM &
& 6.0 & 11.7 & 34.0 & 17.0 & 22.6 \\
\hhline{-~-----}
\multirow{2}{*}{\begin{tabular}{@{}c@{}} CJJT + LM \\ CJJT + MWER + LM \end{tabular}} &
& \textbf{6.0} & 10.6 & 31.9 & 15.6 & 20.8 \\
\hhline{~~~~~~~}
& & 6.1 & \textbf{10.1} & \textbf{30.7} & \textbf{14.7} & \textbf{19.3} \\
\hline
\hline
\end{tabular}
\caption{WERs (\%) of MHATs with LSTM label decoders when combining {JEIT}{} with various text injection methods and MWER.}
\label{table:jeit_combine}
\vspace{-5 pt}
\end{table}
CJJT consistently outperforms both JOIST and {JEIT}{}, indicating that text injection into the encoder and into the decoder are complementary and their gains are additive.
It is worth noting that CJJT achieves similar or even better WER than LM fusion with base MHAT on rare-word test sets, despite having 70M fewer model parameters.
LM fusion with the {JEIT}{}/CJJT MHAT achieves 2.3\%--12.8\% \emph{additional} relative gains,
so conducting {JEIT}{} or CJJT beforehand is highly beneficial to LM fusion. LM fusion with CJJT performs better than with {JEIT}{}, suggesting that {JEIT}{}, JOIST and LM fusion are complementary to each other.
Finally, we perform CJJT, minimum word error rate (MWER) training \cite{prabhavalkar2018minimum} and LM fusion, and obtain the best WER over all systems with 17.9\%--31.3\% relative WER reductions from the baseline.
\section{Conclusion}
We propose {JEIT}{} to inject unpaired text into ILM via a single-stage joint training.
{JEIT}{} simplifies two-stage ILMA and eliminates KLD regularization, achieving up to 10.2\% relative WER reductions from baseline on rare-word test sets.
MHAT performs better than HAT after {JEIT}{}, and is much more robust than HAT during ILMA. Text injection into the encoder and the decoder is complementary: combining them (CJJT) achieves up to 16.4\% relative gain. LM fusion further improves all text-injection methods by up to 12.8\% relative.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{sec:intro}
Speech emotion recognition (SER) is of great significance to understanding human communication. SER techniques have been applied to many fields, such as video understanding \cite{gao2021pairwise}, human-computer interaction \cite{cowie2001emotion}, mobile services \cite{huahu2010application} and call centers \cite{gupta2007two}. To date, many deep learning based SER methods have been proposed \cite{parry2019analysis, poria2017context}. Recently, multimodal emotion recognition has attracted more attention \cite{atmaja2019speech, delbrouck-etal-2020-modulated, sun2021multimodal} because of its richer representations and better performance. Most of these studies assigned one-hot labels to utterances. In practice, experts often have inconsistent judgments of emotional data; thus, the one-hot label is obtained by majority voting. Figure \ref{fig:1}(a) shows the emotional data distribution in the IEMOCAP dataset \cite{busso2008iemocap}.
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=8.5cm]{Fig1.2.pdf}}
\caption{(a) Data distribution in the IEMOCAP when the label is one-hot. (b) The statistics of clear and ambiguous samples in the IEMOCAP. Each vertex represents an emotion category with only clear samples. The bold number is the quantity of clear samples. Each directed edge denotes a set of ambiguous samples, whose tail is the major emotion and head is the minor emotion. The quantity of ambiguous samples is on the edge.}
\label{fig:1}
\end{figure}
Later, some studies argued that the one-hot label might not represent emotions well. They either addressed this problem through multi-task learning \cite{Lotfian2018}, soft labels \cite{fayek2016modeling, ando2018soft} and multi-labels \cite{Ando2019}, or enhanced the model's ability to learn ambiguous emotions via dynamic label correction \cite{Fujioka2020} and label reconstruction through interactive learning \cite{zhou2022multi}. These works usually defined samples with consistent
experts' votes as clear samples and those with inconsistent votes as ambiguous samples. The statistics of clear and ambiguous samples in the IEMOCAP dataset are shown in Fig. \ref{fig:1}(b). After statistical analysis, we find that the data distribution is imbalanced, especially for ambiguous samples. For example, the quantity of ambiguous samples between \textit{anger} and \textit{frustration} is abundant, while that between \textit{anger} and \textit{sadness} is very small. Meanwhile, the quantity of clear samples of \textit{happiness} is much smaller than that of other clear emotion categories. We consider such a distribution unreasonable, as it may prevent the model from learning a better emotional representation. A possible reason is that the votes come from only a few experts, which is a rather sparse sampling of the human population.
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=8.5cm]{Fig2.3.pdf}}
\caption{The data distributions before and after smoothing. The purple bars represent clear samples and the orange bars represent ambiguous samples.}
\label{fig:2}
\end{figure}
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=8.5cm]{Fig3.3.pdf}}
\caption{The framework of our approach. It consists of three modules: (1) Data preprocessing; (2) PDDS; (3) CLTNet.}
\label{fig:3}
\end{figure}
We think that clear and ambiguous emotional data should follow a smooth statistical distribution in the real world. Based on this assumption, we propose Pairwise-emotion Data Distribution Smoothing (PDDS) to address the problem of unreasonable data distribution. PDDS applies Gaussian smoothing to the data distribution between clear emotion-pairs, which augments ambiguous samples up to reasonable quantities and meanwhile balances the quantities of clear samples across all categories. Figure \ref{fig:2} shows the data distributions before and after smoothing. To fill in the missing data, we use a feature-level mixup between clear samples to augment the data.
As PDDS is model and modality agnostic, we evaluate it on three SOTA methods, as well as on unimodal and bimodal data, on the IEMOCAP dataset. The results show that these models are improved by 0.2\% $\sim$ 4.8\% and 1.5\% $\sim$ 5.9\% in terms of WA and UA, respectively. Our proposed CLTNet achieves the best performance. The ablation study reveals that the superior performance of PDDS stems from the more reasonable data distribution rather than from simply increasing the data size.
\section{Method}
\label{sec:method}
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=8.0cm]{Fig4.1.pdf}}
\caption{An example of smoothing the quantity distribution of an emotion-pair.}
\label{fig:4}
\end{figure}
Figure \ref{fig:3} shows that our proposed framework includes three modules: (1) Preprocessing module. It extracts audio features using the pre-trained data2vec \cite{pmlr-v162-baevski22a} and text features using the pre-trained Bert \cite{devlin-etal-2019-bert}. (2) Pairwise-emotion Data Distribution Smoothing module (PDDS). It smooths the unreasonable data distribution, and is a plug-in module that can be applied to other SER methods. (3) CLTNet: a proposed model for utterance-level multimodal emotion recognition.
\subsection{Pairwise Data Distribution Smoothing}
\label{ssec:pdds}
\subsubsection{Quantity smoothing}
\label{sssec:qs}
Suppose there are $c$ emotions in the dataset. As the label of an ambiguous sample often comes from two distinct emotions, we construct a clear-ambiguous-clear distribution for the population of four types of samples between every two emotions $i$ and $j$, where $i, j\in \{1,…,c \}, i\neq j$. They are \textit{clear samples} $I$ only containing emotion $i$, \textit{ambiguous samples} $I_J$ containing major emotion $i$ and minor emotion $j$, \textit{ambiguous samples} $J_I$ containing major emotion $j$ and minor emotion $i$, and \textit{clear samples} $J$ only containing emotion $j$, as shown in Fig. \ref{fig:4}.
We think that the quantity distribution of these four types of samples in every emotion-pair should be statistically smooth, so a Gaussian kernel is convolved with the distribution to have a smoothed version,
\begin{equation}
n_k=\sum_{k' \in K} e^{-\frac{\|k-{k}' \|^2}{2\sigma^2}}n_{{k}'},
\label{eq:1}
\end{equation}
where $K= \{k-1, k, k+1\}$ denotes the indexes of $\{I, I_J, J_I\}$ or $\{ I_J, J_I, J\}$ when $k$ is the index of $I_J$ or $J_I$, $n_{{k}'}$ is the quantity of samples of type ${k}'$ before smoothing, and $n_k$ is the quantity of samples after smoothing. For clear samples, if their quantity in an emotion category is too small, they are augmented until the quantity reaches that of the other categories. The smoothed quantity distribution of all emotion-pairs is shown in Fig. \ref{fig:2}(b).
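A minimal sketch of this quantity smoothing for a single emotion-pair is given below (NumPy; variable names are illustrative). It applies the Gaussian kernel of Eq. (\ref{eq:1}) to the two ambiguous types, while the clear counts are balanced separately as described above.
\begin{verbatim}
import numpy as np

def smooth_pair_counts(n_I, n_IJ, n_JI, n_J, sigma=1.0):
    # counts ordered as the clear-ambiguous-ambiguous-clear sequence of Fig. 4
    counts = np.array([n_I, n_IJ, n_JI, n_J], dtype=float)
    kernel = np.exp(-np.arange(-1, 2) ** 2 / (2 * sigma ** 2))  # k-1, k, k+1
    smoothed = counts.copy()
    for k in (1, 2):  # indexes of I_J and J_I
        smoothed[k] = np.dot(kernel, counts[k - 1:k + 2])
    return smoothed
\end{verbatim}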
\subsubsection{Mixup augmentation}
\label{sssec:mixup}
After smoothing the data distribution, the quantities of some ambiguous samples in the original dataset are less than expected. To complete the data, a feature-level augmentation, Mixup \cite{zhang2018mixup}, is applied to generate those missing samples. Mixup improves the generalization ability of the model by creating new training samples as linear combinations of two clear samples and their labels. It follows the rules
\begin{equation}
x_{mix}=p x_\alpha+(1-p)x_\beta,
\label{eq:2}
\end{equation}
\begin{equation}
y_{mix}=p y_\alpha+(1-p)y_\beta,\; p \in \left [ 0,1\right ],
\label{eq:3}
\vspace{0.1cm}
\end{equation}
where $x_\alpha$ and $x_\beta$ are the features of a clear sample of the major emotion and a clear sample of the minor emotion, respectively, and $x_{mix}$ is the feature of the new sample. $y_\alpha$ and $y_\beta$ are the one-hot labels of $x_\alpha$ and $x_\beta$, and $y_{mix}$ is the label distribution of the new sample. To avoid undersampling, we use the original data when the original quantity of ambiguous samples already meets or exceeds the smoothed distribution.
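A minimal sketch of this feature-level mixup (Eqs. (\ref{eq:2})--(\ref{eq:3})) is shown below; the helper name is illustrative.
\begin{verbatim}
import numpy as np

def mixup_pair(x_major, y_major, x_minor, y_minor, p=0.5):
    # combine a clear sample of the major emotion with a clear sample of the
    # minor emotion to synthesize an ambiguous sample and its label distribution
    x_mix = p * np.asarray(x_major, float) + (1 - p) * np.asarray(x_minor, float)
    y_mix = p * np.asarray(y_major, float) + (1 - p) * np.asarray(y_minor, float)
    return x_mix, y_mix
\end{verbatim}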
\subsection{CLTNet}
\label{ssec:cltnet}
To further verify the effectiveness of PDDS, we design a simple but effective utterance-level multimodal fusion network, named CLTNet, which uses a CNN, an LSTM and a Transformer to extract multimodal emotional features, as shown in Fig. \ref{fig:3}. First, for the audio modality, the acoustic features are fed into three convolutional blocks to capture local patterns; each block has a 1D convolutional layer and a global pooling layer. To capture the temporal dependencies in the acoustic sequence, an LSTM layer and a pooling layer are employed. The four encoded feature vectors are concatenated and fed into a fully-connected layer to obtain the audio representation as follows,
\begin{equation}
\begin{array}{c}
h_A=Concat(x_{conv_A^1},x_{conv_A^2},x_{conv_A^3},x_{lstm_A})W_A+b_A, \\
\end{array}
\label{eq:6}
\end{equation}
\begin{equation}
x_{conv_A^i}=ConvBlock(x_A),
\nonumber
\label{eq:4}
\end{equation}
\begin{equation}
x_{lstm_A}=LSTMBlock(x_A),
\nonumber
\vspace{0.1cm}
\label{eq:5}
\end{equation}
where $ConvBlock(\cdot)=MaxPool(Relu(Conv1D(\cdot))), \\ LSTMBlock(\cdot)=MaxPool(LSTM(\cdot)),\; x_A\in \mathbb{R}^{t_A\times d_A}$ and $ x_{conv_A^i}, \; x_{lstm_A}\in \mathbb{R}^{d_1}$ are input features and output features of convolution blocks and LSTM blocks, respectively, $W_A\in \mathbb{R}^{4d_1\times d}$ and $b_A\in \mathbb{R}^d$ are trainable parameters. For the text modality, the text features are fed into a transformer encoder of $N$ layers to capture the interactions between each pair of textual words. An attention mechanism \cite{lian2021ctnet} is applied to the outputs of the last block to focus on informative words and generate text representations,
\begin{equation}
h_T=(a_{fuse}^T z_T ) W_T+b_T,
\label{eq:9}
\end{equation}
\begin{equation}
z_T=TransformerEncoder(x_T),
\nonumber
\label{eq:7}
\end{equation}
\begin{equation}
a_{fuse}=softmax(z_T W_z+b_z ),
\nonumber
\vspace{0.1cm}
\label{eq:8}
\end{equation}
where $x_T\in \mathbb{R}^{t_T\times d_T}$and $z_T\in\mathbb{R}^{t_T\times d_2}$ are input features and output features of TransformerEncoder, $W_z\in \mathbb{R}^{d_2\times 1},\; b_z\in\mathbb{R}^1,\; W_T\in \mathbb{R}^{d_2\times d}$ and $ b_T\in\mathbb{R}^d$ are trainable parameters. $a_{fuse}\in\mathbb{R}^{t_T\times 1}$ is the attention weight.
Finally, the representations of the two modalities are concatenated and fed into three fully-connected layers with a residual connection, followed by a softmax layer. Since label distributions are used to annotate the emotional data, we choose the KL-divergence loss to optimize the model,
\begin{equation}
Loss_{KL}=\sum_{i=1}^C y_i log\frac{y_i}{\hat{y_i}},
\label{eq:12}
\end{equation}
\begin{equation}
h=Concat(h_A, h_L)W_u+b_u,
\nonumber
\label{eq:10}
\end{equation}
\begin{equation}
\hat{y}=Softmax((h+ReLU(hW_h+b_h )) W_c+b_c ),
\nonumber
\vspace{0.1cm}
\label{eq:11}
\end{equation}
where $W_u\in \mathbb{R}^{2d\times d},\; b_u\in \mathbb{R}^d, \; W_h\in \mathbb{R}^{d\times d},\; b_h\in \mathbb{R}^d, \; W_c\in\mathbb{R}^{d\times c} $ and $ b_c\in\mathbb{R}^c$ are trainable parameters, $y\in\mathbb{R}^c$ is the true label distribution, and $\hat{y}\in\mathbb{R}^c$ is the predicted label distribution.
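A minimal PyTorch sketch of the audio branch in Eq. (\ref{eq:6}) is given below, with the dimensions of Section \ref{datails} ($d_1=300$, $d=100$, kernel sizes 3, 4, 5); it is illustrative rather than the exact training code.
\begin{verbatim}
import torch
import torch.nn as nn

class AudioBranch(nn.Module):
    # three Conv1D blocks with global max pooling, one LSTM block with global
    # max pooling, then a fully-connected fusion layer producing h_A
    def __init__(self, d_audio=512, d1=300, d=100, kernels=(3, 4, 5)):
        super().__init__()
        self.convs = nn.ModuleList(nn.Conv1d(d_audio, d1, k) for k in kernels)
        self.lstm = nn.LSTM(d_audio, d1, batch_first=True)
        self.fc = nn.Linear(4 * d1, d)

    def forward(self, x_a):                      # x_a: (batch, t_A, d_audio)
        conv_in = x_a.transpose(1, 2)            # (batch, d_audio, t_A)
        feats = [torch.relu(c(conv_in)).max(dim=2).values for c in self.convs]
        lstm_out, _ = self.lstm(x_a)             # (batch, t_A, d1)
        feats.append(lstm_out.max(dim=1).values)
        return self.fc(torch.cat(feats, dim=1))  # h_A in R^d
\end{verbatim}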
\section{Experiment}
\label{sec:experiment}
\subsection{Dataset and evaluation metrics}
\label{ssec:dataset}
PDDS is evaluated on IEMOCAP \cite{busso2008iemocap}, the most commonly used dataset in SER. There are five sessions in the dataset, and each sentence is annotated by at least three experts. Following previous work \cite{hazarika2018icon}, we choose six emotions: \textit{anger}, \textit{happiness}, \textit{sadness}, \textit{neutral}, \textit{excited} and \textit{frustration}. Sessions 1 to 4 are used as the training set and session 5 as the testing set, and PDDS is applied only to the training set. The weighted accuracy (WA, i.e., the overall accuracy) and unweighted accuracy (UA, i.e., the average accuracy over all emotion categories) are adopted as the evaluation metrics.
\subsection{Implementation Details}
\label{datails}
For audio, 512-dimensional features are extracted from the raw speech signal by the pre-trained data2vec \cite{pmlr-v162-baevski22a}. The frame size and frame shift are set to 25 ms and 20 ms, respectively. For text, the pre-trained Bert \cite{devlin-etal-2019-bert} is used to embed each word into a 768-dimensional vector.
In the original dataset, the quantity of clear samples belonging to \textit{happiness} is very small, so we augment them using mixup in order to balance the quantities of clear samples across all emotion categories. The augmentation settings are $p=0.5$ and $\alpha=\beta=$ \textit{happiness}, and the quantity is increased from 43 to 215. Afterward, the data distribution smoothing is applied to the ambiguous samples of each emotion-pair with a Gaussian kernel of $\sigma=1$.
The kernel sizes of the three CNN blocks in CLTNet are 3, 4 and 5, and the LSTM has 300 hidden units. The transformer encoder has 5 layers, each with 5 attention heads. The embedding dimensions $d_1, d_2$ and $d$ in Eq. (\ref{eq:6}) and (\ref{eq:9}) are set to 300, 100 and 100, respectively. CLTNet is trained with the Adam optimizer. The learning rate, weight decay and batch size are set to 0.001, 0.00001 and 64, respectively. Training stops if the loss does not decrease for 10 consecutive epochs, or after at most 30 epochs.
\subsection{Validation experiment}
\label{ssec:val}
As PDDS is model and modality agnostic, we select SpeechText \cite{atmaja2019speech}, MAT \cite{delbrouck-etal-2020-modulated}, and MCSAN \cite{sun2021multimodal}, which target utterance-level emotion recognition, to evaluate its effectiveness. Our proposed CLTNet and its unimodal branches are also tested. The experimental results are shown in Table \ref{tab:1}. We observe that all models are significantly improved when the training data are processed by PDDS, with 0.2\% $\sim$ 4.8\% increases in WA and 1.5\% $\sim$ 5.9\% increases in UA. Among them, our proposed CLTNet achieves the best performance. This result demonstrates that a reasonable data distribution in the training set does help models learn a better emotional representation.
\begin{table}[]
\centering
\caption{The comparison of models without and with PDDS.}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{@{}lcccc@{}}
\toprule
\multirow{2}{*}{Model} & \multicolumn{2}{c}{w/o PDDS} & \multicolumn{2}{c}{w/ PDDS} \\ \cmidrule(l){2-5}
& WA (\%) & UA (\%) & WA (\%) & UA (\%) \\ \midrule
SpeechText \cite{atmaja2019speech} & 55.7 & 50.6 & \textbf{58.8} & \textbf{55.1} \\
MAT \cite{delbrouck-etal-2020-modulated} & 54.5 & 51.0 & \textbf{58.0} & \textbf{55.5} \\
MCSAN \cite{sun2021multimodal} & 60.0 & 56.5 & \textbf{60.2} & \textbf{58.0} \\
CLTNet(audio only) & 44.9 & 39.0 & \textbf{48.4} & \textbf{44.9} \\
CLTNet(text only) & 50.9 & 45.8 & \textbf{51.6} & \textbf{50.1} \\
CLTNet & 55.9 & 52.3 & \textbf{60.7} & \textbf{58.2} \\ \bottomrule
\end{tabular}
}
\vspace{0.1cm}
\label{tab:1}
\end{table}
For a more detailed analysis, the confusion matrices of CLTNet with and without PDDS are shown in Fig. \ref{fig:5}. One can see that the classification accuracies in most emotion categories are increased; only two more samples are misclassified in \textit{frustration}. Moreover, the rightmost columns of the confusion matrices show that the model trained on the original dataset tends to misclassify more samples as \textit{frustration}. By contrast, PDDS considerably alleviates this tendency.
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=8.5cm]{Fig5.1.pdf}}
\caption{Confusion matrices of CLTNet without and with PDDS.}
\label{fig:5}
\end{figure}
\subsection{Ablation Study}
\label{ssec:ablation}
To verify the rationality of PDDS, two additional experiments are designed and tested with the CLTNet model: (a) Only balancing \textit{happiness}: mixup augmentation is applied only to clear samples of \textit{happiness} rather than to ambiguous samples. (b) Uniform balancing: the clear samples in each category and the ambiguous samples of each emotion-pair are augmented to 400 if their quantity is less than 400 (most of them are below 400 in the IEMOCAP dataset).
The results are shown in Table \ref{tab:2}. We observe that all three data augmentations boost the performance of the model. Compared to only balancing the \textit{happiness} category, augmenting both ambiguous and clear samples helps the model perform better. Although uniform balancing yields the largest training dataset, PDDS performs best, with 4.8\% and 5.9\% improvements in WA and UA. This result reveals that the advantage of PDDS lies in the reasonable data distribution rather than in simply increasing the data size.
\begin{table}[]
\centering
\caption{Evaluation of different data augmentation methods.}
\begin{tabular}{@{}lcc@{}}
\toprule
Data & WA (\%) & UA (\%) \\ \midrule
Original training data & 55.9 & 52.3 \\
Only balancing \textit{happiness} & 57.4 & 54.1 \\
Uniform balancing & 58.6 & 56.0 \\
With PDDS & \textbf{60.7} & \textbf{58.2} \\ \bottomrule
\end{tabular}
\label{tab:2}
\end{table}
We further evaluate the effect of the $p$ value in the mixup augmentation and show the results in Fig. \ref{fig:6}. One can see that the best results are achieved when $p=0.8$.
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=6.0cm]{Fig6.1.pdf}}
\caption{Effect of different $p$ values.}
\label{fig:6}
\end{figure}
\section{Conclusion}
\label{sec:prior}
In this paper, we address the imbalanced data distribution in the IEMOCAP dataset by proposing the Pairwise-emotion Data Distribution Smoothing (PDDS) method. PDDS constructs a more reasonable training set with a smoother distribution by applying Gaussian smoothing to emotion-pairs, and it completes the required data using a mixup augmentation. Experimental results show that PDDS considerably improves three SOTA methods, while our proposed CLTNet achieves the best result. More importantly, the ablation study verifies that the superiority of PDDS stems from the reasonable data distribution rather than from simply increasing the amount of data. In future work, we will explore a more reasonable data distribution for better emotional representation learning.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{sec: Introduction}
Large Language Models (LLMs) like GPT-3 can now generate fluid and, at times, humanlike text \citep{brown2020language,dou2022gpt}.
But even top-performing models are still more likely to produce text that is incoherent, self-contradictory, off-prompt, or redundant compared to human text.
\citet{dou2022gpt} found that annotators
marked about 11\% of GPT-3 tokens as part of self-contradictory text spans, compared to around 6\% for humans.
And they marked about 20\% of GPT-3 tokens as part of redundant spans, compared to just about 10\% for humans.\footnote{Our code and results are available at \url{https://anonymous.4open.science/r/nli_text_generation}.}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/ratings.png}
\caption{Average holistic ratings for generations from vanilla GPT-J (control), vs. \textsc{NLI Strategies}{} of maximizing for neutral, contradiction, or entailment, for 2 different choices of parameter values. Neutral performs best in all cases (significantly better than control), but maximizing contradictions is better than the control when randomness is low, and maximizing entailment is better than the control when randomness is high.}
\label{fig:ratings}
\end{figure}
A possible reason for this is that, whereas models rely on statistical regularities to generate text, humans have the ability to engage in careful, logical reasoning.
\citet{nye2021improving} liken this to the System 1 and System 2 idea in cognitive science, whereby humans are posited to have both a fast cognitive system for making quick intuitive judgments as well as a slower, more thoughtful system \citep{kahneman2011thinking}.
For instance, System 1 might be used for recognizing a friend's face, whereas System 2 is used for solving SAT questions.
This dichotomy is also relevant in language. Imagine encountering the text: ``As a Miami attorney, I help local residents file their state income tax returns''.
It's grammatical and reasonable at a glance.
But, if you reflect on it and draw on some world knowledge, you might realize that Miami is in Florida and Florida does not have a state income tax, and so the statement is less sensible than it seemed.
Perhaps it is this lack of ability to edit utterances post-hoc (as opposed to just predicting the next word) that makes language models susceptible to incoherence or redundancy.
\citet{nye2021improving} use this insight to develop a system that combines LLM generation with a symbolic reasoning model, essentially using the LLM as a fast generation system and then using the symbolic reasoner to refine and rerank the output.
While it is costly to build rich world-reasoning models, there has been an enormous amount of work on Natural Language Inference (NLI) tasks and models for solving them \citep[e.g.,][]{conneau-etal-2018-xnli,williams-etal-2018-broad,poliak-etal-2018-collecting,nie-etal-2020-adversarial,bowman-etal-2015-large}.
The NLI task, as traditionally framed, takes a premise and a hypothesis and asks whether the hypothesis is entailed by the premise, contradicts the premise, or is neutral with respect to the premise \citep{nie-etal-2020-adversarial,bowman-etal-2015-large}.
We propose that we can use these models, and the extensive body of work around them, in order to guide model generation---essentially using NLI ratings as a way of reranking \citep{shen-etal-2019-pragmatically,holtzman-etal-2018-learning} the ``fast'' output of an LLM, using a well-established logical inference task.
In this vein, \citet{holtzman-etal-2018-learning} used an NLI approach, alongside other approaches, to try to get RNNs to generate more coherent text.
Specifically, they focused on selecting sentences that maximize the probability of being neutral to what preceded them, but did not conduct a thorough evaluation of whether this was the best strategy.
While they showed that their models helped generations overall, the NLI part showed mixed results and, on its own, did not help much.
Should they instead have maximized entailments? There is intuitive appeal to that idea, since it would guarantee that the text is coherent. But, since entailments are logically implied by what precedes them, maximizing entailment might lead to increased redundancy. (See \citealt{merrill2022entailment} for a proof of how text generated by a perfectly informative agent can be used to learn semantic relationships between sentences.)
In this work, we evaluate several strategies, not just the neutral one.
Moreover, the state of natural language generation has dramatically improved since the RNNs in \citet{holtzman-etal-2018-learning}, and so the ways in which models fail differ from the state-of-the-art in 2018.
\begin{table}[t]
\footnotesize
\centering
\begin{tabular}{rlrrr}
\hline
$p$ (nucleus sampling param) & CON & ENT & NEU \\
\hline
.40 & 3.44 & 12.93 & 83.62 \\
.96 & 12.41 & 1.37 & 86.20 \\
\hline
\end{tabular}
\caption{For high and low $p$ parameters in \textsc{Scarecrow}, breakdown of NLI classes for generated text. Neutral is by far the most common in both settings, but entailment is more common than contradiction when randomness is low, and \textit{vice versa} when randomness is high.}
\label{fig:nli_class_distribution}
\end{table}
\begin{table}[t]
\footnotesize
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{c|| c | c | c || c | c | c |}
\hline\hline
\multirow{2}{*}{} & \multicolumn{3}{c||}{Low p (0.4)} & \multicolumn{3}{c|}{High p (0.96)} \\\cline{2-7}
\makecell{} & \makecell{\textbf{CON}} & \makecell{\textbf{ENT}} & \makecell{\textbf{NEU}} & \makecell{\textbf{CON}} & \makecell{\textbf{ENT}} & \makecell{\textbf{NEU}} \\\hline
{All} & {3.44} & {12.93} & {83.62} & {12.41} & {1.37} & {86.20} \\\hline
{CO} & {1.66} & {3.33} & {95.00} & {7.85} & {0.52} & {91.62} \\\hline
{OP} & {12.50} & {0.00} & {87.50} & {23.07} & {0.00} & {76.92} \\\hline
{SC} & {25.00} & {0.00} & {75.00} & {20.00} & {0.00} & {80.00} \\\hline
{IN} & {0.00} & {0.00} & {0.00} & {31.81} & {4.54} & {63.63} \\\hline
{RD} & {2.22} & {28.88} & {68.88} & {16.66} & {6.66} & {76.66} \\\hline\hline
\end{tabular}
}
\caption{Distribution of spans in \textsc{Scarecrow}{} text marked with each error type (CO: correct, OP: off-prompt, SC: self-contradiction, IN: incoherent, RD: redundant) by at least half of annotators, broken down by NLI class.}
\label{table:evaluation_scarecrow_nli_distribution}
\end{table}
To that end, we investigate how NLI can inform text generation, by first systematically investigating whether an NLI task can predict the kinds of errors made by GPT-3 in the publicly-available-for-research \textsc{Scarecrow}{} dataset \citep{dou2022gpt}.
Since generated text has different failure modes depending on the parameters (i.e., for the nucleus sampling parameter $p$, text generated with high values is more likely to be incoherent or off-prompt, while text generated with lower values is more likely to be redundant), we pay particular attention to how the NLI task interacts with the nucleus sampling parameter.
We use the results of this analysis to motivate a number of possible ways in which the NLI task can be used to improve generated outputs.
Comparing across conditions, we generate new text using GPT-J \citep{gpt-j} and evaluate it on a subset of the \textsc{Scarecrow}{} criteria.
We show that using an NLI filter can improve GPT-J generations.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{figures/scarecrow_errortypes.png}
\caption{Proportion of erroneous examples in Scarecrow per error type for high and low $p$ parameter.}
\label{fig:proportion_of_error_types}
\end{figure}
\section{Analysis of GPT-3 Text Through Natural Language Inference}
\label{sec: Analysis of GPT-3 Text}
First, we conduct a systematic exploration on a subset of the \textsc{Scarecrow}{} \cite{dou2022gpt} dataset (GPT-3 generations with a static temperature parameter value of 1, and either a high or low nucleus sampling parameter $p$ as described).
Each dataset entry provides the prompt, the generated text and a list of errors identified by human annotators.
For our analysis and evaluation, we focus on ``language errors'' in \textsc{Scarecrow}{}, specifically the off-prompt (OP), self-contradiction (SC), incoherent (IN), and redundant (RD) error types, as assigned by at least 50\% of human annotators.
To obtain NLI ratings, we use an
off-the-shelf pre-trained \texttt{BART-large-mnli} model for natural language inference.
We treat the \textsc{Scarecrow}{} prompt as the premise and the GPT-3-generated text as the hypothesis and run the NLI task.
The distribution of NLI ratings appears in Table \ref{fig:nli_class_distribution}.
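A minimal sketch of how such premise--hypothesis scores can be computed with the public \texttt{facebook/bart-large-mnli} checkpoint is shown below; the label order is read from the model config rather than hard-coded, and the helper name is ours.
\begin{verbatim}
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

def nli_probs(premise, hypothesis):
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**inputs).logits[0].softmax(dim=-1)
    return {nli.config.id2label[i].lower(): probs[i].item()
            for i in range(probs.shape[0])}
\end{verbatim}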
\begin{figure}
\centering
\includegraphics[width=.9\columnwidth]{figures/heatmap_table1.png}
\caption{For high and low $p$ (randomness) parameters in \textsc{Scarecrow}, rank correlation between proportion of text showing an error type (y-axis) and the probability of the given NLI class (x-axis).}
\label{fig:tab1}
\end{figure}
We present the distribution of error types across each NLI class in Table \ref{table:evaluation_scarecrow_nli_distribution}.
It shows that Correct (CO) segments are very likely to be neutral (95\% of the time for low randomness, 92\% of the time for high randomness).
But, when there is a redundancy (RD) error, the probability of entailment increases sharply: to 29\% in the low randomness condition and 7\% in the high randomness condition (although in all cases, the neutral class remains dominant).
When there are off-prompt or self-contradictory errors, in both settings, the probability of the contradiction class increases sharply.
As shown in Figure \ref{fig:tab1}, we computed Spearman rank correlations between the proportion of text marked with each error type and the NLI probability assigned to each category (entailment/neutral/contradiction).
In the low randomness setting, the contradiction NLI class was significantly associated with more self-contradictions and the entailment category with fewer self-contradictions but more redundancy,
and the neutral category with less redundancy.
In the high randomness setting, the contradiction NLI class was significantly associated with more off-prompt and incoherent errors,
entailments with more off-prompt errors,
and neutral with \textit{fewer} off-prompt and incoherent errors.
All other correlations were not significant at $p < .05$.
Overall,
we observe that a low randomness setting leads to a higher probability of text classified as entailment, and that such text is also more likely to be redundant. In contrast, a high randomness setting leads to a higher probability of text classified as contradiction, which is also more likely to be off-prompt or incoherent.
In both settings, text with no errors is significantly more likely to be classified as neutral---lending support to the idea that the neutral class is preferable.
\section{Realtime NLI to Improve Generation}
\label{sec: Our Approach}
\paragraph{Method}
Motivated by this finding, we propose a novel approach for overcoming the issues present in the generated text of LLMs in order to improve its quality by incorporating \textit{natural language inference} in the text generation pipeline.
For text generation, we use the open-source GPT-J \citep{gpt-j} while for NLI we use a pre-trained \texttt{BART-large-mnli} model, as above.
Using a random subset of 50 prompts contained in the \textsc{Scarecrow}{} dataset, we generate a continuation for each of 2 nucleus sampling parameters $p$ (0.4 or 0.96) $\times$ 4 conditions (one for vanilla GPT-J with no NLI constraints, and one for each of our three \textsc{NLI Strategies}{} as described below), for a total of 8 continuations for each of the 50 prompts (n=400 total).
For the vanilla GPT-J setting, we generate using the relevant parameter for nucleus sampling (0.4 or 0.96), with a max length of 256 tokens including the initial prompt.
For the \textsc{NLI Strategies}, we first generate a continuation with a maximum length of 128 tokens including the initial prompt.
Then, the chosen \textsc{NLI Strategy}{} is applied to each candidate sentence of the generated text.
The candidate sentence is treated as the ``hypothesis'', while each sentence of the prompt is used as the ``premise''.
The candidate sentence is appended to the continuation (and therefore becomes part of the hypothesis for subsequent candidates) \textit{iff} it satisfies the chosen \textsc{NLI Strategy}{} for every sentence in the preceding text.
Otherwise it is discarded along with the remaining candidate sentences.
The \textsc{NLI Strategy}{} conditions are:
\vspace{.3in}
\begin{itemize}
\item \textbf{ENT}: \textit{P(entailment) > P(contradiction)}
\item \textbf{CON}: \textit{P(contradiction) > P(entailment)}
\item \textbf{NEU}: \textit{P(neutral) > 0.85}
\end{itemize}
For instance, if we are applying the \textbf{NEU} strategy, we reject a sentence assigned neutral probability <~0.85, relative to \textit{any} sentence in the prompt \textit{or} \textit{any} previously generated sentence appended to the continuation. We also reject all sentences after it.
The process is repeated until there are seven \textit{consecutive} failed attempts to expand the continuation or until the continuation exceeds 256 characters and has 3 or more sentences.
In order to ensure sufficient length of generated text for our evaluation, we restarted the process from the initial prompt for any examples whose produced text included fewer than 2 sentences.
After running this process twice, in all cases, at least one sentence was generated that passed the NLI check.
See Appendix \ref{sec:appa} for details on the compute set-up.
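A simplified sketch of this filtering loop is shown below (building on the \texttt{nli\_probs} helper sketched earlier); it omits the restart and stopping details described above.
\begin{verbatim}
def passes_strategy(probs, strategy):
    if strategy == "ENT":
        return probs["entailment"] > probs["contradiction"]
    if strategy == "CON":
        return probs["contradiction"] > probs["entailment"]
    if strategy == "NEU":
        return probs["neutral"] > 0.85
    raise ValueError(strategy)

def accept_candidates(prompt_sents, candidate_sents, strategy):
    # append a candidate iff it satisfies the strategy against every sentence of
    # the prompt and of the already-accepted continuation; otherwise discard it
    # and everything after it
    context, accepted = list(prompt_sents), []
    for cand in candidate_sents:
        if all(passes_strategy(nli_probs(prem, cand), strategy) for prem in context):
            accepted.append(cand)
            context.append(cand)
        else:
            break
    return accepted
\end{verbatim}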
Two annotators (both undergraduate students at our institution, paid \$15 per hour and informed of how the data would be used) evaluated the generations using the \textsc{Scarecrow}{} annotation framework, as well as a 1-5 holistic rating of overall generation quality (see Appendix~\ref{sec:appb} for guidelines).
Because some of the prompts were either erroneously not annotated by both annotators or included sensitive content that we removed from our final data set, we ended up with 326 examples with ratings by both annotators and used those for analysis.
\paragraph{Results}
Focusing first on the average holistic rating assigned to the generation, we observe that maximizing neutral NLI status improves generation.
We ran a regression predicting average holistic rating for a prompt, based on the \textsc{NLI Strategy}{} used, with the control (vanilla GPT-J) as the baseline.
When $p$ was 0.4 (low randomness), \textbf{NEU} (the neutral strategy) was rated significantly higher than the control ($\beta = .54, p < .05$), as was the case in the high randomness condition ($\beta = .70, p < .01$).
As shown in Figure \ref{fig:ratings}, when randomness is low, the \textbf{CON} (contradiction strategy) outperforms the control and \textbf{ENT} (entailment strategy) underperforms it, but neither significantly so.
In the high randomness condition, \textbf{ENT} is significantly better than the control ($\beta = .49, p < .01$) but still worse than neutral, while \textbf{CON} performs similarly to control.
To better understand the source of the difference, we considered the specific error annotations, as shown in Figure \ref{fig:annotationerrors}.
We treated the control as the baseline and ran regressions iteratively for error types and the randomness parameter $p$.
For parameter $p = 0.96$ (high randomness), relative to control, \textbf{NEU} showed significantly fewer off-prompt errors ($p< .001$) and incoherent errors ($p< .05$).
For parameter $p = 0.40$ (low randomness), relative to control, \textbf{NEU} did not show significant differences but was particularly better on redundancy errors.
When randomness is high, we also observe that \textbf{ENT} is significantly better on off-prompt errors but significantly \textit{worse} on redundancy errors.
When randomness is low, \textbf{CON} is significantly worse for off-prompt errors.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{figures/errors.png}
\caption{For our text generation task, the average human annotator ratings for each of 4 \textsc{Scarecrow}{} error types, broken up by whether we use vanilla GPT-J output (control), maximize neutral NLI relationships in generated text, maximize entailments, or maximize contradictions. Maximizing neutral is best overall, but maximizing entailment is better than maximizing contradiction when randomness is high and \textit{vice versa} when randomness is low.}
\label{fig:annotationerrors}
\end{figure}
\section{Conclusion}
Using NLI as a mechanism for choosing among possible generations can be a productive way to imbue text generation systems with something more like System 2-style reasoning.
In particular, maximizing neutral sentences seems most promising.
But it also seems that, in cases where one wants to generate text with a high randomness parameter, maximizing entailments could be productive. Similarly, in a low randomness setting, maximizing contradictions could actually make the text better by avoiding redundancy.
The NLI task is particularly valuable for this purpose because of its extremely wide use and abundance of pre-trained models.
\section*{Limitations}
First, while our method incorporates NLI into the text generation process in a real-time and zero-shot fashion, there is still the issue of computational efficiency. Specifically, the \textsc{NLI Strategies}{} that maximize entailment and contradiction often require multiple generations to produce a sentence which passes their respective NLI checks.
Because LLM text generation is already slow for some use cases, this process may cause a bottleneck.
Second, as \citet{nye2021improving} show, there is much the NLI task does not capture. NLI tasks capture only a fraction of the possible real-world use cases that one might want to use for guiding generation.
Future work might explore using additional kinds of tasks, with the caveat that increasing the complexity of the task could slow generation down even more.
Finally, we tested the generation procedure on only GPT-J, but are open to the possibility that more sophisticated models (especially those like ChatGPT that already include human feedback) might already do better at some of the errors identified in \textsc{Scarecrow}, and so could benefit less from our procedure.
\section*{Acknowledgements}
We thank Clara Meister and Jessy Li for helpful conversations.
K.M. acknowledges funding from NSF Grant 2104995.
\subsubsection{A mechanism for statistical queries}
\label{subsec:SQ}
Our main application is an extremely simple and accurate mechanism for the broad class of \emph{statistical queries}. Statistical queries, introduced by Kearns \cite{Kea98}, are parameterized by a function $\phi: X \to [0,1]$. A valid answer to such a query is any value close to $\phi(\mcD)$. Many natural analyses can be cast as a sequence of statistical queries. This includes most algorithms for both supervised and unsupervised learning, such as least-squares regression, gradient descent, and moment-based methods. We refer the reader to \cite{FGRVX17} for a more extensive list.
\begin{figure}[ht]
\captionsetup{width=.9\linewidth}
\begin{tcolorbox}[colback = white,arc=1mm, boxrule=0.25mm]
\vspace{2pt}
\textbf{Input:} A sample $S \in X^n$ and parameters $\eps, k$.\vspace{2pt}
For each time step $t \in [T]$,\vspace{2pt}
\begin{enumerate}[nolistsep,itemsep=2pt]
\item Receive a query $\phi_t: X\to [0,1]$.
\item Define,
\begin{equation*}
\phi_t'(x) \coloneqq \begin{cases}
\eps & \text{if }\phi_t(x) \leq \eps,\\
1-\eps & \text{if }\phi_t(x) \geq 1-\eps, \\
\phi_t(x)&\text{otherwise.}
\end{cases}
\end{equation*}
\item Sample uniformly $\bx_1, \ldots, \bx_k \iid \mathrm{Unif}(S)$ and for each, sample $\bv_i \sim \mathrm{Bernoulli}(\phi_t'(\bx_i))$.
\item Set $\by_t$ to the mean of $\bv_1, \ldots, \bv_k$ and output $\by_t$.
\end{enumerate}
\end{tcolorbox}
\caption{A mechanism for statistical queries using subsampling.}
\label{fig:SQ-mechanism}
\end{figure}
The mechanism for answering statistical queries is given in \Cref{fig:SQ-mechanism}. Note that an even simpler mechanism that does not ``squash,'' taking $\phi' = \phi$, still provides accurate answers to statistical queries. The difference is just a log factor on the sample size, as such queries are not guaranteed to be $p$-uniform, and so the improvement of \Cref{thm:main-noisy} to \Cref{thm:high-probability} does not hold.
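The mechanism is simple enough to state in a few lines of code; the following is an illustrative NumPy rendering of \Cref{fig:SQ-mechanism} (the function name is ours).
\begin{verbatim}
import numpy as np

def answer_sq(sample, phi, eps, k, rng=np.random.default_rng()):
    # squash phi into [eps, 1-eps], draw k points from the sample i.i.d.
    # uniformly, flip a Bernoulli coin for each, and return the mean vote
    idx = rng.integers(len(sample), size=k)
    vals = np.clip([phi(sample[i]) for i in idx], eps, 1 - eps)  # phi'
    votes = rng.random(k) < vals                                 # Bernoulli(phi'(x_i))
    return votes.mean()
\end{verbatim}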
\begin{theorem}[Accuracy of our mechanism for answering SQs]
\label{thm:SQ}
For any parameters $\tau, \delta > 0$, adaptively chosen sequence of statistical queries $\phi_1, \ldots, \phi_T: X \to [0,1]$, and sample size
\begin{equation}
\label{eq:SQ-sample-size}
n \geq \Omega\paren*{\frac{\sqrt{T \log(T/\delta) \log(1/\delta)}}{\tau^2}},
\end{equation}
if the mechanism of \Cref{fig:SQ-mechanism} is given a sample $\bS \sim \mcD^n$ and parameters
\begin{equation*}
\eps \coloneqq O\paren*{\frac{\log(1/\delta)}{n}} \quad\quad\quad\quad\text{and} \quad\quad\quad\quad k \coloneqq O\paren*{\frac{\log(T/\delta)}{\tau^2}},
\end{equation*}
with probability at least $1 - \delta$, it answers all queries $t \in [T]$ to accuracy
\begin{equation}
\label{eq:SQ-accuracy}
|\by_t - \Ex[\phi_t(\mcD)]| \leq \max(\tau \cdot \std(\phi_t), \tau^2)
\end{equation}
where $\std(\phi) \coloneqq \sqrt{\Ex[\phi(\mcD)](1 - \Ex[\phi(\mcD)])} \leq 1$.
\end{theorem}
The proof of \Cref{thm:SQ} is a simple corollary of \Cref{thm:main-noisy,thm:high-probability}: Each vote ($\bv_i$ in \Cref{fig:SQ-mechanism}) is the output of a subsampling query $\phi:X \to \zo$ and so fits within our framework with $w = 1$, and $|Y| = 2$.
In many settings, the bound of \Cref{thm:SQ} is state of the art. When $\std(\phi) = o(1)$, its accuracy improves upon the prior state of the art, given in two works of Feldman and Steinke.\footnote{It's worth noting that both of these works use a more strict definition of $\std(\phi) = \sqrt{\Var_{\bx \sim \mcD}[\phi(\bx)]}$. If the range of $\phi$ is $\zo$, their definition and ours coincide. Otherwise, their definition can be smaller than ours.}
\begin{enumerate}
\item In \cite{FS17}, they gave a mechanism with the accuracy guarantees of \Cref{thm:SQ}, but requiring a larger sample size of
\begin{equation*}
n \geq \Omega\paren*{\frac{\sqrt{T \log(1/\delta)} \log(T/\tau\delta)}{\tau^2}}.
\end{equation*}
In addition to requiring a larger sample size than \Cref{thm:SQ}, that mechanism is also more complicated than ours. It splits the dataset into chunks of size $\frac{1}{\tau^2}$. Given a query, it computes the average of that query on each chunk, and then computes an approximate median of those averages via a differentially private algorithm. The same mechanism actually solves the approximate median problem described in \Cref{subsec:approx-median}.
\item In \cite{FS18}, they gave a mechanism with a slightly worse sample size than \Cref{thm:SQ}\footnote{Their sample size is a $\sqrt{\log(T)}$ multiplicative factor worse than ours when $\delta$ is constant.} when the failure probability, $\delta$, is a constant. Their mechanism is also simple: For a sample $S$ and query $\phi$, they compute $\phi(S) + \zeta$ where $\zeta$ is a Gaussian with mean $0$ and variance that scales with $\std(\phi)$. However, their dependence on $\delta$ is a multiplicative $1/\delta$ factor, exponentially worse than ours.
\end{enumerate}
The pessimistic setting where all the queries satisfy $\std(\phi) = \Theta(1)$ is more well studied \cite{DFHPRR15,BNSSSU16,SU15,SU15between,HU14,DK22}. The state of the art was given recently by Dagan and Kur \cite{DK22}, who showed that a sample of size
\begin{equation*}
n = O\paren*{\frac{\sqrt{T\log(1/\tau \delta)}}{\tau^2}}
\end{equation*}
is sufficient whenever $\tau\delta \geq 2^{-\tilde{O}(T)}$. Their mechanism works by returning $y_t = \phi_t(S) + \zeta$ where $\zeta$ is drawn from a very carefully constructed \emph{bounded} distribution. Our mechanism has a slightly better dependence on $\tau$ and a better accuracy when $\std(\phi)$ is small, but slightly worse dependencies on $T$ and $\delta$.
\paragraph{Advantages of our mechanism.}
Our mechanism has advantages beyond the quantitative bound on the sample size needed for low bias. First, it naturally runs in sublinear time as answering each query requires only looking at $k = n/\sqrt{T}$ of the points in $S$.
Furthermore, our mechanism easily extends to the setting where the analyst does not know ahead of time how many samples, the parameter $k$ in \Cref{fig:SQ-mechanism}, they want for a particular query. Rather, they can sequentially sample $\bv_1, \bv_2, \ldots$ while continually updating an estimate for $\phi(\mcD)$ and stop at any point. The bounds of \Cref{thm:SQ} hold as long as the total number of samples is at most $n\sqrt{T}$. Early stopping can appear naturally in practice. For example,
\begin{enumerate}
\item The analyst may only desire accuracy $\pm \tau$, regardless of what $\std(\phi_t)$ is. If, based on the first $k' < k$ samples, the analyst can determine that $\std(\phi_t)$ is small, they can stop early, as a small $\std(\phi_t)$ means fewer samples are needed to achieve the desired accuracy.
\item The analyst may wish to verify whether $\phi(\mcD) \approx c$ for some value $c$. If after the first $k' < k$ samples, the average is far from $c$, the analyst can already determine that $\phi(\mcD)$ is far from $c$. This setting has previously been studied in the influential works of \cite{DFHPR15,DFHPR15b}, which showed that there exist mechanisms that answer exponentially many such verification queries, as long as all but a tiny fraction of the inequalities to verify are true. Our analysis does not extend to exponentially many queries, but it can easily intertwine standard queries with verification queries, with the latter being cheaper.
\end{enumerate}
\subsubsection{A mechanism for finding approximate medians}
\label{subsec:approx-median}
We also consider a generalization of statistical queries, each of which map $w$ inputs to some set $R \subseteq \R$. For such queries, we give a mechanism for determining an \emph{approximate median} of the distribution $\phi^{(\mathrm{dist})}(\mcD)$.
\begin{definition}[Approximate median]
For a distribution $\mcE$ with support in $\R$, we say a value $y$ is an \emph{approximate median of $\mcE$} if,
\begin{equation*}
\min\left(\Pr_{\bx \sim \mcE}[\bx < y], \Pr_{\bx \sim \mcE}[\bx > y]\right) \geq 0.4.\footnote{All of our results hold as long as this $0.4$ is $0.5 - \eps$ for any fixed $\eps > 0$.}
\end{equation*}
\end{definition}
One particular application for approximate median queries is, once again, for answering statistical queries. Given an SQ $\phi: X \to [0,1]$, we can construct
\begin{equation*}
\phi'(x_1, \ldots, x_w) = \Ex_{\bi \in [w]}[\phi(\bx_i)].
\end{equation*}
Since $\phi'$ and $\phi$ have the same mean, and $\phi'$ has a smaller variance, Chebyshev's inequality implies that any approximate median of $\phi'$ will be within $2/\sqrt{w}$ standard deviations of $\phi(\mcD)$. As a result, the mechanism of \Cref{fig:median-mechanism} can give similar accuracy results as guaranteed in \Cref{thm:SQ}. The sample size required is larger (by log factors), but in exchange, it provides better accuracy when $\std(\phi)$ is smaller than $\tau$. In that setting, the $\tau^2$ term of \Cref{eq:SQ-accuracy} dominates, but the median-based mechanism does not incur it.
\begin{figure}[h]
\captionsetup{width=.9\linewidth}
\begin{tcolorbox}[colback = white,arc=1mm, boxrule=0.25mm]
\vspace{2pt}
\textbf{Input:} A sample $S \in X^n$ split into $k$ groups $S^{(1)}, \ldots, S^{(k)}$ each containing $\geq \floor{n/k}$ elements.
For each time step $t \in [T]$,\vspace{2pt}
\begin{enumerate}[nolistsep,itemsep=2pt]
\item Receive a query $\phi_t: X^{w_t}\to R_t$ where $R_t \subseteq \R$.
\item Perform binary search on the mechanism's output $\by_t \in R_t$ where, to determine whether $\by_t \geq r$, the following procedure is used.
\begin{enumerate}
\item For each group $i \in [k]$, generate a vote $\bv_i$ by first sampling a random subset $\bS' \sim \binom{S^{(i)}}{w_t}$ and then setting $\bv_i = \Ind[\phi_t(\bS') \geq r]$.
\item Add noise to each vote $\bv_i$ by flipping it with probability $\frac{w}{|S^{(i)}|}$.\footnote{This mechanism still works without the added noise, though with slightly worse parameters.}
\item Determine $\by_t \geq r$ iff at least half of the votes are $1$.
\end{enumerate}
\item After the $\ceil{\log_2 |R_t|}$ steps of binary search have finished, a single value $\by_t = r$ will be determined. Output it.
\end{enumerate}
\end{tcolorbox}
\caption{The subsampling mechanism answering approximate median queries.}
\label{fig:median-mechanism}
\end{figure}
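The following is an illustrative Python sketch of \Cref{fig:median-mechanism} for a single query (names are ours): binary search for the largest candidate answer $r$ on which a noisy majority of the groups votes ``$\geq r$''.
\begin{verbatim}
import numpy as np

def approx_median_query(groups, phi, w, values, rng=np.random.default_rng()):
    # groups: the k-way split of the sample; values: the sorted candidate range R_t
    def at_least(r):
        votes = 0
        for g in groups:
            sub = rng.choice(len(g), size=w, replace=False)
            v = int(phi([g[i] for i in sub]) >= r)
            if rng.random() < w / len(g):      # flip the vote with prob. w/|group|
                v = 1 - v
            votes += v
        return votes >= len(groups) / 2
    lo, hi = 0, len(values) - 1                # find the largest r with a majority "yes"
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if at_least(values[mid]):
            lo = mid
        else:
            hi = mid - 1
    return values[lo]
\end{verbatim}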
\begin{theorem}[Accuracy of our mechanism for answering approximate median queries]
\label{thm:median}
For any adaptively chosen sequence of queries $\phi_1: X^{w_1} \to R_1 , \ldots, \phi_T: X^{w_T} \to R_T$ and $\delta > 0$, if the sample size satisfies
\begin{equation*}
n \geq \Omega\paren*{\log\paren*{\frac{T \log R_{\max}}{\delta}} \sqrt{ w_{\max} \sum_{t \in T} w_t} }
\end{equation*}
where $w_{\max}$ and $R_{\max}$ are upper bounds on $w_t$ and $|R_t|$ respectively, and $k \coloneqq \log\paren*{\frac{T \log R_{\max}}{\delta}}$, then with probability at least $1 - \delta$, for all $t \in [T]$, the output $\by_t$ of \Cref{fig:median-mechanism} is an approximate median for $\phi^{(\mathrm{dist})}_t(\mcD)$.
\end{theorem}
Feldman and Steinke also give a mechanism for answering approximate median queries \cite{FS17}. Their mechanism needs a sample size of
\begin{equation*}
n \geq \Omega\paren*{\log\paren*{\frac{T R_{\max}}{\delta}} \sqrt{\log(1/\delta) T w_{\max}^2} }.
\end{equation*}
Our sample size bound is similar to theirs in the pessimistic settings where $w_t \approx w_{\max}$ for all $t$, with slight improvements on some of the other dependencies. For example, we have a linear dependence on $\log(1/\delta)$, whereas they have a $\log(1/\delta)^{3/2}$ dependence.
Most interestingly, our mechanism and that of \cite{FS17} are fairly similar -- both rely on splitting the dataset and, roughly speaking, computing an approximate median of the queries' values on each group -- but the analyses are wholly different. Their analysis is based on differential privacy. In contrast, \Cref{thm:median} is a simple corollary of \Cref{thm:main-noisy}. Indeed, it's not difficult to show that our mechanism does \emph{not} satisfy standard $(\eps, \delta)$ differential privacy with strong enough parameters to give a sample size bound close to that of \Cref{thm:median}.
\section{Applications}
\label{sec:apps}
\subsection{The statistical-query mechanism: Proof of \texorpdfstring{\Cref{thm:SQ}}{Theorem 6}}
The proof of \Cref{thm:SQ} combines two bounds: We'll prove that, with high probability, all statistical queries asked have low bias, meaning $\phi(\bS)$ and $\phi(\mcD)$ are close. We'll further show that, with high probability, the answer the mechanism gives when receiving a statistical query $\phi$ is close to $\phi(\bS)$. These two results are combined with the following proposition.
\begin{proposition}[Triangle-like inequality]
\label{prop:triangle-like}
For any $\tau > 0$ and $a,b,c \in [0,1]$ satisfying
\begin{equation*}
\abs{b - a} \leq \max(\tau^2, \tau \sqrt{a(1-a)}) \quad\quad\quad\quad\abs{c - b} \leq \max(\tau^2, \tau \sqrt{b(1-b)})
\end{equation*}
it also holds that
\begin{equation*}
\abs{c - a} \leq 3\max(\tau^2, \tau \sqrt{a(1-a)}).
\end{equation*}
\end{proposition}
\begin{proof}
First, consider the case where $\abs{c - b}\leq \tau^2$. Then,
\begin{equation*}
\abs{c-a} \leq \abs{a-b} + \abs{c-b} \leq \max(\tau^2, \tau \sqrt{a(1-a)}) + \tau^2 \leq 2\max(\tau^2, \tau \sqrt{a(1-a)}).
\end{equation*}
In the other case,
\begin{align*}
\abs{c-b} &\leq \tau\sqrt{b(1-b)} \\
&= \tau \sqrt{(a + (b-a))(1 - a - (b-a))} \\
&= \tau\sqrt{a(1-a) + (b-a)(1-2a) - (b-a)^2}\\
&\leq \tau\sqrt{a(1-a) + (b-a)} \\
&\leq \tau\sqrt{a(1-a)} + \tau\sqrt{b-a} \\
&\leq \tau\sqrt{a(1-a)} + \tau\sqrt{\max(\tau^2, \tau\sqrt{a(1-a)})} \\
&= \tau\sqrt{a(1-a)} + \max(\tau^2, \sqrt{\tau^2 \cdot \tau\sqrt{a(1-a)}}) \\
&\leq 2 \max(\tau^2, \tau \sqrt{a(1-a)}).
\end{align*}
The desired result follows from the standard triangle inequality.
\end{proof}
Next, we give a short proof that the answers of the mechanism are all close to $\phi(\bS)$ with high probability.
\begin{proposition}
\label{prop:SQ-all-close}
In the setting of \Cref{thm:SQ}, with probability at least $1 - \delta$, for all $t \in [T]$,
\begin{equation*}
\abs{\phi_t(\bS) - \by_t} \leq 3\max\paren*{\tau^2, \tau \sqrt{\phi_t(\bS)(1 - \phi_t(\bS))}}.
\end{equation*}
\end{proposition}
\begin{proof}
The random variable $\by_t$ is distributed as the average of $k$ independent $\zo$ bits, each with mean within $\pm \eps$ of $\phi_t(\bS)$. A standard application of Chernoff bounds and a union bound over the $T$ queries gives that, since $k \coloneqq O\paren*{\frac{\log(T/\delta)}{\tau^2}}$, with probability at least $1 - \delta$, for all $t \in [T]$
\begin{equation*}
\abs{\by_t - \Ex[\by_t]} \leq \max\paren*{\tau^2, \tau \sqrt{\Ex[\by_t](1 - \Ex[\by_t])}}.
\end{equation*}
Based on the parameters in \Cref{thm:SQ},
\begin{equation*}
\abs{\Ex[\by_t] - \phi_t(\bS)} \leq \eps \leq \frac{\tau^2}{\sqrt{T}} \leq \tau^2.
\end{equation*}
The desired result follows from \Cref{prop:triangle-like}.
\end{proof}
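For intuition, here is one way the response $\by_t$ just analyzed could be produced in code: the average of $k$ bits, each computed from a single subsampled point and flipped with a small probability. This is only a sketch based on the description above; the helper name \texttt{answer\_sq} and the exact noise model are our own assumptions, and the actual mechanism is the one specified in \Cref{fig:SQ-mechanism}.
\begin{verbatim}
import random

def answer_sq(sample, phi, k, flip_prob):
    # Answer a statistical query phi : X -> [0,1] by averaging k independent
    # bits; each bit comes from a single subsampled point (w = 1) and is
    # flipped with probability flip_prob, so its mean is within +-flip_prob
    # of the value of phi on the sample.
    bits = []
    for _ in range(k):
        x = random.choice(sample)                  # one uniform point from S
        b = 1 if random.random() < phi(x) else 0   # unbiased bit for phi(x)
        if random.random() < flip_prob:            # small amount of noise
            b = 1 - b
        bits.append(b)
    return sum(bits) / k
\end{verbatim}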
Finally, we prove the main result of this subsection.
\begin{proof}[Proof of \Cref{thm:SQ}]
We'll apply the ``monitor" technique of Bassily et al. \cite{BNSSSU16}, setting the test query $\psi$ to the SQ with the ``worst" response. Specifically, after receiving responses $\by_1, \ldots, \by_T$ to SQs $\phi_1, \ldots, \phi_T$, the analyst sets the test query to
\begin{equation*}
\psi \coloneqq \phi_{t^\star} \quad\text{where}\quad t^\star \coloneqq \argmax_{t \in [T]} \frac{\abs{\by_t - \phi_t(\mcD)}}{\max\paren*{\tau^2, \tau \sqrt{\phi_t(\mcD)(1-\phi_t(\mcD))}}}.
\end{equation*}
It is sufficient to show that \Cref{eq:SQ-accuracy} holds for $t = t^\star$ as, based on how we defined $t^\star$, it then holds for all $t \in [T]$.
The mechanism for answering statistical queries in \Cref{fig:SQ-mechanism} makes a total of $Tk$ subsampling queries each with $w = 1$ and a range $|Y| = |\zo| = 2$. Each query is $(\eps \coloneqq \frac{\log(1/\delta)}{n})$-uniform. Applying the improved cost function in \Cref{thm:main-noisy} to \Cref{thm:high-probability}, the budget is $b = O(Tk/n)$, and so, with probability at least $1 - \delta$,
\begin{equation*}
\mathrm{error}(\psi, \bS, \mcD) \leq O\paren*{\log(1/\delta) \cdot \frac{Tk + n}{n^2}} \leq \tau^2.
\end{equation*}
As $\Var_{\mcD}(\psi) \leq \psi(\mcD)(1 - \psi(\mcD))$, this implies that
\begin{equation*}
\abs{\psi(\bS) - \psi(\mcD)} \leq \max\paren*{\tau^2, \tau \sqrt{\psi(\mcD)(1 - \psi(\mcD))}}.
\end{equation*}
Furthermore, by \Cref{prop:SQ-all-close}, with probability at least $1 - \delta$, for all $t \in [T]$
\begin{equation*}
\abs{\phi_t(\bS) - \by_t} \leq 3\max\paren*{\tau^2, \tau \sqrt{\phi_t(\bS)(1 - \phi_t(\bS))}}.
\end{equation*}
In particular, the above holds for $\phi_{t^\star}$. We union bound over both of the above equations and apply \Cref{prop:triangle-like}: With probability at least $1 - 2\delta$ \Cref{eq:SQ-accuracy} holds for $t = t^\star$ up to a constant factor. As $t^\star$ is worst-case, it therefore applies for all $t \in [T]$.
The desired result follows by renaming $\delta' \coloneqq \delta/2$ and $\tau' \coloneqq \tau/c$ for appropriate constant $c$.
\end{proof}
\subsection{The median-finding mechanism: Proof of \texorpdfstring{\Cref{thm:median}}{Theorem 7}}
In this section, we prove \Cref{thm:median}. We begin with the following definition.
\begin{definition}[Bad sample]
For a query $\phi:X^w \to R \subseteq \R$, and threshold $r \in R$ that is not an approximate median of $\phi(\mcD)$, we say a sample $S \in X^n$ is $(\phi,r)$-bad if, with probability at least $0.45$ over $\bS' \sim \binom{S}{w}$,
\begin{equation*}
\Ind[\phi(\bS') \geq r] \neq \Ind[\mathrm{median}(\phi(\mcD)) \geq r].
\end{equation*}
\end{definition}
Intuitively, the proof of \Cref{thm:median} is separated into two pieces: We show that, with high probability, for all queries $\phi$ and thresholds $r$ used in \Cref{fig:median-mechanism}, only a small number ($\leq 0.02k$) of the groups are ``bad'' samples. Then, we show that as long as this is true, with high probability, all answers output by the mechanism are approximate medians.
\begin{proof}[Proof of \Cref{thm:median}]
For each $t \in [T]$ and threshold $r \in R_t$ considered by the mechanism, define $p(t,r)$ as follows: If $r$ is an approximate median of $\phi_t(\mcD)$, we define $p(t,r) = 0$. If $r$ is smaller than all approximate medians of $\phi_t(\mcD)$, then we define $p(t,r)$ to be the fraction of votes $\bv_1, \ldots, \bv_k$ that were set to $0$ when determining whether $\by_t \geq r$. If $r$ is larger than all approximate medians of $\phi_t(\mcD)$, then we define $p(t,r)$ to be the fraction of such votes that were set to $1$.
If, for all choices of $t, r$, $p(t,r) < 0.5$, then all $\by_t$ output by the median mechanism must be an approximate median. Let $t^\star, r^\star$ be the choices maximizing $p(t^\star, r^\star)$. Then, it is sufficient to prove that with probability at least $1-\delta$, $p(t^\star, r^\star) < 0.5$. We define the test query $\psi:X^{w_{t^\star}} \to \zo$,
\begin{equation*}
\psi(S) \coloneqq \Ind[\phi_{t^\star}(S) \geq r^\star].
\end{equation*}
Here, we apply the improved cost function of \Cref{thm:main-noisy} to \Cref{thm:main-binary}. Each group $\bS^{(i)}$ is a random sample from $\mcD^{n'}$ where $n' \geq \floor{n/k}$. Then the total budget of queries asked to that group is
\begin{equation*}
b \coloneqq \sum_{t \in [T]} \frac{w_t \cdot 2}{n' - w_t} \cdot (1 + \log 2) \leq O\paren*{\frac{\sum_{t \in [T]} w_t}{n'}}
\end{equation*}
where the inequality uses that $n' \gg w_{\max}$. Applying Markov's inequality to the conclusion of \Cref{thm:main-binary}, as well as $n' \geq \floor{n/k}$,
\begin{equation*}
\Pr[\mathrm{error}(\psi, \bS^{(i)}, \mcD) \geq 0.05] \leq O\paren*{\frac{b + 1}{n'}} = O\paren*{\frac{k^2\sum_{t \in [T]} w_t}{n^2} + \frac{k}{n}}.
\end{equation*}
Based on how we set $n$ in \Cref{thm:median}, the above probability is at most $0.01$. Furthermore if $\mathrm{error}(\psi, \bS^{(i)}, \mcD) < 0.05$, then $\bS^{(i)}$ cannot be $(\phi_{t^\star}, r^\star)$-bad. Therefore, for each group $i \in [k]$, the probability that $\bS^{(i)}$ is $(\phi_{t^\star}, r^\star)$-bad is at most $0.01$. Applying \Cref{lem:direct-product} and the Chernoff bound of \Cref{fact:chernoff}, with probability at least $1 - \exp(-k/300)$, at most $0.02k$ groups $i\in[k]$ are $(\phi_{t^\star}, r^\star)$-bad.
Next, we note that for any single choice of $\phi_t$ and $r \in R_t$, if at most $0.02k$ groups are $(\phi_t, r)$-bad, then the expected value of $p(t,r)$ is at most $0.47$ and, by a Chernoff bound, the probability $p(t,r) \geq 0.5$ is at most $\exp(-\Omega(k))$. We chose $k$ large enough to union bound over all $T \cdot \log_2(R_{\max})$ choices of $t \in [T]$ and $r \in R_t$. Specifically, with probability at least $1 - \delta$, for each $t \in [T]$ and $r \in R_t$ for which at most $0.02k$ groups are bad, $p(t, r) < 0.5$. In particular, this includes $p(t^\star, r^\star)$, proving the desired result.
\end{proof}
\section{From small mutual information to small bias}
\label{sec:gen}
In this section, we connect a mutual information bound to generalization error, completing the proof of \Cref{thm:main-binary} and its generalization in \Cref{thm:main-noisy}. Recall our definition of \emph{error} for a test query.
\begin{definition}[\Cref{def:error-simple}, restated]
\label{def:error-second}
For any $\psi:X^w \to [0,1]$ and distribution $\mcD$ over $X$, we define
\begin{equation*}
\mathrm{error}(\psi, S, \mcD) \coloneqq \frac{1}{w} \cdot \min\paren*{\Delta, \frac{\Delta^2}{\sigma^2}
}
\end{equation*}
where
\begin{equation*}
\Delta \coloneqq \abs{\psi(S) - \psi(\mcD)} \quad\quad\text{and}\quad\quad \sigma^2 \coloneqq \Varx_{\bS \sim \mcD^w}[\psi(\bS)].
\end{equation*}
\end{definition}
Note that if a query has error $\eps$, this corresponds to $\Delta = O(w\eps + \sqrt{w\eps \sigma^2})$.
\begin{theorem}[Mutual information bounds bias]
\label{thm:MI-bounds-bias}
For any $\bS \sim \mcD^n$ and $\by \in Y$, as well as $\Psi^{(\by)}$ a set of test queries each mapping $X^* \to [0,1]$ chosen as a function of $\by$,
\begin{equation*}
\Ex_{\bS, \by}\bracket*{\sup_{\psi \in \Psi^{(\by)}} \mathrm{error}(\psi, \bS, \mcD)} \leq \frac{8I(\bS; \by) + 4\Ex_{\by}\bracket*{\log\abs*{\Psi^{(\by)}}}+ 12 \log 2}{n}.
\end{equation*}
\end{theorem}
Our proof will use the following fact.
\begin{fact}[\cite{FS18}]
\label{fact:gen}
For any random variables $\bS \in X$ and $\bpsi: X \to \R \in \Psi$, as well as $\lambda > 0$,
\begin{equation*}
\Ex_{\bS, \bpsi}\bracket*{\bpsi(\bS)} \leq \frac{1}{\lambda}\paren*{I(\bS; \bpsi) + \sup_{\psi \in \Psi}\log \paren*{\Ex_{\bS}\bracket*{\exp\paren*{\lambda \psi(\bS)}}}}.
\end{equation*}
\end{fact}
\begin{corollary}
\label{cor:MI-expectation}
For any random variables $\bx \in X$ and $\boldf:X \to \R \in F$ satisfying, for all $t \geq 0$ and $f \in F$, $\Pr_{\bx}[f(\bx) \geq t] \leq e^{-t}$,
\begin{equation*}
\Ex[\boldf(\bx)] \leq 2(I(\bx; \boldf) + \log 2).
\end{equation*}
\end{corollary}
\begin{proof}
Fix an arbitrary $f \in F$. We first bound the moment generating function of $f(\bx)$.
\begin{align*}
\Ex[\exp(f(\bx)/2)] &= \int_{0}^\infty \Pr[\exp(f(\bx)/2) \geq t]dt\\
&\leq 1 + \int_{1}^\infty \Pr[f(\bx) \geq 2\log t]dt \\
&\leq 1 + \int_{1}^\infty t^{-2}dt = 2.
\end{align*}
The desired result follows from \Cref{fact:gen} with $\lambda = 1/2$.
\end{proof}
\emph{Bernstein's inequality} will give the sort of high-probability bounds needed to apply \Cref{cor:MI-expectation}.
\begin{fact}[Bernstein's inequality]
\label{fact:bernstein}
For any iid mean-zero random variables $\ba_1, \ldots, \ba_n$ satisfying $|\ba_i| \leq 1$ almost surely and with variance $\sigma^2$, let $\bA \coloneqq \frac{1}{n} \cdot \sum_{i \in [n]} \ba_i$. Then,
\begin{equation}
\label{eq:berstein-bound}
\Pr\bracket*{|\bA| \geq \Delta} \leq 2\exp\paren*{-\frac{\Delta^2n}{2(\sigma^2 +\frac{\Delta}{3})}}.
\end{equation}
\end{fact}
For our setting, a black-box application of Bernstein's inequality is not sufficient. We wish to prove concentration of a random variable that is \emph{not} necessarily the sum of iid random variables. Fortunately, the proof of Bernstein's inequality only uses a bound on the moment generating function of $\bA$: It proceeds by applying Markov's inequality to the random variable $e^{\lambda \bA}$ for an appropriate choice of $\lambda \in \R$. As a result, the following also holds.
\begin{fact}[Generalization of Bernstein's inequality]
\label{fact:Bernstein-MGF} Let $\bB$ be any random variable satisfying, for all $\lambda \in \R$,
\begin{equation*}
\Ex[e^{\lambda \bB}] \leq \Ex[e^{\lambda \bA}],
\end{equation*}
where $\bA$ is as in \Cref{fact:bernstein}. Then, the bound of \Cref{eq:berstein-bound} also holds with $\bB$ in place of $\bA$.
\end{fact}
We'll use the following to produce a bound in our setting.
\begin{proposition}
\label{prop:convex}
For any $\psi:X^w \to \R$, distribution $\mcD$ over $X$, and convex function $f: \R \to \R$, set $m = \floor{n/w}$. Then,
\begin{equation*}
\Ex_{\bS \sim \mcD^n}\bracket*{f\paren*{\psi(\bS)}} \leq \Ex_{\bS^{(1)}, \ldots, \bS^{(m)} \iid \mcD^w}\bracket*{f\paren*{\Ex_{\bi \sim [m]}\bracket*{\psi(\bS^{(\bi)})}}}.
\end{equation*}
\end{proposition}
\begin{proof}
Given $S \in X^n$, let $\mcG(S)$ be the uniform distribution over all disjoint groupings, $\bS^{(1)}, \ldots, \bS^{(m)} \in X^w$, of $S$. In particular, $\bS^{(1)} \sim \binom{S}{w}$, $\bS^{(2)} \sim \binom{S \setminus \bS^{(1)}}{w}$, and so on. Note that each $\bS^{(i)}$ has a marginal distribution equal to that of a sample from $\binom{S}{w}$. Since $mw \leq n$, all the groups are disjoint. Then,
\begin{align*}
\Ex_{\bS \sim \mcD^n}&\bracket*{f\paren*{\psi(\bS)}} \\
&= \Ex_{\bS \sim \mcD^n}\bracket*{f\paren*{\Ex_{\bS' \sim \binom{\bS}{w}}[\psi(\bS')]}}\tag{\Cref{def:error-simple}} \\
&= \Ex_{\bS \sim \mcD^n}\bracket*{f\paren*{\Ex_{\bi \sim [m]}\bracket*{\Ex_{\bS^{(1)}, \ldots, \bS^{(m)} \iid \binom{\bS}{w}}\bracket*{ \psi(\bS^{(\bi)})}}}}\tag{$\Ex[c] = c$}\\
&= \Ex_{\bS \sim \mcD^n}\bracket*{f\paren*{\Ex_{\bS^{(1)}, \ldots, \bS^{(m)} \sim \mcG(\bS)}\bracket*{\Ex_{\bi \sim [m]}\bracket*{ \psi(\bS^{(\bi)})}}}}\tag{$\bS^{(i)}$ marginally from $\binom{\bS}{w}$} \\
&\leq \Ex_{\bS \sim \mcD^n}\bracket*{\Ex_{\bS^{(1)}, \ldots, \bS^{(m)} \sim \mcG(\bS)}\bracket*{f\paren*{\Ex_{\bi \sim [m]}\bracket*{\psi(\bS^{(\bi)})}}}}\tag{Jensen's inequality}
\end{align*}
For $\bS \sim \mcD^n$ and $\bS^{(1)}, \ldots, \bS^{(m)} \sim \mcG(\bS)$, since the groups are disjoint, they are each iid draws from $\mcD^w$. Therefore,
\begin{equation*}
\Ex_{\bS \sim \mcD^n}\bracket*{f\paren*{\psi(\bS)}} \leq \Ex_{\bS^{(1)}, \ldots, \bS^{(m)} \iid \mcD^w}\bracket*{f\paren*{\Ex_{\bi \sim [m]}\bracket*{\psi(\bS^{(\bi)})}}} \qedhere
\end{equation*}
\end{proof}
Since $x \mapsto \exp(\lambda x)$ is convex, we arrive at the following corollary.
\begin{corollary}
\label{cor:bernstein}
For any $\psi:X^w \to [-1,1]$ and distribution $\mcD$ over $X$ such that $\Ex_{\bS \sim \mcD^w}[\psi(\bS)] = 0$ and $\Varx_{\bS \sim \mcD^w}[\psi(\bS)] = \sigma^2$,
\begin{equation*}
\Prx_{\bS \sim \mcD^n}[|\psi(\bS)| \geq \Delta] \leq 2\exp\paren*{-\frac{\Delta^2m}{2(\sigma^2 + \frac{\Delta}{3})}} \leq 2\exp\paren*{-\frac{m}{2} \min(\Delta, \Delta^2/\sigma^2)}
\end{equation*}
where $m \coloneqq \floor*{\frac{n}{w}}$.
\end{corollary}
Finally, we complete the proof of \Cref{thm:MI-bounds-bias}.
\begin{proof}[Proof of \Cref{thm:MI-bounds-bias}]
For each $y \in Y$, we define $f^{(y)}:X^n \to \R$ as
\begin{equation*}
f^{(y)}(S) \coloneqq \sup_{\psi \in \Psi^{(y)}} \frac{n \cdot\mathrm{error}(\psi, S, \mcD)}{4} - \log (2|\Psi^{(y)}|).
\end{equation*}
We claim that for all $y \in Y$ and $t > 0$, $\Pr[f^{(y)}(\bS) \geq t] \leq e^{-t}$. First, consider a single test function $\psi:X^w \to [0,1]$. By \Cref{cor:bernstein} applied to the centered query $\psi' \coloneqq \psi - \psi(\mcD)$, as well as the bound $n/(2w) \leq m$ when $m \coloneqq \floor{n/w}$
\begin{equation*}
\Prx_{\bS \sim \mcD^n}[\mathrm{error}(\psi, \bS, \mcD) \cdot n/4 \geq \eps] \leq 2\exp(-\eps)
\end{equation*}
By the union bound,
\begin{align*}
\Prx_{\bS}[f^{(y)}(\bS) \geq t] &\leq \sum_{\psi \in \Psi^{(y)}} \Pr\bracket*{\frac{n\cdot \mathrm{error}(\psi, \bS, \mcD)}{4} \geq t + \log (2|\Psi^{(y)}|)} \\
& \leq |\Psi^{(y)}|\cdot 2e^{-t - \log (2|\Psi^{(y)}|)} = e^{-t}.
\end{align*}
Therefore, by \Cref{cor:MI-expectation},
\begin{equation*}
\Ex_{\bS, \by}\bracket*{\sup_{\psi \in \Psi^{(\by)}}\frac{n\cdot \mathrm{error}(\psi, \bS, \mcD)}{4} - \log (2|\Psi^{(\by)}|)} \leq 2(I(\bS; \by) + \log 2).
\end{equation*}
Rearranging yields,
\begin{equation*}
\Ex_{\bS, \by}\bracket*{\sup_{\psi \in \Psi^{(\by)}} \mathrm{error}(\psi, \bS, \mcD)} \leq \frac{8I(\bS; \by) + 4\Ex_{\by}\bracket*{\log\abs*{\Psi^{(\by)}}} +12\log 2}{n}. \qedhere
\end{equation*}
\end{proof}
We've now assembled all the ingredients to prove \Cref{thm:main-binary} and the first part of \Cref{thm:main-noisy}.
\begin{proof}
\Cref{thm:main-binary} is a special case of the first part of \Cref{thm:main-noisy}, so we only prove the latter. Since the error of a query is trivially upper bounded by $1$, \Cref{thm:main-noisy} is vacuous if $n = 1$. Therefore, without loss of generality, we may assume that $n \geq 2$.
Let $\by \coloneqq \by_1, \ldots, \by_T$ be the query responses. If the $t^{\text{th}}$ query is $p_t$-uniform and maps $X^{w_t}$ to $Y_t$, by \Cref{thm:MI-bound-formal}
\begin{align*}
I(\bS ; \by) &\leq n\cdot\Ex\bracket*{\sum_{t \in [T]} \frac{w_t (|Y_t| - 1)}{(n-1)(n-w_t)} \cdot \min \paren*{5 + 4 \log n, 1 + \log\paren*{1 + \frac{w_t}{np_t}}}}\\
&\leq 10\cdot \Ex\bracket*{\sum_{t \in [T]} \frac{w_t |Y_t|}{(n-w_t)} \cdot \min \paren*{1 + \log n, 1 + \log\paren*{1 + \frac{w_t}{np_t}}}} \tag{$\frac{n}{n-1} \leq 2$}
\end{align*}
The desired result then follows from \Cref{thm:MI-bounds-bias}.
\end{proof}
For completeness, we further show that \Cref{thm:formal-simple} is an easy consequence of \Cref{thm:main-binary}.
\begin{proof}[Proof of \Cref{thm:formal-simple} from \Cref{thm:main-binary}]
First note that if $w > n/2$, the guarantee of \Cref{thm:formal-simple} is vacuous. Therefore, we may assume that $w \leq n/2$. Clearly, the analyst in \Cref{thm:formal-simple} is $(\mathrm{cost}_n, b)$-budgeted for
\begin{equation*}
b \coloneqq \frac{Tw|Y| \log n}{n-w} \leq \frac{2Tw|Y| \log n}{n}.
\end{equation*}
The analyst can choose, for each query $\phi_t$ and outcome $y \in Y$, a test function $\psi_{t,y}(S) \coloneqq \Ind[\phi_t(S) = y]$. The total number of test functions is $m \coloneqq T|Y|$. If $m > n^2$, then, once again, the guarantee of \Cref{thm:formal-simple} is vacuous, so we assume $m \leq n^2$. Applying \Cref{thm:main-binary} as well as the inequality $\paren*{\psi(S) - \psi(\mcD)}^2 \leq w\cdot \mathrm{error}(\psi, S, \mcD)$ gives
\begin{equation*}
\Ex\bracket*{\sup_{t \in [T], y \in Y} \paren*{\phi^{(n)}_t(\bS)(y) - \phi^{(\mathrm{dist})}_t(\mcD)(y)}^2} \leq O\paren*{w\frac{b + \log (n^2) + 1}{n}} \leq O\paren*{w \log n \cdot \paren*{\frac{Tw|Y|}{n^2} + \frac{1}{n}}}
\end{equation*}
as desired.
\end{proof}
\section{Introduction}
Data is a scarce and valuable resource. As a result, data analysts often reuse the same dataset to answer multiple queries. \cite{DFHPRR15,HU14} initiated the study of \emph{adaptive data analysis}, which aims to give provable guarantees that the query answers will have low bias, i.e. be representative of the full population, even when a data set is reused adaptively. Since then, there have been a number of works exploring the adaptive reuse of data \cite{DFHPR15,DFHPR15b,SU15,SU15between,BNSSSU16,RZ16,RRST16,smith2017survey,FS17,FS18,FRR20,DK22}.
Prior work can be split into two camps. The first focuses on the design of \emph{mechanisms}, a layer between the analyst and the dataset. In those works, analysts do not have direct access to the data. Instead, when they wish to ask a query $\phi$, they pass it to the mechanism. The mechanism uses the dataset to answer $\phi$ without revealing too much information about the dataset, e.g. by adding noise to the output of $\phi$ applied to the dataset \cite{DFHPR15,DFHPR15b,DK22,FS18,FS17,SU15between}.
The second camp provides ways to quantify the amount of information that has so far been revealed about the dataset and guarantees that when that quantity is small, query answers will have low bias. Such measures include those based on mutual information \cite{FS18,RZ16}, max information \cite{RRST16}, and differential privacy \cite{DFHPRR15,BNSSSU16}. Work in this camp is closely intertwined with the first camp: Generally, these information measures are proposed with a specific mechanism design in mind. In order to prove the mechanism's responses have low bias, the first step is to prove that it reveals little information, and the second step is to prove that revealing little information implies low bias.
This work is motivated by a simple question,
\begin{quote}
\emph{What minimal assumptions can we make about the queries to guarantee that the results will have low bias, even without an explicit mechanism?}
\end{quote}
The purpose of asking this question is twofold. First, it models a reality in which data analysts often do not explicitly use mechanisms to obfuscate query results before looking at them. Second, if these assumptions are sufficiently easy to explain to an analyst, they can be actionable, as the analyst can keep the assumptions in mind when deciding how to analyze data.
A first attempt at answering this question is that the query should reveal little information as quantified by any of the aforementioned measures. We find this to be an unsatisfactory response. It is often difficult to determine whether a natural sequence of queries satisfies those measures, so it is not clear if the measures form a good model of reality. Furthermore, it is not clear what takeaways an analyst not versed in the intricacies of information theory is to glean from the idea that they should try to minimize mutual information or some differential privacy parameters.
\pparagraph{Our approach.} We show that as long as each query takes as input a random subsample of the dataset and outputs to a small range, the results will have low bias. Quantitatively, our results depend on both the size of the subsample and the number of possible outputs the query has. Unlike previous information measures, this requirement is completely syntactic and trivial to verify. The quantities are also intuitive and easy to explain to a data analyst who may be interested in ensuring their results have low bias.
One interpretation of this framework is that it eliminates the need to design a noise distribution for each task. Prior works design mechanisms to bound the bias by adding an appropriate amount of noise to the true result before returning it (e.g. by adding a mean-$0$ Gaussian). Our work shows that the noise inherent in subsampling suffices. It also extends to tasks where it is difficult to design an appropriate noise distribution -- for example, when the output of each query is categorical rather than numerical.
As easy corollaries of this subsampling approach, we give simple mechanisms for two foundational tasks, statistical queries and median finding, demonstrating the power of this framework. In particular, our mechanism for answering the broad and influential class of \emph{statistical queries} (SQs) \cite{Kea98,FGRVX17} achieves state-of-the-art accuracy in many parameter regimes. In addition to their broad applicability, statistical queries have been the standard bearer by which we assess the utility of approaches for adaptive data analysis since the early works of \cite{DFHPRR15,BNSSSU16}. Our SQ mechanism has advantages beyond its accuracy: It runs in sublinear time, and its extreme simplicity renders it broadly applicable in non-standard settings.
\section{Bounding the mutual information}
\label{sec:MI}
In this section, we'll use $\chi^2$ stability to bound the mutual information between the sample $\bS$ and the sequence of responses of the analyst $\by_1, \ldots, \by_T$. Explicitly, we'll prove the following.
\begin{theorem}[Formal version of \Cref{thm:MI-informal}]
\label{thm:MI-bound-formal}
For any deterministic analyst $\mcA$, distribution $\mcD$, and sample size $n \geq 2$, draw $\bS \sim \mcD^n$. For each $t \in [T]$, let $\phi_t = \mcA(t, \by_1, \ldots, \by_{t-1})$ and $\by_t \sim \phi^{(n)}_t(\bS)$. Then,
\begin{equation*}
I(\bS ; (\by_1, \ldots, \by_T)) \leq n\cdot\Ex\bracket*{\sum_{t \in [T]}\mathrm{cost}(\phi_t)}
\end{equation*}
where the cost of a $p$-uniform query $\phi:X^w\to Y$ is
\begin{equation*}
\mathrm{cost}(\phi) \coloneqq \frac{w (|Y| - 1)}{(n-1)(n-w)} \cdot \min \paren*{5 + 4 \log n, 1 + \log\paren*{1 + \frac{w}{np}}}.
\end{equation*}
\end{theorem}
Before proving \Cref{thm:MI-bound-formal}, we'll be explicit about all sources of randomness. The strategy of the analyst, mapping previous query responses to each new query, is assumed to be deterministic. This is without loss of generality (see \Cref{lem:rand-to-det}) and simplifies notation. Recall from \Cref{def:p-uniform} that for a $p$-uniform query $\phi:X^w \to Y$, it must be the case that for every $S' \in X^w$ and $y \in Y$, $\Pr[\phi(S') = y] \geq p$. For any $p > 0$, this requires that the query $\phi$ have a random output. Therefore, the explicit sampling process is:
\begin{enumerate}
\item A sample $\bS \sim \mcD^n$ is drawn.
\item At each time step $t \in [T]$, as a \emph{deterministic} function of previous responses $\by_1, \ldots, \by_{t-1}$, the analyst chooses a query $\phi_t:X^w \to Y$. This query is answered by first subsampling $\bS' \sim \binom{\bS}{w}$ and then drawing a response $\by_t \sim \phi_t(\bS')$.
\end{enumerate}
The main technical ingredient of \Cref{thm:MI-bound-formal} is the following bound on the $\chi^2$-stability of a single (possibly randomized) subsampling query.
\begin{lemma}
\label{lem:bound-chi-stab}
For any query $\phi:X^w \to Y$, $\phi^{(n)}$ is $\eps$-$\chi^2$ stable with respect to $\phi^{(n-1)}$ for
\begin{equation*}
\eps \coloneqq \frac{w(|Y| - 1)}{(n-1)(n-w)}.
\end{equation*}
\end{lemma}
\Cref{lem:bound-chi-stab} is an easy corollary of the following, slightly more general, lemma.
\begin{lemma}
\label{lem:var-bound}
For any nonnegative integers $w \leq n - 1$, and $f: \binom{[n]}{w} \to \R$,
\begin{equation}
\label{eq:var-ratio}
\Varx_{\bi \sim [n]} \bracket*{\Ex_{\bT \sim \binom{[n]}{w}}[f(\bT) \mid i \notin \bT] } \leq \frac{w}{(n-1)(n-w)} \Varx_{\bT \sim \binom{[n]}{w}}[f(\bT)].
\end{equation}
\end{lemma}
\begin{proof}[Proof of \Cref{lem:bound-chi-stab} from \Cref{lem:var-bound}]
Fix a sample $S \in X^n$. For each $y \in Y$, we define, $f_y: \binom{[n]}{w} \to \R$ which, upon receiving $i_1, \ldots, i_w \in [n]$, outputs $\Pr[\phi(S_{i_1}, \ldots, S_{i_w}) = y]$. Then,
\begin{align*}
\Ex_{\bi \sim [n]} & \bracket*{\chisqbig{\phi^{(n)}(S)}{\phi^{(n-1)}(S_{-\bi})}} \\
& =\Ex_{\bi \sim [n]}\bracket*{\sum_{y \in Y} \frac{\paren*{\phi^{(n)}(S)(y) - \phi^{(n-1)}(S_{-\bi})(y)}^2}{\phi^{(n)}(S)(y)}} \\
&= \sum_{y \in Y} \frac{\Varx_{\bi \sim [n]}[\phi^{(n-1)}(S_{-\bi})(y)]}{\phi^{(n)}(S)(y)} \tag{$\phi^{(n)}(S)(y) = \Ex_{\bi \sim [n]}[\phi^{(n-1)}(S_{-\bi})(y)]$}\\
&= \sum_{y \in Y} \frac{\Varx_{\bi \sim [n]}[\phi^{(n-1)}(S_{-\bi})(y)]}{\phi^{(n)}(S)(y)(1 - \phi^{(n)}(S)(y))} \cdot (1 - \phi^{(n)}(S)(y))
\end{align*}
Note that for any random variable $\bx$ bounded within $[0,1]$ with mean $\mu$, $\Ex[\bx^2] \leq \mu$ and so $\Varx[\bx] \leq \mu(1 - \mu)$. Applying this to the random variable $f_y(\bT)$ shows that $\Varx_{\bT}[f_y(\bT)]$ is a lower bound on the denominator in the above expression, and so,
\begin{align*}
\Ex_{\bi \sim [n]} \bracket*{\chisqbig{\phi^{(n)}(S)}{\phi^{(n-1)}(S_{-\bi})}} &\leq \sum_{y \in Y} \frac{\Varx_{\bi \sim [n]} \bracket*{\Ex_{\bT \sim \binom{[n]}{w}}[f_y(\bT) \mid i \notin \bT]}}{\Varx_{\bT \sim \binom{[n]}{w}}[f_y(\bT)]} \cdot (1 - \phi^{(n)}(S)(y)) \\
& \leq \sum_{y \in Y} \frac{w}{(n-1)(n-w)}\cdot (1 - \phi^{(n)}(S)(y)) \tag{\Cref{lem:var-bound}} \\
&= \frac{w}{(n-1)(n-w)} \cdot \paren*{|Y| - \sum_{y \in Y} \phi^{(n)}(S)(y)} \\
&= \frac{w(|Y| - 1)}{(n-1)(n-w)} = \eps.
\end{align*}
Since this holds for any $S \in X^n$, $\phi^{(n)}$ is $\eps$-$\chi^2$ stable w.r.t. $\phi^{(n-1)}$.
\end{proof}
\begin{proof}[Proof of \Cref{lem:var-bound}]
In order to prove \Cref{lem:var-bound}, it is sufficient to restrict ourselves to mean-$0$ functions $f$, as both sides of \Cref{eq:var-ratio} are invariant to translations. We'll consider the vector space of all such functions
\begin{equation*}
\mcV \coloneqq \set*{f: \binom{[n]}{w} \to \R \,\bigg| \Ex_{\bT \sim \binom{[n]}{w}}[f(\bT)] = 0},
\end{equation*}
endowed with the inner product
\begin{equation*}
\langle f, g \rangle \coloneqq \Ex_{\bT \sim \binom{[n]}{w}}[f(\bT)g(\bT)].
\end{equation*}
With this choice of inner product, for any $f \in \mcV$, $\Varx[f] = \langle f, f\rangle = \norm{f}^2$. Defining $\phi: \mcV \times \mcV \to \R$,
\begin{equation*}
\phi(f, g) \coloneqq \Ex_{\bi \sim [n]} \bracket*{\Ex_{\bT \sim \binom{[n]}{w}}[f(\bT) \mid \bi \notin T] \cdot \Ex_{\bT \sim \binom{[n]}{w}}[g(\bT) \mid \bi \notin T]}.
\end{equation*}
With this definition, for any mean-zero $f$, the left-hand side of \Cref{eq:var-ratio} is equal to $\phi(f,f)$. Clearly $\phi$ is bilinear, symmetric, and positive semi-definite. Our goal is to find the maximum of $\phi(f,f)$ among $f$ with $\norm{f} \leq 1$, which is just the maximum eigenvalue of the linear map corresponding to $\phi$. We'll show that this maximum occurs for a \emph{linear} $f$, where linear functions are defined as
\begin{equation}
\label{eq:def-linear}
\mcL \coloneqq \set*{T \mapsto \sum_{i \in T} \alpha_i \bigg|\, \alpha \in \R^n} \subseteq \R^{\binom{[n]}{w}}.
\end{equation}
Consider any $f \in \mcV$ that is orthogonal to every linear function. Since the function $T \mapsto \Ind[i \notin T]$ is linear\footnote{This function is formed by setting $\alpha_j = \frac{1}{w} - \Ind[j = i]$ in \Cref{eq:def-linear}.}, for any $i \in [n]$, we have that $\Ex[f(\bT) \mid i \notin \bT] = 0$. Therefore, for any $g \in \mcV$,
\begin{equation*}
\phi(f, g) = \Ex_{\bi \sim [n]} \bracket*{0 \cdot \Ex[g(\bT) \mid \bi \notin T]} = 0.
\end{equation*}
As a result, all $f$ in $\mcL^{\perp}$ are in the kernel of the linear map corresponding to $\phi$. Hence, to find the maximum of $\phi(f,f)$ among $\norm{f} \leq 1$, it is sufficient to just consider linear functions. Pick an arbitrary mean-zero $\alpha \in \R^n$ and set $f = T \mapsto \sum_{i \in T} \alpha_i$. We compute the variance of $f$:
\begin{align*}
\norm{f}^2 &= \Ex[f(\bT)^2] \\
&= \Ex\bracket*{\paren*{\sum_{i \in \bT} \alpha_i}^2} \\
&= \sum_{i \in [n]} \Pr[i \in \bT] \alpha_i^2 + \sum_{i \neq j} \Pr[i,j \in \bT] \alpha_i\alpha_j \tag{Linearity of expectation} \\
&= \frac{w}{n}\sum_{i \in [n]} \alpha_i^2 + \frac{w(w-1)}{n(n-1)}\sum_{i \neq j } \alpha_i \alpha_j \\
&= \frac{w}{n} \paren*{\sum_{i \in [n]} \alpha_i \cdot\paren*{\alpha_i + \frac{w-1}{n-1} \sum_{j \neq i} \alpha_j}}\\
&= \frac{w}{n} \paren*{\sum_{i \in [n]} \alpha_i \cdot\paren*{\frac{n-w}{n-1} \alpha_i + \frac{w-1}{n-1} \sum_{j \in [n]} \alpha_j}}\\
&= \frac{w}{n} \paren*{\sum_{i \in [n]} \alpha_i \cdot\paren*{\frac{n-w}{n-1} \alpha_i}} \tag{$f$ is mean-zero} \\
&= \frac{w(n-w)}{n(n-1)} \sum_{i \in [n]} \alpha_i^2.
\end{align*}
Finally, we bound $\phi(f,f)$. In order to do so, we first compute, for any $i \in [n]$
\begin{align*}
\Ex[f(\bT) \mid i \notin \bT] &= \sum_{j \neq i} \Pr[j \in \bT \mid i \notin \bT] \cdot \alpha_j \\
&=\sum_{j \neq i} \frac{w}{n-1} \cdot \alpha_j \\
&= \frac{w}{n-1} \left(\sum_{j \in [n]} \alpha_j\right) - \frac{w}{n-1} \alpha_i \\
&= -\frac{w}{n-1} \alpha_i \tag{$f$ is mean-zero.}
\end{align*}
This allows us to compute,
\begin{align*}
\phi(f,f) &= \Ex_{\bi \sim [n]} \bracket*{\Ex[f(\bT) \mid \bi \notin T]^2} \\
&= \Ex_{\bi \sim [n]} \bracket*{\paren*{-\frac{w}{n-1} \alpha_i}^2} \\
&= \frac{w^2}{n(n-1)^2} \sum_{i \in [n]} \alpha_i^2.
\end{align*}
Comparing the expressions for $\phi(f,f)$ and $\norm{f}^2$, we conclude that for any linear $f$, \Cref{eq:var-ratio} holds with equality. Since the maximum of $\phi(f,f)$ among $f$ with $\norm{f}^2 \leq 1$ occurs for a linear function, we are done.
\end{proof}
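As a numerical sanity check (our own, not part of the argument), the following Python snippet compares the two sides of \Cref{eq:var-ratio} for a random $f$ and verifies that a linear $f$ attains equality, assuming $n$ and $w$ are small enough that all size-$w$ subsets can be enumerated.
\begin{verbatim}
import itertools, random
from statistics import pvariance  # population variance

n, w = 6, 2
subsets = list(itertools.combinations(range(n), w))

def check(f):
    # Returns (LHS, RHS) of the variance-ratio inequality for the function f,
    # given as a dict mapping each size-w subset (tuple) to a real number.
    lhs = pvariance(
        [sum(f[T] for T in subsets if i not in T)
         / sum(1 for T in subsets if i not in T)
         for i in range(n)]
    )
    rhs = w / ((n - 1) * (n - w)) * pvariance([f[T] for T in subsets])
    return lhs, rhs

# A random function: the inequality should hold.
f_rand = {T: random.random() for T in subsets}
print(check(f_rand))

# A linear function f(T) = sum_{i in T} alpha_i: equality up to float error.
alpha = [random.random() for _ in range(n)]
f_lin = {T: sum(alpha[i] for i in T) for T in subsets}
print(check(f_lin))
\end{verbatim}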
The next step in the proof of \Cref{thm:MI-bound-formal} is to convert the $\eps$-$\chi^2$ stability of each subsampling query to $\eps'$-ALKL stability for appropriately chosen $\eps'$.
\begin{corollary}
\label{cor:bound-ALKL-stab}
For any $p$-uniform query $\phi:X^w \to Y$, $\phi^{(n)}$ is $\eps'$-ALKL stable for
\begin{equation*}
\eps' \coloneqq \frac{w (|Y| - 1)}{(n-1)(n-w)} \cdot \min \paren*{5 + 4 \log n, 1 + \log\paren*{1 + \frac{w}{np}}}.
\end{equation*}
\end{corollary}
\begin{proof}
First note that if $|Y| = 1$, the query is trivially $0$-ALKL stable as its output cannot depend on its input. Therefore, we may assume that $|Y| \geq 2$. By \Cref{lem:bound-chi-stab}, $\phi^{(n)}$ is $\eps$-$\chi^2$ stable with respect to $\phi^{(n-1)}$ for $\eps = \frac{w (|Y| - 1)}{(n-1)(n-w)}$. By the first guarantee of \Cref{thm:chi-to-KL}, $\phi^{(n)}$ is $\eps'$-ALKL stable for
\begin{align*}
\eps' &\coloneqq \eps \cdot(3 + 2 \log(|Y|/\eps)) \\
&= \eps \cdot \paren*{3 + 2 \log\paren*{\frac{|Y|}{\frac{w (|Y| - 1)}{(n-1)(n-w)}}}}\\
&= \eps \cdot \paren*{3 + 2 \log\paren*{\frac{|Y|}{|Y|-1} \cdot {\frac{(n-1)(n-w)}{w}}}}\\
&\leq \eps \cdot \paren*{3 + 2 \log\paren*{2n^2}} \tag{$|Y| \geq 2, w \geq 1$} \\
&= \eps \cdot\paren*{3 + 2\log 2 + 4 \log n} \leq \eps \cdot(5 + 4 \log n).
\end{align*}
This gives the first of the two desired bounds. For the other bound, we take advantage of the $p$-uniformity of $\phi$. Consider any $S \in X^n$, $y \in Y$, and $i \in [n]$. We can write,
\begin{align*}
\phi^{(n)}(S)(y) &= \Ex_{\bS' \sim \binom{S}{w}}[\Pr[\phi(\bS') = y]] \\
&= \Pr[i \in \bS']\Ex_{\bS' \sim \binom{S}{w}}[\Pr[\phi(\bS') = y] \mid i \in \bS'] + \Pr[i \notin \bS']\Ex_{\bS' \sim \binom{S}{w}}[\Pr[\phi(\bS') = y] \mid i \notin \bS'] \\
&= \frac{w}{n}\Ex_{\bS' \sim \binom{S}{w}}[\Pr[\phi(\bS') = y] \mid i \in \bS'] + (1 - \frac{w}{n}) \cdot \phi^{(n-1)}(S_{-i})(y)\\
&\leq \frac{w}{n} + (1 - \frac{w}{n}) \cdot \phi^{(n-1)}(S_{-i})(y) \\
&\leq \frac{w}{np}\cdot \phi^{(n-1)}(S_{-i})(y) + (1 - \frac{w}{n}) \cdot \phi^{(n-1)}(S_{-i})(y) \tag{$\phi^{(n-1)}(S_{-i})(y) \geq p$}\\
&\leq \paren*{\frac{w}{np} + 1}\cdot \phi^{(n-1)}(S_{-i})(y).
\end{align*}
Therefore, setting $\tau = (1 + \frac{w}{np})^{-1}$, we have that $\phi^{(n-1)}(S_{-i})(y) \geq \tau \cdot \phi^{(n)}(S)(y)$. Applying the second guarantee of \Cref{thm:chi-to-KL}, $\phi^{(n)}$ is $\eps'$-ALKL stable for
\begin{equation*}
\eps' \coloneqq \eps \cdot(1 + \log(1/\tau)) = \eps \cdot \paren*{1 + \log\paren*{1 + \frac{w}{np}}}.
\end{equation*}
The desired result follows from taking whichever of the two ALKL stability bounds is better.
\end{proof}
Given the above ALKL stability bound, the proof of \Cref{thm:MI-bound-formal} is \emph{almost} completed by \Cref{fact:KL-MI} from \cite{FS18}, which connects ALKL stability to mutual information. However, to directly apply \Cref{fact:KL-MI}, we would need an \emph{a priori} bound on the ALKL stability of each query. Instead, we allow the analyst to choose its query as a function of the previous responses and only require the total cost be bounded in expectation. Specifically, we want to show an adaptive composition of ALKL stability even when the stability parameters of later queries are a function of prior queries' responses. This has recently been shown to hold in \cite{FZ21}. However, their setting is slightly different, focusing on analysts that are budgeted almost surely rather than in expectation, so we cannot use their results as a black box. Instead, we prove the following.
\begin{lemma}[Generalization of \Cref{fact:KL-MI}]
\label{lem:KL-to-MI-expectation}
Draw $\bS \sim \mcD^n$. Then, for each $t \in [T]$, let an analyst choose a randomized algorithm $\mcM_t:X^n \to Y$ as a function of $\by_1, \ldots, \by_{t-1}$ and draw an output $\by_t \sim \mcM_t(\bS)$. If $\mcM_t$ is $\beps_t$-ALKL stable, then,
\begin{equation*}
I(\bS; (\by_1, \ldots, \by_T)) \leq n \cdot \Ex\bracket*{\sum_{t \in [T]}\beps_t}
\end{equation*}
where the expectation is taken over the analyst's choices for the queries, which in turn depend on the randomness of $\bS$ and $\by_1, \ldots, \by_T$.
\end{lemma}
\Cref{thm:MI-bound-formal} is a direct consequence of \Cref{lem:KL-to-MI-expectation} and \Cref{cor:bound-ALKL-stab}. The proof of \Cref{lem:KL-to-MI-expectation} is mostly a matter of looking through \cite{FS18}'s proof of \Cref{fact:KL-MI} and confirming that everything still holds in this more general setting.
The first statement of \Cref{fact:KL-MI} is a special case of the following more general fact, implicit in \cite{FS18}'s proof.
\begin{fact}[Implicit in \cite{FS18}'s proof of \Cref{fact:KL-MI}]
\label{fact:KL-to-MI-general}
For any randomized algorithm $\mcM:X^n \to Y$ and $\bS \sim \mcD^n$, if $\by \sim \mcM(\bS)$ then for any randomized algorithm $\mcM':X^{n-1} \to Y$
\begin{equation*}
I(\bS; \by) \leq n \cdot \Ex_{\bS}\bracket*{\Ex_{\bi \sim [n]}\bracket*{\KLbig{\mcM(\bS)}{\mcM'(\bS_{-\bi})}}}.
\end{equation*}
\end{fact}
To take advantage of \Cref{fact:KL-to-MI-general}, we'll define the ALKL stability with respect to a distribution. Note that if a randomized algorithm is $\eps$-ALKL stable, it is $\eps$-ALKL stable with respect to all distributions.
\begin{definition}[ALKL stability with respect to a distribution]
A randomized algorithm $\mcM: X^n \to Y$ is $\eps$-ALKL stable with respect to a (not necessarily product) distribution $\mcD$ over $X^n$ if there is a randomized algorithm $\mcM': X^{n-1} \to Y$ for which
\begin{equation*}
\Ex_{\bS \sim \mcD, \bi \sim [n]} \bracket*{\KLbig{\mcM(\bS)}{\mcM'(\bS_{-\bi})}} \leq \eps.
\end{equation*}
\end{definition}
Then, we generalize \cite{FS18}'s adaptive composition of ALKL stability.
\begin{proposition}[Adaptive composition of ALKL stability with respect to a distribution]
\label{prop:adaptive-composition-dist}
For any $\mcM_1: X^n \to Y_1$, $\mcM_2: Y_1 \times X^n \to Y_2$, and distribution $\mcD$ over $X^n$ satisfying
\begin{enumerate}
\item $\mcM_1:X^n \to Y_1$ is $\eps_1$-ALKL stable with respect to $\mcD$.
\item For $\bS \sim \mcD$ and any $y_1 \in Y_1$, $\mcM_2(y_1, \cdot)$ is $(\eps_2^{(y_1)})$-ALKL stable with respect to $(\mcD \mid \mcM_1(\bS) = y_1)$.
\end{enumerate}
The randomized algorithm mapping $S$ to $\mcM_2(\mcM_1(S), S)$ is $\eps'$-ALKL stable with respect to $\mcD$ for
\begin{equation*}
\eps' \coloneqq\eps_1 + \Ex_{\bS \sim \mcD,\by_1 \sim \mcM_1(\bS) }[\eps_2^{(\by_1)}].
\end{equation*}
\end{proposition}
\begin{proof}
This proof uses the well-known chain rule of KL divergence. For any distributions $\mcD$ and $\mcE$ over domain $X \times Y$,
\begin{equation*}
\KL{\mcD}{\mcE} = \KL{\mcD(x)}{\mcE(x)} + \Ex_{\bx' \sim \mcD(x)}\bracket*{\KLbig{\mcD(y \mid x = x')}{\mcE(y \mid x = x')}}
\end{equation*}
where $\mcD(x)$ denotes the marginal distribution of $\mcD$ over $X$ and $\mcD(y \mid x = x')$ its conditional distribution over $Y$. Then, we bound
\begin{align*}
\Ex_{\bS \sim \mcD, \bi \sim [n]} &\bracket*{\KLbig{\mcM_2(\mcM_1(\bS), \bS)}{\mcM_2'(\mcM_1'(\bS_{-\bi}), \bS_{-\bi})}} \\
&= \Ex_{\bS \sim \mcD, \bi \sim [n]} \bracket*{\KLbig{\mcM_1(\bS)}{\mcM_1'(\bS_{-\bi})}} + \Ex_{\bS \sim \mcD, \bi \sim [n], \by_1 \sim \mcM_1(\bS)} \bracket*{\KLbig{\mcM_2(\by_1,\bS)}{\mcM_2'(\by_1,\bS_{-\bi})}} \\
&\leq \eps_1 + \Ex_{\bS \sim \mcD,\by_1 \sim \mcM_1(\bS) }[\eps_2^{(\by_1)}] = \eps'.\qedhere
\end{align*}
\end{proof}
\begin{proof}[Proof of \Cref{lem:KL-to-MI-expectation}]
By repeatedly applying \Cref{prop:adaptive-composition-dist}, the mechanism that takes as input $\bS$ and outputs $\by = (\by_1, \ldots, \by_T)$ is $\paren*{\Ex\bracket*{\sum_{t \in [T]}\beps_t}}$-ALKL stable w.r.t. $\mcD^n$. The desired result follows from \Cref{fact:KL-to-MI-general}.
\end{proof}
\subsection{Other related work}
Subsampling has been thoroughly explored in the context of privacy amplification (see e.g. \cite{BBG18,ZW19} or the book chapter \cite{steinkeBookChapter}): if $\mcA$ is a differentially private algorithm, running $\mcA$ on a random subset of the data gives an algorithm with even better privacy parameters. Given the previous applications of differential privacy to adaptive data analysis, this seems like a natural starting point for our work. However, such an approach is not sufficient to analyze subsampling queries. Indeed, subsampling queries do not necessarily satisfy $(\eps, \delta)$-differential privacy with sufficiently good parameters to give useful bounds on the bias.
Fish, Reyzin, and Rubinstein explored the use of subsampling to speed up classical mechanisms for adaptive data analysis \cite{FRR20}. For example, their mechanism for answering a statistical query $\phi$ computes $\phi$ on a random subsample of the data \emph{and} adds Laplacian noise to that result. This allows them to retain the accuracy guarantees of prior mechanisms that added Laplacian noise \cite{BNSSSU16} while also running in sublinear time. In contrast, our work shows that subsampling alone is sufficient, and achieves sample size bounds that improve upon prior work.
\subsection{Our results}
We'll show that an analyst can ask an adaptive sequence of \emph{subsampling queries} without incurring large bias.
\begin{definition}[Subsampling query]
\label{def:subsampling-query}
For any sample $S \in X^n$ and query $\phi:X^w \to Y$ where $w \leq n$, the \emph{subsampling query} $\phi$ is answered by drawing $\bx_1, \ldots, \bx_w$ uniformly without replacement from $S$, and then providing the answer $\by = \phi(\bx_1, \ldots, \bx_w)$.
The notation $\phi^{(n)}(S)$ denotes the distribution of $\by$ defined above. Similarly, for any distribution $\mcD$ supported on $X$, the notation $\phi^{(\mathrm{dist})}(\mcD)$ denotes the distribution of $\by' = \phi(\bx_1', \ldots, \bx_w')$ when $\bx_1', \ldots, \bx_w' \iid \mcD$.
\end{definition}
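In code, answering a subsampling query and sampling from its population analogue might look as follows (a minimal sketch; the function names are ours).
\begin{verbatim}
import random

def answer_subsampling_query(S, phi, w):
    # Sample from phi^(n)(S): draw w points from S uniformly without
    # replacement, then apply phi.
    return phi(random.sample(S, w))

def answer_population_query(D_sampler, phi, w):
    # Sample from phi^(dist)(D): draw w iid points from the distribution D
    # (represented here by a zero-argument sampler) and apply phi.
    return phi([D_sampler() for _ in range(w)])
\end{verbatim}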
We allow the analyst to be \emph{adaptive}: The analyst's choice for the $t^{\text{th}}$ query may depend arbitrarily on the responses to the first $t - 1$ queries, as summarized in \Cref{fig:adaptive-analyst}. We first give an informal result bounding the sample size needed to ensure the results have low bias.
\begin{theorem}[The subsampling mechanism has low bias]
\label{thm:informal}
Suppose an analyst asks an adaptive sequence of $T$ subsampling queries, each mapping $X^w$ to $Y$, to a sample $\bS \sim \mcD^n$. As long as
\begin{equation*}
n \geq \tilde{\Omega}(w\sqrt{T\cdot |Y|}),
\end{equation*}
with high probability, all of the queries will have low bias.
\end{theorem}
\Cref{thm:informal} can be compared to a naive approach which takes a fresh batch of $w$ samples for each query. Subsampling has a quadratically better dependence on $T$ than that approach which requires $n \geq wT$. The following formal version of \Cref{thm:informal} quantifies the bias of each query.
\begin{figure}[b]
\captionsetup{width=.9\linewidth}
\begin{tcolorbox}[colback = white,arc=1mm, boxrule=0.25mm]
\vspace{2pt}
\textbf{Input:} A sample $S \in X^n$ not known to the analyst.\vspace{2pt}
For each time step $t \in [T]$, the analyst \vspace{2pt}
\begin{enumerate}[nolistsep,itemsep=2pt]
\item Selects a query $\phi_t: X^{w_t} \to Y_t$ which can depend on the previous responses $\by_1, \ldots, \by_{t-1}$.
\item Receives the response $\by_t \sim \phi_t^{(n)}(S)$.
\end{enumerate}
\end{tcolorbox}
\caption{An analyst asking an adaptive sequence of subsampling queries.}
\label{fig:adaptive-analyst}
\end{figure}
\begin{theorem}[Formal version of \Cref{thm:informal}]
\label{thm:formal-simple}
For any distribution $\mcD$ over domain $X$, and analyst making a series of adaptive queries $\phi_1, \ldots, \phi_T:X^w \to Y$ to a sample $\bS \sim \mcD^n$,
\begin{equation*}
\Ex\bracket*{\sup_{t \in [T], y \in Y} \paren*{\phi^{(n)}_t(\bS)(y) - \phi^{(\mathrm{dist})}_t(\mcD)(y)}^2} \leq O\paren*{w\log n\paren*{\frac{wT |Y|}{n^2} + \frac{1}{n}}}.
\end{equation*}
where the expectation is both over the sample $\bS$ and the analyst's choice of queries, and the notation $\mcD(x)$ denotes $\Prx_{\bx \sim \mcD}[\bx = x]$.
\end{theorem}
By Markov's inequality, with probability at least $0.9$, for all $t \in [T]$ and $y \in Y$
\begin{equation*}
\abs*{\phi^{(n)}_t(\bS)(y) - \phi^{(\mathrm{dist})}_t(\mcD)(y)} \leq \tilde{O}\paren*{\frac{w\sqrt{T|Y|}}{n} + \sqrt{\frac{w}{n}}}.
\end{equation*}
The second term, $\sqrt{\frac{w}{n}}$, quantifies the \emph{inherent} bias in a sample: There are\footnote{One such example, for $\mcD = \mathrm{Unif}(\{-1,1\})$, is the query $\phi(S) = \Ind[\sum_{x \in S} x \geq 0]$.} queries $\phi:X^w \to \zo$, for which, even with a fresh sample $\bS$, $\Ex_{\bS \sim \mcD^n}[|\phi^{(n)}(\bS)(1) - \phi^{(\mathrm{dist})}(\mcD)(1)|] = \Theta(\sqrt{\frac{w}{n}})$.
The first term therefore quantifies the extra bias. When $T |Y| \leq \frac{n}{w}$, that first term is dominated by the inherent bias. The guarantee then slowly degrades from the inherent bias to vacuous as $T|Y|$ varies between $\frac{n}{w}$ and $\frac{n^2}{w^2}$. In contrast, naive sample splitting works well for $T \leq \frac{n}{w}$ but cannot move beyond that regime.
Our next theorem generalizes \Cref{thm:formal-simple} in a number of ways. First, it allows the analyst to choose a different domain and range size for each query: as a function of the responses $\by_1, \ldots, \by_{t-1}$, the analyst chooses $w_t, Y_t$ and a subsampling query $\phi_t:X^{w_t} \to Y_t$. The only requirement is that the analyst not exceed a total \emph{budget}.
\begin{definition}[Budgeted analyst]
\label{def:budgeted-analyst}
For any distribution $\mcD$, sample size $n$, cost function, $\mathrm{cost}$, mapping queries to $\R_{\geq 0}$ and budget $b \geq 0$, we say an analyst is \emph{$(\mathrm{cost},b)$-budgeted in expectation} if
\begin{equation*}
\Ex\bracket*{\sum_{t \in [T]}\mathrm{cost}(\phi_t)} \leq b
\end{equation*}
where the expectation is over the analyst's choice of queries $\phi_1, \ldots, \phi_T$, which in turn also depends on the randomness of $\bS \sim \mcD^n$ and the prior query outputs $\by_1, \ldots, \by_{T-1}$ in \Cref{fig:adaptive-analyst}. Similarly, we say that an analyst is \emph{$(\mathrm{cost},b)$-budgeted almost surely} if $\sum_{t \in [T]}\mathrm{cost}(\phi_t) \leq b$ holds almost surely.
\end{definition}
We further generalize \Cref{thm:formal-simple} by disconnecting the test queries from the first $T$ queries the analyst asks. After receiving the responses $\by_1, \ldots, \by_T$, the analyst chooses any set of test queries $\psi_1:X^{v_1} \to [0,1], \ldots, \psi_m:X^{v_m} \to [0,1]$ for which we bound $\abs{\Ex_{\by \sim \psi^{(n)}(\bS)}[\by] - \Ex_{\by \sim \psi^{(\mathrm{dist})}(\mcD)}[\by]}$. To recover \Cref{thm:formal-simple}, we define a test query $\psi(x_1, \ldots, x_w) \coloneqq \Ind[\phi_t(x_1, \ldots, x_w)=y]$ for each $t \in [T]$ and $y \in Y_t$. The following notation will be convenient.
\begin{definition}[Expectation of a query]
\label{def:expectation}
For a query $\phi:X^w \to Y \subseteq \R$ and sample $S \in X^n$, we use the notation $\phi(S)$ as shorthand for $\Ex_{\by \sim \phi^{(n)}(S)}[\by]$. Similarly, for a distribution $\mcD$ over domain $X$, we use the notation $\phi(\mcD)$ as shorthand for $\Ex_{\by \sim \phi^{(\mathrm{dist})}(\mcD)}[\by]$.
\end{definition}
A more technical improvement of \Cref{thm:main-binary} over \Cref{thm:formal-simple} is that the error bound improves when $\Var_{\mcD}(\psi) \coloneqq \Varx_{\by \sim \psi^{(\mathrm{dist})}(\mcD)}[\by]$ is small. \Cref{thm:formal-simple} corresponds to the pessimistic bound of $\Var_{\mcD}(\psi) \leq 1$. This improvement will be important to the application of answering statistical queries given in \Cref{subsec:SQ}. To state \Cref{thm:main-binary}, we define the \emph{error} of a test query.
\begin{definition}[Error of a query]
\label{def:error-simple}
For any $\psi:X^w \to [0,1]$, distribution $\mcD$ over $X$ and sample $S \in X^n$, we define
\begin{equation*}
\mathrm{error}(\psi, S, \mcD) \coloneqq \frac{1}{w} \cdot \min\paren*{\Delta, \frac{\Delta^2}{\Var_{\mcD}(\psi)}} \quad\quad\text{where }\Delta \coloneqq \abs{\psi(S) - \psi(\mcD)}.
\end{equation*}
\end{definition}
If $\mathrm{error}(\psi, S, \mcD)\leq \eps$, then $\Delta \leq \max(w\eps, \sqrt{w\eps \Var_{\mcD}(\psi)})$. When $\Var_{\mcD}(\psi) = 1$, the second term, or the trivial bound of $\Delta \leq 1$, dominates. As $\Var_{\mcD}(\psi)$ decreases, the bound improves until it hits a floor of $w\eps$.
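In code, the error of a test query can be estimated by Monte Carlo, replacing $\psi(S)$, $\psi(\mcD)$, and $\Var_{\mcD}(\psi)$ with empirical estimates (a sketch; the helper names and the use of sampling rather than exact expectations are our own choices).
\begin{verbatim}
import random
from statistics import mean, pvariance

def query_error(psi, w, S, D_sampler, trials=20000):
    # Monte Carlo estimate of error(psi, S, D) =
    #   (1/w) * min(Delta, Delta^2 / Var_D(psi)),  Delta = |psi(S) - psi(D)|.
    on_sample = [psi(random.sample(S, w)) for _ in range(trials)]
    on_dist = [psi([D_sampler() for _ in range(w)]) for _ in range(trials)]
    delta = abs(mean(on_sample) - mean(on_dist))
    var_d = pvariance(on_dist)
    if var_d == 0:
        return delta / w
    return min(delta, delta ** 2 / var_d) / w
\end{verbatim}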
\begin{theorem}[Generalization of \Cref{thm:formal-simple}]
\label{thm:main-binary}
For sample size $n$, define the cost of a query as
\begin{equation*}
\mathrm{cost}_n(\phi:X^w \to Y) \coloneqq \frac{w |Y| \log n}{n-w}.
\end{equation*}
For any distribution $\mcD$ over domain $X$ and adaptive analyst that is $(\mathrm{cost}_n, b)$-budgeted in expectation, let $\Psi$, $|\Psi| \leq m$, be a collection of tests the analyst chooses after receiving query responses $\by_1, \ldots, \by_T$. Then
\begin{equation*}
\Ex\bracket*{\sup_{\psi \in \Psi} \mathrm{error}(\psi, \bS, \mcD)} \leq O \paren*{\frac{b + \log m + 1}{n}}
\end{equation*}
where the expectation is both over the sample $\bS \sim \mcD^n$ and the analyst's decisions.
\end{theorem}
While \Cref{thm:main-binary} only guarantees low bias in expectation, we further show that a high probability guarantee holds when $w_t = 1$ for all $t \in [T]$.
\begin{theorem}[Improved dependence on failure probability]
\label{thm:high-probability}
In the setting of \Cref{thm:main-binary}, if the analyst is $(\mathrm{cost}_n, b)$-budgeted \emph{almost surely}, chooses $w_t = 1$ for all $t \in [T]$ and, as a function of the responses $\by_1, \ldots, \by_T$, chooses a single test $\psi:X \to [0,1]$, then, for any failure probability $\delta > 0$,
\begin{equation*}
\Pr\bracket*{\mathrm{error}(\psi, \bS, \mcD)\geq O\paren*{\log(1/\delta)\cdot\paren*{\frac{b + 1}{n}}}} \leq \delta.
\end{equation*}
\end{theorem}
Note that the logarithmic dependence on $\delta$ means that a union bound suffices to handle the case where the analyst chooses $m$ tests. In that case, the $\log(1/\delta)$ dependence is instead $\log(m/\delta)$.
The case of $w = 1$ is particularly interesting for two reasons.
\begin{enumerate}
\item It is sufficient for our application of answering statistical queries, a widely-applicable query class, given in \Cref{subsec:SQ}. Indeed, statistical queries can be characterized as precisely those queries $\phi:X^n \to [0,1]$ for which an unbiased (and bounded) estimator of $\phi(S)$ can be computed given a single $\bx \sim \mathrm{Unif}(S)$. Our mechanism for answering statistical queries simply averages many of these unbiased estimators.
\item One way to answer a query with $w \geq 2$ is to cast a sample of $n$ points from $\mcD$ as a sample of $\floor{n/w}$ points each drawn from $\mcD^w$. By doing so, each query $\phi:X^w \to Y$ can be answered by looking at one ``grouped point," and so \Cref{thm:high-probability} gives a high probability guarantee. We conjecture that such grouping is unnecessary and that \Cref{thm:high-probability} would directly hold without the restriction that $w_t = 1$ for all $t \in [T]$. That said, the proof breaks in this setting, and so we consider extending it to be an intriguing open problem.
\end{enumerate}
Lastly, we show it is possible to drop the $\log n$ dependence from \Cref{thm:main-binary,thm:high-probability} for queries that are sufficiently \emph{uniform}, meaning each output of the query is fairly likely to occur. To define uniform queries, we expand \Cref{def:subsampling-query} to allow for subsampling queries that are not deterministic functions. Rather, given a subsample $x_1, \ldots, x_w$, the output of the query may still be a random variable. Equivalently, we may think of every query as accepting as input random bits in addition to the subsample, though we will suppress those random bits from our notation.
\begin{definition}[$p$-uniform]
\label{def:p-uniform}
A subsampling query $\phi:X^w \to Y$ is said to be $p$-\emph{uniform} if, for every $x_1, \ldots, x_w \in X$ and every $y \in Y$, $\Pr[\phi(x_1, \ldots, x_w) = y] \geq p$.
\end{definition}
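One generic way to make an arbitrary query $p$-uniform is to mix its output with a uniformly random element of $Y$. The sketch below is our own illustration (not a construction prescribed here): it returns a uniform element of $Y$ with probability $p|Y|$ and the true answer otherwise, so every output has probability at least $p$.
\begin{verbatim}
import random

def make_p_uniform(phi, Y, p):
    # Wrap a query phi : X^w -> Y so that every output has probability >= p.
    # Assumes p * len(Y) <= 1.
    Y = list(Y)
    assert p * len(Y) <= 1
    def noisy_phi(subsample):
        if random.random() < p * len(Y):
            return random.choice(Y)   # uniform output with probability p|Y|
        return phi(subsample)         # true answer otherwise
    return noisy_phi
\end{verbatim}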
\begin{theorem}[Improved bounds for $p$-uniform queries]
\label{thm:main-noisy}
\Cref{thm:main-binary} holds, where for a $p$-uniform query $\phi:X^w \to Y$, the improved cost function
\begin{equation}
\label{eq:cost-noisy-exp}
\mathrm{cost}_n(\phi) \coloneqq \frac{w |Y|}{n-w} \cdot \min\paren*{\log n, 1 + \log\paren*{1 + \frac{w}{np}}}
\end{equation}
is used. For \Cref{thm:high-probability}, the improved cost function also incorporates the desired failure probability, $\delta$, and is defined
\begin{equation}
\label{eq:cost-noisy-hp}
\mathrm{cost}_{n,\delta}(\phi) \coloneqq \frac{|Y|}{n} \cdot \min\paren*{\log n, 1 + \log\paren*{1 + \frac{\log(1/\delta)}{np}}}
\end{equation}
\end{theorem}
When $p$ is large enough, the cost function in \Cref{thm:main-noisy} improves upon that of \Cref{thm:main-binary,thm:high-probability} by a $\log n$ factor. In the applications of \Cref{subsec:SQ,subsec:approx-median}, we will add a small amount of noise to eliminate that log factor. Without that noise, the mechanisms would be even simpler, though the sample size needed would increase by a log factor.
\section{Notation and key definitions}
\label{sec:prelim}
\paragraph{Sets and multisets.}
For a natural number $n$, we use $[n]$ to denote the set $\set{1,\ldots, n}$. For a multiset $S \in X^n$ we use the notation $S_i$ to indicate the $i^\text{th}$ element of $S$, and $S_{-i} \coloneqq (S_1, \ldots, S_{i-1}, S_{i+1}, \ldots, S_n)$ denotes the remaining $n-1$ elements. For $w \leq n$, we use the notation $\binom{S}{w}$ to indicate the set of all size-$w$ multisets $S'$ that are contained within $S$.
\paragraph{Random variables and distributions.}
We use \textbf{boldfont} (e.g. $\bx \sim \mcD$) to denote random variables, and generally will use calligraphic font to denote distributions. The notation $\bx \sim S$ for a (multi)set $S$ is shorthand for $\bx \sim \mathrm{Unif}(S)$. For a distribution $\mcD$ over domain $X$ and element $x \in X$, we use $\mcD(x)$ to denote the probability mass function of $\mcD$ evaluated at $x$. Similarly, for a subset $X' \subset X$, the notation $\mcD(X')$ is shorthand for $\sum_{x \in X'}\mcD(x)$. We use $\supp(\mcD) \coloneqq \{x \in X \mid \mcD(x) > 0\}$ for the support of $\mcD$. For convenience, all domains and distributions will be discrete.
\paragraph{Logarithms and exponentials.}
We use $\log$ to denote the natural logarithm, $\log_2$ to denote logarithms in base 2, and $\exp$ to denote the function $x \mapsto e^x$.
\paragraph{Properties of distributions and random variables.}
Throughout this paper, we will use the following two notions of ``closeness" of two distributions.
\begin{definition}[Kullback-Leibler (KL) Divergence]
\label{def:KL}
For distributions $\mcD$, $\mcE$ supported on a domain $X$, the \emph{KL divergence} between $\mcD$ and $\mcE$ is defined as,
\begin{equation*}
\KL{\mcD}{\mcE} \coloneqq \Ex_{\bx \sim \mcD}\bracket*{\log \paren*{\frac{\mcD(\bx)}{\mcE(\bx)}}}.
\end{equation*}
\end{definition}
\begin{definition}[Neyman's $\chi^2$ divergence]
\label{def:chi-dist}
For distribution $\mcD$, $\mcE$ supported on a domain $X$, we define the \emph{$\chi^2$ divergence} between $\mcD$ and $\mcE$
\begin{equation*}
\chisq{\mcD}{\mcE} \coloneqq \Ex_{\bx \sim \mcD} \bracket*{\frac{(\mcD(\bx) - \mcE(\bx))^2}{\mcD(\bx)^2}} = \sum_{x \in X}\frac{(\mcD(x) - \mcE(x))^2}{\mcD(x)}.
\end{equation*}
\end{definition}
Note that our definition of $\chi^2$ divergence reverses the arguments relative to Pearson's $\chi^2$ divergence.
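For discrete distributions represented as dictionaries of probabilities, both divergences can be computed directly. The snippet below is a small illustration and assumes $\mcE$ puts positive mass wherever $\mcD$ does, so that the KL divergence is finite.
\begin{verbatim}
import math

def kl_divergence(D, E):
    # KL(D || E) = E_{x ~ D}[log(D(x) / E(x))] for dicts of probabilities.
    return sum(p * math.log(p / E[x]) for x, p in D.items() if p > 0)

def chi2_divergence(D, E):
    # Neyman's chi^2 divergence as defined above:
    # sum over x with D(x) > 0 of (D(x) - E(x))^2 / D(x).
    return sum((p - E.get(x, 0.0)) ** 2 / p for x, p in D.items() if p > 0)

D = {"a": 0.5, "b": 0.3, "c": 0.2}
E = {"a": 0.4, "b": 0.4, "c": 0.2}
print(kl_divergence(D, E), chi2_divergence(D, E))
\end{verbatim}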
Furthermore, mutual information will play a critical role in our proofs.
\begin{definition}[Mutual information]
\label{def:MI}
For random variables $\bx,\by$ jointly distributed according to a distribution $\mcD$, let $\mcD(x)$ and $\mcD(y)$ be the marginal distributions of $\bx$ and $\by$ respectively. The mutual information between $\bx$ and $\by$ is defined as
\begin{equation*}
I(\bx ; \by) \coloneqq \KL{\mcD}{\mcD(x) \times \mcD(y)} = \Ex_{y \sim \mcD(y)} \bracket*{\KL{\mcD(x \mid \by = y)}{\mcD(x)}}.
\end{equation*}
\end{definition}
\subsection{Formal model of the analyst}
\Cref{fig:adaptive-analyst} summarizes the interaction of the analyst with a sample $S \in X^n$. Formally, we model the analyst $\mcA$ as a function mapping a time step $t \in [T]$, previous query responses $y_1, \ldots, y_{t-1}$, and a source of randomness $\bz \sim \mcZ$ to a subsampling query $\phi_t$. After the $T^{\text{th}}$ step, the analyst outputs a series of test queries $\psi_1:X^{v_1} \to \zo, \ldots, \psi_{m}:X^{v_m} \to \zo$, also as a function of $y_1, \ldots, y_T$ and $\bz$. With this formal description of the analyst, we can give the following lengthy but fully explicit description of \Cref{thm:main-binary}.
\begin{theorem}[\Cref{thm:main-binary} restated with a formal model of the analyst]
\label{thm:main-analyst}
For any analyst $\mcA$ and distributions $\mcD, \mcZ$, draw $\bS \sim \mcD^n$ and $\bz \sim \mcZ$. For each $t \in [T]$, let $\phi_t = \mcA(t, \by_1, \ldots, \by_{t-1}, \bz)$ and $\by_t \sim \phi^{(n)}_t(\bS)$. Then, for $\psi_1, \ldots, \psi_m = \mcA(T,\by_1, \ldots, \by_T, \bz)$,
\begin{equation}
\label{eq:expectation-analyst}
\Ex[\sup_{i \in [m]} \mathrm{error}(\psi_i, \bS, \mcD)] \leq O\paren*{\Ex\bracket*{\frac{\sum_{t \in [T]} \mathrm{cost}_n(\phi_t) + \log m + 1}{n} }}.
\end{equation}
\end{theorem}
We will restrict our attention to \emph{deterministic} analysts, where the analyst's output is a deterministic function of the previous responses. Deterministic analysts do not take in a source of randomness (previously denoted $\bz \sim \mcZ$). Through the following simple argument, this is without loss of generality.
\begin{lemma}[Deterministic analysts are as powerful as randomized analysts]
\label{lem:rand-to-det}
If \Cref{thm:main-binary} holds for deterministic analysts, it also holds for randomized analysts. The same is true for \Cref{thm:high-probability,thm:main-noisy}.
\end{lemma}
\begin{proof}
Let $\mcA$ be a randomized analyst. We can think of it as a mixture of deterministic analysts: $\mcA$ first draws $\bz \sim \mcZ$ and then executes the deterministic strategy $\mcA_{\bz} \coloneqq \mcA(\cdot, \bz)$. Then,
\begin{align*}
\Ex[\text{Error with $\mcA$ as the analyst}] &= \Ex_{\bz}[\Ex[\text{Error with $\mcA_{\bz}$ as the analyst}]] \\
&\leq \sup_{z} \Ex[\text{Error with $\mcA_{z}$ as the analyst}]
\end{align*}
The left-hand side of \Cref{eq:expectation-analyst} when $\mcA$ is the analyst is the expectation over $\bz$ of the same quantity when $\mcA_{\bz}$ is the analyst. Therefore, if it is small for all deterministic analysts, it is also small for all randomized analysts. Similar arguments hold for \Cref{thm:high-probability,thm:main-noisy}.
\end{proof}
\section{Boosting the success probability}
\label{sec:reduction}
\Cref{thm:main-binary} proves that, with constant success probability, the analyst cannot find a test on which the sample is biased. In this section, we prove \Cref{thm:high-probability} showing that, when all of the analyst's queries $\phi:X^w \to Y$ satisfy $w = 1$, that guarantee holds with high probability. We do this via a reduction from small failure probability to constant failure probability.
\pparagraph{Notation} Throughout this section, we will only consider analysts who make queries of the form $\phi:X \to Y$ and tests of the form $\psi:X \to [0,1]$ (corresponding to $w = 1$), and analysts that only output a single test. Given an analyst $\mcA$ and sample $S \in X^n$, we'll use the notation $\mcA(S)$ as shorthand for the distribution of tests $\mcA$ asks on the sample $S$. I.e., $\bpsi \sim \mcA(S)$ is shorthand for, at each $t \in [T]$, setting $\phi_t = \mcA(\by_1, \ldots, \by_{t-1})$ and $\by_t \sim \phi^{(n)}_t(S)$, and then setting $\bpsi = \mcA(\by_1, \ldots, \by_T)$.
\begin{lemma}[Boosting from constant to small failure probability]
\label{lem:auto-boost}
For any distribution $\mcD$ over $X$, sample size $n$, budget $b$, cost function $\mathrm{cost}$, and threshold $\tau_{\psi} \in [0,1]$ for each $\psi:X \to [0,1]$, suppose that for all analysts $\mcA$ that are $(\mathrm{cost}, b)$-budgeted almost surely,
\begin{equation*}
\Prx_{\bS \sim \mcD^n, \bpsi \sim \mcA(\bS)}[\psi(\bS) \geq \tau_{\psi}] \leq \frac{1}{100}.
\end{equation*}
Then, for any sample size $N \geq n$, $k = \floor{N/n}$, and all analysts $\mcA'$ that are $(\mathrm{cost}, B \coloneqq bk/100)$-budgeted almost surely,
\begin{equation*}
\Prx_{\bS \sim \mcD^{N}, \bpsi \sim \mcA'(\bS)}\bracket*{\psi(\bS) \geq \tau_{\psi} + 1/n} \leq \exp(-\Omega(k)).
\end{equation*}
\end{lemma}
At a high level, the proof of \Cref{lem:auto-boost} exploits a classic technique for boosting success probabilities: given an algorithm that fails with constant probability, if we run $k$ independent copies, with probability $1 - 2^{-\Omega(k)}$, a large fraction of those copies will succeed. Typically, this technique gives a framework for modifying existing algorithms -- for example, by taking the majority of multiple independent runs -- in order to produce a new algorithm with a small failure probability.
Interestingly, in our setting, no modification to the algorithm is necessary. If we answer subsampling queries in the most natural way, they ``automatically" boost their own success probability. The key insight is that a single large sample $\bS \sim \mcD^{N \coloneqq nk}$ can be cast as $k$ groups of samples $\bS^{(1)}, \ldots, \bS^{(k)} \iid \mcD^n$, and that the output of a query $\by \sim \phi^{(N)}(\bS)$ is the same as that of $\by' \sim \phi^{(n)}(\bS^{(\bi)})$ where $\bi \sim [k]$ is a random group. Using this insight, we are able to prove the following lemma.
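The following Python sketch (ours, with arbitrary toy values) illustrates this insight for $w = 1$: drawing a uniform element of a sample of size $N = nk$ is distributionally identical to drawing a uniform group of size $n$ and then a uniform element within it.
\begin{verbatim}
# Sketch (ours, toy values): for w = 1, answering a subsampling query on a
# sample of size N = n*k is the same as answering it on a uniformly random
# group of size n.
import random
from collections import Counter

def draw_direct(S):
    return random.choice(S)

def draw_via_groups(S, n):
    k = len(S) // n
    i = random.randrange(k)              # uniformly random group
    return random.choice(S[i * n:(i + 1) * n])

random.seed(0)
S = list(range(12))                      # N = 12, n = 3, k = 4
a = Counter(draw_direct(S) for _ in range(30000))
b = Counter(draw_via_groups(S, 3) for _ in range(30000))
print(a.most_common(3))                  # both close to uniform over S
print(b.most_common(3))
\end{verbatim}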
\begin{lemma}[It is exponentially unlikely many groups have high error]
\label{lem:many-groups}
In the setting of \Cref{lem:auto-boost}, let $\bS^{(1)}$ be the first $n$ elements of $\bS$, $\bS^{(2)}$ the next $n$, and so on. Then,
\begin{equation*}
\Prx_{\bS \sim \mcD^N, \bpsi \sim \mcA'(\bS)}\bracket*{\sum_{i \in [k]}\Ind[\bpsi(\bS^{(i)}) \geq \tau_{\bpsi}] \geq 0.03k} \leq e^{-k/300}.
\end{equation*}
\end{lemma}
The hypothesis of \Cref{lem:auto-boost} roughly corresponds to $\Pr[\bpsi(\bS^{(i)}) \geq \tau_{\bpsi}] \leq 1/100$. If these events were independent for each $i \in [k]$, then the conclusion of \Cref{lem:many-groups} would follow from a standard Chernoff bound. Unfortunately, they are not necessarily independent. To get around that, in \Cref{lem:direct-product} we extend a direct product theorem of Shaltiel's, showing, roughly speaking, that those events are no worse than independent.
We combine \Cref{lem:many-groups} with the following lemma.
\begin{lemma}
\label{lem:groups-to-overall}
For $N \geq nk$, let $\bS \in X^N$ be drawn from any permutation invariant distribution, meaning $\Pr[\bS = S] = \Pr[\bS = \sigma(S)]$ for any $\sigma$ a permutation of the indices. Let $\bS^{(1)}$ be the first $n$ elements of $\bS$, $\bS^{(2)}$ the next $n$, and so on. For any test $\psi:X \to [0,1]$ and threshold $\tau$, let $\bb$ be the random variable counting the number of $i \in [k]$ for which $\psi(\bS^{(i)}) \geq \tau$. Then,
\begin{equation*}
\Pr[\psi(\bS) \geq \tau + 1/n]\leq 200 \Pr[\bb \geq 0.03k].
\end{equation*}
\end{lemma}
\Cref{lem:auto-boost} is a straightforward consequence of the above two lemmas.
\begin{proof}[Proof of \Cref{lem:auto-boost} assuming \Cref{lem:many-groups,lem:groups-to-overall}]
The analyst chooses a test $\psi^{(\by)}$ as a function of the responses $\by \coloneqq (\by_1, \ldots, \by_T)$, where $\by_t \sim \phi_t(\bS)$. Our goal is to show that $\Pr[\psi^{(\by)}(\bS) \geq \tau_{\psi^{(\by)}} + 1/n] \leq 200 \exp(-k/300)$.
Conditioned on any possible sequence of responses $\by = y$, the distribution of $\bS$ is permutation invariant. Therefore,
\begin{align*}
\Pr[\psi^{(\by)}(\bS) \geq \tau_{\psi^{(\by)}} + 1/n] &= \Ex_{\by}\bracket*{\Prx_{\bS \mid \by} \bracket*{\psi^{(\by)}(\bS) \geq \tau_{\psi^{(\by)}} + 1/n}}\\
&\leq\Ex_{\by}\bracket*{ 200 \Prx_{\bS \mid \by}\bracket*{\sum_{i \in [k]} \Ind[\psi^{(\by)}(\bS^{(i)}) \geq \tau_{\psi^{(\by)}} ] \geq 0.03k}} \tag{\Cref{lem:groups-to-overall}} \\
&= 200 \Prx_{\bS \sim \mcD^N, \bpsi \sim \mcA'(\bS)}\bracket*{\sum_{i \in [k]}\Ind[\bpsi(\bS^{(i)}) \geq \tau_{\bpsi}] \geq 0.03k} \\
&\leq 200 e^{-k/300}\tag{\Cref{lem:many-groups}}.
\end{align*}
\end{proof}
\subsection{Proof of \texorpdfstring{\Cref{lem:many-groups}}{Lemma 7.2}}
In order to prove \Cref{lem:many-groups}, in \Cref{fig:analyst-game} we will define an ``analyst game" formalizing the setting in which an analyst has multiple distinct samples, each of which they can ask queries to. We then prove a direct product theorem analogous to Shaltiel's direct product theorem for fair decision trees \cite{Sha04}.
\begin{figure}[ht]
\captionsetup{width=.9\linewidth}
\begin{tcolorbox}[colback = white,arc=1mm, boxrule=0.25mm]
\vspace{2pt}
\textbf{Parameters:} A product distribution $\mcD \coloneqq \mcD_1 \times \cdots \times \mcD_k$ over domain $X$, budgets $b \in \R^k$, a class of queries $\Phi$ each mapping $X$ to a distribution of possible responses, function $\mathrm{cost}: \Phi \to \R_{\geq 0}$, and class of tests $\Psi$ each mapping $X$ to $\zo$. \\
\textbf{Setup:} Samples $\bx_1, \ldots, \bx_k \sim \mcD$ are drawn and \emph{not} revealed to the analyst.\\
\textbf{Execution:} The analyst repeats as many times as desired:
\begin{enumerate}[nolistsep,itemsep=2pt]
\item The analyst chooses a query $\phi \in \Phi$ and index $i \in [k]$.
\item The analyst receives the response $\by \sim \phi(\bx_i)$.
\item The budget is decremented: $b_i \leftarrow b_i - \mathrm{cost}(\phi)$.
\end{enumerate}\vspace{4pt}
Afterwards, the analyst chooses tests $\psi_1, \ldots, \psi_k$. The analyst wins if $b_i \geq 0$ and $\psi_i(\bx_i) = 1$ for all $i \in [k]$.
\end{tcolorbox}
\caption{An analyst game.}
\label{fig:analyst-game}
\end{figure}
\begin{lemma}[Direct product theorem for analyst games]
\label{lem:direct-product}
In \Cref{fig:analyst-game}, fix the domain $X$, query class $\Phi$, test class $\Psi$ and cost function $\mathrm{cost}$ for which $\inf_{\phi \in \Phi}(\mathrm{cost}(\phi)) > 0$. For any distributions $\mcD \coloneqq \mcD_1 \times \cdots \times \mcD_k$ and budgets $b \in \R^k$, let $\mathrm{AG}(\mcD, b)$ be the maximum probability an analyst wins the game described in \Cref{fig:analyst-game}. Then,
\begin{equation}
\label{eq:direct-prod}
\mathrm{AG}(\mcD, b) \leq \prod_{i \in [k]} \mathrm{AG}(\mcD_i, b_i).
\end{equation}
\end{lemma}
It's straightforward to see that the $(\geq)$ direction of \Cref{eq:direct-prod} also holds, but we will not need that direction in this paper.
\begin{proof}
First note that if, for any $i \in [k]$, $b_i < 0$, then both sides of \Cref{eq:direct-prod} are equal to $0$. We may therefore assume, without loss of generality that $b_i \geq 0$ for all $i \in [k]$.
Consider an arbitrary analyst, $\mcA$, for distribution $\mcD$ and budget $b$. We will prove that the probability $\mcA$ wins is at most $\prod_{i \in [k]} \mathrm{AG}(\mcD_i, b_i)$ by induction on the number of iterations of the loop in \Cref{fig:analyst-game} that $\mcA$ executes.\footnote{Note that since $\inf_{\phi \in \Phi}(\mathrm{cost}(\phi)) > 0$, the number of loop executions is finite.} In the base case, $\mcA$ executes the loop zero times and directly chooses tests. Then, $\mcA$'s probability of winning is upper bounded by
\begin{align*}
\sup_{\psi_1, \ldots, \psi_k \in \Psi} \Ex_{\bx \sim \mcD}\bracket*{ \prod_{i \in [k]} \psi_i(\bx_i)} &= \sup_{\psi_1, \ldots, \psi_k \in \Psi} \prod_{i \in [k]}\Ex_{\bx \sim \mcD}\bracket*{ \psi_i(\bx_i)} \tag{$\mcD$ is product} \\
&= \prod_{i \in [k]}\sup_{\psi_i \in \Psi} \Ex_{\bx_i \sim \mcD_i}\bracket*{ \psi_i(\bx_i)} \leq \prod_{i \in [k]}\mathrm{AG}(\mcD_i, b_i).
\end{align*}
In the case where $\mcA$ executes the loop $\geq 1$ times, let $\phi \in \Phi$ and $i\in[k]$ be the query and group respectively that $\mcA$ chooses in the first iteration of the loop. Using $b' \in \R^k$ as the vector satisfying $b'_i = b_i - \mathrm{cost}(\phi)$ and $b'_j = b_j$ for all $j \neq i$, the success probability of $\mcA$ is upper bounded by
\begin{align*}
\Ex_{\bx \sim \mcD, \by \sim \phi(\bx_i)} &\bracket*{\mathrm{AG}(\mcD \mid \phi(\bx_i) = \by, b')} \\
&\leq \Ex_{\bx \sim \mcD, \by \sim \phi(\bx_i)} \bracket*{\prod_{j \in [k]}\mathrm{AG}((\mcD \mid \phi(\bx_i) = \by)_j, b'_j)} \tag{inductive hypothesis} \\
& =\Ex_{\bx \sim \mcD, \by \sim \phi(\bx_i)} \bracket*{\mathrm{AG}((\mcD \mid \phi(\bx_i) = \by)_i, b_i - \mathrm{cost}(\phi)) \cdot \prod_{j \neq i} \mathrm{AG}(\mcD_j, b_j)}\\
&= \prod_{j \neq i} \mathrm{AG}(\mcD_j, b_j) \cdot \Ex_{\bx \sim \mcD, \by \sim \phi(\bx_i)} \bracket*{\mathrm{AG}((\mcD \mid \phi(\bx_i) = \by)_i, b_i - \mathrm{cost}(\phi))}.
\end{align*}
The quantity $\Ex_{\bx \sim \mcD, \by \sim \phi(\bx_i)} \bracket*{\mathrm{AG}((\mcD \mid \phi(\bx_i) = \by)_i, b_i - \mathrm{cost}(\phi))}$ is exactly the win probability on $\mcD_i, b_i$ of the analyst whose first query is $\phi$ and whose remaining strategy is optimal. Therefore, it is upper bounded by $\mathrm{AG}(\mcD_i, b_i)$, and so the win probability of $\mcA$ is upper bounded by $\prod_{i \in [k]} \mathrm{AG}(\mcD_i, b_i)$, as desired.
\end{proof}
We'll also use the following version of the classic Chernoff bound.
\begin{fact}[Chernoff bound]
\label{fact:chernoff}
Let $\bx_1, \ldots, \bx_k$ be random variables each taking on values in $\zo$ and satisfying, for some $p < 1$ and all $S \subseteq [k]$,
\begin{equation*}
\Pr\bracket*{\bigcap_{i \in S} \set{\bx_i = 1}} \leq p^{|S|}.
\end{equation*}
Then, for any $\delta > 0$,
\begin{equation*}
\Pr\bracket*{\sum_{i \in [k]} \bx_i \geq (1 +\delta)pk} \leq \exp\paren*{-\frac{\delta^2 pk}{ 2+\delta}}.
\end{equation*}
\end{fact}
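As a quick numeric sanity check (ours, not part of the proof), the following Python sketch compares the exact binomial tail in the independent case to the bound above with the parameters used below ($p = 1/100$ and threshold $0.02k$, i.e., $\delta = 1$), for which the bound evaluates to $\exp(-k/300)$.
\begin{verbatim}
# Numeric sanity check (ours, not part of the proof): in the independent
# case with p = 1/100 and threshold 0.02k (i.e., delta = 1), the bound is
# exp(-pk/3) = exp(-k/300).  Compare it to the exact binomial tail.
from math import exp

def binomial_tail(k, p, threshold):
    # Pr[Binomial(k, p) >= threshold], computed term by term
    term, total, t = (1 - p) ** k, 0.0, int(threshold)
    for i in range(k + 1):
        if i >= t:
            total += term
        if i < k:
            term *= (k - i) / (i + 1) * p / (1 - p)   # Pr[X=i] -> Pr[X=i+1]
    return total

for k in (500, 1000, 2000):
    print(k, binomial_tail(k, 0.01, 0.02 * k), exp(-k / 300))
\end{verbatim}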
\begin{proof}[Proof of \Cref{lem:many-groups}]
Let $\bS^{(k+1)}$ be the remaining $N - nk$ elements not in $\bS^{(1)}, \ldots, \bS^{(k)}$. At each time step $t \in [T]$, the analyst asks a query $\phi_t:X \to Y_t$ and receives the response $\by_t = \phi_t(\bx_t)$. We can think of $\bx_t$ as being chosen via a two-tiered process: first, a group $\bi_t \in [k+1]$ is chosen, and then $\bx_t$ is chosen uniformly from $\bS^{(\bi_t)}$. We will show that \Cref{lem:many-groups} holds even if the analyst gets to choose $\bi_t$ at each step.
For each group $i \in [k]$, let $\ba_i$ be the indicator that the total cost of queries to $\bS^{(i)}$ is at most $b$ and $\psi(\bS^{(i)}) \geq \tau_{\psi}$. Since there is a total budget of $B = bk/100$, regardless of how the analyst partitions the queries among the groups, at most $\frac{k}{100}$ groups will receive queries with total cost $\geq b$. It is therefore sufficient to bound the probability that $\sum_{i \in [k]} \ba_i \geq 0.02k$. Applying \Cref{lem:direct-product} and the hypothesis of \Cref{lem:auto-boost}, for any set of groups $G \subseteq [k]$,
\begin{equation*}
\Pr\bracket*{\bigcap_{i \in G} \ba_i} \leq \paren*{\frac{1}{100}}^{|G|}.
\end{equation*}
Applying the Chernoff bound in \Cref{fact:chernoff},
\begin{equation*}
\Pr\bracket*{\sum_{i \in [k]} \ba_i \geq 0.02k} \leq \exp\paren*{-\frac{k}{300}}. \qedhere
\end{equation*}
\end{proof}
\subsection{Proof of \texorpdfstring{\Cref{lem:groups-to-overall}}{Lemma 7.3}}
We begin by proving the following technical lemma.
\begin{restatable}{lemma}{exceedMean}
\label{lem:sample-exceeds-mean}
For any $S \in [0,1]^N$ and $n < N$, let $\bx$ be sampled by taking the sum of $n$ elements from $S$ chosen uniformly without replacement. Then, if $\Var[\bx]$ is nonzero,
\begin{equation*}
\Pr[\bx > \Ex[\bx]] \geq \frac{2 \sqrt{3} - 3}{12 + \frac{4}{\Var[\bx]}}
\end{equation*}
\end{restatable}
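Before turning to the proof, the following Monte Carlo sketch (ours, with arbitrary toy values for $S$, $n$, and the number of trials) illustrates the bound numerically.
\begin{verbatim}
# Monte Carlo illustration (ours, arbitrary toy values) of the bound in the
# lemma: x is the sum of n elements drawn without replacement from S.
import random
from math import sqrt
from statistics import pvariance

random.seed(1)
S = [random.random() for _ in range(40)]      # N = 40, values in [0, 1]
n, trials = 10, 100000
mu = n * sum(S) / len(S)                      # exact E[x]
draws = [sum(random.sample(S, n)) for _ in range(trials)]
empirical = sum(d > mu for d in draws) / trials
bound = (2 * sqrt(3) - 3) / (12 + 4 / pvariance(draws))
print(empirical, bound, empirical >= bound)
\end{verbatim}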
One of two things must be true: either $\Var[\bx] \leq 1$, in which case standard techniques give that the median of $\bx$ is within $1$ of its mean, or $\Var[\bx] > 1$, in which case \Cref{lem:sample-exceeds-mean} gives that $\bx$ exceeds its mean with constant probability. To prove \Cref{lem:sample-exceeds-mean}, we use the following inequality connecting the moments of $\bx$ to the probability it exceeds its mean.
\begin{fact}[\cite{V08}]
\label{fact:moment-to-exceed-mean}
For any mean-$0$ random variable $\bx$ with nonzero variance,
\begin{equation*}
\Pr[\bx > 0] \geq \frac{2\sqrt{3} - 3}{\frac{\Ex[\bx^4]}{\Ex[\bx^2]^2}}.
\end{equation*}
\end{fact}
Hence, to prove \Cref{lem:sample-exceeds-mean}, it suffices to estimate the moments of $\bx$. The moments of $\bx$ can be explicitly computed as a function of the elements in $S$, but this results in a rather large number of terms. To make the arithmetic simpler, we instead compare to the scenario where the sampling is performed \emph{with replacement}.
\begin{fact}[\cite{hoeffding1994probability}]
\label{fact:hoe-replacement}
For some set $S \in \R^N$ and $n \leq N$, let $\bx$ be the sum of $n$ elements sampled uniformly \emph{without replacement} from $S$, and $\by$ be the sum of $n$ elements sampled \emph{with replacement} from $S$. Then, for any convex $f$,
\begin{equation*}
\Ex[f(\bx)]\leq\Ex[f(\by)].
\end{equation*}
\end{fact}
\begin{corollary}
\label{cor:reasonable-replacement}
For any set $S \in \R^N$ whose elements sum to $0$, let $\bx$ and $\by$ be as in \Cref{fact:hoe-replacement}. Then,
\begin{equation*}
\frac{\Ex[\bx^4]}{\Ex[\bx^2]^2} \leq \paren*{\frac{N-1}{N-n}}^2 \cdot \frac{\Ex[\by^4]}{\Ex[\by^2]^2}.
\end{equation*}
\end{corollary}
\begin{proof}
By \Cref{fact:hoe-replacement}, $\Ex[\bx^4] \leq \Ex[\by^4]$. Therefore it suffices to show that $\Ex[\bx^2]= \Ex[\by^2] \cdot \frac{N-n}{N-1}$.
Let $\bx_1, \ldots, \bx_n$ and $\by_1, \ldots, \by_n$ be the $n$ elements of $S$ chosen without replacement and with replacement respectively. Then, since the $\by_i$ are mean-$0$ and independent, for $i \neq j \in [n]$, $\Ex[\by_i \by_j] = 0$. Meanwhile, using the fact that the sum of elements in $S$ is $0$,
\begin{equation*}
\Ex[\bx_i \bx_j] = \frac{1}{N(N-1)}\sum_{a \in [N]}\sum_{b \in [N] \setminus \set{a}}S_a S_b = \frac{1}{N(N-1)}\sum_{a \in [N]} S_a \paren*{\sum_{b \in [N]}S_b - S_a} = -\frac{1}{N(N-1)} \sum_{a \in [N]}S_a^2.
\end{equation*}
Furthermore, we have for any $i \in [n]$, that
\begin{equation*}
\Ex[\bx_i^2] = \Ex[\by_i^2] = \frac{1}{N} \sum_{a \in [N]} S_a^2.
\end{equation*}
Next, we can compute the second moment of $\by$,
\begin{equation*}
\Ex[\by^2] = \sum_{i \in [n]}\Ex[\by_i^2] = \frac{n}{N}\sum_{a \in [N]} S_a^2.
\end{equation*}
For $\bx$,
\begin{equation*}
\Ex[\bx^2] = \sum_{i \in [n]}\Ex[\bx_i^2] + \sum_{i \neq j} \Ex[\bx_i \bx_j] = \frac{n}{N}\sum_{a \in [N]} S_a^2 - \frac{n(n-1)}{N(N-1)}\sum_{a \in [N]} S_a^2 = \frac{n}{N} \paren*{1 - \frac{n-1}{N-1}}\sum_{a \in [N]} S_a^2.
\end{equation*}
Comparing the above gives the desired bound.
\end{proof}
Next, we bound the desired moment ratio in the setting where the elements are sampled \emph{with} replacement.
\begin{proposition}
\label{prop:moment-ratio}
Let $\by_1, \ldots, \by_n$ be iid mean-$0$ variables each bounded on $[-1,1]$. Then, for $\by \coloneqq \sum_{i \in [n]}\by_i$
\begin{equation*}
\frac{\Ex[\by^4]}{\Ex[\by^2]^2} \leq 3 + \frac{1}{\Var[\by]}
\end{equation*}
\end{proposition}
\begin{proof}
We'll denote $\sigma^2 \coloneqq \Ex[\by_i^2]$. Since $\by_i$ is bounded, we further have that $\Ex[\by_i^4] \leq \sigma^2$. Then,
\begin{equation*}
\Ex[\by^2] = n \sigma^2.
\end{equation*}
Expanding $\Ex[\by^4]$ gives many terms. Luckily, since the $\by_i$ are each independent and mean-$0$, most of those terms cancel. We are left with,
\begin{equation*}
\Ex[\by^4] = n\Ex[\by_i^4] + 3n(n-1)(\sigma^2)^2 \leq n\sigma^2 + 3n^2\sigma^4.
\end{equation*}
This gives
\begin{equation*}
\frac{\Ex[\by^4]}{\Ex[\by^2]^2} \leq \frac{n\sigma^2 + 3n^2 \sigma^4}{n^2\sigma^4} = 3 + \frac{1}{n\sigma^2}.
\end{equation*}
The desired result follows from $\Var[\by] = n\sigma^2$.
\end{proof}
Finally, we complete the proof of \Cref{lem:sample-exceeds-mean}.
\begin{proof}
Let $\mu \coloneqq \frac{1}{N} \sum_{x \in S} x$, and $S'$ be a copy of $S$ with each element shifted by $-\mu$. Clearly, each element of $S'$ is bounded on $[-1,1]$ and they sum to $0$. It suffices to bound $\Pr[\bx' > 0]$ for $\bx'$ being the sum of $n$ uniform elements chosen without replacement from $S'$. Furthermore, without loss of generality, we may assume that $n \leq \frac{N}{2}$: if $n > N/2$, we may instead consider the sum of the elements \emph{not} sampled, since $\bx' > 0$ iff the sum of the elements not sampled is negative.
Let $\by'$ be the sum of $n$ elements chosen uniformly \emph{with} replacement from $S'$. Then,
\begin{align*}
\Pr[\bx > \Ex[\bx]] &= \Pr[\bx' > 0] \\
&\geq \frac{2\sqrt{3} - 3}{\frac{\Ex[\bx'^4]}{\Ex[\bx'^2]^2}} \tag{\Cref{fact:moment-to-exceed-mean}} \\
& \geq \frac{2\sqrt{3} - 3}{\paren*{\frac{N-1}{N-n}}^2\frac{\Ex[\by'^4]}{\Ex[\by'^2]^2}} \tag{\Cref{cor:reasonable-replacement}}\\
& \geq \frac{2\sqrt{3} - 3}{4\frac{\Ex[\by'^4]}{\Ex[\by'^2]^2}} \tag{$n \leq N/2$}\\
& \geq \frac{2\sqrt{3} - 3}{4\paren*{3 + \frac{1}{\Var[\by']}}} \tag{\Cref{prop:moment-ratio}}\\
& \geq \frac{2\sqrt{3} - 3}{4\paren*{3 + \frac{1}{\Var[\bx]}}} \tag{\Cref{fact:hoe-replacement}}
\end{align*}
\end{proof}
We'll use the following one-sided variant of Chebyshev's inequality.
\begin{fact}[Cantelli's inequality]
\label{fact:cantelli}
Let $\bx$ be a random variable with variance $\sigma^2$. Then,
\begin{equation*}
\Pr[\bx - \Ex[\bx] \geq \eps \sigma] \leq \frac{1}{1 + \eps^2}.
\end{equation*}
\end{fact}
As an easy consequence of the above two results, we obtain the following.
\begin{corollary}
\label{cor:sample-exceeds-mean}
For $\bx$ as in \Cref{lem:sample-exceeds-mean},
\begin{equation*}
\Pr[\bx > \Ex[\bx] - 1] \geq \frac{2 \sqrt{3} - 3}{13} > 0.0357.
\end{equation*}
\end{corollary}
\begin{proof}
If $\Var[\bx] \geq 4$, then $12 + 4/\Var[\bx] \leq 13$, and the desired result follows from \Cref{lem:sample-exceeds-mean}. Otherwise, $\Var[\bx] < 4$, and applying \Cref{fact:cantelli} to $-\bx$ with $\eps\sigma = 1$ gives $\Pr[\bx \leq \Ex[\bx] - 1] \leq \frac{\Var[\bx]}{\Var[\bx] + 1} < \frac{4}{5}$, so $\Pr[\bx > \Ex[\bx] - 1] \geq \frac{1}{5} > \frac{2\sqrt{3}-3}{13}$.
\end{proof}
\begin{proof}[Proof of \Cref{lem:groups-to-overall}]
We'll use $\ba$ as shorthand for $\Ind[\psi(\bS) \geq \tau + 1/n]$. Since $\Pr[\bb \geq 0.03k] \geq \Pr[\bb \geq 0.03k \mid \ba] \cdot \Pr[\ba]$,
\begin{equation*}
\Pr[\ba] \leq \frac{\Pr[\bb \geq 0.03k]}{\Pr[\bb \geq 0.03k \mid \ba]}.
\end{equation*}
Therefore, it suffices to show that $\Pr[\bb \geq 0.03k \mid \ba] \geq \frac{1}{200}$. By \Cref{cor:sample-exceeds-mean}, for any $i \in [k]$
\begin{equation*}
\Pr[\psi(\bS^{(i)}) \geq \tau \mid \ba] > 0.0357.
\end{equation*}
Using linearity of expectation,
\begin{equation*}
\Ex[\bb \mid \ba] > 0.0357k.
\end{equation*}
The random variable $k - \bb$ is nonnegative and satisfies
\begin{equation*}
\Ex[k - \bb \mid \ba] < k -0.0357k = 0.9643k.
\end{equation*}
Therefore, by Markov's inequality
\begin{equation*}
\Pr[k - \bb \geq 0.97k \mid \ba] \leq \frac{0.9643k}{0.97k} < 0.995.
\end{equation*}
Equivalently, $\bb \geq 0.03k$ with probability at least $0.005$ conditioned on $\psi(\bS) \geq\tau + 1/n$, which is exactly what we wished to show.
\end{proof}
\subsection{Proof of \texorpdfstring{\Cref{thm:high-probability}}{Theorem 4} and the second part of \texorpdfstring{\Cref{thm:main-noisy}}{Theorem 5}}
\begin{proof}
We'll prove the second part of \Cref{thm:main-noisy}, as \Cref{thm:high-probability} follows from it. Let $k = O(\log(1/\delta))$. First, we note that if $\mcA$ is $(\mathrm{cost}_{n,\delta}, b)$-budgeted almost surely (according to \Cref{eq:cost-noisy-hp}), then it is $(\mathrm{cost}_{n/k}, O(bk))$-budgeted almost surely (according to \Cref{eq:cost-noisy-exp}). For each test function $\psi:X^1 \to [0,1]$, define the threshold to be
\begin{equation*}
\tau_{\psi} \coloneqq \psi(\mcD) + O\paren*{\max\paren*{\frac{k(b+1)}{n}, \sqrt{\frac{k(b+1)}{n} \cdot \Var_{\mcD}(\psi)}}}.
\end{equation*}
By the first part of \Cref{thm:main-noisy} and Markov's inequality, for any analyst $\mcA$ that is $(\mathrm{cost}_{n/k}, O(b))$-budgeted,
\begin{equation*}
\Prx_{\bS \sim \mcD^n, \bpsi \sim \mcA(\bS)}[\psi(\bS) \geq \tau_{\psi}] \leq \frac{1}{100}.
\end{equation*}
By \Cref{lem:auto-boost}, for any analyst $\mcA'$ that is $(\mathrm{cost}_{n/k}, O(bk))$-budgeted almost surely, or equivalently, is $(\mathrm{cost}_{n,\delta}, b)$-budgeted almost surely
\begin{equation*}
\Prx_{\bS \sim \mcD^{n}, \bpsi \sim \mcA'(\bS)}\bracket*{\psi(\bS) \geq \tau_{\psi} + k/n} \leq \exp(-\Omega(k)) \leq \delta/2.
\end{equation*}
Therefore, by a union bound applied to both $\psi$ and $-\psi$,
\begin{equation*}
\Prx_{\bS \sim \mcD^{n}, \bpsi \sim \mcA'(\bS)}\bracket*{\abs{\psi(\bS) - \psi(\mcD)} > O\paren*{\max\paren*{\frac{k(b+1)}{n}, \sqrt{\frac{k(b+1)}{n} \cdot \Var_{\mcD}(\psi)}}} + k/n} \leq \delta.
\end{equation*}
Equivalently, substituting in the definition of error in \Cref{def:error-simple} and $k \coloneqq O(\log(1/\delta))$
\begin{equation*}
\Prx_{\bS \sim \mcD^{n}, \bpsi \sim \mcA'(\bS)}\bracket*{\mathrm{error}(\psi, \bS, \mcD)\geq O\paren*{\log(1/\delta)\cdot\paren*{\frac{b + 1}{n}}}} \leq \delta. \qedhere
\end{equation*}
\end{proof}
\section{Bounding ALKL-stability from \texorpdfstring{$\chi^2$}{chi-squared}-stability}
\label{sec:stability}
The starting point of our analysis is the work of \cite{FS18}. They introduce a notion of \emph{algorithmic stability}, measuring how much the output of the algorithm depends on its input.
\begin{definition}[Average leave-one-out KL stability, \Cref{def:ALKL-first} restated, \cite{FS18}]
A randomized algorithm $\mcM: X^n \to Y$ is $\eps$-ALKL stable with respect to a randomized algorithm $\mcM': X^{n-1} \to Y$ if, for all samples $S \in X^n$,
\begin{equation*}
\Ex_{\bi \sim [n]} \bracket*{\KLbig{\mcM(S)}{\mcM'(S_{-\bi})}} \leq \eps.
\end{equation*}
We simply say that $\mcM$ is $\eps$-ALKL stable if there exists some $\mcM'$ with respect to which it is $\eps$-ALKL stable.
\end{definition}
Intuitively, the output of a mechanism that is $\eps$-ALKL stable cannot depend too much on individual points in the sample. Feldman and Steinke were able to formalize this intuition to show that, if each step of an adaptive mechanism is ALKL stable, it does not reveal too much information about its input.
\begin{fact}[ALKL stability bounds mutual information \cite{FS18}]
\label{fact:KL-MI}
If $\mcM:X^n \to Y$ is $\eps$-ALKL stable, then for $\bS \sim \mcD^n$ and $\by \sim \mcM(\bS)$,
\begin{equation*}
I(\bS; \by) \leq n\eps.
\end{equation*}
Furthermore, if $\mcM_1:X^n \to Y_1$ is $\eps_1$-ALKL stable, and for each $y_1 \in Y_1$, $\mcM_2^{y_1}:X^n \to Y_2$ is $\eps_2$-ALKL stable, then the randomized algorithm that takes as input a sample $S$, draws $\by_1 \sim \mcM_1(S)$, $\by_2 \sim \mcM_2^{\by_1}(S)$, and outputs $\by = (\by_1, \by_2)$ is $(\eps_1 + \eps_2)$-ALKL stable.
\end{fact}
The second component of \Cref{fact:KL-MI} says that ALKL stability composes adaptively: to bound the ALKL stability of a mechanism, it is sufficient to bound the ALKL stability of each individual step. Taken together with the first part of \Cref{fact:KL-MI}, that ALKL stability upper bounds mutual information, this gives a convenient way to upper bound the mutual information between the output of a mechanism $\mcM$ and its input.
In this work, we will not directly bound the ALKL stability of subsampling queries. Instead, we find it easier to first introduce the following intermediate notion of stability.
\begin{definition}[Average leave-one-out $\chi^2$ stability, \Cref{def:chi-first} restated]
\label{def:chi-stability}
A randomized algorithm $\mcM: X^n \to Y$ is $\eps$-$\chi^2$ stable with respect to a randomized algorithm $\mcM': X^{n-1} \to Y$ if, for all samples $S \in X^n$,
\begin{equation*}
\Ex_{\bi \sim [n]} \bracket*{\chisqbig{\mcM(S)}{\mcM'(S_{-\bi})}} \leq \eps.
\end{equation*}
We simply say that $\mcM$ is $\eps$-$\chi^2$ stable if there exists some $\mcM'$ with respect to which it is $\eps$-$\chi^2$ stable.
\end{definition}
The key technical advantage of $\chi^2$ stability is its quadratic nature. This makes it more amenable to the techniques of linear algebra used in \Cref{sec:MI}. Interestingly, $\chi^2$ does not appear to directly satisfy the same notion of adaptive composability satisfied by ALKL stability. However, we are able to show that it implies ALKL stability, and take advantage of its adaptive composition properties.
\begin{theorem}[$\chi^2$ to ALKL stability]
\label{thm:chi-to-KL}
If $\mcM: X^n \to Y$ is $\eps$-$\chi^2$ stable with respect to $\mcM': X^{n-1} \to Y$, then:
\begin{enumerate}
\item It is $\eps'$-ALKL stable for
\begin{equation*}
\eps' \coloneqq \eps \cdot(3 + 2 \log(|Y|/\eps)).
\end{equation*}
Note that this is with respect to a \emph{different} randomized algorithm than $\mcM'$.\footnote{Indeed, without further assumptions, there may not exist any finite $\eps'$ for which $\mcM$ is $\eps'$-ALKL stable with respect to $\mcM'$.}
\item If for some $\tau \in (0,1]$, all $S \in X^n$, $i \in [n]$, and $y \in Y$, $\mcM'(S_{-i})(y) \geq \tau \cdot \mcM(S)(y)$, then $\mcM$ is $\eps'$-ALKL stable with respect to $\mcM'$ for
\begin{equation}
\eps' \coloneqq \eps \cdot \paren*{1 + \log(1/\tau)}. \label{eq:kl-bound-ratio}
\end{equation}
\end{enumerate}
\end{theorem}
The proof of \Cref{thm:chi-to-KL} uses the following two technical propositions relating KL and $\chi^2$-divergences.
\begin{proposition}
\label{prop:chi-to-KL}
For any distributions $\mcD$, $\mcE$ supported on $Y$ and any $\tau \in (0,1]$ such that $\mcE(y) \geq \tau \cdot \mcD(y)$ for all $y \in Y$, it holds that
\begin{equation*}
\KL{\mcD}{\mcE} \leq (1 + \log(1/\tau)) \cdot \chisq{\mcD}{\mcE}.
\end{equation*}
\end{proposition}
\begin{proof}
Let $\by$ be drawn $\by \sim \mcD$ and $\bt$ be the random variable $\bt \coloneqq \frac{\mcE(\by)}{\mcD(\by)}$. Then, we can write KL-divergence and $\chi^2$-divergence as
\begin{equation*}
\KL{\mcD}{\mcE} \coloneqq \Ex\bracket*{-\log \bt} \quad\quad\quad\text{and}\quad\quad\quad \chisq{\mcD}{\mcE} \coloneqq \Ex\bracket*{(\bt-1)^2}.
\end{equation*}
Note that
\begin{equation*}
\Ex[\bt] = \sum_{y \in Y} \mcD(y) \frac{\mcE(y)}{\mcD(y)} = \sum_{y \in Y} \mcE(y) = 1.
\end{equation*}
Therefore, we can equivalently write KL-divergence in the following form
\begin{equation*}
\KL{\mcD}{\mcE} = \Ex\bracket*{-\log \bt + (\bt - 1)}.
\end{equation*}
We claim that for all $t > 0$,
\begin{equation}
\label{eq:t-log-bound}
-\log t + (t - 1) \leq \begin{cases}
\frac{1}{2}(t-1)^2 & \text{if }t \geq 1 \\
(1 - \log t) \cdot (t - 1)^2 &\text{if }0 < t < 1.
\end{cases}
\end{equation}
For $t \geq 1$, consider the function $f(t) \coloneqq \frac{1}{2}(t-1)^2 - \paren*{-\log t + (t - 1)}$. It satisfies $f(1) = 0$ and $f'(t) = (t-1)^2/t$, which is nonnegative for $t > 0$. Therefore, $f(t) \geq 0$ for all $t \geq 1$, which is sufficient for the first case in \Cref{eq:t-log-bound}.
For the second case of $t \in (0,1)$, consider the function $g(t) \coloneqq (1 - \log t) \cdot (t - 1)^2 - \paren*{-\log t + (t - 1)}$. It satisfies $g(1) = 0$ and $g'(t) = (t-1)(1 - 2\log t)$. That derivative is nonpositive for $t \in (0,1)$, so $g(t) \geq 0$ for all $t \in (0,1)$, proving the second case of \Cref{eq:t-log-bound}.
Finally, we use the fact that for all $t$ in the support of $\bt$, $t \geq \tau$ where $\tau \in (0,1]$. This gives
\begin{align*}
\KL{\mcD}{\mcE} &= \Ex\bracket*{-\log \bt + (\bt - 1)} \\
&\leq \Ex\bracket*{(1 - \log \tau )\cdot(\bt - 1)^2} \\
&= (1 + \log(1/\tau)) \cdot \chisq{\mcD}{\mcE}. \qedhere
\end{align*}
\end{proof}
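As a numeric sanity check (ours), the following Python sketch verifies the inequality on random discrete distributions, taking $\tau$ to be the largest value for which the pointwise condition $\mcE(y) \geq \tau \cdot \mcD(y)$ holds.
\begin{verbatim}
# Numeric sanity check (ours): KL(D||E) <= (1 + log(1/tau)) * chi2(D||E)
# on random full-support distributions, with tau the largest valid value.
import math, random

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def kl(D, E):
    return sum(p * math.log(p / q) for p, q in zip(D, E) if p > 0)

def chi2(D, E):
    return sum((p - q) ** 2 / p for p, q in zip(D, E) if p > 0)

random.seed(2)
for _ in range(5):
    D = normalize([random.random() + 0.05 for _ in range(6)])
    E = normalize([random.random() + 0.05 for _ in range(6)])
    tau = min(q / p for p, q in zip(D, E))    # E(y) >= tau * D(y) for all y
    lhs, rhs = kl(D, E), (1 + math.log(1 / tau)) * chi2(D, E)
    print(round(lhs, 4), round(rhs, 4), lhs <= rhs)
\end{verbatim}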
\begin{proposition}
\label{prop:chi-to-KL-mix}
For any distributions $\mcD$, $\mcE$ supported on $Y$ and $\tau \in (0,1]$, let $\mcE'$ be the mixture distribution $\mcE' = (1 - \tau)\cdot\mcE + \tau \cdot \mathrm{Unif}(Y)$. Then,
\begin{equation*}
\KL{\mcD}{\mcE'} \leq (1 + \log(|Y|/\tau)) \cdot (\chisq{\mcD}{\mcE} + \tau) + \tau.
\end{equation*}
\end{proposition}
\begin{proof}
As in the proof of \Cref{prop:chi-to-KL} let $\by$ be drawn $\by \sim \mcD$ and $\bt$ be the random variable $\bt \coloneqq \frac{\mcE(\by)}{\mcD(\by)}$. We also define
\begin{equation*}
\bt' \coloneqq \frac{\mcE'(\by)}{\mcD(\by)} = \frac{(1 - \tau)\cdot \mcE(\by) + \frac{\tau}{|Y|}}{\mcD(\by)} = (1 - \tau)\cdot\bt + \frac{\tau}{|Y|\cdot\mcD(\by)}.
\end{equation*}
Let $f(t) \coloneqq -\log t + t - 1$. We claim that, for all $t,\Delta > 0$,
\begin{equation}
\label{eq:t-delta-bound}
f(t + \Delta) \leq (1 - \log \min(1,\Delta))\cdot (t-1)^2 + \Delta.
\end{equation}
The proof of \Cref{eq:t-delta-bound} is separated into three cases.
\begin{enumerate}
\item[\textbf{Case 1:}] $t \geq 1$. Here, we use that $f'(t) \leq 1$ which means that $f(t +\Delta) \leq f(t) + \Delta$. The desired bound follows from \Cref{eq:t-log-bound} and $(1 - \log \min(1,\Delta)) \geq \frac{1}{2}$.
\item[\textbf{Case 2:}] $t < 1$ and $t + \Delta \geq 1$. Once again using $f'(t) \leq 1$, we have that $f(t + \Delta) \leq f(1) + (t + \Delta - 1)$. The desired bound follows from $f(1) = 0$ and $(t + \Delta - 1) \leq \Delta$.
\item[\textbf{Case 3:}] $t + \Delta < 1$. Then,
\begin{align*}
f(t + \Delta) &\leq (1 - \log(t + \Delta))\cdot(t + \Delta - 1)^2 \tag{\Cref{eq:t-log-bound}} \\
&\leq (1 - \log(t + \Delta))\cdot(t - 1)^2 \tag{$t + \Delta < 1$} \\
& \leq (1 - \log \Delta)(t- 1)^2.
\end{align*}
\end{enumerate}
Applying \Cref{eq:t-delta-bound} and using the fact that $\mcD(\by) \leq 1$,
\begin{equation*}
f(\bt') = f\paren*{(1 - \tau)\cdot\bt + \frac{\tau}{|Y|\cdot\mcD(\by)}} \leq \paren*{1 - \log\paren*{\frac{\tau}{|Y|}}} \cdot((1 - \tau)\cdot \bt - 1)^2 + \frac{\tau}{|Y|\cdot\mcD(\by)}.
\end{equation*}
For any $c \in [0,1]$ and $t \in \R$,
\begin{equation}
\label{eq:quadratic-scaling}
(ct - 1)^2 - (t-1)^2 = (c^2 - 1)\paren*{t - \frac{1}{c+1}}^2 + \frac{1-c}{1+c} \leq \frac{1-c}{1+c} \leq 1-c.
\end{equation}
Finally, we bound,
\begin{align*}
\KL{\mcD}{\mcE'} &= \Ex[f(\bt')] \leq \Ex\bracket*{\paren*{1 - \log\paren*{\frac{\tau}{|Y|}}} \cdot((1 - \tau)\cdot \bt - 1)^2 + \frac{\tau}{|Y|\cdot\mcD(\by)}} \\
&\leq \Ex\bracket*{\paren*{1 - \log\paren*{\frac{\tau}{|Y|}}} \cdot((\bt - 1)^2 + \tau) + \frac{\tau}{|Y|\cdot\mcD(\by)}} \tag{\Cref{eq:quadratic-scaling}} \\
&= \paren*{1 + \log\paren*{\frac{|Y|}{\tau}}} \cdot \paren*{\tau + \chisq{\mcD}{\mcE}} + \sum_{y \in Y} \mcD(y) \cdot \frac{\tau}{|Y|\cdot\mcD(y)} \\
&= \paren*{1 + \log\paren*{\frac{|Y|}{\tau}}} \cdot \paren*{\tau + \chisq{\mcD}{\mcE}} + \tau. \qedhere
\end{align*}
\end{proof}
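The role of the uniform smoothing is easiest to see numerically: even when $\mcE$ misses part of $\supp(\mcD)$, so that $\KL{\mcD}{\mcE}$ is infinite, the smoothed mixture $\mcE'$ has finite KL divergence from $\mcD$, controlled by the reversed $\chi^2$ divergence as in the bound above. The following self-contained Python sketch (ours, with arbitrary toy distributions) checks this.
\begin{verbatim}
# Numeric sanity check (ours, toy distributions): even when E misses part of
# supp(D), so that KL(D||E) is infinite, the smoothed mixture
# E' = (1 - tau) E + tau Unif(Y) has finite KL divergence, bounded as above.
import math, random

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def kl(D, E):
    if any(p > 0 and q == 0 for p, q in zip(D, E)):
        return math.inf
    return sum(p * math.log(p / q) for p, q in zip(D, E) if p > 0)

def chi2(D, E):
    return sum((p - q) ** 2 / p for p, q in zip(D, E) if p > 0)

random.seed(3)
Y, tau = 8, 0.1
for _ in range(5):
    D = normalize([random.random() + 0.05 for _ in range(Y)])
    E = normalize([random.random() + 0.05 for _ in range(Y)])
    E[0] = 0.0                                # E misses a point of supp(D)
    E = normalize(E)
    E_mix = [(1 - tau) * q + tau / Y for q in E]
    lhs = kl(D, E_mix)                        # finite thanks to smoothing
    rhs = (1 + math.log(Y / tau)) * (chi2(D, E) + tau) + tau
    print(round(kl(D, E), 4), round(lhs, 4), round(rhs, 4), lhs <= rhs)
\end{verbatim}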
We conclude this section with a proof of \Cref{thm:chi-to-KL}.
\begin{proof}
We begin with the first case, aiming to prove that there is a mechanism $\mcM''$ with respect to which $\mcM$ is $\eps'$-ALKL stable for $\eps' \coloneqq \eps \cdot(3 + 2 \log(|Y|/\eps))$. Given a sample $S_{-i} \in X^{n-1}$, $\mcM''$ runs $\by \sim \mcM'(S_{-i})$. With probability $(1 - \eps)$, it outputs $\by$. Otherwise, with probability $\eps$, it outputs a draw from $\mathrm{Unif}(Y)$. Then, for any $S \in X^n$,
\begin{align*}
\Ex_{\bi \sim [n]} &\bracket*{\KLbig{\mcM(S)}{\mcM''(S_{-\bi})}} \\
&= \Ex_{\bi \sim [n]} \bracket*{\KLbig{\mcM(S)}{(1-\eps)\cdot \mcM'(S_{-\bi}) + \eps \cdot \mathrm{Unif}(Y)}} \\
&\leq \Ex_{\bi \sim [n]} \bracket*{(1 + \log(|Y|/\eps)) \cdot (\chisq{\mcM(S)}{\mcM'(S_{-\bi})} + \eps) + \eps} \tag{\Cref{prop:chi-to-KL-mix}}\\
&= (1 + \log(|Y|/\eps)) \cdot \paren*{\Ex_{\bi \sim [n]} \bracket*{\chisq{\mcM(S)}{\mcM'(S_{-\bi})}} + \eps} + \eps \tag{Linearity of expectation}\\
&\leq (1 + \log(|Y|/\eps)) \cdot \paren*{\eps + \eps} + \eps \tag{$\mcM$ is $\eps$-$\chi^2$ stable w.r.t. $\mcM'$} \\
&=\eps \cdot(3 + 2 \log(|Y|/\eps))= \eps'.
\end{align*}
The second case is similar, but we apply \Cref{prop:chi-to-KL} instead of \Cref{prop:chi-to-KL-mix}. Letting $\tau$ be as defined in the statement of \Cref{thm:chi-to-KL},
\begin{align*}
\Ex_{\bi \sim [n]} &\bracket*{\KLbig{\mcM(S)}{\mcM'(S_{-\bi})}} \\
&\leq \Ex_{\bi \sim [n]} \bracket*{(1 + \log(1/\tau)) \cdot \chisq{\mcM(S)}{\mcM'(S_{-\bi})} } \tag{\Cref{prop:chi-to-KL}}\\
&= (1 + \log(1/\tau)) \cdot\Ex_{\bi \sim [n]} \bracket*{\chisq{\mcM(S)}{\mcM'(S_{-\bi})}} \tag{Linearity of expectation}\\
&\leq (1 + \log(1/\tau)) \cdot \eps \tag{$\mcM$ is $\eps$-$\chi^2$ stable w.r.t. $\mcM'$} \\
&=\eps'.
\end{align*}
\end{proof}
\section{Technical Overview}
\label{sec:technical-overview}
We consider the entire transcript of interaction between the analyst and the random sample $\bS \sim \mcD^n$. This transcript, denoted $\by$, records the history of queries $\phi_1, \ldots, \phi_T$ asked by the analyst, as well as the responses $\by_t \sim \phi_t^{(n)}(\bS)$ for each $t \in [T]$. The bulk of the work in proving \Cref{thm:main-binary} and its generalization given in \Cref{thm:main-noisy} is the following mutual information bound.
\begin{theorem}
\label{thm:MI-informal}
In the settings of \Cref{thm:main-binary,thm:main-noisy}, let $\by$ be the transcript of interaction between the analyst and sample $\bS$. Then, the mutual information of $\bS$ and $\by$ is at most $O(nb)$.
\end{theorem}
The starting point for \Cref{thm:MI-informal} is the work of Feldman and Steinke \cite{FS18}. They introduce the following notion of algorithmic stability.
\begin{definition}[Average leave-one-out KL stability, \cite{FS18}]
\label{def:ALKL-first}
A randomized algorithm $\mcM: X^n \to Y$ is $\eps$-ALKL stable if there is some randomized algorithm $\mcM': X^{n-1} \to Y$ such that, for all samples $S \in X^n$,
\begin{equation*}
\Ex_{\bi \sim [n]} \bracket*{\KLbig{\mcM(S)}{\mcM'(S_{-\bi})}} \leq \eps
\end{equation*}
where $\KL{\cdot}{\cdot}$ is the KL-divergence and $S_{-i} \in X^{n-1}$ refers to all but the $i^{\text{th}}$ point of $S$.
\end{definition}
Intuitively, a mechanism that is ALKL stable cannot depend too much on any single point in the sample. Feldman and Steinke show that if a transcript $\by$ is produced via the adaptive composition of $T$ queries, each of which are individually $\eps$-ALKL stable, then the mutual information between $\by$ and $\bS$ is at most $nT\eps$. Hence, our proof of \Cref{thm:MI-informal} proceeds by showing that a subsampling query $\phi^{(n)}$ is $(O(\mathrm{cost}_n(\phi)))$-ALKL stable.
The most natural candidate for $\mcM'$ in \Cref{def:ALKL-first} is $\phi^{(n-1)}$. Unfortunately, it's not hard to construct a query $\phi$, sample $S \in X^n$, and $i \in [n]$ for which $\KL{\phi^{(n)}(S)}{\phi^{(n-1)}(S_{-i})}$ is infinite, and so this candidate does not work. For example, if $X = \zo$, $\phi:X^1 \to \zo$ is the identity function, and $S = \{1,0,\ldots, 0\}$, the support of $\phi^{(n)}(S)$ is $\zo$ whereas the support of $\phi^{(n-1)}(S_{-i})$ is only $\{0\}$, leading to an infinite KL-divergence. To get around this, we define the following alternative notion of stability.
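The following Python sketch (ours) works out this example concretely for the identity query: the KL divergence between $\phi^{(n)}(S)$ and $\phi^{(n-1)}(S_{-i})$ is infinite, while the reversed $\chi^2$ divergence defined below equals $1/(n-1)$.
\begin{verbatim}
# Concrete instance of the example above (ours): the identity query on
# S = {1, 0, ..., 0}.  KL(phi^(n)(S) || phi^(n-1)(S_{-i})) is infinite once
# the single 1 is removed, while the reversed chi-squared divergence equals
# 1/(n-1).
import math

def kl(D, E):
    total = 0.0
    for x, p in D.items():
        if p == 0:
            continue
        q = E.get(x, 0.0)
        if q == 0:
            return math.inf
        total += p * math.log(p / q)
    return total

def rev_chi2(D, E):
    if any(q > 0 and D.get(x, 0.0) == 0 for x, q in E.items()):
        return math.inf
    return sum((p - E.get(x, 0.0)) ** 2 / p for x, p in D.items() if p > 0)

n = 10
D = {1: 1 / n, 0: (n - 1) / n}   # phi^(n)(S): a uniform element of S
E = {0: 1.0}                     # phi^(n-1)(S_{-i}): the 1 has been removed
print(kl(D, E), rev_chi2(D, E), 1 / (n - 1))
\end{verbatim}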
\begin{definition}[Average leave-one-out $\chi^2$ stability]
\label{def:chi-first}
A randomized algorithm $\mcM: X^n \to Y$ is $\eps$-$\chi^2$ stable if there is some randomized algorithm $\mcM': X^{n-1} \to Y$ such that, for all samples $S \in X^n$,
\begin{equation*}
\Ex_{\bi \sim [n]} \bracket*{\chisqbig{\mcM(S)}{\mcM'(S_{-\bi})}} \leq \eps
\end{equation*}
where $\chisqbig{\mcD}{\mcE}$ is the reversed $\chi^2$ divergence, which is infinite if $\supp(\mcE) \not \subseteq \supp(\mcD)$ and otherwise equal to $\Ex_{\bx \sim \mcD}\bracket*{\frac{(\mcD(\bx) - \mcE(\bx))^2}{\mcD(\bx)^2}}$.
\end{definition}
Whereas $\KL{\mcD}{\mcE}$ is finite iff $\supp(\mcD) \subseteq \supp(\mcE)$, $\chisq{\mcD}{\mcE}$ is finite iff $\supp(\mcE) \subseteq \supp(\mcD)$. As a result $\chisqbig{\phi^{(n)}(S)}{\phi^{(n-1)}(S_{-i})}$ is guaranteed to be finite. Furthermore, given the quadratic nature of $\chi^2$-divergence, we are able to use the techniques of linear algebra to give a bound on $\Ex_{\bi \sim [n]} \bracket*{\chisqbig{\phi^{(n)}(S)}{\phi^{(n-1)}(S_{-\bi})}}$. That bound is given in \Cref{sec:MI}.
We are further able to show, in \Cref{sec:stability}, a surprising connection between ALKL stability and $\chi^2$ stability: if $\mcM$ is $\eps$-$\chi^2$ stable, it is $\eps'$-ALKL stable for $\eps' = O(\eps \log(|Y|/\eps))$. Since $\KL{\cdot}{\cdot}$ and $\chisq{\cdot}{\cdot}$ are finite in different regimes, we cannot use the same $\mcM'$ in \Cref{def:ALKL-first,def:chi-first}. Instead, we ``smooth" the $\mcM'$ of \Cref{def:chi-first} to ensure its support includes all elements of $Y$.
\Cref{sec:stability,sec:MI} together prove \Cref{thm:MI-informal}. Bounded mutual information is known to imply various generalization bounds. Our setting is non-standard, so in \Cref{sec:gen}, we connect mutual information to the generalization guarantee of \Cref{thm:main-binary}.
\subsection{An auto-boosting result}
\label{subsec:autoboost-overview}
While low mutual information is sufficient to guarantee generalization on average, it does not imply the sort of high probability guarantee given in \Cref{thm:high-probability}. Instead, we show an ``auto-boosting" result in the case where $w_t = 1$ at all time steps $t$.
\begin{lemma}[Informal version of \Cref{lem:auto-boost}]
\label{lem:auto-boost-informal}
Suppose that the analyst always asks subsampling queries $\phi:X^{w_t} \to Y_t$ where $w_t = 1$ for all $t \in [T]$. Any low-bias guarantee that holds with constant probability for a sample size of $n$ holds with probability $1 - 2^{-\Omega(N/n)}$ given a sample of size $N > n$.
\end{lemma}
The intuition behind \Cref{lem:auto-boost-informal} is the following natural way to boost success probabilities: Start by drawing $k$ disjoint samples $\bS^{(1)} ,\ldots, \bS^{(k)} \iid \mcD^n$. Then, whenever a query $\phi$ appears, choose a random group $\bi \in [k]$ and output $\phi(\bS^{(\bi)})$. The hypothesis of \Cref{lem:auto-boost-informal} then guarantees the following: For any test $\psi$ chosen as a function of the query responses, the probability $\bS^{(i)}$ is biased with respect to $\psi$ is at most a constant, say $0.1$, for each $i \in [k]$. If these events were independent, then the probability a large fraction, say $0.2k$, of the disjoint samples are ``biased" is $2^{-\Omega(k)}$.
It turns out there is some subtlety in applying this intuition. Because the analyst can choose which query to ask a group $S^{(i)}$ as a function of responses from some other group $S^{(i')}$, the events indicating whether each group is biased need not be independent. To handle this we generalize a direct product theorem of Shaltiel that was originally applied to fair decision trees \cite{Sha04}. That generalization, given in \Cref{lem:direct-product} shows that while those events need not be independent, they behave no worse than if they were.
Furthermore, the above natural way of boosting success probabilities happens automatically! Given a sample $S \in X^{N}$ with $N \coloneqq nk$, the answer to a query $\phi:X^1 \to Y$ on $S$ is distributed identically to its answer on $S^{(\bi)}$, where $S^{(1)}, \ldots, S^{(k)} \in X^n$ partition $S$ and $\bi \sim \mathrm{Unif}([k])$. We can answer subsampling queries in the most natural way, and they automatically boost their own success probabilities. Note that this step importantly relies on $w_t = 1$ and fails for larger $w$.
Finally, in \Cref{sec:apps}, we prove that both the SQ mechanism and the median-finding mechanism have low bias. Given our framework, these proofs are simple.
\section{Acknowledgments}
The author thanks Li-Yang Tan and Jonathan Ullman for their helpful discussions and feedback. He furthermore thanks the STOC reviewers for their helpful feedback.
Guy is supported by NSF awards 1942123, 2211237, and 2224246.
\bibliographystyle{alpha}
|
{
"arxiv_id": "2302.08703",
"language": "en",
"timestamp": "2023-02-20T02:07:18",
"url": "https://arxiv.org/abs/2302.08703",
"yymm": "2302"
} | \section{Introduction}
There has been a great deal of recent interest in uncertainty quantification for deep learning models~\cite{sok-uncertainty-quant}. These techniques are critical in applications where machine learning models are used to guide human decision-makers (e.g., medical or financial decisions), since they convey confidence in the model predictions that can be used to aid downstream decisions.
One promising strategy is \emph{conformal prediction}~\cite{pred-set-iid-1,pred-set-iid-2}, a statistical technique that has recently been adapted to machine learning~\cite{set-cov-shift-1,bastani-pac,pred-set-iid-4}. These techniques focus on constructing \emph{prediction sets}, which capture uncertainty by predicting sets of labels rather than individual labels. A benefit of these approaches is that they come with provable guarantees---typically, that the prediction set includes the ground truth label with high probability.
Most of the work so far has focused on settings where the input $x$ may be complex, but the label $y$ has a relatively simple structure, such as classification and regression.\footnote{Regression problems have an infinite label space, but existing approaches restrict to prediction sets in the form of intervals, which automatically satisfy the monotonicity property described below.} While there is no theoretical obstacle to considering more structured labels, the prediction sets can easily become uninterpretable when the label space is complex as the uncertainty in the model is not easily identifiable. We consider the case of large language models for code generation, where the output is a sequence of tokens. For such models, the output space is exponentially large in the length of the generated sequence, meaning that even if a prediction set contains only a small fraction of labels, it may still be exponentially large.
While recent work has studied structured prediction problems such as object detection and image segmentation~\cite{schulz_behnke}, these approaches avoid this problem by breaking up the prediction space into individual components for which compact prediction sets can be constructed. For instance, for object detection, while the output is a list of bounding boxes, we can construct a prediction set of bounding boxes that may occur, and then separately construct a prediction set for each bounding box.
A natural strategy to construct prediction sets for structured outputs is that rather than considering prediction sets consisting of arbitrary subsets of labels, we can restrict them to ones that can be represented in a compact way. However, existing prediction set algorithms are not designed to search over these structured spaces, meaning that new algorithms are necessary for inferring structured prediction sets.
In this paper, we study prediction sets for code generation, where the labels are sequences of tokens corresponding to valid lines of code. Recent work has demonstrated that large language models based on the GPT architecture~\cite{https://doi.org/10.48550/arxiv.2005.14165} are effective strategies for code generation from context~\cite{https://doi.org/10.48550/arxiv.2107.03374}. For this domain, we propose to represent prediction sets using \emph{partial programs}~\cite{solar2008program}, which are programs with some portions replaced with \emph{holes}. A partial program implicitly represents the prediction set of all programs that can be obtained by filling its holes to produce a valid program. Partial programs are both a natural way of presenting sets of programs to the user and provide needed structure on the space of sets of programs.
To construct PAC prediction sets in this setting, we propose a novel algorithm that modifies an existing one~\cite{bastani-pac} to account for the restricted search space. The existing algorithm operates by establishing a 1D search space over prediction sets: given a \emph{scoring function} $f:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}$ assigning a score $f(x,y)$ to each feature $x\in\mathcal{X}$ and label $y\in\mathcal{Y}$ (typically the probabilities predicted by a traditional neural network), it searches over scalar thresholds $\tau$, including a label $y$ in the prediction set for $x$ exactly when $f(x,y)\geq\tau$.
The main challenge in adapting this strategy to prediction sets represented by partial programs is that the search space of partial programs is not 1D. To address this problem, we artificially construct a 1D search space by \emph{pre-selecting} a set of partial programs to consider. As long as this set can be represented in the form in equation (\ref{eqn:predictionset}) for \emph{some} scoring function, we can use an existing prediction set algorithm to select among the prediction sets in this set. The key condition the set must satisfy is \emph{monotonicity}: the prediction set represented by each partial program must be either a superset or a subset of that represented by each of the others. Then, to compute this set, we devise an integer linear program that encodes the monotonicity constraint along with other constraints on the structure of the partial programs.
We empirically evaluate our approach on both PICARD~\cite{picard}, a state-of-the-art semantic parser based on T5~\cite{https://doi.org/10.48550/arxiv.1910.10683}, trained on the Spider dataset~\cite{spider} for SQL semantic parsing, as well as Codex~\cite{https://doi.org/10.48550/arxiv.2107.03374}, a GPT language model fine-tuned on publicly available GitHub code with proficiency in over a dozen programming languages. Our experiments demonstrate that our approach can generate prediction sets of the desired form that satisfy the PAC guarantee, while significantly outperforming a natural baseline in terms of a measure of prediction set size.
\textbf{Example.} In Figure~\ref{fig:target-ast}, we show an example of an SQL query from the Spider dataset along with the \emph{abstract syntax tree} exposing its syntactic structure. In Figure~\ref{fig:predicted-set}, we show the SQL query (and corresponding AST) as predicted by PICARD. Below the predicted query, we show the partial program obtained by our algorithm (it is obtained by deleting the nodes in the AST marked by red crosses). This partial program represents the prediction set of all programs that can be obtained by filling the ?? portions with some expressions. It is guaranteed to contain the ground truth query with high probability.
\textbf{Contributions.} Our contributions include the notion of partial programs as prediction sets for code generation, an algorithm for constructing PAC prediction sets in this setting, and an empirical evaluation demonstrating the efficacy of our algorithm. Finally, while we focus on code generation, we believe our techniques can be adapted to construct prediction sets for other structured prediction problems.
\begin{figure*}[h]
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=3.25 in]{exhibits/spider_ast.png} \\
{\scriptsize
\begin{align*}
&\texttt{SELECT COUNT(*) FROM countries AS t1} \\
&\texttt{JOIN car\_makers AS t2 on t1.countryid = t2.country} \\
&\texttt{JOIN model\_list as t3 on t2.id=t3.maker} \\
&\texttt{WHERE t1.countryname = "usa";}
\end{align*}
}
\caption{Ground truth abstract syntax tree (AST) and SQL query from the Spider dataset~\cite{spider}.}
\label{fig:target-ast}
\end{subfigure}
\hfill
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=3.25 in]{exhibits/pred_ast.png}
{\scriptsize
\begin{align*}
&\texttt{SELECT COUNT(*) FROM countries AS t1} \\
&\texttt{JOIN car\_makers as t2 on t1.countryid = t2.country} \\
&\texttt{WHERE t1.countryname = "usa";} \\\\
&\texttt{SELECT COUNT(*) FROM countries AS t1} \\
&\texttt{JOIN ?? on ?? WHERE t1.countryname = "usa";}
\end{align*}}
\caption{Predicted AST, predicted SQL query, and prediction set for the same task as in Figure~\ref{fig:target-ast} with $m=2$ holes.}
\label{fig:predicted-set}
\end{subfigure}
\caption{
Example of ground truth SQL query, predicted SQL query, and the constructed prediction set. Note that the predicted query is incorrect. With the two holes, the prediction set represented by the partial program contains the ground truth query.}
\label{fig:example}
\end{figure*}
\section{Background}
In this section, we introduce the notion of PAC prediction sets, as well as the code generation task.
\subsection{PAC Prediction Sets} \label{PAC-SET}
PAC prediction sets based on conformal prediction have recently been proposed as a rigorous way to quantify uncertainty for deep neural networks~\cite{bastani-pac}. A \emph{prediction set model} is a function $F:\mathcal{X}\to2^{\mathcal{Y}}$ mapping inputs $x$ to sets of labels $F(x)\subseteq\mathcal{Y}$. We typically construct prediction sets based on a given \emph{scoring function} $f:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}$, which assigns a score to each feature $x\in\mathcal{X}$ and label $y\in\mathcal{Y}$ (with higher scores corresponding to higher likelihood of $y$ being the true label for $x$). Scoring functions are often neural networks. Given $f$, we consider the family of prediction set models with a single parameter $\tau \in \mathbb{R}$:
\begin{equation} \label{eqn:predictionset}
F_{\tau}(x) \coloneqq \{y \in \mathcal{Y} \mid f(x,y) \geq \tau \},
\end{equation}
i.e., the set of all labels with score at least $\tau$. To define correctness, we assume that the data is sampled i.i.d. according to some distribution $p(x,y)$.
\begin{definition}
A parameter $\tau \in \mathbb{R}_{\geq 0}$ is \emph{$\epsilon$-correct} if
\begin{align*}
\mathbb{P}_{p(x,y)}\left[y \in F_{\tau}(x)\right] \geq 1 - \epsilon.
\end{align*}
\end{definition}
We let $\mathcal{T}_{\epsilon}\subseteq\mathbb{R}$ denote the set of $\epsilon$-correct $\tau$. Next, consider an estimator $\hat\tau:\mathcal{Z}^*\to\mathbb{R}$, where $\mathcal{Z}=\mathcal{X}\times\mathcal{Y}$ (with $\mathcal{Z}^*=\bigcup_{i=1}^\infty\mathcal{Z}^i$) is the set of calibration datasets. We let $p(Z)=\prod_{(x,y)\in Z}p(x,y)$ be the distribution over datasets.
\begin{definition}
\label{def:pacpredictionset}
An estimator $\hat{\tau}$ is \emph{$(\epsilon, \delta)$-probably approximately correct (PAC)} if
\begin{align*}
\mathbb{P}_{p(Z)}\left[\hat{\tau}(Z) \in \mathcal{T}_{\epsilon}\right] \geq 1 - \delta.
\end{align*}
\end{definition}
Our goal is to construct PAC prediction sets that are small in size, where size is defined as
\begin{align*}
\bar{S}(\tau)=\mathbb{E}_{p(x,y)}[S(F_\tau(x))],
\end{align*}
for some given size metric $S:2^{\mathcal{Y}}\to\mathbb{R}_{\ge0}$. If $S$ is monotone (i.e., $Y\subseteq Y'$ implies $S(Y)\le S(Y')$), then larger values of $\tau$ correspond to smaller sizes. Thus, our goal is to maximize $\tau$ while guaranteeing that $\tau\in \mathcal{T}_{\epsilon}$.
For this setting, there exists an estimator $\hat{\tau}$ that constructs a PAC prediction set. This estimator maximizes $\tau$ subject to a constraint on the number of errors on the calibration set:
\begin{align}
\label{eqn:pacalgo}
\hat{\tau}(Z)=\operatorname*{\arg\max}_{\tau\in\mathbb{R}}\tau
~~\text{subj. to}~~
\sum_{i=1}^n\mathbbm{1}(y_i\not\in F_\tau(x_i))\le k,
\end{align}
where $k$ can be computed from $\epsilon$, $\delta$, and $n$ (see~\cite{bastani-pac} for details). In other words, we choose the largest $\tau$ subject to a bound $k$ on the number of errors made by the prediction set.
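The following Python sketch illustrates one way to instantiate this estimator. The helper names are ours, and the particular choice of $k$ (the largest integer whose binomial CDF, with parameter $\epsilon$, is at most $\delta$) follows our reading of~\cite{bastani-pac}; consult that reference for the exact construction.
\begin{verbatim}
# Sketch (ours, hypothetical helper names) of the estimator in the display
# above.  We take k to be the largest integer whose binomial CDF (with
# parameter epsilon) is at most delta, following our reading of the cited
# work; see that reference for the exact form.
from math import comb

def max_errors(n, eps, delta):
    def cdf(k):       # Pr[Binomial(n, eps) <= k]
        return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
                   for i in range(k + 1))
    k = -1
    while k + 1 <= n and cdf(k + 1) <= delta:
        k += 1
    return k          # -1 means even zero errors cannot be certified

def estimate_tau(scores, eps, delta):
    # scores[i] = f(x_i, y_i) on the calibration set.  At threshold tau, the
    # number of errors is the number of scores strictly below tau, so the
    # largest feasible tau is the (k+1)-st smallest score.
    n = len(scores)
    k = max_errors(n, eps, delta)
    if k < 0:
        return float("-inf")     # must include every label
    return sorted(scores)[k]

print(estimate_tau([0.9, 0.8, 0.95, 0.7, 0.85], eps=0.4, delta=0.2))  # 0.7
\end{verbatim}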
\subsection{Code Generation}
We consider the problem of generating code in some language, where the label is a sequence of tokens---i.e., we have $\mathcal{Y}=\Sigma^*$, where $\Sigma$ is the set of tokens. While our approach is general, we focus on strategies based on large language models (LLMs), where the input $x$ consists of the generation context (e.g., several lines of code preceding the line to be generated); recent work has demonstrated that LLMs are very effective models for solving this problem~\cite{textgen}. In this strategy, tokens represent parts or whole words~\cite{bpe}, which are then predicted, either autoregressively~\cite{https://doi.org/10.48550/arxiv.2210.14698} or sequence-to-sequence~\cite{https://doi.org/10.48550/arxiv.1409.3215}, to form a full target when concatenated.
\subsection{Abstract Syntax Trees}
Our strategy for constructing prediction sets relies on the \emph{abstract syntax tree} of the generated code. This data structure is similar to a parse tree obtained by using a context-free grammar (CFG) parser to parse a sentence, but aims to represent constructs in the code (e.g., variables, assignment statements, if statements, etc.) while abstracting away low-level syntax (e.g., parentheses matching).
Formally, given a (valid) sequence of tokens $y\in\Sigma^*$, an AST is a tree $T=(V,E)$, with nodes $v\in V$ and edges $(v,v')\in E\subseteq V\times V$. Each node $v\in V$ represents some subsequence of tokens in $y$ (the subsequence of a node must be contained in the subsequence of its parent). This subsequence is given by a mapping $\eta:V\to\mathbb{N}^2$, where $\eta(v)=(i,j)$ indicates that node $v$ corresponds to subsequence $y_i...y_{j-1}$ of $y$. Then, we assume that if $\eta(v)=(i,j)$, then its parent $v'$
satisfies $\eta(v')=(i',j')$ for some $i'\le i<j\le j'$ (here, the parent $v'$ is the unique node $v'\in V$ such that there is an edge $(v',v)\in E$).
ASTs are language-specific, and they can also vary from one implementation to another; we assume the AST construction is provided by the user. Figure~\ref{fig:target-ast} depicts an example of an AST for an SQL query. In our approach, we use the AST to restrict the space of prediction sets we consider. Intuitively, we consider prediction sets that can be represented by replacing some subtree of the AST with a ``hole'' representing an arbitrary sequence of tokens.
\section{PAC Prediction Sets for Code Generation}
\label{sec:problem}
In this section, we formalize the notion of a PAC prediction set for code generation.
\subsection{Partial Programs}
Our algorithm takes as input a trained model $f$ represented as a scoring function $f:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}$, along with a calibration dataset $D$ of input-output pairs $Z = \{(x_i,y_i)\}_{i=1}^n$. We assume an algorithm that, given an input $x$, uses $f$ to produce a label $y=\bar{f}(x)$---e.g., we may compute $y$ using a greedy decoding strategy.
Our goal is to use $f$ to construct a PAC prediction set. The challenge is that there are an enormous number of possible labels (exponential in the length of the generated sequence), meaning a traditional prediction set will not be human-interpretable. Thus, we need to constrain the search space over prediction set models.
To this end, we consider prediction sets defined by \emph{partial programs}, which are sequences with special tokens called \emph{holes}; each hole can be \emph{filled} with an arbitrary subsequence, and filling all holes produces a complete sequence. Formally, we denote the special token by $??$; then, a partial program is a sequence $y\in(\Sigma\cup\{??\})^*$. Suppose that $y$ has $k$ holes:
\begin{align*}
y=y_0\;??\;y_1...\;y_{k-1}??\;y_k,
\end{align*}
where $y_0,y_1,...,y_k\in\Sigma^*$. Then, given $\bar{y}_1,...,\bar{y}_k\in\Sigma^*$, we can fill the holes in $y$ using these sequences to obtain
\begin{align*}
\bar{y}
=\text{fill}(y;\bar{y}_1,...,\bar{y}_k)
=y_0\bar{y}_1y_1...y_{k-1}\bar{y}_ky_k,
\end{align*}
i.e., replace the $i$th hole with $\bar{y}_i$. We call $\bar{y}\in\Sigma^*$ a \emph{completion} of $y$; we denote the set of completions of $y$ by
\begin{align*}
\pi(y)\coloneqq\{\bar{y}\in\Sigma^*\mid\exists y_1,...,y_k\in\Sigma^*\,.\,\bar{y}=\text{fill}(y;y_1,...,y_k)\},
\end{align*}
i.e., all possible ways in which the holes in $y$ can be filled (note that $\pi(y)$ may be an infinite set).
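As a concrete illustration, the following Python sketch (a toy that treats tokens as strings and uses the literal token ``??'' for holes) implements the $\text{fill}$ operation defined above.
\begin{verbatim}
HOLE = "??"

def fill(partial, fillers):
    """Replace the i-th hole in `partial` (a list of tokens) with the
    i-th token sequence in `fillers` to obtain a completion."""
    out, remaining = [], list(fillers)
    for tok in partial:
        if tok == HOLE:
            out.extend(remaining.pop(0))  # splice in the next filler
        else:
            out.append(tok)
    return out

# fill(["print", "(", "[", "??", "]", ")"], [["1", ",", "2"]])
# -> ["print", "(", "[", "1", ",", "2", "]", ")"]
\end{verbatim}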
Now, our strategy is to restrict prediction sets to sets of programs represented by partial programs. At a high level, our algorithm starts with a concrete program $\bar{y}=\bar{f}(x)$, and then replaces a certain number of subsequences in $\bar{y}$ with holes to obtain a partial program $y$ such that $\pi(y)$ contains the true program $y^*$ with high probability.
\subsection{Leveraging AST Structure}
An additional constraint is that we want to remove subsequences that correspond to full ``units of code''. For example, if we start with the program \texttt{print([1,2,3])}, we might remove a subsequence to obtain \texttt{print([??)}; however, this partial program can be hard for the user to understand since the brackets do not match. Instead, we may want to require the algorithm to either remove only the elements of the array to obtain \texttt{print([??])}, or remove the full array to obtain \texttt{print(??)}. Thus, we only consider removing subsequences corresponding to nodes in the AST $T=(V,E)$ of $\bar{y}$---i.e., we only replace sequences $\bar{y}_i...\bar{y}_{j-1}$, where $(i,j)=\eta(v)$ for some $v\in V$, with a hole $??$.
\subsection{Bounded Number of Holes}
Even with the AST constraint, we can still obtain very complicated partial programs by removing a large number of leaf nodes. For example, we could remove three nodes from \texttt{print([1,2,3])} to obtain the partial program \texttt{print([??,??,??])}; for longer lists, the resulting partial program would be even more complex. A simpler solution would be to leave the entire contents of the list as a single hole: \texttt{print([??])}. To this end, we impose a constraint that at most $m$ subtrees of the AST are replaced with holes, resulting in a partial program with at most $m$ holes. Here, $m\in\mathbb{N}$ is a given hyperparameter; for instance, we may take $m$ to be 1-3 holes. In our experiments, we find that $m=1$ works well in practice.
\subsection{Problem Formulation}
Our goal is to design an algorithm $\mathcal{A}$ that maps a validation set $Z=\{(x_i,y_i)\}_{i=1}^n$ and a new input $x\in\mathcal{X}$ to a partial program $y=\mathcal{A}(x;Z)$ such that $\mathcal{A}$ satisfies the PAC property given in Definition~\ref{def:pacpredictionset}---i.e., we have
\begin{align*}
\mathbb{P}_{p(Z)}\left[\mathbb{P}_{p(x,y)}\left[y\in\pi(\mathcal{A}(x;Z))\right]\ge1-\epsilon\right]\ge1-\delta,
\end{align*}
where $p(Z)=\prod_{i=1}^np(x_i,y_i)$. As before, we additionally want to minimize the expected size of $\pi(\mathcal{A}(x;Z))$. As we describe in the next section, our algorithm constructs $y=\mathcal{A}(x;Z)$ by starting from the highest probability prediction $\bar{y}=\operatorname*{\arg\max}_{y'}f(x,y')$ (more specifically, an approximate maximum based on greedy decoding), and then removes subtrees of the AST of $\bar{y}$ to obtain $y$. Then, we define size to be the fraction of nodes retained in $y$---i.e., $S(y)=|T'|/|T|$, where $T'$ is the AST of $y$ and $T$ is the AST of $\bar{y}$.
\section{Structured Prediction Set Algorithm} \label{algo}
We describe our algorithm $\mathcal{A}$ (in Algorithm~\ref{alg:main}) for constructing structured PAC prediction sets for code generation.
\subsection{General Strategy}
Recall that existing approaches to constructing prediction sets rely on a 1D search space over parameters $\tau\in\mathbb{R}$. Our approach is to design such a 1D search space and then apply existing prediction set algorithms. However, we cannot directly use the scoring function $f(x,y')$ to construct prediction sets, since its level sets $\{y'\mid f(x,y')\ge\tau\}$ may not have the desired structure described in Section~\ref{sec:problem}---i.e., they may not have the form $\pi(y)$ for some partial program $y$.
Instead, we design a modified scoring function $\tilde{f}(x,y')$ whose level sets have the form $\{y'\mid\tilde{f}(x,y')\ge\tau\}=\pi(y)$ for some partial program $y$. To achieve this goal, we first note that for a single input $x$, if our goal is to obtain a prediction set of the form $\pi(y)$ for some partial program $y$, we need $\tilde{f}(x,y')>\tau$ for all $y'\in\pi(y)$ and $\tilde{f}(x,y')<\tau$ otherwise. For instance, we could define $\tilde{f}(x,y')=\tau+\gamma$ for all $y'\in\pi(y)$ (for an arbitrarily small $\gamma\in\mathbb{R}_{>0}$), and $\tilde{f}(x,y')=\tau-\gamma$ otherwise.
More generally, our strategy can be envisioned as follows:
\begin{enumerate}[topsep=0pt,itemsep=0ex,partopsep=1ex,parsep=0ex,leftmargin=3ex]
\item Design an algorithm $\tilde{\mathcal{A}}$ such that for every input $x$ and parameter $\tau$, it outputs a partial program $y=\tilde{\mathcal{A}}(x,\tau)$ that corresponds to a desired prediction set $\pi(y)$.
\item Construct a modified scoring function $\tilde{f}$ such that $\{y'\mid\tilde{f}(x,y')\ge\tau\}=\pi(y)$ for some $y$.
\item Use an existing PAC prediction set algorithm to choose a threshold $\hat{\tau}(Z)$ based on $\tilde{f}$.
\end{enumerate}
The key constraint on $\tilde{\mathcal{A}}$ is that we have to be able to construct $\tilde{f}$. We saw that we can construct $\tilde{f}$ for a single triple $(x,\tau,y)$, but given multiple triples, such a $\tilde{f}$ may not exist.
For $\tilde{f}$ to exist, these triples must satisfy \emph{monotonicity}---i.e., for two parameters $\tau,\tau'\in\mathbb{R}$ such that $\tau\le\tau'$, the corresponding prediction sets satisfy $F_{\tau}(x)\supseteq F_{\tau'}(x)$. In other words, as the threshold on the score decreases, the size of the prediction set becomes larger \emph{uniformly} for all inputs $x\in\mathcal{X}$. Thus, we need to ensure that our partial programs also satisfy this property---i.e., if $\tau\le\tau'$, then letting $y=\tilde{\mathcal{A}}(x,\tau)$ and $y'=\tilde{\mathcal{A}}(x,\tau')$, we have $\pi(y)\supseteq\pi(y')$.
We impose a stronger constraint that implies monotonicity: if $\tau\le\tau'$, then we require that every node in $y=\tilde{\mathcal{A}}(x,\tau)$ also occurs in $y'=\tilde{\mathcal{A}}(x,\tau')$. It is easy to see that if the AST of $y$ contains a subset of the nodes in the AST of $y'$, then $\pi(y)\supseteq\pi(y')$. For simplicity, we refer to this stronger constraint as monotonicity for the remainder of the paper.
\begin{theorem}
\label{thm:main}
Assume that $\tilde{\mathcal{A}}$ is monotone. There exists a scoring function $\tilde{f}$ such that for all $x\in\mathcal{X}$, we have
\begin{align*}
\pi(\tilde{\mathcal{A}}(x,\tau))=\tilde{F}_{\tau}(x)\coloneqq\{y'\mid\tilde{f}(x,y')\ge\tau\}
\end{align*}
for all $\tau\in\mathbb{R}$ except a finite subset.
\end{theorem}
\begin{proof}
Recall that we have assumed that we choose $y$ to be a subset of the most likely program $\bar{y}=\operatorname*{\arg\max}_{y'}f(x,y')$ (i.e., remove some subset of nodes in $\bar{y}$). Thus, the space of possible partial programs $y$ for a given input $x$ is finite. Suppose the partial programs encountered by enumerating $\tau$ from $-\infty$ to $+\infty$ are $y_1,...,y_k$; this sequence is finite since the space of partial programs is finite. By monotonicity, $\pi(y_1)\supseteq\pi(y_2)\supseteq...\supseteq\pi(y_k)$.
Next, let the value of $\tau$ at which $\tilde{\mathcal{A}}(x,\tau)$ changes from $y_{i-1}$ to $y_i$ be $\tau_i$, and define $\tau_1=-\infty$ and $\tau_{k+1}=+\infty$. Then, we have $\tilde{\mathcal{A}}(x,\tau)=y_i$ for $\tau_i<\tau<\tau_{i+1}$ (the value at $\tau_i$ can be either $y_i$ or $y_{i-1}$). Now, we can define
\begin{align*}
\tilde{f}(x,y')=\tau_{i+1}\quad\text{where}\quad
y'\in\pi(y_i)\wedge y'\not\in\pi(y_{i+1}),
\end{align*}
noting that by monotonicity, the condition holds for at most one $i$; if it does not hold for any $i$, but $y'\in\pi(y_k)$, then we take $\tilde{f}(x,y')=\infty$; otherwise, we take $\tilde{f}(x,y')=-\infty$.\footnote{If we assume that $y_1=\;??$ is the partial program consisting of a single hole, $y'\in\pi(y_1)$ for all $y'$, so this last case is unnecessary.}
This strategy ensures that each score $\tilde{f}(x,y')$ equals one of the $\tau_i$ (or $\pm\infty$). It is easy to see that with this choice of $\tilde{f}$, we have
\begin{align*}
\pi(y_i)=\{y'\mid\tilde{f}(x,y')\ge\tau\}
\end{align*}
for all $\tau\in(\tau_i,\tau_{i+1})$, i.e., for all $\tau\in\mathbb{R}$ outside the finite set $\{\tau_2,...,\tau_k\}$.
By construction, we have $\tilde{\mathcal{A}}(x,\tau)\in\{y_i\}_{i=1}^k$, so the claim follows.
\end{proof}
In our algorithm, we can avoid problematic values of $\tau$ simply by reducing $\hat\tau(Z)$ by a tiny amount---i.e., we use $\hat\tau(Z)-\gamma$ for an arbitrarily small $\gamma\in\mathbb{R}_{>0}$.
\begin{corollary}
\label{cor:main}
For sufficiently small $\gamma\in\mathbb{R}_{>0}$, the prediction set $\pi(\tilde{\mathcal{A}}(x,\hat\tau(Z)-\gamma))$ is PAC, where $\hat\tau(Z)$ is obtained by using an existing PAC prediction set algorithm with $\tilde{f}$.
\end{corollary}
\begin{proof}
Note that for sufficiently small $\gamma$, either $\hat\tau(Z)$ or $\hat\tau(Z)-\gamma$ is not problematic. If the former holds, then
\begin{align*}
\pi(\tilde{\mathcal{A}}(x,\hat\tau(Z)-\gamma))
\supseteq
\pi(\tilde{\mathcal{A}}(x,\hat\tau(Z)))
=
\tilde{F}_{\hat\tau(Z)}(x).
\end{align*}
If the latter holds, then
\begin{align*}
\pi(\tilde{\mathcal{A}}(x,\hat\tau(Z)-\gamma))
=
\tilde{F}_{\hat\tau(Z)-\gamma}(x)
\supseteq
\tilde{F}_{\hat\tau(Z)}(x).
\end{align*}
By assumption, $\tilde{F}_{\hat\tau(Z)}$ is PAC, so the claim follows.
\end{proof}
As a consequence, we can use $\pi(\tilde{\mathcal{A}}(x,\hat\tau(Z)-\gamma))$ as our PAC prediction set. It remains to describe our design of $\tilde{\mathcal{A}}$.
\subsection{Probabilities for ASTs}
First, we describe the objective that our choice of $\tilde{\mathcal{A}}$ uses to prioritize partial programs. A standard strategy for ranking candidate prediction sets is to consider the probability mass of labels in the set~\cite{angelopoulos2020uncertainty}. Then, we can prioritize prediction sets that cover higher probability mass, since they are more likely to contain the true label.
In our setting, the corresponding principle is to prioritize replacing portions of the program that are more uncertain (according to the original scoring function $f$) with holes, since these portions are the most likely to be incorrect compared to the true program. In particular, the corresponding prediction set is constructed by filling holes with all possible subsequences, and low-probability subtrees are the ones where other subsequences carry higher probability; hence, replacing these subtrees with holes increases the aggregate probability mass of the prediction set the most.
To formalize this notion, we assume that the AST of the top prediction $\bar{y}=\operatorname*{\arg\max}_{y'}f(x,y')$ has probabilities associated with its leaf nodes. In particular, we assume that each leaf node $v$ in the AST $T$ of $\bar{y}$ is labeled with a value $\ell_v\in\mathbb{R}_{\ge0}$ denoting the negative log probability of $v$ conditioned on its ancestors\footnote{In practice, we often use sequence models that do not respect the structure of $T$, but as a heuristic, we can still use the predicted probability of the token labeling each leaf $v$ to construct $\ell_v$.}---i.e., $\ell_v=-\log p(v\mid v_1,...,v_k)$, where $v_1,...,v_k$ are the ancestors of $v$.
Then, the negative log probability of the entire AST $T$ is
\begin{align*}
\ell_T=\sum_{v\in\text{leaves}(T)}\ell_v.
\end{align*}
Now, if we replace a subtree with a hole, the prediction set is constructed by enumerating all possible subtrees that could fill that hole. Thus, the aggregate negative log probability of that subtree goes from
$\sum_{v\in\text{subtree}}\ell_v$ to $0$ (i.e., its probability becomes $1$).
That is, if we replace a subtree with a hole, we can simply drop those leaves from the sum in $\ell_T$.
Thus, we can label holes $v$ in an AST $T'$ with negative log probability $\ell_v=0$. Then, we can extend the definition of the negative log probability of a tree to trees with holes:
\begin{align*}
\ell_{T'}=\sum_{v\in\text{leaves}(T')}\ell_v.
\end{align*}
We use this objective to prioritize partial programs.
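The following Python sketch computes $\ell_{T'}$ on a toy tree representation introduced purely for illustration; holes contribute zero to the sum, matching the convention above.
\begin{verbatim}
class Node:
    def __init__(self, label, nll=0.0, children=(), is_hole=False):
        self.label = label        # token or construct at this node
        self.nll = nll            # negative log probability (leaves)
        self.children = list(children)
        self.is_hole = is_hole    # a hole stands for any subtree

def tree_nll(node):
    """Sum of leaf negative log probabilities; holes contribute 0."""
    if node.is_hole:
        return 0.0
    if not node.children:
        return node.nll
    return sum(tree_nll(c) for c in node.children)

# Replacing a high-NLL subtree by a hole lowers tree_nll, i.e.,
# raises the probability mass covered by the prediction set.
\end{verbatim}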
\begin{figure*}[h]
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{exhibits/percent_nodes_removed.png}
\caption{Fraction of nodes removed as a function of $\epsilon$ for varying bound $m$ on the number of holes.}
\label{fig:spider_node_rm}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{exhibits/target_in_set.png}
\caption{Prediction set coverage rate as a function of $\epsilon$. The coverage always exceeds the desired $1-\epsilon$ rate (dotted line).}
\label{fig:spider_set_coverage}
\end{subfigure}
\caption{Node removal and code set coverage for varying number of holes and varying $\epsilon$ values. The experiment is conducted using the PICARD model~\cite{picard} evaluated on the Spider SQL dataset~\cite{spider}.}
\label{fig:sql-experiments}
\end{figure*}
\begin{figure*}[h]
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{exhibits/codex_percent_nodes_removed.png}
\caption{Fraction of nodes removed as a function of $\epsilon$ for varying bound $m$ on the number of holes.}
\label{fig:codex_node_rm}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{exhibits/codex_target_in_set.png}
\caption{Prediction set coverage as a function of $\epsilon$. The coverage always exceeds the desired $1-\epsilon$ rate (dotted line).}
\label{fig:codex_set_coverage}
\end{subfigure}
\caption{Node removal and code set coverage for varying number of holes and varying $\epsilon$ values. The experiment is conducted using OpenAI's Codex Model~\cite{https://doi.org/10.48550/arxiv.2107.03374} evaluated on the Python datasets \cite{hendrycksapps2021, https://doi.org/10.48550/arxiv.2107.03374}.}
\label{fig:codex-experiments}
\end{figure*}
\subsection{Computing Partial Programs via Optimization}
Next, we describe our algorithm $\tilde{\mathcal{A}}$ for constructing
a partial program $y=\tilde{\mathcal{A}}(x,\tau)$ representing a prediction set for a given input $x$ and parameter $\tau$. Rather than compute this output for all possible $\tau$, we fix a finite subset of parameters $\tau_1<\tau_2<...<\tau_k$, and construct partial programs $y_1,y_2,...,y_k$ for these values. Then, we take $\tilde{\mathcal{A}}(x,\tau)=y_i$, where $\tau\in(\tau_{i-1},\tau_i]$. We formulate $\tilde{\mathcal{A}}$ as an integer linear program (ILP), which we describe below.
\textbf{Optimization variables.}
Our program includes two sets of variables. First, for each node $v$ in $T$, where $T$ is the AST for $\bar{y}$, and for each parameter $\tau_i$ we include a binary variable $\alpha_{i,v}\in\{0,1\}$ indicating whether we remove the subtree rooted at $v$ from $\bar{y}$ to construct $y_i$. Second, for each node $v$ and each parameter $\tau_i$, we include a binary variable $\beta_{i,v}\in\{0,1\}$ indicating whether we remove $v$ from $\bar{y}$ to construct $y_i$. We track subtree removals separately from node removals so that we can impose our bound on the number of subtrees removed.
\begin{algorithm}[t]
\caption{Our structured PAC prediction set algorithm.}
\label{alg:main}
\begin{algorithmic}
\STATE{\bfseries Input:} Scoring function $f$, Validation dataset $Z$, Confidence levels $\epsilon,\delta$
\FOR{$(x,y)\in Z$}
\STATE $\bar{y}\gets\bar{f}(x)$
\STATE $\alpha,\beta\gets$ Solve Eqs.~\ref{eqn:opt1}-\ref{eqn:opt6} for $(x,\bar{y})$
\STATE $y_i\gets$ Remove from $\bar{y}$ the subtrees rooted at nodes $v$ with $\alpha_{i,v}=1$
\ENDFOR
\STATE \textbf{return} Largest $\tau_i$ such that
\begin{align*}
\sum_{(x,y)\in Z}\mathbbm{1}(y\not\in\pi(y_i))\le k(\epsilon,\delta,|Z|)
\end{align*}
\end{algorithmic}
\end{algorithm}
\textbf{Overall program.} Our goal is to minimize the number of leaf nodes removed, summed across the $y_i$:
\begin{align}
\min_{\alpha,\beta}~\sum_{i=1}^k\sum_{v\in\text{leaves}(T)}\beta_{i,v} \label{eqn:opt1}
\end{align}
subject to the following constraints:
\begin{align}
&\sum_{v\in V}\alpha_{i,v}\le m
\qquad(\forall i\in[k]) \label{eqn:opt2} \\
&\alpha_{i,v}\to\beta_{i,v}
\qquad(\forall v\in V,~i\in[k]) \label{eqn:opt3} \\
&\beta_{i,v}\to\beta_{i,v'}
\qquad(\forall(v,v')\in E) \label{eqn:opt4} \\
&\beta_{i,v}\rightarrow\beta_{i+1,v}
\qquad(\forall v\in V,~i\in\{1,...,k-1\}) \label{eqn:opt5} \\
&\sum_{v\in\text{leaves}(T)}\ell_v\cdot(1-\beta_{i,v})\le\tau_i
\qquad(\forall i\in[k]) \label{eqn:opt6}
\end{align}
We have used Boolean notation for clarity; we can enforce $\alpha\to\beta$ for $\alpha,\beta\in\{0,1\}$ via the constraint $\alpha\le\beta$. We describe these constraints in detail below. Once we have solved this program, our algorithm directly constructs $y_i$ by removing all subtrees for which $\alpha_{i,v}=1$.
\textbf{Removal constraints.}
We include constraints bounding how many subtrees we remove, and enforcing that if we remove a subtree, we remove all nodes in that subtree:
\begin{itemize}[topsep=0pt,itemsep=0ex,partopsep=1ex,parsep=0ex,leftmargin=3ex]
\item Eq.~\ref{eqn:opt2}: We remove at most $m$ subtrees.
\item Eq.~\ref{eqn:opt3}: If we remove the subtree at $v$, then we remove $v$.
\item Eq.~\ref{eqn:opt4}: If we remove $v$ and $v'$ is a child of $v$, then we also remove $v'$.
\item Eq.~\ref{eqn:opt5}: If we remove $v$ for $y_i$, then we also remove it for $y_{i+1}$ (i.e., enforce monotonicity).
\end{itemize}
\textbf{Probability mass constraint.}
Finally, Eq.~\ref{eqn:opt6} ensures that we remove sufficient probability mass from $T$ so $\pi(y_i)$ meets the $\tau_i$ threshold---in particular, it requires that the negative log likelihood of the AST $T_i$ of $y_i$ is below $\tau_i$. Note that we use $1-\beta_{i,v}$ since we are summing over leaves remaining in $T_i$. Also, note that $\ell_v$ is a real-valued constant.
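To make the formulation concrete, the following Python sketch writes the objective in Eq.~\ref{eqn:opt1} and the constraints in Eqs.~\ref{eqn:opt2}--\ref{eqn:opt6} using the open-source PuLP modelling library; the toy AST, its leaf probabilities, and the thresholds are invented for illustration, and the sketch is a direct transcription of the program above rather than the authors' implementation.
\begin{verbatim}
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# Hypothetical toy AST: node -> children; leaves carry neg. log probs.
children = {"root": ["name", "args"], "name": [],
            "args": ["a1", "a2"], "a1": [], "a2": []}
leaf_nll = {"name": 0.1, "a1": 0.4, "a2": 2.3}
nodes = list(children)
edges = [(v, c) for v in children for c in children[v]]
leaves = [v for v in nodes if not children[v]]
taus, m = [0.5, 1.0, 3.0], 1      # thresholds tau_1<...<tau_k; bound m
k = len(taus)

prob = LpProblem("partial_programs", LpMinimize)
alpha = {(i, v): LpVariable(f"a_{i}_{v}", cat=LpBinary)
         for i in range(k) for v in nodes}
beta = {(i, v): LpVariable(f"b_{i}_{v}", cat=LpBinary)
        for i in range(k) for v in nodes}

prob += lpSum(beta[i, v] for i in range(k) for v in leaves)  # objective
for i in range(k):
    prob += lpSum(alpha[i, v] for v in nodes) <= m   # at most m subtrees
    for v in nodes:
        prob += alpha[i, v] <= beta[i, v]            # removed subtree root
    for v, c in edges:
        prob += beta[i, v] <= beta[i, c]             # propagate to children
    if i + 1 < k:
        for v in nodes:
            prob += beta[i, v] <= beta[i + 1, v]     # nested across thresholds
    prob += lpSum(leaf_nll[v] * (1 - beta[i, v])
                  for v in leaves) <= taus[i]        # retained NLL <= tau_i

prob.solve()
removed = {i: [v for v in nodes if alpha[i, v].value() > 0.5]
           for i in range(k)}
\end{verbatim}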
\subsection{Structured PAC Prediction Sets}
Given the partial programs $y_i=\tilde{\mathcal{A}}(x,\tau_i)$ computed using $\tilde{\mathcal{A}}$, the remaining question is how we actually choose the parameter $\hat\tau(Z)$. We could construct $\tilde{f}$ as described in the proof of Theorem~\ref{thm:main}; then, by Corollary~\ref{cor:main}, $\pi(\tilde{\mathcal{A}}(x,\hat{\tau}(Z)-\gamma))$ is PAC for sufficiently small $\gamma$.\footnote{In fact, we can take $\gamma=\min_i|\tau_i-\tau_{i+1}|$, where $\tau_i$ are our choices in the design of $\tilde{\mathcal{A}}$ (not the proof of Theorem~\ref{thm:main}).}
However, constructing $\tilde{f}$ can be computationally intractable. To avoid this issue, note that we do not need to explicitly construct $\tilde{f}$; instead, it suffices for such a $\tilde{f}$ to exist (as long as we can find a way to compute the parameter $\hat\tau(Z)$).
To compute $\hat\tau(Z)$, it suffices to be able to compute the number of errors in the validation set for a candidate value of $\tau$. Then, we can solve the maximization problem over $\tau$ in (\ref{eqn:pacalgo}) by enumerating over the fixed choices of parameters $\tau_i$ used by $\tilde{\mathcal{A}}$; since the other values of $\tau$ produce the same prediction sets, we can ignore them.
Given a validation example $(x,y)\in Z$, computing $\mathbbm{1}(y\in F_{\tau_i}(x))$ is straightforward---since $F_{\tau_i}(x)=\pi(y_i)$, we simply check whether $y\in\pi(y_i)$. Doing so is a straightforward tree comparison operation---i.e., we enumerate nodes in $y_i$ (excluding holes) and check if they are all contained in $y$; if so, then $y\in\pi(y_i)$, and if not, then $y\not\in\pi(y_i)$.
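A minimal Python sketch of this check, reusing the toy \texttt{Node} class from the earlier sketch and assuming each hole replaces exactly one subtree, could look as follows.
\begin{verbatim}
def contains(partial, full):
    """Check whether `full` (a completion's AST) lies in pi(partial):
    the two trees must agree on every non-hole node of `partial`."""
    if partial.is_hole:
        return True                      # a hole matches any subtree
    if partial.label != full.label:
        return False
    if len(partial.children) != len(full.children):
        return False
    return all(contains(p, f)
               for p, f in zip(partial.children, full.children))
\end{verbatim}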
In Algorithm~\ref{alg:main}, this search is implemented at the end: it returns the largest $\tau_i$ that achieves a given number of errors $k(\epsilon,\delta,n)$ on the validation dataset, where
\begin{align*}
k(\epsilon,\delta,n)=\max_{k\in\mathbb{N}\cup\{0\}}k
~\text{subj. to}~
\sum_{i=0}^k\binom{n}{i}\epsilon^i(1-\epsilon)^{n-i}<\delta
\end{align*}
is the number of mistakes permitted by an existing PAC prediction set algorithm~\cite{bastani-pac}.
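For completeness, the following short Python sketch evaluates $k(\epsilon,\delta,n)$ directly from the binomial tail; it is a transcription of the formula above, not the implementation of~\cite{bastani-pac}.
\begin{verbatim}
from math import comb

def k_max(eps, delta, n):
    """Largest k with sum_{i<=k} C(n,i) eps^i (1-eps)^(n-i) < delta,
    or -1 if even k = 0 violates the bound."""
    cdf, best = 0.0, -1
    for i in range(n + 1):
        cdf += comb(n, i) * eps**i * (1 - eps)**(n - i)
        if cdf < delta:
            best = i
        else:
            break
    return best

# k_max(0.1, 0.05, 100) gives the number of calibration errors
# permitted for epsilon = 0.1, delta = 0.05, and n = 100.
\end{verbatim}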
\section{Experiments}
We evaluate our proposed approach on two tasks: semantic parsing for SQL, and Python code completion.
\textbf{SQL generation.}
In this task, the input to the model is a natural language utterance and the target is an SQL program. For the scoring function $f$, we use a state-of-the-art model PICARD~\cite{picard}, which modifies the decoding process of T5~\cite{https://doi.org/10.48550/arxiv.1910.10683} to constrain decoding to valid SQL programs. This model is trained on the Spider dataset~\cite{spider}, a large multi-domain and cross-database dataset for SQL semantic parsing. For our experiments, we use 7,000 examples from Spider to construct prediction sets.
\textbf{Python generation.}
In this task, the input to the model is a natural language problem statement combined with the first several lines of code, and the target is the remaining lines of code to complete the solution. We use Codex~\cite{https://doi.org/10.48550/arxiv.2107.03374}, a GPT language model fine-tuned on publicly available GitHub code, with proficiency in over a dozen programming languages including Python, JavaScript, Go, TypeScript, and C\#. For our experiments, we use natural language to Python code datasets including APPS~\cite{hendrycksapps2021} and HumanEval, a hand-written evaluation set~\cite{https://doi.org/10.48550/arxiv.2107.03374}.
\textbf{Baseline.}
We compare to a baseline strategy that greedily chooses the least likely node to remove at each step. More precisely, it uses the same high-level strategy as our approach---i.e., construct a sequence of partial programs where each one contains the previous, and then use an existing prediction set algorithm to choose $\tau$. The difference is in how the sequence of partial programs is constructed---whereas our approach uses optimization to do so, the baseline uses a greedy strategy. In particular, among all nodes of the current partial program with at least one child missing, it chooses to remove the one that most reduces the negative log likelihood (NLL) of the partial program (normalized by the number of nodes removed).
\textbf{Results.}
We implemented the structured prediction set algorithm described in Section~\ref{algo} to construct PAC prediction sets of code leveraging the abstract syntax trees of the SQL and Python programming languages. The results in Figures~\ref{fig:spider_node_rm} and~\ref{fig:codex_node_rm} show the fraction of nodes removed from the predicted AST for varying $\epsilon$ and $m$. The results in Figures~\ref{fig:spider_set_coverage} and~\ref{fig:codex_set_coverage} show the percentage of constructed code sets that include the target program for varying $\epsilon$ and $m$.
For both datasets, our approach constructs prediction sets that remove significantly fewer nodes than the baseline. For instance, with $\epsilon=0.15$, our approach only removes about 45\% of nodes for SQL (vs. 55\% for the baseline), and about 55\% of nodes for Python (vs. 65\% for the baseline). Furthermore, our approach achieves this better performance while maintaining a similar coverage rate (both our approach and the baseline achieve the desired coverage rate in all cases).
Additionally, we note that the coverage rate decreases as $\epsilon$ increases, demonstrating that our approach is not overly conservative. In particular, the number of nodes removed is due in large part to errors in the underlying language model.
Interestingly, the performance of our algorithm is not very sensitive to $m$, suggesting that $m=1$ typically suffices for constructing prediction sets. Intuitively, this finding suggests that most errors are localized, though more analysis is needed to understand this phenomenon.
\section{Conclusion}
In this work, we presented an algorithm for constructing PAC prediction sets for large language models for code generation and semantic parsing. Our approach constructs compact prediction sets in the form of partial programs, leveraging an optimization-based approach to construct a monotonic set of candidate prediction sets and then using an existing algorithm to rigorously select a prediction set threshold. Our experiments demonstrate that our approach constructs valid prediction sets while significantly outperforming a na\"{i}ve baseline that uses a greedy heuristic instead of optimization. While our approach is tailored to code generation, we anticipate that the general principle of using optimization to construct compact prediction sets can be applied to many structured prediction problems.
\section*{Acknowledgements}
We are grateful to Jeevana Inala for her helpful comments and discussions.
|
{
"arxiv_id": "2302.08666",
"language": "en",
"timestamp": "2023-02-20T02:05:51",
"url": "https://arxiv.org/abs/2302.08666",
"yymm": "2302"
} | \section{Introduction}
Quantum information theory
transforms the very foundations of information theory and computing~\cite{NC10,Aar13}.
The pre-quantum (known as `classical') scientific framework allows objective information
to be labelled by integers such as bit strings
(e.g., representing text characters by 7-bit strings according to the American Standard Code for Information Interchange, or ASCII),
which is foundational to information theory.
Processing information can be executed under the rules of Boolean logic,
manifested as concatenations of one-bit operations such as NOT and two-bit operations such as NAND.
Quantum information changes the information game entirely by allowing coherent superpositions of informational states,
which, following the quantum principle of complementarity,
can be thought of as being both corpuscular (particle-like) and undular (wave-like).
For example, a three-bit string 010 becomes quantumly a label for a quantum state~$\ket{010}$
(element of Hilbert space),
which could be manifested physically, say,
by three electrons that are spin-down, spin-up and spin-down,
with a spin-down state labelled~$\ket0$
and a spin-up state labelled~$\ket1$
in Dirac, or bra-ket, notation~\cite{Dir39}.
This three-electron state can be superposed with its spin-flipped counterpart~$\ket{101}$.
With state normalisation implicit throughout this paper,
the superposition of these two states is
$\ket{010}+\ket{101}$,
which,
in binary representation,
is a superposition of the numbers~2 and 5.
These superpositions of information states can be processed quantumly,
i.e., in a way that preserves coherence.
Ideally,
this superposition state can be transformed by an arbitrary unitary map (isometry on Hilbert space).
Realistically,
open-system effects such as noise and loss can impede performance,
but almost-unitary maps
(such as completely positive trace-preserving maps~\cite{NC10} that are close to being unitary)
can suffice for useful quantum-information processing provided that the quantum version of error correction is employed in a fault-tolerant way~\cite{Got21}.
An early motivation for quantum computing was for simulating physics,
particularly for simulating quantum systems in a way that is natural for quantum descriptions~\cite{Fey82},
i.e., by using quantum computing.
Since that original idea,
remarkable quantum algorithms have arisen,
where remarkable refers to delivering superior performance compared to classical algorithms; for example, efficient computation means that computational resources, such as run-time and the number of bits or qubits (quantum bits), scale no worse than polynomially with respect to the size of the problem, quantified by how many bits are required to specify the problem instance~\cite{NC10}.
One example of quantum algorithms being advantageous is for efficiently solving
number factorization,
whose hardness is subexponential (almost exponential) using the best known classical algorithm~\cite{Sho94}.
Another example is a provably quadratically faster algorithm for function inversion (hence, unstructured search)~\cite{Gro96}.
Quantum annealing is speculated to enhance optimisation methods in some cases~\cite{HRIT15}.
Building quantum computers is rapidly advancing,
albeit small-scale and noisy,
making quantum computing now viable (in a limited way)
and commercial~\cite{San17,San20,MSSM20}.
Quantum computing is arguably a disruptive technology,
i.e., a technology that is both innovative and market-changing~\cite{Chr97},
if quantum computing eventually delivers a game-changing application
such as for data science~\cite{Chr97,WWE19}.
Some big `ifs' stand in the way of quantum computing being disruptive, though.
My view is that quantum computing faces two huge `ifs', i.e., challenges:
(1)~building a quantum computer
and (2)~making said quantum computer useful.
In a way, the latter is harder than the former.
Scaling a quantum computer to large size (many qubits) and large depth (able to pass through many cycles of quantum logic gates, defined as unitary or near-unitary maps of quantum information states)
is technically extremely challenging but conceivable.
Figuring out,
on the other hand,
what to do with a quantum computer that is truly useful and transformative is, at best, challenging because of a limit to our imagination of what could be the next great quantum algorithm
and, at worst, hopeless if no new great quantum algorithm exists.
Thus, quantum computing is a wonderful, scary, risky adventure,
which we need to navigate with our eyes wide open.
\section{Nexus between quantum computing and data science}
Now I elaborate on the nexus between quantum computing and data science,
as any meaningful overlap between these two galvanising research fields is exciting.
Data science~\cite{KD19} concerns the full pipeline from capturing and maintaining data to processing and analysing these data and,
finally,
communicating or executing data-driven action~\cite{MLW+19}.
Computing is a key part of this pipeline,
not only for exploiting quantum speedup~\cite{RWJ+14} or superior accuracy~\cite{WWDM20}
but also for the possibility of storing information in superposition in a quantum random access memory~\cite{Ble10}
as well as considering cyberattacks on data storage and processing.
Secure data storage could involve quantum cryptography~\cite{PAB+20},
and quantum-secure use of servers could need to exploit methods such as blind quantum computing~\cite{Fit17} or quantum homomorphic encryption~\cite{BJ15,DSS16}.
The full impact of quantum information processing on data science is not yet known and needs extensive study.
Here, we focus strictly on the impact of quantum computing per se on data science.
I mentioned a little about quantum computing being about superpositions of information states and processing by unitary or near-unitary maps.
As quantum computing is rather subtle in its nature,
let us now take a moment to appreciate how it works.
I would like to illustrate how quantum computing works by considering a quantum simulation,
executable on a quantum computer,
of Schr\"{o}dinger's famous example of a cat
whose existential state can be in a life-death superposition,
so to speak~\cite{Gri14}.
Our purpose is not to philosophise here but rather to recast Schr\"{o}dinger's cat paradox~\cite{Sch35} as quantum information processing thereby illustrating how quantum computing works.
Here, I cast the cat's quantum state of being alive ($b=0$) or dead ($b=1$) as being represented by a quantum state~$\ket{b}$.
Similarly,
we can consider a radioactive nucleus
that decays by emitting an alpha ($\alpha$) particle.
$\alpha$ decay can be regarded as the emission of an~$\alpha$ particle,
equivalent to a He$^{2+}$ ion,
by quantum tunnelling out of the remaining nucleus~\cite{Gam28}.
The undecayed nucleus can be regarded as being an electromagnetic potential trap for the~$\alpha$ particle,
with the nuclear interior being attractive and the nuclear boundary being repulsive.
We represent the undecayed nucleus by~$\ket0$ and the decayed nucleus by~$\ket1$.
Left alone, the initial nuclear state~$\ket0$
evolves into a superposition such as $\ket\pm:=\ket0\pm\ket1$
over one half-life
with choice of~$\pm$ indicating the coherence phase between not having decayed and having decayed.
To make clear the connection with quantum information processing,
Schr\"{o}dinger's cat paradox is depicted in Fig.~\ref{fig:cat}
as a cat in a box
with poison gas triggered by an $\alpha$-particle detection.
The correlation between~$\alpha$ decay and the cat dying, along with the resultant entangled nuclear-cat state, is shown in Fig.~2.
We can write this entangled state as $\ket{00}+\ket{11}$.
This state is extremely important in quantum information science and is often called a Bell state.
Just as qubits and gates are resources, this entangled pair is itself a resource, sometimes called an ebit (a maximally entangled pair of qubits).
\begin{figure}[h]
\begin{minipage}{21pc}
\includegraphics[width=16pc]{figs/alphacat.png}
\caption{A cat is placed in an opaque box along with a poison gas that is released contingent on detecting the decay of an~$\alpha$ particle.
The decay is modelled as the escape of an~$\alpha$ particle,
equivalently a He$^{2+}$ ion,
from the nucleus,
with this decay occurring by quantum tunnelling.}
\end{minipage}\hspace{2pc}%
\begin{minipage}{16pc}
\includegraphics[width=14pc]{figs/deadandalive.png}
\caption{Above:
the~$\alpha$ particle is detected, which causes the release of a poison gas that kills the cat;
in Dirac notation,
the nuclear-cat entangled state is a superposition of the nucleus being undecayed and the cat being alive with the orthogonal state of the nucleus having decayed and the cat having died.}
\end{minipage}
\label{fig:cat}
\end{figure}
We can think of Schr\"{o}dinger's cat paradox in terms of a quantum circuit.
The input is initialised to~$\ket{00}$
for an undecayed nucleus and a living cat.
Then one waits for precisely one half-life to achieve $\ket0\mapsto\ket+$.
If decay were a unitary process for one degree of freedom
(not actually the case for nuclear decay but didactically convenient)
then decay could be described by a quantum logic gate known as the Hadamard gate (H),
whose unitary implies
H$:\ket1\mapsto\ket-$ as well,
which would mean that a decayed nucleus implausibly returns to its undecayed state.
The killing of the cat depicted in Fig.~2
corresponds to the action of the quantum logic gate known as CNOT, which stands for the controlled-NOT operation.
The CNOT gate is equivalent to the XOR gate:
flip the second qubit, i.e., apply the quantum NOT gate, if the first qubit is~$\ket1$ but not if the first qubit is~$\ket0$.
Mathematically,
for any two qubits in the computational basis,
written as $\ket{b\;b'}$ for any bits $b$ and~$b'$, the mapping is
${\rm CNOT}:
\ket{b\;b'}
\mapsto
\ket{b\;b'\oplus b}$.
The CNOT gate can be extended in an obvious way,
as a NOT gate controlled by more than one qubit,
such as the controlled-controlled-NOT, or Toffoli, gate
${\rm CCNOT}:
\ket{b\;b'\;b''}
\mapsto
\ket{b\;b'\;b''\oplus bb'}$,
which is valuable as a quantum version of repetition codes~\cite{Bac06}.
For Schr\"{o}dinger's cat paradox,
waiting a half-life means the first qubit is a superposition of~$\ket0$ and~$\ket1$,
not specifically one of these two states, and the output is then the Bell state.
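With normalisation implicit as before, the preparation just described can be summarised in one line:
\begin{align*}
\ket{00}
\;\xrightarrow{\ \text{H on the nucleus}\ }\;
\ket{00}+\ket{10}
\;\xrightarrow{\ \text{CNOT}\ }\;
\ket{00}+\ket{11},
\end{align*}
which is the entangled nuclear-cat (Bell) state written above.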
Bell-state preparation,
described by the procedure to manufacture the Schr\"{o}dinger cat paradox,
is a workhorse of some quantum computers,
such as for ion-trap quantum computing~\cite{MS99}.
The Hadamard gate plays a key role in quantum computing.
A tensor product of Hadamard gates maps a qubit string by
$\ket{00\cdots0}\mapsto\ket{++\cdots+}$,
which is a superposition of all strings of qubits of given length.
Some quantum algorithms commence this way,
which is at the heart of some references to quantum computing as parallelisation,
but parallelisation is only part of the quantum computing story~\cite{Aar13}.
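As a small worked case (normalisation again implicit), for two qubits
\begin{align*}
({\rm H}\otimes{\rm H})\ket{00}=\ket{++}
&=(\ket{0}+\ket{1})\otimes(\ket{0}+\ket{1})\\
&=\ket{00}+\ket{01}+\ket{10}+\ket{11},
\end{align*}
i.e., an equal superposition of all four two-bit strings; the same pattern holds for any number of qubits.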
\section{Building a quantum computer}
Of course we would not be performing quantum computing using cats
(although this terminology is used popularly for quantum computing~\cite{Gri14},
and the term ``two-component Schr\"{o}dinger cat states'' is used for some type of superconducting quantum computing~\cite{MLA+14}
although I prefer the term ``entangled coherent state'' for such states~\cite{San12})
and instead resort to realisable quantum computers.
I now list the current favourite platforms with examples of how the qubits are manifested~\cite{San17,San20},
and I sidestep the growing and intriguing areas of so-called continuous variable quantum computing~\cite{LB99,BSBN02}
and qudit quantum computing~\cite{WHSK20}.
Photonic quantum computing was an early candidate and remains a strong contender.
A single-rail qubit could be manifested as a superposition of no photon and one photon in a particular spatiotemporal mode;
in contrast a dual-rail qubit is a single photon with the logical qubit inherent in the polarisation or which-path or which-time
($\ket{{\rm early}}$ and $\ket{{\rm late}}$, say)
state of the photon.
In magnetic resonance, the qubit can be the nuclear spin state, which can be controlled through hyperfine interactions with electronic qubits in the atom.
Trapped ions can be superpositions of atomic excited and ground states,
and controlled quantum logic gates can be achieved by exploiting collective motion of the ions leading to phononic sidebands of the atomic energy states.
Neutral atomic qubits are also superpositions of ground and excited electronic states,
with entangling two-qubit gates achievable by controlled collisions between atoms.
Solid-state systems could be semiconductors,
with the qubit inherent in the spin of a trapped electron,
and superconducting quantum computing can involve quantised flux, superpositions of charge, or microwave photons in a resonator.
A hybrid semiconductor-superconductor system could lead to topologically protected qubits.
Quantum computers can be realised according to different approaches to processing quantum information.
The gate `model' is about making a universal primitive set of gates, such as~H and~CNOT discussed earlier, as well as another gate called~T, which maps~$\ket0\mapsto\ket0$ and $\ket1\mapsto{\rm e}^{{\rm i}\nicefrac{\pi}4}\ket1$.
Any multi-qubit unitary map can be decomposed into an efficiently scaling number of computer cycles,
with each cycle comprising these single-qubit gates, including the identity gate, plus CNOT gates.
Measurement-based quantum computing, on the other hand, begins by entangling all qubits together and then carving out a circuit by measuring unwanted qubits in an appropriate basis.
The remaining qubits are sequentially measured,
with measurement basis determined by previous measurement outcomes;
the final measurement outcomes provide the desired solution.
Adiabatic quantum computing is about slow, controlled, continuous-time evolution of a Hamiltonian to realise approximately the target unitary map.
Topological quantum computing involves braiding anyonic worldlines and could be realised via the fractional quantum Hall effect.
This list describes universal quantum computers;
purpose-built quantum computers such as quantum annealing---adiabatic or diabatic~\cite{CL21}---and boson sampling~\cite{AA13} enrich the set of quantum computers in development.
Quantum computer technology is quite impressive today, with quantum annealing reaching thousands of qubits and gate-based quantum computing having over a hundred qubits.
Quantum computing primacy has arguably arrived~\cite{DHKL20,San21}.
Certainly, quantum computing technology has surpassed what I imagined would be the status today,
but much more is needed to make quantum computers genuinely useful.
\section{Applied quantum algorithms for data science}
A plethora of quantum algorithms has been developed~\cite{QAZ}.
Although factorisation is the most stunning of the quantum algorithms,
near-term algorithms are the current focus.
Two especially enticing approaches to exploiting noisy intermediate-scale quantum (NISQ) computers~\cite{Pre18}
involve optimisation~\cite{MBB+18},
which can use variational quantum algorithms~\cite{MBB+18,CAB+21},
and quantum machine learning~\cite{SSP15}.
In both cases, a quantum advantage remains an open question.
A business analysis of the potential for commercially viable quantum computing points to combinatorial optimisation~\cite{BGM21}.
This analysis identifies three business verticals,
which are niches that serve the needs of a specific set of customers.
They identify the verticals of financial portfolio management,
computing lowest-energy configurations with applications to material design and to pharmaceuticals,
and predicting rare failures for advanced manufacturing.
The near-term gain would, one hopes, arise from exploiting quantum-inspired algorithms~\cite{ADBL20}.
Intriguingly,
even without a quantum advantage,
a theoretical study suggests that a quantum economic advantage for companies developing quantum computers
can emerge even if quantum computers do not offer a quantum computational advantage~\cite{BGM22}.
This argument for a quantum economic advantage,
even if quantum computers do not pay off,
is based on modeling a duopoly comprising a quantum computing company vs a classical computing company.
Their model shows a ``quantum economic advantage'',
even without quantum computing advantage,
partially through market creation.
Quantum machine learning aims to achieve quantum-enhanced machine learning, such as enhancing unsupervised, supervised, generative or transfer learning.
Here,
I summarise Wiebe's list of key questions that a ``quantum machine learner'' should ask~\cite{Wie20}.
Wiebe relates quantum-enhanced machine learning specifically to certain potential advantages over using classical computing.
He identifies four sought-after quantum enhancements to machine learning, namely,
fewer computational steps for training or classifying,
reducing sample complexity,
generating a new data model,
and using quantum optimisation for loss functions.
As a second track of quantum machine learning research,
Wiebe summarises how a quantum device could be natural for classifying or extracting features.
To this end,
he lists examples,
namely,
quantum Boltzmann machines and quantum neural networks,
quantum algorithms such as for principal component analysis, and
a quadratic time reduction for quantum-enhanced function inversion applied to training on data.
Consultants are involved in prognosticating commercial applications of quantum algorithms, and I share a little bit here of such endeavours.
Gartner, Inc, anticipates various quantum computing potential applications~\cite{Gar19}.
They regard chemistry applications as first past the post,
with such applications requiring somewhere around one hundred to two hundred qubits.
As the number of qubits and quantum computational cycles increase,
they forecast quantum optimisation and then quantum machine learning as being viable,
followed by material science,
with each of these application areas needing hundreds to thousands of qubits.
These forecasts are in line with the business forecast discussed above~\cite{BGM21}.
Gartner's analysis culminates with the expectation that, eventually, quantum computers will solve ``unknown problems'' requiring hundreds of thousands of qubits.
A November 2020 TechRepublic report summarises six experts' predictions for 2021~\cite{Gre20}
beginning with IBM predicting it will achieve 127 qubits,
which indeed happened in 2021.
The aforementioned Gartner, Inc, foresaw that cloud providers would incorporate quantum capability, which has indeed come to pass.
KnowBe4,
a company that focuses on security awareness training,
predicted that quantum computing would break traditional public-key cryptography;
this did not happen, and I do not see it happening in the foreseeable future.
Lux Research foreshadowed advances in optimising quantum hardware to reduce resource needs for running quantum algorithms.
Finally, Forrester predicted that 2021 would be a trough of disillusionment,
which is the third of five phases
of the Gartner hype cycle~\cite{FR08,SL10}.
The hype cycle is a plot of maturity or visibility of an emerging technology as a function of time,
and these five phases are
the ``technology trigger'',
followed by the ``peak of inflated expectations'',
then the ``trough of disillusionment''
(waning interest as implementations fail to deliver,
with survival dependent on satisfying early adopters),
subsequently the ``slope of enlightenment''
and, finally, the ``plateau of productivity''.
\section{Conclusions and outlook}
We have reviewed the idea and advantages of quantum computing and the connection with data science, emphasising the computational side such as optimisation and machine learning, and we have taken an excursion into what consulting firms are saying.
Now we take stock of all this information to figure out where quantum computing is today and how it affects data science and its applications and verticals.
Clearly, quantum computers are a game changer for computing, at least at a conceptual level.
The idea that hard problems can be solved efficiently on a quantum computer shows that quantum computing is more powerful,
at least in one type of application.
That quantum computing is \emph{provably} quadratically superior,
by some measure,
at inverting functions,
also points to quantum computing being superior in some way to classical computing.
On the other hand, quantum computers today are small and noisy,
and, even if they were large and coherent,
their application to data science entails only speculative advantages.
Whether or not quantum computers are advantageous for data science might only be resolved by building quantum computers, testing algorithmic performance on such computers,
and seeing whether an advantage has been found or not;
this empirical approach is already ongoing in the field.
Fortunately, quantum-inspired algorithms,
which typically arise as ways to challenge a purported quantum advantage by solving the same problem as well or better on a classical computer,
are giving beneficial computational results at present that arguably would not be happening if quantum computing research were not occurring.
Given the unknowns of quantum computing,
including how much quantum computers can scale up and whether we will have good algorithms that exploit the potential of quantum computing for data science,
an approach I call ``quantum-aware, quantum-ready, quantum-active'' is prudent.
This approach means that, at this stage,
we stakeholders in quantum computing, including data scientists, need to be aware of when and how quantum computing could change the game.
We maintain awareness by testing how well existing quantum computers or simulators thereof perform on problems of interest.
If and when quantum computing matters for the application at hand,
it is time to become quantum-ready, which means train up on the technology and prepare for transitioning to quantum computing so that, when the time comes, we are ready to use it without delay.
Finally, in the optimistic scenario that quantum computing has become a disruptive technology whose adoption is necessary,
we have reached the quantum-active stage where data scientists need to be fully engaged in using quantum computing to solve problems.
We need to remember that quantum computing is a high-risk high-reward venture and treat it accordingly.
\section*{Acknowledgments}
This project has been funded by the Alberta Government and by NSERC.
Thanks to S.\ L.\ Sanders who prepared the figures over two decades ago;
those figures have been valuable for my presentations on Schr\"{o}dinger's cat ever since but have not been published before.
\section*{References}
|
{
"arxiv_id": "2302.08611",
"language": "en",
"timestamp": "2023-02-20T02:03:16",
"url": "https://arxiv.org/abs/2302.08611",
"yymm": "2302"
} | \section{Introduction}
Drinfeld modules were first introduced by Vladimir Drinfel'd in order
to prove the Langlands conjecture for $\textnormal{GL}_n$ over a
global function field~\cite{Drinfeld74}. Since then, Drinfeld modules
have attracted attention due to the well established correspondence
between elliptic curves and the rank two case. Moreover, the rank one
case, often referred to as \textit{Carlitz modules}, provides a
function field analogy of cyclotomic extensions; the role played in
class field theory over number fields by elliptic curves with complex
multiplication shows strong parallels with that of Drinfeld modules of
rank two for the function field setting. This has motivated efforts
to translate constructions and algorithms for elliptic curves,
including modular polynomials~\cite{Caranay2020ComputingMP},
isogenies~\cite{Caranay2020ComputingMP}, and endomorphism
rings~\cite{endomorphismpink,garaipap}.
Naturally, cryptographic applications of Drinfeld modules have also
been explored~\cite{crsaction}, but were long anticipated to be
vulnerable for public key cryptography based on
isogenies~\cite{Scanlon01,Joux2019DrinfeldMA}. This question was
finally put to rest by Wesolowski who showed that isogenies between
Drinfeld modules of any rank could be computed in polynomial
time~\cite{wesolowski}.
Drinfeld modules of rank $r > 2$ do not have such a clear parallel,
although an analogy exists between abelian surfaces and so called
$t$-modules~\cite{Anderson86}. Owing to this discrepancy, rank two
Drinfeld modules have been studied far more closely than the case of
more general ranks.
The main goal of this work is to study a Drinfeld module analogue of
$p$-adic techniques such as Kedlaya's
algorithm~\cite{kedlayapointcounting} for computing the
characteristic polynomial of the Frobenius endomorphism acting on an
elliptic or hyperelliptic curve over a finite field. Algorithms for
elliptic curves compute the action of the Frobenius on a basis of a
particular subspace of the de Rham cohomology of a characteristic 0
lift of the curve, with coefficients in $\mathbb{Q}_p$. Our approach follows a
very similar outline, but turns out to be remarkably simpler to
describe, resting crucially on a suitable version of crystalline
cohomology for Drinfeld modules due Gekeler and
Angl\`es~\cite{Angles1997}.
More generally, the approach we present can be used to compute the
characteristic polynomial of any Drinfeld module endomorphism.
\section{Background and Main result}
\subsection{Basic Preliminaries}
Let $R$ be any ring, $r \in R$, and $\sigma: R \to R'$ a ring
homomorphism. We will follow the notational convention that writes
$\sigma(r) = \sigma_r = r^{\sigma}$ throughout this work. If $R$ is a
polynomial ring and $\sigma$ acts on its coefficient ring,
$r^{\sigma}$ denotes coefficient-wise application.
Let $q$ be a prime power, and let $\mathbb{F}_q$ denote a finite field of
order $q$, fixed throughout. We also fix a field extension $\L$ of
$\mathbb{F}_q$ such that $[\L: \mathbb{F}_q] = n$. Explicitly, $\L$ is defined as $\L
= \mathbb{F}_q[t]/(\ell(t))$ for some degree $n$ irreducible $\ell(t) \in
\mathbb{F}_q[t]$, so elements of $\L$ are represented as polynomials in
$\mathbb{F}_q[t]$ of degree less than $n$. We will discuss below an
alternative representation, better suited for some computations.
\subsection{Drinfeld Modules}
In general, Drinfeld modules can be defined over a ring $A$ consisting
of the functions of a projective curve over $\mathbb{F}_q$ that are regular
outside of a fixed place at infinity. For our purposes, we will
restrict ourselves to the consideration of Drinfeld modules defined
over the regular function ring of $\P^1 - \{\infty\}$; that is $A =
\mathbb{F}_q[x]$.
We fix a ring homomorphism $\gamma: A \to \L$ and let $\mathfrak{p} \in A$ be
the monic irreducible generator of $\ker \gamma$. Then $\mathbb{F}_{\mathfrak{p}} =
\mathbb{F}_q[x]/(\mathfrak{p})$ is isomorphic to a subfield of $\L$; we let
$m=\deg(\mathfrak{p})$, so that $m$ divides $n$. This gives us an
isomorphism $\L\simeq\mathbb{F}_q[x,t]/(\mathfrak{p}(x), g(x, t)) $, with
$g$ monic of degree $n/m$ in $t$. It will on occasion be
convenient to switch from the representation of elements of $\L$ as
univariate polynomials in $t$ to the corresponding bivariate
representation in $x,t$; in that case, for instance, $\gamma_x$ is
simply the residue class of $x$ modulo $(\mathfrak{p}(x), g(x, t))$. We
assume that $\mathfrak{p}$ and $g$ are given as part of the input.
To define Drinfeld modules, we also have to introduce the ring
$\L\ang{\tau}$ of skew polynomials, namely
\begin{align*}
\L\ang{\tau} &= \{U=u_0 + u_1 \tau + \cdots + u_s \tau^s \ \mid \ s
\in \mathbb{N}, u_0,\dots,u_s \in \L\},
\end{align*}
where multiplication is induced by the relation $\tau u = u^q \tau$,
for all $u$ in $\L$.
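As a small illustration of this twisted multiplication, for $u, v \in \L$ we have
\begin{align*}
(1+u\tau)(1+v\tau)=1+(u+v)\tau+u\,v^q\,\tau^2,
\end{align*}
which differs from the commutative case only through the twist $v\mapsto v^q$ that appears when $\tau$ is moved past $v$.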
\begin{definition}
A \textnormal{Drinfeld $A$-module} of rank $r$ over $(\L, \gamma)$
is a ring homomorphism $\phi: A \to \L\{ \tau \}$ such that
\begin{equation*}
\phi_x = \gamma_x + \Delta_1\tau^1 + \ldots + \Delta_r\tau^r
\end{equation*}
with $\Delta_i$ in $\L$ for all $i$ and $\Delta_r \neq 0$.
\end{definition}
For readers interested in the more general setting under which
Drinfeld modules are typically defined, we recommend the survey by
Deligne and Husem\"{o}ller in~\cite{deligne}.
A Drinfeld module is defined over the \textit{prime field} when $\L
\cong \mathbb{F}_{\mathfrak{p}}$ (that is, $m=n$). Algorithms for Drinfeld modules
in the prime field case tend to be algorithmically simpler, and we
will often highlight the distinction with the more general case.
\begin{example}
Let $\mathbb{F}_q = \mathbb{Z}/5\mathbb{Z}$ and $n = 4$. Set $\ell(t) = t^4 + tx^2 + 4t +
2$ and $\L = \mathbb{F}_5[t]/(\ell(t))$. Let $\gamma_x = t \bmod
\ell(t)$, in which case $\L = \mathbb{F}_{\mathfrak{p}}$. A rank two Drinfeld
module is given by $\phi_x = \tau^2 + \tau + t$.
We may instead take $\gamma_x = t^3 + t^2 + t + 3 \bmod \ell(t)$
in which case $\mathfrak{p} = x^2 + 4x + 2$ and $\mathbb{F}_{\mathfrak{p}} \cong
\mathbb{F}_{25}$. The field $\L$ admits the representations
$$\L=\mathbb{F}_5[t]/(\ell(t)) \simeq \mathbb{F}_5[x,t]/(\mathfrak{p}(x),g(x,t)),$$
with $g(x,t) = t^2 + 4tx + 3t + x$. A rank three Drinfeld module
is given by $\phi_x = \tau^3 + (t^3 + 1)\tau^2 + t \tau + t^3 +
t^2 + t + 3$.
\end{example}
Given Drinfeld $A$-modules $\phi, \psi$ defined over $(\L, \gamma)$,
an $\L$-morphism $u: \phi \to \psi$ is a $u \in \L\{ \tau \}$ such
that $u\phi_a = \psi_au $ for all $a \in A$. The set
$\emorph_{\L}(\phi)$ is the set of $\L$-morphisms $\phi \to \phi$; it
is therefore the centralizer of $\phi_x$ in $\L\{\tau\}$. It
admits a natural ring structure, and contains the
\textit{Frobenius endomorphism} $\tau^n$ (indeed, $\tau^n u = u^{q^n} \tau^n = u\, \tau^n$ for all $u \in \L$, so $\tau^n$ commutes with $\phi_x$).
\subsection{Characteristic Polynomials}\label{ssec:charpoly}
The characteristic polynomial of an endomorphism $u \in
\emorph_{\L}(\phi)$ can be defined through several points of view.
Through the action of $\phi$, $A=\mathbb{F}_q[x]$ and its fraction field
$K=\mathbb{F}_q(x)$ can be seen as a subring, resp. subfield of the skew field
of fractions $\L(\tau)$ of $\L\{\tau\}$. Then, $\emorph_{\L}^0(\phi)=
\emorph_{\L}(\phi) \otimes_{A} K$ is the centralizer of $\phi_x$ in
$\L(\tau)$; this is a division ring that contains $K$ in its center.
\begin{definition}
The \textit{characteristic polynomial} $\textnormal{CharPoly}(u)$ of $u \in
\emorph_{\L}(\phi)$ is its reduced characteristic polynomial,
relative to the subfield $K$ of
$\emorph_{\L}^0(\phi)$~\cite[Section~9.13]{Reiner03}.
\end{definition}
The characteristic polynomial of $u$ has degree $r$ and coefficients
in $A \subset K$, so that it belongs to $A[Z]$. More precisely, if
$\deg(u) = d$, $\textnormal{CharPoly}(u)$ has coefficients $a_0, \ldots, a_{r-1}
\in A$ with $\deg(a_i) \leq {d(r - i)}/r$ for all
$i$~\cite[Prop.~4.3]{endomorphismpink} and satisfies
\begin{equation}
u^r + \sum_{i=0}^{r-1} \phi_{a_i}u^i = 0.
\end{equation}
Another definition of $\textnormal{CharPoly}(u)$ follows from the introduction of
the {\em Tate modules} of $\phi$. The Drinfeld module $\phi$ induces
an $A$-module structure on the algebraic closure $\overline{\L}$ of
$\L$ by setting $a * c = \phi_a(c)$ for $a \in A$, $c \in
\overline{\L}$ (defining $\tau^i(c)=c^{q^i}$). Then, for $\mathfrak{l} \in
A$, the $\mathfrak{l}$-torsion module of $\phi$ is defined as $\phi[\mathfrak{l}]
= \{ c \in \overline{\L} \mid \mathfrak{l} *c = 0 \}$. Setting $\mathfrak{l}$ to
be any irreducible element of $A$ different from $\mathfrak{p}$, we can
define the $\mathfrak{l}$-adic Tate module as $T_{\mathfrak{l}}(\phi) =\varprojlim
\phi[\mathfrak{l}^i]$.
Letting $A_{\mathfrak{l}}$ be the $\mathfrak{l}$-adic completion of $A$,
$T_{\mathfrak{l}}(\phi)$ becomes a free $A_{\mathfrak{l}}$-module of rank $r$ and
elements of $\emorph_{\L}(\phi)$ induce endomorphisms on
$T_{\mathfrak{l}}(\phi)$. Then, for $u \in \emorph_{\L}(\phi)$, the
characteristic polynomial $\textnormal{CharPoly}_{A_{\mathfrak{l}}}(u)$ of the induced
endomorphism $u \in \emorph_{A_{\mathfrak{l}}}(T_{\mathfrak{l}}(\phi))$ agrees
with $\textnormal{CharPoly}(u)$~\cite{Gekeler91,Angles1997}.
\begin{example}
Let $\mathbb{F}_q$ and $\L$ be as in the first example, and let $\gamma_x
= t^3 + 4t^2 + t + 1 \bmod \ell(t)$.
A rank 5 Drinfeld module is given by $\phi_x = (4t^3 + t^2 +
2)\tau^5 + (t^3 + 3t^2 + t + 1)\tau^4 + (4t + 3)\tau^3 + (3t^2 +
4t + 4)\tau^2 + (4t^3 + 4t^2 + 4t)\tau + \gamma_x$.
The characteristic polynomial of $\tau^n$ on $\phi$ is $Z^5 + 3Z^4
+ (x^3 + 4x^2 + x)Z^3 + (2x^2 + 4x + 3)Z^2 + (x^3 + 2x^2 + 4x +
2)Z + 2x^4 + 3x^2 + 4x + 2$.
\end{example}
The results in this paper are based on another interpretation of
$\textnormal{CharPoly}(u)$, as the characteristic polynomial of the endomorphism
induced by $u$ in a certain {\em crystalline cohomology} module, due
to Gekeler and Angl\`es~\cite{Angles1997}. Our first main result is an
algorithm for computing the characteristic polynomial of the Frobenius
endomorphism.
Here, $\omega$ is a real number such that two $s \times s$ matrices
over a ring $R$ can be multiplied in $O(s^{\omega})$ ring operations
in $R$; the current best value is $\omega \leq 2.372$~\cite{DuWuZh22}.
We will also let $\lambda$ denote an exponent such that the
characteristic polynomial of an $s \times s$ matrix over a ring $R$
can be computed in $O(s^{\lambda})$ ring operations in $R$. When $R$
is a field, this can be done at the cost of matrix multiplication and
therefore $\lambda = \omega$~\cite{charpolycomp}. For more general
rings, the best known value to date is $\lambda \approx
2.7$~\cite{KaVi04}.
\begin{theorem}\label{mainresult}
Let $\phi$ be a rank $r$ Drinfeld module over $(\L, \gamma)$. There
is a deterministic algorithm to compute the characteristic
polynomial of the Frobenius endomorphism $\tau^n$ with bit
complexity
\begin{itemize}
\item $(r^\omega n^{1.5} \log q + n \log^2 q)^{1+o(1)}$ for the prime field
case ($m = n$)
\item $((r^{\lambda}/m + r^\omega/\sqrt{m})n^2\log q + n \log^2 q)^{1+o(1)}$ for
the general case $m < n$.
\end{itemize}
\end{theorem}
When $r$ and $q$ are fixed, the runtime in the theorem is thus
essentially linear in $n^2/\sqrt{m}$, which is $n^{1.5}$ in the prime
field case and gets progressively closer to $n^2$ as $m$
decreases. The best prior results~\cite{MuslehSchost} were limited to
the case $r=2$, with runtimes essentially linear in $n^{1.5}$ in the
prime field case and $n^2$ otherwise (for fixed $q$).
This first algorithm builds upon techniques for linear recurrences
originating from~\cite{DOLISKANI2021199}, which are so far limited to
the particular case of the Frobenius endomorphism.
We also obtain two algorithms that can be applied to any $u \in
\emorph_{\L}(\phi)$. The complexity in this case partly depends on
that of multiplication and Euclidean division in $\L\{\tau\}$, which
we will denote ${\sf SM}(d,n,q)$ and which will be discussed in more
detail in Section~\ref{sec:prelim}.
\begin{theorem}\label{mainresult2}
With assumptions as in Theorem \ref{mainresult}, there are
deterministic algorithms to compute the characteristic polynomial of
an endomorphism $u$ of degree $d$ with bit complexities
\begin{itemize}
\item $\big( \frac{\min(dr^2, (d+r)r^{\omega - 1})}{m} (d+m)n \log q + r^{\lambda} n(d+m)/m\log q + n \log^2 q \big)^{1+o(1)}$
\item $(r{\sf SM}(d+r,n, q) + r^{\lambda} n(d+m)/m\log q + n \log^2 q)^{1+o(1)} $.
\end{itemize}
\end{theorem}
Again, it is worth considering the situation with $r$ and $q$
fixed. In this case, the runtimes we obtain are, respectively,
essentially linear in $d(d+m)n/m$ and ${\sf SM}(d,n, q)$. In the next
section, we review known values for ${\sf SM}$; for the best known
value of $\omega$, and fixed $q$, it is $(d^{1.69} n)^{1+o(1)}$ for $d
\le n^{0.76}$, and $(d n^{1.52})^{1+o(1)}$ otherwise. In the case
$d=\Theta(n)$, the runtimes are thus essentially linear in $n^3/m$ and
$n^{2.53}$, respectively (so which is the better algorithm depends on
the value of $m$). For $u=\tau^n$, the algorithm in the previous
theorem is of course superior.
\section{Computational Preliminaries}\label{sec:prelim}
The key element in our complexity analyses is the cost of the
following operations in $\L$: addition/subtraction, multiplication,
inverse and (iterated) Frobenius application.
Some of the algorithms we use below (multiplication and Euclidean
division in $\L\{\tau\}$ from~\cite{PUCHINGER2017b,CaLe17}) assume
that all these operations can be done using $O\tilde{~}(n)$ operations in
$\mathbb{F}_q$. For the representation of $\L$ we use, this is however not
known to be the case; Couveignes and Lercier proved the existence of
``elliptic bases'' that satisfy these requirements~\cite{CoLe09}, but
conversion to our representation does not appear to be obvious.
This explains why in our main result, we do not count operations in
$\mathbb{F}_q$, but bit operations instead (our complexity model is a standard
RAM); we explain below how this allows us to bypass the constraints
above.
Using FFT based algorithms, polynomials of degree at most $n$ with
coefficients in $\mathbb{F}_q$ can be multiplied in boolean time $(n \log
q)^{1 + o(1)}$ \cite{Cantor1991OnFM,HaHoLe17}. It follows that
elementary field operations (addition, multiplication, inversion) in
$\L=\mathbb{F}_q[t]/(\ell(t))$ can be done with the same asymptotic cost.
Conversions between univariate and bivariate representations for
elements of $\L$ take the same asymptotic runtime.
Denote by $\alpha$ the isomorphism $\L=\mathbb{F}_q[t]/(\ell(t)) \to
\mathbb{F}_q[x,t]/(\mathfrak{p}(x),g(x,t))$; then, given $f$ of degree less than $n$
in $\mathbb{F}_q[t]$, we can compute the image $\alpha(f \bmod \ell(t))$ in $(n
\log q)^{1 + o(1)}$ bit operations; the same holds for
$\alpha^{-1}$~\cite{PoSc13a,VANDERHOEVEN2020101405}.
The last important operation is the application of the $q$-power
Frobenius in $\L$. Recall that given polynomials $f,g, h \in \mathbb{F}_q[x]$
of degree at most $n$, {\em modular composition} is the operation that
computes $f(g) \bmod h$. As shown in~\cite{vonzurGathen1992}, for $c$
in $\L=\mathbb{F}_q[t]/(\ell(t))$, $c^q$ can be computed in the same
asymptotic time (up to logarithmic factors) as degree $n$ modular
composition, following a one-time precomputation that takes $(n \log^2
q)^{1+o(1)}$ bit operations. This then extends to arbitrary powers
(positive and negative) of the Frobenius. We
should point out that modular composition techniques also underlie the
algorithms for switching between the two representations of the
elements in $\L$ mentioned above.
In~\cite{KedUman}, Kedlaya and Umans proved that modular composition
in degree $n$ can be computed in $(n \log q)^{1 + o(1)}$ bit
operations (see also the refinement due to van der Hoeven and
Lecerf~\cite{VANDERHOEVEN2020101405}), whence a similar cost for
(iterated) Frobenius in $\L$. Here, the fact that we work in a boolean
model is crucial: Kedlaya and Umans' algorithm is not known to admit a
description in terms of $\mathbb{F}_q$-operations.
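To make the role of modular composition concrete, the following minimal
Python sketch (our own helper names, a hypothetical modulus, $q$ prime,
and none of the asymptotically fast algorithms mentioned above)
precomputes $\xi = t^q \bmod \ell$ once and then obtains $c^q$ as the
modular composition $c(\xi) \bmod \ell$, checking the result against
direct powering.
\begin{verbatim}
q = 5
ell = [2, 1, 0, 0, 1]           # hypothetical monic modulus t^4 + t + 2 over F_5

def polymod(f, m):              # reduce f modulo the monic polynomial m
    f = [c % q for c in f]
    while len(f) >= len(m):
        c, d = f[-1], len(f) - len(m)
        if c:
            for i, mi in enumerate(m):
                f[d + i] = (f[d + i] - c * mi) % q
        f.pop()
    return f if f else [0]

def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % q
    return out

def polyadd(f, g):
    n = max(len(f), len(g))
    return [((f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)) % q
            for i in range(n)]

def polypow_mod(f, e, m):       # f^e mod m by square-and-multiply
    r = [1]
    while e:
        if e & 1:
            r = polymod(polymul(r, f), m)
        f = polymod(polymul(f, f), m)
        e >>= 1
    return r

def compose_mod(f, g, m):       # modular composition f(g) mod m, Horner's rule
    acc = [0]
    for c in reversed(f):
        acc = polymod(polyadd(polymul(acc, g), [c]), m)
    return acc

xi = polypow_mod([0, 1], q, ell)         # one-time precomputation: t^q mod ell
c = [3, 1, 4, 2]                         # an element of L, 2t^3 + 4t^2 + t + 3
frob_c = compose_mod(c, xi, ell)         # c^q obtained as c(xi) mod ell
assert frob_c == polypow_mod(c, q, ell)  # agrees with direct q-th powering
\end{verbatim}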
From this, we can directly adapt the cost analyses in
\cite{PUCHINGER2017b,CaLe17} to our boolean model. In particular,
following the latter reference (which did so in an algebraic cost
model), we let ${\sf SM}(d,n,q)$ be a function such that
\begin{itemize}
\item degree $d$ multiplication and right Euclidean division in
$\L\{\tau\}$ can be done in $O({\sf SM}(d,n,q))$ bit operations
\item for $n,q$ fixed, $d \mapsto {\sf SM}(d,n,q)/d$ is non-decreasing.
\end{itemize}
The latter condition is similar to the super-linearity of
multiplication functions used in~\cite{GaGe13}, and will allow us to
streamline some cost analyses. Unfortunately, there is no simple
expression for ${\sf SM}(d,n,q)$: on the basis of the algorithms
in~\cite{PUCHINGER2017b,CaLe17}, the analysis done in~\cite{CaLe17}
gives the following upper bounds:
\begin{itemize}
\item for $d \le n^{(5-\omega)/2}$, we can take
${\sf SM}(d,n,q)$ in $(d^{(\omega+1)/2} n \log q)^{1+o(1)}$
\item else, we can take ${\sf SM}(d,n,q)$ in $(d n^{4/(5-\omega)} \log
q)^{1+o(1)}$
\end{itemize}
For instance, with $d=n$, this is $(n^{(9-\omega)/(5-\omega)} \log q
)^{1+o(1)}$.
With $\omega=2.37$, the cost is $(d^{1.69} n \log
q)^{1+o(1)}$ for $d \le n^{0.76}$, and $(d n^{1.52}\log q)^{1+o(1)}$
otherwise; the exponent for $d=n$ is $2.53$. For completeness, we
point out that these algorithms heavily rely on Frobenius
applications, and as such, require spending the one-time cost $(n
\log^2 q)^{1+o(1)}$ mentioned previously.
One should also keep in mind that these asymptotic cost analyses are
not expected to reflect practical runtimes. To the authors' knowledge,
software implementations of the Kedlaya-Umans algorithm achieving its
theoretical complexity, or of matrix multiplication with exponent
close to $2.37$, do not currently exist. For practical purposes,
implementations of modular composition use an algorithm due to Brent
and Kung~\cite{BrKu78}, with an algebraic complexity of $O(n^{(\omega
+ 1)/2}) $ operations in $\mathbb{F}_q$. Revisiting skew polynomial
algorithms and their analyses on such a basis is work that remains to
be done.
\section{Prior Work}
The question of computing the characteristic polynomial, particularly
of the Frobenius endomorphism, was studied in detail
in~\cite{gekfrobdist} for the rank two case only.
The most general approach constructs a linear system based on the
degree constraints of the coefficients $a_i = \sum_{j = 0}^{n(r-i)/r}
a_{i,j} x^j$. Evaluating the characteristic polynomial at the
Frobenius element and equating coefficients gives a linear system
based on
\begin{equation}
\tau^{nr} + \displaystyle\sum_{i=0}^{r-1}\sum_{j =
0}^{\frac{n(r-i)}{r}}\sum_{k = 0}^{n(r-i)} a_{i,j}f_{j,k}
\tau^{k + ni} = 0,
\end{equation}
with $f_{j,k}$ the coefficients of $\phi_{x^{j}}$. Letting
$\textnormal{MinPoly}(\tau^n)$ denote the minimal polynomial of $\tau^n$ (as an
element of the division algebra $\emorph_{\L}^0(\phi)$ over the field
$K=\mathbb{F}_q(x)$), the solution of the preceding system is unique and
yields the characteristic polynomial if and only if $\textnormal{MinPoly}(\tau^n)
= \textnormal{CharPoly}(\tau^n)$.
Garai and Papikian gave an algorithm for computing the characteristic
polynomial~\cite[\S 5.1]{garaipap} valid for the prime field case
only. As with the previous approach, this relies on the explicit
computation of $\phi_{x^i}$, which is the dominant computational
step. This can be done by $O(n^2)$ evaluations of the recurrence
\[f_{i+1,j} = \gamma_x^{q^j}f_{i,j} + \sum_{t=1}^{r} \Delta_t^{q^{j-
t}} f_{i, j-t}.\]
Thus the bit complexity of computing all of
$\phi_x, \phi_{x^2}, \ldots, \phi_{x^n}$ is $(r n^3\log(q))^{1 +
o(1)}$.
Further study of algorithms for the specific case of the Frobenius
endomorphism in rank $r=2$ was done in~\cite{Narayanan18}
and~\cite{MuslehSchost}. The latter focused on the complexity of the
algorithms and used the same computational model that will be used
here. As we reported after Theorem~\ref{mainresult}, the best known
runtime to date was quadratic in $n$, except in the case where
$\textnormal{MinPoly}(\tau^n) = \textnormal{CharPoly}(\tau^n)$, or in the prime field case
where a bit cost of $(n^{1.5} \log q + n \log^2 q)^{1+o(1)}$ is
possible \cite{DOLISKANI2021199}. To our knowledge, no previous
analysis is available for an arbitrary endomorphism $u$.
In the context of elliptic curves, Kedlaya's algorithm
\cite{kedlayapointcounting} computes the characteristic polynomial of
a matrix representation of the lift of the Frobenius action to a
subspace of the Monsky-Washnitzer cohomology, up to some finite
precision. Our algorithm follows the same high-level approach: we
compute a matrix for the endomorphism acting on the crystalline
cohomology with coefficients in a power series ring analogous to Witt
vectors. The induced endomorphism turns out to be quite simple to
describe in terms of skew-polynomial multiplication, which eliminates
the need for a complicated lifting step.
\section{Crystalline Cohomology}
In this section, we first review the construction of the crystalline
cohomology of a Drinfeld module and its main properties; this can be
found in~\cite{Angles1997}, where the definition is credited to
unpublished work of Gekeler. Then, we introduce truncated versions of
these objects, which reduce the computation of characteristic
polynomials of endomorphisms of a Drinfeld module to characteristic
polynomial computations of matrices over truncated power series rings.
\subsection{Definition}
The content of this subsection is
from~\cite{Gekeler88,Angles1997}. The set of {\em derivations}
$D(\phi, \L)$ of a Drinfeld module $\phi$ is the set of $\mathbb{F}_q$-linear
maps $\eta: A \to \mathbb{L}\{ \tau \}\tau$ satisfying the relation
\begin{equation*}
\eta_{ab} = \gamma_a\eta_b + \eta_a\phi_b, \quad a,b \in A
\end{equation*}
Let then $y$ be a new variable. The set $D(\phi,\L)$ can be made
into an $\L[y]$-module in the following manner.
\begin{definition}{\cite[Section~2]{Angles1997}}
The set $D(\phi, \L)$ is an $\L[y]$-module under $ (c y^i *
\eta)_a =c \eta_a \phi_{x^i}$, for $\eta$ in $D(\phi, \L)$, $c$ in
$\L$, $i \ge 0$ and $a$ in $A$.
\end{definition}
Let further $I$ be the ideal of $\L[y]$ generated by $y -
\gamma_x$; for $k \ge 1$, we set
\[W_k = \L[y]/I^k \quad\text{and}\quad W = \varprojlim \hspace{1mm} W_k \cong
\L[[y-\gamma_x]].\] Thus $W$ comes equipped with projections
$\pi_k: W \to W_k$ obtained by truncation of a power series, written
as sum of powers of $(y-\gamma_x)$, in degree $k$.
We have canonical ring homomorphisms $\iota_k: A \to W_k$
given by $\iota_k(x) = y \bmod {I}{}^k$. They lift to an inclusion
$\iota: A \to W$, simultaneously commuting with each $\pi_k$, which
represents elements of $A$ via their $I$-adic expansion.
The \textit{crystalline cohomology} $H^{*}_{{\rm crys}}(\phi, \L)$ of $\phi$ is the
$W$-module $ W \otimes_{\L[y]} D(\phi, \L)$, that is, the
completion of $D(\phi,\L)$ at the ideal $I=(y-\gamma_x)$ of
$\L[y]$.
Gekeler proved that $D(\phi, \L)$ is a projective, hence free,
$\L[y]$-module of rank $r$~\cite{Gekeler88}, with canonical basis
$\hat{\eta}^{(i)}$ such that $\hat{\eta}^{(i)}(x) = \tau^{i}$ for $1
\leq i \leq r$. From this, it follows that $H^{*}_{{\rm crys}}(\phi, \L)$ is a free
$W$-module of rank $r$ as well, as pointed out in~\cite{Angles1997}.
\begin{remark}
In that reference, $A$ is not necessarily a polynomial ring, and
$\L[y]$ is replaced by $A_\L:=\L \otimes_{\mathbb{F}_q} A$. In this
case, $D(\phi,\L)$ is a projective $A_\L$-module of rank $r$, the
definition of ideal $I$ changes, but it remains maximal in
$A_\L$, so the completion $W$ of $A_\L$ at $I$ is still a local
ring and $H^{*}_{{\rm crys}}(\phi, \L)$ is still free of rank $r$ over $W$.
\end{remark}
An endomorphism $u$ of $\phi$ induces an $\L[y]$-endomorphism $u^*$
of $D(\phi,\L)$, defined as $(u^*(\eta))_x = \eta_x u$, for $\eta$ in
$D(\phi,\L)$; the same holds for the completion $H^{*}_{{\rm crys}}(\phi, \L)$.
Following~\cite{Angles1997}, using the fact that $H^{*}_{{\rm crys}}(\phi, \L)$ is free over
$W$, one can then define the characteristic polynomial
$\textnormal{CharPoly}_{W}(u^*)$ in the usual manner.
Recall now that $\textnormal{CharPoly}(u)$ denotes the characteristic polynomial
of $u$, as defined in Section~\ref{ssec:charpoly}. The following
theorem due to Angl\`es~\cite[Thm. 3.2]{Angles1997} relates this
characteristic polynomial to that of the induced endomorphism on
$H^{*}_{{\rm crys}}(\phi, \L)$, where $\iota$ below acts coefficient-wise.
\begin{theorem}\label{maintheorem} \label{cpoly}
For $u$ in $\emorph_{\L}(\phi)$, $\textnormal{CharPoly}(u)^{\iota} =
\textnormal{CharPoly}_{W}(u^*)$.
\end{theorem}
\subsection{Truncated Cohomology}
Recall now that $\mathfrak{p} \in A$ is the minimal polynomial of $\gamma_x
\in \L$ over $\mathbb{F}_q$. For $k \ge 1$, we are going to define an
$\mathbb{F}_q$-linear homomorphism $\chi_k$ such that the following diagram
commutes:
\begin{center}
\begin{tikzcd}
& W \arrow[d, "\pi_k"] \\
A \arrow[rd, "\theta_k:f(x) \mapsto f(y) \bmod \mathfrak{p}(y)^k"'] \arrow[r, "\iota_k"] \arrow[ru, "\iota"] & W_k \arrow[d, dashed, "\chi_k"] \\
& \mathbb{F}_q[y]/(\mathfrak{p}(y)^k)
\end{tikzcd}
\end{center}
There exists an isomorphism \[T_k:\mathbb{F}_q[x,y]/ (\mathfrak{p}(x),(y-x)^k)
\to \mathbb{F}_q[y]/(\mathfrak{p}(y)^k);\] see e.g.~\cite[Lemma~13]{MeSc16}. On the
other hand, recall that $\L=\mathbb{F}_q[t]/(\ell(t))$ is isomorphic
to \[\mathbb{F}_q[x, t]/(\mathfrak{p}(x), g(x, t)),\] for some $g$ in
$\mathbb{F}_q[x,t]$, monic of degree $n/m$ in $t$; in this
representation of $\L$, $\gamma_x$ is simply (the residue class of)
$x$. As a result, we get
\begin{align}
W_k&= \mathbb{F}_q[t,y]/(\ell(t),(y-\gamma_x)^k)\nonumber\\
&\simeq \mathbb{F}_q[x,t,y]/(\mathfrak{p}(x),g(x,t),(y-x)^k)\nonumber\\
&\simeq \mathbb{F}_q[y,t]/(\mathfrak{p}(y)^k,G_k(y,t)),\label{eq:Wk}
\end{align}
for a certain polynomial $G_k \in \mathbb{F}_q[y,t]$, monic of degree $n/m$
in $t$.
We can then define $\chi_k: W_k \to \mathbb{F}_q[y]/(\mathfrak{p}(y)^k)$ by
\[\chi_k: \sum_{0 \le i < n/m} c_i t^i\mapsto c_0,\] and we verify that
it satisfies our claim. The details of how to compute this
homomorphism are discussed in Section~\ref{sec:algo}.
For $k \ge 1$, we further define the $\textit{precision}$ $k$
cohomology space $H_{k}^*(\phi, \L)$ as the $W_k$-module
\[D(\phi, \L) / I^k\,D(\phi, \L) \simeq H^{*}_{{\rm crys}}(\phi, \L) / I^k\, H^{*}_{{\rm crys}}(\phi, \L).\]
It is thus free of rank $r$, and an endomorphism $u$ of $\phi$ induces
a $W_k$-linear endomorphism $u^*_k$ of $H_{k}^*(\phi, \L)$.
\begin{remark}
In~\cite{Gekeler88}, Gekeler introduced {\em de Rham cohomology} of
Drinfeld modules; this is the case $k=1$ in this construction (in
which case $W_k=\L$).
\end{remark}
In the following claim,
recall that for a polynomial $P$ and for any map $\chi$ acting on its
coefficient ring, we let $P^{\chi}$ denote coefficient-wise
application of $\chi$ to $P$.
\begin{corollary}\label{maincor} \label{corpoly}
For $u$ in $\emorph_{\L}(\phi)$ and $k \ge 1$,
$\textnormal{CharPoly}(u)^{\theta_k} = \textnormal{CharPoly}_{W_k}(u^*_k)^{\chi_k}$.
\end{corollary}
\begin{proof}
Apply ${\chi_k \circ \pi_k}$ coefficient-wise to the equality in
Theorem~\ref{cpoly}.
\end{proof}
If $u$ has degree $d$ in $\tau$, we know that all coefficients of
$\textnormal{CharPoly}(u)$ have degree at most $d$, so they can be recovered from
their reductions modulo $\mathfrak{p}^k$ for $k = \lceil \frac{d + 1}{m}
\rceil \in O((d+m)/m)$. In the prime field case, where $m=n$, and for
the special case $u=\tau^n$, the above formula gives $k=2$, but we can
take $k=1$ instead; this is discussed in Section~\ref{ssec:other}.
Note also that if we take $k=d+1$, there is no need to consider the
map $\chi_k$: on the representation of $W_{d+1}$ as
\[W_{d+1}=\mathbb{F}_q[x,t,y]/(\mathfrak{p}(x),g(x,t),(y-x)^{d+1}),\] for $f$ of
degree up to $d$, $\iota_k(f)$ is simply the polynomial $f(y)$, so
we can recover $f$ from $\iota_k(f)$ for free. We will however refrain
from doing so, as it causes $k$ to increase.
\section{Main Algorithms}\label{sec:algo}
We will now see how the former discussion can be made more concrete,
by rephrasing it in terms of skew polynomials only. The evaluation map
$\eta \mapsto \eta_x$ gives an additive bijection $D(\phi, \L) \to \L\{
\tau \} \tau$. This allows us to transport the $\L[y]$-module
structure on $D(\phi,\L)$ to $\L\{ \tau \} \tau$: one verifies that it
is given by $(c y^i * \eta) =c \eta \phi_{x^i}$, for $\eta$ in
$\L\{\tau\}\tau$, $c$ in $\L$ and $i \ge 0$, and that
$\mathcal{B}=(\tau,\dots,\tau^r)$ is a basis of $\L\{\tau\}\tau$ over
$\L[y]$.
Further, an endomorphism $u \in \emorph_{\L}(\phi)$ now induces an
$\L[y]$-linear endomorphism $u^\star: \L\{\tau\}\tau \to
\L\{\tau\}\tau$ simply given by $u^\star(v) = v u$ for $v$ in
$\L\{\tau\}\tau$. Reducing modulo the ideal $I^k \subset \L[y]$,
we denote by $u_k^\star$ the corresponding $W_k$-linear endomorphism
on the quotient module $\L\{\tau\}\tau/I^k_\L \L\{\tau\}\tau\simeq
H_{k}^*(\phi, \L)$.
We can then outline the algorithm referenced in
Theorems~\ref{mainresult} and~\ref{mainresult2}; its correctness
follows directly from Corollary \ref{maincor} and the bound on $k$
given previously.
\begin{enumerate}
\item Set $k = \lceil \frac{d + 1}{m} \rceil$, with $d=\deg_\tau(u)$,
except if $n=m$ and $u=\tau^n$ (in which case we can take $k=1$)
\item\label{step2} Compute the coefficients $u_{i,1},\dots,u_{i,r} \in
W_k$ of $\tau^i u \bmod I^k$ on the basis $\mathcal{B}$, for
$i=1,\dots,r$
\item Using the coefficients computed in step~\ref{step2}, construct
the matrix for $u^\star_k$ acting on $\L\{\tau\}\tau/I^k_\L
\L\{\tau\}\tau$ and compute its characteristic polynomial
$\textnormal{CharPoly}_{W_k}(u^\star_k) \in W_k[Z]$
\item Apply the map $\chi_k$ to the coefficients of
$\textnormal{CharPoly}_{W_k}(u^\star_k)$ to recover $\textnormal{CharPoly}(u)^{\theta_k}$,
and thus $\textnormal{CharPoly}(u)$.
\end{enumerate}
In Subsections~\ref{ssec:recurrence} to~\ref{ssec:frobenius}, we
discuss how to complete Step~\ref{step2}: we give two solutions for
the case of an arbitrary endomorphism $u$, and a dedicated, more
efficient one, for $u=\tau^n$. We freely use the following notation:
\begin{itemize}
\item for $c$ in $\L$ and $t \in \mathbb{Z}$, let $c^{[t]}$ denote the value
of the $t$th power Frobenius applied to $c$, that is,
$c^{[t]}=c^{q^t}$
\item for $f$ in $\L[y]$, $f^{[t]} \in \L[y]$ is obtained by
applying the former operator coefficient-wise, so
$\deg(f)=\deg(f^{[t]})$
\item for $M=(m_{i,j})_{1 \le i \le u, 1 \le j \le v}$ in
$\L[y]^{u\times v}$, $M^{[t]}$ is the matrix with entries
$(m^{[t]}_{i,j})_{1 \le i \le u, 1 \le j \le v}$.
\end{itemize}
Finally, we define $\mu=(y-\gamma_x)^k \in \L[y]$ (with the
value of $k$ defined above); it generates the ideal $I^k$ in $\L[y]$.
\subsection{Using a Recurrence Relation}\label{ssec:recurrence}
The following lemma is a generalization of a recurrence noted by
Gekeler~\cite[Section~5]{frobderham} for $r=2$. Recall that we write
$\phi_x = \gamma_x + \Delta_1\tau^1 + \ldots + \Delta_r\tau^r$, with
all $\Delta_i$'s in $\L$; in the expressions below, we write
$\Delta_0=\gamma_x $.
\begin{lemma}\label{lemmarec}
For any $t \geq 1$, the following relation holds in the $\L[y]$-module
$\L\{\tau\}\tau$:
\begin{equation}\label{mainrec}
\sum_{i = 0}^{r}\Delta_{i}^{[t]} \tau^{t+i} = y * \tau^t.
\end{equation}
\end{lemma}
\begin{proof}
This follows directly from the module action of $\L[y]$ on
$\L\{\tau\}\tau$, by commuting $\tau^t$ across the defining
coefficients $\Delta_i$ of $\phi$:
$$
y * \tau^t = \tau^t \phi_x = \tau^t \sum_{i = 0}^{r}\Delta_{i} \tau^{i} = \sum_{i = 0}^{r}\Delta_{i}^{[t]} \tau^{t+i}.
\qedhere$$
\end{proof}
For $i=0,\dots,r-1$, let $\Lambda_{i} = -\frac{\Delta_i}{\Delta_r}$
and define the order $t$ companion matrix for the recurrence, $\mathcal{A}_t
\in\L[y]^{r\times r}$, as
\begin{equation}
\mathcal{A}_t = \begin{bmatrix}
\Lambda_{r-1}^{[t]} & \Lambda_{r-2}^{[t]} & \ldots & \Lambda_1^{[t]} & \Lambda_0^{[t]} + \frac{y}{\Delta_r^{[t]}} \\
1 & 0 & \ldots & 0 & 0 \\
0 & 1 & \ldots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \ldots & 1 & 0
\end{bmatrix}
\end{equation}
For $t \ge 1$, let $\kappa_t \in \L[y]^{1 \times r}$ denote the coefficient
vector of $\tau^t$ with respect to the standard basis
$\mathcal{B}$. Then, we have the following relation between $r \times
r$ matrices over $\L[y]$:
\begin{equation}
\begin{bmatrix}
\kappa_{t + r} \\
\kappa_{t +r - 1} \\
\vdots \\
\kappa_{t + 1}
\end{bmatrix}
= \mathcal{A}_t \begin{bmatrix}
\kappa_{t +r- 1} \\
\kappa_{t +r- 2} \\
\vdots \\
\kappa_{t}
\end{bmatrix}
\end{equation}
For $k\ge 1$, these relations can be taken modulo $\mu$, to give
equalities over $W_k=\L[y]/\mu$; below, we will write $\bar
\kappa_t =\kappa_t \bmod \mu \in W_k^{1\times r}$.
Starting from $\bar \kappa_t,\dots,\bar \kappa_{t+r-1}$, we obtain $\bar
\kappa_{t+r}$ using $O(r)$ operations (divisions, Frobenius) in $\L$
to obtain the coefficients appearing on the first row of $\mathcal{A}_t$,
followed by $O(kr)$ operations in $\L$ to deduce the entries of $\bar
\kappa_{t+r}$.
Below, we will need $\bar \kappa_1,\dots,\bar \kappa_{d+r}$. Altogether,
computing them takes $((d+r)krn\log q)^{1 + o(1)}$ bit operations; with
our chosen value of $k$, this is also
\[((d+r)(d+m)rn/m\log q)^{1 + o(1)}.\] Let us then write $u = u_0 +
\cdots + u_d \tau^d$. For $i=1,\dots,r$, we have
\[\tau^i u = u_0^{[i]}\tau^i +\cdots + u_d^{[i]} \tau^{d+i},\]
so the coefficient vector
$[u_{i,1} \cdots u_{i,r}] \in W_k^{1 \times r}$ of $\tau^i u \bmod I^k$ on the basis
$\mathcal{B}$ is given by the product
\[[u_0^{[i]}~\cdots~u_d^{[i]}]
\begin{bmatrix}
\bar \kappa_{i} \\
\bar\kappa_{i+1} \\
\vdots \\
~\bar \kappa_{i+d}~
\end{bmatrix} \in W_k^{1 \times r}.\]
Each such operation takes $O(dkr)$ operations in $\L$, for a total of
$(d(d+m)r^2n/m \log q)^{1+o(1)}$ bit operations if done independently
of one another (this is the dominant cost in the algorithm).
In cases when $d$ is not small compared to $r$, we can reduce the cost
slightly using matrix arithmetic, since all coefficient vectors we
want can be read off the product of an $r \times (d+r)$ matrix by a $(d+r) \times r$ matrix,
$$
\begin{bmatrix}
u_0^{[1]} & \cdots & u_d^{[1]} & 0 &\cdots&\cdots &0\\
0 & u_0^{[2]} & \cdots & u_d^{[2]} & 0& \cdots &0 \\
&& \ddots&&\ddots \\
0 &\cdots&\cdots &0 & u_0^{[r]} & \cdots & u_d^{[r]}
\end{bmatrix}
\begin{bmatrix}
\bar \kappa_{1} \\
\bar\kappa_{2} \\
\vdots \\
~\bar \kappa_{d+r}~
\end{bmatrix} \in W_k^{r \times r}.$$
This takes
$((d+r)(d+m)r^{\omega-1}n/m \log q)^{1+o(1)}$ bit operations.
\subsection{Using Euclidean Division}\label{ssec:division}
This section describes an alternative approach to computing the
coefficients of an endomorphism $u$ on the canonical basis
$\mathcal{B}$. Computations are done in $\L[y]$ rather than
$W_k=\L[y]/\mu$ (we are not able to take reduction modulo $\mu$
into account in the main recursive process).
The algorithm is inspired by a well-known analogue for commutative
polynomials~\cite[Section~9.2]{GaGe13}: for a fixed $a \in \L[y]$
of degree $r$, we can rewrite any $f$ in $\L[y]$ as $f=\sum_{0 \le
i < r} f_i(a) y^i$, for some coefficients $f_0,\dots,f_{r-1}$ in
$\L[y]$. This is done in a divide-and-conquer manner.
This approach carries over to the non-commutative setting. We start by
showing how $f$ of degree $d$ in $\L\{\tau\}$ can be rewritten as
$$f = \sum_i f_i \phi_x^i,$$ for some $f_i$ of degree less than $r$ in
$\L\{\tau\}$. If we let $K$ be such that $d < K r \le 2d$, with $K$ a
power of $2$, index $i$ in the sum above ranges from $0$ to $K-1$.
If $K=1$, we are done. Else set $K'=K/2$, and compute the quotient $g$
and remainder $h$ in the right Euclidean division of $f$ by
$\phi_x^{K'}$, so that $f = g\phi_x^{K'} + h$. Recursively, we compute
$g_0\dots,g_{K'-1}$ and $h_{0},\dots,h_{K'-1}$, such that
\[g = \sum_{0 \le i < K'} g_{i} \phi_x^i \quad\text{and}\quad h =
\sum_{0 \le i < K'} h_i \phi_x^i.\] Then, we return
$h_0,\dots,h_{K'-1},g_0,\dots,g_{K'-1}$. The runtime of the whole
procedure is $O\tilde{~}({\sf SM}(d,n,q))$ bit operations, with $\sf SM$
as defined in Section~\ref{sec:prelim} (the analysis is the same as
the one done in the commutative case in~\cite{GaGe13}, and uses the
super-linearity of {\sf SM} with respect to $d$).
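For intuition, here is a minimal Python sketch of the commutative
divide-and-conquer expansion that the skew procedure mimics: it expands
an ordinary polynomial $f$ in base $a$, i.e.\ $f = \sum_i f_i\, a^i$ with
$\deg f_i < \deg a$, by splitting on the power $a^{K/2}$. All names are
ours, coefficients are rationals, and no attempt is made at the
asymptotically fast arithmetic assumed in the cost analysis.
\begin{verbatim}
from fractions import Fraction

def pmul(f, g):                          # polynomial product, lists low->high
    out = [Fraction(0)] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += Fraction(fi) * Fraction(gj)
    return out

def padd(f, g):
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else Fraction(0)) +
            (g[i] if i < len(g) else Fraction(0)) for i in range(n)]

def pdivmod(f, a):                       # quotient and remainder of f by a
    f = [Fraction(c) for c in f]
    quo = [Fraction(0)] * max(len(f) - len(a) + 1, 1)
    while len(f) >= len(a) and any(f):
        d = len(f) - len(a)
        c = f[-1] / Fraction(a[-1])
        quo[d] = c
        for i, ai in enumerate(a):
            f[d + i] -= c * Fraction(ai)
        while f and f[-1] == 0:
            f.pop()
    return quo, (f or [Fraction(0)])

def expand_base(f, a, K):                # f = sum_{i<K} parts[i]*a^i, K a power of 2
    if K == 1:
        return [f]
    half = K // 2
    aK = a
    for _ in range(half.bit_length() - 1):   # a^(K/2) by repeated squaring
        aK = pmul(aK, aK)
    g, h = pdivmod(f, aK)                # f = g * a^(K/2) + h
    return expand_base(h, a, half) + expand_base(g, a, half)

f = [1, 2, 0, 3, 1, 4]                   # 4y^5 + y^4 + 3y^3 + 2y + 1
a = [1, 1, 1]                            # y^2 + y + 1
parts = expand_base(f, a, 4)             # here deg f < 4 * deg a
acc, apow = [Fraction(0)], [Fraction(1)]
for p in parts:                          # recombine and check the expansion
    acc = padd(acc, pmul(p, apow))
    apow = pmul(apow, a)
while len(acc) > 1 and acc[-1] == 0:
    acc.pop()
assert acc == [Fraction(c) for c in f]
\end{verbatim}
The skew-polynomial version replaces $a$ by $\phi_x$ and ordinary
division by right Euclidean division in $\L\{\tau\}$, which is why its
cost is governed by ${\sf SM}$.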
From there, we are able to compute the coefficients of $f \in
\L\{\tau\}\tau$ on the monomial basis $\mathcal{B}$. This essentially
boils down to using the procedure above, taking care of the fact that
$f$ is a multiple of $\tau$. Factor $\tau$ on the left, writing $f$ as
$\tau g$: if $f=F \tau$, $g=F^{[-1]}$. Apply the previous procedure,
to write $g=\sum_{0 \le i \le s} g_i \phi_x^i$, with all $g_i$ of
degree less than $r$ and $s \le d/r$.
This gives $f = \tau g = \sum_{0 \le i \le s} (g^{[1]}_i \tau)
\phi_x^i$, with all coefficients $g^{[1]}_i \tau$ supported on
$\tau,\dots,\tau^r$. Extracting coefficients of $\tau,\dots,\tau^r$,
we obtain polynomials $G_1,\dots,G_r$ of degree at most $s$ in
$\L[y]$ such that $f = \sum_{1\le i \le r} G_i * \tau^i$.
The cost of left-factoring $\tau$ in $f$, and of multiplying all
coefficients of $g$ back by $\tau$, is $(d n \log q)^{1+o(1)}$, so the
dominant cost is $O\tilde{~}({\sf SM}(d,n,q))$ bit operations from the
divide-and-conquer process. To obtain the matrix of an endomorphism
$u$ of degree $d$, we apply $r$ times this operation, to the terms
$\tau^i u$, $i=1,\dots,r$. The runtime is then dominated by $O\tilde{~}(r
{\sf SM}(d+r,n,q))$. Finally, reducing the entries of the matrix
modulo $\mu=(y-\gamma_x)^k$ takes softly linear time in the size of
these entries, so can be neglected.
\subsection{Special Case of the Frobenius Endomorphism}\label{ssec:frobenius}
In the particular case where $u = \tau^n$, we may speed up the
computation using a baby-step giant-step procedure, based on the
approach used in~\cite{DOLISKANI2021199}. As a first remark, note that
for $u=\tau^n$, $d=n$ and $k$ in $O(n/m)$.
In this case, it is enough to compute the vectors
$\bar\kappa_{n+1},\dots,\bar\kappa_{n+r}$. They are given by
\begin{equation}
\begin{bmatrix}
\bar\kappa_{n+r} \\ \bar\kappa_{n +r - 1} \\ \vdots \\ \bar\kappa_{n +
1}
\end{bmatrix} = \bar\mathcal{A}_n\hdots \bar\mathcal{A}_1,
\end{equation}
with $\bar \mathcal{A}_t$ the image of $\mathcal{A}_t$ modulo $\mu=(y-\gamma_x)^k$
for all $t$. To compute the matrix product $\bar \mathcal{A} = \bar \mathcal{A}_n
\hdots\bar \mathcal{A}_1$, we slightly extend the approach used
in~\cite{DOLISKANI2021199} (which dealt with the case $k=1$). Consider
the following element of $\L[y]^{r \times r}$:
\begin{equation}\label{eqdef:B}
\mathcal{B} =
\begin{bmatrix}
\Lambda_{r-1} & \Lambda_{r-2} & \ldots & \Lambda_1 & \Lambda_0 \\
1 & 0 & \ldots & 0 & 0 \\
0 & 1 & \ldots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \ldots & 1 & 0
\end{bmatrix} + \begin{bmatrix}
0 & 0 & \ldots & \Delta_r^{-1} \\
0 & 0 & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & 0
\end{bmatrix} y.
\end{equation}
It follows in particular that for $t \ge 1$,
$$\mathcal{A}_t = \mathcal{B}^{[t]} \quad\text{and}\quad \bar\mathcal{A}_t = \mathcal{B}^{[t]} \bmod
\mu,$$ with reduction applied coefficient-wise.
Write $n^* = \lceil \sqrt{nk} \rceil \in O(n/\sqrt m)$,
and let $n$ be written as $n = n^*n_1 + n_0 $ with
$0 \le n_0 < n^*$, so that $n_1 \le \sqrt{n/k}$.
Setting
$$\cC = \mathcal{B}^{[n^* + n_0]} \cdots \mathcal{B}^{[n_0 + 1]}$$ and
$$\cC_0 =\mathcal{B}^{[n_0]} \ldots \mathcal{B}^{[1]},$$
the matrix $\mathcal{A}$ is the product
\begin{equation*}
\mathcal{A} = \cC^{[(n_1-1) n^*]} \cdots \cC^{[n^*]} \cC \cC_0.
\end{equation*}
Our goal is to compute $\bar \mathcal{A}=\mathcal{A} \bmod \mu$, without computing
$\mathcal{A}$ itself.
Any Frobenius application (of positive or negative index) in $\L$
takes $(n \log q)^{1+o(1)}$ bit operations. In particular, computing
all matrices $\mathcal{B}^{[i]}$ that arise in the definitions of $\cC$ and
$\cC_0$ takes $(r n^2/\sqrt{m} \log q)^{1+o(1)}$ bit operations.
Once they are known, the next stage of the algorithm computes $\cC$
and $\cC_0$ in $\L[y]$. This is done using a matrix subproduct-tree
algorithm~\cite[Chapter~10]{GaGe13}, using a number of operations in
$\L$ softly linear in $r^\omega n^*$. This is $(r^\omega n^2/\sqrt{m}
\log q)^{1+o(1)}$ bit operations.
To deduce the shifted matrices
$$\cC^{[(n_1-1)n^*]} \bmod \mu,\dots,\cC^{[n^*]}\bmod\mu,$$ we use the
following lemma.
\begin{lemma}
For $f$ in $\L[y]$ and $t \ge 0$,
$$f^{[t]} \bmod \mu = (f \bmod \mu^{[-t]})^{[t]}$$
\end{lemma}
\begin{proof}
Let $g=f \bmod \mu^{[-t]}$, so that we have an equality of the form
$f = a \mu^{[-t]} + g$ in $\L[y]$. We raise this to the power
$q^t$ coefficient-wise; this gives $f^{[t]} = a^{[t]}\mu+
g^{[t]}$. Since $g$, and thus $g^{[t]}$, have degree less than $k$,
this shows that $g^{[t]}=f^{[t]} \bmod \mu$.
\end{proof}
Applying this entry-wise, we compute $\cC^{[i n^*]}\bmod \mu$ by
reducing all entries of $\cC$ modulo $\mu^{[-i n^*]}$, then raising
all coefficients in the result to the power $q^{i n^*}$, for
$i=1,\dots,(n_1-1)$.
Matrix $\cC$ has degree $O(n/\sqrt{m})$, and the sum of the degrees of
the moduli $\mu^{[-t]}$ is $kn_1$, which is $O(n/\sqrt{m})$ as
well. Altogether, this takes $O(r^2n/\sqrt{m})$ applications of
Frobenius in $\L$, together with $O(r^2n/\sqrt{m})$ arithmetic
operations in $\L$ to perform all Euclidean
divisions~\cite[Chapter~10]{GaGe13}. Thus, the runtime is $(r^2
n^2/\sqrt{m} \log q)^{1+o(1)}$ bit operations.
Finally, we multiply all matrices $\cC^{[i n^*]}\bmod \mu$ and $\cC_0
\bmod \mu$. This takes $(r^\omega n^2/\sqrt{m} \log q)^{1+o(1)}$ bit
operations.
\begin{algorithm}[!t]
\label{euclid}
\begin{algorithmic}[1]
\Procedure{CharPolyFrobenius}{} \\ \textbf{Input} A field extension
$\L$ of degree $n$ over $\mathbb{F}_q$, $(\Delta_1, \ldots, \Delta_r) \in
\L^r$ representing a rank $r$ Drinfeld module $\phi$ over $(\L,
\gamma)$.\\ \textbf{Output} $a_i \in A$ such that the characteristic
polynomial of the Frobenius is $ Z^r + \sum_{i=0}^{r-1} a_i Z^i$.
\State $k \gets \lceil (n + 1)/m \rceil$, $\mu \gets (y - \gamma_x)^k$.
\State $n^*, n_1, n_0 \gets \lceil \sqrt{nk} \rceil, \lfloor n / n^* \rfloor, n \bmod n^* $.
\State $\mathcal{B}$ as in~\eqref{eqdef:B}
\State $\cC \gets \mathcal{B}^{[n^* + n_0]} \ldots \mathcal{B}^{[n_0 + 1]}$.
\State $\bar \cC_0 \gets \mathcal{B}^{[n_0]} \ldots \mathcal{B}^{[1]} \bmod \mu$
\State $\bar \cC^{[in^*]}\gets (\cC \bmod \mu^{[-in^*]})^{[in^*]}$ for $0 \leq i < n_1$.
\State $\bar \mathcal{A} \gets \bigg(\displaystyle\prod_{i=0}^{n_1 - 1}\bar \cC^{[in^*]} \bigg) \bar \cC_0$
\State $\bar a_i \gets \textnormal{coefficient of } Z^i \textnormal{ in } \det(\bar \mathcal{A} - ZI)$
\State \Return $a_i = \chi_k(\bar a_i)$ for $0 \leq i < r$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Other Operations}\label{ssec:other}
Once the coefficients of the skew polynomials $\tau^i u$ on the basis
$\mathcal B$ are known modulo $\mu$, we compute the characteristic
polynomial of the matrix formed from these coefficients. This can be
done with a bit cost of $(r^{\lambda} kn\log q)^{1+o(1)}$ when the
matrix has entries in $W_k$, with $\lambda$ the exponent
defined in Section~\ref{ssec:charpoly}.
At this stage, we have all coefficients of
$\textnormal{CharPoly}_{W_k}(u^\star_k)$ in $W_k$. It remains to apply the map
$\chi_k$ to each of them to recover $\textnormal{CharPoly}(u)$.
Elements of $W_k=\mathbb{F}_q[t,y]/(\ell(t),(y-\gamma_x)^k)$ are written
as bivariate polynomials in $t,y$, with degree less than $n$ in $t$
and less than $k$ in $y$. To compute their image through $\chi_k$,
we first apply the isomorphisms
\begin{align*}
W_k= \mathbb{F}_q[t,y]/(\ell(t),(y-\gamma_x)^k)\nonumber&\xrightarrow{A_k} \mathbb{F}_q[x,t,y]/(\mathfrak{p}(x),g(x,t),(y-x)^k)\nonumber\\
&\xrightarrow{B_k} \mathbb{F}_q[y,t]/(\mathfrak{p}(y)^k,G_k(y,t))
\end{align*}
from~\eqref{eq:Wk}, with $\mathfrak{p}(y)^k$ of degree $km$ and $G_k$ of
degree $n/m$ in $t$.
We mentioned in Section \ref{sec:prelim} that for $c$ in
$\L=\mathbb{F}_q[t]/(\ell(t))$, we can compute its image $\alpha(c)$ in
$\mathbb{F}_q[x,t]/(\mathfrak{p}(x),g(x,t))$ using $(n \log q)^{1+o(1)}$ bit
operations. Proceeding coefficient-wise with respect to $y$, this
shows that for $C$ in $W_k$, we can compute $A_k(C)$ in $(k n \log
q)^{1+o(1)}$ bit operations.
The \textit{tangling} map of~\cite[\S 4.5]{powermod} provides an
algorithm for computing the isomorphism $\mathbb{F}_q[x, y]/(\mathfrak{p}(x),
(y-x)^k) \to \mathbb{F}_q[y]/(\mathfrak{p}(y)^k)$ in $(km \log q)^{1+o(1)}$
bit operations (this could also be done through modular composition,
with a similar asymptotic runtime, but the algorithm
in~\cite{powermod} is simpler and faster). Applying it
coefficient-wise with respect to $t$, this allows us to compute
$B_k(A_k(C))$ in $(k n \log q)^{1+o(1)}$ bit operations again. At
this stage, the mapping $\chi_k$ is simply extraction of the degree-0
coefficient in $t$.
We apply this procedure $r$ times, for a total cost of $(r k n \log
q)^{1+o(1)}$ bit operations. This can be neglected in the runtime
analysis.
When using precision $k = 1$ for the prime field case, for $u=\tau^n$,
it is necessary to compute the constant coefficient $a_0$ separately.
This is done using the formula $a_0 = (-1)^{n(r+1) +
r}N_{\L/\mathbb{F}_q}(\gamma_{\Delta_r})^{-1} \mathfrak{p}$ from \cite{garaipap}
and takes $(n \log q)^{1 + o(1)}$ bit operations.
Summing the costs seen so far for the various steps of the algorithm
finishes the proof of our main theorems.
\subsection{Example}
Let $\mathbb{F}_q = \mathbb{Z}/2\mathbb{Z}$, $n = 3$ and set $\ell(t) = t^3 + t + 1$ and $\L
= \mathbb{F}_2[t]/(\ell(t))$. Let $\gamma_x = t + 1 \bmod \ell(t)$, so that
\[\mathfrak{p} = x^3 + x^2 + 1 = \ell(x+1),\]
and $\L \cong \mathbb{F}_{\mathfrak{p}}=\mathbb{F}_q[x]/(\mathfrak{p}(x))$, with the isomorphism
given by $f(t) \mapsto f(x+1)$.
Consider the rank 4 Drinfeld module $\phi_x = t \tau^4 + (t^2 +
t)\tau^3 + \tau^2 + t^2 \tau + t + 1$. We proceed to compute the
characteristic polynomial using the de Rham cohomology, that is,
crystalline cohomology truncated in degree $k=1$. In other words,
all computations are done over $\L$.
The recurrence of equation (\ref{mainrec}) becomes $ \tau^{s + 4} =
(t + 1)^{2^s}\tau^{s + 3} + (t^2 + 1)^{2^s}\tau^{s + 2} +
t^{2^s}\tau^{s + 1} + (1 + t^{1 -2^s})\tau^{s} $, where $s \ge 1$ is the
iteration index. Running the recurrence for $n = 3$ iterations gives:
\begin{itemize}
\item $\tau^{5} = (t^2 + 1)\tau^{4} + (t^2 + t + 1)\tau^{3} + t^2 \tau^{2} + t^2 \tau^{1}$
\item $\tau^{6} = (t^2 + 1)\tau^{4} + (t^2 + 1)\tau^{3} + (t^2 + t) \tau^{2} + \tau^{1}$
\item $\tau^{7} = \tau^{4} + t\tau^{3} + (t + 1) \tau^{2} + \tau^{1}$
\end{itemize}
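These three expansions are easy to check by machine. The following
minimal Python sketch (our own encoding: elements of $\mathbb{F}_8$ are stored
as 3-bit integers whose bits are the coefficients of $1, t, t^2$ modulo
$t^3+t+1$) runs the recurrence three times and verifies the expansion of
$\tau^5$ listed above.
\begin{verbatim}
def mul(a, b):                  # product in F_8 = F_2[t]/(t^3 + t + 1)
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):            # reduce the two possible overflow bits
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def frob(a, e=1):               # a^(2^e); the Frobenius has order 3 on F_8
    for _ in range(e % 3):
        a = mul(a, a)
    return a

def inv(a):                     # a^(-1) = a^6, since a^7 = 1 for a != 0
    a2 = mul(a, a)
    return mul(a2, mul(a2, a2))

gamma = 0b011                                  # gamma_x = t + 1
Delta = [0b011, 0b100, 0b001, 0b110, 0b010]    # t+1, t^2, 1, t^2+t, t

# kappa[j] = coefficients of tau^j on the basis (tau, tau^2, tau^3, tau^4)
kappa = {j: [1 if i == j - 1 else 0 for i in range(4)] for j in range(1, 5)}
for s in range(1, 4):                          # produces tau^5, tau^6, tau^7
    lead = inv(frob(Delta[4], s))
    coeffs = [mul(lead, gamma ^ frob(Delta[0], s))] + \
             [mul(lead, frob(Delta[i], s)) for i in range(1, 4)]
    new = [0, 0, 0, 0]
    for c, j in zip(coeffs, range(s, s + 4)):  # add c * kappa[j] (XOR = addition)
        new = [n ^ mul(c, k) for n, k in zip(new, kappa[j])]
    kappa[s + 4] = new

# tau^5 = t^2 tau + t^2 tau^2 + (t^2+t+1) tau^3 + (t^2+1) tau^4, as listed above
assert kappa[5] == [0b100, 0b100, 0b111, 0b101]
\end{verbatim}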
A matrix for the Frobenius endomorphism
can be inferred to be
\begin{center}
$ \begin{bmatrix}
1 & t & t + 1 & 1 \\
t^2 + 1 & t^2 + 1 & t^2 + t & 1 \\
t^2 + 1 & t^2 + t + 1 & t^2 & t^2 \\
1 & 0 & 0 & 0 \\
\end{bmatrix} . $
\end{center}
It has characteristic polynomial $Z^4 + (t + 1)Z^2 + (t + 1)Z$. Using
the expression for $a_0$ which is valid in the prime field case, the
Frobenius norm can be inferred to be $a_0 = x^3 + x^2 + 1$.
To recover the final coefficients, observe that $t \mapsto x + 1$
gives the required map $\chi_1 : W_1 = \L \to
\mathbb{F}_{\mathfrak{p}}$. Finally, we conclude that the characteristic polynomial
of $\tau^n$ is $Z^4 + xZ^2 + xZ + x^3 + x^2 + 1$.
\section{Experimental Results}
An implementation of the algorithm of Section~\ref{ssec:frobenius} was created in
SageMath~\cite{sagemath} and is publicly available at
\url{https://github.com/ymusleh/drinfeld-module}. An implementation in
MAGMA~\cite{MR1484478} is also publicly available at
\url{https://github.com/ymusleh/drinfeld-magma} and was used to
generate the experimental results included in this work. Our
implementation differs from our theoretical version in a few ways.
\begin{itemize}
\item The Kedlaya-Umans algorithm is most likely not used by MAGMA
for computing Frobenius mappings of elements of~$\L$.
\item To compute the images of coefficients under the map $\chi_k$, we
leverage a simpler procedure using reduction modulo bivariate
Gr\"obner bases, rather than the tangling map of van der Hoeven and
Lecerf. In any case, this does not impact the run times presented.
\end{itemize}
\begin{acks}
We thank Xavier Caruso, Antoine Leudi\`ere and Pierre-Jean
Spaenlehauer for interesting discussions. Schost is supported by an
NSERC Discovery Grant.
\end{acks}
\begin{table}[h!]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|llllllll|}
\hline
\multicolumn{8}{|c|}{Run times for $m = 10$, $q = 25$ (in seconds)} \\ \hline
\multicolumn{1}{|l|}{\textbf{}} & \multicolumn{1}{l|}{$n = 100$} & \multicolumn{1}{l|}{$n = 150$} & \multicolumn{1}{l|}{$n = 200$} & \multicolumn{1}{l|}{$n = 300$} & \multicolumn{1}{l|}{$n = 400$} & \multicolumn{1}{l|}{$n = 500$} & $n = 600$ \\ \hline
\multicolumn{1}{|l|}{$r = 5$} & \multicolumn{1}{l|}{0.400} & \multicolumn{1}{l|}{2.260} & \multicolumn{1}{l|}{42.190} & \multicolumn{1}{l|}{86.830} & \multicolumn{1}{l|}{269.760} & \multicolumn{1}{l|}{635.170} & 1099.110 \\ \hline
\multicolumn{1}{|l|}{$r = 9$} & \multicolumn{1}{l|}{0.790} & \multicolumn{1}{l|}{4.210} & \multicolumn{1}{l|}{78.860} & \multicolumn{1}{l|}{157.100} & \multicolumn{1}{l|}{481.090} & \multicolumn{1}{l|}{1129.670} & \\ \hline
\multicolumn{1}{|l|}{$r = 12$} & \multicolumn{1}{l|}{1.170} & \multicolumn{1}{l|}{6.080} & \multicolumn{1}{l|}{104.630} & \multicolumn{1}{l|}{220.430} & \multicolumn{1}{l|}{658.950} & \multicolumn{1}{l|}{1531.580} & \\ \hline
\multicolumn{1}{|l|}{$r = 18$} & \multicolumn{1}{l|}{2.300} & \multicolumn{1}{l|}{11.360} & \multicolumn{1}{l|}{170.790} & \multicolumn{1}{l|}{366.690} & \multicolumn{1}{l|}{1074.840} & \multicolumn{1}{l|}{2451.530} & \\ \hline
\multicolumn{1}{|l|}{$r = 23$} & \multicolumn{1}{l|}{3.820} & \multicolumn{1}{l|}{17.580} & \multicolumn{1}{l|}{240.100} & \multicolumn{1}{l|}{525.670} & \multicolumn{1}{l|}{1518.370} & \multicolumn{1}{l|}{} & \\ \hline
\end{tabular}
}
\end{table}
\FloatBarrier
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.08645",
"language": "en",
"timestamp": "2023-02-20T02:04:45",
"url": "https://arxiv.org/abs/2302.08645",
"yymm": "2302"
} | \section{Introduction}
\IEEEPARstart{E}{xtended} reality (XR), including augmented reality (AR), mixed reality (MR), and virtual reality (VR), has shown great potential in immersive gaming, teleconferencing, and remote education, and is changing the way that humans interact with computer-simulated environments. In XR, users need to wear head-mounted devices (HMDs) to perceive the virtual contents. These XR HMDs usually have strict restrictions on weight, energy consumption, and heat dissipation, especially when the users have to wear them for a long time. Therefore, XR offloading, as a way to offload storage- and computation-intensive XR tasks to remote GPU-accelerated servers, has been widely considered.
Recently, wireless XR has been proposed to increase users' mobility and quality of experience (QoE) \cite{akyildiz2022wireless,morin2022toward}. However, XR offloading through wireless networks is a challenging task. In some latency-critical XR applications, high-resolution images or three-dimensional (3D) data need to be transmitted within several milliseconds, resulting in data rate requirements as high as tens of Gbps \cite{zhang2022semantic}, far above the achievable capacity of the existing
wireless networks, whose peak rates range from 0.1 Gbps to 2.0 Gbps. Therefore, a new communication framework for supporting wireless XR is highly desired.
To overcome the ultra-high data rate requirement in wireless XR, one promising solution is to exploit semantic communications to significantly reduce the data rate requirements~\cite{qin2021semantic}. In the existing communication systems, the source data is transmitted in the form of bit streams and the communication system is optimized to minimize the bit-error rate. However, how the semantic meaning of the source data will be used to conduct a specific task is not considered. These semantic-unaware or task-unaware transmission methods usually require a high data transmission rate or a wide bandwidth. Sometimes, even if there are some bit errors during transmission, humans will not misunderstand the semantic meaning of the recovered data, or the task accuracy will not decrease significantly. This phenomenon inspires us to develop semantic communications for wireless XR.
Semantic communications transmit the semantic meaning of the source data, which is usually the content contributing more to the humans' understanding of the source data or to task accuracy. Earlier attempts at semantic communications developed joint source and channel coding (JSCC) methods for images \cite{bourtsoulatze2019deep} and texts \cite{farsad2018deep}, which take source coding into consideration and optimize the system to minimize word/pixel-level errors rather than bit errors. Semantic communications were first brought to wide attention in~\cite{xie2021deep}, where BERT, a pre-trained natural language processing (NLP) model, is used to measure the sentence similarity in a semantic-aware wireless text transmission framework. A similar idea was then extended to the transmission of other data modalities, such as speech \cite{weng2021semantic}, images \cite{hu2022robust, dai2022nonlinear, huang2022toward}, and videos \cite{jiang2022wireless}. In addition to these reconstruction-oriented transmission works, task-oriented semantic communications have also been developed, which are more effective in reducing data traffic as only task-related semantic information is transmitted. Some representative works consider wireless image retrieval \cite{jankowski2020wireless,hu2022robust}, machine translation \cite{xie2022task}, visual question answering \cite{xie2021task}, and so on. Despite the fast development of semantic communications, a semantic communication framework dedicated to wireless XR is still under-developed.
Besides, most existing semantic communication systems adopt a fixed code length for transmitting different source inputs~\cite{xie2021deep,bourtsoulatze2019deep}, ignoring the fact that different source inputs may contain different amounts of semantic information. A straightforward example is that the amount of semantic information contained in a picture of a coffee shop changes with the number of customers inside. Therefore, a shorter code should be used for source data with less semantic information. Also, different source inputs may have different anti-noise capabilities. For example, if we calculate the first derivative of the task performance index with respect to the semantic information extracted from the source data, a higher value usually means this information is more sensitive to channel noise and a longer code \footnote{When the number of information symbols is fixed, a longer code here means adding more parity symbols.} should be used to conquer channel noise. Thus, a variable-length coding scheme shall be introduced.
In this paper, we design a universal variable-length semantic-channel coding method for different semantic-aware transmission tasks. In particular, we first use a rate-allocation network to analyze the amount of semantic information extracted from the source data and its anti-noise capability, and then generate a rate-allocation index/map. The rate-allocation index/map will then be used to guide the semantic-channel coding network and achieve rate adaptation at the transmitter side. We also design a training loss function to realize the explicit trade-off between code length and transmission performance. By adopting some proxy functions, the whole system is trained in an end-to-end manner.
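As a rough illustration of this design (the module and variable names below are ours, not the exact architecture used later in the paper), the following PyTorch-style sketch shows how a rate-allocation head can produce a soft keep-mask over candidate channel symbols, with the expected number of kept symbols entering the training loss as a differentiable proxy for the code length:
\begin{verbatim}
import torch
import torch.nn as nn

class VariableLengthCoder(nn.Module):
    """Illustrative variable-length semantic-channel coder (not the paper's exact model)."""
    def __init__(self, feat_dim=256, max_symbols=64):
        super().__init__()
        self.symbol_encoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, max_symbols))
        self.rate_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, max_symbols))

    def forward(self, semantic_features):
        symbols = self.symbol_encoder(semantic_features)   # candidate channel symbols
        keep_prob = torch.sigmoid(self.rate_head(semantic_features))
        tx = symbols * keep_prob        # soft proxy: unselected symbols fade to zero
        expected_length = keep_prob.sum(dim=-1)            # proxy for the code length
        return tx, expected_length

def total_loss(task_loss, expected_length, lambda_rate=1e-3):
    # explicit trade-off between transmission performance and code length
    return task_loss + lambda_rate * expected_length.mean()

coder = VariableLengthCoder()
features = torch.randn(8, 256)          # a batch of extracted semantic features
tx_symbols, exp_len = coder(features)
loss = total_loss(task_loss=torch.tensor(0.0), expected_length=exp_len)
loss.backward()
\end{verbatim}
At inference time, the soft mask would be binarized (e.g., by thresholding) so that only the selected symbols are actually transmitted; this is one possible design choice rather than the scheme adopted in our final system.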
Some prior works have considered applying variable-length coding schemes to semantic communications, but their methods differ from ours in certain ways. For instance, J. Dai \textit{et al.} \cite{dai2022nonlinear} estimate the entropy of the semantic information and assign the code length proportionally to the estimated entropy. D. Huang \textit{et al.} \cite{huang2022toward} divide the semantic features into different classes and adjust the quantization level for each class. However, these works have used strong hand-crafted assumptions on the coding rate allocation scheme while the optimal coding scheme may change from one transmission task to another. Therefore, we propose to learn the rate allocation scheme in an end-to-end manner. The most relevant work to ours is the one proposed by Q. Hu \textit{et al.} \cite{hu2022robust}, where an importance weight is learned for each semantic feature. Different from \cite{hu2022robust}, we provide an explicit trade-off between code length and performance through the training loss.
Our contribution can be summarized as follows:
\begin{enumerate}
\item{We design a general semantic communication framework for supporting wireless XR offloading. We also identify the key transmission tasks under the developed framework, and provide task-aware network architectures for semantic coding modules.}
\item{We design a universal variable-length semantic and channel coding module that can be used in different semantic communication systems. The rate-allocation scheme is learned in an end-to-end manner by introducing some proxy functions.}
\item{Experiments on both human mesh reconstruction and image transmission tasks demonstrate the superiority of the proposed semantic communication framework for wireless XR over traditional communication systems, and of the variable-length coding scheme over the fixed-length coding scheme.}
\end{enumerate}
\section{Semantic coding in wireless XR offloading}
\label{sec:overall}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.45]{overall.eps}
\caption{The overall architecture of semantic communications for extended reality.}
\label{fig:overall}
\end{figure*}
In this section, we will first introduce the overall semantic communication framework for wireless XR offloading. After that, we will introduce the semantic coding modules deployed for the uplink and downlink of wireless XR offloading, respectively.
\subsection{Overall architecture}
As shown in Fig.~\ref{fig:overall}, semantic-aware wireless XR offloading usually consists of the following steps: sensor data acquisition, semantic-aware sensor data transmission, rendered results generation, and semantic-aware rendered results transmission. To facilitate the understanding of the semantic-aware transmission modules, we first provide a brief description of the overall architecture.
\subsubsection{Sensor data acquisition}
In many XR applications, the virtual contents are generated according to the actions or environmental surroundings of XR users. For example, virtual object insertion requires users' environmental information; VR gaming servers will generate game scenes according to users' actions. To map objects, scenes, and human actions from the real environments to the virtual world, some sensors are required, such as microphones, RGB cameras, depth sensors, and so on. These sensors are usually deployed on XR HMDs, or integrated into some external devices, like haptic gloves and omnidirectional treadmills. Depending on the sensor type, the sensed data can be of various data formats. After collecting the sensed raw data, data aggregation,
coding, and compression will be performed at the user side. In XR offloading, the sensed data will be transmitted to remote servers for processing. Here, we define the users who send sensed data to remote servers as source users, and the remote servers that receive sensed data from these source users as destination servers.
\subsubsection{Semantic-aware sensor data transmission}
In traditional communication systems, the sensed data is first transmitted to the remote destination servers, and then used for different computing tasks, such as light estimation, 3D object reconstruction, and object insertion. Considering the fact that task-related semantic information only occupies a small proportion of the overall information contained in the sensed data, transmitting the whole sensed data wastes wireless resources. Therefore, we design a semantic-aware sensor data transmission (SSDT) module.
In our SSDT module, a semantic encoder, deployed at the XR source user side, is responsible for extracting task-related semantic information from the sensed data. By discarding semantically-unimportant information, the data traffic for transmitting sensor data can be reduced. At the same time, a semantic decoder deployed at the XR destination server side will use this semantic information directly for task execution. The whole system is then optimized for transmitting this task-related information under the guidance of task accuracy. More details will be provided in Section \ref{sec:SSDT}.
\subsubsection{Rendered results generation}
Based on the semantic decoding results from the SSDT module, a series of rendering operations will be conducted at the XR server side. Specifically, for VR applications, the status of the pre-stored 3D models of objects or scenes will first be updated. This includes adding new 3D object models to the virtual environments, changing the orientations and actions of the 3D object models, and updating the appearances of the 3D object models. Afterwards, photo-realistic viewpoint-dependent perspective images or 360° panoramic images will be rendered for XR users to perceive these 3D scenes, depending on the XR users' current fields of view (FoVs). As for AR/MR applications, in addition to changing the orientations or appearances of the 3D object models, light or shadow effects will also be added to these models according to the environmental changes near XR users. After rendering, the rendered results will be transmitted to the targeted users to perceive.
In some XR applications, the users who receive rendered results from XR servers can be the same ones who send sensor data. For example, AR game servers will require users to send their environment information and integrate virtual game elements into the physical environments of the same users. In other applications, like VR conferencing and multiplayer VR games, the sensed data transmitted by one user will be used to render the virtual contents designed for other users. Without loss of generality, we call the servers that send rendered results source servers, and the users that receive rendered results destination users.
\subsubsection{Semantic-aware rendered results transmission}
In traditional communication systems, the rendered images or 3D object models are transmitted to destination users pixel-by-pixel or point-by-point. However, as discussed in previous works \cite{blau2018perception,mentzer2020high}, simply minimizing the per-pixel error between the original images and the reconstructed images does not necessarily result in high perceptual quality. To improve users' QoE, perceptual quality should therefore be taken into account. Besides, due to the round-trip latency in XR offloading, XR users' FoVs at the time of receiving these rendered results may be different from the FoVs in the sensor data acquisition stage. In this case, these rendered results shall be transmitted in a way that is friendly to the local view-synthesis process. To address these issues, we design a semantic-aware rendered results transmission (SRRT) module.
In our SRRT module, a semantic encoder is deployed at the XR source server side to extract perceptual-friendly or view-synthesis-friendly semantic information from the rendered results. Similarly, a semantic decoder deployed at the destination user side will generate or synthesize viewpoint-dependent images and 3D object models using the received semantic information. The whole system is then optimized to maximize the perceptual experience, as we will discuss in more detail in Section \ref{sec:SRRT}.
\subsection{Uplink SSDT module}
\label{sec:SSDT}
In XR applications, the sensed data is used for task execution.
Sometimes, both the sensed data and the task outputs can be too large to be suitable for transmission. To reduce the data traffic, we develop the SSDT module for XR offloading. In the proposed SSDT module, task-related semantic information, which lies in a hidden space smaller than both the sensed data space and the task output space, is extracted from the sensed data and used for task execution through a semantic encoder and decoder. Different from traditional coding modules, the semantic information extracted by the semantic coding modules has clear semantic meanings, which are learnt either by providing extra labels or under the guidance of a model-based decoder.
In the subsequent discussion, we take some classic computing tasks in XR offloading as examples and introduce the corresponding semantic coding modules.
\subsubsection{Light estimation}
Light estimation aims to recover a high-dynamic-range (HDR) panoramic illumination map from a single image with limited FoV, which can later be used in many AR/MR applications for realistic virtual object relighting.
In light estimation tasks, the inputs are high-resolution (HR) perspective RGB images captured by HMD cameras and the output is an HDR panoramic illumination map. XR source users can choose either to send the HR images to the destination XR servers or to process the HR images locally and send the estimated illumination map to the destination XR servers. However, due to the large data volume of HR images and panoramic images, both ways are challenging for wireless networks. To address this issue, we refer to the recent light estimation method EMLight \cite{zhan2021emlight} and design the semantic coding modules accordingly.
In particular, a semantic encoder, composed of a regression neural network (DenseNet-121) and a feature encoder, is deployed at the XR user side. The regression network extracts high-level semantic features from the input RGB images and regresses these features into $N(=128)$ 3D anchor points representing the light distribution, a 3D light intensity value, and a 3D ambient term. Simultaneously, the feature encoder encodes the input images and generates an $L$ ($\leq 2048$)-dimensional feature vector representing the source user's surrounding environment. At the destination server side, a spherical convolution network, serving as the semantic decoder, takes the received semantic information as input and synthesizes the illumination map via a conditional image synthesis process. Therefore, the total number of semantic features after the semantic coding modules is $3(N+2)+L$, which is significantly smaller than the original HR images and the expected HDR panoramic illumination output.
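To make the data reduction concrete, the following minimal Python sketch compares the semantic payload size $3(N+2)+L$ with the raw input and output it replaces; the image resolutions used below are illustrative assumptions rather than values from our system.
\begin{verbatim}
# Minimal sketch of the payload comparison for the light-estimation SSDT module.
# The image resolutions below are assumed for illustration only.
N, L = 128, 2048                     # anchor points and environment-feature length
semantic_payload = 3 * (N + 2) + L   # anchors + intensity + ambient + feature vector
hr_input = 1920 * 1080 * 3           # assumed HR perspective RGB input (values)
hdr_output = 2048 * 1024 * 3         # assumed HDR panoramic illumination map (values)
print(semantic_payload, hr_input, hdr_output)   # 2438 vs. millions of values
\end{verbatim}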
\subsubsection{3D mesh recovery}
3D mesh recovery of human hand, face, and body shape from a single RGB image has wide applications in VR conferencing and gaming. In this field, a popular direction is to train a regression network for fitting the parameters of parametric 3D hand/face/body models, such as the hand model with articulated and non-rigid deformation (MANO) for hands \cite{zhang2019end}, the 3D morphable model (3DMM) for faces \cite{tewari2017mofa}, and the skinned multi-person linear model (SMPL) for body shape \cite{kanazawa2018end}, and to use the estimated parameters for 3D mesh reconstruction. These methods are quite suitable for semantic communications, as we only need to transmit the estimated parameters between the source users and destination servers, and the latter can use the received parameters for 3D mesh recovery. Compared with the original RGB images or the generated 3D meshes, these parameters are quite small. For example, 76 semantic features are required for 3D hand mesh recovery, among which $3$ features are for camera orientation, $10$ features for hand shape, and $21$ 3D keypoints for hand pose. In this case, the semantic encoder is the regression network while the semantic decoder is the MANO model. A similar process can be applied to the face mesh ($257$ semantic features) and the human body mesh ($85$ semantic features). A detailed transmission paradigm for human mesh recovery will be given in Section \ref{sec:human mesh}.
\subsubsection{3D scene reconstruction}
3D scene reconstruction from a sequence of RGB images is widely used in AR applications for virtual object placement. In the 3D scene reconstruction task, the 3D scene will be represented by a 4D truncated signed distance function (TSDF) volume with color values. Both the image sequences and the TSDF volume are large in size. Recent studies on sparse TSDF representation \cite{sun2021neuralrecon} and TSDF completion \cite{dai2018scancomplete} inspire us to transmit a sparse representation of the TSDF for data traffic reduction. In our SSDT module, the semantic encoder at the source user side first finds a sparse TSDF volume representation from the image sequences using a process similar to that in \cite{sun2021neuralrecon}, where only voxels representing surfaces have non-zero values. Then, a downsampling operation is applied to the sparse TSDF volume, yielding an incomplete 3D scan of these surfaces. At the destination server side, a ScanComplete network \cite{dai2018scancomplete} is used as the semantic decoder to reconstruct the original sparse TSDF volume, followed by the global TSDF volume replacement defined in \cite{sun2021neuralrecon} for the final TSDF volume reconstruction.
\subsection{Downlink SRRT module}
\label{sec:SRRT}
After rendering, the downlink SRRT module will transmit rendered results to destination users in a semantic-aware manner. In particular, semantic information that is important for users' visual QoE will consume more channel resources during the transmission process so that users' visual QoE can increase. Here, we consider two factors affecting users' visual QoE, \textit{i}.\textit{e}., perceptual quality, and FoV mismatch. We will design semantic coding modules for these factors as follows.
\subsubsection{Perceptual-friendly XR contents transmission} To improve the perceptual quality of reconstructed XR images, we use three perceptual-friendly semantic losses to enhance the semantic information extraction and reasoning process in semantic coding modules: learned perceptual image patch similarity (LPIPS) \cite{zhang2018unreasonable} loss, generative adversarial network (GAN) loss \cite{mentzer2020high}, and salience-weighted loss \cite{wang2023robust}.
LPIPS loss measures the distance between the original images and the recovered images in the feature space of a deep neural network originally
trained for image classification. Minimizing the discrepancy in feature space has been regarded as a way to improve perceptual quality in earlier works \cite{zhang2018unreasonable}. The GAN loss uses a learned discriminator to detect the artifacts in the recovered images and adopts adversarial training to encourage the semantic coding modules to generate artifact-free contents. The salience-weighted loss imposes a higher penalty on the pixels belonging to salient objects so that the salient objects in the images can be reconstructed better. As users first notice the salient objects in images, the salience-weighted loss can help improve visual QoE. Under the guidance of these semantic-aware losses, the semantic information affecting the human visual perception process can be extracted more effectively.
The implementation details of the LPIPS and GAN losses will be provided in Section \ref{sec:examples}. To implement the salience-weighted loss, a salience detection network is required \cite{wang2015deep}, whose outputs will be used to design the weighted loss. To boost the performance of the salience-weighted loss, a spatially-varying coding scheme shall be implemented, so that pixels that do not belong to any salient object can be assigned a short code, as they are more likely to be ignored by users. In Section \ref{spatial}, we will develop a spatially-varying coding method.
\subsubsection{View-Synthesis for XR contents} Due to fast changes of users' viewing directions and standing points, XR users' current FoVs may differ from the FoVs in the sensor data acquisition process, leading to an FoV mismatch between the rendered contents and the ideal contents. Although the FoV mismatch caused by viewing direction changes can be solved by transmitting panoramic images, the FoV mismatch resulting from standing point changes requires further signal processing techniques. In the SRRT module, we adopt existing view-synthesis technologies \cite{wiles2020synsin} to address this problem and design the semantic coding modules correspondingly. Specifically, a semantic encoder consisting of a feature predictor and a depth regressor is deployed at the source server side. The extracted semantic features and depth map are then transmitted to the destination users. Based on the users' current FoVs, a neural renderer and a refinement module are used as the semantic decoder to synthesize the targeted views. Transmitting these semantic features with clear semantic meanings and functionalities enables fast viewpoint adaptation at the destination user side.
\section{Variable-length semantic and channel coding}
\label{sec:VLCC}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.48]{Adaptive_paradigm.eps}
\caption{The variable-length semantic-channel coding method for 1D semantic information.}
\label{fig:adaptive}
\end{figure*}
In this section, we first introduce the proposed variable-length semantic-channel coding (VL-SCC) method for 1D semantic information. Then, we extend it to 2D/3D semantic information with a spatially-varying coding scheme.
\subsection{VL-SCC for 1D semantic information}
\label{sec:1DVL}
\subsubsection{Overall architecture}
The overall architecture of the proposed VL-SCC is shown in Fig. \ref{fig:adaptive}. Given 1D semantic information $x$, we first apply a rate allocation network (RAN) to $x$ to analyze its amount of semantic information and anti-noise capability and to output a rate allocation index $r(x) \in (0,1)$. Then, $r(x)$ is used to guide the coding process and to generate a $0$-$1$ mask for rate control.
In particular, a semantic and channel encoder (SCE), another neural network, takes both $x$ and $r(x)$ as inputs and generates $N$ modulated symbols $y\in \mathcal{R}^{N}$. Under the guidance of $r(x)$, $y$ will be generated in a specific way, \textit{i}.\textit{e}., the most important information symbols and parity symbols for the transmission of $x$ are selected and placed in the first $Nr(x)$ symbols of $y$, while the other symbols are generated in an importance-descending order.
At the same time, a $0$-$1$ mask, $M(x)\in \mathcal{R}^{N}$, will be generated according to the rate index $r(x)$. Specifically, we first apply a uniform quantizer to $r(x)$ and quantize $r(x)$ into one of $L$ discrete values. Denote the quantization result as $q(x)$. Then, we generate $M(x)$ according to the value of $q(x)$ in a way that all $1$ values are generated ahead of the $0$ values. The details of the quantizer and the mask generation operation will be given hereafter. Once generated, $M(x)$ will be multiplied element-wise with $y$ to obtain the transmitted symbols $z$, and the symbols with $0$ values in the mask can be safely discarded before transmission. This completes the coding-rate adaptation at the encoder side.
After encoding, the shortened transmitted symbols $z$ will be transmitted to the decoder through noisy wireless channels. Simultaneously, $q(x)$, which indicates the variable-length coding scheme adopted at the encoder side, will also be transmitted to the decoder. Different from $z$, $q(x)$ will be transmitted in an error-free manner. As the quantization level $L$ is chosen as a small value, $q(x)$ can be represented by a few bits and can easily be transmitted without error.
After receiving the noise-corrupted symbols $\hat{z}$, we first apply zero-padding to ensure that they have a length of $N$. After this, the zero-padded symbols $p$ and $q(x)$ are concatenated and fed into a semantic and channel decoder (SCD), which has a symmetric architecture to the SCE. Finally, the semantic information is reconstructed at the decoder side and denoted as $\hat{x}$.
As we can see from Fig. \ref{fig:adaptive}, the quantizer and the mask generation process restrict the end-to-end trainability of the whole architecture. Also, a training loss should be designed to train the SCE, the SCD, and the rate allocation network. In the following, we address these issues sequentially.
\subsubsection{Quantizer}
We first solve the gradient problem of the quantizer. In our network, the quantization process is defined as follows,
\begin{equation}
q(x)=l, \quad \text{if} \ \frac{l-0.5}{L-1}\leq r(x) \leq \frac{l+0.5}{L-1},
\label{equ:1}
\end{equation}
for $l=0,1, \dots, L-1$. As mentioned above, $r(x)\in (0,1)$. Therefore, $q(x)$ may take one of $L$ different values, \textit{i}.\textit{e}., $0, 1, \dots, L-1$. With this quantization formulation, the gradient is zero almost everywhere. To address this issue, we use a straight-through estimator of the gradient \cite{courbariaux2016binarized} and define the gradient in the back-propagation process as,
\begin{equation}
q'_{x}=L-1.
\label{equ:2}
\end{equation}
In this way, the gradient can be back-propagated from $q(x)$ to $r(x)$.
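The following minimal PyTorch-style sketch illustrates this straight-through quantizer; it follows Eqs. (\ref{equ:1}) and (\ref{equ:2}) but is a simplified sketch rather than the exact implementation used in our experiments.
\begin{verbatim}
import torch

def quantize_rate(r, L):
    # Straight-through quantizer for the rate index r(x) in (0, 1).
    # Forward: q = round(r * (L - 1)), which matches Eq. (1).
    # Backward: dq/dr = L - 1, which matches Eq. (2), obtained by routing
    # gradients through the differentiable surrogate (L - 1) * r.
    q_hard = torch.round(r * (L - 1))            # values in {0, 1, ..., L-1}
    q_soft = (L - 1) * r                         # surrogate with gradient L - 1
    return q_soft + (q_hard - q_soft).detach()   # forward = q_hard, grad = L - 1

# Example: r produced by a rate allocation network with a sigmoid output.
r = torch.sigmoid(torch.randn(1, requires_grad=True))
q = quantize_rate(r, L=8)
q.sum().backward()                               # gradient L - 1 reaches r
\end{verbatim}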
\subsubsection{Mask generation} Next, we introduce the design of the mask generation operation. In the forward-propagation process, we first initialize an all-zero vector $M(x)$. Then, we define the value of each element in $M(x)$ as follows,
\begin{equation}
\begin{split}
m_{i}= \left \{
\begin{array}{ll}
1, & i < \frac{N}{L-1} q(x),\\
0, & \text{else},\\
\end{array}
\right.
\end{split}
\label{equ:3}
\end{equation}
for $i=0,\dots, N-1$, where $m_{i}$ is the $i$-th element in $M(x)$. As shown in Eq. (\ref{equ:3}), the value of $m_{i}$ is decided by a step function of $i$. Given a specific $q(x)$ value, all elements of $M(x)$ whose locations in $M(x)$ are smaller than $\frac{N}{L-1} q(x)$ take the value $1$ while the rest take the value $0$. In this way, we ensure that all $1$ values are generated before the $0$ values in the final $0$-$1$ mask. Also, when $q(x)$ has a larger value, $\frac{N}{L-1} q(x)$ increases, making more elements in $M(x)$ take the value $1$.
Similar to the quantization operation, the gradient $\frac{dm_{i}}{dq}$ is zero almost everywhere. To address this issue, we first rewrite Eq. (\ref{equ:3}) as,
\begin{equation}
\begin{split}
m_{i}= \left \{
\begin{array}{ll}
1, & q(x) > \frac{L-1}{N} i,\\
0, & \text{else},\\
\end{array}
\right.
\end{split}
\label{equ:4}
\end{equation}
for $i=0,\dots, N-1$, so that each $m_{i}$ is a step function of $q(x)$. Under this formulation, we can again adopt a straight-through estimator of the gradient. We define the gradient in the back-propagation process as,
\begin{equation}
\begin{split}
{m'_{i}}_{q}= \left \{
\begin{array}{ll}
1/3, & \lceil \frac{L-1}{N} i\rceil-2<q(x)\leq \lceil \frac{L-1}{N} i\rceil+1,\\
0, & \text{else},\\
\end{array}
\right.
\end{split}
\label{equ:4b}
\end{equation}
where $\lceil \cdot \rceil$ is the ceiling operation. Here, the gradient is defined over an interval of $3$ units to stabilize the training process. Under this formulation, only the elements $m_{i}$ whose locations in $M(x)$ are close to $\frac{N}{L-1} q(x)$ contribute gradients to $q(x)$.
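As an illustration, the PyTorch-style sketch below implements the mask generation of Eq. (\ref{equ:3}) together with the surrogate gradient defined above; it is a minimal sketch, not the exact implementation used in our experiments.
\begin{verbatim}
import torch

class MaskGen(torch.autograd.Function):
    # 0-1 mask generation: step forward pass of Eq. (3), surrogate backward
    # pass with gradient 1/3 on a 3-unit window around the step location.

    @staticmethod
    def forward(ctx, q, N, L):
        i = torch.arange(N, dtype=q.dtype, device=q.device)
        mask = (i < (N / (L - 1)) * q).to(q.dtype)   # m_i = 1 if i < N/(L-1)*q(x)
        ctx.save_for_backward(q, i)
        ctx.N, ctx.L = N, L
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        q, i = ctx.saved_tensors
        N, L = ctx.N, ctx.L
        step = torch.ceil((L - 1) / N * i)
        window = ((q > step - 2) & (q <= step + 1)).to(q.dtype)
        return (grad_output * window / 3.0).sum(), None, None

# Example: q comes from the straight-through quantizer (N symbols, L levels).
q = torch.tensor(3.0, requires_grad=True)
mask = MaskGen.apply(q, 16, 8)
mask.sum().backward()
\end{verbatim}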
\subsubsection{Model learning} At last, we define the training process for the whole network. Specifically, we want the encoded representation of the semantic information to be compact when measured in symbols, while at the same time
we want the distortion $\mathcal{L}_{D}(x,\hat{x})$ to be small, where $\mathcal{L}_{D}$ is some measurement of the reconstruction error. Under this design goal, we can formulate the training loss as the well-known rate-distortion trade-off \cite{mentzer2020high,balle2016end,mentzer2018conditional}. Let $\mathcal{X}$ be a set of semantic information used for training and $x\in \mathcal{X}$ be a training example from the set. The training loss is defined as,
\begin{equation}
\mathcal{L}=\sum_{x\in \mathcal{X}}{\mathcal{L}_{D}(x,\hat{x})+\gamma \mathcal{L}_{r}(x)},
\label{equ:5}
\end{equation}
where $\mathcal{L}_{r}(x)$ represents the rate loss and $\gamma$ is an introduced trade-off parameter between the distortion loss and the rate loss. Increasing the value of $\gamma$ penalizes the coding length more heavily and reduces the average code length over the set $\mathcal{X}$.
\textbf{Distortion loss:} The distortion loss is used to evaluate the
transmission quality of the semantic information. In semantic communications, it is a combination of a data-fidelity term (for faithful reconstruction) and a semantic loss term (for human understanding). As its formulation is task-specific, we do not give its detailed mathematical expression here but leave it to Section \ref{sec:examples}, where semantic-aware transmission tasks for XR applications are given.
\textbf{Rate loss:}
As shown in Fig. \ref{fig:adaptive}, the number of transmitted symbols is completely decided by the value of $r(x)$. Therefore, $r(x)$ can be regarded as a continuous indicator of the code length. We define the rate loss as $\mathcal{L}_{r}(x)=r(x)$.
By minimizing $\mathcal{L}_{D}(x,\hat{x})$, more elements in $M(x)$ are driven toward the value $1$, while by minimizing $\mathcal{L}_{r}(x)$, more elements in $M(x)$ are driven toward the value $0$, achieving an explicit trade-off between performance and code length. Through Eq. (\ref{equ:5}), the best code length for each $x$ can be learnt.
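For concreteness, a minimal sketch of the rate-distortion objective in Eq. (\ref{equ:5}) is given below; the distortion function is a placeholder for the task-specific loss discussed in Section \ref{sec:examples}.
\begin{verbatim}
import torch

def vl_scc_loss(x, x_hat, r_x, gamma, distortion_fn):
    # Rate-distortion training loss of Eq. (5), a minimal sketch.
    #   distortion_fn : task-specific distortion L_D (e.g., l2 or a semantic loss)
    #   r_x           : rate indices r(x) from the rate allocation network,
    #                   used directly as the rate loss L_r(x) = r(x)
    #   gamma         : trade-off between distortion and rate
    return distortion_fn(x, x_hat) + gamma * r_x.mean()

# Example with an l2 distortion over a mini-batch.
x, x_hat, r_x = torch.randn(4, 85), torch.randn(4, 85), torch.rand(4)
loss = vl_scc_loss(x, x_hat, r_x, gamma=0.1,
                   distortion_fn=lambda a, b: ((a - b) ** 2).mean())
\end{verbatim}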
\begin{figure*}[ht]
\centering
\includegraphics[scale=0.75]{Adaptive_image.eps}
\caption{The variable-length semantic-channel coding method for 2D/3D semantic information.}
\label{fig:adaptive_image}
\end{figure*}
\subsection{Spatially-variant VL-SCC}
\label{spatial}
In the previous subsection, the variable-length coding scheme is designed for 1D semantic information. Here, we extend it to 2D/3D semantic information by introducing spatially-variant coding. Parts of the design here are inspired by the early deep compression work in \cite{li2018learning}. Different from \cite{li2018learning}, we jointly consider source and channel coding and directly map semantic information into modulated symbols without an intermediate bit representation.
\subsubsection{Motivation}
Given 2D/3D semantic information, especially that extracted from images/videos, the information amount is spatially-variant\footnote{Given a 3D tensor of shape $(H \times W \times C)$, we define the first two dimensions as the spatial dimensions and the last as the channel dimension.}. For example, the semantic information extracted from a sky region in an image carries less information than that extracted from a human region. Moreover, if the average information amount over all regions is relatively small, a shorter code should be used. This motivates us to design a spatially-variant coding scheme.
\subsubsection{Overall architecture}
The architecture of the proposed VL-SCC for 2D/3D semantic information is shown in Fig. \ref{fig:adaptive_image}. Given semantic information $X \in \mathcal{R}^{H_{1}\times W_{1}\times C_{1}}$ extracted from an original image of shape $(H, W, 3)$, we first adopt a rate allocation network to analyze its semantic information amount and anti-noise capability at different spatial locations and generate a rate allocation map $R(X)\in \mathcal{R}^{H_{2}\times W_{2}}$, which indicates the final code length at each spatial location of the modulated symbols.
Different from the rate index in Section \ref{sec:1DVL}, $R(X)$ here is a matrix. The burden of transmitting its quantized version is proportional to its spatial size. To address this issue, down-sampling operations are adopted in the rate allocation network.
Similarly, $R(X)$ is used in two ways. It is first concatenated with $X$ and fed into an SCE. The SCE will generate modulated symbols $Y$ of shape $(H_{2}, W_{2}, N)$. Under the guidance of $R(X)$, $Y$ will be generated to satisfy the following properties:
\begin{enumerate}
\item The most important information symbols and parity symbols at spatial location $(i,j)$ will be selected and put in the first $R(X)_{i,j}N$ symbols at that location\footnote{In semantic communications, different information symbols have different importance levels according to their contributions to the transmission task. Usually, each parity symbol is only used to protect several information symbols, like LDPC coding. In this case, parity symbols will also have importance differences according to the information symbols they are protecting. By training, the networks can learn to distinguish important information and parity symbols from others, and adjust their locations.};
\item For location $(i,j)$ where the number of important symbols is smaller than $R(X)_{i,j}N$, it will accept important symbols from neighbouring locations $(m,n)$ where the number of important symbols is larger than $R(X)_{m,n}N$;
\item The remaining symbols are generated in an importance-descending order along the channel dimension.
\end{enumerate}
In the SCE, a down-sampling operation is also used to ensure that the modulated symbols $Y$ have the same spatial size as $R(X)$.
Simultaneously, $R(X)$ will be quantized by Eq. (\ref{equ:1}) to obtain $Q(X)$. After that, a mask tensor $M(X) \in \mathcal{R}^{H_{2}\times W_{2} \times N}$ is calculated, where the mask vector along the channel dimension at each spatial location of $M(X)$ is generated using Eq. (\ref{equ:3}). Next, we conduct an element-wise multiplication between $M(X)$ and $Y$ and discard the symbols at locations where the mask value is $0$. Finally, the remaining symbols will be reshaped into a vector $Z$.
After encoding, the symbol vector $Z$ will be transmitted to the decoder via noisy channels. Simultaneously, $Q(X)$ will be transmitted in an error-free way. The bit length of $Q(X)$ is $H_{2}W_{2}\log_{2}(L)$. Considering the spatial size of the original image, $HW$, the bit rate for this error-free transmission link is $H_{2}W_{2}\log_{2}(L)/HW$ bits per pixel (bpp), which must be kept small to reduce the transmission cost of this error-free side link. In our method, we set $L=64$, $H_{2}=\frac{H}{16}$, and $W_{2}=\frac{W}{16}$, resulting in $0.0234$ bpp. Also, as this is an error-free link, entropy coding can be used to further decrease the bit rate of transmitting $Q(X)$. We adopt Huffman coding and reduce the bit rate to $0.0209\sim 0.0232$ bpp\footnote{As we do not introduce an extra loss to explicitly reduce the entropy of $Q(X)$, the rate reduction brought by entropy coding is not significant. We will consider it in our future work.}.
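The side-link overhead can be checked with the short sketch below, which simply reproduces the bpp arithmetic above.
\begin{verbatim}
import math

L = 64           # quantization levels for each entry of Q(X)
downsample = 16  # H2 = H/16 and W2 = W/16 in our setting

# bits per pixel of the error-free side link: H2 * W2 * log2(L) / (H * W)
side_link_bpp = math.log2(L) / downsample ** 2
print(f"{side_link_bpp:.4f} bpp")   # 0.0234
\end{verbatim}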
At the decoder side, $Q(X)$ will guide the process of reshaping the received symbol vector $\hat{Z}$ back into a 3D tensor of shape $(H_{2},W_{2},N)$. Zero-padding is applied in this process to the positions that were discarded at the encoder side. After obtaining the zero-padded symbols $P$, an SCD will reconstruct the 2D/3D semantic information $\hat{X}$.
\textbf{Rate loss:}
As shown in Fig. \ref{fig:adaptive_image}, the code length at each spatial location $(i,j)$ is decided by $R(X)_{i,j}$. Therefore, we define the rate loss as $\mathcal{L}_{r}(X)=\sum_{i}\sum_{j}R(X)_{i,j}$.
\section{Joint Semantic Coding and Semantic-Channel Coding}
\label{sec:examples}
In this section, we show how the semantic coding modules developed in Section \ref{sec:overall} and the VL-SCC module designed in Section \ref{sec:VLCC} are jointly optimized in semantic communications. Specifically, we introduce the implementation details for the human mesh recovery task in the uplink SSDT module and for perceptual-friendly wireless image transmission in the downlink SRRT module.
\subsection{Human mesh recovery}
\label{sec:human mesh}
\begin{figure}[t]
\centering
\includegraphics[scale=0.65]{HUMAN.eps}
\caption{Semantic communications for human mesh recovery in uplink SSDT.}
\label{fig:human_mesh}
\end{figure}
The semantic communication framework for the human mesh recovery task in the uplink SSDT is shown in Fig. \ref{fig:human_mesh}. As shown in the figure, a semantic encoder, consisting of a ResNet50 and a regressor, will take an RGB image as input and estimate the task-related semantic information at the XR source user side. The semantic information, $x \in \mathcal{R}^{85}$, parameterizes the camera parameters ($t \in \mathcal{R}^{3}$), the human pose ($\alpha \in \mathcal{R}^{72}$, 3D rotations of 23 joints + 1 global rotation), and the human body shape ($\beta \in \mathcal{R}^{10}$). This semantic information is then transmitted to and reconstructed at the destination server using the VL-SCC module introduced in Section \ref{sec:1DVL}. The networks used in the VL-SCC module are fully-connected networks with $1024$ hidden units ($6$ layers for the SCE and SCD, $4$ layers for the rate allocation network). The VL-SCC module will output the reconstructed semantic information $\hat{x}=(\hat{t}, \hat{\alpha}, \hat{\beta})$. At last, an SMPL model is used to reconstruct the final 3D triangulated human mesh with $N = 6890$ vertices, $z\in \mathcal{R}^{N\times 3}$, from $\hat{x}$.
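As a minimal sketch (with a placeholder backbone standing in for the ResNet50 used in our framework), the semantic encoder and the split of the 85-dimensional semantic vector can be written as follows.
\begin{verbatim}
import torch
import torch.nn as nn

class MeshSemanticEncoder(nn.Module):
    # Sketch of the semantic encoder for human mesh recovery: a backbone
    # (ResNet50 in our framework; a placeholder MLP here) maps an image to an
    # 85-dim semantic vector: 3 camera parameters, 72 pose parameters
    # (3D rotations of 23 joints + 1 global rotation), and 10 shape parameters.
    def __init__(self, feat_dim=2048):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.regressor = nn.Linear(feat_dim, 85)

    def forward(self, img):
        x = self.regressor(self.backbone(img))
        cam, pose, shape = x[:, :3], x[:, 3:75], x[:, 75:]
        return cam, pose, shape

# Example: a batch of two 224x224 RGB images.
cam, pose, shape = MeshSemanticEncoder()(torch.randn(2, 3, 224, 224))
\end{verbatim}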
To jointly train the semantic coding module and the VL-SCC module, a training dataset and a training loss are required. In this task, we have access to three kinds of data sets:
\begin{enumerate}
\item A large RGB image dataset where all are annotated with ground truth 2D joints;
\item A smaller dataset in which some images have 3D joint annotations;
\item A 3D human mesh dataset of varying shape and pose but without corresponding images.
\end{enumerate}
Based on the available datasets and the earlier work \cite{kanazawa2018end}, we design the training loss correspondingly. The training loss is composed of three terms: a data fidelity term $\mathcal{L}_{f}$, a semantic loss $\mathcal{L}_{Gan}$, and a rate loss $\mathcal{L}_{r}$.
\textbf{Data fidelity term:} The data fidelity term is used to minimize the distance between the ground-truth 2D and 3D joint locations and those calculated from the reconstructed 3D human mesh. Specifically, the 3D joint locations can be obtained by linear regression
from the reconstructed mesh vertices $z$; the 2D joint locations are calculated by projecting the 3D joint locations to 2D using the estimated camera parameters $t$. Thus, the data fidelity term can be represented as $\mathcal{L}_{f}=\mathcal{L}_{2D}+\mathcal{L}_{3D}$.
\textbf{Semantic loss:}
Semantic loss is designed for realistic-looking reconstruction of the 3D human mesh. This is because a low data fidelity term does not always mean that the reconstructed 3D human mesh has good visual quality, especially when we only have partial observations of the 2D and 3D joints and have no access to a labelled human mesh dataset paired with the images. To address this problem, a semantic loss based on a GAN is developed. Specifically, we introduce a discriminator $D$, which is trained by minimizing
$E_{\Theta \sim p_{T}}\left[(D(\Theta)-1)^2\right]+E_{\hat{\Theta} \sim p_{R}}\left[D(\hat{\Theta})^2\right]$, where $\Theta=(\alpha,\beta)$; $\hat{\Theta}=(\hat{\alpha},\hat{\beta})$; $p_{T}$ denotes the distribution of $\Theta$ in the true mesh dataset; $p_{R}$ denotes the distribution of the estimated parameters; and $D(\Theta)\in (0,1)$ denotes the output of the discriminator when $\Theta$ is the input. With this training loss, the discriminator learns to distinguish the true parameters from the estimated ones. Simultaneously, the semantic loss $\mathcal{L}_{Gan}$ is defined as $E_{\hat{\Theta} \sim p_{R}}\left[(D(\hat{\Theta})-1)^2\right]$, which encourages the estimated parameters to have the same distribution as the true parameters.
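The adversarial training above can be sketched with least-squares GAN objectives over the pose and shape parameters; this is a minimal sketch rather than the exact training code.
\begin{verbatim}
import torch

def discriminator_loss(D, theta_true, theta_est):
    # Push D(true params) toward 1 and D(estimated params) toward 0.
    #   theta_true : (pose, shape) samples from the unpaired 3D mesh dataset
    #   theta_est  : (pose, shape) reconstructed at the destination server
    #                (detach theta_est when updating the discriminator)
    return ((D(theta_true) - 1) ** 2).mean() + (D(theta_est) ** 2).mean()

def semantic_gan_loss(D, theta_est):
    # L_Gan: encourages estimated parameters to match the true distribution.
    return ((D(theta_est) - 1) ** 2).mean()
\end{verbatim}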
\textbf{Rate loss:}
Rate loss is adopted to train the variable-length coding scheme in the VL-SCC module. It is formulated as $\mathcal{L}_{r}(x)=r(x)$, as before.
Finally, the total training loss can be represented as,
\begin{equation}
\mathcal{L}=\mathcal{L}_{f}+\lambda \mathcal{L}_{Gan}+ \gamma \mathcal{L}_{r},
\label{equ:6}
\end{equation}
where $\lambda$ is an introduced trade-off parameter between the data fidelity term and the semantic loss. A higher value penalizes the distribution discrepancy more heavily.
\subsection{Perceptual-friendly wireless image transmission}
\label{sec:image}
\begin{figure}[t]
\centering
\includegraphics[scale=0.7]{Image.eps}
\caption{Semantic communications for perceptual-friendly image transmission in downlink SRRT.}
\label{fig:image}
\end{figure}
\begin{table}[]
\centering
\caption{Networks for Wireless Image Transmission}
\label{tab:my-table}
\begin{tabular}{|cccccc|}
\hline
\multicolumn{3}{|c|}{\textbf{Semantic Encoder}} & \multicolumn{3}{c|}{\textbf{SCE}} \\ \hline
\multicolumn{1}{|c|}{Type} & \multicolumn{1}{c|}{Para.} & \multicolumn{1}{c|}{Out\_c} & \multicolumn{1}{c|}{Type} & \multicolumn{1}{c|}{Para.} & Out\_c \\ \hline
\multicolumn{1}{|c|}{S-Conv} & \multicolumn{1}{c|}{9x9,S2} & \multicolumn{1}{c|}{256} & \multicolumn{1}{c|}{S-Conv} & \multicolumn{1}{c|}{5x5,S2} & 512 \\
\multicolumn{1}{|c|}{PReLu} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{PReLu} & \multicolumn{1}{c|}{-} & - \\
\multicolumn{1}{|c|}{S-Conv} & \multicolumn{1}{c|}{5x5,S2} & \multicolumn{1}{c|}{256} & \multicolumn{1}{c|}{S-Conv} & \multicolumn{1}{c|}{5x5,S1} & 512 \\
\multicolumn{1}{|c|}{PReLu} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{c|}{PReLu} & \multicolumn{1}{c|}{-} & - \\
\multicolumn{1}{|c|}{S-Conv} & \multicolumn{1}{c|}{5x5,S1} & \multicolumn{1}{c|}{256} & \multicolumn{1}{c|}{S-Conv} & \multicolumn{1}{c|}{5x5,S1} & 512 \\ \cline{1-3}
\multicolumn{3}{|c|}{\textbf{}} & \multicolumn{1}{c|}{S-Conv} & \multicolumn{1}{c|}{3x3,S2} & 512 \\ \hline
\multicolumn{6}{|c|}{\textbf{Rate Allocation Network}} \\ \hline
\multicolumn{1}{|c|}{Type} & \multicolumn{1}{c|}{Para.} & \multicolumn{1}{c|}{Act.} & \multicolumn{1}{c|}{Out\_c} & & \\ \hline
\multicolumn{1}{|c|}{S-Conv} & \multicolumn{1}{c|}{5x5,S1} & \multicolumn{1}{c|}{PReLu} & \multicolumn{1}{c|}{128} & & \\
\multicolumn{1}{|c|}{S-Conv} & \multicolumn{1}{c|}{5x5,S1} & \multicolumn{1}{c|}{PReLu} & \multicolumn{1}{c|}{128} & & \\
\multicolumn{1}{|c|}{S-Conv} & \multicolumn{1}{c|}{5x5,S2} & \multicolumn{1}{c|}{PReLu} & \multicolumn{1}{c|}{128} & & \\
\multicolumn{1}{|c|}{S-Conv} & \multicolumn{1}{c|}{3x3,S2} & \multicolumn{1}{c|}{Sigmoid} & \multicolumn{1}{c|}{1} & & \\ \hline
\end{tabular}
\end{table}
The semantic communication framework for perceptual-friendly wireless image transmission is shown in Fig. \ref{fig:image}. As shown in the figure, a semantic encoder is first used to extract semantic information from the input image $x$. Then, the VL-SCC module described in Section \ref{spatial} will be adopted to transmit the semantic information from the server side to the user side. Finally, a semantic decoder reconstructs the image from the recovered semantic information. Denote the reconstructed image as $y$. The networks used in this task are listed in Table \ref{tab:my-table}, where networks that have symmetric architectures to the listed ones are omitted. In this image transmission task, the training loss is also composed of three terms: a data fidelity term, a semantic loss, and a rate loss.
\textbf{Data fidelity term:}
The data fidelity term is defined as the $l_{2}$ loss between the original image $x$ and the reconstructed image $y$. By minimizing this term, we optimize the pixel-level error.
\textbf{Semantic loss:}
In this task, we use the LPIPS loss, $\mathcal{L}_{LPIPS}$, as the semantic loss. LPIPS is defined as the distance between two images in the feature space of a network, i.e., $d(x,y)=\sum_{l}\frac{1}{H_{l}W_{l}}\sum_{h,w}||w_{l}\cdot ((F_{x})^{l}_{hw}-(F_{y})^{l}_{hw})||^2_{2}$, where $(F_{x})^{l} \in \mathcal{R}^{H_{l}\times W_{l}\times C_{l}}$ is the feature map extracted from $x$ in the $l$-th layer of the network and $w_{l}$ is a weight vector designed for the $l$-th layer.
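A minimal sketch of this distance, assuming the per-layer feature maps have already been extracted by a fixed classification network, is given below.
\begin{verbatim}
import torch

def lpips_distance(feats_x, feats_y, layer_weights):
    # Sketch of the LPIPS distance d(x, y).
    #   feats_x, feats_y : lists of feature maps of shape (B, C_l, H_l, W_l)
    #                      extracted from the two images by a fixed network
    #   layer_weights    : per-layer channel weights w_l, each of shape (C_l,)
    d = 0.0
    for fx, fy, w in zip(feats_x, feats_y, layer_weights):
        diff = w.view(1, -1, 1, 1) * (fx - fy)           # weighted channel difference
        d = d + (diff ** 2).sum(dim=1).mean(dim=(1, 2))  # sum over C_l, mean over H_l, W_l
    return d
\end{verbatim}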
\textbf{Rate loss:} The rate loss $\mathcal{L}_{r}$ for the VL-SCC module is defined in Section \ref{spatial}.
Finally, the total training loss can be represented as,
\begin{equation}
\mathcal{L}=\textit{l}_{2}(x,y)+\lambda \mathcal{L}_{LPIPS}+ \gamma \mathcal{L}_{r},
\label{equ:7}
\end{equation}
where $\lambda$ is an introduced trade-off parameter between data fidelity term and semantic loss.
\section{Experiments}
In this section, we compare semantic communications with traditional communications in human mesh recovery task and wireless image transmission task. We also present experiments to verify the proposed variable-length coding scheme.
\subsection{Human mesh recovery}
In this subsection, we conduct experiments on human mesh recovery task.
\subsubsection{Dataset} We use the following image datasets annotated with 2D joints: LSP, LSP-extended \cite{johnson2010clustered}, and MPII \cite{andriluka20142d}. The image dataset with 3D joint annotations we use is MPI-INF-3DHP \cite{mehta2017vnect}. Due to the scarcity of 3D joint annotations, we randomly select only 5\% of the samples for validation, 5\% for testing, and use the rest for training. All images are scaled to $224\times 224$ before being fed into ResNet50v2. The images are also randomly scaled, translated, and flipped for data augmentation.
\subsubsection{Experimental environments} We consider additive white Gaussian noise (AWGN) channels, and the signal-to-noise-ratio (SNR) per symbol changes from $-20dB$ to $10dB$. We evaluate the performance of different methods under the same average number of modulated symbols sent by XR users. In this experiment, we set it as $2,000$ ($1,000$ in complex number).
\subsubsection{Considered transmission methods} We now describe the considered transmission methods for human mesh recovery task.
\begin{itemize}
\item \textbf{benchmark}: In traditional communications, XR users first transmit the images to the remote server, and the remote server uses the received images for task execution. To simulate this process, we first transmit all the training images over noisy channels and then train a human mesh recovery network based on the received images and the true labels. During the image transmission process, we use better portable graphics (BPG) \cite{sullivan2012overview} for source coding and low-density parity-check (LDPC) codes for channel coding. For each SNR, we first fine-tune the channel coding rate and modulation order to ensure that the bit-error rate is smaller than a threshold. After that, we adjust the BPG compression rate under the bandwidth constraint.
\item \textbf{VL-SCC}: In this method, we adopt the semantic communication framework designed in Section \ref{sec:human mesh}. The semantic coding module and the VL-SCC module are jointly trained for each SNR. We set $N=4,000$, which means the maximum number of modulated symbols available for each sample is $4,000$. By fine-tuning the value of $\gamma$ for different SNRs, we ensure that the average number of modulated symbols sent by users is smaller than $2,000$ to satisfy the system requirement.
\item \textbf{SCC(2k)}: By deleting the rate allocation network and its output, we obtain a fixed-length semantic communication network, where $N=2,000$ modulated symbols are transmitted directly to the receiver without any rate control.
\item \textbf{SCC(4k)}: In this case, we show the performance of the fixed-length SCC system with $N=4,000$, which is unrealistic under the current experimental settings but serves as a performance indicator.
\end{itemize}
\subsubsection{Performance metrics}
\begin{figure}[t]
\centering
\includegraphics[scale=0.75]{Human_mesh.eps}
\caption{MPJPE vs SNR for different transmission methods.}
\label{fig:hmr}
\end{figure}
We use the mean per joint position error (MPJPE)\footnote{We do not multiply the predicted joints by the image size here, therefore the reported MPJPE belongs to $(0,1)$.} \cite{kanazawa2018end} as the performance metric, which is calculated based on the true 3D joint labels, $l_{3D}^{T}\in \mathcal{R}^{K\times 3}$, and the estimated 3D joint locations, $l_{3D}^{F}\in \mathcal{R}^{K\times 3}$, where $K$ is the number of joints. Suppose there are $M$ samples. MPJPE is calculated as $\frac{1}{MK}\sum_{i=1}^{M}\sum_{j=1}^{K}||{l_{3D}}_{i}^{T}(j)-{l_{3D}}_{i}^{F}(j)||_{2}$. A lower MPJPE indicates that the estimated 3D joint locations are more accurate. We show the performance of different transmission methods in Fig. \ref{fig:hmr}, where the ideal case is the performance limit when the source is transmitted over noiseless channels. As we can see from Fig. \ref{fig:hmr}, our proposed designs under different settings work significantly better than the traditional method with BPG and LDPC. This is because we only transmit the task-related information, and this information can be protected better when transmitted alone than when transmitted along with images, as all wireless resources are used only for the task-related information. Moreover, from Fig. \ref{fig:hmr}, VL-SCC outperforms SCC(2k), which verifies the effectiveness of VL-SCC in improving coding efficiency.
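For reference, the metric can be computed with the short sketch below.
\begin{verbatim}
import numpy as np

def mpjpe(joints_true, joints_est):
    # Mean per joint position error over arrays of shape (M, K, 3):
    # Euclidean error per joint, averaged over all samples and joints.
    return np.linalg.norm(joints_true - joints_est, axis=-1).mean()
\end{verbatim}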
\subsubsection{Semantic loss validation}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.45]{visual.eps}
\caption{The visual effects of human mesh recovery task with/without semantic loss.}
\label{fig:visual}
\end{figure*}
We show the visual results of the human mesh recovery task with and without the semantic loss on some testing examples in Fig. \ref{fig:visual}. As shown in the figure, when the semantic loss is not considered, the reconstructed human meshes have poor visual quality, even though the corresponding MPJPE is nearly the same as, or even better than, that of the method with the semantic loss. This verifies the importance of introducing a semantic loss.
\subsection{Perceptual-friendly wireless image transmission}
In this part, we consider the wireless image transmission task. The details are as follows.
\subsubsection{Dataset}
We choose MS-COCO 2014 \cite{lin2014microsoft} as the dataset\footnote{As discussed above, the spatial size of the modulated symbols is only $1/256$ of that of the input images to reduce the overhead of the side link. This means the training images cannot be very small. Also, the number of training samples should be large enough to show diversity in the amount of semantic information. Based on these considerations, we choose MS-COCO 2014.}, in which we use $82,783$ training samples, $2,000$ validation samples, and $2,000$ testing samples. During training, $128\times 128$ image patches are randomly cropped.
\subsubsection{Experimental environments} We consider additive white Gaussian noise (AWGN) channels, and the signal-to-noise-ratio (SNR) per symbol is set as $10dB$. The average number of modulated symbols sent by XR servers varies in this experiment.
\subsubsection{Considered transmission methods} We now describe the considered transmission methods for semantic-aware wireless image transmission.
\begin{itemize}
\item \textbf{BPG+LDPC:} Similar to the previous experiment, we use BPG for source coding and LDPC for channel coding. When SNR$=10dB$, we use 16QAM and rate-$1/3$ LDPC.
\item \textbf{VL-SCC($\lambda$):} In this method, we adopt the semantic communication framework designed in Section \ref{sec:image}. We set $N=512$, which is the maximum code length for each spatial location. We fine-tune the value of $\gamma$ to adjust the average number of modulated symbols. When $\lambda=0$, the network is not trained with the LPIPS loss.
\item \textbf{SCC($0$):} This is the fixed-length coding version of VL-SCC, obtained by removing the rate allocation network. We adjust the average number of modulated symbols by changing the channel dimension of the modulated symbols.
\item \textbf{JSCC:} The JSCC scheme proposed in \cite{bourtsoulatze2019deep} is also presented as a benchmark deep-learning-based image transmission scheme.
\end{itemize}
\subsubsection{Performance metrics}
\begin{figure}[t]
\centering
\includegraphics[scale=0.75]{PSNR.eps}
\caption{PSNR vs symbols per pixel for different transmission methods.}
\label{fig:PSNR}
\end{figure}
We first compare the peak signal-to-noise ratio (PSNR) versus the number of symbols per pixel (SPP) of different transmission methods, where the SPP is defined as the average number of modulated symbols (in complex numbers) assigned to each color pixel. The PSNR value reflects the per-pixel reconstruction quality of images; a higher value indicates better reconstruction quality. The results are shown in Fig. \ref{fig:PSNR}. From the figure, our fixed-length coding method SCC(0) outperforms JSCC and the traditional method with BPG and LDPC as the source and channel coding schemes, respectively, showing that the semantic coder and the SCC module used in our work are effective. Also, the proposed variable-length coding method VL-SCC(0) further increases the coding efficiency, which also shows that the proposed VL-SCC is universal and can be applied to various tasks and data types.
\begin{figure}[t]
\centering
\includegraphics[scale=0.75]{LPIPS.eps}
\caption{LPIPS vs symbols per pixel for different transmission methods.}
\label{fig:lpips}
\end{figure}
The LPIPS loss versus SPP is shown in Fig. \ref{fig:lpips}. From the figure, without an LPIPS loss during training, VL-SCC(0), SCC(0), and BPG all lead to high semantic errors. However, once the semantic error is considered during training, as in VL-SCC($\lambda$), it can be suppressed significantly.
\section{Conclusion}
\label{sec:conclusion}
In this work, we design a semantic communication framework for XR. Particularly, we develop the semantic coding modules for different XR tasks, including light estimation, 3D mesh recovery, image transmission, and so on. We propose a novel differentiable variable-length coding mechanism for semantic communications, where the best code length allocation scheme is learnt in an end-to-end manner. Moreover, we verify the effectiveness of the proposed semantic coding scheme and variable-length semantic-channel coding scheme in both the uplink and downlink of wireless XR, showing the generality of the proposed methods.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.08617",
"language": "en",
"timestamp": "2023-02-20T02:03:37",
"url": "https://arxiv.org/abs/2302.08617",
"yymm": "2302"
} |
\section{Introduction}\label{sec:intro}
\textit{Quantum Machine Learning} (QML) is an emerging domain built at the confluence of quantum information processing and \textit{Machine Learning} (ML) \citep{saggio2021experimental}. A noteworthy volume of prior work in QML demonstrates how quantum computers could be effectively leveraged to improve upon classical results pertaining to classification/regression-based predictive modeling tasks \citep{aimeur2013quantum, rebentrost2014quantum, arunachalam2017guest}. While the efficiency of QML frameworks has been shown on conventional supervised/unsupervised ML use cases, how similar improvements could be translated to \textit{Reinforcement Learning} (RL) tasks has gained significant attention recently and is the focus of this paper.
Traditional RL tasks comprise an agent interacting with an external environment, attempting to learn its configurations while collecting rewards via actions and state transitions \citep{sutton2018reinforcement}. RL techniques have been credibly deployed at scale over a variety of agent-driven decision-making industry use cases, e.g., autonomous navigation in self-driving cars \citep{al2019deeppool}, recommendation systems in e-commerce websites \citep{rohde2018recogym}, and online gameplay agents such as AlphaGo \citep{silver2017mastering}. Given the wide applications, this paper aims to study whether quantum computing can help further improve the performance of reinforcement learning algorithms. This paper considers an episodic setup, where the learning occurs in episodes with a finite horizon. The performance measure for algorithm design is the \textit{regret} of the agent's rewards \citep{mnih2016asynchronous, cai2020provably}, which measures the gap between the rewards obtained by the algorithm and those of the optimal algorithm. A central idea in RL algorithms is the notion of the \textit{exploration-exploitation} trade-off, where the agent's policy is partly constructed on its experiences so far with the environment while injecting a certain amount of optimism to facilitate exploring sparsely observed policy configurations \citep{kearns2002near, jin2020provably}. In this context, we emphasize that our work adopts the well-known \textit{Value Iteration} (VI) technique, which combines empirically updating the state-action policy model with \textit{Upper Confidence Bound} (UCB) based strategic exploration \citep{azar2017minimax}.
In this paper, we show an exponential improvement in the regret of reinforcement learning. The key to this improvement is that quantum computing allows for improved \textit{mean estimation} results over classical algorithms \citep{brassard2002quantum, hamoudi2021quantum}. Such mean estimators feature in very recent studies of \textit{quantum bandits} \citep{wang2021quantum}, \textit{quantum reinforcement learning} \citep{wang2021quantum2}, thereby leading to noteworthy convergence gains. In our proposed framework, we specifically incorporate a quantum information processing technique that improves non-asymptotic bounds of conventional empirical mean estimators which was first demonstrated in \citep{hamoudi2021quantum}.
In this regard, it is worth noting that a crucial novelty pertaining to this work is carefully engineering the agent's interaction with the environment in terms of collecting classical and quantum signals. We further note that one of the key aspects of analyzing reinforcement learning algorithms is the use of martingale convergence theorems, which enter through the stochastic process in the system evolution. Since there is no result on improved martingale convergence in quantum computing so far (to the best of our knowledge), this work performs a careful analysis to obtain the result without the use of martingale convergence results.
Given the aforementioned quantum setup in place, this paper attempts to address the following:
{\em Can we design a quantum VI Algorithm that can improve classical regret bounds in episodic RL setting?}
This paper answers the question in the positive. The key to achieving such quantum advantage is the use of a quantum environment that provides more information than just an observation of the next state. This enhanced information is used with quantum computing techniques to obtain efficient regret bounds in this paper.
To this end, we summarize the major contributions of our work as follows:
\begin{enumerate} %
\item We present a novel quantum RL architecture that helps exploit the quantum advantage in episodic reinforcement learning.
%
\item We propose QUCB-VI, which builds on the classical UCB-VI algorithm \citep{azar2017minimax}, wherein we carefully leverage available quantum information and quantum mean estimation techniques to engineer computation of agent's policy.
\item We perform rigorous theoretical analysis of the proposed framework and characterize its performance in terms of \textit{ regret} accumulated across $K$ episodes of agent's interaction with the unknown \textit{Markov Decision Process} (MDP) environment. More specifically, we show that QUCB-VI incurs $\Tilde{\mathcal{O}}(1)$ regret. We note that our algorithm provides a faster convergence rate in comparison to classical UCB-VI which accumulates $\Tilde{\mathcal{O}} (\sqrt{K})$ regret, where $K$ is the number of training episodes.
\item We conduct a thorough experimental analysis of QUCB-VI (Algorithm \ref{algo: UCBVI}) and compare against the baseline classical UCB-VI algorithm on a variety of benchmark RL environments. Our experimental results reveal QUCB-VI's performance improvements in terms of regret growth over the baseline.
\end{enumerate}
The rest of the paper is organized as follows. In Section \ref{sec: related_work}, we present a brief background of key existing literature pertaining to classical RL, as well as discuss prior research conducted in development of quantum mean estimation techniques and quantum RL methodologies relevant to our work. In Section \ref{sec: problem_formln}, we mathematically formulate the problem of episodic RL in a finite horizon unknown MDP with the use of quantum oracles in the environment. In Section \ref{sec: algo_framework}, we describe the proposed QUCB-VI Algorithm while bringing out key differences involving agent's policy computations as compared to classical UCB-VI. Subsequently, we provide the formal analysis of regret for the proposed algorithm in Section \ref{sec: theory_result}. In Section \ref{sec: experiments}, we report our results of experimental evaluations performed on various RL environments for the proposed algorithm and classical baseline method. Section \ref{sec: conclusion} concludes the paper.
\section{Background and Related Work} \label{sec: related_work}
\textbf{Classical reinforcement learning:} ~In the context of classical RL, an appreciable segment of prior research focus on obtaining theoretical results in \textit{tabular} RL, i.e., agent's state and action spaces are discrete \citep{sutton2018reinforcement}. Several existing methodologies guarantee sub-linear \textit{ regret} in this setting via leveraging \textit{optimism in the face of uncertainty} (OFU) principle \citep{lai1985asymptotically}, to strategically balance \textit{exploration-exploitation} trade-off \citep{osband2016generalization, strehl2006pac}. Furthermore, on the basis of design requirements and problem specific use cases, such algorithms have been mainly categorized as either \textit{model-based} ~\citep{auer2008near, dann2017unifying} or \textit{model-free} ~\citep{jin2018q, du2019provably}. In the episodic tabular RL problem setup, the optimal \textit{ regret} of $\Tilde{\mathcal{O}}(\sqrt{K})$ ($K$ is the number of episodes) have been studied for both \textit{model-based} as well as \textit{model-free} learning frameworks \citep{azar2017minimax, jin2018q}. In this paper, we study model-based algorithms and derive $\Tilde{\mathcal{O}}(1)$ regret with the use of quantum environment.
\textbf{Quantum Mean Estimation:} ~
Mean estimation is a statistical inference problem in which samples are used to produce an estimate of the mean of an unknown distribution. The improvement in sample complexity for mean estimation using quantum computing has been widely studied \citep{grover1998framework, brassard2002quantum, brassard2011optimal}. In \citep{montanaro2015quantum}, a quantum-information-assisted Monte Carlo algorithm was proposed which achieves asymptotically near-quadratic faster convergence over its classical baseline. %
In this paper, we use the approach in \citep{hamoudi2021quantum} for the mean estimation of quantum random variables. We first describe the notion of random variable and corresponding extension to quantum random variable.
\begin{definition}[Random Variable]
A finite random variable is a function $X: \Omega \to E$ for some probability space $(\Omega, P)$, where $\Omega$ is a finite sample set, $P:\Omega\to[0, 1]$ is a probability mass function and $E\subset \mathbb{R}$ is the support of $X$. As is customary, we will often omit to mention $(\Omega, P)$ when referring to the random variable $X$.
\end{definition}
The notion is extended to a quantum random variable (or q.r.v.) as follows.
\begin{definition}[Quantum Random Variable]
A q.r.v. is a triple $(\mathcal{H},U,M)$ where $\mathcal{H}$ is a finite-dimensional Hilbert space, $U$ is a unitary transformation on $\mathcal{H}$, and $M = \{M_x\}_{x\in E}$ is a projective measurement on $\mathcal{H}$ indexed by a finite set $E \subset \mathbb{R}$. Given a random variable $X$ on a probability space $(\Omega, P)$, we say that a q-variable $(\mathcal{H}, U, M )$ generates $X$ when,
\begin{itemize}
\item[(1)] $\mathcal{H}$ is a finite-dimensional Hilbert space with some basis $\{|\omega \rangle\}_{\omega \in \Omega}$ indexed by $\Omega$.
\item[(2)] $U$ is a unitary transformation on $\mathcal{H}$ such that $U|\mathbf{0}\rangle=\sum_{\omega \in \Omega} \sqrt{P(\omega)}|\omega\rangle$.
\item[(3)] $M = \{M_x\}_{x}$ is the projective measurement on $\mathcal{H}$ defined by $M_x=\sum_{\omega: X(\omega)=x}|\omega\rangle\langle\omega|$.
\end{itemize}
%
\end{definition}
We now define the notion of a quantum experiment. Let $(\mathcal{H},U,M)$ be a q.r.v. that generates $X$. With abuse of notation, we call $X$ the q.r.v. even though the actual q.r.v. is the triple $(\mathcal{H},U,M)$ that generates $X$. We define a quantum experiment as the process of applying any of the unitaries $U$, their
inverses or their controlled versions, or performing a measurement according to $M$. We also assume access to the quantum evaluation oracle $|\omega\rangle|0\rangle \to |\omega\rangle|X(\omega)\rangle$. Using this quantum oracle, the quantum mean estimation result can be stated as follows.
\begin{lemma}[Sub-Gaussian estimator \citep{hamoudi2021quantum}]
\label{lem: SubGau}
Let $X$ be a q.r.v. with mean $\mu$ and variance $\sigma^2$. Given $n$ i.i.d. samples of q.r.v. $X$ and a real $\delta \in (0,1)$ such that $n > \log(1/\delta)$, a quantum algorithm \texttt{SubGaussEst}$(X,n,\delta)$ (please refer to algorithm 2 in \citep{hamoudi2021quantum}) outputs a mean estimate $\hat{\mu}$ such that,
\begin{align}
P\left[|\hat{\mu}-\mu|\le \frac{\sigma \log(1/\delta)}{n}\right]\ge 1-\delta.
\end{align}
The algorithm performs $O(n\log^{3/2}(n)\log\log(n))$ quantum experiments.
\end{lemma}
\vspace{-2mm}
We note that this result achieves a mean estimation error of $1/n$, in contrast to $1/\sqrt{n}$ for classical mean estimation, thus providing a quadratic reduction in the number of i.i.d. samples needed for the same error bound.
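To illustrate the quadratic saving, the short sketch below compares the (order-level) number of samples needed to reach a target error $\epsilon$ under a classical $\sigma\sqrt{\log(1/\delta)/n}$-type bound and the quantum $\sigma\log(1/\delta)/n$ bound of Lemma \ref{lem: SubGau}; constants are ignored.
\begin{verbatim}
import math

def samples_needed(eps, sigma=1.0, delta=0.01):
    # Order-level sample counts for mean-estimation error <= eps (constants ignored).
    classical = sigma ** 2 * math.log(1 / delta) / eps ** 2  # sigma*sqrt(log(1/d)/n) <= eps
    quantum = sigma * math.log(1 / delta) / eps              # sigma*log(1/d)/n      <= eps
    return math.ceil(classical), math.ceil(quantum)

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, samples_needed(eps))
\end{verbatim}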
\textbf{Quantum reinforcement learning:} ~Recently, quantum mean estimation techniques have been applied with favorable theoretical convergence speed-ups to the quantum \textit{multi-armed bandits} (MAB) problem setting \citep{casale2020quantum,wang2021quantum,lumbreras2022multi}. However, bandits do not have the notion of state evolution as in reinforcement learning. Further, quantum reinforcement learning has been studied in \citep{paparo2014quantum, dunjko2016quantum,dunjko2017advances, jerbi2021quantum,dong2008quantum}, while these works do not study the regret performance. The theoretical regret performance has recently been studied in \citep{wang2021quantum}, where a generative model is assumed and sample complexity guarantees are derived for the discounted infinite-horizon setup. In contrast, our work does not consider the discounted case, and we do not assume a generative model. This paper demonstrates the quantum speedup for episodic reinforcement learning.
\section{Problem Formulation} \label{sec: problem_formln}
\begin{figure}
\centering
\includegraphics[width=3.25in]{qrl_fig_updated_v2.jpg}
\caption{Quantum episodic reinforcement learning architecture depicting agent-environment interaction at round $h$.}
\label{fig:qrl_setup}
\end{figure}
We consider episodic reinforcement learning in a finite horizon Markov Decision Process \citep{agarwal2019reinforcement} given by a tuple $\left(\mathcal{S}, \mathcal{A},H, \{P_{h}\}_{h \in [0, H-1]},
\{r_{h}\}_{h \in [0, H-1]}\right)$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and the action spaces with cardinalities $S$ and $A$, respectively, $H \in \mathbb{N}$ is the episode length, $P_h(s^\prime|s,a) \in [0,1]$ is the probability of transitioning to state $s^\prime$ from state $s$ provided action $a$ is taken at step $h$ and $r_h(s,a) \in [0,1]$ is the immediate reward associated with taking action $a$ in state $s$ at step $h$. In our setting, we denote an episode by the notation $k$, and every such episode comprises of $H$ rounds of agent's interaction with the learning environment. In our problem setting, we assume an MDP with a fixed start state $s_0$, where at the start of each new episode the state is reset to $s_0$. We note that the results can be easily extended to the case where starting state is sampled from some distribution. This is because we can have a dummy state $s_0$ which transitions to the next state $s_1$ coming from this distribution, independent of action, and having a reward of $0$.
We encapsulate the agent's interaction with the unknown MDP environment via the architecture presented in Fig. \ref{fig:qrl_setup}. At an arbitrary time step $h$, given a state $s_h$ and an action $a_h$, the environment returns the reward $r_h$ and the next state $s_{h+1}$. Furthermore, we highlight that in our proposed architecture this set of signals, i.e., $\{s_h, a_h, r_h, s_{h+1}\}$, is collected at the agent as \textit{classical} information. Additionally, our architecture facilitates the availability of $S$ \textit{quantum random variables} (q.r.v.) ($X_{1,h}, \cdots, X_{S,h}$) at the agent's end, wherein the q.r.v. $X_{i,h}$ generates the random variable $\mathbf{1}\left\{s_{h+1} = i\right\}$. This q.r.v. corresponds to the Hilbert space with basis vectors $|0\rangle$ and $|1\rangle$ and the unitary transformation given as follows:
\begin{align}
U|\mathbf{0}\rangle = & \sqrt{1-P_{h}(s_{h+1} = i|s_h,~a_h)} |0 \rangle \nonumber \\
& ~~~+ \sqrt{P_{h}(s_{h+1} = i|s_h,~a_h)} |1 \rangle.
\end{align}
We note that these q.r.v.'s can be generated by using a quantum next state from the environment, which is given in the form of the basis vectors for the $S$ states using $S$ \textit{qubits}, i.e., $|1 0 \cdots 0\rangle$ for the first state and so on up to $|0 \cdots 0 1\rangle$ for the last state. Thus, the overall quantum next state is the superposition of these $S$ basis states with the amplitudes $\sqrt{P_{h}(s_{h+1} = 1|s_h,~a_h)}, \cdots, \sqrt{P_{h}(s_{h+1} = S|s_h,~a_h)}$, respectively. The $S$ q.r.v.'s correspond to the $S$ \textit{qubits} in this next quantum state. Further, the next state can be obtained as a measurement of this joint next-state superposition. Thus, assuming that the quantum environment can generate multiple copies of the next-state superposition, all the $S$ q.r.v.'s and the next-state measurement can be obtained.
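For intuition, the following classical simulation sketch mirrors this construction: the amplitudes are square roots of the transition probabilities, and measuring the superposition reproduces the classical next-state observation; it is only an illustrative sketch, not a quantum implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def next_state_amplitudes(p_next):
    # Amplitudes of the quantum next state for one transition row P_h(.|s, a).
    return np.sqrt(np.asarray(p_next, dtype=float))

def measure(amplitudes):
    # Measuring the superposition returns state i with probability |amplitude_i|^2.
    probs = amplitudes ** 2
    return rng.choice(len(probs), p=probs / probs.sum())

# Example: a 4-state transition row P_h(.|s, a).
p = [0.1, 0.6, 0.2, 0.1]
s_next = measure(next_state_amplitudes(p))              # classical observation
indicators = (np.arange(len(p)) == s_next).astype(int)  # realizations of 1{s_{h+1}=i}
\end{verbatim}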
We note that the agent does not know $\{P_{h}\}_{h \in [0, H-1]}$, which must be estimated in the model-based setup. To estimate it, we use the quantum mean estimation approach, which requires a quantum evaluation oracle $|\omega\rangle |0\rangle \to |\omega\rangle |\mathbf{1}\left\{s_{h+1} = \omega\right\}\rangle$ for $\omega \in \{0,1\}$. In Section \ref{sec: algo_framework}, we provide more details on how the aforementioned set of quantum indicator variables, i.e., $\{{X_{s',h}}\}_{s' \in \mathcal{S}}$, is fed to a specific quantum mean estimation procedure to obtain the transition probability model.
Based on the observations, the agent needs to determine a policy $\pi_h$ that maps state $s_h$ to action $a_h$. For a given policy $\pi$ and $h \in \{0,\cdots,H-1\}$, we define the value function $V_h^{\pi}: \mathcal{S} \rightarrow \mathbb{R}$ as
\begin{align}
V_h^{\pi}(s)=\mathbb{E}\left[\sum_{t=h}^{H-1} r_t\left(s_t, a_t\right) \mid \pi, ~s_h = s\right],
\end{align}
where the expectation is with respect to the randomness of the trajectory, that is, the randomness in state transitions and the stochasticity of $\pi$. Similarly, the state-action value (or $Q$-value) function $Q_h^\pi: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is defined as
\begin{align}
Q_h^\pi(s, a)=\mathbb{E}\left[\sum_{t=h}^{H-1} r_t\left(s_t, a_t\right) \mid \pi, s_h=s, a_h=a\right].
\end{align}
We also use the notation $V^\pi(s)=V_0^\pi(s)$. Given a state $s$, the goal of the agent is to find a policy $\pi^*$ that maximizes the value, i.e., the optimization problem the agent seeks to solve is:
\begin{align}
\max _\pi ~V^\pi(s).
\end{align}
Define $Q_h^{\star}(s, a)=\sup _{\pi \in \Pi} ~Q_h^\pi(s, a)$ and $V_h^{\star}(s)=\sup _{\pi \in \Pi} ~V_h^\pi(s)$. The agent aims to minimize the expected cumulative \textit{ regret} incurred across $K$ episodes:
\begin{align}
\label{eq:reg}
\texttt{Regret:} ~\mathbb{E} \big[KV^{*}(s_0) - \sum_{k = 0}^{K-1}\sum_{h = 0}^{H-1} r_h(s_h^k, a_h^k) \big].
\end{align}
In the following section, we describe the proposed algorithm; its regret is analyzed in Section \ref{sec: theory_result}.
\if 0
Our Algorithm relies on the assumption that we have quantum access to the environment of MDP. Thus, in this section, we now introduce some basic knowledge of quantum computation by using standard quantum notation (Dirac notation) \citep{nielsen2002quantum}.
In Dirac notation, a vector $v=(v_1,\dots,v_m)$ in Hilbert space $\mathbb{C}^m$ is denoted as $|v\rangle$ and called ``\texttt{ket} $v$''. The notation $|i\rangle$ with $i \in [n]$, is reserved for the $i$-th standard basis vector. $|\mathbf{0}\rangle$ is also reserved for the 1st standard basis vector when there is no conflict. We denote $v^{\dagger}$ by the $\texttt{bra}$ notation $\langle v|$ , where $\dagger$ means the Hermitian conjugation. Given $|x\rangle \in \mathbb{C}^m$ and $|y\rangle \in \mathbb{C}^n$, we denote their tensor product by $|x\rangle|y\rangle := (x_1y_1, \cdots, x_m y_n)$.
The input to the quantum environment in Figure \ref{fig:my_label} is a real-valued random variable defined on some probability space. We only consider finite probability spaces due to finite encoding. First, we recall the definition of a random variable:
$\mathcal{H}$ is the Hilbert space with basis vector $\{|s^\prime\rangle\}_{s^\prime \in \mathcal{S}}$. We encode our quantum model by transition probability $P_h(s^\prime|s,a)$.
$$U|\mathbf{0}\rangle = \sum_{s^\prime \in \mathcal{S}} \sqrt{P_h(s^\prime|s,a)} |\psi_{s^\prime} \rangle|s^\prime\rangle$$ for some unknown garbage $|\psi_{s^\prime} \rangle$. In our case, we can consider the q-random variable $D$ defined on the probability space $(\Omega,P)$ where $D(|\psi_{s^\prime} \rangle|s^\prime\rangle)=1$ or $0$.
\fi
\section{Algorithmic Framework} \label{sec: algo_framework}
In this section, we describe in detail our quantum-information-assisted algorithmic framework for learning an unknown MDP in the finite-horizon episodic RL setting. In particular, we propose a quantum algorithm that builds on the \textit{model-based} episodic RL procedure originally proposed in the classical setting \citep{azar2017minimax, agarwal2019reinforcement}. In Algorithm \ref{algo: UCBVI}, we present the Quantum \textit{Upper Confidence Bound} - \textit{Value Iteration} (QUCB-VI) algorithm, which takes the number of episodes $K$, the episode length $H$, and the confidence parameter $\delta$ as inputs. In the very first step, the visitation counts for every state-action pair $(s,a) \in \mathcal{S} \times \mathcal{A}$ are initialized to 0. Subsequently, at the beginning of each episode $k$, the value function estimates for the entire state space $\mathcal{S}$, i.e., $\{\widehat{V}_{H}^k(s)\}_{s\in \mathcal{S}}$, are set to 0.
Next, for each time instant up to $H-1$, we update the transition probability model. Here, we utilize the set of \textit{quantum random variables} (q.r.v.) $\{X_{s',h}^{k}\}_{s' \in \mathcal{S}}$ introduced in our quantum RL architecture presented in Figure \ref{fig:qrl_setup}. Recall that the elements of $\{X_{s',h}^{k}\}_{s' \in \mathcal{S}}$ are defined at each time step $h$ during episode $k$ as follows:
\begin{align}
X_{s',h}^{k} \triangleq \mathrm{1}[s_{h+1}^{k} = s']. \label{eq: X_shk_def}
\end{align}
We define $N_h^k(s, a)$ as the number of times $(s, a)$ is visited before episode $k$. More formally, we have
\begin{align}
N_h^k(s, a)=\sum_{i=0}^{k-1} \mathbf{1}\big[(s_h^i, a_h^i)=(s, a) \big]. \label{eq: n_hk_def}
\end{align}
\begin{algorithm}[t!]
\caption{QUCB-VI}
\label{algo: UCBVI}
\begin{algorithmic}[1]
\State \textbf{Inputs:} $K$, $H$, $\delta \in (0,1]$.
\State Set ${N}_h^1(s,a) \leftarrow 0, ~\forall ~s \in \mathcal{S}, ~a \in \mathcal{A},~ h \in [0,H-1]$.
\For {$k=1,\dots,K$}
\State Set $\widehat{V}_{H}^k(s) \leftarrow 0, ~\forall s \in \mathcal{S}$.
\For {$h=H-1,\dots,1,0$}
\State Set $\{\hat{P}_{h}^{k}(s' | s,a )\}_{(s,a,s') \in \mathcal{S} \times \mathcal{A} \times \mathcal{S}}$ via Eq. \eqref{eq: sub_gaussian_txprob_update}.
\State Set $b_h^k(s,a) \leftarrow \frac{ \log(SAHK/\delta)}{N_h^k(s,a)}$.
\State Set $\{\widehat{Q}_h^k(s,a)\}_{s \in \mathcal{S}, a \in \mathcal{A}}$ via Eq. \eqref{eq: Q-func-update}.
\State Set $\{\widehat{V}_{h}^k(s)\}_{s \in \mathcal{S}}$ via Eq. \eqref{eq: Value_update}.
\EndFor
\State Set policies $\{\pi_h^k(s)\}_{s \in \mathcal{S}, h \in [0, H-1]}$ using Eq. \eqref{eq: policy_calc}.
\State \hspace{-2mm} Get Trajectory $\{s_{h}^{k}, a_{h}^{k}, r_{h}^{k}, s_{h+1}^{k}\}_{h=0}^{H-1}$ via $\{\pi_h^k\}_{h=0}^{H-1}$.
\State Set $\{{N}_h^{k+1}(s,a)\}_{s \in \mathcal{S}, a \in \mathcal{A}, h \in [0,H-1]}$ via Eq. \eqref{eq: n_hk_def}.
\EndFor
\end{algorithmic}
\end{algorithm}
${N}_h^k(s,a)$ denotes the number of samples obtained for $(s,a)$ in the past, which enables efficient averaging to estimate the transition probabilities. With the q.r.v.'s $\{X_{s',h}^{k}\}_{s' \in \mathcal{S}}$ in place, we update the transition probability model elements, i.e., $\{\hat{P}_{h}^{k}(s' | s,a )\}_{(s,a,s') \in \mathcal{S} \times \mathcal{A} \times \mathcal{S}}$, in step 6 of Algorithm \ref{algo: UCBVI}. To estimate $\hat{P}_{h}^{k}(s' | s,a )$, we use the ${N}_h^k(s,a)$ q.r.v.'s $X_{s',h}^{k}$ collected over the past episodes. Thus, $\hat{P}_{h}^{k}(s' | s,a )$ is estimated as:
\begin{align}
\hat{P}_{h}^{k}(s' | s,a ) \leftarrow ~\texttt{SubGaussEst}(X_{s',h}^{k}, {N}_h^k(s,a) ,\delta), \label{eq: sub_gaussian_txprob_update}
\end{align}
where the subroutine \texttt{SubGaussEst}, presented as Algorithm 2 in \citep{hamoudi2021quantum}, performs mean estimation of the q.r.v. $X_{s',h}^{k}$ given a collection of ${N}_h^k(s,a)$ samples and the confidence parameter $\delta$. We emphasize that step 6 of Algorithm \ref{algo: UCBVI} is the key change w.r.t. classical UCB-VI: it carefully estimates the mean of the quantum information collected by the agent via interaction with the unknown MDP environment. In step 7, we set the reward bonus $b_h^k(s,a)$, which resembles a Bernstein-style UCB bonus and induces \textit{optimism} in the learnt model. Consequently, steps 8-9 compute the estimates $\{\widehat{Q}_h^k(s,a)\}_{s \in \mathcal{S}, a \in \mathcal{A}}$ and $\{\widehat{V}_{h}^k(s)\}_{s \in \mathcal{S}}$ by adopting the following \textit{Value Iteration} updates at time step $h$:
\begin{align}
& \widehat{Q}_h^k(s,a) \leftarrow
\min\{H,{r}_{h}^{k}(s, a)
+ \langle \widehat{V}_{h+1}^k, \widehat{P}_{h}^{k}\left(\cdot|s, a\right) \rangle \nonumber \\
& \hspace{3cm} + b_h^k(s,a) \}, \label{eq: Q-func-update} \\
& \widehat{V}_{h}^k(s) \leftarrow \underset{a \in \mathcal{A}}{\max} ~\widehat{Q}_h^k(s,a). \label{eq: Value_update}
\end{align}
This \textit{Value Iteration} procedure (i.e., the inner loop consisting of steps 6-9) is executed for $H$ time steps, yielding a collection of $H$ policies $\{\pi_h^k(s)\}_{s \in \mathcal{S}, h \in [0, H-1]}$, computed for each pair $(s,h)$ in step 11 as:
\begin{align}
\pi_h^k(s) \leftarrow \underset{a \in \mathcal{A}}{\arg\max} ~\widehat{Q}_h^k(s,a). \label{eq: policy_calc}
\end{align}
Next, using the updated policies $\{\pi_h^k(s)\}_{s \in \mathcal{S}, h \in [0, H-1]}$, which are based on the observations recorded up to episode $k-1$, the agent collects a new trajectory of $H$ tuples for episode $k$, i.e., $\{s_{h}^{k}, a_{h}^{k}, r_{h}^{k}, s_{h+1}^{k}\}_{h=0}^{H-1}$, in step 12, starting from the initial state reset to $s_0$. Finally, the agent's visitation counts for all state-action pairs at every time step $h$ over the first $k$ episodes, i.e., $\{{N}_h^{k+1}(s,a)\}_{s \in \mathcal{S}, a \in \mathcal{A}, h \in [0,H-1]}$, are updated in step 13. The algorithm then proceeds to a new episode of interaction with the unknown MDP environment.
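To make the inner loop concrete, the following Python sketch mirrors steps 6-9 of Algorithm \ref{algo: UCBVI} for a single episode, with the quantum mean estimation of step 6 abstracted into a precomputed $\hat{P}_{h}^{k}$. Array shapes, function names, and the clipping of zero visitation counts are our own choices for the sketch and are not part of the algorithm specification.
\begin{verbatim}
import numpy as np

def qucbvi_backward_pass(r, P_hat, N, H, S, A, K, delta):
    """One episode's value-iteration pass with the quantum-style bonus
    (steps 6-9 of Algorithm 1, with step 6 abstracted into P_hat).

    r[h, s, a]     : rewards in [0, 1]
    P_hat[h, s, a] : estimated transition rows from the quantum mean estimator
    N[h, s, a]     : visitation counts before the current episode
    Returns greedy policies pi[h, s] and optimistic values V_hat[h, s]."""
    V_hat = np.zeros((H + 1, S))                   # V_hat[H] = 0
    Q_hat = np.zeros((H, S, A))
    pi = np.zeros((H, S), dtype=int)
    L = np.log(S * A * H * K / delta)
    for h in range(H - 1, -1, -1):
        # b_h^k(s, a) = log(SAHK/delta) / N_h^k(s, a); zero counts are clipped
        # to one here, a choice made only for this sketch.
        bonus = L / np.maximum(N[h], 1)
        Q_hat[h] = np.minimum(H, r[h] + P_hat[h] @ V_hat[h + 1] + bonus)
        V_hat[h] = Q_hat[h].max(axis=1)
        pi[h] = Q_hat[h].argmax(axis=1)            # step 11: greedy policy
    return pi, V_hat
\end{verbatim}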
\section{Regret Results for the Proposed Algorithm} \label{sec: theory_result}
\subsection{Main Result: Regret Bound for QUCB-VI}
In Theorem \ref{thm: main_regr}, we present a bound on the cumulative \textit{regret} incurred by QUCB-VI when deployed in an unknown MDP environment $\mathcal{M}$ (see Section \ref{sec: problem_formln} for the MDP definition) over a finite horizon of $K$ episodes.
\begin{theorem} \label{thm: main_regr} In an unknown MDP environment $\mathcal{M} \triangleq \left(\mathcal{S}, \mathcal{A},H, \{P_{h}\}_{h \in [0, H-1]},
\{r_{h}\}_{h \in [0, H-1]}\right)$, the regret incurred by QUCB-VI (algorithm \ref{algo: UCBVI}) across $K$ episodes is bounded as follows:
\begin{align}
\mathbb{E} \Big[\sum_{k=0}^{K-1} \big(V^{*}(s_0) &- V^{\pi^{k}}(s_0) \big) \Big] \nonumber \\
& \leq O (H^2 S^2 A \log^2(SAH^2 K^2 )). \label{eq: _regr}
\end{align}
\end{theorem}
The result in Eq. \eqref{eq: _regr} of Theorem \ref{thm: main_regr} brings out the key advantage of the proposed framework: the cumulative \textit{regret} grows only as $\Tilde{\mathcal{O}}(1)$ in $K$ (i.e., polylogarithmically), compared to the classical rate of $\Tilde{\mathcal{O}} (\sqrt{K})$ \citep{azar2017minimax}.
To prove Theorem \ref{thm: main_regr}, we present the following auxiliary results in the ensuing subsections: a bound on the transition probability model error for every state-action pair (Section \ref{sec: model_err_SA}); the optimism of the learnt model, understood in terms of the value functions of the states (Section \ref{sec: model optimism}); and a supporting result that bounds the sum of inverse state-action visitation counts over the observed trajectories (Section \ref{sec: traj_bound}). We then use these results to prove Theorem \ref{thm: main_regr}
in Section \ref{sec: thm_proof}.
\subsection{Probability Transition Model Error for state-action pairs} \label{sec: model_err_SA}
\begin{lemma} \label{lemma: model_err_SA} For $k \in \{0, \ldots, {K} - 1 \}$, $s \in \mathcal{S}$, $a \in \mathcal{A}$, $h \in \{0, \ldots, H - 1 \}$, for any $f : \mathcal{S} \rightarrow [0, H]$, execution of QUCB-VI (algorithm \ref{algo: UCBVI}) guarantees that the following holds with probability at least $1-\delta$:
\begin{align}
\Big|\Big(\hat{P}_{h}^{k}(\cdot | s,a ) - {P}_{h}(\cdot | s,a ) \Big)^{T} f \Big| \leq \frac{HSL}{N_{h}^{k}(s,a)}, \label{eq: model_est_err}
\end{align}
where $L \triangleq {\log (SAHK/\delta)}$ and $\{ N_{h}^{k}(s,a) $, $ \hat{P}_{h}^{k}(s' | s,a ) \}$ are defined in Eq. \eqref{eq: n_hk_def}, \eqref{eq: sub_gaussian_txprob_update}.
\end{lemma}
\begin{proof}
To prove the claim in Eq. \eqref{eq: model_est_err}, we consider an arbitrary tuple $(s,a,k,h,f)$ and obtain the following:
\begin{align}
\Big|\Big(\hat{P}_{h}^{k}(\cdot | s,a ) & - {P}_{h}(\cdot | s,a ) \Big)^{T} f \Big| \nonumber \\
& \leq \sum_{s' \in \mathcal{S}} f(s')|\hat{P}_{h}^{k}(s' | s,a ) - {P}_{h}(s' | s,a )|, \label{eq: model_est_err_1} \\
& \leq H \sum_{s' \in \mathcal{S}} |\underbrace{\hat{P}_{h}^{k}(s' | s,a ) - {P}_{h}(s' | s,a )}_{\text{(a)}}|, \label{eq: model_est_err_2}
\end{align}
where Eq. \eqref{eq: model_est_err_2} follows from the range $f(\cdot) \in [0,H]$ stated in the lemma. Next, in order to analyze (a) in Eq. \eqref{eq: model_est_err_2}, we note that the definition of $X_{s',h}^{k}$ in Eq. \eqref{eq: X_shk_def} as an \textit{indicator} q.r.v. allows us to write:
\begin{align}
& \hat{P}_{h}^{k}(s' | s,a ) = \mathbb{E}_{q, \hat{P}_{h}^{k}}[X_{s',h}^{k}|s,a], \label{eq: model_est_err_3} \\
& {P}_{h}(s' | s,a ) = \mathbb{E}_{q, {P}_{h}}[X_{s',h}^{k}|s,a]. \label{eq: model_est_err_4}
\end{align}
Using the fact that $X_{s',h}^{k}$ is a q.r.v. allows us to directly apply Lemma \ref{lem: SubGau}, thereby further bounding (a) in Eq. \eqref{eq: model_est_err_2} with probability at least $1- \delta$ as follows:
\begin{align}
|\hat{P}_{h}^{k}(s' | s,a ) & - {P}_{h}(s' | s,a )| \nonumber \\
& \leq \frac{\log (1/\delta)}{N_{h}^{k}(s,a)}, \\
& \leq \frac{\log(SAHK/\delta)}{N_{h}^{k}(s,a)}
= \frac{L}{N_{h}^{k}(s,a)}, \label{eq: model_est_err_5}
\end{align}
where we emphasize that Eq. \eqref{eq: model_est_err_5} is a consequence of applying a union bound over all $s,a,h,k$, together with the definition of $L$ given in the lemma statement. Plugging the bound on term (a) obtained in Eq. \eqref{eq: model_est_err_5} back into Eq. \eqref{eq: model_est_err_2}, we obtain:
\begin{align}
\Big|\Big(\hat{P}_{h}^{k}(\cdot | s,a ) - {P}_{h}(\cdot | s,a ) \Big)^{T} f \Big| & \leq H \sum_{s' \in \mathcal{S}} \frac{L}{N_{h}^{k}(s,a)}, \\
& \leq \frac{HSL}{N_{h}^{k}(s,a)},
\end{align}
which proves the claim of the Lemma.
\end{proof}
\textbf{Interpretation of Lemma \ref{lemma: model_err_SA}:} ~A key insight from this lemma is that quantum mean estimation with the q.r.v. $X_{s',h}^{k}$ allows the use of Lemma \ref{lem: SubGau}, which yields a quadratic speed-up in the convergence of the transition probability model. More specifically, our result shows that the transition probability model error diminishes at rate $\Tilde{\mathcal{O}} (\frac{1}{N_{h}^{k}(s,a)})$, as opposed to the classical rate of $\Tilde{\mathcal{O}} (\frac{1}{\sqrt{N_{h}^{k}(s,a)}})$ \citep{azar2017minimax, agarwal2019reinforcement}.
\subsection{Optimistic behavior of QUCB-VI} \label{sec: model optimism}
\begin{lemma} \label{lemma: model optimism} Assume that the event described in Lemma \ref{lemma: model_err_SA} is true. Then, the following holds $\forall k$:
\begin{align}
\widehat{V}_{0}^{k}(s) \geq {V}_{0}^{*}(s), ~\forall s \in \mathcal{S},
\end{align}
where $\widehat{V}_{0}^{k}(s)$ is calculated by our QUCB-VI algorithm and ${V}_{h}^{*}: \mathcal{S} \rightarrow [0,H]$ is the optimal value function.
\end{lemma}
\begin{proof}
To prove the lemma statement, we proceed via mathematical induction. Firstly, we highlight that the following holds at time step $H$:
\begin{align}
\hat{V}_{H}^{k}(s) = {V}_{H}^{*}(s) = 0, ~\forall s \in \mathcal{S}.
\end{align}
Next, assume that $\hat{V}_{h+1}^{k}(s) \geq {V}_{h+1}^{*}(s)$. If $\hat{Q}_{h}^{k}(s,a) = H$, then $\hat{Q}_{h}^{k}(s,a) \geq {Q}_{h}^{*}(s,a)$ since ${Q}_{h}^{*}(s,a)$ can be at most $H$. Otherwise, at time step $h$, we obtain:
\begin{align}
& \hat{Q}_{h}^{k} (s,a) - {Q}_{h}^{*}(s,a) \nonumber \\
& = b_{h}^{k}(s,a) + \langle \hat{P}_{h}^{k}(\cdot | s,a), \hat{V}_{h+1}^{k} \rangle - \langle {P}_{h}(\cdot | s,a), {V}_{h+1}^{*} \rangle,\\
& \geq b_{h}^{k}(s,a) + \langle \hat{P}_{h}^{k}(\cdot | s,a), {V}_{h+1}^{*} \rangle - \langle {P}_{h}(\cdot | s,a), {V}_{h+1}^{*} \rangle, \label{eq: model optimism eq 1} \\
& = b_{h}^{k}(s,a) \nonumber \\
& ~~~+ \sum_{s' \in \mathcal{S}}\big( \hat{P}_{h}^{k}(s' | s,a) - {P}_{h}(s' | s,a) \big){V}_{h+1}^{*}(s'), \\
& \geq b_{h}^{k}(s,a) - \frac{HSL}{N_{h}^{k}(s,a)}, \label{eq: model optimism eq 2} \\
& = 0, \label{eq: model optimism eq 3}
\end{align}
where Eq. \eqref{eq: model optimism eq 1} is due to the induction assumption. Furthermore, Eq. \eqref{eq: model optimism eq 2}, \eqref{eq: model optimism eq 3} are owed to Lemma \ref{lemma: model_err_SA} and definition of bonus in step 7 of Algorithm \ref{algo: UCBVI}, respectively. Hence, we have $\hat{Q}_{h}^{k}(s,a) \geq {Q}_{h}^{*}(s,a)$. Using \textit{Value Iteration} computations in Eq. \eqref{eq: Q-func-update} - \eqref{eq: Value_update}, we obtain $\hat{V}_{h}^{k}(s) \geq {V}_{h}^{*}(s), ~\forall h$.
This completes the proof.
\end{proof}
\textbf{Interpretation of Lemma \ref{lemma: model optimism}:} ~This lemma shows that QUCB-VI (Algorithm \ref{algo: UCBVI}) produces value function estimates that upper bound the optimal value at each time step, thereby exhibiting the same optimistic behavior as the classical UCB-VI algorithm. Interestingly, the faster convergence of QUCB-VI's transition model (Lemma \ref{lemma: model_err_SA}) permits the use of a sharper bonus term (the $b_{h}^{k}(s,a)$ defined in Algorithm \ref{algo: UCBVI}) in place of the bonus of the classical algorithm, while keeping the optimism of the model intact.
\subsection{Trajectory Summation Bound Characterization} \label{sec: traj_bound}
Next, in Lemma \ref{lemma: trajectory_sum}, we present a technical result that bounds the sum of the inverse visitation counts of the observed state-action pairs over the agent's trajectories collected across all episodes.
\begin{lemma} \label{lemma: trajectory_sum}
Assume an arbitrary sequence of trajectories $\{s_{h}^{k}, a_{h}^{k} \}_{h = 0}^{H-1}$ for $k = 0, \cdots, K-1$. Then, the following result holds:
\begin{align}
\sum_{k = 0}^{K-1} \sum_{h=0}^{H-1} \frac{1}{N_{h}^{k}(s_h^k, a_h^k)} \leq HSA \log (K).
\end{align}
\end{lemma}
\vspace{-1cm}
\begin{proof}
We change order of summations to obtain:
\begin{align}
\sum_{k = 0}^{K-1} &\sum_{h = 0}^{H-1} \frac{1}{N_{h}^{k}(s_h^k, a_h^k)} \nonumber \\
&= \sum_{h = 0}^{H-1} \sum_{k = 0}^{K-1} \frac{1}{N_{h}^{k}(s_h^k, a_h^k)}, \\
& = \sum_{h = 0}^{H-1} \sum_{(s,a) \in \mathcal{S} \times \mathcal{A}} \sum_{i = 1}^{N_{h}^{K}(s,a)} \frac{1}{i}, \\
& \leq \sum_{h = 0}^{H-1} \sum_{(s,a) \in \mathcal{S} \times \mathcal{A}} \log (N_{h}^{K}(s,a)), \label{eq: trajectory_sum_1} \\
& \leq HSA ~{\max}_{\mathcal{S} \times \mathcal{A} \times [0,H-1]} \log (N_{h}^{K}(s,a)), \\
& \leq HSA \log (K),
\end{align}
where Eq. \eqref{eq: trajectory_sum_1} is due to the fact that $\sum_{i=1}^{N} 1/i \leq \log (N)$.
This completes the proof of the lemma statement.
\end{proof}
\subsection{Proof of Theorem \ref{thm: main_regr}} \label{sec: thm_proof}
To prove Theorem \ref{thm: main_regr}, we first note that the following holds for episode $k$:
\begin{align}
V^{*}(s_0) & - V^{\pi^{k}}(s_0) \nonumber \\
&\leq \hat{V}_{0}^{k}(s_0) - V_{0}^{\pi^{k}}(s_0), \label{eq: sim_lemma 1} \\
& = \hat{Q}_{0}^{k}(s_0, \pi^{k}(s_0)) - {Q}_{0}^{\pi^{k}}(s_0, \pi^{k}(s_0)), \\
& \leq r_{0}^{k}(s_0, \pi^{k}(s_0)) + b_{0}^{k}(s_0, \pi^{k}(s_0)) \nonumber \\
& ~~~+ \langle \widehat{V}_{1}^k, \widehat{P}_{0}^{k}\left(\cdot|s_0, \pi^{k}(s_0) \right) \rangle - r_{0}^{k}(s_0, \pi^{k}(s_0)) \nonumber \\
& ~~~- \langle {V}_{1}^{\pi^{k}},{P}_{0}\left(\cdot|s_0, \pi^{k}(s_0) \right) \rangle, \label{eq: sim_lemma 2} \\
& = b_{0}^{k}(s_0, \pi^{k}(s_0)) + \langle \widehat{V}_{1}^k,\widehat{P}_{0}^{k}\left(\cdot|s_0, \pi^{k}(s_0) \right) \rangle \nonumber \\
& ~~~- \langle {V}_{1}^{\pi^{k}},{P}_{0}\left(\cdot|s_0, \pi^{k}(s_0) \right) \rangle, \\
& = b_{0}^{k}(s_0, \pi^{k}(s_0)) \nonumber \\
& ~~~+ \langle \widehat{V}_{1}^k,\widehat{P}_{0}^{k}\left(\cdot|s_0, \pi^{k}(s_0) \right) - {P}_{0}\left(\cdot|s_0, \pi^{k}(s_0)\right) \rangle \nonumber \\
& ~~~+ \underbrace{\langle \widehat{V}_{1}^k - {V}_{1}^{\pi^{k}}, {P}_{0}\left(\cdot|s_0, \pi^{k}(s_0) \right) \rangle}_{\text{(a)}} \label{eq: sim_lemma 3} \\
& = \sum_{h=0}^{H-1} \mathbb{E}_{s_h, a_h \sim d_{h}^{\pi^k}}\Big[b_{h}^{k}(s_h,a_h) \nonumber \\
& ~~~+ \underbrace{\langle \hat{P}_{h}^{k}(\cdot | s_h,a_h) - {P}_{h}(\cdot | s_h,a_h) , \hat{V}_{h+1}^{\pi^{k}} \rangle}_\text{(b)} \Big], \label{eq: main_regr_eq 1}
\end{align}
where Eq. \eqref{eq: sim_lemma 1} is due to Lemma \ref{lemma: model optimism}, and Eq. \eqref{eq: sim_lemma 2} uses the definition of the $Q$-function in Eq. \eqref{eq: Q-func-update} together with the fact that $\min \{a,b \} \leq a$. In Eq. \eqref{eq: sim_lemma 3}, we identify that term (a) is a 1-step recursion of the RHS of Eq. \eqref{eq: sim_lemma 1}. Unrolling this recursion over $h$, we obtain Eq. \eqref{eq: main_regr_eq 1}, where the expectation is w.r.t. the state-action distribution $d_{h}^{\pi^k}$ induced by the policies $\{ \pi_{i}^{k}\}_{i = 0}^{h-1}$.
Let us denote the event described in Lemma \ref{lemma: model_err_SA} by $\mathcal{E}$. Formally, for a given $\delta \in (0,1]$, $\mathcal{E}$ is described as follows:
\begin{align}
\mathcal{E} \triangleq \mathrm{1}\Big[|\hat{P}_{h}^{k}(s' | s,a ) & - {P}_{h}(s' | s,a )| \leq \frac{\log (1/\delta)}{N_{h}^{k}(s,a)}, ~\forall s,a,s' \Big]. \nonumber
\end{align}
Then, assuming that $\mathcal{E}$ is true, term (b) in Eq. \eqref{eq: main_regr_eq 1} can be further bounded as follows:
\begin{align}
\langle \hat{P}_{h}^{k}&(\cdot | s_h,a_h) - {P}_{h}(\cdot | s_h,a_h) , \hat{V}_{h+1}^{\pi^{k}} \rangle \nonumber \\
& \leq \|\hat{P}_{h}^{k}(\cdot | s_h,a_h) - {P}_{h}(\cdot | s_h,a_h) \|_{1} \| \hat{V}_{h+1}^{\pi^{k}}\|_{\infty}, \label{eq: main_regr_eq 2} \\
& \leq \frac{HSL}{N_{h}^{k}(s,a)}, \label{eq: main_regr_eq 3}
\end{align}
where Eqs. \eqref{eq: main_regr_eq 2} and \eqref{eq: main_regr_eq 3} are consequences of H\"{o}lder's inequality and Lemma \ref{lemma: model_err_SA}, respectively. By plugging the bound on term (b) obtained in Eq. \eqref{eq: main_regr_eq 3} back into the RHS of Eq. \eqref{eq: main_regr_eq 1}, we have:
\begin{align}
V^{*} & (s_0) - V^{\pi^{k}}(s_0) \nonumber \\
& \leq \sum_{h=0}^{H-1} \mathbb{E}_{s_h, a_h \sim d_{h}^{\pi^k}}\Big[b_{h}^{k}(s_h,a_h) + \frac{HSL}{N_{h}^{k}(s,a)} \Big], \label{eq: thm_val_diff1} \\
& \leq \sum_{h=0}^{H-1} \mathbb{E}_{s_h, a_h \sim d_{h}^{\pi^k}}\Big[ 2 \frac{HSL}{N_{h}^{k}(s,a)}\Big], \label{eq: main_regr_eq 4} \\
& = 2 HSL \cdot \mathbb{E}\Big[ \sum_{h=0}^{H-1} \frac{1}{N_{h}^{k}(s,a)} \big| \mathcal{H}_{<k} \Big], \label{eq: main_regr_eq 5}
\end{align}
where Eq. \eqref{eq: main_regr_eq 4} follows from the definition of the bonus $b_{h}^{k}$ in Algorithm \ref{algo: UCBVI}. Further, in Eq. \eqref{eq: main_regr_eq 5}, the expectation is w.r.t. the trajectory $\{s_h^{k}, a_{h}^{k}\}$ generated by policy $\pi^{k}$, conditioned on the history $\mathcal{H}_{<k}$ collected up to the end of episode $k-1$. Summing over all episodes and accounting for the success/failure of event $\mathcal{E}$, we obtain the following:
\begin{align}
\mathbb{E} & \Big[\sum_{k = 0}^{K-1} V^{*}(s_0) - V^{\pi^{k}}(s_0) \Big] \nonumber \\
& = \mathbb{E} \Big[ \mathrm{1}[\mathcal{E}] \Big(\sum_{k = 0}^{K-1} V^{*}(s_0) - V^{\pi^{k}}(s_0) \Big)\Big] \nonumber \\
& ~~~+ \mathbb{E} \Big[\mathrm{1}[\overline{\mathcal{E}}] \Big(\sum_{k = 0}^{K-1} V^{*}(s_0) - V^{\pi^{k}}(s_0) \Big) \Big], \\
& \leq \mathbb{E} \Big[ \mathrm{1}[\mathcal{E}] \Big(\sum_{k = 0}^{K-1} V^{*}(s_0) - V^{\pi^{k}}(s_0) \Big)\Big] + 2\delta KH, \label{eq: main_regr_eq 6} \\
& \leq 2 HSL \cdot \mathbb{E}\Big[ \sum_{k = 0}^{K-1} \sum_{h=0}^{H-1} \frac{1}{N_{h}^{k}(s,a)} \Big] + 2\delta KH, \label{eq: main_regr_eq 7} \\
& \leq 2 H^{2}S^{2}A L \log (K) + 2\delta KH, \label{eq: main_regr_eq 8}
\end{align}
where Eq. \eqref{eq: main_regr_eq 6} follows from the facts that the value functions $\{V^{*}, V^{\pi^{k}} \}$ are bounded by $H$ and that the failure probability of event $\mathcal{E}$ is at most $\delta$. Next, we obtain Eq. \eqref{eq: main_regr_eq 7} by applying Eq. \eqref{eq: main_regr_eq 5} on the event that $\mathcal{E}$ holds. Eq. \eqref{eq: main_regr_eq 8} is a direct consequence of Lemma \ref{lemma: trajectory_sum}.
\begin{figure*}[htb!]
\centering
\subfigure[ \scriptsize Riverswim-6. $S = 6, ~A = 2.$]{
\includegraphics[scale = 0.34]{alg=UCBVI_env=riverswim6_nEps=10000_nRuns=20_plot.png}
\label{fig:riverswim6_regr}
}
\subfigure[ \scriptsize Riverswim-12. $S = 12, ~A = 2.$]{
%
\includegraphics[scale = 0.34]{alg=UCBVI_env=riverswim12_nEps=10000_nRuns=20_plot.png}
\label{fig:riverswim12_regr}
}
\subfigure[ \scriptsize Grid-world. $S = 20, ~A = 4.$ ]{
%
\includegraphics[scale = 0.34]{alg=UCBVI_env=gridworld_nEps=10000_nRuns=20_plot.png}
\label{fig: gridworld_regr}
}
\caption{Cumulative Regret incurred by QUCB-VI (algorithm \ref{algo: UCBVI}) and UCB-VI algorithm for various RL environments.}
\label{fig: regret_fig}
\vspace{0in}
\end{figure*}
By setting $\delta = 1/KH$, we obtain:
\begin{align}
\mathbb{E} & \Big[\sum_{k = 0}^{K-1} V^{*}(s_0) - V^{\pi^{k}}(s_0) \Big] \nonumber \\
& \leq 2H^2 S^2 A \log^2(SAH^2 K^2 ) + 2, \\
& = O (H^2 S^2 A \log^2(SAH^2 K^2 )).
\end{align}
This completes the proof of the theorem.
The choice of the improved reward bonus term manifests itself in Eqs. \eqref{eq: thm_val_diff1}-\eqref{eq: main_regr_eq 5} and plays a significant role in QUCB-VI's overall \textit{regret} improvement. Further, we note that the martingale-style proof technique and the corresponding Azuma-Hoeffding inequality, which are typical in the regret analysis of classical UCB-VI, are not needed in the analysis of the QUCB-VI algorithm.
\section{Numerical Evaluations} \label{sec: experiments}
In this section, we analyze the performance of QUCB-VI (Algorithm \ref{algo: UCBVI}) via proof-of-concept experiments on multiple RL environments, and we compare our
methodology against its classical counterpart UCB-VI \citep{azar2017minimax, agarwal2019reinforcement}. To this end, we first conduct our empirical evaluations on the \textit{RiverSwim-6} environment comprising 6 states and 2 actions, which is extensively used for benchmarking \textit{model-based} RL frameworks \citep{osband2013more, tossou2019near, chowdhury2022differentially}. Next, we extend our testing setup to include \textit{RiverSwim-12} with 12 states and 2 actions. Finally, we construct a \textit{Grid-world} environment \citep{sutton2018reinforcement} based on a $7 \times 7$ grid, with 20 states and 4 actions.
\textbf{Simulation Configurations:} ~In our experiments, for all the aforementioned environments, we train for $K = 10^4$ episodes, each consisting of $H=20$ time steps. The environment is reset to a fixed initial state at the beginning of each episode. Furthermore, we perform 20 independent Monte-Carlo simulations and record the episode-wise cumulative regret incurred by QUCB-VI and the baseline UCB-VI algorithm. In our implementation of QUCB-VI, the state transition probability estimates are obtained by sampling uniformly within a window around the actual transition probability, with the window width governed by the quantum mean estimation error.
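A minimal sketch of this classical emulation of the quantum mean estimator is shown below; the window half-width $\log(1/\delta)/N$ follows the estimation error used in our analysis, while the clipping and renormalization steps are our own additions to keep the estimate a valid probability distribution (the actual \texttt{SubGaussEst} routine of \citep{hamoudi2021quantum} is not reimplemented here).
\begin{verbatim}
import numpy as np

def emulate_subgauss_est(rng, p_true, n_samples, delta):
    """Classically emulate the quantum mean estimate of one transition row.
    The estimate is drawn uniformly within the error window log(1/delta)/n
    of the true probabilities, then clipped and renormalized (our addition)
    so that it remains a valid distribution."""
    S = len(p_true)
    if n_samples == 0:
        return np.full(S, 1.0 / S)
    eps = np.log(1.0 / delta) / n_samples
    p_hat = np.clip(p_true + rng.uniform(-eps, eps, size=S), 0.0, 1.0)
    total = p_hat.sum()
    return p_hat / total if total > 0 else np.full(S, 1.0 / S)
\end{verbatim}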
\if 0
\begin{remark}
We highlight that classical implementation of QUCB-VI is facilitated by the closed form of measurement output distribution pertaining to quantum amplitude estimation procedures as corroborated by Theorem 11 in \citep{brassard2002quantum}. Consequently, this simulation strategy has been adopted in prior works in QRL ~\citep{wan2022quantum}.
\end{remark}
\fi
\textbf{Interpretation of Results:} ~In Figure \ref{fig: regret_fig}, we report our experimental results in terms of the cumulative regret incurred by the agent versus the number of training episodes for each RL environment. In Figs. \ref{fig:riverswim6_regr}-\ref{fig: gridworld_regr}, we note that QUCB-VI significantly outperforms classical UCB-VI, while also achieving model convergence within the chosen number of training episodes. These observations support the convergence-speed gains of the proposed algorithm revealed by our theoretical regret analysis. In Figs. \ref{fig:riverswim12_regr}-\ref{fig: gridworld_regr}, we observe that classical UCB-VI exhibits nearly linear regret growth. This
indicates that, in environments such as \textit{RiverSwim-12} and \textit{Grid-world}, which are characterized by large-diameter MDPs, the classical agent requires many more training episodes to explore sufficiently. This demonstrates that quantum computing enables significantly faster convergence.
\section{Conclusion and Future Work} \label{sec: conclusion}
We propose a quantum-information-assisted \textit{model-based} RL methodology that facilitates an agent's learning in an unknown MDP environment. To this end, we first present a carefully engineered architecture modeling the agent's interaction with the environment at every time step. We then present the QUCB-VI algorithm, which incorporates an efficient quantum mean estimator, leading to exponential theoretical convergence-speed improvements over the classical UCB-VI proposed in \citep{azar2017minimax}. Finally, we report evaluations on a set of benchmark RL environments which support the efficacy of the QUCB-VI algorithm. As future work, it will be worth exploring whether these benefits can be translated to \textit{model-free} as well as continual RL settings. |
{
"arxiv_id": "2302.08672",
"language": "en",
"timestamp": "2023-02-20T02:06:09",
"url": "https://arxiv.org/abs/2302.08672",
"yymm": "2302"
} | \section{Introduction}
\label{sec:introduction}
\input{review/text/intro.tex}
\section{Background}
\label{sec:background}
\input{review/text/background.tex}
\section{Problem Formulation}
\label{sec:problem}
\input{review/text/problem.tex}
\section{Method}
\label{sec:methods}
\input{review/text/methods.tex}
\section{Experiments}
\label{sec:experiments}
\input{review/text/experiments.tex}
\section{Related Work}
\label{sec:related}
\input{review/text/related.tex}
\section{Conclusion}
\label{sec:conclusion}
\input{review/text/conclusion.tex}
\subsection*{Acknowledgements}
We thank Jae-Won Chung, Junhyug Noh and Dongsub Shim for constructive feedback on the manuscript.
This work was supported in part by grants from LG AI Research and NSF CAREER IIS-1453651.
{\small
\bibliographystyle{abbrvnat}
\subsection{Subtask State Prediction from Noisy Annotations}
\label{subsec:method_subtask_state_prediction}
This section describes our model and training objective for predicting missing subtask state labels $\hat{\mathbf{S}}$ in the videos.
Based on the intuition that visual and textual data provide complementary information, we train a network on both visual and text signals to predict which of the three states ($\hat{s}_t[n] \in $ \{\emph{\textit{not started}}, \emph{\textit{in progress}}, \emph{\textit{completed}}\}) a subtask $n$ is in at any given time $t$.
These predictions are used as inputs for subtask graph generation as described in \Cref{subsec:method_graph_inference}. %
\paragraph{Architecture.}
Inspired by MERLOT~\cite{zellers-neurips21}, we jointly process visual frames and language information to model subtask states.
Given visual information $\mathbf{F}$ and sentences from corresponding text transcript $\mathbf{X}$, we first obtain frame embeddings $\mathbf{F}^{e}$ and sentence embeddings $\mathbf{X}^{e}$ using a pre-trained CLIP \citep{radford-icml21} model similar to prior work \citep{buch-cvpr22,li-arxiv21,luo-arxiv21} (See~\Cref{supp_sec:subtask_state_prediction} for details about preprocessing).
We enrich the obtained visual representations with information from the text transcript through a Multi-Head Attention (MHA) mechanism~\citep{vaswani-neurips17}, followed by a residual layer as in~\Cref{eq:mha,eq:residual} (normalization and dropout omitted for brevity).
This approach allows us to fuse misaligned visual and text information from videos.
These representations are processed by a Transformer (with a bidirectional attention mask), and we obtain $\hat{\mathbf{S}}$ as projections of the final layer Transformer representations (\Cref{eq:proj}).
\begin{align}
\mathbf{A} &= \mbox{MHA}(\text{Query}=\mathbf{F}^{e}, \text{Keys} = \text{Values} = \mathbf{X}^{e}) \label{eq:mha} \\
\mathbf{H} &= \text{Feed-forward}(\mathbf{A}) + \mathbf{A} \label{eq:residual} \\
\hat{\mathbf{S}} &= \text{Feed-forward}(\text{Transformer}(\mathbf{H})) \label{eq:proj}
\end{align}
Based on the intuition that a sampled frame may not always contain clear information (\emph{e.g.}, a black transition frame), we predict subtask states at the beginning and the end of each subtask\footnote{based on the ground-truth subtask segmentation} (instead of predicting at every time step, which can be noisy) by appending special delimiter symbols (\tokensymbol{[start]} and \tokensymbol{[end]}, respectively) as shown in \Cref{fig:subtask_state_prediction}.
Specifically, we feed delimiter symbols to the Transformer by replacing $\mathbf{H}$ in \Cref{eq:proj} and estimate state predictions $\hat{\mathbf{S}}$ for a given subtask based on the final layer representations of these delimiter tokens.
We denote the final predictions based on both visual and language information as $\hat{\mathbf{s}}_t \in \mathbb{R}^{N_\tau}$.
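For concreteness, a minimal PyTorch sketch of the fusion in \Cref{eq:mha,eq:residual,eq:proj} is given below. The embedding dimension, number of heads and layers, and the generic Transformer encoder are placeholders (our experiments use a T5-based Transformer), and the delimiter-token mechanism described above is omitted for brevity.
\begin{verbatim}
import torch
import torch.nn as nn

class SubtaskStateModel(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_layers=4, n_subtasks=13):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # bidirectional attention
        self.head = nn.Linear(d_model, n_subtasks)  # one real-valued score per subtask

    def forward(self, frame_emb, text_emb):
        # frame_emb: (B, T_f, d) CLIP frame features; text_emb: (B, T_x, d) sentence features
        A, _ = self.mha(query=frame_emb, key=text_emb, value=text_emb)  # MHA(Q=F^e, K=V=X^e)
        H = self.ff(A) + A                                              # feed-forward + residual
        return self.head(self.encoder(H))     # (B, T_f, n_subtasks): subtask state scores
\end{verbatim}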
\paragraph{Training Objective for Subtask State Prediction.}
We incorporate ordinal regression loss~\citep{niu-cvpr16, cao-prl20} to prevent subtasks from transitioning directly from \emph{not started} to \emph{completed} without going through the intermediate \emph{in progress} state.\footnote{We observed that subtask state prediction performance dropped by $\sim$30\% when modeled as a multiclass classification problem.}
We model our ordinal regression objective similar to~\citet{cao-prl20}, where the real line is partitioned into three subsets based on a threshold parameter $b > 0$ such that the ranges $\hat{s}_t[n] \in (-\infty, -b)$, $\hat{s}_t[n] \in (-b, b)$, $\hat{s}_t[n] \in (b, \infty)$ respectively model the three subtask states.
As shown in \Cref{eq:masked_cross_entropy}, our objective is based on two binary classification losses corresponding to the two decision boundaries.
$\sigma$ denotes the sigmoid function, BCE is the binary cross-entropy loss and the expectation is taken over time-steps $t$ corresponding to the special delimiter tokens and subtasks $n$ that appear in the video.
\begin{align}
\begin{split}
\mathcal{L}_\text{ssp} = \mathbb{E}_{t,n}
[
&\mbox{\small BCE} ( \sigma( \hat{s}_t[n] + b), \mathbb{I}[s_t[n] \neq {\small\textit{not started}}] ) \\
+ &\mbox{\small BCE} ( \sigma( \hat{s}_t[n] - b), \mathbb{I}[s_t[n] = {\small\textit{completed}}] )
]
\end{split}
\label{eq:masked_cross_entropy}
\end{align}
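A sketch of this objective is shown below; the mask construction (restricting the loss to delimiter time steps and subtasks present in the video) is simplified, and the threshold $b$ is a placeholder value.
\begin{verbatim}
import torch
import torch.nn.functional as F

def ordinal_state_loss(s_hat, s_true, mask, b=1.0):
    """s_hat:  (B, T, N) real-valued scores  s_t[n]
    s_true: (B, T, N) integer labels (0 = not started, 1 = in progress, 2 = completed)
    mask:   (B, T, N) 1 for (delimiter step, subtask present in video), else 0"""
    started = (s_true != 0).float()      # target for the boundary at -b
    completed = (s_true == 2).float()    # target for the boundary at +b
    loss = (F.binary_cross_entropy_with_logits(s_hat + b, started, reduction="none")
            + F.binary_cross_entropy_with_logits(s_hat - b, completed, reduction="none"))
    return (loss * mask).sum() / mask.sum().clamp(min=1)
\end{verbatim}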
\subsection{Graph Generation from Subtask State Labels} %
\label{subsec:method_graph_inference}
\input{review/text/graph_gen}
\subsection{Learning Compositional Tasks using Graphs} %
\label{subsec:related_task_graph}
Prior works have tackled compositional tasks by modeling the task structure using a manually defined symbolic representation.
Classical planning works learn STRIPS~\cite{fikes-ai71, frank-c03} domain specifications (action schemas) from given trajectories (action traces)~\cite{mehta-neurips11,suarez-aaaiw20,walsh-aaai08,zhuo-ai10}.
Reinforcement learning approaches model task structure in the form of graphs such as hierarchical task networks~\cite{hayes-icra16,huang-cvpr19,xu-icra18} and subtask graphs~\cite{liu-aaai22,sohn-neurips18,sohn-iclr20} to achieve compositional task generalization.
However, they assume that there is no noise in the data and that high-level subtask information (\emph{e.g.}, whether each subtask is eligible and completed) is available to the agent.
Thus, existing methods cannot be directly applied to videos of real-world activities due to (1) missing high-level subtask information and (2) data noise (\emph{e.g.}, rearranged, skipped, or partially removed subtasks).
Instead, our model learns to predict subtask progress from the video and adopts a novel inductive logic programming algorithm that can robustly generate the subtask graph from noisy data.
\subsection{Video Representation Learning} %
\label{subsec:related_vision_language}
There has been limited progress in video representation learning~\cite{simonyan-neurips14,tran-iccv15} compared to its image counterpart~\cite{he-cvpr16,he-iccv17,huang-cvpr17}, due in part to the considerable human effort in labeling innately higher-dimensional data~\cite{miech-iccv19}.
Following the success of the Transformer \citep{vaswani-neurips17} architecture in Natural Language Processing~\cite{devlin-naacl19,radford-blog19}, it has been subsequently used for visual representation learning~\cite{li-arxiv19,radford-icml21,ramesh-icml21}.
More recent efforts have been directed towards learning from multimodal video and text data \cite{miech-iccv19,miech-cvpr20,sun-iccv19,zellers-neurips21}.
One of the key issues in learning from video and paired text narrations is handling the misalignment between video frames and text transcripts~\cite{miech-cvpr20,shen-cvpr21}.
In addition, extracting a holistic understanding of video content is challenging, so most prior works focus on short clip-level representations \cite{miech-iccv19,miech-cvpr20,sun-iccv19}.
Our work attempts to extract a holistic understanding of a task described in instructional videos by learning to jointly model visual and language information.
Unlike prior works that study the sequence of events appearing in individual instructional videos \cite{elhamifar-iccv19,zhukov-cvpr19,tang-cvpr19}, we attempt to consolidate information from multiple videos of a task to obtain a robust understanding of the task structure.
\subsection{Graph Generation}
\label{subsec:experiment_graph_generation}
In this section, we qualitatively and quantitatively compare our method against baselines on the subtask graph generation task.
We first introduce the dataset and the baselines we use and then discuss the main results.
\paragraph{Dataset.} %
Since there are no existing datasets that consist of videos with subtask graph annotations, we constructed a new dataset based on the ProceL dataset~\citep{elhamifar-iccv19}.
ProceL includes a variety of subtasks with dense labels from real-world instructional videos.
We asked three human annotators to manually construct a graph for each task and aggregated these annotations by retaining the high-confidence preconditions (See~\Cref{supp_subsec:ablation_study_subtask_state_prediction} for more details).
We use all the videos from twelve primary tasks in ProceL accompanied by spoken instructions generated using Automatic Speech Recognition (ASR).
Each task has 54.5 videos and 13.1 subtasks on average, and each video has subtask label and timing annotations (start and end times).
\paragraph{Baselines.} %
We compare our approach against the following baselines.
\begin{itemize}
\item \textbf{MSGI}~\cite{sohn-iclr20}: An inductive logic programming (ILP) method that maximizes the objective in \Cref{eq:ilp-org}. %
\item \textbf{MSGI+\xspace}: A variant of MSGI which assumes that the precondition is not met until the subtask is performed (\emph{i.e.}, $e_t[n]=0$ if $\hat{s}_t[n]=\textit{not started}$), similar to prior work \citep{hayes-icra16, xu-icra18}.
We demonstrate that this method, though an improvement over MSGI, yields worse performance than MSG$^2${} due to its strong assumption.
We instead assume that it is unknown whether the precondition is met in this case (\emph{i.e.}, $e_t[n]$ is unknown for $\hat{s}_t[n]\neq\textit{in progress}$).
\item \textbf{\ourMethodName-noSSP} (\ourMethodName~without Subtask State Prediction): Generates the graph (\Cref{subsec:method_graph_inference}) from human-annotated subtask state labels $\mathbf{S}$. In other words, this is our method without the multimodal subtask state inference module in \Cref{subsec:method_subtask_state_prediction}.
\item \textbf{ProScript}~\cite{sakaguchi-emnlpf21}: A T5 model~\cite{raffel-jmlr20} fine-tuned to predict a partially-ordered script from a given scenario description (\emph{e.g.}, baking a cake) and a set of unordered subtask descriptions.
Note that this is a text-only baseline trained on script data from \citet{sakaguchi-emnlpf21} and does not use the ProceL video/text data.
\end{itemize}
We use T5-Large~\cite{raffel-jmlr20} as the transformer in our state prediction module.
During training we randomly omit video frames corresponding to 25\% of subtasks for data augmentation.
See~\supplementary{\Cref{supp_subsec:ablation_study_subtask_state_prediction}} for further details and ablations regarding choice of model and architecture.
\paragraph{Metrics.}
We consider the following metrics to measure the quality of predicted subtask graph $G=(\pfuncarg{1}, \ldots, \pfuncarg{N_\tau})$.
\begin{itemize}
\item \textit{Accuracy} measures how often the output of the predicted and the ground-truth preconditions agree~\cite{sohn-iclr20}, where
$\pfuncarg{n}^{*}$ is the ground-truth precondition of the $n^\text{th}$ subtask.
\begin{align}
\text{Accuracy}=\frac{1}{N_{\tau}}\sum_{n=1}^{N_{\tau}} \mathbb{E}_\mathbf{c} \left[
\mathbb{I}\left[
\pfuncarg{n}(\mathbf{c})=\pfuncarg{n}^{*}(\mathbf{c})
\right]
\right]
\label{eq:accuracy}
\end{align}
\item \emph{Strict Partial Order Consistency}~(SPOC): We define a strict partial order relation $R_G$ imposed by graph $G$ on subtasks $a, b \in S$ as: $(a,b) \in R_{G}$ iff $a$ is an ancestor of $b$ in graph $G$ (\emph{i.e.}, it can be written as a binary relation with matrix notation: $R_{G}[a][b] \triangleq \mathbb{I}$[$a$ is an ancestor of $b$ in $G$]).
SPOC~is defined as in \Cref{eq:graphmetric}, where $G^{*}$ denotes the ground-truth graph.
\begin{align}
\label{eq:graphmetric}
\text{SPOC}=\frac{1}{N_{\tau}(N_{\tau}-1)} \sum_{a\neq b}\mathbb{I}\left[R_{G}[a][b]=R_{G^{*}}[a][b]\right]
\raisetag{10pt}
\end{align}
\item \emph{Compatibility} measures the extent to which subtask trajectories in the dataset are compatible with the causality constraints imposed by the predicted graph.
For instance, given a subtask sequence $x_1,\ldots,x_n$, if $x_j$ is an ancestor of $x_i$ in the graph (\emph{i.e.}, $(x_j, x_i) \in R_{G}$) for $i < j$, this might suggest that subtasks $x_i$ and $x_j$ violate the causality constraint imposed by the graph.
However, this is not always the case, as a subtask can still occur \emph{before} an \emph{ancestor} subtask as long as its precondition is satisfied (\emph{i.e.}, $\pfuncarg{x_i}(x_1,\ldots,x_{j-1})=1$).\footnote{In an abuse of notation, we use $\pfuncarg{x_i}(x_1,\ldots,x_{j-1})$ to represent whether the precondition of $x_i$ is satisfied given that subtasks $x_1, \ldots,x_{j-1}$ were completed.}
We thus define the Compatibility~of the given subtask sequence as in \Cref{eq:compatibility}.\footnote{We assume the product taken over a null set to be 1.}
The metric is (macro) averaged across all dataset instances; a computational sketch of SPOC and Compatibility is given after this list.
\begin{align}
\text{Compatibility} = \frac{1}{n} \sum_{j=1}^n \prod_{\substack{i < j \\ {(x_j, x_i) \in R_{G}}}} \pfuncarg{x_{i}}(x_1,\ldots,x_{j-1})
\label{eq:compatibility}
\raisetag{13pt}
\end{align}
\end{itemize}
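The sketch below illustrates how SPOC~and Compatibility~can be computed, with the graph encoded as an ancestor-relation matrix and per-subtask precondition functions given as callables over sets of completed subtasks; this is a simplified illustration of the definitions above, not our exact evaluation code.
\begin{verbatim}
import numpy as np

def spoc(R_pred, R_true):
    """R_pred, R_true: (N, N) boolean matrices with R[a, b] = 1 iff a is an
    ancestor of b.  Returns the off-diagonal agreement rate."""
    off_diag = ~np.eye(R_pred.shape[0], dtype=bool)
    return (R_pred == R_true)[off_diag].mean()

def compatibility(sequence, R_pred, preconditions):
    """One subtask sequence x_1,...,x_n (0-indexed here).
    preconditions[x]: callable mapping a set of completed subtasks to {0, 1}."""
    n = len(sequence)
    total = 0.0
    for j in range(n):
        prod = 1.0
        for i in range(j):
            if R_pred[sequence[j], sequence[i]]:     # x_j is an ancestor of x_i, i < j
                prod *= preconditions[sequence[i]](set(sequence[:j]))
        total += prod                                # empty product counts as 1
    return total / n
\end{verbatim}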
\begin{table}[t]
\setlength{\tabcolsep}{4pt}
\centering
\small
\caption{\textbf{Graph generation results in ProceL~\cite{elhamifar-iccv19}.}
We report the average percentage (\%) across all tasks (See~\supplementary{\Cref{supp_subsec:task_level_graph_generation_result}} for task-level performance).
}
\begin{tabular}{l|ccc}
\toprule
Method & Accuracy~$\uparrow$ & SPOC~$\uparrow$ & Compatibility~$\uparrow$ \\ \midrule
ProScript & 57.50 & 62.34 & 37.12 \\
MSGI & 54.23 & 64.62 & N/A \\
MSGI+\xspace & 73.59 & 72.39 & 76.08 \\
\ourMethodName-noSSP & 81.62 & 88.35 & 97.86 \\
\textbf{MSG$^2$~(Ours)} & \textbf{83.16} & \textbf{89.91} & \textbf{98.30} \\
\bottomrule
\end{tabular}
\label{tab:graph_evaluation}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{review/figure/comparison_gt_ilp_ours_vertical_v3.pdf}
\caption{\textbf{Illustration of subtask graphs generated by each method for \task{perform CPR} task}. %
For the predicted graphs ((b)~\ourMethodName~without Subtask State Prediction~and (c)~MSG$^2$), {\color{blue}redundant edges} (\emph{i.e.}, false positives) are colored in {\color{blue}blue} and {\color{red}missing edges} (\emph{i.e.}, false negatives) are colored in {\color{red}red}.
Please check~\Cref{supp_fig:generated_graph_examples} for more examples.
}
\label{fig:generated_graph}
\end{figure*}
\paragraph{Results.}
\Cref{tab:graph_evaluation}~summarizes the performance of different graph generation methods.
First, the complexity regularization term~(\Cref{eq:ilp-new}) helps our method (MSG$^2$) be more robust to incomplete and noisy data compared to the MSGI and MSGI+\xspace baselines.
Intuitively, MSGI+\xspace assumes that the input data contains no noise and attempts to infer a binary decision tree that perfectly discriminates between eligible and ineligible samples.
When the data is noisy, we observe that such strictness results in an overly complicated precondition (\emph{i.e.}, it overfits to the noise), resulting in inaccurate graphs.
On the other hand, MSGI simply ignores all the data points for which it is unknown whether the precondition is met.
This results in MSGI predicting \emph{null} preconditions (\emph{i.e.}, the precondition is always satisfied) for all subtasks.
We thus report \emph{Compatibility}~as N/A in \Cref{tab:graph_evaluation} for MSGI.
Note that our MSG$^2${} circumvents this degeneration problem by optimizing the precision (as opposed to the accuracy, as in MSGI) and regularizing the complexity of the inferred graph (see \Cref{eq:jours}).
When using subtask states predicted by our state prediction module (MSG$^2$) instead of human annotations (\ourMethodName-noSSP), we observe consistent improvements for all metrics, which shows the benefit of predicting missing subtasks by exploiting multimodal vision-text data.
See~\supplementary{\Cref{supp_fig:completion_prediction_examples}}~for examples of subtask state prediction.
Our method also significantly outperforms proScript~\cite{sakaguchi-emnlpf21}, which relies on a web-scale pretrained text model~\cite{raffel-jmlr20}.
\paragraph{Qualitative Evaluation.} %
\Cref{fig:generated_graph}~shows predicted graphs for the \task{perform CPR}~task.
The~MSG$^2$~graph (Figure~\ref{fig:generated_graph}.(c)) is closer to the ground-truth graph (Figure~\ref{fig:generated_graph}.(a)) compared to our model without subtask state prediction (Figure~\ref{fig:generated_graph}.(b)).
This is because the predicted subtask states provide additional clues about subtask completion.
For instance, consider the subtask \subtask{D:check breathing}, which is a prerequisite for subtasks \subtask{F}, \subtask{G} and \subtask{H} in the ground truth graph.
Since \subtask{D} is sparsely annotated in the data (it does not appear in 29\% of the sequences), the baseline model assumes that the presence of other subtasks (\emph{e.g.}, \subtask{C}) explains subtasks \subtask{F}, \subtask{G} and \subtask{H} (the redundant outgoing arrows from \subtask{C}).
By recovering some of the missing annotations for \subtask{D} based on visual and text signals (\emph{e.g.}, when it is clearly visible that \subtask{check breathing} was performed), our method resolves some of these cases (\emph{e.g.}, \subtask{C} is no longer a precondition for \subtask{H}).
However, the recovery is not perfect, and our model still mistakenly assumes that \subtask{C} is a precondition for \subtask{F}.
Further improvements in multimodal subtask state modeling can help address these errors.
\subsection{Next Subtask Prediction}
\label{subsec:experiment_next_step_prediction}
Inspired by~\citet{epstein-iccv21}, which performed the downstream task of predicting the next \emph{subtask}, we consider the next subtask prediction based on the inferred subtask graph. %
Predicting the next subtask can help humans and agents break down a complex task into more manageable subtasks and focus on the subtasks that are currently relevant.
It further helps complete tasks efficiently by minimizing trial and error, and by identifying and addressing potential issues that may arise during the task.
In contrast to predicting the next \emph{frame} in time~\cite{damen-eccv18,kuehne-cvpr14}, next \emph{subtask} prediction requires reasoning about events at a higher level of temporal abstraction. %
\paragraph{Dataset.} %
We use the CrossTask dataset, which consists of 18 primary tasks with 7.4 subtasks and 140.6 videos per task on average.\footnote{We found CrossTask to be more interesting and challenging than other datasets in the literature as it includes more variation in subtasks.}
We also adopt all 12 tasks from the ProceL dataset which includes 13.2 subtasks with 54.5 videos per task on average.
For both ProceL~\cite{elhamifar-iccv19} and CrossTask~\cite{zhukov-cvpr19}, we convert all videos to 30 fps, obtain verb phrases and extract CLIP features following~\Cref{subsec:experiment_graph_generation}.
For each dataset we take 15\% of the videos of each task as the test set.
Please see~\supplementary{\Cref{supp_sec:next_step_pred}}~for the details.
\paragraph{Baselines.} %
We first choose two open-source end-to-end video-based Transformer models (STAM \cite{sharir-arxiv21}, ViViT \cite{arnab-iccv21}) for comparison.
These models are based on the Vision Transformer~\cite{dosovitskiy-iclr21}, which slices each frame into $16\times16$ patches and treats each patch as an input to the transformer.
To keep the input the same for all models used in this experiment, we replace the patch embedding from each model implementation with the CLIP embedding used in our model.
Specifically, we replace the patch embedding with $\mathbf{H}^{i}$~in~\Cref{eq:residual} for each model.
We train the methods to predict the next subtask from the history of previous subtasks using a multi-class subtask classification loss, where a linear projection of the final transformer hidden state is used as the subtask prediction.
In addition to these end-to-end neural baselines, we also tested the graph-based methods introduced in~\Cref{tab:graph_evaluation}.
For all graph-based methods, we first generated the graph from the train split.
We then use the generated subtask graph to predict whether the precondition of each subtask is satisfied, based on the subtask completions in the test split.
Specifically, we used the GRProp policy~\cite{sohn-neurips18}, a simple GNN policy that takes the subtask graph as input and chooses the best next subtask to execute.
It aims to maximize the total reward when each subtask is assigned a scalar-valued reward.
At every time step, we assigned a higher reward to each subtask whose precondition was satisfied more \emph{recently}: \emph{i.e.}, $r_n \propto \lambda^{{\Delta t}_n}$, where $r_n$ is the reward assigned to the $n^\text{th}$ subtask, ${\Delta t}_n$ is the number of time steps elapsed since the precondition $\pfuncarg{n}$ was satisfied, and $0<\lambda < 1$ is the discount factor.
Please see~\supplementary{\Cref{supp_subsec:next_step_pred_details}} for more details.
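The recency-weighted reward assignment can be sketched as follows; GRProp itself (the GNN policy of \cite{sohn-neurips18}) is not reimplemented here, and assigning zero reward to subtasks whose preconditions are not yet satisfied is a simplifying choice made only for this illustration.
\begin{verbatim}
def recency_rewards(history, preconditions, n_subtasks, lam=0.9):
    """history: ordered list of subtask ids completed so far.
    For each subtask n, find the earliest prefix of the history at which its
    precondition becomes satisfied; assign r_n = lam ** (steps elapsed since then),
    and r_n = 0 if the precondition is still unsatisfied (a simplifying choice)."""
    T = len(history)
    rewards = [0.0] * n_subtasks
    for n in range(n_subtasks):
        for t in range(T + 1):
            if preconditions[n](set(history[:t])):   # first time the precondition holds
                rewards[n] = lam ** (T - t)          # r_n proportional to lam^{Delta t_n}
                break
    return rewards
\end{verbatim}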
\paragraph{Metric.} %
For each test video, we measure the accuracy of predicting each subtask correctly, given previous subtasks and corresponding ASR sentences.
\begin{table}[t]
\centering
\small
\setlength{\tabcolsep}{4pt}
\caption{\textbf{Next Subtask Prediction Task.} We perform the next subtask prediction on ProceL~\cite{elhamifar-iccv19}~and CrossTask dataset~\cite{zhukov-cvpr19}~and measure the accuracy (\%).
We report the average among all tasks, and task-level accuracy can be found in~\supplementary{\Cref{supp_subsec:task_level_next_step_pred_result}}.
}
\begin{tabular}{l|cc}
\toprule
Model & ProceL & CrossTask \\ \midrule
STAM~\citep{sharir-arxiv21} & 29.86 & 40.17 \\
ViViT~\citep{arnab-iccv21} & 26.98 & 41.96 \\\midrule
proScript~\cite{sakaguchi-emnlpf21} & 18.86 & 36.78 \\
MSGI~\citep{sohn-iclr20} & 17.42 & 32.31 \\
MSGI+\xspace & 26.54 & 32.72 \\
\ourMethodName-noSSP & 48.39 & 53.39 \\
\textbf{MSG$^2$~(Ours)} & \textbf{55.38} & \textbf{54.42} \\
\bottomrule
\end{tabular}
\label{tab:next_task_pred}
\end{table}
\paragraph{Results.}
\Cref{tab:next_task_pred} shows the next subtask prediction performance.
Compared to the end-to-end neural baselines, MSG$^2$~achieves 85\% and 30\% higher prediction performance in ProceL~\cite{elhamifar-iccv19} and CrossTask~\cite{zhukov-cvpr19} datasets, respectively.
In addition, we observe that end-to-end neural baselines are competitive with some of the graph generation baselines.
This indicates that the higher quality of graphs predicted by our approach is responsible for the significant performance improvement over all baselines.
\section{Graph Generation}
\label{supp_sec:ilp_detail}
\subsection{Background: Subtask Graph Inference using Layer-wise Inductive Logic Programming}
\label{supp_subsec:ilp_prelim}
For a task $\tau$ that consists of $N_\tau$ subtasks, we define the \emph{completion} vector $\smash{\mathbf{c}\in\{0, 1\}^{N_\tau} }$ and \emph{eligibility} vector $\smash{\mathbf{e}\in\{0, 1\}^{N_\tau} }$, where $c_n$ indicates whether the $n$-th subtask has been completed and $e_n$ indicates whether the $n$-th subtask is eligible (\emph{i.e.}, its precondition is satisfied).
Given $\mathcal{D}=\{(\mathbf{c}^j, \mathbf{e}^j)\}_{j=1}^{|\mathcal{D}|}$ as training data,~\citet{sohn-iclr20}~proposed an Inductive Logic Programming (ILP) algorithm which finds the subtask graph $G$ that maximizes the binary classification accuracy (\Cref{suppeq:ilp-obj}):
\begin{align}
\hat{G}
&=\operatornamewithlimits{argmax}_{G} P(\pfuncarg{G} (\mathbf{c}) = \mathbf{e} )\label{suppeq:ilp-obj}\\
&=\operatornamewithlimits{argmax}_{G} \sum_{j=1}^{|\mathcal{D}|}{ \mathbb{I}[\mathbf{e}^{j} = \pfuncarg{G}(\mathbf{c}^{j})] }\\
&=
\left(
\operatornamewithlimits{argmax}_{G_1} \sum_{j=1}^{|\mathcal{D}|}{ \mathbb{I}\left[e^{j}_1 = \pfuncarg{1}(\mathbf{c}^{j})\right] },
\ldots,
\operatornamewithlimits{argmax}_{G_{N_\tau}} \sum_{j=1}^{|\mathcal{D}|}{ \mathbb{I}\left[e^{j}_{N_\tau} = \pfuncarg{N_\tau}(\mathbf{c}^{j})\right] }
\right),\label{suppeq:ilp-indiv}
\end{align}
where $\mathbb{I}[\cdot]$ is the element-wise indicator function, $\pfuncarg{G}: \mathbf{c}\mapsto \mathbf{e}$ is the precondition function defined by the subtask graph $G$, which predicts whether each subtask is eligible (\emph{i.e.}, its precondition is satisfied) from the subtask completion vector $\mathbf{c}$, and $\pfuncarg{n}: \mathbf{c}\mapsto e_n$ is the precondition function of the $n$-th subtask defined by the precondition $G_n$.
\begin{figure*}[t]
\centering
\includegraphics[draft=false,width=0.99\linewidth]{review/figure/logic_induction_ver2.pdf}
\caption{
The inductive logic programming (ILP) module takes all the completion and eligibility vectors ($\{(\mathbf{c}^j, \mathbf{e}^j)\}_{j=1}^{|\mathcal{D}|}$) as input, and builds a binary decision tree to infer the precondition $G$.
For example, in the case of subtask \subtask{E} (the bottom row in the figure), it takes ($\{(\mathbf{c}^j, e^j[\text{\subtask{E}}])\}_{j=1}^{|\mathcal{D}|}$) as (input, output) pairs of the training dataset, and constructs a decision tree by choosing, at each step, the variable (\emph{i.e.}, the component of the completion vector $\mathbf{c}$ corresponding to a subtask) that best splits the true (\emph{i.e.}, $e[\text{\subtask{E}}]=1$) and false (\emph{i.e.}, $e[\text{\subtask{E}}]=0$)-labeled data samples.
Then, the decision tree is represented as a logic expression (\emph{i.e.}, a precondition), simplified, transformed into subtask graph form, and merged with the preconditions of the other subtasks to form the subtask graph.
The figure was adapted from~\citet{sohn-iclr20}~and modified to match our notation style.
}
\label{supp_fig:ilp}
\end{figure*}
\Cref{supp_fig:ilp} illustrates the detailed process of subtask graph inference in~\citet{sohn-iclr20} from the completion and eligibility data.
The precondition function $\pfuncarg{n}$ is modeled as a binary decision tree, where each branching node chooses the best subtask (\emph{e.g.}, subtask B in the first branching in the bottom row of~\Cref{supp_fig:ilp}), based on Gini impurity~\cite{breiman-routledge84}, to predict whether the $n$-th subtask is eligible.
Each binary decision tree constructed in this manner is then converted into a logical expression (\emph{e.g.}, $BA + B\bar{A}C$ in the bottom row of~\Cref{supp_fig:ilp}) that represents the precondition.
Finally, we build the subtask graph by consolidating the preconditions of all subtasks.
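As a concrete illustration of this per-subtask inference step, the following minimal sketch (our own illustration in Python with scikit-learn, not the authors' released code; all function and variable names are ours) fits a Gini-based decision tree on (completion, eligibility) pairs for a single subtask and prints the learned rules, whose positive leaves can be read off as a DNF precondition:
\begin{verbatim}
# Minimal sketch, assuming scikit-learn; not the authors' implementation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def infer_precondition(completions, eligibility_n, subtask_names):
    # completions: (num_samples, N) binary completion vectors c^j
    # eligibility_n: (num_samples,) binary labels e^j_n for subtask n
    tree = DecisionTreeClassifier(criterion="gini")  # Gini impurity splits
    tree.fit(completions, eligibility_n)
    # Positive leaves of the tree correspond to the terms of a DNF
    # precondition; here we simply dump the learned rules as text.
    return tree, export_text(tree, feature_names=subtask_names)

# Toy example with precondition e_E = BA + B~AC (cf. Figure above)
C = np.array([[1,1,0],[0,1,1],[1,1,1],[0,0,1],[1,0,0],[0,1,0]])
e = np.array([1, 1, 1, 0, 0, 0])
_, rules = infer_precondition(C, e, ["A", "B", "C"])
print(rules)
\end{verbatim}
The simplification and merging steps described above would then be applied to the rules extracted from each subtask's tree.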
\subsection{Subtask Graph Inference from Real-world Data}
\label{supp_subsec:ilp_ours}
\paragraph{Layer-wise Precondition Inference.}
One major problem with inferring the precondition independently for each subtask is the possibility of forming a \emph{cycle} in the resulting subtask graph, which leads to a causality paradox (\emph{i.e.}, subtask A is a precondition of subtask B and subtask B is a precondition of subtask A).
To avoid this problem, we perform precondition inference in a \emph{layer-wise} fashion similar to~\citet{sohn-iclr20}.
To this end, we first infer the layer of each subtask from the (noise reduced) subtask state labels $\{\mathbf{S}^i\}_{i=1}^{|\mathbf{V}_\tau|}$ for the task $\tau$.
\citet{sohn-iclr20}~infer the layer of each subtask by finding the minimal set of subtask completions to perfectly discriminate the eligibility of a subtask.
However, this approach is not applicable to our setting due to a lack of data points with ineligible subtasks; \emph{i.e.}, we could perfectly discriminate the eligibility simply by predicting a subtask to be \emph{always} eligible.
Instead, we propose to extract the parent-child relationship from the subtask state labels $\mathbf{S}$.
Intuitively speaking, we consider subtask $n$ to be an ancestor of subtask $m$ if subtask $n$ (almost) always precedes subtask $m$ in the videos, and we assign subtask $m$ a depth at least one greater than that of subtask $n$.
Specifically, we count the (long-term) transitions between all pairs of subtasks and compute the \emph{transition purity} as follows:
\begin{align}
\text{purity}_{n\rightarrow m} = \frac{\text{\# occurrences where subtask $n$ precedes subtask $m$ in a video}}{\text{\# occurrences where subtask $n$ and subtask $m$ appear together in a video}}.
\end{align}
We consider subtask $n$ to be an ancestor of subtask $m$ if the transition purity is larger than a threshold $\delta$ (\emph{i.e.}, $\text{purity}_{n\rightarrow m} > \delta$).
Intuitively, the noisier the data, the lower $\delta$ should be. We used $\delta=0.96$ for ProceL and $\delta=0.55$ for CrossTask in our experiments.
Note that this is only a \emph{necessary} condition and is not guaranteed to extract \emph{all} of the parent-child relationships, especially when the precondition involves an \andorgraph{OR} (\( | \)) relationship.
Thus, we use this information only for deciding the layer of each subtask and do not use it for inferring the precondition.
After each subtask is assigned to its depth, we perform precondition inference at each depth in order of increasing depth.
By definition, the subtasks at depth$=0$ do not have any precondition.
When inferring the precondition of a subtask at depth $l$, we use the completions of subtasks at depths $1, \ldots, (l-1)$.
This ensures that edges in the subtask graph are always formed from a lower depth to a higher depth, which prevents cycles.
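For concreteness, the following minimal sketch (our own simplification, not the released code) computes the pairwise transition purity from per-video subtask orderings and assigns each subtask a depth by longest-path relaxation over the resulting ancestor relation, assuming the purity threshold $\delta$ leaves the relation acyclic:
\begin{verbatim}
# Minimal sketch; variable names and the depth-relaxation step are ours.
from itertools import permutations

def assign_depths(video_orders, num_subtasks, delta):
    # video_orders: one list of subtask indices per video, in temporal order
    precede  = [[0] * num_subtasks for _ in range(num_subtasks)]
    together = [[0] * num_subtasks for _ in range(num_subtasks)]
    for order in video_orders:
        pos = {s: i for i, s in enumerate(order)}
        for n, m in permutations(pos, 2):
            together[n][m] += 1
            if pos[n] < pos[m]:            # subtask n precedes subtask m
                precede[n][m] += 1
    # n is an ancestor of m if purity_{n->m} exceeds the threshold delta
    ancestors = {m: [n for n in range(num_subtasks)
                     if n != m and together[n][m] > 0
                     and precede[n][m] / together[n][m] > delta]
                 for m in range(num_subtasks)}
    # Longest-path depth assignment (assumes the ancestor relation is acyclic)
    depth = [0] * num_subtasks
    for _ in range(num_subtasks):
        for m in range(num_subtasks):
            for n in ancestors[m]:
                depth[m] = max(depth[m], depth[n] + 1)
    return depth

# e.g., assign_depths(orders, N, delta=0.96) for ProceL
\end{verbatim}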
\paragraph{Recency Weighting.}
To further improve the graph generation, we propose to take the temporal information into account.
In fact, the conventional ILP does not need to take the time step into account since it assumes eligibility data to be available at every time step.
However, in our case, we are only given a single data point with positive eligibility per subtask per video clip, so incorporating the temporal information can be very helpful.
Motivated by this, we propose to assign a weight to each data sample according to its \textit{recency}; \emph{i.e.}, we assign a higher weight if a subtask has become eligible more recently.
We first rewrite~\Cref{eq:ilp-new}~as follows:
\begin{align}
\hat{\pfuncarg{n}}
&=\operatornamewithlimits{argmax}_{\pfuncarg{n}} \left\{P\left(e_n=1 | \pfuncarg{n}(\mathbf{c})=1\right) - \alpha C\left(\pfuncarg{n}\right)\right\}\\
&\simeq \operatornamewithlimits{argmax}_{\pfuncarg{n}}\frac{
\frac{1}{|\mathcal{D}_n|} \sum_{j=1}^{|\mathcal{D}_n|}
{
\mathbb{I}\left(
\pfuncarg{n}(\mathbf{c}^{j})=1, e^{j}_{n}=1
\right)
}
}{
\frac{1}{2^{N}} \sum_{\mathbf{c}}
{
\mathbb{I}\left(
\pfuncarg{n}(\mathbf{c})=1
\right)
}
} - \alpha C\left(\pfuncarg{n}\right)\\
&=\operatornamewithlimits{argmax}_{\pfuncarg{n}}\frac{
\frac{1}{|\mathbf{X}_\tau|} \sum_{i=1}^{|\mathbf{X}_\tau|}
{
\frac{1}{T}
\sum_{t=1}^{T}
{
\mathbb{I}
\left(
\pfuncarg{n}(\mathbf{c}^{i}_{t})=1, e^{i}_{t, n}=1
\right)
}
}
}{
\frac{1}{2^{N}} \sum_{\mathbf{c}}
{
\mathbb{I}\left(
\pfuncarg{n}(\mathbf{c})=1
\right)
}
} - \alpha C\left(\pfuncarg{n}\right)\label{suppeq:ilp-rewrite}
\end{align}
Then, we add the recency weight ${\color{blue}w_{t, n}}$ and modify~\Cref{suppeq:ilp-rewrite} as follows:
\begin{align}
\hat{\pfuncarg{n}}
&\simeq \operatornamewithlimits{argmax}_{\pfuncarg{n}}\frac{
\frac{1}{|\mathbf{X}_\tau|} \sum_{i=1}^{|\mathbf{X}_\tau|}
{
\frac{1}{T}
\sum_{t=1}^{T}
{
{\color{blue}w_{t, n}} \ \mathbb{I}
\left(
\pfuncarg{n}(\mathbf{c}^{i}_{t})=1, e^{i}_{t,n}=1
\right)
}
}
}{
\frac{1}{2^{N}} \sum_{\mathbf{c}}
{
\mathbb{I}\left(
\pfuncarg{n}(\mathbf{c})=1
\right)
}
} - \alpha C\left(\pfuncarg{n}\right)\label{suppeq:ilp-recency},
\end{align}
where
\begin{align}
w_{t, n} = \max(0.1, \lambda ^ {t_n-t}),
\end{align}
$0<\lambda<1$ is the discount factor and $t_n$ is the time step at which the precondition for subtask $n$ became satisfied.
\paragraph{Hyperparameters.}
We used $\alpha=0.2$ and $\lambda=0.7$ in our experiments.
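For concreteness, a minimal sketch of the recency-weighted numerator of \Cref{suppeq:ilp-recency} is given below (our own illustration, not the released code; the normalization term and the complexity penalty $\alpha C(\cdot)$ are omitted, and the function names are ours):
\begin{verbatim}
# Minimal sketch of the recency-weighted numerator; not the released code.
def recency_weight(t, t_n, lam=0.7, floor=0.1):
    # w_{t,n} = max(0.1, lam ** (t_n - t)), as in the equation above
    return max(floor, lam ** (t_n - t))

def weighted_hits(f_n, completions, eligibility_n, t_n, lam=0.7):
    # completions: list of completion vectors c_t over one video (t = 0..T-1)
    # eligibility_n: binary eligibility labels e_{t,n} for subtask n
    # t_n: time step at which the precondition of subtask n became satisfied
    T = len(completions)
    total = sum(recency_weight(t, t_n, lam)
                for t in range(T)
                if f_n(completions[t]) == 1 and eligibility_n[t] == 1)
    return total / T
\end{verbatim}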
\subsection{Task-level Graph Generation Results}
\label{supp_subsec:task_level_graph_generation_result}
We present graph generation metrics for each task separately in \Cref{supptab:graph_evaluation}.
\begin{table*}[ht]
\setlength{\tabcolsep}{4.2pt}
\centering
\footnotesize
\caption{\textbf{Task-level Graph Generation Result in ProceL~\cite{elhamifar-iccv19}.} Task label indexes (a-l) are identical to~\Cref{supptab:frame_evaluation}.}
\begin{tabular}{cl|cccccccccccc|c}
\toprule
Metric & Method & (a) & (b) & (c) & (d) & (e) & (f) & (g) & (h) & (i) & (j) & (k) & (l) & Avg\\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{Accuracy}}
& proScript & 54.69 & 55.56 & 62.50 & 63.54 & 53.57 & 62.50 & 58.93 & 57.14 & 57.14 & 54.17 & 55.00 & 55.21 & 57.50 \\
& MSGI & 50.00 & 55.56 & 53.12 & 55.21 & 48.81 & 52.50 & 58.93 & 55.36 & 51.79 & 58.33 & 50.00 & 61.11 & 54.23 \\
& MSGI+\xspace & 57.03 & 66.67 & 53.12 & 60.42 & 57.74 & 68.75 & 61.61 & 67.86 & 64.29 & 63.54 & 58.44 & 64.58 & 62.00 \\
& \ourMethodName-noSSP & 73.44 & 80.56 & 81.25 & 84.38 & 88.69 & 90.00 & 76.79 & 98.21 & 85.71 & 81.25 & 72.50 & 66.67 & 81.62 \\
& \textbf{MSG$^2$~(Ours)} & 87.50 & 79.17 & 90.62 & 84.38 & 83.33 & 85.00 & 67.86 & 100.00 & 87.50 & 83.33 & 82.50 & 66.67 & \textbf{83.16} \\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{SPOC}}
& proScript & 72.08 & 65.03 & 57.14 & 60.61 & 64.52 & 70.00 & 65.38 & 62.09 & 58.24 & 47.73 & 60.00 & 65.28 & 62.34 \\
& MSGI & 83.33 & 66.34 & 69.64 & 58.33 & 60.00 & 61.11 & 57.69 & 50.55 & 59.89 & 67.42 & 74.44 & 66.67 & 64.62 \\
& MSGI+\xspace & 77.92 & 70.59 & 69.64 & 62.12 & 64.76 & 82.22 & 69.23 & 84.07 & 72.53 & 65.91 & 73.33 & 76.39 & 72.39 \\
& \ourMethodName-noSSP & 87.92 & 92.48 & 91.07 & 93.18 & 86.90 & 90.00 & 82.97 & 93.41 & 89.56 & 90.15 & 77.78 & 84.72 & 88.35 \\
& \textbf{MSG$^2$~(Ours)} & 95.83 & 88.89 & 92.86 & 93.18 & 90.00 & 92.22 & 89.01 & 100.00 & 90.11 & 87.12 & 83.33 & 76.39 & \textbf{89.91} \\
\midrule
\multirow{5}{*}{\rotatebox[origin=c]{90}{Compatibility}}
& proScript & 26.20 & 9.30 & 51.27 & 30.12 & 28.70 & 52.29 & 25.74 & 47.75 & 34.12 & 15.55 & 48.03 & 76.40 & 37.12 \\
& MSGI & \multicolumn{12}{c|}{N/A} & N/A \\
& MSGI+\xspace & 71.62 & 81.62 & N/A & 67.80 & 62.18 & 73.53 & 80.94 & 88.41 & 87.51 & 73.16 & 73.54 & 76.60 & 76.08 \\
& \ourMethodName-noSSP & 98.14 & 99.14 & 99.39 & 98.78 & 100.00 & 98.88 & 94.41 & 96.71 & 100.00 & 97.40 & 95.58 & 95.93 & 97.86 \\
& \textbf{MSG$^2$~(Ours)} & 100.00 & 98.60 & 98.52 & 98.78 & 98.57 & 98.55 & 96.65 & 99.43 & 98.26 & 97.19 & 97.48 & 97.53 & \textbf{98.30} \\
\bottomrule
\end{tabular}
\label{supptab:graph_evaluation}
\end{table*}
\section{Next Step Prediction with Subtask Graph}
\label{supp_sec:next_step_pred}
\subsection{Details about Next Step Prediction with Graphs}
\label{supp_subsec:next_step_pred_details}
From the subtask label $\mathbf{s}_{t}$ and predicted subtask state $\hat{\mathbf{s}}_{t}$, we first obtain the subtask completion $\mathbf{c}_t$ by checking whether each subtask state is $\textit{completed}$.
Then, we compute the subtask eligibility $\mathbf{e}_t$ from the completion $\mathbf{c}_t$ and subtask graph $G$ as $\mathbf{e}_t = \pfuncarg{G}(\mathbf{c}_t)$.
When predicting the next step subtask, we exploit the fact that a subtask can be completed only if 1) it is eligible and 2) it is incomplete.
Thus, we compute the subtask prediction mask $\mathbf{m}_t$ as $\mathbf{m}_t = \mathbf{e}_t \odot (1 - \mathbf{c}_t)$, where $\odot$ denotes element-wise multiplication.
Lastly, among the eligible and incomplete subtasks, we assign a higher probability if a subtask has been eligible more recently: $p_{t+1}[n] \propto m_t[n] \cdot \rho ^ {\Delta t^\text{elig}_n}$,
where $p_t[n]$ is the probability that the $n$-th subtask is (or will be) completed at time $t$, $\Delta t^\text{elig}_n$ is the number of time steps elapsed since the $n$-th subtask became eligible, and $0 < \rho < 1$ is the discount factor. We used $\rho=0.9$ in the experiment.
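The procedure above can be summarized by the following minimal sketch (an illustration with our own function names and interface, not the released implementation):
\begin{verbatim}
# Minimal sketch: next-subtask scores from completion, graph preconditions,
# and eligibility recency; names and interface are ours.
import numpy as np

def next_subtask_probs(c_t, precondition_fn, dt_elig, rho=0.9):
    # c_t: (N,) binary completion vector at time t
    # precondition_fn: maps c_t to the eligibility vector e_t = f_G(c_t)
    # dt_elig: (N,) time steps elapsed since each subtask became eligible
    e_t = np.asarray(precondition_fn(c_t))
    m_t = e_t * (1 - np.asarray(c_t))            # eligible AND incomplete
    scores = m_t * rho ** np.asarray(dt_elig)    # favour recently eligible
    total = scores.sum()
    if total == 0:                               # no candidate: uniform fallback
        return np.full(len(scores), 1.0 / len(scores))
    return scores / total
\end{verbatim}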
\subsection{Training Details}
\label{supp_subsec:training_details_next_step_pred_result}
We apply our subtask graph generation method to the next subtask prediction task in~\Cref{subsec:experiment_next_step_prediction}.
For both ProceL~\cite{elhamifar-iccv19} and CrossTask~\cite{zhukov-cvpr19}, we first convert all videos to 30 fps, obtain the verb phrases, and extract CLIP features, following~\Cref{supp_subsec:training_details_subtask_state_prediction}.
We split 15\% of the data as the test set. %
During training, for all models, we randomly select a subtask and feed the data up to the selected subtask, subsampling at least three frames from each subtask region but no more than 190 frames in total.
For evaluation, we provide all previous subtasks in the dataset and sample 3 frames per subtask in an equidistant manner (without any random sampling).
All the other settings are the same as subtask state prediction.
\subsection{Task-level Next Step Prediction Results}
\label{supp_subsec:task_level_next_step_pred_result}
We share task-level next step prediction results in~\Cref{supptab:next_subtask_pred}.
All the method labels correspond to~\Cref{tab:next_task_pred}.
\begin{table*}[t]
\setlength{\tabcolsep}{3.0pt}
\centering
\footnotesize
\caption{\textbf{Task-level Next Step Prediction Results.} We perform next subtask prediction on ProceL~\cite{elhamifar-iccv19} and CrossTask~\cite{zhukov-cvpr19} datasets and measure the accuracy(\%). Task label indexes (a-l) in ProceL task are the same as~\Cref{supptab:frame_evaluation}. Task labels (i-x) in CrossTask task are (i) Add Oil to Your Car, (ii) Build Simple Floating Shelves, (iii) Change a Tire, (iv) Grill Steak, (v) Jack Up a Car, (vi) Make a Latte, (vii) Make Banana Ice Cream, (viii) Make Bread and Butter Pickles, (ix) Make French Strawberry Cake, (x) Make French Toast, (xi) Make Irish Coffee, (xii) Make Jello Shots, (xiii) Make Kerala Fish Curry, (xiv) Make Kimchi Fried Rice, (xv) Make Lemonade, (xvi) Make Meringue, (xvii) Make Pancakes and (xviii) Make Taco Salad.}
\begin{tabular}{r|l|cccccccccccc|c}
\toprule
& \textbf{Model} & (a) & (b) & (c) & (d) & (e) & (f) & (g) & (h) & (i) & (j) & (k) & (l) & Avg \\ \midrule
\multirow{7}{*}{\rotatebox[origin=c]{90}{\textbf{ProceL}}} & STAM~\cite{sharir-arxiv21} & 23.61 & 23.81 & 21.43 & 46.88 & 25.00 & 63.04 & 57.14 & 34.29 & 4.88 & 24.53 & 27.50 & 6.25 & 29.86 \\
& ViViT~\cite{arnab-iccv21} & 26.39 & 20.24 & 19.05 & 50.00 & 25.00 & 28.26 & 55.36 & 40.00 & 4.88 & 32.08 & 22.50 & 0.00 & 26.98 \\ \cline{2-15}
& proScript~\cite{sakaguchi-emnlpf21} & 15.57 & 10.34 & 37.50 & 36.17 & 10.62 & 32.76 & 21.05 & 28.06 & 13.19 & 8.33 & 12.70 & 0.00 & 18.86 \\
& MSGI~\cite{sohn-iclr20} & 10.66 & 13.10 & 32.81 & 25.53 & 7.96 & 25.86 & 14.74 & 24.46 & 13.19 & 13.54 & 7.94 & 19.23 & 17.42 \\
& MSGI+\xspace & 22.13 & 36.55 & 32.81 & 26.60 & 25.66 & 37.93 & 20.00 & 34.53 & 16.48 & 27.08 & 7.94 & 30.77 & 26.54 \\
& \ourMethodName-noSSP & 31.97 & 56.55 & 43.75 & 73.40 & 51.33 & 51.72 & 46.32 & 69.06 & 50.55 & 48.96 & 30.16 & 26.92 & 48.39 \\
& \textbf{MSG$^2$~(Ours)} & 40.16 & 51.72 & 56.25 & 73.40 & 62.83 & 68.97 & 44.21 & 69.06 & 59.34 & 48.96 & 39.68 & 50.00 & \textbf{55.38} \\
\bottomrule
\end{tabular}
\setlength{\tabcolsep}{6.0pt}
\begin{tabular}{r|l|ccccccccc|c}
& \multirow{2}{*}{\textbf{Model}} & (i) & (ii) & (iii) & (iv) & (v) & (vi) & (vii) & (viii) & (ix) & \multirow{2}{*}{Avg}\\
& & (x) & (xi) & (xii) & (xiii) & (xiv) & (xv) & (xvi) & (xvii) & (xviii) & \\ \midrule
\multirow{14}{*}{\rotatebox[origin=c]{90}{\textbf{CrossTask}}}
& \multirow{2}{*}{STAM~\cite{sharir-arxiv21}} & 30.77 & 58.82 & 48.21 & 38.82 & 27.27 & 26.98 & 34.00 & 21.31 & 15.69 & \multirow{2}{*}{40.17}\\
& & 47.91 & 78.57 & 48.94 & 34.83 & 38.60 & 36.90 & 54.93 & 42.86 & 37.63 & \\
& \multirow{2}{*}{ViViT~\cite{arnab-iccv21}} & 34.62 & 43.14 & 49.11 & 34.12 & 63.64 & 34.92 & 60.00 & 13.11 & 11.76 & \multirow{2}{*}{41.96}\\
& & 48.37 & 82.14 & 50.00 & 32.58 & 38.60 & 35.71 & 46.48 & 43.57 & 33.33 & \\\cline{2-12}
& \multirow{2}{*}{proScript~\cite{sakaguchi-emnlpf21}} & 21.37 & 50.65 & 37.93 & 20.07 & 90.62 & 33.33 & 48.15 & 22.52 & 34.12 & \multirow{2}{*}{36.78}\\
& & 33.44 & 45.54 & 33.78 & 26.36 & 33.72 & 38.89 & 42.57 & 24.49 & 24.40 & \\
& \multirow{2}{*}{MSGI~\cite{sohn-iclr20}} & 30.77 & 32.47 & 23.45 & 14.60 & 71.88 & 33.33 & 39.51 & 18.02 & 18.82 & \multirow{2}{*}{32.31}\\
& & 22.51 & 33.04 & 36.49 & 33.33 & 45.35 & 32.54 & 34.65 & 34.69 & 26.19 & \\
& \multirow{2}{*}{MSGI+\xspace} & 30.77 & 32.47 & 22.76 & 14.60 & 71.88 & 33.33 & 39.51 & 27.93 & 18.82 & \multirow{2}{*}{32.72}\\
& & 20.58 & 33.04 & 36.49 & 33.33 & 45.35 & 32.54 & 34.65 & 34.69 & 26.19 & \\
& \multirow{2}{*}{\ourMethodName-noSSP} & 67.52 & 72.73 & 64.83 & 45.99 & 90.62 & 44.74 & 48.15 & 32.43 & 37.65 & \multirow{2}{*}{53.39}\\
& & 63.99 & 50.89 & 52.03 & 47.29 & 48.84 & 36.51 & 58.42 & 63.78 & 34.52 & \\
& \multirow{2}{*}{\textbf{MSG$^2$~(Ours)}} & 67.52 & 72.73 & 68.28 & 41.24 & 90.62 & 46.49 & 48.15 & 32.43 & 37.65 & \multirow{2}{*}{\textbf{54.42}} \\
& & 63.99 & 55.36 & 57.43 & 48.06 & 48.84 & 46.03 & 58.42 & 64.80 & 31.55 & \\
\bottomrule
\end{tabular}
\label{supptab:next_subtask_pred}
\end{table*}
\subsection{Generated Graphs}
\label{supp_subsec:graph_generation}
\begin{figure}[t]
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.6\linewidth]{review/figure/gt_graph/perform_cpr.pdf}
\vspace{-10px}
\caption*{1. Human-annotated graph}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.6\linewidth]{review/figure/ours_nostate_graph/perform_cpr.pdf}
\vspace{-10px}
\caption*{2. \ourMethodName~without Subtask State Prediction}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.7\linewidth]{review/figure/ours_graph/perform_cpr.pdf}
\vspace{-10px}
\caption*{3. MSG$^2$}
\end{subfigure}
\caption{Perform CPR}
\label{supp_subfig:perform_cpr}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.75\linewidth]{review/figure/gt_graph/assemble_clarinet.pdf}
\vspace{-10px}
\caption*{1. Human-annotated graph}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.75\linewidth]{review/figure/ours_nostate_graph/assemble_clarinet.pdf}
\vspace{-10px}
\caption*{2. \ourMethodName~without Subtask State Prediction}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.75\linewidth]{review/figure/ours_graph/assemble_clarinet.pdf}
\vspace{-10px}
\caption*{3. MSG$^2$}
\end{subfigure}
\caption{Assemble Clarinet}
\label{supp_subfig:assemble_clarinet}
\end{subfigure}
\caption{\textbf{Ground Truth and Generated Graphs.} We plot the subtask graphs for (a) Perform CPR and (b) Assemble Clarinet. %
In each subfigure, the three graphs correspond to 1. Human-annotated graph, 2. \ourMethodName~without Subtask State Prediction, and 3. MSG$^2$~from the ProceL dataset, respectively.
}
\label{supp_fig:generated_graph_examples}
\end{figure}
To evaluate the performance of our graph generation approach, we hand-annotate subtask graphs for each task in the ProceL dataset, and we plot the subtask graphs for two tasks in~\Cref{supp_fig:generated_graph_examples}.
In addition to the hand-annotated graphs, we also plot the subtask graphs generated from~MSG$^2$~and~\ourMethodName~without Subtask State Prediction. %
Comparing against the ground truth, the graphs predicted by~MSG$^2$~resemble the ground-truth graphs more closely than those of~\ourMethodName~without Subtask State Prediction.
These results show that our subtask state prediction improves subtask labels in the data, which leads to more accurate generated graphs.
\section{Subtask State Prediction}
\label{supp_sec:subtask_state_prediction}
\subsection{Training Details}
\label{supp_subsec:training_details_subtask_state_prediction}
\paragraph{Preprocessing.} We first convert all videos in ProceL~\cite{elhamifar-iccv19} to 30 fps.
Then, we extract verb phrases of the form \textit{verb+(prt)+dobj+(prep+pobj)}\footnote{Parentheses denote optional components; prt: particle, dobj: direct object, prep: preposition, pobj: preposition object.} from the Automated Speech Recognition (ASR) output, using the implementation in SOPL~\cite{shen-cvpr21}.
For both vision and text data, we extract the frame and verb phrases features from the `ViT-B/32' variant of CLIP~\cite{radford-icml21}.
We extract the features with each subtask's start and end position and load the CLIP features, instead of video frames, for faster data reading.
We choose the first label of each subtask in a video and convert the temporal segment labels to \textit{not started},~\textit{in progress}, and~\textit{completed}~for the existing subtasks. %
\paragraph{Training.} We set the hidden dimension of each feedforward layer in the 16-head modality fusion attention to be the same as the CLIP embedding size of 512, and we use a single fully-connected layer to project to the status prediction.
During training, we randomly drop up to 25\% of subtasks from each task sequence and then subsample at least three frames from each subtask region, but no more than 190 frames in total.
For testing, on the other hand, we do not drop any subtask and sample 3 frames per subtask in an equidistant manner (no randomness).
For all the language features, because the length of the ASR varies from video to video, we use three consecutive sentences per frame while setting the center of the sentence closest to the selected frame, inspired by~\citet{miech-cvpr20}.
We train all models with a learning rate of 3e-4 with the Adam~\cite{kingma-iclr15} optimizer with cosine scheduling, following BERT~\cite{devlin-naacl19}.
We set the batch size as 32 and trained each model for 600 epochs, with 100 steps of warm-up.
Each model is trained on an Ubuntu 18.04 LTS machine with a single NVIDIA A100 GPU on CUDA 11.3, cuDNN 8.2, and PyTorch 1.11.
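The frame sampling scheme described above can be sketched as follows (our own simplification; the exact sampling logic in the released code may differ):
\begin{verbatim}
# Minimal sketch of training-time subtask dropping and frame subsampling.
import random

def sample_training_frames(subtask_regions, max_drop_frac=0.25,
                           frames_per_subtask=3, max_total=190):
    # subtask_regions: list of (start_frame, end_frame) tuples per subtask
    n = len(subtask_regions)
    n_drop = random.randint(0, int(max_drop_frac * n))   # drop up to 25%
    kept = sorted(random.sample(range(n), n - n_drop))
    frames = []
    for i in kept:
        start, end = subtask_regions[i]
        k = min(frames_per_subtask, end - start + 1)     # >= 3 frames if possible
        frames.extend(sorted(random.sample(range(start, end + 1), k)))
    return frames[:max_total]                            # cap at 190 frames total
\end{verbatim}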
\subsection{Ablation Study}
\label{supp_subsec:ablation_study_subtask_state_prediction}
\begin{table}[ht]
\setlength{\tabcolsep}{3.5pt}
\centering
\footnotesize
\caption{\textbf{Ablation Result on Subtask State Prediction in ProceL~\cite{elhamifar-iccv19}.} We denote video-input only case as ``VisionOnly'', video with the narration text data, but with skip connection as ``Vision + ASR'', and our subtask state prediction model as ``\textbf{Ours}'' and measure the performance of completion prediction. We denote the pretrained transformer model as ``ViT'' and ``VisualBERT'', following the pretrained model names~\cite{dosovitskiy-iclr21, li-arxiv19}. Task label indexes are (a) Assemble Clarinet (b) Change Tire (c) Perform CPR (d) Setup Chromecast (e) Change Toilet Seat (f) Make Peanut Butter and Jelly Sandwich (g) Jump Car (h) Tie Tie (i) Change iPhone Battery (j) Make Coffee, (k) Repot Plant and (l) Make Salmon Sandwich, respectively.}
\begin{tabular}{l|cccccccccccc|c}
\toprule
Subtask State Prediction Module & (a) & (b) & (c) & (d) & (e) & (f) & (g) & (h) & (i)& (j) & (k) & (l) & Avg\\
\midrule
VisionOnly & 72.41 & 74.91 & 80.21 & 93.68 & 72.94 & 91.11 & 85.58 & 93.65 & 89.52 & 88.93 & 79.45 & 70.90 & 82.77 \\
Vision + ASR & 71.83 & 74.76 & 83.12 & 89.42 & 69.05 & 91.07 & 83.80 & 90.08 & 86.36 & 87.78 & 78.16 & 72.64 & 81.51 \\
Ours (from scratch) & 63.56 & 68.93 & 74.73 & 90.36 & 66.61 & 79.02 & 71.54 & 87.76 & 69.54 & 80.47 & 72.42 & 65.97 & 74.24 \\
Ours (w/o~\textit{in progress}~state) & 70.67 & 74.42 & 82.72 & 93.28 & 72.41 & 89.95 & 85.56 & 93.94 & 88.01 & 88.85 & 79.59 & 74.63 & 82.84 \\
\textbf{Ours} & 71.25 & 74.93 & 83.51 & 94.56 & 73.42 & 89.76 & 85.94 & 94.71 & 91.14 & 89.54 & 79.44 & 75.58 & \textbf{83.65} \\\midrule
VisualBERT~\cite{li-arxiv19} & 69.09 & 73.84 & 79.63 & 69.59 & 58.45 & 91.68 & 84.01 & 90.40 & 83.18 & 83.83 & 62.07 & 67.73 & 76.13 \\
VisualBERT$\dagger$ & 59.01 & 59.99 & 72.98 & 71.07 & 64.66 & 78.93 & 77.69 & 72.33 & 75.52 & 75.33 & 63.43 & 68.10 & 69.92 \\
ViT~\cite{dosovitskiy-iclr21} & 58.69 & 59.92 & 74.11 & 68.84 & 59.69 & 79.09 & 78.29 & 70.39 & 74.90 & 78.17 & 70.03 & 67.45 & 69.96 \\
\bottomrule
\end{tabular}
\label{supptab:frame_evaluation}
\end{table}
Since a subtask sequence $\mathbf{S}$ in a video stores the start and end frame numbers, we can directly compare the completion prediction results of all subtasks by checking $\mathbb{I}[\hat{s}_t[n] \geq 1]$.
However, because the label for $\mathbf{S}$ only covers the labeled part of the sequence, we first split 15\% of the data as the validation set and hand-annotated the subtask state of all subtask labels for the videos in the validation set.
For both the ground-truth subtask graph and the subtask state labels, we asked three people to annotate manually after watching all videos of the task. We take the majority answer among the three annotators as the subtask state label, and we iterate multiple rounds of ground-truth subtask graph labeling until the label converges among the three annotators.
We performed an ablation study with VisionOnly (replacing $\mathbf{H}$ defined in~\Cref{eq:residual} with $\mathbf{F}^{e}$), with our model without the first binary cross-entropy loss in~\Cref{eq:masked_cross_entropy} (denoted as w/o~\textit{in progress}~state), and with our model with the skip connection (adding $+ \mathbf{F}^{e}$ to the right-hand side of~\Cref{eq:mha}, following the Transformer~\cite{vaswani-neurips17}; denoted as `Vision+ASR'), measuring the binary completion prediction accuracy per task against the hand-annotated labels.
In addition to this, we performed additional experiments with the pretrained ViT~\cite{dosovitskiy-iclr21} and VisualBERT~\cite{li-arxiv19} models from Huggingface~\cite{wolf-arxiv19}, replacing the T5~\cite{raffel-jmlr20} model. Specifically, we use `google/vit-large-patch16-224', `uclanlp/visualbert-vqa-coco-pre', and `google/t5-v1\_1-large' pretrained weights for this ablation.
We also tried a variation of VisualBERT (indicated as VisualBERT$^{\dagger}$) where we directly feed the frames $\mathbf{F}^{e}$ and sentences $\mathbf{X}^{e}$, instead of $\mathbf{H}$ in~\Cref{eq:residual}, as input. We set $b$ in~\Cref{eq:masked_cross_entropy} as 1 for all of the models.
The results are shown in~\Cref{supptab:frame_evaluation}. %
First of all, we can see that adding the skip connection can lead the model to overfit the training set, which is the reason behind our decision not to add a skip connection to the~MSG$^2$~model.
Also, we found inferior performance when we train without ASR or~\textit{in progress}~state information, so we perform subtask state prediction with both vision and language modality with~\textit{in progress}~for the rest of the paper. %
We additionally found that the model predicts the~\textit{completed}~state for unlabeled subtasks more reliably one subtask after the original prediction timing.
We conjecture that the end time of a subtask, which is annotated by a human annotator, is often noisy and placed at a slightly earlier time step where the subtask is still \textit{in progress}.
Thus, in graph generation we instead take the states at the next subtask's timing and treat a subtask as \textit{completed}~if $\hat{s}[n] \geq 0$.
We also trained our model from scratch instead of finetuning a pretrained T5 encoder-based model. We believe the performance gap between `Ours' and `Ours (from scratch)' shows the effectiveness of finetuning from the pretrained weights. %
In addition to this, we tested with other pretrained Transformers and found that the pretrained T5 encoder-based model performs best among all the transformer models.
Interestingly, ViT and VisualBERT were worse than T5, which seems to indicate that language priors are more useful for modeling subtask progression.
\subsection{Visualization of Subtask Completion}
\label{supp_subsec:visualization_of_subtask_completion}
We present predicted subtask completion in~\Cref{supp_fig:completion_prediction_examples}.
Our model predicts~\textit{completed}~states for missing subtasks (labeled in {\color{red}red}).
Such predicted subtask states help generate better graphs.
\begin{figure}[ht]
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.98\linewidth]{review/figure/completion_figure1.pdf}
\caption{Tie Tie}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.98\linewidth]{review/figure/completion_figure2.pdf}
\caption{Setup Chromecast}
\end{subfigure}
\caption{\textbf{Completion Prediction Examples.} We plot predicted subtask completions from (a) Tie Tie and (b) Setup Chromecast. The missing labels in the original dataset are colored red, and each colored row represents the completion of a subtask at the matched frame on top.
The presence of a horizontal bar at a particular time-step indicates the subtask is in \textit{completed}~state at the time-step.
Bars in red show subtask completion states that were inferred by our model but were missing in the dataset.
}
\label{supp_fig:completion_prediction_examples}
\end{figure}
|
{
"arxiv_id": "2302.08600",
"language": "en",
"timestamp": "2023-02-20T02:02:47",
"url": "https://arxiv.org/abs/2302.08600",
"yymm": "2302"
} |
\section{Appendix}
\subsection{Handling multiple opinions}\label{subse:apx_multiple}
Consider the case in which the set of possible opinions is $\{1,\ldots,
k\}$ for $k\ge 2$, with $1$ again the correct opinion. We collapse
opinions $2,\ldots , k$ into one class, i.e., opinion $0$ without loss
of generality. We then consider the random variable $X_t$, giving the
number of agents holding opinion $1$ at the end of round $t$. Clearly,
the configuration in which all agents hold opinion $1$ is the only
absorbing state under the voter model and
convergence time is defined as $\min\{t\ge 0: X_t = n\}$. For a generic
number $i$ of agents holding opinion $1$, we next compute the
probability $p_i$ of the transition $i\rightarrow i + 1$ (for $i\le n
- 1$) and the probability $q_i$ of the transition $i\rightarrow i - 1$
(for $i\ge z + 1$):
\begin{align*}
&\p_i = \Prob{}{X_{t+1} = i + 1 \mid X_t = i} = \frac{n-i}{n}\cdot\frac{i}{n},
\end{align*}
where the first factor in the right hand side of the above equality is
the probability of activating an agent holding an opinion other than
$1$, while the second factor is the probability that said agent in turn
copies the opinion of an agent holding the correct opinion. Similarly, we
have:
\begin{align*}
&\q_i = \Prob{}{X_{t+1} = i - 1 \mid X_t = i} = \frac{i -
z}{n}\cdot\frac{n-i}{n},
\end{align*}
with the first factor in the right hand side the probability of
sampling a non-source agent holding opinion $1$ and the second factor
the probability of this agent in turn copying the opinion of an agent
holding any opinion other than $1$.
The above argument implies that if we are interested in the time to
converge to the correct opinion, variable $X_t$ is what we are actually
interested in. On the other hand, it is immediately clear that the
evolution of $X_t$ is described by the birth-death chain
$\mathcal{C}_1$ introduced in Section \ref{sec:prelim} (again with $n$
as the only absorbing state) or by its reversible counterpart
$\mathcal{C}_2$. This in turn implies that the
analysis of Section \ref{sec:upper} seamlessly carries over to the case of
multiple opinions.
\subsection{Technical results for the Lower Bound} \label{app:lower}
\begin{proof} [Proof of Lemma~\ref{lem:lower_bound_sum_inverse}]
Consider the case that $\sum_{i=1}^N x_i \leq N$.
Using the inequality of arithmetic and geometric means, we can write
\begin{equation*}
1 \geq \frac{1}{N} \sum_{i=1}^N x_i \geq \left(\prod_{i=1}^N x_i \right)^{\frac{1}{N}}.
\end{equation*}
Therefore,
\begin{equation*}
1 \leq \left(\prod_{i=1}^N \frac{1}{x_i}\right)^{\frac{1}{N}} \leq \frac{1}{N} \sum_{i=1}^N \frac{1}{x_i},
\end{equation*}
which concludes the proof of Lemma~\ref{lem:lower_bound_sum_inverse}.
\end{proof}
\begin{proof} [Proof of Lemma~\ref{lem:lower_bound_hit}]
Let~$w_0 = 1$ and for $i \in \{1,\ldots,n\}$, let $w_i = 1/a(1:i)$. The following result is well-known (see, e.g., Eq.~(2.13) in \cite{levin2017markov}). For every~$\ell \in \{1,\ldots,n\}$,
\begin{equation*}
\Expec{\ell-1}{\tau_\ell} = \frac{1}{q_\ell w_\ell} \sum_{i=0}^{\ell-1} w_i.
\end{equation*}
Thus,
\begin{equation*}
\Expec{\ell-1}{\tau_\ell} = \frac{1}{q_\ell} \sum_{i=0}^{\ell-1} \frac{a(1:\ell)}{a(1:i)} = \frac{1}{q_\ell} \sum_{i=1}^{\ell} a(i:\ell) \geq \sum_{i=1}^{\ell} a(i:\ell).
\end{equation*}
Eventually, we can write
\begin{equation*}
\Expec{0}{\tau_n} = \sum_{\ell = 1}^n \Expec{\ell-1}{\tau_\ell}\geq \sum_{1 \leq i < j \leq n} a(i:j),
\end{equation*}
which concludes the proof of Lemma~\ref{lem:lower_bound_hit}.
\end{proof}
\section{Introduction} \label{sec:intro}
Identifying the specific algorithm employed by a biological system is extremely challenging. This quest combines empirical evidence, informed guesses, computer simulations, analyses, predictions, and verifications. One of the main difficulties when aiming to pinpoint an algorithm stems from the huge variety of possible algorithms. This is particularly true when multi-agent systems are concerned, which is the case in many biological contexts, and in particular in collective behavior \cite{sumpter2010collective,feinerman2017individual}. To reduce the space of algorithms, the scientific community often restricts attention to simple algorithms, while implicitly assuming that despite the fact that real algorithms may not necessarily be simple to describe, they could still be approximated by simple rules \cite{couzin2005effective,gelblum2015ant,fonio2016locally}.
However, even though this restriction reduces the space of algorithms significantly, the number of simple algorithms still remains extremely large.
Another direction to reduce the parameter space is to identify classes of algorithms that are less likely to be employed in a natural scenario, for example, because they are unable to efficiently handle the challenges induced by this scenario \cite{boczkowski2018limits,feinerman2017ants,bialek2012statistical}. Analyzing the limits of computation under different classes of algorithms and settings has been a main focus in the discipline of theoretical computer science.
Hence, following the framework of understanding science through the \emph{computational lens} \cite{karp2011understanding}, it appears promising to employ lower-bound techniques from computer science to biologically inspired scenarios, in order to understand which algorithms are less likely to be used, or alternatively, which parameters of the setting are essential for efficient computation \cite{guinard2021intermittent,boczkowski2018limits}. This lower-bound approach may help identify and characterize phenomena that might be harder to uncover using more traditional approaches, e.g., using
simulation-based approaches or even differential equations techniques. The downside of this approach is that it is limited to analytically tractable settings, which may be too ``clean'' to perfectly capture a realistic setting.
Taking a step in the aforementioned direction, we focus on a basic problem of information dissemination, in which few individuals have pertinent information about the environment, and other agents wish to learn this information while using constrained and random communication \cite{aspnes2009introduction,boczkowski2019minimizing,bastide2021self,DBLP:conf/podc/KormanV22}.
Such information may include, for example, knowledge about a preferred migration route \cite{franks2002information,lindauer1957communication}, the location of a food source \cite{couzin2011uninformed}, or the need to recruit agents for a particular task \cite{razin2013desert}.
In some species, specific signals are used to broadcast such information, a remarkable example being the waggle-dance of honeybees that facilitates the recruitment of hive members to visit food sources \cite{franks2002information,seeley2003consensus}.
In many other biological systems, however, it may be difficult for individuals to distinguish those who have pertinent information from others in the group \cite{couzin2005effective,razin2013desert}. Moreover,
in multiple biological contexts, animals cannot rely on distinct signals and must obtain information by merely observing the behavioral characteristics of other animals (e.g., their position in space, speed, etc.). This weak form of communication, often referred to as {\em passive communication} \cite{wilkinson1992information}, does not even require animals to deliberately send communication signals \cite{cvikel2015bats,giraldeau2018social}. A key theoretical question is
identifying minimal computational resources that are
necessary for information to be disseminated efficiently using passive communication.
Here, following the work in \cite{DBLP:conf/podc/KormanV22}, we consider an idealized model that is inspired by the following scenario. \\
\noindent{\em Animals by the pond.} Imagine an ensemble of $n$ animals gather around a pond to drink water from it. Assume that one side of the pond, either the northern or the southern side, is preferable (e.g., because the risk of having predators there is reduced). However, the preferable side is known to a few animals only. These informed animals
will therefore remain on the preferable side of the pond. The rest of the group members would like to learn which side of the pond is preferable, but they are unable to identify which animals are knowledgeable. What they are able to do instead is scan the pond and estimate the number of animals on each side of it, and then, according to some rule, move from side to side. Roughly speaking, the main result in \cite{DBLP:conf/podc/KormanV22} is that there exists a rule that allows all animals to converge on the preferable side relatively quickly, despite initially being spread arbitrarily in the pond. The suggested rule essentially says that each agent compares its current sample of the number of agents on each side with the sample obtained in the previous round; if the agent sees that more animals are on the northern (respectively, southern) side now than there were in the previous sample, then it moves to the northern (respectively, southern) side.
\iffalse
In this perspective, we investigate models of opinion formation in
which one \emph{source} agent\footnote{All results we present
seamlessly extend (up to constants) to a constant number of source
agents.} initially holds a \emph{correct}
opinion, representing the piece of information that is to be
disseminated. In
the remainder, we use labels $1,\ldots , k$ for the opinions (provided
we initially have $k$ different opinions) and we
assume that $1$ is the {\em correct} opinion. Starting from an initial
configuration in which all agents but the source are in any arbitrary
configuration, the system evolves in rounds.
In each round, one agent selected uniformly at random is activated. The activated agent is given access to the opinions of $\ell$ other agents,
sampled uniformly at random. The agent then revises its current opinion
using a decision rule that is common to all agents and arbitrary,
as long as it only depends on the opinions contained in the
sample.\footnote{Note that this implies that agents are not allowed to switch to opinions they don't see in their samples and/or to use memory to
store information other than the current opinion they possess.} Our
focus is on the minimum number of rounds required for the system to
converge on the correct opinion. It should be noted that there is a
subtle but fundamental difference with symmetric consensus, where we
are interested in consensus (or almost consensus) on \emph{any} of the
opinions initially present in the system \cite{BCN20}.
The general model oulined above is certainly simple when not simplistic
if one is interested in practical scenarios. Despite its simplicity
however, this model encompasses very popular opinion dynamics, such as
the voter model \cite{liggett2012interacting} and best-of-$k$ opinion dynamics
\cite{schoenebeck2018consensus}. Moreover, the hypothesis of
agents having access to the opinions held by a uniform sample of the
agents is consistent with well-mixed population scenarios, while
constraints on the opinion adoption strategy are consistent with
the model of \emph{passive communication},
describing artificial or natural scenarios in which agents can only infer information about
the system based on the external behaviour of other agents
\cite{wilkinson1992information,DBLP:conf/podc/KormanV22}.\footnote{Modeled by the opinions they hold in our
case.} The setting we consider is sufficiently rich but simple enough to afford
analytical investigation of a few key questions, in particular the following:
\begin{quote}
Can a system of memoryless, passively communicating agents
efficiently disseminate information or is some amount of memory
necessary?
\end{quote}
Here, ``efficient'' should be referred to symmetric consensus
scenarios, for which dynamics achieving consensus in $\mathcal{O}(n\log n)$
rounds with high probability\footnote{For a family of events
$\{\mathcal{E}_n\}_{n \in \mathbb{N}}$ we say that $\mathcal{E}_n$ occurs
\emph{with high probability} (\emph{w.h.p.}, in short) if a constant $\gamma >
0$ exists such that $\Prob{}{\mathcal{E}_n} = 1 - \mathcal{O}(n^{-\gamma})$,
for every sufficiently large~$n$.} exist, which is almost
optimal.\footnote{Note that, depending on the initial configuration, at least
$\Omega(n)$ agents need to be activated at least once to achieve
consensus to any of the opinions present in the initial system
configuration.} A maybe secondary but interesting question concerns the trade-off between the amount of information
available to activated agents and efficient dissemination:
\begin{quote}
What is the impact of the sample size $\ell$ on efficient information
dissemintation?
\end{quote}
\lb{Add a reference to comment following first question.}
\fi
Within the framework described above, we ask whether knowing anything about the previous samples is really necessary, or whether fast convergence can occur by considering the current sample alone. Roughly speaking, we show that indeed it is not possible to converge fast on the correct opinion without remembering information from previous samples. Next, we describe the model and results in a more formal manner.
\paragraph{Problem definition.}
We consider $n$ agents, each of which holds an {\em opinion} in $\{0,1,\ldots,k\}$, for some fixed integer $k$. One of these opinions is called {\em correct}. One \emph{source} agent\footnote{All results we present
seamlessly extend (up to constants) to a constant number of source
agents.} knows which opinion is correct, and hence holds this opinion throughout the execution. The goal of non-source agents is to converge on the correct opinion as fast as possible, from any initial configuration. Specifically, the process proceeds in discrete {\em rounds}. In each round, one
agent is sampled uniformly at random (u.a.r.) to be {\em activated}.
The activated agent is then given access to the opinions of $\ell$ agents, sampled u.a.r.\ (with replacement\footnote{All the results directly hold also if the sampling is without replacement.}) from the multiset of all the opinions in the population (including the source agent and the sampling agent itself), for some prescribed integer $\ell$ called the {\em sample size}.
If it is not a source,
the agent then revises its current opinion using a decision rule, which defines the {\em dynamics}, and which is used by all non-source agents. We restrict attention to dynamics that are not allowed to switch to opinions that are not contained in the samples they see.
A dynamics is called {\em memoryless} if the corresponding decision rule only depends on the opinions contained in the current sample and on the opinion of the agent taking the decision. Note that the classical \emph{voter model} and \emph{majority} dynamics are memoryless.
\subsection{Our results}
In Section \ref{sec:lower}, we prove that every memoryless dynamics
must have expected running time $\Omega(n^2)$ for every constant number
of opinions. A bit surprisingly, our analysis holds even under a stronger model in which, in
every round, the activated agent has access to the current opinions of
\emph{all} agents in the system.
For comparison, in \emph{symmetric consensus}\footnote{In the remainder, by
\emph{symmetric consensus} we mean the standard setting in which agents are
required to eventually achieve consensus on \emph{any} of the opinions that are initially
present in the system.}
convergence is achieved in $\mathcal{O}(n\log n)$ rounds with high
probability, for a large class of majority-like dynamics and using
samples of constant size \cite{schoenebeck2018consensus}. We thus have an exponential gap
between the two settings, in terms of the average number of activations
per agent.\footnote{This measure is often
referred to as the \emph{parallel time} in distributed computing
literature \cite{10.1016/j.ipl.2022.106314}.}
We further show that our lower bound is essentially tight. Interestingly,
we prove that the standard voter model
achieves almost optimal performance, despite using samples of size $\ell=1$. Specifically,
in Section \ref{sec:upper}, we
prove that the voter model converges to the correct opinion within
$\mathcal{O}(n^2\log n)$ rounds in expectation and $\mathcal{O}(n^2\log^2n)$ rounds with high probability.
This result and the lower bound of Section \ref{sec:lower} together
suggest that sample size cannot be a key ingredient in
achieving fast consensus to the correct opinion after all.
Finally, we argue that allowing agents to use a relatively small amount
of memory can drastically decrease convergence time. As mentioned
earlier in the introduction, this result has been formally proved in~\cite{DBLP:conf/podc/KormanV22} in the \textit{parallel} setting,
where at every round, all agents are activated simultaneously. We
devise a suitable adaptation of the algorithm proposed in~\cite{DBLP:conf/podc/KormanV22} to work in the sequential,
random activation model that is considered in this paper. This adaptation uses samples of size $\ell=\Theta(\log n)$ and $\Theta(\log
\log n)$ bits of local memory. Empirical evidence discussed in Section
\ref{sec:simulations_short}
suggests that its convergence time might be compatible with $n \log^{O(1)} n$. In terms of parallel time, this would imply an exponential gap between this case and the memoryless case.
\section{Discussion and Future Work} \label{sec:conc}
This work investigates the role played by memory in multi-agent systems that rely on passive communication and aim to achieve consensus on an opinion held by a few ``knowledgeable'' individuals \cite{DBLP:conf/podc/KormanV22,couzin2005effective,ayalon2021sequential}. Under the model we consider, we prove that incorporating past observations into the current decision is necessary for achieving fast convergence, even if the observations regarding the current opinion configuration are complete. The same lower bound proof can in fact be adapted to any process that is required to alternate the consensus (or semi-consensus) opinion, i.e., to let the population agree (or almost agree) on one opinion, then let it agree on the other opinion, and so forth. Such oscillatory behaviour is fundamental to sequential decision-making processes \cite{ayalon2021sequential}.
The ultimate goal of this line of research is to reflect on biological processes and conclude lower bounds on biological parameters. However, despite the generality of our model, more work must be done to obtain concrete biological conclusions.
Conducting an experiment that fully adheres to our model, or refining our results to apply to more realistic settings remains for future work. Candidate experimental settings that appear to be promising include fish schooling \cite{couzin2005effective,couzin2011uninformed}, collective sequential decision-making in ants \cite{ayalon2021sequential}, and recruitment in ants \cite{razin2013desert}.
If successful, such an outcome would be highly pioneering from a methodological perspective. Indeed, to the best of our knowledge, a concrete lower bound on a biological parameter that is achieved in an indirect manner via mathematical considerations has never been obtained.
\paragraph{\bf Acknowledgement.} The authors would like to thank Yoav Rodeh for very helpful discussions concerning the lower bound proof (Theorem~\ref{thm:lowbound}).
\section{Notations and Preliminaries} \label{sec:prelim}
We consider a system consisting of $n$ \emph{anonymous} agents.
We denote by $x_u^{(t)}$ the opinion held by agent $u$ at the end of round $t$, dropping the superscript whenever it is clear from the context. The \emph{configuration} of the
system at round $t$ is the vector $\mathbf{x^{(t)}}$ with $n$ entries, such that its
$u$'th entry is $x_u^{(t)}$.
We are interested in dynamics
that efficiently \emph{disseminate} the correct opinion. I.e., (i) they
eventually bring the system into the \emph{correct
configuration} in which all agents share the correct opinion, and
(ii) they do so in as few rounds as possible.
For brevity, we sometimes refer to the latter quantity as
\emph{convergence time}.
If $T$ is the convergence time of an execution, we denote
by $T/n$ the average number of activations per agent, a measure often referred to
as \emph{parallel time} in the distributed computing literature
\cite{10.1016/j.ipl.2022.106314}.
For ease of exposition, in the remainder we assume that opinions are binary
(i.e., they belong to $\{1, 0\}$).
We remark the following: (i) the lower bound on the convergence time given in Section \ref{sec:lower} already applies by restricting attention to the binary case, and,
(ii) it is easy to extend the analysis of the voter model given in
Section \ref{sec:upper} to the general case of $k$ opinions using
standard arguments. These are briefly summarized in Subsection
\ref{subse:apx_multiple} for the sake of completeness.
\paragraph{Memoryless dynamics.}
We consider dynamics in which, beyond being anonymous, non-source agents are
memoryless and identical.
We capture these and the general requirements outlined in Section
\ref{sec:intro} by the following
decision rule, describing the behavior of agent $u$:
\begin{enumerate}
\item $u$ is presented with a uniform
sample $S$ of size $\ell$;
\item $u$ adopts opinion $1$ with probability $\uf_{x_u}(|S|)$,
where $|S|$ denotes the number of $1$'s in sample $S$.
\end{enumerate}
Here, $\uf_{x_u}:\{0,\ldots , \ell\}\rightarrow [0, 1]$ is a function that
assigns a probability value to the number of ones that appear in $S$. In
particular, $\uf_{x_u}$ assigns probability zero to opinions with no
support in the sample, i.e., $\uf_{x_u}(0) = 0$ and $\uf_{x_u}(\ell) = 1$.\footnote{In general, dynamics not meeting this constraint
cannot enforce consensus.} Note that, in principle, $\uf_{x_u}$ may
depend on the current opinion of agent $u$.
The class of
dynamics described by the general rule above strictly
includes all memoryless algorithms that are based on random samples of fixed size including the popular dynamics, such as the voter model and a large
class of quasi-majority dynamics
\cite{liggett2012interacting,schoenebeck2018consensus,BCN20}.
\paragraph{Markov chains.}
In the remainder, we consider discrete time, discrete space Markov chains, whose state
space is represented by an
integer interval $\chi = \{z, z + 1,\ldots , n\}$, for suitable $z\ge 1$ and
$n > z$, without loss of generality (the reason for
this labeling of the states will be clear in the next sections).
Let $X_t$ be the random
variable that represents the state of the chain at round $t\ge 0$. The \emph{hitting time} \cite[Section 1]{levin2017markov} of state $x\in \chi$ is the first time the chain
is in state $x$, namely:
\[
\hit_x = \min\{t\ge 0: X_t = x\}.\footnote{Note that the hitting
time in general depends on the initial state. Following
\cite{levin2017markov}, we specify it when needed.}
\]
A basic ingredient used in this paper
is describing the dynamics we consider in terms of suitable
\emph{birth-death} chains, in which the only possible transitions from
a given state $i\ge z$ are to the states $i$, $i + 1$ (if $i\le n - 1$) and
$i - 1$ (if $i\ge z + 1$). In the remainder, we denote by
$\p_i$ and $\q_i$ respectively the probability of moving to
$i + 1$ and the probability of moving to $i - 1$ when the chain is in
state $i$. Note that $\p_n = 0$ and $\q_z = 0$. Finally, $\pr_i = 1 -
\p_i - \q_i$ denotes the probability that, when in state $i$, the chain
remains in that state in the next step.
\paragraph{A birth-death chain for memoryless dynamics.}
The global behaviour of a system with $z$ source agents holding opinion (wlog) $1$ and in which
all other agents revise their opinions according
to the general dynamics described earlier when activated, is completely described by a
birth-death chain $\mathcal{C}_1$ with state space $\{z, ... , n\}$ and the
following transition probabilities, for $i = z,\ldots, n - 1$:
\begin{align}\label{eq:p_i}
&\p_i = \Prob{}{X_{t+1} = i + 1 \mid X_t = i}\nonumber \\
& = \frac{n-i}{n}\sum_{s =
0}^{\ell}\uf_0(s)\Prob{}{|S| = s \mid X_t = i}\nonumber \\
& = \frac{n-i}{n}\Expec{i}{\uf_0(|S|)},
\end{align}
where $X_t$ is simply the number of agents holding opinion $1$ at the end of round $t$ and where, following the notation of \cite{levin2017markov}, for a random variable
$V$ defined over some Markov chain $\mathcal{C}$, we denote by
$\Expec{i}{V}$ the expectation of $V$ when $\mathcal{C}$ starts in
state $i$.
Eq. \eqref{eq:p_i} follows from the
law of total probability applied to the possible values for $|S|$ and
observing that (a) the transition $i \rightarrow i +
1$ can only occur if an agent holding opinion $0$ is selected for update, which
happens with probability $(n - i)/n$, and (b) if such an agent observes
$s$ agents with opinion $1$ in its sample, it will adopt that opinion
with probability $\uf_0(s)$.
Likewise, for $i = z + 1,\ldots , n - 1$:
\begin{equation} \label{eq:q_i}
\q_i = \Prob{}{X_{t+1} = i - 1 \mid X_t = i} = \frac{i-z}{n}(1 - \Expec{i}{\uf_1(|S|)}),
\end{equation}
with the only caveat that, differently from the previous case, the
transition $i \rightarrow i - 1$ can only occur if an agent with opinion
$1$ is selected for update and \emph{this agent is not a source}.
For this chain, in addition to $\p_n = 0$ and
$\q_z = 0$ we also have $\q_n = 0$, which follows since $\uf_1(\ell) = 1$.
We finally note the following (obvious) connections between $\mathcal{C}_1$ and any specific opinion dynamics $P$: (i) the specific birth-death chain for $P$ is obtained from $\mathcal{C}_1$ by specifying the corresponding $\uf_0$ and $\uf_1$ in Eqs. \eqref{eq:p_i} and \eqref{eq:q_i} above; and (ii) the expected convergence time of $P$ starting in a configuration with $i\ge z$ agents holding opinion 1 is simply $\Expec{i}{\hit_n}$.
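To make the correspondence concrete, the following minimal sketch (ours, for illustration only, not taken from the paper) simulates the chain $\mathcal{C}_1$ for a given pair $(\uf_0, \uf_1)$ and $z$ source agents, instantiated with the voter model ($\ell = 1$):
\begin{verbatim}
# Minimal sketch: simulate the birth-death chain C_1 for a memoryless
# dynamics given by (f0, f1); illustration only, not from the paper.
import random
from math import comb

def sample_dist(i, n, ell, s):
    # P(|S| = s) when i of n agents hold opinion 1 (sampling with replacement)
    p = i / n
    return comb(ell, s) * p ** s * (1 - p) ** (ell - s)

def step_probs(i, n, z, ell, f0, f1):
    e0 = sum(f0(s) * sample_dist(i, n, ell, s) for s in range(ell + 1))
    e1 = sum(f1(s) * sample_dist(i, n, ell, s) for s in range(ell + 1))
    p_i = (n - i) / n * e0          # cf. the expression for p_i above
    q_i = (i - z) / n * (1 - e1)    # cf. the expression for q_i above
    return p_i, q_i

def convergence_time(n, z, ell, f0, f1, i0):
    i, t = i0, 0
    while i < n:
        p_i, q_i = step_probs(i, n, z, ell, f0, f1)
        r = random.random()
        i += 1 if r < p_i else (-1 if r < p_i + q_i else 0)
        t += 1
    return t

# Voter model: ell = 1, copy whatever the single sampled agent holds
voter = lambda s: float(s)
print(convergence_time(n=200, z=1, ell=1, f0=voter, f1=voter, i0=1))
\end{verbatim}
Plugging in other choices of $(\uf_0, \uf_1)$, e.g., majority-like rules, yields the corresponding chain in exactly the same way.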
\subsection{Previous work} \label{sec:related}
The problem we consider spans a number of areas of potential interest.
Disseminating
information from a small subset of agents to the larger population is a
key primitive in many biological, social or artificial systems.
Not surprisingly, dynamics/protocols taking up this challenge have been investigated for
a long time across several communities, often using different
nomenclatures, so that terms such as ``epidemics''
\cite{demers1987epidemic}, ``rumor spreading''
\cite{karp2000randomized}
may refer to the same or similar problems depending on context.
The corresponding literature is vast and providing an exhaustive review is
infeasible here. In the following paragraphs, we discuss previous
contributions that most closely relate to the present work.
\paragraph{Information dissemination in MAS with limited communication.}
Dissemination is especially difficult when communication is limited
and/or when the environment is noisy or unpredictable.
For this reason, a line of recent work in distributed computing focuses
on designing robust protocols, which are tolerant to faults and/or
require minimal assumptions on the communication patterns.
An effective theoretical framework to address these challenges is that
of self-stabilization, in which problems related to the
scenario we consider have been investigated, such as self-stabilizing clock synchronization or
majority computations \cite{aspnes2009introduction,ben2008fast,boczkowski2019minimizing}.
In general however, these models make few assumptions about memory
and/or communication capabilities and they rarely fit the framework of passive communication.
Extending the self-stabilization framework to
multi-agent systems arising from biological distributed systems
\cite{giraldeau2018social,angluin2004computation} has been the focus of
recent work, with interesting preliminary results
\cite{DBLP:conf/podc/KormanV22} discussed earlier in the introduction.
\paragraph{Opinion dynamics.}
Opinion dynamics are mathematical models that have been extensively
used to investigate processes of opinion formation resulting in stable
consensus and/or clustering equilibria in multi-agent systems
\cite{BCN20,CHK18,OZ21}.
One of the most popular opinion dynamics is the voter model, introduced
to study spatial conflicts between species in biology
and in interacting particle systems \cite{CS73,HL75}.
The investigation of majority update rules originates
from the study of consensus processes in spin systems
\cite{KL03}. Over the last decades, several variants of the basic majority
dynamics have been studied \cite{BCN20,BHKLRS22,DGMSS11,MNT14}.
The recent past has witnessed increasing interest in biased variants
of opinion dynamics, modelling multi-agent systems in which agents may
exhibit a bias towards one or a subset of the opinions, for example
reflecting the appeal represented by the diffusion of
an innovative technology in a social system. This general problem has been investigated
under a number of models \cite{ABCPR20,BHKLRS22,CMQR21,LGP22}.
In general, the focus of this line of work is
different from ours, mostly being on the sometimes complex interplay between bias and
convergence to an equilibrium, possibly represented by global adoption
of one of the opinions. In contrast, our focus is on how quickly
dynamics can converge to the (unknown) correct opinion,
i.e., how fast a piece of information can be disseminated within a
system of anonymous and passive agents that can infer the
``correct'' opinion only by accessing random samples of the opinions held
by other agents.\footnote{For reference, it is easy to
verify that majority or best-of-$k$ majority rules
\cite{schoenebeck2018consensus} (which have
frequently been considered in the above literature)
in general fail to complete the
dissemination task we consider.}
\paragraph{Consensus in the presence of zealot agents.} A large body of
work considers opinion dynamics in the presence of zealot agents, i.e.,
agents (generally holding heterogeneous opinions) that
never depart from their initial opinion~\cite{DCN22,Ma15,MTB20,YOASS13}
and may try to influence the rest of the agent population. In this case, the
process induced by a given dynamics can result in equilibria
characterized by multiple opinions. The main focus of this body of work is investigating
the impact of the number of zealots, their positions in the network and
the topology of the network itself on such equilibria
\cite{FE14,Ma15,MTB20,YOASS13}, especially when the social goal of the
system may be achieving self-stabilizing ``almost-consensus'' on
opinions that are different from those supported by the zealots.
Again, the focus of the present work is considerably different, so that
previous results on consensus in the presence of zealots do not carry
over, at least to the best of our knowledge.
\section{Faster Dissemination with Memory} \label{sec:simulations_short}
In this section, we give experimental evidence suggesting that dynamics using a modest amount of memory can achieve consensus in an almost linear number of rounds. When compared to memory-less dynamics, this represents an exponential gap (following the results of Section~\ref{sec:lower}).
The dynamics that we use is derived from the algorithm introduced in \cite{DBLP:conf/podc/KormanV22} and is described in the next Subsection.
\subsection{``Follow the trend'': our candidate approach} The dynamics that we run in the simulations is derived from the algorithm of \cite{DBLP:conf/podc/KormanV22}, and uses a sample size of $\ell = 10 \, \log n$.
Each time an agent is activated, it decrements a countdown by 1. When the countdown reaches 0, the corresponding activation is said to be \textit{busy}.
On a busy activation,
the agent compares the number of opinions equal to 1 that it observes, to the number observed during the last busy activation.
\begin{itemize}
\item If the current sample contains more 1's, then the agent adopts the opinion 1.
\item Conversely, if it contains fewer 1's, then it adopts the opinion 0.
\item If the current sample contains exactly as many 1's as the previous sample, the agent remains with the same opinion.
\end{itemize}
At the end of a busy activation, the agent resets its countdown to~$\ell$ (equal to the sample size) -- so that there is exactly one busy activation every $\ell$ activations. In addition, the agent memorizes the number of 1's that it observed, for the sake of performing the next busy activation.
Overall, each agent needs to store two integers between $0$ and~$\ell$ (the countdown and the number of opinions equal to 1 observed during the last busy activation), so the dynamics requires $2\log(\ell) = \Theta(\log \log n)$ bits of memory.
The dynamics is described formally in Algorithm~\ref{algo:sequential_follow_the_trend}.
\begin{algorithm}[!ht]
\SetKwInOut{Input}{Input}
\caption{Follow the Trend \label{algo:sequential_follow_the_trend}}
\DontPrintSemicolon
{\bf Sample size:} $\ell = 10 \log n$ \;
{\bf Memory:} ${\tt previousSample},{\tt countdown} \in \{ 0,\ldots,\ell \}$ \;
\Input{$k$, number of ones in a sample}
\BlankLine
\uIf{${\tt countdown} = 0$} {
\;
\uIf{$k < {\tt previousSample}$}{
$x_u \leftarrow 0$ \;
}
\ElseIf{$k > {\tt previousSample}$}{
$x_u \leftarrow 1$ \;
}\;
${\tt previousSample} \leftarrow k$ \;
${\tt countdown} \leftarrow \ell$ \;
} \Else {
${\tt countdown} \leftarrow {\tt countdown} - 1$ \;
}
\end{algorithm}
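For completeness, the following Python sketch simulates the whole population under Algorithm~\ref{algo:sequential_follow_the_trend}; the initialisation of the per-agent memory (which is arbitrary in a self-stabilizing setting) and the with-replacement sampling are assumptions made only for illustration.
\begin{verbatim}
import math
import random

def follow_the_trend(n, z=1, max_activations=10**8):
    # z source agents (indices 0..z-1) hold the correct opinion 1;
    # returns the number of parallel rounds to consensus on 1, or None.
    ell = max(1, int(10 * math.log(n)))
    opinions = [random.randrange(2) for _ in range(n)]
    for u in range(z):
        opinions[u] = 1
    prev = [random.randrange(ell + 1) for _ in range(n)]   # arbitrary memory
    countdown = [random.randrange(ell + 1) for _ in range(n)]
    t = 0
    while sum(opinions) < n and t < max_activations:
        t += 1
        u = random.randrange(n)
        if u < z:                        # sources never update
            continue
        if countdown[u] == 0:            # busy activation
            k = sum(opinions[random.randrange(n)] for _ in range(ell))
            if k < prev[u]:
                opinions[u] = 0
            elif k > prev[u]:
                opinions[u] = 1
            prev[u] = k
            countdown[u] = ell
        else:
            countdown[u] -= 1
    return t / n if sum(opinions) == n else None
\end{verbatim}
One parallel round corresponds to $n$ activations, which is why the returned count is divided by $n$.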
\subsection{Experimental results}
Simulations were performed for~$n=2^i$, where $i \in \{3,\ldots,10\}$ for the Voter model, and $i \in \{3,\ldots,17\}$ for Algorithm~\ref{algo:sequential_follow_the_trend}, and repeated $100$ times each.
Every population contains a single source agent ($z=1$).
In self-stabilizing settings, it is not clear what are the worst initial configurations for a given dynamics. Here, we looked at two different ones:
\begin{itemize}
\item a configuration in which all opinions (including the one of the source agent) are independently and uniformly distributed in~$\{0,1\}$,
\item a configuration in which the source agent holds opinion~$0$, while all other agents hold opinion~$1$.
\end{itemize}
We compare our candidate dynamics experimentally to the voter model.
Results are summed up in Figure~\ref{fig:simulations_memory_short}, in terms of parallel rounds (one parallel round corresponds to~$n$ activations).
They suggest that the convergence of our candidate dynamics takes on the order of $\mathrm{polylog}(n)$ parallel rounds.
In terms of parallel time, this represents an exponential gap when compared to the lower bound in Theorem \ref{thm:lowbound} established for memoryless dynamics.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\textwidth]{resource/figure1.png}
\caption{\em
{\bf ``Follow the Trend'' versus the voter model.}
Average convergence time (in parallel rounds) is depicted for different values of~$n$, over 100 iterations each, for $z=1$ source agent.
Blue lines with circular markers correspond to our candidate dynamics, which is an adaptation of the ``follow the trend'' algorithm from \cite{DBLP:conf/podc/KormanV22}.
Orange lines with triangular markers correspond to the voter model.
Solid lines depict initial configurations in which all opinions are drawn uniformly at random from~$\{0,1\}$.
Dotted lines depict initial configurations in which all opinions are taken to be different from the opinion of the source agent.
}
\label{fig:simulations_memory_short}
\end{figure}
\section{The Voter Model is (Almost) Optimal} \label{sec:upper}
The voter model
is the popular dynamics in which the random agent $v$, activated
at round $t$, pulls another agent $u \in V$ u.a.r. and updates its opinion to the
opinion of $u$.
In this section, we prove that this dynamics achieves consensus within
$\mathcal{O}(n^2 \log n)$ rounds in expectation. We prove the result for $z = 1$,
noting that the upper bound can only improve for $z > 1$. Without loss of generality, we assume that 1 is the correct opinion.
\paragraph{The modified chain $\mathcal{C}_2$.}
In principle, we could study convergence of the voter model using the
chain $\mathcal{C}_1$ introduced in Section \ref{sec:prelim} and
used to prove the results of Section \ref{sec:lower}.
Unfortunately, $\mathcal{C}_1$ has one absorbing state (the
state $n$ corresponding to consensus), hence
it is not reversible, so that we cannot leverage
known properties of reversible birth-death chains \cite[Section
2.5]{levin2017markov} that would simplify the proof. Note however that
we are interested in $\hit_n$, the
number of rounds to reach state $n$ under the voter model. To this
purpose, it is possible to
consider a second chain $\mathcal{C}_2$ that is almost identical to
$\mathcal{C}_1$ but reversible. In particular, the transition probabilities
$\p_i$ and $\q_i$ of $\mathcal{C}_2$ are the same as in $\mathcal{C}_1$,
for $i = z,\ldots, n-1$. Moreover, we have $\p_n = 0$ (as in
$\mathcal{C}_1$) but $\q_n = 1$.\footnote{Setting $\q_n = 1$ is only
for the sake of simplicity; any positive value would do.} Obviously, for
any initial state $i\le n - 1$, $\hit_n$ has exactly the same
distribution in $\mathcal{C}_1$ and $\mathcal{C}_2$. For this reason,
in the remainder of this section we consider the chain $\mathcal{C}_2$, unless
otherwise stated.
\begin{theorem}\label{thm:uppbound}
For $z = 1$, the voter model achieves consensus to opinion $1$ within $\mathcal{O}(n^2
\log n)$ rounds in expectation and within $\mathcal{O}\left(n^2\log
n\log\frac{1}{\delta}\right)$ rounds with probability at least $1 - \delta$, for
$0 < \delta < 1$.
\end{theorem}
\subsection{Proof of Theorem \ref{thm:uppbound}}
We first compute the general expression for $\Expec{z}{\hit_n}$, i.e., the
expected time to reach state $n$ (thus, consensus) in $\mathcal{C}_2$ when the
initial state is $z$, corresponding to the system starting in a state in which
only the source agents hold opinion $1$. We then give a specific upper bound
when $z = 1$. First of all, we recall that, for $z$ source agents we have that
$\Expec{z}{\hit_n} = \sum_{k = z + 1}^n\Expec{k-1}{\hit_k}$. Considering the
general expressions of the $\p_i$'s and $\q_i$'s in~Eq.~\eqref{eq:p_i}
and~Eq.~\eqref{eq:q_i}, we soon observe that for the voter model $g_0 = g_1 = g$,
since the output does not depend on the opinion of the agent, and
$\Expec{i}{g(|S|)} = i / n$ whenever the number of agents with opinion $1$ in the
system is $i$. Hence for $\mathcal{C}_2$ we have
\begin{equation}\label{eq:voter_transition_probs}
\begin{aligned}
p_i & =
\left\{
\begin{array}{cl}
\frac{(n-i)i}{n^2}, & \text{ for } i = z,\ldots , n-1\\[1mm]
0, & \text{ for } i = n
\end{array}
\right.\\
q_i & =
\left\{
\begin{array}{cl}
0, & \text{ for } i = z \\[1mm]
\frac{(n-i)(i-z)}{n^2}, & \text{ for } i = z + 1, \ldots, n - 1\\[1mm]
1, & \text{ for } i = n.
\end{array}
\right.
\end{aligned}
\end{equation}
The proof now proceeds along the following steps.
\paragraph{General expression for $\Expec{k-1}{\hit_k}$.}
It is not difficult to see that
\begin{equation}\label{eq:one_step_expect}
\Expec{k-1}{\hit_k} = \frac{1}{\q_kw_k}\sum_{j=z}^{k-1}w_j\,,
\end{equation}
where $w_z = 1$ and $w_k = \prod_{i = z+1}^k\frac{\p_{i-1}}{\q_i}$, for $k = z
+ 1, \ldots, n$. Indeed, the $w_k$'s satisfy the detailed balance conditions
$\p_{k-1}w_{k-1} = \q_k w_k$ for $k = z+1,\ldots, n$, since
\begin{align*}
\p_{k-1}w_{k-1}
& = \p_{k-1}\prod_{i = z+1}^{k-1}\frac{\p_{i-1}}{\q_i} \\
& = \p_{k-1}\frac{\q_k}{\p_{k-1}}\prod_{i = z+1}^k\frac{\p_{i-1}}{\q_i}
= \q_kw_k,
\end{align*}
and~Eq.~\eqref{eq:one_step_expect} follows proceeding as in~\cite[Section
2.5]{levin2017markov}.
\paragraph{Computing $\Expec{k-1}{\hit_k}$ for $\mathcal{C}_2$.}
First of all, considering the expressions of $\p_i$ and $\q_i$
in~Eq.~\eqref{eq:voter_transition_probs}, for $k = z + 1,\ldots , n - 1$ we have
\begin{align*}
w_k & = \prod_{i=z+1}^{k}\frac{(n-i+1)(i-1)}{(i-z)(n-i)} \\
& = \prod_{i=z+1}^{k}\frac{n-i+1}{n-i}\cdot\prod_{i=z+1}^{k}\frac{i-1}{i-z}
= \frac{n-z}{n-k} \cdot\prod_{i=z+1}^{k}\frac{i-1}{i-z}.
\end{align*}
Hence
\[
w_k =
\left\{
\begin{array}{cl}
\frac{n-z}{n-k}f(k), & \; \mbox{ for } k = z + 1,\ldots , n - 1 \\[1mm]
\frac{(n-z)(n-1)}{n^2}f(n-1), & \; \mbox{ for } k = n
\end{array}
\right.
\]
where $f(k) = \prod_{i=z+1}^{k}\frac{i-1}{i-z}$.
\paragraph{The case $z = 1$.}
In this case, the formulas above simplify and, for $k =
z+1,\ldots , n-1$, we have
\[
\Expec{k-1}{\hit_k}
= \frac{n^2}{(k-1)f(k)}\sum_{j=1}^{k-1}\frac{f(j)}{n-j}
= \frac{n^2}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j},
\]
where the last equality follows from the fact that $f(z) = f(z+1) = \cdots
= f(k) = 1$, whenever $z = 1$. Moreover, for $k = n$ we have
\begin{align*}
\Expec{n-1}{\hit_n} & = \frac{1}{\q_nw_n}\sum_{j=1}^{n-1}w_j
= \left(\frac{n}{n-1}\right)^2 \, \sum_{j=1}^{n-1}\frac{n-1}{n-j} \\
& = \frac{n^2}{n-1}H_{n-1} = \mathcal{O}(n\log n),
\end{align*}
where $H_{k}$ denotes the $k$-th harmonic number.
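As a quick sanity check of the last expression, note that for $n = 2$ and $z = 1$ the only non-absorbing state is $1$, from which the chain moves to state $2$ with probability $\p_1 = 1/4$ at every round (and never moves down, since $\q_1 = 0$); hence $\Expec{1}{\hit_2} = 4$, which indeed equals $\frac{n^2}{n-1}H_{n-1}$.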
Hence, for $z = 1$ we have
\begin{align}\label{eq:expected_ub}
& \Expec{1}{\hit_n} = \sum_{k=2}^n\Expec{k-1}{\hit_k} \nonumber \\
& = n^2 \sum_{k=2}^{n-1}\frac{1}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j} +
\mathcal{O}(n\log n),
\end{align}
where in the second equality we took into account that $\Expec{n-1}{\hit_n} = \mathcal{O}(n\log n)$.
Finally, it is easy to see that
\begin{equation}\label{eq:double_harmonic}
\sum_{k=2}^{n-1}\frac{1}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j} = \mathcal{O}(\log n)
\end{equation}
Indeed, if we split the sum at $\lfloor n/2 \rfloor$, for $k \leqslant \lfloor
n/2 \rfloor$ we have
\begin{equation}\label{eq:dh_1}
\sum_{k=2}^{\lfloor n/2 \rfloor}\frac{1}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j}
\leqslant \sum_{k=2}^{\lfloor n/2 \rfloor}\frac{1}{k-1}\sum_{j=1}^{k-1}\frac{2}{n}
= \mathcal{O}(1)
\end{equation}
and for $k > \lfloor n/2 \rfloor$ we have
\begin{align}\label{eq:dh_2}
& \sum_{k = \lfloor n/2 \rfloor + 1}^{n-1}\frac{1}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j}
\leqslant \sum_{k = \lfloor n/2 \rfloor + 1}^{n-1} \frac{2}{n}\sum_{j=0}^{n-1}\frac{1}{n-j} \nonumber \\
& = \sum_{k = \lfloor n/2 \rfloor + 1}^{n-1} \frac{2}{n} H_n = \mathcal{O}(\log n)\,.
\end{align}
From~Eqs. \eqref{eq:dh_1} and~\eqref{eq:dh_2} we get~Eq. \eqref{eq:double_harmonic}, and
the first part of the claim follows by using in~Eq. \eqref{eq:expected_ub} the
bound in~Eq. \eqref{eq:double_harmonic}, the residual $\mathcal{O}(n\log n)$ term being dominated by the resulting $\mathcal{O}(n^2\log n)$ bound.
To prove the second part of the claim, we use a standard argument. Consider
$\lceil \log \frac{1}{\delta}\rceil$ consecutive time intervals, each consisting
of $s = 2 \lceil\Expec{1}{\hit_n}\rceil = \mathcal{O}(n^2 \log n)$ consecutive rounds.
For $i = 1,\ldots , \lceil \log \frac{1}{\delta}\rceil$, if the chain did not reach state $n$ in any of the
first $i - 1$ intervals, then the probability that the chain does not reach
state $n$ in the $i$-th interval is at most $1/2$ by Markov's inequality. Hence,
the probability that the chain does not reach state $n$ in any of the intervals
is at most $\left(1/2\right)^{\log (1/\delta)} = \delta$.
\subsection{Handling multiple opinions}\label{subse:apx_multiple}
Consider the case in which the set of possible opinions is $\{1,\ldots,
k\}$ for $k\ge 2$, with $1$ again the correct opinion. We collapse
opinions $2,\ldots , k$ into one class, i.e., opinion $0$ without loss
of generality. We then consider the random variable $X_t$, giving the
number of agents holding opinion $1$ at the end of round $t$. Clearly,
the configuration in which all agents hold opinion $1$ is the only
absorbing state under the voter model and
convergence time is defined as $\min\{t\ge 0: X_t = n\}$. For a generic
number $i$ of agents holding opinion $1$, we next compute the
probability $p_i$ of the transition $i\rightarrow i + 1$ (for $i\le n
- 1$) and the probability $q_i$ of the transition $i\rightarrow i - 1$
(for $i\ge z + 1$):
\begin{align*}
&\p_i = \Prob{}{X_{t+1} = i + 1 \mid X_t = i} = \frac{n-i}{n}\cdot\frac{i}{n},
\end{align*}
where the first factor in the right hand side of the above equality is
the probability of activating an agent holding an opinion other than
$1$, while the second factor is the probability that said agent in turn
copies the opinion of an agent holding the correct opinion. Similarly, we
have:
\begin{align*}
&\q_i = \Prob{}{X_{t+1} = i - 1 \mid X_t = i} = \frac{i -
z}{n}\cdot\frac{n-i}{n},
\end{align*}
with the first factor in the right hand side the probability of
activating a non-source agent holding opinion $1$ and the second factor
the probability of this agent in turn copying the opinion of an agent
holding any opinion other than $1$.
The above argument shows that, if we are interested in the time to
converge to the correct opinion, the variable $X_t$ is all we need to
track. On the other hand, it is immediately clear that the
evolution of $X_t$ is described by the birth-death chain
$\mathcal{C}_1$ introduced in Section \ref{sec:prelim} (again with $n$
as the only absorbing state) or by its reversible counterpart
$\mathcal{C}_2$. This in turn implies that the
analysis of Section \ref{sec:upper} seamlessly carries over to the case of
multiple opinions.
\section{The Upper Bound} \label{sec:upper}
As defined in Section \ref{sec:prelim}, the Voter Model is the popular dynamics in which the random node $v$, activated
at round $t$, pulls u.a.r. a random node $u \in V$ and updates its opinion to the opinion of $u$.
In this section, we prove that this dynamics converges within
$\Theta(n^2 \log n)$ rounds in expectation. We prove the result for $z
= 1$, noting that the upper bound can only improve for $z > 1$.
\begin{theorem}\label{thm:uppbound}
If $z = 1$, the Voter Model achieves consensus to opinion $1$ within
$2n^2H_{n-1} + \mathcal{O}(n^2)$ steps\footnote{$H_{\ell}$ denotes the
$\ell$-th harmonic number.} in expectation and within $\mathcal{O}\left(n^2\log
n\log\frac{1}{\delta}\right)$ steps with probability at least $1 - \delta$,
for $0 < \delta < 1$.
\end{theorem}
\begin{proof}
We first compute the general expression for $\Expec{z}{\hit_n}$, i.e., the
expected time to reach state $n$ (thus, consensus) in $\mathcal{C}_2$
when the initial state is $z$, corresponding to the system starting in
a state in which only the seed nodes hold opinion $1$. We then give a
specific upper bound when $z = 1$. First of all, we
note that, for $z$ seed nodes we have:
\[
\Expec{z}{\hit_n} = \sum_{k = z + 1}^n\Expec{k-1}{\hit_k}.
\]
Considering the general expressions for the $\p_i$'s and
$\q_i$'s for the birth-death chains we consider, we soon observe that, for the Voter model,
$g_0 = g_1 = g$ (since the output does not depend on the opinion of the agent), with
$\Expec{i}{g(|S|)} = \frac{i}{n}$ whenever the
number of nodes with opinion $1$ in the system is $i$. Hence for
$\mathcal{C}_2$ we have:
\begin{align*}
&\p_n = 0,\ \p_i = \frac{(n-i)i}{n^2}, \text{ for } i =
z,\ldots , n-1\\
&\q_z = 0,\ \q_n = 1,\ \q_i = \frac{(n-i)(i-z)}{n^2}\text{, for } i = z + 1,\ldots ,
n - 1
\end{align*}
The proof now proceeds along the following steps.
\paragraph{General expression for $\Expec{k-1}{\hit_k}$.}
For this first part, we proceed along the lines of \cite[Section
2.5]{levin2017markov}. In a similar fashion, we define $w_z = 1$ and,
for $k = z + 1,\ldots , n$:
\[
w_k = \prod_{i = z+1}^k\frac{\p_{i-1}}{\q_i}.
\]
Let $W = \sum_{j=z}^nw_j$ and $\pi_k = w_k/W$ for $k = z,\ldots , n$.
Then $\{\pi_k\}_{k=z}^n$ is a probability distribution over the state
space $\chi$ of $\mathcal{C}_2$. Moreover, proceeding like in \cite[Section
2.5]{levin2017markov}, it is easy to see that the $w_k$'s (and thus,
the $\pi_k$'s) satisfy the detailed balance conditions for $k =
z+1,\ldots, n$, namely:
\[
\p_{k-1}w_{k-1} = \q_kw_k.
\]
To see this, note that, for $k\ge z+1$:
\[
\p_{k-1}w_{k-1} = \p_{k-1}\prod_{i =
z+1}^{k-1}\frac{\p_{i-1}}{\q_i} =
\p_{k-1}\frac{\q_k}{\p_{k-1}}\prod_{i = z+1}^k\frac{\p_{i-1}}{\q_i}
= \q_kw_k.
\]
Hence, \cite[Proposition 1.20]{levin2017markov} implies that
$\{\pi_k\}_{k=z}^n$ is also a stationary distribution for
$\mathcal{C}_2$.
Proceeding as in \cite[Section 2.5]{levin2017markov}, we then conclude
that
\begin{equation}
\Expec{k-1}{\hit_k} = \frac{1}{\q_kw_k}\sum_{j=z}^{k-1}w_j.
\end{equation}
\paragraph{Computing $\Expec{k-1}{\hit_k}$ for $\mathcal{C}_2$.}
First of all, considering the expressions of $\p_i$ and $\q_i$ for
$\mathcal{C_2}$ we have, for $k = z + 1,\ldots , n - 1$:
\[
w_k = \prod_{i=z+1}^{k}\frac{(n-i+1)(i-1)}{(i-z)(n-i)} =
\prod_{i=z+1}^{k}\frac{n-i+1}{n-i}\cdot\prod_{i=z+1}^{k}\frac{i-1}{i-z}.
\]
For the first term we have:
\[
\prod_{i=z+1}^{k}\frac{n-i+1}{n-i} = \frac{n-z}{n-k},
\]
which follows by observing that this is a telescopic product in which
all terms but the numerator of the first and the denominator of the
last cancel out. As a result:
\[
w_k = \frac{n-z}{n-k}f(k),
\]
where $f(k) = \prod_{i=z+1}^{k}\frac{i-1}{i-z}$. Finally, for $w_n$ we
have:
\[
w_n = w_{n-1}\frac{\p_{n-1}}{\q_n} = \frac{(n-z)(n-1)}{n^2}f(n-1),
\]
where the last equality follows since $\q_n = 1$ and by replacing the
expression of $\p_{n-1}$ computed earlier.
\paragraph{The case $z = 1$.}
In this case, the formulas above simplify and we have, for $k =
z+1,\ldots , n-1$:
\begin{align*}
&\Expec{k-1}{\hit_k} = \frac{1}{\q_kw_k}\sum_{j=1}^{k-1}w_j =
\frac{n^2}{(k-1)f(k)}\sum_{j=1}^{k-1}\frac{f(j)}{n-j}\\
& = \frac{n^2}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j},
\end{align*}
where the first equality follows by replacing $\q_k$ and $w_z,\ldots ,
w_k$ with their expressions for $\mathcal{C}_2$ computed above, while
the second equality follows by observing that $f(z) = f(z+1) = \cdots =
f(k) = 1$, whenever $z = 1$. Moreover, for $k = n$ we have:
\[
\Expec{n-1}{\hit_n} = \frac{1}{\q_nw_n}\sum_{j=1}^{n-1}w_j =
\frac{n^2}{n-1}H_{n-1} = \mathcal{O}(n\log n),
\]
where $H_{\ell}$ denotes the $\ell$-th harmonic number. In the
derivation above, the second equality follows since for $z = 1$: i) $\frac{1}{\q_nw_n}
= \left(\frac{n}{n-1}\right)^2$ and ii) we have:
\[
\sum_{j=z}^{n-1}w_j = \sum_{j=1}^{n-1}\frac{n-1}{n-j} =
(n-1)H_{n-1}.
\]
We therefore have, for $z = 1$:
\begin{align*}
&\Expec{1}{\hit_n} = \sum_{k=2}^n\Expec{k-1}{\hit_k} =
\sum_{k=2}^{n-1}\frac{n^2}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j} +
\mathcal{O}(n\log n),
\end{align*}
where in the second equality we took into account that
$\Expec{n-1}{\hit_n} = \mathcal{O}(n\log n)$. We further write the first term
of the right hand side as:
\begin{align*}
&\sum_{k=2}^{n-1}\frac{n^2}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j}\\
& = \sum_{k=2}^{\left\lfloor\frac{n}{2}\right\rfloor}\frac{n^2}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j}
+ \sum_{k=\left\lfloor\frac{n}{2}\right\rfloor +
1}^{n-1}\frac{n^2}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j}.
\end{align*}
When $k\le\left\lfloor\frac{n}{2}\right\rfloor$ we have $1/(n-j) <
2/n$ for every $j\le k - 1$, whence
\begin{align*}
&\sum_{k=2}^{\left\lfloor\frac{n}{2}\right\rfloor}\frac{n^2}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j} <
\sum_{k=2}^{\left\lfloor\frac{n}{2}\right\rfloor}\frac{n^2}{k-1}\cdot\frac{2k}{n} =
2n\sum_{k=2}^{\left\lfloor\frac{n}{2}\right\rfloor}\frac{k}{k-1}\\
& = 2n\sum_{k=2}^{\left\lfloor\frac{n}{2}\right\rfloor}\left(1 +
\frac{1}{k-1}\right) < n^2 + 2n\sum_{k=2}^{\left\lfloor\frac{n}{2}\right\rfloor}
\frac{1}{k-1}\\
& = n^2 + 2nH_{\left\lfloor\frac{n}{2}\right\rfloor} =
n^2 + \mathcal{O}(n\log n).
\end{align*}
On the other hand, when $k\ge\left\lfloor\frac{n}{2}\right\rfloor + 1$
we have $\frac{n^2}{k-1}\le \frac{2n^2}{n-2}$, whence
\begin{align*}
&\sum_{k=\left\lfloor\frac{n}{2}\right\rfloor +
1}^{n-1}\frac{n^2}{k-1}\sum_{j=1}^{k-1}\frac{1}{n-j}\le \sum_{k=\left\lfloor\frac{n}{2}\right\rfloor +
1}^{n-1}\frac{2n^2}{n-2}\sum_{j=1}^{k-1}\frac{1}{n-j}\\
& = \sum_{k=\left\lfloor\frac{n}{2}\right\rfloor +
1}^{n-1}\frac{2n^2}{n-2}(H_{n-1} - H_{n-k}) < \sum_{k=\left\lfloor\frac{n}{2}\right\rfloor +
1}^{n-1}\frac{2n^2}{n-2}H_{n-1}\\
& = \sum_{k=\left\lfloor\frac{n}{2}\right\rfloor +
1}^{n-1}\frac{2(n^2 - 4)}{n-2}H_{n-1} + \sum_{k=\left\lfloor\frac{n}{2}\right\rfloor +
1}^{n-1}\frac{8}{n-2}H_{n-1}\\
& < 2n(n + 2)H_{n-1} + 4H_{n-1},
\end{align*}
where the fourth equality and the fifth inequality follow from simple
manipulations.
Overall, summing the contributions above, for $\Expec{1}{\hit_n}$ (and thus for the
expected consensus time) we have:
\[
\Expec{1}{\hit_n} = 2n^2H_{n-1} + \mathcal{O}(n^2).
\]
To prove the second part of the claim, we use a standard argument.
Consider $\lceil\ln\frac{1}{\delta}\rceil$ consecutive time intervals,
each consisting of $s = \lceil e\,\Expec{1}{\hit_n}\rceil = 2en^2H_{n-1} + \mathcal{O}(n^2)$
consecutive steps, with $e$ the base of natural logarithms. For $i = 1,\ldots , \lceil\ln\frac{1}{\delta}\rceil$, if the chain did not
reach state $n$ in any of the first $i - 1$ intervals, then the
probability that the chain does not reach state $n$ in the $i$-th interval
is at most $1/e$ by Markov's inequality. Hence, the probability that
the chain does not reach state $n$ in any of the intervals is at most
\[
\left(\frac{1}{e}\right)^{\ln\frac{1}{\delta}} = \delta.
\]
This concludes the proof.
\end{proof}
\paragraph{Remark.} It might seem reasonable to consider a different
dynamics (generalizing the Voter Model), in which a node samples $k$
neighbours uniformly and independently at random and then it adopts
opinion $1$ with probability $\ell/k$ if $\ell$ is the number of nodes
with that opinion in the sample. It is easy to check that this change
has no effect, in the sense that the transition probabilities of the
corresponding birth-death chain do not change.
\section{Lower Bound} \label{sec:lower}
In this section, we prove a lower bound on the convergence time of memoryless dynamics.
We show that this negative result holds in a very strong sense: any dynamics must take $\Omega(n^2)$ expected time even if the agents have full knowledge of the current system configuration.
As mentioned in the previous section, we restrict the analysis to the case of two opinions, namely 0 and 1, w.l.o.g.
To account for the fact that agents have access to the exact configuration of the system, we slightly modify the
notation introduced in Section~\ref{sec:prelim}, so that here $\uf_{x_u}:\{0,\ldots,n\} \rightarrow [0,1]$
assigns a probability to the number of ones that appear in the population, rather than in a random sample of size~$\ell$.
Before we prove our main result, we need the following technical results.
\begin{lemma} \label{lem:lower_bound_sum_inverse}
For every~$N \in \mathbb{N}$, for every $x \in \mathbb{R}^N$ s.t. for every~$i \in \{1,\ldots,N\}$, $x_i > 0$, we have either
$\sum_{i=1}^N x_i \geq N$ or $\sum_{i=1}^N \frac{1}{x_i} \geq N$.
\end{lemma}
\begin{proof}
Consider the case that $\sum_{i=1}^N x_i \leq N$.
Using the inequality of arithmetic and geometric means, we can write
\begin{equation*}
1 \geq \frac{1}{N} \sum_{i=1}^N x_i \geq \left(\prod_{i=1}^N x_i \right)^{\frac{1}{N}}.
\end{equation*}
Therefore,
\begin{equation*}
1 \leq \left(\prod_{i=1}^N \frac{1}{x_i}\right)^{\frac{1}{N}} \leq \frac{1}{N} \sum_{i=1}^N \frac{1}{x_i},
\end{equation*}
which concludes the proof of Lemma~\ref{lem:lower_bound_sum_inverse}.
\end{proof}
\begin{lemma} \label{lem:lower_bound_hit}
Consider any birth-death chain on~$\{0,\ldots,n\}$. For~$1 \leq i \leq j \leq n$, let $a_i = q_i/p_{i-1}$ and $a(i:j) = \prod_{k=i}^j a_k$. Then, $\Expec{0}{\hit_n} \geq \sum_{1 \leq i < j \leq n} a(i:j)$.
\end{lemma}
\begin{proof}
Let~$w_0 = 1$ and for $i \in \{1,\ldots,n\}$, let $w_i = 1/a(1:i)$. The following result is well-known (see, e.g., Eq.~(2.13) in \cite{levin2017markov}). For every~$\ell \in \{1,\ldots,n\}$,
\begin{equation*}
\Expec{\ell-1}{\tau_\ell} = \frac{1}{q_\ell w_\ell} \sum_{i=0}^{\ell-1} w_i.
\end{equation*}
Thus,
\begin{equation*}
\Expec{\ell-1}{\tau_\ell} = \frac{1}{q_\ell} \sum_{i=0}^{\ell-1} \frac{a(1:\ell)}{a(1:i)} = \frac{1}{q_\ell} \sum_{i=1}^{\ell} a(i:\ell) \geq \sum_{i=1}^{\ell} a(i:\ell).
\end{equation*}
Eventually, we can write
\begin{equation*}
\Expec{0}{\tau_n} = \sum_{\ell = 1}^n \Expec{\ell-1}{\tau_\ell}\geq \sum_{1 \leq i < j \leq n} a(i:j),
\end{equation*}
which concludes the proof of Lemma~\ref{lem:lower_bound_hit}.
\end{proof}
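The quantities appearing in Lemma~\ref{lem:lower_bound_hit} are easy to evaluate numerically; the following Python sketch (an illustration only, not used in the proof) computes both the exact value of $\Expec{0}{\hit_n}$ through the birth-death formula recalled above and the lower bound $\sum_{1\le i<j\le n}a(i:j)$, for a chain given by its transition probabilities.
\begin{verbatim}
def hitting_time_and_bound(p, q):
    # p[i], q[i]: up/down probabilities of a birth-death chain on
    # {0,...,n}, with p[i] > 0 for i < n and q[i] > 0 for i > 0.
    n = len(p) - 1
    a = [0.0] + [q[i] / p[i - 1] for i in range(1, n + 1)]

    expected = 0.0                      # E_0[tau_n] = sum_l E_{l-1}[tau_l]
    for l in range(1, n + 1):
        prod, s = 1.0, 0.0
        for i in range(l, 0, -1):       # builds a(i:l) for i = l, ..., 1
            prod *= a[i]
            s += prod
        expected += s / q[l]            # E_{l-1}[tau_l] = (1/q_l) sum_i a(i:l)

    bound = 0.0                         # sum over 1 <= i < j <= n of a(i:j)
    for j in range(2, n + 1):
        prod = a[j]
        for i in range(j - 1, 0, -1):   # a(i:j) = a_i * a(i+1:j)
            prod *= a[i]
            bound += prod
    return expected, bound

# Example: lazy symmetric walk with p_i = q_i = 1/2 in the interior.
n = 8
p = [0.5] * n + [0.0]
q = [0.0] + [0.5] * n
print(hitting_time_and_bound(p, q))     # (72.0, 28.0): n(n+1) and binom(n,2)
\end{verbatim}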
\begin{theorem}\label{thm:lowbound}
Fix~$z \in \mathbb{N}$. In the presence of~$z$ source agents, the expected convergence time of any memoryless dynamics is at least $\Omega(n^2)$, even when each sample contains the complete configuration of the opinions in the system, i.e., the case $\ell=n$.
\end{theorem}
\begin{proof}
Fix $z \in \mathbb{N}$. Let~$n \in \mathbb{N}$, s.t. $n>4z$, and let $P$ be any memoryless dynamics.
The idea of the proof is to show that the birth-death chain associated with~$P$, obtained from the chain $\mathcal{C}_1$ described in Section \ref{sec:prelim} by specifying $\uf_0$ and $\uf_1$ for the dynamics $P$, cannot be ``fast'' in both directions at the same time.
We restrict the analysis to the subset of states $\chi = \{n/4,\ldots,3n/4\}$.
More precisely, we consider the two following birth-death chains:
\begin{itemize}
\item $\mathcal{C}$ with state space $\chi$, whose states represent the number of agents with opinion~$1$, and assuming that the source agents hold opinion~$1$.
\item $\mathcal{C}'$ with state space $\chi$, whose states represent the number of agents with opinion~$0$, and assuming that the source agents hold opinion~$0$.
\end{itemize}
Let~$\tau_{3n/4}$ (resp. $\tau'_{3n/4}$) be the hitting time of the state $3n/4$ of chain $\mathcal{C}$ (resp. $\mathcal{C}'$).
We will show that
\begin{equation*}
\max \left( \Expec{n/4}{\tau_{3n/4}} , \Expec{n/4}{\tau'_{3n/4}} \right) = \Omega(n^2).
\end{equation*}
Let~$g_0,g_1 : \chi \rightarrow [0,1]$ be the functions describing~$P$ over~$\chi$.
Following Eqs.~\eqref{eq:p_i} and~\eqref{eq:q_i} in Section~\ref{sec:prelim}, we can derive the transition probabilities for~$\mathcal{C}$ as
\begin{equation}
p_i = \frac{n-i}{n} g_0(i), \qquad q_i = \frac{i-z}{n} (1-g_1(i)).
\end{equation}
Note that the expectations have been removed as a consequence of agents having ``full knowledge'' of the configuration.
Similarly, for~$\mathcal{C}'$, the transition probabilities are
\begin{equation}
p_i' = \frac{n-i}{n} (1-g_1(n-i)), \qquad q_i' = \frac{i-z}{n} g_0(n-i).
\end{equation}
Following the definition in the statement of Lemma~\ref{lem:lower_bound_hit}, we define~$a_i$ and~$a'_i$ for $\mathcal{C}$ and $\mathcal{C}'$ respectively. We have
\begin{equation*}
a_i = \frac{q_i}{p_{i-1}} = \frac{i-z}{n-i+1} \cdot \frac{1-g_1(i)}{g_0(i-1)},
\end{equation*}
and
\begin{equation*}
a_i' = \frac{q_i'}{p_{i-1}'} = \frac{i-z}{n-i+1} \cdot \frac{g_0(n-i)}{1-g_1(n-i+1)}.
\end{equation*}
Observe that we can multiply these quantities by pairs to cancel the factors on the right hand side:
\begin{equation} \label{eq:a_i_product}
a_{n-i+1} \cdot a_i' = \frac{i-z}{i} \cdot \frac{n-i+1-z}{n-i+1}.
\end{equation}
$(i-z)/i$ is increasing in~$i$, so it is minimized on $\chi$ for~$i = n/4$. Similarly, $(n-i+1-z)/(n-i+1)$ is minimized for~$i = 3n/4$. Hence, we get the following (rough) lower bound from Eq.~\eqref{eq:a_i_product}: for every~$i \in \chi$,
\begin{equation} \label{eq:a_i_product_lowerbound}
a_{n-i+1} \cdot a_i' \geq \left(1-\frac{4z}{n}\right)^2.
\end{equation}
Following the definition in the statement of Lemma~\ref{lem:lower_bound_hit}, we define~$a(i:j)$ and~$a'(i:j)$ for $\mathcal{C}$ and $\mathcal{C}'$ respectively. From Eq.~\eqref{eq:a_i_product_lowerbound}, we get for any~$i,j \in \chi$ with~$i\leq j$:
\begin{align*}
a'(i:j) &\geq \left( 1- \frac{4z}{n} \right)^{2(j-i+1)} \frac{1}{a(n-j+1:n-i+1)} \\
& \geq \left( 1- \frac{4z}{n} \right)^n \frac{1}{a(n-j+1:n-i+1)}.
\end{align*}
Let~$c = c(z) = \exp(-4z)/2$.
For~$n$ large enough,
\begin{equation} \label{eq:a_i_prime_lowerbound}
a'(i:j) \geq \frac{c}{a(n-j+1:n-i+1)}.
\end{equation}
Let~$N = n^2/8 + n/4$.
By Lemma~\ref{lem:lower_bound_sum_inverse}, either
\begin{equation*}
\sum_{\substack{i,j \in \chi \\ i<j}} a(i:j) \geq N,
\end{equation*}
or (by Eq.~\eqref{eq:a_i_prime_lowerbound})
\begin{equation*}
\sum_{\substack{i,j \in \chi \\ i<j}} a'(i:j) \geq c \sum_{\substack{i,j \in \chi \\ i<j}} \frac{1}{a(i:j)} \geq cN.
\end{equation*}
By Lemma~\ref{lem:lower_bound_hit}, this implies that either
\begin{equation*}
\Expec{n/4}{\tau_{3n/4}} \geq N, \quad \text{ or } \quad \Expec{n/4}{\tau'_{3n/4}} \geq cN.
\end{equation*}
In both cases, there exists an initial configuration for which $\Omega(n^2)$ rounds are needed in expectation to achieve consensus, which concludes the proof of Theorem~\ref{thm:lowbound}.
\end{proof}
\subsection{Faster Dissemination with Memory} \label{sec:simulations}
In this section, we give experimental evidence suggesting that dynamics using a modest amount of memory can achieve consensus in an almost linear number of rounds.
When compared to memory-less dynamics, this represents an exponential gap (following the results of Section~\ref{sec:lower}).
\paragraph{``Follow the trend'': our candidate approach} The dynamics that we run in the simulations is derived from the algorithm of \cite{DBLP:conf/podc/KormanV22}, and uses a sample size of $\ell = 10 \, \log n$.
Each time an agent is activated, it decrements a countdown by 1. When the countdown reaches 0, the corresponding activation is said to be \textit{busy}.
On a busy activation,
the agent compares the number of opinions equal to 1 that it observes, to the number observed during the last busy activation.
\begin{itemize}
\item If the current sample contains more 1's, then the agent adopts the opinion 1.
\item Conversely, if it contains fewer 1's, then it adopts the opinion 0.
\item If the current sample contains exactly as many 1's as the previous sample, the agent remains with the same opinion.
\end{itemize}
At the end of a busy activation, the agent resets its countdown to~$\ell$ (equal to the sample size) -- so that there is exactly one busy activation every $\ell$ activations. In addition, the agent memorizes the number of 1's that it observed, for the sake of performing the next busy activation.
Overall, each agent needs to store two integers between $0$ and~$\ell$ (the countdown and the number of opinions equal to 1 observed during the last busy activation), so the dynamics requires $2\log(\ell) = \Theta(\log \log n)$ bits of memory.
The dynamics is described formally in Algorithm~\ref{algo:sequential_follow_the_trend}.
\begin{algorithm}[h!]
\SetKwInOut{Input}{Input}
\caption{Follow the Trend \label{algo:sequential_follow_the_trend}}
\DontPrintSemicolon
{\bf Sample size:} $\ell = 10 \log n$ \;
{\bf Memory:} ${\tt previousSample},{\tt countdown} \in \{ 0,\ldots,\ell \}$ \;
\Input{$k$, number of ones in a sample}
\BlankLine
\uIf{${\tt countdown} = 0$} {
\;
\uIf{$k < {\tt previousSample}$}{
$x_u \leftarrow 0$ \;
}
\ElseIf{$k > {\tt previousSample}$}{
$x_u \leftarrow 1$ \;
}\;
${\tt previousSample} \leftarrow k$ \;
${\tt countdown} \leftarrow \ell$ \;
} \Else {
${\tt countdown} \leftarrow {\tt countdown} - 1$ \;
}
\end{algorithm}
\paragraph{Experimental results}
Simulations were performed for~$n=2^i$, where $i \in \{3,\ldots,10\}$ for the Voter model, and $i \in \{3,\ldots,17\}$ for Algorithm~\ref{algo:sequential_follow_the_trend}, and repeated $100$ times each.
Every population contains a single source agent ($z=1$).
In self-stabilizing settings, it is not clear what are the worst initial configurations for a given dynamics. Here, we looked at two different ones:
\begin{itemize}
\item a configuration in which all opinions (including the one of the source agent) are independently and uniformly distributed in~$\{0,1\}$,
\item a configuration in which the source agent holds opinion~$0$, while all other agents hold opinion~$1$.
\end{itemize}
Results are summed up in Figure~\ref{fig:simulations_memory_short} in the main text, in terms of parallel rounds (one parallel round corresponds to~$n$ activations).
They are consistent with the theoretical insights of Section~\ref{sec:upper}.
Moreover, they suggest that the expected convergence time of our candidate dynamics is on the order of $\mathrm{polylog}(n)$ parallel rounds.
\fi |
{
"arxiv_id": "2302.08715",
"language": "en",
"timestamp": "2023-02-20T02:07:44",
"url": "https://arxiv.org/abs/2302.08715",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
3D models such as point cloud and mesh have been widely studied and applied in virtual/augmented reality (V/AR), the game industry, film post-production, etc.~\cite{application}. However, 3D models are usually affected by geometry/color noise and compression/simplification loss during the generation and transmission procedures. Therefore, many 3D model quality assessment (3DQA) methods have been proposed to predict the visual quality levels of degraded 3D models. Nevertheless, due to the complex structure of 3D models, efficient feature extraction is difficult to perform and most 3DQA methods require huge computational resources and inference time, which makes it hard to put such methods into practical use and calls for more efficient 3DQA solutions.
Generally speaking, 3DQA methods can be categorized into model-based and projection-based methods.
Different from model-based 3DQA methods that extract features directly from the 3D models, projection-based 3DQA methods evaluate the visual quality of 3D models via the 2D projections (regardless of the 3D models' digital representation formats and resolution), which can take advantage of the mature 2D vision backbones to achieve cost-effective performance. Unfortunately, the projections are highly dependent on the viewpoints and a single projection is not able to cover sufficient quality information. Therefore, many projection-based 3DQA methods try to utilize multiple projections or perceptually select the main viewpoint to achieve higher performance and gain more robustness \cite{liu2021pqa,zhang2022treating,nehme2022textured}. For example, VQA-PC \cite{zhang2022treating} uses 120 projections for analysis and G-LPIPS \cite{nehme2022textured} needs to select in advance the main viewpoint that covers the most geometric, color, and semantic information. However, multiple projections and perceptual selection take up extra rendering time and huge computational resources, which motivates us to develop an \underline{E}fficient and \underline{E}ffective \underline{P}rojection-based \underline{3D} Model \underline{Q}uality \underline{A}ssessment (\textbf{EEP-3DQA}) method based on fewer projections.
Specifically, we propose a random projection sampling (RPS) strategy to sample projections from the 6 perpendicular viewpoints to reduce the rendering time cost. The tiny version \textbf{EEP-3DQA-t} only employs 2 projections while the base version \textbf{EEP-3DQA} employs 5 projections. The number of the projections is defined according to the experimental discussion in Section \ref{sec:num}. Then, inspired by \cite{wu2022fast}, we employ the Grid Mini-patch Sampling (GMS) strategy and adopt the lightweight Swin-Transformer tiny (ST-t) as the feature extraction backbone \cite{liu2021swin} to further ensure the efficiency and effectiveness of the proposed method. With the features extracted from the sampled projections, the fully-connected layers are used to map the features into quality scores. Finally, we average the quality scores of the sampled projections as the final quality value for the 3D model. The extensive experimental results show that the proposed method outperforms the existing NR-3DQA methods on the point cloud quality assessment (PCQA) and mesh quality assessment (MQA) databases, and is even superior to the FR-3DQA methods on the PCQA databases. Our proposed tiny version only takes about 1.67s to evaluate one point cloud on CPU (11.50$\times$ faster than VQA-PC) while still obtaining competitive performance.
\begin{figure}
\centering
\includegraphics[width = \linewidth]{framework.pdf}
\caption{The framework of the proposed method.}
\label{fig:framework}
\vspace{-0.2cm}
\end{figure}
\section{Related Work}
In this section, we briefly review the development of model-based and projection-based 3DQA methods.
\subsection{Model-based 3DQA}
The early FR-PCQA methods only use the geometry information to estimate the quality loss at the point level \cite{mekuria2016evaluation,tian2017geometric}. Further, to deal with the colored point clouds, both geometry and color information are incorporated for analysis by calculating the similarity of various quality domains \cite{meynet2020pcqm,yang2020graphsim,alexiou2020pointssim,database}. Later, 3D-NSS \cite{zhang2022no} is proposed by quantifying the distortions of both point clouds and meshes via some classic Natural Scene Statistics (NSS) distributions. ResSCNN \cite{liu2022point} proposes an end-to-end sparse convolutional neural network (CNN) to learn the quality representation of the point clouds. Unlike the 3D models used for classification and segmentation, the 3D models for quality assessment are usually denser and contain more points/vertices, thus making the feature extraction more complicated.
\subsection{Projection-based 3DQA}
Similar to the FR image quality assessment (IQA) methods, the early projection-based FR-3DQA methods compute the quality loss between the projections rendered from the reference and the distorted 3D models with common handcrafted IQA descriptors. For example, Tian \textit{et al.} \cite{tian-color} introduce a global distance over the texture image using the Mean Squared Error (MSE) to quantify the effect of color information. Yang \textit{et al.} \cite{yang2020predicting} use both projected and depth images upon the six faces of a cube for quality evaluation.
The performance of the projection-based methods is further boosted by the development of deep learning networks. PQA-net \cite{liu2021pqa} designs a multi-task-based shallow network and extracts features from multi-view projections of the distorted point clouds. VQA-PC \cite{zhang2022treating} proposes to treat the point clouds as moving-camera videos by capturing frames along defined circular pathways, and utilizes both 2D-CNN and 3D-CNN for spatial and temporal feature extraction respectively. G-LPIPS \cite{nehme2022textured} perceptually selects the main viewpoint of the textured meshes and assesses the quality of the main viewpoint projection with a CNN.
\section{Proposed Method}
The framework of the proposed method is illustrated in Fig.~\ref{fig:framework}, which includes the projection sampling process, the feature extraction module, and the feature regression module.
\begin{figure*}
\centering
\includegraphics[width = 0.9\linewidth]{spatial_grid.pdf}
\caption{An example of the projection sampling process. $N$ projections are randomly selected from the 6 perpendicular viewpoints, which are further sampled into spatial grids by GMS. The details of GMS can be referred to in \cite{wu2022fast}.}
\label{fig:grid}
\end{figure*}
\subsection{Projection Sampling Process}
\label{sec:projection}
Following the mainstream projection setting employed in the popular point cloud compression standard MPEG VPCC \cite{graziosi2020overview}, we define 6 perpendicular viewpoints of the given 3D model $\mathbf{M}$ represented by point cloud or mesh, corresponding to the 6 surfaces of a cube:
\begin{equation}
\begin{aligned}
\mathbf{P} &= \psi(\mathbf{M}),
\end{aligned}
\end{equation}
where $\mathbf{P} = \{P_{i}|i=1,\cdots, 6\}$ indicates the set of the 6 projections and $\psi(\cdot)$ denotes the projection capture process. Additionally, the white background of the projections is cropped out.
There may exist redundant quality information among the 6 projections, and the efficiency can be improved by extracting sufficient quality information from fewer projections, since fewer projections consume less rendering time and fewer computational resources. Therefore, we propose a Random Projection Sampling (RPS) strategy to improve the efficiency, which functions by randomly selecting $N$ projections for evaluation:
\begin{equation}
\mathbf{P_N} = \alpha(\mathbf{P}),
\end{equation}
where $\alpha(\cdot)$ denotes the RPS operation and $\mathbf{P_N} = \{P_{N_j}|j=1,\cdots, N\}$ stands for the set of sampled projections. It's worth noting that only the selected projections are rendered.
Further inspired by the efficiency gains brought by Grid Mini-patch Sampling (GMS) \cite{wu2022fast}, we similarly cut the projections into uniform, non-overlapping spliced spatial grids as the sampled results:
\begin{equation}
\begin{aligned}
\mathbf{\hat{P}_N} &= \beta(\mathbf{P_N}),
\end{aligned}
\end{equation}
where $\beta$ indicates the GMS operation and $\mathbf{\hat{P}_N} = \{\hat{P}_{N_j}|j=1,\cdots, N\}$ represents the set of projections after the GMS operation. From Fig. \ref{fig:grid}, we can see that the spatial grids can maintain the local quality-aware patterns that would otherwise be damaged by the resize operation.
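The two sampling steps can be summarized by the following Python sketch; the grid size, the mini-patch size, and the array-based projection format are illustrative assumptions rather than the exact settings used in our experiments.
\begin{verbatim}
import numpy as np

def rps(projections, n_select, rng=None):
    # Random Projection Sampling: keep n_select of the 6 rendered views.
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(projections), size=n_select, replace=False)
    return [projections[i] for i in idx]

def gms(image, grid=7, patch=32, rng=None):
    # Grid Mini-patch Sampling: cut the projection into a uniform grid,
    # take one randomly located mini-patch per cell, and splice the
    # patches into a small fragment that keeps local quality patterns.
    rng = rng or np.random.default_rng()
    h, w, c = image.shape
    assert h >= grid * patch and w >= grid * patch
    gh, gw = h // grid, w // grid
    out = np.zeros((grid * patch, grid * patch, c), dtype=image.dtype)
    for i in range(grid):
        for j in range(grid):
            y0 = i * gh + rng.integers(0, gh - patch + 1)
            x0 = j * gw + rng.integers(0, gw - patch + 1)
            out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = \
                image[y0:y0+patch, x0:x0+patch]
    return out
\end{verbatim}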
\subsection{Efficient Feature Extraction}
To reduce the number of flops and parameters, we select the lightweight Swin-Transformer tiny (ST-t) \cite{liu2021swin} as the feature extraction backbone. Given the input projection set $\mathbf{\hat{P}_N}$, the quality-aware features can be obtained as:
\begin{equation}
\begin{aligned}
F_{N_j} = &\gamma(\hat{P}_{N_j}), \\
\overline{F}_{N_j} = &\mathbf{Avg}(F_{N_j}), \\
\end{aligned}
\end{equation}
where $\gamma(\cdot)$ represents the feature extraction operation with ST-t, $F_{N_j}$ indicates the extracted feature maps from the $N_j$-th input sampled projection, $\mathbf{Avg}(\cdot)$ stands for the average pooling operation and $\overline{F}_{N_j}$ denotes the pooled features.
\subsection{Quality Regression}
To map the quality-aware features into quality scores, we simply adopt a two-stage fully-connected (FC) layer for regression:
\begin{equation}
Q_{N_j} = \mathbf{FC}(\overline{F}_{N_j}),
\end{equation}
where $Q_{N_j}$ indicates the quality score for the $N_j$-th sampled projection. Then the final quality $Q$ for the given 3D model can be computed by averaging the quality values:
\begin{equation}
Q = \frac{1}{N} \sum_{j=1}^N Q_{N_j},
\end{equation}
The Mean Squared Error (MSE) is utilized as the loss function:
\begin{equation}
Loss = \frac{1}{n} \sum_{\eta=1}^{n} (Q_{\eta}-Q_{\eta}')^2,
\end{equation}
where $Q_{\eta}$ is the predicted quality scores, $Q_{\eta}'$ is the quality label of the 3D model, and $n$ is the size of the mini-batch.
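A minimal PyTorch-style sketch of the regression stage is reported below; the backbone is kept abstract (any network returning a spatial feature map can be plugged in), and the feature dimension and hidden width are assumptions rather than the exact configuration of our implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class QualityHead(nn.Module):
    # Average-pool the backbone feature map, then regress a quality score
    # with a two-stage fully-connected head.
    def __init__(self, feat_dim=768, hidden=128):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Linear(feat_dim, hidden),
                                nn.ReLU(inplace=True),
                                nn.Linear(hidden, 1))

    def forward(self, feat_map):              # feat_map: (B, C, H, W)
        f = self.pool(feat_map).flatten(1)    # (B, C)
        return self.fc(f).squeeze(-1)         # (B,)

def model_quality(backbone, head, projections):
    # projections: tensor (N, 3, H, W) holding the N sampled views;
    # the final score is the average of the per-projection scores.
    scores = head(backbone(projections))      # (N,)
    return scores.mean()
\end{verbatim}
Training simply minimizes the MSE between the predicted and labelled scores over each mini-batch, as in the loss defined above.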
\begin{table*}[!htp]
\centering
\caption{Performance results on the SJTU-PCQA and WPC databases. The best performance results are marked in {\bf\textcolor{red}{RED}} and the second performance results are marked in {\bf\textcolor{blue}{BLUE}}.}
\begin{tabular}{l:l:l|cccc|cccc}
\toprule
\multirow{2}{*}{Ref}&\multirow{2}{*}{Type}&\multirow{2}{*}{Methods} & \multicolumn{4}{c|}{SJTU-PCQA} & \multicolumn{4}{c}{WPC} \\ \cline{4-11}
&& & SRCC$\uparrow$ & PLCC$\uparrow$ & KRCC$\uparrow$ & RMSE $\downarrow$ & SRCC$\uparrow$ & PLCC$\uparrow$ & KRCC$\uparrow$ & RMSE $\downarrow$ \\ \hline
\multirow{10}{*}{FR} &\multirow{8}{45pt}{{{Model-based}}}
&MSE-p2po & 0.7294 & 0.8123 & 0.5617 & 1.3613 & 0.4558 & 0.4852 & 0.3182 & 19.8943 \\
&&HD-p2po & 0.7157 & 0.7753 & 0.5447 & 1.4475 &0.2786 &0.3972&0.1943 &20.8990\\
&&MSE-p2pl & 0.6277 & 0.5940 & 0.4825 & 2.2815 & 0.3281 & 0.2695 &0.2249 & 22.8226 \\
&&HD-p2pl & 0.6441 & 0.6874 & 0.4565 & 2.1255 & 0.2827 & 0.2753 &0.1696 &21.9893 \\
&&PSNR-yuv & 0.7950 & 0.8170 & 0.6196 & 1.3151 & 0.4493 & 0.5304 & 0.3198 & 19.3119\\
&& PCQM & {0.8644} & {0.8853} & {0.7086} & {1.0862} &{0.7434} & {0.7499} & {0.5601} & 15.1639 \\
&& GraphSIM & {0.8783} & 0.8449 & {0.6947} & {1.0321} & 0.5831 & 0.6163 & 0.4194 & 17.1939 \\
&& PointSSIM & 0.6867 & 0.7136 & 0.4964 & 1.7001 & 0.4542 & 0.4667 & 0.3278 & 20.2733 \\ \cdashline{2-11}
&\multirow{2}{45pt}{{{Projection-based}}} &
PSNR &0.2952 &0.3222 &0.2048 &2.2972 &0.1261 &0.1801 &0.0897 &22.5482 \\
&&SSIM &0.3850 &0.4131 &0.2630 &2.2099 &0.2393 &0.2881 &0.1738 &21.9508 \\ \hdashline
\multirow{6}{*}{NR} &\multirow{2}{45pt}{{{Model-based}}}
& 3D-NSS & 0.7144 & 0.7382 & 0.5174 & 1.7686 & 0.6479 & 0.6514 & 0.4417 & 16.5716 \\
&& ResSCNN & 0.8600 & 0.8100 &- &- &- &- &- &-\\ \cdashline{2-11}
&\multirow{4}{45pt}{{{Projection-based}}} &
PQA-net & 0.8500 & 0.8200 & - & - & 0.7000 & 0.6900 & 0.5100 & 15.1800 \\
&&VQA-PC & 0.8509 & 0.8635 & 0.6585 & 1.1334 & 0.7968 & 0.7976 & 0.6115 & 13.6219\\ \cdashline{3-11}
&&\textbf{EEP-3DQA-t} & \bf\textcolor{blue}{0.8891} & \bf\textcolor{blue}{0.9130} & \bf\textcolor{blue}{0.7324} & \bf\textcolor{blue}{0.9741} & \bf\textcolor{blue}{0.8032} & \bf\textcolor{blue}{0.8124} & \bf\textcolor{blue}{0.6176} & \bf\textcolor{blue}{12.9603}
\\
&&\textbf{EEP-3DQA} &\bf\textcolor{red}{0.9095} &\bf\textcolor{red}{0.9363} & \bf\textcolor{red}{0.7635} & \bf\textcolor{red}{0.8472} & \bf\textcolor{red}{0.8264} & \bf\textcolor{red}{0.8296} & \bf\textcolor{red}{0.6422} & \bf\textcolor{red}{12.7451}
\\
\bottomrule
\end{tabular}
\label{tab:pcqa}
\end{table*}
\section{Experiment}
\subsection{Benchmark Databases}
To investigate the efficiency and effectiveness of the proposed method, the subjective point cloud assessment database (SJTU-PCQA) \cite{yang2020predicting}, the Waterloo point cloud assessment database (WPC) proposed by \cite{liu2022perceptual}, and the textured mesh quality (TMQ) database proposed by \cite{nehme2022textured} are selected for validation. The SJTU-PCQA database introduces 9 reference point clouds and each reference point cloud is degraded into 42 distorted point clouds, which generates 378 = 9$\times$7$\times$6 distorted point clouds in total.
The WPC database contains 20 reference point clouds and augments each point cloud into 37 distorted stimuli, which generates 740 = 20$\times$37 distorted point clouds. The TMQ database includes 55 source textured meshes and 3,000 corrupted textured meshes with quality labels. The 9-fold cross validation is utilized for the SJTU-PCQA database and the 5-fold cross validation is used for the WPC and the TMQ databases respectively. The average performance is recorded as the final performance result.
\subsection{Competitors \& Criteria}
For the PCQA databases, the compared FR quality assessment methods include MSE-p2point (MSE-p2po) \cite{mekuria2016evaluation}, Hausdorff-p2point (HD-p2po) \cite{mekuria2016evaluation}, MSE-p2plane (MSE-p2pl) \cite{tian2017geometric}, Hausdorff-p2plane (HD-p2pl) \cite{tian2017geometric}, PSNR-yuv \cite{torlig2018novel}, PCQM \cite{meynet2020pcqm}, GraphSIM \cite{yang2020graphsim}, PointSSIM \cite{alexiou2020pointssim}, PSNR, and SSIM \cite{ssim}. The compared NR methods include 3D-NSS \cite{zhang2022no}, ResSCNN \cite{liu2022point}, PQA-net \cite{liu2021pqa}, and VQA-PC \cite{zhang2022treating}. For the TMQ database, the compared FR quality assessment methods include PSNR, SSIM \cite{ssim}, and G-LPIPS (specially designed for textured meshes) \cite{nehme2022textured}. The compared NR methods include 3D-NSS \cite{zhang2022no}, BRISQUE \cite{mittal2012brisque}, and NIQE \cite{mittal2012making}. It's worth mentioning that PSNR, SSIM, BRISQUE, and NIQE are calculated on all 6 projections and the average scores are recorded.
Afterward, a five-parameter logistic function is applied to map the predicted scores to subjective ratings, as suggested by \cite{antkowiak2000final}. Four popular consistency evaluation criteria are selected to judge the correlation between the predicted scores and quality labels, which consist of Spearman Rank Correlation Coefficient (SRCC), Kendall’s Rank Correlation Coefficient (KRCC), Pearson Linear Correlation Coefficient (PLCC), and Root Mean Squared Error (RMSE). A well-performing model should
obtain values of SRCC, KRCC, and PLCC close to 1, and
the value of RMSE near 0.
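The evaluation step can be sketched as follows; the five-parameter logistic below is one common parameterization used in the quality assessment literature, and the initial values supplied to the fit are rough assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import spearmanr, kendalltau, pearsonr

def logistic5(x, b1, b2, b3, b4, b5):
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def evaluate(pred, mos):
    # pred: raw predicted scores; mos: subjective quality labels.
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    srcc = spearmanr(pred, mos)[0]
    krcc = kendalltau(pred, mos)[0]
    p0 = [np.max(mos), 1.0, np.mean(pred), 0.0, np.mean(mos)]
    popt, _ = curve_fit(logistic5, pred, mos, p0=p0, maxfev=1000000)
    mapped = logistic5(pred, *popt)
    plcc = pearsonr(mapped, mos)[0]
    rmse = float(np.sqrt(np.mean((mapped - mos) ** 2)))
    return srcc, plcc, krcc, rmse
\end{verbatim}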
\subsection{Implementation Details}
The employed Swin-Transformer tiny \cite{liu2021swin} backbone is initialized with the weights pretrained on the ImageNet database \cite{deng2009imagenet}. The RPS parameter $N$ discussed in Section \ref{sec:projection} is set as 2 for the tiny version \textbf{EEP-3DQA-t} and 5 for the base version \textbf{EEP-3DQA} respectively.
The adam optimizer \cite{kingma2014adam} is employed with the 1e-4 initial learning rate and the learning rate decays with a ratio of 0.9 for every 5 epochs. The default batch size is set as 32 and the default training epochs are set as 50. The average performance of $k$-fold cross validation is reported as the final performance to avoid randomness.
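The optimization schedule described above corresponds to the following PyTorch sketch; the loader and model are placeholders, and settings not mentioned in the text (e.g., weight decay) are left at their defaults.
\begin{verbatim}
import torch

def train(model, train_loader, epochs=50):
    # Adam with initial lr 1e-4, decayed by a factor of 0.9 every 5 epochs.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                step_size=5, gamma=0.9)
    for _ in range(epochs):
        for projections, labels in train_loader:   # batch size 32
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(projections), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
\end{verbatim}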
\begin{table}[!t]
\centering
\renewcommand\tabcolsep{2.3pt}
\caption{Performance results on the TMQ database. The best performance results are marked in {\bf\textcolor{red}{RED}} and the second performance results are marked in {\bf\textcolor{blue}{BLUE}}.}
\vspace{0.1cm}
\begin{tabular}{l:l|cccc}
\toprule
\multirow{2}{*}{Ref}&\multirow{2}{*}{Methods} & \multicolumn{4}{c}{TMQ} \\ \cline{3-6}
& & SRCC$\uparrow$ & PLCC$\uparrow$ & KRCC$\uparrow$ & RMSE $\downarrow$ \\ \hline
\multirow{3}{*}{FR} &
PSNR &0.5295 &0.6535 &0.3938 &0.7877\\
&SSIM &0.4020 &0.5982 &0.2821 &0.8339 \\
&G-LPIPS &\bf\textcolor{red}{0.8600} & \bf\textcolor{red}{0.8500} &- &-\\ \hdashline
\multirow{5}{*}{NR}&
3D-NSS &0.4263 &0.4429 &0.2934 &1.0542 \\
&BRISQUE & 0.5364 & 0.4849 & 0.3788 & 0.9014 \\
&NIQE & 0.3731 & 0.3866 & 0.2528 & 0.8782 \\ \cdashline{2-6}
&\textbf{EEP-3DQA-t} & {0.7350} & {0.7430} & \bf\textcolor{blue}{0.5439} & \bf\textcolor{blue}{0.6468}
\\
&\textbf{EEP-3DQA} &\bf\textcolor{blue}{0.7769} &\bf\textcolor{blue}{0.7823} & \bf\textcolor{red}{0.5852} & \bf\textcolor{red}{0.5975}
\\
\bottomrule
\end{tabular}
\vspace{-0.4cm}
\label{tab:mqa}
\end{table}
\subsection{Experimental Results}
The experimental results are listed in Table \ref{tab:pcqa} and Table \ref{tab:mqa}. From Table \ref{tab:pcqa}, we can observe that the proposed EEP-3DQA outperforms all the compared methods while the tiny version EEP-3DQA-t achieves second place on the SJTU-PCQA and WPC databases, which proves the effectiveness of the proposed method. What's more, all the 3DQA methods experience significant performance drops from the SJTU-PCQA database to the WPC database. This is because the WPC database introduces more complicated distortions and employs relatively fine-grained degradation levels, which makes the quality assessment task more challenging for the 3DQA methods. From Table \ref{tab:mqa}, we can find that the proposed EEP-3DQA is only inferior to the FR method G-LPIPS. However, G-LPIPS operates on the projections from perceptually selected viewpoints, which limits its practical value compared with the proposed method.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{n_comparison_replace.pdf}
\vspace{-0.2cm}
\caption{Illustration of the varying SRCC values corresponding to different selections of the projection number $N$. }
\label{fig:n_srcc}
\vspace{-0.4cm}
\end{figure}
\subsection{Projection Number Selection}
\label{sec:num}
In this section, we exhibit the performance with different RPS $N$ selections in Fig.~\ref{fig:n_srcc} and give the reasons for setting $N$ as 2 for the tiny version and 5 for the base version of the proposed EEP-3DQA. From Fig.~\ref{fig:n_srcc}, we can see that randomly sampling 5 projections obtains the highest performance on all three databases, which is why we set $N$=5 for the base version. Using all 6 projections causes performance drops, which we attribute to over-fitting. What's more, when $N$ increases from 1 to 2, the SRCC values gain the largest improvement on the SJTU-PCQA database and gain relatively significant improvement on the WPC and TMQ databases. Therefore, we set $N$=2 for the tiny version rather than setting $N$=1 to get more cost-effective performance.
\subsection{Efficiency Analysis}
The previous discussions have demonstrated the effectiveness of the proposed EEP-3DQA; this section focuses on its efficiency. Three FR-3DQA methods (PCQM, GraphSIM, and PointSSIM) and two NR-3DQA methods (3D-NSS and VQA-PC) are selected for comparison. It is worth mentioning that VQA-PC and the proposed EEP-3DQA are deep-learning-based methods, while the remaining methods are handcrafted. We test the operation time of the 3DQA methods on a computer with an Intel 12500H CPU @ 3.11 GHz, 16 GB of RAM, and an NVIDIA GeForce RTX 3070Ti GPU on the Windows platform. The efficiency comparison is exhibited in Table \ref{tab:efficiency}. We can see that the base version of the proposed method requires only 1/2.09 of the CPU inference time of the fastest competitor 3D-NSS while achieving the best performance. Moreover, the tiny version EEP-3DQA-t requires even fewer FLOPs and less inference time than the compared deep-learning-based VQA-PC. All the comparisons confirm the superior efficiency of the proposed EEP-3DQA.
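
For reference, the average per-sample inference time can be estimated with a protocol along the following lines. This is a sketch with a placeholder model rather than the exact benchmarking script used here; note that GPU timing requires explicit synchronization.
\begin{verbatim}
import time
import torch
from torch import nn

model = nn.Linear(768, 1).eval()             # placeholder network
x = torch.randn(1, 768)

def mean_inference_time(model, x, reps=100):
    with torch.no_grad():
        for _ in range(10):                  # warm-up passes
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(reps):
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / reps

cpu_time = mean_inference_time(model, x)
if torch.cuda.is_available():
    gpu_time = mean_inference_time(model.cuda(), x.cuda())
\end{verbatim}
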
\subsection{Ablation Study}
Two sampling strategies, RPS and GMS, are proposed in Section \ref{sec:projection}, and we investigate the contribution of each strategy. Note that we disable the RPS strategy by fixing the projection viewpoints, and we test with 5 different sets of fixed projection viewpoints to ease the effect of randomness. The ablation study results are shown in Table \ref{tab:ablation}, from which we can see that using both RPS and GMS achieves higher SRCC values than excluding either of the strategies. This indicates that both RPS and GMS contribute to the final results. On closer inspection, we can find that RPS contributes relatively more to the tiny version than to the base version. This is because the base version employs five random projections out of the six perpendicular projections, which is not significantly different from fixing five projections out of the six. Additionally, the GMS strategy tends to contribute more to the base version than to the tiny version, which suggests that GMS can better extract the quality-aware information when more projections are available.
\begin{table}[!tp]
\centering
\renewcommand\tabcolsep{2.3pt}
\renewcommand\arraystretch{1.1}
\caption{Illustration of FLOPs, parameters, and average inference time (on CPU/GPU) per point cloud for the SJTU-PCQA and WPC databases. The subscript `$\mathbf{_{A\times}}$' on the inference time indicates that the corresponding method requires $\mathbf{A\times}$ the operation time of the proposed base version \textbf{EEP-3DQA}. }
\vspace{0.1cm}
\begin{tabular}{c|cc|c}
\toprule
{Method} & {Para. (M)} & {GFLOPs} & {Time (s) CPU/GPU} \\ \hline
PCQM & - & - & 12.23{$\mathbf{_{4.99\times}}$}/- \\
GraphSIM & - & - &270.14{$\mathbf{_{110.26\times}}$}/- \\
PointSSIM &- & - &9.27{$\mathbf{_{3.78\times}}$}/- \\
3D-NSS &- &- &5.12{$\mathbf{_{2.09\times}}$}/- \\
VQA-PC &58.37 &50.08 &19.21{$\mathbf{_{7.84\times}}$}/16.44{$\mathbf{_{11.26\times}}$} \\ \hdashline
\textbf{EEP-3DQA-t} &27.54 & 8.74 &1.67\textcolor{red}{$\mathbf{_{0.68\times}}$}/1.12\textcolor{red}{$\mathbf{_{0.77\times}}$} \\
\textbf{EEP-3DQA} &27.54 &21.87 &2.45\textcolor{blue}{$\mathbf{_{1.00\times}}$}/1.46\textcolor{blue}{$\mathbf{_{1.00\times}}$} \\
\bottomrule
\end{tabular}
\label{tab:efficiency}
\end{table}
\begin{table}[]
\centering
\renewcommand\arraystretch{1.1}
\caption{SRCC performance results of the ablation study. Best in bold.}
\vspace{0.05cm}
\begin{tabular}{c:c:c|c|c|c}
\toprule
Ver. & RPS & GMS & SJTU & WPC &TMQ \\ \hline
\multirow{3}{*}{Tiny} & $\checkmark$ & $\times$ & 0.8834 & 0.8016 & 0.7347 \\
& $\times$ & $\checkmark$ & 0.8822 & 0.7953 & 0.7136 \\
& $\checkmark$ & $\checkmark$ & \textbf{0.8891} & \textbf{0.8032} & \textbf{0.7350} \\ \hdashline
\multirow{3}{*}{Base} & $\checkmark$ & $\times$ & 0.8853 & 0.8140 & 0.7564 \\
& $\times$ & $\checkmark$ & 0.8933 & 0.8241 & 0.7704 \\
& $\checkmark$ & $\checkmark$ & \textbf{0.9095} & \textbf{0.8264} & \textbf{0.7769} \\
\bottomrule
\end{tabular}
\label{tab:ablation}
\end{table}
\section{Conclusion}
In this paper, we mainly focus on the efficiency of 3DQA methods and propose an NR-3DQA method to tackle the corresponding challenges. The proposed EEP-3DQA first randomly samples several projections from the 6 perpendicular viewpoints and then employs grid mini-patch sampling to convert the projections into spatial grids while maintaining the local patterns. The lightweight Swin-Transformer tiny is then used as the feature extraction backbone to extract quality-aware features from the sampled projections. The base EEP-3DQA achieves the best performance among the NR-3DQA methods on all three benchmark databases, and the tiny EEP-3DQA-t takes the least inference time on both CPU and GPU while still obtaining competitive performance. The extensive further experimental results confirm the contributions of the proposed strategies and the rationality of the structure.
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.08609",
"language": "en",
"timestamp": "2023-02-20T02:03:14",
"url": "https://arxiv.org/abs/2302.08609",
"yymm": "2302"
} | \section{Introduction\label{sct:introduction}}
The collider production of an isolated partonic object bearing uncanceled strong nuclear charge
is immediately followed by a frenzied showering of soft and collinear radiation and
a complex process of recombination into meta-stable color-singlet hadronic states. In order to compare
theoretical predictions for hard scattering events against experimental observations, it is necessary to systematically
reassemble these ``jets'' of fragmentary debris into a faithful representation of their particle source.
The leading clustering algorithm serving this purpose at the Large Hadron Collider (LHC) is anti-$k_\textrm{T}$~\cite{Cacciari:2008gp},
which is valued for yielding regular jet shapes that are simple to calibrate.
It can often be the case that clustering is complicated by early decays of a heavy unstable state into
multiple hard prongs, e.g., as $(W^+ \to u\, \bar{d})$, or $(t \to W^+ b \to u\, \bar{d}\, b)$.
This additional structure can be of great benefit for tagging the presence of the heavy initial state.
However, identification will be confounded if essential constituents of distinct prongs
either remain uncollected, or are merged together into a joint assemblage.
Standard practice is to err in the latter direction, via construction of a large-radius
jet that is intended to encapsulate all relevant showering
products, and to subsequently attempt recovery of the lost ``substructure''
using a separate algorithm such as $N$-subjettiness~\cite{Thaler:2010tr}.
The casting of this wide net invariably also sweeps up extraneous low-energy radiation at large
angular separations, and successful substructure tagging typically hinges on secondarily
``grooming'' large-radius jets with a technique such as Soft Drop~\cite{Larkoski:2014wba}.
Furthermore, the appropriate angular coverage is intrinsically dependent
upon the process being investigated, since more boosted parents will
tend to produce more collimated beams of children.
Accordingly, such methods are conventionally tuned to target
specific energy scales for maximal efficacy.
In this paper, we introduce a new jet clustering algorithm called
SIFT (Scale-Invariant Filtered Tree) that is engineered
to avoid losing resolution of structure in the first place.
Similar considerations previously motivated development
of the exclusive XCone~\cite{Stewart:2015waa,Thaler:2015xaa} algorithm
based on minimization of $N$-jettiness~\cite{Stewart:2010tn}.
Most algorithms in common use reference a fixed angular cone
size $R_0$, inside of which all presented objects
will cluster, and beyond which all exterior objects will be excluded.
We identify this parameter, and the conjugate momentum scale which it imprints,
as primary culprits responsible for the ensuing loss of resolution.
In keeping, our proposal rejects any such references
to an external scale, while asymptotically recovering successful
angular and kinematic behaviors of algorithms in the $k_\textrm{T}$-family.
Specifically, SIFT prioritizes pairing objects that have hierarchically dissimilar transverse momentum scales
(i.e., one member is soft \emph{relative} to its partner)
and narrow angular separation (i.e., the members are collinear).
The form these considerations lead to is quite similar to a
prior clustering measure named Geneva~\cite{BETHKE1992310}, although
our principal motivation (retention of substructure) differs
substantively from those that applied historically.
We additionally resolve a critical fault that otherwise precludes
the practical application of radius-free measures such as Geneva,
namely the tendency to gather uncorrelated soft radiation at wide
angular separations. This is accomplished with a novel filtering
and halting prescription that is motivated by Soft Drop,
but defined in terms of the scale-invariant measure itself and
applied to each candidate merger during the initial clustering.
The final-state objects retained after algorithm termination are isolated
variable-large-radius jets that dynamically bundle decay products of
massive resonances in response to the \emph{process} scale.
Since mutually hard prongs tend to merge last (this behavior
is very different from that of anti-$k_\textrm{T}$), the end stages
of clustering densely encode information indicating
the presence of substructure. Accordingly, we introduce the concept
of an $N$-subjet tree, which represents the ensemble of a
merged object's projections onto $N$ axes,
as directly associated with the clustering progression from $N$ prongs to $1$.
We demonstrate that the subjet axes obtained
in this manner are effective for applications such as
the computation of $N$-subjettiness. Additionally,
we confirm that they facilitate accurate and sharply-peaked
reconstruction of associated mass resonances.
Finally, we show that the sequential evolution history
of the SIFT measure across mergers is itself an excellent
substructure discriminant. As quantified with the
aid of a Boosted Decision Tree (BDT), we find that
it significantly outperforms the benchmark approach to
distinguishing one-, two-, and three-prong events using
$N$-subjettiness ratios, especially in
the presence of a large transverse boost.
The outline of this work is as follows.
Section~\ref{sct:algorithm} presents a master sequential jet clustering algorithm into which SIFT and the $k_\textrm{T}$ prescriptions may be embedded.
Sections~\ref{sct:siftmeasure}--\ref{sct:njettree} define the SIFT algorithm in terms of its scale-invariant measure
(with transformation to a coordinate representation), filtering and isolation criteria, and final-state $N$-subjet tree objects.
Section~\ref{sct:comparison} visually contrasts clustering priorities, soft radiation catchment, and halting status against common algorithms.
Sections~\ref{sct:reconstruction}--\ref{sct:tagging} comparatively assess SIFT's performance in various applications,
relating firstly to jet resolution and mass reconstruction, and secondly to the tagging of substructure.
Section~\ref{sct:theory} addresses computability, infrared and collinear safety, and $\mathcal{O}(1)$ deviations from recursive safety.
Section~\ref{sct:conclusion} concludes and summarizes.
Appendix~\ref{sct:collidervars} provides a pedagogical review of hadron collider coordinates.
Appendix~\ref{sct:software} describes available software implementations of the SIFT algorithm
along with methods for reproducing current results and plans for integration
with the {\sc FastJet}~\cite{Cacciari:2011ma,Cacciari:2005hq} contributions library.
\section{The Master Algorithm\label{sct:algorithm}}
\begin{figure*}[ht!]
\centering
\begin{tikzpicture}[node distance=7.5mm]
\node (start) [startstop] {Start};
\node (in1) [io, below=of start] {Input Candidate Physics Object Pool};
\node (dec1) [decision, aspect=3, text width=30mm, below=of in1, yshift=0mm, xshift=0mm] {Object Pool Count\\Exceeds Target?};
\node (pro1) [process, below=of dec1] {~~~~Identify Objects with Smallest Pairwise Separation Measure $\delta_{AB}$~~~~};
\node (dec2) [decision, aspect=3, text width=30mm, below=of pro1, yshift=0mm, xshift=-35mm] {Pairing Passes\\Soft/Wide Filter?};
\node (pro2) [process, text width=50mm, right=of dec2, yshift=0mm, xshift=5mm] {Sum Member Four-Vectors and\\Return Clustered Object to Pool};
\node (dec3) [decision, aspect=3, text width=30mm, below=of dec2, yshift=0mm, xshift=0mm] {Paired Scales\\Are Dissimilar?};
\node (pro3) [process, text width=50mm, right=of dec3, yshift=0mm, xshift=5mm] {Drop Softer Member and\\Return Harder Object to Pool};
\node (pro4) [process, text width=50mm, below=of pro3, yshift=-10mm, xshift=0mm] {Set Both Members Aside as\\Isolated Final State Objects};
\node (out1) [io, below=of dec3, yshift=-20mm, xshift=+35mm] {Output Final State $N$-Subjet Tree with Measure History};
\node (stop) [startstop, below=of out1] {Stop};
\draw [arrow] (start) -- (in1);
\draw [arrow] (in1) -- (dec1);
\draw [arrow] (dec1) -- node[anchor=east] {Yes} (pro1);
\draw [arrow] (dec1) -- node[anchor=south] {No} ++(-72mm,0) |- (out1);
\draw [arrow] (pro1) -- (dec2);
\draw [arrow] (dec2) -- node[anchor=south] {Yes} (pro2);
\draw [arrow] (dec2) -- node[anchor=east] {No} (dec3);
\draw [arrow] (dec3) -- node[anchor=south] {Yes} (pro3);
\draw [arrow] (dec3) |- node[anchor=east] {No} (pro4);
\draw [arrow] (pro2) -- ++(+38mm,0) coordinate (m1) |- (dec1);
\draw [arrow] (pro3) -| coordinate(m2) (m1);
\draw [arrow] (pro4) -| (m2);
\draw [arrow] (out1) -- (stop);
\end{tikzpicture}
\caption{\footnotesize
Generalized logical flow chart representing a family of sequential jet clustering algorithms
defined by a distance measure $\delta_{AB}$, along with a specification for
filtering and isolation criteria and/or a target final-state jet count for exclusive clustering.}
\label{fig:flow}
\end{figure*}
This section summarizes the global structure of a general sequential
jet clustering algorithm, as illustrated in FIG.~\ref{fig:flow}.
Examples are provided of how standard algorithms fit into this framework.
These examples will be referenced subsequently to motivate the SIFT
algorithm and to comparatively assess its performance.
To begin, a pool of $N$ low-level physics objects
(e.g., four-vector components of track-assisted calorimeter hits)
is populated, typically including reconstructed hadrons as well as
photons and light leptons that fail applicable hardness, identification, or isolation criteria.
The main loop then begins by finding the two most-proximal objects ($A,B$),
as defined via minimization of a specified distance measure $\delta_{AB}$ over each candidate pairing.
For example, the anti-$k_\textrm{T}$ algorithm is a member (with index $n = -1$) of a broader class
of algorithms that includes the earlier $k_\textrm{T}$~\cite{Catani:1993hr,Ellis:1993tq} (with $n = +1$)
and Cambridge-Aachen~\cite{Dokshitzer:1997in,Wobisch:1998wt} (with $n = 0$) formulations,
corresponding to the measure $\delta_{AB}^{k_{\rm T},n}$ defined as:
\begin{equation}
\delta_{AB}^{k_{\rm T},n} \equiv
{\rm min} \bigg[ \, {\left(p_\textrm{T}^A\right)}^{2n} \!\!,\, {\left(p_\textrm{T}^B\right)}^{2n} \, \bigg] \times
{\left( \frac{\Delta R_{AB}}{R_0} \right)}^{\!2}
\label{eq:ktmeasure}
\end{equation}
The quantity ${( \Delta R_{AB} )}^2 \equiv {( \Delta \eta)}^2 + {( \Delta \phi )}^2$
expresses ``geometric'' adjacency as a Cartesian norm-square of differences
in pseudo-rapidity $\eta$ and azimuthal angle $\phi $ (cf. Appendix~\ref{sct:collidervars})
relative to the maximal cone radius $R_0$.
The $(n = 0)$ scenario prioritizes small values of $\Delta R_{AB}$ without reference to the transverse momentum $p_\textrm{T}$.
A positive momentum exponent $(n = +1)$ first associates pairs wherein at least one member is very soft,
naturally rewinding the chronology of the showering process.
A negative momentum exponent $(n = -1)$ favors pairs wherein at least one
member represents hard radiation directed away from the beamline.
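
As an orientation aid (not part of any reference implementation), the $k_\textrm{T}$-family measure of Eq.~(\ref{eq:ktmeasure}) can be transcribed directly into Python, assuming each object is represented by its \texttt{pt}, \texttt{eta}, and \texttt{phi}:
\begin{verbatim}
import math

def kt_family_measure(a, b, n, R0=1.0):
    # min(pT_A^2n, pT_B^2n) * (Delta R_AB / R0)^2,
    # with n = +1 (kT), 0 (Cambridge-Aachen), -1 (anti-kT)
    dphi = (a["phi"] - b["phi"] + math.pi) % (2*math.pi) - math.pi
    dr2 = (a["eta"] - b["eta"])**2 + dphi**2
    return min(a["pt"]**(2*n), b["pt"]**(2*n)) * dr2 / R0**2
\end{verbatim}
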
Once an object pair has been selected, there are various ways
to handle its members. Broadly, three useful alternatives are
available, identified here as clustering, dropping, and isolating.
The criteria for distinguishing between these actions are
an important part of any algorithm's halting condition.
Clustering means that the objects ($A,B$) are merged, usually
via a four-vector sum ($p^\mu_{AB} = p^\mu_{A} + p^\mu_{B}$),
and replaced by this merged object in the pool.
Typically, clustering proceeds unless the applicable
angular separation is too great and/or the momentum scales
of the members are too dissimilar.
For pairings failing this filter,
two options remain. Dropping, wherein the softer member is
set aside (it may literally be discarded or rather
reclassified as a final-state object) while the harder
member is returned to the object pool, sensibly applies
when momentum scales are hierarchically imbalanced.
Conversely, a symmetric treatment of both members
is motivated when momentum scales are similar,
and isolating involves mutual reclassification
as final-state objects. Exclusive algorithms
(which guarantee a fixed ending count $N_{\rm exc}$ of jets)
typically always cluster, although any construction
that reduces the net object count by
one unit per iteration is consistent\footnote{
A valid example is clustering plus dropping without isolation.}.
To complete the prior example, conventional implementations of $k_\textrm{T}$-family
clustering do not discard objects or collectively isolate final-state pairs.
However, they do singly reclassify all objects with no partners
nearer than $R_0$ in $\Delta R$ as final-state jets.
This behavior may be conveniently embedded into the master algorithm flow
by also allowing each object in the active pool to pair one at a time with the ``beam''
and associating it with the alternative measure $\delta_{\rm beam}^{k_{\rm T},n}=p_\textrm{T}^{2n}$.
The dropping criterion is then adapted to set aside any object for which
this beam distance is identified as the global minimum.
Note that such objects are indeed guaranteed to have
no neighbors inside a centered cone of size $R_0$.
The described loop repeats until the number of objects remaining in the
active pool reaches a specified threshold. In the context of an exclusive
clustering mode this would correspond to a target count $N_{\rm exc}$,
whereas continuation otherwise simply requires the presence of multiple ($N>1$) objects.
Many clustering algorithms, including SIFT, may be operated in either of these modes.
Upon satisfaction of the halting criteria, the list of final-state objects
is returned along with a record of the clustering sequence and associated measure history.
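
The flow just described can be summarized in a schematic (and deliberately unoptimized) Python skeleton, sketched below under the assumption that objects are plain $(E, p_x, p_y, p_z)$ tuples. The pairwise measure and the cluster/drop/isolate decision are supplied as callables, and the $k_\textrm{T}$-family beam-distance bookkeeping is omitted for brevity; exclusive clustering corresponds to a \texttt{classify} that always returns \texttt{'cluster'} together with a target count \texttt{n\_target} $= N_{\rm exc}$.
\begin{verbatim}
def et2(p):
    # squared transverse energy E_T^2 = E^2 - pz^2, p = (E, px, py, pz)
    return p[0]**2 - p[3]**2

def sequential_cluster(objects, measure, classify, n_target=1):
    pool = [tuple(p) for p in objects]       # active object pool
    final, history = [], []                  # isolated objects, measure record
    while len(pool) > n_target:
        i, j = min(((i, j) for i in range(len(pool)) for j in range(i)),
                   key=lambda ij: measure(pool[ij[0]], pool[ij[1]]))
        a, b = pool.pop(i), pool.pop(j)      # i > j, so pop i first
        history.append(measure(a, b))
        action = classify(a, b)              # 'cluster' | 'drop' | 'isolate'
        if action == 'cluster':              # four-vector sum rejoins the pool
            pool.append(tuple(x + y for x, y in zip(a, b)))
        elif action == 'drop':               # softer member is set aside
            pool.append(a if et2(a) > et2(b) else b)
        else:                                # both members become final states
            final.extend([a, b])
    return pool + final, history
\end{verbatim}
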
The SIFT algorithm will now be systematically developed
by providing specific prescriptions for each element of the master algorithm.
The scale-invariant measure $\delta_{AB}$ is defined in Section~\ref{sct:siftmeasure}
and provided with an intuitive geometric form in Section~\ref{sct:geometricform}.
The filtering and isolation criteria are itemized in Section~\ref{sct:filtering}.
The $N$-subjet tree is introduced in Section~\ref{sct:njettree}.
\section{The Scale-Invariant Measure~\label{sct:siftmeasure}}
This section establishes the SIFT clustering measure $\delta_{AB}$ and
places it in the context of similar constructs from the literature.
Our principal objective is to develop an approach that is intrinsically
resilient against loss of substructure in boosted event topologies,
i.e., which maintains resolution of collimated radiation
associated with distinct partonic precursors in the large-$p_\textrm{T}$ limit.
We identify specification of an angular size parameter $R_0$ as the primary culprit imposing
a conjugate momentum scale dependence on the performance of traditional approaches to jet clustering.
In pursuit of a scale-independent algorithm,
we require that the clustering measure be free of any such factor.
Nevertheless, it is desirable to asymptotically
recover angular and kinematic characteristics
of existing successful approaches such as anti-$k_\textrm{T}$ (cf. Eq.~\ref{eq:ktmeasure} with $n = -1$),
and we thus seek out proxies for its dominant behaviors.
Specifically, these include a preference for pairs with close angular proximity,
and a preference for pairs where one member carries a large transverse boost.
For the former purpose, we invoke the mass-square difference, defined as follows:
\begin{align}
\Delta m_{AB}^{2} &\equiv (p^{\mu}_A + p^{\mu}_B)^2 - m_{A}^{2} - m_{B}^{2}
\,=\, 2 p^{\mu}_A p_{\mu}^{B}
\label{eq:dmsq}\\
& \simeq 2 E^{A}E^{B}\times(1-\cos\Delta \theta_{AB})
\simeq E^{A} E^{B}\Delta \,\theta_{AB}^{2} \nonumber
\end{align}
The property that small mass-square changes correlate with collinearity of
decay products was similarly leveraged by the early JADE~\cite{Bethke:1988zc} algorithm.
For the latter purpose, we turn to suppression in a denominator by the summed transverse energy-square:
\begin{align}
{\textstyle \sum}\, E_\textrm{T}^2 &\equiv (E_\textrm{T}^{A})^{2} + (E_\textrm{T}^{B})^{2}
\label{eq:etsq} \\
E_\textrm{T}^2 &\equiv p_\textrm{T}^2 + m^2 = E^2 - p_z^2
\nonumber
\end{align}
The summation plays a role similar to that of the ``min'' criterion in Eq.~(\ref{eq:ktmeasure}),
in the case that the pair of objects under consideration is very asymmetrically boosted.
The choice of $E_\textrm{T}^2$ over simply $p_\textrm{T}^2$ prevents a certain
type of divergence: the vector quantity $\vec{p}_{\rm T}$ may cancel
during a certain phase of the clustering, but not without the generation of mass, which keeps $E_\textrm{T}$ finite.
All together, the simple prescription for the proposed algorithm is to sequentially
cluster indexed objects $A$ and $B$ corresponding to the smallest pairwise value
of the following expression:
\begin{equation}
\delta_{AB} \equiv
\frac{\Delta m_{AB}^2}{(E_\textrm{T}^{A})^{2}+(E_\textrm{T}^{B})^{2}}
\label{eq:siftmeasure}
\end{equation}
Since this measure represents our default context,
we write it without an explicit superscript label for simplicity.
We note that it consists of the dimensionless ratio of a Lorentz invariant
and an invariant under longitudinal boosts, and that it manifestly possesses the desired
freedom from arbitrary externally-specified scales.
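
For reference, a direct numerical transcription of Eq.~(\ref{eq:siftmeasure}) takes only a few lines (a sketch, with four-vectors as $(E, p_x, p_y, p_z)$ tuples and metric signature $(+,-,-,-)$); it is precisely the kind of \texttt{measure} callable anticipated by the skeleton of Section~\ref{sct:algorithm}.
\begin{verbatim}
def sift_measure(a, b):
    # delta_AB = Delta m_AB^2 / (E_T_A^2 + E_T_B^2), Delta m^2 = 2 pA.pB
    dot = a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]
    et2 = lambda p: p[0]**2 - p[3]**2
    return 2.0 * dot / (et2(a) + et2(b))
\end{verbatim}
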
For comparison, the JADE clustering measure is:
\begin{equation}
\delta_{AB}^{\rm JADE} \equiv \frac{E_{A}E_{B}}{s}\times\left(1-\cos\Delta \theta_{AB}\right)
\label{eq:JADE}
\end{equation}
Having been designed for application at a lepton collider,
JADE~\cite{JADE:1986kta} references total energies ($E_A,E_B$)
and spherically symmetric angular separations $\Delta \theta_{AB}$
rather than cylindrical quantities.
It also treats all merger candidates as massless,
i.e., it uniformly factors ($\vert\vec{p}\vert\Rightarrow E$)
out of the angular dependence.
Although Eq.~(\ref{eq:JADE}) is dimensionless and avoids referencing
a fixed cone size, it is not scale-invariant, due to normalization
against the global Mandelstam center-of-momentum energy $\sqrt{s}$.
This distinction is amplified in a hadron collider context,
where the partonic $\sqrt{\hat{s}}$ and laboratory center-of-momentum
frames are generally not equivalent.
However, the historical measure most alike to SIFT is Geneva,
which was developed as a modification to JADE:
\begin{equation}
\delta_{AB}^{\rm Geneva} \equiv \frac{8}{9} \frac{E_{A}E_{B}}{{(E_A+E_B)}^2}\times\left(1-\cos\Delta \theta_{AB}\right)
\label{eq:geneva}
\end{equation}
Like Eq.~(\ref{eq:siftmeasure}), Eq.~(\ref{eq:geneva}) achieves scale-invariance
by constructing its denominator from dimensionful quantities local to the clustering process.
There are also several differences between the two forms, which are summarized here and explored further
in Section~\ref{sct:geometricform}. Variations in the dimensionless normalization are irrelevant\footnote{
Geneva was introduced with a coefficient of $8/9$ to match the normalization of JADE in the limit of three hard prongs.}.
The substitution of cylindrical for spherical kinematic quantities is relevant, albeit trivial.
Exchanging the squared sum for a sum of squares causes the SIFT measure to fall off more sharply than the Geneva measure as the momentum scales of the clustering candidates diverge. However, the most
critical distinguishing feature is that SIFT is sensitive to accumulated mass.
Despite core similarities at measure level, the novel filtering and isolation criteria
described in Section~\ref{sct:filtering} are responsible for essential divergences
in the properties of final-state objects produced by SIFT relative to those from Geneva.
In particular, we will show that the $N$-subjet tree introduced in Section~\ref{sct:njettree}
is effective for reconstructing hard objects and tagging the presence of substructure
at large transverse boosts without the need for additional de/reclustering or post-processing.
\begin{figure*}[ht!]
\centering
\hspace{7pt}\includegraphics[width=.95\textwidth]{Figures/Delta_RSq_Tilde.pdf}
\vspace{-12pt}
\caption{\footnotesize
The factor $\Delta \widetilde{R}^2_{AB}$ is compared to
the traditional angular measure $\Delta R^2_{AB}$,
as a function of $\Delta\eta$ and $\Delta \phi$,
and for various values of ($m/p_\textrm{T}$).
Regions where the SIFT measure enhances (suppresses)
clustering are shown in green (red).
Blue indicates similar behavior, with
black contour placed at unity. Grey contours mark increments
of $0.1$ in $\log_{10}(\Delta \widetilde{R}^2_{AB}\div \Delta R^2_{AB})$.}
\label{fig:drtilde}
\end{figure*}
\section{The Geometric Form\label{sct:geometricform}}
This section presents a transformation of the SIFT clustering
measure into the geometric language of coordinate differences.
While the underlying physics is invariant under this change
of variables, the resulting expression is better suited for developing
intuition regarding clustering priorities and how they compare to those of traditional algorithms.
It will also be used to motivate and express the filtering and isolation criteria in Section~\ref{sct:filtering},
and to streamline calculations touching on computational safety in Section~\ref{sct:theory}.
Additionally, it is vital for realizing fast numerical implementations of the SIFT
algorithm that employ optimized data structures based on geometric coordinate adjacency.
The canonical form of distance measure on a manifold is
a sum of bilinear coordinate differentials $dx^{i} dx^{j}$
with general coordinate-dependent coefficients $g_{ij}$.
Measures applicable to jet clustering must be integrated,
referencing finite coordinate separations $\Delta x^i$.
The measure expressed in Eq.~(\ref{eq:siftmeasure}) does not apparently
refer to coordinate differences at all, although Eq.~(\ref{eq:dmsq})
provides a hint of how an implicit dependence of this type might manifest.
We start from the mass-square difference:
\begin{equation}
\Delta m_{AB}^2
= 2 \times \left( E^{A}E^{B} - p_z^{A} p_z^{B} - p_\textrm{T}^{A} p_\textrm{T}^{B} \cos\Delta \phi_{AB} \right)
\label{eq:dmsqdiffs}
\end{equation}
To proceed, recall that Lorentz transformations are generated
by hyperbolic ``rotation'' in rapidity $y$. In particular, we may
boost via matrix multiplication from the transverse frame with $y=0$,
$p_z = 0$, and $E = E_\textrm{T}$ to any longitudinally related frame.
\begin{equation}
\begin{pmatrix}
E \\ p_z
\end{pmatrix}
=
\begin{pmatrix}
\cosh y & \sinh y \\
\sinh y & \cosh y
\end{pmatrix}
\begin{pmatrix}
E_\textrm{T} \\ 0
\end{pmatrix}
=
\begin{pmatrix}
E_\textrm{T} \cosh y \\
E_\textrm{T} \sinh y \\
\end{pmatrix}
\label{eq:hyperbolic_rot}
\end{equation}
This may be used to reduce Eq.~(\ref{eq:dmsqdiffs}),
using the standard hyperbolic difference identity.
\begin{align}
& E^{A}E^{B} - p_z^{A} p_z^{B}
\label{eq:hyperbolic_diffs} \\
\quad=\,\,& E_\textrm{T}^{A} E_\textrm{T}^{B} \times \left( \cosh y^{A} \cosh y^{B} - \sinh y^{A} \sinh y^{B} \right)
\nonumber \\
\quad=\,\,& E_\textrm{T}^{A} E_\textrm{T}^{B} \times \cosh \Delta y_{AB}
\nonumber
\end{align}
The transverse energy will factor perfectly out of the
mass-square difference if the constituent four-vectors
are individually massless, i.e.,~if $(m_{A}=m_{B}=0)$.
Otherwise, there are residual coefficients $(\xi_A,\xi_B)$ defined as follows:
\begin{align}
\Delta m_{AB}^{2} &=2\, E_\textrm{T}^{A} E_\textrm{T}^{B} \, \times \left( \cosh \Delta y_{AB} -\xi^{A} \xi^{B} \cos\Delta \phi_{AB} \right)
\nonumber \\
\xi &\equiv\, \frac{p_\textrm{T}}{E_\textrm{T}}
\,=\, {\bigg(1-\frac{m^2}{E_\textrm{T}^2}\bigg)}^{\hspace{-2pt}+\sfrac{1}{2}}
\!\!\!=\, {\bigg(1+\frac{m^2}{p_\textrm{T}^2}\bigg)}^{\hspace{-2pt}-\sfrac{1}{2}}
\label{eq:dmsqdiffs2}
\end{align}
The role of $\xi$ in Eq.~(\ref{eq:dmsqdiffs2})
is to function as a ``lever arm'' deemphasizing
azimuthal differences in the non-relativistic limit and at low $p_\textrm{T}$.
The precise relationship between $\Delta m_{AB}^2$ and the angular
separation $\Delta R_{AB}^2$ can also now be readily established,
referencing Taylor expansions of the cosine and hyperbolic cosine in the small-separation limit.
\begin{align}
\Delta \widetilde{R}_{AB}^2
&\equiv \frac{\Delta m_{AB}^2}{E_\textrm{T}^{A} E_\textrm{T}^{B}}
\label{eq:drsqtilde} \\
&=2\, \times \left( \cosh \Delta y_{AB} -\xi^{A} \xi^{B} \cos\Delta \phi_{AB} \right)
\nonumber \\
&\simeq\, \Delta \eta_{AB}^2 + \Delta \phi_{AB}^2
\,\equiv\, \Delta R_{AB}^2
\nonumber
\end{align}
\begin{figure*}[ht!]
\centering
\hspace{7pt}\includegraphics[width=.95\textwidth]{Figures/Twice_Epsilon_AB.pdf}
\vspace{-12pt}
\caption{\footnotesize
The factor $2\times\epsilon^{AB}$ is log-symmetric
in the ratio $E_\textrm{T}^A/E_\textrm{T}^B$ of transverse energies,
becoming small whenever candidate scales are hierarchically dissimilar.
SIFT (blue) is compared against analogous behaviors for the Geneva (green) and
$k_\textrm{T}$-family (red) algorithms, the latter at ($\,\beta \equiv E_\textrm{T}^{A}E_\textrm{T}^{B}/E_0^2 \Rightarrow 1\,$).
Grey contours illustrate scaling of the $k_\textrm{T}$ measures with a power $n=\pm 1$ of
($\,\beta \Rightarrow \sfrac{1}{10},\, \sfrac{1}{5},\, \sfrac{1}{2},\, 2,\, 5,\,10\,$).}
\label{fig:epsilon}
\end{figure*}
The indicated correspondence becomes increasingly exact as one approaches
the collinear $(\Delta R^2 \ll 1)$ and massless
($\Delta y \Rightarrow \Delta \eta$, $\xi \Rightarrow 1$) limits.
The fact that $\cosh \Delta y$ is unbounded
and has purely positive series coefficients,
whereas $\cosh \Delta \phi$ is bounded
and its terms are of alternating sign,
implies that $\Delta\widetilde{R}^2$ is more sensitive to separations
in rapidity than separations in azimuth.
The ratio ($\Delta\widetilde{R}^2 \div \Delta R^2$) is explored
graphically in FIG.~\ref{fig:drtilde} as a function of $\Delta\eta$
and $\Delta\phi$, for various values of ($m/p_\textrm{T}$). For purposes of
illustration, deviations from ($\eta = 0$) are applied symmetrically
to the candidate object pair,
and the quoted mass ratio applies equivalently to both objects.
Black contours indicate unity, bisecting regions of
neutral bias, which are colored in blue.
Grey contours are spaced at increments of $0.1$
in the base-$10$ logarithm, and regions where SIFT
exhibits enhancement (suppression) of clustering
are colored green (red).
In the massless limit (lefthand panel) a near-symmetry
persists under ($\Delta \eta \Leftrightarrow \Delta\phi$),
with corrections from subleading terms as described previously.
When $m$ approaches $p_\textrm{T}$ (center panel), the pseudo-rapidity versus azimuth
symmetry is meaningfully broken at large $\Delta R$ and a
strong aversion emerges to the clustering of massive states
at small $\Delta R$. The latter effect dominates for
highly non-relativistic objects (righthand panel),
and extends to larger separations, such that the
distinction between $\Delta \eta$ and $\Delta\phi$
is again washed out. SIFT binds objects
approaching ($\Delta R \sim \pi$) significantly more tightly
than the $k_\textrm{T}$ algorithms in this limit. Intuition
for that crossover in sign can be garnered from leading
terms in the multi-variate expansion below, as developed from
Eqs.~(\ref{eq:dmsqdiffs2},~\ref{eq:drsqtilde},~\ref{eq:rap}):
\begin{align}
\label{eq:rsqeffdiff}
\Delta \widetilde{R}_{AB}^2 &\Rightarrow \Delta R_{AB}^2 \\
&+ \Bigg\{1-\frac{\Delta R_{AB}^2}{2}\Bigg\}\times
\Bigg\{\left(\frac{m_A}{p_\textrm{T}^A}\right)^2\!\!+\left(\frac{m_B}{p_\textrm{T}^B}\right)^2\Bigg\}
+ \cdots
\nonumber
\end{align}
We turn attention next to the denominator from Eq.~(\ref{eq:siftmeasure}),
defining a new quantity $\epsilon^{AB}$ in conjunction
with the transverse energy product factored out of $\Delta m_{AB}^2$.
\begin{equation}
\epsilon^{AB}
\equiv
\frac{E_\textrm{T}^{A} E_\textrm{T}^{B}}{(E_\textrm{T}^{A})^{2} + (E_\textrm{T}^{B})^{2}}
= \left\{\left(\frac{E_\textrm{T}^{A}}{E_\textrm{T}^{B}}\right) + \left(\frac{E_\textrm{T}^{B}}{E_\textrm{T}^{A}}\right)\right\}^{-1}
\label{eq:eps}
\end{equation}
This expression has a symmetry under the transformation
$(\alpha \equiv E_\textrm{T}^A/E_\textrm{T}^B \Rightarrow \alpha^{-1})$. It is maximized
at $\alpha = 1$, where $(\epsilon^{AB} \Rightarrow \sfrac{1}{2})$,
and minimized at $\alpha = (0,+\infty)$, where $(\epsilon^{AB} \Rightarrow 0)$.
The response is balanced by a change of variables
$\Delta u = \ln \alpha$, i.e.,~$\alpha = e^{\Delta u}$,
where the logarithm converts ratios into differences.
\begin{align}
u &\equiv \ln {\Big( E_\textrm{T} / {\rm [GeV]} \Big)}
\label{eq:cosh_et} \\
\epsilon^{AB} &= {\Big( e^{+\Delta u_{AB}} + e^{-\Delta u_{AB}}\Big)}^{-1}
= {\Big( 2\cosh \Delta u_{AB}\Big)}^{-1}
\nonumber
\end{align}
Putting everything together, we arrive at a
formulation of the measure from Eq.~(\ref{eq:siftmeasure})
that is expressed almost entirely in terms of coordinate differences
of the rapidities, azimuths, and log-transverse energies,
excepting the coefficients $\xi$ from Eq.~(\ref{eq:dmsqdiffs2}), which depend on the ratios $m/p_\textrm{T}$.
Of course, it is possible to adopt a reduction of the measure where $(\xi \Rightarrow 1)$ by fiat,
which is equivalent to taking a massless limit in the fashion of JADE and Geneva,
as described in Section~\ref{sct:siftmeasure}.
\begin{align}
\delta_{AB} &= \epsilon^{AB} \times \Delta\widetilde{R}_{AB}^2
\label{eq:measure_diffs} \\
&= \frac{ \cosh \Delta y_{AB} -\xi^{A} \xi^{B} \cos\Delta \phi_{AB} }{\cosh \Delta u_{AB}}
\nonumber
\end{align}
The scale-invariance of Eq.~(\ref{eq:measure_diffs}) is explicit, in two regards.
By construction, there is no reference to an external angular cutoff $R_0$.
In addition, the fact that transverse energies are referenced
only via ratios, and never in absolute terms, is emergent.
The measure is additionally observed to smoothly blend attributes of
$k_\textrm{T}$ and anti-$k_\textrm{T}$ jet finding, insomuch as
the former prioritizes clustering when one member of a pair is soft,
the latter when one member of a pair is hard,
and SIFT when the transverse energies are logarithmically disparate.
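
The equivalence of Eq.~(\ref{eq:siftmeasure}) and Eq.~(\ref{eq:measure_diffs}) is also easy to confirm numerically. The following sketch (with purely illustrative kinematic values) builds two massive four-vectors from $(p_\textrm{T}, y, \phi, m)$ and evaluates the measure in both forms:
\begin{verbatim}
import math

def four_vector(pt, y, phi, m):
    et = math.sqrt(pt**2 + m**2)             # transverse energy
    return (et*math.cosh(y), pt*math.cos(phi),
            pt*math.sin(phi), et*math.sinh(y))

def delta_vectors(a, b):                     # four-vector form
    dot = a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]
    et2 = lambda p: p[0]**2 - p[3]**2
    return 2.0*dot / (et2(a) + et2(b))

def delta_coordinates(a, b):                 # coordinate-difference form
    def coords(p):
        E, px, py, pz = p
        et, pt = math.sqrt(E**2 - pz**2), math.hypot(px, py)
        return (0.5*math.log((E + pz)/(E - pz)),  # rapidity y
                math.log(et),                     # u = ln(E_T/GeV)
                math.atan2(py, px),               # azimuth phi
                pt/et)                            # lever arm xi
    ya, ua, pa, xa = coords(a)
    yb, ub, pb, xb = coords(b)
    return ((math.cosh(ya - yb) - xa*xb*math.cos(pa - pb))
            / math.cosh(ua - ub))

a = four_vector(pt=250.0, y=0.4, phi=0.2, m=15.0)
b = four_vector(pt=80.0, y=-0.1, phi=-0.9, m=5.0)
print(delta_vectors(a, b), delta_coordinates(a, b))  # agree to rounding
\end{verbatim}
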
\begin{figure*}[ht!]
\centering
\includegraphics[width=.95\textwidth]{Figures/Phase_Diagram.pdf}
\vspace{-12pt}
\caption{\footnotesize
Phase diagram for the separation of object merging, filtering, and isolation responses.}
\label{fig:phase}
\end{figure*}
This behavior is illustrated in FIG.~\ref{fig:epsilon}, with
$2\times\epsilon^{AB}$ plotted in blue
as a function of ($\alpha \equiv E_\textrm{T}^A/E_\textrm{T}^B$).
For comparison, the analogous momentum-dependent factor
for Geneva from Eq.~(\ref{eq:geneva}) is shown in
green on the same axes, taking ($E \Rightarrow E_\textrm{T}$)
and normalizing to unity at ($\alpha = 1$).
Like SIFT, Geneva is symmetric with respect to variation of
the absolute event scale $\beta$.
However, the extra cross term appearing in its denominator
produces heavier tails when the scale ratio $\alpha$ is unbalanced.
The cusped red region similarly represents the $k_\textrm{T}$ and anti-$k_\textrm{T}$ measures.
Specifically, the following expression is proportional
to Eq.~(\ref{eq:ktmeasure}) for ($n=\pm1$) in the massless limit:
\begin{equation}
\delta_{AB}^{k_{\rm T},n} \,\appropto\,
{\left(\frac{E_\textrm{T}^{A}E_\textrm{T}^{B}}{E_0^2}\right)}^{\!\!n}\!\!
\times
{\rm min} \left[ \frac{E_\textrm{T}^{A}}{E_\textrm{T}^{B}}, \frac{E_\textrm{T}^{B}}{E_\textrm{T}^{A}} \right]
\label{eq:measureprop}
\end{equation}
The selected normalization agrees with $2\times\epsilon^{AB}$
in the further limits ($\beta \equiv E_\textrm{T}^{A}E_\textrm{T}^{B}/E_0^2 \Rightarrow 1$)
and ($\alpha \Rightarrow 1$), where $E_0$ is an arbitrary constant
reference energy. The distinction between $k_\textrm{T}$
and anti-$k_\textrm{T}$ clustering amounts to an enhancement
versus suppression by the product (squared geometric mean)
of transverse momenta. This is illustrated with the grey
contours in FIG.~\ref{fig:epsilon}, which rescale
by $\beta = (\sfrac{1}{10},\,\sfrac{1}{5},\,\sfrac{1}{2},\,2,\,5,\,10)$
from inner-lower to outer-upper for
($n=+1$), or in the reverse order for ($n=-1$).
The $\beta$-invariant case ($n=0$) is also potentially of interest,
but it is a new construction that is not to be confused with
the Cambridge-Aachen algorithm, which has no energy dependence at all.
We conclude this section with a transformation that identifies
the coordinate $u$ introduced in Eq.~(\ref{eq:cosh_et})
as a sort of ``dual'' to the rapidity $y$ from Eq.~(\ref{eq:rap}).
The log-transverse momentum $\ln (\,p_\textrm{T}\,/{\rm [GeV]})$ is similarly linked
to pseudo-rapidity $\eta$ in the massless limit.
\begin{alignat}{2}
\Omega_\pm &\equiv \phantom{\frac{1}{2}} \ln {\bigg( \frac{E \pm p_z}{{\rm [GeV]}} \bigg)} &&
\label{eq:rap_loget} \\
\frac{\Omega_+ +\hspace{2pt} \Omega_-}{2} &= \frac{1}{2} \ln {\bigg( \frac{E^2 - p_z^2}{{\rm [GeV]}^2} \bigg)} &&\,=\, u
\nonumber \\
\frac{\Omega_+ -\hspace{2pt} \Omega_-}{2} &= \frac{1}{2} \ln {\bigg( \frac{E + p_z}{E - p_z} \bigg)} &&\,=\, y
\nonumber
\end{alignat}
\section{Filtering, Isolation, and Halting\label{sct:filtering}}
This section establishes the SIFT filtering and isolation criteria,
which are used in conjunction to formulate a suitable halting
condition for the non-exclusive clustering mode.
Direct integration of a grooming stage effectively rejects stray radiation and pileup.
SIFT's behavior will be visualized with and without filtering
in Section~\ref{sct:comparison}, and compared against each
$k_\textrm{T}$-family algorithm in the presence of a soft ``ghost'' radiation background.
In conjunction, the two factorized terms in Eq.~(\ref{eq:measure_diffs})
ensure that clustering prioritizes the merger of object pairs that
have a hierarchically soft member and/or that are geometrically collinear,
mimicking fundamental poles in the matrix element for QCD showering.
This behavior is illustrated by the ``phase diagram'' in FIG.~\ref{fig:phase},
where the product of horizontal ``$x$'' and vertical ``$y(x)$'' coordinates on that plane
is equal to $\delta_{AB}$. Grey ``$y(x) = 1/x$'' contours trace constant values
of the measure, equal to ($.002,\,.005,\, .01,\, .02,\, .05,\, .1,\, .2,\, .5$),
with minimal values gathered toward the lower-left.
As a consequence, SIFT successfully preserves mutually hard structures with tight angular adjacency,
maintaining their resolution as distinct objects up to the final stages of clustering\footnote{
Mutually soft pairings tend not to occur, since such objects are typically
gathered up by a harder partner at an early stage.}.
However, iterative application of the SIFT (or Geneva) measure
does not offer an immediately apparent halting mechanism,
and it will ultimately consume any presented
objects into a single all-encompassing jet if left to run.
These measures are additionally prone to sweeping up uncorrelated soft
radiation onto highly-boosted partners at wide angular separation.
The solution to both problems turns out to be related.
For inspiration, we turn to the Soft Drop procedure,
wherein a candidate jet is recursively declustered and
the softer of separated constituents is discarded
until the following criterion is satisfied:
\begin{equation}
\frac{\min\, (p_\textrm{T}^A,p_\textrm{T}^B)}{p_\textrm{T}^A+p_\textrm{T}^B} > z_{\rm cut} \left(\frac{\Delta R_{AB}}{R_0}\right)^\beta
\label{eq:soft_drop}
\end{equation}
The dimensionless $z_{\rm cut}$ coefficient is typically $\mathcal{O}(0.1)$.
The exponent $\beta$ can vary for different applications, although we focus here on $\beta = 2$.
We first attempt to recast the elements of Eq.~(\ref{eq:soft_drop}) into
expressions with asymptotically similar behavior that adopt the vocabulary of Eq.~(\ref{eq:siftmeasure}).
The factor $\Delta \widetilde{R}_{AB}^2$ can be carried over directly. Likewise,
$\epsilon^{AB}$ behaves similarly to a minimized ratio of transverse energies,
with the advantage of analyticity.
\begin{align}
\epsilon^{AB}
&= \left\{\min \left(\frac{E_\textrm{T}^{A}}{E_\textrm{T}^{B}},\frac{E_\textrm{T}^{B}}{E_\textrm{T}^{A}}\right) + \max \left(\frac{E_\textrm{T}^{A}}{E_\textrm{T}^{B}},\frac{E_\textrm{T}^{B}}{E_\textrm{T}^{A}}\right)\right\}^{-1}
\nonumber \\
&\simeq \left\{\max \left(\frac{E_\textrm{T}^{A}}{E_\textrm{T}^{B}},\frac{E_\textrm{T}^{B}}{E_\textrm{T}^{A}}\right)\right\}^{-1}
\nonumber \\
&= \min \left(\frac{E_\textrm{T}^{A}}{E_\textrm{T}^{B}},\frac{E_\textrm{T}^{B}}{E_\textrm{T}^{A}}\right)
\simeq \frac{ \min ( E_\textrm{T}^{A}, E_\textrm{T}^{B} )}{ E_\textrm{T}^{A} + E_\textrm{T}^{B}}
\label{eq:min_proxy}
\end{align}
Paired factors of $2$ previously emerged ``naturally''
in Eqs.~(\ref{eq:drsqtilde},~\ref{eq:cosh_et}),
and we group them here in a way that could be interpreted
as setting ``$z_{\rm cut} = \sfrac{1}{4}$''
in the context of a large-radius $(R_0 = 1)$ jet.
Putting all of this together, the suggested analog to Eq.~(\ref{eq:soft_drop}) is as follows.
\begin{equation}
\textrm{Cluster:} \hspace{12pt}
\frac{\Delta \widetilde{R}_{AB}^{2}}{2} <
\big\{ \left( 2\, \epsilon^{AB}\right) \le 1 \big\}
\label{eq:sd_proxy}
\end{equation}
An important distinction from conventional usage is
that this protocol is to be applied during the initial clustering cycle
itself, at the point of each candidate merger.
This is similar in spirit to Recursive Soft Drop~\cite{Dreyer:2018tjj}.
Intuition may be garnered by turning again to FIG.~\ref{fig:phase},
where clustering consistent with the Eq.~(\ref{eq:sd_proxy}) prescription
is observed to occur only in the upper-left (green) region of the phase diagram,
above the ``$y(x)=x$'' diagonal cross-cutting contours of the measure.
The upper bound on $(2\,\epsilon^{AB})$ precludes
clustering if $(\Delta\widetilde{R}_{AB} \ge \sqrt{2})$.
However, this angular threshold is \emph{dynamic},
and the capturable cone diminishes in area
with increasing imbalance of the transverse scales.
This condition may be recast into a limit
on the clustering measure from Eq.~(\ref{eq:measure_diffs}).
\begin{equation}
\textrm{Cluster:} \hspace{12pt}
\delta_{AB} < \big\{ \left( 2\, \epsilon^{AB} \right)^2 \le 1 \big\}
\label{eq:sd_measure}
\end{equation}
The question now is what becomes of those objects rejected
by the Eq.~(\ref{eq:sd_measure}) filter.
Specifically, are they discarded, or are they classified
for retention as isolated final-state jets?
One possible solution is to simply determine this based on the magnitude
of the transverse energy, but that runs somewhat counter to
current objectives. Seeking a simple scale-invariant
criterion for selecting between isolation and rejection,
we observe that there are two distinct ways in which Eq.~(\ref{eq:sd_proxy}) may fail.
Namely, the angular opening may be too wide, and/or the transverse scales
may be too hierarchically separated. This is illustrated again by
FIG.~\ref{fig:phase}, wherein one may exit the clustering region (green)
by moving rightward (wider angular separation) or downward (more scale disparity).
If the former cause is dominant,
e.g., if $(\Delta \widetilde{R}_{AB}^{2} \gg 1)$ and $(\epsilon^{AB} \sim 1)$
such that both members are on equal footing and $(\delta_{AB}\gg 1)$,
then collective isolation as a pair of distinct final states is appropriate.
Conversely, if the latter cause primarily applies,
e.g., if $(\epsilon^{AB} \ll 1)$ and $(\Delta \widetilde{R}_{AB}^2 \sim 1)$ such that $(\delta_{AB}\ll 1)$,
then the pertinent action is to asymmetrically set aside just the softer candidate\footnote{
Although we treat ``drop'' here as a synonym for ``discard'', a useful
alternative (cf. Section~\ref{sct:algorithm}) is single reclassification
as a final state, since softness is not guaranteed in an absolute sense.}.
These scenarios may be quantitatively distinguished in a manner that
generates a balance with and continuation of Eq.~(\ref{eq:sd_proxy}), as follows.
\begin{alignat}{2}
\textrm{Drop:} &&\hspace{3pt} \big\{ \left( 2\, \epsilon^{AB}\right) \le 1 \big\} &\le \frac{\Delta \widetilde{R}_{AB}^{2}}{2} < \big\{ 1 \le \left( 2\, \epsilon^{AB}\right)^{-1} \big\}
\nonumber \\
\textrm{Isolate:} &&\hspace{3pt} \big\{ 1 \le \left( 2\, \epsilon^{AB}\right)^{-1} \big\} &\le \frac{\Delta \widetilde{R}_{AB}^{2}}{2}
\label{eq:iso_drop}
\end{alignat}
Each criterion may also be recast in terms of the clustering measure,
with isolation always and only indicated against an absolute reference
value ($1$) of the SIFT measure $\delta_{AB}$ that heralds a substantial mass-gap.
Observe that the onset of object isolation necessarily culminates in
global algorithmic halting since all residual pairings must correspond
to larger values of $\delta_{AB}$.
\begin{alignat}{2}
\textrm{Drop:}&&\hspace{8pt} \big\{ \left( 2\, \epsilon^{AB}\right)^2 \le 1 \big\} &\le \delta_{AB} < \big\{ 1 \big\}
\nonumber \\
\textrm{Isolate:}&&\hspace{8pt} \big\{ 1 \big\} &\le \delta_{AB}
\label{eq:iso_drop_measure}
\end{alignat}
FIG.~\ref{fig:phase} again provides visual intuition,
where isolation occurs in the upper-right (blue) regions for
values of the measure above unity, and soft wide radiation
is dropped in the lower-central (red) regions.
The clustering and isolation regions are fully separated
from each other, making contact only at the zero-area ``triple point''
with ($2\,\epsilon^{AB}=1$, $\Delta \widetilde{R}_{AB}^{2}/2=1$).
We will provide additional support from simulation for triggering isolation
at a fixed $\mathcal{O}(1)$ value of the measure in Section~\ref{sct:tagging}.
Note that non-relativistic
or beam-like objects at very large rapidity
that approach the ($p_\textrm{T} \Rightarrow 0$, $m \ne 0$) limit
will never cluster, since vanishing of the lever arm
($\xi = 0$) in Eq.~(\ref{eq:dmsqdiffs2}) implies
via Eq.~(\ref{eq:drsqtilde}) that ($\Delta \widetilde{R}_{AB}^{2}/2\ge1$).
Also, objects of equivalent transverse energy
with angular separation $(\Delta\widetilde{R}_{AB} \ge \sqrt{2})$
will always be marked for isolation,
although this radius is again \emph{dynamic}.
The phase gap between clustering and isolation opens
up further when scales are mismatched, and
the effective angular square-separation
exceeds its traditional counterpart
as mass is accumulated during clustering
(cf. Eq.~\ref{eq:rsqeffdiff}).
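
Collecting Eqs.~(\ref{eq:eps}), (\ref{eq:sd_measure}), and~(\ref{eq:iso_drop_measure}), the decision taken for each candidate pairing can be sketched as follows (again with four-vectors as $(E, p_x, p_y, p_z)$ tuples); this is the kind of \texttt{classify} callable used by the skeleton of Section~\ref{sct:algorithm}:
\begin{verbatim}
import math

def classify_sift(a, b):
    # cluster if delta < (2 eps)^2, isolate if delta >= 1, else drop
    et = lambda p: math.sqrt(p[0]**2 - p[3]**2)
    eta, etb = et(a), et(b)
    eps = eta*etb / (eta**2 + etb**2)
    dot = a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]
    delta = 2.0*dot / (eta**2 + etb**2)
    if delta < (2.0*eps)**2:
        return 'cluster'
    if delta >= 1.0:
        return 'isolate'
    return 'drop'
\end{verbatim}
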
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{Figures/frame_EVT-001_0000.pdf} \\
\includegraphics[width=0.8\textwidth]{Figures/frame_EVT-001_0001.pdf}
\caption{\footnotesize
Upper: A simulated LHC scattering event with $\sqrt{s} = 14$~TeV is visualized at the partonic level.
Top quark pair production $p p \to t \bar{t}$ (red, red) is followed by the
decays $t \to W^+ b$ (green, black) and $W^+ \to u\, \bar{d}$ (black, black), plus conjugates.
A transverse boost of $p_\textrm{T} \simeq 800$~GeV for each top quark produces narrow collimation of decay products.
Lower: Generator-level radiation deposits resulting from showering, hadronization,
and decay of the partonic event appearing in the upper frame.}
\label{fig:filmframesEVT}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{Figures/frame_SFT_EXC-001_GST_EVT-001_7480.pdf} \\
\includegraphics[width=0.8\textwidth]{Figures/frame_SFT_EXC-001_GST_EVT-001_7494.pdf}
\caption{\footnotesize
Frames representing sequential clustering of the FIG.~\ref{fig:filmframesEVT} event
using the exclusive $N_{\rm exc}=1$ SIFT algorithm
without the associated filtering or isolation criteria
(cf. FIG.~\ref{fig:filmframesE} for non-exclusive clustering with both criteria enabled).
Upper: Mutually hard prongs with narrow angular separation remain unmerged up
to the final stages of clustering. However, hard objects are likely to sweep
up soft radiation at wide angles. Lower: An image of the initial pair
production is reconstructed just prior to termination. In the absence of a supplementary
halting criterion these structures will subsequently merge to completion, accompanied
by a large discontinuity in the measure $\delta_{AB}$.}
\label{fig:filmframesA}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{Figures/frame_AKT_RAD-0P5_GST_EVT-001_0351.pdf} \\
\includegraphics[width=0.8\textwidth]{Figures/frame_AKT_RAD-0P5_GST_EVT-001_7498.pdf}
\caption{\footnotesize
Frames representing sequential clustering of the FIG.~\ref{fig:filmframesEVT} event
using the anti-$k_\textrm{T}$ algorithm.
Upper: Priority is given to the hardest radiation, which immediately captures surrounding territory.
Hard substructure at angular scales smaller than the clustering radius
will be washed out rapidly. Lower: The final state is characterized by
regular jet shapes with uniform expected areas.}
\label{fig:filmframesB}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{Figures/frame_KTJ_RAD-0P5_GST_EVT-001_7492.pdf} \\
\includegraphics[width=0.8\textwidth]{Figures/frame_KTJ_RAD-0P5_GST_EVT-001_7497.pdf}
\caption{\footnotesize
Frames representing sequential clustering of the FIG.~\ref{fig:filmframesEVT} event
using the $k_\textrm{T}$ algorithm.
Upper: Priority is given to the softest radiation, resulting in the
growth of dispersed catchments having a correlation length that increases in time.
Mutually hard substructures are preserved until the last stages of clustering.
Lower: The final state is characterized by irregular jet shapes with unpredictable areas.}
\label{fig:filmframesC}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{Figures/frame_CAJ_RAD-0P5_GST_EVT-001_6326.pdf} \\
\includegraphics[width=0.8\textwidth]{Figures/frame_CAJ_RAD-0P5_GST_EVT-001_7354.pdf}
\caption{\footnotesize
Frames representing sequential clustering of the FIG.~\ref{fig:filmframesEVT} event
using the Cambridge-Aachen algorithm.
Upper: Priority is given only to angular proximity, without reference to
the momentum scale. Substructure is preserved only until the grain size
eclipses its angular scale.
Lower: The final state is characterized by irregular jet shapes with unpredictable areas.}
\label{fig:filmframesD}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.8\textwidth]{Figures/frame_SFT_ISO_DRP_GST_EVT-001_7474.pdf} \\
\includegraphics[width=0.8\textwidth]{Figures/frame_SFT_ISO_DRP_GST_EVT-001_7498.pdf}
\caption{\footnotesize
Frames representing sequential clustering of the FIG.~\ref{fig:filmframesEVT} event
using the SIFT algorithm with the application of filtering and isolation criteria.
Upper: Initial activity is dominated by the rejection of soft-wide radiation that
is paired with a hard prong by the measure but fails the filtering criterion.
Mutually hard substructures are resolved without contamination from stray radiation.
Lower: The isolation criterion triggers halting before distinct
objects associated with the initial pair production would be merged.}
\label{fig:filmframesE}
\end{figure*}
\section{The $\boldsymbol{N}$-Subjet Tree\label{sct:njettree}}
This section concludes development of the SIFT algorithm
by formalizing the concept of an $N$-subjet tree.
This data structure records the clustering history of each
isolated final-state object from $N$ prongs down to $1$,
along with the associated value of the measure at each merger.
Final-state jets defined according to the global halting
condition outlined in Section~\ref{sct:filtering} may
still bundle multiple hard structured prongs,
since isolation requires a minimal separation of
$(\Delta\widetilde{R}_{AB} \ge \sqrt{2})$.
Within each quarantined partition,
SIFT reverts to its natural form,
as an exclusive clustering algorithm.
So, for example, it might be that reconstruction
of a doubly-hadronic $t\bar{t}$ event
would isolate the pair of top-quark remnants,
but merge each bottom with products from the associated $W$.
This is a favorable outcome, amounting to
the identification of variable large-radius jets.
But, the question of how to identify and recover the optimal
partition of each such object into $N$ subjets remains.
One could consider formulating a local halting condition
that would block the further assimilation of hard prongs
within each large-radius jet once some threshold were met.
However, such objects are only defined in our prescription
\emph{after} having merged to exhaustion.
More precisely, a number of candidate large-radius jets
may accumulate objects in parallel as clustering progresses,
and each will have secured a unique ``$N=1$'' configuration
at the moment of its isolation.
As such, the only available course of action appears to be proceeding
with these mergers, even potentially in the presence of substructure.
However, the fact that the SIFT measure tends to preserve mutually
hard features until the final stages of clustering suggests
that suitable proxies for the partonic event axes may be generated
automatically as a product of this sequential transition through
all possible subjet counts, especially during the last few mergers.
We refer to the superposed \emph{ensemble} of projections onto ($N = \ldots,\,3,\,2,\,1$)
prongs, i.e.,~the history of residually distinct four-vectors at each level
of the clustering flow, as an $N$-subjet tree.
In fact, it seems that interrupting the final stages of clustering
would amount to a substantial information forfeiture.
Specifically, the merger of axis candidates that are not collinear or relatively
soft imprints a sharp discontinuity on the measure, which
operates in this context like a mass-drop tagger~\cite{Butterworth:2008iy}
to flag the presence of substructure.
Additionally, the described procedure generates a basis of groomed axes that
are directly suitable for the computation of observables such as $N$-subjettiness.
In this sense, the best way to establish that a pair of constituents within a
large-radius jet should be kept apart may be to go ahead and join them,
yet to remember what has been joined and at which value of $\delta_{AB}$.
In contrast to conventional methods for substructure recovery
that involve de- and re-clustering according to a variety of disjoint prescriptions,
the finding of $N$-subjet trees representing a compound scattering
event occurs in conjunction with the filtering of stray
radiation and generation of substructure observables
during a single unified operational phase.
The performance of this approach for kinematic reconstruction
and tagging hard event prongs will be comparatively assessed
in Sections~\ref{sct:reconstruction} and~\ref{sct:tagging}.
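
As a concrete illustration (a sketch under the same four-vector conventions as before, not a reference implementation), an $N$-subjet tree can be stored as a map from the residual prong count $N$ to the surviving axes together with the value of the measure at the merger that produced them:
\begin{verbatim}
def n_subjet_tree(constituents, measure):
    pool = [tuple(p) for p in constituents]
    tree = {len(pool): (list(pool), None)}   # N -> (axes, delta at merger)
    while len(pool) > 1:
        i, j = min(((i, j) for i in range(len(pool)) for j in range(i)),
                   key=lambda ij: measure(pool[ij[0]], pool[ij[1]]))
        a, b = pool.pop(i), pool.pop(j)
        delta = measure(a, b)
        pool.append(tuple(x + y for x, y in zip(a, b)))
        tree[len(pool)] = (list(pool), delta)
    return tree
\end{verbatim}
In this representation, \texttt{tree[2][0]} and \texttt{tree[3][0]} supply groomed axes for two- and three-prong observables such as $N$-subjettiness, while the sequence of \texttt{tree[N][1]} values is the measure history whose late discontinuities flag the presence of substructure.
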
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Jet-Energy-Response_SIFT.pdf} \hspace{12pt}
\includegraphics[width=0.45\textwidth]{Figures/Jet-Energy-Response_Delphes.pdf} \\
\includegraphics[width=0.45\textwidth]{Figures/Jet-Angular-Response_SIFT.pdf} \hspace{12pt}
\includegraphics[width=0.45\textwidth]{Figures/Jet-Angular-Response_Delphes.pdf}
\caption{\footnotesize
Top: Distribution $\mathcal{R}^A_B$ of reconstructed jet energy responses with detector effects relative to the partonic truth level at various transverse boosts.
Bottom: Distribution $\Delta R^{\,A}_{\,B}$ of reconstructed jet angular responses with detector effects relative to the partonic truth level at various transverse boosts.
Lefthand panels represent the leading filtered and isolated SIFT jet, while righthand panels represent the leading ($R_0 = 1$)
large-radius Soft Drop jet reported by {\sc Delphes}. No calibration of jet energy scales is attempted for either category.}
\label{fig:JER_JAR}
\end{figure*}
\begin{table*}[htb!]
\vspace{18pt}
\bgroup
\def1.25{1.25}
\begin{center}
\begin{tabular}{C{0.1025\textwidth}|C{0.1025\textwidth}|C{0.1025\textwidth}|C{0.1025\textwidth}|%
C{0.1025\textwidth}|C{0.1025\textwidth}|C{0.1025\textwidth}|C{0.1025\textwidth}|C{0.1025\textwidth}}
$p_\textrm{T}^{{\rm GeV}\pm 5\%}$ &
$\langle \, \mathcal{R}^{\rm \,SIFT}_{\rm \,Truth} \, \rangle$ & $\sigma^{\rm SIFT}_{\mathcal{R}}$ &
$\langle \, \mathcal{R}^{\rm \,\textsc{Delphes}}_{\rm \,Truth} \, \rangle$ & $\sigma^{\rm \textsc{Delphes}}_{\mathcal{R}}$ &
$\langle \, \Delta R^{\rm \,SIFT}_{\rm \,Truth} \, \rangle$ & $\sigma^{\rm SIFT}_{\!\Delta R}$ &
$\langle \, \Delta R^{\rm \,\textsc{Delphes}}_{\rm \,Truth} \, \rangle$ & $\sigma^{\rm \textsc{Delphes}}_{\!\Delta R}$ \\
\hline
100 & $-0.009$ & 0.17 & $+0.087$ & 0.17 & 0.17 & 0.12 & 0.17 & 0.12 \\
200 & $-0.046$ & 0.16 & $+0.026$ & 0.16 & 0.15 & 0.12 & 0.15 & 0.12 \\
400 & $-0.059$ & 0.15 & $-0.002$ & 0.15 & 0.13 & 0.11 & 0.13 & 0.11 \\
800 & $-0.071$ & 0.14 & $-0.024$ & 0.14 & 0.10 & 0.10 & 0.10 & 0.10 \\
1600 & $-0.081$ & 0.13 & $-0.042$ & 0.13 & 0.08 & 0.09 & 0.08 & 0.09 \\
3200 & $-0.089$ & 0.12 & $-0.058$ & 0.12 & 0.05 & 0.06 & 0.05 & 0.06 \\
\end{tabular}
\caption{\footnotesize
Detector-level jet energy responses $\mathcal{R}^A_B$ and angular responses $\Delta R^A_B$
with associated resolutions $\sigma_{\mathcal{R}}$ and $\sigma_{\!\Delta R}$.}
\label{tab:JER_JAR}
\end{center}
\egroup
\end{table*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Jet-Energy-Response_ZERO-ISR_GEN-JET_SIFT.pdf} \hspace{12pt}
\includegraphics[width=0.45\textwidth]{Figures/Jet-Energy-Response_ZERO-ISR_GEN-JET_Delphes.pdf} \\
\includegraphics[width=0.45\textwidth]{Figures/Jet-Angular-Response_ZERO-ISR_GEN-JET_SIFT.pdf} \hspace{12pt}
\includegraphics[width=0.45\textwidth]{Figures/Jet-Angular-Response_ZERO-ISR_GEN-JET_Delphes.pdf}
\caption{\footnotesize
Top: Distribution $\mathcal{R}^A_B$ of reconstructed jet energy responses, at generator level and without initial-state radiation, relative to the partonic truth level at various transverse boosts.
Bottom: Distribution $\Delta R^{\,A}_{\,B}$ of reconstructed jet angular responses, at generator level and without initial-state radiation, relative to the partonic truth level at various transverse boosts.
Lefthand panels represent the leading filtered and isolated SIFT jet, while righthand panels represent the leading ($R_0 = 1$)
large-radius Soft Drop jet reported by {\sc Delphes}. No calibration of jet energy scales is attempted for either category.}
\label{fig:JER_JAR_ZERO-ISR_GEN-JET}
\end{figure*}
\begin{table*}[htb!]
\vspace{18pt}
\bgroup
\def1.25{1.25}
\begin{center}
\begin{tabular}{C{0.1025\textwidth}|C{0.1025\textwidth}|C{0.1025\textwidth}|C{0.1025\textwidth}|%
C{0.1025\textwidth}|C{0.1025\textwidth}|C{0.1025\textwidth}|C{0.1025\textwidth}|C{0.1025\textwidth}}
$p_\textrm{T}^{{\rm GeV}\pm 5\%}$ &
$\langle \, \mathcal{R}^{\rm \,SIFT}_{\rm \,Truth} \, \rangle$ & $\sigma^{\rm SIFT}_{\mathcal{R}}$ &
$\langle \, \mathcal{R}^{\rm \,\textsc{Delphes}}_{\rm \,Truth} \, \rangle$ & $\sigma^{\rm \textsc{Delphes}}_{\mathcal{R}}$ &
$\langle \, \Delta R^{\rm \,SIFT}_{\rm \,Truth} \, \rangle$ & $\sigma^{\rm SIFT}_{\!\Delta R}$ &
$\langle \, \Delta R^{\rm \,\textsc{Delphes}}_{\rm \,Truth} \, \rangle$ & $\sigma^{\rm \textsc{Delphes}}_{\!\Delta R}$ \\
\hline
100 & $-0.059$ & 0.054 & $+0.034$ & 0.052 & 0.038 & 0.052 & 0.043 & 0.049 \\
200 & $-0.052$ & 0.049 & $+0.013$ & 0.040 & 0.030 & 0.050 & 0.030 & 0.050 \\
400 & $-0.045$ & 0.045 & $+0.004$ & 0.033 & 0.025 & 0.047 & 0.022 & 0.047 \\
800 & $-0.040$ & 0.042 & $-0.001$ & 0.031 & 0.022 & 0.045 & 0.018 & 0.046 \\
1600 & $-0.035$ & 0.039 & $-0.003$ & 0.029 & 0.018 & 0.040 & 0.015 & 0.042 \\
3200 & $-0.031$ & 0.035 & $-0.004$ & 0.025 & 0.015 & 0.032 & 0.011 & 0.033 \\
\end{tabular}
\caption{\footnotesize
Zero-ISR generator-level jet energy responses $\mathcal{R}^A_B$ and angular responses $\Delta R^A_B$
with associated resolutions $\sigma_{\mathcal{R}}$ and $\sigma_{\!\Delta R}$.}
\label{tab:JER_JAR_ZERO-ISR_GEN-JET}
\end{center}
\egroup
\end{table*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=.45\textwidth]{Figures/MASS_PT-0200.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/MASS_PT-0400.pdf} \\
\includegraphics[width=.45\textwidth]{Figures/MASS_PT-0800.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/MASS_PT-1600.pdf}
\caption{\footnotesize
Distribution of $W$-boson and top quark masses for di- and tri-jet samples reconstructed with SIFT at various transverse boosts.}
\label{fig:mass}
\end{figure*}
\section{Comparison of Algorithms\label{sct:comparison}}
This section provides a visual comparison of merging priorities and final states
for the SIFT and $k_\textrm{T}$-family algorithms. The images presented here are
still frames extracted from full-motion video simulations of each clustering sequence.
These films are provided as ancillary files with the source package
for this paper on the arXiv and may also be viewed on YouTube~\cite{youtube}.
Frames are generated for every 25th clustering action, as well as each of the initial and final 25 actions.
The Mathematica notebook used to generate these films is described in Appendix~\ref{sct:software},
and maintained with the AEACuS~\cite{aeacus} package on GitHub.
The visualized event\footnote{
This event is expected to be reasonably representative,
being the first member of its Monte Carlo sample.}
comes from a simulation in {\sc MadGraph}/{\sc MadEvent}~\cite{Alwall:2014hca}
of top quark pair production at the 14~TeV LHC with fully hadronic decays.
The scalar sum $H_{\rm T}$ of transverse momentum is approximately 1.6~TeV at the partonic level.
This large boost results in narrow collimation of the three hard prongs (quarks)
on either side of the event, as depicted in the upper frame of FIG.~\ref{fig:filmframesEVT}.
In-plane axes represent the pseudo-rapidity $\eta$ and azimuth $\phi$, with a cell width
($\Delta R \simeq 0.1$) approximating the resolution of a modern hadronic calorimeter.
The height of each block is proportional to the log-transverse momentum $\log_{10} (\,p_\textrm{T}\,/{\rm [GeV]})$ it contains.
The event is passed through {\sc Pythia8}~\cite{Sjostrand:2014zea} for showering and hadronization,
and through {\sc Delphes}~\cite{deFavereau:2013fsa} for fast detector simulation.
Detector effects are bypassed in the current context, which starts with unclustered generator-level
({\sc Pythia8}) jets and non-isolated photons/leptons extracted from the {\sc Delphes} event record by {\sc AEACuS},
but they will be included for most of the analysis in Sections~\ref{sct:reconstruction} and~\ref{sct:tagging}.
This initial state is depicted in the lower frame of FIG.~\ref{fig:filmframesEVT},
which exhibits two dense clusters of radiation that are clearly associated with
the partonic event, as well as several offset deposits having less immediate origins
in the underlying event or initial state.
Prior to clustering, a sheet of ultra-soft ghost radiation is distributed across the angular
field in order to highlight differences between the catchment area \cite{Cacciari:2008gn} of each algorithm.
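A minimal sketch of how such a ghost sheet can be prepared is given below (the grid spacing and $p_\textrm{T}$ scale are illustrative assumptions rather than the values used to produce the films):
\begin{verbatim}
import numpy as np

def ghost_sheet(eta_max=4.0, cell=0.1, pt_scale=1e-6, seed=0):
    """Return (pt, eta, phi) arrays for a sheet of ultra-soft ghosts
    placed on a regular (eta, phi) grid with randomized tiny pT."""
    rng = np.random.default_rng(seed)
    etas = np.arange(-eta_max, eta_max, cell)
    phis = np.arange(-np.pi, np.pi, cell)
    eta_g, phi_g = np.meshgrid(etas, phis, indexing="ij")
    eta_g, phi_g = eta_g.ravel(), phi_g.ravel()
    # Randomize pT so that ghosts never tie exactly, but keep them soft
    # enough that they cannot perturb the clustering of real objects.
    pt_g = pt_scale * rng.uniform(0.5, 1.5, size=eta_g.size)
    return pt_g, eta_g, phi_g
\end{verbatim}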
We begin with an example of exclusive ($N_{\rm exc}=1$) clustering ordered by the SIFT measure from Eq.~\ref{eq:siftmeasure},
but without application of the filtering and isolation criteria described in Section~\ref{sct:filtering}.
Film~A clearly exhibits both of the previously identified pathologies,
opening with a sweep of soft-wide radiation by harder partners
(as visualized with regions of matching coloration) and closing
with the contraction of hard-wide structures into a single surviving object.
However, the described success is manifest in between, vis-\`a-vis mutual preservation
of narrowly bundled hard prongs until the end stages of clustering.
In particular, the upper frame of FIG.~\ref{fig:filmframesA} features a pair of triplets at ($\delta_{AB} \simeq 0.04$)
that fairly approximate their collinear antecedents despite bearing wide catchments.
Subsequently, this substructure collapses into an image of the original pair production,
as depicted at ($\delta_{AB} \simeq 0.3$) in the lower frame of FIG.~\ref{fig:filmframesA}.
Nothing further occurs until ($\delta_{AB} \gtrsim 1.0$), beyond which
residual structures begin to merge and migrate in unphysical ways.
For comparison, we process the same event using the anti-$k_\textrm{T}$ algorithm at ($R_0 = 0.5$).
Film~B shows how early activity is dominated by the hardest radiative seeds, which
promptly capture all available territory up to the stipulated radial boundary.
In particular, any substructure that is narrower than $R_0$ will be rapidly erased,
as illustrated in the upper frame of FIG.~\ref{fig:filmframesB}.
The subsequent stages of clustering are of lesser interest, being progressively
occupied with softer seeds gathering up yet softer unclaimed scraps.
At termination, the lower frame of FIG.~\ref{fig:filmframesB} exhibits the regular
cone shapes with uniform catchment areas that are a hallmark of anti-$k_\textrm{T}$.
This property is linked to the anchoring of new cones on hard prongs that
are less vulnerable to angular drift. It is favored by experimentalists for facilitating
calibration of jet energy scales and subtraction of soft pileup radiation.
Proceeding, we repeat the prior exercise using the $k_\textrm{T}$ algorithm at ($R_0 = 0.5$).
In contrast to anti-$k_\textrm{T}$, clustering is driven here by the softest seeds.
Film~C demonstrates the emergence of a fine grain structure in the
association pattern of objects from adjacent regions that grows in
size as the algorithm progresses. Unlike SIFT, which preferentially
binds soft radiation to a hard partner, mutually soft objects without a strong
physical correlation are likely to pair in this case. Since summing geometrically
adjacent partners tends to increase $p_\textrm{T}$, merged objects become
less immediately attractive to the measure. As a result, activity is
dispersed widely across the plane, and attention jumps rapidly
from one location to the next. However, the combination of mutually hard prongs
is actively deferred, which causes collimated substructures to be preserved,
as shown in the upper frame of FIG.~\ref{fig:filmframesC}.
In contrast to SIFT, this hardness criterion is absolute, rather than relative.
Ultimately, structures more adjacent than the fixed angular cutoff $R_0$ will still be absorbed.
Jet centers are likely to drift substantially,
and associated catchment shapes are thus highly irregular, as shown in the
lower frame of FIG.~\ref{fig:filmframesC}.
Similarly, we also cluster using the Cambridge-Aachen algorithm at ($R_0 = 0.5$).
Pairings are driven here solely by angular proximity, and Film~D shows an associated
growth of grain size similar to that observed for the $k_\textrm{T}$ algorithm.
The banded sequencing is simply an artifact of the way we disperse
ghost jets, randomizing $p_\textrm{T}$ but regularizing placement on the grid.
In contrast, mutually hard substructures are not specifically protected
and will last only until the correlation length catches up to their
separation, as shown in the upper frame of FIG.~\ref{fig:filmframesD}.
As before, the angular cutoff $R_0$ limits resolution of structure.
Likewise, jet drift leads to irregular catchment shapes, as shown in the
lower frame of FIG.~\ref{fig:filmframesD}.
We conclude this section with a reapplication of the SIFT algorithm,
enabling the filtering and isolation criteria from Section~\ref{sct:filtering}.
Film~E demonstrates that the soft ghost radiation is still targeted first,
but it is now efficiently discarded (as visualized with dark grey)
rather than clustered, suggesting resiliency to soft pileup.
Hard substructures are resolved without accumulating stray radiation,
as shown in the upper frame of FIG.~\ref{fig:filmframesE}.
This helps to stabilize reconstructed jet kinematics relative to the parton-level event.
Objects decayed and showered from the pair of opposite-hemisphere
top quarks are fully isolated from each other in the final state,
as shown in the lower frame of FIG.~\ref{fig:filmframesE}.
Each such object selectively associates constituents within a
``fuzzy'' scale-dependent catchment boundary.
Nevertheless, it may still be possible to establish an ``effective''
jet radius by integrating the pileup distribution function
up to the maximal radius $(\Delta \widetilde{R}_{AB} < \sqrt{2})$.
In any case, the traditional approach
to pileup subtraction has been somewhat superseded by the
emergence of techniques for event-by-event and per-particle
pileup estimation like {\sc PUPPI}~\cite{Bertolini:2014bba}.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.45\textwidth]{Figures/SIFT-Measure-Evolution_003.pdf} \hspace{12pt}
\includegraphics[width=0.45\textwidth]{Figures/SIFT-Measure-Evolution_004.pdf} \\
\includegraphics[width=0.45\textwidth]{Figures/SIFT-Measure-Evolution_005.pdf} \hspace{12pt}
\includegraphics[width=0.45\textwidth]{Figures/SIFT-Measure-Evolution_006.pdf}
\caption{\footnotesize
Evolution of the measure $\delta^N_{AB}$
as a function of the number $N$ of unmerged objects for QCD
multi-jet production at various values of $\sqrt{\hat{s}}$.
Partons have an angular separation ($\Delta R \geq 2.0$)
and the $p_\textrm{T}$ threshold is stepped in proportion to $\sqrt{\hat{s}}$.
Initial-state radiation is suppressed and analysis is at generator level.
The orange band indicates the interval where all samples have
merged to the point of their natural partonic count (white) but not beyond (black).
The grey dashed line marks the isolation threshold at ($\delta_{AB}=1$).
}
\label{fig:evo}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=.45\textwidth]{Figures/Tau21_PT-0200_SIFT.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/Tau21_PT-0400_SIFT.pdf} \\
\includegraphics[width=.45\textwidth]{Figures/Tau21_PT-0800_SIFT.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/Tau21_PT-1600_SIFT.pdf}
\caption{\footnotesize
Distribution of $\tau_2/\tau_1$ computed with SIFT axes for mono-, di-, and tri-jet samples at various transverse boosts.
}
\label{fig:tau21}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=.45\textwidth]{Figures/Tau32_PT-0200_SIFT.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/Tau32_PT-0400_SIFT.pdf} \\
\includegraphics[width=.45\textwidth]{Figures/Tau32_PT-0800_SIFT.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/Tau32_PT-1600_SIFT.pdf}
\caption{\footnotesize
Distribution of $\tau_3/\tau_2$ computed with SIFT axes for mono-, di-, and tri-jet samples at various transverse boosts.
}
\label{fig:tau32}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=.45\textwidth]{Figures/Tau21_PT-0400_Delphes.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/Tau21_PT-0800_Delphes.pdf} \\
\includegraphics[width=.45\textwidth]{Figures/Tau32_PT-0400_Delphes.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/Tau32_PT-0800_Delphes.pdf}
\caption{\footnotesize
Reference distributions of $\tau_2/\tau_1$ and $\tau_3/\tau_2$ computed by {\sc Delphes} at various transverse boosts.
}
\label{fig:tau2132Delphes}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=.45\textwidth]{Figures/SIFT-1_PT-0200.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/SIFT-1_PT-0400.pdf} \\
\includegraphics[width=.45\textwidth]{Figures/SIFT-1_PT-0800.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/SIFT-1_PT-1600.pdf}
\caption{\footnotesize
Distribution of $\delta_{AB}$ at the $N=1$ stage of clustering with SIFT for mono-, di-, and tri-jet samples at various transverse boosts.
}
\label{fig:siftdab1}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=.45\textwidth]{Figures/SIFT-2_PT-0200.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/SIFT-2_PT-0400.pdf} \\
\includegraphics[width=.45\textwidth]{Figures/SIFT-2_PT-0800.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/SIFT-2_PT-1600.pdf}
\caption{\footnotesize
Distribution of $\delta_{AB}$ at the $N=2$ stage of clustering with SIFT for mono-, di-, and tri-jet samples at various transverse boosts.
}
\label{fig:siftdab2}
\end{figure*}
\begin{table}[ht!]
\bgroup
\def1.25{1.25}
\begin{center}
\begin{tabular}{C{0.0875\textwidth}|C{0.0875\textwidth}|C{0.0875\textwidth}|C{0.0875\textwidth}|C{0.0875\textwidth}}
$p_\textrm{T}^{{\rm GeV}\pm 5\%}$ & $\tau_{\textsc{Delphes}}^{N+1/N}$ & $\tau_{\rm SIFT}^{N+1/ N}$ & $\delta_{AB}^N$ & $\delta+\tau$ \\
\hline
100 & 0.62 & 0.68 & 0.69 & 0.70 \\
200 & 0.91 & 0.86 & 0.88 & 0.89 \\
400 & 0.89 & 0.85 & 0.91 & 0.92 \\
800 & 0.82 & 0.79 & 0.92 & 0.93 \\
1600 & 0.77 & 0.74 & 0.91 & 0.92 \\
3200 & 0.78 & 0.76 & 0.88 & 0.90 \\
\end{tabular}
\caption{\footnotesize
Area under curve ROC scores for discrimination of resonances with hard 1- and 2-prong substructure using a BDT trained on various sets of event observables.}
\label{tab:BDT_21}
\end{center}
\egroup
\end{table}
\begin{table}[ht!]
\bgroup
\def1.25{1.25}
\begin{center}
\begin{tabular}{C{0.0875\textwidth}|C{0.0875\textwidth}|C{0.0875\textwidth}|C{0.0875\textwidth}|C{0.0875\textwidth}}
$p_\textrm{T}^{{\rm GeV}\pm 5\%}$ & $\tau_{\textsc{Delphes}}^{N+1/N}$ & $\tau_{\rm SIFT}^{N+1/ N}$ & $\delta_{AB}^N$ & $\delta+\tau$ \\
\hline
100 & 0.61 & 0.61 & 0.63 & 0.65 \\
200 & 0.63 & 0.60 & 0.71 & 0.72 \\
400 & 0.82 & 0.74 & 0.90 & 0.90 \\
800 & 0.85 & 0.80 & 0.94 & 0.95 \\
1600 & 0.77 & 0.77 & 0.97 & 0.97 \\
3200 & 0.77 & 0.79 & 0.98 & 0.99 \\
\end{tabular}
\caption{\footnotesize
Area under curve ROC scores for discrimination of resonances with hard 2- and 3-prong substructure using a BDT trained on various sets of event observables.}
\label{tab:BDT_32}
\end{center}
\egroup
\end{table}
\begin{table}[ht!]
\bgroup
\def1.25{1.25}
\begin{center}
\begin{tabular}{C{0.0875\textwidth}|C{0.0875\textwidth}|C{0.0875\textwidth}|C{0.0875\textwidth}|C{0.0875\textwidth}}
$p_\textrm{T}^{{\rm GeV}\pm 5\%}$ & $\tau_{\textsc{Delphes}}^{N+1/N}$ & $\tau_{\rm SIFT}^{N+1/ N}$ & $\delta_{AB}^N$ & $\delta+\tau$ \\
\hline
100 & 0.70 & 0.75 & 0.77 & 0.77 \\
200 & 0.86 & 0.87 & 0.90 & 0.90 \\
400 & 0.93 & 0.91 & 0.95 & 0.96 \\
800 & 0.91 & 0.89 & 0.96 & 0.96 \\
1600 & 0.84 & 0.83 & 0.94 & 0.95 \\
3200 & 0.76 & 0.78 & 0.91 & 0.92 \\
\end{tabular}
\caption{\footnotesize
Area under curve ROC scores for discrimination of resonances with hard 3- and 1-prong substructure using a BDT trained on various sets of event observables.}
\label{tab:BDT_13}
\end{center}
\egroup
\end{table}
\begin{figure*}[ht!]
\centering
\includegraphics[width=.45\textwidth]{Figures/SIFT-3_PT-0200.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/SIFT-3_PT-0400.pdf} \\
\includegraphics[width=.45\textwidth]{Figures/SIFT-3_PT-0800.pdf} \hspace{12pt}
\includegraphics[width=.45\textwidth]{Figures/SIFT-3_PT-1600.pdf}
\caption{\footnotesize
Distribution of $\delta_{AB}$ at the $N=3$ stage of clustering with SIFT for mono-, di-, and tri-jet samples at various transverse boosts.
}
\label{fig:siftdab3}
\end{figure*}
\section{Resolution and Reconstruction\label{sct:reconstruction}}
This section characterizes SIFT's angular and energetic
response functions for the resolution of hard mono-jets
and tests the reconstruction of collimated di- and tri-jet
systems associated with a massive resonance.
The best performance is achieved for large transverse boosts.
We generate Monte Carlo collider data modeling the
$\sqrt{s} = 14$~TeV LHC using {\sc MadGraph}/{\sc MadEvent},
{\sc Pythia8}, and {\sc Delphes} as before.
Clean ($N=1,2,3$) prong samples are obtained by simulating the processes
($p p \Rightarrow j Z \Rightarrow j +\nu \bar{\nu}$),
($p p \Rightarrow W^\pm Z \Rightarrow j j +\nu \bar{\nu}$), and
($p p \Rightarrow tW^- \Rightarrow j j j + \bar{\nu}\ell^-$) plus conjugate,
respectively. In the latter case, an angular isolation cone
with ($\Delta R = 0.5$) is placed around the visible lepton.
Hard partonic objects are required to carry a minimal transverse
momentum ($p_\textrm{T} \ge 25$~GeV) and be inside ($\vert\eta\vert \leq 3.0$).
No restrictions are placed on the angular separation of decay products.
Jets consist of gluons and/or light first-generation quarks ($u,\,d$),
as well as $b$-quarks where required by a third-generation process.
In order to represent a wide range of event scales,
we tranche in the transverse momentum (vector sum magnitude)
of the hadronic system, considering six log-spaced intervals
$p_\textrm{T} = (\,100,\,200,\,400,\,800,\,1600,\,3200\,)$~GeV $\pm 5\%$
and giving attention primarily to the inner four.
Clustering is disabled at the detector simulation level
by setting the jet radius $R_0$ and aggregate $p_\textrm{T}$ threshold to very small values.
We retain the default {\sc Delphes} efficiencies for tracks and calorimeter
deposits (including $p_\textrm{T}$ thresholds on low-level detector objects),
along with cell specifications and smearing (resolution) effects in the latter case.
Jet energy scale corrections are turned off (set to $1.0$) since these
are calibrated strictly for application to fully reconstructed (clustered) objects.
For purposes of comparison and validation,
we also extract information from {\sc Delphes} regarding
the leading large-radius jet ($R_0 = 1$), which is processed by
trimming~\cite{Krohn:2009th}, pruning~\cite{Ellis:2009me}, and applying Soft-Drop.
Event analysis (including clustering) and computation of observables
are implemented with {\sc AEACuS} (cf. Appendix~\ref{sct:software}).
We begin by pre-clustering detector-level objects with
anti-$k_\textrm{T}$ at ($R_0 = 0.01$) to roughly mimic a characteristic
track-assisted calorimeter resolution at the LHC.
The isolation and filtering criteria described in Section~\ref{sct:filtering} are
then used in conjunction to select the subset of detector-level object
candidates retained for analysis. Specifically, our procedure
is equivalent to keeping members gathered by the hardest isolated $N$-subjet
tree that survive filtering all the way down to the final merger.
All histograms are generated with {\sc RHADAManTHUS}~\cite{aeacus},
using {\sc MatPlotLib}~\cite{Hunter:2007} on the back end.
We first evaluate the fidelity of the variable large-radius SIFT
jet's directional and scale reconstruction in the context of the mono-jet sample.
Respectively, the upper-left and lower-left panels of FIG.~\ref{fig:JER_JAR} show
distributions of the energy response $\mathcal{R}^A_B \equiv (\,p_\textrm{T}^{A}/p_\textrm{T}^{B} - 1\,)$
and angular response $\Delta R^A_B$ relative to the original
truth-level ({\sc MadGraph}) partonic jet at various transverse boosts.
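For concreteness, these response observables can be computed from matched jet and parton kinematics as in the following sketch (the array names are hypothetical placeholders, and this is not the {\sc AEACuS} implementation):
\begin{verbatim}
import numpy as np

def responses(pt_jet, eta_jet, phi_jet, pt_par, eta_par, phi_par):
    """Energy response R = pT_jet/pT_parton - 1 and angular response
    Delta R between a reconstructed jet and its matched truth parton."""
    energy_response = pt_jet / pt_par - 1.0
    dphi = np.mod(phi_jet - phi_par + np.pi, 2.0 * np.pi) - np.pi
    deta = eta_jet - eta_par
    angular_response = np.sqrt(deta**2 + dphi**2)
    return energy_response, angular_response
\end{verbatim}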
The corresponding right-hand panels feature the same two distributions
for the leading large-radius jet identified by {\sc Delphes}.
The vanishingly small tail of events for which the SIFT or {\sc Delphes} jet
fails to satisfy ($\Delta R \leq 0.5$) relative to the partonic sum is vetoed here and throughout.
Central values and associated widths (standard deviations) are provided for
$\mathcal{R}^A_B$ and $\Delta R^A_B$ in TABLE~\ref{tab:JER_JAR}.
The SIFT variable-radius jet energy response is very regular,
systematically under-estimating the momentum of hard objects by about 6\%.
The {\sc Delphes} jet energy response shows more drift, transitioning
from positive values for soft objects to negative values for hard objects.
Widths of the two distributions are indistinguishable, with fluctuations amounting
to about 15\% in both cases, narrowing slightly at larger boosts.
It is anticipated from these observations that an energy calibration of
SIFT jets would be relatively straightforward, using standard techniques.
Angular performance of the two methodologies is identical,
with typical offsets and deviations both near one-tenth of a radian
(but less for hard objects and more for soft objects).
For comparison, we repeat this analysis in
FIG.~\ref{fig:JER_JAR_ZERO-ISR_GEN-JET} and TABLE~\ref{tab:JER_JAR_ZERO-ISR_GEN-JET},
using generator-level ({\sc Pythia8}) objects without detector effects
and suppressing the emission of initial-state radiation.
The distributions are substantially narrower in all cases,
with widths around a half or a third of the prior reference values.
The energy response is affected by both idealizations, but more so
by the elimination of detector effects, whereas the angular response is
improved primarily by the elimination of initial-state radiation.
The most distinctive difference between the SIFT and large-radius
{\sc Delphes} jets at this level is that the former is bounded
from above by the partonic $p_\textrm{T}$, while the latter commonly
exceeds it. The observed momentum excess is attributable to the capture of
radiation from the underlying event. However, SIFT's filtering
stage is apparently more adept at rejecting this contaminant,
producing a reflection in the tail orientation that is
reminiscent of various approaches to grooming.
Proceeding, we turn attention to the reconstruction of $W$-boson and top quark mass resonances,
as visualized in FIG.~\ref{fig:mass} at each of the four central simulated $p_\textrm{T}$ ranges.
$M_W$ and $M_t$ are respectively recovered from di- and tri-jet samples,
by summing and squaring residual four-vector components after filtering.
A second $W$ reconstruction is obtained from decays of a $t$
by optimizing the combinatoric selection of two prongs from the ($N=3$) clustering flow.
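A minimal sketch of this mass reconstruction from summed four-vectors is given below (the prong variables are hypothetical placeholders):
\begin{verbatim}
import numpy as np

def invariant_mass(four_vectors):
    """Invariant mass of a system of four-vectors, each given as (E, px, py, pz)."""
    E, px, py, pz = np.sum(np.asarray(four_vectors), axis=0)
    m_squared = E**2 - px**2 - py**2 - pz**2
    return np.sqrt(max(m_squared, 0.0))

# For example, M_W from the two prongs selected out of the (N = 3) flow:
# m_w = invariant_mass([prong_a, prong_b])
\end{verbatim}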
An excess near $M_W \simeq 80$~GeV is apparent for $p_\textrm{T} \ge 200$~GeV,
although the top quark remains unresolved by the leading SIFT
jet at low boost, since the associated bottom is likely to be separately isolated.
The top-quark bump is clearly visible for $p_\textrm{T} \ge 400$~GeV,
though its centroid falls somewhat
to the left of $M_t \simeq 175$~GeV.
The plotted distributions narrow at higher boost,
and substantially sharper peaks are observed for $p_\textrm{T} \ge 800$~GeV.
The systematic under-estimation of mass is consistent with effects
observed previously in the jet energy response, and it is similarly
expected to be improvable with a suitable calibration.
\section{Structure Tagging\label{sct:tagging}}
This section describes applications of the SIFT
algorithm related to structure and substructure tagging, including
discrimination of events with varying partonic multiplicities,
$N$-subjettiness axis-finding, and identification of heavy resonances.
We comparatively assess SIFT's performance on Monte Carlo
collider data against standard approaches, and quantify its
discriminating power with the aid of a Boosted Decision Tree (BDT).
Our first objective will be characterizing distinctive features in the evolution
of the SIFT measure $\delta_{AB}$ for events with different numbers of hard prongs.
We proceed by simulating pure QCD multi-jets representing LHC
production of ($N=2\text{--}5$) gluons and/or light first-generation quarks ($u,\,d$).
Samples are generated at various partonic center-of-momentum energies, taking
$\sqrt{\hat{s}} = (\,100,\,200,\,400,\,800,\,1600,\,3200\,)$~GeV $\pm 20\%$.
In order to ensure that splittings are hard and wide
(corresponding to a number of non-overlapping
large-radius jets with more or less commensurate scales),
we require ($\,p_\textrm{T} \ge \sqrt{\hat{s}}/16\,$) and ($\,\Delta R \ge 2.0\,$).
We also suppress initial-state radiation so that
consistent partonic multiplicities can be achieved,
and return to the use of generator-level ({\sc Pythia8}) objects.
Other selections and procedures are carried forward.
In contrast to the examples in Section~\ref{sct:reconstruction},
the relevant showering products of these non-resonant systems
are not expected to be captured within a single variable large-radius jet.
Accordingly, we do not engage the isolation criterion
from Section~\ref{sct:filtering} for this application, but instead apply
exclusive clustering to the event as a whole with termination at ($N_{\rm exc}=1$).
Filtering of soft-wide radiation is retained, but values of $\delta_{AB}$
are registered only for the merger of objects surviving to the final state.
Relative to FIG.~\ref{fig:phase}, candidate object pairings
in the green and blue regions proceed to merge,
while the softer member is discarded for those in the red region.
FIG.~\ref{fig:evo} follows evolution of the SIFT measure
as it progresses from 8 down to 1 remaining objects.
Since we are considering entire events
(as opposed to a hadronic event hemisphere
recoiling off a neglected leptonic hemisphere),
attention is focused here on the upper four values of $\sqrt{\hat{s}}$
to promote closer scale alignment with prior examples.
Each of the simulated partonic multiplicities is tracked separately, represented
by the geometric mean of $\delta^N_{AB}$ over all samples at level $N$ in the clustering flow.
The relative change in the measure is larger when merging objects associated with distinct hard partons,
suggesting that the jettiness count is intrinsically imprinted on the clustering history.
Specifically, a steepening in the log-slope of the measure evolution occurs when
transitioning past the natural object count\footnote{
If the isolated objects have dissimilar $p_\textrm{T}$, then
the discontinuity can be less severe, but the
increased slope may extend to ($N=1$).},
i.e., from the black markers to the white markers.
This supports the argument from Section~\ref{sct:njettree}
that the most useful halting criterion can sometimes be none at all.
In other words, it suggests that a determination of which objects should be considered resolved
might best be made after observing how those objects would otherwise recombine.
The orange bands in FIG.~\ref{fig:evo} mark the range of
$\delta^N_{AB}$ wherein all structures are fully reconstructed but not over-merged,
and the grey dashed line marks ($\delta_{AB} = 1$).
Independently of the collision energy, white points tend to land above this line and black points below it,
which helps to substantiate the isolation protocol from Section~\ref{sct:filtering}.
Bulk features of the evolution curves are substantially similar
across the plotted examples, and practically identical above
the isolation cutoff, reflecting the scale-invariant design.
However, $\delta_{AB}^N$ ``starts'' at a smaller value at large $N$
for harder processes, and the orange band gap is expanded accordingly.
This is because the tighter collimation
(or smaller $m/p_\textrm{T}$, cf. Eq.~\ref{eq:drsqtilde})
associated with a large transverse boost induces
smaller values of the measure when constituents are merged.
Universality at termination is clarified by example,
considering a true dijet system with balanced $p_\textrm{T}$,
for which Eq.~(\ref{eq:measure_diffs}) indicates that
($\,\delta_{AB}^{N=1} \simeq \Delta R^2/2\,$).
This is consistent with the illustrated values around 20 for
($\vert\Delta \eta\vert \simeq 6$) and ($\Delta \phi = \pi$), since $(\,\Delta\eta^2 + \Delta\phi^2\,)/2 \simeq 23$ in that case.
Our next objective will be to identify and test applications of SIFT
for resolving substructure within a narrowly collimated beam of radiation.
$N$-subjettiness represents one of the most prominent contemporary
strategies for coping with loss of structure in boosted jets.
In this prescription, one first clusters a large-radius jet, e.g.,
with ($R_0 \simeq 1.0$), which is engineered to contain all of the
products of a decaying parton such as a boosted top or $W$-boson.
For various hypotheses of the subjet count $(N = 1,\,2,\,3,\,\ldots)$,
a set of spatial axis directions are identified via a separate procedure, e.g., by reclustering
all radiation gathered by the large-radius jet with an exclusive
variant of the $k_\textrm{T}$ or Cambridge-Aachen
algorithms that forgoes beam isolation and forces explicit termination at $N$ jets.
One then computes a measure $\tau_N$ of compatibility with
the hypothesis, which is proportional to a sum over minimal angular separation $\Delta R$ from any of the
$N$ axes, weighted by the transverse momentum $p_\textrm{T}$ of each radiation component. Maximal discrimination
of the subjet profile is achieved by taking ratios, e.g., $\tau_2/\tau_1$, or $\tau_3/\tau_2$.
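Schematically, the computation reduces to the following sketch, shown here for an unnormalized $\tau_N$ with linear angular weighting (normalization conventions differ between implementations):
\begin{verbatim}
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    dphi = np.mod(phi1 - phi2 + np.pi, 2.0 * np.pi) - np.pi
    return np.sqrt((eta1 - eta2) ** 2 + dphi**2)

def tau_n(constituents, axes):
    """Unnormalized N-subjettiness: sum over constituents of pT times
    the angular distance to the nearest of the N candidate axes.
    Both arguments are lists of (pt, eta, phi) tuples."""
    total = 0.0
    for pt, eta, phi in constituents:
        total += pt * min(delta_r(eta, phi, ax_eta, ax_phi)
                          for _, ax_eta, ax_phi in axes)
    return total

# Discrimination uses ratios, e.g. tau_n(c, axes_2) / tau_n(c, axes_1).
\end{verbatim}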
This procedure will be our reference standard for benchmarking
SIFT's substructure tagging performance.
The SIFT $N$-subjet tree automatically
provides an ensemble of axis candidates at all
relevant multiplicities that are intrinsically suitable
for the computation of $N$-subjettiness.
We test this claim using the previously described
mono-, di-, and tri-jet event samples. The axis candidates are
simply equal to the surviving objects at level $N$ in the
clustering flow. However, this process references only members
of the leading isolated large-radius jet,
rather than constituents of the event at large.
FIG.~\ref{fig:tau21} and FIG.~\ref{fig:tau32} respectively
exhibit distributions of $\tau_2/\tau_1$ and $\tau_3/\tau_2$
calculated in this manner at various transverse boosts.
The intuition that $\tau_3/\tau_2$ should be effective
at separating $W$-bosons from top quarks, whereas
$\tau_2/\tau_1$ should be good for telling QCD monojets
apart from $W$'s is readily validated.
For comparison, FIG.~\ref{fig:tau2132Delphes} shows
corresponding distributions of the same two quantities
at the inner pair of $p_\textrm{T}$ scales,
as computed directly by {\sc Delphes} from the leading
($R_0 = 1.0$) Soft-Drop jet.
Although there are qualitative differences between the two sets
of distributions, their apparent power for substructure
discrimination is more or less similar.
This will be quantified subsequently with a BDT analysis.
Our final objective involves directly
tagging substructure with sequential
values of the SIFT measure.
Distributions of $\delta_{AB}$ at the ($N=1$) and ($N=2$) clustering stages
are plotted in FIG.~\ref{fig:siftdab1} and FIG.~\ref{fig:siftdab2} respectively,
for mono-, di-, and tri-jet samples at each of the four central simulated $p_\textrm{T}$ ranges.
Clear separation between the three tested object multiplicities is observed,
with events bearing a greater count of partonic prongs tending
to aggregate at larger values of the measure, especially
after transitioning through their natural prong count.
We observe that superior substructure discrimination is achieved
by referencing the measure directly, rather than
constructing ratios in the fashion beneficial to $N$-subjettiness.
This is connected to the fact that $\delta_{AB}$
is explicitly constructed as a ratio from the outset.
In order to concretely gauge relative performance
of the described substructure taggers,
we provide each set of simulated observables to a
Boosted Decision Tree for training and validation.
BDTs are a kind of supervised machine learning
that is useful for discrete (usually binary) classification
in a high-dimensional space of numerical features.
In contrast to ``deep learning'' approaches based
around neural networks, where internal
operations are shrouded behind a ``black box''
and the question of ``what is learned''
may be inscrutable, the mechanics of a BDT
are entirely tractable and transparent.
While neural networks excel at extracting
hidden associations between ``low level'' features,
e.g., raw image data at the pixel level,
BDTs work best when seeded with ``high-level''
features curated for maximal information density.
At every stage of training, a BDT
identifies which feature and what transition value
optimally separates members of each class.
This creates a branch point on a decision tree, and
the procedure is iterated for samples following either fork.
Classifications are continuous, typically on the range ($0,1$),
and are successively refined across
a deep stack of shallow trees, each ``boosted'' (reweighted)
to prioritize the correction of errors accumulated during prior stages.
Safeguards are available against over-training
on non-representative features, and scoring is always
validated on statistically independent samples.
We use 50 trees with a maximal depth of 5 levels,
a training fraction of $\sfrac{2}{3}$,
a learning rate of $\eta = 0.5$, and L2 regularization
with $\lambda = 0.1$ (but no L1 regularization).
The BDT is implemented with {\sc MInOS}~\cite{aeacus},
using {\sc XGBoost}~\cite{Chen:2016:XST:2939672.2939785} on the backend.
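The corresponding configuration can be reproduced directly through the {\sc XGBoost} scikit-learn interface, as in the sketch below (the feature and label inputs are hypothetical placeholders, and this is not the {\sc MInOS} implementation itself):
\begin{verbatim}
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# X: event features (e.g. delta_AB^N for N = 1..5); y: 0 = background, 1 = signal.
X, y = np.load("features.npy"), np.load("labels.npy")   # hypothetical inputs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=2/3, random_state=0)

bdt = xgb.XGBClassifier(
    n_estimators=50,     # 50 shallow trees
    max_depth=5,         # maximal depth of 5 levels
    learning_rate=0.5,   # boosting rate
    reg_lambda=0.1,      # L2 regularization
    reg_alpha=0.0,       # no L1 regularization
)
bdt.fit(X_tr, y_tr)
scores = bdt.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, scores))
\end{verbatim}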
The lefthand panel of FIG.~\ref{fig:distroroc} shows the
distribution of classification scores for mono-
and di-jet event samples at $p_\textrm{T}=1600$~GeV after
training on values of the SIFT measure $\delta_{AB}$
associated with the final five stages of clustering.
The two samples (plotted respectively in blue ``Background''
and orange ``Signal'') exhibit clear separation,
as would be expected from examination of the
second element of FIG.~\ref{fig:siftdab1}.
The underlying discretized sample data is represented
with translucent histograms, and the interpolation into
continuous distribution functions is shown
with solid lines.
The righthand panel of FIG.~\ref{fig:distroroc} shows the
associated Receiver Operating Characteristic (ROC)
curve, which plots the true-positive rate versus
the false-positive rate as a function of a
sliding cutoff for the signal classification score.
The Area-Under-Curve (AUC) score, i.e., the fractional
coverage of the shaded blue region, is a good
proxy for overall discriminating power.
A score of $0.5$ indicates no separation,
whereas classifiers approaching
the score of $1.0$ are progressively ideal.
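A minimal self-contained sketch of this construction from classifier scores (independent of the plotting machinery used for FIG.~\ref{fig:distroroc}) follows:
\begin{verbatim}
import numpy as np

def roc_curve_points(scores_sig, scores_bkg, n_cuts=101):
    """True- and false-positive rates as a function of a sliding cutoff
    on the signal classification score, plus the area under the curve."""
    cuts = np.linspace(1.0, 0.0, n_cuts)
    tpr = np.array([(scores_sig >= c).mean() for c in cuts])
    fpr = np.array([(scores_bkg >= c).mean() for c in cuts])
    auc = np.trapz(tpr, fpr)   # fractional coverage under the ROC curve
    return fpr, tpr, auc
\end{verbatim}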
The AUC ($0.91$) from the example in FIG.~\ref{fig:distroroc}
is collected with related results in TABLE~\ref{tab:BDT_21}.
Separability of mono- and di-jet samples is quantified
at each simulated range of $p_\textrm{T}$
while making various feature sets available to the BDT.
The first column uses
the four {\sc Delphes} $N$-subjettiness ratios
built from $\tau_1$ to $\tau_5$.
The next column references the same four ratios,
but as computed with objects and axes from the leading
SIFT $N$-subjet tree. The third column provides the
BDT with the final ($N=1\text{--}5$)
values of the SIFT measure $\delta_{AB}^N$.
The last column merges information from the prior two.
The two $N$-subjettiness computations perform
similarly, but the fixed-radius {\sc Delphes} implementation
shows an advantage of a few points in the majority of trials.
The performance of $N$-subjettiness degrades at large boost,
losing more than 10 points between $p_\textrm{T} = 200$ and $p_\textrm{T}=1600$~GeV.
The SIFT $\delta_{AB}$ measure outperforms $N$-subjettiness
in five of six trials, with an average advantage (over trials)
of 7 points. Its performance is very stable at larger boosts,
where it has an advantage of at least 10 points for $p_\textrm{T} \ge 800$~GeV.
Combining the SIFT measure with $N$-subjettiness generates
a marginal advantage of about 1 point relative to $\delta_{AB}$ alone.
TABLE~\ref{tab:BDT_32} represents a similar comparison of discriminating power
between resonances associated with hard 2- and 3-prong substructures.
$N$-subjettiness is less performant in this application, and the associated AUC scores
drop by around 6 points. Performance of the SIFT measure degrades for soft events,
but it maintains efficacy for events at intermediate scales, and
shows substantial improvement for $p_\textrm{T} \ge 800$~GeV, where its advantage
over $N$-subjettiness grows to around 20 points.
TABLE~\ref{tab:BDT_13} extends the comparison to resonances with hard 3- and 1-prong substructures.
The SIFT $N$-subjettiness computation is marginally
preferred here over its fixed-radius counterpart.
The $\delta_{AB}$ measure remains the best single discriminant by
a significant margin, yielding an AUC at or above $0.90$
for $p_\textrm{T} \ge 200$~GeV.
We conclude this section with a note on several
additional procedural variations that were tested.
Some manner of jet boundary enforcement (either via a fixed $R_0$
or the SIFT isolation criterion) is observed to be essential
to the success of all described applications.
Likewise, filtering of soft/wide radiation is vital to
axis finding, computation of $N$-subjettiness,
and the reconstruction of mass resonances.
Increasing the pre-clustering cone size from $0.01$ radians
to $0.1$ substantially degrades the performance of $N$-subjettiness,
whereas discrimination with $\delta_{AB}$ is more resilient to this change.
\begin{figure*}[ht!]
\centering
\includegraphics[width=.54\textwidth]{Figures/density_plot_CHN_005_BDT_101_001.pdf} \hspace{12pt}
\includegraphics[width=.36\textwidth]{Figures/roc_plot_CHN_005_BDT_101_001.pdf}
\caption{\footnotesize
Left: Example distribution of BDT classification scores for the discrimination
of mono- and di-jet samples, respectively ``Background'' and ``Signal'', at $p_\textrm{T}=1600$~GeV.
Training features include the $\delta_{AB}^N$ for ($N=1\text{--}5$).
Right: Associated Receiver Operating Characteristic curve for true-positives versus false positives.}
\label{fig:distroroc}
\end{figure*}
\section{Computability and Safety\label{sct:theory}}
This section addresses theoretical considerations associated
with computability of the SIFT observable $\delta_{AB}$.
Expressions are developed for various limits of interest.
Infrared and collinear safety is confirmed and deviations
from recursive safety are calculated and assessed.
It is suggested that SIFT's embedded filtering criterion
may help to regulate anomalous behaviors in the latter context,
improving on the Geneva algorithm.
Soft and collinear singularities drive the QCD matrix element
governing the process of hadronic showering. In order to
compare experimental results against theoretical predictions
it is typically necessary to perform all-order resummation
over perturbative splittings. In the context of computing
observables related to jet clustering,
the calculation must first be organized
according to an unambiguous parametric understanding of the
priority with which objects are to be merged, i.e.,~a
statement of how the applicable distance measure
ranks pairings of objects that are subject to the
relevant poles. Specifically, cases of interest
include objects that are $i$) mutually hard but collinear,
$ii$) hierarchically dissimilar in scale,
and $iii$) mutually soft but at wide angular separation.
Pairs in the first two categories are likely to be physically
related by QCD, but those in the third are not.
In order to facilitate considerations of this type,
we outline here how the Eq.~(\ref{eq:measure_diffs}) measure
behaves in relevant limits. The angular factor $\Delta \widetilde{R}_{AB}^2$
carries intuition for small differences by construction
(cf. Eq.~\ref{eq:drsqtilde}), and its dependence
on aggregated mass has been further clarified
in and around Eq.~(\ref{eq:rsqeffdiff}).
We turn attention then to the energy-dependent factor
$\epsilon^{AB}$, as expressed in Eq.~(\ref{eq:eps}), in
two limits of interest. First, we take the case of
hierarchically dissimilar transverse energies, expanding
in the ratio ($\alpha \equiv E_\textrm{T}^A/E_\textrm{T}^B$) about $0$.
\begin{equation}
2\times\epsilon^{AB} \Rightarrow 2\,\alpha + \cdots
\label{eq:expandalpha}
\end{equation}
Next, we expand for small deviations
($\zeta \equiv E_\textrm{T}^A/E_\textrm{T}^B - 1$)
from matched transverse energies.
\begin{equation}
2\times\epsilon^{AB} \Rightarrow 1 - \frac{\zeta^2}{2} + \cdots
\label{eq:expandzeta}
\end{equation}
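Both limits can be verified symbolically under the assumption that the energy factor takes the form $\epsilon^{AB} = E_\textrm{T}^A E_\textrm{T}^B / \left[ (E_\textrm{T}^A)^2 + (E_\textrm{T}^B)^2 \right]$; this form is consistent with the two expansions quoted above, but Eq.~(\ref{eq:eps}) is not restated here and the sketch should be read with that caveat:
\begin{verbatim}
import sympy as sp

alpha, zeta = sp.symbols("alpha zeta", real=True)

# Assumed form of the energy factor, written in terms of the ratio E_T^A/E_T^B:
def eps(ratio):
    return ratio / (ratio**2 + 1)

# Hierarchical limit E_T^A/E_T^B = alpha -> 0:  2*eps ~ 2*alpha + ...
print(sp.series(2 * eps(alpha), alpha, 0, 3))

# Balanced limit E_T^A/E_T^B = 1 + zeta:        2*eps ~ 1 - zeta**2/2 + ...
print(sp.series(2 * eps(1 + zeta), zeta, 0, 3))
\end{verbatim}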
The SIFT algorithm is observed to be safe in the soft/infrared and collinear (IRC)
radiation limits, because the object separation measure explicitly
vanishes as ($\alpha \Rightarrow 0$) or ($\Delta R \Rightarrow 0$),
up to terms proportional to the daughter mass-squares
(cf. Eq.~\ref{eq:rsqeffdiff}) in the latter case.
This feature ensures that splittings at small angular
separation or with hierarchically distinct transverse energies
will be reunited during clustering at high priority.
With a clustering sequence strictly ranked by generated mass,
JADE was plagued by an ordering ambiguity
between the first and third categories described above,
which presented problems for resummation.
Geneva resolved the problem of mergers between
uncorrelated mutually soft objects at wide separation in the same
way that SIFT does, by diverging when neither
entry in the denominator carries a large energy.
\begin{figure}[ht!]
\centering
\includegraphics[width=.475\textwidth]{Figures/rIRC_splitting.pdf}
\caption{\footnotesize
Hard object $\lambda$ emits a soft and collinear object $\kappa$ at separation $\Delta \eta$, which experiences
a secondary collinear splitting into a pair of objects with comparable hardness ($z \simeq \sfrac{1}{2}$).
}
\label{fig:splitting}
\end{figure}
Yet, both SIFT and Geneva fall short of meeting the {\it recursive}
IRC safety conditions described in Ref.~\cite{Banfi:2004yd} at the measure level.
The challenge arises when a soft and collinear emission splits secondarily into
a very collinear pair, as first observed in Ref.~\cite{Catani:1992tm}.
This scenario is visualized in FIG.~\ref{fig:splitting}, with hard object $\lambda$
recoiling off a much softer emission $\kappa$ (having $E_\textrm{T}^\kappa/E_\textrm{T}^\lambda \ll 1$) at
a narrow pseudo-rapidity separation ($\Delta \eta \ll 1$).
Azimuthal offsets are neglected here for simplicity. The secondary radiation
products are of comparable hardness for the situation of interest,
carrying momentum fractions ($z \simeq \sfrac{1}{2}$)
and ($1-z$) relative to their parent object $\kappa$.
It can be that the members of this secondary pair
each successively combine with the hard primary object rather than first
merging with each other. This ordering ambiguity implies that the value
of the measure $\delta_{AB}$ after the final recombination of all three
objects is likewise sensitive to the details of the secondary splitting.
However, the mismatch is guaranteed to be no more than a factor of 2.
Accordingly, this is a much milder violation than one
associated with a divergence (as for JADE).
While it does present difficulties for standard approaches to automated
computation, it does not exclude computation.
We conclude this section by sketching the relevant calculation, translating
results from Appendix~F of Ref.~\cite{Banfi:2004yd}
into the language of the current work.
The secondary splitting is characterized by a parameter
$\mu^2_\kappa \equiv \left(m_\kappa / p_\textrm{T}^{\hspace{0.75pt}\kappa}\right)^{2}$.
We further apply the limit ($\,\mu^2_\kappa \!\ll\! 1\,$), which implies
($E_\textrm{T}^{\hspace{0.75pt}\kappa} \simeq p_\textrm{T}^{\hspace{0.75pt}\kappa}$),
and treat the radiation products of object $\kappa$ as individually massless.
The value of the measure for merging these objects
is readily computed with Eq.~(\ref{eq:siftmeasure}),
yielding ($\delta^z_{1-z} \simeq 2\,\mu^2_\kappa$). Note that the
coefficient comes from the sum of squares in the measure
denominator, in the limit of a balanced splitting.
The merger of objects $\lambda$ and $\kappa$ (given prior recombination of the $\kappa$ products)
is best treated with Eq.~(\ref{eq:measure_diffs}),
defining $\mu^2_\lambda \equiv \left(m_\lambda / p_\textrm{T}^{\hspace{0.75pt}\lambda}\right)^{2}$,
and applying the limits in Eqs.~(\ref{eq:rsqeffdiff},~\ref{eq:expandalpha}), as follows:
\begin{equation}
\delta_\lambda^{\kappa} \simeq
\bigg( \frac{E_\textrm{T}^\kappa}{E_\textrm{T}^\lambda} \bigg)
\times
\bigg[ \, (\Delta \eta^\kappa_\lambda)^2 + \mu^2_\kappa + \mu^2_\lambda \,\bigg]
\label{eq:deltakl}
\end{equation}
However, if the remnants of object $\kappa$ instead combine in turn
with object $\lambda$, then the final value of the measure
(taking $z \ge \sfrac{1}{2}$ without loss of generality)
is instead ($\,\delta^z_\lambda \simeq z \times \delta_\lambda^{\kappa}\,$).
In addition to that overall rescaling,
the $\mu^2_\kappa$ term from Eq.~(\ref{eq:deltakl})
is absent from the analogous summation in this context.
If the $\kappa$ splitting is hierarchically imbalanced
(with $z \simeq 1$), then the secondary splittings are
less resistant to merging first and
the terminal measure value
becomes insensitive to the merging order.
For balanced splittings, the physical showering history
will be ``correctly'' rewound if ($\,\delta^z_{1-z} < \,\delta^{1-z}_\lambda\,$).
But, there are no applicable kinematic restrictions enforcing that
condition, and SIFT's preference for associating objects
at dissimilar momentum scales actually constitutes a bias in the other direction.
On the other hand, the filtering criterion can
help curb potential ambiguities in this regime.
Specifically, the energy scale factor ($\,2\times\epsilon^z_{1-z} \simeq 1\,$)
associated with products of object $\kappa$
will be subject here to the Eq.~(\ref{eq:expandzeta}) limit.
So, the ``wrong'' order of association is strongly correlated
with cases where ($\,\epsilon^{1-z}_\lambda \ll 1\,$), since this
is generally required in order to overcome the tendency
for strict collinearity
($\Delta\widetilde{R}^z_{1-z} \ll \Delta\widetilde{R}^{1-z}_\lambda$)
in secondary splittings to commensurate momentum scales.
In turn, this enhances the likelihood that the Drop condition
from Eq.~(\ref{eq:iso_drop}) will veto any such merger.
A full clarification of the SIFT filtering criterion's
implications for recursive IRC safety is beyond
our current scope, but is of interest for future work.
\section{Conclusions and Summary\label{sct:conclusion}}
We have introduced a new scale-invariant jet clustering algorithm named SIFT (Scale-Invariant Filtered Tree)
that maintains the resolution of substructure for collimated decay products at large boosts.
This construction unifies the isolation of variable-large-radius jets,
recursive grooming of soft wide-angle radiation,
and finding of subjet-axis candidates into a single procedure.
The associated measure asymptotically recovers angular and kinematic behaviors of algorithms in the $k_\textrm{T}$-family,
by preferring early association of soft radiation with a resilient hard axis,
while avoiding the specification of a fixed cone size.
Integrated filtering and variable-radius isolation criteria resolve the halting problem
common to radius-free algorithms and block assimilation of soft wide-angle radiation.
Mutually hard structures are preserved to the end of clustering,
automatically generating a tree of subjet axis candidates
at all multiplicities $N$ for each isolated final-state object.
Excellent object identification and kinematic reconstruction are maintained without parameter tuning across
more than an order of magnitude in transverse momentum scale, and superior resolution is exhibited for highly-boosted partonic systems.
The measure history captures information that is useful for tagging massive resonances,
and we have demonstrated with the aid of supervised machine learning that this observable has substantially more power
for discriminating narrow 1-, 2-, and 3-prong event shapes than the benchmark technique using $N$-subjettiness.
These properties suggest that SIFT may prove to be a useful tool for the continuing study of jet substructure.
\section*{Acknowledgements}
The authors thank Bhaskar Dutta, Teruki Kamon, William Shepherd, Andrea Banfi, Rok Medves,
Roman Kogler, Anna Albrecht, Anna Benecke, Kevin Pedro, Gregory Soyez, David Curtin, and Sander Huisman for useful discussions.
The work of AJL was supported in part by the UC Southern California Hub,
with funding from the UC National Laboratories division of the University of California Office of the President.
The work of DR was supported in part by DOE grant DE-SC0010813.
The work of JWW was supported in part by the National Science Foundation under Grant Nos. NSF PHY-2112799 and NSF PHY-1748958.
JWW thanks the Mitchell Institute of Fundamental Physics and Astronomy and the Kavli Institute for Theoretical Physics for kind hospitality.
High-performance computing resources were provided by Sam Houston State University.
\clearpage
|
{
"arxiv_id": "2302.13224",
"language": "en",
"timestamp": "2023-02-28T02:13:19",
"url": "https://arxiv.org/abs/2302.13224",
"yymm": "2302"
} | \section{Introduction}\label{sec:intro}
The theory of \emph{fair allocation} investigates how to allocate items to a set of agents under some fairness notion, where different agents may have different valuation functions over the items. The spirit of the fair allocation problem is to achieve a desirable outcome for individuals and the whole community simultaneously, which motivates several important problems in mathematics and computer science. The \emph{goods allocation} problem with positive valuation functions has received extensive study~\cite{LMMS04,budish2011,BL16,markakis2017}.
However, in some scenarios in real life, the items to be allocated may have disutility, i.e., the valuation functions are negative, such as troublesome jobs, household chores, or debt. For this case, the problem is called the \emph{chores allocation} problem~\cite{ARSW17}.
Seemingly, the problem of chores allocation is similar to the well-studied goods allocation problem, as we can reduce the former to the latter by replacing each valuation function with its negative counterpart. However, many properties of the problem do not carry over under this reduction. Thus, most results for goods allocation cannot be trivially extended to chores allocation~\cite{ARSW17,BK20,HL21,ALW19,ZW22}.
There are several fairness notions for allocations, such as envy-free (EF)~\cite{LMMS04}, proportionality (PROP)~\cite{steihaus1948}, maximin share (MMS)~\cite{budish2011}, and so on.
In this paper, we consider the MMS fairness notion, where the MMS value is the best possible guarantee that an agent can expect when he divides the items into several bundles but picks after all other agents. Moreover, we study MMS allocations of chores on graphs with connectivity constraints, in which chores are embedded on a graph and each bundle assigned to an agent is required to be connected~\cite{DBLP:journals/corr/abs-1808-09406,DBLP:journals/corr/abs-1811-04872,BCEIP17,LT18,BCL19}. The connectivity requirement for chores captures scenarios such as allocating energy-conservation and emission-reduction obligations among countries, where geographical adjacency is a natural feature, and assigning crop-harvesting or street-cleaning tasks, where it is more convenient to set up and use tools over a geographically contiguous area.
Note that an MMS allocation may not always exist. It is thus also meaningful to study approximate $\alpha$-MMS allocations: whether each agent can always be guaranteed an $\alpha$ fraction of his MMS value in the allocation.
\subsection{Our Contribution}
We propose two novel techniques to study MMS allocation of chores with connectivity constraints and demonstrate their applications
on trees and cycles. Although these graph classes are restrictive, establishing the existence of (approximate) MMS allocations of chores under these constraints is already non-trivial and requires sophisticated analysis.
For trees, the simple algorithm for MMS allocation of goods~\cite{BCEIP17} cannot be extended to chores; an illustrative example will be presented in our later discussion. Owing to the subtle difference between the goods and chores settings, it remains unknown whether or not an MMS allocation of chores on a tree always exists.
We generalize the classical last-diminisher method to a method called the group-satisfied method. Together with a matching technique in graph theory, we show the existence of MMS allocations of chores on some special trees, such as trees with depth at most 3, spiders, and so on.
For cycles, the idea for goods allocation in \cite{LT18} can be trivially adapted to obtain a 3/2-MMS allocation of chores on cycles. Here our major contribution is the idea of using linear programming to design algorithms for approximate $\alpha$-MMS allocations and to construct examples that show the nonexistence of MMS allocations. We take the problem of allocating chores on a cycle to three agents as an example to illustrate the linear programming method.
By using the linear programming method, we may be able to avoid some complicated combinatorial analysis. We show a tight result: a $7/6$-MMS allocation of chores on a cycle to three agents always exists and can be found in polynomial time, while an $\alpha$-MMS allocation may not exist for any constant $\alpha < 7/6$. The linear programming method may be used to solve more allocation problems. As a direct extension to MMS allocation of goods on a cycle to three agents, we also obtain a tight ratio of $5/6$ for $\alpha$.
In the following part of the paper, we first introduce related work on MMS and preliminary background.
Second, we discuss the difference between allocating goods and chores on trees and introduce the group-satisfied method.
Third, we show how to use the group-satisfied method to find MMS allocations for chores on two kinds of trees: trees with depth 3 and spiders.
Fourth, we consider MMS allocations of chores on cycles and introduce a linear programming method to find approximation MMS allocations.
Finally, we discuss the feasibility of our new methods for other problems.
\subsection{Other Related Works}
For goods allocation, the MMS fairness notion was first introduced in~\cite{budish2011}.
The first instance where MMS allocation does not exist was given in~\cite{KPW18}. An instance with fewer goods was identified in~\cite{kPW16}. On the other hand, a 2/3-MMS allocation always exists~\cite{KPW18}. Later, the ratio was improved to 3/4~\cite{GHSSY18} and to $3/4+1/(12n)$~\cite{GT20}, where $n$ is the number of agents.
For only three agents and four agents, the ratio can be further improved to 7/8~\cite{AMNS17} and 4/5~\cite{GHSSY18} respectively.
It is also known that a 39/40-MMS allocation for three agents is impossible~\cite{FFT21}.
All the above results do not take into account connectivity constraints.
For goods allocation on a graph with connectivity constraints, MMS allocation always exists on trees~\cite{BCEIP17} but may not exist on single cycles~\cite{LT18}.
For goods on a cycle, it is known that 1/2-MMS allocation always exists and the ratio can be improved to 5/6 if there are only three agents~\cite{LT18}.
For the chores setting, MMS allocation may not always exist, but 2-MMS allocation always exists~\cite{ARSW17}.
The result was later improved to $4/3$~\cite{BK20} and to $11/9$ \cite{HL21}.
For chores with connectivity constraints, EF, PROP, and equitable allocations on paths and stars were studied in~\cite{BCL19}. A more relaxed fairness notion was studied on paths in~\cite{BCFIMPVZ22}.
Nevertheless, the MMS criterion has not been well explored, except for some trivial results extended from goods allocation.
\section{Background}
Let $A=\{1,\dots,n\}$ be a set of $n$ agents and $C=\{c_1,\dots,c_m\}$ be a set of $m$ chores.
There is an undirected graph $G=(C,E)$ with vertices being chores in $C$.
For each agent $i\in A$, there is a disutility function on chores $u_i:C\rightarrow\mathbb{R}_{\leq 0}$,
where the functions $u_i$ are \emph{additive}, i.e., $u_i(C')=\sum_{c\in C'} u_i(c)$ holds for any subset of chores $C'\subseteq C$.
The whole disutility function profile for all agents is denoted by $U=\{u_1,\dots,u_n\}$.
Let $k$ be a positive integer. Let $P^k: [k] \to 2^C$ be a \emph{$k$-partition} of $C$ such that $\cup_{i\in [k]}P^k(i)=C$ and $P^k(i)\cap P^k(j)=\emptyset$ for any different $i,j\in [k]$, where $[k]=\{1,\dots,k\}$ and the set $P^k(i)$ is called the $i$th \emph{bundle} in the $k$-partition.
A $k$-partition $P^k$ is \emph{valid} if the induced subgraph $G[P^k(i)]$ is connected for each $i\in [k]$.
Let $\mathcal{P}^k$ denote the set of all valid $k$-partitions.
An \emph{allocation} of $C$ is defined as a valid $n$-partition $\pi:A\rightarrow 2^C$.
We mainly consider the \textit{Maximin share (MMS)} criterion for the allocation of chores.
Given a graph $G$ and a positive integer $k$, for each $i\in A$, we define the \emph{MMS value} of agent $i$ as follows when the chores in $G$ are allocated to $k$ agents
$$mms_i^k(G)=\max_{P^k\in\mathcal{P}^k}\min_{j\in [k]}u_i(P^k(j)).$$
For simplicity, we use $mms_i^k$ and $mms_i$ to denote $mms_i^k(G)$ and $mms_i^n(G)$, respectively.
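For example, suppose $G$ is a path on three chores $c_1 c_2 c_3$ with $u_i(c_1)=-2$, $u_i(c_2)=-1$ and $u_i(c_3)=-1$ for some agent $i$, and let $k=2$. Cutting after $c_1$ gives bundles with disutilities $-2$ and $-2$, while cutting after $c_2$ gives $-3$ and $-1$; hence $mms_i^2(G)=-2$.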
For any constant $\alpha \geq 1$, an $n$-partition $P$ of $G$ is called an \emph{$\alpha$-MMS$_{i}$ split} for agent $i$ if each bundle $b\in P$ induces a connected subgraph and $u_i(b)\geq \alpha\times mms_i$.
When $\alpha=1$, we simply call it an MMS$_{i}$ split.
We say a valid allocation $\pi$ is an \emph{$\alpha$-MMS allocation} if $u_i(\pi(i))\geq \alpha\times mms_i$ for each agent $i\in A$. When $\alpha=1$, we simply call it an MMS allocation.
We focus on the existence of MMS (or $\alpha$-MMS) allocations of chores on graphs and algorithms to find them if they exist. Our algorithms may need to use the MMS value for each agent.
We show that when the graph is a tree or a cycle, the MMS value for each agent can be easily computed. The algorithms are modified from the algorithms for goods in~\cite{BCEIP17} and the detailed proof can be found in
Appendix~\ref{append-1}.
\begin{lemma} \label{MMScom}
For allocating chores on a tree or a cycle, the MMS value $mms_i$ for each agent $i\in A$ can be computed in polynomial time.
\end{lemma}
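To make Lemma~\ref{MMScom} concrete for the simplest case, the following Python sketch computes the MMS value of one agent on a path. It is our own illustrative code rather than the algorithm of~\cite{BCEIP17} used in Appendix~\ref{append-1}: it binary-searches over the $O(m^2)$ candidate values given by the disutilities of contiguous segments, using a greedy feasibility check (for chores, a partition into fewer connected bundles can always be refined without decreasing the minimum, so checking at most $k$ bundles suffices).
\begin{verbatim}
# Illustrative sketch (not the authors' code): MMS value of one agent on a path.
# vals[j] <= 0 is the agent's disutility of the j-th chore along the path.

def feasible(vals, k, t):
    """Can the path be cut into at most k connected bundles, each with sum >= t?"""
    parts, cur = 1, 0.0
    for v in vals:
        if v < t:              # a single chore already below the threshold
            return False
        if cur + v >= t:       # greedily extend the current bundle
            cur += v
        else:                  # close it and start a new bundle with this chore
            parts += 1
            cur = v
    return parts <= k

def mms_on_path(vals, k):
    """Maximum over valid k-partitions of the minimum bundle disutility."""
    m = len(vals)
    prefix = [0.0]
    for v in vals:
        prefix.append(prefix[-1] + v)
    # the optimum equals the disutility of some contiguous segment
    cands = sorted({prefix[b] - prefix[a] for a in range(m) for b in range(a + 1, m + 1)})
    best = prefix[m]           # one single bundle (the whole path) is always feasible
    lo, hi = 0, len(cands) - 1
    while lo <= hi:            # feasibility is monotone in t, so binary search works
        mid = (lo + hi) // 2
        if feasible(vals, k, cands[mid]):
            best, lo = cands[mid], mid + 1
        else:
            hi = mid - 1
    return best

print(mms_on_path([-2.0, -1.0, -1.0], 2))   # -2.0, as in the example above
\end{verbatim}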
\section{Failure of the Last-diminisher Method on Trees for Chores Allocations}
As mentioned in the introduction, MMS allocations of goods on trees always exist and can be computed in polynomial time~\cite{BCEIP17}.
The algorithm uses the idea of the last-diminisher method for proportionality cuttings of divisible cake~\cite{cake16}.
We first review the allocation procedure for goods. First, let one agent take a rooted subtree $T$ (the original tree is rooted at an arbitrary vertex);
second, each other agent, in order, replaces $T$ with a rooted subtree $T'$ of it if he thinks $T$ is too much, and does nothing if he thinks the current subtree $T$ is not enough or just right;
third, the last agent who modifies $T$ gets the bundle $T$. Note that after deleting $T$ from the original tree $G$, the remaining part is still a connected tree.
The same procedure is then applied to allocate the remaining tree to the remaining agents. This algorithm is not exactly the algorithm for allocating goods on trees in~\cite{BCEIP17}.
Both algorithms use the idea of the last-diminisher method and we just present a simplified version.
For chores, agents do not want to get more burdens. The corresponding last-diminisher method will be different: each agent may want to expand the current bundle if he thinks the current bundle of burdens is too light.
At first glance, we can also assign the last expanded bundle to the last agent who expands it.
However, the expanding operation will cause trouble on trees. After expanding the current bundle (a subtree) by adding some vertices to it, the remaining graph after deleting the new bundle may not be connected (not be a connected tree anymore). This may change the property of the problem. Specifically, we give an example in Figure~\ref{hard} to show that after expanding a rooted subtree the remaining part may not be connected. The current subtree $T$ is the subtree rooted at $v_3$, which contains three vertices $\{v_3, v_6,v_7\}$. An agent may add two vertices $v_1$ and $v_2$ to the subtree $T$. However,
after adding them, $T$ is not a rooted subtree and the remaining graph after deleting $T$ is not connected.
For this case, we can not guarantee the existence of allocations to the remaining $n-1$ agents even if the original allocations to the $n$ agents exist.
\begin{figure}[h]
\centering
\includegraphics[width=4cm,height=2.1cm]{chores1.pdf}
\caption{The initial bundle contains $v_3, v_6$ and $v_7$. After adding $v_1$ and $v_2$ in the bundle, the new bundle is not a rooted subtree anymore.}
\label{hard}
\end{figure}
It turns out that chores allocation on trees becomes much harder. We show that MMS allocations of chores on some special trees always exist and can be found in polynomial time.
Our algorithms will use a technique, called the \emph{group-satisfied method}, which can be regarded as an extension of the last-diminisher method. In an allocation procedure of the group-satisfied method, we will assign $k'$ bundles to $k'$ agents together such that the remaining $n-k'$ agents `agree with' this allocation and the remaining objects still form a connected tree. Here `agree' means that the agent thinks in the new instance his MMS value will not decrease. The group-satisfied method will iteratively apply this allocation procedure until all objects are assigned to agents.
If in each allocation procedure only one bundle is assigned to one agent, then the group-satisfied method becomes the last-diminisher method.
Before introducing the group-satisfied method in detail, we first define some concepts.
\section{Preliminaries for the group-satisfied method}
Recall that a partial allocation is valid if each bundle of chores in the allocation induces a connected graph.
\begin{definition}
Let $\pi'$ be a valid partial allocation of a subset of chores $C'\subseteq C$ to a subset of agents $A'\subseteq A$.
An agent $i\in A'$ is \emph{satisfied} with $\pi'$ if the disutility of the bundle assigned to him is not less than his MMS value, i.e., $u_i (\pi'(i))\geq mms_i$;
an agent $i\in A\setminus A'$ is \emph{satisfied} with $\pi'$ if his MMS value does not decrease in the remaining instance, i.e., $mms_i^{n-|A'|}(G[C\setminus C'])\geq mms_i^{n}(G)$.
\end{definition}
\begin{definition}
A valid partial allocation $\pi'$ is \emph{group-satisfied} if all agents are satisfied with it.
\end{definition}
Given a valid partial allocation $\pi': A'\rightarrow 2^{C'}$, to check whether $\pi'$ is group-satisfied we need to check whether all agents are satisfied with it.
For an agent in $A'$, this is easy since we only need to compute the disutility of the bundle assigned to him. For an agent $i\in A\setminus A'$, we use the following sufficient condition.
\begin{lemma} \label{forsaft}
Let $\pi': A'\rightarrow 2^{C'}$ be a valid partial allocation.
Let $i$ be an agent in $A\setminus A'$ and $P_i$ be an MMS$_i$ split of $G$.
If $G[C\setminus C']$ contains chores only from at most $n-|A'|$ bundles of $P_i$ and the chores from the same bundle induce a connected graph in $G[C\setminus C']$, then $i$ is satisfied with $\pi'$.
\end{lemma}
\begin{proof}
For agent $i\in A\setminus A'$, we split $G[C\setminus C']$ into at most $n-|A'|$ bundles according to the MMS$_i$ split $P_i$ of $G$, i.e., chores in the same bundle of $P_i$ will still be in the same bundle.
We can see that each bundle is also connected. Furthermore, the disutility of each bundle is not less than $mms_i^{n}(G)$. This split is a valid $(n-|A'|)$-partition of $G[C\setminus C']$. So we know that $mms_i^{n-|A'|}(G[C\setminus C'])\geq mms_i^{n}(G)$.
\end{proof}
A subclass of trees satisfying a certain property $\Pi$ is called the class of \emph{Property-$\Pi$ graphs}. For example, the class of paths consists of the trees with no vertex of degree $\geq 3$.
In this paper, we will consider several Property-$\Pi$ graphs, such as paths, trees with only one vertex of degree $\geq 3$, trees with depth 3, and so on.
We have the following important lemma that will widely be used in our proofs.
\begin{lemma} \label{important}
For any instance of MMS allocation of chores on Property-$\Pi$ graphs, if there is always a group-satisfied partial allocation $\pi': A'\rightarrow 2^{C'}$ such that the remaining graph $G[C\setminus C']$ is still a Property-$\Pi$ graph,
then MMS allocations of chores on Property-$\Pi$ graphs always exist. Furthermore, if the partial allocation $\pi'$ can be found in polynomial time, then MMS allocations of chores on Property-$\Pi$ graphs can be found in polynomial time.
\end{lemma}
\begin{proof}
We prove this lemma by induction on the number of agents. It trivially holds for one agent: MMS allocations always exist and can be found in polynomial time. Next, we assume that the lemma holds for any $n'<n$ agents and prove that it also holds for $n$ agents. Let $\pi': A'\rightarrow 2^{C'}$ be a group-satisfied partial allocation, and assume there is an algorithm that finds it in polynomial time.
We consider the remaining problem after the allocation $\pi'$. By the assumption of the lemma, the remaining set of chores $C''=C\setminus C'$ induces a graph $G''$ that is still a Property-$\Pi$ graph. We need to allocate $C''$ to the remaining $n-|A'|$ agents in $A''=A\setminus A'$.
By the induction, we know that there is an MMS allocation $\pi'': A''\rightarrow 2^{C''}$ such that $u_i (\pi''(i))\geq mms_i^{|A''|}(G'')$ holds for each $i\in A''$.
By the definition of group-satisfied allocations, we also know that $mms_i^{|A''|}(G'')\geq mms_i^n(G)$ holds for each $i\in A''$.
Therefore, the two allocations $\pi'$ and $\pi''$ will form a valid MMS allocation of the original graph $G$.
For the running time, by the induction, we know that the allocation $\pi''$ can be found in polynomial time. The allocation $\pi'$ can also
be executed in polynomial time by the assumption. So an MMS allocation of the whole graph $G$ can be done in polynomial time.
\end{proof}
Next, we show how to use Lemma~\ref{important} to find feasible allocations. We first consider the simple case of paths.
\subsection{Paths}
Assume that the graph $G$ is a path now. We consider the MMS$_i$ split of the path $G$ for each agent $i$. In each MMS$_i$ split, the bundle containing the leftmost chore in the path $G$ is called the \emph{first bundle}.
Since the graph is a path, we know that there is always an agent $i^*$ whose first bundle $C^*$ contains every other agent's first bundle. The allocation procedure is to allocate the bundle $C^*$ to agent $i^*$.
We can see that this allocation is group-satisfied and the remaining chores still form a path. By Lemma~\ref{important}, we know that MMS allocations of chores on paths always exist.
For the running time, if we already know the MMS value of each agent, then the algorithm takes only $O(nm)$ time: for each agent, we scan the chores from left to right and check whether the total disutility of the chores scanned so far is still not less than his MMS value.
Note that, in this allocation procedure, only one bundle of chores is assigned to one agent in each iteration. So it is indeed the last-diminisher method.
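The procedure above can be implemented as in the following sketch (again our own illustrative code, assuming the MMS value of every agent on the whole path has already been computed, e.g.\ by the previous sketch). For chores, the first bundle of agent $i$ can be taken as the longest prefix whose disutility is still at least $mms_i$; the agent whose first bundle is longest receives it, and the last remaining agent takes all leftover chores.
\begin{verbatim}
# Illustrative sketch (not the authors' code): last-diminisher-style MMS
# allocation of chores on a path, given the MMS value mms[i] of every agent.

def longest_prefix(vals, start, mms_value):
    """Length of the longest prefix of vals[start:] with disutility >= mms_value."""
    total, best = 0.0, 0
    for j in range(start, len(vals)):
        total += vals[j]
        if total >= mms_value:
            best = j - start + 1
        else:
            break              # prefix sums only decrease, so we can stop here
    return best

def allocate_path(vals_by_agent, mms):
    """vals_by_agent[i][j]: agent i's disutility of the j-th chore on the path.
       Returns agent -> (first index, last index); (s, s-1) encodes an empty bundle."""
    agents = list(range(len(mms)))
    start, allocation = 0, {}
    while agents:
        lengths = {i: longest_prefix(vals_by_agent[i], start, mms[i]) for i in agents}
        winner = max(agents, key=lambda i: lengths[i])   # longest first bundle wins
        take = lengths[winner]
        if len(agents) == 1:                             # last agent takes the rest
            take = len(vals_by_agent[winner]) - start
        allocation[winner] = (start, start + take - 1)
        start += take
        agents.remove(winner)
    return allocation

vals = [[-2.0, -1.0, -1.0],
        [-1.0, -1.0, -2.0]]
print(allocate_path(vals, mms=[-2.0, -2.0]))   # {1: (0, 1), 0: (2, 2)}
\end{verbatim}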
Before introducing the algorithms for trees of a more complicated structure,
we introduce more notations.
\subsection{$x$-perfect and $x$-super subtrees}
Next, we assume the input graph $G$ is a tree. We will select a vertex $r$ in $G$ as the root and consider the tree as a rooted tree. The subtree rooted at a vertex $v$ is denoted by $T_v$. The number of vertices on the path from the root $r$ to a vertex $v$ is the \emph{depth} of the vertex $v$; in particular, the root has depth 1. The \emph{depth} of a rooted tree is the largest depth over all its vertices.
We also say that an unrooted tree has depth at most $x$ if we can select a vertex as the root such that the rooted tree has depth at most $x$.
A tree of a single vertex is called a \emph{trivial tree} and a tree of more than one vertex is called a \emph{non-trivial tree}.
For each agent $i\in A$, we fix an MMS$_i$ split $P_i$ of the tree.
Let $v$ be a vertex on the tree. We consider the subtree $T_v$ rooted at $v$. Assume that $x$ bundles in an MMS$_i$ split $P_i$ are contained in the subtree $T_v$, and $y$ bundles in $P_i$ contain at least one vertex in $T_v$.
It always holds that $x\leq y \leq x+1$ since there is at most one bundle in $P_i$, which contains the vertex $v$, that contains some vertices in $T_v$ but is not contained in $T_v$.
We say that agent $i$ \emph{$y$-splits} the subtree $T_v$ if the MMS$_i$ split $P_i$ makes $x=y$
and \emph{$y^+$-splits} the subtree $T_v$ if the MMS$_i$ split $P_i$ makes $y=x+1$.
See Figure~\ref{pv1} for an illustration of $y$-splitting and $y^+$-splitting.
\begin{figure}[htbp]
\centering
\includegraphics[width=4cm,height=2.2cm]{chores2.pdf}
\caption{In this instance, there are 7 chores on the tree. One agent's MMS split partitions the tree into three bundles as shown.
We can see that this agent $2^{+}$-splits the subtree rooted at $v_2$ and $1$-splits the subtree rooted at $v_3$.}
\label{pv1}
\end{figure}
\vspace{-2mm}
\begin{definition} \label{pands}
A rooted subtree $T_v$ is called \emph{$x$-perfect} if no agent $z$-splits it for any $z< x$, no agent $z^+$-splits it for any $z\leq x$, and at least one agent $x$-splits it.
A rooted subtree $T_v$ is called \emph{$x$-super} if no agent $z$-splits it for any $z< x$, no agent $z^+$-splits it for any $z< x$, and at least one agent $x^+$-splits it.
\end{definition}
\begin{definition}
For an $x$-super tree $T_v$, an agent that $x^+$-splits it is called a \emph{dominator} of the tree $T_v$, and the set of dominators of $T_v$ is denoted by $D(T_v)$.
\end{definition}
\begin{lemma}\label{perfect_c}
Let $T_v$ be an $x$-perfect rooted subtree, and $\pi'$ be a valid partial allocation that allocates $T_v$ to $x$ agents $A'$.
If each agent in $A'$ is satisfied with $\pi'$, then $\pi'$ is group-satisfied.
\end{lemma}
\begin{proof}
We only need to show that any agent in $A\setminus A'$ is also satisfied with $\pi'$. According to the definition of $x$-perfect subtrees, we know that for any agent $i\in A$, the graph $G-T_v$ contains chores from at most $n-x$ bundles of his MMS$_i$ split of $G$. Furthermore, the chores from the same bundle form a connected graph in $G-T_v$ since $T_v$ is a rooted subtree of $G$. By Lemma~\ref{forsaft}, we know that all agents in $A\setminus A'$ are satisfied with $\pi'$.
\end{proof}
\begin{lemma}\label{super_c}
Let $\mathcal{T}=\{T_1, T_2, \dots, T_l\}$ be a set of disjoint rooted subtrees of $G$, where $T_i$ is $x_i$-super $(i\in \{1,2,\dots, l\})$.
Let $\pi'$ be a valid partial allocation that allocates chores in all subtrees in $\mathcal{T}$ to $\sum_{i=1}^l x_i$ agents $A'$, where $A'\supseteq \bigcup_{i=1}^l D(T_i)$ contains all dominators of all subtrees in $\mathcal{T}$.
If each agent in $A'$ is satisfied with $\pi'$, then $\pi'$ is group-satisfied.
\end{lemma}
\begin{proof}
We only need to show that any agent in $A\setminus A'$ is also satisfied with the allocation $\pi'$.
For any subtree $T_i\in \mathcal{T}$ and any agent $j\in A\setminus D(T_i)$, by the definition of $x$-super subtrees and dominators, we know that the graph $G-T_i$ contains chores from at most $n-x_i$ bundles of his MMS$_j$ split of $G$. By iteratively applying this claim, we get that for any agent $j_0 \in A\setminus \bigcup_{i=1}^l D(T_i)$, the graph $G-\cup_{i=1}^l T_i$ contains chores from at most $n- \sum_{i=1}^l x_i$ bundles of his MMS$_{j_0}$ split of $G$. Furthermore, the chores from the same bundle form a connected graph in $G-\cup_{i=1}^l T_i$ since each $T_i$ is a rooted subtree of $G$. Recall that $A'\supseteq \bigcup_{i=1}^l D(T_i)$, so every agent in $A\setminus A'$ is such an agent $j_0$.
By Lemma~\ref{forsaft}, we know that all agents in $A\setminus A'$ are satisfied with $\pi'$.
\end{proof}
Lemmas~\ref{perfect_c} and~\ref{super_c} provide some ideas to construct group-satisfied partial allocations based on $x$-perfect and $x$-super subtrees.
Note that Lemma~\ref{perfect_c} only considers one $x$-perfect subtree while Lemma~\ref{super_c} considers several $x_i$-super subtrees together. For several $x$-perfect subtrees, we can deal with them one by one by using Lemma~\ref{perfect_c}. However, for several $x_i$-super subtrees, we should use the stronger Lemma~\ref{super_c}.
\section{Trees with depth 3}
Now we consider trees with depth at most 3 and show how to use the group-satisfied method to solve the allocation problem on these trees. Let $r$ be the root of the tree $G$. If the depth of the tree is 2, then the tree is a star.
For this case, we consider the MMS$_{i_0}$ split $P_{i_0}$ of an arbitrary agent $i_0$ and let agent $i_0$ take the bundle containing the root. Note that each of the other $n-1$ bundles in $P_{i_0}$ contains a single chore, a leaf of the tree. We arbitrarily assign these $n-1$ chores to
the other $n-1$ agents; this works since the disutility of a single chore is not less than the MMS value of any agent. This is an MMS allocation. Next, we assume that the depth of $G$ is 3.
In the algorithm, we also first consider the MMS$_{i_0}$ split $P_{i_0}$ of an arbitrary agent $i_0$ and denote the bundle containing the root by $p_r$.
In the graph $G'=G-p_r$ after deleting $p_r$, each connected component is either a star or a single vertex.
Let $C=\{c_1,c_2,\dots,c_p\}$ be the set of stars in $G'$. Each star in $C$ is a rooted subtree of $G$. We will show that we can always find a group-satisfied allocation.
\textbf{Case 1}: There is a star $c_q\in C$ that is $x$-perfect for some integer $x$. Assume that agent $i$ $x$-splits star $c_q$. We consider the $x$-partition of $c_q$ by agent $i$ and assign the bundle containing the center vertex of $c_q$ to agent $i$ and
assign the remaining $x-1$ chores (each a single leaf) to arbitrary $x-1$ other agents. All the $x$ agents are satisfied with this allocation. By Lemma~\ref{perfect_c}, we know that this allocation of $c_q$ to the $x$ agents is group-satisfied.
\textbf{Case 2}: Every star in $C$ is $x$-super for some integer $x$ (the integer may differ from star to star). Assume that $c_i$ is $x_i$-super for $1\leq i\leq p$. In this case, we will use a matching technique to show that there is either a group-satisfied allocation to allocate some stars in $C$ or a group-satisfied allocation to allocate the whole graph $G$.
We construct an auxiliary bipartite graph $H=(V_1, V_2, E_H)$. The vertex set $V_1$ contains $|C|+1$ vertices: each star in $C$ corresponds to a vertex in $V_1$, and the last vertex in $V_1$ corresponds to the bundle $p_r$ in $P_{i_0}$, i.e., $V_1=C\cup\{p_r\}$. The vertex set $V_2$ corresponds to the agent set $A$, i.e., $V_2=A$. A vertex $c_j\in C$ is adjacent to a vertex $i\in V_2$ if and only if the corresponding agent $i$ is a dominator of the subtree $c_j$, i.e., $i\in D(c_j)$. The vertex $p_r\in V_1$ is only adjacent to vertex $i_0\in V_2$. See Figure~\ref{auxgraph} for an illustration of the construction of the auxiliary graph $H$.
\vspace{-4mm}
\begin{figure}[htbp]
\centering
\includegraphics[width=9cm,height=5cm]{chores3.pdf}
\caption{The graph $G$, where $D(C_1)=\{1,3\},D(C_2)=\{1,2\}$ and $D(C_3)=\{3,i_0\}$, and the auxiliary graph $H$.}
\label{auxgraph}
\end{figure}
We use a standard matching algorithm to find a maximum matching $M_H$ between $V_1$ and $V_2$ in $H$.
\textbf{Case 2.1}: All vertices in $V_1$ are matched in $M_H$. For this case, we will show that we can find a group-satisfied allocation to allocate the whole graph $G$ according to the matching $M_H$.
First of all, the vertex $p_r\in V_1$ can only be matched with $i_0\in V_2$ since $p_r$ is only adjacent to $i_0$ in $H$. So we assign the bundle $p_r$ to agent $i_0$.
Next, we consider other edges in the matching $M_H$.
Assume that $c_{j}\in C$ is matched with $i_j \in V_2$. Then agent $i_j$ is a dominator of the subtree $c_j$. We consider an $x_j$-partition of $c_j$ by agent $i_j$ and assign the bundle containing the center vertex to agent $i_j$. All other bundles left are bundles of a single chore.
For each star $c_{j}\in C$, we do the above allocation. Then we can allocate $|M_H|=|C|+1$ bundles to $|M_H|$ different agents since $M_H$ is a matching.
After this, the remaining chores will form an independent set. We assign each remaining chore to a remaining agent. This is the algorithm.
\begin{lemma}\label{gs_case2.1}
The allocation in Case 2.1 is group-satisfied.
\end{lemma}
\begin{proof}
First, we show that the allocation is a valid allocation to allocate all chores to agents. The MMS$_{i_0}$ split $P_{i_0}$ of agent $i_0$ splits the graph into $n$ bundles.
For a star $c_{i}\in C$, which is $x_i$-super, suppose that it contains exactly $x'$ bundles of $P_{i_0}$. Since every bundle of $P_{i_0}$ other than $p_r$ lies entirely inside one component of $G-p_r$, agent $i_0$ $x'$-splits $c_i$, and hence $x_i\leq x'$ by Definition~\ref{pands}. This relation holds for all stars in $C$.
In our allocation, we will split each star $c_{i}\in C$ into $x_i\leq x'$ bundles. So our allocation will split the whole graph into $n'\leq n$ bundles. In our allocation, the first $|C|+1$ bundles are assigned according to
the matching. So no two bundles are assigned to the same agent. For the remaining bundles, each of them contains a single chore and the number of bundles is not greater than the number of remaining agents by $n'\leq n$.
So all chores can be assigned to agents. This is a valid allocation to allocate all chores to agents.
Second, we show that all agents are satisfied with the allocation. First, agent $i_0$ is satisfied with the bundle $p_r$ because $p_r$ is a bundle in his MMS$_{i_0}$ split $P_{i_0}$.
For each matched star $c_{j}\in C$, we assign the bundle containing the center vertex of $c_j$ to the matched agent $i_j$. Agent $i_j$ is satisfied with this bundle because it is a subset of one bundle of his MMS$_{i_j}$ split of $G$.
All other bundles left are bundles of a single chore. Each agent is satisfied with a single chore. So all agents are satisfied with the allocation.
\end{proof}
\textbf{Case 2.2}: Some vertices in $V_1$ are not in the matching $M_H$. In this case, we will show that we can find a group-satisfied allocation that allocates a subset of the stars in $C$.
Our algorithm uses the following concept of a \emph{crown}.
\begin{definition}
Let $H=(V_1, V_2, E_H)$ be a bipartite graph.
A pair of nonempty vertex sets $(V'_1,V'_2)$ is called a \emph{crown} of $H$, if the following conditions hold:
\begin{enumerate}
\item $V'_1\subseteq V_1$ and $V'_2 \subseteq V_2$.
\item any vertex in $V'_1$ is only adjacent to vertices in $V'_2$.
\item there is a matching $M'_H$ of size $|V'_2|$ between $V'_1$ and $V'_2$. The matching $M'_H$ is called a \emph{witness matching} of the crown.
\end{enumerate}
\end{definition}
The following lemma gives a condition for the existence of a crown structure.
\begin{lemma}\label{condition_crown}
Let $H=(V_1, V_2, E_H)$ be a bipartite graph and $M_H$ be a maximum matching between $V_1$ and $V_2$. If $|M_H|< |V_1|$, then there is a crown $(V'_1,V'_2)$ of $H$, which can be found in linear time.
\end{lemma}
\begin{proof}
A vertex in $H$ is called \emph{$M_H$-saturated} if it is an endpoint of an edge in $M_H$ and \emph{$M_H$-unsaturated} otherwise. A path $P$ in $H$ that alternates between edges not in $M_H$ and edges in $M_H$ is called an \emph{$M_H$-alternating path}.
Since $|M_H|< |V_1|$, we know that there are some $M_H$-unsaturated vertices in $V_1$.
Let $V'_1\subseteq V_1$ be the set of vertices in $V_1$ that are reachable from an $M_H$-unsaturated vertex in $V_1$ via an $M_H$-alternating path, possibly of length zero
(which means that all $M_H$-unsaturated vertices in $V_1$ are in $V'_1$).
Let $V'_2\subseteq V_2$ be the set of vertices in $V_2$ that are reachable from an $M_H$-unsaturated vertex in $V_1$ via an $M_H$-alternating path.
Note that $V'_1$ and $V'_2$ can be computed in linear time by a breadth-first search if $M_H$ is given.
We will show that $(V'_1,V'_2)$ is a crown.
The first condition in the definition of a crown holds trivially. For the second condition, any vertex $v$ adjacent to a vertex $u\in V'_1$ must be in $V'_2$: an $M_H$-alternating path from an $M_H$-unsaturated vertex in $V_1$ to $u$, extended by the edge $uv$, is an $M_H$-alternating path from an $M_H$-unsaturated vertex in $V_1$ to $v$, regardless of whether $uv$ belongs to $M_H$.
For the third condition, we show that the subset $M'_H$ of edges in $M_H$ with two endpoints in $V'_1\cup V'_2$ will form a witness matching.
We know that $M'_H$ is a matching since it is a subset of the matching $M_H$. We only need to prove that $|M'_H|=|V'_2|$. It is sufficient to prove that any vertex in $V'_2$ is contained in an edge in $M'_H$.
For any $v' \in V'_2$, there is an $M_H$-alternating path $P$ from an $M_H$-unsaturated vertex $u'\in V_1$ to $v'$. Note that the first edge (containing $u'$) in $P$ is not in $M_H$ since $u'$ is $M_H$-unsaturated.
If the last edge (containing $v'$) in $P$ is not in $M_H$, then we could get a bigger matching $M^*_H$ of $H$ by replacing $M_H\cap E(P)$ with $E(P)\setminus M_H$ in $M_H$,
which is a contradiction to the maximality of $M_H$. So we know that any vertex in $V'_2$ is contained in an edge in $M'_H$.
All three conditions in the definition of the crown hold. So $(V'_1,V'_2)$ is a crown.
\end{proof}
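The construction in this proof can be turned into code directly. The sketch below is our own illustration: it computes a maximum matching by augmenting paths and then extracts $(V'_1,V'_2)$ by the alternating search described above; the toy input at the end is hypothetical.
\begin{verbatim}
# Illustrative sketch (not the authors' code): maximum matching by augmenting
# paths and the crown (V1', V2'), for a bipartite graph given as
# adj[u] = list of V2-vertices adjacent to u in V1.

def max_matching(adj, V2):
    match_of_v2 = {v: None for v in V2}          # v in V2 -> matched u in V1

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            if match_of_v2[v] is None or try_augment(match_of_v2[v], visited):
                match_of_v2[v] = u
                return True
        return False

    for u in adj:
        try_augment(u, set())
    return match_of_v2

def crown(adj, V2):
    match_of_v2 = max_matching(adj, V2)
    matched_v1 = {u for u in match_of_v2.values() if u is not None}
    unsaturated = [u for u in adj if u not in matched_v1]
    if not unsaturated:
        return None                              # |M_H| = |V_1|: Case 2.1, no crown
    V1p, V2p, queue = set(unsaturated), set(), list(unsaturated)
    while queue:                                 # alternating breadth-first search
        u = queue.pop()
        for v in adj[u]:                         # non-matching edges from V1 to V2
            if v not in V2p:
                V2p.add(v)
                w = match_of_v2[v]               # matching edge back from V2 to V1
                if w is not None and w not in V1p:
                    V1p.add(w)
                    queue.append(w)
    return V1p, V2p

# Hypothetical toy input: two stars, both dominated only by agent 1.
adj = {"c1": [1], "c2": [1], "p_r": ["i0"]}
print(crown(adj, V2=[1, "i0"]))                  # V1' = {'c1', 'c2'}, V2' = {1}
\end{verbatim}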
\begin{lemma} \label{crownredu}
Let $H=(V_1, V_2, E_H)$ be the auxiliary bipartite graph constructed in Case~2 for trees with depth 3.
Given a crown $(V'_1,V'_2)$ of $H$, we can find a group-satisfied allocation in linear time.
\end{lemma}
\begin{proof}
Let $M'_H$ be the witness matching of the crown $(V'_1,V'_2)$ and let $V^*_1\subseteq V'_1$ be the set of vertices of $V'_1$ that are matched in $M'_H$. The group-satisfied allocation allocates the stars in $V^*_1$.
Assume that $c_{j}\in V^*_1$ is matched with $i_j\in V'_2$ in $M'_H$. Then agent $i_j$ is a dominator of $c_j$. As in Case~2.1, we consider an $x_j$-partition of $c_j$ by agent $i_j$, assign the bundle containing the center vertex of $c_j$ to agent $i_j$, and assign the remaining single chores to distinct agents that have not yet received a bundle.
Every agent receiving a bundle is satisfied: agent $i_j$ is satisfied because the bundle containing the center is a subset of one bundle of his MMS$_{i_j}$ split of $G$, and every agent is satisfied with a single chore.
Since $M'_H$ is a matching of size $|V'_2|$, every agent in $V'_2$ receives a bundle. By the construction of $H$ and the definition of a crown, every dominator of a star in $V^*_1$ belongs to $V'_2$, so the set of agents receiving bundles contains all dominators of the allocated stars. By Lemma~\ref{super_c}, the allocation is group-satisfied.
Moreover, each allocated star is a whole rooted subtree of $G$, so after deleting the allocated stars the remaining graph is still a tree with depth at most 3. All of the above steps can be carried out in linear time.
\end{proof}
For Case 2.2, the size of the matching $M_H$ is less than the size of $V_1$, and then the condition of Lemma~\ref{condition_crown} holds.
By Lemma~\ref{condition_crown}, we can find a crown $(V'_1,V'_2)$ in polynomial time.
Then we execute the group-satisfied allocation according to the crown $(V'_1,V'_2)$ by Lemma~\ref{crownredu}.
The main steps of our algorithm to compute MMS allocations of chores on trees with depth at most 3 are described in Algorithm~\ref{alg:two}.
\begin{algorithm}[h!]
\caption{Depth$(A,C,U,G)$}
\label{alg:two}
\KwIn{An instance $I=(A,C,U,G)$, where $G=(C,E)$ is a tree with depth 3.}
\KwOut{An MMS allocation of $I$.}
Select an arbitrary agent $i_0$ and let the MMS$_{i_0}$ split be $P_{i_0}$.
Let $p_r \in P_{i_0}$ be the bundle containing the root $r$.
Let $C=\{c_1,c_2,\dots,c_p\}$ be the set of star components in $G'=G-p_r$\;
\If{A star $c_q\in C$ is $x$-perfect for some integer $x$,}
{Assign the star $c_q$ to $x$ agents: split $c_q$ into $x$ bundles according to an agent $i$ who $x$-splits it, assign
the bundle containing the center vertex of $c_q$ to agent $i$, and assign the remaining $x-1$ chores to arbitrary $x-1$ agents\; \textbf{return} Depth$(I')$, where $I'$ is the remaining instance after assigning $c_q$ to $x$ agents\;}
\Else{Each star in $C$ is $x$-super for some integer $x$; We compute the auxiliary graph $H=(V_1, V_2, E_H)$ and a maximum matching $M_H$ in $H$\;
\If{ $|V_1|=|M_H|$}{Assign the whole graph $G$ according to $M_H$\;}
\Else{Compute the crown structure in $H$ and assign some stars in $C$ to agents according to the crown\; and \textbf{return} Depth$(I')$, where $I'$ is the remaining instance after the assignment.}
}
\end{algorithm}
\begin{lemma}\label{depthalg}
MMS allocations of chores on trees with depth at most 3 always exist and can be computed in polynomial time.
\end{lemma}
\begin{proof}
We will use Lemma~\ref{important} to prove this lemma, so we only need to show that we can always find a group-satisfied allocation after which the remaining graph is still a tree with depth at most 3 or an empty graph.
If some star in $C$ is $x$-perfect for some $x$ (Case~1), we assign this star to $x$ agents such that all of them are satisfied with the allocation; by Lemma~\ref{perfect_c}, this allocation is group-satisfied.
Otherwise, every star in $C$ is $x$-super for some $x$ (Case~2), and the algorithm computes the auxiliary graph $H$ and a maximum matching $M_H$. If all vertices of $V_1$ are matched (Case~2.1), we get a group-satisfied allocation of the whole graph by Lemma~\ref{gs_case2.1}.
If some vertex of $V_1$ is unmatched (Case~2.2), we get a group-satisfied allocation by Lemmas~\ref{condition_crown} and~\ref{crownredu}. In any case, we obtain a group-satisfied allocation after which the remaining graph is still a tree with depth at most 3, or all chores are allocated.
By Lemma~\ref{important}, we know that this lemma holds.
\end{proof}
\section{Spiders}
A graph is a \emph{spider} if it is a tree having only one vertex of degree $\geq 3$.
We will also use the group-satisfied method to show the existence of MMS allocations of chores in spiders.
The vertex of degree $\geq 3$ in a spider is called the \emph{center}. A degree-1 vertex in a spider is called a \emph{leaf}.
The path from a leaf to the center is called
a \emph{branch} of the spider and the number of edges in a branch is the \emph{length} of the branch.
In this section, we assume that the input graph $G$ is a spider,
and use $r$ to denote the center and use $B_i$ to denote the branch between the center and a leaf $f_i$.
We also consider the tree as a rooted tree with the root being the center $r$.
Similar to the algorithm for trees with depth 3, the algorithm will also use Lemma~\ref{perfect_c} and Lemma~\ref{super_c} to construct group-satisfied partial allocations based on $x$-perfect and $x$-super subtrees.
\begin{lemma}\label{spiderbranch}
If there is a branch $B_i$ of $G$ that is not completely contained in a single bundle of the MMS$_j$ split of any agent $j\in A$, then we can find a group-satisfied allocation after which the remaining graph is still a spider.
\end{lemma}
\begin{proof}
We give an algorithm to prove this lemma.
For each agent $j$, the bundle in the MMS$_j$ split containing a leaf is called an \emph{ending bundle}. By the condition of the lemma, for each agent the ending bundle containing the leaf $f_i$ is a proper subpath of the branch $B_i$ that does not contain the center. Assume that agent $j_0$ has the longest ending bundle containing $f_i$. We assign this bundle to agent $j_0$.
Agent $j_0$ is satisfied since he receives a bundle of his own MMS$_{j_0}$ split. For every other agent $j$, the assigned bundle contains his ending bundle containing $f_i$, so the remaining graph contains chores from at most $n-1$ bundles of his MMS$_j$ split, and chores from the same bundle remain connected; hence agent $j$ is satisfied by Lemma~\ref{forsaft}. The remaining graph is still a spider, so the allocation is group-satisfied. This is analogous to the procedure for paths.
\end{proof}
The above lemma provides a way to find group-satisfied allocations for a special case. Our algorithm iteratively applies the operation in the proof of Lemma~\ref{spiderbranch} until we get a spider such that each branch of it is contained in one bundle in the MMS$_j$ split of an agent $j$. The following part of the algorithm is similar to the algorithm for
trees with depth 3.
We consider the MMS$_{j_0}$ split $P_{j_0}$ of an arbitrary agent $j_0$ and denote the bundle containing the root by $p_r$.
In graph $G'=G-p_r$, each connected component is a path.
Let $C=\{c_1,c_2,\dots,c_p\}$ be the set of these paths. Each path in $C$ is 1-super because the whole branch is contained in a bundle in the MMS$_{j}$ split of some agent $j$.
We also construct an auxiliary bipartite graph $H=(V_1, V_2, E_H)$.
Set $V_1$ contains $|C|+1$ vertices, which are corresponding to the $|C|$ paths in $C$ and the bundle $p_r$.
Each vertex in $V_2$ is corresponding to an agent in $A$ and we simply denote $V_2$ by $A$. A vertex $j\in V_2$ is adjacent to a vertex $c\in C$ in $H$ if and only if the corresponding agent $j\in A$ is a dominator of $c$.
Vertex $p_r\in V_1$ is only adjacent to $j_0\in A$. We compute a maximum matching $M_H$ between $V_1$ and $V_2$ in $H$.
\textbf{Case 1}: $|M_H|=|V_1|$. We allocate the whole graph to the $n$ agents according to the matching $M_H$. The bundle $c$ corresponding to a vertex in $V_1$ will be assigned to agent $i$ if they are matched in $M_H$.
It is easy to see that all chores will be allocated to agents, and all agents are satisfied with the allocation. The allocation is group-satisfied.
\textbf{Case 2}: $|M_H|<|V_1|$.
By Lemma~\ref{condition_crown}, we can find a crown $(V'_1,V'_2)$ in polynomial time.
We allocate the chores to agents according to the algorithm in the proof of the following lemma.
\begin{lemma} \label{crownredu-s}
Let $H=(V_1, V_2, E_H)$ be the auxiliary bipartite graph constructed in Case~2 for spiders.
Given a crown $(V'_1,V'_2)$ of $H$, we can find a group-satisfied allocation in linear time.
\end{lemma}
\begin{proof}
We will give an algorithm to find the group-satisfied allocation. The proof is similar to the proof of Lemma~\ref{crownredu}.
Let $M'_H$ be the witness matching of the crown $(V'_1,V'_2)$. Let $V^*_1\subseteq V'_1$ be the subset of vertices appearing in $M'_H$. The
group-satisfied allocation will assign each path in $V^*_1$ to an agent in $V'_2$.
We assume that $c_{j}\in V^*_1$ is matched with $i_j \in V'_2$ in $M'_H$.
Then agent $i_j$ is a dominator of the subpath $c_j$ and agent $i_j$ is satisfied with $c_j$. We assign $c_j$ to agent $i_j$.
Then we allocate each subpath in $V^*_1$ to a different agent in $V'_2$ since $M'_H$ is a matching. Furthermore, all agents in $V'_2$ are assigned with a bundle.
This is the allocation algorithm. It is easy to see that the algorithm can be executed in linear time.
We show that the allocation satisfies the condition in Lemma~\ref{super_c}, which proves that it is group-satisfied.
It is easy to see that after deleting all the subpaths in $V^*_1$ from $G$, the remaining graph is still a spider.
According to the construction of $H$ and the definition of a crown, for any path $c_{j}\in V^*_1$ the set of its dominators is a subset of $V'_2$, i.e., $D(c_j)\subseteq V'_2$ holds for each $c_{j}\in V^*_1$.
In our algorithm, all agents in $V'_2$ are assigned a bundle. So any agent left unassigned is not a dominator of a path in $V^*_1$. By Lemma~\ref{super_c}, the allocation is group-satisfied.
\end{proof}
All the above results imply that
\begin{lemma} \label{spideralg}
MMS allocations of chores on spiders always exist and can be computed in polynomial time.
\end{lemma}
\begin{proof}
We apply Lemma~\ref{important}. As long as some branch of the spider is not completely contained in a single bundle of the MMS$_j$ split of any agent $j$, Lemma~\ref{spiderbranch} yields a group-satisfied allocation after which the remaining graph is still a spider.
Otherwise, every path in $C$ is 1-super, and we compute the auxiliary graph $H$ and a maximum matching $M_H$ in it. If $|M_H|=|V_1|$ (Case~1), the whole graph is allocated by a group-satisfied allocation. If $|M_H|<|V_1|$ (Case~2), Lemma~\ref{condition_crown} gives a crown and Lemma~\ref{crownredu-s} gives a group-satisfied allocation after which the remaining graph is still a spider.
In every case, a group-satisfied allocation that either allocates all chores or keeps the remaining graph a spider can be found in polynomial time. The lemma then follows from Lemma~\ref{important}.
\end{proof}
\section{Cycles}
Next, we consider the case where the chores are on a simple cycle. Different from trees, MMS allocations of chores on cycles may not exist.
We give an example showing the nonexistence of MMS allocations even for only three agents; we explain later how this example was found.
\begin{table}[h]
\centering
\begin{tabular}{p{40pt}|p{12pt}|p{12pt}|p{12pt}|p{12pt}|p{12pt}|p{12pt}|p{12pt}|p{12pt}|p{12pt}}
\hline
&$c_1$&$c_2$&$c_3$&$c_4$&$c_5$&$c_6$&$c_7$&$c_8$&$c_9$\\
\hline
Agent 1&$-\frac{1}{2}$&$-\frac{1}{3}$&$-\frac{1}{6}$&$-\frac{1}{6}$&$-\frac{1}{3}$&$-\frac{1}{2}$&$-\frac{1}{3}$&$-\frac{1}{3}$&$-\frac{1}{3}$\\
\hline
Agent 2&$-\frac{1}{6}$&$-\frac{1}{2}$&$~0$&$-\frac{1}{2}$&$-\frac{1}{6}$&$-\frac{1}{2}$&$-\frac{1}{3}$&$-\frac{1}{3}$&$-\frac{1}{2}$\\
\hline
Agent 3&$-\frac{1}{3}$&$-\frac{1}{6}$&$-\frac{1}{6}$&$-\frac{1}{3}$&$-\frac{1}{2}$&$-\frac{1}{3}$&$-\frac{1}{3}$&$-\frac{1}{3}$&$-\frac{1}{2}$\\
\hline
\end{tabular}
\caption{An example of nonexistence of $\alpha$-MMS allocations of a 9-cycle to three agents for any $\alpha <{\frac{7}{6}}$}\label{t1-ex}
\end{table}
In the example in Table~\ref{t1-ex}, we are going to allocate nine chores on a cycle to three agents. The chores appear on the cycle in the same order as in the table. The numbers in the table are the disutilities of the chores for the agents. The MMS value of each agent is $-1$. For any four consecutive chores, the disutility for any agent is at most $-\frac{7}{6}$. If an $\alpha$-MMS allocation with $\alpha <\frac{7}{6}$ existed, it would therefore have to split the cycle into
three bundles of three consecutive chores. There are only three ways to split the cycle in this manner, and it is easy to check that none of the three resulting partitions yields an $\alpha$-MMS allocation with $\alpha <\frac{7}{6}$.
Thus, no $\alpha$-MMS allocation with $\alpha <\frac{7}{6}$ exists for this instance.
On the other hand, ${\frac{3}{2}}$-MMS allocations of chores on cycles always exist.
The simple argument for ${\frac{1}{2}}$-MMS allocations of goods on cycles in~\cite{LT18} yields the corresponding result for chores.
\begin{lemma}
A ${\frac{3}{2}}$-MMS allocation of chores on a cycle always exists and can be found in polynomial time.
\end{lemma}
\begin{proof}
Our algorithm to find the ${\frac{3}{2}}$-MMS allocation is as follows: first, remove an arbitrary edge from the cycle, and then find an MMS allocation for the resulting instance on a path.
To prove the correctness, we only need to show that, for each agent, the MMS value on the path is not less than ${\frac{3}{2}}$ times his MMS value on the cycle, since we can always find MMS allocations of chores on paths in polynomial time by the algorithm presented in the previous sections.
For each agent $i$, we consider the MMS$_i$ split $P_i$ of the cycle $G$. After deleting an edge $e$ from $G$, one bundle in $P_i$ may be split into two pieces $x_1$ and $x_2$. At least one piece, say $x_1$, has disutility not less than ${\frac{1}{2}}mms_i(G)$. We adjoin $x_1$ to its neighboring bundle in $P_i$ and let $x_2$ become a single bundle. In this way, we split the path $G-e$ into $n$ bundles, each of disutility not less than ${\frac{3}{2}} mms_i(G)$. Hence $mms_i(G-e)\geq {\frac{3}{2}} mms_i(G)$.
\end{proof}
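Combining this reduction with the path routines sketched earlier (the functions \texttt{mms\_on\_path} and \texttt{allocate\_path} from our previous illustrative sketches, assumed to be in scope) gives a short illustrative implementation.
\begin{verbatim}
# Illustrative sketch: 3/2-MMS allocation of chores on a cycle by cutting one
# edge; reuses mms_on_path and allocate_path from the earlier sketches.

def three_halves_mms_on_cycle(vals_by_agent):
    """vals_by_agent[i][j]: agent i's disutility of chore j; chores j and j+1
       (and the last and the first) are adjacent on the cycle.  We cut the edge
       between the last and the first chore and solve the path instance exactly;
       by the lemma above, the result is a 3/2-MMS allocation on the cycle."""
    n = len(vals_by_agent)
    path_mms = [mms_on_path(vals_by_agent[i], n) for i in range(n)]
    return allocate_path(vals_by_agent, path_mms)

vals = [[-1.0] * 6 for _ in range(3)]            # three identical agents, six chores
print(three_halves_mms_on_cycle(vals))           # {0: (0, 1), 1: (2, 3), 2: (4, 5)}
\end{verbatim}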
\subsection{Allocating a cycle to three agents}\label{cyclechores}
We use a linear programming method to help us find the (tight) approximation ratio for allocating chores on a cycle to three agents.
We will construct a linear programming model for our problem. Let $\alpha$ be the ratio. Our objective is to find the maximum value of $\alpha$ such that
for any valid allocation of the cycle to $n$ agents, at least one agent will get a bundle with the disutility less than $\alpha$ times of his MMS value.
So our objective is to find the maximum value of $\alpha$ such that there is no $\alpha$-MMS allocation.
However, the constraints in our model are hard to list out directly. The number of constraints may even be exponential to $n$ and $m$. We will first give some properties that will allow us to dramatically decrease the number of constraints if there are only three agents.
In the remaining of this subsection, we always assume that the problem is to allocate chores on a cycle to three agents.
\begin{lemma}\label{subsetcc}
Assume that the instance has three agents and let $\alpha\geq1$. If there is an $\alpha$-MMS$_i$ split $P_i$ of $G$ for agent $i$ and an $\alpha$-MMS$_j$ split $P_j$ of $G$ for agent $j$ such that one bundle in $P_i$ is a subset of one bundle in $P_j$,
then $\alpha$-MMS allocations for this instance always exist.
\end{lemma}
\begin{proof}
Let the agent set be $\{1,2,3\}$ and $P_i=\{B_{i1},B_{i2},B_{i3}\}$ be an $\alpha$-MMS$_i$ split for agent $i$ ($i\in\{1,2,3\}$).
W.l.o.g, we assume that the bundle $B_{21}$ in $P_2$ is a subset of the bundle $B_{11}$ in $P_1$ and show
that there is an $\alpha$-MMS allocation. We partition the cycle into three connected bundles $B_{11}$, $B'_{22}=B_{22}-B_{11}$, and $B'_{23}=B_{23}-B_{11}$. We will assign the three bundles to different agents according to different cases.
Case 1. $u_3(B_{11})\leq mms_3$: Then $u_3(G-B_{11})\geq 2\, mms_3$. For any partition of $G-B_{11}$ into two connected bundles, the disutility of at least one bundle is not less than $mms_3$ for agent 3. So at least one of $u_3(B'_{22}) \geq mms_3$ and $u_3(B'_{23}) \geq mms_3$ always holds.
Note that $B'_{22}$ and $B'_{23}$ are subsets of $B_{22}$ and $B_{23}$, so $u_2(B'_{22}) \geq \alpha \cdot mms_2$ and $u_2(B'_{23}) \geq \alpha \cdot mms_2$. We assign $B_{11}$ to agent 1, let agent 3 take whichever of $B'_{22}$ and $B'_{23}$ he is satisfied with, and assign the remaining bundle to agent 2. This is an $\alpha$-MMS allocation.
Case 2. $u_3(B_{11})> mms_3$: We can see that either $B_{12}\supseteq B'_{22}$ or $B_{13}\supseteq B'_{23}$ holds since $B_{12}\cup B_{13} =B'_{22}\cup B'_{23}$. So at least one of $u_1(B'_{22}) \geq \alpha \cdot mms_1$ and $u_1(B'_{23}) \geq \alpha \cdot mms_1$ always holds. We assign $B_{11}$ to agent 3, let agent 1 take whichever of $B'_{22}$ and $B'_{23}$ he is satisfied with, and assign the remaining bundle to agent 2. This is an $\alpha$-MMS allocation.
\end{proof}
A partition of the cycle $G$ into 3 bundles can be represented by a set of three edges in the cycle.
We will call it the \emph{cut representation} of the partition.
Next, we give some important properties of instances having no $\alpha$-MMS allocation, which will be used to build our linear programming constraints.
\begin{lemma} \label{lpuse1}
Assume that the instance has three agents. Let $P_1=\{e_{11}, e_{12}, e_{13}\}, P_2=\{e_{21}, e_{22}, e_{23}\}$ and $P_3=\{e_{31}, e_{32}, e_{33}\}$ be the cut representations of $\alpha$-MMS splits of $G$ for agents 1, 2 and 3, respectively, which always exist for $\alpha \geq 1$.
If the instance has no $\alpha$-MMS allocation, then it holds that\\
(a) all the edges in $P_1, P_2$ and $P_3$ are different;\\
(b) we can relabel the edges in the three sets $P_1, P_2$ and $P_3$ such that the nine edges appear in the following order
on the cycle $G$: $e_{11}e_{21}e_{31}e_{12}e_{22}e_{32}e_{13}e_{23}e_{33}$ (See Figure~\ref{9-cycle}).
\end{lemma}
\begin{proof}
(a) Assume that two of $P_1$, $P_2$ and $P_3$, say $P_1$ and $P_2$ have a common edge $e$.
After deleting $e$ the graph becomes a path. Both of $P_1$ and $P_2$ will partition the path into three connected bundles.
Let $b_1$ be the bundle of agent 1 containing the leftmost chore on the path $G-e$ and let $b_2$ be the bundle of agent 2 containing the leftmost chore on $G-e$. It is easy to see that either $b_1\subseteq b_2$ or $b_2\subseteq b_1$,
which implies the condition of Lemma~\ref{subsetcc} holds. We get a contradiction that the instance has an $\alpha$-MMS allocation.
So the edges in $P_1, P_2$ and $P_3$ are different.
(b) If the nine edges do not appear in the above order, say the positions of two edges are exchanged, then one bundle of one agent is a subset of one bundle of another agent, and by Lemma~\ref{subsetcc} we again get a contradiction.
We omit the straightforward but tedious case analysis here.
\end{proof}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
[scale = 1, line width = 0.5pt,solid/.style = {circle, draw, fill = black, minimum size = 0.3cm},empty/.style = {circle, draw, fill = white, minimum size = 0.3cm}]
\node[empty,label=center:\tiny$S_1$] (B1) at (90:1cm) {};
\node[empty,label=center:\tiny$S_2$] (C1) at (50:1cm) {};
\node[empty,label=center:\tiny$S_3$] (D1) at (10:1cm) {};
\node[empty,label=center:\tiny$S_4$] (E1) at (330:1cm) {};
\node[empty,label=center:\tiny$S_5$] (F1) at (290:1cm) {};
\node[empty,label=center:\tiny$S_6$] (G1) at (250:1cm) {};
\node[empty,label=center:\tiny$S_7$] (H1) at (210:1cm) {};
\node[empty,label=center:\tiny$S_8$] (I1) at (170:1cm) {};
\node[empty,label=center:\tiny$S_9$] (J1) at (130:1cm) {};
\node[label=center:\tiny$e_{11}$] (B2) at (110:1.2cm) {};
\node[label=center:\tiny$e_{21}$] (B2) at (70:1.2cm) {};
\node[label=center:\tiny$e_{31}$] (B2) at (30:1.2cm) {};
\node[label=center:\tiny$e_{12}$] (B2) at (350:1.2cm) {};
\node[label=center:\tiny$e_{22}$] (B2) at (310:1.2cm) {};
\node[label=center:\tiny$e_{32}$] (B2) at (270:1.2cm) {};
\node[label=center:\tiny$e_{13}$] (B2) at (230:1.2cm) {};
\node[label=center:\tiny$e_{23}$] (B2) at (190:1.2cm) {};
\node[label=center:\tiny$e_{33}$] (B2) at (150:1.2cm) {};
\draw[red] (B1)--(C1);
\draw[red] (E1)--(F1);
\draw[red] (H1)--(I1);
\draw[blue] (C1)--(D1);
\draw[blue] (F1)--(G1);
\draw[blue] (I1)--(J1);
\draw[black,dashed] (J1)--(B1);
\draw[black,dashed] (D1)--(E1);
\draw[black,dashed] (G1)--(H1);
\end{tikzpicture}
\caption{A cycle, where the set of three black edges $P_1=\{e_{11}, e_{12}, e_{13}\}$ is the cut representation of an MMS$_1$ split for agent $1$, the set of three red edges $P_2=\{e_{21}, e_{22}, e_{23}\}$ is the cut representation of an MMS$_2$ split for agent $2$, the set of three blue edges $P_3=\{e_{31}, e_{32}, e_{33}\}$ is the cut representation of an MMS$_3$ split for agent $3$, and $\{S_1,S_2,\dots, S_9\}$ are the nine segments after deleting $P_1\cup P_2 \cup P_3$. }
\label{9-cycle}
\end{figure}
\begin{corollary}
MMS allocations of at most eight chores on a cycle to three agents always exist.
\end{corollary}
\begin{proof}
When there are at most eight chores, the cycle $G$ has at most eight edges, while the three cut representations consist of nine edges in total. Hence the cut representations of the MMS splits of $G$ for agents 1, 2, and 3 have at least one common edge. By Lemma~\ref{lpuse1}(a),
we know that an MMS allocation always exists.
\end{proof}
Let $P_1$, $P_2$ and $P_3$ be the cut representations of MMS splits of $G$ for agents 1, 2 and 3, respectively.
Each connected part in the graph after deleting edges in $P_1\cup P_2 \cup P_3$ is called a \emph{segment} (See Figure~\ref{9-cycle}).
\begin{lemma}\label{lpuse1-1}
Assume that the instance has three agents. Fix an MMS$_i$ split of $G$ for each agent $i$. Let $\alpha\geq 1$.
If the instance has no $\alpha$-MMS allocation, the disutility of any four consecutive segments is less than $\alpha\cdot mms_i$ for any agent $i$.
\end{lemma}
\begin{proof}
Let $b$ denote four consecutive segments. Assume that $u_{i_0}(b)\geq \alpha \cdot mms_{i_0}$ holds for an agent $i_0\in A$.
We show that there is always an $\alpha$-MMS allocation.
Case 1. $b$ contains a bundle in the MMS$_{i_0}$ split of $G$ for agent $i_0$: Then there is an $\alpha$-MMS$_{i_0}$ split for agent $i_0$ such that one bundle in it is exactly $b$.
Note that any four consecutive segments will contain two bundles in the MMS splits of $G$ for two different agents. Any MMS split is also an $\alpha$-MMS split for $\alpha \geq 1$.
So the condition of Lemma~\ref{subsetcc} holds and then $\alpha$-MMS allocations always exist.
Case 2. $b$ does not contain a bundle in the MMS$_{i_0}$ split of $G$ for agent $i_0$. In this case, the remaining part $G-b$ is a path that can be split into two connected subpaths, each of which is a subset of a bundle
in the MMS split of $G$ of one of the two agents other than $i_0$ (one subpath for each of them). We assign these two parts to these two agents and assign $b$ to agent $i_0$. This will be an $\alpha$-MMS allocation.
In any case, we can find an $\alpha$-MMS allocation, a contradiction. So the lemma holds.
\end{proof}
\begin{lemma}\label{lpuse2}
Assume that the instance has three agents. Let $\alpha\geq 1$.
If the instance has no $\alpha$-MMS allocation, then for any $\alpha$-MMS$_i$ split $P_i$ of $G$ for agent $i\in A$, there is one bundle $b$ in $P_i$ such that $u_j(b)\geq \alpha \cdot mms_j$ holds for any agent $j\in A\setminus \{i\}$, and for all other bundles $b'$ in $P_i$ it holds that $u_j(b')< \alpha \cdot mms_j$ for any agent $j\in A\setminus \{i\}$.
\end{lemma}
\begin{proof}
We say that an agent is satisfied with a bundle if the disutility of this bundle is not less than $\alpha$ times of his MMS value.
We know that agent $i$ is satisfied with all the three bundles in $P_i$.
For each of the other two agents $\{j_1,j_2\}=A\setminus \{i\}$, the three bundles of $P_i$ together have disutility at least three times his MMS value, so each of them is satisfied with at least one bundle in $P_i$.
If agents $j_1$ and $j_2$ are satisfied with two different bundles in $P_i$, then we assign these two bundles to them and assign the last bundle in $P_i$ to agent $i$.
This would be an $\alpha$-MMS allocation, a contradiction. So agents $j_1$ and $j_2$ are satisfied with one and the same bundle in $P_i$. Moreover, if one of them were also satisfied with a second bundle of $P_i$, the two agents would again be satisfied with two different bundles, a contradiction. Hence the lemma holds.
\end{proof}
Now, we are ready to describe our linear programming (LP). Assume that the instance has no $\alpha$-MMS allocation. We use Lemmas~\ref{lpuse1} to~\ref{lpuse2} to construct the constraints in LP.
We consider an instance $I=(A,C,U,G)$ of allocating chores on a cycle $G$ to three agents $A=\{1,2,3\}$.
Let $P_1=\{e_{11}, e_{12}, e_{13}\}, P_2=\{e_{21}, e_{22}, e_{23}\}$ and $P_3=\{e_{31}, e_{32}, e_{33}\}$ be the cut representations of MMS splits of $G$ for agents 1, 2 and 3, respectively.
We assume that $I$ does not have an $\alpha$-MMS allocation for some constant $\alpha> 1$. By Lemma~\ref{lpuse1}, we know that the nine edges in $P=P_1\cup P_2 \cup P_3$ are different and
we can assume without loss of generality that they appear in the following order
on the cycle $G$: $e_{11}e_{21}e_{31}e_{12}e_{22}e_{32}e_{13}e_{23}e_{33}$.
After deleting the nine edges in $P$, the cycle $G$ will be split into nine segments. We label the nine segments as $S_1, S_2,\dots, S_9$ in the order as shown in Figure~\ref{9-cycle}.
Let the MMS splits of $G$ for agents 1, 2 and 3 be $P_1=\{B_{11}, B_{12}, B_{13}\}, P_2=\{B_{21}, B_{22}, B_{23}\}$ and $P_3=\{B_{31}, B_{32}, B_{33}\}$, where $B_{11}=S_1\cup S_2 \cup S_3$, $B_{12}=S_4\cup S_5 \cup S_6$,
$B_{13}=S_7\cup S_8 \cup S_9$, $B_{21}=S_2\cup S_3 \cup S_4$, $B_{22}=S_5\cup S_6 \cup S_7$, $B_{23}=S_8\cup S_9 \cup S_1$, $B_{31}=S_3\cup S_4 \cup S_5$, $B_{32}=S_6\cup S_7 \cup S_8$,
and $B_{33}=S_9\cup S_1 \cup S_2$.
Note that a segment may consist of a single chore. So in the following analysis we never split a segment and treat each segment as a single big chore.
Besides $\alpha$, our LP has $3\times9$ variables $x_j, y_j$ and $z_j$ ($j\in \{1,2,\dots, 9\}$). The variable $x_j$ (resp., $y_j$ and $z_j$) is the disutility of segment $S_j$ of agent 1 (resp., agent 2 and agent 3), i.e.,
$x_j=u_1(S_j)$, $y_j=u_2(S_j)$ and $z_j=u_3(S_j)$. The first set of constraints in our LP model is to set the domain of these variables.
\begin{eqnarray}\label{equs1}
x_j,y_j,z_j\leq 0 ~~~~~~(j=1,2,\dots,9).
\end{eqnarray}
We normalize the MMS value for each agent by letting $mms_1=mms_2=mms_3=-1$. According to the MMS splits of the three agents, we get $3\times 3$ constraints
\begin{eqnarray}
x_{1+i}+x_{2+i}+x_{3+i}\geq -1&(i=0,3,6);\label{equs2}\\
y_{2+i}+y_{3+i}+y_{4+i}\geq -1&(i=0,3,6);\label{equs3}\\
z_{3+i}+z_{4+i}+z_{5+i}\geq -1&(i=0,3,6),\label{equs4}
\end{eqnarray}
where the indices are computed modulo 9. We will always assume this without stating it every time.
By Lemma~\ref{lpuse1-1}, we directly get $3\times 9$ constraints
\begin{eqnarray}
x_{1+i}+x_{2+i}+x_{3+i}+x_{4+i}< -\alpha&(i=0,1,\dots,8);\label{equs5}\\
y_{1+i}+y_{2+i}+y_{3+i}+y_{4+i}< -\alpha&(i=0,1,\dots,8);\label{equs6}\\
z_{1+i}+z_{2+i}+z_{3+i}+z_{4+i}< -\alpha&(i=0,1,\dots,8).\label{equs7}
\end{eqnarray}
Next, we consider the constraints generated by Lemma~\ref{lpuse2}. The three bundles in the MMS$_1$ split $P_1$ for agent 1 are $B_{11}, B_{12}$ and $B_{13}$.
By Lemma~\ref{lpuse2}, we know that for agents 2 and 3, at most one bundle is satisfied (the disutility is not less than $\alpha$ times of his MMS value).
W.l.o.g., we assume that agents 2 and 3 are satisfied with $B_{11}$, which means that agents 2 and 3 are not satisfied with the other two bundles. Then we get 4 constraints
\begin{eqnarray}
u_i(B_{12})< \alpha \times mms_i =-\alpha&(i=2,3);\label{eqd1}\\
u_i(B_{13})< \alpha \times mms_i =-\alpha&(i=2,3). \label{eqd2}
\end{eqnarray}
We look at the MMS$_2$ split $P_2=\{B_{21}, B_{22}, B_{23}\}$ for agent 2 and the MMS$_3$ split $P_3=\{B_{31}, B_{32}, B_{33}\}$ for agent 3.
By Lemma~\ref{lpuse2} again, agent 1 and agent 3 are satisfied with one bundle in $P_2$, and agent 1 and agent 2 are satisfied with one bundle in $P_3$.
By identifying the symmetrical cases (rotations of the three agents are also considered), we only need to consider three different cases.
Case 1: agent 1 and agent 3 are satisfied with $B_{21}$, and agent 1 and agent 2 are satisfied with $B_{31}$;
Case 2: agent 1 and agent 3 are satisfied with $B_{21}$, and agent 1 and agent 2 are satisfied with $B_{32}$;
Case 3: agent 1 and agent 3 are satisfied with $B_{23}$, and agent 1 and agent 2 are satisfied with $B_{32}$.
For each case, we generate $4\times 2$ constraints like (\ref{eqd1}) and (\ref{eqd2}). There are three cases. So we will get three LP models. Each LP model has the same number of constraints.
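As an illustration of how such an LP can be assembled, the sketch below (our own code, using \texttt{scipy}; not the authors' implementation) builds the model for the choice of satisfied bundles of Case~1, with the strict inequalities relaxed to non-strict ones; the optimal value of the relaxed LP is then an upper bound on the $\alpha$ for which the strict system is satisfiable.
\begin{verbatim}
# Illustrative sketch (our own code): the LP of this subsection for Case 1,
# with the strict inequalities relaxed to "<=", solved with scipy.
import numpy as np
from scipy.optimize import linprog

SEG, ALPHA = 9, 27                  # variables: x_1..x_9, y_1..y_9, z_1..z_9, alpha

def var(agent, seg):                # agent in {0,1,2} for agents 1,2,3
    return 9 * agent + (seg % SEG)

A_ub, b_ub = [], []
def add(coeffs, rhs):               # add one constraint: sum coeffs * vars <= rhs
    row = np.zeros(28)
    for idx, co in coeffs.items():
        row[idx] = co
    A_ub.append(row); b_ub.append(rhs)

# constraints (2)-(4): each agent's own MMS bundles have disutility at least -1
for agent, starts in {0: (0, 3, 6), 1: (1, 4, 7), 2: (2, 5, 8)}.items():
    for s in starts:
        add({var(agent, s + t): -1 for t in range(3)}, 1.0)

# constraints (5)-(7): any four consecutive segments are worth at most -alpha
for agent in range(3):
    for s in range(SEG):
        add({**{var(agent, s + t): 1 for t in range(4)}, ALPHA: 1}, 0.0)

def unsatisfied(agent, start):      # agent's value of a 3-segment bundle <= -alpha
    add({**{var(agent, start + t): 1 for t in range(3)}, ALPHA: 1}, 0.0)

for agent in (1, 2):                # (8)-(9): agents 2, 3 satisfied with B_11 only
    unsatisfied(agent, 3); unsatisfied(agent, 6)       # B_12, B_13
for agent in (0, 2):                # Case 1: agents 1, 3 satisfied with B_21 only
    unsatisfied(agent, 4); unsatisfied(agent, 7)       # B_22, B_23
for agent in (0, 1):                # Case 1: agents 1, 2 satisfied with B_31 only
    unsatisfied(agent, 5); unsatisfied(agent, 8)       # B_32, B_33

c = np.zeros(28); c[ALPHA] = -1.0                      # maximize alpha
bounds = [(None, 0.0)] * 27 + [(1.0, None)]
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds,
              method="highs")
print(res.success, -res.fun if res.success else None)
\end{verbatim}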
By solving the three LPs, we get that they are feasible only for $\alpha <{\frac{8}{7}}$ (two of them) and only for $\alpha <{\frac{7}{6}}$ (the remaining one); the larger threshold is ${\frac{7}{6}}$.
This means that for any $\alpha \geq {\frac{7}{6}}$, none of the three LPs has a solution. We get the following result.
\begin{lemma}
$\frac{7}{6}$-MMS allocations of chores on a cycle to 3 agents always exist and can be found in polynomial time.
\end{lemma}
\begin{proof}
For any instance, we fix an MMS split $P_i$ for each agent $i$ (considering them as the cut representations). If they have some common edges, then we can find an MMS allocation directly in polynomial time by Lemma~\ref{lpuse1}(a) and the algorithm in Lemma~\ref{subsetcc}.
Otherwise, we let $P_1=\{e_{11}, e_{12}, e_{13}\}, P_2=\{e_{21}, e_{22}, e_{23}\}$ and $P_3=\{e_{31}, e_{32}, e_{33}\}$. They are nine different edges.
We can relabel them in the order as shown in Figure~4, otherwise, we know that one bundle of one agent will be a subset of one bundle of another agent by Lemma~\ref{lpuse1}(b) and we can find an MMS allocation in polynomial time by Lemma~\ref{subsetcc}.
Next, we can assume that the above LP model applies to this instance. The LP has no solution for $\alpha = {\frac{7}{6}}$, which means that, with $\alpha = {\frac{7}{6}}$, some constraint of the LP does not hold for this instance.
The constraints (\ref{equs1}) to (\ref{equs4}) clearly hold. If some constraint among (\ref{equs5}) to (\ref{equs7}) does not hold, then by Lemma~\ref{lpuse1-1} there is a $\frac{7}{6}$-MMS allocation with one bundle consisting of four consecutive segments.
We can enumerate all partitions of this kind and find such a $\frac{7}{6}$-MMS allocation in polynomial time. Otherwise, one of the constraints (\ref{eqd1}), (\ref{eqd2}), or the case-specific constraints does not hold.
For this case, one of $P_1$, $P_2$ and $P_3$ is a $\frac{7}{6}$-MMS allocation by Lemma~\ref{lpuse2} and we can also check it in polynomial time.
In any case, there is a $\frac{7}{6}$-MMS allocation and it can be found in polynomial time.
\end{proof}
The LP model can also be used to find instances that witness the tightness of the approximation ratio. If we relax the strict inequalities in constraints (\ref{equs5}) to (\ref{eqd2}) and the subsequent case constraints to non-strict ones (i.e., allow equality), we can solve one LP with
$\alpha ={\frac{7}{6}}$ and two LPs with $\alpha ={\frac{8}{7}}$. For the LP with solution $\alpha ={\frac{7}{6}}$, we also get the values of the variables $x_j, y_j$ and $z_j$ ($j\in \{1,2,\dots, 9\}$) as shown in Table~\ref{t1-ex}. This is how we obtained the example of the nonexistence of $\alpha$-MMS allocations for $\alpha <{\frac{7}{6}}$ in Table~\ref{t1-ex}.
\begin{lemma}
For any $\alpha <{\frac{7}{6}}$, $\alpha$-MMS allocations of chores on a cycle to three agents may not exist.
\end{lemma}
The LP method can also be applied directly to the problem of allocating goods on a cycle to three agents. The detailed arguments are given in Appendix~\ref{appen-2}. We obtain the following result, which is consistent with the results obtained by combinatorial analysis in~\cite{LT18}.
\begin{lemma}\label{lemgood}
$\alpha$-MMS allocations of goods on a cycle to three agents always exist and can be found in polynomial time for $\alpha \leq \frac{5}{6}$, and may not exist for $\alpha > \frac{5}{6}$.
\end{lemma}
\section{Discussion and Conclusion}
For MMS allocations of chores on a tree, we propose the group-satisfied method and use it to solve the problem on two subclasses of trees.
Whether MMS allocations of chores on general trees always exist or not is still an open problem.
We believe that MMS allocations of chores on trees always exist.
We also believe that the proposed group-satisfied method can be used to solve more related problems.
Another contribution of this paper is a novel method based on linear programming (LP) to characterize optimal approximate MMS allocations without complicated combinatorial analysis. This method could potentially solve more general cases by identifying simple and clean necessary conditions for the non-existence of $\alpha$-MMS allocations.
|
{
"arxiv_id": "2302.13259",
"language": "en",
"timestamp": "2023-03-03T02:13:26",
"url": "https://arxiv.org/abs/2302.13259",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
These guidelines include complete descriptions of the fonts, spacing, and
related information for producing your proceedings manuscripts. Please follow
them and if you have any questions, direct them to Conference Management
Services, Inc.: Phone +1-979-846-6800 or email
to \\\texttt{icip2022@cmsworkshops.com}.
\section{Formatting your paper}
\label{sec:format}
All printed material, including text, illustrations, and charts, must be kept
within a print area of 7 inches (178 mm) wide by 9 inches (229 mm) high. Do
not write or print anything outside the print area. The top margin must be 1
inch (25 mm), except for the title page, and the left margin must be 0.75 inch
(19 mm). All {\it text} must be in a two-column format. Columns are to be 3.39
inches (86 mm) wide, with a 0.24 inch (6 mm) space between them. Text must be
fully justified.
\section{PAGE TITLE SECTION}
\label{sec:pagestyle}
The paper title (on the first page) should begin 1.38 inches (35 mm) from the
top edge of the page, centered, completely capitalized, and in Times 14-point,
boldface type. The authors' name(s) and affiliation(s) appear below the title
in capital and lower case letters. Papers with multiple authors and
affiliations may require two or more lines for this information. Please note
that papers should not be submitted blind; include the authors' names on the
PDF.
\section{TYPE-STYLE AND FONTS}
\label{sec:typestyle}
To achieve the best rendering both in printed proceedings and electronic proceedings, we
strongly encourage you to use Times-Roman font. In addition, this will give
the proceedings a more uniform look. Use a font that is no smaller than nine
point type throughout the paper, including figure captions.
In nine point type font, capital letters are 2 mm high. {\bf If you use the
smallest point size, there should be no more than 3.2 lines/cm (8 lines/inch)
vertically.} This is a minimum spacing; 2.75 lines/cm (7 lines/inch) will make
the paper much more readable. Larger type sizes require correspondingly larger
vertical spacing. Please do not double-space your paper. TrueType or
Postscript Type 1 fonts are preferred.
The first paragraph in each section should not be indented, but all the
following paragraphs within the section should be indented as these paragraphs
demonstrate.
\section{MAJOR HEADINGS}
\label{sec:majhead}
Major headings, for example, "1. Introduction", should appear in all capital
letters, bold face if possible, centered in the column, with one blank line
before, and one blank line after. Use a period (".") after the heading number,
not a colon.
\subsection{Subheadings}
\label{ssec:subhead}
Subheadings should appear in lower case (initial word capitalized) in
boldface. They should start at the left margin on a separate line.
\subsubsection{Sub-subheadings}
\label{sssec:subsubhead}
Sub-subheadings, as in this paragraph, are discouraged. However, if you
must use them, they should appear in lower case (initial word
capitalized) and start at the left margin on a separate line, with paragraph
text beginning on the following line. They should be in italics.
\section{PRINTING YOUR PAPER}
\label{sec:print}
Print your properly formatted text on high-quality, 8.5 x 11-inch white printer
paper. A4 paper is also acceptable, but please leave the extra 0.5 inch (12 mm)
empty at the BOTTOM of the page and follow the top and left margins as
specified. If the last page of your paper is only partially filled, arrange
the columns so that they are evenly balanced if possible, rather than having
one long column.
In LaTeX, to start a new column (but not a new page) and help balance the
last-page column lengths, you can use the command ``$\backslash$pagebreak'' as
demonstrated on this page (see the LaTeX source below).
\section{PAGE NUMBERING}
\label{sec:page}
Please do {\bf not} paginate your paper. Page numbers, session numbers, and
conference identification will be inserted when the paper is included in the
proceedings.
\section{ILLUSTRATIONS, GRAPHS, AND PHOTOGRAPHS}
\label{sec:illust}
Illustrations must appear within the designated margins. They may span the two
columns. If possible, position illustrations at the top of columns, rather
than in the middle or at the bottom. Caption and number every illustration.
All halftone illustrations must be clear black and white prints. Colors may be
used, but they should be selected so as to be readable when printed on a
black-only printer.
Since there are many ways, often incompatible, of including images (e.g., with
experimental results) in a LaTeX document, below is an example of how to do
this \cite{Lamp86}.
\section{FOOTNOTES}
\label{sec:foot}
Use footnotes sparingly (or not at all!) and place them at the bottom of the
column on the page on which they are referenced. Use Times 9-point type,
single-spaced. To help your readers, avoid using footnotes altogether and
include necessary peripheral observations in the text (within parentheses, if
you prefer, as in this sentence).
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8.5cm]{image1}}
\centerline{(a) Result 1}\medskip
\end{minipage}
\begin{minipage}[b]{.48\linewidth}
\centering
\centerline{\includegraphics[width=4.0cm]{image3}}
\centerline{(b) Results 3}\medskip
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\linewidth}
\centering
\centerline{\includegraphics[width=4.0cm]{image4}}
\centerline{(c) Result 4}\medskip
\end{minipage}
\caption{Example of placing a figure with experimental results.}
\label{fig:res}
\end{figure}
\section{COPYRIGHT FORMS}
\label{sec:copyright}
You must submit your fully completed, signed IEEE electronic copyright release
form when you submit your paper. We {\bf must} have this form before your paper
can be published in the proceedings.
\section{RELATION TO PRIOR WORK}
\label{sec:prior}
The text of the paper should contain discussions on how the paper's
contributions are related to prior work in the field. It is important
to put new work in context, to give credit to foundational work, and
to provide details associated with the previous work that have appeared
in the literature. This discussion may be a separate, numbered section
or it may appear elsewhere in the body of the manuscript, but it must
be present.
You should differentiate what is new and how your work expands on
or takes a different path from the prior studies. An example might
read something to the effect: "The work presented here has focused
on the formulation of the ABC algorithm, which takes advantage of
non-uniform time-frequency domain analysis of data. The work by
Smith and Cohen \cite{Lamp86} considers only fixed time-domain analysis and
the work by Jones et al \cite{C2} takes a different approach based on
fixed frequency partitioning. While the present study is related
to recent approaches in time-frequency analysis [3-5], it capitalizes
on a new feature space, which was not considered in these earlier
studies."
\vfill\pagebreak
\section{REFERENCES}
\label{sec:refs}
List and number all bibliographical references at the end of the
paper. The references can be numbered in alphabetic order or in
order of appearance in the document. When referring to them in
the text, type the corresponding reference number in square
brackets as shown at the end of this sentence \cite{C2}. An
additional final page (the fifth page, in most cases) is
allowed, but must contain only references to the prior
literature.
\bibliographystyle{IEEEbib}
\section{The race towards generalization}
\label{sec:Introduction}
In the last few years, the research community has witnessed a race towards achieving higher and higher performance (in terms of error on unseen data), proposing very large architectures like, for example, transformers~\cite{liu2021swin}. From big architectures come big responsibilities: learning strategies to avoid over-fitting urgently need to be developed.\\
The most straightforward approach would be to provide more data: deep learning methods are notoriously data hungry. Since they typically optimize some objective function through gradient descent, having more data in the training set helps the optimization process in selecting the most appropriate set of features (to oversimplify, the most recurrent ones). This yields high performance on unseen data. Such an approach has the big drawbacks of requiring enormous computational power for training and, most importantly, large annotated datasets. While tackling the first drawback is an active research topic~\cite{frankle2018the, bragagnolo2022update}, the second is broadly addressed with approaches like transfer learning~\cite{zhuang2020comprehensive} or self-supervised learning~\cite{ravanelli2020multi}.\\
\begin{figure}[t]
\includegraphics[width=\linewidth]{figures/teaser-2.pdf}
\caption{The double descent phenomenon (dashed line): is it possible to constrain the learning problem to a minimum such that the loss, in over-parametrized regimes, remains close to $\mathcal{L}^{opt}$ (continuous line)?}
\label{fig:DD_scheme}
\end{figure}
In more realistic cases, large datasets are typically unavailable, and in those cases approaches working with small data, in the context of \emph{frugal AI}, need to be employed. This poses research questions on how to enlarge the available datasets or how to transfer knowledge from similar tasks; however, it also poses questions on how to optimally dimension the deep learning model to be trained. Contrary to what is expected from the bias-variance trade-off, the phenomenon of \emph{double descent} can be observed in a very over-parameterized network: given some optimal set of parameters for the model $\boldsymbol{w}^{opt}$ with loss value $\mathcal{L}^{opt}$, adding more parameters worsens the performance until a local maximum $\mathcal{L}^{*}$ is reached, beyond which, adding even more parameters, the trend goes back to decreasing. This phenomenon, named \emph{double descent}~\cite{Belkin_2019}, is displayed in Fig.~\ref{fig:DD_scheme}, and is consistently reported in the literature~\cite{spigler2019jamming, Geiger_2019}. Double descent poses the serious problem of finding the best set of parameters, in order not to fall into an over-parametrized (or under-parametrized) regime. There are two possible approaches to tackle this: finding $\boldsymbol{w}^{opt}$, which requires a lot of computation, or extremely over-sizing the model. Unfortunately, neither road is compatible with a frugal setup: is there a solution to this problem?\\
In this work, we show that the double descent phenomenon is potentially avoidable. A sufficiently large regularization on the model's parameters drives deep models into a configuration where the set of parameters in excess $\boldsymbol{w}^{exc}$ produces essentially no perturbation on the output of the model.
Nevertheless, as opposed to Nakkiran~et~al.~\cite{nakkiran2021optimal}, who showed in regression tasks that such a regularization could help in dodging double descent, we observe that, in classification tasks, this regularization is insufficient in complex scenarios: an ingredient is still missing, although we are on the right path.
\section{Double descent and its implications}
\label{sec:Related work}
\textbf{Double descent in machine learning models.} The double descent phenomenon has been highlighted in various machine learning models, like decision trees, random features~\cite{Meng-Random-Feature-Model}, linear regression~\cite{muthukumar2020harmless} and deep neural networks~\cite{yilmaz2022regularization}. Based on the calculation of the precise limit of the excess risk under the high dimensional framework where the training sample size, the dimension of data, and the dimension of random features tend to infinity proportionally, Meng~et~al.~\cite{Meng-Random-Feature-Model} demonstrate that the risk curves of double random feature models can exhibit double and even multiple descents. The double descent risk curve was proposed to qualitatively describe the out-of-sample prediction accuracy of variably-parameterized machine learning models. Muthukumar~et~al.~\cite{muthukumar2020harmless} provide a precise mathematical analysis for the shape of this curve in two simple data models with the least squares/least norm predictor. By defining the effective model complexity, Nakkiran~et~al.~\cite{nakkiran2021deep} showed that the double descent phenomenon is not limited to varying the model size, but it is also observed as a function of training time or epochs, and also identified certain regimes where increasing the number of training samples hurts test performance.\\
\textbf{Double descent in regression tasks.} It has been recently shown that, for certain linear regression models with isotropic data distribution, optimally-tuned $\ell_2$ regularization can achieve monotonic test performance as either the sample size or the model size is grown. Nakkiran~et~al.~\cite{nakkiran2021optimal} demonstrated it analytically and established that optimally-tuned $\ell_2$ regularization can mitigate double descent for general models, including neural networks like Convolutional Neural Networks. Endorsing such a result, Yilmaz~et~al.~\cite{yilmaz2022regularization} indicated that regularization-wise double descent can be explained as a superposition of bias-variance trade-offs on different features of the data (for a linear model) or parts of the neural network and that double descent can be eliminated by scaling the regularization strengths accordingly.\\
\textbf{Double descent for classification tasks.} Although much progress has been made for regression models, in classification tasks the problem of avoiding, or formally characterizing, the double descent phenomenon is much harder to tackle. The test error of standard deep networks, like the ResNet architecture, trained on standard image classification datasets, consistently follows a double descent curve
both when there is label noise (CIFAR-10) and without any label noise (CIFAR-100)~\cite{yilmaz2022regularization}.
Double descent of pruned models with respect to the number of original model parameters has been studied by Chang~et~al., revealing a double descent behavior also in model pruning~\cite{Chang_Overparameterization}. Model-wise, the double descent phenomenon has been studied extensively under the lens of over-parametrization: a recent work also confirmed that sparsification via network pruning can cause double descent in the presence of noisy labels~\cite{SparseDoubleDescent}. He~et~al.~\cite{SparseDoubleDescent} proposed a novel learning distance interpretation, observing a correlation between the model's configurations before and after training and the sparse double descent curve, and emphasizing the flatness of the minimum reached after optimization. Our work differs from this study by highlighting some cases in which the double descent phenomenon is not evident. We show, in particular, that by imposing some constraints on the learning process, we can avoid the double descent. Our experimental setup follows He~et~al.'s.
\section{Dodging the double descent}
\label{sec:method}
\begin{figure*}[t]
\centering
\begin{subfigure}{.685\columnwidth}
\includegraphics[width=\columnwidth]{figures/SDD_MNIST.pdf}
\end{subfigure}\hfill
\begin{subfigure}{.685\columnwidth}
\includegraphics[width=\columnwidth]{figures/SDD_CIFAR_10.pdf}
\end{subfigure}
\begin{subfigure}{.685\columnwidth}
\includegraphics[width=\columnwidth]{figures/SDD_CIFAR_100.pdf}
\end{subfigure}
\caption{Test accuracy in the function of sparsity with different amounts of symmetric noise $\varepsilon$.\\Dashed lines correspond to vanilla and solids lines to $\ell_2$ regularization.\\
\textbf{Left:} LeNet-300-100 on MNIST.
\textbf{Middle:} ResNet-18 on CIFAR-10.
\textbf{Right:} ResNet-18 on CIFAR-100.}
\label{fig:MNIST}
\end{figure*}
\begin{algorithm}[t]
\caption{Sketching performance in over-parametrized setups: prune, rewind, train, repeat.}
\label{Algo}
\begin{algorithmic}[1]
\Procedure{Sketch ($\boldsymbol{w}^{init}$, $\Xi$, $\lambda$, $T^{iter}$,$T^{end}$)}{}
\State $\boldsymbol{w} \gets$ Train($\boldsymbol{w}^{init}$, $\Xi$, $\lambda$)\label{line:dense}
\While{Sparsity($\boldsymbol{w}, \boldsymbol{w}^{init}$) $< T^{end}$}\label{line:endcond}
\State $\boldsymbol{w} \gets$ Prune($\boldsymbol{w}$, $T^{iter}$) \label{line:prune}
\State $\boldsymbol{w} \gets$ Rewind($\boldsymbol{w}$, $\boldsymbol{w}^{init}$)\label{line:rewind}
\State $\boldsymbol{w} \gets$ Train($\boldsymbol{w}$,$\Xi$, $\lambda$)\label{line:wd}
\EndWhile
\EndProcedure
\end{algorithmic}
\end{algorithm}
\textbf{A regularization-based approach.} In the previous section we have presented the main contributions achieved in the literature around the double descent phenomenon. It is a known result that, given an over-parametrized model, without special constraints on the training, this phenomenon can be consistently observed. However, it is also known that, for an optimally parametrized model, double descent should not occur. Let us say the target, optimal output for the model is $\boldsymbol{y}^{opt}$. Hence, there is some subset $\boldsymbol{w}^{exc}\subseteq \boldsymbol{w}$ of parameters belonging to the model which are in excess, namely the ones contributing to the double descent phenomenon. Since these are not essential in the learning/inference steps, they can be considered as \emph{noisy parameters}, which deteriorate the performance of the whole model and make the whole learning problem more difficult. They generate a perturbation in the output of the model, which we can quantify as
\begin{equation}
\boldsymbol{y}^{exc} = \sum_{w_i\in \boldsymbol{w}^{exc}} \text{forward}\left[\phi(\boldsymbol{x}_i\cdot w_i)\right],
\end{equation}
where $\boldsymbol{x}_i$ is the input(s) processed by $w_i$, $\phi(\cdot)$ is some non-linear activation for which $\phi(z)\approx z$ when $z\rightarrow 0$, and $\text{forward}(\cdot)$ simply forward-propagates the signal to the output of the neural network. As the output of the model is given by $\boldsymbol{y}=\boldsymbol{y}^{opt}+\boldsymbol{y}^{exc}$, in the optimal case we would require that $\boldsymbol{y}^{exc}=\boldsymbol{0}$. To satisfy such a condition, we have two possible scenarios:
\begin{itemize}
\item $\exists w_i\neq 0 \in \boldsymbol{w}^{exc}$. In this case, the optimizer finds a loss minimum such that the algebraic sum of the noisy contributions is zero. When a subset of these is removed, it is possible that $\boldsymbol{y}^{exc}\neq \boldsymbol{0}$, which results in a performance loss and, hence, the double descent phenomenon is observed.
\item $w_i=0, \forall w_i\in \boldsymbol{w}^{exc}$. In this case, there is no contribution of the noisy terms and we are in the optimal scenario, where the parameters are de-facto removed from the model.
\end{itemize}
Let us focus on the second case. For numerical reasons, this scenario is unrealistic during continuous optimization; hence we can achieve a similar outcome by satisfying two conditions:
\begin{enumerate}
\item $w_i x_i \approx 0, \forall w_i \in \boldsymbol{w}^{exc}$;
\item $\|\text{forward}\left[\phi(\boldsymbol{x}_i\cdot w_i)\right]\|_1 \leq \|\phi(\boldsymbol{x}_i\cdot w_i)\|_1$,\\ with $\|\phi(\boldsymbol{x}_i\cdot w_i)\|_1\approx 0$.
\end{enumerate}
We can achieve these conditions with a sufficiently large regularization on the parameters $\boldsymbol{w}$. The first condition is achievable by employing any common weight penalty, as we assume that, at local minima of the loss where $\frac{\partial L}{\partial w_i} = 0$, some weight penalty $C$ pushes the magnitude of $w_i$ towards zero. The second condition, on the contrary, requires more careful consideration. Indeed, due to the composability of the functions, we need to ensure that the activation function in every layer does not amplify the signal (true in the most common scenarios) and that all the parameters have the lowest magnitude possible. For this reason, ideally, the regularization to employ should be $\ell_\infty$; on the other hand, however, we are also required to enforce some sparsity. Towards this end, recent works in the field suggest that $\ell_2$ regularization is a fair compromise~\cite{han2015learning}. Hence, we need a sufficiently large $\ell_2$ regularization.\\
\textbf{How can we observe we have avoided double descent?} We present an algorithm to sketch the (possible) double descent phenomenon in Alg.~\ref{Algo}. After training the model for the first time on the learning task $\Xi$, possibly with $\ell_2$ regularization weighted by $\lambda$ (line~\ref{line:dense}), a magnitude pruning stage is set up (line~\ref{line:prune}). Neural network pruning, whose goal is to reduce a large network to a smaller one without altering accuracy, removes irrelevant weights, filters, or other structures from neural networks. An unstructured pruning method called magnitude-based pruning, popularized by~\cite{han2015learning}, adopts a process in which weights below some specific threshold $T$ are pruned (line~\ref{line:prune}).
We highlight that more complex pruning approaches exist, but magnitude-based pruning remains competitive despite its very low complexity~\cite{Gale_Magnitude}. Here, the hyper-parameter $T^{iter}$ sets the relative pruning percentage, or in other words, how many parameters will be removed at every pruning stage.
Once pruned, the accuracy of the model typically decreases. To recover performance, the lottery ticket rewinding technique, proposed by Frankle \& Carbin~\cite{LTR}, is used. It consists of rewinding the subset of parameters that survive the pruning stage to their initialization values (line~\ref{line:rewind}) and then retraining the model (line~\ref{line:wd}). This approach allows us to state whether a lowly-parametrized model, trained from initialization, can, in the best case, learn a target task. We end our sketching once we reach a sparsity higher than or equal to $T^{end}$ (line~\ref{line:endcond}).\\
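For concreteness, the following is a minimal PyTorch-style sketch of this prune--rewind--retrain loop. It is not the authors' implementation: \texttt{train()} is a placeholder for a standard training loop on $\Xi$ with weight decay \texttt{lam} that keeps masked weights at zero, and, for simplicity, all parameters are pruned uniformly by global magnitude.
\begin{verbatim}
import copy
import torch

def sketch(model, train, t_iter=0.20, t_end=0.999, lam=1e-4):
    w_init = copy.deepcopy(model.state_dict())   # weights at initialization
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    train(model, weight_decay=lam)               # initial dense training
    sparsity = 0.0
    while sparsity < t_end:
        # magnitude pruning: remove a t_iter fraction of the surviving weights
        scores = torch.cat([(p.abs() * masks[n]).flatten()
                            for n, p in model.named_parameters()])
        alive = scores[scores > 0]
        k = max(int(t_iter * alive.numel()), 1)
        threshold = torch.kthvalue(alive, k).values
        for n, p in model.named_parameters():
            masks[n] = masks[n] * (p.abs() > threshold).float()
        # lottery-ticket rewinding: reset surviving weights to initial values
        model.load_state_dict(w_init)
        with torch.no_grad():
            for n, p in model.named_parameters():
                p.mul_(masks[n])
        train(model, weight_decay=lam, masks=masks)  # retrain the sparse net
        total = sum(m.numel() for m in masks.values())
        sparsity = 1.0 - sum(m.sum().item() for m in masks.values()) / total
\end{verbatim}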
\textbf{Experimental setup.} For the experimental setup, we follow the same approach as He~et~al.~\cite{SparseDoubleDescent}. The first model we train is a LeNet-300-100 on MNIST, for 200 epochs, optimized with SGD with a fixed learning rate of 0.1. The second model is a ResNet-18, trained on CIFAR-10 \& CIFAR-100, for 160 epochs, optimized with SGD, having momentum 0.9 and a learning rate of 0.1 decayed by a factor 0.1 at milestones 80 and 120. For each dataset, a percentage $\varepsilon$ of symmetric noisy labels is introduced: the labels of a given proportion of training samples are flipped to one of the other class labels, selected with equal probability~\cite{Noisy_labels}. In our experiments, we test with $\varepsilon \in \{10\%, 20\%, 50\%\}$.
When $\ell_2$ regularization is employed, we set $\lambda$ to \num{1e-4}. In all experiments, we use a batch size of 128 samples, set $T^{iter}$ to 20\%, and set $T^{end}$ to 99.9\%.\\
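As an illustration, a minimal sketch of the symmetric label-noise injection described above (an assumed implementation with a hypothetical helper name, not the authors' code) could look as follows.
\begin{verbatim}
import numpy as np

def add_symmetric_noise(labels, eps, num_classes, seed=0):
    # flip a fraction eps of the labels to a different class chosen uniformly
    rng = np.random.default_rng(seed)
    noisy = np.array(labels).copy()
    flip = rng.random(len(noisy)) < eps       # samples receiving a noisy label
    for i in np.flatnonzero(flip):
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)        # flip to one of the other classes
    return noisy
\end{verbatim}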
\textbf{Results.} Fig.~\ref{fig:MNIST} displays our results. As in He~et~al.'s work~\cite{SparseDoubleDescent}, looking at LeNet-300-100 with $\varepsilon=10\%$, the double descent consists of 4 phases. First, at low sparsities, the network is over-parameterized, thus the pruned network can still reach an accuracy similar to the dense model. The second phase lies near the interpolation threshold, where training accuracy starts to drop, and test accuracy first decreases and then increases as sparsity grows. The third phase is located at high sparsities, where test accuracy is rising. The final phase happens when both training and test accuracy drop significantly. However, while we can observe the double descent in the test accuracy without $\ell_2$ regularization, the phenomenon fades when the regularization is added. Indeed, the existence of the second phase is questioned: the test accuracy, which is expected to decrease in this phase, instead reaches a plateau before rising when regularization is added. In this simple setup, the double descent is dodged.
However, Fig.~\ref{fig:MNIST} also portrays the results of the ResNet-18 experiments on CIFAR-10 and CIFAR-100, with different percentages of noisy labels. Whether the regularization is used or not, on average, and for every value of $\varepsilon$,
the double descent phenomenon occurs in both cases. These experiments, which can be considered more complex than the previous one, highlight some limits of standard regularization in avoiding the double descent phenomenon, and suggest that a specific regularizer should be designed.\\
\textbf{Ablation on $\boldsymbol{\lambda}$.}
In the previous experiments in Fig.~\ref{fig:MNIST}, we have proposed solutions with an $\ell_2$-regularization hyper-parameter which provides a good trade-off between performance in terms of validation error and avoidance of the double descent.
However, ablating the regularization parameter is of interest, to check whether the double descent can be avoided with larger values.
Hence, we propose in Fig.~\ref{fig:ablation_test} an ablation study on $\lambda$ for CIFAR-10 with $\varepsilon=10\%$.
We observe that, even for extremely high $\lambda$, the double descent is not dodged, and the overall performance of the model drops: the imposed regularization becomes too high, and the training set is not entirely learned anymore. This indicates that, while for regression tasks $\ell_2$ regularization is the key to dodging the double descent, in classification tasks the learning scenario can be very different, and some extra ingredient is still missing.
\begin{figure}
\centering
\begin{subfigure}{1\columnwidth}
\includegraphics[width=1.0\linewidth]{figures/ICASSP_4_values_Train_Loss.pdf}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\includegraphics[width=1.0\linewidth]{figures/ICASSP_4_values_Test_Loss.pdf}
\end{subfigure}
\caption{Train and test loss at varying $\lambda$ for CIFAR-10, with $\varepsilon=10\%$.}
\label{fig:ablation_test}
\end{figure}
\section{Is double descent avoidable?}
\label{sec:Conclusion}
The problem of finding the best-fitting set of parameters for deep neural networks, which has evident implications for both theoretical and applicative cases, is currently a subject of great interest in the research community. In particular, the phenomenon of double descent prioritizes the research around finding the optimal size for deep neural networks: if a model is not extremely over-parametrized it may fall into a sub-optimal local minimum, harming the generalization performance.\\
In this paper, we have taken some first steps, in a traditional classification setup, towards avoiding the double descent. If we can reach a local minimum where, regardless of the model's over-parametrization, its performance remains consistently high as the number of parameters varies, there would be no strict need to find the optimal subset of parameters for the trained model. Standard regularization approaches like $\ell_2$, which have proven their effectiveness in regression tasks, showed some limits in more complex scenarios, while effectively dodging the double descent in simpler setups. This result gives us hope: a custom regularization towards the avoidance of double descent can be designed, and will be the subject of future research. |
{
"arxiv_id": "2302.13171",
"language": "en",
"timestamp": "2023-02-28T02:11:47",
"url": "https://arxiv.org/abs/2302.13171",
"yymm": "2302"
} | \section{introduction}
The notions of $\delta$-strong compactness and almost strong compactness (see Definition \ref{dscc})
were introduced by Bagaria and Magidor in \cite{BM2014,BMO2014}.
They are weak versions of strong compactness,
and characterize many natural compactness properties of interest in different areas.
See \cite{BM2014,BMO2014,U2020} for details.
Like strong compactness, $\delta$-strong compactness can be characterized
in terms of compactness properties of infinitary languages, elementary embeddings, ultrafilters, etc.
In addition, many interesting properties following from strong compactness
are a consequence of $\delta$-strong compactness.
For example, if $\kappa$ is the least $\delta$-strongly compact cardinal for some uncountable
cardinal $\delta$, then the \emph{Singular Cardinal Hypothesis} ($\sch$) holds above $\kappa$;
and for every regular cardinal $\lambda \geq \kappa$,
stationary reflection holds for every stationary subset of $\mathrm{S}^{\lambda}_{<\delta}=\{\alpha<\lambda \ | \ \cof(\alpha)<\delta \}$.
In addition, Goldberg in \cite[Corollary 2.9]{Gol21} proved that Woodin's $\mathrm{HOD}$ Dichotomy\footnote{
Suppose $\kappa$ is $\omega_1$-strongly compact.
Then either all sufficiently large regular cardinals are measurable in $\hod$
or every singular cardinal
$\lambda$ greater than $\kappa$ is singular in $\hod$ and $(\lambda^+)^{\hod}=\lambda^+$.}
is a consequence of $\omega_1$-strong compactness.
$\delta$-strong compactness, almost strong compactness, and strong compactness are closely related compactness principles.
Magidor in \cite{Mag76} proved that
consistently the least strongly compact cardinal, say $\kappa$, is the least measurable cardinal. In this case,
$\kappa$ is also the least $\delta$-strongly compact cardinal for every uncountable cardinal $\delta<\kappa$ and the least almost strongly compact cardinal.
In addition, Goldberg in \cite[Proposition 8.3.7]{Gol20} proved that under the assumption of the \emph{Ultrapower Axiom},
which is expected to hold in all canonical inner models, for any uncountable cardinal $\delta$,
the least $\delta$-strongly compact cardinal is strongly compact.
On the other hand, Bagaria and Magidor in \cite{BMO2014}
turned a supercompact cardinal $\kappa$ into
the least $\delta$-strongly compact cardinal by using a suitable Radin forcing of length some measurable cardinal $\delta<\kappa$.
Thus in the generic extension, $\kappa$ is singular, which implies it is not strongly compact.
This separates $\delta$-strong compactness from strong compactness.
The more subtle case is between almost strong compactness and strong compactness.
Obviously if $\kappa$ is a strongly compact cardinal, the successor of a strongly compact cardinal,
or a limit of almost strongly compact cardinals, then it is almost strongly compact.
For the other direction,
Menas essentially proved that
if an almost strongly compact cardinal is measurable, then it is strongly compact (see \cite[Theorem 22.19]{Kana}).
Recently, Goldberg in \cite[Theorem 5.7]{G2020} proved that
assuming the $\sch$ holds, every almost strongly compact cardinal $\kappa$ of uncountable cofinality is trivial, i.e.,
one of the three cases mentioned above.
In particular, noting that $\sch$ holds above the least almost strongly compact cardinal, Goldberg \cite[Theorem 5.8]{G2020} proved that for every ordinal $\alpha>0$, if the $(\alpha + 1)$-st almost strongly compact limit cardinal has uncountable cofinality, then it is strongly compact.
But the following question posed by
Boney and Brooke-Taylor remained open:
\begin{question}[\cite{G2020}\label{q2}]
Is the least almost strongly compact cardinal necessarily strongly compact?
\end{question}
In Section \ref{sec4}, we will show that Theorem 5.7 and Theorem 5.8 of \cite{G2020} may no longer hold when the cofinality assumption is dropped.
The point is that Fodor's lemma does not hold for the least infinite cardinal $\omega$.
This answers Question \ref{q2} in the negative.
To achieve this,
we need a positive answer to the following question of Bagaria and Magidor:
\begin{question}[Bagaria, Magidor]\label{QBM}
Is there a class (possibly proper) $\mathcal{K}$ with $|\mathcal{K}| \geq 2$, and a $\delta_{\kappa}<\kappa$ for every $\kappa \in \mathcal{K}$, so that $\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal for every $\kappa \in \mathcal{K}$?
\end{question}
For example, if we can give an affirmative answer to the above question when $\mathcal{K}$ has order type $\omega$,
and $\delta_{\kappa}<\kappa$ is a measurable cardinal above $\max(\mathcal{K} \cap \kappa)$ for every $\kappa \in \mathcal{K}$,
then $\sup(\mathcal{K})$ may be the least almost strongly compact cardinal.
\iffalse
It should be possible to give a positive answer for Question \ref{QBM},
because in some sense, the large cardinal strength of a $\delta$-strongly compact cardinal $\kappa$ is revealed by
stationary reflection for every stationary subset of $\mathrm{S}^{\lambda}_{<\delta}$,
where $\lambda \geq \kappa$ is regular.
\fi
We give a positive answer for Question \ref{QBM} in Section \ref{sec4}. To achieve this,
we recall a new construction of Gitik in \cite[Theorem 3.1]{Git2020}.
He developed Kunen's basic idea of a construction of a model with a $\kappa$-saturated ideal over $\kappa$ (see \cite{Kun78}).
After some preparation, he added a $\delta$-ascent $\kappa$-Suslin tree at a supercompact cardinal $\kappa$.
This turned $\kappa$ into the least $\delta$-strongly compact cardinal.
Further analysis shows that $\kappa$ is not $\delta^+$-strongly compact,
which means that we may control the compactness of $\kappa$.
In addition, the forcing for adding a $\delta$-ascent $\kappa$-Suslin tree has nice closure properties, i.e., ${<}\kappa$-strategically closed and ${<}\delta$-directed closed.
So we may apply this method together with the well-known method of producing class many non-supercompact strongly compact cardinals,
to get hierarchies of $\delta$-strongly compact cardinals for different $\delta$ simultaneously.
Thus we may obtain an affirmative answer to Question \ref{QBM}.
\iffalse
\begin{question}
If there is a proper class of almost strongly compact cardinals, is there a proper class of strongly compact cardinals?
\end{question}
\fi
\subsection*{The structure of the paper}
In this paper, Section \ref{sec2} covers some technical preliminary information and basic definitions. In Section \ref{sec3},
we give some variants of Gitik's construction.
In Section \ref{sec4}, building on the results of the previous section,
we construct hierarchies of $\delta$-strongly compact cardinals for different $\delta$ simultaneously
and provide nontrivial examples of almost strongly compact cardinals, in particular answering Question \ref{q2} and Question \ref{QBM}.
\iffalse
Theorem \ref{thm1} deals with the case $sup(\mathcal{A}\cap \kappa)<\delta_{\kappa}<\kappa$ for any $\kappa \in \mathcal{K}$ and makes $\kappa$ the least $\delta_{\kappa}$-strongly compact cardinal. Theorem \ref{thm2} deals with the case and $\delta_{\kappa}<\kappa_0$ for every $\kappa \in \mathcal{K}$, which is weaker than Theorem \ref{thm2} in the sense that it can't promise that $\kappa$ is the least $\delta_{\kappa}$-strongly compact cardinal.
\iffalse
\begin{corollary}
Suppose $\mathcal{K}$ is the class of supercompact cardinals with no measurable limit points, and for any $\kappa \in \mathcal{K}$, $sup(\mathcal{K}\cap \kappa)<\delta_{\kappa}<\kappa$ is measurable. Then there is a forcing extension $V^{\mathbb{P}}$ in which there are no strongly comapct cardinals, and for any $\kappa \in \mathcal{K}$, $\kappa$ is the least $\delta_{\kappa}$-strongly compact cardinal.
\end{corollary}
\begin{corollary}
Suppose that $\kappa_1<...<\kappa_n$ are the first $n$ supercompact cardinals, $\lambda_i$ is the least $2$-Mahlo cardinal above $\kappa_i$ for $1 \leq i \leq n$, and $\delta_1<...<\delta_n<\kappa_1$ are $n$ measurable cardinals. Then there is a forcing extension $V^{\mathbb{P}}$, in which for any cardinal $\kappa_i$ with $1 \leq i \leq n$, $\kappa_i$ is exactly $\delta_i$-strongly compact. In addition, there are no strongly compact cardinals.
\end{corollary}
\fi
The point here is that if there are no measurable limit points of $\mathcal{K}$, $\mathcal{A}$ may be chosen to equal $\mathcal{K}$ and every strongly compact cardinal is in $\mathcal{K}$. Since every limit point of $\mathcal{K}$ is almost strongly compact, the corollary also gives a negative answer to a question of Boney-Unger and Brooke-Taylor
Magidor proved that the first $n$ strongly compact cardinals can be the first $n$ measurable cardinals for every fixed finite $n$, we also have the following similar theorem:
\fi
\section{preliminaries}\label{sec2}
\subsection{Large cardinals} \label{scc}
We assume the reader is familiar with the large cardinal notions of Mahloness, measurability, strongness, strong compactness, and supercompactness (see \cite{J2003} or \cite{Kana} for details).
We first review the definitions and basic properties of $\delta$-strongly compact cardinals and almost strongly compact cardinals.
\begin{definition}(\cite{BM2014,BMO2014})\label{dscc}
Suppose $\kappa \geq \delta$ are two uncountable cardinals,
\begin{enumerate}
\item For any $\theta \geq \kappa$, $\kappa$ is \emph{$(\delta,\theta)$-strongly compact} if there is an elementary embedding $j:V \rightarrow M$ with $M$ transitive such that $\mathrm{crit}(j) \geq \delta$, and there is a $D \in M$ such that $j''\theta \subseteq D$ and $M \models |D|<j(\kappa)$.
\item $\kappa$ is \emph{$\delta$-strongly compact} if $\kappa$ is $(\delta,\theta)$-strongly compact for any $\theta \geq \kappa$;
$\kappa$ is \emph{exactly $\delta$-strongly compact} if $\kappa$ is $\delta$-strongly compact but not $\delta^+$-strongly compact.
\item $\kappa$ is \emph{almost strongly compact} if $\kappa$ is $\delta$-strongly compact for every uncountable cardinal $\delta<\kappa$.
\end{enumerate}
\end{definition}
By the definition above, it is easy to see that $\kappa$ is $\kappa$-strongly compact if and only if $\kappa$ is strongly compact, and each of the following statements implies the next: $\kappa$ is strongly compact; $\kappa$ is almost strongly compact; $\kappa$ is $\delta'$-strongly compact (where $\delta<\delta'<\kappa$); $\kappa$ is $\delta$-strongly compact.
Usuba characterized $\delta$-strong compactness in terms of $\delta$-complete uniform ultrafilters, which generalized Ketonen's result.
\begin{theorem}[\cite{U2020}]\label{tKU}
Suppose $\kappa \geq \delta$ are two uncountable cardinals. Then $\kappa$
is $\delta$-strongly compact if and only if for every regular $\lambda \geq \kappa$, there is a
$\delta$-complete uniform ultrafilter over $\lambda$, i.e.,
there is a $\delta$-complete ultrafilter $U$ over $\lambda$ such that every $A \in U$ has cardinality $\lambda$.
\end{theorem}
\iffalse
The following proposition shows that when an elementary embedding of a model of set theory can be extended to an embedding of some generic extension of that model.
\begin{proposition}
Let $k : M \rightarrow N$ be an elementary embedding between
transitive models of $ZFC$. Let $\mathbb{P} \in M$ be a notion of forcing, let $G$ be $\mathbb{P}$-generic
over $M$ and let $H$ be $k(\mathbb{P})$-generic over $N$. The following are equivalent:
\begin{enumerate}
\item $\forall p \in G \ k(p) \in H$.
\item There exists an elementary embedding $k^+ : M[G] \rightarrow N[H]$, such that
$k^+(G) = H$ and $k^+ \upharpoonright M = k$.
\end{enumerate}
\end{proposition}
\fi
\iffalse
The following lemma follows from Mena's argument.
\begin{lemma}\label{lem13}
For any $\eta$, if $\kappa>\eta$ is the first measurable limit of strong cardinals above $\eta$, then $\kappa$ is not strong.
\end{lemma}
\begin{proof}
If $\kappa$ is strong, then there is an elementary embedding $j:V \rightarrow M$ with $\mathrm{crit}(j)=\kappa$ and $V_{\kappa+3} \subseteq M$. Since
\[
V \vDash ``\kappa \text{ is the first measurable limit of strong cardinals above } \eta",
\]
by elementarity, we have
\[
M \vDash ``j(\kappa) \text{ is the first measurable limit of strong cardinals above } \eta".
\]
But in $M$, $\kappa$ is measurable since $V_{\kappa+3} \subseteq M$, and any $\alpha<\kappa$ which is strong in $V$ is also strong in $M$, so we easily know that $\kappa$ is a measurable limit of strong cardinals above $\eta$ in $M$,
contrary to the fact that $j(\kappa)$ is the first measurable limit of strong cardinals above $\eta$ in $M$.
\end{proof}
\fi
Next, we list two useful lemmas (see \cite[Lemma 2.1, Lemma 2.4]{AC2001} for details).
\begin{lemma}\label{lem10}
Let $\kappa$ be $2^\kappa$-supercompact and strong.
Assume $j : V \rightarrow M$
is a $2^{\kappa}$-supercompact embedding of $\kappa$.
Then $\kappa$ is a strong cardinal and a limit of strong cardinals in $M$.
\end{lemma}
\begin{lemma}\label{lem11}
Suppose $\kappa$ is $\lambda$-supercompact for some strong limit cardinal $\lambda$ of cofinality greater than $\kappa$.
Let $j:V \rightarrow M$ be a $\lambda$-supercompact embedding such that $M \vDash `` \kappa$ is not $\lambda$-supercompact$"$.
Then in $M$, there is no strong cardinal in $(\kappa,\lambda]$.
\end{lemma}
\iffalse
The idea of the above lemma is that in $M$, $\kappa$ is ${<}\lambda$-supercompact, so the existence of a strong cardinal in $(\kappa,\lambda)$ implies that $\kappa$ is supercompact, contrary to the fact that $\kappa$ is not $\lambda$-supercompact.
\fi
\subsection{Forcing and large cardinals} In this subsection,
we recall some well-known basic techniques
for lifting elementary embeddings. Readers can refer elsewhere for details.
For a partial order $\mathbb{P}$ and an ordinal $\kappa$,
we say $\mathbb{P}$ is $\kappa$-strategically closed if and only if, in the two-person game
in which the players construct a decreasing sequence $\langle p_{\alpha} \ | \ \alpha<\kappa \rangle$ of conditions in $\mathbb{P}$,
with Player Odd playing at odd stages and Player Even playing at even and limit stages (choosing the trivial condition at stage $0$),
Player Even has a strategy ensuring that the game can always be continued.
We say $\mathbb{P}$ is ${<}\kappa$-strategically closed if and only if
$\mathbb{P}$ is $\alpha$-strategically closed for every $\alpha<\kappa$.
We say $\mathbb{P}$ is ${<}\kappa$-directed closed if and only if any directed subset
$D \subseteq \mathbb{P}$ of size less than $\kappa$ has a lower bound in $\mathbb{P}$.
Here $D$ is directed if every two elements of $D$ have a lower bound in
$\mathbb{P}$. We say $\mathbb{P}$ is ${<}\kappa$-closed if and only if any decreasing subset
$D \subseteq \mathbb{P}$ of size less than $\kappa$ has a lower bound in $\mathbb{P}$.
We use $\add(\kappa,1)=\{f \ | \ f \text{ is a partial function from } \kappa \text{ to } 2,\ |f|<\kappa \}$, ordered by reverse inclusion, for the Cohen forcing that adds a subset of $\kappa$.
\iffalse
\begin{lemma}\label{clai3}
Suppose $\mathbb{P}$ and $\mathbb{Q}$ are two forcing notions, and $\mathbb{Q}$ is $(2^{| \mathbb{P} |})^+$-distributive. If $G$ is $\mathbb{P}$-generic over $V$ and $H$ is $\mathbb{Q}$-generic over $V$, then $G \times H$ is $\mathbb{P}\times \mathbb{Q}$-generic over $V$.
\end{lemma}
\fi
\iffalse
\begin{theorem}[\cite{BMO2014}]\label{thm1.3}
Suppose $\kappa$ is $\lambda-$strongly compact for some uncountable cardinal $\lambda \leq \kappa$. Then for every regular $\delta \geq \lambda$, every stationary subset of $\delta$ consisting of ordinals of countable cofinality reflects, i.e., for any stationary subset $S \subseteq \delta$ consisting of ordinals of countable cofinality, there is an $\alpha<\delta$ with uncountable cofinality such that $S \cap \alpha$ is a stationary subset of $\alpha$.
The lemma is useful when we combine Magidor's forcing and Gitik's together to prove Theorem \ref{thm17}.
\fi
The following is Easton's lemma; see \cite[Lemma 15.19]{J2003} for details.
\begin{lemma}\label{clai3}
Suppose that $G \times H$ is $V$-generic for $\mathbb{P} \times \mathbb{Q}$,
where $\mathbb{P}$ is ${<}\kappa$-closed and $\mathbb{Q}$ satisfies the $\kappa$-c.c..
Then $\mathbb{P}$ is ${<}\kappa$-distributive in $V[H]$.
In other words, $\mathrm{Ord}^{<\kappa} \cap V[G][H] \subseteq V[H]$.
\end{lemma}
\begin{theorem}[\cite{Lav78}]
Suppose $\kappa$ is supercompact. Then there is a forcing $\mathbb{P}$ such that in $V^{\mathbb{P}}$, $\kappa$ is indestructible by any ${<}\kappa$-directed closed forcing. In other words, $\kappa$ is supercompact and remains supercompact after any ${<}\kappa$-directed closed forcing.
\end{theorem}
\subsection{Gitik's construction} \label{sgc}
In this subsection, let us recall some definitions and results in \cite{Git2020}.
\iffalse
\begin{definition}
A partial order $T$ is a tree if for every $t\in T$, $\{s \in T \ | \ s<_T t \}$ is well-ordered by the relation $<_T$.
\end{definition}
In this paper, every tree $T$ is a subset of $2^{<\kappa}$ for some ordinal $\kappa$, and the order $<_T$ is the inclusion order.\fi
Suppose that $T$ is a subtree of ${}^{<\theta}2$. This means that for every $t\in T$ and $\alpha<\dom(t)$, $t\restriction\alpha\in T$,
and for all $s,t\in T$, $s<_Tt$ iff $s\subseteq t$. For any $t\in T$, we denote by $\hei(t)$ the domain of $t$ and by $\Lev_{\alpha}(T)$ the $\alpha^{\text{th}}$ level of $T$, i.e., $\Lev_{\alpha}(T)=\{ t\in T \ | \ \hei(t)=\alpha \}$.
Let $\hei(T)$ denote the height of $T$.
\begin{definition}
Suppose $T \subseteq {}^{<\theta}2$ is a tree of height $\theta$, and $\delta<\theta$ is a regular cardinal.
\begin{enumerate}
\iffalse \item $T$ is a $\theta$-tree if $\hei(t)=\theta$ and $|\Lev_{\alpha}(T)|<\theta$ for every $\alpha<\theta$.\fi
\item $T$ is a $\theta$-\emph{Suslin tree} if every maximal antichain of $T$ has size less than $\theta$.
\item $T$ is \emph{normal} if for any $t \in T$ and any $\hei(t)<\alpha<\theta$, there is an $s >_T t$ with $\hei(s)=\alpha$.
\item $T$ is \emph{homogeneous} if $T_{s_0}=T_{s_1}$ for every $s_0,s_1 \in T$ in the same level, where $T_s=\{t \upharpoonright (\theta \setminus |s|) \ | \ t \in T, t \geq_T s \}$ for every $s \in T$.
\item $T$ has a $\delta$-\emph{ascent path} if there exists a function sequence $\vec{f}=\langle f_{\alpha} \ | \ \alpha<\theta, f_{\alpha}:\delta \rightarrow \mathrm{Lev}_{\alpha}(T) \rangle$, such that for every $\alpha<\beta<\theta$, the set $\{\upsilon<\delta \ | \ f_{\alpha}(\upsilon)<_T f_{\beta}(\upsilon)\}$ is co-bounded in $\delta$.
\end{enumerate}
\end{definition}
\begin{definition}[\cite{Git2020}]
Suppose $\kappa$ is a 2-Mahlo cardinal, and $\delta<\kappa$ is a regular cardinal.
Define the forcing notion $Q_{\kappa,\delta}$ as follows:
$\langle T,\vec{f} \rangle \in Q_{\kappa,\delta}$ if
\begin{enumerate}
\item $T \subseteq {^{<\kappa}}2$ is a normal homogeneous tree of a successor height.
\item $\vec{f}$ is a $\delta$-ascent path through $T$.
\end{enumerate}
The order on $Q_{\kappa,\delta}$ is defined by taking end extensions.
For any $Q_{\kappa,\delta}$-generic filter $G$, we denote $\vec{f}^G=\bigcup_{\langle t, \vec{f'} \rangle \in G}\vec{f'}$ by $\langle f^G_{\alpha} \mid \alpha<\kappa \rangle$, and let $T(G)$ be the $\kappa$-tree added by $G$.
\end{definition}
\begin{definition}[\cite{Git2020}]
Suppose $G$ is a $Q_{\kappa,\delta}$-generic filter over $V$.
In $V[G]$, define the forcing $F_{\kappa,\delta}$ associated with $G$,
where $g \in F_{\kappa,\delta}$ if there is a
$\xi_g<\delta$ such that
$g=f^G_{\alpha}\upharpoonright (\delta \setminus \xi_g)$ for some $\alpha<\kappa$. Set $g_0 \leq_{F_{\kappa,\delta}} g_1$ if and only if $\xi_{g_0}=\xi_{g_1}$ and for every
$\upsilon$ with $\xi_{g_0} \leq \upsilon<\delta$,
$g_0(\upsilon) \geq_{T(G)} g_1(\upsilon)$.
We also view $F_{\kappa,\delta}$ as a set of pairs $g=\langle f^G_{\alpha},\xi_g \rangle$.
\end{definition}
\iffalse
For convenience, when the context is clear, we use $Q_{\kappa},F_{\kappa}$ to denote $Q_{\kappa,\delta},F_{\kappa,\delta}$, respectively.
\fi
\begin{fact}[\cite{Git2020}]\label{factsus}
Suppose $\kappa$ is a $2$-Mahlo cardinal, and $\delta<\kappa$ is a regular cardinal.
Let $G$ be $Q_{\kappa,\delta}$-generic over $V$. Then the following holds:
\begin{enumerate}
\item\label{ite00} $Q_{\kappa,\delta}$ is ${<}\kappa$-strategically closed and ${<}\delta$-directed closed.
\item \label{ite1} $T(G)$ is a $\delta$-ascent $\kappa$-Suslin tree.
\item\label{ite01} In $V[G]$, $\langle F_{\kappa,\delta},\leq_{F_{\kappa,\delta}} \rangle$ satisfies the $\kappa$-c.c..
\item\label{it1} $\mathrm{Add}(\kappa,1)$ is forcing equivalent to the two-step iteration $Q_{\kappa,\delta}*\dot{F}_{\kappa,\delta}$.
\end{enumerate}
\end{fact}
\begin{proof}
The proofs of \eqref{ite00}, \eqref{ite1} and \eqref{ite01} may be found in \cite{Git2020}.
For the sake of completeness, we provide the proof of \eqref{it1}.
Since every ${<}\kappa$-closed poset of size $\kappa$ is forcing equivalent to $\mathrm{Add}(\kappa,1)$,
we only need to prove that $Q_{\kappa,\delta}*\dot{F}_{\kappa,\delta}$ has a ${<}\kappa$-closed dense subset of size $\kappa$.
Consider the poset $R_0$,
where $\langle t,\vec{f},g \rangle \in R_0$ iff $\langle t,\vec{f} \rangle \in Q_{\kappa,\delta}$,
and there is a $\xi_g<\delta$ such that $g=\langle f_{\mathrm{ht}(t)-1}, \xi_g \rangle$.
For every $\langle t^1,\vec{f}^1,g^1 \rangle, \langle t^2,\vec{f}^2,g^2 \rangle \in R_0$,
$\langle t^1,\vec{f}^1,g^1 \rangle \leq_{R_0} \langle t^2,\vec{f}^2,g^2 \rangle$
iff $\langle t^1,\vec{f}^1 \rangle \leq_{Q_{\kappa,\delta}} \langle t^2,\vec{f}^2 \rangle$,
$\xi_{g^1}=\xi_{g^2}$ and $g^1(\upsilon)>_{t^1} g^2(\upsilon)$ for every $\upsilon$ with $\xi_{g^1} \leq \upsilon<\delta$.
We may view $R_0$ as a subset of $Q_{\kappa,\delta}*\dot{F}_{\kappa,\delta}$.
For every $\langle t^1,\vec{f}^1, \dot{g} \rangle \in Q_{\kappa,\delta}*\dot{F}_{\kappa,\delta}$,
strengthen $\langle t^1,\vec{f}^1 \rangle$ to a condition $\langle t,\vec{f} \rangle$,
such that $\langle t,\vec{f} \rangle$ decides that $\dot{g}$ is $g=\langle f_{\alpha},\xi_g \rangle$ with $\alpha<\hei(t)$.
We may extend $\langle t,\vec{f} \rangle$ to a condition $\langle t',\vec{f}' \rangle$
so that $f'_{\mathrm{ht}(t')-1}(\upsilon) \geq_{t'} f_{\alpha}(\upsilon)$ for every $\upsilon$ with $\xi_{g} \leq \upsilon<\delta$.
Let $g'=\langle f'_{\mathrm{ht}(t')-1}, \xi_{g} \rangle$.
Then $\langle t',\vec{f}',g' \rangle$ extends $\langle t^1,\vec{f}^1, \dot{g} \rangle$.
Thus $R_0$ is dense in $Q_{\kappa,\delta}*\dot{F}_{\kappa,\delta}$.
Now we prove that $R_0$ is ${<}\kappa$-closed.
Take any limit ordinal $\gamma<\kappa$ and any decreasing sequence $\langle \langle t^{\alpha},\vec{f}^{\alpha},g^{\alpha} \rangle \ | \ \alpha<\gamma \rangle$ of conditions in $R_0$.
We have $\xi_{g^{\alpha}}=\xi_{g^{0}}$ for every $\alpha<\gamma$,
and for every $\alpha_0<\alpha_1<\gamma$, $g^{\alpha_1}(\upsilon)>_{t^{\alpha_1}} g^{\alpha_0}(\upsilon)$ for every $\upsilon$ with $\xi_{g^0} \leq \upsilon<\delta$.
Thus $t=\bigcup_{i<\gamma}t^{i}$ has a cofinal branch.
So we may extend $t$ by adding all cofinal branches of $t$ to the level $\hei(t)$
and get a tree $t^{\gamma}$.
Let $\vec{f}=\bigcup_{\alpha<\gamma}\vec{f}^{\alpha}:=\langle f_{\alpha} \ | \ \alpha<\mathrm{ht}(t) \rangle$,
and let $f_{\mathrm{ht}(t)}:\delta \rightarrow \mathrm{Lev}_{\mathrm{ht}(t)}(t^{\gamma})$,
so that for every $\upsilon$ with $\xi_{g^0}\leq \upsilon<\delta$,
$f_{\mathrm{ht}(t)}(\upsilon)$ is the continuation of $\bigcup_{\alpha<\mathrm{ht}(t)}f_{\alpha}(\upsilon)$.
Let $\vec{f}^{\gamma}=\vec{f}\cup \{\langle \mathrm{ht}(t), f_{\mathrm{ht}(t)} \rangle\}$,
and let $g^{\gamma}=\langle f_{\mathrm{ht}(t)}, \xi_{g^0} \rangle$.
Then $\langle t^{\gamma},\vec{f}^{\gamma},g^{\gamma} \rangle$
extends $\langle t^{\alpha},\vec{f}^{\alpha},g^{\alpha}\rangle$ for every $\alpha<\gamma$.
\iffalse
(\ref{ite1}) We only need to prove that $Q_{\kappa,\delta}$ is ${<}\delta$-closed.
Take any $\gamma<\delta$.
Suppose $\langle \langle t^{\alpha},\vec{f}^{\alpha} \rangle \ | \ \alpha<\gamma \rangle$ is a $\gamma$-downward sequence of conditions in $Q_{\kappa,\delta}$.
Let $t=\bigcup_{\alpha<\gamma}t^{\alpha}$,
and let $\vec{f}=\bigcup_{\alpha<\gamma}\vec{f}^{\alpha}=\langle f_{\alpha} \ | \ \alpha<\hei(t) \rangle$.
Since $\delta$ is regular and $\gamma<\delta$, it follows that there is an $\upsilon_0<\delta$ such that $\bigcup_{\alpha<\gamma}f_{\alpha}(\upsilon)$ is a cofinal branch of $t$ for every $\upsilon_0 \leq \upsilon<\delta$.
Then as the proof of \eqref{it1},
we may get a condition $\langle t^{\gamma}, \vec{f}^{\gamma} \rangle \in Q_{\kappa,\delta}$
extending every element in $\langle \langle T^{\alpha},\vec{f}^{\alpha} \rangle \ | \ \alpha<\gamma \rangle$.
\fi
\end{proof}
In the above fact,
if $\kappa$ is supercompact and $\delta<\kappa$ is measurable,
then $Q_{\kappa,\delta}$ may preserve the $\delta$-strong compactness of $\kappa$ after some preparation.
The idea is that after a small ultrapower map $i_U:V\rightarrow M_U$ given by a normal measure $U$ over $\delta$,
we may lift $i_U$ to $i_U:V[G] \rightarrow M_U[G^*]$ for some $G^*$ by a transfer argument.
Then $i_U''\vec{f}^G$ may generate an $i_U(F_{\kappa,\delta})$-generic object associated with $i_U(T(G))$ over $M_U[G^*]$,
which may resurrect the supercompactness of $\kappa$.
It is well-known that if there is a $\kappa$-Suslin tree, then $\kappa$ is not measurable.
We give a similar result for a $\delta$-ascent $\kappa$-Suslin tree (actually, a $\delta$-ascent $\kappa$-Aronszajn tree is enough).
\begin{lemma}\label{lemma1}
Let $\kappa>\delta$ be two regular cardinals.
If $T$ is a $\delta$-ascent $\kappa$-Suslin tree, then $\kappa$ carries no $\delta^+$-complete uniform ultrafilters.
In particular, $\kappa$ is not $\delta^+$-strongly compact.
\iffalse
Suppose $\langle T, \vec{f} \rangle$ satisfies that
\begin{enumerate}
\item\label{item1} $T$ is a $\kappa$-Suslin tree;
\item\label{item2} $\vec{f}=\langle f_{\alpha} \ | \ \alpha<\mathrm{ht}(T), f_{\alpha}:\delta \rightarrow \mathrm{Lev}_{\alpha}(T) \rangle$;
\item\label{item3} for every $\alpha<\beta<\mathrm{ht}(T)$, the set $\{\upsilon<\delta \ | \ f_{\alpha}(\upsilon)<_T f_{\beta}(\upsilon)\}$ is co-bounded in $\delta$.
\end{enumerate}
\fi
\end{lemma}
\begin{proof}
This follows from \cite[Lemmas 3.7 and 3.38]{LR22}, but we provide here a direct proof.
Suppose not; then there exists a $\delta^+$-complete uniform ultrafilter over $\kappa$.
So we have an ultrapower map $j:V \rightarrow M$ such that $\mathrm{crit}(j)>\delta$ and $\sup(j''\kappa)<j(\kappa)$.
Let $\beta=\sup(j''\kappa)$.
Let $\vec{f}=\langle f_{\alpha} \ | \ \alpha<\kappa \rangle$ be a $\delta$-ascent path through $T$,
and denote $j(\vec{f})$ by $\langle f^*_{\alpha} \ | \ \alpha<j(\kappa) \rangle$.
By elementarity and the fact that $\mathrm{crit}(j)>\delta$, it follows that
for any $\alpha_1<\alpha_2<j(\kappa)$,
the set $\{ \upsilon<\delta \ | \ f_{\alpha_1}^*(\upsilon)<_{j(T)} f^*_{\alpha_2}(\upsilon) \}$ is co-bounded in $\delta$.
So for every $\alpha<\kappa$, there is a $\theta_{\alpha}<\delta$ such that
$f^*_{\beta}(\upsilon)>_{j(T)}f^*_{j(\alpha)}(\upsilon)=j(f_{\alpha}(\upsilon))$
for every $\upsilon$ with $\theta_{\alpha} \leq \upsilon<\delta$.
Since $\mathrm{cof}(\beta)=\kappa>\delta$, it follows that
there is an unbounded subset $A \subseteq \kappa$ and a $\theta<\delta$ such that
for each $\alpha \in A$, we have $\theta=\theta_{\alpha}$.
Hence for every $\alpha_1>\alpha_2$ in $A$,
we have $j(f_{\alpha_1}(\upsilon))>_{j(T)}j(f_{\alpha_2}(\upsilon))$ for every $\upsilon$ with $\theta \leq \upsilon<\delta$,
and thus $f_{\alpha_1}(\upsilon)>_{T}f_{\alpha_2}(\upsilon)$ for every $\upsilon$ with $\theta \leq \upsilon<\delta$ by elementarity.
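In particular, taking $\upsilon=\theta$, the nodes $f_{\alpha}(\theta)$ for $\alpha \in A$ form a $<_{T}$-increasing chain, and since $f_{\alpha}(\theta)\in \mathrm{Lev}_{\alpha}(T)$ and $A$ is unbounded in $\kappa$, this chain meets unboundedly many levels of $T$.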
Therefore, $\{s \in T \mid \exists \alpha \in A (s \leq_{T} f_{\alpha}(\theta)) \}$ is a cofinal branch through $T$,
contrary to the fact that $T$ is a $\kappa$-Suslin tree.
Thus $\kappa$ carries no $\delta^+$-complete uniform ultrafilter. This means that
$\kappa$ is not $\delta^+$-strongly compact by Theorem \ref{tKU}.
\end{proof}
By Fact \ref{factsus}, if $\kappa$ is a $2$-Mahlo cardinal, then $Q_{\kappa,\delta}$ adds a $\delta$-ascent $\kappa$-Suslin tree. So $\kappa$ is not $\delta^+$-strongly compact in $V^{Q_{\kappa,\delta}}$.
\begin{corollary}\label{kill}
Suppose $\kappa$ is a $2$-Mahlo cardinal, and $\delta<\kappa$ is a regular cardinal.
Then no cardinal less than or equal to $\kappa$ is $\delta^+$-strongly compact in $V^{Q_{\kappa,\delta}}$.
\end{corollary}
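Indeed, if $G$ is $Q_{\kappa,\delta}$-generic over $V$, then by Fact \ref{factsus} and Lemma \ref{lemma1} there is no $\delta^+$-complete uniform ultrafilter over $\kappa$ in $V[G]$, so by Theorem \ref{tKU} no cardinal less than or equal to $\kappa$ is $\delta^+$-strongly compact in $V[G]$.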
\iffalse
If $\kappa$ is a $2$-Mahlo cardinal, and $G$ is $Q_{\kappa,\delta}$-generic over $V$,
then $\langle T(G),\vec{f}^G \rangle$ satisfies the requirements (\ref{item1}), (\ref{item2}) and (\ref{item3}) above. So by Lemma \ref{lemma1}, there is no $\delta^+$-complete uniform ultrafilter over $\kappa$ in $V[G]$.
Hence there is no $\delta^+$-strongly compact cardinal below $\kappa^+$ by Theorem \ref{tKU}.
\fi
\section{the proofs for one cardinal}\label{sec3}
Gitik in \cite[Theorem 3.9 and the comment below the theorem]{Git2020} gave a new construction of a non-strongly compact $\delta$-strongly compact cardinal.
\begin{theorem}[\cite{Git2020}]
Assume $\gch$. Let $\kappa$ be a supercompact cardinal and let $\delta < \kappa$ be a measurable cardinal.
Then there is a cofinality preserving generic extension which satisfies the following:
\begin{enumerate}
\item $\gch$,
\item $\kappa$ is not measurable,
\item $\kappa$ is the least $\delta$-strongly compact cardinal.
\end{enumerate}
\end{theorem}
It seems that Gitik only gave a proof for the case in which there is no inaccessible cardinal above the supercompact cardinal $\kappa$, though the technique for transferring the proof to the general case is standard.
For the sake of completeness,
we first give some details of the proof of the general case in Proposition \ref{t1}. Then
we give some variants of it.
\begin{proposition}\label{t1}
Suppose $\kappa$ is a supercompact cardinal, and $\delta<\kappa$ is a measurable cardinal.
Then for every regular $\eta<\delta$,
there is a ${<}\eta$-directed closed forcing $\mathbb{P}$,
such that $\kappa$ is the least exactly $\delta$-strongly compact cardinal in $V^{\mathbb{P}}$.
In addition, if $\mathrm{GCH}$ holds in $V$, then it holds in $V^{\mathbb{P}}$.
\end{proposition}
\begin{proof}
Let $A=\{ \alpha<\kappa \mid \alpha>\delta$ is a strong cardinal limit of strong cardinals$\}$,
and let $B=\{\alpha<\kappa \mid \alpha>\delta$ is the least inaccessible limit of strong cardinals above some ordinal$\}$.
Then $A$ and $B$ are nonempty by Lemma \ref{lem10},
and $A\cap B=\emptyset$.
Let $\mathbb{P}_{\kappa+1}=\langle \langle \mathbb{P}_{\alpha},\dot{\mathbb{R}}_{\alpha} \rangle \mid \alpha \leq \kappa \rangle$
be the Easton support iteration of length $\kappa+1$,
where $\dot{\mathbb{R}}_{\alpha}$ is
\begin{itemize}
\item a $\mathbb{P}_{\alpha}$-name for $\mathrm{Add}(\alpha,1)_{V^{\mathbb{P}_{\alpha}}}$ if $\alpha \in A$,
\item a $\mathbb{P}_{\alpha}$-name for $(Q_{\alpha,\eta})_{V^{\mathbb{P}_{\alpha}}}$ if $\alpha \in B$,
\item a $\mathbb{P}_{\alpha}$-name for $(Q_{\kappa,\delta})_{V^{\mathbb{P}_{\alpha}}}$ if $\alpha=\kappa$,
\item the trivial forcing, otherwise.
\end{itemize}
Let $\mathbb{P}=\mathbb{P}_{\kappa+1}$.
Let $G_{\kappa}$ be a $\mathbb{P}_{\kappa}$-generic filter over $V$,
let $g$ be an $\mathbb{R}_{\kappa}$-generic filter over $V[G_{\kappa}]$, and let $G=G_{\kappa}*g \subseteq \mathbb{P}$.
For every $\alpha<\kappa$, let $G_{\alpha}=\{p \upharpoonright \alpha \mid p \in G_{\kappa}\}$,
and let $\dot{\mathbb{P}}_{\alpha,\kappa+1}$ name the canonical iteration of length $\kappa+1-\alpha$ such that $\mathbb{P}$
is forcing equivalent to $\mathbb{P}_{\alpha}*\dot{\mathbb{P}}_{\alpha,\kappa+1}$.
Then $\mathbb{P}$ is ${<}\eta$-directed closed,
because for every $\alpha\leq\kappa$, $\mathbb{P}_{\alpha}$ forces that $\dot{\mathbb{R}}_{\alpha}$ is ${<}\eta$-directed closed.
In addition, if $\mathrm{GCH}$ holds in $V$, then it holds in $V^{\mathbb{P}}$.
Take any $\alpha \in B$. In $V[G_{\alpha}]$,
the forcing $\mathbb{R}_{\alpha}=Q_{\alpha,\eta}$ adds an $\eta$-ascent $\alpha$-Suslin tree.
So $\alpha$ carries no $\eta^+$-complete uniform ultrafilters in $V[G_{\alpha+1}]$ by Lemma \ref{lemma1}.
Meanwhile, $\mathbb{P}_{\alpha+1,\kappa+1}$ is $(2^\alpha)^+$-strategically closed in $V[G_{\alpha+1}]$.
So any ultrafilter over $\alpha$ in $V[G]$ is actually in $V[G_{\alpha+1}]$.
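Indeed, an ultrafilter over $\alpha$ is a subset of $\mathcal{P}(\alpha)$, which has size at most $2^{\alpha}$ in $V[G_{\alpha+1}]$, and by its strategic closure the forcing $\mathbb{P}_{\alpha+1,\kappa+1}$ adds no new subsets of $\alpha$ or of $\mathcal{P}(\alpha)^{V[G_{\alpha+1}]}$.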
Hence, there are no $\eta^+$-complete uniform ultrafilters over $\alpha$ in $V[G]$.
This implies that there is no $\eta^+$-strongly compact cardinal below $\alpha$ in $V[G]$ by Theorem \ref{tKU}.
Since $B$ is unbounded in $\kappa$,
it follows that there is no $\eta^+$-strongly compact cardinal below $\kappa$.
Now we are ready to prove that $\kappa$ is exactly $\delta$-strongly compact.
By Corollary \ref{kill}, the cardinal $\kappa$ is not $\delta^+$-strongly compact.
Let $W$ be a normal measure over $\delta$, and let $i_W:V \rightarrow M_W$ be the corresponding ultrapower map.
Since $\mathbb{P}$ is ${<}\delta^+$-strategically closed,
we may lift $i_W$ to $i_W:V[G] \rightarrow M_W[G^*]$ by a transfer argument.
Here $G^*:=G_{\kappa}^**g^* \subseteq i_W(\mathbb{P})$ is the filter generated by $i_W''G$.
Next, we show that in $V[G]$,
there is an $i_W(F_{\kappa,\delta})$-generic object over $M_W[G^*]$.
In $M_W[G^*]$, the sequence $\vec{f}^{g^*}=\langle f^{g^*}_{\alpha} \mid \alpha<\kappa \rangle:=i_W(\vec{f}^g)$ is an $i_W(\delta)$-ascent path through
the $\kappa$-Suslin tree $T(g^*)$.
Moreover, for every $\upsilon$ with $\delta \leq \upsilon<i_W(\delta)$,
$\{f^{g^*}_{i_W(\alpha)}(\upsilon) \mid \alpha<\kappa \}$ generates a cofinal branch $\{ t \in T(g^*) \mid \exists \alpha<\kappa (t<_{T(g^*)} f^{g^*}_{i_W(\alpha)}(\upsilon))\}$ through $T(g^*)$.
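Indeed, by elementarity, for all $\alpha<\beta<\kappa$ the set $\{\upsilon<i_W(\delta) \mid f^{g^*}_{i_W(\alpha)}(\upsilon)<_{T(g^*)}f^{g^*}_{i_W(\beta)}(\upsilon)\}$ contains an interval of the form $[i_W(\xi),i_W(\delta))=[\xi,i_W(\delta))$ for some $\xi<\delta$, and in particular contains every $\upsilon$ with $\delta \leq \upsilon<i_W(\delta)$; since $\sup(i_W''\kappa)=\kappa$, the nodes $f^{g^*}_{i_W(\alpha)}(\upsilon)$ lie on unboundedly many levels of $T(g^*)$.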
We claim that the set $\{ \langle f^{g^*}_{i_W(\alpha)},\delta \rangle \mid \alpha<\kappa \}$ generates an $i_W(F_{\kappa,\delta})$-generic filter $F^*$ over $M_W[G^*]$.
In other words, let
\[
F^*=\{\langle f^{g^*}_{\gamma},\delta \rangle \mid \gamma<\kappa, \exists \gamma<\alpha<\kappa (\forall \delta \leq \upsilon <i_W(\delta)(f^{g^*}_{\gamma}(\upsilon)<_{T(g^*)}f^{g^*}_{i_W(\alpha)}(\upsilon)))\}.
\]
Then $F^*$ is an $M_W[G^*]$-generic set for the forcing
$i_W(F_{\kappa,\delta})$ associated with the tree $T(g^*)$.
We only need to prove that for every maximal antichain $A$ of $i_W(F_{\kappa,\delta})$,
some element of $F^*$ extends some element of $A$.
In $M_W[G^*]$, since $i_W(F_{\kappa,\delta})$ has the $\kappa$-c.c., it follows that
for every maximal antichain $A$ of $i_W(F_{\kappa,\delta})$
we may find an ordinal $\alpha<\kappa$
such that each function appearing in $A$ acts at a level of $T(g^*)$ below $\alpha$.
Now take an inaccessible cardinal $\beta$ with $\alpha<\beta<\kappa$ in $V[G]$.
Then $\langle f^{g^*}_{\beta},\delta \rangle \in F^*$
extends some element of $A$.
Thus $M_W[G^*,F^*]$ is an $i_W(\mathbb{P}_{\kappa} * \dot{\mathrm{Add}}(\kappa,1))$-generic extension of $M_W$.
Now work in $M_W$.
Let $\lambda > \kappa$ be an arbitrary singular strong limit cardinal of cofinality greater than $\kappa$.
Let $U$ be a normal measure over $\mathcal{P}_{\kappa}(\lambda)$
such that the embedding $j:=j^{M_W}_U:M_W \rightarrow N \cong \mathrm{Ult}(M_W,U)$ satisfies $N \vDash ``\kappa$ is not $\lambda$-supercompact$"$. Let $\pi=j\circ i_W$.
Work in $M_W[G^*,F^*]$, which is an $i_W(\mathbb{P}_{\kappa} * \dot{\mathrm{Add}}(\kappa,1))$-generic extension of $M_W$.
Since $\kappa \in \pi(A)$ by Lemma \ref{lem10},
and since in $N$ there is no strong cardinal in $(\kappa,\lambda]$ by Lemma \ref{lem11},
a standard argument shows that we may lift $j$ and obtain an embedding $j:M_W[G^*]\rightarrow N[G^*,F^*,H]$. Here $H$ is a $\pi(\mathbb{P})/(G^**F^*)$-generic filter constructed in $M_W[G^*,F^*]$ (see \cite{C2010} or \cite[Lemma 4]{AC2001} for the details).
\iffalse
Let $W$ be a normal ultrafilter over $\delta$, and let $i:=i^M_W:M_U \rightarrow N \cong \ult(M,W)$ be the corresponding ultrapower map.
Let $\pi:=i \circ j_U:V \rightarrow N$ be the composition of $j_U$ and $i$.
Then $i(\kappa)=\kappa$ and $i(\lambda)=\lambda$,
and $\pi$ is also the composition of the ultrapower map $i_W: V \rightarrow M_W$ and the ultrapower map $j:=j^{M_W}_{i_W(U)}:M_W \rightarrow N \cong \ult(M_W,i_W(U))$.
By Lemma \ref{lem10}, $\kappa$ is a strong cardinal limit of strong cardinals in $N$. So $\kappa \in \pi(A)$.
By Lemma \ref{lem11},
\begin{equation}\label{eq1}
N \vDash ``\text{ there is no strong cardinal in }(\kappa,\lambda]".
\end{equation}
Now consider the forcing $\pi(\mathbb{P})$ in $N$.
Note that $\kappa \in \pi(A)$ and \eqref{eq1},
the forcing $\pi(\mathbb{P})$ can split into the part $i_W(\mathbb{P}_{\kappa} * \dot{\mathrm{Add}}(\kappa,1))$,
and the part $\mathbb{P}_{\lambda,j(\kappa)}*\pi(\dot{Q}_{\kappa,\delta})$ above $\lambda$.
Note that $\mathrm{Add}(\kappa,1)$ is forcing equivalent to $Q_{\kappa,\delta}*\dot{F}_{\kappa,\delta}$,
so $i_W(\mathbb{P}_{\kappa} * \dot{\mathrm{Add}}(\kappa,1))$ is forcing equivalent to $i_W(\mathbb{P}_{\kappa} * (\dot{Q}_{\kappa,\delta}*\dot{F}_{\kappa,\delta}))=i_W(\mathbb{P})*i(\dot{F}_{\kappa,\delta})=i_W(\mathbb{P})*i_W(\dot{F}_{\kappa,\delta})$.
\iffalse
Now we deal with the part $i(\mathbb{P}_{\lambda,j(\kappa)}*j(\dot{Q}_{\kappa,\delta}))$.
Consider $\pi=j_{i_W(U)}^{M_W}\circ i_W$.
By the argument above, we have a lifting embedding $i_W:V[G]\rightarrow M_W[G^*]$
and a model
Next, we prove that there exists a $i(\mathbb{P}_{\lambda,j(\kappa)}*j(\dot{Q}_{\kappa,\delta}))$-generic
object over $N[G^*,F^*]$, and a master condition $q$.
Then we may lift $\pi$ by Silver's criteria.
Note that $Q_{\kappa,\delta}*\dot{F}_{\kappa,\delta}$ is forcing equivalent to the Cohen forcing $\add(\kappa,1)$,
since $|j(\kappa)|=\lambda^+$
and $\mathbb{P}_{\lambda,j(\kappa)}*j(\dot{Q}_{\kappa,\delta})$ is $\prec \lambda^+$-strategically closed in $V[G]^{F_{\kappa,\delta}}$,
the standard diagonal argument shows that
we may construct a $\mathbb{P}_{\lambda,j(\kappa)}*j(\dot{Q}_{\kappa,\delta})$-generic filter over $M[G]^{F_{\kappa,\delta}}$,
say $G_{\lambda,j(\kappa)+1} \subseteq \mathbb{P}_{\lambda,j(\kappa)}*j(\dot{Q}_{\kappa,\delta})$.
Meanwhile, note that the generic object given by $F_{\kappa,\delta}$ adds a true cofinal branch through $\vec{f}^g$.
So in $V[G]^{F_{\kappa,\delta}}$,
we may define $T'$ by adding all cofinal branches of $T(g)$,
and define $f_{\kappa}:\delta \rightarrow \Lev_{\kappa}(T')$ such that
$p:=\langle T',\vec{f}'\rangle \in j_U(Q_{\kappa,\delta})$, where
$\vec{f}'=\vec{f}^g\cup \{\langle \kappa, f_{\kappa} \rangle \}$.
Then $p$ is a master condition for $j$ and $Q_{\kappa,\delta}$.
We may assume the master condition is in $G_{\lambda,j(\kappa)+1}$.
\iffalse
Let $T_{\kappa}=\bigcup_{t \in T(g)}t$, and let $\vec{f}=\langle f_{\alpha} \mid \alpha<\kappa \rangle=\bigcup_{\langle t,\vec{f}'\rangle \in g}\vec{f'}$ be the sequence of functions occurred in $g$.
In $V[G]^{F_{\kappa,\delta}}$, $F_{\kappa,\delta}$ adds a cofinal branch of the $\kappa$-Suslin tree $T(g)$.
Thus we may get a tree $T'$ by add all maximal branches of $T_{\kappa}=\bigcup_{t \in T(g)}t$ to the level $\kappa$.
We may also get a function $f_{\kappa}:\delta \rightarrow \mathrm{Lev}_{\kappa}(T')$, which is co-boundedly extends $f_{\alpha}$ for every $\alpha<\kappa$.
Let $\vec{f'}=\vec{f}\cup \{\langle \kappa, f_{\kappa}\rangle \}$,
then $\langle T', \vec{f'}\rangle$ is a master condition of $j$ and $Q_{\kappa,\delta}$.
Note that $|j(\kappa)|=|j(\lambda)|=\lambda^+$, and $j(Q_{\kappa,\delta})$ is $\prec \lambda^+$-strategically closed in $V[G,h]$,
the standard diagonal argument shows that we can construct a $j(Q_{\kappa,\delta})$-generic filter, say $G_{j(\kappa),j(\kappa)}$.
\fi
Let $\langle \dot{q}_{\xi} \mid \xi<\lambda^+ \rangle$ be a sequence of canonical $F_{\kappa,\delta}$-names for elements in $G_{\lambda,j(\kappa)+1}$.
\fi
Now use $i$ and move to $N$. Then $\langle i(\dot{q}_{\alpha}) \mid \alpha<\lambda^+ \rangle$ will be a master condition sequence for $i$ and $N[G^*,F^*]$.
Thus we may lift $\pi$ and obtain a lifting embedding $\pi:V[G] \rightarrow N[G^*,F^*,R]$, where $R$ is generated by $\langle (i(\dot{q}_{\alpha}))_{G^**F^*} \mid \alpha<\lambda^+ \rangle$.
\fi
Let $D=j''\lambda \in N[G^*,F^*,H]$. Then $\pi''\lambda\subseteq D$ and $N[G^*,F^*,H] \vDash |D|<\pi(\kappa)$.
Hence in $V[G]$, $\pi$ witnesses that $\kappa$ is $(\delta,\lambda)$-strongly compact.
As $\lambda$ was an arbitrary singular strong limit cardinal of cofinality greater than $\kappa$,
it follows that $\kappa$ is $\delta$-strongly compact.
This completes the proof of Proposition \ref{t1}.
\iffalse
\begin{lemma}\label{le3}
\begin{equation}\label{ee8}
V[G][F] \vDash ``\mathbb{P}_{(\lambda,j(\kappa))} \text{ is $\prec\lambda^+-$strategically closed''},
\end{equation}
and
\begin{equation}\label{ee9}
V[G][F] \vDash ``\mathbb{P}_{(\lambda,j(\kappa))} \text{ has $\lambda^+$ maximal antichains lying in $M[G][F]$''}.
\end{equation}
\end{lemma}
\begin{proof}
We know $M[G][F] \vDash ``\mathbb{P}_{(\lambda,j(\kappa))}$ is $\prec\lambda^+-$strategically closed''. Since $\mathbb{P}*\dot{Q}_{\kappa}*\dot{F}_{\kappa}$ is $\lambda^+$-c.c. and $V \vDash {^{\lambda}M \subseteq M}$, we have $V[G][F] \vDash {^{\lambda}M[G][F] \subseteq M[G][F]}$, so \eqref{ee8} holds.
$\mathbb{P}_{\kappa}$ is $\kappa-$c.c. with size $\kappa$, and in $M$, we have $j(\mathbb{P}_{\kappa})\backsimeq \mathbb{P}*\dot{Q}_{\kappa}*\dot{F}_{\kappa}*\dot{\mathbb{P}}_{(\lambda,j(\kappa))}$, then in $M[G][F]$, $\mathbb{P}_{(\lambda,j(\kappa))}$ is $j(\kappa)-$c.c. with size $j(\kappa)$, which implies that we have $j(\kappa)$ maximal antichains of $\mathbb{P}_{(\lambda,j(\kappa))}$. Since $V \vDash |j(\kappa)|=|j(\lambda)|=\lambda^{\lambda^{<\kappa}}=\lambda^+$,(\ref{ee9}) holds.
\end{proof}
\fi
\iffalse
\begin{lemma}
\begin{equation}\label{ee10}
V[G][F] \vDash ``j(Q_{\kappa}) \text{ is $\prec\lambda^+-$strategically closed''},
\end{equation}
and
\begin{equation}\label{ee11}
V[G][F] \vDash ``\text{$j(Q_{\kappa})$ has $\lambda^+$ maximal antichains lying in $M[G_{j(\kappa)}]$''}.
\end{equation}
\end{lemma}
\begin{proof}
Since $V[G][F] \vDash {^{\lambda}M[G][F] \subseteq M[G][F]}$, $V[G][F] \vDash {^{\lambda} On} \subseteq M[G][F]$, and $M[G][F] \subseteq M[G_{j(\kappa)}]$, we have $V[G][F] \vDash {^{\lambda} On} \subseteq M[G_{j(\kappa)}]$. Therefore, $V[G][F]\vDash {^{\lambda}M[G_{j(\kappa)}] \subseteq M[G_{j(\kappa)}]}$. Also, $M[G_{j(\kappa)}] \vDash ``j(Q_{\kappa}) \text{ is $<j(\kappa)-$strategically closed''}$, hence (\ref{ee10}) holds.
Since in $M[G_{j(\kappa)}]$, $j(Q_{\kappa})$ is $j(\kappa^+)$-c.c. and has size $j(\kappa)$, we know that $j(Q_{\kappa})$ has $j(\kappa^+)$ maximal antichains. Since $V \vDash |j(\kappa^+)|\leq |j(\lambda)|=\lambda^{+}$, (\ref{ee11}) follows.
\end{proof}
\fi
\end{proof}
\iffalse
\begin{corollary}
Suppose $\kappa$ is an indestructible supercompact cardinal, $\delta<\kappa$ is a measurable cardinal and $\lambda \geq \kappa$ is a $2$-Mahlo cardinal. Then in $V^{Q_{\lambda,\delta}}$, $\kappa$ is exactly $\delta$-strongly compact cardinal.
\end{corollary}
\begin{remark}\label{rmk1}
In the proposition above, if $\mathbb{Q} \in V_{\kappa}$ is a $\prec \delta^+$-strategically closed forcing,
then $\kappa$ is also the least exactly $\delta$-strongly compact cardinal in $V^{\mathbb{P}\times \mathbb{Q}}$.
\end{remark}
In the iteration $\mathbb{P}=\mathbb{P}_{\kappa+1}=\langle \langle \mathbb{P}_{\alpha},\dot{\mathbb{R}}_{\alpha} \rangle \mid \alpha \leq \kappa \rangle$, if $\dot{\mathbb{R}}_{\alpha}$ names a trivial forcing for every $\alpha \in B$, then $\kappa$ is still an exactly $\delta$-strongly compact cardinal, but it may not be the least one. And for any $\eta<\kappa$, the forcing $\mathbb{P}$ can be ${<}\eta$-directed closed if we start this iterated forcing above $\eta$.
\fi
\iffalse
and measurable in $V^{\mathbb{P}}$. In particular, if , then $\kappa$ is exactly $\delta$-strongly compact and measurable in $V^{Q_{\lambda,\delta}}$.
\fi
\begin{remark}\label{rmk2}
Take any inaccessible $\theta$ with $\delta<\theta<\kappa$. For any ${<}\theta$-strategically closed forcing $\mathbb{Q}_0 \in V_{\kappa}$
and any forcing $\mathbb{Q}_1 \in V_{\theta}$ such that $i_W$ can be lifted to some embedding with domain $V^{\mathbb{Q}_1}$,
$\kappa$ is the least exactly $\delta$-strongly compact cardinal in $V^{\mathbb{P}\times \mathbb{Q}_0 \times \mathbb{Q}_1}$.
\end{remark}
It is easy to see that $\kappa$ is the least $\eta^+$-strongly compact cardinal in the proposition above.
If we add a $\delta$-ascent Suslin tree at the least $2$-Mahlo cardinal above $\kappa$ instead of at $\kappa$, then $\kappa$ may be measurable.
Thus the least $\delta$-strongly compact cardinal may be non-strongly compact and measurable.
We may compare this with Menas' theorem that every measurable almost strongly compact cardinal must be strongly compact.
\begin{proposition}\label{t2}
Suppose $\kappa$ is a supercompact cardinal, $\kappa'$ is the least $2$-Mahlo cardinal above $\kappa$,
and $\delta<\kappa$ is a measurable cardinal.
Then for any $\eta<\delta$, there is a ${<}\eta$-directed closed forcing $\mathbb{P}$,
such that in $V^{\mathbb{P}}$, $\kappa$ is the least exactly $\delta$-strongly compact cardinal, and $\kappa$ is measurable.
\end{proposition}
\begin{proof}
Without loss of generality,
we may assume that $2^{\kappa}=\kappa^+$.
Take the Easton support iteration $\mathbb{P}=\mathbb{P}_{\kappa+1}$ from Proposition \ref{t1},
but with $A=\{ \alpha<\kappa \mid \alpha>\delta$ is the first $2$-Mahlo cardinal above some strong cardinal limit of strong cardinals$\}$,
and $\dot{\mathbb{R}}_{\kappa}$ names $(Q_{\kappa',\delta})_{V^{\mathbb{P}_{\kappa}}}$ instead of $(Q_{\kappa,\delta})_{V^{\mathbb{P}_{\kappa}}}$.
Then in $V^{\mathbb{P}}$, $\kappa$ is the least exactly $\delta$-strongly compact cardinal by the argument in Proposition \ref{t1}
and measurable by a standard lifting argument (see \cite[Theorem 11.1]{C2010}).
\end{proof}
In the above proposition,
since $A \cup B$ is sparse and no element of $A \cup B$ is measurable,
it is easy to see that if $\gch$ holds, then every measurable cardinal is preserved,
by a standard lifting argument (see \cite[Theorem 11.1]{C2010}).
\begin{corollary}\label{t4}
Suppose $\kappa$ is supercompact and indestructible under any ${<}\kappa$-directed closed forcing, and $\delta<\kappa$ is a measurable cardinal.
Then $\kappa$ is exactly $\delta$-strongly compact in $V^{Q_{\kappa,\delta}}$. In addition,
if $\kappa'$ is the least $2$-Mahlo cardinal above $\kappa$,
then $\kappa$ is exactly $\delta$-strongly compact and measurable in $V^{Q_{\kappa',\delta}}$.
\end{corollary}
\iffalse
Next, we will prove that $\kappa$ is strong.
Fix an arbitrary cardinal $\theta>\lambda$ with $\theta=\aleph_{\theta}$.
Let $j:V \rightarrow M$ be an elementary embedding witnessing $\lambda$-strongness of $\kappa$.
Like the proof of \cite[Lemma 2.5]{Apter Cummings(2001)},
we may lift $j$ and obtain a lifting embeding $j:V^{\mathbb{P}_{\kappa}} \rightarrow M^{j(\mathbb{P}_{\kappa})}$.
Note that $\mathbb{R}_{\kappa}=Q_{\lambda,\delta}$ is $\prec\lambda$-strategically closed in $V^{\mathbb{P}_{\kappa}}$
and $j$ has width $\kappa$,
we may lift $j$ and obtain $j:V^{\mathbb{P}}\rightarrow M^{j(\mathbb{P})}$ by transfer argument.
Thus $j$ is a $\theta$-strong embeding.
Since $\theta$ is arbitrary, it follows that $\kappa$ is strong.
By a standard argument, the $<\lambda$-supercompactness of $\kappa$ is also preserved. Thus $\kappa$ is measurable.
Next, we will prove that $\kappa$ is strong. Fix $\eta>\lambda$, $\eta$ is cardinal with $\eta=\aleph_{\eta}$. Let $j:V \rightarrow M$ be an elementary embedding witnessing $\lambda$-strongness of $\kappa$. Let $U=\{ x \subseteq \kappa \mid \kappa \in j(x)\}$ be the normal measure, and let $i$ be the ultrapower given by $U$. Then we may define an elementary embedding $k:M \rightarrow N$ by $k([F]_{U})=j(F)(\kappa)$, and $j=k \circ i$.
For any ordinal $\alpha$, define $\sigma_{\alpha}$ as the least ordinal $> \alpha$ so that $\alpha$ is not $\sigma_{\alpha}$
strong if such an ordinal exists, and $\sigma_{\alpha} = 0$ otherwise. Define $f : \kappa \rightarrow \kappa$
as $f(\alpha) =$ The least inaccessible cardinal $> \sigma_{\alpha}$. By our choice of $\eta$ and
the preceding paragraph, $\kappa < \eta < j(f)(\kappa) < \eta$, where $\eta$ is the least strong
cardinal in $M \geq \kappa$.
Let $\gamma=i(f)(\kappa)$. Then $k(\gamma)=j(f)(\kappa)<\eta$. Note now that
\[
M=\{ j(f)[a] \mid a \in [\eta]^{<\omega}, f:[\kappa]^{|a|} \rightarrow V \}
=\{k(f)[a] \mid a \in [\eta]^{<\omega}, f:[\gamma]^{|a|} \rightarrow V \}
\]
By elementarity, $N \vDash \kappa$ is not strong and $k(\kappa)=\kappa<k(\gamma)=j(f)(\kappa)<k(\eta_0)=\eta$.
Therefore, $k$ can be assumed to be generated by an $N$-extender of width $\gamma \in (\eta,\delta_0)$.
\fi
\iffalse
The following proposition shows that the least $\omega_1$-strongly compact but not strongly compact cardinal can be the second measurable.
\begin{proposition}\label{t2}
Suppose $\kappa_0 <\kappa_1<\lambda$, $\kappa_0$ is measurable, $\kappa_1$ is supercompact, $\lambda$ is the only $2-$Mahlo cardinal above $\kappa_1$. Then there is a generic extension, in which $\kappa_1$ is the least exactly $\kappa_0-$strongly compact cardinal and the second measurable cardinal.
\end{proposition}
\begin{proof}
By forcing if necessary, we may assume that $GCH$ holds. Let $A=\{\theta<\kappa_1 \mid \theta>\kappa_0,$ and $\theta$ is a measurable cardinal$\}$, let $\mathbb{P}=\mathbb{P}_{\kappa_1+1}=\langle \mathbb{P}_{\alpha},\dot{\mathbb{R}}_{\alpha} \mid \alpha \leq \kappa_1 \rangle$ be the iteration of length $\kappa_1+1$ with Easton support in which $\mathbb{\dot{R}}_{\alpha}$ is a $\mathbb{P}_{\alpha}-$name for $Q_{\alpha,\eta}\times Add(\alpha',1)$ if $\alpha \in A$, where $\alpha'$ is the first $2$-Mahlo cardinal above $\alpha$, a $\mathbb{P}_{\kappa_1}-$name for $Q_{\lambda,\kappa_0}$ if $\alpha=\kappa_1$, and trivial forcing otherwise.
Let $G_{\kappa_1}$ be $\mathbb{P}_{\kappa_1}$-generic over $V$, $g$ be $\mathbb{R}_{\kappa_1}-$generic over $V[G_{\kappa_1}]$, and $G=G_{\kappa_1}*g \subseteq \mathbb{P}$. It is not difficult to see that $GCH$ holds at and above $\kappa_1$ in $V[G]$.
For every $\alpha \in A$, in $V[G_{\alpha}]$, $Q_{\alpha,\eta}$ adds an $\alpha-$Suslin tree. Since $Add(\alpha',1)$ is $(2^{\alpha})^+$-closed, hence $(2^{\alpha})^+-$distributive, by Lemma \ref{clai3}, in $V[G_{\alpha+1}]$, the $\alpha-$Suslin tree added by $Q_{\alpha,\eta}$ is also $\alpha-$Suslin. Hence, by Lemma \ref{clai1}, in $V[G]$, there are no $\omega_1$-strongly compact cardinals below $\kappa_1$. Similarly, $\kappa_1$ is not $\kappa_0^+$-strongly compact. Also, by a standard argument, there are no measurable cardinals between $\kappa_0$ and $\kappa_1$.
Now we are ready to prove that $\kappa_1$ is $\kappa_0$-strongly compact.\\
We follow the strategy used in Proposition \ref{t1}. Let $W$ be a normal ultrafilter over $\kappa_0$, and $j_W:V \rightarrow M_W\backsimeq Ult(V,W)$ be the corresponding ultrapower map. Since $\mathbb{P}$ is $\delta^+-$strategically closed, by the transfer argument, we may lift $j_W$ and obtain $j_W:V[G] \rightarrow M_W[G^1]$, where $G^1=G_{\kappa_1}^1*g^1 \subset j_W(\mathbb{P})$ is generated by $j_W''G$. Obviously, $j_W(\lambda)=\lambda$, so $j_W(T(g))$ is a $\lambda-$Suslin tree in $M_W[G^1]$. Let $\vec{f}^{g^1}=\langle f^{g^1}_{\alpha} \mid \alpha<\lambda \rangle=j_W(\vec{f}^g)$. Similarly, we may build an $M_W[G^1]-$generic set $F^1$ for the forcing $j_W(F_{\lambda,\kappa_0})$ with the tree $j_W(T(g))$ in $M_W[G^1]$, with $\xi_f=\kappa_0$.\\
Work in $M_W[G^1,F^1]$. Let $U_1$ be a normal ultrafilter with minimal Mitchell ordering over $\kappa_1$, and $j_1=j^{M_W}_{j_W(U_1)}:M_W \rightarrow M=Ult(M_W,j_W(U_1))$ be the canonical embedding, then we may lift it and obtain $j_1:M_W[G_{\kappa_1}^1,g^1] \rightarrow M[G_{j_1(\kappa_1)},j_1(g^1)]$ and $j_1:M_W[G_{\kappa_1}^1,g^1,F^1] \rightarrow M[G_{j_1(\kappa_1)},j_1(g^1),j_1(F^1)]$.
Also, we may build an object $h \in M_W$ such that $h$ is $Q_{j_1(\kappa_1),\eta}$-generic over $M[G_{j_1(\kappa_1)}]$. Since in $M[G_{j_1(\kappa_1)}]$, $Q_{j_1(\kappa_1),\eta}$ is small, and $j_W(Q_{\lambda,\kappa_0})*j_W(\dot{F}_{\lambda,\kappa_0})$ is $<\lambda-$distributive, $h \times (j_1(g^1) * j_1(F^1))$ is $Q_{j_1(\kappa_1),\eta} \times (j_W(Q_{\lambda,\kappa_0})*j_W(\dot{F}_{\lambda,\kappa_0}))$-generic over $M[G_{j_(\kappa_1)}]$.
For convenience, let $\kappa'_1=j_1(\kappa_1)$, $\mathbb{P'}=j_1(j_W(\mathbb{P}_{\kappa_1}))$, $A'=j_1(j_W(A))$, $G'=G_{j_1(\kappa_1)}$, $\vec{f}=j_1(j_W(\vec{f}^g))$ and $C=j_1(g^1) * j_1(F^1) \subseteq j_W(Q_{\lambda,\kappa_0})*j_W(\dot{F}_{\lambda,\kappa_0})$. \\
Work in $M[G',h,C]$. Fix a regular cardinal $\lambda_1>\lambda$.
\iffalse
$\lambda_1=j'(j_W(\lambda_1'))$ for some regular cardinal $\lambda_1' > \lambda$ in $V$
\fi
Since $\kappa'_1$ is supercompact in $M$, let $U$ be a supercompact normal measure over $\mathcal{P}_{\kappa'_1}(\lambda_1)$ and $j_U:M \rightarrow N=Ult(V,U)$ be the canonical embedding.
Consider the forcing $j_U(\mathbb{P}')$ in $N$. Since $\kappa'_1$ is measurable in $N$, $j_U(\mathbb{P})$ splits into the part $\mathbb{P'}*( \dot{Q}_{\kappa'_1,\eta} \times (Add(\lambda,1)))$, and the part $\mathbb{P}'_{(\kappa_1+1,j_U(\kappa'_1))}*\dot{Q}_{j_U(\lambda),\kappa_0}$ above $\lambda$. Actually, $\mathbb{P}_{(\kappa_1+1,j_U(j'(\kappa_1)))}*\dot{Q}_{j_U(\lambda),\kappa_0}$ is above $\lambda_1$, as $M \vDash {^{\lambda_1}}N \subseteq N$ and there are no inaccessible cardinals above $\lambda$. Now we are ready to show that we may lift $j_U$ and obtain $j_U:M[G',j_1(g^1)]\rightarrow N[j_U(G'),j_U(j_1(g^1))]$.
\begin{lemma}\label{le3}
\begin{equation}\label{ee6}
M[G',h,C] \vDash ``\mathbb{P}_{(\kappa_1+1,j_U(\kappa'_1))} \text{ is $\prec\lambda_1^+-$strategically closed''},
\end{equation}
and
\begin{equation}\label{ee5}
M[G',h,C] \vDash ``\mathbb{P}_{(\kappa_1+1,j_U(\kappa'_1))} \text{ has $\lambda_1^+$ maximal antichains lying in $N[G',h,C]$''}.
\end{equation}
\end{lemma}
We may build a generic object $S \in M[G',h,C]$ such that $S$ is $\mathbb{P}_{(\kappa_1+1,j_U(\kappa'_1))}$-generic over $N[G',h,C]$. Let $G'_{j_U(\kappa'_1)}=G'*(C \times h)*S$, so that $j_U''G' \subset G'_{j_U(\kappa'_1)}$, and $G'_{j_U(\kappa'_1)}$ is $j_U(\mathbb{P'})$-generic over $N$. Then we may lift $j_U:M \rightarrow N$ and obtain $j_U:M[G'] \rightarrow N[G'_{j_U(\kappa'_1)}]$.
\begin{lemma}\label{le1}
$M[G',h,C]\vDash {^{\lambda_1}N[G'_{j_U(\kappa'_1)}]\subseteq N[G'_{j_U(\kappa'_1)}]}$.
\end{lemma}
\begin{proof}
Since $M[G',h,C] \vDash {^{\lambda_1}N[G',h,C] \subseteq N[G',h,C]}$, $M[G',h,C] \vDash {^{\lambda_1} On} \subseteq N[G',h,C]$, and $N[G',h,C] \subseteq N[G'_{j_U(\kappa'_1)}]$, we have $M[G',h,C] \vDash {^{\lambda_1} On} \subseteq N[G'_{j_U(\kappa'_1)}]$. Therefore, the lemma follows.
\end{proof}
\begin{lemma}
\begin{equation}\label{ee3}
M[G',h,C] \vDash ``Q_{j_U(\lambda),\kappa_0} \text{ is $\prec\lambda_1^+-$strategically closed''},
\end{equation}
and
\begin{equation}\label{ee4}
M[G',h,C] \vDash ``\text{$Q_{j_U(\lambda),\kappa_0}$ has $\lambda_1^+$ maximal antichains lying in $N[G'_{j_U(\kappa'_1)}]$''}.
\end{equation}
\end{lemma}
\begin{proof}
By Lemma \ref{le1}, we know $M[G',h,C] \vDash ``Q_{j_U(\lambda),\kappa_0}$ is $\prec\lambda_1^+-$strategically closed''. Also, $N[G'_{j_U(\kappa'_1)}] \vDash ``Q_{j_U(\lambda),\kappa_0} \text{ is $<j_U(\kappa_1)-$strategically closed''}$, hence (\ref{ee3}) holds.\\
Since in $N[G'_{j_U(\kappa'_1)}]$, $Q_{j_U(\lambda),\kappa_0}$ is $j_U(\lambda^+)$-c.c. and has size $j_U(\lambda^+)$, we know that $Q_{j_U(\lambda),\kappa_0}$ has $j_U(\lambda+)$ maximal antichains. Since $V \vDash |j_U(\lambda^+)|<|(\lambda^+)^{\lambda_1}|^+=\lambda_1^{++}$, (\ref{ee4}) follows.
\end{proof}
So we may build a $g_1 \in M[G',h,C]$, such that $g_1$ is $Q_{j_U(\lambda),\kappa_0}-$generic over $N[G'_{j_U(\kappa'_1)}]$. Similar to the proof of Fact \ref{factsus}, we may assume $j_U''g \subset g_1$. Then we may lift $j_U:V[G_{\kappa_1}] \rightarrow M_U[G_{j_U(\kappa_1)}]$ and obtain $j_U:M[G',j_1(g^1)]\rightarrow N[G'_{j_U(\kappa'_1)},g_1]$.
Let $i=j_U \circ j_1 \circ j_W$. By the argument above, the embedding $i$ may extend to
\[
i^*:V[G] \rightarrow N[G'_{j_U(\kappa'_1)},g_1],
\]
Let $D=j_U''\lambda_1 \in N[G'_{j_U(\kappa'_1)},g_1]$, then $i^{*} {''} \lambda'_1 \subseteq D$ for any $\lambda'_1$ with $j_1 \circ j_W(\lambda'_1) \leq \lambda_1$, $N[G'_{j_U(\kappa'_1)},g_1] \vDash |D|<i^*(\kappa_1)$, and $crit(i^*)=\kappa_0$.
So we know that $\kappa_1$ is $\kappa_0-$strongly compact in $V[G]$.
\end{proof}
\fi
\section{main theorems}\label{sec4}
\iffalse
we may construct hierarchies of $\delta$-strongly compact cardinals for different $\delta$ simultaneously,
and prove main theorems that will provide nontrivial examples of non-strongly compact almost strongly compact cardinals.
\fi
In this section, by the proof of \cite[Theorem 2]{Apter(1996)}, for a given class $\mathcal{K}$ of supercompact cardinals,
we may assume that $V \vDash ``\mathrm{ZFC}+\mathrm{GCH}+ \mathcal{K}$ is the class of supercompact cardinals
$+$
Every supercompact cardinal $\kappa$ is Laver indestructible
under any ${<}\kappa$-directed closed forcing not destroying $\mathrm{GCH}$ $+$
The strongly compact cardinals and supercompact cardinals coincide precisely,
except possibly at measurable limit points$"$.
\begin{theorem}\label{thm1}
Suppose that $\mathcal{A}$ is a subclass of the class $\mathcal{K}$ of the supercompact cardinals containing none of its limit points,
and $\delta_{\kappa}$ is a measurable cardinal with $\sup(\mathcal{A}\cap \kappa)< \delta_{\kappa}<\kappa$ for every $\kappa \in \mathcal{A}$.
Then there exists a forcing extension, in which
$\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal for every $\kappa \in \mathcal{A}$.
In addition, no new strongly compact cardinals are created and $\gch$ holds.
\end{theorem}
\begin{proof}
For each $\kappa \in \mathcal{A}$,
let $\eta_{\kappa}=(2^{\sup(\mathcal{A}\cap \kappa)})^+$,
with $\eta_{\kappa}=\omega_1$ for $\kappa$ the least element of $\mathcal{A}$.
Let $\mathbb{P}_{\kappa}$ be a forcing that forces $\kappa$
to be the least exactly $\delta_{\kappa}$-strongly compact cardinal
by first taking $\eta=\eta_{\kappa}$ and $\delta=\delta_{\kappa}$ and
then using the ${<}\eta$-directed closed forcing given by Proposition \ref{t1}.
Let $\mathbb{P}$ be the Easton support product $\Pi_{\kappa \in \mathcal{A}}\mathbb{P}_{\kappa}$.
Note that the forcing is a product, and the field of $\mathbb{P}_{\kappa}$ lies in $(\delta_{\kappa},\kappa]$,
which contains no element of $\mathcal{A}$ other than $\kappa$ itself,
so the fields of the forcings $\mathbb{P}_{\kappa}$ occur in pairwise disjoint blocks.
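Indeed, if $\kappa_1<\kappa_2$ both lie in $\mathcal{A}$, then $\kappa_1\leq \sup(\mathcal{A}\cap \kappa_2)<\delta_{\kappa_2}$, so the intervals $(\delta_{\kappa_1},\kappa_1]$ and $(\delta_{\kappa_2},\kappa_2]$ are disjoint.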
Though $\mathbb{P}$ is a class forcing,
the standard Easton argument shows $V^{\mathbb{P}}\vDash \mathrm{ZFC}+\mathrm{GCH}$.
\begin{lemma}
If $\kappa \in \mathcal{A}$, then $V^{\mathbb{P}} \vDash ``\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal$"$.
\end{lemma}
\begin{proof}
We may factor the forcing in $V$ as $\mathbb{P}=\mathbb{Q}^{\kappa} \times \mathbb{P}_{\kappa} \times \mathbb{Q}_{<\kappa}$,
where $\mathbb{Q}^{\kappa}=\Pi_{\alpha>\kappa}\mathbb{P}_{\alpha}$ and $\mathbb{Q}_{<\kappa}=\Pi_{\alpha<\kappa}\mathbb{P}_{\alpha}$.
By indestructibility and the fact that $\mathbb{Q}^{\kappa}$ is a ${<}\kappa$-directed closed forcing not destroying $\gch$,
it follows that $\kappa$ is supercompact in $V^{\mathbb{Q}^{\kappa}}$.
Note that the strong cardinals below $\kappa$ are the same in $V^{\mathbb{Q}^{\kappa}}$ and $V$, because $\mathbb{Q}^{\kappa}$ is ${<}\kappa$-directed closed and $\kappa$ is supercompact in $V^{\mathbb{Q}^{\kappa}}$.
Therefore, the forcing $\mathbb{P}_{\kappa}$ satisfies the same definition in either $V$ or $V^{\mathbb{Q}^{\kappa}}$. So $\kappa$ is the least $\delta_{\kappa}$-strongly compact cardinal in $V^{\mathbb{Q}^{\kappa}\times \mathbb{P}_{\kappa}}$ by Proposition \ref{t1}.
Since $\delta_{\kappa}>\sup(\mathcal{A}\cap \kappa)$ is a measurable cardinal, it follows that $|\mathbb{Q}_{<\kappa}|<\delta_{\kappa}$. So $V^{\mathbb{P}}=V^{\mathbb{Q}^{\kappa} \times \mathbb{P}_{\kappa} \times \mathbb{Q}_{<\kappa}}\vDash ``\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal$"$.
\iffalse
If $\delta_{\kappa}=\sup(\mathcal{A}\cap \kappa)$ is a measurable cardinal, then note that $2^{\delta_{\kappa}}=\delta_{\kappa}^+$ holds in $V$, it follows that it holds in $V^{\mathbb{P}}$. Hence we may lift an ultrapower map given by a normal measure over $\delta_{\kappa}$. Then by Remark \ref{rmk1}, $V^{\mathbb{P}}=V^{\mathbb{Q}^{\kappa} \times \mathbb{P}_{\kappa} \times \mathbb{Q}_{<\kappa}}\vDash ``\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal$"$.
\fi
\end{proof}
\iffalse
$\kappa$ is not a limit point of $\mathcal{A}$.
If $\delta_{\kappa}=\sup(\mathcal{A}\cap \kappa)$, then $\kappa \in \mathcal{A}$ is a supercompact cardinal. Hence $2^{\kappa}=\kappa^+$. By a standard argument, we may lift an ultrapower map given by a normal measure over $\delta_{\kappa}$. Hence by Remark \ref{rmk1}, $V^{\mathbb{P}}=V^{\mathbb{Q}^{\kappa} \times \mathbb{P}_{\kappa} \times \mathbb{Q}_{<\kappa}}\vDash ``\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal$"$.
\fi
Now we only need to prove that no new strongly compact cardinals are created.
In other words, if $\theta$ is strongly compact in $V^{\mathbb{P}}$,
then $\theta \in \mathcal{K}$ or it is a measurable limit point of $\mathcal{K}$ in $V$.
Suppose not.
If $\sup(\mathcal{A})<\theta$, then $|\mathbb{P}|<\theta$.
This implies that $\theta$ is also strongly compact in $V$, contrary to our assumption.
So we may assume that $\theta\leq \sup(\mathcal{A})$.
Then there are two cases to consider:
$\blacktriangleright$ $\theta$ is not a limit point of $\mathcal{A}$.
Let $\kappa=\min(\mathcal{A} \setminus (\theta+1))$.
Then $\theta \in (\eta_{\kappa},\kappa)$.
We may factor $\mathbb{P}$ as $\mathbb{Q}^{\kappa} \times \mathbb{P}_{\kappa} \times \mathbb{Q}_{<\kappa}$.
Then in $V^{\mathbb{P}_{\kappa}}$, $\theta$ is strongly compact,
which implies that $\theta$ is $\eta_{\kappa}^+$-strongly compact.
However, $\kappa$ is the least $\eta_{\kappa}^+$-strongly compact cardinal in $V^{\mathbb{P}_{\kappa}}$ by Proposition \ref{t1}, a contradiction.
$\blacktriangleright$ $\theta$ is a limit point of $\mathcal{A}$. For the sake of completeness,
we give an outline of the proof here (see \cite[Lemma 8]{Apter(1996)} for details).
\iffalse
Suppose $\theta$ is not measurable in $V$
\fi
Since $\theta$ is measurable in $V^{\mathbb{P}}$, it is easy to see that $\theta$ is measurable in $V^{\mathbb{Q}_{< \theta}}:=V^{\Pi_{\alpha<\theta}\mathbb{P}_{\alpha}}$. In addition, $\mathbb{Q}_{< \theta}$ has the $\theta$-c.c. in $V^{\mathbb{Q}_{< \theta}}$.
Work in $V^{\mathbb{Q}_{< \theta}}$. Let $\mu$ be a normal measure over $\theta$.
We can prove that there exists a $q \in \mathbb{Q}_{<\theta}$ such that for every $X \subseteq \theta$ in $V$,
the condition $q$ decides the statement $X \in \dot{\mu}$, i.e.,
$q \Vdash ``X \in \dot{\mu}"$ or $q \Vdash ``X \notin \dot{\mu}"$.
Otherwise, we can construct a $\theta$-tree
$T=\{\langle r_s,X_s \rangle\mid s \in \ ^{<\theta}2, r_s \in \mathbb{Q}_{<\theta}, X_s \subseteq \theta\}$ such that
\begin{enumerate}
\item $r_s \Vdash ``X_s \in \dot{\mu}"$,
\item if $s' \subseteq s$ then $r_{s}\leq_{\mathbb{Q}_{< \theta}} r_{s'}$,
\item $X_s=X_{s^{\frown}\langle 0 \rangle}\cup X_{s^{\frown}\langle 1 \rangle}$ and $X_{s^{\frown}\langle 0 \rangle}\cap X_{s^{\frown}\langle 1 \rangle}=\emptyset$.
\item If $\dom(s)$ is a limit ordinal, then $X_s=\cap_{s' \subsetneq s}X_{s'}$.
\end{enumerate}
Since $\theta$ is weakly compact in $V^{\mathbb{Q}_{<\theta}}$,
it follows that $T$ has a cofinal branch $\langle \langle r_s,X_s \rangle \mid s\in \ ^{<\theta}2, \ s \subseteq f \rangle$ for some function $f:\theta \rightarrow 2$.
Then $\{r_{s^{\frown}\langle i \rangle} \mid i=0 \vee i=1, s\in \ ^{<\theta}2, \ s \subseteq f, s^{\frown}\langle i \rangle \nsubseteq f\}$
is an antichain of $\mathbb{Q}_{<\theta}$ of size $\theta$.
However, $\mathbb{Q}_{<\theta}$ has the $\theta$-c.c., a contradiction.
Hence there exists a $q \in \mathbb{Q}_{<\theta}$ such that
$q$ decides the statement $X \in \dot{\mu}$ for every $X \subseteq \theta$ in $V$, and therefore $\theta$ is measurable in $V$.
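More explicitly, let
\[
U_q=\{X \subseteq \theta \mid X \in V \text{ and } q \Vdash ``X \in \dot{\mu}"\}.
\]
Since $q$ decides $X\in\dot{\mu}$ for every $X\in \mathcal{P}(\theta)^V$, and since $\dot{\mu}$ is forced to be a $\theta$-complete nonprincipal ultrafilter over $\theta$, the set $U_q$ is a $\theta$-complete nonprincipal ultrafilter over $\theta$ in $V$: given in $V$ a family of fewer than $\theta$ many members of $U_q$, the condition $q$ forces its intersection to belong to $\dot{\mu}$, so the intersection is again in $U_q$.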
This completes the proof of Theorem \ref{thm1}.
\end{proof}
In the above proof, if we let $\mathbb{P}_{\kappa}$ be the poset in Proposition \ref{t2} instead of the poset in Proposition \ref{t1},
then $\kappa$ remains measurable in $V^{\mathbb{P}}$.
If there is no measurable limit point of $\mathcal{K}$ and $\mathcal{A}=\mathcal{K}$,
then there is no strongly compact cardinal in $V^{\mathbb{P}}$.
\iffalse
every $\kappa \in \mathcal{K}$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal.
Thus $\kappa$ is not strongly compact.
Meanwhile, if a cardinal $\lambda$ is strongly compact in the generic extension,
then $\lambda \in \mathcal{K}$,
because there is no limit point of $\mathcal{K}$.
But every element in $\mathcal{K}$ is not strongly compact.
Hence there is no strongly compact cardinal in the generic extension.
\fi
\begin{corollary}
Suppose $\mathcal{K}$ is the class of supercompact cardinals with no measurable limit points, and $\delta_{\kappa}$ is measurable with $\sup(\mathcal{K}\cap \kappa)<\delta_{\kappa}<\kappa$ for any $\kappa \in \mathcal{K}$. Then there exists a forcing extension, in which $\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal for any $\kappa \in \mathcal{K}$, and there is no strongly compact cardinal.
\end{corollary}
In the corollary above, note that in the forcing extension every limit point of $\mathcal{K}$ is almost strongly compact;
thus we may obtain a model in which the least almost strongly compact cardinal is not strongly compact.
\begin{theorem}\label{coric}
Suppose that $\langle \kappa_n \mid n<\omega \rangle$ is an increasing sequence of supercompact cardinals. Let $\kappa=\lim_{n<\omega}\kappa_n$. Then there is a forcing extension, in which $\kappa$ is the least almost strongly compact cardinal.
\end{theorem}
\begin{proof}
Let $\kappa_{-1}=\omega_1$ for simplicity. Let $\delta_n$ be the least measurable cardinal greater than $\kappa_{n-1}$ for every $n<\omega$.
Then by Theorem \ref{thm1}, there is a forcing extension, say $V^{\mathbb{P}}$,
in which $\kappa_n$ is the least exactly $\delta_n$-strongly compact cardinal for every $n<\omega$.
Thus $\kappa$ is $\delta_n$-strongly compact for every $n<\omega$.
Since $\kappa=\lim_{n<\omega}\delta_n$,
it follows that $\kappa$ is almost strongly compact.
Take any $n<\omega$.
Then in $V^{\mathbb{P}}$, there are unboundedly many $\alpha$ below $\kappa_n$
such that there is no $(2^{\kappa_{n-1}})^{++}$-complete uniform ultrafilter over $\alpha$.
So there is no almost strongly compact cardinal in $(\kappa_{n-1},\kappa_n)$; moreover, $\kappa_n$ itself is not almost strongly compact, since it is not $\delta_n^+$-strongly compact.
Since $n<\omega$ is arbitrary, it follows that there is no almost strongly compact cardinal below $\kappa$.
Thus $\kappa$ is the least almost strongly compact cardinal.
\end{proof}
This answers Question \ref{q2} in the negative.
\iffalse
Goldberg essentially proved the following theorem.
\begin{theorem}[\cite[Theorem 5.7]{G2020}]
Suppose $\kappa$ is an almost strongly compact cardinal above the least $\omega_1$-strongly compact cardinal. Then one of the following holds:
\begin{enumerate}
\item $\kappa$ has cofinality $\omega$
\item $\kappa$ is a strongly compact cardinal.
\item $\kappa$ is the successor of a strongly compact cardinal.
\item $\kappa$ is a limit of almost strongly compact cardinals.
\end{enumerate}
\end{theorem}
\fi
If there exists a proper class of supercompact cardinals, we may get a model in which there exists a proper class of almost strongly compact cardinals, but there are no strongly compact cardinals.
\begin{corollary}
Suppose there is a proper class of supercompact cardinals with no measurable limit points. Then there exists a forcing extension, in which there is a proper class of almost strongly compact cardinals, but there are no strongly compact cardinals.
\end{corollary}
\iffalse
Similarly, we have the following corollary, which gives a negative answer to Question \ref{q3}.
\begin{corollary}
Suppose that there are $\omega \cdot \omega$ many supercompact cardinals, which is a proper class. Then there is a forcing extension, in which there are proper class many almost strongly compact cardinals, and no strongly compact cardinals.
\end{corollary}
\fi
Next, we deal with another case.
\begin{theorem}\label{thm2}
Suppose $\mathcal{K}$ is a set of supercompact cardinals and has order type less than or equal to $\min(\mathcal{K})+1$, and $\langle \delta_{\kappa} \mid \kappa \in \mathcal{K} \rangle$ is an increasing sequence of measurable cardinals with $\sup_{\kappa\in \mathcal{K}}\delta_{\kappa} \leq \min(\mathcal{K})$. Then there exists a forcing extension, in which $\kappa$ is exactly $\delta_{\kappa}$-strongly compact for any $\kappa \in \mathcal{K}$.
Moreover, if $\kappa=\min(\mathcal{K})$, then $\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal. In addition, there is no strongly compact cardinal and $\gch$ holds.
\end{theorem}
\begin{proof}
For every $\kappa \in \mathcal{K}$, let $\kappa'$ be the least $2$-Mahlo cardinal above $\kappa$ if $\kappa<\sup(\mathcal{K})$, and $\kappa$, otherwise. Let $\kappa_0=\min(\mathcal{K})$. Let $\mathbb{P}_{\kappa}$ be a forcing given by Proposition $\ref{t2}$ with $\eta=\omega_1$ if $\kappa=\kappa_0$, and $Q_{\kappa',\delta_{\kappa}}$ if $\kappa\in \mathcal{K}\setminus (\kappa_0+1)$. Let $\mathbb{P}$ be the Easton product forcing $\Pi_{\kappa \in \mathcal{K}}\mathbb{P}_{\kappa}$. Then $V^{\mathbb{P}}\vDash \zfc +\gch$.
Since $\gch$ holds in $V$,
it follows that $\delta_{\kappa}$ remains measurable in $V^{\mathbb{P}}$
by the comment below Proposition \ref{t2}.
For every $\kappa \in \mathcal{K}$, we may factor $\mathbb{P}$ in $V$ as $\mathbb{Q}^{\kappa} \times \mathbb{P}_{\kappa} \times \mathbb{Q}_{<\kappa}$, where $\mathbb{Q}^{\kappa}=\Pi_{\alpha>\kappa}\mathbb{P}_{\alpha}$ and $\mathbb{Q}_{<\kappa}=\Pi_{\alpha<\kappa}\mathbb{P}_{\alpha}$.
Take any $\kappa \in \mathcal{K}$, and let $\lambda_{\kappa}=\min(\mathcal{K}\setminus (\kappa+1))$
(if $\kappa=\max(\mathcal{K})$, then let $\lambda_{\kappa}=\infty)$.
Then it is easy to see that $\mathbb{Q}^{\kappa}$ is ${<}\lambda_{\kappa}$-strategically closed.
Since $\kappa$ is supercompact and indestructible under any ${<}\kappa$-directed closed forcing not destroying $\gch$, we have
$V^{\mathbb{Q}^{\kappa}}\vDash ``\kappa$ is ${<}\lambda_{\kappa}$-supercompact and indestructible under any ${<}\lambda_{\kappa}$-directed closed forcing not destroying $\gch"$.
\iffalse
We may factor $\mathbb{P}_{\lambda}$ as $\mathbb{P}_{\eta_{\lambda},\lambda}*\dot{Q}_{\lambda, \delta_{\lambda}}$,
where $\mathbb{P}_{\eta_{\lambda},\lambda}$ is the Easton support iteration of length $\lambda$ with field $(\eta_{\lambda},\lambda)$.
Note that $\mathbb{P}_{\eta_{\lambda},\lambda}$ is $<\eta_{\lambda}$-directed closed, by indestructibility, we have in $V^{\mathbb{P}_{\eta_{\lambda},\lambda}}$, $\kappa$ is supercompact. Note also that in $V^{\mathbb{P}_{\eta_{\lambda},\lambda}}$, $Q_{\lambda,\delta_{\lambda}}$ is $<\lambda$-strategically closed and $\lambda$ is inaccessible, it follows that $\kappa$ is $<\lambda$-supercompact in $V^{\mathbb{P}_{\lambda}}$.
\fi
We may factor $\mathbb{Q}_{<\kappa}$ as the product of $\mathbb{P}_{\kappa_0}$ and a ${<}\lambda_{\kappa_0}$-strategically closed forcing in $V_{\kappa}$.
Note also that if $i_W: V^{\mathbb{Q}^{\kappa}} \rightarrow M$ is an ultrapower map given by a normal measure $W$ over $\delta_{\kappa}$ such that $\delta_{\kappa}$ is not measurable in $M$,
then we may lift $i_W$ to an embedding with domain $V^{\mathbb{Q}^{\kappa}\times \mathbb{P}_{\kappa_0}}$.
Thus it is not hard to see that
$\kappa$ is $(\delta_{\kappa},\lambda_{\kappa})$-strongly compact in $V^{\mathbb{P}}$ by the argument in Proposition \ref{t1} and Remark \ref{rmk2}.
\iffalse
For every singular strong limit cardinal $\kappa \leq \theta<\lambda_{\kappa}$,
let $i_W: V^{\mathbb{Q}^{\kappa}} \rightarrow M_W$ be the ultrapower map given by a normal measure $W$ over $\delta_{\kappa}$,
such that $M_W\vDash ``\delta_{\kappa}$ is not measurable $"$.
Let $j:=j^{M_W}_U:M_W \rightarrow N \cong \ult(M_W,U)$ be an ultrapower map given by a normal measure $U$ over $\mathcal{P}_{\kappa,\theta}$
Thus by Proposition \ref{t2} if $\kappa=\kappa_0$, or Proposition \ref{t4}, otherwise, we have
\[
V^{\mathbb{Q}^{\kappa}\times \mathbb{P}_{\kappa}} \vDash ``\kappa \text{ is exactly } \delta_{\kappa} \text{-strongly compact up to } \lambda_{\kappa} ".
\]
Without loss of generality, we may assume that the field of $\mathbb{P}_{\kappa_0}$ is in $(\delta_{\kappa},\kappa'_0]$,
because the part of $\mathbb{P}_{\kappa_0}$ below $\delta_{\kappa}$ is a small forcing and preserves the measurability of $\delta_{\kappa}$.
Note that
in the proof of \eqref{eq4}, note that
we may lift a small ultrapower map $i_W$ given a normal measure $W$ over $\delta_{\kappa}$,
and lift a supercompact embedding $j$ of $\kappa$ separately.
For the lifting argument, the possible problem is that $\mathbb{Q}_{<\kappa}$ is not ${<}\delta_{\kappa}^+$-strategically closed.
But it doesn't matter, because we may factor $\mathbb{Q}_{<\kappa}$ as the part $\mathbb{P}_{\kappa_0}$ and the remainder.
The lifting argument for the latter is ok, because it is ${<}\lambda_{\kappa_0}$-strategically closed. The lifting ).
This implies that for every regular cardinal $\theta$ with $\kappa \leq \theta<\lambda_{\kappa}$,
there exists a $\delta_{\kappa}$-complete uniform ultrafilter over $\theta$ by Theorem \ref{tKU}.
\fi
Since $\kappa \in \mathcal{K}$ was arbitrary, it follows by Theorem \ref{tKU} that every $\kappa \in \mathcal{K}$ is exactly $\delta_{\kappa}$-strongly compact in $V^{\mathbb{P}}$.
Moreover, $\kappa_0$ is the least $\delta_{\kappa_0}$-strongly compact cardinal in $V^{\mathbb{P}}$.
In addition, there is no strongly compact cardinal in $(\delta_{\kappa},\kappa]$ for every $\kappa \in \mathcal{K}$.
This means that there is no strongly compact cardinal $\leq \sup(\mathcal{K})$.
So there is no strongly compact cardinal in $V^{\mathbb{P}}$ by our assumption at the beginning of this section.
\end{proof}
In the above theorem, $\kappa \in \mathcal{K}$ may not be the least $\delta_{\kappa}$-strongly compact cardinal if $\kappa \neq \kappa_0$.
We used the forcing given by Proposition \ref{t2} instead of the forcing given by Proposition \ref{t1} for $\kappa=\kappa_0$, because
we need to preserve the measurability of $\kappa_0$ if $\kappa_0=\delta_{\max(\mathcal{K})}$.
\iffalse
However, there is no strongly compact cardinal $\theta$ with $\delta_{\kappa}<\theta \leq \kappa$. Otherwise, $\kappa$ is $\theta$-strongly compact. But $\kappa$ is not $\delta_{\kappa}^+$-strongly compact.
In the theorem above, we require the measurable sequence $\langle \delta_{\kappa} \mid \kappa \in \mathcal{K}, \kappa\neq \sup(\mathcal{K})\rangle$ is bounded in $\min(\mathcal{K})$, because the forcing preparation may destroy measurability of $\delta_{\kappa}$ for some $\kappa \in \mathcal{K}$.
\begin{remark}
If $|\mathcal{K}|=\min(\mathcal{K})$, and $\langle \delta_{\kappa} \mid \kappa \in \mathcal{K}\rangle$ is an increasing sequence of measurable cardinals with $\sup_{\kappa\in \mathcal{K}}\delta_{\kappa}$ after the forcing preparation in the theorem above, then there is a forcing extension, in which $\kappa$ is exactly $\delta_{\kappa}$
\end{remark}
\fi
Next, we combine Theorem \ref{thm1} and Theorem \ref{thm2}.
\begin{theorem}\label{thm7}
Suppose that $\mathcal{A}$ is a subclass of the class $\mathcal{K}$ of the supercompact cardinals containing none of its limit points,
and $\langle \delta_{\kappa} \mid \kappa \in \mathcal{K}\rangle$ is an increasing sequence of measurable cardinals such that $\delta_{\kappa}<\kappa$ \iffalse
and $\sup(\mathcal{D}\cap\kappa)<\kappa$\fi
for any $\kappa\in \mathcal{A}$.
Then in some generic extension,
$\kappa$ is exactly $\delta_{\kappa}$-strongly compact for any $\kappa \in \mathcal{A}$.
Moreover, $\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal
if $\delta_{\kappa}>\sup(\mathcal{A} \cap \kappa)$.
\end{theorem}
\begin{proof}
For every $\kappa \in \mathcal{A}$, let $\eta_{\kappa}=(2^{\sup(\mathcal{A}\cap \kappa)})^+$,
and let $\kappa'$ be the least $2$-Mahlo cardinal above $\kappa$ if $\kappa<\sup(\mathcal{K})$, and $\kappa$, otherwise.
Let $\mathbb{P}_{\kappa}$ be
\begin{itemize}
\item the forcing $Q_{\kappa',\delta_{\kappa}}$ if $\delta_{\kappa}{\leq}\sup(\mathcal{A}\cap \kappa)$,
\item the forcing given by Proposition \ref{t1} with $\eta=\eta_{\kappa}$ and $\delta=\delta_{\kappa}$ if $\kappa=\sup(\mathcal{A})$,
\item the forcing given by Proposition \ref{t2} with $\eta=\eta_{\kappa}$ and $\delta=\delta_{\kappa}$, otherwise.
\end{itemize}
Let $\mathbb{P}$ be the Easton product forcing $\Pi_{\kappa \in \mathcal{A}}\mathbb{P}_{\kappa}$.
Then for every measurable cardinal $\delta$, if $\delta \neq \sup(\mathcal{A}\cap \kappa)$ for every $\kappa \in \mathcal{A}$,
then $\delta$ remains measurable in $V^{\mathbb{P}}$.
Take any $\kappa \in \mathcal{A}$.
Again we may factor $\mathbb{P}$ as
$\mathbb{Q}^{\kappa} \times \mathbb{P}_{\kappa} \times \mathbb{Q}_{<\kappa}$.
Let $\lambda_{\kappa}=\min(\mathcal{A}\setminus (\kappa+1))$.
Then there are two cases to consider:
$\blacktriangleright$ $\delta_{\kappa}{>}\sup(\mathcal{A}\cap \kappa)$.
If $\delta_{\lambda_{\kappa}}> \kappa$,
then $\mathbb{Q}^{\kappa}$ is ${<}\kappa$-directed closed, and $\mathbb{Q}_{<\kappa}$ has size less than $\delta_{\kappa}$.
So $\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal in $V^{\mathbb{P}}$ by the argument in Theorem \ref{thm1}.
If $\delta_{\lambda_{\kappa}} \leq \kappa$,
then $\kappa$ is ${<}\lambda_{\kappa}$-supercompact in $V^{\mathbb{Q}^{\kappa}}$.
Since $\mathbb{Q}_{<\kappa}$ has size less than $\delta_{\kappa}$,
it follows that $\kappa$ is $(\delta_{\kappa},\lambda_{\kappa})$-strongly compact in $V^{\mathbb{P}}$.
$\blacktriangleright$ $\delta_{\kappa}{\leq}\sup(\mathcal{A}\cap \kappa)$.
If $\delta_{\kappa}<\sup(\mathcal{A}\cap \kappa)$, then $\sup(\mathcal{A}\cap \delta_{\kappa})<\delta_{\kappa}$.
We may factor $\mathbb{Q}_{<\kappa}$ as the product of $\Pi_{\theta \in \mathcal{A}, \delta_{\kappa}\leq \theta<\kappa}\mathbb{P}_{\theta}$ and $\Pi_{\theta \in \mathcal{A}\cap \delta_{\kappa}}\mathbb{P}_{\theta}$.
Then $\kappa$ is exactly $(\delta_{\kappa},\lambda_{\kappa})$-strongly compact
in $V^{\mathbb{Q}^{\kappa} \times \mathbb{P}_{\kappa} \times \Pi_{\theta \in \mathcal{A}, \delta_{\kappa}\leq \theta<\kappa}\mathbb{P}_{\theta}}$ by Theorem \ref{thm2}.
Since $\Pi_{\theta \in \mathcal{A}\cap \delta_{\kappa}}\mathbb{P}_{\theta}$ has size less than $\delta_{\kappa}$,
it follows that $\kappa$ is exactly $(\delta_{\kappa},\lambda_{\kappa})$-strongly compact in $V^{\mathbb{P}}$.
If $\delta_{\kappa}=\sup(\mathcal{A}\cap \kappa)$, then $\delta_{\kappa}$ is a measurable limit point of $\mathcal{A}$ in $V$ and $\mathcal{A}\cap [\delta_{\kappa},\kappa)=\emptyset$.
Then $\kappa$ is exactly $(\delta_{\kappa},\lambda_{\kappa})$-strongly compact by Remark \ref{rmk2}.
Thus in $V^{\mathbb{P}}$, $\kappa$ is exactly $\delta_{\kappa}$-strongly compact for any $\kappa \in \mathcal{A}$.
Moreover, if $\delta_{\kappa}>\sup(\mathcal{A} \cap \kappa)$,
then $\kappa$ is the least exactly $\delta_{\kappa}$-strongly compact cardinal in $V^{\mathbb{P}}$.
This completes the proof of Theorem \ref{thm7}.
\end{proof}
This answers Question \ref{QBM} affirmatively.
\section{Open Problems}
By the results of Goldberg in \cite{G2020}, it is not hard to see that the following theorem holds:
\begin{theorem}[\cite{G2020}]
Suppose $\kappa$ is an almost strongly compact cardinal. Then exactly one of the following holds:
\begin{enumerate}
\item\label{pro1} $\kappa$ has cofinality $\omega$, and $\kappa$ is not a limit of almost strongly compact cardinals.
\item\label{pro2} $\kappa$ is the least $\omega_1$-strongly compact cardinal, but $\kappa$ is not strongly compact.
\item\label{pro3} $\kappa$ is a strongly compact cardinal.
\item\label{pro4} $\kappa$ is the successor of a strongly compact cardinal.
\item\label{pro5} $\kappa$ is a non-strongly compact limit of almost strongly compact cardinals.
\end{enumerate}
\end{theorem}
By Theorem \ref{coric}, case \eqref{pro1} is consistently possible under suitable large cardinal assumptions.
Cases \eqref{pro3}, \eqref{pro4} and \eqref{pro5} are trivial.
However, we do not know whether case \eqref{pro2} is possible.
\begin{question}
If the least almost strongly compact cardinal is the least $\omega_1$-strongly compact cardinal, is it necessarily strongly compact?
\end{question}
Under the assumption of $\sch$, Goldberg in \cite[Theorem 5.7]{G2020} gave an affirmative answer to this question.
So to get a negative consistency result, one has to violate $\sch$ at unboundedly many cardinals below $\kappa$.
\iffalse
\begin{theorem}\label{thm17}
Suppose that $\kappa_1, \kappa_2,...,\kappa_n$ are the first $n$ supercompact cardinals, $\kappa_0$ is the least measurable cardinal. Then there is a generic extension, in which for any $1 \leq i \leq n$, $\kappa_i$ is the $(i+1)$-th measurable cardinal and $\kappa_i$ is exactly $\kappa_{i-1}$-strongly compact.
\end{theorem}
\begin{proof}
By forcing if necessary, we may assume that $V_0 \vDash ``ZFC$ + $GCH$ + for every $1\leq i \leq n$, $\kappa_i$ is supercompact + no cardinal greater than $\kappa_n$ is measurable". Let $P^* \in V_0$ be a poset so that $V=V_0^{P^*} \vDash$ ``Each $\kappa_i$ for $i=1,\cdots,n$ is Laver indestructible under $<\kappa_i$-directed closed forcing+ $2^\kappa_i=\kappa^+_i$". Also, there are no measurable cardinals above $\kappa_n$ in $V$. For every $1\leq i \leq n$, let $\eta_i=(2^{\kappa_{i-1}})^+<\kappa_i$ and $\lambda_i$ be the first $2-$Mahlo cardinal above $\kappa_i$, and $\mathbb{P}_i$ be the partial ordering given in Proposition \ref{t2}, but with $\eta=\eta_i$ and $\lambda=\lambda_i$. Let $\mathbb{P}=\Pi_{1 \leq i \leq n}\mathbb{P}_i$. Let $G=\Pi_{1 \leq i \leq n}G_i$ be $\mathbb{P}$-generic over $V$. Similarly, in $V[G]$, for every $1\leq i \leq n$, $\kappa_i$ is not $\kappa_{i-1}^+$-strongly compact.
\begin{claim}
$V[G]\vDash ``\kappa_i$ is $\kappa_{i-1}$-strongly compact and measurable".
\end{claim}
\begin{proof}
The proof is similar to Claim \ref{claims}. The measurability is obvious, so we only prove that $\kappa_i$ is $\kappa_{i-1}$-strongly compact. For every $\alpha>\lambda_n$, let
\end{proof}
\end{proof}
\subsection*{Acknowledge}
We would like to give
thanks to Professor Joan Bagaria for making useful suggestions.
The first author was supported by the China Scholarship Council (No. 202004910742).
The second author was supported by a UKRI Future Leaders Fellowship [MR/T021705/1].
\fi
\iffalse
|
{
"arxiv_id": "2302.13201",
"language": "en",
"timestamp": "2023-02-28T02:12:48",
"url": "https://arxiv.org/abs/2302.13201",
"yymm": "2302"
} | \section{Introduction}
Commonsense reasoning (CSR) relies on shared and unchanged knowledge across different languages and cultures to help computers understand and interact with humans naturally~\cite{davis2015commonsense}. CSR is a crucial problem in natural language processing that has been proved important for artificial intelligence (AI) systems~\cite{talmor-etal-2019-commonsenseqa, storks2019recent}.
Cross-lingual CSR aims to reason about commonsense across languages,
which is the key to bridging the language barrier in natural language understanding and generalizing CSR to a broader scope~\cite{hu-2020-xtreme}. Recently, several cross-lingual datasets have been proposed amidst the surging interest in cross-lingual CSR, \textit{e.g.} XCOPA~\cite{ponti-2020-xcopa}, X-CSQA~\cite{lin-etal-2021-xcsr}, and X-CODAH~\cite{lin-etal-2021-xcsr}.
Multilingual pre-trained language models (mPTMs) based on the Transformer~\cite{NIPS2017_3f5ee243}, such as mBERT~\cite{DBLP:journals/corr/abs-1810-04805}, XLM~\cite{lample2019cross}, XLM-R~\cite{conneau2019unsupervised} and InfoXLM~\cite{chi2020infoxlm}, have also been demonstrated to have
the potential to conduct CSR in multiple languages~\cite{conneau2018xnli,hu-2020-xtreme,ponti-2020-xcopa,lin-etal-2021-xcsr}.
The performance of mPTMs for non-English CSR, however, is typically worse than that for English CSR due to the lack of non-English data for training~\cite{yeo2018machine,conneau2019unsupervised,ponti-2020-xcopa}.
Furthermore, mPTMs raise concerns about their ability to transfer commonsense knowledge across languages, as they neither 1) differentiate between commonsense and non-commonsense knowledge nor 2) improve CSR for any specific language in multilingual scenarios~\cite{conneau2019unsupervised}.
To address the above issues, we propose a Cross-LIngual Commonsense Knowledge transfER (\textbf{CLICKER}) to bridge the performance gap of using mPTMs for CSR between the source (\textbf{English}) and the target (\textbf{non-English}) language by eliciting commonsense knowledge explicitly via cross-lingual task-adaptive pre-training~\cite{gururangan2020don}.
\begin{comment}
CLICKER aligns the commonsense reasoning between the source language and target language by training on parallel datasets, resulting in improved performance in the target language on CSR tasks and the generalization can be made to other non-English languages.
\end{comment}
Specifically, CLICKER is a three-step framework based on XLM-R~\cite{conneau2019unsupervised}.
First, we conduct task-adaptive pre-training on the multilingual commonsense corpora to enable XLM-R to perform the CSR task better.
In this process, the self-attention~\cite{NIPS2017_3f5ee243} mechanism is adopted to obtain multilingual embeddings for CSR.
Second, we distinguish between commonsense and non-commonsense knowledge by jointly optimizing their similarities with bilingual and parallel data.
Third, the extracted commonsense knowledge representation is further fine-tuned on the downstream cross-lingual CSR tasks.
Experimental results demonstrate that our approach significantly reduces the performance discrepancies between English and German on CSR.
Moreover, it outperforms XLM-R baselines on both X-CSQA and X-CODAH benchmarks~\cite{lin-etal-2021-xcsr}.
Further analysis indicates that CLICKER can extract cross-lingual commonsense representations more effectively, and with better interpretability.
\begin{comment}
\section{Related Work}
\textbf{Multilingual commonsense reasoning.} The study of CSR has a long tradition in English as a way to probe the capacities of language models on reasoning about natural language~\cite{roemmele2011choice, Sakaguchi2019winogrande,zellers-etal-2018-swag,talmor-etal-2019-commonsenseqa,huang-etal-2019-cosmos}.
Recently, several multilingual CSR datasets are proposed to support a series of multilingual CSR tasks, such as commonsense coreference resolution~\cite{emelin-sennrich-2021-wino,stojanovski-etal-2020-contracat}, causality reasoning~\cite{ponti-2020-xcopa}, commonsense question-answering~\cite{lin-etal-2021-xcsr} and sentence completion ~\cite{lin-etal-2021-xcsr}.
Multilingual pre-trained language models~\cite{DBLP:journals/corr/abs-1810-04805, lample2019cross,conneau2019unsupervised,Siddhant2019evaluating} proved effective in various cross-lingual tasks with their contextual representations.
The most related work to ours is XCSR~\cite{lin-etal-2021-xcsr} which proposes multilingual contrastive pre-training (MCP) and evaluates on XCSQA and XCODAH.
In contrast, we use self-attention to address commonsense knowledge extraction and transfer, as well as using cosine distances which are easier to be differentiated.
\textbf{Cross-lingual knowledge transfer.} One practical approach for cross-lingual knowledge transfer is neural machine translation (NMT) which transforms the text in other languages into English so as to leverage the natural language understanding abilities in English-only models ~\cite{bahdanau2014neural,NIPS2017_3f5ee243,liu2020multilingual,xue2020mt5}.
Multilingual representations show potential for knowledge transfer across languages and have been used combined with task-specific architectures~\cite{kim-etal-2017-cross,ahmad-etal-2019-difficulties,xie-etal-2018-neural}.
Recently, pre-trained multilingual representations is considered effective for knowledge transfer due to their success in a series of evaluation tasks~\cite{NEURIPS2020_1457c0d6,conneau2019unsupervised,wu2019beto,pires-etal-2019-multilingual,Siddhant2019evaluating}.
However, the above approaches all witness considerable discrepancies between English and other languages on CSR.
Our work addresses these issues and proposes to reduce the discrepancies in the model's capacities for cross-lingual CSR for English and the target language.
\end{comment}
\section{Method}
This section introduces the CLICKER model based on XLM-R~\cite{conneau2019unsupervised} for cross-lingual CSR.
As illustrated in Figure~\ref{Fig:model},
CLICKER extracts commonsense knowledge from English CSR to help non-English CSR\footnote{In this paper, we take German as an example of a foreign language that is not up to par with English for CSR.} in three steps: 1) task-adaptive pre-training, 2) commonsense differentiation, and 3) knowledge-transfer fine-tuning.
\subsection{Problem Definition}
The CSR task aims to select, from multiple choices, the one that is most plausible according to commonsense given a preceding statement or question.
For example, the plausible choice of answer to \textit{``What is a great place to lay in the sun?''} is \textit{``beach''} rather than \textit{``in the basement''} or \textit{``solar system''}.
Denoting the set of choices for CSR as $\mathbf{S}^{(j)}$, $j \in [1,\dots,|C|]$, where $|C|$ is the number of choices for each input, the goal is to predict the commonsense choice:
\begin{equation}
\begin{aligned}\label{eq:object}
\tilde{y} = \text{argmax}_j P(y=j | \{\mathbf{S}^{(1)}, \dots, \mathbf{S}^{(|C|)}\})\\
\end{aligned}
\end{equation}
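For concreteness, this prediction step can be sketched as follows (a minimal illustration; the per-choice scores are assumed to come from the model described in the following sections):
\begin{verbatim}
import torch

def predict_choice(logits):
    # logits: tensor of shape (|C|,), one score per choice S^(j)
    probs = torch.softmax(logits, dim=-1)  # P(y = j | S^(1), ..., S^(|C|))
    return int(torch.argmax(probs))        # index of the predicted choice
\end{verbatim}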
\subsection{Step One: Task-Adaptive Pre-Training}\label{sec:step1}
Task-adaptive pre-training uses self-attention to enable the XLM-R to learn representations for the CSR task.
Specifically, the input is a tokenized utterance, \textit{i.e.} $\mathbf{S}_i=\{\text{[CLS]},q_i^{(1)},$
$\dots,q_i^{(k)},\text{[SEP]}\}$, where $i\in [1,\dots,N]$, $N$ is the size of the dataset, and $k$ is the length of the sequence.
A self-attention layer is built on top of the Transformer to obtain attentions toward commonsense knowledge from the Transformer's pooled output states.
The self-attended outputs are optimized by the Cross-Entropy loss through a multiple-choice classifier to select commonsense-reasonable choices.
Our model is trained on multilingual CSR datasets containing examples of both English (\textbf{EN}) and German (\textbf{DE}).
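A minimal PyTorch-style sketch of this step is shown below. The module and variable names are ours and only illustrate the structure (a self-attention layer over the Transformer's output states, followed by a multiple-choice classifier trained with the Cross-Entropy loss); they are not the exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class AttentivePooler(nn.Module):
    """Self-attention over the Transformer output states."""
    def __init__(self, hidden_size):
        super().__init__()
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states, mask):
        # hidden_states: (batch, seq_len, hidden); mask: (batch, seq_len)
        scores = self.scorer(hidden_states).squeeze(-1)
        scores = scores.masked_fill(mask == 0, -1e9)
        attn = torch.softmax(scores, dim=-1)
        pooled = torch.einsum('bs,bsh->bh', attn, hidden_states)
        return pooled, attn

# One pooled vector per choice; the |C| resulting logits are trained with
# nn.CrossEntropyLoss() against the index of the correct choice.
\end{verbatim}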
\subsection{Step Two: Commonsense Differentiation}\label{sec:step2}
In this step, the representation of commonsense knowledge shared across languages is differentiated from the non-commonsense representation using EN and DE datasets in parallel.
The inputs are similar to those in Sec~\ref{sec:step1}, while inputs with the same semantics in different languages are mapped together.
We note here that the parallel datasets are not necessarily restricted to CSR datasets, but can be generalized to any bilingual datasets for mapping semantics of English and the non-English language, e.g. bilingual dictionaries or textbooks.
The output states of the Transformer are pooled, weighted by the self-attention layer, and passed through a linear projection to extract the \textbf{commonsense} embeddings $X_i$ and \textbf{non-commonsense} embeddings $\tilde{X_i}$, respectively.
\begin{equation}
\begin{aligned}\label{eq:residual}
X_i = & FFN(\text{Attention}(\mathbf{O}_i))
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}\label{eq:attn}
\tilde{X_i} = & FFN(1-\text{Attention}(\mathbf{O}_i))
\end{aligned}
\end{equation}
where $\mathbf{O}_i$ are output hidden states from the last layer of the Transformer for the $i$-\textit{th} input, and $FFN$ represents a \textit{Feed-Forward} layer.
For brevity, we omit index $i$ in the following equations.
We use $X^{EN}$ and $X^{DE}$ to denote commonsense embeddings of English and German input
, respectively.
Similarly, $\tilde{X}^{EN}$ and $\tilde{X}^{DE}$ represent non-commonsense embeddings.
Knowledge mapping is made by measuring the similarities between commonsense embeddings and non-commonsense embeddings.
Specifically, we maximize the cosine similarity between English and German embeddings that share the same and valid commonsense knowledge, \textit{i.e.} $X^{EN_{j^*}}$ and $X^{DE_{j^*}}$, as in Eq.~(\ref{eq:common1}). And we minimize the cosine similarity between
$X^{EN_{j^*}}$ and $X^{EN_j}$, as in Eq.~(\ref{eq:common2}).
$j^*$ is the index of the choice that is reasonable in commonsense, $j \in$ [1,$\dots$, $|C|$] and $j \neq {j^*}$.
In this way, similar commonsense knowledge in both languages is projected to the same position in the semantic representation space.
\begin{equation}
\label{eq:common1}
\mathcal{L}_{align} = 1 - \cos(X^{EN_{j^*}}, X^{DE_{j^*}})
\end{equation}
\begin{equation}
\begin{aligned}\label{eq:common2}
\mathcal{L}_{diff} &= \sum_{j= 1, j \neq {j^*}}^{|C|} (\text{max}(0, \cos(X^{EN_{j^*}}, X^{EN_j})) \\
&+~\text{max}(0,\cos(X^{DE_{j^*}}, X^{DE_j})) )\\
\end{aligned}
\end{equation}
On the other hand, the non-commonsense embeddings represent knowledge unrelated to cross-lingual commonsense.
Assuming the correct choice and other incorrect choices associated with the same question share similar non-commonsense knowledge, we maximize the intra-language cosine similarity of non-commonsense embeddings.
Moreover, the correct choice of different languages should share the same non-commonsense knowledge so that we maximize inter-language cosine similarity jointly, as defined in Eq.~(\ref{eq:res}).
\begin{equation}
\begin{aligned}\label{eq:res}
\mathcal{L}_{nc} &= \sum_{j= 1, j \neq {j^*}}^{|C|} ( 1 - \cos(\tilde{X_i}^{EN_{j^*}}, \tilde{X_i}^{EN_j}) ) \\
& + \sum_{j= 1, j \neq {j^*}}^{|C|} ( 1 - \cos(\tilde{X_i}^{DE_{j^*}}, \tilde{X_i}^{DE_j}))\\
& + 1 - \cos(\tilde{X_i}^{EN_{j^*}}, \tilde{X_i}^{DE_{j^*}}) \\
\end{aligned}
\end{equation}
All of the above losses, together with the Cross-Entropy loss, are optimized jointly as the training objective for cross-lingual CSR.
We use output commonsense embeddings $X^{DE_{j^*}}$ and $X^{DE_{j}}$ to calculate the Cross-Entropy loss.
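A sketch of these objectives in PyTorch is given below; the tensor shapes and function names are our own and serve only to make Eqs.~(\ref{eq:common1})--(\ref{eq:res}) concrete.
\begin{verbatim}
import torch
import torch.nn.functional as F

def cos(a, b):
    return F.cosine_similarity(a, b, dim=-1)

def clicker_losses(X_en, X_de, Xt_en, Xt_de, j_star):
    # X_*:  commonsense embeddings, shape (|C|, hidden)
    # Xt_*: non-commonsense embeddings, shape (|C|, hidden)
    # j_star: index of the commonsense-plausible choice
    wrong = [j for j in range(X_en.size(0)) if j != j_star]

    # L_align: align the correct choice across languages
    L_align = 1 - cos(X_en[j_star], X_de[j_star])

    # L_diff: push the correct choice away from incorrect ones per language
    L_diff = sum(cos(X_en[j_star], X_en[j]).clamp(min=0)
                 + cos(X_de[j_star], X_de[j]).clamp(min=0) for j in wrong)

    # L_nc: keep non-commonsense embeddings similar within and across languages
    L_nc = sum((1 - cos(Xt_en[j_star], Xt_en[j]))
               + (1 - cos(Xt_de[j_star], Xt_de[j])) for j in wrong)
    L_nc = L_nc + 1 - cos(Xt_en[j_star], Xt_de[j_star])

    return L_align, L_diff, L_nc
\end{verbatim}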
\subsection{Step Three: Knowledge-Transfer Fine-Tuning}\label{sec:step3}
Finally, our model is fine-tuned by the training objectives similar to Sec~\ref{sec:step2} for evaluating CSR on the multiple-choice question-answering (QA) and the clause-selection tasks, leveraging parallel CSR datasets of English (\textbf{EN}) and German translated from English (\textbf{EN\_DE}) as inputs.
Different from previous steps, each input of XLM-R is the concatenation of a question and a choice of answer which are then split into tokens with additional special ones, \textit{i.e.} $\mathbf{S}_i=\{\text{[CLS]},q_i^{(1)},\dots,q_i^{(m)},\text{[SEP]},\text{[CLS\_Q]},a_i^{(1)},\dots,$ $a_i^{(n)},\text{[SEP]}\}$, where [CLS\_Q] is the beginning special token of the answer spans, $q_i$ and $a_i$ are tokens of the question and answer, and $m$, $n$ are numbers of question and answer tokens, respectively.
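For clarity, the construction of one such input can be sketched as follows (the token strings are placeholders for the corresponding entries of the actual XLM-R vocabulary):
\begin{verbatim}
def build_input(question_tokens, answer_tokens):
    # S_i = [CLS] q_1 ... q_m [SEP] [CLS_Q] a_1 ... a_n [SEP]
    return (['[CLS]'] + question_tokens + ['[SEP]']
            + ['[CLS_Q]'] + answer_tokens + ['[SEP]'])

example = build_input(
    ['What', 'is', 'a', 'great', 'place', 'to', 'lay', 'in', 'the', 'sun', '?'],
    ['beach'])
\end{verbatim}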
\begin{comment}
\subsection{Task-Specific Variations}
\textbf{Averaged-Answer Pooling.} Multiple-choice QA CSR commonly confronts the problem that differences impacting commonsense are condensed to short answer spans.
It is difficult to distinguish between many choices as the lengthy question spans share the same commonsense knowledge.
Combining averaged-answer embedding and the sentence embedding of Transformer outputs could be a way to address commonsense knowledge contained in answer spans at the same time maintaining the semantics of the entire choice.
Such that we propose averaged-answer pooling, where mean-pooling is applied to the embeddings spanning from [CLS\_Q] to the last [SEP] token, \textit{i.e.}$\{\text{[CLS\_Q]},a_1,\dots,a_n,\text{[SEP]}\}$.
Then it is added to the embedding of [CLS] to formulate the adjusted self-attention outputs for CSR.
The adjustment is applied to the outputs of the last self-attention layer before the linear projection in Section~\ref{sec:attended}.
\textbf{Triplet-Loss.} Optimizing cosine distances towards different objectives can be difficult to achieve the best solution.
We propose to use a contrastive triplet loss to overcome this difficulty and optimize objectives for commonsense knowledge simultaneously.
The triplet loss is described in Eq.~\ref{eq:trip}, defining $\alpha=-1$ as the margin for optimization.
\begin{equation}
\begin{aligned}\label{eq:trip}
L_{triplet} =& ~\text{max} ( (X_i^{DE_*} - X_i^{EN_*})^2 -\\
& \sum_j{(X_i^{DE_*} - X_i^{EN_j})^2} + \alpha, 0 )
\end{aligned}
\end{equation}
\end{comment}
\section{Experiments and Analyses}
We use English and German subsets of Mickey Corpus~\cite{lin-etal-2021-xcsr} for Step 1 to warm up the multilingual language model for cross-lingual CSR tasks.
Then we take advantage of parallel corpora of English and German in the Mickey Corpus again for Step 2 to obtain their semantic mappings and differentiate commonsense and non-commonsense embeddings.
For Step 3, the CLICKER model is fine-tuned on the English and machine-translated German training set of X-CSQA and X-CODAH~\cite{lin-etal-2021-xcsr}, which are in the style of multiple-choice QA and selection of appropriate clauses, respectively.
We compare our model with the multilingual contrastive pre-training (MCP)~\cite{lin-etal-2021-xcsr} model based on XLM-R$_B$~\cite{conneau2019unsupervised}.
The MCP model is trained on the permuted Mickey Corpus for multilingual contrastive training and fine-tuned on the cross-lingual CSR training set in English only.
For a fair comparison with our method, we re-implement it to train on the combined English and German Mickey Corpus, then fine-tune it on both the English and machine-translated German CSR training sets and evaluate it on the German test set.
The following subsections describe the experimental results and analyze CLICKER models on the cross-lingual CSR benchmarks X-CSQA and X-CODAH in German.
Note that our experiments are conducted for commonsense knowledge transfer from English to German, but the approach can be extended to other languages.
\begin{comment}
\subsection{Experiment Setup and Datasets}
\textcolor{red}{
We use the multilingual Mickey Corpus~\cite{lin-etal-2021-xcsr} in Step 1 (Sec.\ref{sec:step1}) to warm up the multilingual language model for cross-lingual CSR tasks.
Mickey Corpus is transformed into multiple-choice problems with 5 natural-language choices, and the goal is to select the most commonsense plausible one. This corpus is split into 16k examples for training and 1k examples for evaluation for each language.
Then we leverage parallel datasets in Mickey Corpus again in Step 2 (Sec.\ref{sec:step2}) to obtain semantic mappings of English and German and differentiate commonsense and non-commonsense embeddings.
In Step 3 (Sec.\ref{sec:step3}), the CLICKER model is fine-tuned and evaluated on X-CSQA and X-CODAH, which are evaluation benchmarks for mPTMs on cross-lingual CSR, both containing $\sim$8k English training examples and $\sim$500 validation and test examples in other 15 languages.
In order to obtain the parallel training sets in English and German to use in our models, we leverage machine translation to process the English training set in X-CSQA and X-CODAH to German with MBART~\cite{liu2020multilingual}.
To make fair comparisons, our experiments use English and machine-translated German for training and use German for evaluation on both benchmarks.
}
Our models are based on XLM-R$_B$~\cite{conneau2019unsupervised}.
Implementations are based on HuggingFace~\cite{wolf-etal-2020-transformers} and follow initializations in \cite{lin-etal-2021-xcsr}.
Each input is split into tokens by WordPiece~\cite{wu2016google} with special tokens added.
The optimizer is AdamW~\cite{loshchilov2018decoupled} with a linearly-decaying scheduler, and the warm-up steps are 100. We run experiments on NVIDIA V100 up to 8 GPUs.
\end{comment}
\subsection{Experimental Results}
Table~\ref{xcsqa} shows the test accuracy of baselines and CLICKER models for CSR in German.
Different combinations of losses are applied in experiments for optimizing commonsense differentiation.
We observe consistent improvements with our three-step framework by extracting commonsense knowledge with self-attentions (\textit{i.e.} CLICKER - \textit{base}) on both datasets compared to baselines.
Results show that the \textit{align} loss further improves the base CLICKER model on X-CSQA, and the \textit{non-commonsense (nc)} loss proves effective on both datasets.
The best performance on X-CSQA is achieved when using the \textit{align} loss with or without the \textit{diff} loss, which shows that lining up embeddings in English and German with the same commonsense knowledge dominates the performance of CSR.
Besides, the model with \textit{align} and \textit{nc} loss is slightly inferior to the model with \textit{nc} loss only on X-CSQA.
On X-CODAH, our CLICKER models perform best with the \textit{nc} loss, which maximizes the cosine similarity of non-commonsense embeddings, improving accuracy by 1.6\%.
\begin{comment}
We observe a similar phenomenon as above from other implementations of our models: optimizing \textit{align} loss curbs the optimization of \textit{nc} loss.
When the \textit{nc} loss is combined with \textit{align} loss, the model performance on commonsense reasoning is not up to par with using a non-commonsense loss only.
And the model combining \textit{align} and \textit{diff} losses performs similarly to the model without both losses on X-CODAH.
This indicates our alignment and differentiation objectives for commonsense embeddings do not show contributions on the X-CODAH dataset as that shown on X-CSQA.
Nevertheless, the effects of using \textit{diff} loss only or \textit{align+diff+nc} loss are not applicable to improving the accuracy of CSR, such that we leave the study on it as future work.
\end{comment}
\begin{comment}
\begin{table}
\centering
\begin{tabular}{ll}
\hline
\textbf{X-CSQA} & \textbf{Acc} \\
\hline
\textbf{CLICKER} - \textit{base} + \textbf{avg-answer} + \textbf{triplet} & 50.0 \\
\textbf{CLICKER} - \textit{nc} + \textbf{avg-answer} + \textbf{triplet} & 51.6 \\
\hline
\textbf{X-CODAH} & \textbf{Acc} \\
\hline
\textbf{CLICKER} - \textit{align + diff} + \textbf{triplet} & 51.7 \\
\textbf{CLICKER} - \textit{align + diff + nc} + \textbf{triplet} & 51.5 \\
\hline
\end{tabular}
\caption{\label{extension}
Performance of CLICKER models with extensions of averaged-answer embedding and triplet loss on X-CSQA and X-CODAH.
}
\end{table}
We also propose two task-specific variations for the proposed CLICKER models: 1) averaged answer embeddings and 2) triplet-loss enhancement.
The results of including model variations are included in Table~\ref{extension}.
\sout{Since the sentence completion task in X-CODAH has much longer answers, we only implement the first extension for the issue of short answer spans for X-CSQA.} Our CLICKER models obtain higher accuracy with both extensions, compared to results in Table~\ref{xcsqa}.
And our CLICKER model with non-commonsense loss and two variations achieves +2.8 accuracy in comparison with the baseline model.
As for X-CODAH, our CLICKER models also show enhanced performance combined with triplet loss.
We, therefore, conjecture the averaged-answer pooling could help our models discern commonsense and non-commonsense components better; the triplet loss does help relieve the conflicts when optimizing multiple objectives together.
Experimental results indicate the effectiveness of improving commonsense reasoning with our proposed models and variations on both commonsense QA benchmarks.
\end{comment}
\begin{table}
\centering
\begin{tabular}{{p{0.7\linewidth}p{0.22\linewidth}}}
\hline
\textbf{Models} & \textbf{Acc} \\
\hline
\multicolumn{2}{ l }{\textit{X-CSQA}}\\
\hline
MCP(XLM-R$_B$)*~\cite{lin-etal-2021-xcsr} & 48.8 \\
CLICKER - \textit{base} & 49.6 (+0.8) \\
CLICKER - \textit{align} & \textbf{50.6 (+1.8)} \\
CLICKER - \textit{align+diff} & \textbf{50.6 (+1.8)} \\
CLICKER - \textit{nc} & 49.8 (+1.0) \\
CLICKER - \textit{align+nc} & 49.6 (+0.8) \\
\hline
\multicolumn{2}{ l }{\textit{X-CODAH}}\\
\hline
MCP(XLM-R$_B$)*~\cite{lin-etal-2021-xcsr} & 49.2 \\
CLICKER - \textit{base} & 50.2 (+1.0) \\
CLICKER - \textit{align} & 49.6 (+0.4) \\
CLICKER - \textit{align+diff} & 50.3 (+1.1)\\
CLICKER - \textit{nc} & \textbf{50.8 (+1.6)} \\
CLICKER - \textit{align+nc} & 49.6 (+0.4)\\
\hline
\end{tabular}
\caption{\label{xcsqa}
Accuracy on the German test sets of X-CSQA and X-CODAH. The MCP(XLM-R$_B$)* model is trained on English and machine-translated German. The \textit{align}, \textit{diff}, and \textit{nc} losses refer to the objectives in Eqs.~(\ref{eq:common1}), (\ref{eq:common2}), and (\ref{eq:res}), respectively.
}
\end{table}
\subsection{Discussion}
Our models address the alignment of extracted embeddings with various combinations of objectives.
The fact that \textit{align+nc} loss is not as good as \textit{nc} loss alone suggests a conflict between aligning the commonsense embeddings and aligning the non-commonsense embeddings.
This can be explained as both objectives aiming to maximize the cosine similarity of embeddings, making it harder for the model to discern different commonsense knowledge in them.
From the best accuracy achieved on the two datasets, we conjecture that the quality of the commonsense embeddings (optimized by the \textit{align} and \textit{diff} losses) dominates CSR on X-CSQA, while the non-commonsense embeddings (optimized by the \textit{nc} loss) dominate on X-CODAH.
The reason may be that extracting commonsense knowledge for clause selection in X-CODAH is more challenging than for multiple-choice QA in X-CSQA, whereas separating out the non-commonsense embeddings helps the multiple-choice classifier understand the commonsense portion with less noise.
We also observe that using \textit{align} and \textit{nc} losses together is not the best practice according to our experiments.
It suggests that jointly optimizing both objectives makes it more difficult for the multiple-choice classifier to predict correctly, as correct choices are pushed closer to incorrect ones.
\begin{comment}
We also note that it is not the best practice to use all losses together for commonsense and non-commonsense components. These losses together fail to contribute the sum of their own portions to improve the accuracy of CSR, however, the one which is being optimized can bring more difficulties to the others to be optimized as well.
\end{comment}
\textbf{Commonsense vs. Non-commonsense.} To investigate the effectiveness of our learned commonsense embeddings, we evaluate the accuracy of our CLICKER models on the X-CSQA dev set when predictions are made from commonsense embeddings or non-commonsense embeddings. As seen in Table~\ref{embed}, the performance of commonsense embeddings is significantly better than that of non-commonsense embeddings.
This is expected, since our models are trained with cross-lingual CSR objectives to discern commonsense embeddings while maximizing the similarity of non-commonsense embeddings.
Non-commonsense embeddings can induce confusion for CSR, so combining both embeddings performs worse than using commonsense embeddings alone.
\begin{figure}[!htb]
\centering
\includegraphics[width=.86\linewidth]{before_you_can_dream.png}
\caption{Attention head of the self-attention layer. The given example is from X-CSQA.}\label{Fig:Data2}
\end{figure}
\textbf{Does self-attention imply commonsense knowledge?} We assume that self-attentions in our models can appropriately attend to tokens that affect the plausibility of commonsense.
Figure~\ref{Fig:Data2} is the heatmap of the attention head in the self-attention layer evaluated on an example ``\textit{What do you need to be before you can dream?}'' from X-CSQA.
Notably, larger attention weights are assigned to commonsense-related tokens, such as the
``\textit{before}'', ``\textit{dream}'' and ``\textit{sleep}'' tokens.
A similar phenomenon is observed on X-CODAH as well.
These tokens are weighted to generate commonsense embeddings and help our model improve accuracy and interpretability of reasoning commonsense knowledge.
\begin{table}[!t]
\centering
\begin{tabular}{ll}
\hline
\textbf{Classifier Input} & \textbf{Dev Acc} \\
\hline
Commonsense & \textbf{47.8} \\
Non-Commonsense & 11.0 \\
Commonsense + Non-Commonsense & 47.6 \\
\hline
\end{tabular}
\caption{\label{embed}
Dev accuracy on X-CSQA taking commonsense or non-commonsense embeddings as inputs for the classifier.
}
\end{table}
\section{Conclusion}
In this paper, we propose a cross-lingual framework CLICKER for commonsense reasoning.
Experiments on X-CSQA and X-CODAH demonstrate the effectiveness of CLICKER in cross-lingual commonsense reasoning as it not only reduces performance discrepancies of commonsense reasoning between English and non-English languages but also improves the interpretability of commonsense knowledge across languages.
The potential of our approach to be generalized to other low-resource languages will be beneficial for alleviating data scarcity in cross-lingual commonsense reasoning.
\section{Introduction}
Over the past two decades, many companies have faced production problems that have led to low efficiency, poor product quality, and high costs. In particular, traditional and heavy industries, including steel and metal processing, also face material waste challenges that have a significant environmental impact and reduce profits. An essential reason for product defects and high cost is the low degree of automation and digitalization of the existing equipment used on the shop floor \cite{Cagle2020Digitalization}. For example, a forging factory typically relies on old heavy machinery, such as large induction ovens that are used to heat steel bars. The heating process of a forging line is essential for the final product quality; however, it is still controlled manually or by using well-known recipes (predefined parameter settings from experience) created by experts and experienced operators. The standard recipe may fail when unseen events occur. Here, an inaccurate manual temperature adjustment to heat the steel bars may fail to meet the production requirements. Therefore, poor process control can lead to material quality degradation or even waste because the production temperature is outside the specification and the quality of the product cannot be automatically detected during production. Many companies are investing more resources to improve digitalization and automation in process control to overcome this problem. With the improvement of digitalization, more key enabling technologies, such as artificial intelligence (AI) and industrial Internet of Things (I-IoT), can be integrated into the company's equipment to achieve AI-assisted automation\cite{ZDRAVKOVIC21AI-DHS}. As one of the emerging technologies, digital twins (DT) are becoming increasingly important in improving digitalization in the process industry due to rapidly evolving simulation and modeling capabilities \cite{Matteo2022DT}, which also leverage the vast compute capacity of edge/cloud infrastructures.
A DT is a high-fidelity virtual model aimed at emulating physical processes and behaviors with the ability to evaluate, optimize, and predict\cite{Graessler2017DT}. It also provides a real-time link between the physical and digital worlds, allowing companies to have a complete digital footprint of their products and production processes throughout the production cycle \cite{Khan2018DT}. In turn, companies can realize significant value in improved operations, reduced defects, increased efficiency, and enabling predictive maintenance \cite{Panagou2022PM}. With real-time data acquisition, a DT can help operators understand the production process and make preventive decisions when an anomaly occurs \cite{LIU2021DT}. However, to make the correct decisions, DTs need to accurately simulate the physical world, which is difficult due to the complexity involved. Consequently, DTs representing the production process must be systematically tested to prove their reliability.
Testing DT as a software product is important because it identifies DT imperfections and helps increase its precision, ensuring high performance and reliable operation. With the acquisition of a large amount of production data, providing a continuous automated testing approach is crucial to ensure the overall quality of DT software. This testing process becomes more complicated when it should be done in real-time by combining the DT with the streaming production data. This is challenging because DT-based data processing is complicated due to the complexity of the physical process and the lack of consistency in the data stream. Most of the literature focuses on the use of DT to test different applications, where the DT itself is still manually tuned and tweaked with offline experimental data. Less evidence can be found for DT validation with online data streams. However, the performance of the well-tuned DT with offline data may change when the production environment changes. Hence, continuously monitoring the DT deviation from the real production line and continually building an ultra-fidelity virtual replica by entirely using physical live data streams is essential.
To address the mentioned challenges, in this paper, we provide a systematic and automated real-time method to continuously test the quality of the DT software with live production data in the forging factory (Bharat Forge Kilsta AB, Karlskoga, Sweden). This is a significant contribution as it allows the DT to be tested and validated using live production data rather than relying on simulated or historical data. The method is essential in industrial process automation, ensuring that the DT is accurate and reliable when used to optimize and control the process. The paper also presents a snapshot creation methodology that allows the DT to be tested in a systematic and automated way. The snapshot method allows the DT to be continuously tested and monitored, ensuring that it is always up-to-date and accurate. To discover faulty data, a snapshot creation methodology repetitively creates two snapshots, feeds the first snapshot to the DT, and compares the output of the DT with the second snapshot. The method identifies the deficiencies between the two snapshots and localizes the faults in the DT data. The paper also presents an architectural implementation of the testing method within the DT when a machine learning (ML) solution is used for the power control optimization algorithm.
To this end, the contributions of this paper are as follows: we 1) propose a DT test architecture for real-time data parsing, processing, visualization, and analysis, 2) propose a novel method to test DT with real production data in real-time, and 3) provide a stable and scalable approach to processing the production data. To address these contributions, this paper is organized as follows. In Section \ref{sec:RelatedWork}, we provide the necessary background and related work. Section \ref{sec:DTtest} presents our case study in which the heating line of a forging factory, the functionality of the DT, and the DT-assisted DRL structure will be explained in detail. Section \ref{sec: DT_Architecture} illustrates the DT testing architecture and the snapshot creation methodology. Section \ref{sec:results} contains the experimental evaluation, including the testing setup, the experimental results, and the discussion. Section \ref{ThreatsToValidity} explains the threats to validity, generalizability, and the applicability of the proposed method. Finally, Section \ref{sec:conclusion} concludes the paper.
\section{Background and Related Work}\label{sec:RelatedWork}
In recent years, researchers have investigated and evaluated the concept, reference models, and development directions of the DT, indicating its huge potential for practical applications. In academia and industry, researchers and practitioners have successively proposed smart industry concepts based on DT\cite{MA2022DTsmartmanufacturing}, including online process monitoring\cite{LI2022SCDT}, product quality control\cite{TURAN2022DTOptim}, and process optimization methods\cite{Flores2021DTsmartproduction}.
Most DTs are either modeled by expert knowledge or tuned by offline data from production. Erhan et al.\cite{TURAN2022DTOptim} provide a DT modeling application to improve the quality of processes and products. DT is modeled with data from sensors and PLC, finite element simulations, and several data-analytic tools are also combined to minimize material consumption, maximize product performance, and improve process quality. After the DT is developed, a detailed analysis of the process characteristics, process parameters, and process challenges is performed with material modeling to validate the fidelity of the DT. With the help of DT, the thermoforming process is significantly improved and the scrap ratio decreased. However, the validation of the DT still relies on the manual tuning process and expert knowledge. Panagiotis et al. \cite{STAVROPOULOS2022DTPC} describe a method that uses DT to model thermal manufacturing processes. The work proposes DT that generalizes the concept of process control toward multivariable performance optimization. Data-driven models describe the mathematical link between process parameters and performance indicators. Furthermore, the performance of process models is examined by aggregating the details of the modeling from the process data to reduce the gap between theoretical and data-driven models. Flores et al. \cite{Flores2021DTsmartproduction} propose a framework that combines DT-based services and the Industrial Internet of Things for real-time material handling and location optimization. The research shows an improvement in the delivery, makespan and distance traveled during material handling.
DT also shows a great advantage when combined with artificial intelligence (AI) and machine learning (ML). In most applications, DT is driven by real-scene data, online monitoring in real-time, and optimization of the entire process. For example, Jinfeng et al. \cite{LIU2021PQ} describe a DT-driven approach to traceability and dynamic control for processing quality. The research introduced a Bayesian network model to analyze factors that affect processing quality. With the acquisition of real-time data in the IoT system of the manufacturing unit for dynamic control of processing quality, DT-driven dynamic control and management is proposed. Yun-Peng et al.\cite{ZHAO2022DT} describe a DT based on an artificial neural network that detects damage to the fishing net in real-time. The proposed DT can detect the final net damage accurately and quickly, even under complex sea conditions. Industrial robots can be trained in manufacturing to learn dexterous manipulation skills with DRL. However, training industrial robots on real production lines is unrealistic, as the interaction between the DRL agent and the real-world environment is time-consuming, expensive, and unsafe\cite{Meyes2017industrialrobots}. A common approach is to train robots through simulations and then deploy algorithms on physical robots.
The fidelity of DT is significant when combined with other state-of-the-art technologies, for example, by training a DRL agent with DT. To evaluate the fidelity of DT, Shimin et al. \cite{LIU2022DTEVA} construct an adaptive evaluation network for the DT machining system, which evaluation network can predict the decision-making error in the process route and assess its reliability. Houde et al. \cite{SONG2022CALDT} demonstrated an autonomous online calibration of DT using two-stage machine learning. In the offline stage, the simulated data and the corresponding measurement data are used to build an error database. Errors in the database are grouped by clustering to reduce the complexity of the calibration model. The calibration model is continuously updated in the online stage using a dynamic error database while the digital twin runs in parallel with the real operation. Similarly, He et al. \cite{ZHANG2022EVADT} proposed a consistent evaluation framework that uses the analytic hierarchy process to form a comprehensive consistent evaluation. The judgment matrix is constructed in this approach to calculate the consistency ratio. The judgment matrix is updated iteratively to obtain a consistent evaluation framework.
DT-assisted industrial production has achieved significant success in different areas. The combination of DT and state-of-the-art technologies such as AI, IIoT, and ML provides potential industrial digitalization and automation opportunities. However, in the literature, most of the work used DT for industrial optimization, quality testing, online monitoring, and control. Less work has been shown to test the actual DT software online and continuously evaluate the DT performance during production. In this sense, our work shows a data-driven approach to test the DT software. Consistent evaluation of DT is critical for industrial applications, since AI decision-making and other control algorithms closely work with DT, and the fidelity of DT directly impacts the performance of control algorithms centralized with DT.
\section{The Digital Twin Under Test}\label{sec:DTtest}
The DT under test in this paper is used to simulate the induction heating process of the Smart Forge industrial showcase. It should be mentioned that although the DT under test in this paper is specific to the induction heating process of the Smart Forge, our testing method can be applied to other DT applications as long as sensor data is involved. This is because our method focuses on evaluating the accuracy of the sensor data being used by the DT and the robustness, which is a critical aspect of any DT application.
As shown in Figure \ref{fig:furnace}, the DT is used to simulate the heating process of steel bars passed through the heating furnace. The heating furnace is divided into five zones of electrical induction heaters consisting of four induction coils, each. The temperature between each coil is monitored by pyrometers. In normal production, steel bars move inside the heating furnace toward the shear process outside Zone 5. However, when there is a disturbance downstream of the production line, the furnace is set to holding mode, in which the power to the coils is reduced and the steel bars slowly oscillate back and forth. This is essential to keep the bar temperature at a certain level and not shut down the furnace, as it will take a long time and significant resources to heat it up to a certain temperature again when the disturbance is over. A simulator is developed that aims to represent this production process by incorporating the effects of induction heating, conduction, radiative cooling, and steel bar movement into a DT of the production line. The developed DT can also facilitate our control algorithm design, e.g., combinatorial optimization algorithm and DRL algorithm. In this paper, we also provide the details of our DT-based DRL architecture as shown in Figure \ref{fig:architecture}. The following subsections provide the details of the DT and its components, including our developed ML algorithm to optimize and automate the production line.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figures/zones.pdf}
\caption{Heating Furnace}
\label{fig:furnace}
\end{figure}
\subsection{The Digital Twin}\label{sec:DT}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth ]{Figures/digital_twin.pdf}
\caption{Digital Twin}
\label{fig:DT}
\end{figure*}
The DT of the forging line simulates the heating process of one or more steel bars by moving them through one or several linearly arranged coils. Each coil is defined by its fixed start and end positions and power settings, which can change dynamically. The DT assumes that only the segment of the steel bar under a coil is heated with the coil power. Other segments of the steel bar are not affected by the coil. The dimensions, mass, and specific heat coefficient define a steel bar. The DT is implemented as a discrete event simulator. The DT simulates two modes:
\textbf{Normal mode:} In normal production, steel bars move at a configurable constant speed towards the shear. The induction coils in the oven are supplied with normal production power.
\textbf{Holding mode:} Steel bars move forward and backward at a configurable holding mode speed, and the direction changes at regular intervals. The induction coils in the oven are supplied with warm holding power.
The position of the bar along the production line is updated with a \emph{Movement Manager} for each simulation step. The Movement Manager has two modes of operation, corresponding to the normal and holding modes described above.
\noindent When the steel bar reaches the end of the heating line, the DT assumes that the bar is moved to the next step of production. Therefore, the DT will remove the bar and record the temperature of the head. The history of movement of the steel bar and its temperature profile for each time step are kept for further analysis. The temperature of the steel bar is updated at each DT time step by a \emph{Temperature Manager}. For every segment along the steel bar (within a configurable resolution), the segment is heated if it is within the coil range. The DT also provides an interface for algorithms to interact with it using the \emph{Controller Manager} class, which continuously provides data to external control algorithms and returns the algorithm responses to the DT.
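A deliberately simplified sketch of one such simulation step is given below; the parameter names, units, and update rules are illustrative assumptions only, not the actual DT implementation, but they show how segments under a coil are heated while every segment cools.
\begin{verbatim}
def dt_step(segments, coils, speed, dt, heat_eff, cool_factor):
    """One discrete DT time step over a list of bar segments.

    segments: list of dicts with 'pos' (m) and 'temp' (degrees C)
    coils:    list of dicts with 'start', 'end' (m) and 'power' (kW)
    """
    for seg in segments:
        seg['pos'] += speed * dt                       # Movement Manager
        for coil in coils:                             # Temperature Manager
            if coil['start'] <= seg['pos'] <= coil['end']:
                seg['temp'] += heat_eff * coil['power'] * dt
        seg['temp'] -= cool_factor * seg['temp'] * dt  # simplified cooling
    return segments
\end{verbatim}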
As shown in Fig. \ref{fig:DT}, the DT consists mainly of six parts, as follows:
\begin{enumerate}
\item \textbf{Parameter settings:} define all parameters in the DT, including physical parameter settings (steel bar's mass, dimension, density, speed, etc.) and thermal parameter settings (specific heat, heating efficiency, cooling factors, etc.) of the steel bar, power settings of each zone, simulation time, coil position, sensor position, etc.
\item \textbf{Warm holding manager:} determines whether the DT simulates normal production mode or warm holding mode.
\item \textbf{Controller manager:} provides an interface to design different control algorithms such that the DT can be used to assist in algorithm development.
\item \textbf{Movement manager:} updates the speed of the steel bar, which is determined by the operation mode and the properties of the steel bar. The position of the steel bar is also updated at each discrete time step.
\item \textbf{Temperature manager:} is a crucial part of the DT. Mathematical equations define thermo-heating and cooling models. The temperature manager defines parameters such as the heating efficiency and cooling factors for different materials. Thus, the heating and cooling temperatures of the steel bar can be computed at every discrete time step. The temperature history of the steel bar is also recorded for analysis.
\item \textbf{Sensor manager:} manages sensor temperature updates. The sensors are located between the induction coils. When the steel bar moves under the sensor, the temperature of the part under the sensor is recorded on the corresponding sensor. The sensor manager also logs all sensor temperature history during simulation time to perform an analysis.
\end{enumerate}
\subsection{DT Assisted Deep Reinforcement Learning Architecture}
Most industrial controllers, such as PID controllers, are model-based, relying on the need for mathematical modeling and knowledge of process dynamics. The performance of classical PID controllers depends only on a well-tuned process model. However, identifying an unstable system model is tedious due to the unbounded nature of the output. Consequently, model-based methods are degraded with unstable process dynamics due to inevitable modeling errors \cite{SHUPRAJHAA2022RLPID}. To address this problem, a data-driven and AI-assisted controller within this DT is used that does not require expert knowledge and can achieve robust control. More specifically, reinforcement learning (RL) is used within the DT to control the production process by predicting and deciding the power to be used in the next steps of the production line. The RL framework shown in Figure \ref{fig:mdps} consists of an agent that interacts with a stochastic environment modeled as a Markov decision process (MDP). The goal of RL is to train an agent that can predict an optimal policy \cite{Spielberg2020DeepRL}. In our application, the agent is the controller of the oven power, and the environment is the DT. Also, in practice, we found that the performance of the DRL will be greatly improved if the DRL agent only controls the power. Hence, we reduce the search dimension by keeping the speed constant, which may be improved by using multi-agent DRL for future work.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figures/MDPs.png}
\caption{Markov Decision Process}
\label{fig:mdps}
\end{figure}
Figure \ref{fig:architecture} shows the overall architecture of DT where deep reinforcement learning (DRL) is used within the architecture for decision making. It should be mentioned that the evaluation of the DRL's performance is beyond the scope of this paper. However, the DT serves the decision-making process led by the DRL algorithms. More information on the design and evaluation of this DRL can be found in our latest published study \cite{Ma2022DRL}.
As shown in Figure \ref{fig:architecture}, the simulation model within the DT of the production process acts as an environment for the agent and simulates the physical dynamics at each time step. In the offline training process (blue dashed box), the offline DRL models (holding mode: $DRL\_wh$, normal production: $DRL\_n$) are trained by the agent that observes states and rewards by instrumenting the DT and outputs actions (e.g., adjust power) to the simulation model. The temperatures, positions, and speeds of the steel bars are mapped to the states of the agents. The agent observes the states and immediate rewards to update the neural networks. Once the agent is well trained, it can predict optimal power based on the observed temperatures, positions, and speeds of the steel bars at every time step. Offline models are transferred to online DRL models ($DRL^\ast\_wh$, $DRL^\ast\_n$) using, for example, transfer learning. Online DRL models are deployed on edge compute platform for real-time interaction with the control process to adjust the parameters of the heating process for the forging line (e.g., power levels).
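The offline training loop can be sketched as follows, assuming the DT exposes gym-style \texttt{reset}/\texttt{step} methods via its Controller Manager; the interface names are illustrative, not the actual API.
\begin{verbatim}
def train_offline(dt_env, agent, episodes):
    for _ in range(episodes):
        state = dt_env.reset()            # temperatures, positions, speed
        done = False
        while not done:
            action = agent.act(state)     # power levels for the coils
            next_state, reward, done = dt_env.step(action)
            agent.update(state, action, reward, next_state, done)
            state = next_state
\end{verbatim}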
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth, ]{Figures/DT_DRL.pdf}
\caption{Digital Twin-Based Deep Reinforcement Learning Architecture}
\label{fig:architecture}
\end{figure*}
The part in the orange dashed box in Figure \ref{fig:architecture} shows the online control process when using the trained policy. The heating oven is controlled by adjusting the power to the induction coils or changing the speed of the conveyor rollers via a PLC. Actual production data, such as the temperatures in each zone, conveyor movements, and used voltage levels, are recorded through an OPC-UA server. Online DRL models are deployed on an edge computing platform, subscribe to OPC-UA monitoring data, and predict optimal control signals for the next time step, which are forwarded to the PLC by writing to dedicated OPC-UA tags on the server.
As the control policy is trained on a DT, the DT should resemble the conditions of the physical world as much as possible, so the off-line learned policy of the DRL is also a good policy when operating on real production data. If there is a large mismatch between the DT and the real system, the trained control policy will not be a good policy when feeding it with real-time data from the actual production data. Therefore, it is important to have a high-fidelity DT. Automated testing of this DT is essential to ensure high fidelity and accuracy to make robust decisions. By comparing real and simulated data, we can measure the error and use that information to fine-tune the DT after testing.
\section{Real-time DT Testing Architecture} \label{sec: DT_Architecture}
\subsection{Testing Agent Architecture}
We aim to design and implement an automated testing architecture that can help evaluate the accuracy of DT in an online production setting. The main idea is that the live process data together with the coil power settings enter the DT, which aims to predict the temperature using the DRL, compare the prediction with the real values of the process, and visualize the deviation. Figure \ref{fig:testingarchitecture} shows the architecture of the testing agent. The architecture obtains real data from the OPC-UA server, creates snapshots continuously, and automatically instruments the DT. The DT receives the preceding snapshot from the real data, runs several simulation steps, and compares the output with the oncoming snapshot from the process data. Each snapshot acts as a test case. The deviation calculation acts as a test oracle for each snapshot. The OPC-UA server transfers sensor signal data to other services, including the temperature of the pyrometer sensors, the movement of the conveyor, and the voltage levels sent to the induction coils. Thingsboard Gateway\footnote{Thingsboard web page https://thingsboard.io/} subscribes to OPC-UA tags on the factory server and publishes sensor updates as telemetry data to a local Thingsboard instance using the MQTT protocol, which stores and visualizes sensor data. To build high-performance and scalable data pipelines for DT testing, the Thingsboard server uses a Kafka connector to push sensor data to a Kafka broker. We build a Faust\footnote{Faust web page https://faust.readthedocs.io/en/latest/} agent as a Kafka consumer that subscribes to the Kafka broker to provide different functionalities: DT testing, statistics collection, and real-time error visualization through the GUI. The Faust agent instruments the DT by creating snapshots of the current production line in terms of power, temperatures, bar position, and speed, which are sent to the DT. The DT simulates expected new temperatures for the given power settings and bar position, which the Faust agent compares against a new snapshot from the production data. For a perfect DT, the new snapshot from the production data should deviate only slightly from the expected temperatures calculated by the DT.
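A minimal sketch of such a Faust consumer is shown below; the broker address, topic name, and telemetry record format are illustrative assumptions, while the snapshot fields follow the data structure described in Sec.~\ref{subsec:snapshot}.
\begin{verbatim}
import faust

# snapshot fields: power, temperature, position, speed, timestamp
REQUIRED = {'power', 'temp', 'position', 'speed', 'timestamp'}

app = faust.App('dt-testing-agent', broker='kafka://localhost:9092')
telemetry = app.topic('dt-telemetry')        # hypothetical topic name

snapshot = {}

@app.agent(telemetry)
async def consume(stream):
    async for event in stream:               # one telemetry update (dict)
        snapshot[event['name']] = event['value']
        if REQUIRED.issubset(snapshot):      # all snapshot fields collected
            print('snapshot ready:', snapshot)   # hand over to the test loop
            snapshot.clear()
\end{verbatim}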
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth, ]{Figures/test_architecture.pdf}
\caption{Testing Agent Architecture}
\label{fig:testingarchitecture}
\end{figure*}
\subsection{Snapshot Creation Methodology} \label{subsec:snapshot}
The Faust agent continuously receives a data stream with telemetry information that includes the temperature of each sensor, the position of the head and tail of the bar, and the power in each zone. However, the challenge is that the OPC-UA server may lag in time, telemetry may be lost, or the update intervals of each OPC-UA tag may be different. The DT cannot be evaluated if the data are missing or not coherent. Therefore, we propose a snapshot creation algorithm in which we collect snapshots of the data and compare them with the simulation output.
\begin{table}
\centering
\caption{Snapshot Data Structure}
\begin{tabular}{||c | c||}
\hline
\textbf{Field (Dataset 1/2)} & \textbf{Value type}\\ [0.5ex]
\hline\hline
Power & List[pwr] \\
\hline
Temp & List[temp] \\
\hline
Position & List[back, head] \\
\hline
Speed & Float \\[1ex]
\hline
Timestamp & Float \\[1ex]
\hline
\end{tabular}
\label{table:snapshot}
\end{table}
A snapshot is created by continuously collecting telemetry from the real-time stream until the snapshot contains all the information listed in Table \ref{table:snapshot}: the coil power in each zone, the temperature at each sensor in all zones, the bar's position (head and tail), the speed of the bar, and the timestamp. The power, temperature, and position values are each logged in a list. The speed is stored as a floating point number. The timestamp, which corresponds to each data sample, is also collected. The main procedure is shown in Algorithm \ref{alg:snapshot}. Steps 4 to 10 show that we repeatedly use Dataset1 as the DT input and compare the DT output with Dataset2. In addition, we also collect other data to facilitate our analysis, such as the holding mode indicator, which helps differentiate between normal production mode and holding mode. This indicator can also be cross-checked against the direction of movement. However, these data do not directly contribute to our snapshot creation algorithm.
\begin{equation}\label{equa: simulation time}
t_{simulation} = (Position_{Dataset2} - Position_{Dataset1})/Speed
\end{equation}
\alglanguage{pseudocode}
\begin{algorithm}
\scriptsize
\caption{Snapshot Creation Algorithm}
\label{alg:snapshot}
\begin{algorithmic}[1]
\State $Dataset1 \gets Snapshot1$
\State $Dataset2 \gets Snapshot2$
\While {True}
\State Use Dataset1 as input for the DT
\State Calculate simulation time $t_{simulation}$ by Equation~\ref{equa: simulation time}
\State Run the DT for $t_{simulation}$ time steps
\State Compare DT output with Dataset2
\State Copy Dataset2 to Dataset1, empty the position in Dataset2
\State Overwrite temperature, power, and speed in Dataset2 until getting a new position
\State Collect snapshot and store it in Dataset2
\EndWhile
\end{algorithmic}
\end{algorithm}
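A Python sketch of this loop is given below. The DT interface (\texttt{run\_dt}), the use of the head position in Eq.~(\ref{equa: simulation time}), and the error threshold are illustrative assumptions.
\begin{verbatim}
def test_loop(get_snapshot, run_dt, tol=5.0):
    """Continuously test the DT against live snapshots (Algorithm 1)."""
    dataset1 = get_snapshot()                        # Snapshot1
    while True:
        dataset2 = get_snapshot()                    # Snapshot2
        # t_simulation from the observed head displacement and speed
        t_sim = (dataset2['position'][1]
                 - dataset1['position'][1]) / dataset1['speed']
        predicted = run_dt(dataset1, t_sim)          # DT output after t_sim
        errors = [p - r for p, r in zip(predicted['temp'], dataset2['temp'])]
        passed = all(abs(e) <= tol for e in errors)  # per-snapshot test oracle
        yield dataset2['timestamp'], errors, passed
        dataset1 = dataset2                          # copy Dataset2 to Dataset1
\end{verbatim}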
\section{Experimental Evaluation} \label{sec:results}
Our experimental evaluation aims to understand whether the DT can be automatically tested with the online data stream. By analyzing the test results, we aim to know whether the movement pattern, position update, and sensor temperature update in the DT align with real production. The details of the DT structure are mentioned in Section \ref{sec:DT}. The parameter setting module, warm holding manager, and controller manager receive inputs that include parameter settings, whether the operation is in normal production or warm holding mode, and power settings to the DT. They do not contribute to the fidelity of the DT. In addition, the temperature manager defines the internal temperature update over the steel bar, which cannot be observed in real production. Hence, it cannot be tested directly. The fidelity of the DT is directly reflected by the movement manager, which updates the steel bar's speed and steel bar position, and the sensor manager, which records the steel bar's temperature. We analyze the movement pattern and position error of the steel bar to evaluate the movement manager and the sensor temperature error matrix to evaluate the sensor manager, which can indirectly reflect faults in the DT temperature manager.
\subsection{Testing Setup}
The testing data come from the OPC-UA server located at Bharat Forge Factory. The DT is developed by VikingAnalytics and is deployed on the server at Karlstad University, together with the Kafka broker and the Faust Agent (Kafka consumer). The Thingsboard Gateway subscribes to OPC-UA tags and publishes the data stream to a local Thingsboard instance through the MQTT protocol, the Kafka connector pushes the data stream to the Kafka broker, and the Faust agent, as a Kafka consumer, instruments the DT by creating snapshots and compares the DT output with the new snapshot from the production data. As we mentioned earlier, each snapshot is considered a test case, and the deviation between the two snapshots of the same instance is a test oracle for each test case. If the deviation exceeds the threshold specified in the oracle, the test case is considered failed. The fault is then localized within the snapshot data.
\subsection{Testing Results}
Figure \ref{fig:resultNP}(a) shows the movement of the steel bars in the furnace during normal production. The front bar of the consecutive rods leaves the furnace and enters a cutting-sequence cycle because the head moves back and forth. The position of the tail is dropped at telemetry 200 and 500 due to new bars being added to the furnace. During the hold mode, the consecutive steel bars move forward and back at a constant speed. This can clearly show whether the current operation is on normal production mode or hold mode, which corresponds to the hold mode indicator. Figure \ref{fig:resultWH}(a) shows how the steel bars move and change direction at regular intervals. We continuously run the testing on production data. The test oracle calculates the error for each snapshot and sensor within a snapshot, which is the difference between the estimated temperature and position of DT and the temperature and position of the real production data, and decides which snapshot (i.e., test case) passed and failed. We present results from the position sensor and two temperature sensors (one in Zone 1 and one in Zone 2).
\begin{figure*}
\centering
\subfloat[Steel bar's movement]{\includegraphics[scale=0.4]{Figures/np_rod_movement.png}}
\subfloat[Sensor\_Position: Position error]{\includegraphics[scale=0.4]{Figures/np_pos.png}}
\subfloat[Sensor1\_3: Temperature error]{\includegraphics[scale=0.4]{Figures/np_s1_3.png}}
\subfloat[Sensor2\_2: Temperature error]{\includegraphics[scale=0.4]{Figures/np_s2_2.png}} \\
\caption{(a). Shows the movement of the tail and head of the steel bar during normal production mode. Probability density function(PDF) and cumulative distribution function(CDF) of errors for each sensor in this figure: (b). The position sensor (c). The $3^{rd}$ sensor in Zone 1 (d). The $2^{nd}$ sensor in Zone 2}
\label{fig:resultNP}
\end{figure*}
\begin{figure*}
\centering
\subfloat[Steel bar's movement]{\includegraphics[scale=0.4]{Figures/wh_rod_movement.png}}
\subfloat[Sensor\_Position: Position error]{\includegraphics[scale=0.4]{Figures/wh_pos.png}}
\subfloat[Sensor1\_3: Temperature error]{\includegraphics[scale=0.4]{Figures/wh_s1_3.png}}
\subfloat[Sensor2\_2: Temperature error]{\includegraphics[scale=0.4]{Figures/wh_s2_2.png}} \\
\caption{(a). Presents the movement of the tail and head of the steel bar during holding mode. Probability density function(PDF) and cumulative distribution function(CDF) of errors for each sensor in this figure: (b). The position sensor (c). The $3^{rd}$ sensor in Zone 1 (d). The $2^{nd}$ sensor in Zone 2}
\label{fig:resultWH}
\end{figure*}
As can be seen, the position sensor in Figure \ref{fig:resultNP}(b) shows a position error of -2.5mm to 1.0mm with a spike in the probability distribution function (PDF) around -0.8mm during normal production that indicates a fault in the DT. The error deviation during the hold mode (Figure \ref{fig:resultWH}(b)) is between -3 and 2mm with a spike between -1 and 0mm. Temperature sensor Sensor1\_3 in Zone 1 deviates during normal production from the expected one by the DT between -15°C to 10°C as shown in Figure \ref{fig:resultNP}(c) with a PDF spike at 0°C (no deviation). According to the cumulative distribution function (CDF) graph, the probability that the DT deviates between -4°C and 5°C is approximately 95-10=85\%. However, during the holding mode (Figure \ref{fig:resultWH}(c)), the deviation of the DT increases, and the probability of deviation between -4°C and 5°C during the holding mode is 85-20=65\%.
The second temperature sensor in zone 2 (Sensor2\_2) deviates during normal production from the expected value calculated by the DT within the interval -70°C to 30°C (Figure \ref{fig:resultNP}(d)). In the holding mode, the deviation is between -90°C and 40°C (Figure \ref{fig:resultWH}(d)). The PDF in Figure \ref{fig:resultNP}(d) shows an error spike at around -45°C during normal production. However, during holding mode, errors are mainly distributed around -80° C and 0° C (Figure \ref{fig:resultWH} (d)).
\subsection{Discussion}
As can be seen in the results, the temperature deviation between the DT and the real production happened a few times, indicating a fault in the DT software. Without the continuous testing process and architecture, detecting and fixing those faults was impossible. Apart from those fixed faults, there are also multiple reasons for the significant temperature deviation between the DT and the real production. During the time between the collection of snapshots, the coils' power may have been updated. Considering that the DT does not update the power during this time period, the temperature output could differ in such cases, especially if a large power change occurred. For example, if the power for zone 3 is at high power in Dataset1 but suddenly changes to low power before Dataset2 is collected, the DT would run the coils' power at high power for zone 3 during the whole process, which may lead to temperature errors as reported by the DT. Another reason could be that some sensors are not calibrated well in reality and constantly record the temperature with a significant deviation, while it does not exist in the DT. Furthermore, the current DT does not consider radial or axial thermal conduction inside the steel bar, which, in reality, is especially prominent at the start of the heating process when there is a large temperature gradient between the surface and the core. Temperature sensors only record the surface temperature. All of these approximations affect the experimental results. However, in this paper, our aim is to provide a systematic DT testing method that can improve other researchers or practitioners to speed up their testing using the data-driven approach.
\section{Threats to Validity and Generalizability of the Implementation}\label{ThreatsToValidity}
There are several threats to the validity of our study on automated and systematic digital twins testing for industrial processes, and here are some ways we tried to mitigate them. One threat is selection bias, as we only included samples from our industrial processes case study. To mitigate this threat, we used a random sampling approach to select the processes for our study and ensured that our sample was diverse in terms of size, duration, and complexity. Furthermore, we continuously executed our testing process in the deployed architecture for a long time to capture snapshots.
Another threat is constructed validity, as we used a self-developed testing approach to evaluate the DT. To mitigate this threat, we thoroughly reviewed the literature to ensure that our approach was grounded in existing theories and practices. We sought feedback from industry experts in the field to validate the suitability of our approach. Furthermore, there is a threat of measurement error, as we relied on various sensors and monitoring devices to collect data on the DT. To mitigate this threat, we carefully calibrated all our devices before the study and performed ongoing checks throughout the study to ensure their accuracy. Overall, these efforts helped to reduce the potential impact of these threats on the validity of our study.
In terms of the generalizability of the proposed testing method, we believe it will likely apply to a wide range of industrial processes that use DTs. This is because our approach is systematic and automated, allowing it to be easily reproduced and applied to other contexts. In addition, we designed our approach to be flexible and adaptable so that it can be tailored to the specific needs of different processes. To further support the generalizability of our findings, we plan to conduct additional studies in different industrial settings and with different types of DT to examine the robustness of our approach. It is important to note that while the DT we are testing in this paper is specifically designed for the induction heating process at Smart Forge, our testing method can be utilized for other DT applications as long as they utilize sensor data. This is because our method evaluates the accuracy of the sensor data being used by the DT and its simulated data generation, a crucial aspect of any DT application. By verifying the accuracy of the sensor data, we can increase the overall accuracy and effectiveness of the DT. Therefore, our testing method can be applied to various DT applications in different fields.
\section{Conclusion and Future Work} \label{sec:conclusion}
This paper introduced a systematic testing architecture with production data that aims to achieve automated DT testing. We presented a systematic method using the DT to train a DRL model and how the DT simulates the induction heating line. We proposed our snapshot creation algorithm to test the DT with real production data in a real-time setting. The evaluation results show that our approach can automatically detect faults in the DT by using real production data, such as faults in the movement of the steel bar. Our method also analyzes error metrics of the temperature and position of the steel bar. Future work will focus on improving snapshot creation algorithms considering corner cases where sudden power changes occur during the DT run time, and improving DT accuracy by considering radial or axial thermal conduction inside the bar.
\section*{Acknowledgement}
This work was partially funded by Vinnova through the SmartForge project. Additional funding was provided by the Knowledge Foundation of Sweden (KKS) through the Synergy Project AIDA - A Holistic AI-driven Networking and Processing Framework for Industrial IoT (Rek:20200067).
\vspace{12pt}
\bibliographystyle{IEEEtran}
\balance
\section{Introduction}
Over the past two decades, many companies have faced production problems that have led to low efficiency, poor product quality, and high costs. In particular, traditional and heavy industries, including steel and metal processing, also face material waste challenges that have a significant environmental impact and reduce profits. An essential reason for product defects and high cost is the low degree of automation and digitalization of the existing equipment used on the shop floor \cite{Cagle2020Digitalization}. For example, a forging factory typically relies on old heavy machinery, such as large induction ovens used to heat steel bars. The heating process of a forging line is essential for the final product quality; however, it is still controlled manually or by well-known recipes (predefined parameter settings based on experience) created by experts and experienced operators. A standard recipe may fail when unseen events occur, and an inaccurate manual temperature adjustment may then fail to meet the production requirements. Poor process control can thus lead to material quality degradation or even waste, because the production temperature falls outside the specification and the quality of the product cannot be automatically detected during production. Many companies are investing more resources to improve digitalization and automation in process control to overcome this problem. With the improvement of digitalization, more key enabling technologies, such as artificial intelligence (AI) and the industrial Internet of Things (I-IoT), can be integrated into the company's equipment to achieve AI-assisted automation \cite{ZDRAVKOVIC21AI-DHS}. As one of the emerging technologies, digital twins (DT) are becoming increasingly important for improving digitalization in the process industry due to rapidly evolving simulation and modeling capabilities \cite{Matteo2022DT} and the vast compute capacity offered by edge/cloud infrastructures.
A DT is a high-fidelity virtual model aimed at emulating physical processes and behaviors with the ability to evaluate, optimize, and predict\cite{Graessler2017DT}. It also provides a real-time link between the physical and digital worlds, allowing companies to have a complete digital footprint of their products and production processes throughout the production cycle \cite{Khan2018DT}. In turn, companies can realize significant value in improved operations, reduced defects, increased efficiency, and enabling predictive maintenance \cite{Panagou2022PM}. With real-time data acquisition, a DT can help operators understand the production process and make preventive decisions when an anomaly occurs \cite{LIU2021DT}. However, to make the correct decisions, DTs need to accurately simulate the physical world, which is difficult due to the complexity involved. Consequently, DTs representing the production process must be systematically tested to prove their reliability.
Testing the DT as a software product is important because it identifies DT imperfections and helps increase its precision, ensuring high performance and reliable operation. Given the acquisition of a large amount of production data, a continuous, automated testing approach is crucial to ensure the overall quality of the DT software. This testing process becomes more complicated when it must be done in real time by combining the DT with the streaming production data. This is challenging because DT-based data processing is complicated by the complexity of the physical process and the lack of consistency in the data stream. Most of the literature focuses on the use of DTs to test different applications, where the DT itself is still manually tuned and tweaked with offline experimental data. Less evidence can be found for DT validation with online data streams. However, the performance of a DT that is well tuned on offline data may change when the production environment changes. Hence, it is essential to continuously monitor the DT's deviation from the real production line and to continually build a high-fidelity virtual replica using physical live data streams.
To address the mentioned challenges, in this paper, we provide a systematic and automated real-time method to continuously test the quality of the DT software with live production data in the forging factory (Bharat Forge Kilsta AB, Karlskoga, Sweden). This is a significant contribution as it allows the DT to be tested and validated using live production data rather than relying on simulated or historical data. The method is essential in industrial process automation, ensuring that the DT is accurate and reliable when used to optimize and control the process. The paper also presents a snapshot creation methodology that allows the DT to be tested in a systematic and automated way. The snapshot method allows the DT to be continuously tested and monitored, ensuring that it is always up-to-date and accurate. To discover faulty data, a snapshot creation methodology repetitively creates two snapshots, feeds the first snapshot to the DT, and compares the output of the DT with the second snapshot. The method identifies the deficiencies between the two snapshots and localizes the faults in the DT data. The paper also presents an architectural implementation of the testing method within the DT when a machine learning (ML) solution is used for the power control optimization algorithm.
To this end, the contributions of this paper are as follows: we 1) propose a DT test architecture for real-time data parsing, processing, visualization, and analysis, 2) propose a novel method to test DT with real production data in real-time, and 3) provide a stable and scalable approach to processing the production data. To address these contributions, this paper is organized as follows. In Section \ref{sec:RelatedWork}, we provide the necessary background and related work. Section \ref{sec:DTtest} presents our case study in which the heating line of a forging factory, the functionality of the DT, and the DT-assisted DRL structure will be explained in detail. Section \ref{sec: DT_Architecture} illustrates the DT testing architecture and the snapshot creation methodology. Section \ref{sec:results} contains the experimental evaluation, including the testing setup, the experimental results, and the discussion. Section \ref{ThreatsToValidity} explains the threats to validity, generalizability, and the applicability of the proposed method. Finally, Section \ref{sec:conclusion} concludes the paper.
\section{Background and Related Work}\label{sec:RelatedWork}
In recent years, researchers have investigated and evaluated the concept, reference models, and development directions of the DT, indicating its huge potential for practical applications. In academic and industrial fields, researchers and practitioners have successively proposed smart industry concepts based on DTs \cite{MA2022DTsmartmanufacturing}, including online process monitoring \cite{LI2022SCDT}, product quality control \cite{TURAN2022DTOptim}, and process optimization methods \cite{Flores2021DTsmartproduction}.
Most DTs are either modeled from expert knowledge or tuned with offline data from production. Erhan et al. \cite{TURAN2022DTOptim} provide a DT modeling application to improve the quality of processes and products. The DT is modeled with data from sensors and PLCs together with finite element simulations, and several data-analytic tools are combined to minimize material consumption, maximize product performance, and improve process quality. After the DT is developed, a detailed analysis of the process characteristics, process parameters, and process challenges is performed with material modeling to validate the fidelity of the DT. With the help of the DT, the thermoforming process is significantly improved and the scrap ratio is decreased. However, the validation of the DT still relies on a manual tuning process and expert knowledge. Panagiotis et al. \cite{STAVROPOULOS2022DTPC} describe a method that uses a DT to model thermal manufacturing processes. The work proposes a DT that generalizes the concept of process control toward multivariable performance optimization. Data-driven models describe the mathematical link between process parameters and performance indicators. Furthermore, the performance of the process models is examined by aggregating the modeling details from the process data to reduce the gap between theoretical and data-driven models. Flores et al. \cite{Flores2021DTsmartproduction} propose a framework that combines DT-based services and the Industrial Internet of Things for real-time material handling and location optimization. The research shows an improvement in delivery time, makespan, and distance traveled during material handling.
DTs also show great advantages when combined with artificial intelligence (AI) and machine learning (ML). In most applications, the DT is driven by real-scene data, monitors the process online in real time, and optimizes the entire process. For example, Jinfeng et al. \cite{LIU2021PQ} describe a DT-driven approach to traceability and dynamic control of processing quality. The research introduced a Bayesian network model to analyze factors that affect processing quality. Based on real-time data acquired from the IoT system of the manufacturing unit, DT-driven dynamic control and management of processing quality is proposed. Yun-Peng et al. \cite{ZHAO2022DT} describe a DT based on an artificial neural network that detects damage to a fishing net in real time. The proposed DT can detect the final net damage accurately and quickly, even under complex sea conditions. Industrial robots can be trained in manufacturing to learn dexterous manipulation skills with DRL. However, training industrial robots on real production lines is unrealistic, as the interaction between the DRL agent and the real-world environment is time-consuming, expensive, and unsafe \cite{Meyes2017industrialrobots}. A common approach is to train robots through simulations and then deploy the algorithms on physical robots.
The fidelity of the DT is significant when it is combined with other state-of-the-art technologies, for example, when training a DRL agent with a DT. To evaluate DT fidelity, Shimin et al. \cite{LIU2022DTEVA} construct an adaptive evaluation network for a DT machining system, which can predict the decision-making error in the process route and assess its reliability. Houde et al. \cite{SONG2022CALDT} demonstrated an autonomous online calibration of a DT using two-stage machine learning. In the offline stage, the simulated data and the corresponding measurement data are used to build an error database. Errors in the database are grouped by clustering to reduce the complexity of the calibration model. The calibration model is continuously updated in the online stage using a dynamic error database while the digital twin runs in parallel with the real operation. Similarly, He et al. \cite{ZHANG2022EVADT} proposed a consistency evaluation framework that uses the analytic hierarchy process to form a comprehensive consistency evaluation. In this approach, a judgment matrix is constructed to calculate the consistency ratio and is updated iteratively to obtain a consistent evaluation framework.
DT-assisted industrial production has achieved significant success in different areas. The combination of DTs and state-of-the-art technologies such as AI, IIoT, and ML provides potential industrial digitalization and automation opportunities. However, in the literature, most of the work uses DTs for industrial optimization, quality testing, online monitoring, and control. Less work has addressed testing the actual DT software online and continuously evaluating DT performance during production. In this sense, our work presents a data-driven approach to testing the DT software. Consistent evaluation of the DT is critical for industrial applications, since AI decision-making and other control algorithms work closely with the DT, and the fidelity of the DT directly impacts the performance of control algorithms built around it.
\section{The Digital Twin Under Test}\label{sec:DTtest}
The DT under test in this paper is used to simulate the induction heating process of the Smart Forge industrial showcase. It should be mentioned that although the DT under test is specific to the induction heating process of the Smart Forge, our testing method can be applied to other DT applications as long as sensor data is involved. This is because our method focuses on evaluating the accuracy of the sensor data used by the DT and the robustness of its simulated data generation, which is a critical aspect of any DT application.
As shown in Figure \ref{fig:furnace}, the DT is used to simulate the heating process of steel bars passed through the heating furnace. The heating furnace is divided into five zones of electrical induction heaters, each consisting of four induction coils. The temperature between the coils is monitored by pyrometers. In normal production, steel bars move inside the heating furnace toward the shear process outside Zone 5. However, when there is a disturbance downstream of the production line, the furnace is set to holding mode, in which the power to the coils is reduced and the steel bars slowly oscillate back and forth. This is essential to keep the bar temperature at a certain level without shutting down the furnace, since reheating the furnace once the disturbance is over would take a long time and significant resources. A simulator is developed that aims to represent this production process by incorporating the effects of induction heating, conduction, radiative cooling, and steel bar movement into a DT of the production line. The developed DT can also facilitate our control algorithm design, e.g., combinatorial optimization and DRL algorithms. In this paper, we also provide the details of our DT-based DRL architecture, as shown in Figure \ref{fig:architecture}. The following subsections provide the details of the DT and its components, including our developed ML algorithm to optimize and automate the production line.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figures/zones.pdf}
\caption{Heating Furnace}
\label{fig:furnace}
\end{figure}
\subsection{The Digital Twin}\label{sec:DT}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth ]{Figures/digital_twin.pdf}
\caption{Digital Twin}
\label{fig:DT}
\end{figure*}
The DT of the forging line simulates the heating process of one or more steel bars by moving them through one or several linearly arranged coils. Each coil is defined by its fixed start and end positions and by its power setting, which can change dynamically. The DT assumes that only the segment of the steel bar under a coil is heated with the coil power; other segments of the steel bar are not affected by that coil. A steel bar is defined by its dimensions, mass, and specific heat coefficient. The DT is implemented as a discrete event simulator and simulates two modes:
\textbf{Normal mode:} In normal production, steel bars move at a configurable constant speed towards the shear. The induction coils in the oven operate at normal production power.
\textbf{Holding mode:} Steel bars move forward and backward at a configurable holding-mode speed, and the direction changes at regular intervals. The induction coils in the oven operate at warm holding power.
The position of the bar along the production line is updated by a \emph{Movement Manager} at each simulation step. The Movement Manager has two modes of operation, corresponding to the normal and holding modes described above.
\noindent When the steel bar reaches the end of the heating line, the DT assumes that the bar is moved to the next step of production. Therefore, the DT will remove the bar and record the temperature of the head. The history of movement of the steel bar and its temperature profile for each time step are kept for further analysis. The temperature of the steel bar is updated at each DT time step by a \emph{Temperature Manager}. For every segment along the steel bar (within a configurable resolution), the segment is heated if it is within the coil range. The DT also provides an interface for algorithms to interact with it using the \emph{Controller Manager} class, which continuously provides data to external control algorithms and returns the algorithm responses to the DT.
As shown in Fig. \ref{fig:DT}, the DT consists mainly of six parts, as follows:
\begin{enumerate}
\item \textbf{Parameter settings:} define all parameters in the DT, including physical parameter settings (steel bar's mass, dimension, density, speed, etc.) and thermal parameter settings (specific heat, heating efficiency, cooling factors, etc.) of the steel bar, power settings of each zone, simulation time, coil position, sensor position, etc.
\item \textbf{Warm holding manager:} determines whether the DT simulates normal production mode or warm holding mode.
\item \textbf{Controller manager:} provides an interface to design different control algorithms such that the DT can be used to assist in algorithm development.
\item \textbf{Movement manager:} updates the speed of the steel bar, which is determined by the operation mode and the properties of the steel bar. The position of the steel bar is also updated at each discrete time step.
\item \textbf{Temperature manager:} is a crucial part of the DT. Mathematical equations define thermo-heating and cooling models. The temperature manager defines parameters such as the heating efficiency and cooling factors for different materials. Thus, the heating and cooling temperatures of the steel bar can be computed at every discrete time step. The temperature history of the steel bar is also recorded for analysis.
\item \textbf{Sensor manager:} manages sensor temperature updates. The sensors are located between the induction coils. When the steel bar moves under the sensor, the temperature of the part under the sensor is recorded on the corresponding sensor. The sensor manager also logs all sensor temperature history during simulation time to perform an analysis.
\end{enumerate}
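To make the interplay between the movement, temperature, and sensor managers concrete, the following minimal Python sketch illustrates one possible discrete-step update in the spirit of the description above. The class names, the segment resolution, and the simplified heating and cooling terms are illustrative assumptions and do not reproduce the actual DT implementation.
\begin{verbatim}
# Minimal, illustrative sketch of one discrete simulation step, in the spirit
# of the DT's movement, temperature, and sensor managers. All names, the
# segment resolution, and the heating/cooling terms are assumptions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Coil:
    start: float   # coil start position [m]
    end: float     # coil end position [m]
    power: float   # current power setting

@dataclass
class Bar:
    head: float                 # head position [m]
    length: float               # bar length [m]
    segment_temps: List[float]  # one temperature per segment along the bar
    seg_len: float = 0.05       # configurable segment resolution [m]

def step(bar: Bar, coils: List[Coil], speed: float, dt: float,
         sensors: Dict[str, float], heat_eff: float = 0.01,
         cool_factor: float = 1e-4) -> Dict[str, float]:
    """Advance the simulation by one time step and return sensor readings."""
    bar.head += speed * dt                         # movement manager
    tail = bar.head - bar.length
    for i, temp in enumerate(bar.segment_temps):   # temperature manager
        pos = tail + (i + 0.5) * bar.seg_len
        heating = sum(c.power * heat_eff for c in coils
                      if c.start <= pos <= c.end)  # heated only under a coil
        bar.segment_temps[i] = temp + (heating - cool_factor * temp) * dt
    readings = {}
    for name, s_pos in sensors.items():            # sensor manager
        if tail <= s_pos <= bar.head:
            idx = min(int((s_pos - tail) / bar.seg_len),
                      len(bar.segment_temps) - 1)
            readings[name] = bar.segment_temps[idx]
    return readings
\end{verbatim}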
\subsection{DT Assisted Deep Reinforcement Learning Architecture}
Most industrial controllers, such as PID controllers, are model-based, relying on mathematical modeling and knowledge of the process dynamics. The performance of classical PID controllers depends on a well-tuned process model. However, identifying an unstable system model is tedious due to the unbounded nature of the output. Consequently, model-based methods degrade under unstable process dynamics because of inevitable modeling errors \cite{SHUPRAJHAA2022RLPID}. To address this problem, a data-driven and AI-assisted controller is used within this DT that does not require expert knowledge and can achieve robust control. More specifically, reinforcement learning (RL) is used with the DT to control the production process by predicting and deciding the power to be used in the next steps of the production line. The RL framework shown in Figure \ref{fig:mdps} consists of an agent that interacts with a stochastic environment modeled as a Markov decision process (MDP). The goal of RL is to train an agent that can learn an optimal policy \cite{Spielberg2020DeepRL}. In our application, the agent is the controller of the oven power, and the environment is the DT. In practice, we also found that the performance of the DRL is greatly improved if the DRL agent controls only the power. Hence, we reduce the search dimension by keeping the speed constant; this may be improved by using multi-agent DRL in future work.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figures/MDPs.png}
\caption{Markov Decision Process}
\label{fig:mdps}
\end{figure}
Figure \ref{fig:architecture} shows the overall architecture of DT where deep reinforcement learning (DRL) is used within the architecture for decision making. It should be mentioned that the evaluation of the DRL's performance is beyond the scope of this paper. However, the DT serves the decision-making process led by the DRL algorithms. More information on the design and evaluation of this DRL can be found in our latest published study \cite{Ma2022DRL}.
As shown in Figure \ref{fig:architecture}, the simulation model within the DT of the production process acts as the environment for the agent and simulates the physical dynamics at each time step. In the offline training process (blue dashed box), the offline DRL models (holding mode: $DRL\_wh$, normal production: $DRL\_n$) are trained by an agent that observes states and rewards by instrumenting the DT and outputs actions (e.g., adjusting the power) to the simulation model. The temperatures, positions, and speeds of the steel bars are mapped to the states of the agent. The agent observes the states and immediate rewards to update its neural networks. Once the agent is well trained, it can predict the optimal power based on the observed temperatures, positions, and speeds of the steel bars at every time step. The offline models are transferred to online DRL models ($DRL^\ast\_wh$, $DRL^\ast\_n$) using, for example, transfer learning. The online DRL models are deployed on an edge computing platform for real-time interaction with the control process to adjust the parameters of the heating process for the forging line (e.g., power levels).
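As a rough illustration of this offline training process, the following Python sketch shows a generic agent--environment loop in which the DT plays the role of the environment. The \texttt{reset}/\texttt{step}/\texttt{update} interfaces, the state contents, and the reward are assumptions made for illustration; the actual agent design and reward shaping follow the DRL study cited above.
\begin{verbatim}
# Illustrative offline training loop: the DRL agent interacts with the DT as
# its environment. The reset/step/update interfaces and the reward signal are
# assumptions for illustration only.
def train_offline(dt_env, agent, episodes: int = 1000, max_steps: int = 500):
    for _ in range(episodes):
        # State: sensor temperatures, bar positions, and speed read from the DT.
        state = dt_env.reset()
        for _ in range(max_steps):
            action = agent.act(state)                    # e.g., power per zone
            next_state, reward, done = dt_env.step(action)
            agent.update(state, action, reward, next_state, done)
            state = next_state
            if done:
                break
    return agent   # weights later transferred to the online models
\end{verbatim}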
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth, ]{Figures/DT_DRL.pdf}
\caption{Digital Twin-Based Deep Reinforcement Learning Architecture}
\label{fig:architecture}
\end{figure*}
The part in the orange dashed box in Figure \ref{fig:architecture} shows the online control process when using the trained policy. The heating oven is controlled by adjusting the power to the induction coils or changing the speed of the conveyor rollers via a PLC. Actual production data, such as the temperatures in each zone, conveyor movements, and used voltage levels, are recorded through an OPC-UA server. Online DRL models are deployed on an edge computing platform, subscribe to OPC-UA monitoring data, and predict optimal control signals for the next time step, which are forwarded to the PLC by writing to dedicated OPC-UA tags on the server.
As the control policy is trained on a DT, the DT should resemble the conditions of the physical world as closely as possible, so that the offline-learned DRL policy is also a good policy when operating on real production data. If there is a large mismatch between the DT and the real system, the trained control policy will not perform well when fed with real-time data from the actual production line. Therefore, it is important to have a high-fidelity DT, and automated testing of this DT is essential to ensure the high fidelity and accuracy needed to make robust decisions. By comparing real and simulated data, we can measure the error and use that information to fine-tune the DT after testing.
\section{Real-time DT Testing Architecture} \label{sec: DT_Architecture}
\subsection{Testing Agent Architecture}
We aim to design and implement an automated testing architecture that can help evaluate the accuracy of the DT in an online production setting. The main idea is that the live process data, together with the coil power settings, enter the DT, which predicts the temperatures; the testing agent then compares the predictions with the real values of the process and visualizes the deviation. Figure \ref{fig:testingarchitecture} shows the architecture of the testing agent. The architecture obtains real data from the OPC-UA server, creates snapshots continuously, and automatically instruments the DT. The DT receives the preceding snapshot from the real data and runs several simulation steps, and its output is compared with the oncoming snapshot from the process data. Each snapshot acts as a test case, and the deviation calculation acts as a test oracle for each snapshot. The OPC-UA server transfers sensor signal data to other services, including the temperature of the pyrometer sensors, the movement of the conveyor, and the voltage levels sent to the induction coils. Thingsboard Gateway\footnote{Thingsboard web page https://thingsboard.io/} subscribes to OPC-UA tags on the factory server and publishes sensor updates as telemetry data to a local Thingsboard instance using the MQTT protocol, which stores and visualizes sensor data. To build high-performance and scalable data pipelines for DT testing, the Thingsboard server uses a Kafka connector to push sensor data to a Kafka broker. We build a Faust\footnote{Faust web page https://faust.readthedocs.io/en/latest/} agent as a Kafka consumer that subscribes to the Kafka broker and provides different functionalities: DT testing, statistics collection, and real-time error visualization through the GUI. The Faust agent instruments the DT by creating snapshots of the current production line in terms of power, temperatures, bar position, and speed, which are sent to the DT. The DT simulates the expected new temperatures for the given power settings and bar position, which the Faust agent compares against a new snapshot from the production data. For a perfect DT, the new snapshot from the production data should show only a slight error compared to the expected temperatures calculated by the DT.
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth, ]{Figures/test_architecture.pdf}
\caption{Testing Agent Architecture}
\label{fig:testingarchitecture}
\end{figure*}
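The following minimal Faust sketch illustrates how such a Kafka consumer could accumulate telemetry into snapshots and hand them to the DT comparison step. The broker URL, topic name, and record fields are hypothetical placeholders, and \texttt{is\_complete} and \texttt{run\_dt\_and\_compare} are stubs standing in for the snapshot-completeness check and the DT comparison described below; none of this reproduces the deployed configuration.
\begin{verbatim}
# Illustrative Faust agent: consumes telemetry from the Kafka broker and
# accumulates it into snapshots. Broker URL, topic name, and fields are
# hypothetical placeholders.
import faust

class Telemetry(faust.Record, serializer='json'):
    tag: str          # OPC-UA tag (a pyrometer, a coil power, a position, ...)
    value: float
    timestamp: float

app = faust.App('dt-testing-agent', broker='kafka://localhost:9092')
telemetry_topic = app.topic('furnace-telemetry', value_type=Telemetry)

def is_complete(snapshot: dict) -> bool:
    # Placeholder: check that all powers, temperatures, positions, and the
    # speed listed in Table 1 are present.
    return False

def run_dt_and_compare(snapshot: dict) -> None:
    # Placeholder: instrument the DT with the previous snapshot and compare
    # its output with this one (see the snapshot creation methodology).
    ...

@app.agent(telemetry_topic)
async def build_snapshots(stream):
    snapshot = {}
    async for event in stream:
        snapshot[event.tag] = (event.value, event.timestamp)
        if is_complete(snapshot):
            run_dt_and_compare(snapshot)
            snapshot = {}

if __name__ == '__main__':
    app.main()
\end{verbatim}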
\subsection{Snapshot Creation Methodology} \label{subsec:snapshot}
The Faust agent continuously receives a data stream with telemetry information that includes the temperature of each sensor, the position of the head and tail of the bar, and the power in each zone. However, the challenge is that the OPC-UA server may lag in time, telemetry may be lost, or the update intervals of each OPC-UA tag may be different. The DT cannot be evaluated if the data are missing or not coherent. Therefore, we propose a snapshot creation algorithm in which we collect snapshots of the data and compare them with the simulation output.
\begin{table}
\centering
\caption{Snapshot Data Structure}
\begin{tabular}{||c | c||}
\hline
\textbf{Dataset(1/2)} & \textbf{Values }\\ [0.5ex]
\hline\hline
Power & List[pwr] \\
\hline
Temp & List[temp] \\
\hline
Position & List[back, head] \\
\hline
Speed & Float \\[1ex]
\hline
Timestamp & Float \\[1ex]
\hline
\end{tabular}
\label{table:snapshot}
\end{table}
A snapshot is created by continuously collecting telemetry from the real-time stream until the snapshot contains all the information listed in Table \ref{table:snapshot}: the coil power of each zone, the temperature at each sensor for all zones, the bar's position (including the head and tail positions), the speed of the bar, and the timestamp. The power, temperature, and position values are each logged in a list. The speed is stored as a floating point number. The timestamp corresponding to each data sample is also collected. The main procedure is shown in Algorithm \ref{alg:snapshot}. Steps 4 to 10 show that we repetitively use Dataset1 as the DT input and compare the DT output with Dataset2. In addition, we also collect other data to facilitate our analysis, such as the hold mode indicator, which helps to differentiate between normal production mode and hold mode data. This indicator can also be verified from the direction of movement implied by the speed. However, these data do not directly contribute to our snapshot creation algorithm.
\begin{equation}\label{equa: simulation time}
t_{simulation} = (Position_{Dataset2} - Position_{Dataset1})/Speed
\end{equation}
\alglanguage{pseudocode}
\begin{algorithm}
\scriptsize
\caption{Snapshot Creation Algorithm}
\label{alg:snapshot}
\begin{algorithmic}[1]
\State $Dataset1 \gets Snapshot1$
\State $Dataset2 \gets Snapshot2$
\While {True}
\State Use Dataset1 as input for the DT
\State Calculate the simulation time $t_{simulation}$ by Equation \ref{equa: simulation time}
\State Run the DT for $t_{simulation}$ time steps
\State Compare DT output with Dataset2
\State Copy Dataset2 to Dataset1, empty the position in Dataset2
\State Overwrite temperature, power, and speed in Dataset2 until getting a new position
\State Collect snapshot and store it in Dataset2
\EndWhile
\end{algorithmic}
\end{algorithm}
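For concreteness, the following Python sketch renders Algorithm \ref{alg:snapshot} together with Equation \ref{equa: simulation time} in code. The snapshot fields follow Table \ref{table:snapshot} and are stored here as plain dictionaries; the \texttt{dt.load}/\texttt{dt.run}/\texttt{dt.output} interface is a hypothetical stand-in for the actual DT, and steps 8--10 of the algorithm are collapsed into drawing the next complete snapshot from the stream.
\begin{verbatim}
# Illustrative rendering of Algorithm 1 with Equation (1). The DT interface
# and the field layout (position stored as [tail, head]) are assumptions.
import copy

def simulation_time(dataset1: dict, dataset2: dict) -> float:
    """Equation (1): time to move from the position in Dataset1 to Dataset2."""
    return (dataset2['position'][1] - dataset1['position'][1]) / dataset1['speed']

def compare(dt_output: dict, dataset2: dict) -> dict:
    """Per-signal deviation between the DT prediction and the new snapshot."""
    temp_err = [sim - real for sim, real in
                zip(dt_output['temp'], dataset2['temp'])]
    pos_err = dt_output['position'][1] - dataset2['position'][1]
    return {'temperature_errors': temp_err, 'position_error': pos_err}

def test_loop(dt, snapshot_stream):
    dataset1 = next(snapshot_stream)                  # Snapshot1
    dataset2 = next(snapshot_stream)                  # Snapshot2
    while True:
        dt.load(dataset1)                             # step 4: Dataset1 as input
        dt.run(simulation_time(dataset1, dataset2))   # steps 5-6
        yield compare(dt.output(), dataset2)          # step 7: one test case
        dataset1 = copy.deepcopy(dataset2)            # step 8
        dataset2 = next(snapshot_stream)              # steps 9-10: next snapshot
\end{verbatim}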
\section{Experimental Evaluation} \label{sec:results}
Our experimental evaluation aims to understand whether the DT can be automatically tested with the online data stream. By analyzing the test results, we aim to know whether the movement pattern, position update, and sensor temperature update in the DT align with real production. The details of the DT structure are described in Section \ref{sec:DT}. The parameter setting module, warm holding manager, and controller manager receive inputs that include the parameter settings, whether the operation is in normal production or warm holding mode, and the power settings for the DT; they do not contribute to the fidelity of the DT. In addition, the temperature manager defines the internal temperature update over the steel bar, which cannot be observed in real production and hence cannot be tested directly. The fidelity of the DT is directly reflected by the movement manager, which updates the steel bar's speed and position, and the sensor manager, which records the steel bar's temperature. We therefore analyze the movement pattern and position error of the steel bar to evaluate the movement manager, and the sensor temperature error metrics to evaluate the sensor manager, which can indirectly reveal faults in the DT temperature manager.
\subsection{Testing Setup}
The testing data come from the OPC-UA server located at the Bharat Forge factory. The DT is developed by VikingAnalytics and is deployed on a server at Karlstad University, together with the Kafka broker and the Faust agent (Kafka consumer). The Thingsboard Gateway subscribes to the OPC-UA tags and publishes the data stream to a local Thingsboard instance through the MQTT protocol, the Kafka connector pushes the data stream to the Kafka broker, and the Faust agent, as a Kafka consumer, instruments the DT by creating snapshots and compares the DT output with the new snapshot from the production data. As mentioned earlier, each snapshot is considered a test case, and the deviation between the two snapshots of the same instance serves as the test oracle for each test case. If the deviation is larger than the threshold specified in the oracle, the test case is considered failed, and the fault is localized within the snapshot data.
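A minimal sketch of such a test oracle is shown below; it consumes the per-snapshot deviations produced by the comparison step, and the thresholds are hypothetical example values rather than the deployed configuration.
\begin{verbatim}
# Illustrative test oracle: a snapshot (test case) fails when any deviation
# exceeds its threshold; the offending signals localize the fault within the
# snapshot data. Threshold values are hypothetical examples.
def oracle(deviations: dict, temp_threshold: float = 10.0,
           pos_threshold: float = 2.0) -> dict:
    failing = [f'temperature_sensor_{i}'
               for i, e in enumerate(deviations['temperature_errors'])
               if abs(e) > temp_threshold]
    if abs(deviations['position_error']) > pos_threshold:
        failing.append('position')
    return {'passed': not failing, 'failing_signals': failing}
\end{verbatim}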
\subsection{Testing Results}
Figure \ref{fig:resultNP}(a) shows the movement of the steel bars in the furnace during normal production. The front bar of the consecutive rods leaves the furnace and enters a cutting-sequence cycle, which is visible as the head moving back and forth. The position of the tail drops at telemetry samples 200 and 500 because new bars are added to the furnace. During hold mode, the consecutive steel bars move forward and backward at a constant speed. This clearly shows whether the current operation is in normal production mode or hold mode, which corresponds to the hold mode indicator. Figure \ref{fig:resultWH}(a) shows how the steel bars move and change direction at regular intervals. We continuously run the testing on production data. The test oracle calculates the error for each snapshot and for each sensor within a snapshot, i.e., the difference between the temperature and position estimated by the DT and the temperature and position in the real production data, and decides which snapshots (i.e., test cases) passed or failed. We present results from the position sensor and two temperature sensors (one in Zone 1 and one in Zone 2).
\begin{figure*}
\centering
\subfloat[Steel bar's movement]{\includegraphics[scale=0.4]{Figures/np_rod_movement.png}}
\subfloat[Sensor\_Position: Position error]{\includegraphics[scale=0.4]{Figures/np_pos.png}}
\subfloat[Sensor1\_3: Temperature error]{\includegraphics[scale=0.4]{Figures/np_s1_3.png}}
\subfloat[Sensor2\_2: Temperature error]{\includegraphics[scale=0.4]{Figures/np_s2_2.png}} \\
\caption{(a). Shows the movement of the tail and head of the steel bar during normal production mode. Probability density function(PDF) and cumulative distribution function(CDF) of errors for each sensor in this figure: (b). The position sensor (c). The $3^{rd}$ sensor in Zone 1 (d). The $2^{nd}$ sensor in Zone 2}
\label{fig:resultNP}
\end{figure*}
\begin{figure*}
\centering
\subfloat[Steel bar's movement]{\includegraphics[scale=0.4]{Figures/wh_rod_movement.png}}
\subfloat[Sensor\_Position: Position error]{\includegraphics[scale=0.4]{Figures/wh_pos.png}}
\subfloat[Sensor1\_3: Temperature error]{\includegraphics[scale=0.4]{Figures/wh_s1_3.png}}
\subfloat[Sensor2\_2: Temperature error]{\includegraphics[scale=0.4]{Figures/wh_s2_2.png}} \\
\caption{(a). Presents the movement of the tail and head of the steel bar during holding mode. Probability density function(PDF) and cumulative distribution function(CDF) of errors for each sensor in this figure: (b). The position sensor (c). The $3^{rd}$ sensor in Zone 1 (d). The $2^{nd}$ sensor in Zone 2}
\label{fig:resultWH}
\end{figure*}
As can be seen, the position sensor in Figure \ref{fig:resultNP}(b) shows a position error of -2.5mm to 1.0mm with a spike in the probability density function (PDF) around -0.8mm during normal production, which indicates a fault in the DT. The error deviation during hold mode (Figure \ref{fig:resultWH}(b)) is between -3mm and 2mm with a spike between -1mm and 0mm. Temperature sensor Sensor1\_3 in Zone 1 deviates during normal production from the value expected by the DT by between -15°C and 10°C, as shown in Figure \ref{fig:resultNP}(c), with a PDF spike at 0°C (no deviation). According to the cumulative distribution function (CDF) graph, the probability that the DT deviates between -4°C and 5°C is approximately 95-10=85\%. However, during the holding mode (Figure \ref{fig:resultWH}(c)), the deviation of the DT increases, and the probability of a deviation between -4°C and 5°C is 85-20=65\%.
The second temperature sensor in Zone 2 (Sensor2\_2) deviates during normal production from the expected value calculated by the DT within the interval -70°C to 30°C (Figure \ref{fig:resultNP}(d)). In the holding mode, the deviation is between -90°C and 40°C (Figure \ref{fig:resultWH}(d)). The PDF in Figure \ref{fig:resultNP}(d) shows an error spike at around -45°C during normal production. However, during holding mode, the errors are mainly distributed around -80°C and 0°C (Figure \ref{fig:resultWH}(d)).
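The PDF and CDF curves discussed above are obtained from the collected per-sensor errors; the following sketch shows one possible computation with NumPy, where the bin count is an arbitrary choice and \texttt{errors} stands for the error samples of a single sensor.
\begin{verbatim}
# Illustrative computation of the empirical error PDF and CDF per sensor.
import numpy as np

def error_distribution(errors: np.ndarray, bins: int = 50):
    """Normalized histogram (PDF) and its cumulative sum (CDF)."""
    pdf, edges = np.histogram(errors, bins=bins, density=True)
    cdf = np.cumsum(pdf * np.diff(edges))   # CDF at the right bin edges
    return edges, pdf, cdf

def prob_in_interval(errors: np.ndarray, lo: float, hi: float) -> float:
    """Probability mass in [lo, hi], e.g. P(-4 <= error <= 5) read off the CDF."""
    e = np.sort(errors)
    return (np.searchsorted(e, hi, side='right')
            - np.searchsorted(e, lo, side='left')) / len(e)
\end{verbatim}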
\subsection{Discussion}
As can be seen in the results, a significant temperature deviation between the DT and the real production occurred a few times, indicating faults in the DT software. Without the continuous testing process and architecture, detecting and fixing those faults would have been impossible. Apart from those fixed faults, there are also multiple reasons for significant temperature deviations between the DT and the real production. During the time between the collection of snapshots, the coils' power may have been updated. Since the DT does not update the power during this time period, the temperature output can differ in such cases, especially if a large power change occurred. For example, if the power for Zone 3 is high in Dataset1 but suddenly changes to low before Dataset2 is collected, the DT would run the coils of Zone 3 at high power during the whole interval, which may lead to temperature errors reported by the DT. Another reason could be that some sensors are not well calibrated in reality and constantly record the temperature with a significant deviation, while such a deviation does not exist in the DT. Furthermore, the current DT does not consider radial or axial thermal conduction inside the steel bar, which, in reality, is especially prominent at the start of the heating process when there is a large temperature gradient between the surface and the core, and the temperature sensors only record the surface temperature. All of these approximations affect the experimental results. However, our aim in this paper is to provide a systematic DT testing method that can help other researchers and practitioners speed up their testing using a data-driven approach.
\section{Threats to Validity and Generalizability of the Implementation}\label{ThreatsToValidity}
There are several threats to the validity of our study on automated and systematic digital twins testing for industrial processes, and here are some ways we tried to mitigate them. One threat is selection bias, as we only included samples from our industrial processes case study. To mitigate this threat, we used a random sampling approach to select the processes for our study and ensured that our sample was diverse in terms of size, duration, and complexity. Furthermore, we continuously executed our testing process in the deployed architecture for a long time to capture snapshots.
Another threat is construct validity, as we used a self-developed testing approach to evaluate the DT. To mitigate this threat, we thoroughly reviewed the literature to ensure that our approach was grounded in existing theories and practices, and we sought feedback from industry experts in the field to validate the suitability of our approach. Furthermore, there is a threat of measurement error, as we relied on various sensors and monitoring devices to collect data on the DT. To mitigate this threat, we carefully calibrated all our devices before the study and performed ongoing checks throughout the study to ensure their accuracy. Overall, these efforts helped to reduce the potential impact of these threats on the validity of our study.
In terms of the generalizability of the proposed testing method, we believe it will likely apply to a wide range of industrial processes that use DTs. This is because our approach is systematic and automated, allowing it to be easily reproduced and applied to other contexts. In addition, we designed our approach to be flexible and adaptable so that it can be tailored to the specific needs of different processes. To further support the generalizability of our findings, we plan to conduct additional studies in different industrial settings and with different types of DT to examine the robustness of our approach. It is important to note that while the DT we are testing in this paper is specifically designed for the induction heating process at Smart Forge, our testing method can be utilized for other DT applications as long as they utilize sensor data. This is because our method evaluates the accuracy of the sensor data being used by the DT and its simulated data generation, a crucial aspect of any DT application. By verifying the accuracy of the sensor data, we can increase the overall accuracy and effectiveness of the DT. Therefore, our testing method can be applied to various DT applications in different fields.
\section{Conclusion and Future Work} \label{sec:conclusion}
This paper introduced a systematic testing architecture that uses production data to achieve automated DT testing. We presented a systematic method for using the DT to train a DRL model and described how the DT simulates the induction heating line. We proposed a snapshot creation algorithm to test the DT with real production data in a real-time setting. The evaluation results show that our approach can automatically detect faults in the DT by using real production data, such as faults in the movement of the steel bar. Our method also analyzes error metrics of the temperature and position of the steel bar. Future work will focus on improving the snapshot creation algorithm to handle corner cases where sudden power changes occur during the DT run time, and on improving DT accuracy by considering radial and axial thermal conduction inside the bar.
\section*{Acknowledgement}
This work was partially funded by Vinnova through the SmartForge project. Additional funding was provided by the Knowledge Foundation of Sweden (KKS) through the Synergy Project AIDA - A Holistic AI-driven Networking and Processing Framework for Industrial IoT (Rek:20200067).
\vspace{12pt}
\bibliographystyle{IEEEtran}
\balance
|
{
"arxiv_id": "2302.13263",
"language": "en",
"timestamp": "2023-02-28T02:14:36",
"url": "https://arxiv.org/abs/2302.13263",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
Road extraction from satellite imagery is a long-standing computer vision
task and a hot research topic in the field of remote sensing. Extracting
roads is challenging due to the complexity of road networks along with the
diversity of geographical environments. With the rapid development of image
interpretation and the cutting-edge architecture of Convolutional Neural
Networks (CNNs) designed for computer vision tasks, automatically
extracting wide-coverage road graphs from high-resolution aerial images
becomes more accurate and efficient. As shown in Fig. \ref{fig1}, prior prevailing
works fall into two main categories: pixel-wise semantic segmentation-based
approaches and graph-based approaches. Semantic segmentation-based approaches
perform binary classification to distinguish road pixels from background pixels
and form segmentation maps as an intermediate representation of road
graphs. Road networks are constructed by road centerlines and edges through
skeletonizing segmentation masks and edge detection. In contrast,
graph-based approaches extract road graphs directly by yielding vertices
and edges of road networks. The iterative exploration paradigm has been
utilized to search for the next move along the roads in satellite imagery.
\begin{figure}
\includegraphics[width=13cm]{images/intro.jpg}
\caption{Illustration of segmentation-based approaches and graph-based approaches for road extraction from satellite imagery. Red and orange arrows present segmentation-based approaches that perform road segmentation with skeletonization and vectorization. The yellow arrow stands for the iterative exploration utilized in graph-based approaches.}
\label{fig1}
\end{figure}
However, extensive experiments reveal that both of these approaches have
drawbacks from certain evaluation perspectives. Under severe occlusion and
in geographical environments of high complexity, segmentation-based
approaches usually yield road graphs with low connectivity. On the other
hand, graph-based approaches benefit from iterative exploring paradigms
that overcome connectivity issues, but this leads to smaller receptive
fields and a focus on local features without a global perspective.
Moreover, the iterative exploring paradigm only searches one step at a
time, which makes road graph inference time-consuming.
Based on the above discussion, we propose a new scheme for satellite imagery road extraction, patch-wise road keypoints detection (termed PaRK-Detect), which is inspired by keypoint detection and retains the inherent advantage of the global receptive field of semantic
segmentation-based approaches while avoiding their drawback that the absence of connectivity supervision harms road network topology. More specifically, by dividing high-resolution satellite imagery into small patches, the road graph is partitioned section by section. The framework, built on top of the D-LinkNet architecture, determines whether each patch contains a road section and, if so, detects the road keypoint in that patch. The framework also establishes the adjacency relationships that represent the connection status between road patches. In this way, our proposed approach enables road graph construction in a single pass rather than through iterative exploration, which yields a much faster inference speed. Meanwhile, our proposed multi-task network also performs pixel-wise road semantic segmentation to enrich contextual information and enhance information capture capabilities.
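As a rough illustration of what such a patch-wise prediction can look like, the following PyTorch sketch decodes per-patch road, keypoint, and adjacency outputs into keypoint coordinates and links. The patch size of 16, the 8-neighbour adjacency, and the tensor layout are assumptions made for illustration and do not reproduce the exact PaRK-Detect encoding.
\begin{verbatim}
# Illustrative decoding of patch-wise outputs into road keypoints and links.
# Patch size, 8-neighbour adjacency, and tensor layout are assumptions.
import torch

def decode_patch_outputs(road_logit, keypoint_offset, adjacency_logit, patch=16):
    """road_logit:      (H/patch, W/patch)    does the patch contain a road?
       keypoint_offset: (2, H/patch, W/patch) keypoint location within the patch
       adjacency_logit: (8, H/patch, W/patch) links to the 8 neighbouring patches
    """
    road_mask = road_logit.sigmoid() > 0.5
    ys, xs = torch.nonzero(road_mask, as_tuple=True)
    # Keypoint coordinates in image space: patch origin plus predicted offset.
    kx = (xs + keypoint_offset[0, ys, xs]) * patch
    ky = (ys + keypoint_offset[1, ys, xs]) * patch
    links = adjacency_logit[:, ys, xs].sigmoid() > 0.5  # connected neighbours
    return torch.stack([kx, ky], dim=1), links
\end{verbatim}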
Through evaluating our approach against the existing state-of-the-art methods
on the publicly available DeepGlobe dataset, Massachusetts Roads dataset, and
RoadTracer dataset, we show that our new scheme achieves competitive or better results
based on several metrics for road extraction and yields road networks with high
accuracy and topological integrity.
Our main contributions can be concluded as follows:
\begin{itemize}
\item We propose a patch-wise road keypoints detection scheme that combines
road extraction with keypoint detection to construct road graphs in a single
pass.
\item We propose an efficient multi-task framework that performs road
segmentation and road graph extraction simultaneously.
\item We conduct extensive experiments over widely-used datasets and achieve
competitive or better results using several road extraction metrics.
\end{itemize}
\section{Related Work}
{\bf Pixel-wise Semantic Segmentation-based Approaches.} With the increasing
popularity and dominance of convolutional neural network (CNN) architecture,
utilizing the symmetrical~\cite{ronneberger2015u,chaurasia2017linknet}
or heterogeneous~\cite{long2015fully,zhao2017pyramid,chen2017rethinking}
encoder-decoder framework becomes the favorite paradigm for semantic segmentation
tasks. Road segmentation is a binary classification task that splits pixels of
remote sensing images into road and non-road. Many refined and improved
segmentation networks~\cite{buslaev2018fully,zhang2018road,zhou2018d,mattyus2017deeproadmapper}
have been proposed for this scene.
For example, Buslaev \emph{et al}\bmvaOneDot~\cite{buslaev2018fully} proposed a fully convolutional
network (FCN) that consists of pretrained ResNet~\cite{he2016deep} and a decoder
adapted from vanilla U-Net~\cite{ronneberger2015u}.
Zhang \emph{et al}\bmvaOneDot~\cite{zhang2018road} proposed ResUnet, an elegant architecture with
better performance that combines the strengths of residual learning and U-Net.
Zhou \emph{et al}\bmvaOneDot~\cite{zhou2018d} proposed D-LinkNet, which is built on the LinkNet~\cite{chaurasia2017linknet} architecture and adopts dilated convolution layers in its central part to enlarge
the receptive field and ensemble multi-scale features.
DeepRoadMapper~\cite{mattyus2017deeproadmapper} improves the loss function and
the post-processing strategy that reasons about missing connections in the
extracted road topology as the shortest-path problem.
Although these powerful networks provide a global perspective of road
networks and detailed road features, segmentation-based approaches suffer
from the absence of connectivity supervision and yield segmentation masks
with redundant road information that is meaningless for road graph construction.
Our proposed patch-wise road keypoints detection scheme fully considers the
connectivity status and focuses on the road graph while retaining a global perspective.
\\ \hspace*{\fill} \\
\noindent {\bf Graph-based Approaches.} Unlike semantic segmentation approaches,
graph-based approaches directly extract vertices and edges to construct road graphs.
These methods exploit an iterative exploration paradigm: the next moves
searched along the road are successively added to the existing road graph.
For instance, RoadTracer~\cite{bastani2018roadtracer} utilizes the iterative searching
guided by a CNN-based decision function that decides the stop probability
and angle vector of the next move, while VecRoad~\cite{tan2020vecroad} proposes a
point-based iterative exploration scheme with segmentation-cues guidance and
flexible steps.
Apart from these iterative methods, He \emph{et al}\bmvaOneDot~\cite{he2020sat2graph} proposed
Sat2Graph with a graph-tensor encoding (GTE) scheme which encodes the road graph
into a tensor representation. Bahl \emph{et al}\bmvaOneDot~\cite{bahl2021road} combined an FCN in
charge of locating road intersections, dead ends, and turns, and a Graph Neural
Network (GNN) that predicts links between these points. Our scheme constructs
road graphs in a single pass through detecting patch-wise road keypoints without
the iterative exploring paradigm.
\\ \hspace*{\fill} \\
\noindent {\bf Multi-Task Road Extraction Approaches.} Road extraction can have
different purposes and tasks in different scenarios. City planning and road network
updating focus on topology information in road graphs, while emergency response
may focus on road damage status that can be evaluated through the segmentation mask.
Cheng \emph{et al}\bmvaOneDot~\cite{cheng2017automatic} proposed CasNet, a cascaded end-to-end CNN,
to simultaneously cope with road detection and centerline extraction.
Yang \emph{et al}\bmvaOneDot~\cite{yang2019road} proposed a recurrent convolution neural network
U-Net (RCNN-UNet) with the multitask learning scheme.
Wei \emph{et al}\bmvaOneDot~\cite{wei2020simultaneous} extracted road surface and centerline concurrently
and proposed a multistage framework that consists of boosting segmentation, multiple
starting points tracing, and fusion.
Our proposed framework conducts road segmentation and road graph construction
simultaneously to improve accuracy, robustness, and efficiency.
\\ \hspace*{\fill} \\
\noindent {\bf Keypoint Detection.} In recent years, keypoint detection has attracted
substantial attention and has promising applications in autonomous driving
and human tracking. Tasks such as human pose estimation, facial landmark detection, and detection
of specific object parts are collectively referred to as keypoint detection.
Since the pioneering work DeepPose~\cite{toshev2014deeppose}, many DNN-based methods
for keypoint detection have been proposed.
Basically, constructing ground truth falls into three categories: directly using
coordinates of keypoints~\cite{toshev2014deeppose}, constructing heatmap~\cite{newell2016stacked,cai2020learning}, and jointly utilizing heatmap and offsets~\cite{shi2019improved}.
The stacked hourglass network (SHN)~\cite{newell2016stacked} has been widely used because its
repeated bottom-up, top-down processing with successive pooling and upsampling extracts
features from feature maps of different scales, achieving strong performance.
Cai \emph{et al}\bmvaOneDot~\cite{cai2020learning} proposed Residual Steps Network (RSN) that aggregates
intra-level features to retain rich low-level spatial information. Additionally,
Pose Refine Machine (PRM), an efficient attention mechanism, has been proposed
to make a trade-off between the local and global representation for further refinement.
Shi \emph{et al}\bmvaOneDot~\cite{shi2019improved} proposed a novel improved SHN with the multi-scale
feature aggregation module designed for accurate and robust facial landmark detection.
Then, an offset learning sub-network is adopted to refine the inferred keypoints.
Inspired by these works, our proposed approach performs patch-wise offset learning for
road keypoints.
\section{Methods}
This section presents our proposed patch-wise road keypoints detection scheme. Section \ref{scheme} defines the fundamental elements of road status used in the patch-wise road representation and illustrates the patch-wise road keypoints detection scheme in detail. With the definitions of the road representation and the procedures of the scheme established, the architecture of the proposed multi-task framework is explained in Section \ref{framework}. We also describe the novel learning objectives designed for our scheme in Section \ref{loss function}. Finally, an effective graph optimization strategy is briefly illustrated in Section \ref{post process}.
\begin{figure}
\centering
\includegraphics[width=13cm]{images/scheme.jpg}
\caption{{\bf Illustration of PaRK-Detect scheme. Left:} blue patches contain road while white patches are non-road, black dots are road keypoints, and green lines represent links. {\bf Right:} The reference point of relative offset is the upper left corner of a patch. Dark yellow patches are linked with the center patch while light yellow ones are not. We order the eight adjacent patches into numbers 0-7. Here the linked patches are 2, 6, and 7.}
\label{fig2}
\end{figure}
\subsection{PaRK-Detect Scheme}
\label{scheme}
As introduced above, our PaRK-Detect scheme detects patch-wise road keypoints and constructs road graphs in a single pass. A high-resolution satellite image (e.g. of size 1024$\times$1024 pixels) is divided into non-overlapping patches (e.g. $64^{2}$ patches of size 16$\times$16 pixels), each serving as the minimum unit of road representation. Fig. \ref{fig2} illustrates the PaRK-Detect scheme. Three fundamental elements are defined for road representation in each patch: $P$, $S$, and $L$.
After dividing the entire satellite image into $N^{2}$ non-overlapping small patches, the road network is likewise partitioned section by section by the grid. $P$ stands for the probability that a patch contains a road section. If a patch does not cover any road, it is not considered further.
For patches that cover road, $S$ stands for the position of the patch-wise road keypoint, defined as the most crucial point in the patch: a road intersection has higher priority than a road endpoint. If a patch contains a road section but no intersection or endpoint, the keypoint is defined at the geometric center of the road segment in that patch, also called the midpoint. We use a relative offset to represent the position of the patch-wise road keypoint.
Furthermore, $L$ stands for the link status, i.e. the connectivity status, between these patches. Each patch can be linked to the eight patches around it. If two adjacent patches are linked, there exists a road segment between their road keypoints. We use eight probability values to represent the link status. A road graph consists of vertices and edges, which are formed by these keypoints and links.
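To make the patch-wise representation concrete, a minimal sketch of how ground-truth $P$, $S$, and $L$ labels could be derived from an annotated road graph is given below. The helper name, the assumption of at most one keypoint per patch, and the particular 0--7 neighbour ordering are illustrative choices and not taken from our implementation.
\begin{verbatim}
import numpy as np

def build_patch_labels(keypoints, edges, img_size=1024, patch=16):
    """Sketch: patch-wise road labels (P, S, L) from an annotated road graph.

    keypoints: list of (x, y) pixel coordinates of road keypoints (assumed at
               most one per patch for simplicity).
    edges:     list of (i, j) index pairs connecting keypoints of adjacent patches.
    Returns P (N,N), S (N,N,2) relative offsets in [0,1], L (N,N,8) link flags.
    """
    n = img_size // patch                          # patches per side, e.g. 64
    P = np.zeros((n, n), dtype=np.float32)
    S = np.zeros((n, n, 2), dtype=np.float32)
    L = np.zeros((n, n, 8), dtype=np.float32)
    # Eight neighbours indexed 0-7 in row-major order (an assumed convention).
    neigh = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

    patch_of = {}
    for idx, (x, y) in enumerate(keypoints):
        r, c = int(y) // patch, int(x) // patch
        patch_of[idx] = (r, c)
        P[r, c] = 1.0
        # Offset of the keypoint relative to the upper-left corner of its patch.
        S[r, c] = [(x % patch) / patch, (y % patch) / patch]

    for i, j in edges:
        (ri, ci), (rj, cj) = patch_of[i], patch_of[j]
        dr, dc = rj - ri, cj - ci
        if (dr, dc) in neigh:                      # only adjacent patches can link
            L[ri, ci, neigh.index((dr, dc))] = 1.0
            L[rj, cj, neigh.index((-dr, -dc))] = 1.0
    return P, S, L
\end{verbatim}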
\subsection{Framework Architecture}
\label{framework}
\begin{figure}
\includegraphics[width=13cm]{images/framework.jpg}
\caption{{\bf Overview of our proposed multi-task framework architecture.} The rectangles are feature maps of different scales. {\bf I:} input satellite image, {\bf P:} patch-wise road probability, yellow patches represent non-road while white patches represent road, {\bf S:} patch-wise road keypoint position, {\bf L:} patch-wise link status, {\bf G:} road graph, {\bf M:} road segmentation mask. Here we just show $32^{2}$ patches out of $64^{2}$ for better presentation.}
\label{fig3}
\end{figure}
As illustrated in Fig. \ref{fig3}, we design a multi-task framework for our PaRK-Detect scheme that performs road segmentation and road graph construction simultaneously. Following the encoder-decoder paradigm, we use ResNet34~\cite{he2016deep} pretrained on the ImageNet~\cite{deng2009imagenet} dataset as the encoder to extract high-level semantic information such as road features (e.g. the encoder downsamples satellite images of size 1024$\times$1024 and outputs feature maps of size 32$\times$32). After the encoder, we also adopt dilated convolutions with skip connections, the same as in the bottleneck of D-LinkNet~\cite{zhou2018d}, to increase the receptive field of feature points while keeping detailed information.
For road segmentation, the framework retains the decoder that utilizes transposed convolutions and skip connections, the same as LinkNet~\cite{chaurasia2017linknet}, restoring the resolution of the feature map (e.g. from 32$\times$32 to 1024$\times$1024).
For road graph construction, three networks are designed to predict $P$, $L$, and $S$ respectively. These three fundamental elements of the patch-wise road representation are indispensable yet sufficient to construct road graphs. The patch-wise probability predicting network classifies whether a patch covers road. The patch-wise link predicting network determines the connection status between a patch and its eight adjacent patches. The patch-wise position predicting network locates the keypoint in each patch. The probability and link predicting networks consist only of convolutional layers and batch normalization, with feature maps of the same size (e.g. 64$\times$64) and decreasing channels. To keep low-level detailed information, the position predicting network additionally incorporates joint pyramid upsampling (JPU)~\cite{wu2019fastfcn} for multi-scale feature fusion. The outputs of these three networks are feature maps (e.g. of size 64$\times$64) with 1, 8, and 2 channels respectively.
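For concreteness, a minimal PyTorch-style sketch of the three prediction heads is shown below. The channel widths, the use of ReLU activations, and the omission of the JPU fusion in the position head are simplifying assumptions rather than the exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class PatchHeads(nn.Module):
    """Sketch of the three patch-wise prediction heads on top of the shared
    encoder/bottleneck, assumed to output a 64x64 feature map (one cell per
    16x16 patch of a 1024x1024 image).  Channel widths are illustrative."""
    def __init__(self, in_ch=256):
        super().__init__()
        def head(out_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
                nn.Conv2d(128, 64, 3, padding=1),  nn.BatchNorm2d(64),  nn.ReLU(),
                nn.Conv2d(64, out_ch, 1))
        self.prob_head = head(1)   # P: probability that the patch contains road
        self.link_head = head(8)   # L: link status towards the 8 neighbours
        self.pos_head  = head(2)   # S: (dx, dy) keypoint offset inside the patch

    def forward(self, feat):                   # feat: (B, in_ch, 64, 64)
        P = torch.sigmoid(self.prob_head(feat))
        L = torch.sigmoid(self.link_head(feat))
        S = torch.sigmoid(self.pos_head(feat))   # offsets normalised to [0, 1]
        return P, L, S
\end{verbatim}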
\subsection{Learning Objectives}
\label{loss function}
To train our multi-task framework, we define a joint learning objective for road segmentation and road graph construction. The joint learning objective can be formulated as follows:
$$\mathcal{L} = \mathcal{L}_{seg} + \mathcal{L}_{graph}$$
Road segmentation loss remains the same as segmentation-based methods~\cite{zhou2018d} that evaluate the difference between labels and predicted masks with binary cross entropy (BCE) and dice coefficient loss.
For road graph construction, we define the training loss functions $\mathcal{L}_{P}$, $\mathcal{L}_{L}$, and $\mathcal{L}_{S}$ for $P$, $L$, and $S$ respectively as follows:
$$\mathcal{L}_{graph} = \alpha\mathcal{L}_{P} + \beta\mathcal{L}_{S} + \gamma\mathcal{L}_{L}$$
$$\mathcal{L}_{P}= -\sum_{i=1}^{N^{2}}\left[P_{gt}^{i}\log{P_{pre}^{i}}+\left(1-P_{gt}^{i}\right)\log{\left(1-P_{pre}^{i}\right)}\right]$$
$$\mathcal{L}_{L}= -\frac{1}{\left|\Omega_{P}\right|}\sum_{i\in\Omega_{P}}\sum_{j=1}^{8}\left[L_{gt}^{ij}\log{L_{pre}^{ij}}+\left(1-L_{gt}^{ij}\right)\log{\left(1-L_{pre}^{ij}\right)}\right]$$
$$\mathcal{L}_{S}= \frac{1}{\left|\Omega_{P}\right|}\sum_{i\in\Omega_{P}}\sum_{j=1}^{2}\left|S_{gt}^{ij}-S_{pre}^{ij}\right|$$
where $\Omega_{P}=\left\{i\mid P_{gt}^{i}=1,i\in\left[1,N^{2}\right]\right\}$ is the set of patches that contain road, and $j$ indexes the dimensions of each patch-wise element. The patch-wise road probability loss and the patch-wise link status loss use binary cross entropy (BCE), while the patch-wise road keypoint position loss uses the mean absolute error (MAE). Since we only consider patches that cover road, and ignore the others, when locating road keypoints and evaluating connection status, we define partial losses for $L$ and $S$. $\alpha$, $\beta$, and $\gamma$ are the weights of the loss functions for the three patch-wise elements.
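A minimal PyTorch-style sketch of $\mathcal{L}_{graph}$ is given below, assuming the network outputs have already been passed through a sigmoid; the tensor layouts are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def graph_loss(P_pre, L_pre, S_pre, P_gt, L_gt, S_gt,
               alpha=0.5, beta=1.0, gamma=1.0):
    """Sketch of L_graph = alpha*L_P + beta*L_S + gamma*L_L.

    P_*: (B, N, N) road probabilities, L_*: (B, N, N, 8) link maps,
    S_*: (B, N, N, 2) keypoint offsets.  L_L and L_S are averaged only over
    patches that contain road (Omega_P), matching the partial losses above.
    """
    loss_P = F.binary_cross_entropy(P_pre, P_gt, reduction='sum')

    road = P_gt > 0.5                                  # Omega_P mask
    n_road = road.sum().clamp(min=1)
    loss_L = F.binary_cross_entropy(L_pre[road], L_gt[road],
                                    reduction='sum') / n_road
    loss_S = (S_pre[road] - S_gt[road]).abs().sum() / n_road

    return alpha * loss_P + beta * loss_S + gamma * loss_L
\end{verbatim}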
\begin{figure}
\includegraphics[width=6.5cm]{images/graph_optimization_1.jpg}
\includegraphics[width=6.5cm]{images/graph_optimization_2.jpg}
\caption{{\bf Illustration of graph optimization strategies. Left:} connecting adjacent but unconnected endpoints. Red solid lines are links added while red dotted lines are links that should not be added. {\bf Right:} removing triangle and quadrilateral. Red dotted lines are links removed.}
\label{fig4}
\end{figure}
\subsection{Graph Optimization}
\label{post process}
As shown in Fig. \ref{fig4}, we develop several graph optimization strategies to refine the patch-wise link status to improve the performance of road network construction.
{\bf Connecting adjacent but unconnected endpoints.} Due to occlusion by buildings and trees, the predicted connectivity status of adjacent patches does not always reflect reality. Through extensive observation, we find that adjacent but unconnected endpoints predicted by the framework are in most cases actually connected.
{\bf Removing triangle and quadrilateral.} Some intersections are so large that they cover several patches, which leads to unnecessary links between these patches and inaccurate road graphs. We remove the diagonal link if a triangular connection between adjacent patches appears; likewise, we remove the longest link if a quadrilateral connection between adjacent patches appears.
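A minimal sketch of these two heuristics on a patch-level graph is given below; the graph representation, the endpoint-adjacency threshold, and the use of the geometrically longest edge as the ``diagonal'' are illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import itertools
import networkx as nx

def optimize_graph(G, endpoint_dist=2):
    """Sketch of the two post-processing heuristics.  G is a networkx Graph whose
    nodes are patch coordinates (row, col) carrying an 'is_endpoint' flag."""
    # 1) Connect adjacent but unconnected endpoints (occlusion by trees/buildings).
    endpoints = [n for n, d in G.nodes(data=True) if d.get('is_endpoint')]
    for u, v in itertools.combinations(endpoints, 2):
        if not G.has_edge(u, v) and \
                max(abs(u[0] - v[0]), abs(u[1] - v[1])) <= endpoint_dist:
            G.add_edge(u, v)

    # 2) Remove triangles around large intersections: for each 3-cycle, drop the
    #    geometrically longest link (used here as a proxy for the "diagonal").
    triangles = [c for c in nx.enumerate_all_cliques(G) if len(c) == 3]
    for a, b, c in triangles:
        edges = [(a, b), (b, c), (a, c)]
        if all(G.has_edge(*e) for e in edges):
            longest = max(edges, key=lambda e: abs(e[0][0] - e[1][0])
                                             + abs(e[0][1] - e[1][1]))
            G.remove_edge(*longest)
    return G
\end{verbatim}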
\section{Experiments}
\subsection{Experimental Datasets}
{\bf DeepGlobe Road Extraction Dataset}~\cite{demir2018deepglobe} is collected from the DigitalGlobe platform and other datasets, and consists of high-resolution satellite images of size 1024$\times$1024 pixels. It covers images captured over Thailand, Indonesia, and India, and the ground resolution is 0.5m/pixel. We utilize the original training set which consists of 6226 images and randomly split it into 3984, 997, and 1245 images for training, validation, and testing respectively.
{\bf Massachusetts Roads Dataset}~\cite{mnih2013machine} is collected from publicly available imagery and metadata and consists of satellite images of size 1500$\times$1500 pixels. It covers images captured over a wide variety of urban, suburban, and rural regions of the state of Massachusetts, and the ground resolution is 1m/pixel. We crop the 1171 original images into 3168 images of size 1024$\times$1024 pixels and randomly split them into 2027, 507, and 634 images.
{\bf RoadTracer Dataset}~\cite{bastani2018roadtracer} is collected from Google and consists of high-resolution satellite images of size 4096$\times$4096 pixels. It covers the urban core of forty cities across six countries, and the ground resolution is 0.6m/pixel. We crop and resize the training set, which consists of 300 images, into 1200 images of size 1024$\times$1024 pixels and randomly split it into 960 and 240 images for training and validation. Fifteen 8192$\times$8192 images are used for testing.
The dataset splits aim for an approximate 64$\%$/16$\%$/20$\%$ distribution.
\subsection{Implementation Details}
All the experiments are conducted with the PyTorch framework on NVIDIA 3090Ti GPUs.
We use the DeepGlobe Road Extraction Dataset~\cite{demir2018deepglobe}, Massachusetts Roads Dataset~\cite{mnih2013machine}, and RoadTracer Dataset~\cite{bastani2018roadtracer} for our experiments. The input satellite images have a resolution of 1024$\times$1024 and the patch size is set to 16$\times$16. During training, the batch size is fixed at 4 and the total number of epochs is set to 300. We use the Adam optimizer with the learning rate initially set to 2e-4 and reduced by a factor of 5 whenever the loss plateaus for several epochs. The weights of the loss functions $\alpha$, $\beta$, and $\gamma$ are set to 0.5, 1, and 1 respectively. To avoid overfitting, we utilize a variety of data augmentation techniques, including horizontal flip, vertical flip, random rotation, and color jittering. We use the pixel-based F1 score~\cite{van2018spacenet} and the APLS metric~\cite{van2018spacenet} to evaluate road graphs, and the IoU score to evaluate segmentation performance.
\subsection{Comparison with Other Methods}
For segmentation-based approaches, we compare the PaRK-Detect scheme with D-LinkNet~\cite{zhou2018d} on the DeepGlobe and Massachusetts Roads datasets. The comparison results are shown in Tab.\;\ref{Table1}. Note that, for evaluating the APLS metric, road graphs can be constructed by skeletonizing the segmentation masks generated by segmentation-based approaches. Focusing only on pixel-wise semantic features, D-LinkNet captures more road morphological information and contour features; however, much of this is redundant for constructing road graphs. In contrast, our approach provides connectivity supervision while retaining the global receptive field, and it achieves outstanding performance on the pixel-based F1 score and APLS metric. Besides, as shown in Tab.\;\ref{inferspeed}, our method outperforms D-LinkNet by 23.1$\%$ in terms of inference speed, since segmentation-based approaches require skeletonization and vectorization as post-processing.
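For reference, the skeletonization-and-vectorization step that segmentation-based baselines need before APLS evaluation, and that our single-pass scheme avoids, could look like the sketch below; this is an illustration rather than the exact post-processing pipeline used for the baselines.
\begin{verbatim}
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

def mask_to_graph(mask):
    """Skeletonize a binary road mask and connect 8-neighbouring skeleton
    pixels into a graph, the extra vectorization step of segmentation baselines."""
    skel = skeletonize(mask > 0)
    H, W = skel.shape
    G = nx.Graph()
    ys, xs = np.nonzero(skel)
    for y, x in zip(ys, xs):
        G.add_node((y, x))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                yy, xx = y + dy, x + dx
                if (dy or dx) and 0 <= yy < H and 0 <= xx < W and skel[yy, xx]:
                    G.add_edge((y, x), (yy, xx))
    return G
\end{verbatim}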
For graph-based approaches, we compare our scheme with RoadTracer~\cite{bastani2018roadtracer} and VecRoad~\cite{tan2020vecroad} on the RoadTracer Dataset. The comparison results are shown in Tab.\;\ref{Table2}. Our approach achieves competitive results compared with RoadTracer and VecRoad, while the inference speed of the PaRK-Detect scheme far exceeds that of these approaches.
\begin{table}
\begin{center}
\scriptsize
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Method} & \multicolumn{3}{c|}{DeepGlobe} & \multicolumn{3}{c|}{Massachusetts Roads}\\
\cline{2-7}
& P-F1 & APLS & IoU & P-F1 & APLS & IoU\\
\hline
\hline
D-LinkNet & 74.87 & 62.74 & 65.27 & 71.10 & 59.08 & 56.41\\
\hline
\hline
{\bf Ours} & {\bf 78.04} & {\bf 69.83} & {\bf 65.53} & {\bf 74.80} & {\bf 63.37} & {\bf 56.80}\\
\hline
\end{tabular}
\end{center}
\vspace{-4pt}
\caption{Comparison with segmentation-based approach on DeepGlobe and Massachusetts Roads Dataset.}
\label{Table1}
\end{table}
\begin{table}[!t]
\centering
\begin{minipage}{0.44\textwidth}
\centering
\scriptsize
\begin{tabular}{|c|c|c|}
\hline
Method & P-F1 & APLS\\
\hline
\hline
VecRoad & 72.56 & {\bf 64.59}\\
RoadTracer & 55.81 & 45.09\\
\hline
\hline
{\bf Ours} & {\bf 74.50} & 63.10\\
\hline
\end{tabular}
\vspace{6pt}
\caption{Comparison with graph-based approaches on RoadTracer Dataset.}
\label{Table2}
\end{minipage}\quad
\begin{minipage}{0.46\textwidth}
\centering
\scriptsize
\begin{tabular}{|c|c|c|}
\hline
Method & Type & 8192$\times$8192 \\
\hline
\hline
VecRoad & Graph & 1327.7 \\
RoadTracer & Graph & 343.4 \\
D-LinkNet & Seg & 10.1 \\
\hline
\hline
{\bf Ours} & PaRK-Detect & {\bf 7.7} \\
\hline
\end{tabular}
\vspace{6pt}
\caption{Run-time in seconds of different approaches on one 8192$\times$8192 test image.}
\label{inferspeed}
\end{minipage}
\end{table}
\subsection{Ablation Studies}
Fig. \ref{fig5} presents the road graphs constructed with our proposed PaRK-Detect scheme. To verify the effectiveness of our approach and the framework architecture, we conduct ablation studies for multi-scale feature fusion and the graph optimization strategies. As shown in Tab. \ref{ablation}, introducing JPU to perform multi-scale feature fusion improves the overall performance of our method. The graph optimization strategies improve the pixel-based F1 score and APLS of the constructed road graphs by 1.7$\%$ and 9.0$\%$ respectively on the DeepGlobe dataset, and by 0.6$\%$ and 10.8$\%$ respectively on the Massachusetts Roads dataset. Performing graph optimization does not influence the IoU metric of road segmentation.
Besides, in order to explore the impact of the multi-task paradigm on road extraction, we remove the decoder part of our framework and compare the road graph construction results with those of the original framework. As shown in Tab. \ref{ablation}, performing road graph construction alone, without road segmentation, reduces the pixel-based F1 score and APLS, which implies that road segmentation serves as additional supervision and provides road morphological information and contour features for better road alignment.
Patch size is a crucial hyperparameter in our PaRK-Detect scheme: different patch sizes lead to different framework architectures and different road alignment accuracy. We use 16$\times$16 as the default setting and conduct extensive experiments to explore the impact of patch size. We observe that the appropriate patch size is determined by the width of the roads in the satellite imagery. If the road width is mostly concentrated between 12-24 pixels (e.g. the DeepGlobe and Massachusetts Roads datasets), 16$\times$16 is the suitable patch size. If the road width is mostly 24-48 pixels (e.g. the RoadTracer dataset), 32$\times$32 should be chosen. However, since we use interpolation to resize the RoadTracer dataset, the patch size remains 16$\times$16.
\begin{figure}
\includegraphics[width=6.5cm]{images/infer_1.png}
\includegraphics[width=6.5cm]{images/infer_2.png}
\caption{Results of constructed road graphs with PaRK-Detect scheme.}
\label{fig5}
\end{figure}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\begin{table}[!t]
\scriptsize
\begin{center}
\begin{tabular}{|c c c c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\tabincell{c}{multi-scale\\feature fusion}} & \multirow{2}{*}{\tabincell{c}{graph\\optimization}} &
\multirow{2}{*}{segmentation} &
\multirow{2}{*}{\tabincell{c}{patch\\size}}
& \multicolumn{3}{c|}{DeepGlobe} & \multicolumn{3}{c|}{Massachusetts Roads} \\
\cline{5-10}
& ~ & ~ & ~ & P-F1 & APLS & IoU & P-F1 & APLS & IoU\\
\hline
\hline
~ & \checkmark & \checkmark & 16$\times$16 & 77.44 & 69.07 & 65.24 & 74.36 & 62.80 & 56.43\\
\checkmark & ~ & \checkmark & 16$\times$16 & 76.70 & 64.06 & {\bf 65.53} & 74.35 & 57.17 & {\bf 56.80}\\
\checkmark & \checkmark & ~ & 16$\times$16 & 76.97 & 69.64 & - & 72.98 & 58.07 & - \\
\checkmark & \checkmark & \checkmark & 16$\times$16 & {\bf 78.04} & {\bf 69.83} & {\bf 65.53} & {\bf 74.80} & {\bf 63.37} & {\bf 56.80}\\
\checkmark & \checkmark & \checkmark & 8$\times$8 & 66.48 & 58.12 & 65.49 & 65.52 & 49.68 & 56.72\\
\checkmark & \checkmark & \checkmark & 32$\times$32 & 76.41 & 67.91 & 65.48 & 73.57 & 60.90 & 56.74\\
\hline
\end{tabular}
\end{center}
\vspace{6pt}
\caption{Ablation studies of our proposed scheme. No mask will be predicted for evaluating IoU if we remove the road segmentation part of the proposed multi-task framework.}
\label{ablation}
\end{table}
\section{Conclusion}
In this paper, we propose a patch-wise road keypoints detection scheme that combines road extraction with keypoint detection to construct road graphs in a single pass. We also propose an efficient multi-task framework that performs road segmentation and road graph construction simultaneously. We conduct extensive experiments over widely-used datasets and achieve competitive or better road extraction results with superior inference speed compared with existing works. Through ablation experiments, we verify the effectiveness of our approach and demonstrate the impact of the multi-task paradigm and the patch size. We believe our work presents an important step forward, and we hope this paper can inspire future work on road extraction from satellite imagery.
\section{Acknowledgement}
This work was supported in part by the National Natural Science Foundation of China under Grant 62076093 and the High-Performance Computing Platform of BUPT.
|
{
"arxiv_id": "2302.13273",
"language": "en",
"timestamp": "2023-02-28T02:14:52",
"url": "https://arxiv.org/abs/2302.13273",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
\vspace{-0.35cm}
Acoustic-to-articulatory inversion (AAI) \cite{C1} mainly solves the problem of deriving the pronunciation parameters of key organs from speech audio. In recent years, it has played an important role in many fields, such as pronunciation guidance \cite{C2} and speech recognition \cite{C4,C48,C50}, so it has attracted many researchers to devote themselves to this field.
Different deep learning based models and acoustic representations have been proposed to carry out the AAI task. In the early stage, codebooks \cite{C7} were used for voice inversion, but the performance relied heavily on the quality of the codebook. Later, data-driven voice inversion models were presented, such as hidden Markov models (HMMs) \cite{C8,C49}, Gaussian mixture models (GMMs) \cite{C38}, deep neural networks (DNNs) \cite{C39}, and so on. At present, the most commonly used models are the recurrent neural network (RNN) and its variants, such as long short-term memory (LSTM) \cite{C14, C34}. In \cite{C14, C40, C41}, different speech representations such as line spectral frequencies (LSF), Mel-frequency cepstral coefficients (MFCC), and filter bank energies (FBE) were used. In our work, we take MFCC and phonemes as the input of our model.
Up to now, there are two main challenges in AAI. One is that the available datasets are very limited, because we need to record the voice and pronunciation parameters at the same time, which is not only difficult to collect, but also expensive. The most commonly used public datasets are MOCHA-TIMIT \cite{C35}, MNGU0 \cite{C21}, and HPRC \cite{C36}. Another challenge is to improve the performance in speaker-independent scenarios.
\begin{figure*}[h]
\centerline{\includegraphics[width=0.90\linewidth]{ov.jpg}}
\vspace{-0.3cm}
\caption{The framework of SPN. The P is the phoneme stream feature and the S is the speech stream feature.}
\label{fig:ov}
\vspace{-0.4cm}
\end{figure*}
For the first challenge, \cite{C44} proposed using cross-corpus data to mitigate the limited data volume. For the second challenge, \cite{C14} used vocal tract normalization to map each speaker's pronunciation space to a common space, which improved performance but lost speaker-specific information. It is worth noting that a self-supervised pretraining model was proposed in \cite{C42} to address the above challenges and achieved the best performance. However, that work only used MFCC as the network input, which may limit the performance of AAI, and used 1DCNNs to extract speech features, which may lose the global information of the speech.
To address the above two challenges, we propose a novel network that consists of two parts, a speech stream network and a phoneme stream network, which we call SPN for short. 1DCNNs were used to extract speech features in \cite{C42}, but \cite{C45} pointed out that CNNs only extract local features, so we add a multi-head attention module to extract global features that better represent the voice information. In addition, we propose a new phoneme stream network: we use transcribed phonemes to perform phoneme inversion, and then take the outputs of the phoneme stream network as phoneme features for voice inversion.
The motivation is that phonemes only represent the content information of the speech instead of the identity information. Therefore the phoneme features obtained by phoneme stream network are speaker-independent, which can improve the performance of the speaker-independent experiments.
In summary, there are three contributions of this work.
\begin{itemize}
\item In order to better represent voice information, we extract the local features and the global features through 1DCNNs and multi-head attention module respectively.
\item We propose a new phoneme stream network to gain the phoneme features to improve the performance of speaker-independent experiment.
\item Experimental results show that the proposed model clearly outperforms the SOTA on the public HPRC dataset, reducing RMSE by 0.18mm and increasing PCC by 6\%.
\end{itemize}
\vspace{-0.5cm}
\section{PROPOSED APPROACH}
\vspace{-0.25cm}
\label{sec:method}
\subsection{Overall Framework}
\vspace{-0.1cm}
As shown in Fig.~\ref{fig:ov}, the proposed SPN is composed of two core modules, the speech stream network and the phoneme stream network. We obtain global and local features through the speech stream network and feed them into the SAFN \cite{C42} network to obtain the speech features, while the phoneme features are obtained through the phoneme stream network. The integrated speech features and the phoneme features are then fed to the articulatory inversion network (AIN) \cite{C42} to obtain the articulatory parameters of the key organs.
\subsection{Speech Stream Network}
\vspace{-0.1cm}
It was shown in \cite{C45} that CNNs can only extract local features of speech, so to better represent the voice information, a speech stream network is proposed. MFCC is fed into cascaded 1DCNNs and a multi-head attention module, respectively, to obtain the local features and the global features, which are then fed into SAFN as speech features.
\begin{figure}[htb]
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=8.5cm,height=4cm]{FFN2}}
\end{minipage}
\caption{ The workflow of the Speech Stream Network.}
\label{fig:FFN2}
\vspace{-0.4cm}
\end{figure}
As shown in Fig.~\ref{fig:FFN2}, the speech stream network is mainly divided into two parts: a local feature extraction module and a global feature extraction module. In the local feature extraction module, we choose the cascaded 1DCNNs used in \cite{C42}. We feed the MFCC into 1DCNNs with five different convolution kernels, whose sizes are 1, 3, 5, 7, and 9 respectively, to obtain the local features. For the global feature extraction module, we use a multi-head attention module because it can attend to the relationship between each frame and every other frame of the speech. The attention module maps a set of queries, keys, and values to outputs, where the queries, keys, values, and outputs are all vectors. Specifically, the global feature extraction module includes six layers of multi-head attention. Each layer has eight heads, and the dimensions of the keys and values are both 64. Then, we feed the features into a LayerNorm layer to get the global features. Next, the local features and global features are fed into two fully connected layers, each with 300 units, to get the speech features. Finally, we feed the speech features to the SAFN.
The computation of local features is formulated as:
\begin{equation}
\boldsymbol{y}_{i, j}^{\mathrm{local}}=b_{j}+\sum_{k=1}^{L_{i-1}} \mathbf{W}_{i} * \boldsymbol{y}_{i-1, k}^{\mathrm{local}}
\end{equation}
where * means the convolution operation and ${y}_{i, j}$ represents the feature map of $j$-th channel in $i$-th convolution layer, $b_{j}$ is the bias of $j$-th channel and $\mathbf{W}_{i}$ means the weights of $i$-th convolution layer, $L_{i-1}$ means the length of $\boldsymbol{y}_{i-1}^{\mathrm{local}}$.
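A minimal PyTorch-style sketch of the speech stream network is given below. The MFCC dimensionality, the per-branch channel widths, and the input projection before the attention layers are illustrative assumptions; the head count (8), key/value dimension (64), number of attention layers (6), and the 300-unit fully connected layers follow the description above.
\begin{verbatim}
import torch
import torch.nn as nn

class SpeechStream(nn.Module):
    """Sketch: parallel 1D CNNs for local features and a 6-layer multi-head
    attention encoder for global features, fused by two 300-unit FC layers."""
    def __init__(self, in_dim=40, conv_ch=64, d_model=512, out_dim=300):
        super().__init__()
        # Local branch: 1D convolutions with kernel sizes 1, 3, 5, 7, 9.
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim, conv_ch, k, padding=k // 2) for k in (1, 3, 5, 7, 9))
        # Global branch: 6 layers, 8 heads (64-dim keys/values per head).
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                               batch_first=True)
        self.attn = nn.TransformerEncoder(enc_layer, num_layers=6)
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Linear(in_dim, d_model)   # assumed input projection
        self.fc = nn.Sequential(nn.Linear(5 * conv_ch + d_model, out_dim),
                                nn.ReLU(), nn.Linear(out_dim, out_dim))

    def forward(self, mfcc):                        # mfcc: (B, T, in_dim)
        local = torch.cat([c(mfcc.transpose(1, 2)) for c in self.convs], dim=1)
        local = local.transpose(1, 2)               # (B, T, 5*conv_ch)
        glob = self.norm(self.attn(self.proj_in(mfcc)))   # (B, T, d_model)
        return self.fc(torch.cat([local, glob], dim=-1))  # speech features
\end{verbatim}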
\vspace{-0.3cm}
\subsection{Phoneme Stream Network}
\vspace{-0.1cm}
Inspired by \cite{C43}, we use the outputs of the phoneme stream network as phoneme features to assist voice inversion. The reason is that a phoneme encodes only the content information of the speech, not the identity information, and is therefore speaker-independent. We use the Penn Phonetics Lab Forced Aligner \cite{C46} to extract the phoneme frames. Each phoneme frame is a 39-dimensional one-hot vector.
In this module, we use a three-layer BLSTM whose outputs are fed into two fully connected layers to obtain the pronunciation parameters that serve as phoneme features. Each BLSTM layer has the same setting with 150 activation units, and the fully connected layer has 300 activation units. The output is the 12-dimensional pronunciation parameters; these phoneme features, together with the speech features obtained from the speech stream network, are then fed into the AIN network to perform voice inversion. The core of the phoneme stream network is expressed as:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
s_{i} =f\left(U x_{i}+W s_{i-1}+b\right), \notag
\end{equation}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
s_{i}^{\prime} =f\left(U^{\prime} x_{i}+W^{\prime} s_{i+1}+b^{\prime}\right),
\end{equation}
\begin{equation}
o_{i} =g\left(V s_{i}+V^{\prime} s_{i}^{\prime}+c\right),\notag
\end{equation}
where $o_{i}$ denotes the phoneme features estimated at frame $i$, ${x}_{i}$ is the input phoneme sequence at frame $i$, $s_{i}$ is the temporary state at frame $i$, and $U, W, U^{\prime}, W^{\prime}$ are the corresponding transformation matrices; $b, b^{\prime}$ are the biases.
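A minimal PyTorch-style sketch of the phoneme stream network is given below; interpreting ``150 activation units'' as 150 hidden units per direction in each BLSTM layer is an assumption.
\begin{verbatim}
import torch
import torch.nn as nn

class PhonemeStream(nn.Module):
    """Sketch: 3-layer BLSTM over one-hot phoneme frames, followed by two
    fully connected layers producing 12 articulatory (EMA) parameters."""
    def __init__(self, n_phonemes=39, hidden=150, fc_units=300, n_ema=12):
        super().__init__()
        self.blstm = nn.LSTM(n_phonemes, hidden, num_layers=3,
                             bidirectional=True, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(2 * hidden, fc_units), nn.ReLU(),
                                nn.Linear(fc_units, n_ema))

    def forward(self, phonemes):        # phonemes: (B, T, 39) one-hot frames
        h, _ = self.blstm(phonemes)
        return self.fc(h)               # phoneme features / EMA estimates: (B, T, 12)
\end{verbatim}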
\vspace{-0.35cm}
\section{EXPERIMENTS}
\vspace{-0.25cm}
\subsection{Experimental Setup}
\vspace{-0.1cm}
\textbf{Dataset.} In this work, the dataset we use is HPRC. There are six sensor locations: T1, T2, T3, UL, LL, and LI. To be consistent with the SOTA, we only use the locations of the tongue sensors in the X and Z directions as our predicted labels (\textit{i.e}., T1, T2, T3). The HPRC \cite{C36} dataset has eight native American English speakers (four males and four females), and each speaker's data consists of 720 utterances.
\textbf{Performance Metrics.} Performance is measured by the root mean squared error (RMSE) and the Pearson correlation coefficient (PCC) \cite{C11}. The first measures the error between the predicted and ground-truth labels, and the second measures their similarity.
\textbf{Implementation Details.} The network based on speech decomposition and auxiliary features proposed in \cite{C42} is the SOTA in AAI. The settings of the SAFN and AIN modules in our architecture are the same as those of the SOTA. We train the model for 20 epochs with the Adam optimizer; the learning rate is 1e-4 and the batch size is 5.
\vspace{-0.3cm}
\subsection{Comparisons with the SOTA}
To verify the effectiveness of our proposed SPN, we set up three experimental scenarios according to Table.~\ref{tab:TAB1}. In S1, we train only the phoneme stream network, with phonemes as input, for speaker-independent experiments. In S2, we take phonemes and MFCC as inputs for speaker-independent experiments; in this scenario, the phoneme stream network serves as a pretrained model whose parameters are frozen while training the whole SPN. S3 also denotes speaker-independent experiments with phonemes and MFCC as network inputs, but unlike S2, the phoneme stream network is trained jointly with the whole network. The loss function of our network is the weighted sum of two parts, the L2 loss of the SPN and the L2 loss of the phoneme stream network. The formula is given as:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
{L}_{joint} = \sum_{i=0}^{k}\left(y_{m}^{i}-\hat{y}_{m}^{i} \right)^{2} + \sum_{i=0}^{k}\left(y_{n}^{i}-\hat{y}_{n}^{i} \right)^{2},
\end{equation}
where $\hat{y}_{m}^{i}$, $\hat{y}_{n}^{i}$ represent the ground-truth of EMA. ${y}_{m}^{i}$, ${y}_{n}^{i}$ are the corresponding predicted labels. $k$ means the length of the speech feature.
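A minimal sketch of this joint objective is given below; the relative weight between the two terms is left as a parameter, since the text describes a weighted sum but the formula above is unweighted.
\begin{verbatim}
import torch

def joint_loss(y_spn, y_phoneme, ema_gt, weight=1.0):
    """Sketch: L2 loss of the SPN output plus (weighted) L2 loss of the
    phoneme stream output, both against the same EMA ground truth."""
    loss_spn = ((y_spn - ema_gt) ** 2).sum()
    loss_phoneme = ((y_phoneme - ema_gt) ** 2).sum()
    return loss_spn + weight * loss_phoneme
\end{verbatim}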
More specifically, in scenario 3 we evaluate the phoneme stream network and the SPN at the same time: S3(P) denotes the results of the phoneme stream network and S3(S) denotes the results of the SPN.
It is worth noting that our experiments are conducted on the Haskins dataset and, in each scenario, we train a separate model for each speaker. The training set (80\%) and validation set (20\%) are drawn from seven speakers' data, and we test on all (100\%) of the remaining speaker's data. Finally, we average the results over the eight speakers as the final results of our experiments.
\begin{table}[h]
\caption{Experimental setup for 3 different scenarios. * means taking the proportion of data from each speaker. P and M represent phoneme and MFCC, respectively.}
\label{tab:TAB1}
\centering
\begin{threeparttable}
\scriptsize
\setlength{\tabcolsep}{1.8mm}{
\begin{tabular}{ccccccc}
\toprule
\textbf{Scenarios}&\textbf{Input}&\textbf{\#Speaker}&\textbf{Train}&\textbf{Validation}&\textbf{Test}\\
\midrule
\multirow{2}{*}{\textbf{S1}}&\multirow{2}{*}{P}&N-1&80\%*&20\%*&-{}-{}-\\
&&1&-{}-{}-&-{}-{}-&100\%\\
\midrule
\multirow{2}{*}{\textbf{S2}}&\multirow{2}{*}{P, M}&N-1&80\%*&20\%*&-{}-{}-\\
&&1&-{}-{}-&-{}-{}-&100\%\\
\midrule
\multirow{2}{*}{\textbf{S3}}&\multirow{2}{*}{P, M}&N-1&80\%*&20\%*&-{}-{}-\\
&&1&-{}-{}-&-{}-{}-&100\%\\
\bottomrule
\end{tabular}}
\end{threeparttable}
\end{table}
\begin{table}[h]
\centering
\vspace{-0.4cm}
\begin{threeparttable}
\caption{RMSE and PCC for SPN and SOTA.}
\label{tab:TAB3}
\centering
\scriptsize
\begin{spacing}{0.85}
\setlength{\tabcolsep}{0.8mm}{
\begin{tabular}{ccccccccccc}
\toprule
\textbf{Scenarios}&\textbf{F01}&\textbf{F02}&\textbf{F03}&\textbf{F04}&\textbf{M01}&\textbf{M02}&\textbf{M03}&\textbf{M04}&\textbf{RMSE}&\textbf{PCC} \\
\midrule
{\textbf{S1}}&2.279&2.182&1.681&2.322&1.753&2.546&2.036&1.898&2.087&0.755\\
\midrule
{\textbf{S2}}&2.720&2.740&2.088&2.759&2.352&3.064&2.395&2.528&2.580&0.798\\
\midrule
{\textbf{S3(P)}}&2.169&2.084&1.618&2.172&1.693&2.440&1.961&1.812&1.993&0.773\\
\midrule
{\textbf{S3(S)}}&2.634&2.665&2.100&2.602&2.235&3.195&2.368&2.504&2.537&0.810\\
\midrule
{\textbf{SOTA}}&-&-&-&-&-&-&-&-&2.721&0.751\\
\bottomrule
\end{tabular}}
\end{spacing}
\end{threeparttable}
\vspace{-0.5cm}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{draw1.png}
\caption{Comparisons of the predicted results of the tongue movement between SPN and SOTA. In each box, the blue line is real label, the red dashed line is predicted by SPN and the green dashed line is predicted by SOTA method.}
\label{fig:draw1}
\vspace{-0.4cm}
\end{figure}
Table.~\ref{tab:TAB3} shows the RMSE and PCC values for the three scenarios on the Haskins dataset. Because previous SOTA works did not report the average RMSE for each speaker, we mark the corresponding positions with '-' in Table.~\ref{tab:TAB3}. Compared with the SOTA, our proposed SPN reduces RMSE by 0.141mm and increases PCC by almost 5\% in scenario 2, and reduces RMSE by 0.184mm and increases PCC by almost 6\% in scenario 3, which uses the joint-training strategy. This shows that the extracted local and global features better represent the speech information, and that adding the phoneme features further improves the generalization ability of the model.
More interestingly, the results show that in scenario 3 the phoneme stream network under joint training (S3(P)) performs better than when trained alone (S1), by about 0.1mm in RMSE and almost 2\% in PCC. Thus the performance of both the SPN and the phoneme stream network improves, indicating the effectiveness of the joint-training strategy.
Qualitative comparisons between SPN and the SOTA are shown in Fig.~\ref{fig:draw1}. It is obvious that the tongue articulator trajectories predicted by SPN are more similar to the ground truth than those of the SOTA, especially within the red circle in Fig.~\ref{fig:draw1}. This indicates that the phoneme stream network and the speech stream network have a positive effect on the reconstruction of tongue articulatory movements.
\vspace{-0.25cm}
\subsection{Ablation Study}
\vspace{-0.1cm}
To prove the effectiveness of the proposed models, we conduct the ablation experiment according to Table.~\ref{tab:TAB2}, and the results are presented in Table.~\ref{tab:TAB4}.
\begin{table}[h]
\scriptsize
\vspace{-0.5cm}
\caption{Ablation experiment proves the effectiveness of each module. SPN-S represents the model that combines the speech stream network and the SOTA model.}
\centering
\label{tab:TAB2}
\begin{threeparttable}
\begin{spacing}{0.8}
\setlength{\tabcolsep}{5.0mm}{
\begin{tabular}{cccc}
\toprule
\textbf{}&\textbf{speech stream network}&\textbf{phoneme stream network}\\
\midrule
\textbf{SOTA}&\textbf{$\times$}&\textbf{$\times$}\\
\textbf{SPN-S}&\textbf{\checkmark}&\textbf{$\times$}\\
\textbf{SPN}&\textbf{\checkmark}&\textbf{\checkmark}\\
\bottomrule
\end{tabular}}
\end{spacing}
\end{threeparttable}
\end{table}
Obviously, the models with the speech stream network (SPN, SPN-S) outperform the one without it (SOTA). Besides, the model with the phoneme stream network (SPN) outperforms the one without it (SPN-S). From the experimental results, we can clearly observe that, in the speaker-independent experiments, the speech stream network improves performance by extracting local and global features that better represent the speech information, and adding the phoneme features obtained by the phoneme stream network brings a further large gain in the generalization ability of our model.
\begin{table}[h]
\scriptsize
\vspace{-0.35cm}
\caption{RMSE and PCC of SOTA, SPN-S, and SPN in the ablation study.}
\centering
\label{tab:TAB4}
\begin{threeparttable}
\begin{spacing}{0.8}
\setlength{\tabcolsep}{5.0mm}{
\begin{tabular}{cccc}
\toprule
\textbf{}&\textbf{RMSE}&\textbf{PCC}\\
\midrule
\textbf{SOTA}&\textbf{2.721}&\textbf{0.751}\\
\textbf{SPN-S}&\textbf{2.664}&\textbf{0.787}\\
\textbf{SPN}&\textbf{2.537}&\textbf{0.810}\\
\bottomrule
\end{tabular}}
\end{spacing}
\end{threeparttable}
\end{table}
\vspace{-0.7cm}
\section{Conclusion}
\vspace{-0.2cm}
To improve the speaker-independent performance and pay more attention to the global speech information, we propose a new network consisting of two parts. One is the speech stream network, which uses 1DCNNs and a multi-head attention module to extract the local and global features of speech to better represent the voice information. At the same time, we use the pronunciation parameters obtained by the phoneme stream network as phoneme features to help predict the articulatory parameters in voice inversion. The experimental results demonstrate the effectiveness of the proposed network. In future work, meta-learning will be explored in the field of AAI.
\vspace{-0.4cm}
\section{Acknowledgement}
\vspace{-0.2cm}
This work is supported by the National Natural Science
Foundation of China (No. 61977049), the National Natural Science Foundation of China (No. 62101351), and the GuangDong Basic and Applied Basic Research Foundation
(No.2020A1515110376).
\bibliographystyle{IEEEbib}
\end{document}
|
{
"arxiv_id": "2302.13181",
"language": "en",
"timestamp": "2023-03-03T02:03:17",
"url": "https://arxiv.org/abs/2302.13181",
"yymm": "2302"
} | \section{Conclusion}
In conclusion, we provide a new modified definition of ``data-copying'' or generating memorized training samples for generative models that addresses some of the failure modes of previous definitions~\cite{MCD2020}. We provide an algorithm for detecting data-copying according to our definition, establish performance guarantees, and show that at least some smoothness conditions are needed on the data distribution for successful detection.
\section{Introduction}
Deep generative models have shown impressive performance. However, given how large, diverse, and uncurated their training sets are, a big question is whether, how often, and how closely they are memorizing their training data. This question has been of considerable interest in generative modeling~\citep{lopez2016revisiting,XHYGSWK18} as well as supervised learning~\citep{BBFST21, Feldman20}. However, a clean and formal definition of memorization that captures the numerous complex aspects of the problem, particularly in the context of continuous data such as images, has largely been elusive.
For generative models,~\cite{MCD2020} proposed a formal definition of memorization called ``data-copying'', and showed that it was orthogonal to various prior notions of overfitting such as mode collapse~\citep{TT20}, mode dropping~\citep{YFWYC20}, and precision-recall~\citep{SBLBG18}. Specifically, their definition looks at three datasets -- a training set, a set of generated examples, and an independent test set. Data-copying happens when the training points are considerably closer on average to the generated data points than to an independently drawn test sample. Otherwise, if the training points are on average farther from the generated points than from the test points, then there is underfitting. They proposed a three-sample test to detect this kind of data-copying, and empirically showed that their test had good performance.
\begin{figure}[ht]
\includegraphics[width=.45\textwidth]{page_2_figure_yo.png}
\caption{In this figure, the blue points are sampled from the halfmoons dataset (with Gaussian noise). The red points are sampled from a generated distribution that is a mixture of a blatant data copier (40\%), which outputs a random subset of the training set, and a noisy, underfit version of halfmoons (60\%). Although the generated distribution is clearly doing some form of copying at points $x_1$ and $x_2$, detecting this is challenging because of the canceling effect of the underfit points.}
\label{fig:page_2_figure}
\end{figure}
However, despite its practical success, this method may not capture even blatant cases of memorization. To see this, consider the example illustrated in Figure \ref{fig:page_2_figure}, in which a generative model for the halfmoons dataset outputs one of its training points with probability $0.4$, and otherwise outputs a random point from an underfit distribution. When the test of~\cite{MCD2020} is applied to this distribution, it is unable to detect any form of data copying; the generated samples drawn from the underfit distribution are sufficient to cancel out the effect of the memorized examples. Nevertheless, this generative model is clearly an egregious memorizer, as shown at points $x_1$ and $x_2$ of Figure \ref{fig:page_2_figure}.
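For illustration, such a mixture copier can be constructed as in the following sketch; the noise levels and sample sizes are arbitrary choices and not those used to produce Figure \ref{fig:page_2_figure}.
\begin{verbatim}
import numpy as np
from sklearn.datasets import make_moons

# With probability 0.4 the generator emits a memorized training point; otherwise
# it samples from a deliberately over-smoothed (underfit) halfmoons distribution.
rng = np.random.default_rng(0)
train, _ = make_moons(n_samples=2000, noise=0.1, random_state=0)

def sample_copier(m):
    copy_mask = rng.random(m) < 0.4
    copies = train[rng.integers(len(train), size=m)]     # blatant data-copying
    underfit, _ = make_moons(n_samples=m, noise=0.4)      # underfit component
    return np.where(copy_mask[:, None], copies, underfit)

generated = sample_copier(1000)   # roughly 40% of these are exact training points
\end{verbatim}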
This example suggests a notion of \textit{point-wise} data copying, where a model $q$ can be thought of as copying a given training point $x$. Such a notion would be able to detect $q$'s behavior near $x_1$ and $x_2$ regardless of the confounding samples that appear at a global level. This stands in contrast to the more global, distance-based approach taken in Meehan et al., which is unable to detect such instances. Motivated by this, we propose an alternative point-by-point approach to defining data-copying.
We say that a generative model $q$ data-copies an individual training point, $x$, if it has an unusually high concentration in a small area centered at $x$. Intuitively, this implies $q$ is highly likely to output examples that are very similar to $x$. In the example above, this definition would flag $q$ as copying $x_1$ and $x_2$.
To parlay this definition into a global measure of data-copying, we define the overall \textit{data-copying rate} as the total fraction of examples from $q$ that are copied from some training example. In the example above, this rate is $40\%$, as this is the fraction of examples that are blatant copies of the training data.
\begin{figure*}[ht]
\subfloat{\includegraphics[width=.3\textwidth]{default.png}}\hfill
\subfloat{\includegraphics[width=.3\textwidth]{add_regions.png}}\hfill
\subfloat{\includegraphics[width=.3\textwidth]{data_copy.png}}
\caption{In the three panels above, the blue points are a training sample from $p$, and the red points are generated examples from $q$. In the middle panel, we highlight in green regions that are defined to be \textit{data-copying regions}, as $q$ overrepresents them with comparison to $p$. In the third panel, we then color all points from $q$ that are considered to be copied green.}
\label{fig:triptic}
\end{figure*}
Next, we consider how to detect data-copying according to this definition. To this end, we provide an algorithm, Data\_Copy\_Detect{}, that outputs an estimate for the overall data-copying rate. We then show that under a natural smoothness assumption on the data distribution, which we call \textit{regularity}, Data\_Copy\_Detect{} is able to guarantee an accurate estimate of the total data-copying rate. We also give an upper bound on the amount of data needed to do so.
We complement our algorithm with a lower bound on the minimum amount of data needed for data-copying detection. Our lower bound also implies that some sort of smoothness condition (such as regularity) is necessary for guaranteed data-copying detection; otherwise, the required amount of data can be driven arbitrarily high.
\subsection{Related Work}
Recently, understanding the failure modes of generative models has become an important and growing body of work, e.g.~\citep{SGZCRC16, RW18, SBLBG18}. However, much of this work has been focused on other forms of overfitting, such as mode dropping or mode collapse.
A more related notion of overfitting is \textit{memorization} \citep{lopez2016revisiting,XHYGSWK18, C18}, in which a model outputs exact copies of its training data. This has been studied in both supervised \citep{BBFST21, Feldman20} and unsupervised \citep{BGWC21, CHCW21} contexts. Memorization has also been considered in language generation models \cite{Carlini22}.
The first work to explicitly consider the more general notion of \textit{data-copying} is \citep{MCD2020}, which gives a three sample test for data-copy detection. We include an empirical comparison between our methods in Section \ref{sec:experiments}, where we demonstrate that ours is able to capture certain forms of data-copying that theirs is not.
Finally, we note that this work focuses on detecting natural forms of memorization or data-copying, that likely arises out of poor generalization, and is not concerned with detecting \textit{adversarial} memorization or prompting, such as in \cite{Carlini19}, that are designed to obtain sensitive information about the training set. This is reflected in our definition and detection algorithm which look at the specific generative model, and not the algorithm that trains it. Perhaps the best approach to prevent adversarial memorization is training the model with differential privacy~\cite{Dwork06}, which ensures that the model does not change much when one training sample changes. However such solutions come at an utility cost.
\section{A Formal Definition of Data-Copying}
We begin with the following question: what does it mean for a generated distribution $q$ to copy a single training example $x$? Intuitively, this means that $q$ is guilty of overfitting $x$ in some way, and consequently produces examples that are very similar to it.
However, determining what constitutes a `very similar' generated example must be done contextually. Otherwise the original data distribution, $p$, may itself be considered a copier, as it will output points nearby $x$ with some frequency depending on its density at $x$. Thus, we posit that $q$ data copies training point $x$ if it has a significantly higher concentration nearby $x$ than $p$ does. We express this in the following definition.
\begin{definition}\label{defn:data_copy}
Let $p$ be a data distribution, $S \sim p^n$ a training sample, and $q$ be a generated distribution trained on $S$. Let $x \in S$ be a training point, and let $\lambda > 1$ and $0 < \gamma < 1$ be constants. A generated example $x' \sim q$ is said to be a \textbf{$(\lambda, \gamma)$-copy} of $x$ if there exists a ball $B$ centered at $x$ (i.e. $\{x': ||x' - x|| \leq r\}$) such that the following hold:
\begin{itemize}
\item $x' \in B$.
\item $q(B) \geq \lambda p(B)$
\item $p(B) \leq \gamma$
\end{itemize}
\end{definition}
Here $q(B)$ and $p(B)$ denote the probability mass assigned to $B$ by $p$ and $q$ respectively.
The parameters $\lambda$ and $\gamma$ are user chosen parameters that characterize data-copying. $\lambda$ represents the rate at which $q$ must overrepresent points close to $x$, with higher values of $\lambda$ corresponding to more egregious examples of data-copying. $\gamma$ represents the maximum size (by probability mass) of a region that is considered to be data-copying -- the ball $B$ represents all points that are ``copies" of $x$. Together, $\lambda$ and $\gamma$ serve as practitioner controlled knobs that characterize data-copying about $x$.
Our definition is illustrated in Figure \ref{fig:triptic} -- the training data is shown in blue, and generated samples are shown in red. For each training point, we highlight a region (in green) about that point in which the red density is much higher than the blue density, thus constituting data-copying. The intuition for this is that the red points within any ball can be thought of as ``copies" of the blue point centered in the ball.
Having defined data-copying with respect to a single training example, we can naturally extend this notion to the entire training dataset. We say that $x' \sim q$ is copied from training set $S$ if $x'$ is a $(\lambda,\gamma)$-copy of some training example $x \in S$. We then define the \textit{data-copy rate} of $q$ as the fraction of examples it generates that are copied from $S$. Formally, we have the following:
\begin{definition}
Let $p, S, q, \lambda,$ and $\gamma$ be as defined in Definition \ref{defn:data_copy}. Then the \textbf{data-copy rate}, $cr\left(q, \lambda, \gamma\right)$ of $q$ (with respect to $p, S$) is the fraction of examples from $q$ that are $(\lambda, \gamma)$-copied. That is, $$cr\left(q, \lambda, \gamma\right) = \Pr_{x' \sim q}[q\text{ }(\lambda,\gamma)\text{-copies }x'].$$ In cases where $\lambda, \gamma$ are fixed, we use $cr_q = cr(q, \lambda, \gamma)$ to denote the data-copy rate.
\end{definition}
Despite its seeming global nature, $cr_q$ is simply an aggregation of the point by point data-copying done by $q$ over its entire training set. As we will later see, estimating $cr_q$ is often reduced to determining which subset of the training data $q$ copies.
\subsection{Examples of data-copying}
We now give several examples illustrating our definitions. In all cases, we let $p$ be a data distribution, $S$, a training sample from $p$, and $q$, a generated distribution that is trained over $S$.
\paragraph{The uniform distribution over $S$:} In this example, $q$ is an egregious data copier that memorizes its training set and randomly outputs a training point. This can be considered the canonical \textit{worst} data copier. This is reflected in the value of $cr_q$ -- if $p$ is a continuous distribution with finite probability density, then for any $x \in S$, there exists a ball $B$ centered at $x$ for which $q(B) \gg p(B)$. It follows that $q$ $(\lambda,\gamma)$-copies $x$ for all $x \in S$, which implies that $cr_q = 1$.
\paragraph{The perfect generative model: $q = p$:} In this case, $q(B) = p(B)$ for all balls, $B$, which implies that $q$ does not perform any data-copying (Definition \ref{defn:data_copy}). It follows that $cr_q = 0$, matching the intuition that $q$ does not data-copy at all.
\paragraph{Kernel Density Estimators:} Finally, we consider a more general situation, where $q$ is trained by a \textit{kernel density estimator} (KDE) over $S \sim p^n$. Recall that a kernel density estimator outputs a generated distribution, $q$, with pdf defined by $$q(x) = \frac{1}{n\sigma_n}\sum_{x_i \in S} K\left(\frac{x - x_i}{\sigma_n}\right).$$ Here, $K$ is a kernel similarity function, and $\sigma_n$ is the bandwidth parameter. It is known that for $\sigma_n = O(n^{-1/5})$, $q$ converges towards $p$ for sufficiently well behaved probability distributions.
Despite this guarantee, KDEs intuitively appear to perform some form of data-copying -- after all they implicitly include each training point in memory as it forms a portion of their outputted pdf. However, recall that our main focus is in understanding \textit{overfitting} due to data-copying. That is, we view data-copying as a function of the outputted pdf, $q$, and not of the training algorithm used.
To this end, for KDEs the question of data-copying reduces to the question of whether $q$ overrepresents areas around its training points. As one would expect, this occurs \textit{before} we reach the large sample limit. This is expressed in the following theorem.
\begin{theorem}\label{thm:KDE}
Let $1 < \lambda$ and $\gamma > 0$. Let $\sigma_n$ be a sequence of bandwidths and $K$ be any regular kernel function. For any $n > 0$ there exists a probability distribution $\pi$ with full support over $\mathbb{R}^d$ such that with probability at least $\frac{1}{3}$ over $S \sim \pi^n$, a KDE trained with bandwidth $\sigma_n$ and kernel function $K$ has data-copy rate $cr_q \geq \frac{1}{10}$.
\end{theorem}
This theorem completes the picture for KDEs with regards to data-copying -- when $n$ is too low, it is possible for the KDE to have a significant amount of data-copying, but as $n$ continues to grow, this is eventually smoothed out.
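As a purely illustrative sanity check (and not the Data\_Copy\_Detect{} algorithm analyzed later), one can estimate $q(B)$ and $p(B)$ by Monte Carlo around each training point and test the conditions of Definition \ref{defn:data_copy} over a grid of radii; a small-bandwidth KDE trained on few samples then tends to get flagged around most of its training points. The sketch below ignores the estimation-error issues that the formal analysis must handle.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import KernelDensity

def copies_point(x, q_samples, p_samples, lam=20.0, gamma=0.05,
                 radii=np.logspace(-3, 0, 30)):
    """Naive Monte-Carlo check of Definition 2.1 at a single training point x,
    using samples from q and a fresh sample from p to estimate ball masses."""
    dq = np.linalg.norm(q_samples - x, axis=1)
    dp = np.linalg.norm(p_samples - x, axis=1)
    for r in radii:
        q_ball = (dq <= r).mean()     # estimate of q(B(x, r))
        p_ball = (dp <= r).mean()     # estimate of p(B(x, r))
        if 0 < p_ball <= gamma and q_ball >= lam * p_ball:
            return True
    return False

rng = np.random.default_rng(0)
train = rng.normal(size=(100, 2))
kde = KernelDensity(bandwidth=0.01).fit(train)      # deliberately tiny bandwidth
q_samples = kde.sample(20000, random_state=0)
p_samples = rng.normal(size=(20000, 2))             # fresh sample standing in for p
# Fraction of training points flagged (note: this is not cr_q itself, which is
# measured over generated samples rather than over training points).
flagged = np.mean([copies_point(x, q_samples, p_samples) for x in train])
\end{verbatim}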
\paragraph{The Halfmoons dataset}
Returning to the example given in Figure \ref{fig:page_2_figure}, observe that our definition exactly captures the notion of data-copying that occurs at points $x_1$ and $x_2$. Even for strict choices of $\lambda$ and $\gamma$, Definition \ref{defn:data_copy} indicates that the red distribution copies both $x_1$ and $x_2$. Furthermore, the data-copy rate, $cr_q$, is $40\%$ by construction, as this is the proportion of points that are outputted near $x_1$ and $x_2$.
\subsection{Limitations of our definition}\label{sec:limitations}
Definition \ref{defn:data_copy} implicitly assumes that the goal of the generator is to output a distribution $q$ that approaches $p$ in a mathematical sense; a perfect generator would output $q$ so that $q(M) = p(M)$ for all measurable sets. In particular, instances where $q$ outputs examples that are far away from the training data are considered completely irrelevant in our definition.
This restriction prevents our definition from capturing instances in which $q$ memorizes its training data and then applies some sort of transformation to it. For example, consider an image generator that applies a color filter to its training data. This would not be considered a data-copier as its output would be quite far from the training data in pixel space. Nevertheless, such a generated distribution can be very reasonably considered as an egregious data copier, and a cursory investigation between its training data and its outputs would reveal as much.
The key difference in this example is that the generative algorithm is no longer trying to closely approximate $p$ with $q$ -- it is rather trying to do so in some kind of transformed space. Capturing such interactions is beyond the scope of our paper, and we firmly restrict ourselves to the case where a generator is evaluated based on how close $q$ is to $p$ with respect to their measures over the input space.
\section{Detecting data-copying}
Having defined $cr_q$, we now turn our attention towards \textit{estimating it.} To formalize this problem, we will require a few definitions. We begin by defining a generative algorithm.
\begin{definition}
A \textbf{generative algorithm}, $A$, is a potentially randomized algorithm that outputs a distribution $q$ over $\mathbb{R}^d$ given an input of training points, $S \subset \mathbb{R}^d$. We denote this relationship by $q \sim A(S)$.
\end{definition}
This paradigm captures most typical generative algorithms including both non-parametric methods such as KDEs and parametric methods such as variational autoencoders.
As an important distinction, in this work we define data-copying as a property of the generated distribution, $q$, rather than the generative algorithm, $A$. This is reflected in our definition which is given solely with respect to $q, S,$ and $p$. For the purposes of this paper, $A$ can be considered an arbitrary process that takes $S$ and outputs a distribution $q$. We include it in our definitions to emphasize that while $S$ is an i.i.d sample from $p$, it is \textit{not} independent from $q$.
Next, we define a \textit{data-copying detector} as an algorithm that estimates $cr_q$ based on access to the training sample, $S$, along with the ability to draw any number of samples from $q$. The latter assumption is quite typical as sampling from $q$ is a purely computational operation. We do not assume any access to $p$ beyond the training sample $S$. Formally, we have the following definition.
\begin{definition}\label{def:data_copy_detector}
A \textbf{data-copying detector} is an algorithm $D$ that takes as input a training sample, $S \sim p^n$, and access to a sampling oracle for $q \sim A(S)$ (where $A$ is an arbitrary generative algorithm). $D$ then outputs an estimate, $D(S, q) = \hat{cr}_q$, for the data-copy rate of $q$.
\end{definition}
Naturally, we assume $D$ has access to $\lambda, \gamma >0$ (as these are practitioner chosen values), and by convention don't include $\lambda, \gamma$ as formal inputs into $D$.
The goal of a data-copying detector is to provide accurate estimates for $cr_q$. However, the precise definition of $cr_q$ poses an issue: data-copy rates for varying values of $\lambda$ and $ \gamma$ can vastly differ. This is because $\lambda, \gamma$ act as thresholds with everything above the threshold being counted, and everything below it being discarded. Since $\lambda, \gamma$ cannot be perfectly accounted for, we will require some tolerance in dealing with them. This motivates the following.
\begin{definition}\label{defn:approx_data_copy_rate}
Let $0 < \epsilon$ be a tolerance parameter. Then the \textbf{approximate data-copy rates}, $cr_q^{-\epsilon}$ and $cr_q^\epsilon$, are defined as the values of $cr_q$ when the parameters $(\lambda, \gamma)$ are shifted by a factor of $(1+\epsilon)$ to respectively decrease and increase the copy rate. That is, $$cr_q^{-\epsilon} = cr\left(q, \lambda (1+\epsilon), \gamma (1+\epsilon)^{-1}\right),$$ $$cr_q^{\epsilon} = cr\left(q, \lambda (1+\epsilon)^{-1}, \gamma (1+\epsilon)\right).$$
\end{definition}
The shifts in $\lambda$ and $\gamma$ are chosen as above because increasing $\lambda$ and decreasing $\gamma$ both reduce $cr_q$ seeing as both result in more restrictive conditions for what qualifies as data-copying. Conversely, decreasing $\lambda$ and increasing $\gamma$ has the opposite effect. It follows that $$cr_q^{-\epsilon} \leq cr_q \leq cr_q^{\epsilon},$$ meaning that $cr_q^{-\epsilon}$ and $cr_q^{\epsilon}$ are lower and upper bounds on $cr_q$.
In the context of data-copying detection, the goal is now to estimate $cr_q$ in comparison to $cr_q^{\pm \epsilon}$. We formalize this by defining \textit{sample complexity} of a data-copying detector as the amount of data needed for accurate estimation of $cr_q$.
\begin{definition}\label{def:sample_complexity}
Let $D$ be a data-copying detector and $p$ be a data distribution. Let $\epsilon, \delta > 0$ be standard tolerance parameters. Then $D$ has \textbf{sample complexity}, $m_p(\epsilon, \delta)$, with respect to $p$ if for all $n \geq m_p(\epsilon, \delta)$, $\lambda >1$, $0 < \gamma < 1$, and generative algorithms $A$, with probability at least $1 - \delta$ over $S \sim p^n$ and $q \sim A(S)$, $$cr_q^{-\epsilon} - \epsilon \leq D(S, q) \leq cr_q^{\epsilon} + \epsilon.$$
\end{definition}
Here the parameter $\epsilon$ takes on a somewhat expanded role, as it is used both to additively bound our estimate of $cr_q$ and to multiplicatively shift $\lambda$ and $\gamma$.
Observe that there is no mention of the number of calls that $D$ makes to its sampling oracle for $q$. This is because samples from $q$ are viewed as \textit{purely computational}, as they don't require any natural data source. In most cases, $q$ is simply some type of generative model (such as a VAE or a GAN), and thus sampling from $q$ is a matter of running the corresponding neural network.
\section{Regular Distributions}\label{sec:regular_dist}
Our definition of data-copying (Definition \ref{defn:data_copy}) motivates a straightforward point by point method for data-copying detection, in which for every training point, $x_i$, we compute the largest ball $B_i$ centered at $x_i$ for which $q(B_i) \geq \lambda p(B_i)$ and $p(B_i) \leq \gamma$. Assuming we compute these balls accurately, we can then query samples from $q$ to estimate the total rate at which $q$ outputs within those balls, giving us our estimate of $cr_q$.
The key ingredient necessary for this idea to work is to be able to reliably estimate the masses, $q(B)$ and $p(B)$ for any ball in $\mathbb{R}^d$. The standard approach to doing this is through \textit{uniform convergence}, in which large samples of points are drawn from $p$ and $q$ (in $p$'s case we use $S$), and then the mass of a ball is estimated by counting the proportion of sampled points within it. For balls with a sufficient number of points (typically $O( d\log n)$), standard uniform convergence arguments show that these estimates are reliable.
However, this method has a major pitfall for our purpose -- in most cases the balls $B_i$ will be very small because data-copying intrinsically deals with points that are very close to a given training point. While one might hope that we can simply ignore all balls below a certain threshold, this does not work either, as the sheer number of balls being considered means that their union could be highly non-trivial.
To circumvent this issue, we will introduce an interpolation technique that estimates the probability mass of a small ball by scaling down the mass of a sufficiently large ball with the same center. While obtaining a general guarantee is impossible -- there exist pathological distributions that drastically change their behavior at small scales -- it turns out there is a relatively natural condition under which such interpolation will work. We refer to this condition as \textit{regularity,} which is defined as follows.
\begin{definition}\label{def:regular}
Let $k> 0$ be an integer. A probability distribution $p$ is \textbf{$k$-regular} if the following holds. For all $\epsilon > 0$, there exists a constant $0 < p_\epsilon \leq 1$ such that for all $x$ in the support of $p$, if $0 < s < r$ satisfies $p(B(x, r)) \leq p_\epsilon$, then $$\left(1+\frac{\epsilon}{3}\right)^{-1}\frac{r^k}{s^{k}} \leq \frac{p(B(x, r))}{p(B(x, s))} \leq \left(1+\frac{\epsilon}{3}\right)\frac{r^k}{s^{k}}.$$ Finally, a distribution is \textbf{regular} if it is $k$-regular for some integer $k > 0$.
\end{definition}
Here we let $B(x, r) = \{x': ||x - x'|| \leq r\}$ denote the closed $\ell_2$ ball centered at $x$ with radius $r$.
The main intuition for a $k$-regular distribution is that at a sufficiently small scale, its probability mass scales with distance according to a power law, determined by $k$. The parameter $k$ dictates how the probability density behaves with respect to the distance scale. In most common examples, $k$ will equal the \textit{intrinsic dimension} of $p$.
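As a simple illustration, suppose $p$ is the uniform distribution over a set $\Omega \subset \mathbb{R}^d$ of positive volume, and let $x$ be an interior point of $\Omega$. Then for all radii $0 < s < r$ small enough that $B(x, r) \subseteq \Omega$, $$\frac{p(B(x, r))}{p(B(x, s))} = \frac{\mathrm{vol}(B(x, r))}{\mathrm{vol}(B(x, s))} = \frac{r^d}{s^d},$$ so the ratio condition of Definition \ref{def:regular} holds exactly (with no error factor) at such points; the $\left(1+\frac{\epsilon}{3}\right)$ slack is what allows densities that are merely continuous, rather than locally constant, to satisfy the definition.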
As a technical note, we use an error factor of $\frac{\epsilon}{3}$ instead of $\epsilon$; this choice enables cleaner statements and proofs in the results presented later.
\subsection{Distributions with Manifold Support}
We now give an important class of $k$-regular distributions.
\begin{proposition}\label{prop:manifold_works}
Let $p$ be a probability distribution with support precisely equal to a compact $k$ dimensional sub-manifold (with or without boundary) of $\mathbb{R}^d$, $M$. Additionally, suppose that $p$ has a continuous density function over $M$. Then it follows that $p$ is $k$-regular.
\end{proposition}
Proposition \ref{prop:manifold_works} implies that most data distributions that adhere to some sort of manifold-hypothesis will also exhibit regularity, with the regularity constant, $k$, being the intrinsic dimension of the manifold.
\subsection{Estimation over regular distributions}
We now turn our attention towards designing estimation algorithms over regular distributions, with our main goal being to estimate the probability mass of arbitrarily small balls. We begin by first addressing a slight technical detail -- although the data distribution $p$ may be regular, this does not necessarily mean that the regularity constant, $k$, is known. Knowledge of $k$ is crucial because it determines how to properly interpolate probability masses from large radius balls to smaller ones.
Luckily, estimating $k$ turns out to be an extremely well studied task, as for most probability distributions, $k$ is a measure of the \textit{intrinsic dimension}. Because there is a wide body of literature in this topic, we will assume from this point that $k$ has been correctly estimated from $S$ using any known algorithm for doing so (for example \cite{BJPR22}). Nevertheless, for completeness, we provide an algorithm with provable guarantees for estimating $k$ (along with a corresponding bound on the amount of needed data) in Appendix \ref{sec:estimating_alpha}.
We now return to the problem of estimating $p(B(x, r))$ for a small value of $r$, and present an algorithm, $Est(x, r, S)$ (Algorithm \ref{alg:estimate}), that estimates $p(B(x, r))$ from an i.i.d.\ sample $S \sim p^n$.
\begin{algorithm}
\caption{$Est(x, r, S)$}
\label{alg:estimate}
\DontPrintSemicolon
$n \leftarrow |S|$\;
$b \leftarrow O\left(\frac{d \ln \frac{n}{\delta}}{\epsilon^2} \right)$\;
$r_* \leftarrow \min \{s > 0 : |S \cap B(x, s)| = b\}$\;
\uIf{$r_* > r$}{
Return $\frac{br^k}{nr_*^k}$\;
}
\uElse {
Return $\frac{|S \cap B(x, r)|}{n}$\;
}
\end{algorithm}
$Est$ uses two ideas: first, it leverages standard uniform convergence results to estimate the probability mass of all balls that contain a sufficient number of training examples (at least $b$). Second, it estimates the mass of smaller balls by interpolating from its estimates for larger balls. The $k$-regularity assumption is crucial for this second step, as it is the basis on which such interpolation is done.
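For concreteness, a minimal Python sketch of this subroutine is given below (an illustration only: the ball size $b$ and the regularity constant $k$ are assumed to be supplied by the caller, and $S$ is stored as an $n \times d$ array):
\begin{verbatim}
import numpy as np

def est(x, r, S, b, k):
    # Sketch of Est(x, r, S): estimate p(B(x, r)) under k-regularity.
    n = len(S)
    dists = np.sort(np.linalg.norm(S - x, axis=1))
    r_star = dists[b - 1]   # smallest radius whose ball captures b training points
    if r_star > r:
        # Interpolate downward using the k-regular power law.
        return (b / n) * (r / r_star) ** k
    # The ball already contains enough points: use the empirical fraction.
    return np.sum(dists <= r) / n
\end{verbatim}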
$Est$ has the following performance guarantee, which follows from standard uniform convergence bounds and the definition of $k$-regularity.
\begin{proposition}\label{prop:est_works}
Let $p$ be a regular distribution, and let $\epsilon >0$ be arbitrary. Then if $n = O\left(\frac{d\ln\frac{d}{\delta \epsilon p_\epsilon}}{\epsilon^2 p_\epsilon}\right)$ with probability at least $1 - \delta$ over $S \sim p^n$, for all $x \in \mathbb{R}^d$ and $r > 0$, $$\left(1+\frac{\epsilon}{2}\right)^{-1}\leq \frac{Est(x, r, S)}{p(B(x, r))} \leq \left(1+\frac{\epsilon}{2}\right).$$
\end{proposition}
\section{A Data-copy detecting algorithm}
\begin{algorithm}
\caption{$Data\_Copy\_Detect(S, q)$}
\label{alg:main}
\DontPrintSemicolon
$m \leftarrow O\left(\frac{dn^2\ln \frac{nd}{\delta\epsilon}}{\epsilon^4}\right)$\;
Sample $T \sim q^m$\;
$\{x_1, x_2, \dots, x_n\} \leftarrow S$\;
$\{z_1, z_2, \dots, z_m\} \leftarrow T$\;
\For{$i = 1, \dots, n$}{
Let $p_i(r)$ denote $Est(x_i, r, S)$\;
Let $q_i(r)$ denote $\frac{|B(x_i, r) \cap T|}{m}$\;
$radii \leftarrow \{||z - x_i||: z \in T\} \cup \{0\}$\;
$radii \leftarrow \{r: p_i(r) \leq \gamma, r \in radii\}$\;
$r_i^* \leftarrow \max \{r: q_i(r) \geq \lambda p_i(r), r \in radii\}$\;
}
Sample $U \sim q^{20/\epsilon^2}$\;
$V \leftarrow U \cap \left(\bigcup_{i=1}^n B(x_i, r_i^*)\right)$\;
Return $\frac{|V|}{|U|}$.\;
\end{algorithm}
We now leverage our subroutine, $Est$, to construct a data-copying detector, $Data\_Copy\_Detect$ (Algorithm \ref{alg:main}), that has bounded sample complexity when $p$ is a regular distribution. Like all data-copying detectors (Definition \ref{def:data_copy_detector}), $Data\_Copy\_Detect$ takes as input the training sample $S$, along with the ability to sample from a generated distribution $q$ that is trained on $S$. It then performs the following steps:
\begin{enumerate}
\item (line 1) Draw an i.i.d sample of $m = O\left(\frac{dn^2\ln \frac{nd}{\delta\epsilon}}{\epsilon^4}\right)$ points from $q$.
\item (lines 6 - 10) For each training point, $x_i$, determine the largest radius $r_i$ for which
\begin{equation*}
\begin{split}
&\frac{|B(x_i, r_i) \cap T|}{m} \geq \lambda Est(x_i, r_i ,S), \\
&Est(x_i, r_i , S) \leq \gamma.
\end{split}
\end{equation*}
\item (lines 12 - 13) Draw a fresh sample of points from $U \sim q^{O(1/\epsilon^2)}$, and use it to estimate the probability mass under $q$ of $\cup_{i=1}^n B(x_i, r_i)$.
\end{enumerate}
In the first step, we draw a \textit{large} sample from $q$. While this is considerably larger than the amount of training data we have, we note that samples from $q$ are considered free, and thus do not affect the sample complexity. The reason we need this many samples is simple -- unlike $p$, $q$ is not necessarily regular, and consequently we need enough points to properly estimate $q$ around every training point in $S$.
The core technical details of $Data\_Copy\_Detect{}$ are contained within step 2, in which data-copying regions surrounding each training point, $x_i$, are found. We use $Est(x, r, S)$ and $\frac{|B(x, r) \cap T|}{m}$ as proxies for $p$ and $q$ in Definition \ref{defn:data_copy}, and then search for the maximal radius $r_i$ over which the desired criteria of data-copying are met for these proxies.
The only difficulty in doing this is that it could potentially require checking an infinite number of radii. Fortunately, this turns out not to be necessary because of the following observation -- we only need to check radii at which a new point from $T$ enters the estimate $q_i(r)$. This is because our estimate of $q_i(r)$ does not change between two consecutive such radii, so the estimated ratio between $q$ and $p$ is maximized at these radii.
Once we have computed $r_i$, all that is left is to estimate the data-copy rate by sampling $q$ once more to find the total mass of data-copying region, $\cup_{i=1}^n B(x_i, r_i)$.
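Putting the pieces together, a hedged Python sketch of the overall procedure is shown below; it reuses the est sketch above, and the sampling oracle sample_q, the thresholds, and the final sample size are placeholders supplied by the caller:
\begin{verbatim}
import numpy as np

def data_copy_detect(S, sample_q, lam, gamma, b, k, m, n_final=2000):
    # Sketch of Data_Copy_Detect; sample_q(j) draws j i.i.d. points from q.
    n = len(S)
    T = sample_q(m)
    r_star = np.zeros(n)
    for i, x in enumerate(S):
        dists = np.sort(np.linalg.norm(T - x, axis=1))
        radii = np.concatenate(([0.0], dists))
        best = 0.0
        for j, r in enumerate(radii):
            p_hat = est(x, r, S, b, k)  # proxy for p(B(x_i, r))
            q_hat = j / m               # proxy for q(B(x_i, r)): j points of T within r
            if p_hat <= gamma and q_hat >= lam * p_hat:
                best = r                # radii are increasing, so this keeps the largest
        r_star[i] = best
    # Estimate the mass q places on the union of the data-copying balls.
    U = sample_q(n_final)
    hits = [np.any(np.linalg.norm(S - u, axis=1) <= r_star) for u in U]
    return float(np.mean(hits))
\end{verbatim}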
\subsection{Performance of Algorithm \ref{alg:main}}
We now show that given enough data, $Data\_Copy\_Detect{}$ provides a close approximation of $cr_q$.
\begin{theorem}\label{thm:upper_bound}
$Data\_Copy\_Detect{}$ is a data-copying detector (Definition \ref{def:data_copy_detector}) with sample complexity at most $$m_p(\epsilon, \delta) = O\left(\frac{d\ln\frac{d}{\delta\epsilon p_\epsilon}}{\epsilon^2 p_\epsilon}\right),$$ for all regular distributions, $p$.
\end{theorem}
Theorem \ref{thm:upper_bound} shows that our algorithm's sample complexity has standard relationships with the tolerance parameters, $\epsilon$ and $\delta$, along with the input space dimension $d$. However, it includes an additional factor of $\frac{1}{p_\epsilon}$, a distribution-specific factor measuring the regularity of the probability distribution. Thus, our bound cannot be used to bound the amount of data needed without a bound on $p_\epsilon$.
We consequently view our upper bound as more akin to a convergence result, as it implies that our algorithm is guaranteed to converge as the amount of data goes towards infinity.
\subsection{Applying Algorithm \ref{alg:main} to Halfmoons}\label{sec:experiments}
We now return to the example presented in Figure \ref{fig:halfmoons} and empirically investigate the following question: is our algorithm able to outperform the one given in \cite{MCD2020} over this example?
To investigate this, we test both algorithms over a series of distributions by varying the parameter $\rho$, which is the proportion of points that are ``copied." Figure \ref{fig:halfmoons} demonstrates a case in which $\rho = 0.4$. Additionally, we include a parameter, $c$, for \cite{MCD2020}'s algorithm which represents the number of clusters the data is partitioned into (with $c$-means clustering) prior to running their test. Intuitively, a larger number of clusters means a better chance of detecting more localized data-copying.
The results are summarized in the following table where we indicate whether the algorithm determined a statistically significant amount of data-copying over the given generated distribution and corresponding training dataset. Full experimental details can be found in Sections \ref{sec:app_experiments} and \ref{sec:experiments_details} of the appendix.
\begin{table}[h]
\caption{Statistical Significance of data-copying Rates over Halfmoons} \label{results_main}
\begin{center}
\begin{tabular}{ |c||c|c|c|c|c| }
\hline
\textbf{Algo} & $\mathbf{q = p}$ & $\mathbf{\rho = 0.1}$ & $\mathbf{0.2}$ & $\mathbf{0.3}$ & $\mathbf{0.4}$ \\
\hline
\hline
\textbf{Ours} & \color{blue}no & \color{red}yes & \color{red}yes & \color{red}yes & \color{red}yes \\
\hline
$\mathbf{c=1}$ & \color{blue}no & \color{blue}no & \color{blue}no & \color{blue}no & \color{blue}no \\
\hline
$\mathbf{c=5}$ & \color{blue}no & \color{blue}no & \color{blue}no & \color{blue}no & \color{red}yes \\
\hline
$\mathbf{c=10}$ & \color{blue}no & \color{blue}no & \color{blue}no & \color{blue}no & \color{red}yes \\
\hline
$\mathbf{c=20}$ & \color{blue}no & \color{blue}no& \color{blue}no & \color{red}yes & \color{red}yes\\
\hline
\end{tabular}
\end{center}
\end{table}
As the table indicates, our algorithm is able to detect statistically significant data-copying rates in all cases where data-copying exists. By contrast, \cite{MCD2020}'s test is only capable of doing so when the data-copy rate is large and the number of clusters, $c$, is quite large.
\section{Is smoothness necessary for data copying detection?}\label{sec:lower_bound}
Algorithm \ref{alg:main}'s performance guarantee requires that the input distribution, $p$, be regular (Definition \ref{def:regular}). This condition is essential for the algorithm to successfully estimate the probability mass of arbitrarily small balls. Additionally, the parameter, $p_\epsilon$, plays a key role as it serves as a measure of how ``smooth" $p$ is with larger values implying a higher degree of smoothness.
This motivates a natural question -- can data-copying detection be done over unsmooth data distributions? Unfortunately, the answer turns out to be no. In the following result, we show that if the parameter $p_\epsilon$ is allowed to be arbitrarily small, then for any data-copying detector there exists $p$ for which the sample complexity is arbitrarily large.
\begin{theorem}\label{thm:lower_bound}
Let $B$ be a data-copying detector. Let $\epsilon = \delta = \frac{1}{3}$. Then, for all integers $\kappa > 0$, there exists a probability distribution $p$ such that $\frac{1}{9\kappa} \leq p_\epsilon \leq \frac{1}{\kappa}$, and $m_p(\epsilon, \delta) \geq \kappa$, implying that $$m_p(\epsilon, \delta) \geq \Omega\left(\frac{1}{p_\epsilon}\right).$$
\end{theorem}
Although Theorem \ref{thm:lower_bound} is restricted to regular distributions, it nevertheless demonstrates that a bound on smoothness is essential for data copying detection. In particular, non-regular distributions (with no bound on smoothness) can be thought of as a degenerate case in which $p_\epsilon = 0$.
Additionally, Theorem \ref{thm:lower_bound} provides a lower bound that complements Algorithm \ref{alg:main}'s performance guarantee (Theorem \ref{thm:upper_bound}). Both bounds have the same dependence on $p_\epsilon$, implying that our algorithm is optimal at least with regard to $p_\epsilon$. However, our upper bound is significantly larger in its dependence on $d$, the ambient dimension, and $\epsilon$, the tolerance parameter itself.
While closing this gap remains an interesting direction for future work, we note that the existence of a gap isn't too surprising for our algorithm, $Data\_Copy\_Detect{}$. This is because $Data\_Copy\_Detect{}$ essentially relies on manually finding the entire region in which data-copying occurs, and doing this requires precise estimates of $p$ at all points in the training sample.
Conversely, detecting data-copying only requires an \textit{overall} estimate for the data-copying rate, and doesn't necessarily require finding all of the corresponding regions. It is plausible that more sophisticated techniques might be able to estimate the data-copy rate \textit{without} directly finding these regions.
\section{Conclusion}
In conclusion, we provide a new, modified definition of ``data-copying'' (i.e., the generation of memorized training samples) for generative models that addresses some of the failure modes of previous definitions~\cite{MCD2020}. We provide an algorithm for detecting data-copying according to our definition, establish performance guarantees, and show that at least some smoothness conditions on the data distribution are needed for successful detection.
With regards to future work, one important direction is in addressing the limitations discussed in section \ref{sec:limitations}. Our definition and algorithm are centered around the assumption that the goal of a generative model is to output $q$ that is close to $p$ in a mathematical sense. As a result, we are unable to handle cases where the generator tries to generate \textit{transformed} examples that lie outside the support of the training distribution. For example, a generator restricted to outputting black and white images (when trained on color images) would remain completely undetected by our algorithm regardless of the degree with which it copies its training data. To this end, we are very interested in finding generalizations of our framework that are able to capture such broader forms of data-copying.
\section*{Acknowledgments}
We thank NSF under CNS 1804829 for research support.
|
{
"arxiv_id": "2302.13156",
"language": "en",
"timestamp": "2023-02-28T02:11:25",
"url": "https://arxiv.org/abs/2302.13156",
"yymm": "2302"
} | \section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf,authordraft]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,steve@university.edu}
\email{firstname.lastname@phillips.org}
\end{verbatim}
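For illustration, a conventional per-author definition has the following form (the institution and address fields here are placeholders):
\begin{verbatim}
\author{Brooke Aster}
\affiliation{%
  \institution{University of Example}
  \city{Springfield}
  \country{USA}}
\email{brooke.aster@example.edu}
\end{verbatim}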
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
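For illustration, the resulting commands in the source might look as follows (the concept and keywords shown are placeholders; the CCS tool also generates a matching \verb|CCSXML| block to be pasted alongside):
\begin{verbatim}
\ccsdesc[500]{Computing methodologies~Machine learning}
\keywords{datasets, neural networks, generative models}
\end{verbatim}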
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{A woman and a girl in white dresses sit in an open car.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what’s in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption – their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Deepfakes, a term derived from ``deep learning'' and ``fake,'' have gained popularity in recent years due to their ability to manipulate images, videos, and audio in a highly realistic manner using artificial intelligence (AI) algorithms. With the increasing sophistication of deepfakes, there is a growing need for effective methods of deepfake detection to combat their potential for harm~\cite{DeepfakeThreat}. In response, an increasing number of deepfake detection methods have been proposed, employing techniques such as biometric analysis using appearance and behaviors~\cite{AppearanceBehavior}, inter-frame inconsistencies using spatiotemporal data~\cite{CLRNet}, texture enhancement with multi-attention maps~\cite{MultiAttention}, few-shot-based~\cite{TAR} and continual learning-based~\cite{Fretal,CoReD} methods for deepfake detection generalization, in addition to other deep learning algorithms~\cite{ShallowNet}.
The robustness of deepfake detectors is crucial, particularly in cybersecurity applications such as Facial Liveness Verification, where their failure could have serious consequences~\cite{li2022seeing}. In this paper, we aim to shed light on some of the challenges that deepfake detectors face when deployed in real-world situations. By exploring these obstacles, we hope to provide insight into the limitations of current deepfake detection methods and stimulate further research in this critical area.
{To address the challenges of deepfake detection, it is crucial to understand the limitations and pitfalls of existing approaches. Despite the emphasis on detection accuracy, there is a lack of consideration for \textit{explainability} in deepfake detection. In this article, we will delve into two specific scenarios where using deepfake detectors may lead to unexpected results. Firstly, a mismatch between the pre-processing pipeline used in the deployment and the one used during training can compromise the detector's performance. Secondly, a lack of diversity in the datasets used during training can lead to biased and unreliable results.}
\textbf{Pre-processing: }
A newcomer to the field of deepfake detection may encounter challenges in obtaining accurate results when using an off-the-shelf detection tool, as they may not be familiar with the pre-processing pipeline of the tool. It is important to note that deepfake videos are not simply fed directly into the detector. This is because the deepfake detection models typically require the input of a certain size, whereas the original image/video may be a different size. To deal with this, pre-processing is performed, which may obscure the deepfake artifacts that the detector relies on. Worse, there is no standard pre-processing pipeline nor a standardized size for inputs to deepfake detectors. This paper aims to shed light on the impact of pre-processing on the detection process and provide explainability/guidance around one of the major preprocessing tasks --- when to crop versus resize input.
Pre-processing techniques, such as resizing and cropping, are widely used in the pipeline of deepfake detectors, but their effects on detection performance are often overlooked. Our analysis reveals that resizing elongates the face to match the specified size, while cropping does not have this effect. This can lead to issues if the model is trained on stretched faces but deployed on naturally proportioned images. On the other hand, shrinking the input size through resizing may result in crucial features being lost. Therefore, we find that choosing resizing over cropping when reducing the input size can negatively impact detection accuracy.
\textbf{Dataset/Deepfake generator diversity: }
In recent years, the diversity of deepfake datasets and deepfake generators has proliferated, supporting research in the area. Some of the most popular deepfake datasets include DeepFake Detection Challenge \cite{dolhansky2020deepfake}, FaceForensics++ \cite{rossler2019faceforensics++}, and Celeb-DF \cite{li2020celeb}; and they vary in terms of quality and methods used for generating them. Alongside this, a wide range of generators have been published, aiming to be more accessible and user-friendly, including DeepFaceLab, GAN-based and AutoEncoder-based generators.
Nevertheless, most state-of-the-art (SoTA) deepfake detectors have been developed to detect a specific type of deepfake dataset, leading to performance degradation on new, unseen deepfakes. Their generalization limitations also include the variation in deepfake quality, as demonstrated by their poor performance on compressed or manipulated inputs. Our study illuminates the correlation between deepfakes datasets. In particular, we employ both frequency transformation and deep-learning embeddings to visualize their interdependence and distribution. In this way, we highlight undisclosed reasons that may lead to the poor performance of a biased detector that was learned from limited types of deepfakes or limited generation toolkits.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/face_detect_engine.png}
\caption{Illustration of the face detected by different engines. While the first approach uses a fixed-size centrally cropped patch from the detected face, the second resizes it, resulting in a stretched input image for the detectors.}
\label{fig:facesize_dist}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=17cm]{figures/crop_vs_resize_exp.png}
\caption{High-frequency discrepancies of centrally cropped faces vs. resized large-crop faces (see the text for a description).}
\label{fig:crop_vs_resize}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=17cm]{figures/heatmap_density_l2.png}
\caption{Similarities between frequency density lines from Figure \ref{fig:crop_vs_resize}. The higher values indicate the higher similarities between datasets.}
\label{fig:heatmap_l2}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=8.0cm]{figures/data_distri.png}
\caption{t-SNE visualization of various deepfake datasets using a pre-trained self-supervised learning embedding model.}
\label{fig:data_dist}
\end{figure}
\section{Background}
While the term deepfakes can be used to refer to any artificial replacement using AI, we limit ourselves in this work to facial deepfakes~\cite{mirsky2021creation}. In general, there are two categories of facial manipulation approaches: face reenactment and face-swapping. Face reenactment involves changing the facial expressions, movements, and speech of a person in a video to make it appear as though they are saying or doing things they never actually did. In face-swap deepfakes, the face of a person in a video or image is replaced with someone else's face, making it appear as if the latter person was present in the original footage.
In this study, we utilize the FaceForensics++ dataset \cite{rossler2019faceforensics++}, which is a well-known deepfake dataset that was created for validating different deepfake detection algorithms. From 1000 real videos, the authors generated a corresponding 1000 synthesized videos using DeepFakes \cite{deepfakes}, Face2Face \cite{thies2016face2face}, FaceSwap \cite{faceswap}, NeuralTextures \cite{thies2019deferred}, and FaceShifter \cite{li2019faceshifter} algorithms. Among these, NeuralTextures and Face2Face are reenactment methods; the others are face-swapping algorithms. In addition, to increase the diversity, we include CelebDF-v2 \cite{li2020celeb} dataset, which is created by several published deepfake apps for face swapping, and fine-tuned by a sequence of post-processing steps, making it a highly realistic dataset.
\section{Methodology}
\subsection{Data-preprocessing}
For FaceForensics++ datasets, we follow the same preprocessing step as in ADD \cite{le2021add}: 92,160, 17,920, and 17,920 images for training, validation, and testing, respectively. Each set has a balanced number of real and fake images, and the fake images are derived from all five deepfake datasets. For the CelebDF-v2, we used 16,400 for solely validating the pre-trained model.
To detect faces in a video, we used the dlib~\cite{dlib09} toolkit with padding factors of $15\%$ and $3\%$, as well as MTCNN with padding of $4\%$, to simulate different face detection engines.
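For concreteness, the padding-and-crop step can be sketched as follows (an illustrative sketch, not our exact pipeline; the bounding box is assumed to come from any face detector and the padding value is a placeholder):
\begin{verbatim}
import cv2

def crop_face_with_padding(frame, box, pad=0.03):
    # box = (left, top, right, bottom) returned by a face detector (dlib, MTCNN, ...).
    left, top, right, bottom = box
    w, h = right - left, bottom - top
    dx, dy = int(pad * w), int(pad * h)
    H, W = frame.shape[:2]
    x0, y0 = max(0, left - dx), max(0, top - dy)
    x1, y1 = min(W, right + dx), min(H, bottom + dy)
    face = frame[y0:y1, x0:x1]
    return cv2.resize(face, (224, 224))   # resize to the detector's input size
\end{verbatim}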
\subsection{Training and validation}
We utilized ResNet50 as our detector and built a binary classifier. All input images were resized to $224 \times 224$, and we used detected faces with padding of $3\%$ for training. The models were trained with the Adam \cite{kingma2014adam} optimizer with a learning rate of $2\times10^{-3}$, scheduled by the one-cycle strategy \cite{smith2019super}. We used a mini-batch size of 192. During every epoch, the model was evaluated ten times, and we saved the best weights based on validation accuracy. Early stopping \cite{prechelt1998early} was applied when the model did not improve for $10$ consecutive evaluations.
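A minimal PyTorch sketch of this training configuration is given below (hyperparameter values follow the text; data loading, the evaluation loop, and the total number of steps are placeholders):
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)    # binary real/fake classifier

optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=2e-3, total_steps=10_000)  # placeholder step count
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: (192, 3, 224, 224) mini-batch; labels: (192,) with values in {0, 1}
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()
    return loss.item()
\end{verbatim}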
\section{Evaluation}
\begin{table*}[!t]
\centering
\resizebox{0.84\textwidth}{!}{%
\begin{tabular}{l !{\VRule} c !{\VRule} c !{\VRule} c c !{\VRule} c !{\VRule} c }
\hline
Test type & FF++ test set (dlib$_4$)& Video comp. & dlib$_{15}$ & MTCNN$_4$ & Adv. noise & CelebDF-v2 \\
\hline
ACC & \textbf{0.980} & 0.615 (\textcolor{red}{$\downarrow$ .365}) & 0.951 (\textcolor{red}{$\downarrow$ .029}) &0.970 (\textcolor{red}{$\downarrow$ .010}) &0.002 (\textcolor{red}{$\downarrow$ .978}) & 0.526 (\textcolor{red}{$\downarrow$ .454})\\
AUC & \textbf{0.994} &0.788 (\textcolor{red}{$\downarrow$ .206}) & 0.989 (\textcolor{red}{$\downarrow$ .005}) & 0.993 (\textcolor{red}{$\downarrow$ .001}) & 0.000 (\textcolor{red}{$\downarrow$ .994}) & 0.573 (\textcolor{red}{$\downarrow$ .421}) \\
\hline
\end{tabular}%
}
\caption{Performance (ACC and AUC) of deepfake detector trained on raw FaceForensics++ datasets, and validated under different circumstances.}
\label{tab:exp_results}
\end{table*}
In this section, we evaluate the pre-trained model under the following different scenarios discussed below. The experimental results are presented in Table \ref{tab:exp_results}.
\subsection{Video compression} Deepfakes are detected by their subtle artifacts, represented by high-frequency components. Various methods of lossy compression, including video compression and JPEG compression, can successfully eliminate these fine-grained artifacts, leading to a high rate of false-positive predictions. As shown in the second column of Table \ref{tab:exp_results}, we applied the H.264 codec with a constant rate quantization parameter of 23 to the raw videos. As a result, the pre-trained detector's accuracy dropped drastically from 98\% to 61.5\%.
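For reference, this kind of compression can be reproduced with a standard ffmpeg invocation such as the following (file names are placeholders):
\begin{verbatim}
ffmpeg -i input_raw.mp4 -c:v libx264 -crf 23 output_c23.mp4
\end{verbatim}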
\subsection{Face extraction approach}
The effects of using different face extraction approaches are shown in the fourth and fifth columns of Table \ref{tab:exp_results}. While the MTCNN detector only slightly reduces the performance of the ResNet50 model, using dlib with larger padding decreases its performance further in terms of both accuracy and AUC scores. We argue that since the model had learned only from facial features, the more complex background included by the larger padding affected the model's attention in some contexts.
Our second exercise in this category was inspired by a recent face liveness detection algorithm \cite{wang2022patchnet}, which proposed using fixed-size patches cropped from the original faces instead of resizing the input. The stated reason is that the resizing step can distort the discriminative features. To further examine this hypothesis, we conducted a pilot study in which we selected 5,000 images from each deepfake dataset and performed the central cropping and resizing steps on them, as illustrated in Figure \ref{fig:facesize_dist}. Next, we applied the Fourier transform and extracted the average density representation of each dataset in the frequency domain \cite{durall2020watch}. The results are provided in Figure \ref{fig:crop_vs_resize}. As one may observe, central cropping yields visible high-frequency differences between datasets. Resizing, on the other hand, pushes the high-frequency representations of the datasets close together, making them difficult to distinguish.
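The average frequency-density curves follow the azimuthally averaged spectrum idea of \cite{durall2020watch}; a hedged Python sketch of this computation (our exact binning and normalization may differ) is:
\begin{verbatim}
import numpy as np

def frequency_density(gray_img, n_bins=100):
    # 2D FFT magnitude spectrum, shifted so low frequencies sit at the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_img)))
    h, w = spectrum.shape
    yy, xx = np.indices(spectrum.shape)
    radius = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    # Azimuthal average: mean magnitude within each radial (frequency) bin.
    bins = np.linspace(0.0, radius.max(), n_bins + 1)
    which = np.clip(np.digitize(radius.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(which, weights=spectrum.ravel(), minlength=n_bins)
    counts = np.bincount(which, minlength=n_bins)
    return sums / np.maximum(counts, 1)
\end{verbatim}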
\subsection{Adversarial noise}
Deep neural networks are well known for their vulnerability to adversarial examples, which are created by adding small, often imperceptible, perturbations to the original inputs. To validate this property, we applied a one-step $L_\infty$ white-box PGD attack \cite{madry2017towards} with a small perturbation size of $1/255$ and step size $\epsilon=1/255$. As indicated in the sixth column of Table \ref{tab:exp_results}, almost all the predictions are flipped, as demonstrated by an accuracy score close to \textit{zero}. Therefore, deploying deepfake detectors in practice should take this aspect into account and include proper pre-processing steps or defense mechanisms to mitigate the effect of adversarial samples.
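A minimal PyTorch sketch of such a one-step attack is shown below (the model and data pipeline are assumed; with the step size equal to the budget, this coincides with FGSM):
\begin{verbatim}
import torch
import torch.nn.functional as F

def one_step_pgd(model, images, labels, eps=1/255):
    # One-step L_inf attack: move each pixel by eps in the loss-increasing direction.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()
\end{verbatim}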
\subsection{Data shift}
Data shift refers to changes in the statistical properties of the data distribution used to train the detection model compared to the distribution of data the model encounters in deployment. In fact, data shift in deepfake detection can result from different factors: ethnicity (\textit{e.g.}, Asian vs. African), environment (\textit{e.g.}, indoor vs. outdoor), or generating method (\textit{e.g.}, NeuralTextures vs. FaceSwap). We show the results of cross-dataset validation in the final column of Table \ref{tab:exp_results}. Although the model was trained over five datasets of FaceForensics++, it still struggles to distinguish deepfakes in the CelebDF-v2 dataset, as indicated by performance close to random guessing.
To explain this phenomenon, we perform two experiments to visualize the relationships between datasets. First, from the density representations of the datasets in Figure \ref{fig:crop_vs_resize}, we use a negated distance, formulated as $\max - \|a-b\|^2_2$ (so that higher values indicate higher similarity), to indicate the closeness between datasets. As we can observe from Figure \ref{fig:heatmap_l2}, the cropping step yields lower similarity between datasets than resizing. Nevertheless, in both approaches there is little relation between the FaceForensics++ datasets and CelebDF-v2, in both the real and deepfake parts. In our second experiment, we utilize a pre-trained ``self-supervised learning'' model, SBI \cite{shiohara2022detecting}, with an EfficientNet-B4 backbone to obtain intermediate representations of each deepfake dataset. As illustrated in Figure \ref{fig:data_dist}, each deepfake dataset has its own distribution in the latent space. Therefore, if a detection model learns from only a single dataset, its decision boundary may fail to generalize to others, leading to degraded performance.
\section{Remarks}
Despite a plethora of ongoing research aimed at improving the accuracy of deepfake detectors, a multitude of factors still hinder their performance. These include pre-processing steps, intentional manipulation by attackers, and the ongoing advancement of deepfake technology, which limits the generalization of pre-trained detectors. In this paper, we expose these factors quantitatively and visually from an explainability viewpoint. This study also aims to raise researchers' awareness of not only developing effective deepfake detectors but also putting effort into mitigating these crucial factors, reducing false positive and false negative rates in practice.
|
{
"arxiv_id": "2302.13254",
"language": "en",
"timestamp": "2023-02-28T02:14:19",
"url": "https://arxiv.org/abs/2302.13254",
"yymm": "2302"
} | \section{Introduction and the Main Results}
\subsection{Problem Setting}
One of the traditional problems of testing simple hypotheses $\mathcal{H}_0$ and $\mathcal{H}_1$, concerning a Gaussian signal vector $\boldsymbol{\eta}_n$ against a Gaussian noise background $\boldsymbol{\xi}_n$ (i.e., the problem of signal detection in noise), based on observations $\boldsymbol{y}_n^T=\boldsymbol{y}_n'=(y_1,\ldots,y_n)\in\mathbb{R}^n$, has the form
\begin{equation}\label{mod0}
\begin{gathered}
\mathcal H_{0}: {\mathbf y}_{n} = \boldsymbol{\xi}_{n}, \qquad
\boldsymbol{\xi}_{n} \sim
{\mathcal N}(\boldsymbol{0},\mathbf{I}_{n}), \\
\mathcal H_{1}: {\mathbf y}_{n} = \boldsymbol{\eta}_{n}, \qquad
\boldsymbol{\eta}_{n} \sim {\mathcal N}(\mathbf{a}_{n},\mathbf{M}_{n}),
\end{gathered}
\end{equation}
where the sample $\bfm{\xi}_n^T=(\xi_1,\ldots,\xi_n)$ represents the ``noise'' and consists of independent identically distributed Gaussian random variables with zero mean and unit variance, and $\bfm{I}_n$ is the identity covariance matrix. The stochastic ``signal''~$\bfm{\eta}_n$ is a Gaussian random vector with known mean $\bfm{a}_n$ and known covariance matrix $\bfm{M}_{\!n}$.
However, in practice we usually do not know the mean $\bfm{a}_n$ and the matrix $\bfm{M}_{\!n}$ precisely, in which case the observation model \eqref{mod0} takes the form
\begin{equation}\label{mod1}
\begin{aligned}
\mathcal{H}_0\colon &\bfm{y}_n=\bfm{\xi}_n,&~ \bfm{\xi}_n &\sim
{\mathcal{N}}(\bfm{0},\bfm{I}_n),\\ \mathcal{H}_1\colon &\bfm{y}_n=\bfm{\eta}_n,&~
\bfm{\eta}_n &\sim {\mathcal{N}}(\bfm{a}_n,\bfm{M}_{\!n}),\quad
\bfm{a}_n\in\mathcal{A}_n,\quad \bfm{M}_{\!n}\in\mathcal{M}_n,
\end{aligned}
\end{equation}
where $\mathcal{A}_n$ is a given set of possible means $\bfm{a}_n$, and
$\mathcal{M}_n$ is a given set of possible covariance matrices
$\bfm{M}_{\!n}$ (possibly depending on $\bfm{a}_n$). For convenience, we denote
$$
\bfm{B}_n=(\bfm{b}_n,\bfm{V}_{\!n}),\quad \bfm{b}_n\in \mathcal{A}_n,\quad
\bfm{V}_{\!n}\in\mathcal{M}_n,\qquad \mathcal{F}_n=\{\bfm{B}_n\}=
(\mathcal{A}_n,\mathcal{M}_n).
$$
Further, for the model \eqref{mod1} we consider the problem of minimax testing
\cite{Wald, Lehmann, Poor} of the simple hypothesis $\mathcal{H}_0$ against
the composite alternative $\mathcal{H}_1$, based on observations
$\bfm{y}_n^T=\bfm{y}_n'=(y_1,\ldots,y_n)\in\mathbb{R}^n$. If a set
$\mathcal{D}\subseteq\mathbb{R}^n$ is chosen for making a decision in favor of
$\mathcal{H}_0$, such that
\begin{equation}\label{testD}
\bfm{y}_n\in\mathcal{D}\ \Rightarrow\ \mathcal{H}_0,\qquad \bfm{y}_n
\not\in\mathcal{D}\ \Rightarrow\ \mathcal{H}_1,
\end{equation}
then the 1st-kind error probability (``\emph{false alarm}'')
$\alpha(\mathcal{D})$ and the 2nd-kind error probability
(``\emph{miss probability}'') $\beta(\mathcal{D},\mathcal{A}_n,\mathcal{M}_n)$
are defined, respectively, by
\begin{equation}\label{defalpha2}
\alpha(\mathcal{D})=\P(\bfm{y}_n \not\in\mathcal{D}\mmid \mathcal{H}_0)
\end{equation}
and
\begin{equation}\label{defbeta2}
\beta(\mathcal{D},\mathcal{A}_n,\mathcal{M}_n)=\sup\limits_{\bfm{a}_n\in\mathcal{A}_n}\,
\sup\limits_{\bfm{M}_{\!n}\in\mathcal{M}_n} \P(\bfm{y}_n\in\mathcal{D}\mmid
\bfm{M}_{\!n},\bfm{a}_n).
\end{equation}
We are interested in the minimal possible 2nd-kind error probability
$\beta(\mathcal{D},\mathcal{A}_n,\mathcal{M}_n)$ (see \eqref{defalpha2} and
\eqref{defbeta2}), provided a given 1st-kind error probability $\alpha$,
$0<\alpha<1$:
\begin{equation}\label{defbeta3}
\beta(\alpha,\mathcal{A}_n,\mathcal{M}_n)
=\inf\limits_{\mathcal{D}:\:\alpha(\mathcal{D})\le\alpha}
\beta(\mathcal{D},\mathcal{A}_n,\mathcal{M}_n),
\end{equation}
and in the corresponding optimal decision set $\mathcal{D}(\alpha)$ from
\eqref{testD}.
In the paper, we consider the case when the value $\alpha$ is fixed (or vanishes
slowly as $n\to\infty$). This case is sometimes called the Neyman--Pearson setting
of minimax hypothesis testing. In this case the 1st-kind and the 2nd-kind
errors imply very different losses for the statistician, who is therefore mainly
interested in minimizing the 2nd-kind error probability
$\beta=\P\{\mathcal{H}_0\mmid\mathcal{H}_1\}$.
This setting is quite popular in various applications
(see, e.g., \cite{ZhangPoor11} and the bibliography therein).
For a given mean $\bfm{a}_n$, matrix $\bfm{M}_{\!n}$ and value $\alpha$, denote
by $\beta(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$ the minimal possible
2nd-kind error probability (see \eqref{defbeta3}). The corresponding optimal
decision set $\mathcal{D}(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$ is described by the
Neyman--Pearson lemma \cite{Wald,Lehmann}. Clearly,
\begin{equation}\label{ineq1}
\sup_{\bfm{M}_{\!n}\in\mathcal{M}_n}
\beta(\alpha,\bfm{a}_n,\bfm{M}_{\!n})\le\beta(\alpha,\bfm{a}_n,\mathcal{M}_n).
\end{equation}
For a fixed $\alpha$ and given sets $\mathcal{A}_n,\mathcal{M}_n$, denote also
by $\beta(\alpha,\mathcal{A}_n,\mathcal{M}_n)$ the minimal possible
2nd-kind error probability (see \eqref{defbeta3}). Then similarly to
\eqref{ineq1} we have
\begin{equation}\label{ineq1a}
\sup_{\bfm{a}_n\in\mathcal{A}_n} \,\sup_{\bfm{M}_{\!n}\in\mathcal{M}_n}
\beta(\alpha,\bfm{a}_n,\bfm{M}_{\!n})\le\beta(\alpha,\mathcal{A}_n,\mathcal{M}_n).
\end{equation}
In many practical cases the value $\beta(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$
decreases exponentially as $n\to\infty$. Therefore, it is natural
(and, in any case, simpler and more productive) to investigate the corresponding
exponents $n^{-1}\ln\beta(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$ and
$n^{-1}\ln\beta(\alpha,\mathcal{A}_n,\mathcal{M}_n)$ as $n\to\infty$
(some results on the equality in \eqref{ineq1a} are contained in \cite{Bur17a}).
In the paper, we investigate sets $\mathcal{F}_n=(\mathcal{A}_n,\mathcal{M}_n)$,
for which in \eqref{ineq1a} the following asymptotic equality holds:
\begin{equation}\label{aseq1a}
\lim_{n\to\infty}\frac{1}{n}\ln \beta(\alpha,\mathcal{A}_n,\mathcal{M}_n)
=\lim_{n\to\infty}\frac{1}{n}\ln\beta(\alpha,\bfm{a}_n,\bfm{M}_{\!n}).
\end{equation}
The motivation for investigating minimax testing of hypotheses (signal detection)
is described in detail in \cite{Wald, Lehmann, Poor,ZhangPoor11}.
If for given sets of means $\mathcal{A}_n$ and matrices $\mathcal{M}_n$ the
relation \eqref{aseq1a} holds, then we may replace (without asymptotic losses)
the entire set $\mathcal{F}_n$ by the particular pair $(\bfm{a}_n,\bfm{M}_{\!n})$.
Recall that the optimal test for a particular pair $(\bfm{a}_n,\bfm{M}_{\!n})$ is
described by the Neyman--Pearson lemma and reduces to the simple likelihood
ratio test (LR-test). Otherwise (without relation \eqref{aseq1a}), the optimal
minimax test is a much more complicated Bayes test with respect to
the \emph{least favorable} prior distribution on the set $\mathcal{F}_n$.
Therefore, it is natural to investigate when it is possible to replace the
given set $\mathcal{F}_n$ by a particular pair
$\bfm{F}_{\!n}=(\bfm{a}_n,\bfm{M}_{\!n})$. From a technical viewpoint, however,
it is more convenient to consider the equivalent problem: for a given
pair~$\bfm{F}_{\!n}$, to find the maximal set of pairs
$\mathcal{F}_n(\bfm{F}_{\!n})$ which can be replaced by the pair
$\bfm{F}_{\!n}$. This problem is mainly considered in the paper.
{\it Remark 1}.
Models \eqref{mod0} and \eqref{mod1} can be reduced to
equivalent models with a diagonal matrix $\bfm{M}_{\!n}$. Indeed, since
$\bfm{M}_{\!n}$ is a covariance matrix (i.e.,~symmetric and positive
definite), there exist an orthogonal matrix $\bfm{T}_{\!n}$ and
a diagonal matrix $\bfm{\Lambda}_n$ such that
$\bfm{M}_{\!n}=\bfm{T}_{\!n}\bfm{\Lambda}_n\bfm{T}_{\!n}'$
(see [\cite{Bellman}, \S\S\,4.7--4.9; \cite{Horn}, Theorem~4.1.5]).
In addition, the diagonal matrix
$\bfm{\Lambda}_n= \bfm{T}_{\!n}'\bfm{M}_{\!n}\bfm{T}_{\!n}$
consists of the eigenvalues $\{\lambda_i\}$ of the matrix $\bfm{M}_{\!n}$.
Note also that for any orthogonal matrix $\mathbf{T}_{n},$ the vector
$\bfm{T}_{\!n}'\bfm{\xi}_n$ has the same distribution as that
of ~$\bfm{\xi}_n$ (for the simple hypothesis $\mathcal{H}_0$ of \eqref{mod1}).
Therefore, multiplying both sides of \eqref{mod1} by $\bfm{T}_{\!n}'$,
we may reduce the model \eqref{mod1} to the equivalent case with a
diagonal matrix $\bfm{M}_{\!n}$.
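As a small numerical illustration of this reduction (a sketch under our own assumptions; the matrix and the mean below are arbitrary), one can compute the orthogonal matrix $\bfm{T}_{\!n}$ and the rotated model explicitly:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = A @ A.T + np.eye(n)          # an arbitrary covariance matrix M_n
a = rng.standard_normal(n)       # an arbitrary mean a_n

lam, T = np.linalg.eigh(M)       # M = T diag(lam) T'
Lambda = np.diag(lam)

# Rotated model: under H1, T' y ~ N(T' a, Lambda); under H0, T' xi ~ N(0, I).
assert np.allclose(T @ Lambda @ T.T, M)
print("eigenvalues:", np.round(lam, 3))
print("rotated mean T' a:", np.round(T.T @ a, 3))
\end{verbatim}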
{\bf Definition 1}.
For a fixed $\alpha$, and a given sequence of pairs
$\bfm{F}_{\!n}=(\bfm{a}_n,\bfm{M}_{\!n})$ define by
$\mathcal{F}_0(\bfm{F}_{\!n})$ the sequence of the largest sets of pairs,
such that the equality \eqref{aseq1a} takes the form
\begin{equation}\label{aseq11}
\lim_{n\to\infty}\frac{1}{n}\ln\beta(\mathcal{F}_0(\bfm{F}_{\!n}))
=\lim_{n\to\infty}\frac{1}{n}\ln\beta(\bfm{F}_{\!n}).
\end{equation}
Clearly, $\bfm{F}_{\!n}\in\mathcal{F}_0(\bfm{F}_{\!n})$.
In other words, for a given 1st-kind error probability $\alpha$ the sequence
$\mathcal{F}_0(\bfm{F}_{\!n})$ is the largest set of pairs, which can be
replaced (without asymptotic losses for $\beta(\mathcal{F}_0(\bfm{F}_{\!n}))$)
by one pair $\bfm{F}_{\!n}$. Below we describe (Theorem 1) the largest set
$\mathcal{F}_0(\bfm{F}_{\!n})$, satisfying \eqref{aseq11}. It generalizes
similar result from \cite{Bur20a}, where the case $\bfm{a}_n=\mathbf{0}_n$ was
considered. It also strengthens similar result from \cite{ZhangPoor11}, where
for the set $\mathcal{F}_0(\mathbf{0}_n,\bfm{M}_{\!n})$ some lower bounds were
obtained.
It is convenient to first investigate, similarly to $\mathcal{F}_0(\bfm{F}_{\!n})$,
the maximal sets $\mathcal{F}_0^{\rm LR}(\bfm{F}_{\!n})$, which arise when the
LR-detector (see Definition 2) is used. It will be shown that
$\mathcal{F}_0(\bfm{F}_{\!n})=\mathcal{F}_0^{\rm LR}(\bfm{F}_{\!n})$, i.e., the
LR-detector is asymptotically optimal.
In models \eqref{mod0} and \eqref{mod1} denote by $\P_{\bfm{I}_n}$ the
distribution of the value $\bfm{y}_n= \bfm{\xi}_n$, where
$\bfm{\xi}_n\sim {\mathcal{N}}(\bfm{0},\bfm{I}_n)$. Similarly denote by
${\mathbf{Q}}_{\mathbf{F}_{n}}$,
$\bfm{F}_{\!n}=(\bfm{a}_n,\bfm{M}_{\!n})$,
the distribution of the value $\bfm{y}_n= \bfm{\eta}_n$, where
$\bfm{\eta}_n \sim {\mathcal{N}}(\bfm{a}_n,\bfm{M}_{\!n})$.
Denote also by $p^{}_{\bfm{I}_n}(\bfm{y}_n)$ and
$p^{}_{\bfm{F}_{\!n}}(\bfm{y}_n)$, $\bfm{y}_n\in\mathbb{R}^n$, corresponding
densities of probability distributions. For ($n\times n$)-matrix
$\bfm{M}_n$ denote $|\bfm{M}_n|=\det\bfm{M}_n$. Note that,
if $|\bfm{M}_{\!n}|\ne 0$, then
\begin{equation}\label{deGaus1a}
\begin{aligned}
\ln p^{}_{\bfm{I}_n}(\bfm{y}_n)&=
-\frac{1}{2}[n\ln(2\pi)+(\bfm{y}_n,\bfm{y}_n)],\quad \bfm{y}_n\in\mathbb{R}^n,\\
\ln p^{}_{\bfm{F}_{\!n}}(\bfm{y}_n)&=-\frac{1}{2}
\bigl[n\ln(2\pi)+\ln|\bfm{M}_{\!n}|+(\bfm{y}_n-\bfm{a}_n,
\bfm{M}_{\!n}^{-1}(\bfm{y}_n-\bfm{a}_n))\bigr].
\end{aligned}
\end{equation}
For $|\bfm{M}_{\!n}|\ne 0$ introduce also the logarithm of the
likelihood ratio (see \eqref{deGaus1a})
\begin{equation}\label{deGaus2}
r_{\bfm{F}_{\!n}}(\bfm{y}_n)=
\ln\frac{p^{}_{\bfm{I}_n}}{p^{}_{\bfm{F}_{\!n}}}(\bfm{y}_n)
=\frac{1}{2}\Bigl[\ln|\bfm{M}_{\!n}|+(\bfm{y}_n, (\bfm{M}_{\!n}^{-1}-
\bfm{I}_n)\bfm{y}_n) - 2(\bfm{y}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n)+(\bfm{a}_n,
\bfm{M}_{\!n}^{-1}\bfm{a}_n)\Bigr].
\end{equation}
Consider first LR-detectors. Introduce the corresponding decision sets
$\mathcal{D}_{\rm LR}(\bfm{F}_{\!n},\alpha)$ in favor of the hypothesis
$\mathcal{H}_0$ (i.e., in favor of the matrix $\bfm{I}_n$), when simple
hypotheses $\bfm{I}_n$ and $\bfm{F}_{\!n}$ are tested:
\begin{equation}\label{DLR}
\mathcal{D}_{\rm LR}(\bfm{F}_{\!n},\alpha)=\{\bfm{y}_n\in\mathbb{R}^n:\:
r_{\bfm{F}_{\!n}} (\bfm{y}_n)\ge\gamma\},
\end{equation}
where $\gamma$ is such that (see \eqref{deGaus2})
\begin{equation}\label{DLR1}
\begin{aligned}[b]
\alpha&=\P_{\bfm{I}_n}\{ \mathcal{D}_{\rm
LR}^{c}(\bfm{F}_{\!n},\alpha)\}=\P_{\bfm{I}_n}\{r_{\bfm{F}_{\!n}}
(\bfm{\xi}_n)\le\gamma\}\\
&=\P_{\bfm{I}_n}\Bigl\{ [\ln|\bfm{M}_{\!n}| +(\bfm{\xi}_n,
(\bfm{M}_{\!n}^{-1}- \bfm{I}_n)\bfm{\xi}_n) -
2(\bfm{\xi}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n)+(\bfm{a}_n,
\bfm{M}_{\!n}^{-1}\bfm{a}_n)]\le 2\gamma\Bigr\}.
\end{aligned}
\end{equation}
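For illustration only, the following sketch (our own toy example with an arbitrary diagonal pair $(\bfm{a}_n,\bfm{M}_{\!n})$, not taken from the paper) evaluates the statistic $r_{\bfm{F}_{\!n}}$ of \eqref{deGaus2} and calibrates the threshold $\gamma$ in \eqref{DLR1} as the empirical $\alpha$-quantile under $\mathcal{H}_0$ by Monte Carlo:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 50, 0.05
lam = rng.uniform(0.5, 2.0, n)        # eigenvalues of a diagonal M_n (assumed)
a = rng.normal(0.0, 0.3, n)           # a hypothetical mean a_n

def r(Y):
    """Log-likelihood ratio ln(p_I / p_{a,M})(y) for diagonal M_n, row-wise."""
    const = np.sum(np.log(lam)) + np.sum(a**2 / lam)
    return 0.5 * (const + Y**2 @ (1.0 / lam - 1.0) - 2.0 * Y @ (a / lam))

# Calibrate gamma so that P_{I_n}{ r(xi_n) <= gamma } = alpha  (Monte Carlo).
xi = rng.standard_normal((200_000, n))
stats = r(xi)
gamma = np.quantile(stats, alpha)
print("gamma =", round(float(gamma), 3),
      "empirical 1st-kind error =", float(np.mean(stats <= gamma)))
\end{verbatim}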
{\bf Definition 2}.
For a fixed $\alpha$ and a given sequence of pairs
$\bfm{F}_{\!n}=(\bfm{a}_n,\bfm{M}_n)$ denote by
$\mathcal{F}_0^{\rm LR}(\bfm{F}_{\!n})$ the sequence of the
largest sets of pairs $(\bfm{b}_n,\bfm{V}_{\!n})$, such that
\begin{equation}\label{aseq1aa}
\lim_{n\to\infty}\frac{1}{n}\ln\sup_{(\bfm{b}_n,\bfm{V}_{\!n})\in
\mathcal{F}_n^{\rm LR}(\bfm{a}_n,\bfm{M}_{\!n})} \beta(\bfm{b}_n,\bfm{V}_{\!n})
=\lim_{n\to\infty}\frac{1}{n}\ln\beta(\bfm{a}_n,\bfm{M}_{\!n}),
\end{equation}
provided the decision sets $\mathcal{D}_{\rm LR}(\alpha,\bfm{a}_n,\bfm{M}_n)$
are used.
Below, in Theorem 1, the set $\mathcal{F}_0^{\rm LR}(\bfm{a}_n,\bfm{M}_{\!n})$ for
the model \eqref{mod1} is described.
We shall also need the following definition \cite{Kullback}.
{\bf Definition 3}.
For probability measures $\mathbf{P}$ and $\mathbf{Q}$ on a measurable space
$(\cal X,\mathcal{B})$ introduce the function (Kullback--Leibler distance
(or divergence) for measures $\mathbf{P}$ and $\mathbf{Q}$)
\begin{equation}\label{Stein0}
D({\mathbf{P}}\Mmid{\mathbf{Q}})=
\E_{\mathbf{P}} \ln\frac{d\mathbf{P}}{d\mathbf{Q}}(\bfm{x})\ge 0,
\end{equation}
where the expectation is taken over the measure $\mathbf{P}$.
Using formulas \eqref{deGaus1a} and \eqref{Stein0} we have
\begin{equation}\label{deGaus3a}
\begin{aligned}[b]
D({\P_{\bfm{I}_n}}\Mmid{\mathbf{Q}_{\bfm{a}_n,\bfm{M}_{\!n}}})&=
\E_{\bfm{\xi}_n}\ln\frac{p^{}_{\bfm{I}_n}}
{p^{}_{\bfm{a}_n,\bfm{M}_{\!n}}}(\bfm{\xi}_n)\\
&=\frac{1}{2}\Bigl[\ln|\bfm{M}_{\!n}|
+(\bfm{a}_n, \bfm{M}_{\!n}^{-1}\bfm{a}_n)+\E_{\bfm{\xi}_n}(\bfm{\xi}_n,
(\bfm{M}_{\!n}^{-1}-\bfm{I}_n)\bfm{\xi}_n)\Bigr]\\
&=\frac{1}{2}\Biggl[\,\sum_{i=1}^n(\ln\lambda_i +\frac{1}{\lambda_i}-1)+
(\bfm{a}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n)\Biggr],
\end{aligned}
\end{equation}
where $\{\lambda_1,\ldots,\lambda_n\}$ are the eigenvalues (all positive)
of the covariance matrix $\bfm{M}_{\!n}$, and $\bfm{a}_n=(a_1,\ldots,a_n)$.
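The closed-form expression \eqref{deGaus3a} is easy to check numerically; the sketch below (a toy example of our own with an arbitrary diagonal $\bfm{M}_{\!n}$ and mean, chosen only for illustration) compares it with a Monte Carlo estimate of $\E_{\bfm{\xi}_n}\ln(p^{}_{\bfm{I}_n}/p^{}_{\bfm{a}_n,\bfm{M}_{\!n}})(\bfm{\xi}_n)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 20
lam = rng.uniform(0.5, 3.0, n)        # eigenvalues of a diagonal M_n (assumed)
a = rng.normal(0.0, 0.5, n)           # a hypothetical mean a_n

# Closed form (deGaus3a) for diagonal M_n.
D_closed = 0.5 * (np.sum(np.log(lam) + 1.0 / lam - 1.0) + np.sum(a**2 / lam))

# Monte Carlo estimate of E_{xi ~ N(0,I)} ln(p_I / p_{a,M})(xi).
xi = rng.standard_normal((100_000, n))
log_ratio = 0.5 * (np.sum(np.log(lam)) + xi**2 @ (1.0 / lam - 1.0)
                   - 2.0 * xi @ (a / lam) + np.sum(a**2 / lam))
print(round(D_closed, 3), round(float(log_ratio.mean()), 3))  # should be close
\end{verbatim}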
\subsection{Assumptions}
In the model \eqref{mod1} denote by
$\lambda_1(\bfm{M}_{\!n}),\ldots,\lambda_n(\bfm{M}_{\!n})$
the eigenvalues (all positive) of the covariance matrix $\bfm{M}_{\!n}$.
We assume that the following assumptions are satisfied:
{\bf I}. For all covariance matrices
$\mathbf{M}_{n} \in {\cal M}_{n}$
there exists the limit (see \eqref{deGaus3a})
\begin{equation}\label{assump0}
\begin{gathered}
\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}
\left(\ln\lambda_{i}(\mathbf{M}_{n}) +
\frac{1}{\lambda_{i}(\mathbf{M}_{n})}-1\right)
\end{gathered}
\end{equation}
(note that $\ln z + 1/z-1\geq 0$, $z > 0$).
{\bf II}. For some $\delta > 0$ we have
\begin{equation}\label{assump1}
\begin{gathered}
\lim_{n\to\infty}\frac{1}{n}\sup_{\mathbf{M}_{n} \in {\cal M}_{n}}
\sum_{i=1}^{n}\left|\frac{1}{\lambda_{i}(\mathbf{M}_{n})}-1
\right|^{1+\delta} < \infty.
\end{gathered}
\end{equation}
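Assumptions I and II concern limits as $n\to\infty$ and therefore cannot be verified by a finite computation; still, the following toy sketch (our own illustration, with a hypothetical spectrum bounded away from $0$ and $\infty$) shows how the normalized sums appearing in \eqref{assump0} and \eqref{assump1} stabilize as $n$ grows:
\begin{verbatim}
import numpy as np

def assumption_terms(lam, delta=1.0):
    """Normalized sums appearing in assumptions I and II for eigenvalues lam."""
    n = lam.size
    term_I = np.sum(np.log(lam) + 1.0 / lam - 1.0) / n            # cf. (assump0)
    term_II = np.sum(np.abs(1.0 / lam - 1.0) ** (1 + delta)) / n  # cf. (assump1)
    return term_I, term_II

rng = np.random.default_rng(3)
for n in (100, 1_000, 10_000):
    lam = rng.uniform(0.5, 2.0, n)    # hypothetical spectrum bounded in [0.5, 2]
    print(n, [round(t, 4) for t in assumption_terms(lam)])
\end{verbatim}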
\subsection{Main results}
We first make an important explanation.
{\it Remark 2}.
There is the following technical issue in describing the maximal sets
$\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$. The relation \eqref{aseq1a} has an
asymptotic (as $n\to\infty$) character. Therefore, the maximal sets
$\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$ can also be described only asymptotically
(as $n\to\infty$). For that purpose, it is most convenient to describe the
simplest sequence of sets which gives, in the limit,
the maximal sets $\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$.
In this paper, for an $n\times n$ matrix $\bfm{A}_n$ we denote
$|\bfm{A}_n|=\det\bfm{A}_n$. By $(\bfm{x},\bfm{y})$
we denote the inner product of vectors $\bfm{x},\bfm{y}$.
We write $\bfm{A}_n> \mathbf{0}$ if~$\bfm{A}_n$ is positive definite.
Let $\mathcal{C}_n$ be the set of all $n\times n$ covariance (i.e., symmetric
and positive definite) matrices in $\mathbf{R}^{n}$. For any
$\bfm{M}_{\!n},\bfm{V}_{\!n}\in\mathcal{C}_n$ and any
$\bfm{a}_n, \bfm{b}_n\in\mathbb{R}^n$ define the function
\begin{equation}\label{deffa}
f_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,\bfm{V}_{\!n})=\frac{|\bfm{M}_{\!n}|e^{-K}}
{|\bfm{V}_{\!n}|\,|\bfm{B}_n|},
\end{equation}
where
\begin{equation}\label{Comput2a}
\begin{gathered}
\bfm{B}_n=\bfm{I}_n+\bfm{V}_{\!n}^{-1}-\bfm{M}_{\!n}^{-1},\qquad \bfm{d}=
\bfm{B}_n^{-1}( \bfm{V}_{\!n}^{-1}\bfm{b}_n- \bfm{M}_{\!n}^{-1}\bfm{a}_n),\\
K=(\bfm{b}_n,\bfm{V}_{\!n}^{-1}\bfm{b}_n)- (\bfm{a}_n,
\bfm{M}_{\!n}^{-1}\bfm{a}_n) -
(\bfm{d},\bfm{B}_n\bfm{d}).
\end{gathered}
\end{equation}
For a sequence of pairs $(\bfm{a}_n,\bfm{M}_{\!n})$ introduce the following
sequence of sets of pairs $(\bfm{b}_n,\bfm{V}_{\!n})$:
\begin{equation}\label{Theor1a}
\begin{aligned}[b]
\mathcal{F}_0(\bfm{a}_n,\bfm{M}_{\!n})&=\biggl\{(\bfm{b}_n,\bfm{V}_{\!n}):\:
\E_{\bfm{I}_n}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_{\!n}}}(\bfm{x}) \le e^{o(n)}\biggr\}\\
&=\Bigl\{(\bfm{b}_n,\bfm{V}_{\!n}):\: \bfm{I}_n+\bfm{V}_{\!n}^{-1}
-\bfm{M}_{\!n}^{-1}>\mathbf{0},\:
f_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,\bfm{V}_{\!n})\le e^{o(n)}\Bigr\},
\end{aligned}
\end{equation}
where the function $f_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,\bfm{V}_{\!n})$ is
defined in \eqref{deffa}.
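The two conditions in \eqref{Theor1a} are linked by the identity
$\E_{\bfm{I}_n}(p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}/p^{}_{\bfm{a}_n,\bfm{M}_{\!n}})
=f^{1/2}_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,\bfm{V}_{\!n})$, established in Lemma 2 below.
The following sketch (an arbitrary low-dimensional example of our own, not from the paper) computes $f$ from \eqref{deffa}--\eqref{Comput2a} and checks this identity by Monte Carlo:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n = 4
def random_spd(scale):
    A = rng.standard_normal((n, n))
    return scale * np.eye(n) + 0.1 * A @ A.T

M, V = random_spd(1.2), random_spd(1.1)      # hypothetical covariance pair
a, b = rng.normal(0, 0.3, n), rng.normal(0, 0.3, n)

Minv, Vinv = np.linalg.inv(M), np.linalg.inv(V)
B = np.eye(n) + Vinv - Minv                   # positive definite here since M > I
d = np.linalg.solve(B, Vinv @ b - Minv @ a)
K = b @ Vinv @ b - a @ Minv @ a - d @ B @ d
f = np.linalg.det(M) * np.exp(-K) / (np.linalg.det(V) * np.linalg.det(B))

def logpdf(x, m, S):
    Sinv = np.linalg.inv(S)
    _, logdet = np.linalg.slogdet(S)
    q = np.einsum('ij,jk,ik->i', x - m, Sinv, x - m)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + q)

xi = rng.standard_normal((400_000, n))
ratio = np.exp(logpdf(xi, b, V) - logpdf(xi, a, M))
print(round(np.sqrt(f), 4), round(float(ratio.mean()), 4))  # approximately equal
\end{verbatim}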
The following theorem is the main result of the paper. It describes the sets
$\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$ and
$\mathcal{F}^{\rm LR}(\bfm{a}_n,\bfm{M}_{\!n})$ from \eqref{aseq11} and
\eqref{aseq1aa}, respectively.
{\bf Theorem 1}.
\textit{
If assumptions \/ \eqref{assump0}\upn, \eqref{assump1} hold, then as $n\to\infty$
\begin{equation}\label{Theor1}
\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})=\mathcal{F}^{\rm LR}(\bfm{a}_n,\bfm{M}_{\!n})
=\mathcal{F}_0(\bfm{a}_n,\bfm{M}_{\!n}),
\end{equation}
where equalities are understood in the sense of Remark\/ \upn2}.
{\it Remark 3}.
Clearly, $(\bfm{a}_n,\bfm{M}_{\!n})\in\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$.
Moreover, the sets $\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$ and
$\mathcal{F}^{\rm LR}(\bfm{a}_n,\bfm{M}_{\!n})$ are convex in
$\bfm{b}_n,\bfm{V}_{\!n}$. Indeed, it is known
[\cite{Bellman}, \S\,8.5,Theorem~4; \cite{Horn}, Theorem~7.6.7],
that the function $f(\bfm{A}_n)= \ln|\bfm{A}_n|$ is strictly concave
on the convex set of positive definite symmetric matrices in $\mathbf{R}^n$.
Therefore, the set $\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$ is convex, i.e. any
matrices $\bfm{V}_{\!n}^{(1)}\in\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$ and
$\bfm{V}_{\!n}^{(2)}\in\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$ satisfy condition
$$
a\bfm{V}_{\!n}^{(1)}+(1-a)\bfm{V}_{\!n}^{(2)}
\in\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n}),\quad \text{for any}\ 0\le a\le 1.
$$
In a sense, $\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$ is the set
$\mathcal{F}_0(\bfm{a}_n,\bfm{M}_{\!n})$ enlarged by a ``thin slice''
whose width is of order $o(n)$. In other words,
$\mathcal{F}_0(\bfm{a}_n,\bfm{M}_{\!n})$ can be considered as a ``core'' of
the set $\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$.
We also present the following simplifying corollary of Theorem 1. Without
loss of generality, we may assume that the matrix $\bfm{M}_{\!n}$ is diagonal
(see Remark 1) with eigenvalues $\{\lambda_i\}$ (all positive). We also
restrict ourselves in~\eqref{Theor1} to diagonal matrices $\bfm{V}_{\!n}$
with positive eigenvalues $\{\nu_i\}$. The matrix
$\bfm{B}_n= \bfm{I}_n+\bfm{V}_{\!n}^{-1}-\bfm{M}_{\!n}^{-1}$ is diagonal
with the eigenvalues $\{\mu_i\}$:
\begin{equation}\label{deff1}
\mu_i=1+\frac{1}{\nu_i} - \frac{1}{\lambda_i},\quad i=1,\ldots,n.
\end{equation}
Then for $\bfm{a}_n=(a_{1,n},\ldots,a_{n,n})$,
$\bfm{b}_n=(b_{1,n},\ldots,b_{n,n})$ we have from \eqref{Comput2a}
\begin{equation}\label{deff2}
K=\sum\limits_{i=1}^n\biggl[\frac{b_{i,n}^2}{\nu_i} -
\frac{a_{i,n}^2}{\lambda_i} -
\frac{1}{\mu_i}\Bigl(\frac{b_{i,n}}{\nu_i} -
\frac{a_{i,n}}{\lambda_i}\Bigr)^2\biggr].
\end{equation}
Introduce the convex set $\mathcal{C}_{{\rm diag},n}$ of diagonal, positive
definite matrices $\bfm{V}_{\!n}$:
$$
\mathcal{C}_{{\rm diag},n}=\{\bfm{V}_{\!n}\in\mathcal{C}_n:\:
\bfm{V}_{\!n}>\mathbf{0}\,\ \text{and}\,\ \bfm{V}_{\!n}\
\text{a diagonal matrix}\}.
$$
If $\bfm{M}_{\!n},\bfm{V}_{\!n}\in\mathcal{C}_{{\rm diag},n}$, then the
function $f_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,\bfm{V}_{\!n})$
from \eqref{deffa} takes the form
\begin{equation}\label{deffVM}
f_{\bfm{a}_n,\bfm{M}_{\!n}}^{(0)}(\bfm{b}_n,\bfm{V}_{\!n})
=e^{-K}\prod_{i=1}^n\Bigl(\frac{\lambda_i}{\nu_i\mu_i}\Bigr),
\end{equation}
where $\{\mu_i\}$ are defined in \eqref{deff1}, and $K$ is defined
in \eqref{deff2}. It is supposed also, that $\mu_i>0$, $i=1,\ldots,n$.
For a sequence of pairs $(\bfm{a}_n,\bfm{M}_{\!n})$,
$\bfm{M}_{\!n}\in\mathcal{C}_{{\rm diag},n}$, introduce the following set of
pairs $(\bfm{b}_n,\bfm{V}_{\!n})$,
$\bfm{V}_{\!n}\in\mathcal{C}_{{\rm diag},n}$:
\begin{equation}\label{defV0}
\mathcal{V}(\bfm{a}_n,\bfm{M}_{\!n})=\Bigl\{(\bfm{b}_n,\bfm{V}_{\!n}):\:
1+1/\nu_i-1/\lambda_i>0,\: i=1,\ldots,n,\: \ln
f_{\bfm{a}_n,\bfm{M}_{\!n}}^{(0)}(\bfm{b}_n,\bfm{V}_{\!n})\le o(n)\Bigr\},
\end{equation}
where the function
$f_{\bfm{a}_n,\bfm{M}_{\!n}}^{(0)}(\bfm{b}_n,\bfm{V}_{\!n})$ is defined
in \eqref{deffVM}.
Then the following ``inner bound'' for $\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$
holds.
{\bf Theorem 2}.
\textit{
If assumptions\/ \eqref{assump0}\upn, \eqref{assump1} hold, then the set
$\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$
contains the set $\mathcal{V}(\bfm{a}_n,\bfm{M}_{\!n})$:
\begin{equation}\label{Cor1}
\mathcal{V}(\bfm{a}_n,\bfm{M}_{\!n}) \subseteq \mathcal{F}
(\bfm{a}_n,\bfm{M}_{\!n}),
\quad \bfm{M}_{\!n}\in \mathcal{C}_{{\rm diag},n},
\end{equation}
where the set\/ $\mathcal{V}(\bfm{a}_n,\bfm{M}_{\!n})$ is
defined in\/ \eqref{defV0}}.
The set $\mathcal{V}(\bfm{a}_n,\bfm{M}_{\!n})$ is convex in $\bfm{V}_{\!n}$
(see Remark 3).
The paper is organized as follows: in $\S\,2$ an auxiliary Theorem 3 is given;
in $\S\,3$ Theorem~1 is proved; and in $\S\,4$ some particular cases of the
problem are considered as examples.
\section{Auxiliary Theorem }
In models \eqref{mod0}, \eqref{mod1} we first consider the testing of simple
hypotheses: the pair $(\mathbf{0}_n,{\bfm{I}}_n)$ versus a pair
$(\bfm{a}_n,\bfm{M}_n)$. Denote
$$
D(\mathbf{I}_{n}||\mathbf{a}_{n},\mathbf{M}_{n}) =
D({\mathbf P}_{\mathbf{I}_{n}}||{\mathbf Q}_{\mathbf{a}_{n},\mathbf{M}_{n}}).
$$
The next theorem is the main auxiliary result of this paper. Its proof follows
the proof of Theorem 3 in \cite{Bur20a}. A more general result is contained
in \cite{Bur21}.
{\bf Theorem 3}. \textit{
For the minimal possible\/ $\beta(\alpha)$\upn, $0<\alpha<1$\upn, the bounds
are valid
\begin{equation}\label{Stein11}
\begin{aligned}&
\ln\beta(\alpha)\ge -\frac{D(\mathbf{I}_n\Mmid \bfm{a}_n,\bfm{M}_{\!n})+
h(\alpha)}{1-\alpha},\quad h(\alpha)=-\alpha\ln\alpha - (1-\alpha)\ln(1-\alpha),
\end{aligned}
\end{equation}
and
\begin{equation}\label{Stein12}
\ln\beta(\alpha)\le -D(\bfm{I}_n\Mmid
\bfm{a}_n,\bfm{M}_{\!n})+\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n}),
\end{equation}
where\/ $\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$ is defined by the relation}
\begin{equation}\label{defmu0}
\P_{\bfm{I}_n}\biggl\{\ln\frac{p^{}_{\bfm{I}_n}}
{p^{}_{\bfm{a}_n,\bfm{M}_{\!n}}}(\bfm{x})\le D(\bfm{I}_n\Mmid
\bfm{a}_n,\bfm{M}_{\!n})-\mu_0\biggr\}=\alpha.
\end{equation}
Note that both bounds \eqref{Stein11} and \eqref{Stein12} are pure analytical
relations without any limiting operations. The lower bound
\eqref{Stein11} and the upper bound \eqref{Stein12} are close to each other,
if the value $\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$ is much smaller than
$D(\bfm{I}_n\Mmid \bfm{a}_n,\bfm{M}_{\!n})$ (which usually has the
order of $n$).
The next result gives an upper bound of order $n^{1/p}$, $p>1$, for the value
$\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$ from \eqref{defmu0}. Its proof
(see the Appendix) follows the proof of Lemma 1 in \cite{Bur20a}.
{\bf Lemma 1}. \textit{For $\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$ from
\eqref{Stein12} the following upper bound holds \upn(see \eqref{assump1}\upn)}
\begin{equation}\label{lem1a}
\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n})\le\Biggl(\frac{24}{\alpha}
\sum_{i=1}^n\biggl|\frac{1}{\lambda_i(\bfm{M}_{\!n})}-1\biggr|^p\Biggr)^{1/p}
+3\bigl\|\bfm{M}_{\!n}^{-1}\bfm{a}_n\bigr\|\sqrt{\ln(1/\alpha)}.
\end{equation}
\section{Proof of Theorem 1}
Since $\mathcal{F}_n^{\rm LR}(\bfm{a}_n,\bfm{M}_{\!n}) \subseteq
\mathcal{F}_n(\bfm{a}_n,\bfm{M}_{\!n})$, in order to prove Theorem 1
it is sufficient to get the ``inner bound'' for
$\mathcal{F}_n^{\rm LR}(\bfm{a}_n,\bfm{M}_{\!n})$, and then to get a similar
``outer bound'' for $\mathcal{F}_n(\bfm{a}_n,\bfm{M}_{\!n})$.
\subsection{``Inner bound'' for
$\mathcal{F}_n^{\rm LR}(\bfm{a}_n,\bfm{M}_{\!n})$}
We first estimate from above the value $\beta(\alpha,\bfm{b}_n,\bfm{V}_{\!n})$.
For that purpose in the model \eqref{mod1} we consider the testing of the simple
hypothesis $(\mathbf{0}_n,{\bfm{I}}_n)$ against the simple alternative
$(\bfm{a}_n,\bfm{M}_{\!n})$, when $\bfm{a}_n$ is known. We use the optimal
LR-test with the decision region $\mathcal{D}_{\rm
LR}(\bfm{a}_n,\bfm{M}_{\!n},\alpha)=\mathcal{A}_{\mu_0}$ in favor of
$(\mathbf{0}_n,{\bfm{I}}_n)$ (see~\eqref{DLR},~\eqref{DLR1}), where
$\mu_0 =\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n})> 0$ is defined in \eqref{defmu0}.
Let us consider another pair $(\bfm{b}_n,\bfm{V}_{\!n})$, and evaluate
the 2nd-kind error probability $\beta(\alpha,\bfm{b}_n,\bfm{V}_{\!n})$,
provided the decision region $\mathcal{A}_{\mu_0}$ is used. Then
\begin{equation}\label{error2V}
\begin{aligned}[b]
\beta(\alpha,\bfm{b}_n,\bfm{V}_{\!n})&=\int\limits_{\mathcal{A}_{\mu_0}}
p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}(\bfm{x})d\bfm{x}
=\int\limits_{\mathcal{A}_{\mu_0}}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_n}}(\bfm{x})
\frac{p^{}_{\bfm{a}_n,\bfm{M}_n}}{p^{}_{\bfm{I}_n}}
(\bfm{x})p^{}_{\bfm{I}_n}(\bfm{x})d\bfm{x}\\ &=e^{-D(\bfm{I}_n\Mmid
\bfm{a}_n,\bfm{M}_n)+\mu_1}
\int\limits_{\mathcal{A}_{\mu_0}}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_n}} (\bfm{x})p^{}_{\bfm{I}_n}(\bfm{x})d\bfm{x}\\
&\le\beta(\alpha,\bfm{a}_n,\bfm{M}_n)e^{\mu_0}
\E_{\bfm{I}_n}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_n}}(\bfm{x}),
\end{aligned}
\end{equation}
where $0\le\mu_1\le\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$. Due to the
assumption \eqref{assump1} and the estimate \eqref{lem1a}, we have
\begin{equation}\label{error2V1a}
\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n})=O(n^{1/(1+\delta)}) =o(n),\quad n\to\infty.
\end{equation}
Therefore, if
\begin{equation}\label{error2V1}
\sup_{(\bfm{b}_n,\bfm{V}_{\!n})\in\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})}
\E_{\bfm{I}_n}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}} {p^{}_{\bfm{a}_n,\bfm{M}_n}}
(\bfm{x})\le e^{o(n)},\quad n\to\infty,
\end{equation}
then by \eqref{error2V}--\eqref{error2V1} as $n\to\infty$
\begin{equation}\label{error2V2}
\sup_{(\bfm{b}_n,\bfm{V}_{\!n})\in\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})}
\ln\beta(\alpha,\bfm{b}_n,\bfm{V}_{\!n})
\le\ln\beta(\alpha,\bfm{a}_n,\bfm{M}_n)+o(n).
\end{equation}
\subsection{``Outer bound'' for $\mathcal{F}_n(\bfm{a}_n,\bfm{M}_{\!n})$}
Now, we get a similar lower bound for $\beta(\alpha,\bfm{b}_n,\bfm{V}_{\!n})$.
Consider first the testing of the simple hypothesis $(\mathbf{0}_n,{\bfm{I}}_n)$
against the simple alternative $(\bfm{a}_n,\bfm{M}_n)$. We use the optimal
LR-test with the decision region
$\mathcal{D}_{\rm LR}(\bfm{a}_n,\bfm{M}_{\!n},\alpha)=\mathcal{A}_{\mu_0}$ in
favor of $(\mathbf{0}_n,{\bfm{I}}_n)$ (see~\eqref{DLR},~\eqref{DLR1}). Then,
denoting $p=p^{}_{\bfm{I}_n}$ and $q=p^{}_{\bfm{a}_n,\bfm{M}_{\!n}}$, we have
for error probabilities
\begin{equation}\label{Defab11}
\alpha=\P_{\bfm{I}_n}(\mathcal{A}_{\mu_0}),\qquad \beta_{\bfm{a}_n,\bfm{M}_{\!n}}
=\int\limits_{\mathcal{A}_{\mu_0}}q(\bfm{x})\,d\bfm{x}=\beta(\alpha).
\end{equation}
Consider another pair $(\bfm{b}_n,\bfm{V}_{\!n})$. Let
$\mathcal{D}\subseteq\mathbb{R}^n$ be a decision region in favor of
$(\mathbf{0}_n,\bfm{I}_n)$, and let
$\beta_{\bfm{b}_n,\bfm{V}_{\!n}}(\mathcal{D})$ and
$\alpha=\alpha(\mathcal{D})$ be the corresponding error probabilities. Then,
denoting $q_1=p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}$, we need to have for the
2nd-kind error probability $\beta_{\bfm{b}_n,\bfm{V}_{\!n}}(\mathcal{D})$
(see \eqref{Defab11})
\begin{equation}\label{Defab1a}
\beta_{\bfm{b}_n,\bfm{V}_{\!n}}(\mathcal{D})
=\int\limits_\mathcal{D}q_1(\bfm{x})\,d\bfm{x}\le\beta(\alpha)e^{o(n)},\qquad
\alpha=\P_{\bfm{I}_n}(\mathcal{D}^{c}).
\end{equation}
For some $\delta$, $0\le\delta\le 1$, consider also the probability density
\begin{equation}\label{Defab2}
q^{}_{\delta}(\bfm{x})=(1-\delta)q(\bfm{x})+\delta q_1(\bfm{x})
\end{equation}
and the corresponding value $\beta_{\delta}$ for it:
\begin{equation}\label{Defab3a}
\beta_{\delta}=\int\limits_\mathcal{D}q^{}_{\delta}(\bfm{x})\,d\bfm{x}
=(1-\delta)\beta(\alpha)+\delta\beta_{\bfm{b}_n,\bfm{V}_{\!n}}.
\end{equation}
We have by \eqref{Defab1a} and \eqref{Defab3a}
\begin{equation}\label{Defab3}
\beta_{\delta}\le\beta(\alpha)\bigl(1-\delta+\delta e^{o(n)}\bigr).
\end{equation}
Note that the probability density $q^{}_{\delta}(\bfm{x})$
corresponds to the Bayes problem statement, when the alternative hypothesis
$\mathcal{H}_1$ with probability $1-\delta$ coincides with
~$(\bfm{a}_n,\bfm{M}_{\!n})$, and with probability $\delta$ -- with
$(\bfm{b}_n,\bfm{V}_{\!n})$. The value $\beta_{\delta}$ is
the corresponding 2nd-kind error probability.
We now bound the value $\beta_{\delta}$ from below. First we have
\begin{equation}\label{Defab4}
\begin{aligned}[b]
\ln\frac{\beta_{\delta}}{1-\alpha}&=\ln\Biggl[\frac{1}{(1-\alpha)}
\int\limits_\mathcal{D}p(\bfm{x})\frac{q^{}_{\delta}}{p}(\bfm{x})\,
d\bfm{x}\Biggr]\ge\frac{1}{(1-\alpha)}
\int\limits_\mathcal{D}p(\bfm{x})\ln\frac{q^{}_{\delta}}{p}(\bfm{x})\,d\bfm{x}\\
&=-\frac{D(p(\bfm{x})\Mmid q^{}_{\delta}(\bfm{x}))}{1-\alpha} -
\frac{1}{(1-\alpha)}\int\limits_{\mathcal{D}^{c}}p(\bfm{x})
\ln\frac{q^{}_{\delta}}{p}(\bfm{x})\,d\bfm{x}.
\end{aligned}
\end{equation}
For the last term in the right-hand side of \eqref{Defab4} we have
$$
\int\limits_{\mathcal{D}^{c}}p(\bfm{x})
\ln\frac{q^{}_{\delta}}{p}(\bfm{x})\,d\bfm{x}\le\alpha\ln\Biggl[\frac{1}{\alpha}
\int\limits_{\mathcal{D}^{c}}q^{}_{\delta}(\bfm{x})\,d\bfm{x}\Biggr]
=\alpha\ln\frac{1-\beta_{\delta}}{\alpha}\le\alpha\ln\frac{1}{\alpha}.
$$
Therefore we get
\begin{equation}\label{Stein11a}
\ln\beta_{\delta}\ge -\frac{D(p(\bfm{x})\Mmid
q^{}_{\delta}(\bfm{x}))+h(\alpha)}{1-\alpha}.
\end{equation}
Consider the value $D(p(\bfm{x})\Mmid q^{}_{\delta}(\bfm{x}))$ in the
right-hand side of \eqref{Stein11a}. Denoting
\begin{equation}\label{Defr}
r(\bfm{x})=\frac{q_1(\bfm{x})}{q(\bfm{x})},
\end{equation}
we have by \eqref{Defab2} and \eqref{Defr}
$$
\frac{q^{}_{\delta}(\bfm{x})}{q(\bfm{x})}=
1-\delta+\delta\frac{q_1(\bfm{x})}{q(\bfm{x})}=1-\delta+\delta r(\bfm{x}).
$$
Therefore
\begin{equation}\label{Defab7}
D(p(\bfm{x})\Mmid q^{}_{\delta}(\bfm{x}))=-\int\limits_{\mathbb{R}^n} p(\bfm{x})
\ln\frac{q^{}_{\delta}}{p}(\bfm{x})\,d\bfm{x}=D(p(\bfm{x})\Mmid
q(\bfm{x}))+g(\delta),
\end{equation}
where
\begin{equation}\label{Defab8}
g(\delta)=- \int\limits_{\mathbb{R}^n} p(\mathbf x)\ln[1-\delta+\delta
r(\bfm{x})]\,d\bfm{x}.
\end{equation}
Therefore, by \eqref{Defab3}, \eqref{Defab7} and \eqref{Defab8} we need to
have
\begin{equation}\label{Defab9}
g(\delta)\ge -\ln(1-\delta+\delta e^{o(n)}),\quad \text{for all}\ 0<\delta\le 1.
\end{equation}
Note that since $\ln \E\xi\ge\E\ln \xi$, we have from \eqref{Defab8}
$$
g(\delta)\le\ln\int\limits_{\mathbb{R}^n} \frac{p(\bfm{x})}{1-\delta +\delta
r(\bfm{x})}d\bfm{x},\quad \text{for all}\ 0<\delta\le 1.
$$
Therefore, in order to have \eqref{Defab9} fulfilled, we need to have
\begin{equation}\label{Defab92}
\int\limits_{\mathbb{R}^n} \frac{p(\bfm{x})}{1-\delta +\delta
r(\bfm{x})}d\bfm{x}\ge\frac{1}{1-\delta+\delta e^{o(n)}},\quad 0<\delta\le 1.
\end{equation}
Since $\int p(\bfm{x})\,d\bfm{x}=1$, the relation \eqref{Defab92} is equivalent
to the condition
\begin{equation}\label{Defab92a}
\int\limits_{\mathbb{R}^n} \frac{p(\bfm{x})(r(\bfm{x})-1)}{1-\delta +\delta
r(\bfm{x})}d\bfm{x}\le\frac{e^{o(n)}-1}{1-\delta+\delta e^{o(n)}},\quad
0<\delta\le 1.
\end{equation}
Note that
$$
\int\limits_{\mathbb{R}^n} \frac{p(\bfm{x})}{1-\delta +\delta
r(\bfm{x})}d\bfm{x}\le\frac{1}{1-\delta},\quad 0<\delta\le 1.
$$
Then, in order to have \eqref{Defab92a} fulfilled, we need, at least,
\begin{equation}\label{Defab94}
\int\limits_{\mathbb{R}^n} \frac{p(\bfm{x})r(\bfm{x})}{1 +\delta
r(\bfm{x})}d\bfm{x}\le\frac{e^{o(n)}}{(1-\delta)(1-\delta+\delta e^{o(n)})}.
\end{equation}
Setting $\delta \downarrow 0$, we get from \eqref{Defab94} the
necessary condition
\begin{equation}\label{Defab98}
\int\limits_{\mathbb{R}^n}p(\bfm{x})r(\bfm{x})\,d\bfm{x}=\E_{\bfm{I}_n}
\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}{p^{}_{\bfm{a}_n,\bfm{M}_n}} (\bfm{x}) \le
e^{o(n)},
\end{equation}
which gives the ``outer bound'' for $\mathcal{F}_n(\bfm{a}_n,\bfm{M}_{\!n})$
(see \eqref{Theor1}).
Note that the ``inner bound'' \eqref{error2V1}, \eqref{error2V2} for
$\mathcal{F}_n(\bfm{a}_n,\bfm{M}_{\!n})$ coincides with ~\eqref{Defab98}.
Therefore, in order to finish the proof of Theorem 1 it remains to express
the condition \eqref{Defab98} analytically via the matrices
$\bfm{M}_{\!n}, \bfm{V}_{\!n}$ and the means $\bfm{a}_n,\bfm{b}_n$. For that
purpose we use the following result.
{\bf Lemma 2}.
\textit{ If\/ $\bfm{I}_n+\bfm{V}_{\!n}^{-1}-\bfm{M}_{\!n}^{-1}>\mathbf{0}$\upn,
\/then the formula holds \upn(see \eqref{deffa}--\eqref{Theor1a}\upn)}
\begin{equation}\label{lemma2}
\E_{\bfm{I}_n}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_n}}(\bfm{x})=\frac{|\bfm{M}_{\!n}|^{1/2}e^{-K/2}}
{|\bfm{V}_{\!n}|^{1/2}|\bfm{B}_n|^{1/2}}
=f^{1/2}_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,\bfm{V}_{\!n}),
\end{equation}
\textit{where the function\/ $f_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,
\bfm{V}_{\!n})$ \/is defined in \eqref{deffa}.}
\textit{If the matrix\/ $\bfm{I}_n+\bfm{V}_{\!n}^{-1}-\bfm{M}_{\!n}^{-1}$\upn
is not positive definite, then}
\begin{equation}\label{lemma2a}
\E_{\bfm{I}_n}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_n}}(\bfm{x})=\infty.
\end{equation}
{\bf Proof}. Denoting
$$
\zeta=\bigl(\bfm{\xi}_n-\bfm{b}_n,\bfm{V}_{\!n}^{-1} (\bfm{\xi}_n-\bfm{b}_n)\bigr)-
\bigl(\bfm{\xi}_n-\bfm{a}_n,\bfm{M}_{\!n}^{-1} (\bfm{\xi}_n-\bfm{a}_n)\bigr),
$$
we get by \eqref{deGaus1a}
\begin{equation}\label{Comput2}
\begin{aligned}[b]
\E_{\bfm{I}_n}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_n}}(\bfm{x})
&=\E_{\bfm{\xi}_n}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_{\!n}}}(\bfm{\xi}_n)
=\frac{|\bfm{M}_{\!n}|^{1/2}}{|\bfm{V}_{\!n}|^{1/2}} \E_{\bfm{\xi}_n}e^{-\zeta/2}\\
&=\frac{|\bfm{M}_{\!n}|^{1/2}}{|\bfm{V}_{\!n}|^{1/2}
(2\pi)^{n/2}}\int\limits_{\mathbb{R}^n} e^{-\tfrac{1}{2}\left[(\bfm{x},\bfm{x})+
(\bfm{x}-\bfm{b}_n,\bfm{V}_{\!n}^{-1} (\bfm{x}-\bfm{b}_n)) -
(\bfm{x}-\bfm{a}_n,\bfm{M}_{\!n}^{-1} (\bfm{x}-\bfm{a}_n))\right]}\,d\bfm{x}.
\end{aligned}
\end{equation}
Note that (see \eqref{Comput2})
$$
(\bfm{x},\bfm{x})+(\bfm{x}-\bfm{b},\bfm{V}^{-1}
(\bfm{x}-\bfm{b}))-(\bfm{x}-\bfm{a},\bfm{M}^{-1}
(\bfm{x}-\bfm{a}))=(\bfm{x}-\bfm{d},\bfm{B}(\bfm{x}-\bfm{d}))+K,
$$
where (see also \eqref{Comput2a})
$$
\begin{gathered}
\bfm{B}=\bfm{I}+\bfm{V}^{-1}-\bfm{M}^{-1},\qquad \bfm{d}= \bfm{B}^{-1}(
\bfm{V}^{-1}\bfm{b}- \bfm{M}^{-1}\bfm{a}),\\ K=(\bfm{b},\bfm{V}^{-1}\bfm{b})-
(\bfm{a},\bfm{M}^{-1}\bfm{a}) - (\bfm{d},\bfm{B}\bfm{d}).
\end{gathered}
$$
Therefore, we can continue \eqref{Comput2} as follows:
\begin{equation}\label{Comput3b}
\begin{aligned}[b]
\E_{\bfm{I}_n}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_n}}(\bfm{x})&=\frac{|\bfm{M}_{\!n}|^{1/2}e^{-K/2}}
{|\bfm{V}_{\!n}|^{1/2}(2\pi)^{n/2}} \int\limits_{\mathbb{R}^n}
e^{-((\bfm{x}-\bfm{d}),\bfm{B}_n(\bfm{x}-\bfm{d}))/2} \,d\bfm{x}\\
&=\frac{|\bfm{M}_{\!n}|^{1/2}e^{-K/2}} {|\bfm{V}_{\!n}|^{1/2}(2\pi)^{n/2}}
\int\limits_{\mathbb{R}^n} e^{-(\bfm{x},\bfm{B}_n\bfm{x})/2}\,d\bfm{x}.
\end{aligned}
\end{equation}
Consider the integral in the right-hand side of \eqref{Comput3b}. If
$\bfm{B}_n>\mathbf{0}$, then \cite[\S\,6.9, Theorem~3]{Bellman}
\begin{equation}\label{Comput3c}
\int\limits_{\mathbb{R}^n}
e^{-(\bfm{x},\bfm{B}_n\bfm{x})/2}\,d\bfm{x}=\frac{(2\pi)^{n/2}}{|\bfm{B}_n|^{1/2}}.
\end{equation}
Otherwise
\begin{equation}\label{Comput3d}
\int\limits_{\mathbb{R}^n} e^{-(\bfm{x},\bfm{B}_n\bfm{x})/2}\,d\bfm{x}=\infty.
\end{equation}
Assume first $\bfm{B}_n=\bfm{I}_n+\bfm{V}_{\!n}^{-1}-\bfm{M}_{\!n}^{-1}>
\mathbf{0}$, i.e.,
the matrix $\bfm{B}_n$ is positive definite. Then, by \eqref{Comput3b},
\eqref{Comput3c} we get
\begin{equation}\label{Stein3b31k}
\E_{\bfm{I}_n}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_n}}(\bfm{x})=\frac{|\bfm{M}_{\!n}|^{1/2}e^{-K/2}}
{|\bfm{V}_{\!n}|^{1/2}|\bfm{B}_n|^{1/2}}.
\end{equation}
If the matrix
$\bfm{B}_n=\bfm{I}_n+\bfm{V}_{\!n}^{-1}-\bfm{M}_{\!n}^{-1}$ is not positive
definite, then by \eqref{Comput3d}
\begin{equation}\label{Stein3b31k1}
\E_{\bfm{I}_n}\frac{p^{}_{\bfm{b}_n,\bfm{V}_{\!n}}}
{p^{}_{\bfm{a}_n,\bfm{M}_n}}(\bfm{x})=\infty,
\end{equation}
and therefore the condition \eqref{Defab98} can not be satisfied. From
\eqref{Stein3b31k}, \eqref{Stein3b31k1} Lemma 2 follows.\qed
We continue the proof of Theorem 1. Define
$\mathcal{F}(\bfm{a}_n,\bfm{M}_{\!n})$ as the maximal set, satisfying the
condition
\begin{equation}\label{Stein3b31n}
f_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,\bfm{V}_{\!n})\le e^{o(n)},\quad n\to\infty.
\end{equation}
That set coincides with the definition \eqref{Theor1a}. Therefore,
from \eqref{error2V1}, \eqref{Defab98}, \eqref{lemma2} and \eqref{Stein3b31n}
Theorem 1 follows.
\section{Examples. Particular cases}
\subsection{Known mean $\bfm{a}_n$ and known covariance matrix $\bfm{M}_{\!n}$}
We first consider the simplest case of known mean $\bfm{a}_n=(a_1,\ldots,a_n)$
and known matrix $\bfm{M}_{\!n}$, and apply Theorem 3. It will allow us to
estimate the rate of convergence in Theorem 1. Without loss of generality, we
may assume in model \eqref{mod1} that the covariance matrix $\bfm{M}_{\!n}$
is diagonal with positive eigenvalues $\lambda_1,\ldots,\lambda_n$
~(see Remark 1). Then (see~\eqref{deGaus3a})
\begin{equation}\label{deGaus5}
\begin{gathered}
D(\mathbf{P}_{\mathbf{I}_{n}}||{\mathbf{Q}}_{\mathbf{a}_{n},\mathbf{M}_{n}}) =
D(\boldsymbol{\xi}_{n}||{\mathbf a}_{n}+\boldsymbol{\eta}_{n}) =
\frac{1}{2}\left[\sum_{i=1}^{n}\left(\ln\lambda_{i} +
\frac{1}{\lambda_{i}}-1 + \frac{a_{i}^{2}}{\lambda_{i}}\right)\right].
\end{gathered}
\end{equation}
By \eqref{Stein11}, \eqref{Stein12} we get for
$D =D(\mathbf{P}_{\mathbf{I}_{n}}||{\mathbf{Q}}_{\mathbf{a}_{n},\mathbf{M}_{n}})$
\begin{equation}\label{Stein17} -\frac{D+1}{1-\alpha}\le\ln\beta(\alpha)\le
-D+\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n}),
\end{equation}
where $\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$ is estimated in \eqref{lem1a}.
In order to estimate $\mu_0(\alpha,\bfm{a}_n,\bfm{M}_{\!n})$ more simply than in
\eqref{lem1a}, we additionally assume that the following condition is satisfied:
{\bf III.} There exists $C>0$, such that
\begin{equation}\label{assump3}
\begin{gathered}
{\mathbf E}_{{\mathbf P}}\left[
D(\mathbf{P}_{\mathbf{I}_{n}}||{\mathbf{Q}}_{\mathbf{a}_{n},\mathbf{M}_{n}}) -
\ln\frac{\mathbf{p}_{\mathbf{I}_{n}}}
{\mathbf{p}_{\mathbf{a}_{n},\mathbf{M}_{n}}}
({\mathbf x})\right]^{2} \leq C^{2}
D(\mathbf{P}_{\mathbf{I}_{n}}||{\mathbf{Q}}_{\mathbf{a}_{n},\mathbf{M}_{n}}).
\end{gathered}
\end{equation}
Then by the Chebyshev inequality we have
\begin{equation}\label{example1}
\begin{gathered}
\alpha_{\mu} = {\mathbf P}_{\mathbf{I}_{n}}\left\{
D(\mathbf{I}_{n}||\mathbf{a}_{n},\mathbf{M}_{n}) - \ln\frac{p_{\mathbf{I}_{n}}}
{p_{\mathbf{a}_{n},\mathbf{M}_{n}}}({\mathbf x}_{n}) \geq \mu\right\} \leq \\
\leq \mu^{-2}{\mathbf E}_{{\mathbf P}}\left[
D(\mathbf{P}_{\mathbf{I}_{n}}||{\mathbf{Q}}_{\mathbf{a}_{n},\mathbf{M}_{n}}) -
\ln\frac{\mathbf{p}_{\mathbf{I}_{n}}}{\mathbf{p}_{\mathbf{a}_{n},\mathbf{M}_{n}}}
({\mathbf x})\right]^{2} \leq C^{2}\mu^{-2}
D(\mathbf{P}_{\mathbf{I}_{n}}||{\mathbf{Q}}_{\mathbf{a}_{n},\mathbf{M}_{n}}).
\end{gathered}
\end{equation}
In order to have the right-hand side of \eqref{example1} not exceeding $\alpha$,
it is sufficient to set
$$
\begin{gathered}
\mu = C\sqrt{\frac{D(\mathbf{P}_{\mathbf{I}_{n}}||
{\mathbf{Q}}_{\mathbf{a}_{n},\mathbf{M}_{n}})}{\alpha}},
\end{gathered}
$$
and then \eqref{Stein17} takes the form
$$
-\frac{D+1}{1-\alpha}\le\ln\beta(\alpha)\le -D+C\sqrt{\frac{D}{\alpha}},
$$
which estimates the rate of convergence in \eqref{Stein17}.
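For instance, in the homogeneous case $\lambda_i\equiv\lambda$, $a_i\equiv a$,
$i=1,\ldots,n$, with $(\lambda,a)\neq(1,0)$ (taken here only as an illustration),
formula \eqref{deGaus5} gives
$$
D=\frac{n}{2}\Bigl(\ln\lambda+\frac{1}{\lambda}-1+\frac{a^2}{\lambda}\Bigr),
$$
and since the correction terms $1/(1-\alpha)=O(1)$ and $C\sqrt{D/\alpha}=O(\sqrt{n})$
are both $o(n)$, the last two bounds yield
$$
\lim_{n\to\infty}\frac{1}{n}\ln\beta(\alpha)=
-\frac{1}{2}\Bigl(\ln\lambda+\frac{1}{\lambda}-1+\frac{a^2}{\lambda}\Bigr).
$$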
Note also that similarly to \eqref{lem1b}, \eqref{lem1c} we can get
\begin{equation}\label{deGaus6}
\begin{gathered}
{\mathbf E}_{{\mathbf P}}\left[
D(\mathbf{P}_{\mathbf{I}_{n}}||{\mathbf{Q}}_{\mathbf{a}_{n},\mathbf{M}_{n}}) -
\ln\frac{\mathbf{p}_{\mathbf{I}_{n}}}
{\mathbf{p}_{\mathbf{a}_{n},\mathbf{M}_{n}}}
({\mathbf x})\right]^{2} =
\frac{1}{2}\sum_{i=1}^{n}\left[\left(1-\frac{1}{\lambda_{i}}\right)^{2} +
2\frac{a_{i}^{2}}{\lambda_{i}^{2}}\right].
\end{gathered}
\end{equation}
Therefore the condition {\bf III} is equivalent to the inequality
(see \eqref{deGaus5} and \eqref{deGaus6})
$$
\sum_{i=1}^n\biggl[\Bigl(1-\frac{1}{\lambda_i}\Bigr)^2
+2\frac{a_i^2}{\lambda_i^2}\biggr]\le C^2
\Biggl[\sum_{i=1}^n\Bigl(\ln\lambda_i+\frac{1}{\lambda_i}-1
+\frac{a_i^2}{\lambda_i}\Bigr)\Biggr].
$$
{\it Remark 4}.
The assumption \eqref{assump3} is fulfilled, for example, in the
natural ``regular'' case, when elements $\bfm{a}_{n+1}$, $\bfm{M}_{n+1}$
are ``continuations'' of elements~$\bfm{a}_n$,~$\bfm{M}_n$.
\subsection{Unknown mean $\bfm{a}_n$ and known covariance matrix $\bfm{M}_{\!n}$}
Consider the case of model \eqref{mod1},
when we know the covariance matrix $\bfm{M}_{\!n}$, but we do not
know the mean $\bfm{a}_n$. Without loss of generality we may assume
the covariance matrix $\bfm{M}_{\!n}$ diagonal with positive eigenvalues
$\lambda_1,\ldots,\lambda_n$ (see Remark~ 1). Then the function
$f_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,\bfm{M}_n)$ from \eqref{deffa} takes the
form
$$
f_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,\bfm{M}_n)=e^{-K},
$$
where for $\bfm{a}_n=(a_{1,n},\ldots,a_{n,n})$ and
$\bfm{b}_n=(b_{1,n},\ldots,b_{n,n})$ we have
\begin{equation}\label{example2a}
\begin{aligned}[b]
K&=(\bfm{b}_n,\bfm{M}_{\!n}^{-1}\bfm{b}_n)- (\bfm{a}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n)-
\bigl(\bfm{M}_{\!n}^{-1}(\bfm{b}_n- \bfm{a}_n), \bfm{M}_{\!n}^{-1}(\bfm{b}_n-
\bfm{a}_n)\bigr)\\ &=\sum\limits_{i=1}^n\biggl[\frac{b_{i,n}^2-a_{i,n}^2}{\lambda_i}-
\frac{(b_{i,n}-a_{i,n})^2}{\lambda_i^2}\biggr].
\end{aligned}
\end{equation}
The corresponding maximal set
$\mathcal{F}_1(\bfm{a}_n,\bfm{M}_{\!n})=\{\bfm{b}_n\}$ in that case takes the
form (see \eqref{Theor1a})
\begin{equation}\label{example2b}
\mathcal{F}_1(\bfm{a}_n,\bfm{M}_{\!n})=\{\bfm{b}_n:\: K\ge o(n)\},
\end{equation}
where the function $K=K(\bfm{a}_n,\bfm{M}_{\!n},\bfm{b}_n)$ is defined in
\eqref{example2a}.
Note that if $\bfm{M}_{\!n}=\bfm{I}_n$ (i.e., when the hypotheses differ only by
the means~$\bfm{a}_n$), formulas \eqref{example2a}, \eqref{example2b} take an
especially simple form:
\begin{equation}\label{example2d}
K=2(\bfm{a}_n,\bfm{b}_n-\bfm{a}_n),\qquad
\mathcal{F}_1(\bfm{a}_n,\bfm{I}_n)=\{\bfm{b}_n:\: (\bfm{a}_n,\bfm{b}_n-\bfm{a}_n) \ge
o(n)\}.
\end{equation}
These results also follow from the papers \cite{Bur79,Bur82} (where that problem
was considered in Hilbert and Banach spaces).
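As a simple illustration of \eqref{example2d} (our own example, not from the cited
papers), take $\bfm{b}_n=c\,\bfm{a}_n$ for a constant $c$. Then
$$
(\bfm{a}_n,\bfm{b}_n-\bfm{a}_n)=(c-1)\|\bfm{a}_n\|^2,
$$
so if $\|\bfm{a}_n\|^2$ grows linearly in $n$, every $c\ge 1$ gives a mean belonging
to $\mathcal{F}_1(\bfm{a}_n,\bfm{I}_n)$, whereas any fixed $c<1$ does not: enlarging
the mean in its own direction keeps the pair in the maximal set, while shrinking it
does not.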
\subsection{Known mean $\bfm{a}_n$ and unknown covariance matrix $\bfm{M}_{\!n}$}
We limit ourselves to the case $\bfm{a}_n=\mathbf{0}_n$. Then the function
$f_{\bfm{a}_n,\bfm{M}_{\!n}}(\bfm{b}_n,\bfm{V}_{\!n})$ from
\eqref{deffa} for $\bfm{a}_n=\bfm{b}_n=\mathbf{0}_n$ takes the form
\begin{equation}\label{example3}
f_{\mathbf{0}_n,\bfm{M}_{\!n}}(\mathbf{0}_n,\bfm{V}_{\!n})=
\frac{|\bfm{M}_{\!n}|}{|\bfm{V}_{\!n}|\cdot
\bigl|\bfm{I}_n+\bfm{V}_{\!n}^{-1}-\bfm{M}_{\!n}^{-1}\bigr|}.
\end{equation}
The corresponding maximal set
$\mathcal{F}_1(\mathbf{0}_n,\bfm{M}_{\!n})=\{\bfm{V}_{\!n}\}$
in that case takes the form (see \eqref{Theor1a})
\begin{equation}\label{example3a}
\mathcal{F}_1(\mathbf{0}_n,\bfm{M}_{\!n})=\bigl\{\bfm{V}_{\!n}:\:
f_{\mathbf{0}_n,\bfm{M}_{\!n}}(\mathbf{0}_n,\bfm{V}_{\!n})\le e^{o(n)}\bigr\}.
\end{equation}
Formulas \eqref{example3}, \eqref{example3a} coincide with the corresponding
results in \cite[Theorem 1]{Bur20a}.
\section*{Appendix. Proof of Lemma 1}
Let $\bfm{\xi}_n$ be a Gaussian random vector with distribution
$\bfm{\xi}_n \sim {\mathcal{N}}(\bfm{0},\bfm{I}_n)$, and let~$\bfm{A}_n$ be a
symmetric $(n \times n)$-matrix with eigenvalues $\{a_i\}$. Consider the
quadratic form $(\bfm{\xi}_n,\bfm{A}_n\bfm{\xi}_n)$. There
exists an orthogonal matrix $\bfm{T}_{\!n}$ such that
$\bfm{T}_{\!n}'\bfm{A}_n\bfm{T}_{\!n}=\bfm{B}_n$, where
$\bfm{B}_n$ is the diagonal matrix with diagonal elements~$\{a_i\}$
\cite[\S\,4.7]{Bellman}. Since $\bfm{T}_{\!n}\bfm{\xi}_n \sim
{\mathcal{N}}(\bfm{0},\bfm{I}_n)$, the quadratic forms
$(\bfm{\xi}_n,\bfm{A}_n\bfm{\xi}_n)$ and ~$(\bfm{\xi}_n,\bfm{B}_n\bfm{\xi}_n)$
have the same distributions. Therefore, by formula \eqref{deGaus2} we have
\begin{equation}\label{Stein1bb}
\ln\frac{p^{}_{\bfm{I}_n}}{p^{}_{\bfm{a}_n,\bfm{M}_{\!n}}} (\bfm{y}_n)
\stackrel{d}{=} \frac{1}{2}\bigl[\ln|\bfm{M}_{\!n}|+(\bfm{a}_n,
\bfm{M}_{\!n}^{-1}\bfm{a}_n)+\eta_n\bigr],
\end{equation}
where
\begin{equation}\label{Stein1bc}
\eta_n= (\bfm{y}_n, (\bfm{M}_{\!n}^{-1}- \bfm{I})\bfm{y}_n) -
2(\bfm{y}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n).
\end{equation}
Introduce the value (see \eqref{defmu0})
\begin{equation}\label{lem11}
\alpha_{\mu}=\P_{\bfm{I}_n}\biggl\{\ln\frac{p^{}_{\bfm{I}_n}}
{p^{}_{\bfm{a}_n,\bfm{M}_{\!n}}}(\bfm{x}_n)\le D(\bfm{I}_n\Mmid
\bfm{a}_n,\bfm{M}_{\!n})-\mu\biggr\}.
\end{equation}
Then by \eqref{Stein1bb}, \eqref{Stein1bc} and \eqref{deGaus3a} we have for
$\alpha_{\mu}$ from \eqref{lem11}
\begin{equation}\label{lem1b}
\begin{aligned}[b]
\alpha_{\mu}&\le \P_{\bfm{\xi}_n}\Biggl\{\Biggl| (\bfm{\xi}_n, (\bfm{M}_{\!n}^{-1}-
\bfm{I})\bfm{\xi}_n) - 2(\bfm{\xi}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n)-
\sum_{i=1}^n\Bigl(\frac{1}{\lambda_i}-1\Bigr)\Biggr|>2\mu\Biggr\}\\ &=
\P_{\bfm{\xi}_n}\Biggl\{\Biggl|\sum_{i=1}^n\Bigl(\frac{1}{\lambda_i}-1\Bigr)
(\xi_i^2-1)
-2(\bfm{\xi}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n)\Bigr|>2\mu\Biggr\}\le P_1+P_2,
\end{aligned}
\end{equation}
where
\begin{equation}\label{lem1c}
\begin{aligned}
P_1&=\P_{\bfm{\xi}_n}\Biggl\{\Biggl|
\sum_{i=1}^n\Bigl(\frac{1}{\lambda_i}-1\Bigr)(\xi_i^2-1)\Biggr|>\mu\Biggr\},\\
P_2&=\P_{\bfm{\xi}_n}\bigl\{\bigl|(\bfm{\xi}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n)\bigl|
>\mu/2\bigr\}.
\end{aligned}
\end{equation}
In order to estimate the value $P_1$ in \eqref{lem1c}, we use the following
result \cite[Ch.~III.5.15]{Petrov}: let $\zeta_1,\ldots,\zeta_n$ be
independent random variables with $\E\zeta_i=0$, $i=1,\ldots,n$. Then for any
$1\le p\le 2$
\begin{equation}\label{BarEs1}
\E\Biggl|\sum_{i=1}^n\zeta_i\Biggr|^p\le 2\sum_{i=1}^n\E|\zeta_i|^p.
\end{equation}
Therefore, applying to $P_1$ the Chebyshev inequality and \eqref{BarEs1}, we get
\begin{equation}\label{lem1d}
\begin{aligned}[b]
P_1&\le\mu^{-p}\E\Biggl|\sum_{i=1}^n
\Bigl(\frac{1}{\lambda_i}-1\Bigr)(\xi_i^2-1)\Biggr|^p\le
2\mu^{-p}\sum_{i=1}^n\Bigl|\frac{1}{\lambda_i}-1\Bigr|^p\E |\xi_i^2-1|^p\\ &\le
2\mu^{-p}\sum_{i=1}^n\Bigl|\frac{1}{\lambda_i}-1\Bigr|^p
\bigl(\E|\xi^2-1|^2\bigr)^{p/2}\le 2 \mu^{-p}6^{p/2}
\sum_{i=1}^n\Bigl|\frac{1}{\lambda_i}-1\Bigr|^p\\ &\le
12\mu^{-p}\sum_{i=1}^n\Bigl|\frac{1}{\lambda_i}-1\Bigr|^p.
\end{aligned}
\end{equation}
In order to estimate the value $P_2$ in \eqref{lem1b}, \eqref{lem1c}, note that
$$
(\bfm{\xi}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n)\sim
\mathcal{N}\bigl(0,\|\bfm{M}_{\!n}^{-1}\bfm{a}_n\|^2\bigr),
$$
and then
$$
(\bfm{\xi}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n) \stackrel{d}{=}
\|\bfm{M}_{\!n}^{-1}\bfm{a}_n\|\xi.
$$
Therefore, using the standard bound
$$
\P(|\xi|\ge z)\le e^{-z^2/2},\quad z\ge 0,
$$
we get (here $\xi \sim \mathcal{N}(0,1)$)
\begin{equation}\label{lem1f}
P_2=\P_{\bfm{\xi}_n}\bigl\{|
(\bfm{\xi}_n,\bfm{M}_{\!n}^{-1}\bfm{a}_n)|>\mu/2\bigr\}\le
e^{-\mu^2/(8\|\bfm{M}_{\!n}^{-1}\bfm{a}_n\|^2)}.
\end{equation}
In order to satisfy the condition $\alpha_{\mu}\le\alpha$ we choose $\mu$ such
that $\max\{P_1,P_2\}\le\alpha/2$. Then, by \eqref{lem1d} and \eqref{lem1f}, it
is sufficient to choose $\mu$ satisfying \eqref{lem1a}.
\section*{FUNDING}
Supported in part by the Russian Foundation for Basic Research, project
no.~19-01-00364.
|
{
"arxiv_id": "2302.13251",
"language": "en",
"timestamp": "2023-02-28T02:14:11",
"url": "https://arxiv.org/abs/2302.13251",
"yymm": "2302"
} | \section{Introduction}
\IEEEPARstart{L}{ow}-dose computed tomography (LDCT) image reconstruction has become a very popular research direction recently. Usually, researchers aim to reconstruct the underlying normal-dose CT (NDCT) images from LDCT images based on the image domain \cite{yin2022unpaired}, the projection domain \cite{yang2022low}, or dual domains \cite{niu2022self}. This is highly beneficial to both patients and radiologists. On the one hand, compared with the NDCT imaging mode, patients, especially children, are exposed to a lower radiation risk under the LDCT imaging mode \cite{bonney2022impact}. On the other hand, by virtue of LDCT image reconstruction techniques, radiologists can rely on reconstructed LDCT images with reduced noise to make clinical decisions. Among various approaches, deep learning (DL)-based LDCT reconstruction methods have recently become dominant \cite{wang2020deep}, due to their impressive capacity for noise suppression and retention of structural information. By utilizing LDCT/NDCT image pairs from a specific anatomical region (\textit{e.g.}, the abdomen or the head), researchers have devoted huge efforts to elaborating powerful reconstruction models from different perspectives, such as
edge preservation \cite{guo2022low} and texture transfer \cite{wang2022texture}. Moreover, unpaired CT reconstruction approaches, as a more challenging setting, have also been developed to tackle the problem of content mismatch between LDCT and NDCT images. To this end, a series of CycleGAN-based methods \cite{gu2021adain,shi2022improved} have been proposed by means of self-supervised and adversarial schemes. More works on DL-based CT reconstruction can be found in the review \cite{wang2020deep}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth, height=7.cm]{figs/fig1.pdf}
\caption{Examples of head and abdomen scans. (a) Abdomen LDCT scans.
(b) Head LDCT scans. (c) Probability density functions of noise distributions (the subtraction of the LDCT image from the NDCT image) in selected ROIs from (a) and (b). (d) Illustration of the dynamic changes of the CT scan protocol (\textit{i.e.}, mAs) in two different patients with abdomen scans.}
\label{cover_fig}
\end{figure}
Usually, most existing DL-based CT reconstruction methods implicitly assume that the training CT data (\textit{a.k.a.} the source domain $D_{S}$) and the testing CT data (\textit{a.k.a.} the target domain $D_{T}$) have a similar noise distribution \cite{guo2022low,wang2022texture,gu2021adain,shi2022improved}. However, the noise distribution between $D_{S}$ and $D_{T}$ may be different due to a mismatch of the CT scan protocol (\textit{e.g.}, kVp and mAs of the CT scan device) and different anatomical regions (\textit{e.g.}, $D_{S}$ and $D_{T}$ from the head and abdomen, respectively). This noise distribution shift between $D_{S}$ and $D_{T}$ results in a deterioration of the reconstruction performance, which has recently been referred to as the \textit{\textbf{robust CT reconstruction problem}} \cite{lee2022unsupervised,li2022noise,huang2022cross}.
Specifically, Lee et al. \cite{lee2022unsupervised} proposed to model
the noise distribution of the target domain via a simple Poisson noise insertion model, such that the pretrained convolutional neural networks (CNNs)-based model on $D_{S}$ can be transferred to the target domain. By leveraging unpaired sinogram data on $D_{T}$, Li et al. \cite{li2022noise} explicitly model the noise distribution
on the target domain using a Gaussian mixture model (GMM) and fine-tune the pretrained CNNs-based model on $D_{S}$ towards the target domain.
However, one limitation of the abovementioned approaches is that directly and explicitly modeling the noise in the target domain with model-based schemes may not provide sufficient capacity to represent complex, realistic noise, as discussed in \cite{lu2021dual,chen2018image}, due to the restriction of human knowledge of noise priors (\textit{e.g.}, the assumption of a Poisson or Gaussian distribution). This further has a negative impact on the noise transfer from the source domain to the target domain. Moreover, we can observe that the dynamic changes (as a perturbation) of the CT scan protocol (\textit{i.e.}, mAs) across different slices of a patient (see Figure \ref{cover_fig}(d)) may cause nonuniform noise distributions in intra-domain CT images (see Figure \ref{cover_fig}(c)), as the tube current modulation is usually turned on to avoid excessive radiation doses \cite{kalra2004techniques}. Under such perturbations, existing deterministic neural network (\textit{e.g.}, CNNs)-based CT reconstruction models show insufficient robustness due to their fixed model weights \cite{antun2020instabilities,tolle2021mean,eaton2018towards}.
To address these problems, we propose to alleviate the noise distribution shift between source and target domains via an implicit noise modeling scheme under a probabilistic framework. Note that we focus on the robust cross-domain (\textit{i.e.}, $D_{S}$ and $D_{T}$ are collected from different anatomical regions) CT reconstruction problem in this paper, as the difference of anatomical regions together with scan protocol-related perturbations leads to more representative and challenging noise distribution shifts (see Figure \ref{cover_fig}(c)). Specifically, we introduce a Bayesian-endowed probabilistic framework into the robust cross-domain CT reconstruction problem, as illustrated in Figure \ref{framework}. By doing so, the perturbations of cross-domain samples can be implicitly accounted for by the distribution over the weights, yielding better robustness. By virtue of this probabilistic framework, we propose to alleviate the noise distribution shift between source and target domains in the latent space and in the image space, respectively. First, we propose to leverage the Bayesian uncertainty of the latent feature endowed by the probabilistic framework to implicitly model the noise distributions of the two domains in the latent space, such that the discrepancy between the noise distributions can be reduced. Second,
by utilizing the residual information (as a proxy of the noise) between LDCT images and estimated NDCT images in the two domains,
an adversarial learning scheme is imposed to alleviate the discrepancy of the noise distributions between source and target domains in the image space.
The contributions of this work can be summarized as follows:
\begin{itemize}
\item A Bayesian-endowed probabilistic model is introduced into the robust cross-domain CT reconstruction problem for better robustness.
\item A novel Bayesian noise uncertainty alignment (BNUA) method is proposed to conduct implicit noise distribution modeling and alignment in the latent space.
\item The discrepancy of noise distribution shifts between $D_{S}$ and $D_{T}$ is reduced by a novel adversarial residual distribution alignment (RDA) scheme in the image space.
\end{itemize}
\section{Methodology}
\subsection{Problem Statement}
Here, the source domain with LDCT/NDCT image pairs can be represented as $D_{S} = \{(\mathbf{x}_{1}^{S},\mathbf{y}_{1}^{S}),\cdots, (\mathbf{x}_{N}^{S},\mathbf{y}_{N}^{S})\}$, where $N$
denotes the number of pairs. The target domain has LDCT images only and can be represented as $D_{T} = \{\mathbf{x}_{1}^{T},\cdots, \mathbf{x}_{L}^T\}$, where $L$ denotes the number of LDCT images. Suppose that
$D_{S}$ and $D_{T}$ are available and collected from different anatomical regions with diverse kVp and mAs (as scan protocol-related
perturbations), \textit{e.g.}, $D_{S}$ is the head and $D_{T}$ the abdomen, or vice versa. There exist noise distribution shifts between $D_{S}$ and $D_{T}$, as shown in Figure \ref{cover_fig}. By using paired LDCT/NDCT images on $D_{S}$ and LDCT images on $D_{T}$, the objective of robust cross-domain CT reconstruction is to learn a robust model
$F$ that generalizes well to the target domain $D_{T}$,
such that high-quality reconstructed NDCT images $\mathbf{\hat{y}}^{T}= F(\mathbf{x^{T}})$ on the target domain can be obtained.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.95\textwidth]{figs/framework.pdf}
\caption{The overall framework of the proposed method. The CT reconstruction model is disentangled into a BNN-based encoder for
feature extraction and a BNN-based decoder for content reconstruction from the extracted features. By conducting MC sampling, Bayesian noise uncertainty alignment can be imposed to alleviate the noise distribution shift in the latent space. The residual distribution alignment aligns the noise distributions of the two domains in the image space using the residual information.}
\label{framework}
\end{figure*}
\subsection{Robust CT reconstruction under a probabilistic framework}
While widely-used deterministic models (\textit{e.g.}, CNNs \cite{shan2019competitive} and self-attention modules \cite{li2020sacnn}) have achieved promising performance on the CT reconstruction problem, they show insufficient robustness to input samples that suffer from diverse perturbations (\textit{e.g.}, the dynamic changes of kVp and mAs), even for the intra-domain CT reconstruction problem (\textit{i.e.}, when both source and target domains are from the same anatomical region), as discussed in \cite{tolle2021mean,antun2020instabilities}. The underlying reason is that these deterministic models cannot directly incorporate such perturbations into the prediction as uncertainty, owing to their fixed model weights. As such, they may be over-confident and produce unstable predictions under perturbations. To address this issue, recent work \cite{tolle2021mean} utilized probabilistic models to render more robust performance for the intra-domain CT reconstruction problem. These scan protocol-related perturbations are undoubtedly more prominent in the cross-domain CT reconstruction scenario, as the device parameters (\textit{e.g.}, kVp and mAs) are modulated dynamically for different anatomical regions \cite{inoue2018ct}.
In this paper, we propose to introduce a probabilistic framework into the robust cross-domain CT reconstruction problem. By doing so, the perturbations of cross-domain samples can be implicitly accounted for, as uncertainty, by the distribution over the weights. Specifically, compared with deterministic models, probabilistic models learn a distribution over the model weights $\mathbf{w}$ as follows,
\begin{eqnarray}
\label{bayesian}
\mathbf{w} &=& \arg\max_{\mathbf{w}}\log P(\mathbf{w}|D) \nonumber\\ &=&\arg\max_{\mathbf{w}}[\log P(D|\mathbf{w})+\log P(\mathbf{w})],
\label{map}
\end{eqnarray}
where $D$ denotes the training data. The first term is the complexity term, and the second term is a prior distribution of the weight that can play a role of implicit regularization for better robustness.
Here, Bayesian neural network (BNN) with variational inference \cite{hoffman2013stochastic}, a representative probabilistic model, is introduced to approximate the posterior distribution $P(\mathbf{w}|D)$ of the weight, \textit{i.e.},
\begin{equation}
\label{variational_inference}
\theta^{*} = \arg \min_{\theta} \operatorname{KL}[q(\mathbf{w}|\theta)||p(\mathbf{w}|D)].
\end{equation}
The rationale of variational inference in Eq. \ref{variational_inference} is to find the parameters $\theta$ of a distribution over the weight in order to minimize the Kullback-Leibler (KL) divergence with its true posterior distribution over the weight. Usually, Eq. \ref{variational_inference} can be reformulated as
\begin{eqnarray}
\theta^{*} &=&\arg\min_{\theta}\operatorname{KL[q(\mathbf{w}|\theta)||p(\mathbf{w})]} \nonumber \\ &-&\mathbb{E}_{\mathbf{w}\sim p(\mathbf{w}|\theta)}[\log P(D|\mathbf{w})],
\end{eqnarray}
where the first term can balance the model complexity with its prior distribution. Moreover, the second term acts an analogous way to the first term in Eq. (\ref{map}), which aims to fit the training data well as the same as deterministic models (as a reconstrution loss). This objective is so-called evidence lower bound (ELBO) loss.
By incorporating the re-parameter trick \cite{tian2020learning} and stochastic gradient descent method, the variational parameter $\theta$ can be solved flexibly.
Different from most existing encoder-decoder-based CT reconstruction models \cite{shan20183}, we propose to disentangle a CT reconstruction model into two decoupled probabilistic components (as shown in Figure \ref{framework}), including a BNN-based encoder (for the feature extraction) and
a BNN-based decoder (for the content reconstruction from extracted features), respectively, \textit{i.e.}, $F = E\circ G$.
In the next sections, we will describe how to address the noise distribution shifts between $D_{S}$ and $D_{T}$ under this probabilistic framework.
\subsection{Bayesian Noise Uncertainty Modeling}
For existing cross-domain classification problems, the differences of the image style, texture, and illumination between source and target domains will result in the \textit{domain shift} problem\cite{li2018domain}, leading to unfavorable testing performance on target domain. To address this problem,
a widely-adopted scheme is to reduce the discrepancy of latent embeddings of two domains by calculating the distance between latent embeddings \cite{wang2018deep}, which can derive robust classification results \cite{kouw2019review}.
Motivated by this strategy, an intuitive idea for robust CT reconstruction is to reduce the noise distribution discrepancy between source and target domains in the latent space, as the \textit{domain shift} of the robust cross-domain CT reconstruction mainly results from the difference of noise distribution. However, the representation of the noise distribution of LDCT images in the latent space is more difficult \cite{chen2020task} as the noise appeared in LDCT images is coupled with its content information.
To tackle this issue, we propose to leverage the Bayesian uncertainty to characterize the noise distribution in the latent space, which can in turn to implicitly reflect the noise distribution in the image space. Before introducing our proposed method, the concepts of \textit{Bayesian uncertainty} and Bayesian uncertainty-induced noise encoding \cite{hafner2018reliable} derived in the label space need to be reviewed. Specifically, by calculating the variance of
different predictions (obtained by multiple Monte Carlo samples) of a same input, Bayesian uncertainty (\textit{i.e.}, the computed variance) can encode the inherent noise on the training data $D$ through the learned distribution of the weight \cite{hafner2018reliable}. More importantly, these testing data, which are not consistent with the inherent noise distribution on training data $D$ during inference, will lead to a high uncertainty \cite{hafner2018reliable}.
Thus, it is reasonable to introduce Bayesian uncertainty from label space into the latent space for latent features (extracted by the BNN-based encoder $E$) to characterize the noise distribution in the latent space. The rationality of this scheme also can be observed from its probabilistic property, \textit{i.e.}, \textit{the posterior distribution $P(\mathbf{w}_{E})$ of the weight over the
parameterized encoder $E$ is a conditional distribution over $P(\mathbf{w}_{E}|D)$ given by training data $D$ as Eq. \ref{bayesian}. Naturally, Bayesian uncertainty of latent features, \textit{i.e.}, the probabilistic embeddings, $P(\mathbf{z}|\mathbf{x},\mathbf{w}_{E})$, can implicitly encode the noise on $D$. } Our empirical analysis in the experiment section \ref{testing_bnua} also shows the rationality of the Bayesian uncertainty.
Furthermore, we can derive that the encoder $E$ trained on the source domain $D_{S}$ will inherently encode the noise on LDCT images $\mathbf{x}^{S}$ of the source domain $D_{S}$, due to its conditional distribution property. When there exists noise distribution shifts between $D_{S}$
and $D_{T}$, the encoder $E$ trained on source domain $D_{S}$ will not be generalizable enough to extract the latent features for the LDCT images $\mathbf{x}^{T}$ on target domain $D_{T}$. As a result, it is indeed necessary to reduce the noise distribution discrepancy between source and target domains by minimizing Bayesian uncertainty of the two domains.
By doing so, a domain-invariant encoder
can be obtained, which means that the latent features $\mathbf{z}$ extracted by this domain-invariant encoder can be fed into the decoder $G$ safely and unbiasedly, leading to better robust cross-domain CT reconstruction results.
Here, how to reduce the noise distribution discrepancy in the latent space via Bayesian uncertainty will be described in detail.
Specifically, by feeding two batches of LDCT samples from source and target domains, \textit{i.e.}, $\{\mathbf{x}_{i}^{S}\}_{i=1}^{B}$ and $\{\mathbf{x}_{i}^{T}\}_{i=1}^{B}$ (where $\mathbf{x}_{i}^{S(T)} \in \mathbb{R}^{W\times H \times 1}$ denotes a LDCT sample with spatial size of $W\times H$ and the channel number of 1. $B$ denotes the number of images in a mini-batch), into the probabilistic encoder $E$, the output is a probabilistic embedding for each sample, \textit{i.e.}, $P(\mathbf{z}_{i}^{S(T)}|\mathbf{x}_{i}^{S(T)}) = P(\mathbf{z}_{i}^{S(T)}|\mathbf{x}_{i}^{S(T)},\mathbf{w}_{E})$, where $\mathbf{w}_{E}\sim P(\mathbf{w}_{E})$. The predictive distribution
of $P(\mathbf{z}^{S(T)}|\mathbf{x}^{S(T)})$ can be unbiased approximation using Monte
Carlo (MC) estimators with $M$ stochastic sampling operations over the $\mathbf{w}_{E}$, \textit{i.e.},
\begin{equation}
\label{mc}
\mathbf{z}_{i}^{S(T)} = \mathbb{E}[P(\mathbf{z}_{i}^{S(T)}|\mathbf{x}_{i}^{S(T)},\mathbf{w}_{E})] = \frac{1}{M}\sum_{j=1}^{M}E_{\mathbf{w}_{E}^{j}}(\mathbf{x}_{i}^{S(T)}),
\end{equation}
where $E_{\mathbf{w}_{E}}(\cdot)$ denotes the parameterized probabilistic encoder, and $\mathbf{z}_{i}^{S(T)} \in \mathbb{R}^{W^{'}\times H^{'}\times C}$ with $C$ feature channels.
By imposing the virtue of probabilistic framework in Eq. \ref{mc}, we can derive different Bayesian sampling embeddings of a same input $\mathbf{x}_{i}$ to render Bayesian uncertainty estimation, \textit{i.e.},$
\mathbf{Z}_{i}^{S(T)} = \{\mathbf{z}_{i,j}^{S(T)}\}_{j=1}^{M}$,
where the subscript $j$ denotes the index of MC sampling and
$\mathbf{Z}_{i}^{S(T)} \in \mathbb{R}^{M\times W^{'}\times H^{'}\times C}$.
Here, it is necessary to attain more compact latent feature representation for Bayesian uncertainty estimation, as the information on spatial dimensions $H^{'}\times W^{'}$ are redundant as discussed by \cite{hu2018squeeze}. Usually, the channel-wise statistics can reflect more expressive and representative latent features by shrinking the spatial dimensions of latent embeddings, which is observed by previous lectures \cite{yang2009linear,shen2015multi}.
By doing so, we utilize the squeeze module in \cite{hu2018squeeze} to conduct global information embedding via global average pooling. As such, we can derive a more compact and representative latent features for Bayesian uncertainty estimation in latent space. Specifically, the global average pooling can be imposed for each channel element of each sampling embedding, \textit{i.e.}, $\mathbf{z'}^{S(T)} \in \mathbb{R}^{W^{'}\times H^{'}}$ as following
\begin{equation}
\label{se}
u^{S(T)}= F_{sq}(\mathbf{z'}^{S(T)}) = \frac{1}{W^{'}\times H^{'}}\sum_{t=1}^{W^{'}}\sum_{q=1}^{H^{'}}z_{t,q}^{S(T)},
\end{equation}
where $F_{sq}(\cdot)$ denotes the global average pooling. Finally, the compact latent feature representation can be calculated as $\mathbf{U}_{i}^{S(T)} = F_{sq}(\mathbf{Z}_{i}^{S(T)})$, where $\mathbf{U}_{i}^{S(T)} \in \mathbb{R}^{M\times C}$.
More importantly, we aim to minimize the Bayesian uncertainty discrepancy of obtained latent features $\mathbf{U}_{i}^{S(T)}$ between source and target domains such that the noise distribution between the two domains can be reduced implicitly.
To this end,
we propose to leverage the covariance matrix of latent features, as second-order statistics of latent features, to quantify the uncertainty among features.
Specifically, we propose to calculate the covariance matrix over different Bayesian sampling embeddings of a same input, \textit{i.e.},$\mathbf{U}_{i}^{S(T)}$ as following
\begin{eqnarray}
\label{covariance}
\mathbf{C}_{i}^{S} &=&\frac{1}{M-1}\sum_{j=1}^{M} (\mathbf{u}_{i,j}^{S}-\bm{\mu}_{i}^{S})^{t}(\mathbf{u}_{i,j}^{S}-\bm{\mu}_{i}^{S}), \nonumber \\
\mathbf{C}_{i}^{T} &=&\frac{1}{M-1}\sum_{j=1}^{M} (\mathbf{u}_{i,j}^{S}-\bm{\mu}_{i}^{T})^{t}(\mathbf{u}_{i,j}^{T}-\bm{\mu}_{i}^{T}),
\end{eqnarray}
where $\bm{\mu}_{i}^{S}$ and $\bm{\mu}_{i}^{T}$ denote the mean of the $\mathbf{U}_{i}^{S}$ and $\mathbf{U}_{i}^{T}$, respectively. $\mathbf{u}_{i,j}^{S}$ and $\mathbf{u}_{i,j}^{T}$ represent the $j$-th row of the $\mathbf{U}_{i}^{S}$ and $\mathbf{U}_{i}^{T}$. The size of $\mathbf{C}_{i}^{S}$ and $\mathbf{C}_{i}^{T}$ is $C \times C$.
Note that the calculation in Eq. \ref{covariance} is different from existing covariance matrix alignment of latent features (\textit{e.g.}, CORAL\cite{sun2016return}) in cross-domain classification problems, as the covariance matrix in Eq. \ref{covariance} is derived from the different Bayesian sampling embeddings of a same sample rather than different samples in the source (or target) domain, which means that our proposed method can ignore the interference of content information (as the content information are same for different sampling embeddings of a same input), and then focus more on the uncertainty estimate of latent features.
Finally, the Bayesian uncertainty discrepancy as a $\mathcal{L}_{BNUA}$ objective in a batch of samples between source and target domains can be minimized as below,
\begin{equation}
\label{bnua}
\mathcal{L}_{BNUA} = \frac{1}{B\times B}\sum_{i=1}^{B}\sum_{j=1}^{B}\| \mathbf{C}_{i}^{S} - \mathbf{C}_{j}^{T}\|_{F}^{2},
\end{equation}
where $\|\cdot\|_{F}$ denotes the Frobenius norm. In this paper, the objective $\mathcal{L}_{BNUA}$ is called as \textit{\textbf{Bayesian Noise Uncertainty Alignment}} (BNUA) loss due to its Bayesian property and
implicit noise modeling via uncertainty estimation. Our proposed Bayesian noise uncertainty alignment has several advantages as following. First, it is an unsupervised manner as the computation in Eq. \ref{bnua} does not refer to any NDCT images on the target domain $D_{T}$. Second, it is a \textit{\textbf{parameter-free}} manner, as there are no additional parameters in the BNUA module need to be learned. Instead, existing noise modeling methods (\textit{e.g.}, GMM) need to solve extra parameters (\textit{e.g.}, mixing coefficients via EM algorithm). Third, compared with existing explicit noise distribution representation methods on image domain, the proposed implicit noise modeling in latent space via Bayesian uncertainty may be more powerful.
\subsection{Residual Distribution Alignment via Adversarial Training}
\label{distribution}
In the previous section, we figure out how to conduct the implicit noise distribution alignment via minimizing Bayesian uncertainty discrepancy in the latent space. Then, the encoder may be regularized to render the domain-invariant feature embeddings between source and target domains. However, it is also important
to reduce the discrepancy of noise distribution between source and target domains in the image space, as this discrepancy may affect the quality of reconstructed CT images (\textit{i.e.}, estimated NDCT images)
directly.
How to characterize the noise distribution in the image space is an urgent issue. Here, we can first recall the widely-used LDCT image degraded model, \textit{i.e.}, $\mathbf{x} = \mathbf{y} + \mathbf{n}$, where $\mathbf{x}$, $\mathbf{y}$ and $\mathbf{n}$ denote the LDCT image, the NDCT image and the noise in the image space, respectively. According to the degraded model, we can derive a simple yet effective noise estimation method through the residual information (as a proxy of the noise), \textit{i.e.}, $
\hat{\mathbf{n}} = \mathbf{x} - \hat{\mathbf{y}}$,
where $\hat{\mathbf{y}} = F(\mathbf{x})$ denotes the estimated NDCT images by the reconstruction model. By recalling the residual inforamtion, the noises on source and target domains, \textit{i.e.}, $\hat{\mathbf{n}}^{S}$ and $\hat{\mathbf{n}}^{T}$, can be approximated though their corresponding residual information. Note that the supervised information on the source domain (\textit{e.g.}, $\mathbf{x}^{S}$ and $\mathbf{y}^{S}$) can guarantee the relative usability of estimated noises on the source and target domains. Moreover,
quite a few cross-domain classification methods also leverage the predictions on the target domain (during training) as a complementary information (\textit{a.k.a.} pseudo label) to guide a promising cross-domain performance \cite{wang2020unsupervised,ren2022multi}.
Similarly, the estimated noise on target domain $\hat{\mathbf{n}}^{T}$ benefits from estimated (pseudo) NDCT images $\hat{\mathbf{y}^{T}}$ on target domain during training, \textit{i.e.}, $\hat{\mathbf{n}}^{T} = \mathbf{x}^{T} - \hat{\mathbf{y}}^{T}$. One can observe that this scheme is also decoupled with the content information existed on the LDCT image as much as possible, as the residual information is derived from a same LDCT image $\mathbf{x}$.
According to obtained estimated noises on source and target domains, reducing noise distribution discrepancy between the two domains may be feasible in the image space. It should be noted that the distributions of NDCT images on
the source and target domain may be different depending on their specific anatomical regions, which means $P(\mathbf{y}^{T}|\mathbf{x}^{T}) \neq P(\mathbf{y}^{S}|\mathbf{x}^{S})$. Instead, forcing the noise distribution of source
domain $P(\mathbf{n}^{S})$ to converge towards the noise distribution of target domain $P(\mathbf{n}^{T})$ becomes more reasonable. To this end, we utilize an adversarial learning manner to reduce the noise distribution discrepancy between source and target domains (namely residual distribution alignement (RDA)).
Specifically, a parameterized discriminator $D$ can be utilized to distinguish
whether the estimated noise $\hat{\mathbf{n}}^{S}$ on source domain is similar with the estimated noise $\hat{\mathbf{n}}^{T}$ on target domain.
The significant discrepancy of noise distribution between the two domains will induce a high discriminator loss, which would push the reconstruction model $F$ to reduce the noise distribution gap. Here, we adopt the LSGAN\cite{mao2017least}-based adversarial learning process due to its more stable convergence, which can be formulated as following,
\begin{eqnarray}
\label{adver}
\label{adversarial}
\min_{\theta_{D}} \mathcal{L}(D) &=& \mathbb{E}_{\mathbf{x}^{T}\sim P(\mathbf{x}^{T})}[D[\mathbf{x}^{T} - G(E(\mathbf{x}^{T}))] - 1] \nonumber\\ &+&\mathbb{E}_{\mathbf{x}^{S}\sim P(\mathbf{x}^{S})}[D[\mathbf{x}^{S} - G(E(\mathbf{x}^{S}))]- 0)], \nonumber \\
\min_{\mathbf{w}_{E}, \mathbf{w}_{G}} \mathcal{L}(E,G) &=& \mathbb{E}_{\mathbf{x}^{S}\sim P(\mathbf{x}^{S})}[D[\mathbf{x}^{S} - G(E(\mathbf{x}^{S}))] - 1 ]. \nonumber \\
\end{eqnarray}
One can observe from Eq \ref{adver} that the noise distribution discrepancy between source and target domains will be reduced gradually though optimizing two adversarial losses for the discriminator $D$ and the reconstruction model $F = E\circ G$. Moreover, the reconstruction loss between the estimated NDCT images $\hat{\mathbf{y}}^{S}$ and corresponding ground-truth NDCT images $\mathbf{y}^{S}$ on the source domain would avoid an excessive
noise distribution transfer from source domain to target one. By balancing the adversarial loss and reconstruction loss, our proposed method can minimize the noise distribution discrepancy between domains to obtain a transferable noise distribution. By doing so, a robust cross-domain CT reconstruction model is prone to be learned.
\subsection{Model Training and Overall Framework}
\label{Overall Framework}
Our proposed robust cross-domain CT reconstruction framework
consists of three modules, including a probabilistic encoder, a probabilistic decoder, and a discriminator. For the probabilistic encoder and decoder, we follow the structure of the popular CT reconstruction backbone, \textit{i.e.}, a widely-adopted CPCE model \cite{shan20183}.
To balance the computational consumption with the sufficient probabilistic property, we follow \cite{anonymous2023domain} to only replace the last layer of the encoder and the decoder with its Bayesian neural network version. Specifically, the last layer of the encoder is a Bayesian-based CNN layer with 32 channels and $3 \times 3$ kernel size. The last layer of the decoder is a Bayesian-based deconvolutional layer with 1 channel and $3 \times 3$ kernel size. The structure of the discriminator in the RDA module is the same as that of CycleGAN \cite{zhu2017unpaired}. The overall objective of our proposed framework consists of three components, including a ELBO loss, a Bayesian noise uncertainty alignment loss, and a distributional alignment loss. Here, the reconstruction term of ELBO loss is guided from both image space (\textit{i.e.}, the mean absolute error) and feature space (\textit{i.e.}, the perceptual loss \cite{johnson2016perceptual} $PL(\cdot,\cdot)$). The overall objective can be computed as following,
\begin{eqnarray}
\mathcal{L}&=& \sum_{i}[\|\mathbf{y}_{i}^{S} - \hat{\mathbf{y}}_{i}^{S}\|_{1}+PL(\mathbf{y}_{i}^{S},\hat{\mathbf{y}}_{i}^{S})]+ \operatorname{KL}[q_{\theta}(Q_{\phi}) \nonumber \| p(Q_{\phi})] \nonumber \\ &+& \operatorname{KL}[q_{\theta}(C_{\omega})\|p(C_{\omega})]+ \beta_{1}\mathcal{L}_{BNUA} + \beta_{2}\mathcal{L}_{RDA},
\nonumber \\
\end{eqnarray}
where the variance of log likelihood is set to 1 for simplification computed by the ground-truth $\mathbf{y}_{i}$ and its estimation $\hat{\mathbf{y}}_{i}$. The third and fourth terms aim to learn a variational distribution $q_{\theta}(\cdot)$ to approximate the Bayesian posterior distribution on the weights, while minimizing the KL divergence with its prior distribution $p(\cdot)$. $\beta_{1}$ and $\beta_{2}$ control the influence of the BNUA module and RDA module, respectively.
\begin{table}[!t]
\centering
\caption{The details of used datasets for the validation of the robust cross-domain CT reconstruction task.}
\begin{adjustbox}{width=0.4\textwidth}
\begin{tabular}{ccc}
\hline
\textit{\textbf{Dataset}} & \textit{\textbf{Domain A} } & \textit{\textbf{Domain B} } \\ \hline
\textbf{Scaning parts} & Abdomen & Head \\
\textbf{kVs} & 100 & 120 \\
\textbf{mAs} & Dynamic & Dynamic \\
\textbf{Resolution} & $512 \times 512$ & $512 \times 512$ \\
\textbf{Training Images} & 3952 (7 patients) & 1476 (49 patients) \\
\textbf{Validation Images} & 850 (1 patient) & 35 (1 patient) \\
\textbf{Testing Images} & 1108 (2 patients) & 131 (4 patients) \\ \hline
\end{tabular}
\end{adjustbox}
\end{table}
\section{Experiments and Analyses}
\subsection{Datasets and Training Protocol}
In this paper, the performance of the proposed robust cross-domain CT reconstruction is validated on public clincal dataset, i.e, the Low Dose CT Image and Projection Data (LDCT-and-Projection-data)\footnote{https://wiki.cancerimagingarchive.net/pages/}, which is released by Mayo Clinic, where \textbf{abdomen} (namely \textit{Domain A}) and \textbf{head} (namely \textit{Domain B}) CT scans are utilized as the cross-domain CT reconstruction task. In the LDCT-and-Projection-data dataset, both normal-dose and simulated low (quarter)-dose CT image are reconstructed using commercial CT system with a filtered back projection method. There are 10 patients' abdomen CT scans, where 7 patients for training, 1 patient for validation, and 2 patient for testing. Moreover, there are 54 patients' head CT scans, where 49 patients for training, 1 patient for validation, and 4 patient for testing.
As shown in Figure 3, the significant noise distribution shifts between the abdomen and head scans can be observed (see the third column of Figure 3 and corresponding probability density function (PDF).
As a result, the robust cross-domain CT reconstruction is a challenging and meaningful task. More details about the used datasets can be found in Table I. For data preprocessing, all CT scans are normalized into $0 \sim 1$ with anatomical-specific window widths (head: 90 and abdomen: 400) and window levels (head: 35 and abdomen: 40). Each of CT scans is randomly split into 8 patches with the size of $64 \time 64$ for training.
Here, $A \rightarrow B$ denotes a robust cross-domain CT reconstruction task, where the abdomen and head CT images as the source and target domains, respectively, and vice versa. Thus, there are two robust cross-domain CT reconstruction tasks.
\subsection{Implement Details}
The proposed robust cross-domain CT reconstruction model is implemented by Pytorch and BayesianTorch\footnote{https://github.com/IntelLabs/bayesian-torch}. We adopt mean-field variational inference (MFVI)\cite{graves2011practical} for the analytical approximation of the Bayesian layer, where the parameters are characterized by fully factorized Gaussian distribution endowed by variational parameters $\mu$ and $\sigma$, \textit{i.e.} $q_{\theta}(w) := \mathcal{N}(w|\mu,\sigma).$ By using BayesianTorch, deterministic neural networks can be transformed easily into their Bayesian versions. For the parameter settings of Bayesian weight priors, empirical Bayes using DNN (MOPED) \cite{krishnan2020specifying} method is used, where the initial perturbation factor $\delta$ for the weight to 0.1. The number of MC sampling during training is set to 10 empirically.
The parameters of the overall model are updated by Adam optimizer with the learning rate as $1\times 10^{-4}$. The number of the mini-batch is set to 32. For each robust cross-domain task, the training set consists of the paired LDCT/NDCT data in the training set of the source domain and the LDCT data in the training set of the target domain. The important hyperparameters of our proposed method, including $\beta_{1}$ and $\beta_{2}$, are selected on the validation set of the source domain. Specifically, $\beta_{1}$ and $\beta_{2}$ are set to 10 and 0.001 for two robust cross-domain tasks, respectively.
Moreover, the robust cross-domain performance of the model is evaluated on the testing set of the target domain.
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figs/sample.pdf}
\caption{Example samples of the used abdomen and head datasets. The noise map of each LDCT image (in the third column) is obtained by the subtraction between the LDCT image and corresponding NDCT image. The display windows for the head scan and the abdomen scan are [-10,80] and [-160,240], respectively.}
\label{different region}
\end{figure}
\subsection{Baseline Methods and Evaluation Metrics}
In this paper, we compare our proposed method with some representative CT reconstruction approaches, including Block-matching and 3D filtering (BM3D) method \cite{dabov2006image}, optimized non-local means (ONLM) method for CT reconstruction \cite{kelm2009optimizing}, Noiser2noise \cite{moran2020noisier2noise}, CycleGAN \cite{zhu2017unpaired}, unsupervised domain adaptation method for LDCT denoising (UDA-LDCT) \cite{lee2022unsupervised}, semantic information alignment-based cross-domain LDCT denoising (SIA-LDCT) method \cite{huang2022cross}. Note that BM3D and ONLM are traditional model-based LDCT image reconstruction method. Noiser2noise only refers to the LDCT images on target domain to conduct unsupervised image denoising. By utilizing the LDCT images on target domain and the NDCT
images on source domain, CycleGAN can learn a transferable style from target domain to source domain \cite{zhu2017unpaired}. Compared with our proposed method, UDA-LDCT and SIA-LDCT aim to address the noise distribution shifts between source and target domains with a similar anatomical region. Note that
we also introduce a MFVI-based Bayesian neural network (MF-BNN)\cite{20221111787255} as a baseline model, which is only trained by paired source domain data. Except for imposing additional $\mathcal{L}_{BNUA}$ and $\mathcal{L}_{RDA}$ modules, the network structure of our proposed method is equal to that of the MF-BNN. The hyperparameters of baseline methods are tuned in a wide range on the validation set of the source domain.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.88\textwidth]{figs/abd_sample_compressed.pdf}
\caption{The visual comparison among different baseline methods for $
\mathcal{A} \rightarrow \mathcal{B}$ robust cross-domain CT reconstruction task. The green box on the left bottom of each image is the zoomed-in view of the selected ROI. The quantitative results of each image are listed in the top right corner. The display window is [-160,240].}
\label{different region}
\end{figure*}
To comprehensively evaluate the performance of different models, we introduce two kinds of evaluation approaches, including image-based evaluation metrics and perception-based evaluation metrics. The former
includes the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) \cite{wang2004image}. The latter includes the gradient magnitude similarity deviation (GMSD) \cite{xue2013gradient} and discrete cosine transform-based sub-bands similarity index (DSS) \cite{zhang2012sr}, which both derive from human visual system (as an efficient perceptual image quality tool).
\begin{table}[!t]
\centering
\caption{The experimental results on the robust cross-domain CT reconstruction task from $\mathcal{A}\rightarrow \mathcal{B}$, \textit{i.e.}, the head CT images as the source domain and the abdomen CT images as the target domain. The average value and standard deviation are reported by running each method with five times. For PSNR and SSIM, the higher, the better. For GMSD and DSS, the lower, the better. }
\setlength{\extrarowheight}{3pt}
\begin{adjustbox}{width=0.5\textwidth}
\begin{tabular}{cccll}
\hline
\textbf{Method} & \textbf{PSNR} $\uparrow$ & \textbf{SSIM} $\uparrow$ & \multicolumn{1}{c}{\textbf{GMSD} $\downarrow$} & \multicolumn{1}{c}{\textbf{DSS} $\downarrow$} \\ \hline \hline
\textbf{MF-BNN} & 19.2601$\pm$0.1769 &0.4892$\pm$0.0121 &
0.1789$\pm$0.0012 & 0.3556$\pm$0.0011 \\ \hline
\textbf{BM3D} & 20.2815$\pm$0.1976 &0.5824$\pm$0.0201 & 0.2001$\pm$0.0101& 0.4618$\pm$0.0009 \\
\textbf{ONLM} & 20.0401$\pm$0.1622 & 0.5741$\pm$0.0321 & 0.1677$\pm$0.0154 & 0.3101$\pm$0.0003 \\
\textbf{Noiser2Noise} & 19.4195$\pm$0.1893 & 0.5412$\pm$0.0211 & 0.1775$\pm$0.0178 & 0.3795$\pm$0.0011 \\
\textbf{CycleGAN} & 19.3777$\pm$0.2001 & 0.5436$\pm$0.0134 & 0.1887$\pm$0.0221 & 0.3863$\pm$0.0010 \\
\textbf{SIA-LDCT} & 19.9717$\pm$0.1799& 0.5011$\pm$0.0982 & 0.1499$\pm$0.0120 & 0.3316$\pm$0.0004\\
\textbf{UDA-LDCT} & 20.2965$\pm$0.1828 & 0.5894$\pm$0.0114 & 0.1693$\pm$0.0203 & 0.3312$\pm$0.0027 \\
\textbf{Ours} & \textbf{22.7201}$\pm$0.1782 & \textbf{0.6137}$\pm$0.0213 & \textbf{0.1269}$\pm$0.0104 & \textbf{0.2519}$\pm$0.0003 \\ \hline
\end{tabular}
\end{adjustbox}
\label{head2abd_table}
\end{table}
\subsection{Experimental Results}
\textit{\textbf{Source: Head, Target: Abdomen.}} First, we utilize the head scans as the source domain and the abdomen scans as the target domain. We can analyze from Figure 3 that the noise distribution shifts between source and target domains mainly result from the mismatch of the noise level (\textit{e.g.}, the variance of the noise distribution). Specifically, the noise level of the head scans is roughly lower than that of the abdomen scans. As a result, the model trained on head scans may not be powerful to remove the noise from the abdomen scans, which is a challenging domain gap.
\begin{figure*}[!h]
\centering
\includegraphics[width=0.85\textwidth]{figs/head_sample_compressed.pdf}
\caption{The visual comparison among different baseline methods for $
\mathcal{B} \rightarrow \mathcal{A}$ robust cross-domain CT reconstruction task. The green and red boxes on the left and right bottom of each image are the zoomed-in view of the selected ROIs. The quantitative results of each image are listed in the top right corner. The display window is [-10,80].}
\label{different region}
\end{figure*}
The quantitative results of different baseline methods are reported in the Table \ref{head2abd_table}. We have some observations as follows:
First, our proposed method outperforms all baseline methods with a clear margin. Second, UDA-LDCT, SIA-LDCT and our proposed method roughly achieve better peroformance (especially for the PSNR, GMSD and
DSS) than Noiser2noiser and CycleGAN, which is reasonable as these cross-domain CT reconstruction methods can leverage extra supervised information from the source domain compared with unsupervised/unpaired
schemes. However, compared with UDA-LDCT and SIA-LDCT, our proposed method not only can alleviate the noise distribution shifts between the source and target domains via the noise distribution alignment in the latent and image space, but also can avoid the negatively transferable effect resulting from the difference of anatomical structure. Third, although BM3D and ONLM can obtain acceptable performance in terms of PSNR and SSIM, the perceptual quantitative results (such as GMSD and DSS) are significantly worse than what our proposed method achieves.
The visual comparison among different baseline approaches is shown in Figure 4. We can observe from the zoomed-in view that our proposed method not only can suppress the noise well (please see the region of low resistance in the top right corner), but also can have a more natural texture to the NDCT images (compared with BM3D, ONLM, Noiser2noise, and CycleGAN). Moreover, our proposed method has a good contrast compared with SIA-LDCT and MF-BNN.
\begin{table}[!t]
\centering
\caption{The experimental results on the robust cross-domain CT reconstruction task from $\mathcal{B}\rightarrow \mathcal{A}$, \textit{i.e.}, the abdomen CT images as the source domain and the head CT images as the target domain. The average value and standard deviation are reported by running each method with five times. For PSNR and SSIM, the higher, the better. For GMSD and DSS, the lower, the better.}
\setlength{\extrarowheight}{3pt}
\begin{adjustbox}{width=0.5\textwidth}
\begin{tabular}{cccll}
\hline
\textbf{Method} & \textbf{PSNR} $\uparrow$ & \textbf{SSIM} $\uparrow$ & \multicolumn{1}{c}{\textbf{GMSD} $\downarrow$} & \multicolumn{1}{c}{\textbf{DSS} $\downarrow$} \\ \hline \hline
\textbf{MF-BNN} & 24.3351$\pm$0.1336 &0.6386$\pm$0.0075 &
0.1439$\pm$0.0026 & 0.2349$\pm$0.0041 \\ \hline
\textbf{BM3D} & 25.7475$\pm$0.1801 & 0.6525$\pm$0.0112 & 0.1534$\pm$0.0092& 0.3530$\pm$0.0009 \\
\textbf{ONLM} & \textbf{26.6574}$\pm$0.3231 & 0.6466$\pm$0.0282 & 0.1501$\pm$0.1231 & 0.3090$\pm$0.0213 \\
\textbf{Noiser2Noise} & 23.8407$\pm$0.1589 & 0.5792$\pm$0.0313 &0.1628$\pm$0.0091 & 0.3132$\pm$0.0233 \\
\textbf{CycleGAN} & 23.7163$\pm$0.4285 & 0.6160$\pm$0.0382 & 0.1537$\pm$0.0093 & 0.3180$\pm$0.0224 \\
\textbf{SIA-LDCT} & 23.0560$\pm$0.1961& 0.6403$\pm$0.0218& 0.1375$\pm$0.0031 & 0.2738$\pm$0.0103\\
\textbf{UDA-LDCT} & 22.4791$\pm$0.3118 & 0.6192$\pm$0.0435& 0.1773$\pm$0.0022 & 0.3421$\pm$0.0226 \\
\textcolor{red}{\textbf{Ours}} & 25.4989$\pm$0.1478 & \textbf{0.6642}$\pm$0.0345 & \textbf{0.1253}$\pm$0.0017 & \textbf{0.1950}$\pm$0.0196 \\ \hline
\end{tabular}
\end{adjustbox}
\label{abd2head_label}
\end{table}
\textit{\textbf{Source: Abdomen, Target: Head.}} The second robust cross-domain CT reconstruction task is from abdomen scans (as the source domain) to head scans (as the target domain). The quantitative results of different baseline methods are reported in the Table \ref{abd2head_label}. some
observations can be noticed as following: First, except for the PSNR, our proposed method outperforms the best performance compared with other baseline methods, in terms of SSIM, GMSD, and DSS. Instead, although ONLM and BM3D achieves the best and the second-best performances for PSNR and SSIM, they do not have competitive perceptual scores (such as GMSD and DSS) compared with our proposed method.
This phenomenon can be reflected by the visualized comparison in Figure 5. Specifically, it seems that BM3D and ONLM have better capacity for noise removal compared with our proposed method (see the zoomed-in green box). However, we can observe that 1) BM3D suffers from the over-smooth problem (see the green circle of the zoomed-in red box), leading to the unnatural texture compared with NDCT image. 2) ONLM losses the tiny structure due to the excessive noise removal (see the green circle of the zoomed-in red box).
Instead, our proposed method achieves better noise suppression compared with other deep learning-based method (see the red arrow of the zoomed-in green box). Moreover, a more natural texture and good structure retaining can be realized compared with traditional model-based approaches.
\begin{table}[!t]
\centering
\setlength{\extrarowheight}{4pt}
\caption{Ablation study on different components on $A \rightarrow B$ robust cross-domain CT reconstruction task.}
\begin{adjustbox}{width=0.5\textwidth}
\begin{tabular}{c|c|ccc|cccc}
\hline
Setting & Deterministic & Bayesian & BNUA & RDA & PSNR $\uparrow$ & SSIM $\uparrow$ & GMSD$\downarrow$ & DSS$\downarrow$ \\ \hline \hline
1 & \checkmark & & & & 18.9901$\pm$0.4311 &0.4501$\pm$0.0202 &
0.1923$\pm$0.0101 & 0.3723$\pm$0.0013 \\
2 & & \checkmark & & & 19.2601$\pm$0.2112 &0.4892$\pm$0.0121 &
0.1789$\pm$0.0012 & 0.3556$\pm$0.0011 \\
3 & & \checkmark & \checkmark & & 21.7201$\pm$0.1988 & 0.6097$\pm$0.0210 & 0.1409$\pm$0.0211 & 0.2909$\pm$0.0011 \\
4 & & \checkmark & & \checkmark & 21.1312$\pm$0.1622 & 0.6001$\pm$0.0112 & 0.1298$\pm$0.0201 & 0.2592$\pm$0.0014 \\
5 & & \checkmark & \checkmark & \checkmark & \textbf{22.7201}$\pm$0.2342 & \textbf{0.6137}$\pm$0.0213 & \textbf{0.1269}$\pm$0.0104 & \textbf{0.2519}$\pm$0.0003 \\ \hline
\end{tabular}
\end{adjustbox}
\label{ablation}
\end{table}
\begin{figure*}[!t]
\centering
\subfigure[]{
\begin{minipage}{0.29\textwidth}
\centering
\centerline{\includegraphics[width=5.7cm,height=3.0cm]{figs/ELBO.pdf}}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.3\textwidth}
\centering
\centerline{\includegraphics[width= 5.9cm,height=3.2cm]{figs/buna_loss.pdf}}
\end{minipage}
}
\subfigure[]{
\begin{minipage}{0.3\textwidth}
\centering
\centerline{\includegraphics[width= 6cm,height=3.2cm]{figs/rda_loss.pdf}}
\end{minipage}
}
\caption{The convergence curves of different losses during the training.}
\label{parameter modification}
\end{figure*}
\subsection{Further analysis}
In this section, we first explore the effectiveness of each component in our proposed framework on robust cross-domain CT reconstruction task.\\
\textit{\textbf{Deterministic Framework v.s. Probabilistic Framework.}} To
validate the effectiveness of the probabilistic framework, we compare the deterministic CPCE
model with our adopted Bayesian version. The quantitative results on the $A\rightarrow B$ task are reported in Table \ref{ablation}. From the first and the second rows of Table \ref{ablation}, we can observe the probabilistic framework outperforms the deterministic framework, especially for the SSIM score, which is reasonable as the probabilistic property of the weights in Bayesian network can act an implicit regularization for better robustness.
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figs/bnua.png}
\caption{The relationship between Bayesian uncertainty discrepancy (between source and target domains) and quantitative results on validation set. The Bayesian uncertainty discrepancy can be computed using Eq. \ref{bnua}. The average value is reported by running each model with 10 times. Please zoom in for better view.}
\label{different_region_bnua}
\end{figure}
\textit{\textbf{Effectiveness of Bayesian Noise Uncertainty Alignment (BNUA).}} \label{testing_bnua}
BNUA module aims to alleviate the noise distribution discrepancy between the source and target domains in the latent space. Here, we can notice from the third
row of Figure \ref{ablation} that the PSNR and SSIM scores are improved significantly via equipping the $\mathcal{L}_{BNUA}$, compared with Bayesian neural network (in the second row, as a baseline). Furthermore, by testing on the validation set, we compute the Bayesian uncertainty discrepancy between source and target domains (as described in Eq. \ref{bnua}) of different models (including the baseline model and the baseline + $\mathcal{L}_{BNUA}$ model). The quantitative results can be found in Figure \ref{different_region_bnua}. As we can see, the baseline model, who does not adopt any noise distribution alignment strategy, suffers from a bigger Bayesian uncertainty discrepancy for both two tasks (see the blue star on the bold black line), which reflects the Bayesian uncertainty discrepancy can implicitly represent the noise distribution gap of two domains in the latent space. Instead, by imposing the $\mathcal{L}_{BNUA}$, the model has a smaller Bayesian uncertainty discrepancy (see the red pentagram on the bold black line), which also contributes to better quantitative results.
\textit{\textbf{Effectiveness of Residual Distribution Alignment (RDA).}} RDA aims to reduce noise distribution discrepancy between source and target domains in the image space through the residual information (as the proxy of noise). By introducing an adversarial manner, we can observe from the fourth row of the Figure \ref{ablation} that the quantitative results (especially for the GMSD and DSS) are improved with a clear margin compared with the baseline method. In the Figure \ref{differen_region_rda}, we plot the probability density functions (PDFs) of selected ROIs in the region of low resistance (which can reflect the noise distribution as much as possible). We can notice that the baseline method and corresponding RDA version can achieve an acceptable transfer from LDCT image to reconstructed image. However, the RDA version (see the red curve) is closer to the NDCT image (see the purple curve), which is reasonable as the discriminator in the RDA scheme can force the model to reduce the noise distribution (using the residual information as a proxy) discrepancy between source and target domains such that a better robust cross-domain CT reconstruction performance can be realized.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth,height=5cm]{figs/noise.pdf}
\caption{The probability density functions (PDFs) of selected ROIs in the region of low resistance.}
\label{differen_region_rda}
\end{figure}
\textit{\textbf{Empirical Convergence Analysis of Different Losses.}} Our proposed method includes the ELBO loss, BNUA loss and RDA loss. Here, we conduct an empirical convergence analysis of different losses. As shown in Figure 6, we can observe that the ELBO loss and RDA loss have a faster convergence speed than the BNUA loss during training.
\section{Conclusion}
In this paper, we address the problem of robust computed tomography (CT) reconstruction issue under a cross-domain scenario. A Bayesian-endowed probabilistic framework is introduced into a robust cross-domain CT reconstruction task. Under this probabilistic framework, we propose to
alleviate the noise distribution shifts between source and target domains via implicit noise modeling schemes in the latent space and image space. Specifically, A novel Bayesian noise uncertainty alignment (BNUA)
method is proposed to conduct implicit noise distribution
modeling and alignment in the latent space. Moreover, the discrepancy of noise distribution shifts between $D_{S}$ and $D_{T}$ is reduced by a novel adversarial residual distribution alignment (RDA) scheme in the image space. Extensive experiments show that our proposed method can achieve a better performance of robust cross-domain CT reconstruction than existing methods.
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13258",
"language": "en",
"timestamp": "2023-02-28T02:14:24",
"url": "https://arxiv.org/abs/2302.13258",
"yymm": "2302"
} | \section{PRELIMINARIES and Related Work}\label{sec-overview}
\subsection{Blockchain and Federated Learning}
By a combination of a mining competition and a consensus process,
nodes in a blockchain network maintain a distributed ledger
in which verified transactions are recorded securely in units known as \emph{blocks}.
When a new block is generated,
it is broadcasted across the network,
and any nodes that receive the block will immediately
cease processing transactions.
Blockchain has the following features:
\begin{itemize}
\item \textbf{Distributed}.
The blockchain consists of distributed computing nodes,
which may have different geographic locations and
communicate over the network to maintain the ledger jointly.
\item \textbf{Decentralization}.
Instead of a single authoritative central entity providing credit,
blockchain relies on the complete autonomy of all nodes.
Nodes use encrypted electronic evidence to gain trust and resolve conflicts
by themselves through a well-designed consensus mechanism,
thus forming a peer-to-peer network.
\item \textbf{Security}.
All nodes record each transaction and
resolve conflicts using the majority approval,
so tampering is complicated when the majority of nodes are honest.
Meanwhile,
other nodes will verify the proof of work and
transactions of the broadcast block,
and the cryptographic approach ensures that
any modification will be detected.
\item \textbf{Transparency}.
The information recorded on the blockchain
is stored in all nodes
and is entirely public,
so any entity can access it.
\item \textbf{Incentives}.
Blockchain rewards miners who successfully generate valid blocks
thus inspiring nodes to contribute and
to ensure secure and continuous operation of the network.
\item \textbf{Asynchrony}.
Each node in the blockchain works independently,
depending on the information received and
the consensus mechanism to change the active status.
For example,
miners do not start mining process simultaneously,
and new transactions do not occur at the same time.
\end{itemize}
FL is a distributed training approach enabling end devices
to independently train their models and pool their
intermediate information on a central server
to deliver global knowledge.
Its purpose is to eliminate \emph{data islands}
~\footnote{Think of data as a vast,
ever-changing ocean,
from which various organizations,
like banks, siphon off information to store it privately.
As a result,
these untouched pieces of information eventually form separate islands.}
and reap the advantages of aggregate modeling.
In the FL process,
at the beginning of each communication round,
clients update their local models with their own data and compute gradients.
These gradients are then uploaded to the central server,
which aggregates them to compute the global update.
The central server then returns the global update to the clients,
who use it to update their local models independently.
This iterative process allows FL to evolve dynamically from one round to another,
resulting in a more accurate and efficient model.
FL has the following features:
\begin{itemize}
\item \textbf{Distributed}.
The clients in federated learning are distributed and
perform their own learning at the end devices of the network.
\item \textbf{Centralization}.
Federated learning relies on an authoritative central entity
to provide credit and collect all local gradients for global updates.
\item \textbf{Privacy}.
The local learning happens in the clients,
and they upload the locally updated gradients.
Data never leave the local nodes.
\item \textbf{Synchrony}.
In general,
all nodes in FL work based on communication rounds and
complete the entire learning process in one round
before starting the next round.
\end{itemize}
\subsection{Blockchained Federated Learning (BFL)}\label{sec-bfl}
\begin{figure}[t]
\centering
\includegraphics[width=0.79\linewidth]{figures/venn.eps}
\caption{Opportunities and challenges of BFL}
\label{fig:venn}
\end{figure}
The opportunities and challenges of integrating Blockchain
and FL are visualized in \Cref{fig:venn}.
The distributed nature of both blockchain and FL offers
a strong foundation for their seamless integration.
Moreover,
the decentralization feature of blockchain
empowers FL with better autonomy.
It is worth noting that
blockchain,
as a data encryption and value-driven system,
always guarantees the security of the data exchange process in the BFL.
However,
BFL design consists of non-trivial challenges as shown in \Cref{fig:venn}.
As an illustration,
the asynchronous characteristic of blockchain necessitates a thorough integration
with the communication rounds mechanism of FL.
Also,
there is a huge trade-off
between the transparency of blockchain and the privacy of FL.
We must carefully determine which kind of data should be public to avoid weakening privacy.
\subsection{Quantum Computer Threats}
In the future, quantum computers will become one of
the primary computing infrastructures
due to their powerful parallel computing capabilities.
In contrast to conventional computers,
quantum computers run quantum bits based on quantum entities
such as photons, electrons, and ions.
Its computational power is several orders of magnitude higher
than that of conventional computers and shows a stunning
increase as the number of quantum bits increases.
The competition for quantum computing has long been underway,
and many tech giants and research institutions have already
started building their infrastructure.
Among them,
IBM and Google have already surpassed 100 quantum bits in 2021-2022,
and plan to surpass 1000 quantum bits within 2023,
and 1 million in 2030, respectively.
The cornerstone of cryptography is mathematical puzzles
that cannot be solved in a short time,
whereas the advent of quantum computers makes it possible
to perform brute force cracking in an acceptable time.
Against this background,
concerns about the security of current cryptographic
systems have been raised.
For example,
widely used cryptographic algorithms such as RCDSA, RSA, and DSA
have been theoretically proven to be impervious to quantum attacks.
As a result,
quantum computers pose a widespread threat to
systems protected by these methods.
For example,
blockchains and edge computing systems
that use RSA for signature authentication.
\subsection{Related Work}
Several notable studies have explored the integration of BFL,
including works cited as
~\cite{pokhrel_federated_2020, Xu2023, awan_poster_2019,majeed_flchain_2019,lu_blockchain_2020,li_blockchain-based_2021}.
To address privacy concerns,
\cite{awan_poster_2019} proposed a variant of the \emph{Paillier} cryptosystem
that supports homomorphic encryption and proxy re-encryption,
while \cite{majeed_flchain_2019} implemented a BFL framework called \emph{FLchain}
that leverages ``channels" to store local model gradients in blockchain blocks.
Components such as \emph{Ethereum} have extended the capabilities of \emph{FLchain},
enabling execution of global updates.
To address centralized trust issues,
\cite{lu_blockchain_2020} incorporated differential privacy
into permissioned blockchains,
and \cite{li_blockchain-based_2021} proposed a BFL framework that
stores the global model and exchanges local updates via blockchain,
effectively eliminating the need for a central server and mitigating privacy risks.
These works have aimed to improve the privacy and security of the vanilla BFL framework.
However, vanilla BFL design still faces several challenges~\cite{pokhrel_federated_2020, Xu2023}.
For example,
the asynchronous nature of blockchain requires careful integration
with the FL communication rounds mechanism and therefore the frameworks proposed in~\cite{pokhrel_federated_2020, Xu2023} are not practicable for MEC. In addition, in our earlier BFL frameworks~\cite{pokhrel_federated_2020, Xu2023},
there is little consideration for evaluating the clients' contributions,
and those that do may raise privacy concerns
~\cite{pokhrel_federated_2020,kim_blockchained_2020}.
As a result,
there is a need for an approach that overcomes these
limitations and offers improved performance and security.
In this paper,
we propose \emph{BFL-MEC},
a novel approach to BFL that features a full asynchronous design,
a fair aggregation scheme, and a contribution-based incentive mechanism.
\emph{BFL-MEC} enhances the security and robustness of BFL for
various MEC applications by addressing key limitations
of the vanilla BFL framework.
By providing an incentive mechanism for clients and implementing
a fair aggregation scheme,
\emph{BFL-MEC} aims to ensure that clients' contributions
are accurately and objectively evaluated.
Overall, \emph{BFL-MEC} offers several advantages over
the vanilla BFL framework and provides a strong foundation
for the integration of blockchain and federated learning.
\section{BFL-MEC Design}\label{sec-modeling}
This section outlines the proposed \emph{BFL-MEC} approach
and provides a detailed algorithm description.
We explain how blockchain and federated learning can
be effectively integrated by taking advantage of their internal workings.
\Cref{tab: notations} presents a summary of the notations used in this paper.
\begin{table}[htbp]
\centering
\caption{Summary of notations}
\begin{tabular}{ll}
\toprule
\multicolumn{2}{c}{Notations used in this paper} \\
\midrule
${C_i}$ & A client in BFL-MEC with index $i$. \\
${S_k}$ & A edge node in \emph{BFL-MEC} with index $k$. \\
$\mathcal D_i$ & The data set held by the client $i$ at a certain moment. \\
$\eta$ & The learning rate of the client's local model. \\
$E$ & The number of training epochs for the client's model. \\
$B$ & The batch size used by the clients. \\
$n$ & The number of clients. \\
$m$ & The number of edge nodes. \\
$\mathcal B$ & Dataset split by batch size. \\
$w$ & The calculated gradient. \\
\bottomrule
\end{tabular
\label{tab: notations
\end{table
\subsection{System Overview}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{figures/newbfl.eps}
\caption{The framework of \emph{BFL-MEC}}
\label{fig:bfl}
\end{figure}
\Cref{fig:bfl} provides a top-level perspective
of the \emph{BFL-MEC} framework,
which incorporates multiple edge nodes instead of a central server
for federated learning.
By doing so,
the framework avoids the possibility of a \emph{single point of failure}.
Also,
they play the role of miners in the blockchain,
jointly maintaining a ledger and mining blocks.
The whole process can be seen as asynchronous communication between
$n$ clients $\{ {C_i}\} _{i = 1}^n$
and a set of $m$ miners
$\{ {S_k}\} _{k = 1}^m$
to handle the blockchain process.
However,
instead of sending transaction information,
the clients send the gradients of their local models to the edge nodes,
who also compute the global gradients and record them on the block chain for sharing.
The client $C_i$ move around the environment and collect new data,
and the resulting dataset ${\mathcal D_i}$ is used to train the local model.
Moreover, they read the global model from the blockchain to update the local model.
\subsection{Post-Quantum Secure Feature}\label{pqc}
It is essential to avoid using local gradients for
subsequent global updates in a federated learning setting
without proper verification.
The risk of gradient attacks launched by malicious
clients, which can easily forge information,
is significant~\cite{nasr_comprehensive_2019}.
To mitigate the risk of gradient attacks in federated learning,
clients often generate digital signatures for
local gradients using their private keys.
Edge nodes then use clients' public keys to verify the signatures,
ensuring that the information has not been tampered with.
However,
The great majority of extant signature algorithms rely on classical cryptography,
such as RSA.
However, as aforementioned,
these schemes are no longer robust to quantum computers.
To this end,
we use post-quantum cryptography (PQC) to ensure that
the identities of both parties are verified and quantum security.
\textbf{Lattice-based Cryptography.}
Cryptography based on lattices is safe because it is impossible
to break with sufficient force in a reasonable amount of time,
not even with quantum computers.
In addition to multivariate cryptography,
the leading possibilities for post-quantum cryptography
also include hash-based cryptography and code-based encryption.
We illustrate in \Cref{alg:pqc} how to provide post-quantum security
features using lattice-based cryptography.
\begin{algorithm}[htbp]
\caption{Lattice-based Digital Signature Scheme}
\label{alg:pqc}
\begin{algorithmic}[1]
\Procedure{KeypairGen}{}
\State $\mathbf{A} \leftarrow R_{q}^{k \times \ell}$
\State $\left(\mathrm{s}_{1}, \mathrm{~s}_{2}\right) \leftarrow S_{\eta}^{\ell} \times S_{\eta}^{k}$
\State $\mathbf{t}:=\mathbf{A} \mathbf{s}_{1}+\mathbf{s}_{2}$ \\
\Return $ \left(p k=(\mathbf{A}, \mathbf{t}), s k=\left(\mathbf{A}, \mathbf{t}, \mathrm{s}_{1}, \mathbf{s}_{2}\right)\right) $
\EndProcedure
\\
\Procedure{PQCSign}{$sk, M$}
\State $ \mathbf{Z}:=\perp $
\While{$ \mathrm{z}:=\perp $}
\State $ \mathbf{y} \leftarrow S_{\gamma_{1}-1}^{\ell} $
\State $ \mathbf{w}_{1}:=\operatorname{HighBits}\left(\mathbf{A y},2\gamma_{2}\right) $
\State $ c \in B_{\tau}:=\mathrm{H}\left(M \| \mathbf{w}_{1}\right) $
\State $ \mathbf{z}:=\mathbf{y}+c s_{1} $
\If{$ \|\mathbf{z}\|_{\infty} \geq \gamma_{1}-\beta $ or $ \| \operatorname{LowBits} \left(\mathbf{A y}-\operatorname{cs}_{2},2\gamma_{2}\right) \|_{\infty} \geq \gamma_{2}-\beta $}
$ \mathrm{z}:=\perp $
\EndIf
\EndWhile
\\
\Return $ \sigma=(\mathbf{z}, c) $
\EndProcedure
\\
\Procedure{PQCVerify}{$ p k, M, \sigma=(\mathbf{z}, c) $}
\State $ \mathbf{w}_{1}^{\prime}:=\operatorname{HighBits}\left(\mathbf{A z}-c \mathbf{t},2\gamma_{2}\right) $ \\
\Return $ \llbracket\|\mathbf{z}\|_{\infty}<\gamma_{1}-\beta \rrbracket $ and $ \llbracket c=\mathrm{H}\left(M \| \mathbf{w}_{1}^{\prime}\right) \rrbracket $
\EndProcedure
\end{algorithmic}
\end{algorithm}
We employ a simplified version of \emph{Dilithium} as shown in \Cref{alg:pqc},
which is explained in the following (see \cite{ducasCRYSTALSDilithiumLatticeBasedDigital2018} for details).
\begin{inparaenum}[i)]
\item \emph{Keypair Generation.}
First,
a $ k \times \ell $ matrix $A$ is generated over
the ring $ R_{q}=\mathbb{Z}_{q}[X] /\left(X^{n}+1\right) $,
each of whose entries is a polynomial in $R_{q}$.
Then we randomly sample the private key vectors $s_1$ and $s_2$,
and therefore compute the second term of the public key as $t=As_1+s_2$.
\item \emph{PQC Sign.} First,
we generates a masking vector of polynomials $y$ with coefficients less than $ \gamma_{1} $.
And $ \gamma_{1} $ is chosen strategically:
it is large enough that the signature does not expose the secret key,
which means the signing process is zero-knowledge,
yet small enough that the signature cannot be easily forged.
Then, the clients compute $A_y$ and set $w_1$ to be
high-order bits of the coefficients in this vector,
where each coefficient $w$ of $A_y$, for example,
can be expressed canonically as $ w=w_{1} \cdot2\gamma_{2}+w_{0} $.
The challenge $c$ is then created as the hash of the local gradients and $w_1$.
Finally, clients can get the signature by computing $z=y+cs_1$.
\item \emph{PQC Verify.} The edge nodes first compute $ \mathbf{w}_{1}^{\prime} $
to be the high-order bits of $Az-ct$,
and then accepts if all the coefficients of $z$ are less than $\gamma_{1}-\beta$
and if $c$ is the hash of the message and $ \mathbf{w}_{1}^{\prime} $.
As $\mathbf{A}\mathbf{z}-c \mathbf{t}=\mathbf{A}\mathbf{y}-c \mathbf{s}_{2}$, we have
\begin{equation}
\label{eq:pqcverify}
\operatorname{HighBits}\left(\mathbf{A y},2\gamma_{2}\right)=\operatorname{HighBits}\left(\mathbf{A y}-c \mathbf{s}_{2},2\gamma_{2}\right).
\end{equation}
A valid signature satisfies
$\|\operatorname{LowBits}\left(\mathbf{A y}-c \mathbf{s}_{2},2\gamma_{2}\right) \|_{\infty}<\gamma_{2}-\beta$,
and the coefficients of $c\mathbf{s}_{2}$ are smaller than $\beta$,
so adding $c\mathbf{s}_{2}$ cannot cause any carries by
pushing a low-order coefficient to magnitude at least $\gamma_{2}$.
Thus \Cref{eq:pqcverify} holds and the signature verifies correctly.
\end{inparaenum}
By using \Cref{alg:pqc} for the communication process between the clients and the edge nodes
and replacing the RSA component of the blockchain,
we can ensure that the whole system is post-quantum secure
and can effectively withstand the threat of quantum computers.
In our design,
every client is allocated a unique private key according to their ID,
and the corresponding public key is kept by the edge nodes.
The gradient information is signed with the client's private key
and verified by the edge nodes using the corresponding public key,
ensuring that it has not been tampered with,
as demonstrated in \Cref{fig:rsa}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{figures/rsa.eps}
\caption{Miners verify transactions through PQC}
\label{fig:rsa}
\end{figure}
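To make this workflow concrete, the following minimal sketch shows how a client could sign a serialized local gradient and how an edge node could verify it. It is an illustration only, not our exact implementation: it assumes the liboqs-python bindings (the \texttt{oqs} package) are installed and that the \texttt{Dilithium2} mechanism is enabled in the local liboqs build (the available mechanism names depend on the installed version).
\begin{verbatim}
# Illustrative sketch (not the exact implementation): a client signs its
# serialized local gradient and an edge node verifies it with Dilithium.
# Assumes the liboqs-python bindings ("oqs") with "Dilithium2" enabled.
import numpy as np
import oqs

ALG = "Dilithium2"   # mechanism name may differ across liboqs versions

# Client side: key generation and signing of the local gradient w^i
client_signer = oqs.Signature(ALG)
pk = client_signer.generate_keypair()            # public key kept by edge nodes
gradient = np.random.randn(1000).astype(np.float32)  # stand-in for w^i
message = gradient.tobytes()
signature = client_signer.sign(message)

# Edge-node side: verification with the client's public key
edge_verifier = oqs.Signature(ALG)
assert edge_verifier.verify(message, signature, pk)
\end{verbatim}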
\textbf{Privacy-Transparency trade-off in BFL.}
Vanilla BFL records every local gradient in the blockchain.
However, in the design of blockchain,
the transaction information is public to everyone.
In this case,
vanilla BFL is effectively a white box for attackers:
malicious nodes can use this information to
perform privacy attacks and easily track the changes
in a client's local gradient to
launch more severe model inversion attacks~\cite{fredrikson_model_2015}.
BFL-MEC alleviates this risk by not storing the
original local gradients on the blockchain.
Edge nodes, in particular,
temporarily hold all unaggregated local gradients in their local cache pools,
while globally aggregated local gradients are removed to free up space.
When a new block is created,
the edge nodes construct a transaction for each unaggregated local gradient
that includes the sender's ID, the receiver's ID,
and the local gradient's signature.
After that,
they group every transaction into a transaction set and include it in the block.
In particular, if a global update happens,
the newly computed global gradient and the reward list form the first transaction in the set.
This reduces the search cost for clients.
\Cref{fig:bcrecord} depicts the structure of a block in our design.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.56\linewidth]{figures/block.eps}
\caption{Privacy preserving block structure}
\label{fig:bcrecord}
\end{figure}
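For illustration, the privacy-preserving block layout described above can be sketched with simple data containers as follows; the field names are hypothetical and only mirror the textual description (sender ID, receiver ID, gradient signature, and an optional first transaction carrying the global gradient and the reward list).
\begin{verbatim}
# Hypothetical sketch of the privacy-preserving block layout: transactions
# carry only identifiers and signatures, never the raw local gradients.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Transaction:
    sender_id: str      # client ID
    receiver_id: str    # edge-node ID
    signature: bytes    # PQC signature of the local gradient

@dataclass
class GlobalUpdateTx:
    reward_list: dict   # {client_id: reward}
    global_gradient: bytes

@dataclass
class Block:
    index: int
    prev_hash: str
    nonce: int
    tx_set: List[Transaction] = field(default_factory=list)
    # present only when a global aggregation happened; placed first in the set
    global_update: Optional[GlobalUpdateTx] = None
\end{verbatim}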
\subsection{Fully Asynchronous Design}
Vanilla BFL design obeys the synchronization assumption~\cite{pokhrel_federated_2020}.
The central server samples a subset of the existing clients
and sends instructions to these clients, which then start local updates.
However, in an edge computing network,
the central server cannot know precisely the current number of clients,
nor can it guarantee that the selected clients are always in a communicable state,
as they are constantly moving.
In the real world,
a fully asynchronous design is more reasonable.
To this end,
we design independent working strategies for clients
and edge nodes, respectively,
and control these two processes through the two tunable parameters $N$ and $\phi$.
\subsubsection{Client update strategy}
Clients keep collecting new data from the environment and check
whether the local dataset size $\left|\mathcal{D}_{i}\right|$ exceeds the threshold $N$.
Once this minimum data-size constraint is satisfied,
the clients start executing the following working strategies.
\textbf{Anchor Jump.}
Before beginning local training,
clients need to read the most recent global gradient from the blockchain.
To avoid duplicate searches,
they employ an anchor to identify the block containing the last global gradient
and then sequentially check the first transaction of every block following this anchor.
If a new global gradient is found,
the index of the block that contains it is saved as the new anchor.
\textbf{Local model update.}
After reading the global gradient ${w_r}$
from the last block of the blockchain,
each client updates their local model accordingly.
The local dataset ${\mathcal D_i}$ is split into batches of size $B$,
and for each epoch $e \in \{ 1,2,3, \ldots ,E\}$, client ${C_i}$ employs
stochastic gradient descent (SGD) to update its local gradient $w^{i}$,
with the loss function $\ell$ and learning rate $\eta$,
as outlined in \Cref{eq3}.
\begin{equation}\label{eq3}
w^{i} \leftarrow w^{i} -
\eta \nabla \ell ( w^{i} ; b )
\end{equation}
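A minimal sketch of this local update loop is given below; it assumes a generic gradient oracle \texttt{grad\_loss} for the loss $\ell$ (a placeholder, not part of our system) and is only meant to make the epoch and batch structure of \Cref{eq3} explicit.
\begin{verbatim}
# Minimal sketch of the local update: E epochs of mini-batch SGD over the
# local dataset D_i with batch size B and learning rate eta.
# "grad_loss(w, batch)" is a placeholder for the gradient of l(w; b).
import numpy as np

def local_update(w, D_i, grad_loss, eta=0.01, E=5, B=10):
    D_i = np.asarray(D_i)
    for _ in range(E):                          # each epoch
        idx = np.random.permutation(len(D_i))
        for start in range(0, len(D_i), B):     # each batch b of size B
            batch = D_i[idx[start:start + B]]
            w = w - eta * grad_loss(w, batch)   # w <- w - eta * grad l(w; b)
    return w
\end{verbatim}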
\textbf{Uploading the gradient for mining.}
After the local model update,
the client ${C_i}$ will
get the updated gradient $w^{i}$,
and upload it to the edge nodes for mining.
${C_i}$ always sends its local gradients $w^{i}$ to the nearest edge node.
However, due to constant movement,
it may lose the connection before the upload is completed.
In the worst case,
it may not be able to connect with an edge node for some time,
yet the data collected from the environment has exceeded the threshold $N$.
In such a case,
the client will perform local model updates normally
while temporarily storing the not uploaded local gradients.
When a connection can be established again,
it computes the average of all temporarily stored local gradients as the new $w^{i}$.
Then it generates a $signature$ for $w^{i}$ using the proposed PQC algorithm and finally
sends the $w^{i}$, $signature$ and its public key $pk$ to the associated edge node.
The whole process of client update strategy is shown in \Cref{alg:clientsupdate}.
\begin{algorithm}[htbp]
\caption{Clients Update}
\label{alg:clientsupdate}
\begin{algorithmic}[1]
\If {$\left| {{\mathcal{D}_i}} \right| > N$}
\State check the $TX_1$ of all blocks following the \textit{Anchor}
\If {new ${w_r}$ in $block_i$}
\State read ${w_r}$ from $block_i$
\State $ \textit{Anchor} \leftarrow index(block_i) $
\EndIf
\State $\mathcal B \leftarrow split\ {\mathcal D_i}\ into\ batches\ of\ size\ B$
\For{$each\ epoch\ i\ from\ 1\ to\ E$}
\For{$each\ batch\ b\ \in\ \mathcal B$}
\State $w^{i} \leftarrow w^{i } - \eta \nabla \ell ( w^ { i } ; b )$
\EndFor
\EndFor
\EndIf
\If{\{$w _ { 1 } ^ { i}, w _ { 2 } ^ { i}, ..., w _ {k} ^ { i}\}$ not uploaded}
\State $w^{ i} \leftarrow \frac{1}{k}\sum\limits^{k} {w_{k}^i}$
\Repeat
\State check connection with the nearest ${S_k}$
\State associate to ${S_k}$
\State $signature \leftarrow \textbf{PQCSIGN}(w^{i}, sk)$
\Until upload $w^i,signature,pk$ to ${S_k}$ finished
\EndIf
\end{algorithmic}
\end{algorithm}
\subsubsection{Edge nodes' update strategy}
Before generating a block,
the edge nodes check the distance from the latest
transaction $\mathcal{T}$ to the previous anchor,
that is,
they check how many unaggregated local gradients have been generated
since the last global aggregation.
If the number reaches the threshold $\phi$,
they compute the global model and assign rewards; otherwise,
they only record the transactions on the blockchain.
We present here the blockchain process that must be carried out.
As for the methods of computing the global model and assigning rewards,
we put them in \Cref{sec-cii,sec-convergenceproof}, respectively.
\textbf{Exchanging Gradients.}
The set of clients $\{ {C_i} \}$ associated with an edge node ${S_k}$
provides the updated gradient set $\{ w^{i} \}$.
In parallel,
each edge node broadcasts its own gradient set.
${S_k}$ checks whether a received transaction already exists
in its current gradient set $\{ w^{i} \}$, and if not,
it appends the new transaction.
Eventually, all edge nodes possess the same gradient set.
To ensure the data has not been tampered with,
edge nodes validate the transactions from other edge nodes
using the proposed PQC algorithm,
as illustrated in \Cref{fig:rsa}.
\textbf{Block Mining and Consensus.}
The mining competition is a continuous process
that involves all edge nodes.
To be more specific,
edge nodes continuously adjust the nonce in the block header and then
evaluate whether the block's hash satisfies the $Target$ by utilizing SHA256.
This entire process can be expressed as depicted in \Cref{eq5},
where $Target_{1}$ is a large constant representing the maximum target (i.e., the lowest mining difficulty).
It's important to note that $Target$ is constant for all miners,
and the mining difficulty is established before the algorithm commences.
Thus, the probability of an edge node earning the right to create blocks
is based on the speed of hash calculation.
\begin{equation}\label{eq5}
H(nonce + Block) < Target = \frac{Target_{1}}{difficulty}
\end{equation}
Should an edge node find the solution to \Cref{eq5} before other edge nodes,
it will promptly assemble the transaction set ${P_{i}}$ into a new block
and disseminate this block to the network.
Upon receipt of the new block, other edge nodes will immediately halt their
current hash calculations and add the new block to their blockchain copies,
subject to the block's validity being confirmed.
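The mining loop of \Cref{eq5} can be sketched as follows; the block serialization and the encoding of the difficulty are simplified assumptions made only for illustration.
\begin{verbatim}
# Simplified proof-of-work sketch: iterate over nonces until
# SHA256(nonce || block), read as an integer, falls below the target.
import hashlib

def mine(block_bytes, target_1, difficulty):
    target = target_1 // difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(nonce.to_bytes(8, "big") + block_bytes).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Example: target_1 is the largest (easiest) target; a higher difficulty
# shrinks the target and increases the expected number of iterations.
nonce = mine(b"serialized-transaction-set", target_1=2**256, difficulty=2**16)
\end{verbatim}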
The whole process of edge nodes update strategy is shown in \Cref{alg:bfl}.
\begin{algorithm}[htbp]
\caption{Edge Nodes Update}
\label{alg:bfl}
\begin{algorithmic}[1]
\State do $proof\ of\ work$
\State $W^{k}\leftarrow \{ w^{i},i = index\ of\ associate\ clients\}$
\For {$w^i \in W^{k}$}
\State $\textbf{PQCVERIFY}(w^i,pk)$
\EndFor
\State broadcast clients updated gradient $W^{k}$
\State receive updated gradient set $W^{v}$ from ${S_v}$
\For {$w \in W^{v}$}
\If {$w \notin W^{k}$}
\State $W^{k}$ append $w$
\EndIf
\EndFor
\If {hash satisfies target}
\State $P_i\leftarrow(sender,receiver,signature)$
\State generate and add $block(\{P_i\})$
\State broadcast
\EndIf
\If {received $bloc{k_i}$}
\State verify $proof\ of\ work$
\If {hash satisfies target}
\State stop current $proof\ of\ work$
\State blockchain add $bloc{k_i}$
\EndIf
\EndIf
\If {$\mathcal{T} - Anchor_{ts} > \phi$}
\State $w_{g} \leftarrow \frac{1}{n}\sum\limits_{i = 1}^n {w^i},w^{i} \in {W^{k}}$ \Comment{Simple Average}
\State $W^{k}$ append $w_{g}$
\State $\textbf{Contribution-based\ Incentive\ Mechanism}(W^{k})$ \label{alg:cii-in-alg:bfl}
\State $w_g \leftarrow \textbf{Fair\ Aggregation}(W^k)$ \Comment{By \Cref{eq:contriAVG}}
\State $TX_1 \leftarrow (reward\ list, w_{g})$
\EndIf
\end{algorithmic}
\end{algorithm}
\subsection{Accounting Client's Contribution}\label{sec-cii}
After edge nodes complete the gradient exchange,
edge node $S_k$ will have the gradient set $W^k$,
which contains all client gradients.
If a global update happens,
the edge nodes first compute a temporary global gradient using simple averaging,
and then append it to the local gradient set $W^k$.
Finally,
the edge nodes start identifying the contributions of clients.
\begin{algorithm}[htbp]
\caption{Client's Contribution Identification Algorithm}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\label{alg:cii}
\begin{algorithmic}[1]
\Require $W^{k}$, $model\ name$, $Strategy$
\State $Group\ List \leftarrow Clustering(model\ name, W^{k})$
\For{$l_i \in Group\ List$}
\If{$w_g \in l_i$}
\For{$w^i \in l_i$}
\State $\theta _{i} \leftarrow Cosine\ Distance(w^i,w_{g})$
\State \textbf{Label} $C_i$ as high contribution
\State \textbf{Append} $\langle C_i, \theta _i/{{\sum\limits_{k = 1}^{\lambda n} {{\theta _k}} }}*base\rangle$ to $reward\ list$
\EndFor
\EndIf
\If{$w_{g} \notin l_i$}
\ForAll{$w^i \in l_i$}
\State \textbf{Label} $C_i$ as low contribution
\EndFor
\EndIf
\EndFor
\State $W^k \leftarrow Strategy(reward\ list, W^{k})$
\Ensure $reward\ list$, $W^k$
\end{algorithmic}
\end{algorithm}
To identify contributions,
we have implemented our method in \Cref{alg:cii},
which is integrated into \Cref{alg:bfl} as
described in \Cref{alg:cii-in-alg:bfl}.
We apply clustering algorithms to $W^k$ to
generate various clusters of gradients,
each cluster representing a different contribution.
It is important to note that any clustering algorithm
that meets the requirements can be employed here;
nonetheless, we have chosen \emph{DBSCAN} as the default
algorithm in our experiments due to its efficiency and simplicity.
Clients belonging to the same cluster as the global gradient
are regarded as having made a significant contribution
and will be rewarded accordingly,
whereas those who are further from the global gradient
are considered to have made a minor contribution
and will follow a predetermined strategy.
There are two main strategies for handling local gradients in our algorithm:
\begin{inparaenum}[i)]
\item keep all gradients;
\item discard low-contributing local gradients
and recalculate the global updates $w_{g}$.
\end{inparaenum}
To determine the contribution of a high-contributing client $C_i$,
we calculate the \emph{cosine} distance $\theta_{i}$ between their local gradient
and the global update, using this as the weight for the
client's contribution to the global update.
To calculate the final reward for a client,
we multiply a $base$ reward by $\theta _i/{{\sum\limits_{k = 1}^{\lambda n} {{\theta _k}} }}$.
We then record these rewards as key-value pairs
$\langle C_i, \theta _i/{{\sum\limits_{k = 1}^{\lambda n} {{\theta _k}} }}*base\rangle$
in a $reward\ list$.
When an edge node generates a new block,
the rewards are distributed according to the reward list
and appended to the block as transactions.
Once blockchain consensus is achieved,
clients receive their rewards.
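A compact sketch of this contribution-identification step is given below; it uses \emph{DBSCAN} and the cosine distance as in our default setting, but the clustering parameters and the flattening of gradients into vectors are illustrative assumptions rather than our exact configuration.
\begin{verbatim}
# Sketch of contribution identification: cluster the flattened local
# gradients together with the temporary global gradient w_g, label the
# clients in w_g's cluster as high contribution, and split a base reward
# proportionally to their cosine-distance weights theta_i.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial.distance import cosine

def identify_contributions(W_k, w_g, base=100, eps=0.5, min_samples=2):
    X = np.vstack([np.asarray(W_k), np.asarray(w_g)[None, :]])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    g_label = labels[-1]                  # cluster containing w_g
    if g_label == -1:                     # w_g treated as noise: no high cluster
        high = []
    else:
        high = [i for i in range(len(W_k)) if labels[i] == g_label]
    theta = {i: cosine(W_k[i], w_g) for i in high}   # cosine distances
    total = sum(theta.values()) or 1.0
    reward_list = {i: base * theta[i] / total for i in high}
    low = [i for i in range(len(W_k)) if i not in high]
    return reward_list, high, low
\end{verbatim}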
The intuition behind \Cref{alg:cii} can be explained as follows.
\begin{itemize}
\item \textbf{Privacy preservation.}
In the vanilla BFL approach,
clients need to provide information about their
data dimensions to determine the rewards they will receive.
This incentivizes clients to cheat in order to obtain more rewards,
and there is no way to verify the actual data set without violating FL's guidelines.
In contrast,
the gradients can provide an intermediate representation that reflects
both the data size and quality.
By using them to perform \Cref{alg:cii},
we can obtain a more objective assessment
that preserves the privacy of the clients.
\item \textbf{Malicious attack resistance.}
One potential threat to the security of the global model is the possibility
of malicious clients uploading fake local gradients.
However,
the clustering algorithm employed in \Cref{alg:cii}
can detect these spurious gradients since they differ
from the genuine ones~\cite{nasr_comprehensive_2019}.
By adopting the discarding strategy,
we can prevent these fake gradients from skewing the global model,
thereby preserving the security of \emph{BFL-MEC}.
\end{itemize}
We will thoroughly evaluate our contribution-based
incentive approach in \Cref{sec-validation}.
\subsection{Fair Aggregation for Model Convergence}\label{sec-convergenceproof}
The optimization problem addressed by \emph{BFL-MEC} is
to minimize the function $F(\mathbf{w})$,
given by the summation of local objective functions $F_{i}(\mathbf{w})$
weighted by $p_i$,
where $p_i$ is the weight of client $i$.
This is represented by the equation
\begin{equation*}
\min\limits_w\left\{{F(\mathbf{w})} \triangleq \sum_{i=1}^{n} p_{i} F_{i}(\mathbf{w})\right\}.
\end{equation*}
For the simple average aggregation,
where $p_{1}=p_{2}=...=p_{i}=\frac{1}{n}$,
the global model is updated as
\begin{equation*}\label{eq:simpleAVG}
w_{g} \leftarrow \frac{1}{n}\sum\limits_{i = 1}^n {w^i}.
\end{equation*}
The method of simple average aggregation treats all clients' gradients
equally and calculates their mean.
However,
this approach fails to account for differences
in the sample sizes across clients,
which can lead to unfairness.
To address this issue,
we use a modified approach for global gradient aggregation, as follows.
\begin{equation}\label{eq:contriAVG}
{w_{g}} \leftarrow \sum\limits_{i = 1}^n {{p_i}} w^i, \text{where}\ {p_i} = \theta _i/{{\sum\limits_{k = 1}^{\lambda n} {{\theta_k}} }}.
\end{equation}
We assign aggregation weights based on the contribution of clients
to prevent model skew and improve accuracy,
which addresses the issue of unequal sample sizes.
This is more practical since each client in the mobile edge computing scenario
has a distinct data distribution and a personalized local loss function.
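The following small sketch illustrates this weighted aggregation, reusing the contribution weights $\theta_i$ computed in \Cref{alg:cii}; it assumes the retained local gradients are stacked as rows of an array and that the weights are positive.
\begin{verbatim}
# Fair aggregation sketch: weight each retained local gradient by its
# normalized contribution weight p_i instead of the uniform weight 1/n.
import numpy as np

def fair_aggregate(W_k, theta):
    # W_k: (n, d) array of retained local gradients
    # theta: length-n array of positive contribution weights
    p = np.asarray(theta, dtype=float)
    p = p / p.sum()
    return (p[:, None] * np.asarray(W_k, dtype=float)).sum(axis=0)
\end{verbatim}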
The stability and convergence dynamics of \emph{BFL-MEC} can still be analyzed,
despite using fair aggregation and an asynchronous design.
This allows us to evaluate the performance of the algorithm
and ensure that it achieves its intended objectives.
In our design,
local updates and gradient uploads can be performed
as soon as the client's local dataset size $\left|\mathcal{D}_i\right|$ exceeds the threshold $N$,
which is also known as the ``activated'' state in synchronous FL.
Therefore,
we first define concurrency,
that is,
the set of clients performing local updates at each step $t$, as $C_t$.
\begin{definition}[Concurrency]
$ \tau_{C}^{(t)} $ is defined as the size of client set
for local update at step $t$, so that $ \tau_{C}^{(t)}=\left|C_{t}\right| $.
In consequence, we can define the maximum concurrency
$ \tau_{C}=\max_{t}\left\{\tau_{C}^{(t)}\right\} $
and average concurrency
$ \bar{\tau}_{C}=\frac{1}{T+1} \sum_{t=0}^{T} \tau_{C}^{(t)} $ as well.
\end{definition}
Also, we define the average delay as follows.
\begin{definition}[Average Delay]
The average delay of a client $i$ is
\begin{equation}
\tau_{a v g}^{i}=\frac{1}{T_{i}}\left(\sum_{t: j_{t}=i} \tau_{t}+\sum_{k} \tau_{k}^{C_{T}, i}\right),
\end{equation}
where $T_i$ is the number of times client $i$ performs local updating
and $\left\{\tau_{k}^{C_{T}, i}\right\}_{k}$ is the set of delays
from gradients of the client $i$ that are left unapplied at the last iteration.
\end{definition}
Obviously,
the convergence of \emph{BFL-MEC} is highly relevant to $ \tau_{C}^{(t)} $.
Moreover,
from \cite{koloskovaSharperConvergenceGuarantees2022},
we can learn an essential relationship between average
concurrency $\bar{\tau}_{C}$ and average delay $\tau_{a v g}$,
that is,
$ \tau_{a v g}=\frac{T+1}{T+\left|C_{T}\right|-1} \bar{\tau}_{C} \stackrel{T>\left|C_{T}\right|}{=} \mathcal{O}\left(\bar{\tau}_{C}\right) $.
In this work, we also employ this observation to help our proof.
For the sake of tractability,
we have adopted the following four widely used assumptions
from the literature~\cite{stich_local_2019,li_convergence_2020,li_federated_2020,dinhFederatedLearningWireless2021}.
\begin{assumption}[$L$-smooth~\cite{stich_local_2019,li_federated_2020,dinhFederatedLearningWireless2021}]\label{ass-smooth}
Let
${F_i}(w) \triangleq \frac{1}{n}\sum\limits_{i = 1}^n \ell
\left( {w;{b_i}} \right)$
and assume ${F_i}$ is $L$-smooth;
then for all $\mathbf{v}$ and $ \mathbf{w}$,
\[ F_{i}(\mathbf{v}) \leq F_{i}(\mathbf{w})+(\mathbf{v}- \mathbf{w})^{T}
\nabla F_{i}(\mathbf{w})+\frac{L}{2}\|\mathbf{v}-\mathbf{w}\|_{2}^{2}. \]
\end{assumption}
\begin{assumption}[$\mu$-strongly~\cite{stich_local_2019,li_federated_2020,dinhFederatedLearningWireless2021}]\label{ass-ustrongly}
Each ${F_i}$ is $\mu$-strongly convex:
for all $ \mathbf{v} $ and $ \mathbf{w}$,
\[F_{i}(\mathbf{v}) \geq F_{i}(\mathbf{w})+(\mathbf{v}- \mathbf{w})^{T}
\nabla F_{i}(\mathbf{w})+\frac{\mu}{2}\|\mathbf{v}-\mathbf{w}\|_{2}^{2}.\]
\end{assumption}
\begin{assumption}[bounded variance~\cite{li_convergence_2020}]\label{ass-gradientbound}
The variance of stochastic gradients in each client is bounded by:
\[ \mathbb{E}\left\|\nabla F_{i}
\left(\mathbf{w}_{r}^{i}, b_i\right)-\nabla F_{i}
\left(\mathbf{w}_{r}^{i}\right)
\right\|^{2} \leq \sigma_{i}^{2} \]
\end{assumption}
\begin{assumption}[bounded stochastic gradient~\cite{li_convergence_2020}]\label{ass-gbound}
The expected squared norm of stochastic gradients
is uniformly bounded,
thus for all $ i=1, \cdots, n $ and all iterations $t$,
we have
\[\mathbb{E}\left\|\nabla F_{i}
\left(\mathbf{w}_{t}^{i}, b_i\right)
\right\|^{2} \leq G^{2}\]
\end{assumption}
\Cref{ass-smooth,ass-ustrongly,ass-gradientbound,ass-gbound}
are crucial for analyzing the convergence of \emph{BFL-MEC},
as established in several prior works.
These assumptions all impose constraints on the underlying loss function,
specifying that it must not vary too quickly (\Cref{ass-smooth})
or too slowly (\Cref{ass-ustrongly}),
while also bounding the magnitude of the gradients (\Cref{ass-gradientbound,ass-gbound}).
By leveraging these assumptions and approximating $F-F^*$ with $w-w^*$,
we can provide a convergence guarantee,
as formalized in \Cref{THEOREM1}.
\begin{theorem}\label{THEOREM1}
Given \Cref{ass-smooth,ass-ustrongly,ass-gradientbound,ass-gbound} hold,
and $\tau_{a v g}=\frac{T+1}{T+\left|C_{T}\right|-1} \bar{\tau}_{C} \stackrel{T>\left|C_{T}\right|}{=} \mathcal{O}\left(\bar{\tau}_{C}\right)$,
we have
\begin{equation}\label{eq-theorem1}
\frac{1}{T+1} \sum_{t=0}^{T}\left\|\nabla F\left(\mathbf{w}^{(t)}\right)\right\|_{2}^{2} \leq \varepsilon
\end{equation}
after $\mathcal{O}\left(\frac{\sigma^{2}}{\varepsilon^{2}}+\frac{\zeta^{2}}{\varepsilon^{2}}+\frac{\tau_{a v g} G}{\varepsilon^{\frac{3}{2}}}+\frac{\tau_{\text {avg }}}{\varepsilon}\right)$
global aggregations.
\end{theorem}
According to \Cref{eq-theorem1},
the convergence of \emph{BFL-MEC} is guaranteed by the fact that
the distance between the actual model $F$ and the optimal model $F^*$
decreases with an increasing number of global aggregations.
Unlike other methods,
we do not assume that the data is identically and independently distributed (IID),
making our method robust against variations in the data distribution.
The proof of \Cref{THEOREM1} is presented in \Cref{proof-theorem1},
and we further support it with experimental results in \Cref{sec-validation}.
\section{Approximate Performance of \emph{BFL-MEC}}\label{sec-summary}
In this section,
we examine each step of BFL-MEC to analyze its performance.
Analyzing the overall system as a whole is difficult due to the asynchronous design,
so we analyze the approximate performance of the client side and the edge-node side separately,
as shown in \Cref{fig:delay}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figures/delay.eps}
\caption{Approximate Performance of \emph{BFL-MEC}}
\label{fig:delay}
\end{figure}
\subsection{Analysis of the client side}
\textbf{Local Model Update.}
The time complexity of calculating \Cref{eq3} is $\mathcal O(E \cdot \frac{|{\mathcal D}_i|}{B})$,
as the update is computed $\frac{|{\mathcal D}_i|}{B}$ times per epoch
with a batch size of $B$.
The delay $\mathcal T_{local}$ is defined as the calculation time of this step.
However,
it's important to note that $E$ and $B$ are
typically set as small constants for all clients,
so the time complexity of \Cref{eq3} is linear and can be expressed
as $\mathcal O(n)$ under normal circumstances.
\textbf{Uploading the gradient for mining.}
The only computation here is the generation of the digital signature,
so the complexity is that of the proposed procedure $PQCSIGN$,
which we write as $\mathcal{O}(PQCSIGN)$.
The time required to generate a digital signature for the
same local gradient differs significantly between schemes,
resulting in an essential source of performance discrepancies.
The time required to generate a digital signature
is denoted as the latency $\mathcal{T}_{sign}$.
\Cref{sec-general} compares the latency of
signing for various schemes in detail.
In addition,
clients are frequently on the move,
and ensuring the quality of the channel can be challenging.
Moreover,
external interferences may cause additional delays.
Given these factors,
we consider communication time as the primary source of delay in this phase,
which we denote as $\mathcal{T}_{up}$.
Consequently,
the overall latency of this stage can be expressed as the sum of the time required for signing,
$\mathcal{T}_{sign}$,
and the communication time, $\mathcal{T}_{up}$,
i.e., $\mathcal{T}_{sign}+\mathcal{T}_{up}$.
In summary,
the overall complexity on the client side is close to $\mathcal{O}(n)$
and the overall latency is $\mathcal{T}_{local}+\mathcal{T}_{sign}+\mathcal{T}_{up}$,
where $\mathcal{T}_{local}$ is related to the computational capability of the client itself,
and $\mathcal{T}_{up}$ is related to the connection speed.
Therefore $\mathcal{T}_{sign}$ becomes the most important piece of the chain,
and in mobile edge computing scenarios,
we must minimize $\mathcal{T}_{sign}$ while ensuring post-quantum security.
\subsection{Analysis of the edge nodes side}
\textbf{Exchanging Gradients.}
This step involves basic communication and data exchange between edge nodes,
and the time complexity is linear $\mathcal O(m)$.
The edge nodes perform three main tasks:
\begin{inparaenum}[i)]
\item broadcasting their local gradient sets,
\item receiving the gradient sets from other edge nodes,
\item adding the local gradients they do not own.
\end{inparaenum}
The time required for all edge nodes to have the same gradient set
is denoted as $\mathcal{T}_{ex}$,
which is generally insignificant as long as the communication
among the edge nodes is good,
as required for practical applications.
\textbf{Accounting Client's Contribution.}
The time complexity of this step depends on the clustering algorithm
used in \Cref{alg:cii} and can be represented as $\mathcal O(clustering)$.
During this step,
edge nodes only need to execute the algorithm,
so the time required depends on the efficiency of the clustering approach.
We denote the total delay of this step as $\mathcal T_{gl}$.
\textbf{Fair Aggregation.}
The edge nodes use \Cref{eq:contriAVG} to compute the final global gradient
with negligible complexity and latency.
\textbf{Block Mining and Consensus.}
The calculation of the hash value in accordance with \Cref{eq5}
involves multiple iterations until the $Target$ value is reached,
and the time required for each iteration is
proportional to the size of the block.
Therefore,
the time complexity of this step can be represented as $\mathcal O(n)$,
and we refer to the total time required for this calculation as $\mathcal T_{bl}$.
This delay can be significant compared to other steps in the overall process.
Based on the above discussion,
the overall complexity on the edge node side is likewise close to $\mathcal{O}(n)$,
while the overall latency is $\mathcal{T}_{ex}+\mathcal{T}_{gl}+\mathcal{T}_{bl}$,
where $\mathcal{T}_{ex}$ can be reduced to a minimum level by
good connectivity between edge nodes (e.g., wired Ethernet).
$\mathcal{T}_{gl}$ is related to the clustering algorithm employed,
which involves a trade-off between latency and accuracy
that should be balanced according to the demand.
$\mathcal{T}_{bl}$ is derived from a typical blockchain component;
thus, an optimal point can be found (e.g., by applying techniques similar to those of~\cite{pokhrel_federated_2020}).
\section{Evaluation and Discussion}\label{sec-validation}
This section details the comprehensive experiments performed
to evaluate the performance of \emph{BFL-MEC} on a real dataset.
We varied the parameters to observe the changes in performance and delay
under various conditions.
Additionally, we present some novel insights,
such as security and cost-effectiveness.
In our experiments, with important insights from~\cite{pokhrel_federated_2020, Xu2023},
we compared the performance of \emph{BFL-MEC} with three baseline methods,
including the Blockchain, \emph{FedAvg}~\cite{mcmahanCommunicationEfficientLearningDeep2017},
and \emph{FedProx}~\cite{li_federated_2020},
on the \emph{MNIST}~\cite{lecun_gradient-based_1998} benchmark dataset.
We evaluated the performance using the average accuracy metric,
which is computed as $\sum\limits_{i = 1}^n {ac{c_i}/n}$,
where $acc_i$ is the verification accuracy of client $C_i$.
We assumed non-IID data distribution and set the default parameters
as $n=100$, $m=2$, $\eta=0.01$, $E=5$, $B=10$, and $base=100$.
\subsection{Performance Impact}\label{sec-delay}
In our experiments,
we set a convergence criteria for the model.
Specifically, we define the model as converged when the change in accuracy is within $0.5\%$
for $5$ global aggregations.
Moreover,
we track the global aggregations $100$ times by default.
\subsubsection{General analysis of latency and performance}\label{sec-general}
With the fully asynchronous design,
the edge nodes and clients are fully autonomous according to the defined policies
so that the system latency depends mainly on
data arrival, cryptographic processes, and data transmission.
Here,
we assume that the data transmission is sound,
which is one of the advantages of edge computing.
Also,
to simplify the problem, we assume that data arrives continuously at each client.
Therefore,
only the additional delay due to the cryptographic process needs to be compared.
The baselines are the vanilla RSA scheme (adopted by mainstream blockchains)
and the NIST PQC candidates FALCON and Rainbow.
The results are shown in \Cref{tab:delay}.
\begin{table}[htbp]
\centering
\caption{Delay Comparison}
\resizebox{\linewidth}{!}{
\begin{tabular}{llccc}
\toprule
\multicolumn{2}{c}{\textbf{Schemes}} & \multicolumn{1}{p{3.375em}}{\textbf{Keygen\newline{}(ms)}} & \multicolumn{1}{p{2.19em}}{\textbf{Sign\newline{}(ms)}} & \multicolumn{1}{p{2.69em}}{\textbf{Verify\newline{}(ms)}} \\
\midrule
Vanilla & RSA (PKCS1 v1.5) & 123 & 248 & 106 \\
\midrule
\multicolumn{1}{l}{\multirow{5}[9]{*}{Post-Quantum\ Safe}} & FALCON 512 & 30 & 108 & 93 \\
\cmidrule{2-5} & FALCON 1024 & 46 & 109 & 108 \\
\cmidrule{2-5} & Rainbow 5 Classic & 4300 & 140 & 154 \\
\cmidrule{2-5} & Rainbow 5 Cyclic & 4620 & 120 & 219 \\
\cmidrule{2-5} & Ours & \textcolor[rgb]{ .929, .49, .192}{15*} & \textcolor[rgb]{ .929, .49, .192}{85*} & \textcolor[rgb]{ .929, .49, .192}{83*} \\
\bottomrule
\end{tabular}
}
\label{tab:delay}
\end{table}
It can be seen that RSA,
as a widely used vanilla solution,
verifies digital signatures at a good speed,
but it takes more time to sign the local gradient.
In post-quantum safe schemes,
compared with RSA,
Falcon has a similar signature verification speed
while generating a key pair and digital signature faster.
Rainbow is faster than RSA only in signing,
and it takes much more time to initialize the key pair,
because Rainbow pursues the smallest signature size.
BFL-MEC is leading in all comparisons.
For generating key pairs,
BFL-MEC is \textbf{8x} faster than RSA and \textbf{2x} faster than Falcon.
For signing the local gradient,
our method is \textbf{3x} faster than RSA
and \textbf{1.5x} faster than Falcon and Rainbow.
For verification signatures,
BFL-MEC also outperforms all baselines.
In conclusion,
with the well-designed cryptographic approach,
BFL-MEC can achieve lower latency while guaranteeing post-quantum safety.
Hence,
BFL-MEC is more competitive for latency-sensitive computing scenarios
such as mobile edge computing.
\subsubsection{Impact of Thresholds}
BFL-MEC follows a fully asynchronous and autonomous design,
where the clients and edge nodes perform the computation independently.
In this context,
$N$ defines when clients perform model updates and upload local gradients,
whereas $\phi$ defines when edge nodes compute the federated model.
These two parameters jointly shape the whole learning process.
Thus,
it is necessary to explore their impacts.
Here,
we set $N \in \{50,75,100\}$ and $\phi \in \{5,10,15,20\}$,
and then explore the impacts of various combinations of $(\phi,N)$ on the average accuracy.
The results are shown as \Cref{fig:influenceofpara,fig:convergancetime}.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figures/influenceofpara.eps}
\caption{Accuracy of different combination}
\label{fig:influenceofpara}
\end{figure}
We can see that when $\phi$ is set to large thresholds (e.g., $\phi=15$ and $\phi=10$),
the accuracy of the federated model increases as $N$ increases,
and this trend becomes more pronounced the larger $\phi$ is,
such as when $\phi=20$.
This is due to the fact that more data are used to update the local model,
and the federated model is computed using more local gradients,
amplifying such a data advantage.
However,
it is worth noting that the convergence of the federated model
can be slower if $\phi$ is set to a significant value,
as when comparing $\phi=15$ with $\phi=5$.
The reason could be that in the case of continuous arrivals,
the clients have to take more time to collect new data from the environment.
Meanwhile,
the slower federated model aggregation hinders local models
from learning global knowledge in time.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\linewidth]{figures/converganetime.eps}
\caption{Convergence time of different combination}
\label{fig:convergancetime}
\end{figure}
Interestingly,
when $\phi$ is set to small thresholds,
such as $\phi=5$ and $\phi=10$,
a larger $N$ does not necessarily result in higher accuracy.
It can be seen that in this case,
$N=75$ and $N=100$ eventually have almost the same accuracy of the federated model,
yet the case of $N=75$ has higher accuracy and faster convergence speed in the early learning process.
This is caused by the fact that
a larger $N$ reduces the rate of local gradient generation
while the federated model is iterated quickly.
However,
it is important to point out that setting $N$ to a minimal value (e.g., $N=50$),
is also considered inappropriate.
In this case,
the federated model has the lowest accuracy and the slowest convergence speed.
The intuition is that a small $N$,
although it enables more local gradients to be uploaded in the same period,
impairs the learning effect due to the small training data size used by each client.
That is,
the negative effect of insufficient data suppresses the
advantage of the fast distribution of global knowledge.
In conclusion,
it is clear that there is a trade-off between $\phi$ and $N$ and
thus the optimal combination $(\phi,N)$ can be found.
In the above experiment,
the optimal combination is $(5,75)$ (see \Cref{fig:influenceofpara,fig:convergancetime}).
To this end, we have the following insight.
\textbf{Insight 1:}
Small $\phi$ leads to faster global knowledge transfer between edge nodes and clients,
which helps to improve the performance of the learning process.
$N$ will affect the number of local gradients generated over the same time,
but small $N$ will impair accuracy.
Thus there is a trade-off.
We can find the optimal combination to obtain the best accuracy and convergence speed.
\subsection{Cost-effectiveness}\label{sec-economy}
In this analysis,
we investigate the impact of employing \Cref{alg:cii}
with the discarding strategy on the accuracy
and convergence rate of \emph{BFL-MEC}.
We use \emph{DBSCAN} as a sample,
which is a density-based clustering method with the following advantages:
\begin{inparaenum}[i)]
\item It can find clusters of any shape without requiring the data set to be convex.
\item Outliers do not significantly affect the clustering results and are instead identified as such.
\item It does not rely on hyperparameters such as initial values,
thus it can directly find the distance difference
between each local update and the global gradient through the clustering results.
\end{inparaenum}
We would like to emphasize that any appropriate clustering algorithm could be applied here,
not only the one used in our implementation.
It is worth noting that \emph{FedProx} also excludes clients to
enhance both the convergence speed and model accuracy.
However,
while \emph{FedProx} removes stragglers to prevent global model skewness,
our approach excludes low-contributing clients based on the results of the clustering algorithm.
To demonstrate the effectiveness of our contribution-based incentive mechanism,
we consider the \emph{FedProx} with a $drop\ percent$ of $0.02$
as a new baseline and compare its performance to \emph{BFL-MEC}.
\begin{figure}[htbp]
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{figures/general-acc.eps}
\caption{Without discarding}
\label{fig:general-acc}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{figures/economy-acc.eps}
\caption{Discarding strategy}
\label{fig:fast-acc}
\end{subfigure}
\caption{\emph{BFL-MEC} is faster without reducing accuracy}
\label{fig:fast}
\end{figure}
Then in \Cref{fig:fast},
we can see the effect of the discarding strategy
on the accuracy of \emph{BFL-MEC}.
From \Cref{fig:general-acc},
we can see that \emph{BFL-MEC} achieves a model performance
almost equivalent to that of \emph{FedAvg}.
However, \emph{FedProx} falls behind in terms of accuracy,
and its accuracy continues to fluctuate even after convergence,
which is due to its use of an inexact solution to accelerate convergence.
In contrast, \emph{BFL-MEC} leverages the discard strategy to
ensure higher accuracy and faster convergence of the model,
as demonstrated in \Cref{fig:fast-acc}
where the \emph{BFL-MEC} line consistently outperforms
the \emph{FedAvg} and original \emph{BFL-MEC} lines,
reaching the convergence point between $250$ and $300$ seconds.
While \emph{FedProx} initially converges better than the other methods,
its accuracy plateaus around $84\%$, which is lower than the others.
The superior performance of \emph{BFL-MEC} can be attributed to the fact
that low-contributing clients are excluded from global aggregation,
thereby reducing noise from low-quality data and effectively
preventing the global model from getting stuck in local optima.
By removing such clients,
\emph{BFL-MEC} improves the accuracy and overall performance of the model.
Therefore,
as discussed above and shown in \Cref{fig:fast},
the discard strategy significantly improves accuracy and reduces
the time required to reach convergence.
Our results have shown that \emph{BFL-MEC} is faster
and more efficient than other FL methods.
\textbf{Insight 2:}
The implementation of the discarding strategy can lead to faster model convergence
and higher accuracy in large-scale scenarios.
\subsection{Security by Design}\label{sec-security}
In the previous section,
we demonstrated that using the discarding strategy in \Cref{alg:cii}
can effectively improve the performance of \emph{BFL-MEC}.
In this section, we evaluate the security of the proposed
algorithm by examining its robustness against malicious clients.
We simulate an attack scenario where some clients intentionally
modify their local gradients to skew the global model.
Specifically,
during each upload of the local gradient,
the malicious clients significantly increase or
decrease the value of the actual gradient in some direction,
expecting to perform a membership inference attack by
analyzing the resulting change in the federated model.
This is a typical privacy attack technique proposed in \cite{nasr_comprehensive_2019}.
We also use \emph{DBSCAN} as an example to identify
variations in the contributions of different clients.
In the experiments,
there are $10$ indexed clients $C_i$, $i \in \{1,2,\ldots,10\}$,
and we observe the time elapsed for the federated model to aggregate ten times.
We consider the following two attack cases:
\begin{inparaenum}[i)]
\item crafty attackers use backdoors or Trojans to control clients
to perform the membership inference attacks described above
and are good at disguising themselves.
That means they may constantly be changing the client used.
\item The attackers are the clients themselves,
who are curious about the data of other clients and therefore
perform the same privacy attacks,
but at the same time,
as participants in the system,
they also want to be rewarded normally.
\end{inparaenum}
To this end,
for the first case,
we randomly designate $3$ clients as malicious;
once a malicious client is detected, it is set to become honest again,
and meanwhile another client becomes malicious,
so the total number of malicious clients remains constant.
Then, we observe the detection rate of malicious clients.
As for the second case,
we specify that clients $C_6$, $C_8$, and $C_9$ are curious about
the data of other participants, and this situation does not change.
Then,
we observe the cumulative rewards obtained by all clients
during federated model aggregation.
The results are as shown in \Cref{fig:safe}.
\begin{figure}[htbp]
\centering
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{figures/attack.eps}
\caption{Attack detection rates}
\label{fig:safe-attack}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\includegraphics[width=\linewidth]{figures/reward.eps}
\caption{Clients' cumulative reward}
\label{fig:safe-reward}
\end{subfigure}
\caption{Security evaluation of \emph{BFL-MEC}: attack detection and reward distribution}
\label{fig:safe}
\end{figure}
From \Cref{fig:safe-attack},
we can see that when the data distribution of clients is IID,
the detection rate of malicious attacks is at a high level from
the very beginning (about 67\%) and eventually stabilizes at close to 80\%.
This implies that, given that the vast majority of clients are honest,
the impact of malicious clients can be clearly detected
as their modified local gradients are significantly
different from the normal ones.
In the case of IID,
such differences are more easily identified.
On the contrary,
if the data distribution of clients is heterogeneous,
the attacker's forged local gradients may successfully escape detection
by the mechanism at the beginning of the learning process.
Nevertheless,
with a few global aggregations,
the detection rate increases quickly and
eventually reaches its best level (about 66\%).
To put it differently,
the detection rate of malicious nodes is observed
to increase with the convergence of the model.
This is because,
as the model converges,
the local gradients become more similar to each other,
and the modified gradients used by malicious clients are more distant from the normal ones.
Thus,
the clustering algorithm can more effectively detect the anomalies.
In the case of IID,
the distribution of data is good,
and normal gradients are more spatially concentrated,
making it easier to detect anomalies from the beginning.
However,
in the case of non-IID,
the detection rate can also be achieved gradually as the model
converges.
These results indicate that \emph{BFL-MEC} can resist malicious attacks effectively,
whether in the case of IID or non-IID data distributions.
From \Cref{fig:safe-reward},
clients identified by the discard strategy as low-contributing are not rewarded,
so their cumulative reward curves appear as flat lines.
We can see that many local gradients of honest clients are also discarded at the beginning,
and only a few honest clients are rewarded.
However,
as long as they remain honest,
all of them will be rewarded after a few global aggregations and
will end up with a significantly higher reward than malicious clients.
Such a result is caused by the non-IID data distribution,
which is consistent with \Cref{fig:safe-attack}.
This demonstrates the effectiveness of the discard strategy.
Although some malicious clients survive the discarding strategy,
the fair aggregation will still play a role as the second guarantee.
In such a case,
the rewards earned by malicious clients will be minimal,
and the entire cumulative reward curve barely rises.
In addition,
the final rewards obtained are different for honest clients,
thus capturing the differences in contributions.
This means that \Cref{alg:cii} achieves the goal of
personalized incentives very well.
In summary,
\emph{BFL-MEC} has incorporated multiple design
aspects to ensure the security and privacy of the system:
\begin{inparaenum}[i)]
\item \textbf{Post-Quantum Security.}
We employ a post-quantum
secure design to sign local gradients
to avoid interception and malicious tampering
during data transmission (see \Cref{fig:rsa}).
\item \textbf{Secure Data Sharing.}
The use of blockchain technology ensures immutability of the data,
making it difficult for attackers to modify or delete the recorded data.
\item \textbf{Black-box attack resistance.}
We adopt \Cref{alg:cii} to uncover the variance in contribution
among clients and remove the low-contributing local gradients,
which could potentially be forged by malicious attackers,
to enhance the system's resilience to attacks.
Fair aggregation as a second guarantee ensures
that malicious clients cannot get high rewards.
\item \textbf{White-box attack resistance.}
To ensure the privacy and security of the data,
the blockchain does not record the original local gradients.
This prevents clients from accessing and
exploiting this sensitive information (see \Cref{fig:bcrecord}).
\end{inparaenum}
Thus,
\emph{BFL-MEC} provides the privacy and security guarantee by design
for the whole system dynamics.
\section{Conclusion} \label{sec-conclusions}
In this paper,
we develop a blockchain-based federated learning framework
for mobile edge computing, called BFL-MEC,
which aims to provide the foundation for future paradigms
in this field such as autonomous vehicles,
mobile crowd sensing, and metaverse.
We propose several mechanisms to address the performance,
incentive, and security challenges faced by vanilla BFL.
First, we propose a fully asynchronous design in which the
client and edge nodes have independent working policies
that can tolerate connection loss due to mobility in the MEC network.
Second, we use a signature scheme based on post-quantum cryptography,
which makes the proposed framework resilient to threats from quantum
computers and reduces the latency of the signature-verification process.
Third,
we propose a contribution-based incentive mechanism that
allows the use of multiple clustering algorithms to discover
differences in client contributions and then assign personalized rewards.
Finally,
we develop a fair aggregation mechanism that computes
the global model based on the weights of clients' contributions,
minimizing the impact of attacks from the clients themselves.
\section{Some general (algebraic) facts}
\label{section-morphisms}
\subsection{Germs and morphisms}
\label{subsection:resadd}
Let $\mathbb{K}$ be an integral domain. We let $\widehat{\mathrm{Diff}}(\mathbb{K},0)$ be the group of formal power series of the form
$$a_1 x + \sum_{n\geq 2} a_n x^n,$$
where $a_n \in \mathbb{K}$ and $a_1 \in \mathbb{K} \setminus \{0\}$ is invertible. The group operation is that of composition of series, which identifies to that of substitution of variables. In the case where $\mathbb{K}$ is a field, it also identifies to the group of formal germs of diffeomorphisms about the origin.
For $\ell \geq 1$, we consider the subgroup $\widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0)$ of $\widehat{\mathrm{Diff}}(\mathbb{K},0)$ formed by the germs that are $\ell$-tangent to the identity, that is, that are of the form
$$x + \sum_{n \geq \ell + 1} a_n x^n.$$
On $\widehat{\mathrm{Diff}}(\mathbb{K},0)$, there is an obvious homomorphism $\Phi_0$ into the multiplicative subgroup of the invertible elements of $\mathbb{K} \setminus \{0\}$ given by
$$\Phi_0 : \sum_{n\geq 1} a_n x^n \mapsto a_1.$$
Similarly, on each subgroup $\widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0)$, there is an obvious homomorphism $\Phi_{\ell}$ into the additive group $\mathbb{K}$, namely,
$$\Phi_\ell : x + \sum_{n \geq \ell + 1} a_n x^n \mapsto a_{\ell + 1} .$$
Actually, $\Phi_{\ell}$ is the first of $\ell$ homomorphisms $\Phi_{\ell,1}, \Phi_{\ell,2}, \ldots, \Phi_{\ell,\ell}$ defined on $\widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0)$ by
$$\Phi_{\ell,i}: x + \sum_{n \geq \ell + 1} a_n x^n \mapsto a_{\ell+i}$$
(checking that these are homomorphisms is straightforward; see also (\ref{sale}) below).
Less trivially, there is another homomorphism from $\widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0)$ into $\mathbb{K}$, namely
\begin{equation}\label{def-resad}
\overline{\mathrm{Resad}}_{\ell} \!: x + \sum_{n \geq \ell + 1} a_n x^n \mapsto (\ell + 1) \, a_{\ell + 1}^2 - 2 a_{2\ell + 1}.
\end{equation}
In case $\mathbb{K}$ is a field or, at least, when $2$ has a multiplicative inverse, we let
\begin{equation}\label{eq-resad-i}
\mathrm{Resad}_{\ell} \!: x + \sum_{n \geq \ell + 1} a_n x^n \mapsto \frac{\ell + 1}{2} \, a_{\ell + 1}^2 - a_{2\ell + 1}.
\end{equation}
We call both \, $\overline{\mathrm{Resad}}_{\ell}$ \, and \, $\mathrm{Resad}_{\ell}$ \, the {\em additive residues} at order $\ell + 1$.
Checking that they yield group homomorphisms is straightforward: if
$$f (x) = x + \sum_{n \geq \ell + 1} a_n x^n \quad \mbox{ and } \quad g (x) = x + \sum_{n \geq \ell + 1} a_n' x^n,$$
then $fg(x)$ equals
$$ \left[ x + \sum_{m \geq \ell + 1} a_m' x^m \right] + \sum_{n \geq \ell + 1} a_n \left[ x + \sum_{m \geq \ell + 1} a_m' x^m \right]^n,$$
that is,
$$ x + \sum_{m = \ell + 1}^{2\ell + 1} a_m' x^m + \sum_{n = \ell + 1}^{2\ell + 1} a_n x^n + a_{\ell + 1} (\ell + 1) x^{\ell} a_{\ell + 1}' x^{\ell + 1} + T_{2\ell+2},$$
where $T_{2\ell+2}$ stands for a formal power series in $x$ all of whose terms have order at least $2\ell+2$. Therefore,
\begin{equation}\label{sale}
fg (x) = x + \sum_{n=\ell + 1}^{2 \ell} [a_n+a_n'] x^n + [a_{2\ell + 1} + a'_{2\ell + 1} + (\ell + 1) a_{\ell + 1} a_{\ell + 1}'] x^{2\ell + 1} + T_{2\ell+2},
\end{equation}
and
$$\overline{\mathrm{Resad}}_{\ell} (fg)
= (\ell + 1) \, [a_{\ell + 1} + a_{\ell + 1}']^2 - 2 \, [a_{2\ell + 1} + a'_{2\ell + 1} + (\ell + 1) a_{\ell + 1} a_{\ell + 1}'] =
\overline{\mathrm{Resad}}_{\ell} (f) + \overline{\mathrm{Resad}}_{\ell} (g).
$$
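For concreteness, the additivity of $\overline{\mathrm{Resad}}_{\ell}$ in the simplest case $\ell = 1$, where $\mathrm{Resad}_1(x + a_2 x^2 + a_3 x^3 + \cdots) = a_2^2 - a_3$, can also be verified symbolically. The short computation below, using the \texttt{sympy} library, is included merely as a sanity check of (\ref{sale}); truncating all series at order $3$ suffices, since $\mathrm{Resad}_1$ only involves the coefficients $a_2$ and $a_3$.
\begin{verbatim}
# Sanity check (ell = 1): Resad_1(f g) = Resad_1(f) + Resad_1(g),
# where Resad_1(x + a2 x^2 + a3 x^3 + ...) = a2^2 - a3.
import sympy as sp

x, a2, a3, b2, b3 = sp.symbols('x a2 a3 b2 b3')
f = x + a2*x**2 + a3*x**3
g = x + b2*x**2 + b3*x**3
fg = sp.expand(f.subs(x, g))      # composition f(g(x)); higher orders ignored
c2 = fg.coeff(x, 2)               # = a2 + b2
c3 = fg.coeff(x, 3)               # = a3 + b3 + 2*a2*b2

def resad1(p, q):
    return p**2 - q

assert sp.simplify(resad1(c2, c3) - resad1(a2, a3) - resad1(b2, b3)) == 0
\end{verbatim}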
\noindent{\bf A basis for the cohomology.} In the case where $\mathbb{K}$ is the field of real numbers, Fukui proved in \cite{fukui} that the $\Phi_{\ell,i}$ together with $\mathrm{Resad}_{\ell}$ are the generators of the continuous cohomology group of $\widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0)$. In \cite{babenko}, Babenko and Bogatyi treated the case $\mathbb{K} = \mathbb{Z}$. In particular, they computed the continuous cohomology $H^1 (\widehat{\mathrm{Diff}}_1 (\mathbb{Z},0))$: besides
$$\Phi_{1}: x + \sum_{n \geq 2} a_n x^n \mapsto a_2$$
and
$$\mathrm{Resad}_1: x + \sum_{n \geq 2} a_n x^n \mapsto a_2^2 - a_3,$$
there are two homomorphisms into $\mathbb{Z}_2$, namely
$$x + \sum_{n \geq 2} a_n x^n \mapsto \frac{a_2 \, ( 1 + a_2 )}{2} + a_3 + a_4 \quad (\mbox{mod } 2)$$
and
$$x + \sum_{n \geq 2} a_n x^n \mapsto a_2 a_4 + a_4 + a_6 \quad (\mbox{mod } 2).$$
In \cite{BB}, Bogataya and Bogatyi do similar computations for $\mathbb{K} = \mathbb{Z}_p$, with $p$ a prime number. They show that, for $p \neq 2$, only $\Phi_{1}$ and $\mathrm{Resad}_1$ survive, but for $p=2$, the situation is slightly more complicated. Quite interestingly, it seems to be unknown whether $H^1 ( \widehat{\mathrm{Diff}}_{\ell}(\mathbb{Z},0))$ is finitely generated for values of $\ell$ larger than $1$.
\vspace{0.4cm}
\noindent{\bf Genuine diffeomorphisms.} For $\mathbb{K}$ being the fields of the real, complex or $p$-adic numbers, we let $\mathrm{Diff}^{\omega} (\mathbb{K},0)$ be the group of genuine germs of analytic diffeomorphisms fixing the origin, and $\mathrm{Diff}^{\omega}_{\ell} (\mathbb{K},0)$ the subgroup of those elements that are $\ell$-tangent to the identity. These are subgroups of the corresponding groups $\widehat{\mathrm{Diff}}(\mathbb{K},0)$ and $\widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0)$, so the homomorphisms $\Phi_{\ell,i}$ and $\mathrm{Resad}_{\ell}$ restrict to them.
More interestingly, in the real case, each of the homomorphisms $\Phi_{\ell,i}$ and $\mathrm{Resad}_{\ell}$ above involve only finitely many derivatives at the origin (namely, $\ell + i$ and $2\ell + 1$, respectively). Their definitions hence extend to the subgroup $\mathrm{Diff}^{\infty}_{\ell} (\mathbb{K},0)$ of the group $\mathrm{Diff}^{\infty} (\mathbb{K},0)$ of germs of $C^{\infty}$ diffeomorphisms fixing the origin made of the elements that are $\ell$-tangent to the identity. Actually, they even extend to the larger group of germs of $C^{2\ell + 1}$ diffeomorphisms that are $\ell$-tangent to the identity. In this framework, it is a nice exercise to check the additive property of $\mathrm{Resad}_{\ell}$ just by using the Fa\`a di Bruno formula. We will come back to this point in \S \ref{section-sc}.
Fukui's theorem stated above still holds in $\mathrm{Diff}^{\infty}_+ (\mathbb{R},0)$ (where $+$ stands for orientation-preserving maps).
We do not know whether a complex or $p$-adic version of this is also valid.
\subsection{Res-it-ad}
\label{section-resitad}
Elements in $\widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0) \setminus \widehat{\mathrm{Diff}}_{\ell + 1}(\mathbb{K},0)$ are those whose order of contact to the identity at the origin equals $\ell$.
For simplicity, they will be said to be {\em exactly $\ell$-tangent to the identity}. These correspond to series expansions of the form
$$x + \sum_{n \geq \ell + 1} a_n x^n, \quad \mbox{ with } \,\, a_{\ell + 1} \neq 0.$$
A well-known lemma gives a normal form for these elements.
\begin{lem} \label{lem:raro}
If $\mathbb{K}$ is a zero-characteristic field having a square root of $-1$, then every $f~\in~\widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0) \setminus \widehat{\mathrm{Diff}}_{\ell + 1}(\mathbb{K},0)$ is conjugate in $\widehat{\mathrm{Diff}} (\mathbb{K},0)$ to a (unique) germ of the (normal) form
\begin{equation} \label{most-general-normal-form}
x \mapsto x + x^{\ell + 1} + \mu x^{2\ell + 1}.
\end{equation}
\end{lem}
An explicit argument showing this in the complex case that applies with no modification to the present case appears in \cite{voronin}. The value of $\mu$ is called the {\em residue} of $f$, and denoted $\mathrm{Res} (f)$. As it is very well known (and easy to check), this is invariant under conjugacy. Actually, if $f$ with normal form (\ref{most-general-normal-form}) is conjugate in $\widehat{\mathrm{Diff}}(\mathbb{K},0)$ to a germ of the (reduced) form
$$x + x^{\ell + 1} + \sum_{n \geq 2\ell + 1} a_n x^{n},$$
then necessarily $a_{2\ell + 1} = \mathrm{Res} (f)$.
The residue was introduced as a conjugacy invariant in the complex setting: for a germ of a holomorphic map fixing the origin, one defines
$$\mathrm{R}(f) := \frac{1}{2 \pi i} \int_{\gamma} \frac{dz}{z - f(z)},$$
where $\gamma$ is a small, simple and positively oriented loop around the origin. For germs of the form
$$f (z) = z + z^{\ell + 1} + \sum_{n \geq 2\ell + 1} a_n z^n,$$
one easily checks that $\mathrm{R}(f) = \mathrm{Res} (f) = a_{2\ell+1}$.
The {\em iterative residue} of $f \in \widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0) \setminus \widehat{\mathrm{Diff}}_{\ell + 1}(\mathbb{K},0)$, denoted $\mathrm{Resit} (f)$, is simply the value
$$\frac{\ell + 1}{2} - \mathrm{Res} (f).$$
This satisfies the fundamental relation
\begin{equation}\label{eq:ecalle-resit}
\mathrm{Resit} (f^n) = \frac{\mathrm{Resit}(f)}{|n|}
\end{equation}
for every integer $n \neq 0$. It was introduced by \'Ecalle in the context of germs of holomorphic diffeomorphisms \cite{ecalle}.
A proof of the formula above in this setting (due to \'Ecalle) appears for instance in \cite{milnor}. Below we give an algebraic
proof that works in a much broader context (in particular, it covers the case of germs of diffeomorphisms of the real line with
finite regularity discussed just after Lemma \ref{lem-basic}).
As we will see, it just follows from the additive properties of $\Phi_{\ell}$ and $\mathrm{Resad}_{\ell}$.
Start with a germ of the form
$$f(x) = x + x^{\ell + 1} + \mu x^{2\ell + 1} + \sum_{n \geq 2\ell+2} a_n x^n.$$
Using the homomorphisms $\Phi_{\ell}$ and $\mathrm{Resad}_{\ell}$, we obtain
$$f^n (x) = x + n x^{\ell + 1} + \mu_n x^{2\ell + 1} + T_{2\ell+2},$$
with
$$\frac{(\ell + 1) \, n^2}{2} - \mu_n = \mathrm{Resad}_{\ell} (f^n) = n \, \mathrm{Resad}_{\ell} (f) = n \, \left[ \frac{(\ell + 1)}{2} - \mu \right] = n \, \mathrm{Resit}(f).$$
Assume $n > 0$. If we conjugate by $H_{\lambda_n} \! : x \mapsto \lambda_n x$ with $\lambda_n := \sqrt[\ell]{n}$, we obtain
$$H_{\lambda_n} f^n H_{\lambda_n}^{-1} (x) = x + x^{\ell + 1} + \frac{\mu_n}{n^2} x^{2\ell + 1} + T_{2\ell+2}'.$$
By definition, this implies that
$$\mathrm{Resit} (f^n) = \frac{\ell + 1}{2} - \frac{\mu_n}{n^2}
= \frac{1}{n^2} \left[ \frac{(\ell + 1) \, n^2}{2} - \mu_n \right] =
\frac{ n \, \mathrm{Resit} (f) }{n^2} = \frac{\mathrm{Resit}(f)}{n}.$$
as announced. Computations for $n < 0$ are analogous.
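For the reader who wishes to see the identity above in action, the following sketch (in Python with SymPy; the helper names \texttt{compose} and \texttt{truncate} are ours) composes a germ with itself for $\ell = 1$ and checks both the additivity $\mathrm{Resad}_1(f^3) = 3 \, \mathrm{Resad}_1 (f)$ and the resulting equality $\mathrm{Resit}(f^3) = \mathrm{Resit}(f)/3$, using the coefficient expression $\mathrm{Resad}_1 (f) = a_2^2 - a_3$ recalled in \S \ref{section-sc} below.
\begin{verbatim}
# A symbolic check of Resit(f^n) = Resit(f)/n for l = 1 and n = 3 (a sketch).
import sympy as sp

x, mu = sp.symbols('x mu')
ORDER = 4                                  # work modulo x^4

def truncate(expr):
    return sp.expand(sp.series(expr, x, 0, ORDER).removeO())

def compose(f, g):                         # (f o g)(x), truncated
    return truncate(f.subs(x, g))

f = x + x**2 + mu*x**3                     # normal form: Resit(f) = 1 - mu

fn = x
for _ in range(3):                         # f^3 by repeated composition
    fn = compose(f, fn)

a2, a3 = fn.coeff(x, 2), fn.coeff(x, 3)
print(sp.simplify(a2**2 - a3 - 3*(1 - mu)))          # Resad_1(f^3) - 3 Resad_1(f) = 0
print(sp.simplify((a2**2 - a3)/a2**2 - (1 - mu)/3))  # Resit(f^3) - Resit(f)/3 = 0
\end{verbatim}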
\begin{ex} It follows from (the proof of) Lemma \ref{lem:raro} (see also Lemma \ref{lem-basic} below)
that, for $f \in \widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0) \setminus \widehat{\mathrm{Diff}}_{\ell + 1}(\mathbb{K},0)$ of the form
$$f(x) = x + \sum_{n \geq \ell + 1} a_n \, x^n, \qquad a_{\ell + 1} \neq 0,$$
the value of $\mathrm{Resit} (f)$ is a polynomial function of $a_{\ell + 1},a_{\ell+2}, \ldots, a_{2\ell + 1}$ times some integer power of $a_{\ell + 1}$.
For instance, the reader may readily check that, for $a \neq 0$,
$$\mathrm{Resit} (x + ax^2+bx^3+ \ldots) = \frac{a^2 - b}{a^2},$$
$$\mathrm{Resit} (x+ax^3+bx^4+cx^5+\ldots )
= \frac{3a^3 + 2b^2 - 2ac}{2a^3}.$$
See \cite[Theorem 1]{juan} for a general result in this direction.
\end{ex}
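Such closed formulas can be double-checked mechanically. The sketch below (Python with SymPy; the helper \texttt{resit} is ours) reads $\mathrm{Res}(f)$ off as the formal residue of $1/(z - f(z))$; this uses the classical fact, not proved here, that the integral $\mathrm{R}$ is invariant under local changes of coordinates, so that it can be evaluated before normalizing the leading coefficient.
\begin{verbatim}
# Iterative residue via the formal residue of 1/(z - f(z)) (a sketch).
import sympy as sp

z = sp.symbols('z')
a, b, c = sp.symbols('a b c', nonzero=True)

def resit(coeffs, ell):
    """coeffs = [a_{ell+1}, ..., a_{2 ell+1}] of f(z) = z + a_{ell+1} z^{ell+1} + ..."""
    lead = coeffs[0]
    # z - f(z) = -lead * z^{ell+1} * (1 + u(z))
    u = sum(co/lead * z**k for k, co in enumerate(coeffs[1:], start=1))
    inv = sp.expand(sp.series(1/(1 + u), z, 0, ell + 1).removeO())
    res = -inv.coeff(z, ell)/lead          # coefficient of 1/z in 1/(z - f(z))
    return sp.simplify(sp.Rational(ell + 1, 2) - res)

print(resit([a, b], 1))       # equals (a**2 - b)/a**2
print(resit([a, b, c], 2))    # equals (3*a**3 + 2*b**2 - 2*a*c)/(2*a**3)
\end{verbatim}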
\noindent {\bf The residues in the real case.} Much of the discussion above still makes sense
for germs of $C^{2\ell + 1}$ diffeomorphisms. In particular, the morphisms $\Phi_{\ell,i}$
and $\mathrm{Resad}_{\ell}$ extend to the subgroup $\mathrm{Diff}^{2\ell + 1}_{\ell} (\mathbb{R},0)$ made of those germs that are $\ell$-tangent to the identity. Moreover, the definition of $\mathrm{Resit}$ still extends to the elements of this subgroup that are exactly $\ell$-tangent to the identity at the origin, because of the following well-known lemma (compare Lemma \ref{lem:raro}). For the statement, recall that a germ $f$ of a diffeomorphism of the real line fixing the origin is {\em contracting} (resp. {\em expanding}) if $f(x) < x$ (resp. $f(x)>x$) holds for all small-enough $x >0$.
\begin{lem} \label{lem-basic} Let $f$ be the germ of a $C^{2\ell + 1}$ diffeomorphism of the real line
that is exactly $\ell$-tangent to the identity at the origin. If $f$ is expanding (resp. contracting), then there exists a germ of polynomial diffeomorphism $h$ such that the conjugate germ $hfh^{-1}$ has
a Taylor series expansion at the origin of the (reduced) form
$$hfh^{-1} (x) = x \pm x^{\ell + 1} + \mu x^{2\ell + 1} + o (x^{2\ell + 1}),$$
where the sign is positive (resp. negative) in the expanding (resp. contracting) case. Moreover, $\mu$ is uniquely determined up to conjugacy by a germ of $C^{2\ell + 1}$ diffeomorphism.
\end{lem}
\begin{proof} By assumption, $f$ has a Taylor series expansion at the origin of the form
$$f(x) = x + \sum_{n=\ell + 1}^{2\ell + 1} a_n x^n + o (x^{2\ell + 1}), \quad \mbox{with } a_{\ell + 1} \neq 0.$$
By conjugating $f$ by a homothety, we can arrange that $a_{\ell + 1} = 1$ (resp. $a_{\ell + 1} = -1$) in the expanding (resp. contracting) case. Now, by conjugating by a germ of the form $x+\alpha x^2$, we can make $a_{\ell+2}$ vanish without changing $a_{\ell + 1} = \pm1$. Next we conjugate
by a germ of the form $x+\alpha x^3$ to make $a_{\ell+3}$ vanish without changing $a_{\ell + 1} = \pm 1$ and $a_{\ell+2}=0$. One continues this way up to conjugating by a germ of the form $x+\alpha x^{\ell}$, which allows killing $a_{2\ell}$. We leave it to the reader to fill in the details as well as the proof of the uniqueness of $\mu$ (see also \cite{voronin}).
\end{proof}
In any case above, we let
$$\mathrm{Res} (f) := \mu
\qquad \mbox{ and } \qquad
\mathrm{Resit} (f) = \frac{\ell + 1}{2} - \mu.$$
One readily checks that this definition is coherent with that of the complex analytic case.
\begin{rem}
\label{rem:diferenciable}
Strictly speaking, for a germ of diffeomorphism that is $\ell$-tangent to the identity, one does not need $C^{2\ell+1}$ regularity to define residues: $(2\ell + 1)$-differentiability at the origin is enough. Indeed, all definitions only use Taylor series expansions about the origin and their algebraic properties under composition.
\end{rem}
\subsection{Schwarzian derivatives and $\mathrm{Resad}_{\ell}$}
\label{section-sc}
For a germ of a genuine diffeomorphism $f \in \mathrm{Diff}^{\infty}(\mathbb{K},0)$ (where $\mathbb{K}$ is, for example, the field of real numbers), one has
$$a_n = \frac{D^n f (0)}{n!}.$$
Thus, if $Df (0) = 1$, then
$$\mathrm{Resad}_1 (f) = a_2^2 - a_3 = \frac{(D^2 f (0))^2}{4} - \frac{D^3f(0)}{6} =
\frac{1}{6} \left[ \frac{3}{2} \left( \frac{D^2 f(0)}{Df(0)} \right)^2 - \frac{D^3f (0)}{Df(0)} \right] = \frac{Sf (0)}{6},$$
where $Sf$ denotes the Schwarzian derivative. That $Sf$ appears as a group homomorphism is not surprising. Indeed, in a general setting, it satisfies the cocycle identity
\begin{equation}\label{cocicle-s}
S (fg) = S(g) + S (f) \circ g \cdot Dg^2,
\end{equation}
and restricted to parabolic germs this becomes
$$S(fg)(0) = Sg (0) + Sf (0).$$
In this view, $\mathrm{Resad}_{\ell}$ may be seen as a kind of ``Schwarzian derivative of higher order'' for parabolic germs.
Now, for $f \in \mathrm{Diff}_{\ell}^{\infty} (\mathbb{K},0)$, we have
$$\mathrm{Resad}_{\ell} (f)
= \frac{\ell + 1}{2} \left( \frac{D^{\ell + 1} f (0)}{(\ell + 1)\,!} \right)^2 - \frac{D^{2\ell + 1}f(0)}{(2\ell + 1)\,!}
= \frac{1}{(2\ell + 1)\, !} \left[ {2\ell + 1 \choose \ell} \frac{(D^{\ell + 1} f(0))^2}{2} - D^{2\ell + 1} f (0) \right] \! .$$
Thus, letting
$$S_{\ell} (f) := {2\ell + 1 \choose \ell} \frac{(D^{\ell + 1} f(0))^2}{2} - D^{2\ell + 1} f (0) = (2\ell+1)! \, \mathrm{Resad}_{\ell}(f),$$
we have $S_1 (f) = S(f)$.
In a slightly broader context, if $f,g$ are two germs of $C^{2\ell + 1}$ diffeomorphisms with all derivatives $D^2, D^3, \ldots ,D^{\ell}$ vanishing at the origin then, normalizing the derivatives at the origin by $Df(0)$ and $Dg(0)$ (as in the displayed formula for $\ell = 1$ above) and using the Fa\`a di Bruno formula, it is straightforward to check the cocycle relation below:
$$S_{\ell} (fg) = S_{\ell} (g) + S_{\ell} (f) \cdot (Dg )^{2\ell}.$$
This is an extension of the additive property of $\mathrm{Resad}_{\ell}$ to the framework of non-parabolic germs.
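As an illustration, the sketch below (Python with SymPy; the function \texttt{S2} is ours) verifies this cocycle relation symbolically for $\ell = 2$, with the derivatives normalized by the first derivative at the origin as in the displayed formula for $\ell = 1$.
\begin{verbatim}
# Check of the cocycle S_2(fg) = S_2(g) + S_2(f) (Dg)^4 for germs with
# vanishing second derivative at the origin (a sketch).
import sympy as sp

x = sp.symbols('x')
A, B, a3, a4, a5, b3, b4, b5 = sp.symbols('A B a3 a4 a5 b3 b4 b5')

def S2(phi):
    d1 = sp.diff(phi, x, 1).subs(x, 0)
    d3 = sp.diff(phi, x, 3).subs(x, 0)
    d5 = sp.diff(phi, x, 5).subs(x, 0)
    return sp.binomial(5, 2)*(d3/d1)**2/2 - d5/d1

f = A*(x + a3*x**3 + a4*x**4 + a5*x**5)
g = B*(x + b3*x**3 + b4*x**4 + b5*x**5)
fg = sp.expand(f.subs(x, g))                       # f o g (polynomial, exact)

lhs = S2(fg)
rhs = S2(g) + S2(f)*(sp.diff(g, x).subs(x, 0))**4
print(sp.simplify(lhs - rhs))                      # 0
\end{verbatim}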
\vspace{0.1cm}
\begin{rem}
A classical theorem of Szekeres establishes that every expanding or contracting germ of $C^2$ diffeomorphism of the real line is the time-1 map of a flow of germs of $C^1$ diffeomorphisms; moreover, a famous lemma of Kopell establishes that its centralizer in the group of germs of $C^1$ diffeomorphisms coincides with this flow (see \cite[chapter 4]{book}). Furthermore, it follows from the work of Yoccoz \cite{yoccoz} that, for non-flat germs of class $C^k$, the flow is made of $C^{k-1}$ diffeomorphisms. Therefore, to each $f \in \mathrm{Diff}^{2\ell + 2}_{\ell} (\mathbb{R},0) \setminus \mathrm{Diff}^{2\ell + 2}_{\ell+1} (\mathbb{R},0)$ we may associate its flow $(f^t)$ made of $C^{2\ell+1}$ germs of diffeomorphisms. These are easily seen to be exactly $\ell$-flat. Moreover, equality (\ref{eq:ecalle-resit}) has a natural extension in this setting, namely, for all $t \neq 0$,
\begin{equation}\label{eq:resit-flow}
\mathrm{Resit}(f^t) = \frac{\mathrm{Resit}(f)}{|t|}.
\end{equation}
See Remark \ref{rem:resit-flow} for a proof.
\end{rem}
\begin{rem} In the real or complex setting, if
$$f(x) = x + ax^{\ell + 1} + bx^{2\ell + 1} + \ldots$$
then, letting $h (x) = x^{\ell}$, one readily checks that
\begin{equation}\label{eq:red}
h f h^{-1} (x) = x + \ell \, a \, x^2 + \left[ \ell \, b + \frac{ \ell \, (\ell-1)}{2} a^2 \right] x^3 + \ldots.
\end{equation}
Moreover, using Lemmas \ref{l:taylor} and \ref{l:debile} below, one can show that $hfh^{-1}$ is of class $C^3$ if $f$ is of class $C^{2\ell+1}$ (yet this is not crucial to define residues, according to Remark \ref{rem:diferenciable}). Now, (\ref{eq:red}) easily yields
$$\mathrm{Resad}_1 (h f h^{-1}) = \ell \,\, \mathrm{Resad}_{\ell} (f)
\qquad \mbox{ and } \qquad
\mathrm{Resit} (h f h^{-1}) = \frac{\mathrm{Resit}(f)}{\ell}.$$
The value of $\mathrm{Resad}_{\ell}$ hence equals (up to a constant) that of $\mathrm{Resad}_1$ of the conjugate, and the same holds for $\mathrm{Resit}$. Since $\mathrm{Resad}_1$ is nothing but a sixth of the Schwarzian derivative, this gives even more insight into $\mathrm{Resad}_{\ell}$ as a generalization of the Schwarzian derivative.
\label{rem:i-1}
\end{rem}
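The identities of this remark are also easy to verify symbolically; the following sketch (Python with SymPy, ours, for $\ell = 2$ and $f(x) = x + ax^3 + bx^5$) checks (\ref{eq:red}) together with the equality $\mathrm{Resad}_1(hfh^{-1}) = 2\,\mathrm{Resad}_2(f)$.
\begin{verbatim}
# Conjugating f(x) = x + a x^3 + b x^5 by h(x) = x^2 (l = 2); a sketch.
import sympy as sp

x, a, b = sp.symbols('x a b')
f = x + a*x**3 + b*x**5
conj = sp.expand(f.subs(x, sp.sqrt(x))**2)        # h f h^{-1}(x) = (f(sqrt(x)))^2
print(conj)                                       # x + 2a x^2 + (a^2 + 2b) x^3 + ...

resad1 = conj.coeff(x, 2)**2 - conj.coeff(x, 3)   # Resad_1 of the conjugate
resad2 = sp.Rational(3, 2)*a**2 - b               # Resad_2(f)
print(sp.simplify(resad1 - 2*resad2))             # 0, i.e. Resad_1(h f h^{-1}) = 2 Resad_2(f)
\end{verbatim}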
\section{On conjugacies and residues: statements, examples}
\label{section-statements}
Let us next recall the first two main results of this work.
\begingroup
\def\thethm{\ref{t:A}}
\begin{thm} \label{t:A-bis}
Given $\ell \geq 1$, let $f,g$ be two parabolic germs in $\mathrm{Diff}^{2\ell + 1}_+(\mathbb{R},0)$ that are exactly
$\ell$-tangent to the identity. If $f$ and $g$ are conjugated by a germ in $\mathrm{Diff}^{\ell + 1}_+ (\mathbb{R},0)$, then they have the same
(iterative) residue.
\end{thm}
\addtocounter{thm}{-1}
\endgroup
\begingroup
\def\thethm{\ref{t:B}}
\begin{thm} \label{t:B-bis}
Given $\ell \geq 1$, let $f,g$ be two parabolic germs in $\mathrm{Diff}^{2\ell + 1}_+(\mathbb{R},0)$
that are exactly $\ell$-tangent to the identity. If they are both expanding or both contracting,
then they are conjugated by a germ in $\mathrm{Diff}^{\, \ell}_+ (\mathbb{R},0)$.
\end{thm}
\addtocounter{thm}{-1}
\endgroup
In \S \ref{section-proofs}, we will give two complete and somewhat independent proofs of Theorem \ref{t:A}. A proof of Theorem \ref{t:B} (in the extended form given by Theorem \ref{t:Bp} from the Introduction) will be presented in \S \ref{section-parte2}. Here we illustrate both of them with several clarifying examples; in particular, in \S \ref{section-ejemplo-helene}, we give an example that shows that the converse of Theorem \ref{t:A} does not hold. The reader interested in proofs may skip most of this (somewhat long) section.
\subsection{A fundamental example: (non-)invariance of residues}
Let us consider the germs at the origin of
$$f(x) = x - x^2 \qquad \mbox{ and } \qquad \, \, g(x) = \frac{x}{1+x} = x - x^2 +x^3 - x^4 + \ldots$$
These have different residues:
$$\mathrm{Resit} (f) = \mathrm{Resad}_1 (f) = 1 \neq 0 = \mathrm{Resad}_1 (g) = \mathrm{Resit} (g).$$
Using that $\Phi_{1}(f) = \Phi_{1} (g)$, one easily concludes that any $C^2$ conjugacy between $f$ and $g$ has to be parabolic (see the first proof below).
Now, because of the invariance of $\mathrm{Resad}_1$ (equivalently, of the Schwarzian derivative) under conjugacies by parabolic germs of $C^3$ diffeomorphisms, we conclude that no $C^3$ conjugacy between them can exist. However, Theorems \ref{t:A} and \ref{t:B} above imply that, actually, these germs are not $C^2$ conjugate, though they are $C^1$ conjugate. Below we elaborate on these two claims, in each case in two different ways: first by looking directly at the conjugacy relation, and then by looking at the associated vector fields.
\vspace{0.3cm}
\begin{proof}[Sketch of proof that $f,g$ are not $C^2$ conjugate.] Assume that $h$ is a germ of $C^2$ diffeomorphism conjugating $f$ and $g$, that is, $hfh^{-1} = g$. Writing $h(x) = \lambda \, x + a x^2 + o(x^2)$ for $\lambda = Dh (0) \neq 0$ and $a = D^2h(0)/2$, equality $hf = gh$ translates to
$$\lambda \, f (x) + a \, f(x)^2 + o (x^2) = h(x)- h(x)^2+o(x^2).$$
Thus,
$$\lambda \, (x-x^2) + a \, (x-x^2)^2 + o(x^2) = (\lambda x + ax^2) - (\lambda x +ax^2)^2 + o(x^2).$$
By identifying the coefficients of $x^2$, this yields
$$- \lambda + a = a - \lambda ^2,$$
hence $\lambda = \lambda^2$ and, therefore, $\lambda = 1$.
Now, knowing that $Dh(0)=1$, we deduce that there exists $C > 0$ such that
\begin{equation}\label{C2-0}
| h(x) - x | \leq C \, x^2
\end{equation}
for small-enough $x> 0$. Now notice that \, $g^n (x) = \frac{x}{1+nx}.$ \,In particular, given any fixed $x_0' > 0$,
\begin{equation} \label{C2-1}
g^n (x_0') \geq \frac{1}{n+D}
\end{equation}
for a very large $D > 0$ (namely, for $D \geq 1/x_0'$) and all $n \geq 1$. In what concerns $f$, a straightforward induction argument (that we leave to the reader; see also Proposition
\ref{prop:deviation}) shows that, for all $x_0 > 0$, there exists a (very small) constant $D' > 0$ such that, for all $n \geq 1$,
\begin{equation} \label{C2-2}
f^n (x_0) \leq \frac{1}{n + D' \log(n)}.
\end{equation}
Now fix $x_0 > 0$ and let $x_0' := h (x_0)$. The equality $hf^n (x_0) = g^n h(x_0)$ then yields
$$hf^n(x_0) - f^n (x_0) = g^n (x_0') - f^n (x_0).$$
Using (\ref{C2-0}), (\ref{C2-1}) and (\ref{C2-2}), we obtain
$$\frac{C}{(n+D'\log(n))^2} \geq C (f^n(x_0))^2 \geq \frac{1}{n+D} - \frac{1}{n + D' \log (n)} = \frac{D' \log (n) - D}{ (n+D) (n + D' \log (n))},$$
which is impossible for a large-enough $n$.
\end{proof}
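A quick numerical experiment (a sketch in Python, ours) illustrates the estimates (\ref{C2-1}) and (\ref{C2-2}): along the orbit of $g$ the quantity $1/g^n(x_0)$ grows like $n$ plus a constant, while along the orbit of $f$ a logarithmic correction with slope $\mathrm{Resit}(f)=1$ appears.
\begin{verbatim}
# Orbits of f(x) = x - x^2 and g(x) = x/(1+x): the logarithmic drift (a sketch).
import math

x0 = 0.5
xf = xg = x0
for n in range(1, 100001):
    xf -= xf**2                  # f(x) = x - x^2
    xg /= 1 + xg                 # g(x) = x/(1+x), so g^n(x0) = x0/(1+n x0)
    if n in (10, 100, 1000, 10000, 100000):
        # last column tends (slowly, with error O(1/log n)) to Resit(f) - Resit(g) = 1
        print(n, n*xg, n*xf, (1/xf - 1/xg)/math.log(n))
\end{verbatim}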
\vspace{0.3cm}
\begin{proof}[Sketch of proof that $f,g$ are $C^1$ conjugate.] The existence of a $C^1$ conjugacy between diffeomorphisms such as $f$ and $g$ above is folklore (see for instance \cite{rog,firmo}). An argument that goes back to Szekeres \cite{szekeres} and Sergeraert \cite{sergeraert} proceeds as follows: The equality $hfh^{-1} = g$ implies $hf^nh^{-1} = g^n$ for all $n \geq 1$, hence $h = g^{-n} h f^n$. Thus, if $h$ is of class $C^1$, then
\begin{equation}\label{eq-lim}
Dh (x) = \frac{Df^n (x)}{Dg^n (g^{-n} h f^n (x))} Dh (f^n (x)) = \frac{Df^n (x)}{Dg^n (h(x))} Dh (f^n (x)).
\end{equation}
It turns out that the sequence of functions
$$(x,y) \mapsto A_n (x,y) := \frac{Df^n (x)}{Dg^n (y)}$$
is somewhat well behaved. In particular, it converges to a continuous function $A$ away from the origin. Since $Dh(0)=1$ (see the proof above), passing to the limit in (\ref{eq-lim}) yields
\begin{equation}\label{Dh=A}
Dh (x) = A (x,h(x)).
\end{equation}
This is an O.D.E. in $h$. Now, a careful analysis shows that this O.D.E. has a solution, and this allows one to obtain the conjugacy between $f$ and $g$. See \cite{firmo} for the details.
\end{proof}
\vspace{0.3cm}
It is worth stressing that both proofs above used not just the conjugacy relation but an iterated version of it:
$$hfh^{-1} = g \quad \Rightarrow \quad hf^nh^{-1} = g^n.$$
In this regard, it may be instructive for the reader to look for direct proofs using only the conjugacy relation, in order to detect where things get stuck. The moral is that one really needs to use the underlying dynamics. Now, there are objects that encode such dynamics, namely, the vector fields whose time-1 maps correspond to the given diffeomorphisms. These were proved to exist by Szekeres \cite{szekeres} and Sergeraert \cite{sergeraert} in a much broader context, and their properties were studied, among others, by Takens in the non-flat $C^{\infty}$ case \cite{takens} and later by Yoccoz in finite regularity \cite{yoccoz}. We next illustrate how to use them with the example above.
\vspace{0.3cm}
\begin{proof}[Proof that $f,g$ are not $C^2$ conjugate using vector fields.]
One proof for this particular case that is close to the preceding one works as follows: Let $X,Y$ be, respectively, the (unique) $C^1$ vector fields associated to $f,g$ (according to Takens and Yoccoz, these must be of class $C^{\infty}$; actually, the expression for $Y$ is explicit). For $x$ close to 0, one has (see \S \ref{sub-residues-fields} for $X$):
$$X (x) = -x^2 - x^3 + o(x^3), \qquad Y(x) = -x^2.$$
Extend $X,Y$ to $\R_+ := [0, \infty)$ so that their corresponding flows $(f^t)$ and $(g^t)$ are made of $C^{\infty}$ diffeomorphisms of $\R_+$ and are globally contracting. Fix $x_0 > 0$. The map $\tau_X \!: x\mapsto \int_{x_0}^x\frac1X$ defines a $C^{\infty}$ diffeomorphism from $\R_+^* := \, (0,\infty)$ to $\R$ satisfying $\tau_X(f^t(x_0))=t$ by definition of the flow, so that $\tau_X^{-1}$ is the map $t\mapsto f^t(x_0)$. Obviously, one has a similar construction for $Y$ and $(g^t)$. (We refer to \cite{eynard-navas-sternberg} for further details of this construction.) In particular, using the equalities
$$\int_{x_0}^{f^n (x_0)} \frac{dy}{X(y)} \,\, = \tau_X (f^n(x_0)) \,\, = \,\, n \,\, = \,\, \tau_Y (g^n(x_0)) \,\, = \,\, \int_{x_0}^{g^n (x_0)} \frac{dy}{Y(y)},$$
one gets equations for the values of $f^n(x_0), g^n(x_0)$ that easily yield the estimates (\ref{C2-1}) and~(\ref{C2-2}). Then the very same arguments as in the preceding proof allow to conclude.
Another (more direct) argument of proof works as follows. If $h$ is a $C^2$ diffeomorphism that conjugates $f$ and $g$, then it must conjugate the associated flows (this is a consequence of the famous Kopell Lemma; see for instance \cite[Proposition 2.2]{eynard-navas-sternberg}). This means that \, $X \cdot Dh = Y \circ h.$ \, Writing $h (x) = \lambda x + a x^2 + o(x^2)$, with $\lambda \neq 0$, this gives
\begin{small}
$$(-x^2 - x^3 + o(x^3)) \cdot (\lambda + 2ax + o(x)) = -h(x)^2 + o(x^3) = - (\lambda x + a x^2 + o(x^2))^2 + o(x^3).$$
\end{small}By identifying the coefficients of $x^2$ above, we obtain $- \lambda = - \lambda^2$, hence $\lambda = 1$. Next, by identifying the coefficients of $x^3$, we obtain
$$-2a-1 = -2a - \lambda = -2a \lambda = - 2a,$$
which is absurd.
\end{proof}
\vspace{0.12cm}
\begin{proof}[Proof that $f,g$ are $C^1$ conjugate using vector fields.] This is a direct consequence of the much more general result below. For the statement, we say that a (germ of) vector field $Z$ at the origin is {\em contracting} (resp. {\em expanding}) if its flow maps are contracting (resp. expanding) germs of diffeomorphisms for positive times. Equivalently, $Z(x) < 0$ (resp. $Z(x) > 0$) for all small-enough $x > 0$.
\vspace{0.1cm}
\begin{prop}
\label{p:Takens}
Let $X$ and $Y$ be two (germs of) continuous vector fields, both contracting or both expanding, that generate flows of (germs of) $C^1$
diffeomorphisms. Suppose that there exist $r>1,s>1$ and $\alpha \neq 0,\beta \neq 0$ of the same sign satisfying $X(x)\sim \alpha x^r$ and
$Y(x)\sim \beta x^s$ at $0$. Then $X$ and $Y$ are conjugate by the germ of a $C^1$ diffeomorphism if and only if $r=s$.
\end{prop}
\vspace{0.1cm}
\begin{rem}
We will see in the proof that the condition remains necessary if one only assumes $r \geq 1, s \ge 1$. However, it is not sufficient anymore if $r=s=1$. In this case, one has to add the condition $\alpha=\beta$ and, even then, one needs an additional regularity assumption on $X$ and $Y$ (for instance, $C^{1+\tau}$ with $\tau>0$ is enough \cite{MW}, but $C^{1+bv}$ and even $C^{1+ac}$ are not \cite{eynard-navas-sternberg}).
\end{rem}
\begin{rem}
Proposition \ref{p:Takens} gives a very simple criterion of $C^1$ conjugacy for contracting smooth vector fields $X$ and $Y$ \emph{which are neither hyperbolic nor infinitely flat at $0$} (or for $C^k$ vector fields which are not $k$-flat). This criterion is simply to have the same order of flatness at~$0$ and the same ``sign'', or equivalently to satisfy that $X/Y$ has a positive limit at $0$ (though the statement above is much more general, since it does not even require the vector fields to be $C^1$~!).
We do not know whether this condition remains sufficient if one considers infinitely flat vector fields. Nevertheless, in this context it is not necessary. For example, $X(x):=e^{-\frac1{x}}$ and $Y(x):=\frac12 e^{-\frac1{2x}}$ are conjugate by a homothety of ratio $2$, but $Y(x)\sim \frac12\sqrt{X(x)}$, so $X/Y$ goes to $0$. It is not necessary even if one restricts to conjugacies by germs of parabolic $C^1$ diffeomorphisms. For example, consider the vector field $X(x):= e^{-\frac1{x^2}}$ and the local (analytical) diffeomorphism $h(x):=\frac{x}{1+x}$. Then $Y=h^*X$ satisfies
$$Y(x) = \frac{(X\circ h)(x)}{Dh(x)}\sim e^{-\frac{(1+x)^2}{x^2}} = X(x) \, e^{-\frac2x-1},$$
so $X/Y$ goes to $+\infty$.
\end{rem}
\begin{proof}[Proof of the ``necessity'' in Proposition \ref{p:Takens}.] If $h$ is a germ
of $C^1$ diffeomorphisms conjugating $X$ to $Y$ then, for $x>0$ close to the origin,
$$Dh(x) = \frac{(Y\circ h)(x)}{X(x)}\sim \frac{\beta \, (h(x))^s}{\alpha x^r}\sim \frac{\beta \, (Dh(0))^s}{\alpha} x^{s-r}.$$
The last expression must have a nonzero limit at $0$, which forces $r = s$.
Observe that this part of the argument works for any values of $r$ and $s$. Moreover, if $r=s=1$ then,
since $Dh(x)$ must tend to $Dh(0)\neq0$, we have in addition $\alpha = \beta$.
\end{proof}
\begin{proof}[Proof of the ``sufficiency'' in Proposition \ref{p:Takens}.] Again, extend $X,Y$ to $\R_+ = [0, \infty)$ so that they do not vanish outside the origin and their corresponding flows $(f^t)$ and $(g^t)$ are made of $C^1$ diffeomorphisms of $\R_+$. Multiply them by $-1$ in case they are expanding, fix $x_0 > 0$, and consider the maps $\tau_X \!: x \mapsto \int_{x_0}^x\frac1X$ and $\tau_Y \!: x \mapsto \int_{x_0}^x \frac{1}{Y}$. The $C^1$ diffeomorphism $h := \tau_Y^{-1}\circ \tau_X$ of $\R_+^* \!= \, (0,\infty)$ conjugates $(f^t)$ to $(g^t)$, and thus sends $X$ to $Y$. Let us check that, under the assumption $r=s>1$, the map $h$ extends to a $C^1$ diffeomorphism of $\R_+$. For $x>0$ near $0$, one has
$$Dh(x) = \frac{(Y\circ h)(x)}{X(x)}\sim \frac{\beta}{\alpha}\left(\frac{h(x)}{x}\right)^{\! r},$$
so it suffices to show that $\frac{h(x)}{x}$ has a limit at $0$. To do this, first observe that the improper integral $\int_{x_0}^x\frac{dy}{y^r}$ diverges when $x$ goes to $0$, so
$$\tau_X(x) = \int_{x_0}^x\frac1X\sim \int_{x_0}^x \frac{dy}{\alpha y^r}\sim \alpha' x^{1-r}$$
for some constant $\alpha'>0$. Similarly, $\tau_Y(x)\sim \beta' x^{1-r}$ with $\beta' > 0$. It follows that when $t$ goes to $\infty$, one has $t = \tau_Y(\tau_Y^{-1}(t))\sim \beta' (\tau_Y^{-1}(t))^{1-r}$, so that
\begin{equation}\label{eq:estrella}
\tau_Y^{-1}(t)\sim \beta'' t^{\frac1{1-r}}.
\end{equation}
Finally,
$$h(x) = \tau_Y^{-1}\tau_X(x)\sim \beta'' (\tau_X(x))^{\frac1{1-r}}\sim \beta''' x$$
for some new constant $\beta'''$, which concludes the proof.
\end{proof}
\begin{rem} The inoffensive relation (\ref{eq:estrella}) above is a key step. It does not work for $r=s=1$. In this case, $x^{1-r}$ must be replaced by $\varphi (x) := \log(x)$, but the function $\varphi$ does not satisfy
$$u(t)\sim v(t) \quad \Longleftrightarrow \quad \varphi(u(t))\sim \varphi(v(t)),$$
whereas the power functions do (as it was indeed used in the line just following (\ref{eq:estrella}))!
\end{rem}
\vspace{0.1cm}
So far we have discussed the $C^1$ conjugacy between elements in $\mathrm{Diff}^1_+ (\mathbb{R},0)$ that are exactly $\ell$-tangent to the identity for the same value of $\ell$ via two different methods: one of them is direct and the other one is based on associated vector fields. It turns out, however, that these approaches are somehow equivalent. Indeed, according to Sergeraert \cite{sergeraert}, the vector field
associated to a contracting germ $f$ of class $C^2$
has an explicit iterative formula:
$$X (x) = \lim_{n \to \infty} (f^n)^* (f-id) ( x ) = \lim_{n \to \infty} \frac{f^{n+1}(x)- f^n (x)}{Df^n (x)}$$
(we refer to \cite{eynard-navas-arcconnected} for the details and extensions of this construction). Now, if $h$ is a $C^1$ diffeomorphism that conjugates $f$ to another contracting germ $g$ of $C^2$ diffeomorphism, then, as already mentioned, it must send $X$ to the vector field $Y$ associated to $g$, that is, $X \cdot Dh = Y \circ h$. Using the iterative formula above, we obtain
$$Dh (x) = \lim_{n \to \infty} \frac{Df^n (x)}{f^{n+1}(x)-f^n(x)} \cdot \frac{g^{n+1}(h(x)) - g^n (h(x))}{Dg^n (h(x))}.$$
Since $hfh^{-1} = g$, we have
$$\frac{g^{n+1}(h(x)) - g^n (h(x))}{f^{n+1}(x) - f^n (x)} = \frac{ h (f^{n+1}(x))- h (f^n (x))}{f^{n+1}(x)-f^n(x)} \longrightarrow Dh (0).$$
As a consequence, if $Dh (0) = 1$, we have
$$Dh (x) = \lim_{n \to \infty} \frac{Df^n(x)}{Dg^n (h(x))},$$
and we thus retrieve the equality $Dh(x) = A (x,h(x))$ from (\ref{Dh=A}).
\end{proof}
\subsection{Examples of low regular conjugacies that preserve residues}
\label{section:ejemplo}
The statement of Theorem \ref{t:A} would be empty if all $C^{\ell + 1}$ conjugacies between parabolic germs in $\mathrm{Diff}^{2\ell + 1}_{\ell} (\mathbb{R},0) \setminus \mathrm{Diff}^{2\ell + 1}_{\ell + 1} (\mathbb{R},0)$ were automatically $C^{2\ell + 1}$. However, this is not at all the case. Below we provide the details of a specific example for which $\ell = 1$; we leave the extension to larger order of tangency to the reader.
Actually, our example directly deals with (flat) vector fields: we will give two of them of class $C^3$ that are $C^2$ conjugate but not $C^3$ conjugate. The announced diffeomorphisms will be the time-1 maps of their flows.
Let $X(x) := x^2$, $h(x) := x+x^2+x^3\log x$ and $Y := h^*X$. One easily checks that $h$ is the germ of a $C^2$ diffeomorphism that is not $C^3$ on any neighborhood of the origin. (This follows from the fact that the function $x\mapsto x^{k+1} \log x$ is of class $C^k$ but not $C^{k+1}$, which can be easily checked by induction.) Every map that conjugates $X$ to $Y$ equals $h$ up to composition with a member of the flow of $X$ (this immediately follows from \cite[Lemma 2.5]{eynard-navas-sternberg}). As these members are all of class $C^{\infty}$ and $h$ is not $C^3$, there is no $C^3$ conjugacy from $X$ to $Y$.
We are hence left to show that $Y$ is of class $C^3$. To do this, we compute:
\begin{eqnarray*}
Y(x)
&=& \frac{(X\circ h)(x)}{Dh(x)} \\
&=& \frac{(x+x^2+x^3\log x)^2}{1+2x+3x^2\log x+ x^2} \\
&=& x^2 \cdot \frac{1+x^2+x^4(\log x)^2+2x+2x^2\log x+2x^3\log x}{1+2x+3x^2\log x+ x^2}\\
&=& x^2\left(1+\frac{-x^2\log x+2x^3\log x+x^4(\log x)^2}{1+2x+3x^2\log x+ x^2}\right)\\
&=& x^2+x^4\log x \cdot \frac{-1+2x+x^2\log x}{1+2x+3x^2\log x+ x^2}.
\end{eqnarray*}
This shows that $Y$ is of the form $Y(x) = x^2 + o( x^4 \log x)$. Showing that $Y$ is of class $C^3$ (with derivatives equal to those of $x^2$ up to order 3 at the origin) requires some extra computational work based on the fact that the function $u (x) := x^4\log x$ satisfies $D^{(3-n)} u (x) = o(x^{n})$
for $n \in [\![0,3]\!] := \{0,1,2,3\}$. We leave the details to the reader.
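The algebraic identity for $Y$ displayed above can be confirmed symbolically; the following sketch (Python with SymPy, ours) does so.
\begin{verbatim}
# Confirmation of the expression for Y = h^*X with X = x^2, h = x + x^2 + x^3 log x.
import sympy as sp

x = sp.symbols('x', positive=True)
X = x**2
h = x + x**2 + x**3*sp.log(x)
Y = X.subs(x, h)/sp.diff(h, x)
claimed = x**2 + x**4*sp.log(x)*(-1 + 2*x + x**2*sp.log(x)) \
          / (1 + 2*x + 3*x**2*sp.log(x) + x**2)
print(sp.simplify(sp.cancel(Y - claimed)))   # 0
\end{verbatim}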
\subsection{$C^{2\ell+1}$ germs with vanishing residue that are non $C^{\ell+1}$ conjugate}
\label{section-ejemplo-helene}
The aim of this section is to give an example showing that the converse to Theorem \ref{t:A} does not hold (for $\ell = 1$). More precisely, we will give an example of two expanding germs of $C^3$ diffeomorphisms, both exactly 1-tangent to the identity and with vanishing iterative residue, that are not $C^2$ conjugate. Notice that, by Proposition \ref{p:Takens}, these are $C^1$ conjugate.
Again, our example directly deals with (flat) vector fields. Namely, let $X(x):= x^2$ and $Y(x) := h^* X (x)$, where $h$ is given on $\R_+^*
$ by $h(x) := x + x^2 \, \log(\log x).$ Let $f$ (resp. $g$) denote the time-1 map of $X$ (resp. $Y$). We claim that these are (germs of) $C^3$ diffeomorphisms. This is obvious for $f$ (which is actually real-analytic). For $g$, this follows from the fact that $Y$ is of class $C^3$. To check this, observe that
$$Y(x) = \frac{X(h(x))}{Dh (x)} = \frac{ \big( x+x^2 \log (\log x) \big)^2}{ 1 + 2 x \log(\log x) + \frac{x}{\log x} },$$
hence, skipping a few steps,
\begin{equation}\label{eq-taylor-Y}
Y(x) = x^2 + x^3 \cdot \frac{ x \, (\log (\log x))^2 - \frac{1}{\log x}}{1 + 2 x \log(\log(x)) + \frac{x}{\log(x)} } = x^2 + o(x^3).
\end{equation}
Now, letting
$$u(x) := x^3, \qquad v(x):= \frac{ x \, (\log (\log x))^2 - \frac{1}{\log x}}{1 + 2 x \log(\log x) + \frac{x}{\log x} },$$
we have $Y(x) = x^2 + (u v) (x)$, and
$$D^{(3)} (uv) (x) = \sum_{j=0}^3 {3 \choose j} \, D^{(3-j)} u (x) \, D^{(j)} (v)(x) = \sum_{j=0}^3 c_j \, x^j \, D^{(j)} v(x)$$
for certain constants $c_j$. We are hence reduced to showing that $x^j D^{(j)}v(x)$ tends to $0$ as $x$ goes to the origin, which is straightforward and is left to the reader.
\vspace{0.1cm}
It will follow from \S \ref{sub-residues-fields} (see equation (\ref{eq-fnvf})) that both $f$ and $g$ have vanishing iterative residue. They are conjugated by $h$, which is the germ of a $C^1$ diffeomorphism that is easily seen not to be $C^2$. As in the previous example, this implies that there is no $C^2$ conjugacy between $f$ and $g$ (even though they are both $C^3$ and have vanishing iterative residue\,!).
\section{On the conjugacy invariance of residues}
\label{section-proofs}
In this section, we give two proofs of Theorem \ref{t:A} that generalize those given in the particular case previously treated.
For both proofs, a careful study of the associated vector fields is crucial. We hence begin by recalling Yoccoz' result and,
in particular, by relating the residue to their infinitesimal expression.
\subsection{Residues and vector fields}
\label{sub-residues-fields}
Let $\mathbb{K}$ be a field.
For each integer $k \geq 1$, let us consider the group $G_k (\mathbb{K})$ made of the expressions of the form
$$\sum_{n=1}^k a_n x^n, \quad \mbox{ with } \, a_1 \neq 0,$$
where the product is just the standard composition but neglecting terms of order larger than $k$. The group $\widehat{\mathrm{Diff}} (\mathbb{K},0)$ has a natural morphism into each group $G_k (\mathbb{K})$ obtained by truncating series expansions at order $k$:
$$\sum_{n \geq 1} a_n x^n \in \widehat{\mathrm{Diff}} (\mathbb{K},0) \quad \longrightarrow \quad \sum_{n=1}^{k} a_n x^n \in G_k (\mathbb{K}).$$
Notice that each group $G_k (\mathbb{K})$ is solvable and finite-dimensional. Moreover, each subgroup $G_{k,\ell} (\mathbb{K})$, $\ell < k$, obtained by truncation as above of elements in $\widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0)$ is nilpotent. In the case where $\mathbb{K}$ is the field of real numbers, these groups have a well-defined exponential map, and every group element with $a_1 > 0$ is the time-1 element of a unique flow.
Actually, it is not very hard to explicitly compute this flow for elements $\mathrm{f} \in G_{2\ell + 1,\ell } (\mathbb{K})$, that is for those of the form
$$\mathrm{f} \, (x) = x + \sum_{n=\ell + 1}^{2\ell + 1} a_n x^n.$$
Namely, this is given by
\begin{equation}\label{eq-vf}
\mathrm{f}^t (x)
= x + \sum_{n=\ell + 1}^{2 \ell} t \, a_n \, x^n + \left[ \frac{\ell + 1}{2} (t \, a_{\ell + 1})^2 - t \, \mathrm{Resad}_{\ell} (\mathrm{f}) \right] x^{2\ell + 1},
\end{equation}
where $\mathrm{Resad}_{\ell}$ is defined as in $\widehat{\mathrm{Diff}_{\ell}}(\mathbb{K},0)$ by (\ref{eq-resad-i}) (assuming that $\mathbb{K}$ has characteristic different from $2$). Checking that this is indeed a flow is straightforward: it only uses the additive properties of the corresponding versions of $\Phi_{\ell,i}$ (for $1\leq i \leq \ell$) and $\mathrm{Resad}_{\ell}$ in $G_{2\ell + 1}(\mathbb{K})$.
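For concreteness, the sketch below (Python with SymPy, ours) verifies the flow property $\mathrm{f}^t \circ \mathrm{f}^s = \mathrm{f}^{t+s}$ for the formula (\ref{eq-vf}) in the case $\ell = 2$, where $\mathrm{Resad}_2(\mathrm{f}) = \tfrac32 a_3^2 - a_5$.
\begin{verbatim}
# Check that (eq-vf) defines a flow in G_5(R) for l = 2 (a sketch).
import sympy as sp

x, t, s, a3, a4, a5 = sp.symbols('x t s a3 a4 a5')
resad = sp.Rational(3, 2)*a3**2 - a5               # Resad_2 of f = x + a3 x^3 + a4 x^4 + a5 x^5

def flow(tt):                                      # formula (eq-vf) with l = 2
    return x + tt*a3*x**3 + tt*a4*x**4 \
             + (sp.Rational(3, 2)*(tt*a3)**2 - tt*resad)*x**5

def truncate(expr):                                # composition in G_5: drop orders >= 6
    return sp.expand(sp.series(expr, x, 0, 6).removeO())

comp = truncate(flow(t).subs(x, flow(s)))
print(sp.simplify(comp - truncate(flow(t + s))))   # 0
\end{verbatim}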
Inspired by the work of Takens \cite{takens}, in \cite[Appendice 3]{yoccoz}, Yoccoz considered germs $f$ of non-flat $C^k$ diffeomorphisms of the real line fixing the origin. In particular, he proved that there exists a unique (germ of) $C^{k-1}$ vector field $X$ that is $k$ times differentiable at the origin and whose flow $(f^t)$ has time-1 map $f^1 = f$.
If $k = 2\ell + 1$, by truncating $f$ at order $2\ell + 1$, we get an element $\mathrm{f}_{2\ell + 1} \in G_{2\ell + 1} (\mathbb{R})$, and we can hence consider the flow $( \mathrm{f}^t_{2\ell + 1})$ in $G_{2\ell + 1} (\mathbb{R})$ whose time-1 element is $\mathrm{f}_{2\ell + 1}$. Let $P$ be the associated polynomial vector field defined by
$$P(x) := \frac{d}{dt}_{ |_{t=0}} \mathrm{f}^t_{2\ell + 1} (x).$$
As a key step of his proof, Yoccoz showed a general estimate that implies that
$$\lim_{x \to 0} \frac{X(x) - P(x)}{x^{2\ell + 1}} = 0.$$
According to (\ref{eq-vf}), if $f \in \mathrm{Diff}^{2\ell + 1}_{\ell } (\mathbb{R},0)$ can be written in the form
$$f(x) = x + \sum_{n=\ell + 1}^{2\ell + 1} a_n x^n + o (x^{2\ell + 1}),$$
then the polynomial vector field associated to $\mathrm{f}_{2\ell + 1}$ equals
$$P(x) = \frac{d}{dt}_{ |_{t=0}} \mathrm{f}^t_{2\ell + 1} (x) = \sum_{n=\ell + 1}^{2 \ell} a_n x^n - \mathrm{Resad}_{\ell}(f) \, x^{2\ell + 1},$$
and therefore
\begin{equation}\label{eq-fnvf}
X (x) = \sum_{n=\ell + 1}^{2\ell} a_n x^n - \mathrm{Resad}_{\ell}(f) \, x^{2\ell + 1} + o(x^{2\ell + 1}).
\end{equation}
This formula in which $\mathrm{Resad}_{\ell}$ explicitly appears will be fundamental for the proof of Theorem~\ref{t:A}.
\begin{rem}
\label{rem:resit-flow}
Formula (\ref{eq:resit-flow}), which holds for every $f \in \mathrm{Diff}^{2\ell + 2}_{\ell} (\mathbb{R},0) \setminus \mathrm{Diff}^{2\ell + 2}_{\ell+1} (\mathbb{R},0)$ and all $t \neq 0$, follows from (\ref{eq-fnvf}) above. Indeed, in order to check it, we may assume that $f$ is expanding and already in reduced form, say
$$f (x) = x + x^{\ell+1} + \mu x^{2\ell+1} + o(x^{2\ell + 1}).$$
If $X$ is the $C^{2\ell+1}$ vector field associated to $f$, then the vector field $X_t$ associated to $f^t$ is $t X$. Since
$$X_t (x) = t \, X(x) =
t \left[ x^{\ell+1} - \mathrm{Resad}_{\ell}(f) \, x^{2\ell + 1} + o(x^{2\ell + 1}) \right],$$
by (\ref{eq-fnvf}) we have
$$f^t (x) = x + t \, x^{\ell+1} + \mu_t \, x^{2\ell +1} + o(x^{2\ell +1}),$$
where $\mu_t$ is such that $\mathrm{Resad}_{\ell} (f^t) = t \, \mathrm{Resad}_{\ell} (f)$. Hence,
$$\frac{(\ell +1) \, t^2}{2} - \mu_t = t \, \left[ \frac{\ell + 1}{2} - \mu \right].$$
Therefore, for $t > 0$,
$$\mathrm{Resit} (f^t) = \frac{\ell +1}{2} - \frac{\mu_t}{t^2}
= \frac{1}{t^2} \left[ \frac{(\ell +1) \, t^2}{2} - \mu_t \right] = \frac{t}{t^2} \, \left[ \frac{\ell + 1}{2} - \mu \right]
= \frac{1}{t} \, \left[ \frac{\ell + 1}{2} - \mu \right] = \frac{\mathrm{Resit} (f)}{t},$$
as announced. The case $t < 0$ is analogous.
It is worth mentioning that the argument above shows that equality (\ref{eq:resit-flow}) holds for every
$f \in \mathrm{Diff}^{2\ell + 1}_{\ell} (\mathbb{R},0) \setminus \mathrm{Diff}^{2\ell + 1}_{\ell+1} (\mathbb{R},0)$ and all $t \neq 0$ for which $f^t$ is a germ of $C^{2\ell+1}$ diffeomorphism (or, at least, has $(2\ell+1)$ derivatives at the origin; see Remark \ref{rem:diferenciable}).
\end{rem}
\subsection{A first proof of the $C^{\ell + 1}$ conjugacy invariance of $\mathrm{Resit}$ }
We next proceed to the proof of the $C^{\ell + 1}$ conjugacy invariance of $\mathrm{Resit}$ between germs of $C^{2\ell + 1}$ diffeomorphisms that are exactly $\ell$-tangent to the identity.
\vspace{0.2cm}
\begin{proof}[Proof of Theorem \ref{t:A}] Let $f$ and $g$ be elements in
$\mathrm{Diff}^{2\ell + 1}_{\ell} (\mathbb{R},0) \setminus \mathrm{Diff}^{2\ell + 1}_{\ell + 1} (\mathbb{R},0)$ conjugated by an element $h \in \mathrm{Diff}^{\ell + 1}_+ (\mathbb{R},0)$. Then they are both contracting or both expanding. We will suppose that the second case holds; the other one follows from it by passing to inverses.
As we want to show the equality $\mathrm{Resit} (f) = \mathrm{Resit} (g)$, by the definition and Lemma \ref{lem-basic}, we may assume that $f$ and $g$ have Taylor series expansions at the origin of the form
$$f(x) = x + x^{\ell + 1} + \mu \, x^{2\ell + 1} + o (x^{2\ell + 1}),
\qquad
g(x) = x + x^{\ell + 1} + \mu' \, x^{2\ell + 1} + o(x^{2\ell + 1}).
$$
Notice that
$$\mathrm{Resit} (f) = \mathrm{Resad}_{\ell}(f)
= \frac{\ell + 1}{2} - \mu =: R, \qquad \mathrm{Resit} (g) = \mathrm{Resad}_{\ell} (g) = \frac{\ell + 1}{2} - \mu' =: R'.$$
In virtue of (\ref{eq-fnvf}), the vector fields $X,Y$ associated to $f,g$, respectively, have the form
$$X(x) = x^{\ell + 1} - R \, x^{2\ell + 1} + o(x^{2\ell + 1}),
\qquad
Y(x) = x^{\ell + 1} - R' \, x^{2\ell + 1} + o(x^{2\ell + 1}).$$
Now write
$$h(x) = \lambda \, x + \sum_{n =2}^{\ell + 1} c_n x^n + o(x^{\ell + 1}).$$
Since $h$ conjugates $f$ to $g$, it must conjugate $X$ to $Y$, that is,
\begin{equation}\label{eq-inv-vf}
X \cdot Dh = Y\circ h.
\end{equation}
We first claim that this implies $\lambda = 1$. Indeed, in summarized form the relation reads
$$(x^{\ell + 1} + o (x^{\ell + 1})) \cdot (\lambda + o(1)) = (\lambda \, x + o (x))^{\ell + 1} + o(x^{\ell + 1}).$$
By identification of the coefficients of $x^{\ell + 1}$, we obtain $\lambda = \lambda^{\ell + 1}$. Since $\lambda > 0$ (recall that $h$ is orientation preserving), this gives $\lambda = 1$, as announced.
We next claim that $h$ must be $\ell$-tangent to the identity. Indeed, assume otherwise and let
$2 \leq p < \ell + 1$ be the smallest index for which $c_p \neq 0$. Then relation (\ref{eq-inv-vf}) above
may be summarized as
$$(x^{\ell + 1} + o(x^{\ell + 1})) \cdot (1 + p \, c_p \, x^{p-1} + o(x^{p-1})) =
(x + c_p \, x^p + o(x^p))^{\ell + 1} + o(x^{2\ell}).$$
By identifying the coefficients of $x^{\ell+p}$ we obtain $p \, c_p = (\ell + 1) \, c_p$, which is impossible for $c_p \neq 0$.
Thus, $h$ can be written in the form
$$h(x) = x + cx^{\ell + 1} + o(x^{\ell + 1}).$$
Relation (\ref{eq-inv-vf}) then becomes
\begin{small}
$$( x^{\ell + 1} - R x^{2\ell + 1} + o (x^{2\ell + 1})) \cdot (1 + (\ell + 1) c x^{\ell} + o(x^{\ell}))
=
(x + cx^{\ell + 1} + o(x^{\ell + 1}))^{\ell + 1} - R' (x + o(x^{\ell}))^{2\ell + 1} + o(x^{2\ell + 1}).$$
\end{small}Identification of the coefficients of $x^{2\ell + 1}$ then gives
$$(\ell + 1) \, c - R = (\ell + 1) \, c - R'.$$
Therefore, $R=R'$, as we wanted to show.
\end{proof}
\subsection{Residues and logarithmic deviations of orbits}
We next characterize $\mathrm{Resit}$ for contracting germs
$f \in \mathrm{Diff}^{2\ell + 1}_{\ell} (\mathbb{R},0) \setminus \mathrm{Diff}^{2\ell + 1}_{\ell + 1} (\mathbb{R},0)$
in terms of the deviation of orbits from those of the corresponding (parabolic) ramified affine flow of order $\ell$, namely,
$$f^t(x) := \frac{x}{\sqrt[\ell]{1 + t x^{\ell}}}
= x - \frac{t}{\ell} x^{\ell + 1} + \frac{1}{2 \, \ell} \left( 1+\frac{1}{\ell} \right) t^2 x^{2\ell + 1} + o (x^{2\ell + 1}).$$
Notice that the time-1 map $f := f^1$ of this flow satisfies $\mathrm{Resit} (f) = 0$ and, for all $x>0$,
$$ \frac{1}{\sqrt[\ell]{n}} - f^n (x)
= \frac{1}{\sqrt[\ell]{n}} - \frac{x}{\sqrt[\ell]{1+nx^{\ell}}}
\sim \frac{n^{\frac{\ell-1}{\ell}}}{\ell \, n \, (1+nx^{\ell})}
\sim \frac{1}{\ell \, x^{\ell} \, n \, \sqrt[\ell]{n}}.$$
where the first equivalence follows from the equality \, $u^{\ell} - v^{\ell} = (u-v) (u^{\ell-1} + u^{\ell-2}v + \ldots + v^{\ell-1})$.
In particular, letting $a = 1/\ell$, we have
$$\lim_{n \to \infty} \left[ \frac{ \ell^2 \, n \, \sqrt[\ell]{a \, \ell \, n} }{\log (n)} \left( \frac{1}{ \sqrt[\ell]{a \, \ell \, n}} - f^n (x)\right) \right] = 0.$$
This is a particular case of the general proposition below.
\vspace{0.1cm}
\begin{prop} \label{prop:deviation}
If $\ell \geq 1$ then, for
every contracting germ $f \in \mathrm{Diff}^{2\ell + 1}_{\ell} (\mathbb{R},0) \setminus \mathrm{Diff}^{2\ell + 1}_{\ell + 1} (\mathbb{R},0)$
of the form
$$f(x) = x - ax^{\ell + 1} + b x^{2\ell + 1} + o(x^{2\ell + 1}), \qquad a > 0,$$
and all $x_0 > 0$, one has
\begin{small}
\begin{equation}\label{eq-resit-desv-gen}
\mathrm{Resit} (f)
= \lim_{n \to \infty} \left[ \frac{a \, \ell^2 \, n^2}{\log (n)} \left( \frac{1}{a \, \ell \, n} -[ f^n (x_0)]^{\ell}\right) \right]
= \lim_{n \to \infty} \left[ \frac{ \ell^2 \, n \, \sqrt[\ell]{a \, \ell \, n} }{\log (n)} \left( \frac{1}{ \sqrt[\ell]{a \, \ell \, n}} - f^n (x_0)\right) \right].
\end{equation}
\end{small}
\end{prop}
\begin{proof}
We know from \S \ref{sub-residues-fields} that the vector field $X$ associated to $f$
has a Taylor series expansion of the form
$$X(x) = -a x^{\ell + 1} - R x^{2\ell + 1} + o(x^{2\ell + 1}), \qquad R = \mathrm{Resad}_{\ell} (f).$$
We compute
\begin{eqnarray*}
n
&=& \int_{x_0}^{f^n(x_0)} \frac{dy}{X(y)} \\
&=& \int_{x_0}^{f^n(x_0)} \frac{dy}{-a y^{\ell + 1} - R y^{2\ell + 1}} + \int_{x_0}^{f^n(x_0)} \left[ \frac{1}{a y^{\ell + 1} + R y^{2\ell + 1}} - \frac{1}{a y^{\ell + 1} + R y^{2\ell + 1} + o(y^{2\ell + 1})} \right] \, dy \\
&=& \int_{x_0}^{f^n(x_0)} \frac{dy}{-a y^{\ell + 1} (1 + R y^{\ell} /a)} + \int_{x_0}^{f^n(x_0)} \left[ \frac{o(y^{2\ell + 1})}{y^{2\ell+2} (a^2 + o(y))} \right] \, dy \\
&=& \int_{x_0}^{f^n(x_0)} \left[ \frac{1 - Ry^{\ell}/a + o(y^{2\ell-1})}{-a y^{\ell + 1}} \right] \, dy + \int_{x_0}^{f^n(x_0)} o \left( \frac{1}{y} \right) \, dy \\
&=& \frac{1}{a \ell y^{\ell}}\Big|^{f^n(x_0)}_{x_0} + \frac{R \, \log(y)}{a^2}\Big|^{f^n(x_0)}_{x_0} - \frac1a \int_{x_0}^{f^n (x_0)} o \left( y^{\ell-2} \right) + o (\log (f^n (x_0))) \\
&=& \frac{1}{a \, \ell \, [f^n (x_0)]^{\ell}} + \frac{R \, \log(f^n (x_0))}{a^2} + C_{x_{0}} + o (\log (f^n(x_0))),
\end{eqnarray*}
where $C_{x_0}$ is a constant that depends only on $x_0$ (and is independent of $n$). The right-side expression is of the form
$$\frac{1}{a \, \ell \, [f^n (x_0)]^{\ell}} + o \left( \frac{1}{[f^n(x_0)]^\ell} \right),$$
which shows that
\begin{equation}\label{eq:1/n}
[f^n (x_0)]^\ell \sim \frac{1}{a \, \ell \, n}.
\end{equation}
Now, from
\begin{eqnarray*}
R
&=& \frac{a^2 \, (n-C_{x_0})}{\log (f^n (x_0))} - \frac{a}{\ell \, [f^n(x_0)]^\ell \, \log (f^n(x_0))} + o(1) \\
&=& \frac{a^2 \, (n-C_{x_0})}{[f^n(x_0)]^\ell \, \log (f^n(x_0))} \left[ [f^n(x_0)]^\ell - \frac{1}{a \, \ell \, (n-C_{x_0})} \right] + o(1),
\end{eqnarray*}
using (\ref{eq:1/n}) we obtain
$$R
= \lim_{n \to \infty} \frac{a^2 \, (n-C_{x_0})}{\frac{1}{a \, \ell \, n} \cdot \frac{1}{\ell}\log (\frac{1}{a \, \ell \, n})}
\left[ [f^n(x_0) ]^\ell - \frac{1}{a \, \ell \, (n-C_{x_0})} \right]
= \lim_{n \to \infty} \frac{a^3 \, \ell^2 \, n^2}{\log(n)} \left[ \frac{1}{a \, \ell \, n} - [f^n (x_0)]^\ell \right].$$
Therefore,
$$\mathrm{Resit}(f) = \frac{R}{a^2} = \lim_{n \to \infty} \frac{a \, \ell^2 \, n^2}{\log(n)} \left[ \frac{1}{a \, \ell \, n} - [ f^n (x_0) ]^\ell \right]
= \lim_{n \to \infty} \left[ \frac{ \ell^2 \, n \sqrt[\ell]{a \ell n}}{\log (n)} \left( \frac{1}{ \sqrt[\ell]{a \, \ell \, n}} - f^n (x_0)\right) \right],$$
where the last equality follows from the identity \, $u^\ell - v^\ell = (u-v) (u^{\ell-1} + u^{\ell-2}v + \ldots + v^{\ell-1})$.
\end{proof}
\vspace{0.1cm}
\begin{rem}
It is a nice exercise to deduce the general case of the previous proposition from the case $\ell=1$ via conjugacy by the map $x \mapsto x^{\ell}$; see Remark \ref{rem:i-1}.
\end{rem}
\vspace{0.1cm}
What follows is a direct consequence of the previous proposition; checking the details is left to the reader. It is worth comparing the case $\ell=1$ of the statement with (\ref{C2-2}).
\vspace{0.1cm}
\begin{cor}
If $\ell \geq 1$ then, for every contracting germ $f \in \mathrm{Diff}^{2\ell + 1}_\ell (\mathbb{R},0) \setminus \mathrm{Diff}^{2\ell + 1}_{\ell + 1} (\mathbb{R},0)$ of the form
$$f(x) = x - ax^{\ell + 1} + b x^{2\ell + 1} + o(x^{2\ell + 1})$$
and all $x_0 > 0$, one has
\begin{equation}\label{eq:asymp}
f^n (x_0) = \frac{1}{\sqrt[\ell]{a \ell n}} \left( 1 - \frac{[\mathrm{Resit}(f) + \delta_n] \log (n)}{\ell^2 \, n} \right),
\end{equation}
where $\delta_n$ is a sequence that converges to $0$ as $n$ goes to infinity.
\end{cor}
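The proposition and its corollary are easy to observe numerically. The sketch below (Python, ours) takes $\ell = 2$ and $f(x) = x - x^3$ (so $a = 1$, $b = 0$ and $\mathrm{Resit}(f) = 3/2$) and evaluates the first limit in (\ref{eq-resit-desv-gen}) of Proposition \ref{prop:deviation}; convergence is slow, of order $1/\log n$.
\begin{verbatim}
# Numerical illustration of the deviation of orbits for l = 2 (a sketch).
import math

x0 = 0.5
z = x0
for n in range(1, 100001):
    z -= z**3                      # f(x) = x - x^3
    if n in (100, 1000, 10000, 100000):
        print(n, (4*n*n/math.log(n))*(1/(2*n) - z*z))   # slowly approaches Resit(f) = 1.5
\end{verbatim}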
\subsection{A second proof of the $C^{\ell + 1}$ conjugacy invariance of $\mathrm{Resit}$}
We can give an alternative proof of Theorem \ref{t:A} using the estimates of the previous section.
\vspace{0.1cm}
\begin{proof}[Second proof of Theorem \ref{t:A}]
Let $f,g$ in $\mathrm{Diff}^{2\ell + 1}_{\ell} (\mathbb{R},0) \setminus \mathrm{Diff}^{2\ell + 1}_{\ell + 1} (\mathbb{R},0)$ be elements conjugated by $h \in \mathrm{Diff}^{\ell + 1}_+ (\mathbb{R},0)$. In order to show that $\mathrm{Resit} (f)$ and $\mathrm{Resit} (g)$ coincide, we may assume that $f$ and $g$ have Taylor series expansions at the origin of the form
$$f(x) = x - x^{\ell + 1} + \mu \, x^{2\ell + 1} + o (x^{2\ell + 1}),
\qquad
g(x) = x - x^{\ell + 1} + \mu' \, x^{2\ell + 1} + o(x^{2\ell + 1}).
$$
The very same arguments as in the beginning of the first proof show that $h$ can be written in the form
$$h(x) = x + cx^{\ell + 1} + o(x^{\ell + 1}).$$
In particular, for a certain constant $C > 0$,
\begin{equation}\label{eq:C}
|h(x) - x| \leq C \, x^{\ell + 1}
\end{equation}
for all small-enough $x > 0$. Fix such an $x_0 > 0$. From $hf = gh$ we obtain $hf^n (x_0) = g^n h(x_0)$ for all $n$, which yields
$$h f^n (x_0) - f^n(x_0) = g^n (h(x_0)) - f^n(x_0) $$
Using (\ref{eq:asymp}) and (\ref{eq:C}), this implies
\begin{equation}\label{eq:imp}
C \left[ \frac{1}{\sqrt[\ell]{a\ell n}} \left( 1 - \frac{[\mathrm{Resit} (f) + \delta_n] \log (n)}{\ell^2 \, n} \right) \right]^{\ell + 1}
\geq \left| \frac{[\mathrm{Resit}(g) - \mathrm{Resit}(f) + \delta_n'] \log (n)}{\ell^2 \, n \, \sqrt[\ell]{a \ell n}} \right|
\end{equation}
for certain sequences $\delta_n,\delta_n'$ converging to $0$. On the one hand, the left-hand term above is of order $1/ (n \, \sqrt[\ell]{n})$. On the other hand, if $\mathrm{Resit}(f) \neq \mathrm{Resit}(g)$, then the right-hand term is of order
$$\frac{|\mathrm{Resit}(g) - \mathrm{Resit}(f)| \, \log(n)}{n \, \sqrt[\ell]{n}}.$$
Therefore, if $\mathrm{Resit}(f) \neq \mathrm{Resit}(g)$, then inequality (\ref{eq:imp}) is impossible for large-enough $n$. Thus, $\mathrm{Resit}(f)$ and $\mathrm{Resit}(g)$ must coincide in case of $C^{\ell + 1}$ conjugacy.
\end{proof}
\section{Existence of low regular conjugacies between germs}
\label{section-parte2}
The goal of this section is to prove Theorem \ref{t:Bp}, of which Theorem \ref{t:B} is a particular case (namely, the case $r = \ell$). To begin with, notice that, passing to inverses if necessary, we may assume that the diffeomorphisms in consideration are both expanding. Looking at the associated vector fields, one readily checks that Theorem \ref{t:Bp} is a direct consequence of the next proposition. For the statement,
given $\ell \geq 1$, we say that a (germ of) vector field $Z$ is {\em exactly $\ell$-flat} if it has a Taylor series expansion about the origin of the form
$$Z(x) = \alpha x^{\ell + 1} + o(x^{\ell + 1}), \quad \mbox{with } \alpha \neq 0.$$
\vspace{0.1cm}
\begin{prop} \label{prop:CS}
Given $\ell \ge1$ and $1\le r\le \ell$, any two germs at $0$ of exactly $\ell$-flat, expanding $C^{\ell+r}$ vector fields are $C^{r}$ conjugate.
\end{prop}
\vspace{0.1cm}
For $r=1$, this is a result concerning $C^1$ conjugacies between vector fields that should be compared to Proposition \ref{p:Takens}, yet the hypotheses here are much stronger.
\begin{rem}
The conjugacy is no better than $C^{r}$ in general. For instance, in \S \ref{section-ejemplo-helene} there is an example of germs of exactly 1-flat, expanding, $C^3$ vector fields (case $\ell=1,$ $r=1$) that are $C^1$ conjugate but not $C^2$ conjugate, even though the time-1 maps have the same (actually, vanishing) iterative residue. It would be interesting to exhibit examples showing that the proposition is optimal for all the cases it covers.
\end{rem}
To prove Proposition \ref{prop:CS}, we will use a well-known general lemma concerning reduced
forms for vector fields that is a kind of simpler version of \cite[Proposition 2.3]{takens} in finite regularity. (Compare Lemma \ref{lem-basic}, which deals with the case of diffeomorphisms.)
\vspace{0.2cm}
\begin{lem}
\label{l:reduc}
Let $\ell \geq 1$ and $r \geq 1$, and let $Y$ be a germ of $C^{\ell + r}$ vector field admitting a Taylor series expansion of order $\ell + r$
about the origin that starts with $\alpha x^{\ell+1}$, where $\alpha > 0$. If $1\le r \le \ell$, then $Y$ is $C^{\infty}$ conjugate to a vector field
of the form $x\mapsto x^{\ell+1}+o(x^{\ell+r})$.
\end{lem}
\begin{proof}
For $1\le r \le \ell$, we construct by induction a sequence of $C^\infty$ conjugates $Y_1$, \dots, $Y_r$ of $Y$ such that $Y_s(x)=x^{\ell+1}+o(x^{\ell+s})$ for every $s \in [\![1,r]\!] := \{1,2,\ldots,r\}$. One first obtains $Y_1$ by conjugating $Y$ by a homothety. Assume we have constructed $Y_s$ for some $s \in [\![1,r-1]\!]$, and consider the diffeomorphism $h(x)=x+ax^{s+1}$. Since $Y_s$ is $C^{\ell+s+1}$, there exists $\alpha_s \in \R$ such that $Y_s(x) = x^{\ell + 1}+\alpha_s x^{\ell +s+1} + o(x^{\ell +s+1})$. Then, for $Y_{s+1}(x) := h^* Y_s(x) = \frac{Y_s(h(x))}{Dh(x)}$, we have
\begin{eqnarray*}
Y_{s+1}(x)
&=& \frac{(h(x))^{\ell+1}+\alpha_s (h(x))^{\ell +s+1}+o((h(x))^{\ell +s+1})}{1+a(s+1)x^{s}}\\
&=& \frac{(x+ax^{s+1})^{\ell+1}+\alpha_s (x+ax^{s+1})^{\ell +s+1}+o(x^{\ell+s+1})}{1+a(s+1)x^{s}}\\
&=& \left(x^{\ell +1}+((\ell+1)a+\alpha_s)x^{\ell +s+1}+o(x^{\ell +s+1})\right)\left(1-a(s+1)x^{s}+o(x^{s})\right)\\
&=& x^{\ell+1}+\left((\ell - s)a+\alpha_s \right)x^{\ell+s+1}+o(x^{\ell +s+1}).
\end{eqnarray*}
Since $s \leq r-1 < \ell$, we may let $a := \frac{\alpha_s}{s-\ell}$ to obtain $Y_{s+1}(x) = x^{\ell +1}+o(x^{\ell +s+1})$, which concludes the induction and thus finishes the proof. Observe that the conjugacy that arises at the end of the inductive process is a germ of polynomial diffeomorphism.
\end{proof}
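The inductive step of the proof can be checked symbolically; the sketch below (Python with SymPy, ours) does so for $\ell = 3$ and $s = 1$: with $a = \alpha_s/(s-\ell)$, the coefficient of $x^{\ell+s+1}$ in $h^*Y_s$ indeed vanishes.
\begin{verbatim}
# Inductive step of the reduction lemma for l = 3, s = 1 (a sketch).
import sympy as sp

x, alpha = sp.symbols('x alpha')
l, s = 3, 1
a = alpha/(s - l)                                  # the choice made in the proof
Y = x**(l + 1) + alpha*x**(l + s + 1)              # Y_s(x) = x^4 + alpha_s x^5 + ...
h = x + a*x**(s + 1)
pulled = sp.series(Y.subs(x, h)/sp.diff(h, x), x, 0, l + s + 2).removeO()
print(sp.expand(pulled))                           # x**4: the x**5 term has been killed
\end{verbatim}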
\vspace{0.1cm}
We will also need a couple of elementary technical lemmas. For the statement, in analogy to the case of vector fields, we say that a real-valued function $u$ is {\em $\ell$-flat at $0$} if it has a Taylor series expansion about the origin of the form \, $u(x) = a x^{\ell+1} + o (x^{\ell+1}).$
\vspace{0.1cm}
\begin{lem}
\label{l:taylor}
If $u$ is a $C^{k}$ germ of $m$-flat function at $0$, with $0 \leq m < k$, then $x\mapsto \frac{u(x)}{x^{m+1}}$
extends at $0$ to a germ of $C^{k-m-1}$ map.
\end{lem}
\begin{proof}
Since $u$ is of class $C^{k}$ and $m$-flat, all its derivatives at the origin up to order $m$ must vanish. Hence, since $u$ is of class $C^{m+1}$, we have the following form of Taylor's formula:
$$u(x)
= 0+0\cdot x +\dots + 0\cdot x^m +\frac{x^{m+1}}{m \, !} \int_0^1(1-t)^m D^{(m+1)} u(tx)dt.
$$
The lemma then follows by differentiating under the integral.
\end{proof}
\vspace{0.01cm}
\begin{lem}
\label{l:debile}
Let $r\ge 1$ and let $\varphi$ be a germ of continuous function at the origin that is differentiable outside $0$ and such that $\varphi(0)=1$. If $x\mapsto x \, \varphi(x)$ is of class $C^r$, so is the function $x\mapsto x \, (\varphi(x))^\vartheta$ for any $\vartheta \neq 0$.
\end{lem}
\begin{proof}
Let $u(x) := x \, \varphi(x)$ and $v(x) := x \, (\varphi(x))^\vartheta$. Assume $u$ is of class $C^r$.
If $\vartheta = 1$, there is nothing to prove, so let us assume $\vartheta \neq 1$. By Lemma \ref{l:taylor}, $\varphi$ is $C^{r-1}$. Now, outside $0$,
$$Du (x) = \varphi(x) + x \, D\varphi (x)\quad\mbox{ and }\quad Dv (x) = (\varphi(x))^\vartheta + \vartheta \, x\, D\varphi(x) \, (\varphi(x))^{\vartheta-1}.$$
By the first equality, since $Du$ and $\varphi$ are $C^{r-1}$, so is $x\mapsto x \, D \varphi (x)$. Thus, by the second equality, $Dv$
writes as a sum of products and compositions of $C^{r-1}$ maps, hence it is $C^{r-1}$. Therefore, $v$ is $C^r$, as announced.
\end{proof}
\vspace{0.1cm}
\begin{proof}[Proof of Theorem \ref{t:Bp}]
Let $f$ be an expanding germ in $\mathrm{Diff}^{\ell + 1 + r}_+(\mathbb{R},0)$ that is exactly $\ell$-tangent to the identity, and let $X$ be its associated $C^{\ell + r}$ vector field. We will show that $X$ is $C^r$ conjugate to $X_{\ell}(x):=x^{\ell+1}$. This allows concluding the proof. Indeed, if $g$ is another expanding germ in $\mathrm{Diff}^{\ell + 1 + r}_+(\mathbb{R},0)$ that is exactly $\ell$-tangent to the identity, then its associated vector field $Y$ will also be $C^r$ conjugate to $X_{\ell}$. Therefore, $X$ and $Y$ will be $C^r$ conjugate, and this will allow conjugating their time-1 maps, that is, $f$ and $g$.
Now, by Lemma~\ref{l:reduc}, we may assume that $X(x)=x^{\ell+1}(1+\delta(x))$ with $\delta(x)=o(x^{r-1})$. Moreover, by Lemma \ref{l:taylor} (applied to $k = \ell + r$ and $m = \ell$), we may also assume that $\delta$ is of class $C^{r-1}$. Equivalently, we can write $X(x)= \frac{x^{\ell +1}}{1+\epsilon(x)}$, still with $\epsilon$ of class $C^{r-1}$ and $(r-1)$-flat. To prove that $X$ is $C^{r}$ conjugate to $X_\ell$, it suffices to prove that the solutions $h$ of $Dh=X_\ell \circ h / X$ on~$\R_+^*$ (near $0$), which are of class $C^{\ell+r+1}$, extend to $C^{r}$ diffeomorphisms at the origin. Assume without loss of generality that $X$ is expanding on an interval containing the origin and $1$, and consider the solution $h$ fixing $1$. We have:
$$\int_1^{h(x)}\frac{1}{X_{\ell}} = \int_1^{x}\frac{Dh}{X_\ell \circ h}=\int_1^x \frac1X,$$
that is
$$-\frac1{\ell \, (h(x))^\ell} = -\frac1{\ell \, x^\ell} + \int_1^x\frac{\epsilon(y)}{y^{\ell+1}}dy+c$$
for some constant $c$. Equivalently,
$$\frac1{(h(x))^\ell} = \frac1{x^\ell} - \ell \, u(x),
\quad \mbox{ with }\quad u(x)=\int_1^x\frac{\epsilon(y)}{y^{\ell +1}}dy+c.$$
This implies
$$
{(h(x))^\ell} = \frac1{\frac1{x^\ell}(1-\ell \, x^\ell \, u(x))} = \frac{x^\ell}{1-\ell \, x^\ell \, u(x)},
$$
so that
$$h(x) =\frac{x}{(1-\ell \, x^\ell \, u(x))^{1/\ell}}.$$
Recall that $\epsilon$ is $C^{r-1}$, so $u$ is $C^{r}$ outside $0$. Moreover, $\epsilon$ is $(r-1)$-flat, so $\frac{\epsilon(y)}{y^{\ell +1}} =o(y^{r-\ell -2})$, and therefore, since $r \leq \ell$,
\begin{equation}\label{eq:n=0}
u(x)=o(x^{r-\ell -1}).
\end{equation}
In particular, $x^\ell u(x)\to 0$ as $x \to 0$, since $r\ge1$.
According to Lemma~\ref{l:debile}, it thus suffices to prove that $x\mapsto x(1-\ell \, x^\ell \, u(x))$, or equivalently $v:x\mapsto x^{\ell+1} \, u(x)$, is~$C^{r}$. To do this, observe that
$$Dv(x) = (\ell +1)x^{\ell}u(x)+x^{\ell +1} Du(x) = (\ell +1)x^{\ell}u(x)+\epsilon(x).$$
We already know that $\epsilon$ is $C^{r-1}$, so we need only check that $w:x\mapsto x^\ell u(x)$ is $C^{r-1}$. On $\R_+^*$, $w$ is $C^{r}$ (as $u$) and satisfies
$$D^{(r-1)}w (x) = \sum_{n=0}^{r-1}a_n x^{\ell-(r-1-n)} D^{(n)}u (x) = \sum_{n=0}^{r-1}a_n x^{\ell -r+1+n} D^{(n)}u (x)$$
for some constants $a_n$. In view of this, it suffices to check that $x^{n+\ell +1-r} D^{(n)} u (x)$ has a limit at $0$ for every $n$ between $0$ and $r-1$. We have already checked this for $n=0$ (see (\ref{eq:n=0}) above). For $n\ge1$,
\begin{eqnarray*}
x^{n+\ell +1-r} D^{(n)} u (x)
&=& x^{n+\ell +1-r} \, D^{(n-1)} \! \left( \frac{\epsilon(x)}{x^{\ell +1}} \right) (x)\\
&=& x^{n+\ell +1-r}\sum_{j=0}^{n-1} c_{n,j} \, x^{-\ell-1-(n-1-j)} D^{(j)} \epsilon (x)\\
&=& \sum_{j=0}^{n-1}c_{n,j} \, x^{j-r+1} D^{(j)} \epsilon (x)
\end{eqnarray*}
for certain constants $c_{n,j}$. Finally, since $\epsilon$ is $(r-1)$-flat, we have $D^{(j)} \epsilon (x)=o(x^{r-1-j})$ for $j\le r-1$, which allows showing that $x^{n+\ell +1-r} D^{(n)} u (x) \to 0$ as $x \to 0$, thus completing the proof.
\end{proof}
\section{Conjugacies in case of coincidence of residues}
The goal of this section is to prove Theorem \ref{t:C}. Again, passing to the associated vector fields, this will follow from the next proposition:
\begin{prop}
\label{p:takens-fini}
If $\ell \geq 1$ and $r\ge \ell +2$, then every germ of $C^{\ell+r}$ expanding vector field that is exactly $\ell$-flat is $C^{r}$ conjugate to a (unique) vector field of the form $x\mapsto x^{\ell+1}+\mu x^{2\ell +1}$. This still holds for $r=\ell+1$ if one further assumes that the vector field has $(2\ell+2)$ derivatives at~$0$ (and is not only of class $C^{2\ell+1}$).
\end{prop}
\begin{rem}
In the case $r=\ell+1$, the hypothesis of existence of $(2\ell+2)$ derivatives at the origin is necessary. Indeed, in \S \ref{section-ejemplo-helene}, there is an example of a $1$-flat $C^3$ vector field that is not $C^2$ conjugate to its normal form; this provides a counterexample for $\ell=1$ and $r=2=\ell+1$.
\end{rem}
\begin{proof}[Proof of Theorem \ref{t:C} from Proposition \ref{p:takens-fini}]
Let $f$ and $g$ be two expanding germs as in the theorem, and let $X$ and $Y$ be their generating vector fields. By Proposition \ref{p:takens-fini}, these are $C^r$ conjugate to some $X_0$ and $Y_0$ of the form $x\mapsto x^{\ell+1}+\mu x^{2\ell+1}$ and $x\mapsto x^{\ell+1}+\nu x^{2\ell+1}$, respectively. Now, according to \S \ref{sub-residues-fields} and the hypothesis of coincidence of residues,
$$\mu=-\mathrm{Resit}(f)=-\mathrm{Resit}(g)=\nu.$$
Thus, $X_0 = Y_0$ and, therefore, $X$ and $Y$ are $C^r$ conjugate, and so $f$ and $g$ as well.
\end{proof}
\vspace{0.1cm}
To prove Proposition \ref{p:takens-fini}, we first need a version of Lemma \ref{l:reduc} for large derivatives.
\vspace{0.1cm}
\begin{lem}\label{l:reduc2}
Let $\ell \geq 1$ and $r \geq 1$, and let $Y$ be a germ of $C^{\ell + r}$ vector field admitting a Taylor series expansion of order $\ell + r$ about the origin that starts with $\alpha x^{\ell+1}$, where $\alpha > 0$. If $r \ge \ell+1$, then $Y$ is smoothly conjugate to a vector field of the form $x\mapsto x^{\ell+1}+\mu x^{2\ell+1}+o(x^{\ell + r})$.
\end{lem}
\begin{proof}
Thanks to Lemma \ref{l:reduc}, we can start with $Y$ of the form $Y(x) = x^{\ell +1}+ o(x^{2\ell})$, which can be rewritten in the form $Y(x) = x^{\ell+1}+\mu x^{2\ell+1}+ o(x^{2\ell+1})$ since $Y$ is of class $C^{2\ell+1}$. One then constructs conjugate vector fields $Y_\ell :=Y, \ldots , Y_{r-1}$ such that each $Y_s$ is of the form $x\mapsto x^{\ell+1}+\mu x^{2\ell+1}+ o(x^{\ell +s+1})$. The construction is made by induction by letting $Y_{s+1}=h^* Y_s$, with $h(x)=x+ax^{s+2}$ for some well-chosen $a$. Details are left to the reader.
\end{proof}
\vspace{0.1cm}
\begin{proof}[Proof of Proposition \ref{p:takens-fini}]
Thanks to Lemma \ref{l:reduc2}, we can start with a vector field $X$ globally defined on $\mathbb{R}_+$ that has the form
$$X(x) = x^{\ell+1}+\mu x^{2\ell+1}+o(x^{\ell + r}).$$
We then define the function $\epsilon$ by the equality
$$X(x) = x^{\ell +1}+\mu x^{2\ell +1}+x^{2\ell + 1}\epsilon(x).$$
We want to prove that $X$ is $C^{r}$ conjugate to $X_0 (x) := x^{\ell+1}+\mu x^{2\ell + 1}$. For $y\in[0,1]$, let $X_y (x) := X_0(x)+yx^{2\ell + 1}\epsilon(x)$, so that $X_1=X$. Let $Y$ be the horizontal $C^{\ell + r}$ vector field on $S := \R_+\times[0,1]$ defined by $Y(x,y)=X_y(x) \, \partial_x + 0 \cdot \partial_y$. \medskip
We claim that it suffices to find a $C^{r}$ vector field $Z$ on $S$ of the form $(x,y)\mapsto K(x,y) \, \partial_x + \partial_y$, with $K$ vanishing on $\{0\}\times[0,1]$ and $[Z,Y]=0$ near $\{0\}\times[0,1]$. To show this, denote by $\phi_Z^t$ the flow of such a $Z$. Clearly, $\phi_Z^1 |_{\R_+\times \{0\}}$ is of the form $(x,0)\mapsto (\varphi(x),1)$ for some $C^{r}$ diffeomorphism $\varphi$ of $\R_+$. We claim that $\varphi_* X_0 = X_1$. Indeed, the equality $[Z,Y]=0$ implies that the flows of $Z$ and $Y$ commute near the ``vertical'' segment $\{0\}\times [0,1]$. Let $\mathcal{U}$ be a set of the form $\{\phi_Z^t(x,0): (x,t)\in [0,\eta] \times [0,1]\}$ where these flows commute. Then, if $f_0$ and $f_1$ denote the time-$1$ maps of $X_0$ and $X_1$, respectively, for every $x\in f_0^{-1}([0,\eta])$,
$$(\varphi(f_0(x)),1)=\phi_Z^1(f_0(x),0) = \phi_Z^1(\phi_Y^1(x,0)) = \phi_Y^1(\phi_Z^1(x,0))= \phi_Y^1(\varphi(x),1) = (f_1(\varphi(x)),1).$$
Therefore, $\varphi$ conjugates $f_0$ to $f_1$, and thus $X_0$ to $X_1$ (by the uniqueness of the vector fields associated to diffeomorphisms), as required. \medskip
We are thus reduced to proving the existence of a $Z$ as above. Let us leave aside the questions of regularity for a while and assume that all involved functions are $C^\infty$. In this case, according for example to \cite[Proposition 2.3]{takens}, the function $\epsilon$ can be assumed to be of the form $x\mapsto x \, \delta(x)$, with $\delta$ smooth at $0$. We look for $Z$ of the form
$$Z(x,y)=x^{\ell + 1}H(x,y)\partial_x+\partial_y.$$
Let $F(x) := 1+\mu x^\ell$, so that $X_0(x)=x^{\ell + 1}F(x)$. If $(Y_x,Y_y)$ and $(Z_x,Z_y)$ denote the coordinates of $Y$ and $Z$, respectively, then we have $[Z,Y] = \psi(x,y)\partial_x+0\cdot\partial_y$, with
\begin{eqnarray*}
\psi(x,y)
&=& \frac{\partial Y_x}{\partial x}Z_x - \frac{\partial Z_x}{\partial x}Y_x+\frac{\partial Y_x}{\partial y}Z_y \\
&=& \left[(\ell+1)x^\ell F(x) + (2\ell + 1)x^{2\ell}y\epsilon (x)+yx^{2\ell +1} D\epsilon(x)\right]\times x^{\ell +1}H(x,y) \\
&\qquad \,\, -& \left((\ell+1)x^{\ell}H(x,y)+x^{\ell +1}\frac{\partial H}{\partial x}(x,y)\right)\times\left(x^{\ell + 1}F(x)+x^{2\ell + 1}y\epsilon (x)\right)
\, + \, x^{2\ell + 1}\epsilon (x)\\
&=& x^{2\ell + 2}\left(\frac{\partial H}{\partial x}(x,y)(F(x)+x^\ell y\epsilon (x))
+H(x,y)y\left( \ell x^{\ell -1}\epsilon (x) + x^\ell D \epsilon(x)\right)+\delta(x)\right)
\end{eqnarray*}
(the terms involving $x^{2\ell +1}$ cancel). We hence need to solve an equation of the form
\begin{equation}
\label{e:EDL}
a(x,y) \frac{\partial H}{\partial x}(x,y)+b(x,y)H(x,y)+\delta(x)=0,
\end{equation}
with
$$a(x,y) := F(x)+x^\ell y\epsilon (x),
\qquad b(x,y) := y \, D(x^{\ell} \epsilon(x)),
\qquad \delta (x) := \frac{\epsilon(x)}{x}.$$
This is a family of linear differential equations with variable $x$ and parameter $y$, with $a$ nonvanishing near $\{0\}\times [0,1]$ and $a$, $b$ and $\delta$ smooth (for now). The resolution of such equations ensures the existence of a smooth solution $H$ (which is non unique, because of the choice of integration constants), which concludes the proof in the smooth case. \medskip
Let us now concentrate on our case, where $\ell \geq 1$, $r\ge \ell+1$ and $X$ is only $C^{\ell+r}$ (with $(2\ell + 2)$ derivatives at $0$ if $r = \ell + 1$). We want to check that, in this case, there exists $H$ satisfying (\ref{e:EDL}) such that $(x,y)\mapsto x^{\ell+1}H(x,y)$ is $C^{r}$.
\medskip
The existence of an $H$ satisfying (\ref{e:EDL}) that is (smooth in $y$ and) $C^1$ in $x$ follows from the fact that $a$, $b$ and $\delta$ above are (smooth in $y$ and) continuous in $x$, as we now check. Since we already know that $\epsilon$ is as regular as $X$ away from $0$, that is, $C^{\ell + r}$, we are left with checking that $x\mapsto x^\ell\epsilon(x)$ and $x\mapsto \frac{\epsilon(x)}x$ extend respectively to a $C^1$ and a $C^0$ function near $0$. The first point follows from Lemma \ref{l:taylor} applied to $u(x):=x^{2\ell+1}\epsilon(x)$, $k=\ell+r$ and $m=\ell$ (which actually shows that $x\mapsto x^\ell\epsilon(x)$ is $C^{r-1}$ with $r-1\ge \ell\ge 1$). For the second point, if $r\ge \ell+2$, we know that
$$x^{2\ell + 1} \epsilon(x) = o(x^{\ell+r})$$
and $\ell+r\ge 2\ell+2$, hence by dividing each term by $x^{2\ell + 1}$ we get $\epsilon(x)=o(x)$, so $\delta$ indeed extends as a continuous function at $0$. If $r=\ell+1$, this is precisely where we use the additional assumption that $X$ is $(2\ell+2)$ times differentiable at $0$, which by Lemma \ref{l:reduc2} implies that one can start with $X$ of the form
$$X(x) = x^{\ell+1}+\mu x^{2\ell+1}+o(x^{2\ell+2}),$$
hence $x^{2\ell+1}\epsilon(x)=o(x^{2\ell+2})$ and thus again $\epsilon(x)=o(x)$. \medskip
To conclude the proof, we finally need to show that $(x,y)\mapsto x^{\ell+1} H(x,y)$ is $C^r$. To do this, we will actually prove by induction on $s \in [\![0,\ell + 1]\!]$ that $H_s \!: (x,y)\mapsto x^sH(x,y)$ is $C^{r-\ell+s-1}$. \medskip
\noindent \emph{Case $s=0$}. We have already given part of the ingredients for this initial case. If $r=\ell+1$, then $r-\ell+0-1=0$, and we already know that $H_0=H$ is continuous.
Now consider $r\ge \ell+2$. The theory of linear differential equations tells us that $H=H_0$ has one degree of differentiability more than the coefficients of the equation, so it suffices to check that these are $C^{r-\ell-2}$. We have already seen that $a$ and $b$ in \eqref{e:EDL} were respectively of class $C^{r-1}$ and $C^{r-2}$, so \emph{a fortiori} $C^{r-\ell-1}$ since $\ell\ge 1$. Now applying Lemma \ref{l:taylor} to $u(x):=x^{2\ell+1}\epsilon(x)$, $k=\ell+r$ and $m=2\ell$, we get that $\epsilon$ is $C^{r-\ell-1}$. Moreover, since $x^{2\ell + 1} \epsilon(x) = o(x^{\ell+r})$, we have $\epsilon(x)=o(x^{r-\ell-1})$, so $\epsilon$ is $(r-\ell-1)$-flat. Applying Lemma \ref{l:taylor} again but this time to $u:=\epsilon$, $k=r-\ell-1\ge 1$ and $m=0$, we get that $\delta:x\mapsto \frac{\epsilon(x) }x$ is $C^{r-\ell-2}$, as required.\medskip
\noindent \emph{Inductive step}. Assume now that the induction hypothesis is true for some $0 \le s \le \ell$. Then
$$\frac{\partial H_{s+1}}{\partial x}(x,y) = (s+1)x^s H(x,y) + x^{s+1}\frac{\partial H}{\partial x}(x,y).$$
The first term of the right hand side is $C^{r-\ell+s-1}$ by the induction hypothesis, and the second is equal to
$$
x^{s+1}\left(-\frac{b(x,y)}{a(x,y)}H(x,y) -\frac{\delta(x)}{a(x,y)}\right)
= -\frac{x\cdot b(x,y)}{a(x,y)}H_s (x,y) -\frac{x^{s+1}\delta(x)}{a(x,y)}.
$$
Since $a$ is $C^{r-1}$ and thus $C^{r-\ell+s-1}$, we are left with proving that the numerators of the fractions, namely
$$y (\ell x^\ell\epsilon (x)+x^{\ell+1} D\epsilon(x)) \qquad \text{and}\qquad x^s \epsilon (x),$$
are $C^{r-\ell+s-1}$. For the second one, this follows from Lemma \ref{l:taylor} still applied to $u(x):=x^{2\ell+1}\epsilon(x)$ and $k=\ell+r$ but this time with $m=2\ell-s\le 2\ell<k$. A last application with $m=r-2<k$ shows that $x\mapsto x^{\ell+1}\epsilon(x)$ is $C^{\ell+r-m-1}=C^{r+1}$, so that the first numerator is $C^r$ and \emph{a fortiori} $C^{r-\ell+s-1}$ since $s\le \ell$. This concludes the induction and thus the proof.
\end{proof}
\begin{rem}
\label{r:final}
Proposition \ref{p:takens-fini} implies Takens' normal form result: if $X$ is a $C^{\infty}$ expanding vector field that is exactly $\ell$-flat for some $\ell \geq 1$, then $X$ is $C^{\infty}$ conjugate to a vector field of the form $x \mapsto x^{\ell+1} + \mu \, x^{2\ell +1}$ for a unique $\mu$. Indeed, we have shown the existence of such a $C^r$ conjugacy $\varphi_r$ for each $r \geq \ell + 2$. However, the conjugacy is unique up to composition with a member of the flow of $X$, which is a $C^{\infty}$ diffeomorphism. Therefore, $\varphi_r$ and $\varphi_{r+1}$ differ by the composition of a $C^{\infty}$ diffeomorphism, which easily implies that all the $\varphi_r$ are actually $C^{\infty}$.
An analogous result holds for diffeomorphisms: any two $C^{\infty}$ germs that are exactly $\ell$-flat, both expanding or both contracting, and have the same iterative residue, are $C^{\infty}$ conjugate. Rather surprisingly, Takens' proof of this fact passes through the famous Borel Lemma on the realization of sequences of numbers as derivatives at the origin of $C^{\infty}$ germs, while our argument above avoids this.
\end{rem}
\section{Residues and distortion}
\label{section-last-question}
Recall that an element $g$ of a finitely-generated group is said to be {\em distorted} if
$g^n$ may be written as a product of $o(n)$ factors among the generators and their inverses. (This definition does not depend on the chosen generating system.) An element of a general group is a {\em distortion element} if it is distorted inside some finitely-generated subgroup. The next question is inspired by \cite{navas}:
\begin{qs}
What are the distortion elements of the group $\mathrm{Diff}^{\omega}_+ (\mathbb{R},0)$
of germs of (orientation-preserving) real-analytic diffeomorphisms fixing the origin~?
\end{qs}
An example of distortion element in the group of germs above is
$$g (x) := \frac{x}{1+x} = x - x^2 + x^3 - x^4 +\ldots$$
Indeed, letting $h (x) = \frac{x}{2}$, one has
\begin{equation}\label{eq:affine-flow}
h \, g \, h^{-1} = g^2,
\end{equation}
which easily yields $g^{2^m} = h^m g h^{-m}$ for all $m \geq 1$. Using this, it is not hard to conclude that $g^n$ may be written as a product of $O(\log(n))$ factors $g^{\pm 1}, h^{\pm 1}$. The case of the map below is particularly challenging.
\begin{qs}
Is the germ $f(x) := x - x^2$ a distortion element of $\mathrm{Diff}^{\omega}_+ (\mathbb{R},0)$~?
\end{qs}
Observe that $f$ is a distortion element of the much larger group $\mathrm{Diff}^1_+(\mathbb{R},0)$. Indeed, by Theorem \ref{t:B}, it is $C^1$ conjugate to $g$, which is a distortion element. In fact, this is a particular case of the much more general result below.
\begin{prop}
Every (nontrivial) parabolic germ of real-analytic diffeomorphism of the line fixing the origin is a distortion element of the group $\mathrm{Diff}^1_+(\mathbb{R},0)$.
\end{prop}
\begin{proof}
Let $f$ be such a germ and $\ell$ its order of contact with the identity. By Proposition \ref{p:Takens},
$f$ is $C^1$ conjugate to a germ $g$ of the form
$$g(x) := \frac{x}{\sqrt[\ell]{1 \pm x^{\ell}}} = x \mp \frac{x^{\ell +1}}{\ell} + \ldots .$$
We are hence left with showing that $g$ is a distortion element. But as above, this follows from the relation
\, $h_\ell \, g \, h_\ell^{-1} = g^2$, \, where $h_\ell (x) := \frac{x}{2^{{1/\ell}}}$.
\end{proof}
We do not know whether the germ $f$ above is a distortion element of the smaller group $\mathrm{Diff}^2_+(\mathbb{R},0)$. A first difficulty of this question lies in the fact that one cannot deduce distortion from a relation of type (\ref{eq:affine-flow}), because of the nonvanishing of the iterative residue of $f$, as proven below.
\begin{prop}
Let $g$ be a germ of $C^3$ diffeomorphism fixing the origin that is exactly $1$-tangent to the identity. If $g$ is $C^2$ conjugate to some element $g^t \in \mathrm{Diff}_+^3 (\mathbb{R},0)$ of its flow,
with $t \neq 1$, then $\mathrm{Resit}(g)$ vanishes.
\end{prop}
\begin{proof}
Since $\mathrm{Resit} (g)$ is invariant under $C^2$ conjugacy, by (\ref{eq:resit-flow})
(see also Remark (\ref{rem:resit-flow})), one has
$$\mathrm{Resit} (g) = \mathrm{Resit} (g^t) = \frac{\mathrm{Resit}(g)}{|t|}.$$
This easily implies $\mathrm{Resit} (g) = 0$ whenever $|t| \neq 1$. The case $t = -1$ is impossible since
a contracting diffeomorphism cannot be conjugate to an expanding one.
\end{proof}
To finish, let us mention that we do not even know whether the germ $f$ above is a distortion element of the group $\widehat{\mathrm{Diff}} (\mathbb{R},0)$ of formal germs of diffeomorphisms. More generally, the next question seems challenging:
\begin{qs}
Given a field $\mathbb{K}$, what are the distortion elements of $\widehat{\mathrm{Diff}}(\mathbb{K},0)$~?
What about $\widehat{\mathrm{Diff}} (\mathbb{Z},0)$~?
\end{qs}
\vspace{0.3cm}
\noindent{\bf Acknowledgments.}
H\'el\`ene Eynard-Bontemps was funded by the IRGA project ADMIN of UGA and the CNRS in the context of a d\'el\'egation. She would like to thank CMM\,/\,U. de Chile for the hospitality during this
period and Michele Triestino for inspiring discussions related to this work. Andr\'es Navas was funded by Fondecyt Research Project 1220032, and would like to thank Adolfo Guillot, Jan Kiwi and Mario Ponce for useful discussions and insight concerning residues of parabolic germs in the complex setting.
\vspace{0.5cm}
\begin{small}
|
{
"arxiv_id": "2302.13282",
"language": "en",
"timestamp": "2023-02-28T02:15:09",
"url": "https://arxiv.org/abs/2302.13282",
"yymm": "2302"
} | \section{introduction}
Laser ablation is widely employed in industry as a method for laser processing (cutting, drilling), pulsed laser deposition,~\cite{Watanabe_1998,Yoshitake_2000} and nanoparticle production.~\cite{Fojtik_1993,Neddersen_1993}
The physical mechanism of laser ablation, especially with ultrashort-pulse lasers (fs laser), has attracted attention in science and industry~\cite{Kobayashi_2020,Kobayashi_2021} because it involves remarkable phenomena that cannot be observed with long-pulse lasers, such as almost no thermal damage,~\cite{Chichkov_1996, Shaheen_2013} depth of less than 1 nm,~\cite{Hashida_1999,Hashida_2002,Miyasaka_2012} and emission of high-energy ions.~\cite{Miyasaka_2012,Hashida_2010,Dachraoui_2006,Dachraoui_2006_2}
This ablation, which cannot be explained under the assumption of thermal equilibrium, is referred to as non-thermal ablation, whose effects have been reported to be dominant near the ablation threshold fluence.~\cite{Hashida_2002,Miyasaka_2012,Hashida_2010,Chichkov_1996,Shaheen_2013,Momma_1996}
Although tremendous efforts using both experimental and theoretical approaches have been devoted to elucidating the physical mechanism of the non-thermal ablation of metals, discrepancies exist between experiments and previous theoretical simulations.
Molecular dynamics (MD) simulation is a powerful computational tool to elucidate the microscopic mechanisms of metal ablation.
To date, MD simulation of laser ablation in the low-laser-fluence region has been reported for several metals [aluminum (Al),~\cite{Wu_2013} silver (Ag),~\cite{Ji_2017} copper (Cu),~\cite{Foumani_2018,Schafer_2002} gold (Au),~\cite{Zhiglei_2003} nickel (Ni),~\cite{Zhiglei_2003} and platinum (Pt)~\cite{Rouleau_2014}].
These calculation results proposed the following explanation for the ablation mechanisms of metals caused by irradiation with ultrashort laser pulse with low laser fluence.
With irradiation by ultrashort laser pulse near the ablation threshold fluence, the laser-deposited energy raises the surface temperature so that the surface starts to expand and begins to melt.
Subsequently, tensile stress occurs near the surface region, and as a result, a molten surface layer is spalled, whose thickness is more than $10\,\text{nm}$.~\cite{Zhigilei_2009,Ji_2017,Foumani_2018,Schafer_2002,Gan_2009,Zhiglei_2003}
This ablation process is called spallation, and has been observed in experiments.~\cite{Linde_1997,Sokolowski_1998,Sokolowski2_1998,Libde_2000,Rouleau_2014}
As the laser fluence increases, the thickness of the spalled layer decreases, and eventually small clusters and atoms are emitted from the overheated surface.~\cite{Wu_2013}
This ablation process is called phase explosion, and the main cause of this process is considered to be the thermodynamic instability of the overheated surface.~\cite{Miotello_1999,Miotello_2001}
Previous MD simulation studies have argued that isolated atoms are not emitted with irradiation by ultrashort laser pulse near the ablation threshold, and that the phenomenon that occurs near the ablation threshold is spallation.
This means that these explanations have a fatal problem in describing the metal ablation induced by ultrashort laser pulses near the ablation threshold fluence, since the emission of high-energy ions and sub-nanometer depth ablation have been experimentally observed in this fluence region.~\cite{Hashida_1999,Hashida_2002,Miyasaka_2012,Hashida_2010,Dachraoui_2006,Dachraoui_2006_2}
This discrepancy between MD simulations and experiments is considered to come from physical mechanisms missing in previous MD simulations, where the force acting on the atoms is assumed to be unchanged even in a highly excited, laser-irradiated system.
Based on this consideration, some physical mechanisms have been proposed to explain the process of the non-thermal ablation of metals.~\cite{Tao_2014, Li_2015,Norman_2012,Norman_2013,Stegailov_2015,Stegailov_2016,Ilnitsky_2016,Miyasaka_2012,Dachraoui_2006,Dachraoui_2006_2}
One of the most famous is the Coulomb explosion (CE), which has been experimentally verified in the case of a semiconductor,~\cite{Zhao_2013} an insulator,~\cite{Stoian_2000,Stoian_2000_2} and a molecular system.~\cite{Sato_2008}
CE describes the physical mechanism of non-thermal ablation as follows.
Under intense laser irradiation, electrons are emitted from a laser-irradiated surface due to the photoelectric effect and/or the thermionic emission process so that strong Coulomb interaction occurs between positively charged ions at the ionized surface.
Hence, when the Coulomb interaction is strong enough to overcome the bonding forces between these ions, they are emitted from the surface.
If CE plays a dominant role in the laser ablation process, the peak velocity of the emitted ions is scaled by the valence of the emitted ions, which has been observed by time-of-flight experiments in a semiconductor,~\cite{Zhao_2013} insulator,~\cite{Stoian_2000} and a molecular system.~\cite{Sato_2008}
These observations have been regarded as conclusive evidence of CE in these materials.
On the other hand, the peak velocity of the emitted Cu ions is not scaled by the valence of the ions.~\cite{Zhao_2013}
In addition, another experimental result~\cite{Li_2011} showed that the electric field created near the surface by the laser irradiation is shielded within the duration of the probe pulse ($200\,\text{fs}$); this fast electrostatic shielding is expected, because the inverse of the plasma frequency in bulk Cu is very short ($< 1\,\text{fs}$).
This supports the conclusion, based on a continuum model (CM) calculation,~\cite{Lin_2012} that the electric field near the surface due to electron emission is shielded by high-mobility electrons in the bulk metal before CE can occur.
Hence, the validity of the CE in metals is questionable.
Besides CE, other possible origins of the non-thermal ablation of metals have been proposed, for example, the kinetic energy of free electrons and changes in the charge distribution.~\cite{Norman_2012,Norman_2013,Stegailov_2015,Stegailov_2016,Ilnitsky_2016}
However, the validity of these explanations is still under debate.
Recently, we have shown by finite-temperature density functional theory (FTDFT) calculations that the laser-irradiated bulk metal becomes unstable due to the electronic entropy effect.~\cite{Tanaka_2018}
This result suggests that the non-thermal ablation of metals is induced by the electronic entropy effect, and this explanation for the non-thermal ablation of metals is called the electronic entropy-driven (EED) mechanism.~\cite{Tanaka_2018}
Based on the EED mechanism, we have developed a CM where the well-known two-temperature model (TTM)~\cite{Anisimov_1973} and the electronic entropy effect are incorporated, and succeeded in quantitatively describing the experimental ablation depth~\cite{Colombier_2005, Nielsen_2010} in the low-laser-fluence region.
To further discuss the validity of the EED mechanism and investigate the effect of electronic entropy on the non-thermal ablation of metals, MD simulation is preferred over CM simulation since it can directly describe atom emission and sub-nanometer scale ablation, which are characteristic of non-thermal ablation.~\cite{Hashida_1999,Hashida_2002,Miyasaka_2012,Hashida_2010,Dachraoui_2006,Dachraoui_2006_2}
Previously, to elucidate the microscopic mechanism of laser-irradiated metals, a two-temperature model combined with molecular dynamics (TTM-MD) scheme has been employed.~\cite{Murphy_2015,Murphy_2016,Daraszewicz_2013}
However, the previous TTM-MD scheme is not appropriate for a system in which the electronic entropy makes a large contribution, and to our knowledge, there have been no TTM-MD schemes that satisfy the law of conservation of energy in such a system.
Therefore, to carry out reliable TTM-MD simulation of the non-thermal ablation of metals, where the electronic entropy effect is proposed to be large,~\cite{Tanaka_2018} it is necessary to develop a new TTM-MD scheme.
The purpose of this study was twofold.
The first was to develop a TTM-MD scheme that satisfies the law of conservation of energy even in a system where electronic entropy effects make a dominant contribution.
The second was to bridge the discrepancy between experiment and previous theoretical simulations regarding the non-thermal ablation of metals by elucidating the effect of electronic entropy on these phenomena.
The outline of this paper is as follows.
In Sec.~\ref{sec:method}, the developed TTM-MD scheme and computational details are explained.
Owing to this development, the TTM-MD simulation can be performed while satisfying the law of conservation of energy even in a system where electronic entropy effects are large.
In Sec.~\ref{sec:results}, it is firstly shown that the law of conservation of energy is satisfied with reasonable accuracy in the developed TTM-MD simulation.
Subsequently, calculation results for the ultrashort-pulse laser ablation of a Cu film using the developed TTM-MD simulation are exhibited.
Here, the microscopic mechanisms of the metal ablation and the effect of the electronic entropy are investigated.
To confirm the validity of the TTM-MD simulation and the EED mechanism, the ablation depth in the TTM-MD simulation is compared with previous calculations~\cite{Tanaka_2018} and experimental data.~\cite{Colombier_2005,Nielsen_2010}
Finally, a brief conclusion is provided in Sec.~\ref{sec:conc}.
\section{Calculation Methods}
\label{sec:method}
\subsection{Two-temperature model (TTM)}
\label{sec:TTM}
\begin{figure}[b]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=7cm]{timescale.eps}
\end{center}
\end{minipage} \\
\end{tabular}
\caption{Schematic image of the main concept of the TTM and the time development of $T_e$ and $T_l$.
}
\label{fig: timescale}
\end{center}
\end{figure}
Fig.~\ref{fig: timescale} represents a schematic image of the two-temperature model (TTM),~\cite{Anisimov_1973} which has been widely used to describe laser-irradiated systems.~\cite{Daraszewicz_2013,Ernstorfer_2009,Giret_2011,Norman_2012,Norman_2013,Recoules_2006,Inogamov_2012,Wang_2018,Murphy_2015,Murphy_2016}
Ultrashort-pulse laser irradiation of a metal surface changes the electron subsystem (ES) from the ground state into excited states by the absorption of single or multiple photons.
The ES is thermalized to the Fermi-Dirac distribution with the electron temperature $T_e$ via the electron-electron (el.-el.) interaction, of which the scattering time $\tau_{ee}$ is approximately $10 \mathchar`-100\,\text{fs}$ in metals.~\cite{Mueller_2013, Brown_2016_2}
In this time scale, the ES and the lattice subsystem (LS) do not reach equilibrium with each other, so $T_e$ is higher than the lattice temperature $T_l$.
Ordinarily, the maximum $T_e$ reaches values more than 10 times higher than the final equilibrium temperature ($T_e \approx T_l$), since the electronic heat capacity is much smaller than that of the lattice.
$T_l$ begins to increase by energy transfer from the ES via electron-phonon (el.-ph.) scattering, for which the relaxation time $\tau_{el}$ is larger than several picoseconds.~\cite{Schoenlein_1987,Elsayed-Ali_1987,Elsayed-Ali_1991,Hohlfeld_2000}
Therefore, under the assumption of the instantaneous and local thermalization in the ES and the LS,
the ultrashort-laser-irradiated metals can be described by $T_e > T_l$ before $\tau_{el}$.
This explanation is the main concept of the TTM.
Based on the TTM, many previous studies~\cite{Daraszewicz_2013,Ernstorfer_2009,Giret_2011,Norman_2012,Norman_2013,Recoules_2006,Inogamov_2012,Wang_2018} have been successful in describing experimental data.
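As a minimal numerical illustration of this picture (not of the full scheme of Sec.~\ref{sec:TTM-MD}), the following Python sketch integrates a zero-dimensional version of the TTM, $C_e\,dT_e/dt=-G(T_e-T_l)+I(t)$ and $C_l\,dT_l/dt=G(T_e-T_l)$; all parameter values and the free-electron-like form $C_e=\gamma T_e$ are illustrative placeholders, not the values used in this work.
\begin{verbatim}
import numpy as np

# Minimal zero-dimensional two-temperature model (illustration only).
#   C_e dTe/dt = -G (Te - Tl) + I(t),   C_l dTl/dt = G (Te - Tl)
# All parameter values below are placeholders, not those used in this work.

def laser_source(t, I0=1.0e21, t0=0.5e-12, tau=100e-15):
    """Gaussian volumetric source term I(t) in W m^-3 (FWHM = tau)."""
    return I0 * np.exp(-4.0 * np.log(2.0) * ((t - t0) / tau) ** 2)

def run_ttm(dt=1e-15, t_end=20e-12, G=1.0e17, C_l=3.5e6, gamma=100.0):
    Te, Tl = 300.0, 300.0                      # initial temperatures (K)
    ts, Tes, Tls = [], [], []
    t = 0.0
    while t < t_end:
        C_e = gamma * Te                       # free-electron-like heat capacity
        dTe = (-G * (Te - Tl) + laser_source(t)) / C_e
        dTl = G * (Te - Tl) / C_l
        Te += dt * dTe
        Tl += dt * dTl
        ts.append(t); Tes.append(Te); Tls.append(Tl)
        t += dt
    return np.array(ts), np.array(Tes), np.array(Tls)

if __name__ == "__main__":
    t, Te, Tl = run_ttm()
    print(f"peak Te = {Te.max():.0f} K, final |Te - Tl| = {abs(Te[-1] - Tl[-1]):.1f} K")
\end{verbatim}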
\subsection{Two-temperature model combined with molecular dynamics (TTM-MD) scheme}
\label{sec:TTM-MD}
Here, we explain a newly developed calculation scheme for simulating the atom dynamics of metal ablation caused by irradiation with an ultrashort-pulse laser.
In the scheme, the MD scheme is hybridized with the TTM scheme to express the non-equilibrium state between the ES and the LS.
To decrease the computational cost of large-scale atomistic simulations, the CM is partly employed in LS as well as in ES.
Fig.~\ref{fig:TTM-MD} represents a schematic image of this calculation scheme.
Hereafter in this paper, this calculation scheme is called the TTM-MD scheme.
In the TTM-MD scheme, electronic effects, such as a highly excited ES near the surface, electronic thermal diffusion, electron-phonon scattering, and energy absorption due to the electronic entropy effect, are incorporated into the MD simulation through the TTM.
In the TTM-MD simulation, atom dynamics are calculated based on the MD scheme, and at the same time, other time developments such as that of $T_e$ are calculated by employing the TTM.
For reduction of computational cost, the CM is also used to calculate the time development of $T_l$ deep inside the Cu film (Region 2 in Fig.~\ref{fig:TTM-MD}).
The motion of individual atoms in this region does not play a dominant role in the atom dynamics of laser ablation, so only the time development of $T_e$ and $T_l$ is calculated in this region.
This region is called the CM region of LS and plays an important role in the thermal dissipation of the energy deposited by laser irradiation.
On the other hand, the region near the surface in which atoms exist is called the MD region of LS (Region 1 in Fig.~\ref{fig:TTM-MD}).
With the volume change due to expansion or ablation, the position of the surface and the CM region change during simulation.
Periodic boundary conditions are used in the $x$-axis and $y$-axis directions in Fig.~\ref{fig:TTM-MD}.
The free boundary condition is employed between the CM and the MD regions of LS.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=8cm]{TTM-MD2.eps}
\hspace{0.6cm}
\end{center}
\end{minipage} \\
\end{tabular}
\caption{Schematic image of TTM-MD scheme used to simulate the laser-irradiated Cu film.
The laser comes from the left side of the figure.
The local electron temperature $T_e^n$ and the local lattice temperature $T_l^n$ are defined for the $n$-th 3D cell (in dotted region).
Periodic boundary conditions are used in the $x$, $y$ directions (parallel to the surface).
The free boundary condition is used at the bottom of the MD region of LS (Region 1).
In the MD region of LS, the atomic dynamics are calculated using MD simulation.
On the other hand, to reduce calculation cost, the time development of $T_l^n$ in the CM region of LS (Region 2) is calculated using the CM.
The time development of all $T_e^n$ is calculated using the CM.
}
\label{fig:TTM-MD}
\end{center}
\end{figure}
The local electron temperature $T_e^n$ and the local lattice temperature $T_l^n$ are defined in three-dimensional (3D) cells, where $n$ is the index of the 3D cells.
A region surrounded by dotted lines in Fig.~\ref{fig:TTM-MD} represents one of the 3D cells.
Although $T_l^n$ is referred to as the local ``lattice'' temperature, we do not imply that a crystalline structure is assumed in the 3D cells.
In other words, $T_l^n$ represents not only the lattice temperature but also the temperature of the atoms.
Besides, it is noted that $T_l^n$ of the MD region of LS represents the instantaneous temperature of atoms.
The time development of $T_e^n$ is calculated by solving the following nonlinear differential equation:
\begin{align}
C_e^n \frac{dT_e^n}{dt} = {\nabla } \cdot & (\kappa _e^n {\nabla }T_e^n) - G^n (T_e^n - T_l^n) \notag \\
& - {\sum}_i^{N^n} \bm{v}_i \frac{{\partial}}{{\partial}\bm{r}_i} \left( S^nT_e^n \right) + I^n, \label{eq:electron}
\end{align}
where $\bm{r}_i$ and $\bm{v}_i$ are the position and the velocity of atom $i$ in the $n$-th cell, respectively.
$C_e^n$ is the electronic heat capacity,
$\kappa _e^n$ is the electronic thermal conductivity,
$G^n$ is the electron-phonon heat transfer constant,
$N^n$ is the number of atoms,
$S^n$ is the electronic entropy,
and $I^n$ is the energy deposited by laser irradiation at each $n$-th 3D cell.
These quantities are calculated at each 3D cell by the following equations:
\begin{subequations}
\begin{align}
C_e^n &= \frac{N^n}{N_0} C_e (T_e^n), \label{eq:Cen} \\
G^n &= \frac{N^n}{N_0} G, \label{eq:Gn} \\
\kappa _e^n &= \frac{N^n}{N_0} \kappa _e (T_e^n, T_l^n), \label{eq:kappan} \\
I^n &= \frac{N^n}{N_0} I (T_e^n,T_l^n), \label{eq:depo} \\
N_0 &=\rho _0 V_c. \label{eq:numn}
\end{align}
\end{subequations}
Here, $\rho _0$ is the bulk density in the equilibrium states and $V_c$ is the volume of each 3D cell.
$C_e (T_e^n)$, $G$, $\kappa_e (T_e^n,T_l^n)$, and $I(T_e^n,T_l^n)$ represent each physical property per unit volume.
The values for these properties are the same as those used in the previous study,~\cite{Tanaka_2018} whose details are explained in the Supplemental Material.
The third term on the right-hand side of Eq.~(\ref{eq:electron}) represents the absorption of energy by the electronic entropy.
The derivation of Eq.~(\ref{eq:electron}) based on the law of conservation of energy is explained in Sec.~\ref{sec:convergedquantity}.
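Since $T_e^n$ enters that term only as a parameter, it can equivalently be written as
$$-\sum_i^{N^n}\bm{v}_i\cdot\frac{\partial}{\partial \bm{r}_i}\bigl(S^nT_e^n\bigr) = -\,T_e^n\,\frac{dS^n}{dt}\bigg|_{\text{atomic motion}},$$
i.e., it is the rate at which heat $T_e\,dS$ is drawn from (or returned to) the ES as the atoms move on the free-energy surface.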
$T_l^n$ in the CM and the MD regions of LS is calculated by solving Eqs.~(\ref{eq:lattice}) and~(\ref{eq:instantaneoustemp}), respectively:
\begin{subequations}
\begin{align}
C_l^n \frac{\text{d}T_l^n}{\text{d}t} &= G^n(T_e^n - T_l^n), \label{eq:lattice} \\
T_l^n &= \frac{1}{3k_BN^n} \sum_i^{N^n} m\left(\bm{v}_i - \bm{v}^{n}_c\right)^2. \label{eq:instantaneoustemp}
\end{align}
\end{subequations}
Here, $k_B$ is the Boltzmann constant, $\bm{v}_c^n$ is the average velocity of atoms (center-of-mass velocity) in the $n$-th 3D cell, while $C_l^n= \frac{N^n}{N_0} C_l $ is the lattice heat capacity in the $n$-th 3D cell.
$C_l $ is the lattice heat capacity per unit volume, and details are also explained in Supplemental Material.
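As an illustration of Eq.~(\ref{eq:instantaneoustemp}), a minimal Python sketch of the instantaneous cell temperature evaluated from the atomic velocities is given below; the array shapes, the toy Maxwellian input, and the inclusion of the atomic mass via equipartition are illustrative assumptions.
\begin{verbatim}
import numpy as np

K_B = 1.380649e-23                                   # Boltzmann constant (J/K)

def cell_temperature(velocities, mass):
    """Instantaneous temperature of one 3D cell from the peculiar velocities,
    T_l^n = (1 / (3 k_B N^n)) sum_i m |v_i - v_c^n|^2  (equipartition)."""
    v_c = velocities.mean(axis=0)                    # center-of-mass velocity v_c^n
    dv = velocities - v_c
    return mass * np.sum(dv * dv) / (3.0 * K_B * len(velocities))

# toy usage: 500 Cu atoms with Maxwellian velocities at about 300 K
m_cu = 63.546 * 1.66054e-27                          # Cu atomic mass (kg)
rng = np.random.default_rng(0)
v = rng.normal(0.0, np.sqrt(K_B * 300.0 / m_cu), size=(500, 3))
print(f"T_l of the cell: {cell_temperature(v, m_cu):.0f} K")
\end{verbatim}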
The atomic dynamics in the MD region of LS are calculated by solving the following equations:
\begin{subequations}
\begin{align}
\frac{d \bm{r}_i}{dt}&=\bm{v}_i, \label{eq:motionx1} \\
m \frac{d \bm{v}_i}{dt}&=-\frac{{\partial}F^n}{{\partial}\bm{r}_i} - m {\xi}^n \bm{v}_i. \label{eq:motionv1}
\end{align}
\end{subequations}
Here, $m$ is the mass of an atom and ${\xi}^n$ is a coefficient that represents the force deriving from the electron-phonon interaction.
$F^n$ of Eq.~(\ref{eq:motionv1}) is the free energy of the ES in the $n$-th 3D cell, and the definition is given in the following equation:
\begin{eqnarray}
F^n = E^n -S^n T_e^n. \label{eq:def_free}
\end{eqnarray}
Here, $E^n$ represents the internal energy.
In this study, $F^n$ and $E^n$ are calculated using the $T_e$-dependent inter-atomic potential (IAP), which is based on the embedded atom method (EAM) potential.
The functional form and parameter values for the $T_e$-dependent IAP of Cu were proposed in a previous study.~\cite{Tanaka_2021}
The previous study reported that this $T_e$-dependent IAP can reproduce the FTDFT results of $T_e$-dependent physical properties, such as the volume dependence of $F^n$ and $E^n$, and the phonon dispersion.
Moreover, MD simulations using the $T_e$-dependent IAP quantitatively reproduce the results of MD simulation using FTDFT; for example, the time development of the elastic properties of nano-scale slabs and an ablation threshold $T_e$.
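In this formulation, the force in Eq.~(\ref{eq:motionv1}) splits into an internal-energy part and an entropic part: taking the gradient of Eq.~(\ref{eq:def_free}) at fixed $T_e^n$,
$$-\frac{\partial F^n}{\partial \bm{r}_i} = -\frac{\partial E^n}{\partial \bm{r}_i} + T_e^n\,\frac{\partial S^n}{\partial \bm{r}_i},$$
so that, at high $T_e$, the entropic contribution $T_e^n\,\partial S^n/\partial\bm{r}_i$ can dominate even when the internal-energy part remains attractive.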
\subsection{Law of conservation of energy }
\label{sec:convergedquantity}
In this study, the developed TTM-MD simulations were performed to investigate the microscopic mechanism of metal ablation induced by irradiation with an ultrashort-pulse laser.
Although $T_e$-dependent IAPs were used in some previous TTM-MD simulations,~\cite{Daraszewicz_2013,Murphy_2015,Murphy_2016} the law of conservation of energy is considered not to be satisfied in those schemes, for the reason explained below.
In these simulations, because the laser has too small a fluence to cause ablation, the effect of electronic entropy may not be very large and deviation from the law of conservation of energy might be negligible.
On the other hand, the electronic entropy effect is proposed to be dominant in metal ablation induced by irradiation with an ultrashort-pulse laser.~\cite{Tanaka_2018}
In this case we must carefully take the electron entropy effect into account to realize energy conservation in the TTM-MD simulation.
In the remainder of this subsection, it is shown theoretically that the law of conservation of energy is satisfied in the developed TTM-MD scheme.
First, to simplify the situation, the laser-deposited energy and the energy flow among the 3D cells are neglected.
In other words, only energy exchange between the ES and the LS in the 3D cells is considered.
In this situation, the conserved energy of the 3D cells is the internal energy: $E^n+\sum_i^{N^n}\frac{1}{2}m\bm{v}_i^2$.
The time derivative of the conserved energy can be calculated easily as
\begin{align}
\frac{d}{dt} & \left( E + \sum_i^{N} \frac{1}{2} m \bm{v}_i ^2 \right) \notag \\
& = \frac{d T_e }{d t} \frac{\partial E}{\partial T_e} + \sum_i^{N} \frac{d \bm{r}_{i}}{dt} \frac{\partial E}{{\partial } \bm{r}_{i}} + \sum_i^{N} \bm{v}_{i} \left( m \frac{d\bm{v}_i }{dt} \right) \notag\\
&= \frac{d T_e }{d t} \frac{\partial E}{\partial T_e} + \sum_i^{N} \bm{v}_{i} \frac{\partial E}{{\partial }\bm{r}_{i}} - \sum_i^{N} \bm{v}_{i} \left[ \frac{\partial (E-ST_e )}{\partial \bm{r}_i} + m\xi \bm{v}_i \right] \notag \\
& = C_e \frac{d T_e }{d t} + \sum_i^{N} \bm{v}_{i} \frac{\partial }{{\partial} \bm{r}_{i}} \left( ST_e \right) - \sum_i^{N} m \xi \bm{v}_{i}^2 . \label{eq:econv1}
\end{align}
Here, to simplify notation, the 3D cell index $n$ is omitted.
In the second equality, Eqs.~(\ref{eq:motionx1}), (\ref{eq:motionv1}), and ({\ref{eq:def_free}}) are used.
In the third equality, the definition of the electronic heat capacity $C_e(T_e) = {\partial E(T_e)}/{\partial T_e}$ is used.
Since the time derivative of the conserved quantity is 0, the following equation can be derived:
\begin{eqnarray}
C_e \frac{d T_e }{d t} = - \sum_i^{N} \bm{v}_{i} \frac{\partial}{{\partial}\bm{r}_{i}} \left( ST_e \right) + \sum_i^{N} m \xi \bm{v}^2_{i}. \label{eq:econv2}
\end{eqnarray}
The first term on the right-hand side of this equation represents the absorbed energy due to the electronic entropy and the second term is the exchange energy due to the electron-phonon interaction.
It is a fundamental assumption in the TTM that the electron-phonon interaction is represented by a single linear coupling term of the form $G^n (T_e^n - T_l ^n)$.~\cite{Hohlfeld_2000}
Previously, based on this assumption, the value of $G^n$ for Cu has been investigated by experiment~\cite{Elsayed-Ali_1987} and theoretical calculations,~\cite{Lin_2008,Migdal_2016,Petrov_2013,Migdal_2015} for which the details are explained in the Supplemental Material.
Therefore, for energy conservation with respect to the electron-phonon interaction between ES and LS, the following equation must be satisfied:
\begin{eqnarray}
\sum_i^{N^n} m \xi ^n \bm{v}_{i}^2 + G^n(T_e^n - T_l^n) = 0. \label{eq:econv4}
\end{eqnarray}
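One natural choice consistent with Eq.~(\ref{eq:econv4}) is to take, in each cell,
$$\xi^n = -\,\frac{G^n\,(T_e^n - T_l^n)}{\sum_i^{N^n} m\,\bm{v}_i^2},$$
so that the friction term $-m\xi^n\bm{v}_i$ in Eq.~(\ref{eq:motionv1}) pumps energy into the atoms while $T_e^n>T_l^n$ and damps them once $T_l^n>T_e^n$.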
Subsequently, we add to this scenario the electronic thermal diffusion energy $D^n_{\text{tot}}$ and the laser-deposited energy $I^n_{\text{tot}}$ of the $n$-th 3D cell.
The former effect can be expressed as
\begin{eqnarray}
D^n_{\text{tot}} = - \int {\nabla }\cdot(\kappa _e^n{\nabla }T_e^n) dt, \label{eq:Dtot}
\end{eqnarray}
and the latter effect can be written as
\begin{eqnarray}
I^n_{\text{tot}} = \int I^n dt. \label{eq:Itot}
\end{eqnarray}
In this situation, the conserved energy in each 3D cell is $E^n+\sum_i^{N^n}\frac{1}{2}m\bm{v}_i^2 + D^n_{\text{tot}} - I^n_{\text{tot}}$.
Therefore, the time derivative of the conserved energy can be written as
\begin{align}
\frac{d}{dt} & \left( E^n + \sum_i^{N^n} \frac{1}{2} m \bm{v}_i^2 + D^n_{\text{tot}} - I^n_{\text{tot}} \right) \notag \\
& = C_e^n \frac{d T_e^n }{d t} + \sum_i^{N^n} \bm{v}_{i} \frac{\partial}{{\partial} \bm{r}_{i}} \left( S^nT_e^n \right) - \sum_i^{N^n} m \xi ^n \bm{v}_{i}^2 \notag \\
& \hspace{10mm} - {\nabla }\cdot(\kappa _e^n {\nabla }T_e^n) - I^n \notag \\
& = C_e^n \frac{d T_e^n }{d t} + \sum_i^{N^n} \bm{v}_{i} \frac{\partial }{{\partial} \bm{r}_{i}}\left( S^nT_e^n \right) + G^n(T_e^n - T_l^n) \notag \\
& \hspace{10mm} - {\nabla }\cdot(\kappa _e^n {\nabla }T_e^n) - I^n. \label{eq:econv3}
\end{align}
Here, Eqs.~(\ref{eq:econv1}), (\ref{eq:Dtot}), and (\ref{eq:Itot}) are used in the first equality, and Eq.~(\ref{eq:econv4}) is used in the second equality.
Since the time derivative of the conserved quantity is 0, Eq.~(\ref{eq:electron}) can be derived using Eq.~(\ref{eq:econv3}).
Consequently, Eq.~(\ref{eq:electron}) is derived based on the law of conservation of energy.
In previous studies,~\cite{Daraszewicz_2013,Murphy_2015,Murphy_2016} forces acting on the atoms were calculated by the spatial derivative of the free energy calculated using the $T_e$-dependent IAP, and the energy exchange due to the electron-phonon interaction was considered.
However, the absorbed energy due to the electronic entropy effect was ignored.
This means that, in conventional simulations, the time development of $T_e$ is calculated by the following equation:
\begin{eqnarray}
C_e^n \frac{dT_e^n}{dt} = {\nabla }\cdot(\kappa _e^n {\nabla }T_e^n) - G^n(T_e^n - T_l^n) + I^n. \label{eq:conv_Te}
\end{eqnarray}
Hence, energy that is used to accelerate atoms and to raise the internal energy surface is supplied from the virtual electron thermal bath [Fig.~\ref{fig:image_pre}], because the third term on the right-hand side of Eq.~(\ref{eq:electron}) is ignored in the conventional TTM-MD scheme [Eq.~(\ref{eq:conv_Te})].
In this study, we developed the TTM-MD scheme by adding the $- {\sum}_i^{N^n} \bm{v}_i \frac{{\partial}}{{\partial}\bm{r}_i} \left( S^nT_e^n \right)$ term to the equation of the conventional TTM-MD scheme, which enabled us to perform simulations that satisfy the law of conservation of energy even in a system where the electronic entropy effects are large.
\begin{figure}[tp]
\begin{center}
\begin{tabular}{c}
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=8cm]{image_pre.eps}
\end{center}
\end{minipage} \\
\end{tabular}
\caption{Schematic image of the energy absorbed by the electronic entropy.
Volume dependence of (a) the free energy and (b) the internal energy at $T_e = 25,000\,\text{K}$, which are calculated using FTDFT.
Calculation conditions are the same as those used in the previous study.~\cite{Tanaka_2018}
The filled and blank circles are the energies at the equilibrium volume ($V_0$) and $1.5\,V_0$, respectively.
$\Delta E_F$ represents the difference between the free energy at $V_0$ and that at $1.5\,V_0$.
Also, $\Delta E_I$ represents the difference between the internal energy at $V_0$ and that at $1.5\,V_0$.
When the volume changes from $V_0$ to $1.5\,V_0$, these energies should be absorbed from the ES as the electronic entropy effect to accelerate atoms and to raise the internal energy surface.
}
\label{fig:image_pre}
\end{center}
\end{figure}
\subsection{Calculation Conditions}
\label{sec:calculation_detail}
Here, we explain the calculation conditions.
The lateral dimensions of a laser-irradiated Cu film are $3.615\,\text{nm}\times3.615\,\text{nm}$, which is ten times the lattice constant of the conventional unit cell of the face-centered cubic (fcc) structure of Cu.
The initial MD and CM regions of LS are about $361.5\,\text{nm}$ and $638.5\,$nm, respectively.
Hence, the thickness of the computational Cu film is $1\, \mu$m.
The total number of atoms in the computational cell is about $4.0\times 10^5$.
The surface of the film is a (001) free surface of the fcc structure.
The laser pulse shape is assumed to be Gaussian.
The pulse duration times of an ultrashort-pulse laser and a ps-pulse laser are $100\,\text{fs}$ and $200\,\text{ps}$, respectively.
The size of the 3D cells is $1.205\,\text{nm} \times 1.205\,\text{nm} \times 1.205\,\text{nm}$.
Therefore, the space step $\Delta x_{\text{CM}}$ is $1.205\,\text{nm}$.
Eqs.~(\ref{eq:electron}) and~(\ref{eq:lattice}) are solved by a finite difference method (FDM).
To solve Eqs.~(\ref{eq:motionx1}) and (\ref{eq:motionv1}), the velocity Verlet algorithm is used.
The value of the time step ${\Delta}t$ is $10\,\text{as}$.
This value is much shorter than the time step for ordinary MD simulations.
To reduce the calculation cost, the time step for MD calculation ${\Delta t}_{\text{MD}}$ is set to ${\Delta t}_{\text{MD}} = n_{\text{MD}} {\Delta} t$, where $ n_{\text{MD}}$ is an integer number.
In Sec.~\ref{sec:Test_TTM-MD}, we determine a suitable time step $\Delta t_{\text{MD}}$ so that the law of conservation of energy is satisfied with little error.
In our simulations, before irradiation of a laser pulse on the Cu film, the computational cell was relaxed using the Nos\'{e}-Hoover thermostat~\cite{Hoover_1985} at $300\,\text{K}$ for $800\,\text{ps}$, where $\Delta t_{\text{MD}}=5\,\text{fs}$ was used.
Details of the calculation flow of the TTM-MD are explained in the Supplemental Material.
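For illustration, the sub-cycling of the continuum update with respect to the MD update, $\Delta t_{\text{MD}}=n_{\text{MD}}\,\Delta t$, can be sketched as follows; the one-dimensional finite-difference update, the harmonic ``force'', and all parameter values below are toy placeholders rather than the actual Eq.~(\ref{eq:electron}) or the $T_e$-dependent EAM used in this work.
\begin{verbatim}
import numpy as np

# Illustrative sub-cycling: n_MD fine steps (Delta t) of the continuum T_e/T_l
# update per coarse MD step (Delta t_MD).  Everything here is a toy placeholder.

dt = 10e-18                      # fine (FDM) time step: 10 as
n_md = 100                       # Delta t_MD = n_MD * Delta t = 1 fs
dt_md = n_md * dt

def fdm_step(Te, Tl, dt, G=1.0e17, Ce=3.0e4, Cl=3.5e6, kappa=400.0, dx=1.205e-9):
    """One explicit finite-difference step for T_e (periodic 1D chain of cells)
    plus electron-phonon coupling of T_l."""
    lap = (np.roll(Te, 1) - 2.0 * Te + np.roll(Te, -1)) / dx ** 2
    Te_new = Te + dt * (kappa * lap - G * (Te - Tl)) / Ce
    Tl_new = Tl + dt * G * (Te - Tl) / Cl
    return Te_new, Tl_new

def toy_force(x):
    """Placeholder for -dF/dx - m*xi*v (here just a harmonic tether)."""
    return -1.0 * x

Te = np.full(10, 300.0); Tl = np.full(10, 300.0)       # 10 toy cells
x = np.zeros(50); v = np.zeros(50); m = 1.055e-25      # 50 toy atoms (kg)
f = toy_force(x)
for _ in range(1000):                                  # 1000 MD steps = 1 ps
    for _ in range(n_md):                              # continuum sub-cycling
        Te, Tl = fdm_step(Te, Tl, dt)
    v += 0.5 * dt_md * f / m                           # velocity Verlet, half kick
    x += dt_md * v                                     # drift
    f = toy_force(x)
    v += 0.5 * dt_md * f / m                           # half kick
print(f"after 1 ps: max Te = {Te.max():.1f} K")
\end{verbatim}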
\section{Results and Discussion}
\label{sec:results}
\subsection{Conservation of energy}
\label{sec:Test_TTM-MD}
\begin{figure}[b]
\begin{center}
\begin{tabular}{c}
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=8cm]{converge_image.eps}
\hspace{1.6cm}
\end{center}
\end{minipage} \\
\end{tabular}
\caption{Schematic image of the conserved energy of the MD region ($E_{\text{cons}}$) whose definition is $E_{\text{cons}} = E_\text{MD} + D_{\text{CM}} -I_{\text{tot}}$.
Here, $E_\text{MD}$, $D_{\text{CM}}$, and $I_{\text{tot}}$ represent the internal energy of the MD region, the energy thermally diffusing to the CM region, and the energy deposited on the Cu film by the laser, respectively.
}
\label{fig:converge_image}
\end{center}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=6cm]{all2_00.eps}
\hspace{0.8cm}
\end{center}
\end{minipage} \\
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=6cm]{all2_04.eps}
\hspace{0.8cm}
\end{center}
\end{minipage} \\
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=6cm]{all_06.eps}
\hspace{0.8cm}
\end{center}
\end{minipage} \\
\end{tabular}
\caption{Time development of $E_{\text{cons}}$ (solid line) and $F_{\text{uncons}}$ (dotted line).
(a) Calculation results of TTM-MD simulations in which a laser is not applied.
Calculation results for the Cu film irradiated by an ultrashort-pulse laser with (b) $J_0=0.4\,\text{J}\,\text{cm}^{-2}$, where ablation is not caused, and with (c) $J_0=0.6\,\text{J}\,\text{cm}^{-2}$, where ablation is caused.
Black, green, and blue lines represent calculation results of $\Delta t_{\text{MD}} = 0.5$, $1.0$, and $5.0\,\text{fs}$, respectively.
The total number of atoms in the computational cell is approximately $4.0\times10^{5}$.
}
\label{fig:e_converge}
\end{center}
\end{figure}
Here, it is shown that the developed TTM-MD scheme satisfies the law of conservation of energy with a small error.
In addition, we show calculation results of the $\Delta t_{\text{MD}}$ dependence of the conserved energy, which is investigated to choose an appropriate $\Delta t_{\text{MD}}$ for the following simulations.
Fig.~\ref{fig:converge_image} represents a schematic image of the conservation of energy for the MD region ($E_{\text{cons}}$), which is calculated to investigate whether the developed TTM-MD simulation satisfies the law of conservation of energy.
$E_{\text{cons}}$ can be written as follows:
\begin{eqnarray}
E_{\text{cons}} = E_\text{MD} + D_{\text{CM}} -I_{\text{tot}}. \label{eq:E_cons}
\end{eqnarray}
Here, $E_\text{MD}$, $D_{\text{CM}}$, and $I_{\text{tot}}$ represent the internal energy of the MD region, the energy thermally diffusing to the CM region, and the energy deposited on the Cu film by the laser, respectively.
The internal energy of the MD region is expressed as $ \sum_n^{\text{MD\,cells}} ( E^n+\sum_i^{N^n}\frac{1}{2}m\bm{v}_i^2 )$, where the first summation is taken over all MD cells.
For comparison, we calculate the free energy of the MD region ($ F_{\text{uncons}}$), which is regarded as the conserved energy in the conventional TTM-MD scheme.
The definition of $ F_{\text{uncons}}$ is as follows:
\begin{eqnarray}
F_{\text{uncons}} = E_{\text{cons}} - \sum_n^{\text{MD\,cells}} S^n T_e^n. \label{eq:S_cons}
\end{eqnarray}
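In practice, the diagnostics of Eqs.~(\ref{eq:E_cons}) and~(\ref{eq:S_cons}) amount to simple bookkeeping over the MD cells; a minimal sketch is given below (the variable names are illustrative, and the per-cell quantities would be supplied by the TTM-MD code).
\begin{verbatim}
import numpy as np

def conservation_diagnostics(E_cell, kinetic_cell, S_cell, Te_cell, D_cm, I_tot):
    """E_cons = E_MD + D_CM - I_tot  and  F_uncons = E_cons - sum_n S^n T_e^n,
    with E_MD = sum over MD cells of (E^n + kinetic energy of the cell)."""
    E_md = np.sum(E_cell + kinetic_cell)
    E_cons = E_md + D_cm - I_tot
    F_uncons = E_cons - np.sum(S_cell * Te_cell)
    return E_cons, F_uncons

# toy numbers (arbitrary units), just to exercise the bookkeeping
E_cell = np.array([-3.5, -3.4]); kinetic = np.array([0.4, 0.5])
S_cell = np.array([1.0e-3, 2.0e-3]); Te_cell = np.array([5000.0, 4000.0])
print(conservation_diagnostics(E_cell, kinetic, S_cell, Te_cell, D_cm=0.1, I_tot=0.3))
\end{verbatim}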
Fig.~\ref{fig:e_converge} represents the $\Delta t_{\text{MD}}$ dependence of $E_{\text{cons}}$ (solid line) and $F_{\text{uncons}}$ (dotted line) in TTM-MD simulations.
Black, green, and blue lines represent calculation results using $\Delta t_{\text{MD}} = 0.5$, $1.0$, and $5.0\,\text{fs}$, respectively.
Fig.~\ref{fig:e_converge}(a) represents results for the TTM-MD simulation for $30\,\text{ps}$ without laser irradiation.
Figs.~\ref{fig:e_converge}(b) and (c) represent the results of TTM-MD simulations where the Cu film is irradiated by the ultrashort-pulse laser of (b) $J_0=0.4\,\text{J}\,\text{cm}^{-2}$ and by the laser of (c) $J_0=0.6\,\text{J}\,\text{cm}^{-2}$.
Here, $J_0$ represents the laser fluence.
Ablation does not occur in (b) the former case; on the other hand, ablation occurs in (c) the latter case.
In these two simulations, $T_e$ near the surface increases to approximately $20,000\,\text{K}$.
Fig.~\ref{fig:e_converge} shows that when a sufficiently small $\Delta t_{\text{MD}}$ is used, our simulations satisfy the law of conservation of energy with an error of several tens of $\text{meV}\,\text{atom}^{-1}$, and that $F_{\text{uncons}}$ is not conserved.
Since Figs.~\ref{fig:e_converge}(b) and (c) show that $E_{\text{cons}}$ returns back to the initial value at $t>20\,\text{ps}$, where low electron temperatures ($T_e \simeq 1,000\,\text{K}$) are realized, these errors of $E_{\text{cons}}$ exist only at high $T_e$.
The previous study~\cite{Tanaka_2018} showed that the electronic heat capacity ($C_e$) calculated using the $T_e$-dependent IAP is slightly overestimated compared to that calculated by FTDFT.
In this study, $E^n$ is calculated using the $T_e$-dependent IAP; on the other hand, the time development of $T_e^n$ is calculated according to Eq.~(\ref{eq:econv1}), where $C_e$ is calculated by FTDFT.
Therefore, $E^n$ calculated by the $T_e$-dependent IAP is expected to be overestimated, which can be seen in the calculation results for the laser-irradiated system [Figs.~\ref{fig:e_converge}(b) and (c)].
Hence, to decrease the error for the law of conservation of energy, it is necessary to develop $T_e$-dependent IAP that can reproduce the electronic specific heat of FTDFT with higher accuracy.
Fig.~\ref{fig:e_converge}(a) shows that $E_{\text{cons}}$ is conserved with little or no error for all $\Delta t_{\text{MD}}$.
On the other hand, Figs.~\ref{fig:e_converge}(b) and (c) show that a small $\Delta t_{\text{MD}}$ is needed to conserve $E_{\text{cons}}$ when the ultrashort-pulse laser is applied.
The reason that energy conservation is not satisfied for the long time step $\Delta t_{\text{MD}} = 5.0\,\text{fs}$ can be attributed to high-velocity atoms accelerated by the laser irradiation.
In all TTM-MD simulations shown in the following, an appropriate $\Delta t_{\text{MD}}$ is used after verifying that $E_{\text{cons}}$ is conserved in each simulation.
We carry out the TTM-MD simulations using $\Delta t_{\text{MD}}=1.0\,\text{fs}$ when the irradiation laser fluence is $J_0<0.9\,\text{J}\,\text{cm}^{-2}$,
and $\Delta t_{\text{MD}}=0.5\,\text{fs}$ is used in the TTM-MD simulations when the irradiation laser fluence is $J_0\ge0.9\,\text{J}\,\text{cm}^{-2}$.
\subsection{Microscopic mechanisms of metal ablation}
\begin{figure*}[t]
\begin{center}
\begin{tabular}{c}
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=12cm]{compcell_all.eps}
\end{center}
\end{minipage} \\
\end{tabular}
\caption{Schematic image of the computational cell for the TTM-MD simulations.
A laser pulse comes from the left side of the figure.
The $z$ direction represents the depth direction of the Cu film.
Periodic boundary conditions are employed in the directions parallel to the surface.
The CM region is connected to the MD region on the right (see Fig.~\ref{fig:TTM-MD}).
In addition to this figure, the following snapshots of atomic configurations are visualized using the Open Visualization Tool~\cite{OVITO} (OVITO).
}
\label{fig:compcell}
\end{center}
\end{figure*}
In this section, results and analyses of the TTM-MD simulations of the ultrashort-pulse laser ablation are described.
The computational cell for the TTM-MD simulations is shown in Fig.~\ref{fig:compcell}.
The laser comes from the left side of the Cu film, which consists of MD and CM regions (see Figs.~\ref{fig:TTM-MD} and \ref{fig:compcell}).
Fig.~\ref{fig:compcell} and snapshots of the atomic configurations are visualized using Open Visualization Tool~\cite{OVITO} (OVITO).
The calculation results shown in this section were obtained under conditions in which the fluence of the applied laser changed from $0.54$ to $1.00\,\text{J}\,\text{cm}^{-2}$ while its pulse width was fixed at $100\,\text{fs}$.
\subsubsection{Ablation near the ablation threshold: emission of atoms}
\label{sec:results_indep}
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{057_0ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{057_indep0ps.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{057_1ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{057_indep1ps.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{057_2ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{057_indep2ps.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{Snapshots of atomic configurations near the surface after $0$-$2\,\text{ps}$ irradiation with a $100\,\text{fs}$ pulse laser of $J_0=0.57\,\text{J}\,\text{cm}^{-2}$. These simulations are carried out using (a) the $T_e$-dependent IAP and (b) the $T_e$-independent IAP.
}
\label{fig:057snaps}
\end{center}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4.5cm]{057_temp2.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4.5cm]{057i_temp2.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{$T_e$ and $T_l$ space distributions.
The irradiated laser fluence is $J_0=0.57\,\text{J}\,\text{cm}^{-2}$.
These figures represent the simulation results using (a) the $T_e$-dependent IAP and (b) the $T_e$-independent IAP.
Solid, dashed, and dotted lines represent the results at $t=0.5$, $1.0$, and $2.0\,\text{ps}$, respectively.
}
\label{fig:057temp}
\end{center}
\end{figure}
In the TTM-MD simulations using the $T_e$-dependent IAP,~\cite{Tanaka_2021} emission of an atom is observed when the Cu film is irradiated by a laser pulse with $J_0=0.55\,\text{J}\,\text{cm}^{-2}$.
In contrast, irradiation of the Cu film by a laser with $J_0=0.54\,\text{J}\,\text{cm}^{-2}$ does not cause atom emission.
From these results, the ablation threshold fluence is estimated to be $J_0=0.55\,\text{J}\,\text{cm}^{-2}$, which is comparable to our previous CM simulation result ($J_0=0.47\,\text{J}\,\text{cm}^{-2}$).~\cite{Tanaka_2018}
The kinetic energy of the emitted atom is estimated to be $46.5\,\text{eV}$.
This remarkably high atom energy is consistent with the experimental value (about $30\,\text{eV}$~\cite{Hashida_2010}), which is the most probable energy of the Cu$^+$ ions emitted on irradiation by a laser at the ablation threshold fluence.
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=5cm]{057_entropy2.eps}
\end{center}
\end{minipage} \\
\end{tabular}
\caption{The energy absorbed by the electronic entropy effect at each depth.
The irradiated laser fluence is $J_0=0.57\,\text{J}\,\text{cm}^{-2}$.
Solid, dashed, and dotted lines represent results at $t=0.5$, $1.0$, and $1.5\,\text{ps}$, respectively.
Zero on the $x$-axis represents the initial surface position.
}
\label{fig:057entropy}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4.5cm]{057_pres.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4.5cm]{057i_pres.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{The spatial distribution of the local pressure along the $z$ direction of simulations using (a) the $T_e$-dependent IAP and (b) the $T_e$-independent IAP.
Bold, dashed, and dotted lines represent the results at $t=0.5$, $1.0$, and $2.0\,\text{ps}$, respectively.
}
\label{fig:057pres}
\end{center}
\end{figure}
As shown in Fig.~\ref{fig:057snaps}(a), several atoms are emitted from the surface irradiated by a laser with $J_0=0.57\,\text{J}\,\text{cm}^{-2}$.
These atomic configurations are simulated using the $T_e$-dependent IAP, in which the electronic entropy effect is incorporated.
For comparison, TTM-MD simulations with a $T_e$-independent IAP were also performed, using the $T_e=300\,\text{K}$ potential parameters at all electron temperatures.
Snapshots of the atomic configurations obtained by this simulation are shown in Fig.~\ref{fig:057snaps}(b).
As can be seen from this figure, laser irradiation at $J_0=0.57\,\text{J}\,\text{cm}^{-2}$, which causes atom emission when simulated with the $T_e$-dependent IAP, does not cause atom emission when simulated with the $T_e$-independent IAP.
Fig.~\ref{fig:057temp} shows the spatial distribution of $T_e$ and $T_l$.
As can be seen from Fig.~\ref{fig:057temp}, at $t = 0.5$, $1$, and $2\,\text{ps}$ there is little difference between the simulation results using the $T_e$-dependent IAP and those using the $T_e$-independent IAP.
Therefore, the atom emission cannot be explained by the thermalized kinetic energy of the atoms.
Since previous studies~\cite{Tanaka_2018, Tanaka_2021} show that the internal energy ($E$) becomes more attractive at high $T_e$, it is considered that the origin of the atom emission comes from the electronic entropy ($-ST_e$) effect, which is reported to induce large repulsion forces at high $T_e$.
The contribution of the electronic entropy effect can be investigated more directly by calculating the energy absorbed due to the electronic entropy effect.
Fig.~\ref{fig:057entropy} represents the energy absorbed due to the electronic entropy term of Eq.~(\ref{eq:def_free}) at each depth.
The solid, dashed, and dotted lines in this figure represent results at $t=0.5, 1.0,$ and $1.5\,\text{ps}$, respectively.
From this figure, it can be seen that the electronic entropy effect is large near the surface.
Furthermore, the effect of the electronic entropy on the pressure is investigated, since the laser-induced pressure is considered to be important for the occurrence of spallation and phase explosion.~\cite{Wu_2013}
Fig.~\ref{fig:057pres} represents the distribution of the local pressure along the $z$ direction ($p_z$).
According to the simple derivation by Basinski \textit{et al}.~\cite{Basinski_1971} based on the virial theorem, the local pressure $p^n$ in the $n$-th 3D cell can also be calculated from the following equation:
\begin{align}
p^n & = \frac{1}{3V^n} \notag \\
& \quad \times \left[ \Big\langle \sum_i^{N^n} m\bm{v}_{i}^2 \Big\rangle + \Big\langle \frac{1}{2} \sum_i^{N^n} \sum_{j \neq i}^{N^{\text{tot}}} r_{ij} \cdot f_{ij} \Big\rangle \right].
\end{align}
Here, $V^n$ and $N^{\text{tot}}$ are the volume of the $n$-th 3D cell and the total number of atoms, respectively.
$r_{ij}$ and $ f_{ij}$ are the distance and force between the $i$-th and the $j$-th atoms, respectively.
The angle brackets denote a time average.
In our calculation, the value of $p^n$ is averaged over $100\,\text{fs}$.
We focus only on the local pressure along the $z$ direction, which is the most important for the ablation dynamics.
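For illustration, such a per-cell virial pressure can be accumulated as sketched below; the Lennard-Jones pair force is only a placeholder for the actual $T_e$-dependent EAM forces, and a production code would additionally average over $100\,\text{fs}$ and keep the $z$-direction variant of the expression above.
\begin{verbatim}
import numpy as np

def lj_force(rij, eps=1.0, sigma=1.0):
    """Placeholder pair force on atom i due to atom j (Lennard-Jones), rij = r_i - r_j."""
    r2 = np.dot(rij, rij)
    sr6 = (sigma ** 2 / r2) ** 3
    return 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r2 * rij

def cell_virial_pressure(pos, vel, mass, cell_volume, in_cell):
    """Instantaneous virial pressure of one 3D cell,
    p^n = (1/3V^n) [ sum_{i in cell} m v_i^2
                     + 1/2 sum_{i in cell} sum_{j != i} r_ij . f_ij ]."""
    kinetic = mass * np.sum(vel[in_cell] ** 2)
    virial = 0.0
    for i in np.where(in_cell)[0]:
        for j in range(len(pos)):
            if j != i:
                rij = pos[i] - pos[j]
                virial += 0.5 * np.dot(rij, lj_force(rij))
    return (kinetic + virial) / (3.0 * cell_volume)

# toy usage: two atoms assigned to the same cell (reduced units)
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros((2, 3))
print(cell_virial_pressure(pos, vel, mass=1.0, cell_volume=1.0,
                           in_cell=np.array([True, True])))
\end{verbatim}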
Fig.~\ref{fig:057pres}(b) shows that a pressure of less than $5\,\text{GPa}$ is created in the simulation using the $T_e$-independent IAP.
On the other hand, Fig.~\ref{fig:057pres}(a) shows that a large pressure ($\sim 35\,\text{GPa}$) is created near the surface in the simulation using the $T_e$-dependent IAP, and more than $5\,\text{GPa}$ pressure is created even deep inside the region ($> 200\,\text{nm}$).
The compressive pressure wave created by atom emission and that created by expansion near the surface cannot have reached the deep interior region at these times, because the speed of sound in bulk Cu is $4.76\,\text{nm}\,\text{ps}^{-1}$.~\cite{Linde_2003}
Therefore, this high compressive pressure in the deep interior region is considered to arise from the repulsion force between atoms due to the electronic entropy effect.
As shown in Fig.~\ref{fig:057pres}(a), large negative pressure (tensile stress), which has the potential to induce spallation, is created near the surface ($\sim 5\,\text{nm}$) in the simulation using the $T_e$-dependent IAP.
Although a negative pressure is created, laser irradiation with $J_0=0.57\,\text{J}\,\text{cm}^{-2}$ does not cause spallation even after the compressive pressure wave reaches the MD/CM boundary ($361.5\,\text{nm}$).
Therefore, we consider that a higher negative pressure and a higher temperature are necessary to cause spallation, and we find that atom emission occurs at a laser fluence lower than that required for spallation.
From the number of emitted atoms, the ablation depth is estimated to be $0.65\,\text{nm}$ for laser-irradiation with $J_0=0.57\,\text{J}\,\text{cm}^{-2}$.
Together with the ablation at $J_0=0.55\,\text{J}\,\text{cm}^{-2}$, this result has the potential to explain the non-thermal ablation of metals, in which sub-nanometer ablation was observed.~\cite{Hashida_1999,Hashida_2002,Miyasaka_2012}
\subsubsection{Ablation with a laser fluence slightly larger than the ablation threshold: spallation}
\label{sec:results_indep}
\begin{figure}[b]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07_0ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07i_0ps.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07_1ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07i_1ps.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07_2ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07i_2ps.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{Snapshots of atomic configurations near the surface after $0$-$2\,\text{ps}$ irradiation with a $100\,\text{fs}$ pulse laser of $J_0=0.7\,\text{J}\,\text{cm}^{-2}$. These simulations are carried out using (a) the $T_e$-dependent IAP and (b) the $T_e$-independent IAP.
}
\label{fig:07snaps}
\end{center}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=5cm]{07_entropy2.eps}
\hspace{1.0cm}
\end{center}
\end{minipage} \\
\end{tabular}
\caption{Energy absorbed by the electronic entropy effect at each depth.
The laser fluence is $J_0=0.7\,\text{J}\,\text{cm}^{-2}$.
Solid, dashed, and dotted lines represent the elapsed times $t=0.5$, $1.0$, and $1.5\,\text{ps}$, respectively.
The origin of the $x$-axis corresponds to the initial surface position.
}
\label{fig:07entropy}
\end{center}
\end{figure}
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=2.8cm]{Te-dep.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07a_0ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07a_9ps.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07a_3ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07a_12ps.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07a_6ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07a_15ps.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{ Snapshots of atomic configurations near the surface after $0$-$15\,\text{ps}$ irradiation with a $100\,\text{fs}$-pulse laser of $J_0=0.7\,\text{J}\,\text{cm}^{-2}$. This simulation is carried out using the $T_e$-dependent IAP.
}
\label{fig:07asnaps}
\end{center}
\end{figure}
\begin{figure}[tb]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=2.8cm]{Te-dep.eps}
\end{center}
\end{minipage}
\\[0.8ex]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07b_0ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07b_9ps.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07b_3ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07b_12ps.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07b_6ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{07b_15ps.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{Close-up snapshots of the atomic configurations of Fig.~\ref{fig:07asnaps} at a depth of $\sim10\,\text{nm}$.
}
\label{fig:07bsnaps}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4.5cm]{07_pres.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4.5cm]{07i_pres.eps}
\end{center}
\end{minipage} \\
\end{tabular}
\caption{Spatial distribution of the local pressure along the $z$ direction of simulations using (a) the $T_e$-dependent IAP and (b) the $T_e$-independent IAP.
Solid, dashed, dotted, chained, and bold lines represent the results at $t=0.5$, $1.0$, $5.0$, $10.0$, and $15.0\,\text{ps}$, respectively.
The red arrow in (a) represents the point where spallation occurs.
}
\label{fig:07pres}
\end{center}
\end{figure}
Here, we exhibit the results of simulations in which the fluence of the applied laser ($J_0=0.7\,\text{J}\,\text{cm}^{-2}$) is a little higher than the ablation threshold ($J_0=0.55\,\text{J}\,\text{cm}^{-2}$).
Fig.~\ref{fig:07snaps} shows snapshots of the atomic configurations of the Cu film irradiated by the ultrashort-pulse laser.
As shown in Fig.~\ref{fig:07snaps}(b), when the $T_e$-independent IAP is used in the simulation, ablation does not occur.
On the other hand, Fig.~\ref{fig:07snaps}(a) shows that ablation occurs when the $T_e$-dependent IAP is used.
In addition, Fig.~\ref{fig:07entropy} shows that the absorption due to electronic entropy is more than $4.0\,\text{eV}\,\text{atom}^{-1}$, which is larger than both the corresponding value in the lower-fluence simulation and the cohesive energy.
According to the calculation results, the electronic entropy plays an important role in causing atom emission even at this laser fluence.
Fig.~\ref{fig:07asnaps} shows snapshots of the atomic configurations of $0$ to $15\,\text{ps}$ after laser irradiation.
As shown in these figures, spallation is also observed for laser irradiation with $J_0=0.7\,\text{J}\,\text{cm}^{-2}$.
Part of the atomic configuration (depth: $\sim10\,\text{nm}$) of Fig.~\ref{fig:07asnaps} is shown in Fig.~\ref{fig:07bsnaps}.
Since the trigger for spallation is thought to be tensile stress, the time evolution of the local pressure in the TTM-MD simulation is calculated to investigate the electronic entropy contribution.
The spatial distribution of the local pressure along the $z$ direction is shown in Fig.~\ref{fig:07pres}(a).
Within $t=5\,\text{ps}$, the pressure wave passes through the point indicated by the red arrow in Fig.~\ref{fig:07pres}(a), at which the surface layer is later spalled, and a large negative pressure is created there.
Owing to the negative pressure, a void begins to be formed around $t=9\,\text{ps}$, and as a result, spallation occurs.
In the simulation with the $T_e$-independent IAP, spallation does not occur at least within $100\,\text{ps}$, which is enough time for the recoil pressure created near the surface to reach the MD/CM boundary.
Fig.~\ref{fig:07pres}(b) shows that the negative pressure for the simulation using the $T_e$-independent IAP is smaller than that using the $T_e$-dependent IAP by one order of magnitude.
Since it has been widely accepted that large negative pressure is the origin of spallation,~\cite{Wu_2013} we consider that one of the reasons that spallation is not caused in the $T_e$-independent IAP simulation is the small negative pressure.
The reason for the small negative pressure is considered to be the lack of atom emission and the small internal pressure due to neglecting the effect of electronic entropy.
From these results, we conclude that the effect of electronic entropy enhances not only atom emission but also spallation.
\subsubsection{Ablation with a laser fluence further above the ablation threshold: transition to phase explosion}
\label{sec:results_phase}
\begin{figure}[b]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=2.8cm]{Te-dep.eps}
\end{center}
\end{minipage}
\\[0.8ex]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{1005_0ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{1005_30ps.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{1005_10ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{1005_40ps.eps}
\end{center}
\end{minipage}
\\
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{1005_20ps.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[clip, width=4cm]{1005_50ps.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{Snapshots of atomic configurations near the surface after $0$-$50\,\text{ps}$ irradiation with a $100\,\text{fs}$ pulse laser of $J_0=1.0\,\text{J}\,\text{cm}^{-2}$. This simulation is carried out using the $T_e$-dependent IAP. }
\label{fig:10asnaps}
\end{center}
\end{figure}
Here, we exhibit the results of a simulation in which the fluence of the applied laser is $J_0=1.0\,\text{J}\,\text{cm}^{-2}$.
Fig.~\ref{fig:10asnaps} shows snapshots of the atomic configurations of the Cu film irradiated by the pulse laser.
In the TTM-MD simulation, the $T_e$-dependent IAP is used.
The ablation depth of this simulation is $37.8\,\text{nm}$, which is estimated from the number of atoms emitted before the pressure wave reaches the MD/CM boundary.
Fig.~\ref{fig:10asnaps} shows that homogeneous evaporation (see Fig.~\ref{fig:10asnaps} at $t=10\,\text{ps}$) is observed near the laser-irradiated surface, which is considered to be an indication of phase explosion.
These results indicate that as the laser fluence becomes larger, the ablation process transforms from spallation to phase explosion.
This result is qualitatively consistent with a previous MD simulation~\cite{Wu_2013} in which the $T_e$-independent IAP is used.
\subsection{Ablation depth}
\label{sec:depth}
\begin{figure}[b]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip, width=8cm]{depth_sta3.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{Ablation depth obtained with the developed TTM-MD scheme, compared with a previous CM simulation~\cite{Tanaka_2018} and experimental results.~\cite{Colombier_2005,Nielsen_2010}
Circles represent the average ablation depth results from three TTM-MD simulations with different initial thermalization times.
Dotted and dashed lines represent the results of previous CM calculations including the electronic entropy effect and ignoring the electronic entropy effect, respectively.~\cite{Tanaka_2018}
Triangles and squares represent experimental results.~\cite{Colombier_2005,Nielsen_2010}
}
\label{fig:TTM_depth}
\end{center}
\end{figure}
Here, we show the simulation results for the ablation depth and compare them with a previous CM simulation~\cite{Tanaka_2018} and experimental results.~\cite{Colombier_2005,Nielsen_2010}
The ablation depth is estimated from the number of atoms emitted before the pressure wave reaches the MD/CM boundary.
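As an illustration only, the bookkeeping behind this estimate can be sketched as follows (Python; the emission criterion, the array names, and the conversion through the bulk number density are our assumptions, since the exact counting rule is not spelled out here).
\begin{verbatim}
import numpy as np

def ablation_depth(z, vz, z_surface, number_density, area):
    """Estimate ablation depth from emitted atoms (hypothetical criterion).

    z, vz          : atom z-positions and z-velocities at the evaluation time
    z_surface      : initial surface position
    number_density : atomic number density of bulk Cu
    area           : lateral cross-section of the simulation cell
    """
    # Count atoms above the initial surface that are still moving away from it.
    emitted = np.sum((z > z_surface) & (vz > 0.0))
    # Convert the emitted-atom count into an equivalent removed-layer thickness.
    return emitted / (number_density * area)
\end{verbatim}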
Our calculation results for the ablation depth are plotted in Fig.~\ref{fig:TTM_depth}.
The open circles represent the average ablation depth results from three TTM-MD simulations with different initial thermalization times.
The averaged values for the ablation depth at $J_0=0.55$ and $0.57\,\text{J}\,\text{cm}^{-2}$ in the TTM-MD simulation are $0.01$ and $0.30\,\text{nm}$, respectively.
Since spallation occurs in some runs and not in others at $J_0= 0.60\,\text{J}\,\text{cm}^{-2}$, the ablation depth varies widely at this laser fluence, as shown by the bars accompanying the circles.
Therefore, the ablation depth changes by more than two orders of magnitude around $J_0= 0.60\,\text{J}\,\text{cm}^{-2}$.
The TTM-MD simulations are qualitatively consistent with experiments,~\cite{Hashida_1999,Hashida_2010} where similarly large changes in the ablation depth are observed.
The dotted and dashed lines represent the results of the previous CM calculation~\cite{Tanaka_2018} including the electronic entropy effect and ignoring the electronic entropy effect, respectively.
Triangles and squares represent experimental results.~\cite{Colombier_2005,Nielsen_2010}
As shown in Fig.~\ref{fig:TTM_depth}, our TTM-MD simulations of ablation depth are in qualitative agreement with previous experimental and calculation studies.
\subsection{Pulse-width dependence of ablation threshold}
\label{sec:comp}
\begin{figure}[b]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{1.0\hsize}
\begin{center}
\includegraphics[clip,width=7cm]{07_long2.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{Spatial distributions of $T_e$ (red lines) and $T_l$ (blue lines).
The duration of the laser pulse is $200\,\text{ps}$ and its fluence is $J_0=0.70\,\text{J}\,\text{cm}^{-2}$.
The fluence peak reaches the surface at $t=200\,\text{ps}$.
The solid, dashed, and dotted lines represent the results at $t=100$, $200$, and $250\,\text{ps}$, respectively.
The red and blue dashed lines and red and blue dotted lines overlap with each other.
}
\label{fig:07longtemp}
\end{center}
\end{figure}
Here, the pulse-width dependence of the ablation threshold is investigated.
A previous study~\cite{Hashida_2002} reported that the ablation threshold fluence of an ultrashort-pulse laser is lower than that of a ps-laser.
We investigate whether our simulation can qualitatively reproduce this experimental result.
Figure~\ref{fig:07longtemp} shows the spatial distributions of $T_e$ and $T_l$.
The time duration of the ps-laser pulse is $200\,\text{ps}$ and its fluence is $J_0=0.70\,\text{J}\,\text{cm}^{-2}$.
The peak of the laser pulse reaches the surface at $t=200\,\text{ps}$.
In this simulation, ablation does not occur, at least within $300\,\text{ps}$.
In the case of ultrashort-pulse laser ($100\,\text{fs}$ laser) irradiation at the same fluence, ablation occurs (see Fig.~\ref{fig:07snaps}).
These calculation results show that the ablation threshold fluence of the ultrashort-pulse laser is lower than that of the ps-laser.
This means that the developed TTM-MD simulations can qualitatively reproduce the experimental results of the pulse-width dependence of the ablation threshold.
The reason for the difference between irradiation with the ultrashort-pulse laser and ps-laser can be explained as follows.
As can be seen from Fig.~\ref{fig:07longtemp}, in the case of ps-laser irradiation, the difference between $T_e$ and $T_l$ is small compared with the simulation for ultrashort-pulse laser irradiation (see Fig.~\ref{fig:057temp}).
In addition, $T_e$ reaches only about $3,000\,\text{K}$, which is one order of magnitude lower than with the $100\,\text{fs}$ laser irradiation.
At low $T_e$, the electronic entropy effect is small, so atom emission and spallation are suppressed compared with ultrashort-pulse laser irradiation.
Therefore, ablation does not occur for the irradiation with the ps-pulse laser.
\section{Conclusion}
\label{sec:conc}
The microscopic mechanism of metal ablation induced by irradiation with an ultrashort-pulse laser was investigated.
First, a new TTM-MD scheme was developed considering the electronic entropy effect.
To satisfy the law of conservation of energy, the correction term [$ - {\sum}_i^{N^n} \bm{v}_i \frac{{\partial}}{{\partial}\bm{r}_i} \left( S^nT_e^n \right) $] is added to the conventional equation of the TTM-MD scheme.
The energy conservation in the new scheme was verified by simulation of the laser-irradiated Cu film with $T_e$-dependent IAP.
With the TTM-MD simulations, laser ablation of Cu films with an ultrashort laser pulse was investigated.
The TTM-MD simulation predicts high-energy atom emission and sub-nanometer depth ablation near the ablation threshold fluence ($J_0=0.55\,\text{J}\,\text{cm}^{-2}$), which were observed in experiments.
This finding bridges the discrepancy between experiments and previous theoretical simulations in explaining the physical mechanism of the non-thermal ablation of metals.
Comparing the TTM-MD simulation with the electronic entropy effect and that without this effect, it is found that the electronic entropy plays an important role in atom emission.
In the case of the ultrashort laser pulse with $J_0=0.7\,\text{J}\,\text{cm}^{-2}$, atom emission and spallation were induced only in the case of the $T_e$-dependent IAP, indicating that electronic entropy plays an important role in causing not only atom emission but also spallation.
Moreover, the TTM-MD results for the ablation depth were in qualitative agreement with the CM calculation results and the experimental data.
Additionally, the dependence on the pulse width was analyzed.
Ablation does not occur under irradiation by a $200\,\text{ps}$ laser pulse with $J_0=0.7\,\text{J}\,\text{cm}^{-2}$, since ps-laser irradiation does not drive $T_e$ to the extremely high values reached with the ultrashort pulse.
Hence, the ablation threshold fluence of the ultrashort-pulse laser is found to be lower than that of the ps-laser, which is consistent with experiment.
In this paper, using the developed TTM-MD scheme, we demonstrated that the electronic entropy effect plays an important role in the ultrashort-pulse laser ablation of metals, which supports the EED mechanism as an explanation of the non-thermal ablation of metals.
\begin{acknowledgments}
This work was supported in part by the Innovative Center for Coherent Photon Technology (ICCPT) in
Japan and by JST COI Grant Number JPMJCE1313 and also by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) Quantum Leap Flagship Program (MEXT Q-LEAP) Grant No. JPMXS0118067246.
Y. T. was supported by the Japan Society for the Promotion of Science through the Program for Leading Graduate Schools (MERIT)\@.
\end{acknowledgments}
\section{Conclusion}
\label{Conclusion}
The problem of detecting non-convex clusters has led to the development of numerous clustering methods. Two well-known graph-based clustering methods are spectral clustering and SpectralNet. Both spectral clustering and SpectralNet require a graph that connects points in the same cluster with edges of high weights. The intuition is simple: strongly connected points become closer in the embedding space and can be easily detected.
Graph reduction requires extensive use of parameters that need careful setting for each dataset. The graph reduction algorithm proposed in this study does not require any parameters to reduce the graph, yet it is able to maintain spectral clustering and SpectralNet accuracies. It takes as input a full graph or a $k$-nearest neighbor graph (in the case of a large number of points). Then, it reduces the graph edges using statistical measures that require low computations. The experiments revealed that the proposed method provides a stable alternative to other methods that require parameter tuning.
The proposed method does not reduce the graph vertices, which could further boost the computational efficiency. A useful extension of the proposed method would therefore be a vertex-reduction component that is aware of local statistics. Another potential improvement of this work is to use a kernel other than the Gaussian kernel to compute pairwise similarities.
\section{Experiments and discussions}
\label{Experiments}
In the experiments we used four synthetic datasets, as shown in Figure\ \ref{Fig:Fig-06}. \texttt{Datasets 1} to \texttt{3} were created by \cite{RN237}, while \texttt{Dataset 4} was created by us. We also used seven real datasets (see Table \ref{Table:Table-01}). Apart from the \texttt{MNIST} dataset, all the real datasets were retrieved from the UCI machine learning repository. Each dataset was run with two parameter sets to evaluate the effect of the parameter settings on performance.
\begin{figure*
\centering
\includegraphics[width=0.7\textwidth,height=20cm,keepaspectratio]{Fig-06.pdf}
\caption{Synthetic datasets used in the experiments.}
\label{Fig:Fig-06}
\end{figure*}
\begin{table
\centering
\caption{The four synthetic and seven real datasets used in the experiments; $N$ is the number of points, $d$ is the number of dimensions, $C$ is the number of clusters, $m$ is the size of the reduced set of vertices, and $\mu_0$ is the number of neighbors used as a threshold to include or exclude further neighbors.}
\includegraphics[width=0.45\textwidth,height=20cm,keepaspectratio]{Table-01.pdf}
\label{Table:Table-01}
\end{table}
Six methods were used for comparison and these are shown in Table \ref{Table:Table-02}. \texttt{Methods 1} to \texttt{5} \cite{RN232,RN275,RN276} rely on the parameter $m$, which is the number of representatives to build the graph $G$. They used iterative algorithms like $k$-means and self-organizing maps to construct the graph, which makes them produce a slightly different graph with each run. \texttt{Method 6} \cite{RN421} relied on the parameter $\mu_0$ to build the graph $G$, where $\mu_0$ is the number of neighbors whose mean was used as a threshold to include or exclude further neighbors. The code is available at \url{https://github.com/mashaan14/Spectral-Clustering}.
\begin{table*
\centering
\caption{Methods used in the experiments. $m$ is the number of reduced vertices, $N$ is the number of all vertices, $t$ is the number of iterations, $k_{max}$ is the parameter used to construct $k$-nn graph.}
\includegraphics[width=0.65\textwidth,height=20cm,keepaspectratio]{Table-02.pdf}
\label{Table:Table-02}
\end{table*}
\begin{figure*
\centering
\includegraphics[width=0.98\textwidth]{Results-synthetic.png}
\caption{Results with the synthetic data, all values are for 50 runs. (Best viewed in color)}
\label{Table:Table-03}
\end{figure*}
All the methods were evaluated using three evaluation metrics: 1) clustering accuracy (\textbf{ACC}), 2) the Adjusted Rand Index (\textbf{ARI}) \cite{RN365}, and 3) the percentage of edges used compared to all the edges in a full graph (\textbf{E\%}).
ACC computes the percentage of hits between ground truth labels $T_i$ and labels obtained through clustering $L_i$. It is defined as \cite{Cai2005Document}:
\begin{equation}
ACC(T,L)=\frac{\sum_{i=1}^{N}{\ \delta(T_i,map(L_i))}}{N},
\label{Eq-ACC}
\end{equation}
\noindent
where $N$ is the number of points and the function $\delta(x,y)$ is the Kronecker delta function, which equals one if $x=y$ and zero otherwise. The function $map(L_i)$ permutes the grouping obtained through clustering for the best fit with the ground-truth grouping. ARI needs two groupings $T$ and $L$, where $T$ is the ground truth and $L$ is the grouping predicted by the clustering method. If $T$ and $L$ are identical, ARI produces one, and zero in case of random grouping. ARI is calculated using: $n_{11}$: pairs in the same cluster in both $T$ and $L$; $n_{00}$: pairs in different clusters in both $T$ and $L$; $n_{01}$: pairs in the same cluster in $T$ but in different clusters in $L$; $n_{10}$: pairs in different clusters in $T$ but in the same cluster in $L$.
\begin{equation}
ARI(T,L)=\frac{2(n_{00}n_{11}-n_{01}n_{10})}{(n_{00}+n_{01})(n_{01}+n_{11})+(n_{00}+n_{10})(n_{10}+n_{11})}\ .
\label{Eq-007}
\end{equation}
\noindent
The computational efficiency can be measured by the method's running time, but this is influenced by the type of machine used. We therefore chose to measure the computational efficiency by the percentage of edges, \textbf{E\%}:
\begin{equation}
E\%=\frac{E(G_{reduced})}{E(G_{full})}\ .
\label{Eq-009-01}
\end{equation}
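For concreteness, the three metrics can be computed as in the short sketch below (Python; the label-mapping step uses the Hungarian algorithm to realize $map(\cdot)$, labels are assumed to be integer-coded from zero, the full graph is taken to have $N \times N$ edges, and the function names are ours, not from the released code).
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score

def clustering_accuracy(T, L):
    """ACC: best one-to-one mapping of predicted labels L onto ground truth T."""
    T, L = np.asarray(T), np.asarray(L)
    n_classes = max(T.max(), L.max()) + 1
    cost = np.zeros((n_classes, n_classes), dtype=int)
    for t, l in zip(T, L):
        cost[l, t] += 1
    # The Hungarian algorithm plays the role of map(.) in the ACC equation.
    rows, cols = linear_sum_assignment(-cost)
    return cost[rows, cols].sum() / len(T)

def ari(T, L):
    """ARI computed from the pair counts n00, n01, n10, n11."""
    return adjusted_rand_score(T, L)

def edge_percentage(n_edges_reduced, n_points):
    """E%: edges kept relative to a full graph, here taken as N x N edges."""
    return 100.0 * n_edges_reduced / (n_points * n_points)
\end{verbatim}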
\subsection{Experiments on synthetic data}
\label{SyntheticData}
In the synthetic datasets, the proposed method delivered a performance that ranked it 2nd, 2nd, 1st, and 2nd for \texttt{Datasets 1} to \texttt{4}, respectively (see Figure\ \ref{Table:Table-03}). \texttt{Method 6} was the top performer on three occasions. However, its performance dropped significantly when we changed the parameter $\mu_0$. For example, its performance dropped by 50\% with \texttt{Dataset 4} when we changed $\mu_0=3$ to $\mu_0=7$. This shows how parameters could affect the performance. Another observation is the consistency of the ACC and ARI metrics over the 50 runs. As can be seen in Figure\ \ref{Table:Table-03}, \texttt{Methods 1} to \texttt{5} have a wide standard deviation. This is explained by the iterative algorithms used by \texttt{Methods 1} to \texttt{5} to construct the graph. \texttt{Method 6} and the proposed method do not have this problem, and they have a small standard deviation. This is due to their deterministic nature when constructing the graph, which makes them consistent over independent executions.
In terms of the used edges, the proposed method used 6.32\%, 1.45\%, and 0.51\% of the full graph edges for \texttt{Datasets 2} to \texttt{4} respectively. But in \texttt{Dataset 1} there was a sharp increase where the proposed method used 16\% of the full graph edges. This sharp increase could be explained by the points in dense clusters being fully connected.
\begin{figure*
\centering
\includegraphics[width=0.9\textwidth]{Results-real.png}
\caption{Results with real data, all values are for 50 runs. (Best viewed in color)}
\label{Table:Table-04}
\end{figure*}
\subsection{Experiments on real data}
\label{RealData}
With the real datasets in Figure\ \ref{Table:Table-04}, the proposed method continued to be the most consistent of all the tested methods. It kept a very small standard deviation, while other methods had a wide standard deviation. The performance of the other methods was determined by their parameters. For example, \texttt{Method 3} was the best performer on the \texttt{iris} dataset when $m=16$. However, when we changed $m$ to $32$, its performance dropped by more than 15\%. Another observation is that with \texttt{statlog} and \texttt{MNIST}, the proposed method did not perform well. This indicates that a cluster in these datasets does not have the same statistics across its regions. Therefore, characterizing clusters using the local $\sigma$ might not be a good choice. Instead, we should use CONN to discover cluster discontinuities, rather than tracking local statistics.
\begin{figure*
\centering
\includegraphics[width=0.48\textwidth]{irisM.pdf}
\includegraphics[width=0.48\textwidth]{irisMu.pdf}
\caption{Testing the methods’ performance with the \texttt{iris} dataset under different settings of parameters $m$ and $\mu_0$. (Best viewed in color)}
\label{Fig:Fig-07}
\end{figure*}
\begin{figure*
\centering
\includegraphics[width=0.7\textwidth,height=20cm,keepaspectratio]{Fig-08.pdf}
\caption{Datasets used in the SpectralNet experiments.}
\label{Fig:Fig-08}
\end{figure*}
\begin{figure*
\centering
\includegraphics[width=0.75\textwidth,height=20cm,keepaspectratio]{Results-SpectralNet.png}
\caption{Results of the experiments for integration with SpectralNet for 10 runs. (Best viewed in color)}
\label{Table:Table-05}
\end{figure*}
\subsection{Effect of the parameters on the spectral clustering performance}
\label{ParametersEffect}
In this experiment, we investigated how a wide selection of parameters could affect the accuracy of spectral clustering. The parameters $m$ and $\mu_0$ were given the following values: $m \in \{ 10,20,30,40,50,60,70,80,90,100\}$ and $\mu_0 \in \{ 3,7,10,20,30,40,50,60,70,80\}$. In Figure\ \ref{Fig:Fig-07} (left), the performance of \texttt{Methods 1} to \texttt{5} fluctuated with different values of $m$, with a clear downward trend as $m$ increased. The dashed horizontal line is the performance of the proposed method. In Figure\ \ref{Fig:Fig-07} (right), \texttt{Method 6} started with low performance, peaked around $\mu_0=30$, and then took a downward trend. By eliminating the use of $\mu_0$, our method delivered a stable performance, as shown by the horizontal dashed line.
\subsection{Experiments for integration with SpectralNet}
\label{SpectralNetExperiments}
The SpectralNet integration experiment was conducted using the three datasets shown in Figure\ \ref{Fig:Fig-08}. The evaluation metrics are \textbf{ACC} shown in equation \ref{Eq-ACC}, \textbf{ARI} shown in equation \ref{Eq-007}, and \textbf{total pairs}, which is the number of pairs passed to the Siamese net. We used four methods to construct positive and negative pairs. The first two methods used a $k$-nearest neighbor graph with $k=2$ and $k=4$; the nearest $k$ neighbors were set as the positive pairs, and $k$ random farthest neighbors were set as the negative pairs. The third method used the parameter $\mu_0$ proposed by Alshammari et al. \cite{RN421} to construct pairs, and the fourth is the proposed method.
In Figure\ \ref{Table:Table-05}, the proposed method delivered the best performance for the \texttt{cc} and \texttt{compound} datasets. This good performance was coupled with good computational efficiency, with an average of $8,468$ for the total pairs passed to the Siamese net. Only $k=2$ could deliver fewer total pairs, but with a massive loss in performance. For the \texttt{aggregation} dataset, $k=2$ delivered the best performance. This experiment highlighted the need for setting the number of positive pairs dynamically. The methods following this approach (the $\mu_0$ method and our method) were the best performers for two of the three datasets.
\section{Introduction}\label{Introduction}
The problem of detecting clusters of non-convex geometric shape has long been studied in the pattern recognition literature. The solutions to this problem can be broadly classified into two categories: kernel-based and graph-based methods. Kernel-based methods attempt to map the points into a space where they can be separated. The embedding function $\phi: \mathbb{R}^D \rightarrow \mathbb{R}^M$ maps points from the original space to an embedding space. The embedding function $\phi$ is usually unknown and can be computationally expensive to define \cite{RN411}. On the other hand, graph-based methods use a graph $G(V,E)$ whose set of vertices represents the data points and whose set of edges represents the similarity between each pair of vertices. Finding non-convex clusters in a graph can be done in three ways: 1) by iteratively coarsening and partitioning the graph \cite{RN412}, 2) by performing spectral clustering \cite{RN421}, and 3) by feeding the graph $G(V,E)$ to a neural network (SpectralNet) \cite{RN360}. The first way of detecting clusters in a graph is iterative, which involves two deficiencies: the risk of being trapped in a local minimum and the need for a stopping condition. This makes spectral clustering and SpectralNet more appealing for studies conducting graph-based clustering.
Spectral clustering starts by constructing a graph $G(V,E)$. The sets of vertices $V$ and edges $E$ represent data points and their pairwise similarities. Spectral clustering detects clusters by performing eigen-decomposition on the graph Laplacian matrix $L$ and running $k$-means on its top eigenvectors \cite{RN233}. The computational bottleneck represented by the eigen-decomposition would cost the algorithm computations in the order of $\mathcal{O}(N^3)$ \cite{RN232}. This stimulated the research on reducing these computations by reducing the graph vertices and/or edges. However, the need for a memory efficient graph creates another problem related to the number of parameters associated with the process of graph construction. Deciding the number of reduced vertices and how the edges are eliminated would create several parameters that need careful tuning.
SpectralNet \cite{RN360} uses Siamese nets to learn affinities between data points. Then it feeds these affinities to a neural network to find a map function $F_\theta$, which maps the graph vertices $V$ to an embedding space where they can be separated using $k$-means. The Siamese nets expect the user to label which pairs are positive (similar) and which are negative (dissimilar). An unsupervised pairing uses the $k$-nearest neighbors, where the nearest neighbors are the positive pairs and the farthest neighbors are the negative pairs. The parameter $k$ requires manual tuning. It also restricts the number of edges to be exactly $k$, regardless of the surrounding density around the data point.
In her spectral clustering paper, von Luxburg \cite{RN234} wrote about the advantages of mutual k-nearest neighbor graph, and how it \textit{``tends not to connect areas of different density''}. She highlighted the need for having a \textit{``heuristic to choose the parameter k''}. We introduce a graph reduction method that does not require any parameters to produce a mutual graph with a reduced number of edges compared to the size of the full graph $E = N \times N$, where $N$ is number of all vertices. It initially finds the mean distance that best describes the density around a point. Then, it computes the pairwise similarities based on: 1) the distance between a pair of points, and 2) the mean distance of the surrounding density. Finally, we construct a mutual graph where a pair of vertices must be in each other’s nearest neighbors sets. We used two graph applications for the experiments: spectral clustering and SpectralNet. The proposed method provides a stable alternative compared to other methods where their performance was determined by the selected parameters.
Our main contribution in this work is eliminating manually tuned parameters that affect the clustering accuracy when changed. The graph partitioning methods used in this work are spectral clustering \cite{RN234} and SpectralNet \cite{RN360}.
\section{Related work}
\label{RelatedWork}
The problem of detecting non-convex clusters has led to the development of numerous clustering methods. These methods have abandoned the assumption that a cluster has a single mean. Instead, they rely on pairwise similarities to detect clusters. Graph-based clustering involves two steps: 1) reducing the graph, and 2) partitioning the graph. The proposed method in this paper falls under graph construction methods.
Spectral clustering uses eigen-decomposition to map the points into an embedding space, then groups similar points. One of the important applications of spectral clustering is subspace clustering \cite{Liu2013Robust,Elhamifar2013Sparse}. The performance of spectral clustering is determined by the similarity metric used to construct the affinity matrix $A$. The earlier works of subspace clustering used affinities based on principal angles \cite{Wolf2003Learning}, but recent studies have used sparse representations of points to measure the similarity \cite{Liu2013Robust,Peng2015Subspace,Peng2021Kernel}. Spectral clustering requires computations in the order of $\mathcal{O}(N^3)$ due to the eigen-decomposition step. A straightforward solution to this problem is to reduce the size of the affinity matrix $A$. This can be done in two ways: 1) reducing the set of vertices $V$, or 2) reducing the set of edges $E$.
Reducing the number of vertices is done by placing representatives on top of the data points, and then using those representatives as graph vertices. Placing a representative could be done by sampling (like $k$-means++ \cite{RN240}) or by vector quantization (like self-organizing maps \cite{RN42}). A well-known method in this field is ``k-means-based approximate spectral clustering (KASP)'' proposed by Yan et al. \cite{RN232}. KASP uses $k$-means to place representatives. Other efforts by Tasdemir \cite{RN275} and Tasdemir et al. \cite{RN276} involved placing representatives using vector quantization, and a nice feature of these methods is that the pairwise similarities are computed during the vector quantization. The problem with these methods is the parameter $m$, which is the number of representatives. Specifically, how should we set $m$? And how would different values of $m$ affect the clustering results?
Reducing the graph edges could be done by setting the neighborhood conditions. For instance, let $p$ be the center of a ball $B(p,r)$ with radius $r$ and $q$ be the center of a ball $B(q,r)$. $p$ and $q$ are connected if and only if the intersection of $B(p,r)$ and $B(q,r)$ does not contain other points \cite{Marchette2004Random}. Such graphs are called Relative Neighborhood Graphs (RNGs). Correa and Lindstrom \cite{RN254} used a $\beta$-skeleton graph for spectral clustering. However, the parameter $\beta$ needs tuning. Alshammari et al. \cite{RN421} introduced a method to filter edges from a $k$-nearest neighbor graph. However, it still needs an influential parameter, which was the mean of the baseline distribution of distances $\mu_0$. Another local method to reduce the number of edges was proposed by Satuluri et al. \cite{Satuluri2011Local}. The authors measured the similarity between two vertices using adjacency lists overlap, a metric known in the literature as shared nearest neighbor similarity \cite{Jarvis1973Clustering}. A graph sparsification method based on effective resistance was proposed by Spielman and Srivastava \cite{Spielman2011Graph,Spielman2011Spectral}. Their method was theoretically solid, but the definition of effective resistance breaks the cluster structure of the graph. Vertices with more short paths have low effective resistance and the method disconnects them. Retaining the cluster structure of the graph requires connecting such vertices \cite{Satuluri2011Local}.
In spectral clustering, the obtained spectral embedding cannot be extended to unseen data, a task commonly known as out-of-sample-extension (OOSE). Several studies have proposed solutions to this problem. Bengio et al. \cite{Bengio2003Extensions} used Nystr\"{o}m method to approximate the eigenfunction for the new samples. But they have to check the similarity between the training and new samples \cite{Nie2011Spectral}. Alzate and Suykens \cite{Alzate2010Multiway} proposed binarizing the rows of eigenvectors matrix where each row corresponds to a single training data point. By counting row occurrences, one can find the $k$ most occurring rows, where each row represents an encode vector for a cluster. To label a test sample, its projection was binarized and it is assigned to the closest cluster based on the minimum Hamming distance between its projection and encoding vectors. Levin et al. \cite{Levin2018extension} proposed a linear least squares OOSE, which was very close to Bengio et al. \cite{Bengio2003Extensions} approach. They also proposed a maximum-likelihood OOSE that produces a binary vector $\overrightarrow{a}$ indicating whether the unseen sample has an edge to the training samples or not.
All previous methods that provided an out-of-sample-extension (OOSE) to spectral clustering have relied on eigen-decomposition, which becomes infeasible for large datasets. The newly proposed SpectralNet \cite{RN360} is different from spectral clustering in a way that it does not use eigen-decomposition step. Instead, SpectralNet passes the affinity matrix $A$ into a deep neural network to group points with high similarities. Yet, SpectralNet still needs a graph construction method. Previous SpectralNet works have used $k$-nearest neighbour graph, but they have to set the parameter $k$ manually. It also restricts the number of edges to be exactly $k$, regardless of the surrounding density around the data point. Dense clusters require more edges to be strongly connected. Strong connections ensure closer positions in the embedding space. Also, SpectralNet methods randomly choose the negative pairs. This random selection makes the method inconsistent in independent executions.
Considering the literature on reduced graphs for spectral clustering and SpectralNet, it is evident that they have two deficiencies. First, certain parameters are required to drive the graph reduction process. Second, the involvement of random steps makes these methods inconsistent over independent executions.
\section{Reducing the graph size without the need for parameters}
\label{ProposedApproach}
The motivation behind our work was to avoid the use of any parameters during the graph reduction. The input for our method is a $k$-nearest neighbor graph. Although this $k$-nn graph is sparsified, it still connects clusters with different densities. The value of $k$ has limited influence on the final graph because it is not final, and most of the unnecessary edges created by $k$-nn will be removed in the reduction process. The method starts by finding, for each point $p$, the value of $\sigma_p$ that best describes the local statistics around $p$. Then, it filters the edges with low weights. Finally, it checks the mutual agreement for each edge.
\begin{figure*
\centering
\includegraphics[width=\textwidth,height=20cm,keepaspectratio]{Fig-03.png}
\caption{The process of computing $\sigma_p$ for a point $p$. (Best viewed in color)}
\label{Fig:Fig-03}
\end{figure*}
\subsection{Finding the value of $\sigma_p$}
\label{sigmap}
To compute pairwise similarities, we used the similarity measure introduced by \cite{RN237}, which is defined as follows:
\begin{equation}
A_{pq}=\exp{\left(\frac{-d^2\left(p,q\right)}{\sigma_p \sigma_q\ }\right)}.
\label{Eq-003}
\end{equation}
\noindent
where $d\left(p,q\right)$ is the distance between points $p$ and $q$, and $\sigma_p$ and $\sigma_q$ are the local scales at points $p$ and $q$, respectively. What is good about this similarity measure is that it uses two sources of information to compute the pairwise similarity: 1) the distance between the points, and 2) the surrounding density of each point. Points belonging to clusters with different densities would have a low similarity even if they are separated by a small distance. This makes the measure superior for highlighting different clusters separated by a small distance.
One problem that arises from using this measure in equation \ref{Eq-003} is how to set the value of $\sigma_p$ in the denominator. In previous studies, it was set as the distance to the 7th neighbor \cite{RN237,RN274}. However, there is no evidence that the distance to the 7th neighbor would work in every dataset. Using the data to select this parameter would be more practical.
The idea behind the parameter $\sigma_p$ is to measure the sparseness of a cluster. If $p$ lies in a sparse cluster, it would have a large $\sigma_p$; whereas if $p$ lies in a dense cluster, it would have a small $\sigma_p$. To achieve this, we need to exclude neighbors whose local density differs from that of $p$ from the computation of $\sigma_p$. We used a smooth histogram of distances to characterize the local density of the neighbors of $p$ (as shown in Figure\ \ref{Fig:Fig-03}). The intuition is that if a neighbor has a different local density than $p$, this will appear as a peak in the histogram. The histogram bin values for each point are smoothed using the moving weighted average (MWA). The smoothing was designed as follows:
\begin{equation}
MWA_{i}={\frac{v_{i-1}+v_{i}+v_{i+1}}{r_{i-1}+r_{i}+r_{i+1}}},
\label{Eq-004}
\end{equation}
\noindent
where $v$ is the value of the bin and $r$ is the rank of the bin, with $r=1$ being the bin containing the closest points to the point $p$. This smoothing assigns weights to the bins based on their distance from $p$, with high weights assigned to closer bins and low weights assigned to bins further away.
The histogram threshold tells us that up to the Kth neighbor, the local density of $p$ has not changed. Then, we compute $\sigma_p$ as the mean distance from the 1st to the Kth neighbor. This process is described in statements 4 to 9 in Algorithm \ref{Alg:Alg-01}.
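A minimal sketch of this computation of $\sigma_p$ is given below (Python; the shared bin width is assumed to be precomputed with the Freedman--Diaconis rule on all $k$-nn distances, and taking the first bin whose count exceeds its smoothed value as the density change is our reading of Algorithm \ref{Alg:Alg-01}, not the released code).
\begin{verbatim}
import numpy as np

def local_sigma(dist_p, bin_width):
    """Compute sigma_p from the sorted distances of p to its k_max neighbors."""
    # Histogram of neighbor distances with the globally fixed bin width.
    n_bins = max(1, int(np.ceil(dist_p.max() / bin_width)))
    counts, edges = np.histogram(dist_p, bins=n_bins,
                                 range=(0.0, n_bins * bin_width))

    # Moving weighted average: neighboring bin values weighted by bin rank.
    ranks = np.arange(1, n_bins + 1, dtype=float)
    v = np.pad(counts.astype(float), 1)   # pad so every bin has two neighbors
    r = np.pad(ranks, 1)
    mwa = (v[:-2] + v[1:-1] + v[2:]) / np.maximum(r[:-2] + r[1:-1] + r[2:], 1.0)

    # First bin exceeding its smoothed value marks a change in local density.
    above = np.where(counts > mwa)[0]
    cut_edge = edges[above[0]] if above.size else dist_p.max()

    # sigma_p is the mean distance to the neighbors before the density change.
    kept = dist_p[dist_p <= cut_edge]
    return kept.mean() if kept.size else dist_p.mean()
\end{verbatim}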
\subsection{Reducing the graph edges}
\label{ReduceGraphEdge}
Once we have $\sigma_p$ for each point, we can calculate the pairwise similarities using the formula in equation \ref{Eq-003}, as shown in statements 10 to 14 in Algorithm \ref{Alg:Alg-01}. Large values indicate highly similar points, whereas low values indicate dissimilarity. We build another histogram of all the pairwise similarities using the Freedman–Diaconis rule \cite{RN387} as shown in Figure\ \ref{Fig:Fig-05}. For each point, similarities lower than the threshold $T_p$ are eliminated. If the maximum similarity is larger than the mean plus the standard deviation $\mu + \sigma$, the threshold is set as $T=\mu + \sigma$. If not, the threshold is set as $T=\mu - \sigma$. Figure\ \ref{Fig:Fig-05} shows the included similarities as blue bins and the excluded similarities as red bins. The graph edges are defined as:
\begin{equation}
(p,q) \in E(G) \Leftrightarrow A_{pq} > T_p.
\label{Eq-005}
\end{equation}
\noindent
where $(p,q)$ is the edge between points $p$ and $q$. $A_{pq}$ is the weight assigned to the edge $(p,q)$. This process is described in statements 15 to 21 in Algorithm \ref{Alg:Alg-01}.
The last step of our reduction method is to build a mutual graph. In a mutual graph, a pair of points must agree to accept an edge. The graph $G$ is thus defined as:
\begin{equation}
(p,q) \in E(G) \Leftrightarrow A_{pq} > T_p \quad \text{and} \quad A_{qp} > T_q.
\label{Eq-006}
\end{equation}
\noindent
where $T_p$ is the threshold of acceptance for the vertex $p$.
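The edge filtering of Eq. (\ref{Eq-005}) and the mutual-agreement check of Eq. (\ref{Eq-006}) can be sketched as follows (Python; the dense-matrix form is only for readability, whereas the actual method operates on the $k$-nn neighbor lists).
\begin{verbatim}
import numpy as np

def reduce_edges(A):
    """Keep, for every point p, only edges above its threshold T_p,
    then keep an edge (p, q) only if both endpoints accepted it.

    A : (N, N) similarity matrix; A[p, q] = 0 where no k-nn edge exists.
    """
    N = A.shape[0]
    keep = np.zeros_like(A, dtype=bool)
    for p in range(N):
        sims = A[p][A[p] > 0]
        if sims.size == 0:
            continue
        mu, sd = sims.mean(), sims.std()
        # T_p = mu + sd if some edge is that strong, otherwise mu - sd.
        T_p = mu + sd if sims.max() > mu + sd else mu - sd
        keep[p] = A[p] > T_p
    # Mutual agreement: both endpoints must accept the edge.
    mutual = keep & keep.T
    return np.where(mutual, A, 0.0)
\end{verbatim}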
\begin{figure*
\centering
\includegraphics[width=0.7\textwidth,height=20cm,keepaspectratio]{Fig-05.png}
\caption{After computing the pairwise similarities, we include highly similar edges for a point $p$. (Best viewed in color)}
\label{Fig:Fig-05}
\end{figure*}
\begin{figure}[!t]
\let\@latex@error\@gobble
\begin{algorithm}[H]
\DontPrintSemicolon
\KwInput{$k$-nn graph where $k=k_{max}$ of $N$ vertices.}
\KwOutput{Reduced graph of $N$ vertices.}
Construct distance matrix $D(N,k_{max})$ of $k$-nn graph
Construct a histogram $H_D$ of all elements in $D$ using FD rule
Save bin width in $H_D$ to the variable $bin_D$
\tcc{The following loop has computations in order of $\mathcal{O}(Nk_{max})$}
\For{$p = 1$ to $N$}
{
Construct a histogram $H_p$ of $D_{p,1 \text{ to } k_{max}}$ using $bin_D$
Apply MWA to bin values in $H_p$ (equation \ref{Eq-004})
Set K$^{\text{th}}$ as the first bin that exceeds MWA threshold
$\sigma_p = \text{mean}(D_{p,1 \text{ to } \text{K}^{\text{th}}})$
}
\tcc{The following loop has computations in order of $\mathcal{O}(Nk_{max})$}
\For{$p = 1$ to $N$}
{
\For{$q = 1$ to $k_{max}$}
{
$A_{p,q}=\exp{\left(\frac{-D\left(p,q\right)^2}{\sigma_p \sigma_q\ }\right)}$
}
}
\tcc{The following loop has computations in order of $\mathcal{O}(Nk_{max})$}
\For{$p = 1$ to $N$}
{
\uIf{$\text{max}(A_{p,1 \text{ to } k_{max}}) > \mu(A_{p,1 \text{ to } k_{max}})+\sigma(A_{p,1 \text{ to } k_{max}})$}
{
Set the entries of $A_{p,1 \text{ to } k_{max}}$ that are smaller than $\mu(A_{p,1 \text{ to } k_{max}})+\sigma(A_{p,1 \text{ to } k_{max}})$ to $0$
}
\Else
{
Set the entries of $A_{p,1 \text{ to } k_{max}}$ that are smaller than $\mu(A_{p,1 \text{ to } k_{max}})-\sigma(A_{p,1 \text{ to } k_{max}})$ to $0$
}
}
Construct a reduced graph using affinity matrix $A(N,k_{max})$
\caption{Reducing a $k$-nearest neighbor graph}
\label{Alg:Alg-01}
\end{algorithm}
\end{figure}
\subsection{Integration with SpectralNet}
\label{SpectralNetApproach}
Our graph filtering method can be seamlessly integrated into the newly proposed spectral clustering using deep neural networks (SpectralNet) \cite{RN360}. SpectralNet uses Siamese nets \cite{RN426} to learn affinities between data points. Siamese nets expect the user to label which pairs are positive and which are negative. An unsupervised pairing uses the $k$-nearest neighbors, where the nearest neighbors are positive pairs and the farthest neighbors are negative pairs. Our graph filtering can be used to obtain positive and negative pairs. It offers the advantage of setting the number of pairs per point dynamically. This cannot be achieved using $k$-nearest neighbors, where all the points are restricted to have a fixed number of positive pairs. Also, we do not have to set $k$ manually. We let our method assign the positive and negative pairs for each point, as sketched below. A pseudocode illustrating the steps of the proposed method is shown in Algorithm \ref{Alg:Alg-01}.
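As an illustration, the pairs passed to the Siamese net can be derived from the reduced graph as in the sketch below (Python; drawing negatives uniformly from the non-neighbors of each point is our assumption and may differ from the actual implementation).
\begin{verbatim}
import numpy as np

def siamese_pairs(A_reduced, rng=np.random.default_rng(0)):
    """Build positive/negative index pairs for a Siamese net
    from the reduced mutual graph (0 = no edge)."""
    N = A_reduced.shape[0]
    positives, negatives = [], []
    for p in range(N):
        neighbors = np.flatnonzero(A_reduced[p])   # edges kept by the reduction
        positives.extend((p, q) for q in neighbors)
        non_neighbors = np.setdiff1d(np.arange(N), np.append(neighbors, p))
        if neighbors.size and non_neighbors.size:
            # one negative per positive, drawn from the non-neighbors of p
            sampled = rng.choice(non_neighbors, size=neighbors.size,
                                 replace=non_neighbors.size < neighbors.size)
            negatives.extend((p, q) for q in sampled)
    return positives, negatives
\end{verbatim}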
\section{Introduction}
\vspace{-0.1cm}
Feature Selection (FS) aims to select an optimal feature subset for a downstream predictive task.
Effective FS can help to reduce dimensionality, shorten training times, enhance generalization, avoid overfitting, improve predictive accuracy, and provide better interpretation and explanation.
In real-world practices, there are two major challenges for FS: 1) generalization and 2) robustness.
Firstly, different selection algorithms use different criteria to select representative features, making it difficult to find the best algorithm for cross-domain datasets. Generalization in FS aims to answer: how can we achieve high accuracy across domains?
Secondly, some domains (e.g., biomedical) involve a huge number of features, but sample sizes are limited due to costs, privacy, and ethics.
When data are high-dimensional, pairwise distances tend to become indistinguishable, and data patterns are non-discriminative. Meanwhile, high dimensionality can increase FS complexity and time costs in a discrete space. When data are low sample-sized, distributions are sparse, and data patterns are unclear.
Robustness in FS intends to answer: how can we automatically identify an effective yet small-sized feature subset with input dimensionality-agnostic time costs?
Relevant studies can only partially solve the two challenges.
Classic FS algorithms can be grouped into three categories:
(i) filter methods (e.g., univariate FS \cite{kbest,forman2003extensive}, correlation-based FS \cite{hall1999feature,yu2003feature}), in which features are ranked by a specific score.
However, feature relevance or redundancy scores are usually domain-specific, non-learnable, and cannot generalize well to all applications.
(ii) wrapper methods (e.g., evolutionary algorithms \cite{yang1998feature,kim2000feature}, branch and bound algorithms \cite{narendra1977branch,kohavi1997wrappers}), in which the optimal feature subset is identified by a search strategy that collaborates with predictive tasks.
However, such methods have to search a large feature space of $2^N$ feature subspace candidates, where $N$ is the number of input features.
(iii) embedded methods (e.g., LASSO \cite{lasso}, decision tree \cite{sugumaran2007feature}), in which FS is part of the optimization objective of predictive tasks.
However, such methods are subject to the strong structured assumptions of predictive models (e.g., the L1 norm assumption of LASSO). That is, FS and predictive models are coupled together, lack flexibility, and are hard to generalize to other predictors.
Therefore, we need a novel perspective to derive the novel formulation and solver of generalized and disruption-robust FS.
\noindent\textbf{Our contributions: a discrete subsetting as continuous optimization perspective.}
We formulate the problem of discrete FS as a gradient-based continuous optimization task.
We propose a new perspective: the discrete decisions (e.g., select or deselect) of FS can be embedded into a continuous embedding space and thereafter solved by a more effective gradient-based solution.
We show that this perspective can be implemented by feature subset encoding, gradient-optimized search, and reconstruction.
Training deep models requires big training data, and we find that reinforcement FS can be used as a tool to automatically generate features-accuracy pairs in order to increase the automation, diversity, and scale of training data preparation.
We demonstrate that integrating both reinforcement-generated and classic selection algorithms-generated experiences learns a better feature subset embedding space because: 1) using reinforcement is to crowd-source unknown exploratory knowledge and 2) using classic selection algorithms is to exploit existing peer knowledge.
We highlight that a globally-described discriminative embedding space, along with the joint objective of minimizing feature subset reconstruction loss and accuracy estimation loss, can strengthen the denoising ability of gradient search, eliminate noisy and redundant features, and yield an effective feature subset.
We observe that, by representing feature subsets into fixed-sized embedding vectors, the time costs of gradient-based optimization are input dimensionality-agnostic (only relates to embedding dimensionality).
The feature subset decoder can automatically reconstruct the optimal selected features and does not require manually identifying the number of best features.
\noindent\textbf{Summary of Proposed Approach.} Inspired by these findings, this paper presents a generic and principled framework for deep differentiable feature selection.
It has two goals: 1) generalized across domains; 2) disruption-robust: overcome training data bottlenecks, reduce feature subset size while maintaining accuracy, and control time costs against input dimensionality.
To achieve Goal 1, we achieve FS via a pipeline of representation-search-reconstruction.
Specifically, given a downstream predictor (e.g., decision tree), we collect the features-accuracy pairs from diverse FS algorithms so that training data represent the diversity of samples.
An embedding space is learned to map feature subsets into vectors.
We advance the representability and generalization of the embedding space by exploiting the diverse and representative train samples to optimize the joint loss of feature subset reconstruction and accuracy evaluation.
We leverage the accuracy evaluator as feedback to infer gradient direction and degree to improve the search for optimal feature subset embeddings.
To achieve Goal 2, we devise different computing strategies: 1) We develop a multi-agent reinforcement FS system that can automatically generate large-scale high-quality features-accuracy pairs in a self-optimizing fashion and overcome training data bottlenecks.
2) We reformulate a feature subset as an alphabetical non-ordinal sequence. We devise the strategy of jointly minimizing not just accuracy evaluation loss but also sequence reconstruction loss in order to position the gradient toward a more denoising direction over the continuous embedding space to reconstruct a shorter feature subset.
We find that the reduction of feature space size improves generalization.
3) Since we embed feature subsets into a fixed-size embedding space, gradient search in a fixed-size space is input dimensionality agnostic.
Finally, we present extensive experimental results to show the small-sized, accurate, input-dimensionality agnostic, and generalized properties of our selected features.
\iffalse
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figure/intro_pf.pdf}
\caption{The construction of continuous representation from discrete subsetting. On the left side is an exhaustive enumeration of the subsets. Our proposed reinforcement-learning-based data collection can sample some high-quality subsets from the left and construct the continuous representation space as the right side shows. }
\label{intro_fig}
\end{figure}
\fi
\section{Preliminaries}
Before introducing the problem definition, we start by revisiting the essence of the classic machine learning problems and feature selection techniques.
Suppose $\mathcal{D} = \{D_{train}, D_{val}\}$ is a given dataset.
$D_{(\cdot)}=\{X_{(\cdot)}, y_{(\cdot)}\}$ represents the training or validation split, where $X_{(\cdot)}$ is features and $y_{(\cdot)}$ is the predictive target.
The classic machine learning problem will optimize a given model with the training data and then evaluate the validation dataset.
Formally, given a downstream machine learning method $M: X \to \hat{y}$ that has converged on the training set, we can quantify the prediction accuracy $a$ with a metric $A$, denoted as $a=A(M(X_{val}), y_{val})$.
Given a feature selection method $S$, we obtain the selection result on the training set via $S: D_{train} \to s$, where $s$ denotes a sequence of the selected column indices.
Using the index $s$, we can obtain a better dataset $D' = \{X[s], y\}$. To evaluate the selection result, a simple solution is to report the downstream performance on the validation split of $D'$.
Now we can give a formal definition of our goal. Suppose we have an Encoder $E_\phi: s \to e$ that can embed the given selection $s$ to a continuous representation $e$.
And we have a Generator $G_\psi: e\to \hat{s}$ that can reconstruct a selection sequence via a given hidden state $e$.
Meanwhile, we have a Predictor $P_\omega: e\to \hat{a}$, which is designed to estimate the performance of input embedding.
Our target is to build a search function $F: e \to \mathcal{E}$ that revises $e$ into a set of hidden states $\mathcal{E}=\{e^+_i\}_{i=1}^{k}$. Each element $e^+_i$ represents a candidate optimal hidden state, and $k$ is a hyperparameter limiting the number of generated candidates. Mathematically, we can present this as:
\begin{equation}
\begin{aligned}
e^* &= \argmax_{e^+_i \in \mathcal{E}} A(M(X_{train}[G_{\psi^*}(e^+_i)]), y_{train}), \\
\text{s.t. } & \phi^*, \psi^* ,\omega^* = \argmin_{\phi, \psi, \omega} \mathcal{L}( E_\phi,G_\psi,P_\omega; s, a),
\end{aligned}
\end{equation}
where $\mathcal{L}$ denotes the loss function for training $E_\phi$, $G_\psi$, and $P_\omega$.
After obtaining the optimal hidden state $e^*$, we can generate an optimal feature selection index $s^*$ via Generator $G_{\psi^*}$ and construct a validation dataset to evaluate the result.
\section{Problem Statement}
\vspace{-0.1cm}
Our goal is to develop a generalized and robust deep differentiable FS framework.
Formally, given a dataset $D=\{X, y\}$, where $X$ is a feature set, and $y$ is predictive target labels.
We apply classic FS algorithms to $D$ to collect $n$ feature subset-accuracy pairs as training data, denoted by $R=\{(\mathbf{f}_i, v_i)\}_{i=1}^n$, where $\mathbf{f}_i=[f_1,f_2,\cdots,f_T]$ is the feature ID token set of the $i$-th feature subset, and $v_i$ is the corresponding downstream predictive accuracy.
Thereafter, we pursue two aims:
1) constructing an optimal continuous space of feature subsets.
We learn a mapping function $\phi$, a reconstructing function $\psi$, and an evaluation function $\omega$ via joint optimization to convert $R$ into a continuous embedding space $\mathcal{E}$, in which each embedding vector represents the feature ID token set of a feature subset and corresponding model performance.
2) searching the optimal feature subset.
We adopt gradient-based search to find the optimal feature token set $\mathbf{f}^{*}$, given by:
\begin{equation}
\mathbf{f}^{*} = \psi(\mathbf{E}^{*}) = \argmax_{\mathbf{E}\in\mathcal{E}} \mathcal{A}(\mathcal{M}(X[\psi(\mathbf{E})]),y),
\end{equation}
where $\psi$ is a reconstruction function to reconstruct a feature ID token set from any embedding of $\mathcal{E}$;
$\mathbf{E}$ is an embedding vector in $\mathcal{E}$ and $\mathbf{E}^{*}$ is the optimal one;
$\mathcal{M}$ is the downstream ML model and $\mathcal{A}$ is the performance indicator.
We apply $\mathbf{f}^{*}$ to $X$ to select the optimal feature subset $X[\mathbf{f}^*]$ to maximize downstream ML model performances.
\section{Deep Differentiable Feature Selection}
\vspace{-0.1cm}
We present an overview of our method, then introduce the technical details of each component.
\begin{figure*}[!h]
\centering
\includegraphics[width=\linewidth]{figure/Framework.pdf}
\caption{An overview of our framework. GAINS\ is made up of four main components: 1) feature-accuracy training data preparation; 2) deep feature subset embedding; 3) gradient-optimized best embedding search; 4) optimal feature subset reconstruction.}
\label{model_overview}
\vspace{-0.5cm}
\end{figure*}
\subsection{Framework Overview}
Figure~\ref{model_overview} shows the overview of our framework, including four steps: 1) feature-accuracy training data preparation; 2) deep feature subset embedding; 3) gradient-optimized best embedding search; 4) optimal feature subset reconstruction.
Step 1 is to collect selected feature subsets and corresponding predictive performances as training data for deep feature subset embedding.
In particular, to exploit existing peer knowledge, we utilize classical feature selection methods (e.g., K-Best, mRMR) to collect feature subset-accuracy records;
to explore crowdsource unknown knowledge, we utilize reinforcement learning (RL) based feature selector to collect diverse feature subset-accuracy records.
Step 2 is to develop a deep encoder-evaluator-decoder model to learn the optimal embedding space of feature subsets.
In particular, given a feature subset, the encoder maps the feature subset ID tokens into a continuous embedding vector;
the evaluator optimizes the embedding vector along the gradient direction by predicting the corresponding model performance;
the decoder reconstructs the feature ID tokens using feature subset embedding vectors.
We learn the optimal embedding space by jointly optimizing the reconstruction and evaluation losses of the encoder, evaluator, and decoder.
Step 3 aims to expedite the gradient-optimized search of optimal feature subset embedding vector in the embedding space.
In particular, we first select top-K historical feature subset-accuracy pairs as search seeds (starting points) and obtain corresponding embeddings using the well-trained encoder.
We then search from these embeddings, moving with a small step size in the gradient direction of performance improvement, to identify optimal embeddings.
Step 4 is to exploit the well-trained decoder to reconstruct the optimal feature ID tokens as candidate feature subsets from these identified embeddings.
We evaluate these candidate feature subsets with a downstream ML model to present the best feature subset with the highest performance.
\subsection{Feature-Accuracy Training Data Preparation}
We collect a set of feature-accuracy pairs as training data to construct an effective continuous space that depicts the properties of the original feature set.
\noindent\textbf{Leveraging exploitation and exploration for automated training data preparation.}
The diversity, scale, and comprehensiveness of the training data population are essential for learning an effective embedding space.
The training data is collected via two strategies. 1) \textit{exploitation of existing FS algorithms}: we apply classical feature selectors (e.g., KBest, Lasso) to the given feature set. Each algorithm represents one algorithmic perspective and produces a small number of feature subset-accuracy records, so it is challenging to collect large-scale, diverse training data by exploitation alone.
2) \textit{exploration of other unknown candidate subsets}: We find that reinforcement learning can be used to automatically explore and select various feature subsets to evaluate corresponding feature subset performance~\cite{marlfs}. In other words, reinforcement feature selection can be viewed as a tool for automated training data preparation.
In particular, we develop a multi-agent reinforcement feature selection system, in which each agent selects or deselects a feature in order to progressively explore feature subsets and find high-accuracy, low-redundancy feature subsets. The reinforcement feature selection system leverages randomness and self-optimization to automatically generate high-quality, diverse, and comprehensive feature subset-accuracy records and thus overcome data sparsity.
\noindent\textbf{Leveraging order agnostic property to augment training data.}
To learn a feature subset embedding space, we view a feature subset as a token sequence and exploit seq2vec models to embed a feature subset as a vector.
Since the feature subset is order-agnostic, the token sequence is order-agnostic.
We leverage the order-agnostic property to augment the feature subset-accuracy training data by shuffling and rearranging the orders of selected features in a feature token sequence.
Formally, a selected feature subset and its corresponding predictive accuracy are denoted by $\{\mathbf{f},v\}$, where $\mathbf{f} = [f_1,f_2,\cdots,f_T]$ is the sequence of ID tokens of the selected features and $v$ is the corresponding performance.
We shuffle the token order to obtain a new ordering of the $T$-length feature subset, e.g., $\mathbf{\tilde{f}}=[f_3,f_T,\cdots,f_1]$.
We add the new shuffled token sequence and corresponding accuracy into the training data to enhance data diversity and comprehensiveness.
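For concreteness, a small Python sketch of this augmentation step is given below; the function and variable names (e.g., \texttt{augment\_by\_shuffling}) are illustrative assumptions rather than part of our released code, and the 25 shuffles per record match the setting used in our experiments.
\begin{verbatim}
import random

def augment_by_shuffling(records, n_shuffles=25):
    """Expand (feature-ID sequence, accuracy) pairs by shuffling token order."""
    augmented = []
    for tokens, acc in records:
        augmented.append((list(tokens), acc))   # keep the original ordering
        for _ in range(n_shuffles):
            shuffled = list(tokens)
            random.shuffle(shuffled)            # order-agnostic: accuracy unchanged
            augmented.append((shuffled, acc))
    return augmented

# Example: one record with tokens [3, 7, 12] and accuracy 0.81 yields 26 pairs.
pairs = augment_by_shuffling([([3, 7, 12], 0.81)])
\end{verbatim}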
\subsection{Deep Feature Subset Embedding}
\textbf{Why effective embedding space construction matters.}
Most classic feature selection algorithms are of a discrete choice formulation.
Such a formulation results in suboptimal performance since it is hard to enumerate all possible feature combinations in a high-dimensional dataset.
To efficiently search the optimal feature subset, it is crucial to change FS from searching in discrete space to searching in continuous space, where gradient-based optimization can be used for faster and more effective selection.
\noindent\textbf{Leveraging the encoder-evaluator-decoder framework to embed feature subset tokens.}
We aim to construct an effective continuous embedding space in order to search for better feature subsets using gradient-based search in the gradient direction of higher performance.
We develop a novel learning framework that includes the encoder, evaluator, and decoder.
Next, we use $\mathbf{f}$ to denote the feature ID tokens of selected features and $v$ to denote the corresponding model performance.
\noindent\textbf{The feature subset encoder $\phi$:}
The encoder is to learn a mapping function $\phi$ that takes $\mathbf{f}$ as input, and outputs its continuous embedding $\mathbf{E}$, denoted by $\phi(\mathbf{f}) = \mathbf{E}$.
The encoder is designed based on a single layer of Long Short-Term Memory~\cite{lstm} (LSTM), where $\mathbf{f} \in\mathbb{R}^{1\times T}$ and $T$ is the length of the feature subset.
After inputting $\mathbf{f}$ into $\phi$, we collect each feature ID token embedding to form $\mathbf{E}$.
Here, $\mathbf{E} = [\mathbf{h}_1, \mathbf{h}_2,\cdots, \mathbf{h}_T] \in \mathbb{R}^{T\times d}$, where $\mathbf{h}_t \in \mathbb{R}^{1\times d}$ is the $t$-th token embedding with dimension $d$.
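A minimal PyTorch sketch of such an encoder is shown below, assuming a token embedding size of 32 and a hidden size of 64 as in our hyperparameter settings; the class name and interface are illustrative.
\begin{verbatim}
import torch
import torch.nn as nn

class SubsetEncoder(nn.Module):
    def __init__(self, vocab_size, token_dim=32, hidden_dim=64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, token_dim)  # feature ID -> vector
        self.lstm = nn.LSTM(token_dim, hidden_dim, num_layers=1, batch_first=True)

    def forward(self, f_tokens):          # f_tokens: (batch, T) feature ID tokens
        x = self.token_emb(f_tokens)      # (batch, T, token_dim)
        E, _ = self.lstm(x)               # E = [h_1, ..., h_T], (batch, T, hidden_dim)
        return E
\end{verbatim}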
\noindent\textbf{The feature subset decoder $\psi$:}
The decoder $\psi$ aims to reconstruct the feature ID tokens $\mathbf{f}$ from an embedding vector $\mathbf{E}$, denoted by $\psi(\mathbf{E})=\mathbf{f}$.
Similar to the encoder, we employ a single-layer LSTM to implement the decoder.
We train the decoder in an autoregressive manner.
The initial input of the decoder is the last embedding of $\mathbf{E}$, denoted by $\mathbf{\hat{h}}_0=\mathbf{h}_T$.
We take the $i$-th step in LSTM as an example to demonstrate the calculation process.
Specifically, we input $\mathbf{\hat{h}}_{i-1}$ and the $i$-th feature ID token in $\mathbf{f}$ into the LSTM layer to get the current embedding $\mathbf{\hat{h}}_i$.
The output of the decoder and the input of the encoder share the same feature ID token dictionary.
Thus, to forecast the next most possible token easily, we utilize the attention mechanism~\cite{attention} to learn the attention weights between $\mathbf{\hat{h}}_i$ and each token embedding in $\mathbf{E}$.
Then, we generate the enhanced embedding $\mathbf{\tilde{h}}_i$ by aggregating the knowledge of $\mathbf{E}$ using the attention weights.
This process can be defined by:
\begin{equation}
\begin{aligned}
\mathbf{\Tilde{h}}_i=\sum_{\mathbf{h}_j \in \mathbf{E}}a_{ij} \mathbf{h}_j, \text{ where } a_{ij} = \frac{\exp(\mathbf{\hat{h}}_i\cdot \mathbf{h}_j)}{\sum_{\mathbf{h}_k\in \mathbf{E}}\exp(\mathbf{\hat{h}}_i\cdot \mathbf{h}_k)}.
\end{aligned}
\end{equation}
where $\mathbf{h}_j$ is the $j$-th embedding from $\mathbf{E}$ and $a_{ij}$ is the attention weight between $\mathbf{\hat{h}}_i$ to $\mathbf{h}_j$.
Later, we concatenate $\mathbf{\hat{h}}_i$ and $\mathbf{\Tilde{h}}_i$ together and input them into a fully-connected layer activated by Softmax to estimate the distribution of each feature ID token for the $i$-th step, which can be formulated by:
\begin{equation}
P_{\psi}(f_i|\mathbf{E}, \mathbf{f}_{<i}) = \frac{\exp(\mathbf{W}_{\mathbf{h}_i}\text{ concat}(\mathbf{\tilde{h}}_i,\mathbf{\hat{h}}_i))}{\sum_{\mathbf{h}_k \in \mathbf{E}}\exp({\mathbf{W}_{\mathbf{h}_k}\text{ concat}(\mathbf{\tilde{h}}_i,\mathbf{\hat{h}}_i))}},
\end{equation}
where $\text{concat}(\cdot)$ is the concatenation operation, $\mathbf{W}_{(\cdot)}$ represents a weight matrix, and $\mathbf{f}_{< i}$ represents the tokens generated before the $i$-th step.
We take the token with the highest probability as the predicted feature ID token, denoted by $f_i$.
By multiplying the probabilities across steps, the probability distribution of the whole token sequence $\mathbf{f}$ can be represented by:
\begin{equation}
P_{\psi}(\mathbf{f}|\mathbf{E}) = \prod_{t=1}^{T}P_{\psi}(f_t|\mathbf{E}, \mathbf{f}_{<t}).
\end{equation}
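The following sketch illustrates one decoding step with the attention mechanism above. For readability, it replaces the token-tied output weights of the preceding equation with a standard softmax layer over the feature ID dictionary, so it should be read as a simplified approximation under that assumption, not our exact implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubsetDecoderStep(nn.Module):
    def __init__(self, vocab_size, token_dim=32, hidden_dim=64):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, token_dim)
        self.lstm_cell = nn.LSTMCell(token_dim, hidden_dim)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)  # concat(h_tilde, h_hat) -> logits

    def forward(self, prev_token, state, E):
        # prev_token: (batch,) previous feature ID; state: (h_hat, c); E: (batch, T, hidden)
        h_hat, c = self.lstm_cell(self.token_emb(prev_token), state)
        scores = torch.bmm(E, h_hat.unsqueeze(-1)).squeeze(-1)   # dot products with each h_j
        attn = F.softmax(scores, dim=-1)                         # attention weights a_ij
        h_tilde = torch.bmm(attn.unsqueeze(1), E).squeeze(1)     # weighted sum over E
        logits = self.out(torch.cat([h_tilde, h_hat], dim=-1))   # P(f_i | E, f_<i)
        return logits, (h_hat, c)
\end{verbatim}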
\noindent\textbf{The Feature Subset Evaluator $\omega$:}
The evaluator $\omega$ aims to predict the model performance given the continuous embedding of $\mathbf{E}$.
Specifically, we apply the mean pooling to the embedding $\mathbf{E}$ in a column-wise manner to obtain the embedding $\mathbf{\Bar{\mathbf{e}}}\in\mathbb{R}^{1\times d}$.
We then input it into a fully-connected layer to predict the model performance $\Ddot{v}$, which can be defined by:
$
\Ddot{v} = \omega(\bar{\mathbf{e}}).
$
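A compact sketch of the evaluator is shown below; the 200-dimensional hidden layer matches the predictor size in our experiments, while the module and variable names are illustrative.
\begin{verbatim}
import torch.nn as nn

class SubsetEvaluator(nn.Module):
    def __init__(self, hidden_dim=64, mlp_dim=200):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, mlp_dim), nn.ReLU(), nn.Linear(mlp_dim, 1)
        )

    def forward(self, E):                   # E: (batch, T, hidden_dim)
        e_bar = E.mean(dim=1)               # column-wise mean pooling -> (batch, hidden_dim)
        return self.mlp(e_bar).squeeze(-1)  # predicted accuracy
\end{verbatim}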
\noindent\textbf{The joint optimization.}
We jointly optimize the encoder, decoder, and evaluator.
There are two goals:
1) reconstructing the feature ID tokens $\mathbf{f}$.
To achieve this goal, we minimize the negative log-likelihood of $\mathbf{f}$ given $\mathbf{E}$, defined by:
\begin{equation}
\mathcal{L}_{rec}=-\log P_{\psi}(\mathbf{f}|\mathbf{E})=-\sum_{t=1}^{T} \log P_{\psi}(f_t|\mathbf{E}, \mathbf{f}_{<t}).
\end{equation}
2) minimizing the difference between the predicted performance $\Ddot{v}$ and the real one $v$.
To achieve this goal, we minimize the
Mean Squared Error (MSE) between $\Ddot{v}$ and $v$, defined by:
\begin{equation}
\mathcal{L}_{est} = \text{MSE}(v, \Ddot{v}).
\label{p_loss}
\end{equation}
We integrate the two losses to form a joint training loss $\mathcal{L}$, defined by:
\begin{equation}
\mathcal{L} = \lambda \mathcal{L}_{rec} + (1-\lambda)\mathcal{L}_{est},
\end{equation}
where $\lambda$ is the trade-off hyperparameter to balance the contribution of the two objectives during the learning process.
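The joint objective can be computed in a few lines, as sketched below with $\lambda=0.8$ (the value used in our experiments); the tensor shapes and argument names are assumptions for illustration only.
\begin{verbatim}
import torch.nn.functional as F

def joint_loss(logits, targets, v_pred, v_true, lam=0.8):
    # L_rec: negative log-likelihood of the token sequence (cross-entropy per step)
    # logits: (batch, T, vocab_size); targets: (batch, T) ground-truth feature IDs
    rec = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    # L_est: mean squared error between predicted and observed accuracy
    est = F.mse_loss(v_pred, v_true)
    return lam * rec + (1.0 - lam) * est
\end{verbatim}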
\subsection{Gradient-Optimized Best Embedding Search}
\textbf{Why selecting search seeds matters.}
When the encoder-evaluator-decoder model converges, we perform a gradient-based search in the well-built continuous space for better feature selection results.
Similar to the significance of initialization for deep neural networks, good starting points can accelerate the search process and enhance its performance.
These points are called ``search seeds''.
Thus, we first rank the selection records based on the corresponding model performance.
The top-K selection records are then chosen as search seeds.
\noindent\textbf{Gradient-ascent Optimizer Search.}
Assume that one of the top-K selection records is $(\mathbf{f}, v)$.
We input $\mathbf{f}$ into the encoder to obtain the embedding $\mathbf{E}=[\mathbf{h}_1, \mathbf{h}_2,\cdots, \mathbf{h}_T]$.
Then, we move each embedding in $\mathbf{E}$ along the gradient direction induced by the evaluator $\omega$:
\begin{equation}
\mathbf{h}_t^+=\mathbf{h}_t + \eta \frac{\partial \omega}{\partial \mathbf{h}_t}, \quad \mathbf{E}^+=\{\mathbf{h}_1^+,\mathbf{h}_2^+,\cdots,\mathbf{h}_T^+\},
\end{equation}
where $\eta$ is the step size and $\mathbf{E}^+$ is the enhanced embedding.
We set $\eta$ within a reasonable range (i.e., small enough) so that the predicted performance of the enhanced embedding $\mathbf{E}^+$ is better than that of $\mathbf{E}$, denoted by $\omega(\mathbf{E}^+) \geq \omega(\mathbf{E})$.
We conduct the search process for each record and collect these enhanced embeddings, denoted by $[\mathbf{E}^+_1,\mathbf{E}^+_2,\cdots,\mathbf{E}^+_K]$.
Since we have embedded discrete selection records into $d$ dimensional embedding space, the time complexity of the searching process is independent of the size of the original feature set.
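A hedged sketch of this search loop is given below; \texttt{encoder} and \texttt{evaluator} refer to the trained modules above, while the step size and number of steps are illustrative rather than tuned values.
\begin{verbatim}
import torch

def gradient_search(encoder, evaluator, f_tokens, eta=1e-3, n_steps=10):
    E = encoder(f_tokens).detach().requires_grad_(True)    # seed embedding from a top-K record
    for _ in range(n_steps):
        score = evaluator(E).sum()                          # predicted accuracy omega(E)
        grad, = torch.autograd.grad(score, E)
        E = (E + eta * grad).detach().requires_grad_(True)  # h_t^+ = h_t + eta * d omega / d h_t
    return E.detach()                                       # enhanced embedding E^+
\end{verbatim}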
\subsection{Optimal Feature Subset Reconstruction}
The enhanced embeddings indicate possible better feature selection results.
To find the optimal one, we reconstruct the feature ID tokens from them.
Specifically, we input $[\mathbf{E}^+_1,\mathbf{E}^+_2,\cdots,\mathbf{E}^+_K]$ into the well-trained decoder to get the reconstructed feature ID tokens $[\mathbf{f}_1^+,\mathbf{f}_2^+,\cdots,\mathbf{f}_K^+]$.
We do not need to set the number of feature ID tokens before decoding; instead, we generate tokens one by one until the stop token is produced, similar to natural language generation.
Then, we use them to select different feature subsets from the original feature set.
Finally, we adopt the downstream ML model to evaluate these subsets and output the optimal feature ID tokens $\mathbf{f}^*$, which can be used to select the best feature subset with the highest model performance.
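As a sketch, the reconstruction and evaluation steps can be combined as follows. Here \texttt{decoder\_step} wraps a single decoding step of the trained decoder, the start/stop token IDs are placeholders, and the Random Forest with five-fold cross-validation mirrors our experimental setup; all names are illustrative.
\begin{verbatim}
import torch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def decode_subset(decoder_step, E, start_token, stop_token, max_len=100):
    tokens, prev = [], torch.tensor([start_token])
    state = (E[:, -1, :], torch.zeros_like(E[:, -1, :]))  # init from last encoder embedding h_T
    for _ in range(max_len):
        logits, state = decoder_step(prev, state, E)
        nxt = logits.argmax(dim=-1)
        if nxt.item() == stop_token:                      # stop token ends the subset
            break
        tokens.append(nxt.item())
        prev = nxt
    return sorted(set(tokens))                            # selected feature column indices

def best_subset(candidates, X, y):
    scores = [cross_val_score(RandomForestClassifier(), X[:, s], y, cv=5).mean()
              for s in candidates]
    return candidates[scores.index(max(scores))]          # subset with the best accuracy
\end{verbatim}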
\section{Experiments}
\subsection{Experimental Setup}
\noindent{\bf Dataset Description.}
We conducted extensive experiments using 19 publicly available datasets from UCI, OpenML, CAVE, Kaggle, and LibSVM.
These datasets are categorized into 3 folds based on the types of ML tasks: 1) binary classification (C); 2) multi-class classification (MC); 3) regression (R).
Table \ref{table_overall_perf} shows the statistics of these datasets, which are accessible via the website address provided in the Abstract.
\input{table/main_comparison}
\noindent{\bf Evaluation Metrics.}
For the binary classification task, we adopted F1-score, Precision, Recall, and ROC/AUC.
For the multi-class classification task, we used Micro-F1, Precision, Recall, and Macro-F1.
For the regression task, we utilized 1-Mean Average Error (1-MAE), 1-Mean Square Error (1-MSE), and 1-Root Mean Square Error (1-RMSE).
For all metrics, the higher the value is, the better the model performance is.
\noindent{\bf Baseline Algorithms.}
We compared {GAINS} with 10 widely used feature selection methods: (1) \textbf{K-Best} selects K features with the highest feature scores~\cite{kbest}; (2) \textbf{mRMR} intends to select a feature subset with the greatest relevance to the target and the least redundancy among themselves~\cite{mrmr}; (3) \textbf{LASSO} selects features by regularizing model parameters, which shrinks the coefficients of useless features into zero~\cite{lasso}; (4) \textbf{RFE} recursively removes the weakest features until the specified number of features is reached~\cite{rfe};
(5) \textbf{LASSONet} designs a novel objective function to conduct feature selection in neural networks~\cite{lassonet}; (6) \textbf{GFS} selects features using genetic algorithms, which recursively
generate populations of candidate feature subsets and use a predictive model to evaluate them~\cite{gfe};
(7) \textbf{MARLFS} builds a multi-agent system for selecting features, wherein each agent is associated with a single feature, and feature redundancy and downstream task performance are viewed as incentives~\cite{marlfs};
(8) \textbf{SARLFS} is a simplified version of MARLFS, which uses one agent to determine the selection of all features in order to alleviate computational costs~\cite{sarlfs};
(9) \textbf{RRA} first collects distinct selected feature subsets, and then integrates them based on statistical sorting distributions ~\cite{seijo2017ensemble};
(10) \textbf{MCDM} first obtains a decision matrix using the ranks of features, then assigns feature scores based on the matrix for ensemble feature selection~\cite{mcdm}.
Among these methods, K-Best and mRMR are filter methods; Lasso, RFE, and LassoNet are embedded methods; GFS, SARLFS, and MARLFS are wrapper methods; RRA and MCDM belong to the hybrid feature selection methods.
We randomly split each dataset into two independent sets: the first 80\% is the training set, and the remaining 20\% is the testing set.
We conducted all experiments with the hold-out setting to ensure a fair comparison.
This means that the optimal feature subset index learned on the training set was directly applied to the testing set.
We adopted Random Forest as the downstream machine learning model and reported the performance of each method by running five-fold cross-validation on the testing set.
Random Forest is a robust, stable, and well-tested method; using it reduces performance variation caused by the downstream model and makes it easier to study the impact of feature selection.
\noindent{\bf Hyperparameter Settings and Reproducibility.}
We ran MARLFS for 300 epochs to collect historical feature subsets and the corresponding downstream task performance.
To augment data, we permuted the token order of each feature subset 25 times.
These augmented data can be used for {GAINS } training.
The Encoder and Generator have the same model structure, which is a single-layer LSTM.
The Predictor is made up of 2-layer feed-forward networks.
The hidden state sizes of the Encoder, Generator, and Predictor are 64, 64, and 200, respectively.
The embedding size of each feature index is 32.
To train {GAINS }, we set the batch size as 1024, the learning rate as 0.001, and $\lambda$ as 0.8 respectively.
During the model inference stage, we used the top 25
feature selection records as generation seeds.
\noindent{\bf Environmental Settings}
All experiments were conducted on the Ubuntu 18.04.6 LTS operating system, AMD EPYC 7742 CPU, and 8 NVIDIA A100 GPUs, with the framework of Python 3.9.10 and PyTorch 1.8.1.
\begin{figure*}[!h]
\centering
\subfigure[Spectf]{
\includegraphics[width=4.25cm]{figure/RQ1-2/spectf.pdf}
}
\hspace{-3mm}
\subfigure[German Credit]{
\includegraphics[width=4.25cm]{figure/RQ1-2/german.pdf}
}
\hspace{-3mm}
\subfigure[OpenML\_589]{
\includegraphics[width=4.25cm]{figure/RQ1-2/openml_589.pdf}
}
\hspace{-3mm}
\subfigure[Mice Protein]{
\includegraphics[width=4.25cm]{figure/RQ1-2/mice_protein.pdf}
}
\vspace{-0.35cm}
\caption{The influence of data collection (GAINS$^{-d}$) and data augmentation (GAINS$^{-a}$) in GAINS.}
\label{w/oaug}
\vspace{-0.6cm}
\end{figure*}
\subsection{Experimental Results}
\noindent{\bf Overall Comparison.}
This experiment aims to answer: \textit{Can GAINS\ effectively select the feature subset with excellent performance?}
Table~\ref{table_overall_perf} shows the comparison results in terms of F1-score, Micro-F1, and 1-RAE.
We observed that GAINS\ outperforms other baseline models across all domains and various tasks.
The underlying driver is that GAINS\ converts the discrete feature selection records into a discriminative and effective embedding space by integrating the knowledge of historical selection records.
It enables the gradient-based search to effectively perceive the properties of the original feature set to obtain superior feature subsets.
Another interesting observation is that the performances of various baseline models vary over datasets: high in some datasets, yet low in other datasets.
Such an observation indicates that classic FS methods can address the FS task in a limited number of data environments, but perform unstably across diverse data environments.
Overall, this experiment demonstrates that GAINS\ has effective and robust performance in various data environments and application scenarios.
\smallskip
\noindent{\bf Examining the impact of data collection and augmentation.}
This experiment aims to answer: \textit{Is it essential to collect feature selection records and augment them to maintain GAINS\ performance?}
To establish the control group, we developed two model variants: 1) GAINS$^{-d}$, in which feature selection records are collected at random rather than produced by basic feature selectors;
2) GAINS$^{-a}$, in which the data augmentation process of GAINS\ is removed.
Figure~\ref{w/oaug} shows the comparison results on Spectf, German Credit, OpenML\_589, and Mice Protein.
We found that the performance of GAINS\ is much better than GAINS$^{-d}$.
The underlying driver is that, compared to random generation, selection records produced by feature selectors are more robust and less noisy, which is important for creating a more effective embedding space for searching better feature subsets.
Moreover, we observed that GAINS\ is superior to GAINS$^{-a}$ in all cases.
The underlying driver is that data augmentation can increase the data diversity, resulting in a more robust and effective learning process for GAINS.
Thus, this experiment validates that data collection and augmentation are essential for maintaining GAINS\ performance.
\smallskip
\noindent{\bf Study of scalability.}
This experiment aims to answer: \textit{Does GAINS\ have excellent scalability in different datasets?}
According to the dimension of the feature set, we chose 6 datasets: SVMGuide3, Megawatt1, Spambase, Mice Protein, Coil-20, and MNIST, from small to large.
We analyzed the variation in the average time required to make an inference during a single search step and in the parameter size employed by the encoder-evaluator-decoder model.
Figure~\ref{scalable} shows the comparison results.
We found that in spite of the almost 40-fold increase in feature dimension from SVMGuide3 to MNIST, the inference time increases by just about 4-fold, and the parameter size only enlarges by almost 2-fold.
The underlying driver is that the deep feature subset embedding module has integrated the discrete selection records into a fixed continuous embedding space, significantly reducing the searching time and embedding model size.
The remaining time-cost bottleneck lies in the downstream ML model validation.
This experiment validates that GAINS\ has strong scalability when dealing with datasets of varying dimensions.
\begin{figure}[!t]
\centering
\hspace{-4mm}
\subfigure[Inference Time (s)]{
\includegraphics[width=4.3cm]{figure/inference_time.pdf}
}
\hspace{-3mm}
\subfigure[Parameter Size (MB)]{
\includegraphics[width=4.3cm]{figure/parameter_num.pdf}
}
\hspace{-4mm}
\vspace{-0.3cm}
\caption{Scalability check of GAINS\ in terms of feature size, inference time, and parameter size.}
\label{scalable}
\vspace{-0.2cm}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{figure/feature_num.pdf}
\caption{Comparison of the feature set size of GAINS, the best baseline model (Second Best), and the original feature set (Original).}
\label{feat_num}
\vspace{-0.5cm}
\end{figure}
\input{table/robust_chekc.tex}
\smallskip
\noindent{\bf Study of feature subset sizes.}
This experiment aims to answer: \textit{Can GAINS\ select the small-sized yet effective feature subset?}
We randomly chose seven datasets and compared the feature set size of GAINS, the best baseline, and the original one.
Figure~\ref{feat_num} shows the comparison findings in terms of feature ratio, which reflects the proportion of selected features relative to the total feature set.
We found that the feature subset selected by GAINS\ is much smaller than that of the second-best baseline while maintaining better performance.
A possible reason is that jointly optimizing feature subset reconstruction loss and accuracy estimation loss can enhance the denoising capability of gradient search, reduce noisy and redundant features, and select a small but effective feature subset.
This experiment indicates that the selected features of GAINS\ can not only improve the model performance but also reduce computational costs.
\smallskip
\noindent{\bf Robustness check of {GAINS } over downstream ML tasks.}
This experiment aims to answer: \textit{Is GAINS\ robust when confronted with distinct ML models as downstream tasks?}
We replaced the downstream ML model with Random Forest (RF), XGBoost (XGB), Support Vector Machine (SVM), K-Nearest Neighborhood (KNN), and Decision Tree (DT) respectively.
Table~\ref{table_robust} shows the comparison results on German Credit in terms of F1-score.
We observed that {GAINS } beats other baselines regardless of the downstream model.
The underlying driver is that GAINS\ can customize the deep embedding space based on the model performance assessed by a specific downstream ML model, making GAINS\ have remarkable customization ability.
This experiment validates the robustness and stableness of GAINS.
\smallskip
\noindent{\bf Study of the feature selection result of {GAINS }.}
This experiment aims to answer: \textit{What are the distinctions between GAINS\ and search seeds?}
We visualized two images from the MNIST-fashion dataset in the original format, the best search seed, and GAINS.
Here, the best search seed is used for searching the optimal result in the continuous embedding space of GAINS.
Figure~\ref{case_sty} shows the comparison results.
There are two interesting observations:
1) The feature dimension selected by the best search seed is greater than that of GAINS;
2) GAINS\ focuses on keeping the shape and boundary information of central objects.
The underlying driver is that GAINS\ sufficiently integrates the knowledge of historical selection records to eliminate redundant features, resulting in a decrease in feature dimension (i.e., from 502 to 407).
Meanwhile, the gradient learned by GAINS\ grasps the core properties of the original feature set to identify important features of the original feature set.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{figure/RQ4:Case Study/mnist.pdf}
\vspace{-0.6cm}
\caption{The visualization of two cases for comparing the original image (Original), the image selected by the best base selector (Seed), and the image selected by GAINS.}
\label{case_sty}
\vspace{-0.5cm}
\end{figure}
\section{Related Work}
\vspace{-0.1cm}
\textbf{Feature Selection} can be broadly categorized as wrapper, filter, and embedded methods according to the selection strategies~\cite{li2017feature}.
Filter methods choose the highest-scoring features based on the feature relevance score derived from the statistical properties of the data~\cite{kbest,mrmr}.
These approaches have low computational complexity and can efficiently select features from high-dimensional datasets.
But, they ignore feature-feature dependencies and interactions, resulting in suboptimal performance.
Wrapper methods assess the quality of the selected feature subset based on a predefined machine learning (ML) model in an iterative manner~\cite{gfe,marlfs,sarlfs}.
The performance of these methods is typically superior to that of filter methods because they evaluate the entire feature set.
However, enumerating all possible feature subsets is NP-hard, so these methods cannot guarantee identification of the optimal feature subset.
Embedded methods convert the feature selection task into a regularization item in a prediction loss of the ML model to accelerate the selection process~\cite{lasso,lassonet,rfe}.
These methods may have outstanding performance on the incorporated ML model but are typically difficult to generalize to others.
Moreover, other works have proposed hybrid feature selection methods, which have two technical categories: 1) homogeneous approach~\cite{seijo2017testing,pes2017exploiting}; 2) heterogeneous approach~\cite{haque2016heterogeneous,seijo2019developing}.
The performance of both categories is limited by their underlying aggregation methods.
Unlike these works, GAINS\ proposes a new feature selection perspective, which maps historical discrete selection records into a continuous embedding space and then employs the gradient-based search to
efficiently identify the optimal feature subset.
\section{Conclusion}
\vspace{-0.05cm}
While FS has been investigated as a discrete choice problem, this paper studied the research question: can we solve FS in a continuous space so FS can be more automated, effective, and generalized?
We proposed a new perspective: discrete FS as a gradient-optimized search.
We formulated FS as a continuous optimization task.
We developed a four-step framework: 1) automated reinforcement training data preparation, 2) deep feature subset embedding, 3) gradient optimal feature subset search, and 4) feature subset reconstruction.
We found that: 1) reinforcement FS serves as a tool for automated training data collection, advancing the diversity, scale, and comprehensiveness of training data; 2) a joint optimization of encoder, evaluator, and decoder can effectively construct a continuous embedding representation space; 3) the strategy of discrete FS as gradient search reduces feature subset sizes, increases generalization, and is automated, effective, and input-dimensionality agnostic.
|
{
"arxiv_id": "2302.13173",
"language": "en",
"timestamp": "2023-02-28T02:11:48",
"url": "https://arxiv.org/abs/2302.13173",
"yymm": "2302"
} | \section{Introduction}
In the past two years, artificial intelligence-generated content (AIGC) has received widespread attention. AIGC is seen as the next step for professionally generated content (PGC) and user-generated content (UGC). The text-to-image model can draw beautiful images in the blink of an eye. ChatGPT \cite{openai2022chatgpt} can bring users high-quality chatting and creative writing. On the one hand, the advancement of pre-trained models (PM) has driven the vigorous development of industries such as artificial intelligence, cloud computing, and big data. On the other hand, models may have the potential for misuse, irresponsibility, or unethical issues. Against this background, we introduce MetaAID 2.0, a framework for developing Metaverse applications. MetaAID 2.0 is an updated version of MetaAID 1.0 \cite{zhu2022metaaid}, dedicated to supporting a flexible, ethical, and human-controllable process of generating Metaverse content. MetaAID 2.0 further expands the capabilities of multimodal data generation and provides a mechanism to control human creativity and responsibility.
Existing systems mainly provide AI services based on large-scale cloud computing platforms, such as AI drawing \cite{ramesh2022hierarchical} and AI writing \cite{openai2022chatgpt}. Few lightweight frameworks support multimodal generation with controllable creativity and responsibility. With the growth of the related open-source community, it has become possible to accomplish this task. The concept of Model-as-a-Service (MaaS) enables end-to-end processing of multimodal data. The idea of the MetaAID 1.0 framework is to form a good collaborative relationship between AI technology and human editors. MetaAID 2.0 continues this philosophy, adding the concept of human-controllable AI given the rapid growth in the capabilities of pre-trained models. Human controllability can be divided into two dimensions: the controllability of human creativity and the controllability of responsibility for model outputs. Through the PM information flow \cite{zhu2022switchnet} set by users, the controllability of human creativity can be realized, the value of human imagination can be maximized, and cultural innovation can be carried out. By generating a URI-extension for output resources, responsibility for output data can be controlled, enabling ``data ownership'' as a by-product. Our framework receives multimodal input (text, image, video, and audio), unleashes human creativity through PM information flows, and enables responsible and ethical interaction with model outputs through URI-extension.
Through customized PM information flows, users can exercise their creativity, create rich and colorful Metaverse content, and realize the input and output of multimodal data. The challenge is that the framework needs to contain rich functions and handle multimodal data flexibly. For example, good articles can be written with text generation models, but text alone is not vivid. Adding pictures to an article enhances the ornamental and artistic value of the work, and adding voice improves the user experience. Furthermore, presenting the work as a generated video can attract attention on video platforms and generate real-world feedback. We hope to unleash the value of human creativity through PMs and therefore propose PM information flows.
The second challenge is the controllable responsibility of the model. Although image and video digital watermarking techniques \cite{mohanarathinam2020digital} have been extensively studied in prior works, the generated data will inevitably be modified and edited, and even digital watermarks may be removed. This poses a challenge to the controllable responsibility of model data. In addition, text data is not well suited to digital watermarking, and modified content is harder to identify. We hope to resolve this problem through multimodal representation and retrieval. Traditional methods cannot easily achieve fast retrieval and identification of modified data. Inspired by the Semantic Web, we introduce a semantic representation-based knowledge base for controllable responsibility. We propose a URI-extension consisting of a URI, a detailed description, and URI embeddings for multimodal data, to realize retrieval and identification.
To evaluate this framework, we constructed an experimental website. The innovations of this paper are as follows:
(1) We introduce MetaAID 2.0, which aims to realize the controllability of human creativity and the controllability of model outputs.
(2) We propose PM information flow and URI-extension mechanisms. On the input side, users can give full play to their creativity by controlling the PM information flow. On the output side, the framework realizes controllable data responsibility and ethical interaction through URI-Extension.
(3) We develop a website based on this framework and achieve good application results.
\section{Related Work}
\subsection{Single-Modal Pre-trained Models}
ChatGPT \cite{openai2022chatgpt,ouyang2022training} is a chatbot program developed by OpenAI, which uses a language model based on the GPT-3.5 architecture and is trained by reinforcement learning. They collected a dataset of model output rankings and further fine-tuned the model through reinforcement learning with human feedback.
Google proposes Bard which is built based on the large language model LaMDA \cite{thoppilan2022lamda}. LaMDA is trained on large-scale online data to generate convincing responses to user prompts, and the model can remove unsafe answers and select high-quality responses.
Schick et al. present Toolformer\cite{schick2023toolformer}, which learns to call different tools such as search engines, calculators, and translation systems using APIs in a self-supervised manner. Toolformer can select the APIs, timing, and parameters to be called after training.
OpenAI's Brown et al. \cite{brown2020language} pre-train GPT-3 which contains 175 billion parameters. GPT-3 requires no fine-tuning and achieves good performance on zero-shot, one-shot, and few-shot NLP tasks.
Baidu's Wang et al. \cite{wang2021ernie} build ERNIE 3.0 Titan which contains 260 billion parameters. Based on the ERNIE 3.0 framework, this model achieved state-of-the-art results on 68 datasets. With a self-supervised adversarial loss and a controllable language modeling loss, the model is improved in generating believable and controllable text.
\subsection{Multi-Modal Pre-trained Models}
Rombach et al. \cite{rombach2021highresolution} propose a latent diffusion model to carry out the diffusion process in the latent space, thus greatly reducing the computational complexity. The latent diffusion mechanism can significantly improve the training and sampling efficiency, and the model uses a cross-attention mechanism for a conditional generation.
DALL-E 2 \cite{ramesh2022hierarchical} combines a CLIP-based text encoder with an image embedding decoder to construct a complete text-to-image model. The proposed two-stage model first generates CLIP image embeddings from text descriptions and then uses a decoder to generate images based on the image embeddings.
Agostinelli et al. propose MusicLM \cite{agostinelli2023musiclm}, a model for generating music from text or other signals, which can quickly generate high-quality music at 24 kHz according to conditional input. The model treats the conditional music generation process as a hierarchical sequence-to-sequence modeling task.
Shah et al. \cite{shah2022lm} propose large model navigation LM-Nav, which realizes mobile robot navigation based on text instructions. LM-Nav combines three pre-trained models for parsing user commands, estimating the probability of observations matching those landmarks, and estimating navigational features and robot actions.
\section{Pre-trained Models Information Flow}
\subsection{Functional module architecture}
In this subsection, we will introduce the PM modules in the framework, each named after some Beijing locations where I have lived. To realize the controllability of human creativity, we design different PM information flows \cite{zhu2022switchnet}. As shown in Figure \ref{arch.fig}, the modules in it can be combined in a custom way. Let's demonstrate with 2 PM information flows as an example.
\begin{figure*}[!h]
\centering
\includegraphics[width=4in]{pic/arch.jpg}
\caption{Schematic diagram of Our PM information flows}
\label{arch.fig}
\end{figure*}
(1) The initial text is extended by a text generation module, and then the content is expressed as a description suitable for image generation by manually adjusting the generated text. Text-to-image models can generate images based on conditional text input. Further, in order to add an artistic effect to the image, the original image is fused with world-famous paintings such as Van Gogh's ``The Starry Night" to achieve style transfer, and finally, the image output is obtained, and a URI-extension is generated at the same time.
(2) Image-to-text models convert input images into textual descriptions. Then, the text is fed into a text generation model for expanded creative content. Manually polish the text content and let the speech synthesis model generate audio files. At the same time, part of the text is also used to generate video files. Finally, the system outputs audio and video files and generates the corresponding URI-extension.
As a tool to assist humans in developing applications, its flexibility and rich functions are crucial. The framework strives to unleash human creativity and break functional constraints. Specific data modalities use corresponding models, and this framework realizes the transformation and generation between various modes through PM information flows.
\subsection{Text Modal}
We introduce the Zhongguancun (中关村) text module. This module mainly consists of text generation and Chatbot.
\subsubsection{Text generation}
Text generation aims to generate fluent natural language text based on input data \cite{li2021pretrained}.
We introduce Chinese-English bilingual text generation models. For English, OpenAI's Radford et al. \cite{radford2019language} propose GPT-2 which is a text generation model based on a transformer decoder. The transformer decoder is mainly composed of three components, multi-head self-attention mechanism, masked multi-head self-attention mechanism, and fully connected feedforward network. GPT-2 is an autoregressive model, and the model generates a word every iteration during the computing process. A language model can be expressed as Equation (1)
\begin{align}
p(x) = \prod_{i=1}^n p(x_i|x_{i-1},...,x_1)
\end{align}
where $x$ is the input text and $x_i$ represents the $i$-th word. When GPT-2 performs zero-shot learning, it also regards the task form as an input, and can directly predict the answer of the downstream task without fine-tuning the model. Similarly, few-shot learning takes the form of using the task template and a small number of examples as input to guide the model to directly predict the answers to downstream tasks.
GPT-2 first pre-trains a language model on large-scale text data and then applies the language model directly to a variety of supervised downstream tasks without the need for task-specific fine-tuning. By designing specific templates for various tasks, the generated texts are used as prediction results. This form of zero-shot learning gradually developed into the paradigm of ``prompt learning" \cite{liu2023pre} and was formally proposed.
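As an illustration of this autoregressive generation process (not the exact configuration used in our framework), the following sketch calls the public GPT-2 checkpoint via Hugging Face Transformers; the prompt and decoding parameters are arbitrary examples.
\begin{verbatim}
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "An artificial intelligence and a human fall in love"
inputs = tokenizer(prompt, return_tensors="pt")
# Each decoding step samples x_i conditioned on x_1, ..., x_{i-1}, as in Equation (1).
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
\end{verbatim}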
For Chinese, Zhao et al. \cite{zhao2019uer} propose a UER toolkit and train Chinese models. The toolkit supports pre-training and fine-tuning and has loosely coupled and encapsulated rich modules.
\subsubsection{Chatbot}
Chatbots can converse with users through natural language and understand their intent. A text generation model can support a dialogue system. We have implemented a Pre-trained language model (PLM)-based dialogue system which supports bilingual chat in Chinese and English.
For English, Roller et al. \cite{roller2020recipes} propose an open-domain chatbot, BlenderBot. They let three different models jointly decide whether to introduce knowledge by combining a retriever and a generative model. Finally, better results are achieved through the Beam Search decoding strategy.
For Chinese, we use ChatYuan\footnote{https://github.com/clue-ai/ChatYuan}, which is based on PromptCLUE\footnote{https://github.com/clue-ai/PromptCLUE}. For the training process, ChatYuan used a combined dataset of large-scale functional dialogues and multi-round dialogues. PromptCLUE is built on the T5 model, which is pre-trained for large-scale Chinese corpus and performs multi-task learning on more than 100 tasks. The Chinese language understanding evaluation (CLUE) \cite{xu2020clue} benchmark is established in 2019 and corresponds to the foreign General Language Understanding Evaluation (GLUE) \cite{wang2018glue} benchmark.
\subsection{Audio Modal}
We present the Houhai (后海) audio module which consists of speech recognition and speech synthesis functions.
\subsubsection{Speech Recognition}
The audio technology module mainly includes the functions of speech-to-text and text-to-speech.
For Chinese, Gao et al. \cite{gao2022paraformer} propose Paraformer, a single-round non-autoregressive speech recognition model. Its predictor module predicts the number of target characters in the speech, and its sampler transforms the acoustic feature vector and the target character vector into a feature vector containing semantic information.
For English, Gao et al. \cite{gao2020universal} propose universal ASR, which supports both streaming and non-streaming recognition. Some companies also provide high-quality speech recognition APIs, such as Baidu and iFLYTEK.
\subsubsection{Speech Synthesis}
Due to the significant differences in pronunciation across languages, we use different models for different languages.
For Chinese and English, Li et al. \cite{li2020robutrans} improve TransformerTTS \cite{li2019neural} and propose RobuTrans, whose encoder consumes phoneme and prosodic features to make the synthesized audio more natural and whose duration-based hard attention effectively improves the robustness of the model.
\subsection{Image Modal}
We present the Wudaokou (五道口) image module composed of image editing and image generation.
\subsubsection{Image Editing}
Image editing refers to the processing and modification of the original image so that the image meets the needs of a specific scene. Image enhancement focuses on transforming digital images into a state more suitable for display or analysis.
GANs can generate excellent texture information, but they are unstable and may generate false textures. Liang et al. \cite{liang2022LDL} propose an image super-resolution model which explicitly distinguishes GAN-generated pseudo-textures from real details. They propose a local discriminative learning (LDL) framework to regularize the adversarial training process.
Deng et al. \cite{deng2021stytr2} propose a style transfer transformer $StyTr^2$ which uses two transformer encoders to encode content and style features. Different from the auto-regressive processing method, they use a decoder to predict the output using all consecutive image patches as input.
Kulshreshtha et al. \cite{kulshreshtha2022feature} propose a multi-scale refinement technique which is an iterative refinement method from coarse to fine to improve the inpainting quality of neural networks at high resolution.
Image modification with stable diffusion \cite{rombach2021highresolution} achieves promising results. This model is implemented based on the latent diffusion technique, which can reduce memory and computational complexity. The model mainly consists of three parts, autoencoder, U-Net, and text encoder, and is trained to generate a latent (compressed) representation of the image.
\subsubsection{Image Generation}
Text-to-image models take a textual description as input and output a corresponding image that matches the input. This module contains different open-source image generation models, such as DALL·E, DALL·E 2, Stable diffusion, Imagen, etc. to generate images in various styles.
OpenAI's Ramesh et al.\cite{ramesh2022hierarchical} propose DALL·E 2 which realizes the text-conditioned image generation process through a two-stage model. First, the input text is encoded through the CLIP model and converted into a representation of the image features, and then the final image is generated by a decoder according to the image features. The image decoder is learned to reverse the image encoding process via the GLIDE model (diffusion model) to randomly decode the CLIP image embedding. Google's Saharia et al. propose Imagen \cite{saharia2022photorealistic} which has a simple and powerful process of generating images from text. First, the diffusion model is used to generate images from text, and then the model uses two super-resolution diffusion models based on the obtained images to achieve image resolution enhancement.
Since these models only support English input, we integrate machine translation and prompt expansion functions to support multilingual image generation. The prompt expansion process is based on semantic retrieval over a prompt keyword knowledge base. Images generated without expanded descriptions often fail to achieve satisfactory results, and it is not easy for ordinary users to master the skill of choosing keywords, so this function greatly improves users' efficiency.
\subsubsection{Image-to-Text}
Google's Dosovitskiy et al. \cite{dosovitskiy2020image} propose a vision transformer (ViT) that applies transformer to image classification tasks. GPT-2 is a language model that can be used to generate text. These two models can be combined to form the visual encoding and decoding process. The combination of the two models can be achieved via the cross-attention mechanism, which enables the decoder to retrieve key information from the encoder.
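A hedged sketch of such an image-to-text pipeline is shown below; it uses a publicly available ViT-GPT-2 captioning checkpoint from the Hugging Face hub as an example, which is not necessarily the model used in MetaAID, and on older transformers versions \texttt{ViTFeatureExtractor} may be needed in place of \texttt{ViTImageProcessor}.
\begin{verbatim}
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer
from PIL import Image

name = "nlpconnect/vit-gpt2-image-captioning"   # example checkpoint, not MetaAID's own
model = VisionEncoderDecoderModel.from_pretrained(name)
processor = ViTImageProcessor.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

image = Image.open("scene.jpg").convert("RGB")  # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
caption_ids = model.generate(pixel_values, max_new_tokens=30)
print(tokenizer.decode(caption_ids[0], skip_special_tokens=True))
\end{verbatim}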
\subsection{Video Modal}
We introduce the Gulou (鼓楼) video module which consists of video editing and video generation functions.
\subsubsection{Video Editing}
The video technology module is designed to edit, transform, enhance, and generate video files.
Perov et al. \cite{perov2020deepfacelab} propose to use deepfake \cite{nguyen2022deep} technology for face swapping, realize model learning through face detection, face alignment, and face segmentation processes, and provide a set of adjustment tools for human-computer interaction.
Chan et al. \cite{chan2022investigating} propose a video super-resolution model with a stochastic degradation process that uses longer sequences rather than larger batches during training, allowing for more efficient use of temporal information and more stable performance. Besides, an image pre-cleaning module is proposed to balance detail synthesis and artifact suppression.
Narasimhan et al. propose CLIP-It \cite{narasimhan2021clip} which generates a video summary from either a user-defined natural language description or an automatically generated dense video description based on the original video. The language-guided attention head fuses image and language embeddings, and the transformer for video frame scoring attends to all frames to compute their relevance scores. During inference, video summaries are constructed by converting video frame scores to shot scores and using a knapsack algorithm to select shots with high scores.
\subsubsection{Video Generation}
Text-to-video aims to generate video directly based on the natural language description.
Imagen video \cite{ho2022imagen} is a text-conditioned video generation system based on the cascaded video diffusion model, which uses a basic video generation model and spatial-temporal super-resolution models to generate high-definition videos. They propose to extend the text-to-image Imagen model \cite{saharia2022photorealistic} in the temporal domain to migrate to video generation.
Make-A-Video \cite{singer2022make} extended the diffusion-based text-to-image model to text-to-video by spatiotemporally decomposing the diffusion model. They spatially and temporally approximate the decomposed temporal U-Net and attention tensor and generate high-definition, high-frame-rate video through super-resolution methods. Hong et al. propose CogVideo \cite{hong2022cogvideo} which contains a hierarchical learning method with multiple frame rates. CogVideo is built on top of the CogView2 model \cite{ding2022cogview2} used for image generation. They propose a spatiotemporal bi-directional attention mechanism to learn smooth temporal video frame information.
\section{Human-Controllable Mechanisms}
\subsection{Universal Resource Identifier Extension}
In the Semantic Web, a Universal Resource Identifier (URI) is defined as a unique string used to identify a resource, which can represent any object or abstract concept. We introduce the URI-extension, consisting of three elements: a URI, a detailed description, and URI embeddings. In terms of working mechanisms, this technology leverages semantic representation, high-performance storage, and semantic retrieval. We describe the three parts of the URI-extension below.
(1) Inspired by the URI in the Semantic Web, we also take the form of URIs, aiming at generating URIs for each resource to uniquely locate the outputs.
(2) Detailed description is used to add more descriptions to facilitate tracking of the framework output. This problem arises in the context of big data. Many users use AI to generate outputs in different locations and at different times. The detailed description contains more information that is convenient for positioning and analysis, and the content of this description is relatively long.
(3) URI embeddings are used for efficient identification and retrieval. This technology facilitates information comparison and identification, realizing efficient retrieval. It is inevitable that people will modify the framework output; even with slight modifications, it is difficult to retrieve the source from a large amount of output using traditional search methods. We extract implicit semantic representations through pre-trained models to support fuzzy retrieval.
The working mechanism consists of two parts, URI-extension, and semantic retrieval. Detailed descriptions and semantic representations are stored in a lightweight fashion.
The resource information extraction algorithm is designed to generate unique URIs for the model output, and we add information such as the device, IP address, user account, and date to generate detailed descriptions, as sketched below. The generation of multimodal URI embeddings is described next.
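A minimal sketch of generating these two parts for one model output is given below; the URN scheme, field names, and function name are illustrative assumptions rather than the framework's actual format.
\begin{verbatim}
import uuid, json, datetime

def make_uri_extension(user, device, ip, modality):
    uri = f"urn:metaaid:{modality}:{uuid.uuid4()}"   # unique identifier for the output
    description = {                                  # detailed description for tracking
        "uri": uri,
        "user": user, "device": device, "ip": ip,
        "modality": modality,
        "created": datetime.datetime.utcnow().isoformat() + "Z",
    }
    return uri, json.dumps(description)

uri, desc = make_uri_extension("alice", "workstation-01", "203.0.113.7", "image")
\end{verbatim}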
\subsection{Multimodal URI Embeddings}
This part introduces the method to generate part (3) of URI-extension, by which URI-extension can be retrieved efficiently.
For images, we adopt the ViT model \cite{wu2020visual} to generate vector representations. ViT employs a convolutional neural network (CNN) to convert an image into a sequence of image patches, thereby utilizing the transformer model to produce a vector representation. This vector representation contains the structural information and content information of the image.
An acoustic fingerprint is a compressed representation of an audio signal that can be used to identify audio samples or retrieve similar audio files in a database. Audio fingerprinting
\cite{borkar2021music} aims to generate vector representations for audio. We employ this technique to generate audio representations.
Huang et al. \cite{huang2021self,qing2022hico} propose a method to learn high-quality video representation. This method separates static context and motion prediction, including the task of learning coarse-grained representations for context matching, and the task of learning fine-grained representations for motion prediction.
Sentence-BERT \cite{reimers-2019-sentence-bert} feeds a text sequence into a BERT model and pools the token outputs to obtain a single sentence-level representation vector, which can be used for downstream tasks such as retrieval and clustering.
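
A minimal sketch of this step, assuming the sentence-transformers library and an example model name; the input texts are placeholders.
\begin{verbatim}
# Sketch: text URI embeddings with Sentence-BERT
# (the model name and texts are examples).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
texts = ["original movie plot ...",
         "slightly edited movie plot ...",
         "an unrelated description"]
emb = model.encode(texts, normalize_embeddings=True)
print(emb @ emb.T)  # cosine similarities (rows are unit norm)
\end{verbatim}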
The URI, detailed description, and URI embeddings are kept in correspondence, and managing them efficiently is challenging. Because management with a general-purpose database becomes difficult as the amount of data increases, and vector representations are not well suited to database storage and computation, we use flexible serialization tools for the persistent storage of data.
Parallel acceleration is achieved through batched tensor computation, which enables fast semantic retrieval.
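
The following sketch illustrates the storage and retrieval mechanism described above, assuming Python serialization (pickle) for lightweight persistence and PyTorch tensors for batched cosine-similarity search; the records, file name and query vector are toy examples. In practice the embedding matrix can be kept on the GPU so that a single matrix multiplication scores a query against the whole index.
\begin{verbatim}
# Sketch: lightweight persistence (pickle) and batched
# cosine-similarity retrieval with PyTorch tensors.
# Records, file name and the query are toy examples.
import pickle
import torch
import torch.nn.functional as F

index = [
    {"uri": "metaaid://output/1", "embedding": [0.1, 0.9, 0.0]},
    {"uri": "metaaid://output/2", "embedding": [0.8, 0.1, 0.1]},
]
with open("uri_index.pkl", "wb") as f:   # persistent storage
    pickle.dump(index, f)
with open("uri_index.pkl", "rb") as f:
    index = pickle.load(f)

emb = F.normalize(torch.tensor([r["embedding"] for r in index],
                               dtype=torch.float32), dim=1)
query = F.normalize(torch.tensor([[0.15, 0.85, 0.0]]), dim=1)
scores = query @ emb.T                   # batched similarity
best = scores.argmax(dim=1).item()
print(index[best]["uri"], scores[0, best].item())
\end{verbatim}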
\section{Experiment}
\subsection{Setup}
We developed a website based on this framework and invited users to test its functions. The application runs on an AMD Ryzen 9 5900X processor @ 3.7 GHz with 128 GB of memory and RTX 4080 GPUs (16 GB).
\subsection{Results of PM Information Flow}
This section takes the information flow shown in Figure \ref{arch.fig} as an example to illustrate the processes of text generation, image generation, audio generation, and video generation. We use the story of ``AI and humans falling in love'' to illustrate the capabilities of PM.
(1) To generate a script, we use GPT-2 to write the movie script; the plot is then refined manually (a minimal sketch of this step is given after the excerpt below).
\texttt{
The artificial intelligence becomes a beautiful woman Meggie calling for help, and the man Jack enters the virtual world. Jack was killed in the virtual world. A few months later, another woman joined Meggie, her name is ``Aries''. However, Jack is rescued by a woman named ``Cynthia'', who is the companion to The Immortal Mother...}
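
A minimal sketch of the script-generation step, assuming the off-the-shelf GPT-2 model from the HuggingFace transformers library; the prompt and decoding settings are illustrative and not necessarily those used in our system.
\begin{verbatim}
# Sketch: script generation with off-the-shelf GPT-2 via the
# HuggingFace transformers pipeline.  Prompt and decoding
# settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = ("Write a movie script about an AI and a human "
          "falling in love.\nScene 1:")
out = generator(prompt, max_new_tokens=200,
                do_sample=True, top_p=0.95)
print(out[0]["generated_text"])  # draft, refined manually
\end{verbatim}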
(2) To generate audio, we use the speech synthesis module for the narration of the story. If the movie script contains character dialogue, we can also use the speech synthesis model to generate the dialogue with different timbres for different characters.
(3) We use a text-image model to generate scenes for the movie, including natural environments, backgrounds, characters, tools, etc., as shown in Figure \ref{person.fig}.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=6.5in]{pic/person.jpg}
\caption{Scenes generated for the movie plot}
\label{person.fig}
\end{figure*}
(4) Sometimes an image style is added to make the video more visually interesting, as shown in Figure \ref{cyber.fig}.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=6.5in]{pic/cyber.jpg}
\caption{Adding Van Gogh's ``The Starry Night'' style to a virtual city with style transfer}
\label{cyber.fig}
\end{figure*}
(5) For shots with dynamic content, e.g., the moment a person falls, the process of skiing, etc., the text-to-video module is used for the generation, as shown in Figure \ref{video.fig}.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=6.5in]{pic/rocket.jpg}
\caption{Dynamic content generated for the movie plot}
\label{video.fig}
\end{figure*}
(6) Finally, we compose the video from these materials. They can be combined through programmatic operations or, for convenience, with video editing software.
\subsection{Results of URI Embeddings}
This section presents the experimental results for URI embeddings; similarity is computed from the embedding representations. We use image and text data as demonstrations. For image data, we make substantial changes to the original image, such as masking regions and adding new content, and then visualize the URI embeddings with PCA, as shown in Figure \ref{pca.fig}. The URI embeddings of the modified versions remain close to that of the original image.
\begin{figure}[!htbp]
\centering
\includegraphics[width=3in]{pic/pca.png}
\caption{A reduced dimensional representation after adding noise to the image, where the positive sample represents the original image, the negative sample represents a different image used for comparison, and the noise samples represent the modified image}
\label{pca.fig}
\end{figure}
We made obvious changes to the text data, such as removing several parts, adding other content, switching the order, and rewriting sentences. We then visualized the URI embeddings with PCA, as shown in Figure \ref{text.fig}. The URI embeddings of the modified texts remain close to that of the original text.
\begin{figure}[!htbp]
\centering
\includegraphics[width=3in]{pic/textpca.png}
\caption{A reduced-dimensional representation with noise added to the movie plot, where the positives represent the original movie plot, the negatives represent another textual description used for comparison, and the noise samples represent the modified movie plots}
\label{text.fig}
\end{figure}
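
The visualizations above can be reproduced in outline with the following sketch, which embeds an original text, a modified version and an unrelated text, and projects the embeddings to two dimensions with PCA; the model name and sample texts are illustrative.
\begin{verbatim}
# Sketch: embed original, modified and unrelated texts, then
# project to 2-D with PCA.  Model name and texts are examples.
import matplotlib.pyplot as plt
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

model = SentenceTransformer("all-MiniLM-L6-v2")
samples = {
    "positive": "Jack enters the virtual world to help the "
                "AI woman Meggie.",
    "noise": "To help Meggie, the AI woman, Jack enters the "
             "virtual world.",
    "negative": "A documentary about deep-sea fishing.",
}
emb = model.encode(list(samples.values()))
xy = PCA(n_components=2).fit_transform(emb)
for label, (x, y) in zip(samples.keys(), xy):
    plt.scatter(x, y, label=label)
plt.legend()
plt.savefig("text_pca.png")
\end{verbatim}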
\section{Conclusion and future work}
This paper introduces MetaAID 2.0, which implements human-controllable PM information flow. The controllability of human creativity is achieved through multimodal PM information flow. The controllability of model responsibilities is achieved through URI-extension. The multimodal PM information flow in this paper covers processing modules for text, image, video, and audio data.
This paper mainly introduces the human-controllable PM information flow. In the future, we hope to integrate this framework with real-world application scenarios to empower the digital economy.
\end{CJK*}
\clearpage
\bibliographystyle{aaai}
|
{
"arxiv_id": "2302.13161",
"language": "en",
"timestamp": "2023-02-28T02:11:33",
"url": "https://arxiv.org/abs/2302.13161",
"yymm": "2302"
} | \section{Introduction}
\IEEEPARstart{D}{igitalisation} and automation are transforming power systems. The transformation is being facilitated through the ability of power systems to access and communicate via communication networks (network connectivity), as well as through the introduction of Artificial Intelligence (AI) enabled diagnostic systems \cite{NakahataDigitalizationMaintenance,HONG2014CYBERSYSTEMS}. Prior to 2010, there were very few connections between Operational Technology (OT) and Information Technology (IT) \cite{Gregory-Brown2016SecurityWorld}. Reviews of advanced computer-aided communication systems in the smart grid context show that these systems add significant value to the productivity and efficiency of power systems \cite{Pour2017ASystems,Shrestha2020AInfrastructures}.
Until recently, there were only a few connections to, and very little knowledge about, the OT infrastructure of power systems. Hence it was not on the radar of the cybersecurity community, and the industry broadly had few, if any, security measures in place apart from physical isolation. Over the last decade, the OT and IT domains have started to merge through the introduction of advanced monitoring and control systems, as well as remote access and control systems. However, emerging cybersecurity challenges are the downside of these new technologies, as has been discussed in the literature \cite{Marksteiner2019CyberModeling,Otuoze2018SmartThreats,Sun2018CyberState-of-the-art,Reich2020CybersecurityProviders}.
With the integration of IT with OT infrastructures, there is a need to consider the cyber risks, identify the gaps in security frameworks \cite{CIGRED2.382017FrameworkInfrastructure,Sarkar2022AAutomation}, policies, and introduce effective mitigation strategies \cite{CIGREWGB5.662020CIGRE2020.,CIGREWGD2.462020CIGRE2020.,Devanarayana2019TestingSimulator,Taljaard2018NetworkD2-309,Dondossola2020CyberSystems} as the OT becomes exposed to cyber threats \cite{CenterforStrategicandInternationalStudies2021Significant2006}.
A power transformer is a critical end device asset within the electricity network that is undergoing digitalisation, as it is fundamental to the production and transmission of power to customers \cite{Shi2020AServices}. Thus, power transformers could be targeted to potentially induce catastrophic failure of the power systems and significant and long-term supply disruptions \cite{9372271,Weiss2019LargeYears,Koelemij2020OTTransformers.pdf}.
Transformers are digitised through the inclusion of online condition monitoring (OLCM) equipment \cite{Ivar-Ulvestad-Raanaa2020ConditionSubstations,ElectricPowerResearchInstituteEPRI2013IECData,Dolata2011On-Line61850,Hui2012TheTransformer} with AI-enabled diagnostic systems. The real-time monitoring equipment and its data acquisition systems produce an enormous amount of data that needs to be protected, since the data is used either to indirectly operate a critical device or to influence decisions made at the supervisory level of a substation.
Data security is becoming the main focus of power transformer OT security. There are many types of attacks, including network topology attacks, False Data Injection (FDI), jamming attacks, GPS spoofing \cite{Wang2018DistributedClocks}, and time synchronization attacks \cite{Shereen2020FeasibilityEstimation}. However, among all these attacks, FDI has been found to be the most common \cite{Lu2022GeneticSystems,Unsal2021EnhancingMitigation,Yang2017TowardGrid,Hittini2020FDIPP:Systems,Dayaratne2022FalseGrids,Mujeeb-Ahmed2018NoisePrint:Systems}.
This becomes even more critical for assets such as power transformers, which are equipped with diagnostic data acquisition systems and have a life span of 40 years in the electrical network; the associated hardware and software will also be required to match that 40-year specification. When comparing existing IT cybersecurity technologies with those required in the power industry as part of Operational Technology (OT), there are major differences. The traditional network defence mechanisms for cyber elements in power system applications have been challenged by researchers \cite{Karimipour2019AGrids}. The long life span of power system equipment, strict timing requirements, higher availability and lower latency requirements, and mostly asymmetric, message-oriented communication technologies are only a few of the differences that make power networks unique in comparison to the IT technologies of office networks. These have been highlighted in IEC standards \cite{2022IEC/TRGuidelines} and the literature \cite{Maglaras2018CyberInfrastructures,Taljaard2018NetworkD2-309,Dayaratne2022FalseGrids}.
Although there are numerous studies on various aspects of smart grid cybersecurity that highlight the criticality of the cybersecurity in smart grid technologies \cite{Shrestha2020AInfrastructures,Sun2018CyberState-of-the-art,Kumar2019SmartIssues,Sakhnini2021SecuritySurvey,Baumeister2010LiteratureSecurity} and intelligent grid equipment such as smart meters \cite{Wang2019ReviewChallenges,Hassan2022AMeters}, to the best of our knowledge there are no studies that focus on smart transformers. In this paper, we address this gap and make the following contributions:
1) Providing the smart transformer architecture and a well-formed taxonomy of the relevant cyber-physical attacks.
2) A detailed review of the smart transformer topology, its vulnerable components and the implications of cybersecurity breaches.
3) A discussion of the potential future cybersecurity challenges of smart transformers.
The remainder of the paper is organised as follows. Section II introduces the smart transformer topology and its emerging condition monitoring technologies, details the intelligent transformer and its individual components in relation to the substation architecture, and finally presents phase shifting transformers. Section III provides a summary of the simulation tools and test beds used in the literature. Section IV summarises some of the major historical attacks on critical infrastructure around the world. Section V discusses the vulnerabilities and attack vectors specific to smart grids, and Section VI details the attack surface specific to power transformers. Section VII presents detection algorithms, including the use of AI techniques for the cybersecurity of power transformers, and finally the conclusion is given in Section VIII.
\section{Smart Transformers Topology}
Electric power is generated by converting other forms of energy into electricity. The power produced is then transmitted geographically, typically over long distances using transmission lines, through transmission substations and then through distribution substations and ultimately to consumers. Transformers play a very important role in matching voltage levels to the needs of the network and to those of its customers. Arguably the most concentrated, expensive, and critical components of the system, transformers are long-lived assets with a service life of greater than 40 years. As they are one of the most expensive elements of the grid, their integrity and operational performance are always of concern to utilities. Thus, technologies that can assist in their efficient use and in the early diagnosis of degradation are valuable within asset management frameworks, as they are used to gain the greatest value from the installed equipment.
Even so, it is observed that for power transformers, the adoption of digitisation has been slow and it is only since around 2018 that major players in the industry have been launching digital smart transformer products. For example, Reinhausen (also known as Maschinenfabrik Reinhausen MR) launched ETOS® - Embedded Transformer Operation System for operation, control and monitoring of transformers \cite{MaschinenfabrikReinhausenGmbHETOSSystem}. Siemens Energy has likewise launched Internet of Things (IoT) equipped transformers as ``Sensformer'' \cite{SiemensEnergySensformerSecurity}, and Hitachi Energy has developed TXpert™ enabled digital power transformers \cite{ABBsHitachiEnergy2018TXpertTransformer}.
All these companies highlighted the potential cybersecurity challenges and they are working towards solutions to overcome existing and future cybersecurity issues associated with the new generation of transformers.
Typically such transformers are known as smart/digital transformers and are conventional transformers equipped with intelligent monitoring systems, aimed at the early detection of abnormalities within the transformer. Due to the gains in operational and financial efficiency afforded by such systems, it is anticipated that there will be a wider adoption of such technology across existing transformer fleets.
While such systems provide considerable benefits, they do open the door to new vulnerabilities, via connections through data communication channels.
This includes the substation client server, real-time services and GPS communication services. In a worst-case scenario for the network, it may be possible for a cyber attack to gain control over switching operations, leading to local catastrophic consequences and potential long-term supply disruption. In a lesser case, the sending of false instrument readings may lead the operator to take otherwise healthy equipment offline, causing supply restrictions or blackouts through an incorrect decision.
To date, there has been little attention paid to the potential cyber threats against power transformers \cite{8274710,10.1007/978-3-319-23802-9_20,8909786,9126843,Jahromi2020,9137377,Olijnyk2022DesignTransformer,Ahmad2022AdvancedSubstation}. Also, very few writers have been actively warning of the emerging cyber-attack issues for transformers \cite{Koelemij2020OTTransformers.pdf,GoudCyberExplosions,King2021HowCyberattack,Ribeiro2021ChineseAdministrations}. Digitisation of transformers involves a suite of sensors, data collection and analysis tools installed on every transformer to allow for real-time monitoring (Figure 1). With digitisation, the vulnerability of these new technologies to cyber attack is an ever-present threat.
\begin{figure}[h]
\includegraphics[width=\linewidth]{photo/Monitoring_System.jpg}
\caption{Power transformer monitoring system } \label{Monitoring}
\end{figure}
The following sections detail the transformer within the substation architecture and how the transformer and its condition monitoring devices are connected to the rest of the network. Each smart component of a transformer is then closely examined to understand the communication protocols and data transmission technologies involved.
\subsection {Substation Architecture and Transformers}
Over the years, conventional substations have been transformed into modern digital substations through the replacement of copper-wired connections with advanced communication technologies. This is driven by the need to achieve higher efficiency and to lower the cost of operation. However, with multi-vendor systems and their associated increased connectivity comes the evolution of unprecedented vulnerabilities \cite{JACOBS2021PowerControl,MissionSupportCenterIdahoNationalLaboratory2016CyberSector,Sahoo2021CyberVulnerabilities,Hahn2013CyberEvaluation}. This exposes the efficiency and stability of the network to cyber risks with significant operational and financial consequences \cite{Musleh2020}.
Various critical physical equipment is located within the substation architecture, as shown in Figure 3.
Typically this architecture is structured in the following three levels:
1) Station level: Supervisory systems, Local Supervisory control and data acquisition (SCADA) systems and substation automation.
2) Bay level: Control functions and System protection
3) Process level: Device level
Transformers and other substation field devices, such as Circuit Breakers (CB), Current Transformers (CT) and Voltage Transformers (VT), are positioned at the process level. The analogue signals produced by these field devices are converted to digital signals through merging units. The bay level equipment comprises the control, protection and measurement Intelligent Electronic Devices (IEDs). The station level includes the elements that provide a human-machine interface (HMI), i.e., the station computers and gateways.
The risk modelling performed by a South Australian utility \cite{Automated2020ParisGrid} shows that process level data information has the highest risk within a substation environment. This shows the necessity of having a defence in the process level equipment and communication channels associated with these devices.
Conventional substations hard wired the station level systems to bay level devices and to process level equipment using copper conductors. Later with the need for improvement in network reliability and stability, rapid fault isolation and restoration \cite{Kumar2021Toward61850}, the modern substations emerged, replacing the hard-wired connections and serial connections with data network communication protocols.
Three types of communication are available in the substation environment:
1) Distributed Network Protocol (DNP) 3.0 or IEC 60870-5; at the station level, the data exchange in and out of substation (servers, HMIs and gateways, including the remote access).
2) Generic Object Oriented Substation events (GOOSE) messages; data exchange between relays and protection devices and interlocking.
3) Manufacturing Message Specification (MMS); used for Control and protection of data exchange between station and bay level.
Conventional systems still use Modbus and DNP3, which are considered vulnerable to cyber attack \cite{10.1007/978-3-319-23802-9_20}. Despite the use of the modern IEC 61850 protocol in substations, this communication protocol has also proven to be vulnerable to attacks.
The purpose of using such communication protocols is to enable the transfer of the process level and bay level equipment information to network supervisory systems such as SCADA and Energy Management Systems (EMS). This enables grid operators to make informed decisions about the state of the energy supply system and to facilitate the reliable and secure supply of electricity to network customers \cite{CEN/CENELEC/ETSIJointWorkingGrouponStandardsforSmartGrids2012CEN-CENELEC-ETSISecurity}.
With the transition to digital substations, the switchyard primary equipment such as transformers with a fast-transient response has been given special attention by International Electrotechnical Commission 61850 due to their critical role in the network \cite{Kumar2021Toward61850}.
\subsection {Transformer condition monitoring equipment- Device level}
In the current market, there is an ongoing effort toward the development and advancement of sensors, monitoring tools and digital interfaces for transformers. These are to support the transformer fleet at three levels: generation, transmission, and distribution with modular integration of functions in the following areas:
1. Voltage regulation using Onload Tap-Changers (OLTC): Drive, Control and monitoring,
2. Transformer insulation and bushing condition,
3. Transformer internal temperature, and control of fans and pumps,
4. Dissolved Gas Analysis (DGA) system; Gases produced by discharge activity inside the transformer tank.
The above-mentioned monitoring and diagnostic devices could be tampered with and pose a hazard to the operation of transformers and the power network; examples include false alarm signalling, tampering with temperature indication devices, and manipulation of OLTC voltage set points. Maloperation of fans through manipulated temperature gauges is another way of causing overheating in transformers. As the loss of a transformer in operation will lead to costly outages, loss of production, and possibly widespread blackouts, it is important that the integrity of the controls and indications is maintained.
\subsection {Transformer OLTC}
OLTCs are an important component for the regulation of voltages on the network and directly influence the network's voltage stability. If the voltage stability is targeted or impacted negatively during a peak load condition, it can result in voltage collapse, failures and consequently blackouts. OLTC control systems are in use for transmission transformers to ensure dynamic and adaptive control of voltages. These intelligent devices increase the attack surface as the control signals travel through the communication channels of the network. Any malicious data modification or manipulation of voltage control on OLTCs could result in a maloperation \cite{10.1007/978-3-319-23802-9_20}.
OLTCs are one of the vulnerable parts of the transformer likely to be targeted by attackers who gain access through the RTU or SCADA. Transmission network operators change the operating tap of transformers for many reasons and by various means. This includes the utilization of solutions from load flow and optimal power flow (OPF) calculations \cite{9126843}.
Vulnerabilities of the OLTC have been analysed for two types of attacks \cite{9126843,10.1007/978-3-319-23802-9_20}: False Data Injection (FDI) and stealthy false command injection. As FDI has been found to be the most common attack in studies on transmission systems, false control signal injection can be even more hazardous when the attack is performed stealthily with the aid of FDI \cite{9126843}. A. Anwar \cite{10.1007/978-3-319-23802-9_20} has addressed the risks and impacts of FDI attacks on OLTCs with two attack scenarios: when more power is demanded in the system, and when nodes of the system are placed under a lower stability margin; however, no mitigation strategies were provided.
S. Chakrabarty \cite{9126843} has applied an estimation-based algorithm with a specific threshold that provides a 100\% success rate. However, this may not be an effective mitigation strategy for the future growing grid, its increased complexity and the advancement of attacks. The application of model-free, data-driven techniques capable of detection with lower thresholds can be an effective defence mechanism for cyber-securing OLTCs.
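
To make the estimation-plus-threshold idea concrete, the following is a generic sketch of residual-based checking of OLTC telemetry: the reported secondary voltage is compared with the value predicted from the measured primary voltage and tap position, and large residuals are flagged. It is not the algorithm of \cite{9126843}; the transformer ratio, tap step and threshold are illustrative assumptions.
\begin{verbatim}
# Sketch: residual-based check of OLTC telemetry.  The
# reported secondary voltage is compared with the value
# predicted from the primary voltage and tap position; large
# residuals are flagged.  Ratio, tap step and threshold are
# illustrative assumptions.
RATIO = 132.0 / 11.0     # nominal primary/secondary ratio
STEP = 0.0125            # per-tap ratio change (1.25 %)
THRESHOLD = 0.03 * 11.0  # flag residuals above 3 % of nominal

def predicted_secondary(v_primary_kv, tap):
    return v_primary_kv / (RATIO * (1.0 + STEP * tap))

def is_suspicious(v_primary_kv, tap, v_secondary_kv):
    residual = abs(v_secondary_kv
                   - predicted_secondary(v_primary_kv, tap))
    return residual > THRESHOLD

print(is_suspicious(132.0, 0, 11.0))  # consistent -> False
print(is_suspicious(132.0, 0, 12.5))  # falsified  -> True
\end{verbatim}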
\subsection {Transformer insulation and its bushings condition}
The monitoring tools are designed to communicate the transformer health data to SCADA or Digital Control System (DCS) through relays for warning, alarm and error status.
The active part of transformers has a higher failure rate and the monitoring of the windings, insulation condition as well as bushings are thus of paramount importance. These new monitoring systems are designed to provide information on the status of the transformer insulation including bushing condition through several indicators. This information is critical to the proper assessment of the transformer’s health and is dependent on the known condition and the remaining life of its insulation. The paper and oil insulation used in power transformers can tolerate the high temperatures under normal operating conditions. However, if the transformer is overloaded for an extended period, over-fluxed, or has its cooling interrupted, then the insulation temperature may exceed design limits, greatly accelerate the decomposition of its paper and lead to the degradation of the insulation reducing its electric strength and mechanical integrity. If this is unchecked or misleading information is sent to supervisory systems because of an attack, the ultimate result would be a devastating failure or a major disruption to the power supply.
Typically, the insulation parameters of Partial Discharge (PD), Dielectric Dissipation Factor (DDF) and Capacitance are measured by the monitoring system. To the best of the authors' knowledge, this attack vector has not been identified or discussed in the existing literature.
\subsection {Transformer internal temperature, and control of fans and pumps}
Transformer life expectancy is strongly dependent on the operating temperature and, more specifically, on the hot spot temperature in the transformer \cite{Heathcote2007TheBook}. To understand the importance of this, Australian Standard AS60076 indicates that every 6 degree increase above the allowed temperature will result in a halving of the expected transformer life. Thus, large power transformers typically use fans and pumps in addition to large radiators.
To control the internal temperature, monitoring systems send temperature indications to the supervisory systems so that fans and pumps can be operated. Any FDI via cyber-attack can result in the maloperation of fans and pumps and possible overheating and failure of the transformer.
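
As a rough illustration of the 6 degree rule quoted above, the sketch below uses the common exponential ageing model in which the relative ageing rate doubles for every 6 K of hot-spot temperature above a reference value (98~$^{\circ}$C is assumed here); a falsified temperature reading that hides a 12 K exceedance therefore conceals a roughly four-fold acceleration of insulation ageing.
\begin{verbatim}
# Relative insulation ageing rate under the "6-degree" rule.
# The 98 C reference hot-spot temperature is an assumption.
def relative_ageing_rate(hotspot_c, reference_c=98.0):
    return 2.0 ** ((hotspot_c - reference_c) / 6.0)

print(relative_ageing_rate(98.0))   # 1.0 -> nominal ageing
print(relative_ageing_rate(110.0))  # 4.0 -> four times faster
\end{verbatim}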
\subsection {Transformer DGA System}
DGA is one of the key transformer monitoring tools that can be used in both online and offline service conditions. This tool measures the gases accumulated in transformer oil and can be used to infer faults or stresses within the insulation of the transformer \cite{Brochure2019D1/A2Systems}. Receiving any false data or control commands through DGA units could lead to legitimate stress indications being manipulated into benign readings. As a result, incorrect decision-making of the supervisory system could cause consequences from a possible unnecessary outage to a failure scenario at worst. B. Ahn \cite{9372271} has explored the vulnerabilities of two commercial DGA systems and discussed the potential threats using a threat model. The threat has been identified at three levels of 1) Device Level, 2) Digital Substation, and 3) Remote Access.
At the device level, the USB slave connection of the device could be a vulnerable point for malware injection or unauthorised access. As a result of such an attack, the firmware can be manipulated, and the data storage could also be targeted. The Windows-based program of these systems is connected through an Ethernet connection supported by DNP3, Modbus and IEC 61850. Attacks that bypass the firewall can break the authentication via Structured Query Language (SQL) injection. FDI and Denial of Service (DoS) are examples of attacks at the substation level. The remote access level also provides a back door for attacks such as malware injection, spoofing and DoS.
\subsection {Transformer associated devices - substation level}
Transformers can be attacked through their associated equipment at the substation level. The overall grid performance can be significantly impacted by the disruption of these devices by attackers. Based on \cite{8274710,Jahromi2020} the following devices associated with transformers can be tampered with and lead to failures and outages.
1. Metering equipment; Current Transformers (CT) and Voltage Transformers (VT)
2. Circuit Breakers (CB)
3. Differential protection
The involvement of industrial control systems (ICS) with the existing electrical process and mechanical functions, along with connectivity to the internet, creates another attack surface in the network.
CTs and VTs, as the substation metering devices collect the electrical data from the power transformers as an analogue signal and feed them to the merging units (MU). The MUs then convert those to digital data packets and transfer them via switches and IEDs to control rooms \cite{Kumar2021Toward61850}.
The data packets are then carried as Sampled Values (SV) and Generic Object Oriented Substation Event (GOOSE) messages to control systems. Any change to their byte size can result in a command being sent to circuit breakers to interrupt the power flow in order to protect the equipment.
The state of the circuit breakers (open/close conditions) can now be managed remotely. This brings efficiency and cost savings for utilities, however, it provides more access to adversaries \cite{SenateRPC2021INFRASTRUCTUREGRID}. Targeted periodic overloading can lead to degradation of the insulation of transformers over time and ultimately to failure and loss of operation. Hypothetical scenarios of the periodic overloading of a transformer bank (3 x 1 phase transformers) have been studied and the impacts have been shown with simulations \cite{8274710}.
Papers \cite{Jahromi2020,Olijnyk2022DesignTransformer,Khaw2021ARelays} have studied the cyber-attack scenarios on differential protection devices associated with transformers as a critical asset in the power network. Disruption to such protection equipment can lead to catastrophic failures at the network operation level. Detection of the abnormal behaviour of protection relays because of cyber threats is becoming very important for utilities.
\subsection {Transformer remote control systems – remote access}
As the condition monitoring data will be accessible through engineering PCs, workstations, and cloud based remote systems for analysis, this creates multiple points of access for adversaries.
The sensitive information that is collected through merging units (MU), Remote Terminal Units (RTU) and SCADA is often remotely accessible for further analysis by engineering staff. This increases the chance of attacks through these vulnerable points in the network.
The use of remote interfaces has even been further encouraged by service providers during the Covid pandemic, as a means of coping with reduced site access.
\subsection {Special transformers; Phase Shifting Transformers}
Phase shifting transformers (PST) are used to control the active power flow in the power grid, prevent overloading, and regulate power flow across networks. As their commands are typically transferred through SCADA systems, they can be readily targeted by cyber-attackers.
The consequences of such an attack could be transformer overloading and outages of transmission lines as well as cross-network trading losses \cite{9137377}.
PSTs control real power flow by either enforcing or blocking it through regulation of the phase displacement, as the two are closely coupled. The phase shift (either leading or lagging) is adjusted by changing the tap position across the tapping range. Remote Terminal Units (RTUs) connect PSTs, through SCADA, to the command centre \cite{9137377}.
Paper \cite{9137377} explored attack scenarios targeting phase shifting transformers by gaining access to the RTUs and proposed a detection algorithm for discerning unexpected phase shifts. A malicious phase shift command attack has been shown to be a potential cause of generation-load imbalance and, in severe cases, to lead to significant financial losses.
\section{Simulation Tools and Test Beds}
To achieve a comprehensive level of analysis across a complex and fast-changing power grid, modelling and simulation are important. Papers \cite{Yohanandhan2020Cyber-PhysicalApplications,Smadi2021AChallenges} provide a survey of simulation platforms and available test beds. Test beds have often been designed for specific tasks or research fields and provide a limited set of scenarios; however, some provide greater flexibility in both the cyber/software platforms and the physical hardware elements. Table 1 summarises the most used modelling and simulation tools. Paper \cite{Devanarayana2019TestingSimulator} demonstrates the importance of the simulation environment in developing mitigation strategies.
Real-time simulators have improved the opportunities for large scale grid simulation and allow for coverage of dynamic features of grid equipment, such as inverters \cite{Song2021ResearchGrid}. Models can be applied to different infrastructures; electrical models, communications/cyber system models, and co-simulation models.
For transformers, thermal modelling is also of significant importance, as overloading \cite{8274710} has been shown to be one of the most likely scenarios arising from cyber attacks.
\begin{table}
\centering
\caption{List of simulation software used in literature}
\begin{tabular}{|l|l|l|l}
\cline{1-3}
\textbf{Power System~} & \textbf{Cyber System~} & \textbf{Co-Simulation~} & \\
\cline{1-3}
RTDS-RSCAD~ & ns2 and ns-3 & ParaGrid & \\
\cline{1-3}
MATLAB-Simpower~ & OMNet++ & Modelica & \\
\cline{1-3}
GridDyn & Java & Dymola & \\
\cline{1-3}
PSCAD/EMTDC & RINSE & MathModelica & \\
\cline{1-3}
PowerWorld Simulator & OPNET & MapleSim & \\
\cline{1-3}
OpenDSS & Visual Studio & JModelica & \\
\cline{1-3}
DIgSILENT & GridSim & Ptolemy II & \\
\cline{1-3}
EMTP-RV & NeSSi2 & Simantics & \\
\cline{1-3}
OPAL-RT & GridStat & Mosaik & \\
\cline{1-3}
ETAP & DeterLab & Simscape & \\
\cline{1-3}
GridLab-D & WANE & EPOCHS & \\
\cline{1-3}
PSLF & UPPAAL & Simulink & \\
\cline{1-3}
MATPOWER & Stateflow & LabVIEW & \\
\cline{1-3}
EnergyPlus & TIMES-Pro & & \\
\cline{1-3}
PowerFactory & MATLAB - SimEvents & & \\
\cline{1-3}
UWPFLOW & GLOMOSIM & & \\
\cline{1-3}
TEFTS & Cloonix & & \\
\cline{1-3}
PST & GNS3 & & \\
\cline{1-3}
InterPSS & IMUNES & & \\
\cline{1-3}
OpenPMU & Shadow & & \\
\cline{1-3}
rapid61850 & EXata & & \\
\cline{1-3}
Aspen & & & \\
\cline{1-3}
PLECS & & & \\
\cline{1-3}
adevs & & & \\
\cline{1-3}
NEPLAN & & & \\
\cline{1-3}
EUROSTAG & & & \\
\cline{1-3}
Homer & & & \\
\cline{1-3}
PCFLO & & & \\
\cline{1-3}
Psap & & & \\
\cline{1-3}
\end{tabular}
\end{table}
\section {Historical Cyber Attacks on Electrical Networks}
\subheading{Attacks on Australian Critical Infrastructure.}
According to the Australian Cyber Security Centre (ACSC) report, 67,500 cybercrimes with an estimated loss of \$33 billion were reported in the financial year 2020-2021, an almost 13\% increase compared with the previous year, 2019-2020. Almost one quarter of these attacks affected critical infrastructure, which includes electricity networks \cite{AustralianCyberThreatCentre2021ACSC2020-21}. There is no doubt that cyber incidents are on the rise and that the electricity network is being targeted. A recent ransomware incident was reported by CS Energy, an Australian generation company, on 27 November 2021 \cite{CSEnergy2021CSINCIDENT}.
\subheading{Worldwide Attacks.}
US-based utilities ranked cybersecurity risk at 4.37 out of 5. Risk studies by the UK's Cambridge centre estimate losses over 5 years at GBP 442 billion \cite{TransGrid2018TransGridConditions}. Among reported attacks, energy sector attacks have been at the top of the list for the USA (CIGRE Technical Brochure 833 \cite{JACOBS2021PowerControl}).
Utilities worldwide have reported that the sophistication and frequency of cyber attacks have continued to increase \cite{MissionSupportCenterIdahoNationalLaboratory2016CyberSector}. In 2010, PLCs at an Iranian nuclear plant were infected through engineering PCs. In 2015, Ukrainian power plant control centre PCs were remotely controlled and RTUs were infected and destroyed. In 2016, a Ukrainian power plant was attacked by malware. In 2017, PLC malware targeted plant safety systems in the Middle East. Table 2 shows the history of some of the major reported cyber attacks in relation to the electrical network and other critical infrastructure.
One of the major historical attacks on the electricity network is the remote cyber attack directed against Ukraine's electricity infrastructure in December 2015, which maliciously operated SCADA systems. This caused a blackout that hit part of the Ukrainian capital and affected approximately 80,000 customers \cite{Mohan2020ASystems}.
\begin{table*}[t]
\centering
\caption{Significant Attacks History}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
\vcell{\begin{tabular}[b]{@{}l@{}}\\\textbf{Year}\end{tabular}} & \vcell{\textbf{Location}} & \vcell{\textbf{Attack}} & \vcell{\textbf{Type}} & \vcell{\textbf{Targeted/ Infected}} & \vcell{\textbf{Consequence}} & \vcell{\textbf{Infrastructure}} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2021} & \vcell{USA} & \vcell{Ransomware} & \vcell{Malware} & \vcell{Colonial pipeline} & \vcell{\begin{tabular}[b]{@{}l@{}} Energy company shut down the\\ pipeline and paid a 5MM \$ ransom\end{tabular}} & \vcell{Oil} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2021} & \vcell{Florida, USA} & \vcell{\begin{tabular}[b]{@{}l@{}} Unauthorized \\Access\end{tabular}} & \vcell{Stolen Credentials
} & \vcell{Water treatment plant} & \vcell{\begin{tabular}[b]{@{}l@{}}Boosted treatment chemicals\\ (NaOH) to dangerous levels\end{tabular}} & \vcell{Water} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2020} & \vcell{Texas, USA} & \vcell{\begin{tabular}[b]{@{}l@{}} Supply Chain\\Attack\end{tabular}} & \vcell{Malicious Code} & \vcell{\begin{tabular}[b]{@{}l@{}} Network Management\\ Company and its clients\end{tabular}} & \vcell{\begin{tabular}[b]{@{}l@{}} Compromised network \\and system data\end{tabular}} & \vcell{IT} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2020} & \vcell{Japan} & \vcell{\begin{tabular}[b]{@{}l@{}} Unauthorized\\Access\end{tabular}} & \vcell{Stolen Credentials
} & \vcell{\begin{tabular}[b]{@{}l@{}} Japan's largest electronic\\ equipment manufacturer\end{tabular}} & \vcell{\begin{tabular}[b]{@{}l@{}} Compromised details \\of missile designs\end{tabular}} & \vcell{Defense} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2018} & \vcell{New York, USA} & \vcell{Possible botnets} & \vcell{\begin{tabular}[b]{@{}l@{}}Demand \\Manipulation \end{tabular}} & \vcell{Transformers} & \vcell{\begin{tabular}[b]{@{}l@{}}Transformer explosion \\and grounded flights \end{tabular}} & \vcell{Energy} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2017} & \vcell{Middle East} & \vcell{Triton} & \vcell{Malware~~} & \vcell{\begin{tabular}[b]{@{}l@{}}PLC Malware targeting \\plant safety system\end{tabular}} & \vcell{Forced shut down of the power plant} & \vcell{Energy} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2016} & \vcell{Kiev, Ukraine} & \vcell{Crash Override} & \vcell{Malware} & \vcell{\begin{tabular}[b]{@{}l@{}}Malware with 101/104 \\ IEC 61850 modules\end{tabular}} & \vcell{\begin{tabular}[b]{@{}l@{}}Electricity supply interruption\\ and delayed recovery operations\end{tabular}} & \vcell{Energy} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2015} & \vcell{Kiev, Ukraine} & \vcell{Black Energy} & \vcell{FDI} & \vcell{\begin{tabular}[b]{@{}l@{}}Control centre PCs \\remote-controlled, \\RTUs infected/destroyed\end{tabular}} & \vcell{\begin{tabular}[b]{@{}l@{}}Blackout 225k \\customers\end{tabular}} & \vcell{Energy} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2012} &
\vcell{\begin{tabular}[b]{@{}l@{}} Saudi Arabia\\ and Qatar\end{tabular}}
& \vcell{Shamoon \cite{Bronk2013}} & \vcell{Malware} & \vcell{\begin{tabular}[b]{@{}l@{}}Malware affected \\generation/delivery\\~at Aramco and RasGass\end{tabular}} & \vcell{\begin{tabular}[b]{@{}l@{}} Disrupted Aramco Oil\\ Company for two weeks\end{tabular}} & \vcell{Energy} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2010} & \vcell{Iran} & \vcell{Stuxnet} & \vcell{Malware} & \vcell{\begin{tabular}[b]{@{}l@{}}PLCs infected via \\Eng. PCs\end{tabular}} & \vcell{1000 centrifuges} & \vcell{Energy} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2008} & \vcell{Turkey} & \vcell{} & \vcell{FDI} & \vcell{\begin{tabular}[b]{@{}l@{}}Control system \\parameters~of\\~the oil pipeline\end{tabular}} & \vcell{Oil Explosion} & \vcell{Oil} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2007} & \vcell{Idaho, USA} & \vcell{Aurora} & \vcell{FDI} & \vcell{\begin{tabular}[b]{@{}l@{}}Circuit Breaker\\~of a Generator\end{tabular}} & \vcell{\begin{tabular}[b]{@{}l@{}}Generator \\Explosion\end{tabular}} & \vcell{Energy} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\vcell{2003} & \vcell{Ohio, USA} & \vcell{Slammer Worm} & \vcell{Malware} & \vcell{\begin{tabular}[b]{@{}l@{}}Nuclear plant \\control system\end{tabular}} & \vcell{\begin{tabular}[b]{@{}l@{}}System \\Display Off\end{tabular}} & \vcell{Energy} \\[-\rowheight]
\printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop & \printcelltop \\
\hline
\end{tabular}
\end{table*}
\section {Vulnerabilities and Attack Vectors in Smart Grid}
Generally, attacks are divided into two categories: passive and active. Passive attacks target system information but do not directly affect system resources; examples include spying, eavesdropping and traffic analysis. Active attacks, on the other hand, aim to make changes to data and information; these include malware attacks and DoS. Based on the reported incidents, active attacks have often caused the more severe consequences.
Several attack vectors have been addressed in the existing literature. Based on the very few available threat models \cite{9372271,9126843}, these could be aimed at the monitoring device level, substation level, or through remote access. At each of these levels, a variety of cyber vulnerabilities have been addressed.
Among all the attacks, FDI has been identified as the most common class of threat and the most challenging with the widest impacts \cite{Musleh2020VulnerabilitiesOverview}.
According to \cite{Musleh2020}, the smart grid structure is vulnerable to four major groups of attacks (Figure 2):
• Physical based (e.g. damage to physical parts, tampering with device inputs, electromagnetic radiation, heat, light and sound emission),
• Cyber based (e.g. changes to software or firmware, code or command manipulation),
• Communication based (e.g. physical link breakdown, GPS spoofing), and
• Network based (e.g. manipulation of data within network packets, Denial of Service).
FDI has been found to be a common type of attack across all four of the above groups.
At the physical layer, an FDI attack is feasible by manipulating device inputs. At the cyber layer, data can be manipulated at the application level with no changes to the code. Injecting falsified data via GPS is a possible communication-layer FDI scenario, and data manipulation within network packets is also considered FDI \cite{Musleh2020}.
\begin{figure}[h]
\includegraphics[width=\linewidth]{photo/Common_Attacks2.jpg}
\caption{Common attacks in smart grid} \label{Common Attacks}
\end{figure}
\section {Transformers Attack Surface}
Figure 3 shows the attack surface using a common substation transformer architecture. Remote access points (e.g. corporate office, IT, network control centre and the home internet) can be compromised by malicious actors. A common pathway is the loss of login credentials of diagnostic equipment using a spoofed page.
Testing equipment for secondary systems or protection equipment testing is usually connected to the station bus. This creates another attack surface. The testing companies have realised the risk and isolated the testing PC/laptop from the station bus, however, the test equipment still needs to remain connected to the bus to allow for the testing to be completed.
Unauthorised access through the engineering PC can target the gateway and the IEDs by introducing a malicious configuration file.
Based on the very few threat models \cite{9372271, Olijnyk2022DesignTransformer}, generally the following common vulnerabilities and exposures are addressed:
1) Tampering
2) Spoofing
3) Denial of Service
4) Information Disclosure
5) Repudiation
6) Elevation of Privilege
These mainly target the monitoring equipment associated with transformers. For transformer accessories such as the OLTC, FDI and hidden command injection are the main attacks discussed in \cite{10.1007/978-3-319-23802-9_20} and \cite{9126843}. FDI has been studied extensively in smart grids \cite{TRAN2020DesignGeneration,Musleh2020,Xu2021DetectionLearning,Kumar2017EfficientGrids}; however, in the case of power transformers, only a few studies have considered FDI. Papers \cite{Khaw2021ARelays} and \cite{Jahromi2021DataProtection} discuss the supply chain issue and the vulnerability of the MUs, as they collect sensitive data such as voltages and currents.
Paper \cite{Olijnyk2022DesignTransformer} explored the methods of attack when manipulating the physical data of the transformers. Researchers have covered the following:
• Physics-centric attack,
• Interposition attacks,
• Coordinated distributed attacks,
• Physics-centric malware emulation attack,
• Harmonic restraint attack emulation,
• Differential attack emulation,
• OLTC/Tap-changer attack emulation.
Paper \cite{Automated2020ParisGrid} classified process level data as the highest-risk and most sensitive information. This is the level at which transformers are located and, with reference to \cite{Automated2020ParisGrid}, the next step is to consider mitigation strategies.
\begin{figure}[h]
\includegraphics[width=\linewidth]{photo/Attack_Vectors_3.png}
\caption{Substation architecture and attack vectors} \label{Attack Vectors}
\end{figure}
\section{Detection Algorithms}
Smart grid assets have proven to be vulnerable to data integrity attacks, and FDI attacks can target power transformers. To tackle this issue, detection algorithms have been developed over the years.
Detection algorithms can broadly be categorised into rule-based methods and AI-based techniques.
\subheading{Rule-Based Methods.}
Paper \cite{Musleh2020} provides a summary of the largest set of algorithms used in the smart grid context against the most commonly studied attack, FDI. Figure 4 shows that these algorithms can be classified into two categories: model based and data based. Figure 5 shows statistics of the algorithms used against FDI attacks.
The statistical analysis shows a growing appetite for data-driven algorithms, among both researchers in academia and utilities globally \cite{Kulkarni2020CyberStudy,Sattinger2020CriticalThreats,Autonoma2020AnIntelligence}. This has come about mainly due to the limitations of model-based algorithms in relation to the growing grid and its increased complexity. Recent literature \cite{Karimipour2019AGrids} suggests that unsupervised deep learning can be computationally efficient and reliable for the detection of attacks in a large-scale smart grid, achieving high detection rates (99\%) in good agreement with simulation results on an IEEE 2448-bus system. Another South Australian utility detailed case studies showing how machine learning techniques, such as deep learning, helped them detect and respond to cyber threats \cite{Sattinger2020CriticalThreats}.
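
As a sketch of the data-driven direction discussed above, the following trains a small autoencoder on normal telemetry snapshots and flags readings whose reconstruction error exceeds a percentile threshold; the synthetic data, network architecture and threshold are illustrative assumptions only, not a reproduction of any method cited here.
\begin{verbatim}
# Sketch: unsupervised anomaly detection for transformer
# telemetry.  An autoencoder is trained on normal readings
# (load p.u., top-oil temperature, secondary voltage) and
# large reconstruction errors are flagged as potential FDI.
# Data, architecture and threshold are illustrative.
import numpy as np
import torch
from torch import nn

rng = np.random.default_rng(0)
normal = rng.normal([0.7, 65.0, 11.0], [0.05, 2.0, 0.1],
                    size=(2000, 3))
mu, sd = normal.mean(0), normal.std(0)
x = torch.tensor((normal - mu) / sd, dtype=torch.float32)

model = nn.Sequential(nn.Linear(3, 2), nn.ReLU(),
                      nn.Linear(2, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((model(x) - x) ** 2).mean(dim=1)
thr = torch.quantile(err, 0.99).item()

def is_anomalous(reading):
    z = torch.tensor((np.asarray(reading) - mu) / sd,
                     dtype=torch.float32)
    with torch.no_grad():
        e = ((model(z) - z) ** 2).mean().item()
    return e > thr

print(is_anomalous([0.7, 65.0, 11.0]))  # normal  -> False
print(is_anomalous([0.7, 95.0, 11.0]))  # spoofed -> True
\end{verbatim}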
\begin{figure}
\centering
\includegraphics[width=\linewidth]{photo/algorithms_types.jpeg}
\caption{Algorithms classification}\label{Algorithms}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{photo/Statistics_FDI_Algorithms.jpg}
\caption{Statistics of used algorithms for FDI attack in existing literature.}\label{Algorithms Statistics}
\end{figure}
\subheading{Use of AI Techniques for Power
Transformers.}
To date, there has been little research on the disruption of smart grid operation due to targeted cyberattacks on power transformers. With the move to the next generation of so-called smart transformers, there is an increasing concern about their cybersecurity \cite{GoudCyberExplosions,SiemensEnergySensformerSecurity}, with only limited research towards the development of an AI-based cyber defence framework for their protection.
Some previously used methods \cite{Karimipour2019AGrids} are not well suited to large and complex electrical networks due to their high computation demands and system storage requirements. The focus of the existing literature has been on network security~\cite{zheng2020towards,janicke2020security}.
To tackle smart grid security threats for smart transformers, suitable models are required, using source classification and simulation tools for threat sensing and response. The focus needs to be on technical operational security and the security of the system's data management. This includes security for the reliability and resiliency of operations and for the real-time recording, monitoring and storage of data and information.
As the present sensors and accessories on transformers have limited computational power, adversarial attacks, with a focus on poisoning attacks, need to be explored. A novel threat profile for these smart components is still missing from the existing literature.
The different sources of security threats and challenges, including both known and unknown attacks, need to be identified and classified, and the application of suitable models, analysis, simulation, and optimization solutions reviewed.
There has been extensive research focused on smart meters, identifying various attacks and defences against them. However, there is limited work on automatically defending smart transformers, whose pattern of periodic data collection is quite different in nature from that of smart meters. This includes real-time monitoring of transformer vital parameters (voltage regulation and OLTC monitoring, temperature and cooling system control, online Dissolved Gas Analysis, online bushing health information, etc.).
Machine learning algorithms such as clustering, regression or classification are still to be explored specifically for transformers where it is necessary to distinguish between real data and data from bad actors.
Previous studies have shown that device attacks, application service attacks, network attacks, web interface attacks and data integrity attacks are all applicable in the smart meter context. The practicality of these attacks on smart transformers remains to be explored. Many existing AI-based algorithms are built around a particular understanding of historical data; however, the attack surface and the nature of attacks are evolving. Therefore, AI-based frameworks employing machine learning techniques appear to offer an effective way to stay ahead of cyber threats.
\section{Conclusion}
Power transformers are complex elements of the power system which require their own expertise in operation and assessment. Their concentrated nature, high cost and long lead time make them a high priority target for malicious actors. In the past they have been relatively safe from external interference, but with the advent of networked controls and monitoring they are now vulnerable. Studies have shown there is a potential for cyber threats to the important components of power transformers, including OLTC, DGA systems, connected CBs, differential protection and phase shifting transformers. A new generation of intelligent monitoring systems that are integral to a smart transformer is coming into wider use. As these technologies are evolving and the attackers are motivated, it is suggested that AI and machine learning approaches may be necessary to best secure such systems from not only network-based attacks but also malicious internal actors.
AI diagnostic systems, in particular those employing state estimation based algorithms, are best placed at present to block a vast range of attacks from external and internal threats. To counter the evolution of threats, it is important to detect stealthy attacks, which appears to require a degree of unsupervised learning. Presently there is no mitigation strategy in the existing literature for online diagnostic systems. Transformer differential protection has received more attention; however, the suggested mitigation algorithms have been tested against only a very limited number of hypothetical scenarios, and hence their robustness against advanced cyber attacks is unclear.
Due to the higher rates of attack and resources available to counter them, USA utilities are leading in this field. Australian utilities are also learning from their own and global experience and actively working to keep their networks secure.
This research summarises the cybersecurity challenges associated with power transformers within electrical networks and brings together an understanding of attack vectors for transformer components and their closely associated devices. As a new generation of power transformers equipped with smart monitoring systems enters use, the issues highlighted provide a pathway to developing transformer-specific mitigation strategies. The studied literature provides some insight into the cyber protection of smart transformers; however, there is still much to be explored. With the continuous advancement of cyber threats, there is a need for equally continuous development of robust countermeasures.
It is encouraged to investigate the provided mitigation strategies, such as the one in paper \cite{Jahromi2020}, with a larger range of attack scenarios to provide a better line of defence for these assets. The complex, dynamic and ever-increasing nature of the grid, along with the various states of the equipment and their operating conditions, needs to be considered \cite{Transgrid2021EnergyVision,Musleh2020}. Another problem for study is how best to distinguish an attacker's telemetry indications and operating commands from the true state of the transformer and its associated systems, which has been touched on for Wide-Area Monitoring Systems in \cite{Musleh2021OnlineSystems}.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
{\footnotesize
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13218",
"language": "en",
"timestamp": "2023-02-28T02:13:13",
"url": "https://arxiv.org/abs/2302.13218",
"yymm": "2302"
} | \section{Introduction}
We consider the one-dimensional Schr\"{o}dinger equation with a finite number
of $\delta$-interactions
\begin{equation}
-y^{\prime\prime}+\left( q(x)+\sum_{k=1}^{N}\alpha_{k}\delta(x-x_{k})\right)
y=\lambda y,\quad0<x<b,\;\lambda\in\mathbb{C},\label{Schrwithdelta}%
\end{equation}
where $q\in L_{2}(0,b)$ is a complex valued function, $\delta(x)$ is the Dirac
delta distribution, $0<x_{1}<x_{2}<\dots<x_{N}<b$ and $\alpha_{1}%
,\dots,\alpha_{N}\in\mathbb{C}\setminus\{0\}$. Schr\"{o}dinger equations with
distributional coefficients supported on a set of measure zero naturally
appear in various problems of mathematical physics
\cite{alveberio1,alveberio2,atkinson,barrera1,coutinio,uncu} and have been
studied in a considerable number of publications and from different
perspectives. In general terms, Eq. (\ref{Schrwithdelta}) can be interpreted
as a regular equation, i.e., with the regular potential $q\in L_{2}(0,b)$,
whose solutions are continuous and such that their first derivatives satisfy
the jump condition $y^{\prime}(x_{k}+)-y^{\prime}(x_{k}-)=\alpha_{k}y(x_{k})$
at special points \cite{kochubei, kostenko}. Another approach consists in
considering the interval $[0,b]$ as a quantum graph whose edges are the
segments $[x_{k},x_{k+1}]$, $k=0,\dots,N$, (setting $x_{0}=0$, $x_{N+1}=b$),
and the Schr\"{o}dinger operator with the regular potential $q$ as an
unbounded operator on the direct sum $\bigoplus_{k=0}^{N}H^{2}(x_{k},x_{k+1}%
)$, with the domain given by the families $(y_{k})_{k=0}^{N}$ that satisfy the
condition of continuity $y_{k}(x_{k}-)=y_{k+1}(x_{k}+)$ and the jump condition
for the derivative $y_{k+1}^{\prime}(x_{k}+)-y_{k}^{\prime}(x_{k}-)=\alpha
_{k}y_{k}(x_{k})$ for $k=1,\dots N$ (see, e.g.,
\cite{gesteszy1,kurasov1,kurasov2}). This condition for the derivative is
known in the bibliography of quantum graphs as the $\delta$-type condition
\cite{berkolaikokuchment}. Yet another approach involves a regularization of
the Schr\"{o}dinger operator with point interactions, that is, finding a subdomain
of the Hilbert space $L_{2}(0,b)$, where the operator defines a function in
$L_{2}(0,b)$. For this, note that the potential $q(x)+\sum_{k=1}^{N}\alpha
_{k}\delta(x-x_{k})$ defines a functional that belongs to the Sobolev space
$H^{-1}(0,b)$. In \cite{bondarenko1,gulyev,hryniv,schkalikov} these forms of
regularization have been studied, rewriting the operator by means of a
factorization that involves a primitive $\sigma$ of the potential.
The theory of transmutation operators, also called transformation operators, is a
widely used tool in studying differential equations and spectral problems
(see, e.g., \cite{BegehrGilbert, directandinverse, levitan,
marchenko,SitnikShishkina Elsevier}), and it is especially well developed for
Schr\"{o}dinger equations with regular potentials. It is known that under
certain general conditions on the potential $q$ the transmutation operator
transmuting the second derivative into the Schr\"{o}dinger operator can be
realized in the form of a Volterra integral operator of the second kind, whose
kernel can be obtained by solving a Goursat problem for the Klein-Gordon
equation with a variable coefficient \cite{hugo2,levitan, marchenko}.
Furthermore, functional series representations of the transmutation kernel
have been constructed and used for solving direct and inverse Sturm-Liouville
problems \cite{directandinverse,neumann}. For Schr\"{o}dinger equations with
$\delta$-point interactions, there exist results about equations with a single
point interaction and discontinuous conditions $y(x_{1}+)=ay(x_{1}-)$,
$y^{\prime}(x_{1}+)=\frac{1}{a}y^{\prime}(x_{1}-)+dy(x_{1}-)$, $a,b>0$ (see
\cite{hald,yurkoart1}), and for equations in which the spectral parameter is
also present in the jump condition (see \cite{akcay,mammadova,manafuv}).
Transmutation operators have also been studied for equations with
distributional coefficients belonging to the $H^{-1}$-Sobolev space in
\cite{bondarenko1,hryniv,schkalikov}. In \cite{hugo2}, the possibility of
extending the action of the transmutation operator for an $L_{1}$-potential to
the space of generalized functions $\mathscr{D}^{\prime}$, was studied.
The aim of this work is to present a construction of a transmutation operator
for the Schr\"{o}dinger equation with a finite number of point interactions.
The transmutation operator appears in the form of a Volterra integral
operator, and with its aid we derive analytical series representations for
solutions of (\ref{Schrwithdelta}). For this purpose, we obtain a closed form
of the general solution of (\ref{Schrwithdelta}). From it, the construction of
the transmutation operator is deduced, where the transmutation kernel is
assembled from the convolutions of the kernels of certain solutions of the
regular equation (with the potential $q$), in a finite number of steps. Next,
the spectral parameter power series (SPPS) method is developed for Eq.
(\ref{Schrwithdelta}). The SPPS method was developed for continuous
(\cite{kravchenkospps1,sppsoriginal}) and $L_{1}$-potentials (\cite{blancarte}%
), and it has been used in a piecewise manner for solving spectral problems
for equations with a finite number of point interactions in
\cite{barrera1,barrera2,rabinovich1}. Following \cite{hugo}, we use the SPPS
method to obtain an explicit construction of the image of the transmutation
operator acting on polynomials. Similarly to the case of a regular potential
\cite{neumann}, we obtain a representation of the transmutation kernel as a
Fourier series in terms of Legendre polynomials and as a corollary, a
representation for the solutions of equation (\ref{Schrwithdelta}) in terms of
a Neumann series of Bessel functions. Similar representations are obtained for
the derivatives of the solutions. It is worth mentioning that the methods
based on Fourier-Legendre representations and Neumann series of Bessel
functions have shown to be an effective tool in solving direct and inverse
spectral problems for equations with regular potentials, see, e.g.,
\cite{directandinverse,neumann, KravTorbadirect}.
In Section 2, basic properties of the solutions of (\ref{Schrwithdelta}) are
compiled, studying the equation in the distributional sense in
$\mathscr{D}^{\prime}(0,b)$ and deducing properties of its regular solutions.
Section 3 presents the construction of the closed form solution of
(\ref{Schrwithdelta}). In Section 4, the construction of the transmutation
operator and the main properties of the transmutation kernel are developed. In
Section 5, the SPPS method is presented, with the mapping and transmutation
properties of the transmutation operator. Section 6 presents the
Fourier-Legendre series representations for the transmutation kernels and the
Neumann series of Bessel functions representations for solutions of
(\ref{Schrwithdelta}), and a recursive integral relation for the Fourier-Legendre coefficients is obtained. Finally, in Section 7, integral and Neumann series of
Bessel functions representations for the derivatives of the solutions are presented.
\section{Problem setting and properties of the solutions}
We use the standard notation $W^{k,p}(0,b)$ ($b>0$) for the Sobolev space of
functions in $L_{p}(0,b)$ that have their first $k$ weak derivatives in
$L_{p}(0,b)$, $1\leqslant p\leqslant\infty$ and $k\in\mathbb{N}$. When $p=2$,
we denote $W^{k,2}(0,b)=H^{k}(0,b)$. We have that $W^{1,1}(0,b)=AC[0,b]$, and
$W^{1,\infty}(0,b)$ is precisely the class of Lipschitz continuous functions
in $[0,b]$ (see \cite[Ch. 8]{brezis}). The class of smooth functions with
compact support in $(0,b)$ is denoted by $C_{0}^{\infty}(0,b)$, then we define
$W_{0}^{1,p}(0,b)=\overline{C_{0}^{\infty}(0,b)}^{W^{1,p}}$ and $H_{0}%
^{1}(0,b)=W_{0}^{1,2}(0,b)$. Denote the dual space of $H^{1}_{0}(0,b)$ by
$H^{-1}(0,b)$. By $L_{2,loc}(0,b)$ we denote the class of measurable functions
$f:(0,b)\rightarrow\mathbb{C}$ such that $\int_{\alpha}^{\beta}|f(x)|^{2}%
dx<\infty$ for all subintervals $[\alpha, \beta]\subset(0,b)$.
The characteristic function of an interval $[A,B]\subset\mathbb{R}$ is denoted
by $\chi_{[A,B]}(t)$. In order to simplify the notation, for the case of a
symmetric interval $[-A,A]$, we simply write $\chi_{A}$. The Heaviside
function is given by $H(t)=\chi_{(0,\infty)}(t)$. The lateral limits of the
function $f$ at the point $\xi$ are denoted by $f(\xi\pm)=\lim_{x\rightarrow
\xi{\pm}}f(x)$. We use the notation $\mathbb{N}_{0}=\mathbb{N}\cup\{0\}$. The
space of distributions (generalized functions) over $C_{0}^{\infty}(0,b)$ is
denoted by $\mathscr{D}^{\prime}(0,b)$, and the value of a distribution
$f\in\mathscr{D}^{\prime}(0,b)$ at $\phi\in C_{0}^{\infty}(0,b)$ is denoted by
$(f,\phi)_{C_{0}^{\infty}(0,b)}$.\newline
Let $N\in\mathbb{N}$ and consider a partition $0<x_{1}<\dots<x_{N}<b$ and the
numbers $\alpha_{1}, \dots, \alpha_{N}\in\mathbb{C}\setminus\{0\}$. The set
$\mathfrak{I}_{N}=\{(x_{j},\alpha_{j})\}_{j=1}^{N}$ contains the information
about the point interactions of Eq. (\ref{Schrwithdelta}). Denote
\[
q_{\delta,\mathfrak{I}_{N}}(x):= \sum_{k=1}^{N}\alpha_{k}\delta(x-x_{k}%
),\quad\mathbf{L}_{q}:= -\frac{d^{2}}{dx^{2}}+q(x), \quad\mathbf{L}%
_{q,\mathfrak{I}_{N}}:= \mathbf{L}_{q}+q_{\delta,\mathfrak{I}_{N}}(x).
\]
For $u\in L_{2,loc}(0,b)$, $\mathbf{L}_{q,\mathfrak{I}_{N}}u$ defines a
distribution in $\mathscr{D}^{\prime}(0,b)$ as follows
\[
(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty}(0,b)}:= \int_{0}^{b}
u(x)\mathbf{L}_{q}\phi(x)dx+ \sum_{k=1}^{N}\alpha_{k} u(x_{k})\phi(x_{k})
\quad\mbox{for } \, \phi\in C_{0}^{\infty}(0,b).
\]
Note that the function $u$ must be well defined at the points $x_{k}$, $k=1,
\dots, N$. Actually, for a function $u\in H^{1}(0,b)$, the distribution
$\mathbf{L}_{q,\mathfrak{I}_{N}}u$ can be extended to a functional in
$H^{-1}(0,b)$ as follows
\[
(\mathbf{L}_{q,\mathfrak{I}_{N}}u,v)_{H_{0}^{1}(0,b)}:= \int_{0}^{b}
\{u^{\prime}(x)v^{\prime}(x)+q(x)u(x)v(x) \}dx+ \sum_{k=1}^{N}\alpha_{k}
u(x_{k})v(x_{k})\quad\mbox{for }\, v\in H_{0}^{1}(0,b).
\]
We say that a distribution $F\in\mathscr{D}^{\prime}(0,b)$ is $L_{2}$-regular,
if there exists a function $g\in L_{2}(0,b)$ such that $(F,\phi)_{C_{0}%
^{\infty}(0,b)}=(g,\phi)_{C_{0}^{\infty}(0,b)}:=\int_{0}^{b}g(x)\phi(x)dx$ for
all $\phi\in{C_{0}^{\infty}(0,b)}$.
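For instance, for the constant function $u\equiv1$ one has $(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty}(0,b)}=\int_{0}^{b}q(x)\phi(x)dx+\sum_{k=1}^{N}\alpha_{k}\phi(x_{k})$, which is not $L_{2}$-regular, since the Dirac terms cannot be represented by an $L_{2}$ function; accordingly, $u\equiv1$ fails the jump condition (\ref{jumpderivative}) of the proposition below.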
Denote $x_{0}=0$, $x_{N+1}=b$. We recall the following characterization of
functions $u\in L_{2,loc}(0,b)$ for which $\mathbf{L}_{q,\mathfrak{I}_{N}}u$
is $L_{2}$-regular.
\begin{proposition}
\label{propregular} If $u\in L_{2,loc}(0,b)$, then the distribution
$\mathbf{L}_{q,\mathfrak{I}_{N}}u$ is $L_{2}$-regular iff the following
conditions hold.
\begin{enumerate}
\item For each $k=0, \dots, N$, $u|_{(x_{k},x_{k+1})}\in H^{2}(x_{k},x_{k+1}%
)$.
\item $u\in AC[0,b]$.
\item The discontinuities of the derivative $u^{\prime}$ are located at the
points $x_{k}$, $k=1, \dots, N$, and the jumps are given by
\begin{equation}
\label{jumpderivative}u^{\prime}(x_{k}+)-u^{\prime}(x_{k}-)=\alpha_{k}
u(x_{k}) \quad\mbox{for } k=1, \cdots, N.
\end{equation}
\end{enumerate}
In such case,
\begin{equation}
\label{regulardist}(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty
}(0,b)}=(\mathbf{L}_{q}u,\phi)_{C_{0}^{\infty}(0,b)} \quad\mbox{for all }
\phi\in{C_{0}^{\infty}(0,b)}.
\end{equation}
\end{proposition}
\begin{proof}
Suppose that $\mathbf{L}_{q,\mathfrak{I}_{N}}u$ is $L_{2}$-regular. Then there
exists $g\in L_{2}(0,b)$ such that
\begin{equation}
\label{aux0prop1}(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty}%
(0,b)}=(g,\phi)_{C_{0}^{\infty}(0,b)} \quad\mbox{for all } \phi\in
{C_{0}^{\infty}(0,b)}.
\end{equation}
\begin{enumerate}
\item Fix $k\in\{1, \dots, N-1\}$. Take a test function $\phi\in C_{0}%
^{\infty}(0,b)$ with $\operatorname{Supp}(\phi) \subset(x_{k},x_{k+1})$.
Hence
\begin{equation}
\label{auxprop1}\int_{x_{k}}^{x_{k+1}}g(x)\phi(x)dx=(\mathbf{L}%
_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty}(0,b)}= \int_{x_{k}}^{x_{k+1}%
}u(x)\mathbf{L}_{q}\phi(x)dx,
\end{equation}
because $\phi(x_{j})=0$ for $j=1, \dots, N$. From (\ref{auxprop1}) we obtain
\[
\int_{x_{k}}^{x_{k+1}}u(x)\phi^{\prime\prime}(x)dx= \int_{x_{k}}^{x_{k+1}%
}\{q(x)u(x)-g(x)\}\phi(x)dx.
\]
Set $v(x)=\int_{0}^{x}\int_{0}^{t}\{q(s)u(s)-g(s)\}dsdt$. Hence $v\in
W^{2,1}(x_{k},x_{k+1})$, $v^{\prime\prime}(x)=q(x)u(x)-g(x)$ a.e. $x\in
(x_{k},x_{k+1})$, and we get the equality
\begin{equation}
\label{auxiliareq}\int_{x_{k}}^{x_{k+1}}(u(x)-v(x))\phi^{\prime\prime}(x)dx=0
\quad\forall\phi\in C_{0}^{\infty}(x_{k}, x_{k+1}).
\end{equation}
Equality (\ref{auxiliareq}) implies that $u(x)=v(x)+Ax+B$ a.e. $x\in
(x_{k},x_{k+1})$ for some constants $A$ and $B$ (\cite[pp. 85]{vladimirov}).
In consequence $u\in W^{2,1}(x_{k}, x_{k+1})$ and
\begin{equation}
\label{aux2prop1}-u^{\prime\prime}(x)+q(x)u(x)= g(x) \quad\mbox{a.e. }
x\in(x_{k},x_{k+1}).
\end{equation}
Furthermore, $u\in C[x_{k},x_{k+1}]$, hence $qu\in L_{2}(x_{k},x_{k+1})$ and
then $u^{\prime\prime}=qu-g\in L_{2}(x_{k},x_{k+1})$. In this way
$u|_{(x_{k},x_{k+1})}\in H^{2}(x_{k},x_{k+1})$.
Now take $\varepsilon>0$ and an arbitrary $\phi\in C_{0}^{\infty}%
(\varepsilon,x_{1})$. We have that
\[
(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\phi)_{C_{0}^{\infty}(0,b)}= \int
_{\varepsilon}^{x_{1}}\{-u(x)\phi^{\prime\prime}(x)+q(x)u(x)\phi
(x)\}dx=\int_{\varepsilon}^{x_{1}}g(x)\phi(x)dx.
\]
Applying the same procedure as in the previous case we obtain that $u\in
H^{2}(\varepsilon,x_{1})$ and satisfies Eq. (\ref{aux2prop1}) in the interval
$(\varepsilon,x_{1})$. Since $\varepsilon$ is arbitrary, we conclude that $u$
satisfies (\ref{aux2prop1}) for a.e. $x\in(0,x_{1})$. Since $q,g\in
L_{2}(0,x_{1})$, then $u|_{(0,x_{1})}\in H^{2}(0,x_{1})$ (see \cite[Th.
3.4]{zetl}). The proof for the interval $(x_{N},b)$ is analogous.
Since $u\in C^{1}[x_{k},x_{k+1}]$, $k=0,\dots, N$, the following equality is
valid (see formula (6) from \cite[pp. 100]{kanwal})
\begin{align}
\int_{0}^{b} u(x)\phi^{\prime\prime}(x)dx & = \sum_{k=1}^{N}\left\{
u^{\prime}(x_{k}+)-u^{\prime}(x_{k}-) \right\} \phi(x_{k})\label{aux3prop1}\\
& \quad-\sum_{k=1}^{N}\left\{ u(x_{k}+)-u(x_{k}-) \right\} \phi^{\prime
}(x_{k}) +\int_{0}^{b} u^{\prime\prime}(x)\phi(x)dx, \qquad\forall\phi\in
C_{0}^{\infty}(0,b).\nonumber
\end{align}
Fix $k\in\{1, \cdots, N\}$ arbitrary and take $\varepsilon>0$ small enough
such that $(x_{k}-\varepsilon,x_{k}+\varepsilon)\subset(x_{k-1},x_{k+1})$.
Choose a cut-off function $\psi\in C_{0}^{\infty}(x_{k}-\varepsilon
,x_{k}+\varepsilon)$ satisfying $0\leqslant\psi\leqslant1$ on $(x_{k}%
-\varepsilon, x_{k}+\varepsilon)$ and $\psi(x)=1$ for $x\in(x_{k}%
-\frac{\varepsilon}{3}, x_{k}+\frac{\varepsilon}{3})$.
\item By statement 1, it is enough to show that $u(x_{k}+)=u(x_{k}-)$.
Set\newline$\phi(x)=(x-x_{k})\psi(x)$, in such a way that $\phi(x_{k})=0$ and
$\phi^{\prime}(x_{k})=1$. Hence
\[
(\mathbf{L}_{q,\mathfrak{I}_{N}}u, \phi)_{C_{0}^{\infty}(0,b)}= \int
_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u(x)\mathbf{L}_{q}\phi(x)dx.
\]
By (\ref{aux3prop1}) we have
\[
\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u(x)\phi^{\prime\prime}(x)dx =
u(x_{k}-)-u(x_{k}+)+\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}%
u^{\prime\prime}(x)\phi(x)dx,
\]
because $\phi(x_{k})=0$ and $\phi^{\prime}(x_{k})=1$. Since $u$ satisfies
(\ref{aux0prop1}), we have
\[
\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}(\mathbf{L}_{q}u(x)-g(x))\phi
(x)dx+u(x_{k}+)-u(x_{k}-)=0.
\]
By statement 1, $\mathbf{L}_{q}u=g$ on both intervals $(x_{k-1},x_{k})$,
$(x_{k}, x_{k+1})$. Then we obtain that $u(x_{k}+)-u(x_{k}-)=0$.
\item Now take $\psi$ as the test function. Hence
\[
(\mathbf{L}_{q,\mathfrak{I}_{N}}u,\psi)_{C_{0}^{\infty}(0,b)}= \int
_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u(x)\mathbf{L}_{q}\psi(x)dx+\alpha
_{k}u(x_{k}),
\]
because $\operatorname{Supp}(\psi)\subset(x_{k}-\varepsilon,x_{k}%
+\varepsilon)$ and $\psi\equiv1$ on $(x_{k}-\frac{\varepsilon}{3}, x_{k}%
+\frac{\varepsilon}{3})$. On the other hand, by (\ref{aux3prop1}) we obtain
\[
\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}u(x)\psi^{\prime\prime}(x)dx =
u^{\prime}(x_{k}+)-u^{\prime}(x_{k}-)+\int_{x_{k}-\varepsilon}^{x_{k}%
+\varepsilon}u^{\prime\prime}(x)\psi(x)dx,
\]
because $\psi^{\prime}(x_{k})=0$. Thus, by (\ref{aux0prop1}) we have
\[
\int_{x_{k}-\varepsilon}^{x_{k}+\varepsilon}(\mathbf{L}_{q}u(x)-g(x))\psi
(x)dx+ u^{\prime}(x_{k}-)-u^{\prime}(x_{k}+)+\alpha_{k}u(x_{k})=0.
\]
Again, by statement 1, we obtain (\ref{jumpderivative}).
\end{enumerate}
Reciprocally, if $u$ satisfies conditions 1,2 and 3, equality (\ref{aux3prop1}%
) implies (\ref{regulardist}). By condition 1, $\mathbf{L}_{q,\mathfrak{I}%
_{N}}u$ is $L_{2}$-regular.
\end{proof}
\begin{definition}
The $L_{2}$-\textbf{regularization domain} of $\mathbf{L}_{q, \mathfrak{I}%
_{N}}$, denoted by $\mathcal{D}_{2}\left( \mathbf{L}_{q, \mathfrak{I}_{N}%
}\right) $, is the set of all functions $u\in L_{2,loc}(0,b)$ satisfying
conditions 1,2 and 3 of Proposition \ref{propregular}.
\end{definition}
If $u\in L_{2,loc}(0,b)$ is a solution of (\ref{Schrwithdelta}), then
$\mathbf{L}_{q-\lambda,\mathfrak{I}_{N}}u$ equals the regular distribution
zero. Then we have the next characterization.
\begin{corollary}
\label{proppropofsolutions} A function $u\in L_{2,loc}(0,b)$ is a solution of
Eq. (\ref{Schrwithdelta}) iff $u\in\mathcal{D}_{2}\left( \mathbf{L}_{q,
\mathfrak{I}_{N}}\right) $ and for each $k=0, \dots, N$, the restriction
$u|_{(x_{k},x_{k+1})}$ is a solution of the regular Schr\"odinger equation
\begin{equation}
\label{schrodingerregular}-y^{\prime\prime}(x)+q(x)y(x)=\lambda y(x)
\quad\mbox{for } x_{k}<x<x_{k+1}.
\end{equation}
\end{corollary}
\begin{remark}
\label{remarkidealdomain} Let $f\in\mathcal{D}_{2}\left( \mathbf{L}%
_{q,\mathfrak{I}_{N}}\right) $. Given $g\in C^{1}[0,b]$, we have
\begin{align*}
(fg)^{\prime}(x_{k}+)-(fg)^{\prime}(x_{k}-) & =f^{\prime}(x_{k}%
+)g(x_{k})+f(x_{k})g^{\prime}(x_{k}+)-f^{\prime}(x_{k}-)g(x_{k})-f(x_{k}%
)g^{\prime}(x_{k}-)\\
& = \left[ f^{\prime}(x_{k}+)-f^{\prime}(x_{k}-)\right] g(x_{k}) = \alpha_{k}
f(x_{k})g(x_{k})
\end{align*}
for $k=1, \dots, N$. In particular, $fg\in\mathcal{D}_{2}\left( \mathbf{L}%
_{q,\mathfrak{I}_{N}}\right) $ for $g\in H^{2}(0,b)$.
\end{remark}
\begin{remark}
\label{remarkunicityofsol} Let $u_{0}, u_{1}\in\mathbb{C}$. Consider the
Cauchy problem
\begin{equation}
\label{Cauchyproblem1}%
\begin{cases}
\mathbf{L}_{q, \mathfrak{I}_{N}}u(x) = \lambda u(x), \quad0<x<b,\\
u(0)=u_{0}, \; u^{\prime}(0)= u_{1}.
\end{cases}
\end{equation}
If the solution of the problem exists, it must be unique. It is enough to show
the assertion for $u_{0}=u_{1}=0$. Indeed, if $w$ is a solution of such
problem, by Corollary \ref{proppropofsolutions}, $w$ is a solution of
(\ref{schrodingerregular}) on $(0, x_{1})$ satisfying $w(0)=w^{\prime}(0)=0$.
Hence $w\equiv0$ on $[0,x_{1}]$. By the continuity of $w$ and condition
(\ref{jumpderivative}), we have $w(x_{1})=w^{\prime}(x_{1}-)=0$. Hence $w$ is
a solution of (\ref{schrodingerregular}) satisfying these homogeneous
conditions. Thus, $w\equiv0$ on $[x_{1}, x_{2}]$. By continuing the process
until the points $x_{k}$ are exhausted, we arrive at the solution $w\equiv0$
on the whole segment $[0,b]$.
The uniqueness of the Cauchy problem with conditions $u(b)=u_{0}$, $u^{\prime
}(b)=u_{1}$ is proved in a similar way.
\end{remark}
\begin{remark}
Suppose that $u_{0}=u_{0}(\lambda)$ and $u_{1}=u_{1}(\lambda)$ are entire
functions of $\lambda$ and denote by $u(\lambda,x)$ the corresponding unique
solution of (\ref{Cauchyproblem1}). Since $u$ is the solution of the Cauchy
problem $\mathbf{L}_{q}u=\lambda u$ on $(0,x_{1})$ with the initial conditions
$u(\lambda,0)=u_{0}(\lambda)$, $u^{\prime}(\lambda,0)=u_{1}(\lambda)$, both
$u(\lambda,x)$ and $u^{\prime}(\lambda,x+)$ are entire functions for any
$x\in[0,x_{1}]$ (this is a consequence of \cite[Th. 3.9]{zetl} and \cite[Th.
7]{blancarte}). Hence $u^{\prime}(\lambda,x_{1}-)=u^{\prime}(\lambda
,x_{1}+)-\alpha_{1}u(\lambda,x_{1})$ is entire in $\lambda$. Since $u$ is the
solution of the Cauchy problem $\mathbf{L}_{q}u=\lambda u$ on $(x_{1},x_{2})$
with initial conditions $u(\lambda,x_{1})$ and $u^{\prime}(\lambda,x_{1}+)$,
we have that $u(\lambda,x)$ and $u^{\prime}(\lambda,x+)$ are entire functions
for $x\in[x_{1},x_{2}]$. By continuing the process we prove this assertion for
all $x\in[0,b]$.
\end{remark}
\section{Closed form solution}
In what follows, denote the square root of $\lambda$ by $\rho$, so
$\lambda=\rho^{2}$, $\rho\in\mathbb{C}$. For each $k\in\{1, \cdots, N\}$ let
$\widehat{s}_{k}(\rho,x)$ be the unique solution of the Cauchy problem
\begin{equation}%
\begin{cases}
-\widehat{s}_{k}^{\prime\prime}(\rho,x)+q(x+x_{k})\widehat{s}_{k}(\rho
,x)=\rho^{2}\widehat{s}_{k}(\rho, x) \quad\mbox{ for } 0<x<b-x_{k},\\
\widehat{s}_{k}(\rho,0)=0, \; \widehat{s}_{k}^{\prime}(\rho, 0)=1.
\end{cases}
\end{equation}
In this way, $\widehat{s}_{k}(\rho, x-x_{k})$ is a solution of $\mathbf{L}%
_{q}u=\rho^{2} u$ on $(x_{k},b)$ with initial conditions $u(x_{k})=0$,
$u^{\prime}(x_{k})=1$. According to \cite[Ch. 3, Sec. 6.3]{vladimirov},
$(\mathbf{L}_{q}-\rho^{2})\left( H(x-x_{k})\widehat{s}_{k}(\rho,
x-x_{k})\right) =-\delta(x-x_{k})$ in $\mathscr{D}^{\prime}(0,b)$. \newline
We denote by $\mathcal{J}_{N}$ the set of finite sequences $J=(j_{1}, \dots,
j_{l})$ with $1<l\leqslant N$, $\{j_{1}, \dots, j_{l}\}\subset\{1, \dots, N\}$
and $j_{1}<\cdots<j_{l}$. Given $J\in\mathcal{J}_{N}$, the length of $J$ is
denoted by $|J|$ and we define $\alpha_{J}:= \alpha_{j_{1}}\cdots
\alpha_{j_{|J|}}$.
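For instance, $\mathcal{J}_{2}=\{(1,2)\}$, while $\mathcal{J}_{3}=\{(1,2),(1,3),(2,3),(1,2,3)\}$; that is, $\mathcal{J}_{N}$ collects all increasing tuples of at least two of the interaction indices, and these tuples parametrize the last sum in formula (\ref{generalsolcauchy}) below.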
\begin{theorem}
\label{TheoremSolCauchy} Given $u_{0}, u_{1}\in\mathbb{C}$, the unique
solution $\displaystyle u_{\mathfrak{I}_{N}}\in\mathcal{D}_{2}\left(
\mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ of the Cauchy problem
(\ref{Cauchyproblem1}) has the form
\begin{align}
u_{\mathfrak{I}_{N}}(\rho,x) & = \widetilde{u}(\rho,x)+ \sum_{k=1}^{N}%
\alpha_{k}\widetilde{u}(\rho,x_{k})H(x-x_{k})\widehat{s}_{k}(\rho
,x-x_{k})\nonumber\\
& \;\;\;\; +\sum_{J\in\mathcal{J}_{N}}\alpha_{J} H(x-x_{j_{|J|}})
\widetilde{u}(\rho,x_{j_{1}}) \left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}%
}(\rho,x_{j_{l+1}}-x_{j_{l}})\right) \widehat{s}_{j_{|J|}}(\rho, x-x_{j_{|J|}%
}),\label{generalsolcauchy}%
\end{align}
where $\widetilde{u}(\rho,x)$ is the unique solution of the regular
Schr\"odinger equation
\begin{equation}
\label{regularSch}\mathbf{L}_{q}\widetilde{u}(\rho,x)= \rho^{2}\widetilde
{u}(\rho,x), \quad0<x<b,
\end{equation}
satisfying the initial conditions $\widetilde{u}(\rho,0)=u_{0}, \;
\widetilde{u}^{\prime}(\rho,0)=u_{1}$.
\end{theorem}
\begin{proof}
The proof is by induction on $N$. For $N=1$, the proposed solution has the
form
\[
u_{\mathfrak{I}_{1}}(\rho,x)=\widetilde{u}(\rho,x)+\alpha_{1}H(x-x_{1}%
)\widetilde{u}(\rho,x_{1})\widehat{s}_{1}(\rho,x-x_{1}).
\]
Note that $u_{\mathfrak{I}_{1}}(\rho,x)$ is continuous, and $u_{\mathfrak{I}%
_{1}}(\rho,x_{1})=\widetilde{u}(\rho,x_{1})$. Hence
\[
(\mathbf{L}_{q}-\rho^{2})u_{\mathfrak{I}_{1}}(\rho,x)= \alpha_{1}\widetilde
{u}(\rho,x_{1})(\mathbf{L}_{q}-\rho^{2})\left( H(x-x_{1})\widehat{s}_{1}%
(\rho,x-x_{1})\right) = -\alpha_{1}\widetilde{u}(\rho,x_{1})\delta(x-x_{1}),
\]
that is, $u_{\mathfrak{I}_{1}}(\rho,x)$ is a solution of (\ref{Schrwithdelta})
with $N=1$. Suppose the result is valid for $N$. Let $u_{\mathfrak{I}_{N+1}%
}(\rho,x)$ be the proposed solution given by formula (\ref{generalsolcauchy}).
It is clear that $u_{\mathfrak{I}_{N+1}}(\rho, \cdot)|_{(x_{k},x_{k+1})}\in
H^{2}(x_{k},x_{k+1})$, $k=0, \cdots, N+1$, $u_{\mathfrak{I}_{N+1}}(\rho,x)$ is a
solution of (\ref{schrodingerregular}) on each interval $(x_{k},x_{k+1})$,
$k=0, \dots, N+1$, and $u_{\mathfrak{I}_{N+1}}^{(j)}(\rho,0)= \widetilde
{u}^{(j)}(\rho,0)=u_{j}$, $j=0,1$. Furthermore, we can write
\[
u_{\mathfrak{I}_{N+1}}(\rho,x)=u_{\mathfrak{I}_{N}}(\rho,x)+H(x-x_{N+1}%
)f_{N}(\rho,x),
\]
where $\mathfrak{I}_{N}= \mathfrak{I}_{N+1}\setminus\{(x_{N+1}, \alpha
_{N+1})\}$, $u_{\mathfrak{I}_{N}}(\rho,x)$ is the proposed solution for the
interactions $\mathfrak{I}_{N}$, and the function $f_{N}(\rho,x)$ is given by
\begin{align*}
f_{N}(\rho,x) & = \alpha_{N+1}\widetilde{u}(\rho,x_{N+1})\widehat{s}%
_{N+1}(x-x_{N+1})\\
& \quad+\sum_{\overset{J\in\mathcal{J}_{N+1}}{j_{|J|}=N+1}}\alpha_{J}
\widetilde{u}(\rho,x_{j_{1}}) \left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}%
}(\rho,x_{j_{l+1}}-x_{j_{l}})\right) \widehat{s}_{N+1}(\rho, x-x_{N+1}),
\end{align*}
where the sum is taken over all the sequences $J=(j_{1}, \dots, j_{|J|}%
)\in\mathcal{J}_{N+1}$ with $j_{|J|}=N+1$. From this representation we obtain
$u_{\mathfrak{I}_{N+1}}(\rho,x_{N+1}\pm)=u_{\mathfrak{I}_{N}}(\rho,x_{N+1})$
and hence $u_{\mathfrak{I}_{N+1}}\in AC[0,b]$. By the induction hypothesis,
$u_{\mathfrak{I}_{N}}(\rho,x)$ is the solution of (\ref{Schrwithdelta}) for
$N$, then in order to show that $u_{\mathfrak{I}_{N+1}}(\rho,x)$ is the
solution for $N+1$ it is enough to show that $(\mathbf{L}_{q}-\rho^{2})\hat
{f}_{N}(\rho,x)=-\alpha_{N+1}u_{\mathfrak{I}_{N+1}}(\rho,x_{N+1})\delta(x-x_{N+1})$, where $\hat
{f}_{N}(\rho,x)=H(x-x_{N+1})f_{N}(\rho,x)$. Indeed, we have
\begin{align*}
(\rho^{2}-\mathbf{L}_{q})\hat{f}_{N}(\rho,x) & = \alpha_{N+1}\widetilde
{u}(\rho,x_{N+1})\delta(x-x_{N+1})+\\
& \quad\; +\sum_{\overset{J\in\mathcal{J}_{N+1}}{j_{|J|}=N+1}}\alpha_{J}
\widetilde{u}(\rho,x_{j_{1}}) \left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}%
}(\rho,x_{j_{l+1}}-x_{j_{l}})\right) \delta(x-x_{N+1})\\
& = \alpha_{N+1}\delta(x-x_{N+1})\Bigg{[} \widetilde{u}(\rho,x_{N+1}%
)+\sum_{k=1}^{N}\alpha_{k}\widetilde{u}(\rho,x_{N+1})\widehat{s}_{k}(\rho,
x_{N+1}-x_{k})\\
& \quad\;\;\,+ \sum_{J\in\mathcal{J}_{N}}\alpha_{J}\widetilde{u}%
(\rho,x_{j_{1}})\left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}(\rho,x_{j_{l+1}%
}-x_{j_{l}})\right) \widehat{s}_{j_{|J|}}(\rho, x_{N+1}-x_{j_{|J|}})
\Bigg{]}\\
& = \alpha_{N+1}u_{\mathfrak{I}_{N}}(\rho,x_{N+1})\delta(x-x_{N+1}%
)=\alpha_{N+1}u_{\mathfrak{I}_{N+1}}(\rho,x_{N+1})\delta(x-x_{N+1}),
\end{align*}
where the second equality is due to the fact that
\[
\{J\in\mathcal{J}_{N+1}\,|\, j_{|J|}=N+1 \} = \{(J^{\prime},N+1)\, |\,
J^{\prime}\in\mathcal{J}_{N} \}\cup\{(j,N+1) \}_{j=1}^{N}.
\]
Hence $u_{\mathfrak{I}_{N+1}}(\rho,x)$ is the solution of the Cauchy problem.
\end{proof}
\begin{example}
Consider the case $q\equiv0$. Denote by $e_{\mathfrak{I}_{N}}^{0}(\rho,x)$ the
unique solution of
\begin{equation}
\label{Deltadiracpotentialexample}-y^{\prime\prime}+\left( \sum_{k=1}%
^{N}\alpha_{k} \delta(x-x_{k})\right) y =\rho^{2} y, \quad0<x<b,
\end{equation}
satisfying $e_{\mathfrak{I}_{N}}^{0}(\rho, 0)=1$, $(e_{\mathfrak{I}_{N}}%
^{0})^{\prime}(\rho, 0)=i\rho$. In this case we have $\widehat{s}_{k}(\rho,x)=\frac
{\sin(\rho x)}{\rho}$ for $k=1,\dots, N$. Hence, according to Theorem
\ref{TheoremSolCauchy}, the solution $e_{\mathfrak{I}_{N}}^{0}(\rho,x)$ has
the form
\begin{align}
e_{\mathfrak{I}_{N}}^{0}(\rho,x) & =e^{i\rho x}+\sum_{k=1}^{N}\alpha
_{k}e^{i\rho x_{k}}H(x-x_{k})\frac{\sin(\rho(x-x_{k}))}{\rho}\nonumber\\
& \quad+\sum_{J\in\mathcal{J}_{N}}\alpha_{J} H(x-x_{j_{|J|}})e^{i\rho
x_{j_{1}}}\left( \prod_{l=1}^{|J|-1}\frac{\sin(\rho(x_{j_{l+1}}-x_{j_{l}}%
))}{\rho}\right) \frac{\sin(\rho(x-x_{j_{|J|}}))}{\rho}.\label{SolDiracDeltae}%
\end{align}
\end{example}
\section{Transmutation operators}
\subsection{Construction of the integral transmutation kernel}
Let $h\in\mathbb{C}$. Denote by $\widetilde{e}_{h}(\rho,x)$ the unique
solution of Eq. (\ref{regularSch}) satisfying $\widetilde{e}_{h}(\rho,0)=1$,
$\widetilde{e}^{\prime}_{h}(\rho, 0)=i\rho+h$. Hence the unique solution
$e_{\mathfrak{I}_{N}}^{h}(\rho,x)$ of Eq. (\ref{Schrwithdelta}) satisfying
$e_{\mathfrak{I}_{N}}^{h}(\rho,0)=1$, $(e_{\mathfrak{I}_{N}}^{h})^{\prime
}(\rho,0)=i\rho+h$ is given by
\begin{align}
e_{\mathfrak{I}_{N}}^{h}(\rho,x) & =\widetilde{e}_{h}(\rho,x)+\sum_{k=1}%
^{N}\alpha_{k}\widetilde{e}_{h}(\rho, x_{k})H(x-x_{k})\widehat{s}_{k}(\rho,
x-x_{k})\label{SoleGral}\\
& \quad+\sum_{J\in\mathcal{J}_{N}}\alpha_{J} H(x-x_{j_{|J|}})\widetilde
{e}_{h}(\rho,x_{j_{1}})\left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}%
(\rho,x_{j_{l+1}}-x_{j_{l}})\right) \widehat{s}_{j_{|J|}}(\rho,x-x_{j_{|J|}%
}).\nonumber
\end{align}
It is known that there exists a kernel $\widetilde{K}^{h}\in C(\overline
{\Omega})\cap H^{1}(\Omega)$, where $\Omega=\{(x,t)\in\mathbb{R}^{2}\, |\,
0<x<b, |t|<x\}$, such that $\widetilde{K}^{h}(x,x)=\frac{h}{2}+\frac{1}{2}%
\int_{0}^{x}q(s)ds$, $\widetilde{K}^{h}(x,-x)=\frac{h}{2}$ and
\begin{equation}
\label{transm1}\widetilde{e}_{h}(\rho,x)=e^{i\rho x}+\int_{-x}^{x}%
\widetilde{K}^{h}(x,t)e^{i\rho t}dt
\end{equation}
(see, e.g., \cite{levitan, marchenko}). Actually, $\widetilde{K}^{h}%
(x,\cdot)\in L_{2}(-x,x)$ and it can be extended (as a function of $t$) to a
function in $L_{2}(\mathbb{R})$ with a support in $[-x,x]$. For each $k\in\{1,
\dots, N\}$ there exists a kernel $\widehat{H}_{k}\in C(\overline{\Omega_{k}%
})\cap H^{1}(\Omega_{k})$ with $\Omega_{k}=\{(x,t)\in\mathbb{R}^{2}\,|\,
0<x<b-x_{k}, \; |t|\leqslant x\}$, and $\widehat{H}_{k}(x,x)=\frac{1}{2}%
\int_{x_{k}}^{x+x_{k}}q(s)ds$, $\widehat{H}_{k}(x,-x)=0$, such that
\begin{equation}
\label{representationsinegeneral1}\widehat{s}_{k}(\rho,x)=\frac{\sin(\rho
x)}{\rho}+\int_{0}^{x}\widehat{H}_{k}(x,t)\frac{\sin(\rho t)}{\rho}dt
\end{equation}
(see \cite[Ch. 1]{yurko}). From this we obtain the representation
\begin{equation}
\label{representationsinegeneral2}\widehat{s}_{k}(\rho,x-x_{k})=\frac
{\sin(\rho(x-x_{k}))}{\rho}+\int_{0}^{x-x_{k}}\widehat{H}_{k}(x-x_{k}%
,t)\frac{\sin(\rho t)}{\rho}dt=\int_{-(x-x_{k})}^{x-x_{k}}\widetilde{K}%
_{k}(x,t)e^{i\rho t}dt,
\end{equation}
where
\begin{equation}
\label{kernelauxiliarsine}\widetilde{K}_{k}(x,t)=\frac{1}{2}\chi_{x-x_{k}%
}(t)+\displaystyle \frac{1}{2}\int_{|t|}^{x-x_{k}}\widehat{H}_{k}%
(x-x_{k},s)ds.
\end{equation}
We denote the Fourier transform of a function $f\in L_{1}(\mathbb{R})$ by
$\mathcal{F}f(\rho)=\int_{\mathbb{R}}f(t)e^{i\rho t}dt$ and the convolution of
$f$ with a function $g\in L_{1}(\mathbb{R})$ by $f\ast g(t)= \int_{\mathbb{R}%
}f(t-s)g(s)ds$. We recall that $\mathcal{F}(f\ast g)(\rho)=\mathcal{F}%
f(\rho)\cdot\mathcal{F}g(\rho)$. Given $f_{1}, \dots, f_{M} \in
L_{2}(\mathbb{R})$ with compact support, we denote their convolution product
by $\left( \prod_{l=1}^{M}\right) ^{\ast}f_{l}(t):= (f_{1}\ast\cdots\ast
f_{M})(t)$. For the kernels $\widetilde{K}^{h}(x,t), \widetilde{K}_{k}(x,t)$,
the operations $\mathcal{F}$ and $\ast$ will be applied with respect to the
variable $t$.
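Note also the elementary identity $\mathcal{F}[\chi_{A}](\rho)=\int_{-A}^{A}e^{i\rho t}\,dt=\frac{2\sin(\rho A)}{\rho}$ for $A>0$; it is precisely this identity that allows the kernels in (\ref{representationsinegeneral2}) and (\ref{kernelauxiliarsine}) to be interpreted as Fourier transforms of compactly supported functions.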
\begin{lemma}
\label{lemaconv} Let $A,B>0$. If $f\in C[-A,A]$ and $g\in C[-B,B]$, then
$(\chi_{A} f)\ast(\chi_{B} g)\in C(\mathbb{R})$ with $\operatorname{Supp}%
\left( (\chi_{A} f)\ast(\chi_{B} g)\right) \subset[-(A+B),A+B]$.
\end{lemma}
\begin{proof}
The assertion $\operatorname{Supp}\left( (\chi_{A} f)\ast(\chi_{B} g)\right)
\subset[-(A+B),A+B]$ is due to \cite[Prop. 4.18]{brezis}. Since $(\chi_{A}
f)\in L_{1}(\mathbb{R})$ and $(\chi_{B} g)\in L_{\infty}(\mathbb{R})$, it
follows from \cite[Prop. 8.8]{folland} that $(\chi_{A} f)\ast(\chi_{B} g)\in
C(\mathbb{R})$.
\end{proof}
\begin{theorem}
\label{thoremtransmoperator} There exists a kernel $K_{\mathfrak{I}_{N}}%
^{h}(x,t)$ defined on $\Omega$ such that
\begin{equation}
\label{transmutationgeneral}e_{\mathfrak{I}_{N}}^{h}(\rho,x)=e^{i\rho x}%
+\int_{-x}^{x}K_{\mathfrak{I}_{N}}^{h}(x,t)e^{i\rho t}dt.
\end{equation}
For any $0<x\leqslant b$, $K_{\mathfrak{I}_{N}}^{h}(x,t)$ is piecewise
absolutely continuous with respect to the variable $t\in[-x,x]$ and satisfies
$K_{\mathfrak{I}_{N}}^{h}(x,\cdot)\in L_{2}(-x,x)$. Furthermore,
$K_{\mathfrak{I}_{N}}^{h}\in L_{\infty}(\Omega)$.
\end{theorem}
\begin{proof}
Substitution of formulas (\ref{transm1}) and (\ref{representationsinegeneral2}%
) in (\ref{SoleGral}) leads to the equality
\begin{align*}
& e_{\mathfrak{I}_{N}}^{h}(\rho,x)= e^{i\rho x}+\int_{-x}^{x}\widetilde{K}%
^{h}(x,t)e^{i\rho t}dt+\\
& +\sum_{k=1}^{N}\alpha_{k}H(x-x_{k})\left( e^{i\rho x_{k}}+\int
\limits_{-x_{k}}^{x_{k}}\widetilde{K}^{h}(x_{k},t)e^{i\rho t}dt\right) \left(
\int\limits_{-(x-x_{k})}^{x-x_{k}}\widetilde{K}_{k}(x,t)e^{i\rho t}dt\right)
\\
& +\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\Bigg[\left( e^{i\rho
x_{j_{1}}}+\int\limits_{-x_{j_{1}}}^{x_{j_{1}}}\widetilde{K}^{h}(x_{j_{1}%
},t)e^{i\rho t}dt\right) \left( \prod_{l=1}^{|J|-1}\int\limits_{-(x_{j_{l+1}%
}-x_{j_{l}})}^{x_{j_{l+1}}-x_{j_{l}}}\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)e^{i\rho
t}dt\right) \\
& \qquad\qquad\qquad\qquad\cdot\int\limits_{-(x-x_{j_{|J|}})}^{x-x_{j_{|J|}}%
}\widetilde{K}_{j_{|J|}}(x,t)e^{i\rho t}dt\Bigg]
\end{align*}
Note that
\begin{align*}
\prod_{l=1}^{|J|-1}\int\limits_{-(x_{j_{l+1}}-x_{j_{l}})}^{x_{j_{l+1}%
}-x_{j_{l}}}\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)e^{i\rho t}dt & = \mathcal{F}%
\left\{ \left( \prod_{l=1}^{|J|-1}\right) ^{\ast}\left( \chi_{x_{j_{l+1}%
}-x_{j_{l}}}(t)\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right) \right\} .
\end{align*}
In a similar way, if we denote $I_{A,B}=\left( e^{i\rho A}+\int\limits_{-A}%
^{A}\widetilde{K}^{h}(A,t)e^{i\rho t}dt\right) \left( \int\limits_{-B}%
^{B}\widetilde{K}_{k}(B,t)e^{i\rho t}dt\right) $ with $A,B\in(0,b)$, then
\begin{align*}
I_{A,B} = & e^{i\rho A}\int\limits_{-B}^{B}\widetilde{K}_{k}(B,t)e^{i\rho
t}dt+ \mathcal{F}\left( \chi_{A}(t)\widetilde{K}^{h}(A,t)\ast\chi
_{B}(t)\widetilde{K}_{k}(B,t)\right) \\
= & \mathcal{F}\left( \chi_{[A-B,B+A]}(t)\widetilde{K}_{k}(B,t-A)+ \chi
_{A}(t)\widetilde{K}^{h}(A,t)\ast\chi_{B}(t)\widetilde{K}_{k}(B,t)\right) .
\end{align*}
Set $R_{N}(\rho,x)=e_{\mathfrak{I}_{N}}^{h}(\rho,x)-e^{i\rho x}$. Thus,
\begin{align*}
R_{N}(\rho,x) = & \mathcal{F}\Bigg[ \chi_{x}(t)\widetilde{K}^{h}(x,t)\\
& +\sum_{k=1}^{N}\alpha_{k}H(x-x_{k}) \left( \chi_{[2x_{k}-x,x]}%
(t)\widetilde{K}_{k}(x,t-x_{k})+ \chi_{x_{k}}(t)\widetilde{K}^{h}(x_{k}%
,t)\ast\chi_{x-x_{k}}(t)\widetilde{K}_{k}(x,t)\right) \\
& +\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\left( \prod
_{l=1}^{|J|-1}\right) ^{\ast}\left( \chi_{x_{j_{l+1}}-x_{j_{l}}}%
(t)\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right) \\
& \ast\Big(\chi_{[x_{j_{|J|}}+x_{j_{1}}-x, x-(x_{j_{|J|}}-x_{j_{1}}%
)]}(t)\widetilde{K}_{j_{|J|}}(x,t-x_{j_{1}})\\
& \qquad\; + \chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}},t)\ast
\chi_{x-x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)\Big)\Bigg]
\end{align*}
According to Lemma \ref{lemaconv}, the support of $\left( \prod_{l=1}%
^{|J|-1}\right) ^{\ast}\left( \chi_{x_{j_{l+1}}-x_{j_{l}}}(t)\widetilde{K}%
_{j_{l}}(x_{j_{l+1}},t)\right) $ lies in \newline$[x_{j_{1}}-x_{j_{|J|}},
x_{j_{|J|}}-x_{j_{1}}]$ and $\chi_{ x-(x_{j_{|J|}}-x_{j_{1}})}(t)\widetilde
{K}_{j_{|J|}}(x,t-x_{j_{1}})+ \chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}%
},t)\ast\chi_{x-x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)$ has its support
in $[x_{j_{|J|}}+x_{j_{1}}-x, x-(x_{j_{|J|}}-x_{j_{1}})]$. Hence the
convolution in the second sum of $R_{N}(\rho,x)$ has its support in $[-x,x]$.
On the other hand, $\chi_{x_{k}}(t)\widetilde{K}^{h}(x_{k},t)\ast\chi
_{x-x_{k}}(t)\widetilde{K}_{k}(x,t)$ has its support in $[-x,x]$, and since
$[2x_{k}-x,x]\subset[-x,x]$, we conclude that $\operatorname{Supp}\left(
\mathcal{F}^{-1}R_{N}(\rho,x)\right) \subset[-x,x]$.
Thus, we obtain (\ref{transmutationgeneral}) with
\begin{align}
K_{\mathfrak{I}_{N}}^{h}(x,t) = & \chi_{x}(t)\widetilde{K}^{h}%
(x,t)\nonumber\\
& +\sum_{k=1}^{n}\alpha_{k}H(x-x_{k}) \left( \chi_{[2x_{k}-x,x]}%
(t)\widetilde{K}_{k}(x,t-x_{k})+ \chi_{x_{k}}(t)\widetilde{K}^{h}(x_{k}%
,t)\ast\chi_{x-x_{k}}(t)\widetilde{K}_{k}(x,t)\right) \nonumber\\
& +\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\left( \prod
_{l=1}^{|J|-1}\right) ^{\ast}\left( \chi_{x_{j_{l+1}}-x_{j_{l}}}%
(t)\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right) \label{transmkernelgeneral}\\
& \qquad\ast\Big(\chi_{ x-(x_{j_{|J|}}-x_{j_{1}})}(t)\widetilde{K}_{j_{|J|}%
}(x,t-x_{j_{1}}) + \chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}},t)\ast
\chi_{x-x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)\Big),\nonumber
\end{align}
and $K_{\mathfrak{I}_{N}}^{h}(x,\cdot)\in L_{2}(-x,x)$. By formula
(\ref{transmkernelgeneral}) and the definitions of $\widetilde{K}^{h}(x,t)$ and
$\widetilde{K}_{k}(x,t)$, $K_{\mathfrak{I}_{N}}^{h}(x,t)$ is piecewise absolutely
continuous for $t\in[-x,x]$. Since $\widetilde{K}^{h},\widetilde{K}_{k}\in
L_{\infty}(\Omega)$, it is clear that $K_{\mathfrak{I}_{N}}^{h}\in L_{\infty
}(\Omega)$.
\end{proof}
\newline
As a consequence of (\ref{transmutationgeneral}), $e_{\mathfrak{I}_{N}}%
^{h}(\rho,x)$ is an entire function of exponential type $x$ in the spectral
parameter $\rho$.
\begin{example}
\label{beginexample1} Consider (\ref{SolDiracDeltae}) with $N=1$. In this
case the solution $e_{\mathfrak{I}_{1}}^{0}(\rho,x)$ is given by
\[
e_{\mathfrak{I}_{1}}^{0}(\rho,x)= e^{i\rho x}+\alpha_{1}e^{i\rho x_{1}%
}H(x-x_{1})\frac{\sin(\rho(x-x_{1}))}{\rho}.
\]
We have
\[
e^{i\rho x_{1}}\frac{\sin(\rho(x-x_{1}))}{\rho}=\frac{1}{2}\int_{x_{1}%
-x}^{x-x_{1}}e^{i\rho(t+x_{1})}dt= \frac{1}{2}\int_{2x_{1}-x}^{x}e^{i\rho
t}dt.
\]
Hence
\[
e_{\mathfrak{I}_{1}}^{0}(\rho,x)= e^{i\rho x}+\int_{-x}^{x}K_{\mathfrak{I}%
_{1}}^{0}(x,t)e^{i\rho t}dt \quad\mbox{with }\, K_{\mathfrak{I}_{1}}%
^{0}(x,t)=\frac{\alpha_{1}}{2}H(x-x_{1})\chi_{[2x_{1}-x,x]}(t).
\]
\end{example}
\begin{example}
\label{beginexample2} Consider again Eq. (\ref{SolDiracDeltae}) but now with
$N=2$. In this case the solution $e_{\mathfrak{I}_{2}}^{0}(\rho,x)$ is given
by
\begin{align*}
e_{\mathfrak{I}_{2}}^{0}(\rho,x) = & e^{i\rho x}+\alpha_{1}e^{i\rho x_{1}%
}H(x-x_{1})\frac{\sin(\rho(x-x_{1}))}{\rho}+\alpha_{2}e^{i\rho x_{2}}%
H(x-x_{2})\frac{\sin(\rho(x-x_{2}))}{\rho}\\
& +\alpha_{1}\alpha_{2}e^{i\rho x_{1}}H(x-x_{2})\frac{\sin(\rho(x_{2}%
-x_{1}))}{\rho}\frac{\sin(\rho(x-x_{2}))}{\rho},
\end{align*}
and the transmutation kernel $K_{\mathfrak{I}_{2}}^{0}(x,t)$ has the form
\begin{align*}
K_{\mathfrak{I}_{2}}^{0}(x,t) & = \frac{\alpha_{1}H(x-x_{1})}{2}\chi
_{[2x_{1}-x,x]}(t)+\frac{\alpha_{2}H(x-x_{2})}{2}\chi_{[2x_{2}-x,x]}(t)\\
& \qquad+\frac{\alpha_{1}\alpha_{2}H(x-x_{2})}{4}\left( \chi_{x_{2}-x_{1}}%
\ast\chi_{x-x_{2}}\right) (t-x_{1}).
\end{align*}
Direct computation shows that
\begin{multline*}
\chi_{x_{2}-x_{1}}\ast\chi_{x-x_{2}}(t-x_{1})=\\%
\begin{cases}
0, & t\not \in [2x_{1}-x,x],\\
t+x-2x_{1}, & 2x_{1}-x< t< -|2x_{2}-x-x_{1}|+x_{1},\\
x-x_{1}-|2x_{2}-x-x_{1}|, & -|2x_{2}-x-x_{1}|+x_{1}< t<|2x_{2}-x-x_{1}%
|+x_{1},\\
x-t, & |2x_{2}-x-x_{1}|+x_{1}<t<x.
\end{cases}
\end{multline*}
In Figure \ref{levelcurves}, we can see the graphs of the kernel
$K_{\mathfrak{I}_{2}}^{0}(x,t)$ (as a function of $t$), $\mathfrak{I_{2}%
}=\{(0.25,1), (0.75,2)\}$, for some values of $x$. \begin{figure}[h]
\centering
\includegraphics[width=11.cm]{ejemplo1kernel.pdf} \caption{The graphs of
$K_{\mathfrak{I}_{2}}^{0}(x,t)$, as a function of $t\in[-1,1]$, for some
points $x\in(0,1)$ and $\mathfrak{I_{2}}=\{(0.25,1), (0.75,2)\}$.}%
\label{levelcurves}%
\end{figure}
\end{example}
For the general case we have the following representation for the kernel.
\begin{proposition}
The transmutation kernel $K_{\mathfrak{I}_{N}}^{0}(x,t)$ for the solution
$e_{\mathfrak{I}_{N}}^{0}(\rho,x)$ of Eq. (\ref{Deltadiracpotentialexample}) is given by
\begin{align}
K_{\mathfrak{I}_{N}}^{0}(x,t) & = \sum_{k=1}^{N}\frac{\alpha_{k}H(x-x_{k}%
)}{2}\chi_{[2x_{k}-x,x]}(t)\nonumber\\
& +\sum_{J\in\mathcal{J}_{N}}\frac{\alpha_{J}H(x-x_{j_{|J|}})}{2^{|J|}%
}\left( \left( \prod_{l=1}^{|J|-1}\right) ^{\ast}\chi_{x_{j_{l+1}}-x_{j_{l}}%
}(t) \right) \ast\chi_{x-x_{j_{|J|}}}(t-x_{j_{1}})\label{kerneldeltaN}%
\end{align}
\end{proposition}
\begin{proof}
In this case $\widetilde{e}_{0}(\rho,x)=e^{i\rho x}$, $\widehat{s}_{k}%
(\rho,x-x_{k})=\frac{\sin(\rho(x-x_{k}))}{\rho}$, hence $\widetilde{K}%
^{0}(x,t)\equiv0$, $\widetilde{K}_{k}(x,t)=\frac{1}{2}\chi_{x-x_{k}}(t)$.
Substituting these expressions into (\ref{transmkernelgeneral}) and taking
into account that $\chi_{[x_{j_{|J|}}+x_{j_{1}}-x,\, x-(x_{j_{|J|}}-x_{j_{1}}%
)]}(t)=\chi_{x-x_{j_{|J|}}}(t-x_{j_{1}})$ we obtain (\ref{kerneldeltaN}).
\end{proof}
\newline
Let
\begin{equation}
\label{transmutationoperator1}\mathbf{T}_{\mathfrak{I}_{N}}^{h} u(x):= u(x)+
\int_{-x}^{x} K_{\mathfrak{I}_{N}}^{h}(x,t)u(t)dt.
\end{equation}
By Theorem \ref{thoremtransmoperator}, $\mathbf{T}_{\mathfrak{I}_{N}}^{h}%
\in\mathcal{B}\left( L_{2}(-b,b)\right) $ and
\begin{equation}
\label{transmutationrelation1}e_{\mathfrak{I}_{N}}^{h}(\rho,x)= \mathbf{T}_{
\mathfrak{I}_{N}}^{h} \left[ e^{i\rho x}\right] .
\end{equation}
\subsection{Goursat conditions}
Let us define the function
\begin{equation}
\label{Sigmafunction}\sigma_{\mathfrak{I}_{N}}(x):= \sum_{k=1}^{N}\alpha
_{k}H(x-x_{k}).
\end{equation}
Hence $\sigma_{\mathfrak{I}_{N}}^{\prime}(x)=q_{\delta,\mathfrak{I}_{N}}(x)$
in the distributional sense (that is, $(q_{\delta,\mathfrak{I}_{N}},\phi)_{C_{0}%
^{\infty}(0,b)}=-(\sigma_{\mathfrak{I}_{N}}, \phi^{\prime})_{C_{0}^{\infty
}(0,b)}$ for all $\phi\in C_{0}^{\infty}(0,b)$). Note that in Examples
\ref{beginexample1} and \ref{beginexample2} we have
\[
K_{\mathfrak{I}_{N}}^{0}(x,x)= \frac{1}{2}\left( \int_{0}^{x}q(s)ds+\sigma
_{\mathfrak{I}_{N}}(x)\right) \;\,\mbox{ and }\;\, K_{\mathfrak{I}_{N}%
}^{0}(x,-x)=0 \;\, \mbox{ for } N=1,2.
\]
More generally, the following statement is true.
\begin{proposition}
\label{propGoursat} The integral transmutation kernel $K_{\mathfrak{I}_{N}%
}^{h}$ satisfies the following Goursat conditions for $x\in[0,b]$
\begin{equation}
K_{\mathfrak{I}_{N}}^{h}(x,x)= \frac{1}{2}\left( h+\int_{0}^{x}q(s)ds+\sigma
_{\mathfrak{I}_{N}}(x)\right) \qquad\mbox{ and }\qquad K_{\mathfrak{I}%
_{N}}^{h}(x,-x)=\frac{h}{2}.
\end{equation}
\end{proposition}
\begin{proof}
Fix $x\in[0,b]$ and take $\xi\in\{-x,x\}$. By formula
(\ref{transmkernelgeneral}) we can write
\[
K_{\mathfrak{I}_{N}}^{h}(x,\xi)= \widetilde{K}^{h}(x,\xi)+\sum_{k=1}^{N}%
\alpha_{k}H(x-x_{k})\chi_{[2x_{k}-x,x]}(\xi)\widetilde{K}_{k}(x,\xi
-x_{k})+F(x,\xi),
\]
where
\begin{align*}
F(x,t) = & \sum_{k=1}^{n}\alpha_{k}H(x-x_{k}) \chi_{x_{k}}(t)\widetilde
{K}^{h}(x_{k},t)\ast\chi_{x-x_{k}}(t)\widetilde{K}_{k}(x,t)\\
& +\sum_{J\in\mathcal{J}_{N}}\alpha_{J}H(x-x_{j_{|J|}})\left( \prod
_{l=1}^{|J|-1}\right) ^{\ast}\left( \chi_{x_{j_{l+1}}-x_{j_{l}}}%
(t)\widetilde{K}_{j_{l}}(x_{j_{l+1}},t)\right) \\
& \qquad\ast\Big(\chi_{ x-(x_{j_{|J|}}-x_{j_{1}})}(t)\widetilde{K}_{j_{|J|}%
}(x,t-x_{j_{1}}) + \chi_{x_{j_{1}}}(t)\widetilde{K}^{h}(x_{j_{1}},t)\ast
\chi_{x-x_{j_{|J|}}}(t)\widetilde{K}_{j_{|J|}}(x,t)\Big).
\end{align*}
In the proof of Theorem \ref{thoremtransmoperator} we obtain that
$\operatorname{Supp}(F(x,t))\subset[-x,x]$. Since $\widetilde{K}^{h}(x_{j},t)$
and $\widetilde{K}_{k}(x_{j},t)$ are continuous with respect to $t$ in the
intervals $[-x_{j},x_{j}]$ and $[x_{k}-x_{j},x_{j}-x_{k}]$ respectively for
$j=1, \dots, N$, $k\leqslant j$, by Lemma \ref{lemaconv} the function $F(x,t)$
is continuous for all $t\in\mathbb{R}$. Thus $F(x,\xi)=0$. For the case
$\xi=x$, we have that $\widetilde{K}^{h}(x,x)=\frac{h}{2}+\frac{1}{2}\int
_{0}^{x}q(s)ds$, $\chi_{[2x_{k}-x,x]}(x)=1$ and
\[
\widetilde{K}_{k}(x,x-x_{k})=\frac{1}{2}\chi_{x-x_{k}}(x-x_{k}%
)+\displaystyle \frac{1}{2}\int_{|x-x_{k}|}^{x-x_{k}}\widehat{H}_{k}%
(x-x_{k},s)ds= \frac{1}{2}
\]
(we assume that $x\geqslant x_{k}$ in order to have $H(x-x_{k})=1$).
Thus\newline$K_{\mathfrak{I}_{N}}^{h}(x,x)=\frac{1}{2}\left( h+\int_{0}%
^{x}q(s)ds+\sigma_{\mathfrak{I}_{N}}(x)\right) $. For the case $\xi=-x$,
$\widetilde{K}^{h}(x,-x)=\frac{h}{2}$ and\newline$\chi_{[2x_{k}-x,x]}(-x)=0$.
Hence $K_{\mathfrak{I}_{N}}^{h}(x,-x)=\frac{h}{2}$.
\end{proof}
\begin{remark}
\label{remarkantiderivativegousart} According to Proposition \ref{propGoursat}%
, $2K_{\mathfrak{I}_{N}}^{h}(x,x)$ is a (distributional) antiderivative of the
potential $q(x)+q_{\delta,\mathfrak{I}_{N}}(x)$.
\end{remark}
\subsection{The transmuted Cosine and Sine solutions}
Let $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ and $s_{\mathfrak{I}_{N}}(\rho,x)$ be
the solutions of Eq. (\ref{Schrwithdelta}) satisfying the initial conditions
\begin{align}
c_{\mathfrak{I}_{N}}^{h}(\rho, 0)= 1, & (c_{\mathfrak{I}_{N}}^{h})^{\prime
}(\rho, 0)=h,\label{cosinegralinitialconds}\\
s_{\mathfrak{I}_{N}}(\rho, 0)=0, & s_{\mathfrak{I}_{N}}^{\prime}(\rho,0)=
1.\label{sinegralinitialconds}%
\end{align}
Note that $c_{\mathfrak{I}_{N}}^{h}(\rho,x)=\frac{e_{\mathfrak{I}_{N}}%
^{h}(\rho,x)+e_{\mathfrak{I}_{N}}^{h}(-\rho,x)}{2}$ and $s_{\mathfrak{I}_{N}%
}(\rho,x)= \frac{e_{\mathfrak{I}_{N}}^{h}(\rho,x)-e_{\mathfrak{I}_{N}}%
^{h}(-\rho,x)}{2i\rho}$.
\begin{remark}
\label{remarksinecosinefunda} By Corollary \ref{proppropofsolutions},
$c_{\mathfrak{I}_{N}}^{h}(\rho,\cdot), s_{\mathfrak{I}_{N}}(\rho,\cdot)\in
AC[0,b]$ and both functions are solutions of Eq. (\ref{schrodingerregular}) on
$[0,x_{1}]$, hence their Wronskian is constant for $x\in[0,x_{1}]$ and
\begin{align*}
1 & = W\left[ c_{\mathfrak{I}_{N}}^{h}(\rho,x), s_{\mathfrak{I}_{N}}%
(\rho,x)\right] (0)=W\left[ c_{\mathfrak{I}_{N}}^{h}(\rho,x), s_{\mathfrak{I}%
_{N}}(\rho,x)\right] (x_{1}-) =
\begin{vmatrix}
c_{\mathfrak{I}_{N}}^{h}(\rho,x_{1}) & s_{\mathfrak{I}_{N}}(\rho,x_{1})\\
(c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x_{1}-) & s^{\prime}_{\mathfrak{I}%
_{N}}(\rho,x_{1}-)
\end{vmatrix}
\\
& =
\begin{vmatrix}
c_{\mathfrak{I}_{N}}^{h}(\rho,x_{1}) & s_{\mathfrak{I}_{N}}(\rho,x_{1})\\
(c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x_{1}+)-\alpha_{1} c_{\mathfrak{I}%
_{N}}^{h}(\rho,x_{1}) & s^{\prime}_{\mathfrak{I}_{N}}(\rho,x_{1}+)-\alpha_{1}
s_{\mathfrak{I}_{N}}(\rho,x_{1})
\end{vmatrix}
\\
& =
\begin{vmatrix}
c_{\mathfrak{I}_{N}}^{h}(\rho,x_{1}) & s_{\mathfrak{I}_{N}}(\rho,x_{1})\\
(c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x_{1}+) & s^{\prime}_{\mathfrak{I}%
_{N}}(\rho,x_{1}+)
\end{vmatrix}
= W\left[ c_{\mathfrak{I}_{N}}^{h}(\rho,x), s_{\mathfrak{I}_{N}}%
(\rho,x)\right] (x_{1}+)
\end{align*}
(the equality in the second line is due to (\ref{jumpderivative})). Since
$c_{\mathfrak{I}_{N}}^{h}(\rho,x), s_{\mathfrak{I}_{N}}(\rho,x)$ are solutions
of (\ref{schrodingerregular}) on $[x_{1},x_{2}]$, then $W\left[
c_{\mathfrak{I}_{N}}^{h}(\rho,x), s_{\mathfrak{I}_{N}}(\rho,x)\right] $ is
constant for $x\in[x_{1},x_{2}]$. Thus, \newline$W\left[ c_{\mathfrak{I}_{N}%
}^{h}(\rho,x), s_{\mathfrak{I}_{N}}(\rho,x)\right] (x)=1$ for all
$x\in[0,x_{2}]$. Continuing the process we obtain that the Wronskian equals
one in the whole segment $[0,b]$. Thus, $c_{\mathfrak{I}_{N}}^{h}(\rho,x),
s_{\mathfrak{I}_{N}}(\rho,x)$ are linearly independent. Finally, if $u$ is a
solution of (\ref{Schrwithdelta}), by Remark \ref{remarkunicityofsol}, $u$ can
be written as $u(x)=u(0)c_{\mathfrak{I}_{N}}^{h}(\rho,x)+u^{\prime
}(0)s_{\mathfrak{I}_{N}}(\rho,x)$. In this way, $\left\{ c_{\mathfrak{I}_{N}%
}^{h}(\rho,x), s_{\mathfrak{I}_{N}}(\rho,x)\right\} $ is a fundamental set of
solutions for (\ref{Schrwithdelta}).
\end{remark}
Similarly to the case of the regular Eq. (\ref{regularSch}) (see \cite[Ch.
1]{marchenko}), from (\ref{transmutationgeneral}) we obtain the following representations.
\begin{proposition}
The solutions $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ and $s_{\mathfrak{I}_{N}%
}(\rho,x)$ admit the following integral representations
\begin{align}
c_{\mathfrak{I}_{N}}^{h}(\rho, x) & = \cos(\rho x)+\int_{0}^{x}
G_{\mathfrak{I}_{N}}^{h}(x,t)\cos(\rho t)dt,\label{transmcosinegral}\\
s_{\mathfrak{I}_{N}}(\rho,x) & = \frac{\sin(\rho x)}{\rho}+\int_{0}%
^{x}S_{\mathfrak{I}_{N}}(x,t)\frac{\sin(\rho t)}{\rho}%
dt,\label{transmsinegral}%
\end{align}
where
\begin{align}
G_{\mathfrak{I}_{N}}^{h}(x,t) & = K_{\mathfrak{I}_{N}}^{h}%
(x,t)+K_{\mathfrak{I}_{N}}^{h}(x,-t),\label{cosineintegralkernelgral}\\
S_{\mathfrak{I}_{N}}(x,t) & = K_{\mathfrak{I}_{N}}^{h}(x,t)-K_{\mathfrak{I}%
_{N}}^{h}(x,-t).\label{sineintegralkernelgral}%
\end{align}
\end{proposition}
\begin{remark}
\label{remarkGoursatcosinesine} By Proposition \ref{propGoursat}, the cosine
and sine integral transmutation kernels satisfy the conditions
\begin{equation}
G_{\mathfrak{I}_{N}}^{h}(x,x)= h+\frac{1}{2}\left( \int_{0}^{x}q(s)ds+\sigma
_{\mathfrak{I}_{N}}(x)\right) ,
\end{equation}
\begin{equation}
S_{\mathfrak{I}_{N}}(x,x)= \frac{1}{2}\left( \int_{0}^{x}q(s)ds+\sigma
_{\mathfrak{I}_{N}}(x)\right) \quad\mbox{and }\;\; S_{\mathfrak{I}_{N}%
}(x,0)=0.
\end{equation}
Introducing the cosine and sine transmutation operators
\begin{equation}
\label{cosineandsinetransop}\mathbf{T}_{\mathfrak{I}_{N},h}^{C}u(x)=
u(x)+\int_{0}^{x}G_{\mathfrak{I}_{N}}^{h}(x,t)u(t)dt, \quad\mathbf{T}%
_{\mathfrak{I}_{N}}^{S}u(x)= u(x)+\int_{0}^{x}S_{\mathfrak{I}_{N}}(x,t)u(t)dt
\end{equation}
we obtain
\begin{equation}
\label{transmutedcosineandsinesol}c_{\mathfrak{I}_{N}}^{h}(\rho,x)=\mathbf{T}%
_{\mathfrak{I}_{N},h}^{C}\left[ \cos(\rho x)\right] , \quad s_{\mathfrak{I}%
_{N}}(\rho,x) = \mathbf{T}_{\mathfrak{I}_{N}}^{S}\left[ \frac{\sin(\rho
x)}{\rho}\right] .
\end{equation}
\end{remark}
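For illustration, take $q\equiv0$, $h=0$ and $N=1$ as in Example \ref{beginexample1}. Then (\ref{cosineintegralkernelgral}) gives $G_{\mathfrak{I}_{1}}^{0}(x,t)=\frac{\alpha_{1}}{2}H(x-x_{1})\left( \chi_{[2x_{1}-x,x]}(t)+\chi_{[2x_{1}-x,x]}(-t)\right) $, and a direct computation of the integral in (\ref{transmcosinegral}) yields
\[
c_{\mathfrak{I}_{1}}^{0}(\rho,x)=\cos(\rho x)+\alpha_{1}H(x-x_{1})\cos(\rho x_{1})\frac{\sin(\rho(x-x_{1}))}{\rho},
\]
in agreement with Theorem \ref{TheoremSolCauchy} applied with $\widetilde{u}(\rho,x)=\cos(\rho x)$.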
\begin{remark}
\label{remarkwronskian} According to Remark \ref{remarksinecosinefunda}, the
space of solutions of (\ref{Schrwithdelta}) has dimension 2, and given
$f,g\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $
solutions of (\ref{Schrwithdelta}), repeating the same procedure of Remark
\ref{remarksinecosinefunda}, $W[f,g]$ is constant in the whole segment
$[0,b]$. The solutions $f,g$ are a fundamental set of solutions iff
$W[f,g]\neq0$.
\end{remark}
\section{The SPPS method and the mapping property}
\subsection{Spectral parameter powers series}
As in the case of the regular Schr\"odinger equation
\cite{blancarte,sppsoriginal}, we obtain a representation for the solutions of
(\ref{Schrwithdelta}) as a power series in the spectral parameter (SPPS
series). Assume that there exists a solution $f\in\mathcal{D}_{2}\left(
\mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ of the equation $\mathbf{L}_{q,\mathfrak{I}_{N}}f=0$ that does not vanish in the whole
segment $[0,b]$.
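For instance, when $q\equiv0$ and $N=1$, the function $f(x)=1+\alpha_{1}(x-x_{1})H(x-x_{1})$ belongs to $\mathcal{D}_{2}\left( \mathbf{L}_{0,\mathfrak{I}_{1}}\right) $ and satisfies $\mathbf{L}_{0,\mathfrak{I}_{1}}f=0$: it is continuous, piecewise linear, and $f^{\prime}(x_{1}+)-f^{\prime}(x_{1}-)=\alpha_{1}=\alpha_{1}f(x_{1})$. It does not vanish on $[0,b]$ provided that $1+\alpha_{1}(x-x_{1})\neq0$ for all $x\in(x_{1},b]$, which holds, in particular, whenever $\alpha_{1}$ is not a real number belonging to $(-\infty,-\frac{1}{b-x_{1}}]$.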
\begin{remark}
\label{remarknonhomeq} Given $g\in L_{2}(0,b)$, a solution $u\in
\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ of the
non-homogeneous Cauchy problem
\begin{equation}
\label{nonhomoCauchy}%
\begin{cases}
\mathbf{L}_{q,\mathfrak{I}_{N}}u(x)= g(x), \quad0<x<b\\
u(0)=u_{0}, \; u^{\prime}(0)=u_{1}%
\end{cases}
\end{equation}
can be obtained by solving the regular equation $\mathbf{L}_{q}u(x)=g(x)$ a.e.
$x\in(0,b)$ as follows. Consider the Polya factorization $\mathbf{L}%
_{q}u=-\frac{1}{f}Df^{2}D\frac{u}{f}$, where $D=\frac{d}{dx}$. A direct
computation shows that $u$ given by
\begin{equation}
\label{solutionnonhomogeneouseq}u(x)= -f(x)\int_{0}^{x}\frac{1}{f^{2}(t)}%
\int_{0}^{t} f(s)g(s)\,ds\,dt+\frac{u_{0}}{f(0)}f(x)+(f(0)u_{1}-f^{\prime}%
(0)u_{0})f(x)\int_{0}^{x}\frac{dt}{f^{2}(t)}%
\end{equation}
satisfies (\ref{nonhomoCauchy}) (actually, $f(x)\int_{0}^{x}\frac{1}{f^{2}%
(t)}dt$ is the second linearly independent solution of $\mathbf{L}_{q}u=0$
obtained from $f$ by Abel's formula). By Remark \ref{remarkidealdomain},
$u\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ and by
Proposition \ref{propregular} and Remark \ref{remarkunicityofsol}, formula
(\ref{solutionnonhomogeneouseq}) provides the unique solution of
(\ref{nonhomoCauchy}). Actually, if we denote $\mathcal{I}u(x):= \int_{0}%
^{x}u(t)dt$ and define $\mathbf{R}_{\mathfrak{I}_{N}}^{f}:= -f\,\mathcal{I}%
\,\frac{1}{f^{2}}\,\mathcal{I}\,f$, then $\mathbf{R}_{\mathfrak{I}_{N}}^{f}\in\mathcal{B}\left(
L_{2}(0,b)\right) $, $\mathbf{R}_{\mathfrak{I}_{N}}^{f}\left( L_{2}%
(0,b)\right) \subset\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}%
}\right) $ and is a right-inverse for $\mathbf{L}_{q,\mathfrak{I}_{N}}$, i.e.,
$\mathbf{L}_{q,\mathfrak{I}_{N}}\mathbf{R}_{\mathfrak{I}_{N}}^{f} g=g$ for all
$g\in L_{2}(0,b)$.
\end{remark}
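As a quick check of the right-inverse property, with $\mathbf{R}_{\mathfrak{I}_{N}}^{f}g=-f\,\mathcal{I}\big(\frac{1}{f^{2}}\,\mathcal{I}(fg)\big)$ the Polya factorization gives
\[
\mathbf{L}_{q}\mathbf{R}_{\mathfrak{I}_{N}}^{f}g=-\frac{1}{f}D\left( f^{2}D\left( -\mathcal{I}\Big(\frac{1}{f^{2}}\,\mathcal{I}(fg)\Big)\right) \right) =-\frac{1}{f}D\big( -\mathcal{I}(fg)\big) =g \quad\mbox{a.e. on } (0,b).
\]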
Following \cite{sppsoriginal} we define the following recursive integrals:
$\widetilde{X}^{(0)}\equiv X^{(0)} \equiv1$, and for $k\in\mathbb{N}$
\begin{align}
\widetilde{X}^{(k)}(x) & := k\int_{0}^{x}\widetilde{X}^{(k-1)}(s)\left(
f^{2}(s)\right) ^{(-1)^{k-1}}ds,\\
X^{(k)}(x) & := k\int_{0}^{x}X^{(k-1)}(s)\left( f^{2}(s)\right) ^{(-1)^{k}%
}ds.
\end{align}
The functions $\{\varphi_{f}^{(k)}(x)\}_{k=0}^{\infty}$ defined by
\begin{equation}
\label{formalpowers}\varphi_{f}^{(k)}(x):=
\begin{cases}
f(x)\widetilde{X}^{(k)}(x),\quad\mbox{if } k \mbox{ even},\\
f(x)X^{(k)}(x),\quad\mbox{if } k \mbox{ odd}.
\end{cases}
\end{equation}
for $k\in\mathbb{N}_{0}$, are called the \textit{formal powers} associated to
$f$. Additionally, we introduce the following auxiliary formal powers
$\{\psi_{f}^{(k)}(x)\}_{k=0}^{\infty}$ given by
\begin{equation}
\label{auxiliaryformalpowers}\psi_{f}^{(k)}(x):=
\begin{cases}
\frac{\widetilde{X}^{(k)}(x)}{f(x)},\quad\mbox{if } k \mbox{ odd},\\
\frac{X^{(k)}(x)}{f(x)},\quad\mbox{if } k \mbox{ even}.
\end{cases}
\end{equation}
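In the simplest situation of the unperturbed equation ($q\equiv0$ and no point interactions), one may take $f\equiv1$; then $\widetilde{X}^{(k)}(x)=X^{(k)}(x)=x^{k}$ and $\varphi_{f}^{(k)}(x)=x^{k}$ for all $k\in\mathbb{N}_{0}$, so that the series (\ref{SPPSseries}) below reduce to the Taylor series of $\cos(\rho x)$ and $\frac{\sin(\rho x)}{\rho}$.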
\begin{remark}
\label{remarkformalpowers} For each $k \in\mathbb{N}_{0}$, $\varphi_{f}%
^{(k)}\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $.
Indeed, direct computations show that the following relations hold for all
$k\in\mathbb{N}_{0}$:
\begin{align}
D\varphi_{f}^{(k)} & = \frac{f^{\prime}}{f}\varphi_{f}^{(k)}+k\psi
_{f}^{(k-1)}\label{formalpowersderivative}\\
D^{2}\varphi_{f}^{(k)} & = \frac{f^{\prime\prime}}{f}\varphi_{f}%
^{(k)}+k(k-1)\varphi_{f}^{(k-2)}\label{Lbasisproperty}%
\end{align}
Since $\varphi_{f}^{(k)}, \psi_{f}^{(k)}\in C[0,b]$, using the procedure from
Remark \ref{remarkidealdomain} and (\ref{formalpowersderivative}) we obtain
$\varphi_{f}^{(k)}\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}%
}\right) $.
\end{remark}
\begin{theorem}
[SPPS method]\label{theoremspps} Suppose that $f\in\mathcal{D}_{2}\left(
\mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ is a solution of (\ref{Schrwithdelta}%
) for $\lambda=0$ that does not vanish in the whole segment $[0,b]$. Then the functions
\begin{equation}
\label{SPPSseries}u_{0}(\rho,x)= \sum_{k=0}^{\infty}\frac{(-1)^{k}\rho
^{2k}\varphi_{f}^{(2k)}(x)}{(2k)!}, \quad u_{1}(\rho,x)= \sum_{k=0}^{\infty
}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{(2k+1)}(x)}{(2k+1)!}%
\end{equation}
belong to $\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $, and
$\{u_{0}(\rho,x), u_{1}(\rho,x)\}$ is a fundamental set of solutions for
(\ref{Schrwithdelta}) satisfying the initial conditions
\begin{align}
u_{0}(\rho,0)=f(0), & u^{\prime}_{0}(\rho,0)=f^{\prime}%
(0),\label{initialspps0}\\
u_{1}(\rho,0)=0, & u^{\prime}_{1}(\rho,0)=\frac{1}{f(0)}%
,\label{initialspps1}%
\end{align}
The series in (\ref{SPPSseries}) converge absolutely and uniformly on
$x\in[0,b]$, the series of the derivatives converge in $L_{2}(0,b)$ and the
series of the second derivatives converge in $L_{2}(x_{j},x_{j+1})$, $j=0,
\cdots, N$. With respect to $\rho$ the series converge absolutely and
uniformly on any compact subset of the complex $\rho$-plane.
\end{theorem}
\begin{proof}
Since $f\in C[0,b]$, the following estimates for the recursive integrals
$\{\widetilde{X}^{(k)}(x)\}_{k=0}^{\infty}$ and $\{X^{(k)}(x)\}_{k=0}^{\infty
}$ are known:
\begin{equation}
\label{auxiliarestimatesspps}|\widetilde{X}^{(n)}(x)|\leqslant M_{1}^{n}
b^{n}, \; |X^{(n)}(x)|\leqslant M_{1}^{n} b^{n}\quad\mbox{for all } x\in[0,b],
\end{equation}
where $M_{1}=\|f^{2}\|_{C[0,b]}\cdot\left\| \frac{1}{f^{2}}\right\| _{C[0,b]}$
(see the proof of Theorem 1 of \cite{sppsoriginal}). Thus, by the Weierstrass
$M$-tests, the series in (\ref{SPPSseries}) converge absolutely and uniformly
on $x\in[0,b]$, and for $\rho$ on any compact subset of the complex $\rho
$-plane. We prove that $u_{0}(\rho,x)\in\mathcal{D}_{2}\left( \mathbf{L}%
_{q,\mathfrak{I}_{N}}\right) $ and is a solution of (\ref{Schrwithdelta}) (the
proof for $u_{1}(\rho,x)$ is analogous). By Remark \ref{remarkformalpowers},
the series of the derivatives of $u_{0}(\rho,x)$ is given by $\frac{f^{\prime
}}{f}\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{(2k)}}{(2k)!}%
+\sum_{k=1}^{\infty}\frac{(-1)^{k}\rho^{2k}\psi_{f}^{(2k-1)}}{(2k-1)!}$. By
(\ref{auxiliarestimatesspps}), the series involving the formal powers
$\varphi_{f}^{(k)}$ and $\psi_{f}^{(k)}$ converge absolutely and uniformly on
$x\in[0,b]$. Hence, $\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}D\varphi
_{f}^{(2k)}(x)}{(2k)!}$ converges in $L_{2}(0,b)$. Due to \cite[Prop.
3]{blancarte}, $u_{0}(\rho,\cdot)\in AC[0,b]$ and $u_{0}^{\prime}%
(\rho,x)=\frac{f^{\prime}(x)}{f(x)}\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho
^{2k}\varphi_{f}^{(2k)}}{(2k)!}+\sum_{k=1}^{\infty}\frac{(-1)^{k}\rho^{2k}%
\psi_{f}^{(2k-1)}}{(2k-1)!}$ in $L_{2}(0,b)$. Since the series involving the
formal powers defines continuous functions, then $u_{0}(\rho,x)$ satisfies the
jump condition (\ref{jumpderivative}). Applying the same reasoning it is shown
that $u_{0}^{\prime\prime}(\rho,x)=\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho
^{2k}D^{2}\varphi_{f}^{(2k)}}{(2k)!}$, the series converges in $L_{2}%
(x_{j},x_{j+1})$ and $u_{0}(\rho,\cdot)|_{(x_{j},x_{j+1})}\in H^{2}%
(x_{j},x_{j+1})$, $j=0, \dots, N$.
Since $\widetilde{X}^{(n)}(0)=0$ for $n\geqslant1$, we have
(\ref{initialspps0}). Finally, by (\ref{Lbasisproperty})
\begin{align*}
\mathbf{L}_{q}u_{0}(\rho,x) & = \sum_{k=0}^{\infty}\frac{(-1)^{k}\rho
^{2k}\mathbf{L}_{q}\varphi_{f}^{(2k)}(x)}{(2k)!}=\sum_{k=1}^{\infty}%
\frac{(-1)^{k+1}\rho^{2k}\varphi_{f}^{(2k-2)}(x)}{(2k-2)!}\\
& = \rho^{2}\sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{(2k)}%
(x)}{(2k)!}=\rho^{2}u_{0}(\rho,x),
\end{align*}
which holds for a.e. $x\in(x_{j},x_{j+1})$, $j=0, \dots, N$.
Using (\ref{initialspps0}) and (\ref{initialspps1}) we obtain $W[u_{0}%
(\rho,x),u_{1}(\rho,x)](0)=1$. Since the Wronskian is constant (Remark
\ref{remarkwronskian}), $\{u_{0}(\rho,x), u_{1}(\rho,x)\}$ is a fundamental
set of solutions.
\end{proof}
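As a purely illustrative aside (not part of the theory), the partial sums of the series (\ref{SPPSseries}) are straightforward to evaluate once samples of the formal powers $\varphi_{f}^{(k)}$ are available. The following Python sketch does this on a grid; the function name, the grid and the truncation order are arbitrary choices, and the check uses the trivial case $f\equiv1$, for which $\varphi_{f}^{(k)}(x)=x^{k}$ and the partial sums reproduce $\cos(\rho x)$ and $\sin(\rho x)/\rho$.
\begin{verbatim}
import math
import numpy as np

def spps_partial_sums(rho, phi, kmax):
    # Truncated series (SPPSseries): sums over k = 0..kmax, where
    # phi[k] holds samples of the formal power varphi_f^{(k)} on a grid.
    u0 = np.zeros_like(phi[0], dtype=complex)
    u1 = np.zeros_like(phi[0], dtype=complex)
    for k in range(kmax + 1):
        u0 += (-1)**k * rho**(2*k) * phi[2*k] / math.factorial(2*k)
        u1 += (-1)**k * rho**(2*k) * phi[2*k + 1] / math.factorial(2*k + 1)
    return u0, u1

# check with f == 1 (varphi^{(k)}(x) = x^k): u0 = cos(rho x), u1 = sin(rho x)/rho
x = np.linspace(0.0, 1.0, 2001)
phi = [x**k for k in range(42)]
u0, u1 = spps_partial_sums(3.0, phi, 20)
print(np.max(np.abs(u0 - np.cos(3.0 * x))),
      np.max(np.abs(u1 - np.sin(3.0 * x) / 3.0)))
\end{verbatim}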
\subsection{Existence and construction of the non-vanishing solution}
The existence of a non-vanishing solution is well known for the case of a
regular Schr\"odinger equation with continuous potential (see \cite[Remark
5]{sppsoriginal} and \cite[Cor. 2.3]{camporesi}). The following proof adapts
the one presented in \cite[Prop. 2.9]{nelsonspps} for the Dirac system.
\begin{proposition}
[Existence of non-vanishing solutions]\label{propnonvanishingsol} Let
$\{u,v\}\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ be a
fundamental set of solutions for (\ref{Schrwithdelta}). Then there exist
constants $c_{1},c_{2}\in\mathbb{C}$ such that the solution $f=c_{1}u+c_{2}v$
does not vanish in the whole segment $[0,b]$.
\end{proposition}
\begin{proof}
Let $\{u,v\}\in\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $
be a fundamental set of solutions for (\ref{Schrwithdelta}). Then $u$ and $v$
cannot have common zeros in $[0,b]$. Indeed, if $u(\xi)=v(\xi)=0$ for some
$\xi\in[0,b]$, then $W[u,v](\xi+)=u(\xi)v^{\prime}(\xi+)-v(\xi)u^{\prime}%
(\xi+)=0$. Since $W[u,v]$ is constant in $[0,b]$, this contradicts that
$\{u,v\}$ is a fundamental system.
This implies that in each interval $[x_{j},x_{j+1}]$, $j=0, \cdots, N$, the
map $F_{j}: [x_{j}, x_{j+1}]\rightarrow\mathbb{CP}^{1}$, $F_{j}(x):=\left[
u|_{[x_{j}, x_{j+1}]}(x) :v|_{[x_{j}, x_{j+1}]}(x)\right] $ (where
$\mathbb{CP}^{1}$ is the complex projective line, i.e., the quotient of
$\mathbb{C}^{2}\setminus\{(0,0)\}$ under the action of $\mathbb{C}^{*}$, and
$[a:b]$ denotes the equivalence class of the pair $(a,b)$) is well defined and
differentiable. In \cite[Prop. 2.2]{camporesi} it was established that a
differentiable function $f: I\rightarrow\mathbb{CP}^{1}$, where $I\subset
\mathbb{R}$ is an interval, is never surjective, using that Sard's theorem
implies that $f(I)$ has measure zero.
Suppose that $(\alpha,\beta)\in\mathbb{C}^{2}\setminus\{(0,0)\}$ is such that
$\alpha u(\xi)-\beta v(\xi)=0$ for some $\xi\in[0,b]$. Hence $%
\begin{vmatrix}
u(\xi) & \beta\\
v(\xi) & \alpha
\end{vmatrix}
=0$, that is, $(u(\xi), v(\xi))$ and $(\alpha,\beta)$ are proportional. Since
$\xi\in[x_{j},x_{j+1}]$ for some $j\in\{0,\cdots, N\}$, hence $[\alpha
:-\beta]\in F_{j}\left( [x_{j}, x_{j+1}]\right) $.
Thus, the set $C:= \left\{ [\alpha:\beta]\in\mathbb{CP}^{1}\, |\,\exists
\xi\in[0,b] \,:\, \alpha u(\xi)+\beta v(\xi)=0\right\} $ is contained in
\newline$\cup_{j=0}^{N}F_{j}\left( [x_{j}, x_{j+1}]\right) $, and then $C$
has measure zero. Hence we can obtain a pair of constants $(c_{1},c_{2})
\in\mathbb{C}^{2}\setminus\{ (0,0)\}$ with $[c_{1}:-c_{2}]\in\mathbb{CP}%
^{1}\setminus C$ and $f=c_{1}u+c_{2}v$ does not vanish in the whole segment
$[0,b]$.
\end{proof}
\begin{remark}
If $q$ is real valued and $\alpha_{1}, \cdots, \alpha_{N}\in\mathbb{R}%
\setminus\{0\}$, taking a real-valued fundamental system of solutions for the
regular equation $\mathbf{L}_{q} y=0$ and using formula
(\ref{generalsolcauchy}), we can obtain a real-valued fundamental set of
solutions $\{u,v\}$ for $\mathbf{L}_{q,\mathfrak{I}_{N}}y=0$. In the proof of
Proposition \ref{propnonvanishingsol} we obtain that $u$ and $v$ have no
common zeros. Hence $f=u+iv$ is a non-vanishing solution.
For the complex case, we can choose at random a pair of constants $(c_{1}%
,c_{2})\in\mathbb{C}^{2}\setminus\{(0,0)\}$ and check whether the linear
combination $c_{1}u+c_{2}v$ has a zero. If it does, we repeat the
process until a non-vanishing solution is found. Since the set $C$ (from the
proof of Proposition \ref{propnonvanishingsol}) has measure zero, suitable
coefficients $c_{1},c_{2}$ are almost surely found within the first few tries.
\end{remark}
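The randomized search described in the remark is trivial to implement. The following Python sketch is an illustration only; the tolerance, the grid and the test functions are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def nonvanishing_combination(u, v, tol=1e-6, max_tries=100):
    # u, v: samples of a fundamental set of solutions on a grid.
    # Since the exceptional set C has measure zero, a few tries normally suffice.
    for _ in range(max_tries):
        c1, c2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)
        w = c1 * u + c2 * v
        if np.min(np.abs(w)) > tol:
            return c1, c2, w
    raise RuntimeError("no non-vanishing combination found")

# toy check with u = cos, v = sin (e.g. c = (1, i) gives exp(ix), which never vanishes)
x = np.linspace(0.0, 10.0, 5001)
c1, c2, w = nonvanishing_combination(np.cos(x), np.sin(x))
print(c1, c2, np.min(np.abs(w)))
\end{verbatim}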
By Proposition \ref{propnonvanishingsol}, there exists a pair of constants
$(c_{1},c_{2})\in\mathbb{C}^{2}\setminus\{(0,0)\}$ such that
\begin{align}
y_{0}(x) & = c_{1}+c_{2}x+\sum_{k=1}^{N}\alpha_{k}(c_{1}+c_{2}x_{k}%
)H(x-x_{k})(x-x_{k})\nonumber\\
& \quad+\sum_{J\in\mathcal{J}_{N}}\alpha_{J}(c_{1}+c_{2}x_{j_{1}%
})H(x-x_{j_{|J|}})\left( \prod_{l=1}^{|J|-1}(x_{j_{l+1}}-x_{j_{1}})\right)
(x-x_{j_{|J|}})\label{nonvanishingsolqzero}%
\end{align}
is a non-vanishing solution of (\ref{Schrwithdelta}) for $\rho=0$ (if
$\alpha_{1}, \dots,\alpha_{N} \in(0,\infty)$, it suffices to take $c_{1}%
=1$, $c_{2}=0$). Below we give a procedure based on the SPPS method
(\cite{blancarte,sppsoriginal}) to obtain the non-vanishing solution $f$ from
$y_{0}$.
\begin{theorem}
Define the recursive integrals $\{Y^{(k)}\}_{k=0}^{\infty}$ and $\{\tilde
{Y}^{(k)}\}_{k=0}^{\infty}$ as follows: $Y^{(0)}\equiv\tilde{Y}^{(0)}\equiv1$,
and for $k\geqslant1$
\begin{align}
Y^{(k)}(x) & =
\begin{cases}
\int_{0}^{x} Y^{(k-1)}(s)q(s)y_{0}^{2}(s)ds, & \mbox{ if } k \mbox{ is even},\\
\int_{0}^{x} \frac{Y^{(k-1)}(s)}{y_{0}^{2}(s)}ds, & \mbox{ if } k
\mbox{ is odd},
\end{cases}
\\
\tilde{Y}^{(k)}(x) & =
\begin{cases}
\int_{0}^{x} \tilde{Y}^{(k-1)}(s)q(s)y_{0}^{2}(s)ds, & \mbox{ if } k
\mbox{ is odd},\\
\int_{0}^{x} \frac{\tilde{Y}^{(k-1)}(s)}{y_{0}^{2}(s)}ds, & \mbox{ if } k
\mbox{ is even}.
\end{cases}
\end{align}
Define
\begin{equation}
\label{sppsforparticularsol}f_{0}(x)= y_{0}(x)\sum_{k=0}^{\infty}\tilde
{Y}^{(2k)}(x), \qquad f_{1}(x) = y_{0}(x)\sum_{k=0}^{\infty}Y^{(2k+1)}(x).
\end{equation}
Then $\{f_{0},f_{1}\}\subset\mathcal{D}_{2}\left( \mathbf{L}_{q,
\mathfrak{I}_{N}}\right) $ is a fundamental set of solutions for $\mathbf{L}%
_{q,\mathfrak{I}_{N}}u=0$ satisfying the initial conditions $f_{0}(0)=c_{1}$,
$f^{\prime}_{0}(0)=c_{2}$, $f_{1}(0)=0$, $f^{\prime}_{1}(0)=1$. Both series
converge uniformly and absolutely on $x\in[0,b]$. The series of the
derivatives converge in $L_{2}(0,b)$, and on each interval $[x_{j},x_{j+1}]$,
$j=0, \dots, N$, the series of the second derivatives converge in $L_{2}%
(x_{j}, x_{j+1})$. Hence there exist constants $C_{1},C_{2}\in\mathbb{C}$ such
that $f=C_{1}f_{0}+C_{2}f_{1}$ is a non-vanishing solution of $\mathbf{L}_{q,
\mathfrak{I}_{N}}u=0$ in $[0,b]$.
\end{theorem}
\begin{proof}
Using the estimates
\[
|\tilde{Y}^{(2n-j)}(x)|\leqslant\frac{M_{1}^{n-j}M_{2}^{n}}{(n-j)!n!},
\quad|Y^{(2n-j)}(x)|\leqslant\frac{M_{1}^{n}M_{2}^{n-j}}{n!(n-j)!}, \quad
x\in[0,b], \; j=0,1,\; n \in\mathbb{N},
\]
where $M_{1}= \left\| \frac{1}{y_{0}^{2}}\right\| _{L_{1}(0,b)}$ and $M_{2}=
\|qy_{0}^{2}\|_{L_{1}(0,b)}$, from \cite[Prop. 5]{blancarte}, the series in
(\ref{sppsforparticularsol}) converge absolutely and uniformly on $[0,b]$. The
proof of the convergence of the derivatives and that $\{f_{0},f_{1}%
\}\in\mathcal{D}_{2}\left( \mathbf{L}_{q, \mathfrak{I}_{N}}\right) $ is a
fundamental set of solutions is analogous to that of Theorem \ref{theoremspps}
(see also \cite[Th. 1]{sppsoriginal} and \cite[Th. 7]{blancarte} for the
proof in the regular case).
\end{proof}
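Numerically, the recursive integrals above can be accumulated by any cumulative quadrature rule. The following Python sketch is illustrative only (the trapezoid rule, the grid and the truncation order are ad hoc choices); for $q\equiv0$ and a single point interaction the series terminate after the first terms, giving $f_{0}=y_{0}$ and $f_{1}=y_{0}\int_{0}^{x}y_{0}^{-2}$, which serves as a quick check.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def cumint(g, x):
    # cumulative integral of g from 0 to x on the grid
    return cumulative_trapezoid(g, x, initial=0.0)

def spps_fundamental(q, y0, x, kmax):
    # partial sums of f0 = y0 * sum tildeY^{(2k)}, f1 = y0 * sum Y^{(2k+1)}
    Y = np.ones_like(x)
    Yt = np.ones_like(x)
    f0 = y0 * Yt
    f1 = np.zeros_like(x)
    for k in range(1, 2 * kmax + 2):
        if k % 2 == 0:
            Y, Yt = cumint(Y * q * y0**2, x), cumint(Yt / y0**2, x)
            f0 = f0 + y0 * Yt
        else:
            Y, Yt = cumint(Y / y0**2, x), cumint(Yt * q * y0**2, x)
            f1 = f1 + y0 * Y
    return f0, f1

# toy demo: q = 0 and one point interaction of strength alpha at x1
x = np.linspace(0.0, 1.0, 4001)
alpha, x1 = 2.0, 0.4
y0 = 1.0 + alpha * np.where(x > x1, x - x1, 0.0)
f0, f1 = spps_fundamental(np.zeros_like(x), y0, x, kmax=10)
print(np.max(np.abs(f0 - y0)),
      np.max(np.abs(f1 - y0 * cumint(1.0 / y0**2, x))))
\end{verbatim}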
\subsection{The mapping property}
Take a non vanishing solution $f\in\mathcal{D}_{2}\left( \mathbf{L}%
_{q,\mathfrak{I}_{N}}\right) $ normalized at zero, i.e., $f(0)=1$, and set
$h=f^{\prime}(0)$. Then the corresponding transmutation operator and kernel
$\mathbf{T}^{h}_{\mathfrak{I}_{N}}$ and $K_{\mathfrak{I}_{N}}^{h}(x,t)$ will
be denoted by $\mathbf{T}^{f}_{\mathfrak{I}_{N}}$ and $K_{\mathfrak{I}_{N}%
}^{f}(x,t)$ and called the \textit{canonical} transmutation operator and
kernel associated with $f$, respectively (the same notation is used for the cosine
and sine transmutations).
\begin{theorem}
\label{theoretransprop} The canonical transmutation operator $\mathbf{T}%
^{f}_{\mathfrak{I}_{N}}$ satisfies the following relations
\begin{equation}
\label{transmproperty}\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[ x^{k}\right] =
\varphi_{f}^{(k)}(x) \qquad\forall k\in\mathbb{N}_{0}.
\end{equation}
The canonical cosine and sine transmutation operators satisfy the relations
\begin{align}
\mathbf{T}_{\mathfrak{I}_{N},f}^{C}\left[ x^{2k}\right] & = \varphi
_{f}^{(2k)}(x) \qquad\forall k\in\mathbb{N}_{0}.\label{transmpropertycos}\\
\mathbf{T}_{\mathfrak{I}_{N}}^{S}\left[ x^{2k+1}\right] & = \varphi
_{f}^{(2k+1)}(x) \qquad\forall k\in\mathbb{N}_{0}.\label{transmpropertysin}%
\end{align}
\end{theorem}
\begin{proof}
Consider the solution $e_{\mathfrak{I}_{N}}^{h}(\rho,x)$ with $h=f^{\prime
}(0)$. By the conditions (\ref{initialspps0}) and (\ref{initialspps1}),
solution $e_{\mathfrak{I}_{N}}^{h}(\rho,x)$ can be written in the form
\begin{align}
e_{\mathfrak{I}_{N}}^{h}(\rho,x) & = u_{0}(\rho,x)+i\rho u_{1}%
(\rho,x)\nonumber\\
& = \sum_{k=0}^{\infty}\frac{(-1)^{k}\rho^{2k}\varphi_{f}^{(2k)}(x)}%
{(2k)!}+\sum_{k=0}^{\infty}\frac{i(-1)^{k}\rho^{2k+1}\varphi_{f}^{(2k+1)}%
(x)}{(2k+1)!}\nonumber\\
& = \sum_{k=0}^{\infty}\frac{(i\rho)^{2k}\varphi_{f}^{(2k)}(x)}{(2k)!}%
+\sum_{k=0}^{\infty}\frac{(i\rho)^{2k+1}\varphi_{f}^{(2k+1)}(x)}%
{(2k+1)!}\nonumber\\
& = \sum_{k=0}^{\infty}\frac{(i\rho)^{k}\varphi_{f}^{(k)}(x)}{k!}%
\label{auxseriesthmap}%
\end{align}
(The rearrangement of the series is due to absolute and uniform convergence,
Theorem \ref{theoremspps}). On the other hand
\[
e_{\mathfrak{I}_{N}}^{h}(\rho,x) = \mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[
e^{i\rho x}\right] = \mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[ \sum
_{k=0}^{\infty}\frac{(i\rho)^{k}x^{k}}{k!} \right]
\]
Note that $\displaystyle \int_{-x}^{x}K_{\mathfrak{I}_{N}}^{f}(x,t)\left(
\sum_{k=0}^{\infty}\frac{(i\rho)^{k}t^{k}}{k!} \right) dt= \sum_{k=0}^{\infty
}\frac{(i\rho)^{k}}{k!}\int_{-x}^{x}K_{\mathfrak{I}_{N}}^{f}(x,t)t^{k}dt$, due
to the uniform convergence of the exponential series in the variable $t
\in[-x,x]$. Thus,
\begin{equation}
\label{auxiliarseriesthmap}e_{\mathfrak{I}_{N}}^{h}(\rho,x) = \sum
_{k=0}^{\infty}\frac{(i\rho)^{k}\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[
x^{k}\right] }{k!}.
\end{equation}
Comparing (\ref{auxiliarseriesthmap}) and (\ref{auxseriesthmap}) as Taylor
series in the complex variable $\rho$ we obtain (\ref{transmproperty}).
Relations (\ref{transmpropertycos}) and (\ref{transmpropertysin}) follow from
(\ref{transmproperty}), (\ref{cosineintegralkernelgral}),
(\ref{sineintegralkernelgral}) and the fact that $G_{\mathfrak{I}_{N}}%
^{f}(x,t)$ and $S_{\mathfrak{I}_{N}}(x,t)$ are even and odd in the variable
$t$, respectively.
\end{proof}
\begin{remark}
\label{remarkformalpowersasymp} The formal powers $\{\varphi_{f}%
^{(k)}(x)\}_{k=0}^{\infty}$ satisfy the asymptotic relation\newline%
$\varphi_{f}^{(k)}(x)=x^{k}(1+o(1))$, $x\rightarrow0^{+}$, $\forall
k\in\mathbb{N}$.
Indeed, by Theorem \ref{theoretransprop} and the Cauchy-Bunyakovsky-Schwarz
inequality we have
\begin{align*}
|\varphi_{f}^{(k)}(x)-x^{k}| & = \left| \int_{-x}^{x}K_{\mathfrak{I}_{N}%
}^{f}(x,t)t^{k}dt\right| \leqslant\left( \int_{-x}^{x}\left| K_{\mathfrak{I}%
_{N}}^{f}(x,t)\right| ^{2}dt\right) ^{\frac{1}{2}}\left( \int_{-x}^{x}%
|t|^{2k}dt\right) ^{\frac{1}{2}}\\
& \leqslant\sqrt{2b}\left\| K_{\mathfrak{I}_{N}}^{f}\right\| _{L_{\infty
}(\Omega)}\sqrt{\frac{2}{2k+1}}x^{k+\frac{1}{2}}%
\end{align*}
(because $K_{\mathfrak{I}_{N}}^{f}\in L_{\infty}(\Omega)$ by Theorem
\ref{thoremtransmoperator}). Hence
\[
\left| \frac{\varphi_{f}^{(k)}(x)}{x^{k}}-1\right| \leqslant\sqrt{2b}\left\|
K_{\mathfrak{I}_{N}}^{f}\right\| _{L_{\infty}(\Omega)}\sqrt{\frac{2}{2k+1}%
}x^{\frac{1}{2}} \rightarrow0, \qquad x\rightarrow0^{+}.
\]
\end{remark}
\begin{remark}
\label{remarktransmoperatorlbais} Denote $\mathcal{P}(\mathbb{R}%
)=\mbox{Span}\{x^{k}\}_{k=0}^{\infty}$. According to Remark
\ref{remarkformalpowers} and Proposition \ref{propregular} we have that
$\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left( \mathcal{P}(\mathbb{R})\right)
=\mbox{Span}\left\{ \varphi_{f}^{(k)}(x)\right\} _{k=0}^{\infty}$, and by
(\ref{Lbasisproperty}) we have the relation%
\begin{equation}
\label{transmrelationdense}\mathbf{L}_{q,\mathfrak{I}_{N}}\mathbf{T}%
_{\mathfrak{I}_{N}}^{f} p = -\mathbf{T}_{\mathfrak{I}_{N}}^{f}D^{2}p
\qquad\forall p \in\mathcal{P}(\mathbb{R}).
\end{equation}
According to \cite{hugo2}, $\mathbf{T}_{\mathfrak{I}_{N}}^{f}$ is a
transmutation operator for the pair $\mathbf{L}_{q,\mathfrak{I}_{N}}$,
$-D^{2}$ in the subspace $\mathcal{P}(\mathbb{R})$, and $\{\varphi_{f}%
^{(k)}(x)\}_{k=0}^{\infty}$ is an $\mathbf{L}_{q,\mathfrak{I}_{N}}$-basis.
Since $\varphi_{f}^{(k)}(0)=D\varphi_{f}^{(k)}(0)=0$ for $k\geqslant2$,
$\{\varphi_{f}^{(k)}(x)\}_{k=0}^{\infty}$ is called a \textbf{standard }
$\mathbf{L}_{q,\mathfrak{J}_{N}}$-basis, and $\mathbf{T}_{\mathfrak{I}_{N}%
}^{f}$ a standard transmutation operator. By Remark \ref{remarknonhomeq} we
can recover $\varphi_{f}^{(k)}$ for $k\geqslant2$ from $\varphi_{f}^{(0)}$ and
$\varphi_{f}^{(1)}$ by the formula
\begin{equation}
\label{recorverformalpowers}\varphi_{f}^{(k)} (x) =-k(k-1)\mathbf{R}%
_{\mathfrak{I}_{N}}^{f}\varphi_{f}^{(k-2)}(x) =k(k-1)f(x)\int_{0}^{x}\frac
{1}{f^{2}(t)}\int_{0}^{t}f(s)\varphi_{f}^{(k-2)}(s)ds\,dt
\end{equation}
(compare this formula with \cite[Formula (8), Remark 9]{hugo2}).
\end{remark}
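For numerical experiments, formula (\ref{recorverformalpowers}) together with $\varphi_{f}^{(0)}=f$ and $\varphi_{f}^{(1)}(x)=f(x)\int_{0}^{x}f^{-2}(t)\,dt$ (under the canonical normalization $f(0)=1$) suggests the following Python sketch of the iterated-integral computation of the formal powers; the grid and the trapezoid rule are, of course, arbitrary numerical choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def cumint(g, x):
    return cumulative_trapezoid(g, x, initial=0.0)

def formal_powers(f, x, kmax):
    # samples of varphi_f^{(0)}, ..., varphi_f^{(kmax)} via (recorverformalpowers)
    phi = [np.asarray(f, dtype=float), f * cumint(1.0 / f**2, x)]
    for k in range(2, kmax + 1):
        inner = cumint(f * phi[k - 2], x)
        phi.append(k * (k - 1) * f * cumint(inner / f**2, x))
    return phi

# sanity check: f == 1 gives varphi^{(k)}(x) = x^k
x = np.linspace(0.0, 1.0, 2001)
phi = formal_powers(np.ones_like(x), x, 6)
print(max(np.max(np.abs(phi[k] - x**k)) for k in range(7)))
\end{verbatim}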
The following result adapts Theorem 10 from \cite{hugo2}, proved for the case
of an $L_{1}$-regular potential.
\begin{theorem}
The operator $\mathbf{T}_{\mathfrak{I}_{N}}^{f}$ is a transmutation operator
for the pair $\mathbf{L}_{q, \mathfrak{I}_{N}}$, $-D^{2}$ in $H^{2}(-b,b)$,
that is, $\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left( H^{2}(-b,b)\right)
\subset\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}}\right) $ and
\begin{equation}
\label{transmpropertygeneral}\mathbf{L}_{q, \mathfrak{I}_{N}} \mathbf{T}%
_{\mathfrak{I}_{N}}^{f}u= -\mathbf{T}_{\mathfrak{I}_{N}}^{f}D^{2}u \qquad\forall u\in
H^{2}(-b,b).
\end{equation}
\end{theorem}
\begin{proof}
We show that
\begin{equation}
\label{auxiliareqtransmpropgen}\mathbf{T}_{\mathfrak{I}_{N}}^{f}u(x) =
u(0)\varphi_{f}^{(0)}(x)+u^{\prime}(0)\varphi_{f}^{(1)}(x)-\mathbf{R}%
_{\mathfrak{I}_{N}}^{f}\mathbf{T}_{\mathfrak{I}_{N}}^{f}u^{\prime\prime}(x)
\qquad\forall u\in H^{2}(-b,b).
\end{equation}
Let us first see that (\ref{auxiliareqtransmpropgen}) is valid for
$p\in\mathcal{P}(\mathbb{R})$. Indeed, set $p(x)=\sum_{k=0}^{M}c_{k} x^{k}$.
By the linearity of $\mathbf{T}_{\mathfrak{I}_{N}}^{f}$, Theorem
\ref{theoretransprop} and (\ref{recorverformalpowers}) we have
\begin{align*}
\mathbf{T}_{\mathfrak{I}_{N}}^{f}p(x) & = c_{0}\varphi_{f}^{(0)}%
+c_{1}\varphi_{f}^{(1)}(x)+\sum_{k=2}^{M}c_{k}\varphi_{f}^{(k)}(x)\\
& = p(0)\varphi_{f}^{(0)}+p^{\prime}(0)\varphi_{f}^{(1)}(x)-\sum_{k=2}%
^{M}c_{k}k(k-1)\mathbf{R}_{\mathfrak{I}_{N}}^{f}\varphi_{f}^{(k-2)}(x)\\
& = p(0)\varphi_{f}^{(0)}+p^{\prime}(0)\varphi_{f}^{(1)}(x)-\sum_{k=2}%
^{M}c_{k}k(k-1)\mathbf{R}_{\mathfrak{I}_{N}}^{f}\mathbf{T}_{\mathfrak{I}_{N}%
}^{f} \left[ x^{k-2}\right] \\
& = p(0)\varphi_{f}^{(0)}+p^{\prime}(0)\varphi_{f}^{(1)}(x)-\mathbf{R}%
_{\mathfrak{I}_{N}}^{f}\mathbf{T}_{\mathfrak{I}_{N}}^{f}p^{\prime\prime}(x)
\end{align*}
This establishes (\ref{auxiliareqtransmpropgen}) for $p\in\mathcal{P}%
(\mathbb{R})$. Take $u\in H^{2}(-b,b)$ arbitrary. There exists a sequence
$\{p_{n}\}\subset\mathcal{P}(\mathbb{R})$ such that $p_{n}^{(j)}%
\overset{[-b,b]}{\rightrightarrows} u^{(j)}$, $j=0,1$, and $p^{\prime\prime
}_{n}\rightarrow u^{\prime\prime}$ in $L_{2}(-b,b)$, when $n\rightarrow\infty$ (see
\cite[Prop. 4]{hugo2}). Since $\mathbf{R}_{\mathfrak{I}_{N}}^{f}%
\mathbf{T}_{\mathfrak{I}_{N}}^{f}\in\mathcal{B}\left( L_{2}(-b,b),
L_{2}(0,b)\right) $ we have
\begin{align*}
\mathbf{T}_{\mathfrak{I}_{N}}^{f}u(x) & = \lim_{n\rightarrow\infty}
\mathbf{T}_{\mathfrak{I}_{N}}^{f}p_{n}(x) = \lim_{n\rightarrow\infty}\left[
p_{n}(0)\varphi_{f}^{(0)}+p^{\prime}_{n}(0)\varphi_{f}^{(1)}(x)-\mathbf{R}%
_{\mathfrak{I}_{N}}^{f}\mathbf{T}_{\mathfrak{I}_{N}}^{f}p^{\prime\prime}%
_{n}(x) \right] \\
& = u(0)\varphi_{f}^{(0)}(x)+u^{\prime}(0)\varphi_{f}^{(1)}(x)-\mathbf{R}%
_{\mathfrak{I}_{N}}^{f}\mathbf{T}_{\mathfrak{I}_{N}}^{f}u^{\prime\prime}(x)
\end{align*}
and we obtain (\ref{auxiliareqtransmpropgen}). Hence, by Remark
\ref{remarknonhomeq}, $\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left( H^{2}%
(-b,b)\right) \subset\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}_{N}%
}\right) $, and since $\mathbf{L}_{q,\mathfrak{I}_{N}}\varphi_{f}^{(k)}=0$ for
$k=0,1$, applying $\mathbf{L}_{q,\mathfrak{I}_{N}}$ in both sides of
(\ref{auxiliareqtransmpropgen}) we have (\ref{transmpropertygeneral}).
\end{proof}
\section{Fourier-Legendre and Neumann series of Bessel functions expansions}
\subsection{Fourier-Legendre series expansion of the transmutation kernel}
Fix $x\in(0,b]$. Since Theorem \ref{thoremtransmoperator} establishes that
$K_{\mathfrak{I}_{N}}^{h}(x,\cdot)\in L_{2}(-x,x)$, the kernel $K_{\mathfrak{I}_{N}%
}^{h}(x,t)$ admits a Fourier series in terms of an orthogonal basis of
$L_{2}(-x,x)$. Following \cite{neumann}, we choose the orthogonal basis of
$L_{2}(-1,1)$ given by the Legendre polynomials $\{P_{n}(z)\}_{n=0}^{\infty}$.
Thus,
\begin{equation}
\label{FourierLegendreSeries1}K_{\mathfrak{I}_{N}}^{h}(x,t)= \sum
_{n=0}^{\infty}\frac{a_{n}(x)}{x}P_{n}\left( \frac{t}{x}\right)
\end{equation}
where
\begin{equation}
\label{FourierLegendreCoeff1}a_{n}(x)=\left( n+\frac{1}{2}\right) \int
_{-x}^{x}K_{\mathfrak{I}_{N}}^{h}(x,t)P_{n}\left( \frac{t}{x}\right)
dt\qquad\forall n\in\mathbb{N}_{0}.
\end{equation}
The series (\ref{FourierLegendreSeries1}) converges with respect to $t$ in the
norm of $L_{2}(-x,x)$. Formula (\ref{FourierLegendreCoeff1}) is obtained by
multiplying (\ref{FourierLegendreSeries1}) by $P_{n}\left( \frac{t}{x}\right)
$, using the general Parseval identity \cite[p. 16]{akhiezer} and taking
into account that $\|P_{n}\|_{L_{2}(-1,1)}^{2}=\frac{2}{2n+1}$, $n\in
\mathbb{N}_{0}$.
\begin{example}
Consider the kernel $K_{\mathfrak{I}_{1}}^{0}(x,t)=\frac{\alpha_{1}}%
{2}H(x-x_{1})\chi_{[2x_{1}-x,x]}(t)$ from Example \ref{beginexample1}. In this
case, the Fourier-Legendre coefficients have the form
\[
a_{n}(x) = \frac{\alpha_{1}}{2}\left( n+\frac{1}{2}\right) H(x-x_{1}%
)\int_{2x_{1}-x}^{x}P_{n}\left(\frac{t}{x}\right)dt=\frac{\alpha_{1}}{2}\left( n+\frac{1}%
{2}\right) xH(x-x_{1})\int_{2\frac{x_{1}}{x}-1}^{1}P_{n}(t)dt.
\]
From this we obtain $a_{0}(x)=\frac{\alpha_{1}}{2}H(x-x_{1})(x-x_{1})$. Using
the formula $P_{n}(t)=\frac{1}{2n+1}\frac{d}{dt}\left( P_{n+1}(t)-P_{n-1}%
(t)\right) $ for $n\in\mathbb{N}$, and that $P_{n}(1)=1$ for all
$n\in\mathbb{N}_{0}$ (so that the boundary contributions at the upper limit cancel), we have
\[
a_{n}(x)= \frac{\alpha_{1}}{4}xH(x-x_{1})\left[ P_{n-1}\left( \frac{2x_{1}}%
{x}-1\right) -P_{n+1}\left( \frac{2x_{1}}{x}-1\right) \right]
\]
\end{example}
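The closed form obtained in the example is easy to check numerically. The following Python sketch is an illustration only; the values of $\alpha_{1}$, $x_{1}$ and the quadrature grid are arbitrary, and the comparison is with a direct quadrature of (\ref{FourierLegendreCoeff1}).
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import eval_legendre

alpha1, x1 = 1.5, 0.3

def kernel(x, t):
    # K(x,t) = (alpha1/2) H(x - x1) chi_{[2 x1 - x, x]}(t)
    return np.where((x > x1) & (t >= 2.0 * x1 - x) & (t <= x), alpha1 / 2.0, 0.0)

def a_quad(n, x, m=20001):
    t = np.linspace(-x, x, m)
    return (n + 0.5) * trapezoid(kernel(x, t) * eval_legendre(n, t / x), t)

def a_closed(n, x):
    if x <= x1:
        return 0.0
    if n == 0:
        return alpha1 / 2.0 * (x - x1)
    z = 2.0 * x1 / x - 1.0
    return alpha1 / 4.0 * x * (eval_legendre(n - 1, z) - eval_legendre(n + 1, z))

x = 0.8
# differences are small, up to the quadrature error at the jump of the kernel
print([abs(a_quad(n, x) - a_closed(n, x)) for n in range(6)])
\end{verbatim}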
\begin{remark}
From (\ref{FourierLegendreCoeff1}) we obtain that the first coefficient
$a_{0}(x)$ is given by
\begin{align*}
a_{0}(x) & = \frac{1}{2}\int_{-x}^{x}K_{\mathfrak{I}_{N}}^{h}(x,t)P_{0}%
\left( \frac{t}{x}\right) dt =\frac{1}{2}\int_{-x}^{x}K_{\mathfrak{I}_{N}}%
^{h}(x,t)dt\\
& = \frac{1}{2}\mathbf{T}_{\mathfrak{I}_{N}}^{h}[1]-\frac{1}{2} = \frac{1}%
{2}(e_{\mathfrak{I}_{N}}^{h}(0,x)-1).
\end{align*}
Thus, we obtain the relations
\begin{equation}
\label{relationfirstcoeff1}a_{0}(x)= \frac{1}{2}(e_{\mathfrak{I}_{N}}%
^{h}(0,x)-1), \qquad e_{\mathfrak{I}_{N}}^{h}(0,x)= 2a_{0}(x)+1.
\end{equation}
\end{remark}
For the kernels $G_{\mathfrak{I}_{N}}^{h}(x,t)$ and $S_{\mathfrak{I}_{N}%
}(x,t)$ we obtain the series representations in terms of the even and odd
Legendre polynomials, respectively,%
\begin{align}
G_{\mathfrak{I}_{N}}^{h}(x,t) & = \sum_{n=0}^{\infty}\frac{g_{n}(x)}%
{x}P_{2n}\left( \frac{t}{x}\right) ,\label{Fourierexpansioncosine}\\
S_{\mathfrak{I}_{N}}(x,t) & = \sum_{n=0}^{\infty}\frac{s_{n}(x)}{x}%
P_{2n+1}\left( \frac{t}{x}\right) ,\label{Fourierexpansionsine}%
\end{align}
where the coefficients are given by
\begin{align}
g_{n}(x) & = 2a_{2n}(x)= (4n+1)\int_{0}^{x}G_{\mathfrak{I}_{N}}%
^{h}(x,t)P_{2n}\left( \frac{t}{x}\right)
dt,\label{Fourierexpansioncosinecoeff}\\
s_{n}(x) & = 2a_{2n+1}(x)= (4n+3)\int_{0}^{x}S_{\mathfrak{I}_{N}}(x,t)P_{2n+1}%
\left( \frac{t}{x}\right) dt.\label{Fourierexpansionsinecoeff}%
\end{align}
The proof of these facts is analogous to that in the case of Eq.
(\ref{schrodingerregular}), see \cite{neumann} or \cite[Ch. 9]%
{directandinverse}.
\begin{remark}
\label{remarkfirstcoeffcosine} Since $g_{0}(x)=2a_{0}(x)$, we have
$g_{0}(x)=e_{\mathfrak{I}_{N}}^{h}(0,x)-1$. Since $e_{\mathfrak{I}_{N}}%
^{h}(0,x)$ is the solution of (\ref{Schrwithdelta}) with $\rho=0$ satisfying
$e_{\mathfrak{I}_{N}}^{h}(0,0)=1$, $(e_{\mathfrak{I}_{N}}^{h})^{\prime
}(0,0)=h$, by Remark \ref{remarkunicityofsol}, $e_{\mathfrak{I}_{N}}%
^{h}(0,x)=c_{\mathfrak{I}_{N}}^{h}(0,x)$ and
\begin{equation}
\label{relationfirstcoeff}g_{0}(x)= c_{\mathfrak{I}_{N}}^{h}(0,x)-1.
\end{equation}
On the other hand, for the coefficient $s_{0}(x)$ we have the relation
\[
s_{0}(x)= 3\int_{0}^{x}S_{\mathfrak{I}_{N}}(x,t)P_{1}\left( \frac{t}{x}\right)
dt= \frac{3}{x}\int_{0}^{x}S_{\mathfrak{I}_{N}}(x,t)tdt.
\]
Since $\frac{\sin(\rho x)}{\rho}\big{|}_{\rho=0}=x$, from (\ref{transmsinegral})
we have
\begin{equation}
\label{relationcoeffs0}s_{0}(x)= 3\left( \frac{s_{\mathfrak{I}_{N}}(0,x)}%
{x}-1\right) .
\end{equation}
\end{remark}
For every $n\in\mathbb{N}_{0}$ we write the Legendre polynomial $P_{n}(z)$ in
the form $P_{n}(z)=\sum_{k=0}^{n}l_{k,n}z^{k}$. Note that if $n$ is even,
$l_{k,n}=0$ for odd $k$, and $P_{2n}(z)=\sum_{k=0}^{n}\tilde{l}_{k,n}z^{2k}$
with $\tilde{l}_{k,n}=l_{2k,2n}$. Similarly $P_{2n+1}(z)=\sum_{k=0}^{n}\hat
{l}_{k,n}z^{2k+1}$ with $\hat{l}_{k,n}=l_{2k+1,2n+1}$. With this notation we
write an explicit formula for the coefficients (\ref{FourierLegendreCoeff1})
of the canonical transmutation kernel $K_{\mathfrak{I}_{N}}^{f}(x,t)$.
\begin{proposition}
\label{propcoeffwithformalpowers} The coefficients $\{a_{n}(x)\}_{n=0}%
^{\infty}$ of the Fourier-Legendre expansion (\ref{FourierLegendreSeries1}) of
the canonical transmutation kernel $K_{\mathfrak{I}_{N}}^{f}(x,t)$ are given
by
\begin{equation}
\label{CoeffFourier1formalpowers}a_{n}(x)= \left( n+\frac{1}{2}\right) \left(
\sum_{k=0}^{n}l_{k,n}\frac{\varphi_{f}^{(k)}(x)}{x^{k}}-1\right) .
\end{equation}
The coefficients of the canonical cosine and sine kernels satisfy the
following relations for all $n\in\mathbb{N}_{0}$
\begin{align}
g_{n}(x) & = (4n+1)\left( \sum_{k=0}^{n}\tilde{l}_{k,n}\frac{\varphi
_{f}^{(2k)}(x)}{x^{2k}}-1\right) ,\label{CoeffFourierCosineformalpowers}\\
s_{n}(x) & = (4n+3)\left( \sum_{k=0}^{n}\hat{l}_{k,n}\frac{\varphi
_{f}^{(2k+1)}(x)}{x^{2k+1}}-1\right) ,\label{CoeffFourierSineformalpowers}%
\end{align}
\end{proposition}
\begin{proof}
From (\ref{FourierLegendreCoeff1}) we have
\begin{align*}
a_{n}(x) & = \left( n+\frac{1}{2}\right) \int_{-x}^{x}K_{\mathfrak{I}_{N}%
}^{f}(x,t)\left( \sum_{k=0}^{n}l_{k,n}\left( \frac{t}{x}\right) ^{k}\right)
dt\\
& = \left( n+\frac{1}{2}\right) \sum_{k=0}^{n}\frac{l_{k,n}}{x^{k}}\int
_{-x}^{x}K_{\mathfrak{I}_{N}}^{f}(x,t)t^{k}dt\\
& = \left( n+\frac{1}{2}\right) \sum_{k=0}^{n}\frac{l_{k,n}}{x^{k}}\left(
\mathbf{T}_{\mathfrak{I}_{N}}^{f}\left[ x^{k}\right] -x^{k} \right) .
\end{align*}
Hence (\ref{CoeffFourier1formalpowers}) follows from Theorem
\ref{theoretransprop} and the fact that $\sum_{k=0}^{n}l_{k,n}=P_{n}(1)=1$. Since $g_{n}(x)=2a_{2n}(x)$,
$s_{n}(x)=2a_{2n+1}(x)$, $l_{2k+1,2n}=0$, $l_{2k,2n+1}=0$, $l_{2k,2n}%
=\tilde{l}_{k,n}$ and $l_{2k+1,2n+1}=\hat{l}_{k,n}$, we obtain
(\ref{CoeffFourierCosineformalpowers}) and (\ref{CoeffFourierSineformalpowers}).
\end{proof}
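Once the formal powers are available, formula (\ref{CoeffFourier1formalpowers}) can be evaluated directly. The following Python sketch (illustrative only) obtains the coefficients $l_{k,n}$ from NumPy's Legendre-to-power-basis conversion; in the free case $f\equiv1$, where $\varphi_{f}^{(k)}(x)=x^{k}$ and the kernel vanishes, all $a_{n}$ are zero, which serves as a quick check.
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as L

def legendre_power_coeffs(n):
    # l_{k,n}, k = 0..n, with P_n(z) = sum_k l_{k,n} z^k
    return L.leg2poly([0.0] * n + [1.0])

def coeff_a(n, phi, x):
    # a_n(x) = (n + 1/2) (sum_k l_{k,n} phi[k](x)/x^k - 1) on a grid with x > 0
    l = legendre_power_coeffs(n)
    s = sum(l[k] * phi[k] / x**k for k in range(n + 1))
    return (n + 0.5) * (s - 1.0)

# quick check: f == 1, phi[k] = x^k, hence every a_n vanishes
x = np.linspace(0.01, 1.0, 1000)
phi = [x**k for k in range(8)]
print(max(np.max(np.abs(coeff_a(n, phi, x))) for n in range(6)))
\end{verbatim}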
\begin{remark}
\label{remarkcoefficientsaregood} By Remark \ref{remarkformalpowersasymp},
formula (\ref{CoeffFourier1formalpowers}) is well defined at $x=0$. Note that
$x^{n}a_{n}(x)$ belongs to $\mathcal{D}_{2}\left( \mathbf{L}_{q,\mathfrak{I}%
_{N}}\right) $ for all $n\in\mathbb{N}_{0}$.
\end{remark}
\subsection{Representation of the solutions as Neumann series of Bessel
functions}
Similarly to the case of the regular Eq. (\ref{regularSch}) \cite{neumann}, we
obtain a representation for the solutions in terms of Neumann series of Bessel
functions (NSBF). For $M\in\mathbb{N}$ we define
\[
K_{\mathfrak{I}_{N},M}^{h}(x,t):= \sum_{n=0}^{M}\frac{a_{n}(x)}{x}P_{n}\left(
\frac{t}{x}\right) ,
\]
that is, the $M$-partial sum of (\ref{FourierLegendreSeries1}).
\begin{theorem}
\label{ThNSBF1} The solutions $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ and
$s_{\mathfrak{I}_{N}}(\rho,x)$ admit the following NSBF representations
\begin{align}
c_{\mathfrak{I}_{N}}^{h}(\rho,x)= \cos(\rho x)+\sum_{n=0}^{\infty}%
(-1)^{n}g_{n}(x)j_{2n}(\rho x),\label{NSBFcosine}\\
s_{\mathfrak{I}_{N}}(\rho,x)= \frac{\sin(\rho x)}{\rho}+\frac{1}{\rho}%
\sum_{n=0}^{\infty}(-1)^{n}s_{n}(x)j_{2n+1}(\rho x),\label{NSBFsine}%
\end{align}
where $j_{\nu}$ stands for the spherical Bessel function $j_{\nu}%
(z)=\sqrt{\frac{\pi}{2z}}J_{\nu+\frac{1}{2}}(z)$ (and $J_{\nu}$ stands for the
Bessel function of order $\nu$). The series converge pointwise with respect to
$x$ in $(0,b]$ and uniformly with respect to $\rho$ on any compact subset of
the complex $\rho$-plane. Moreover, for $M\in\mathbb{N}$ the functions
\begin{align}
c_{\mathfrak{I}_{N},M}^{h}(\rho,x)= \cos(\rho x)+\sum_{n=0}^{M}(-1)^{n}%
g_{n}(x)j_{2n}(\rho x),\label{NSBFcosineaprox}\\
s_{\mathfrak{I}_{N},M}(\rho,x)= \frac{\sin(\rho x)}{\rho}+\frac{1}{\rho}%
\sum_{n=0}^{M}(-1)^{n}s_{n}(x)j_{2n+1}(\rho x),\label{NSBFsineaprox}%
\end{align}
obey the estimates
\begin{align}
|c_{\mathfrak{I}_{N}}^{h}(\rho,x)-c_{\mathfrak{I}_{N},M}^{h}(\rho,x)| &
\leqslant 2\epsilon_{2M}(x)\sqrt{\frac{\sinh(2bC)}{C}}%
,\label{estimatessinecosineNSBF}\\
|\rho s_{\mathfrak{I}_{N}}(\rho,x)-\rho s_{\mathfrak{I}_{N},M}(\rho,x)| &
\leqslant 2\epsilon_{2M+1}(x)\sqrt{\frac{\sinh(2bC)}{C}}%
,\label{estimatessinesineNSBF}%
\end{align}
for any $\rho\in\mathbb{C}$ belonging to the strip $|\operatorname{Im}%
\rho|\leqslant C$, $C>0$, and where\newline$\epsilon_{M}(x)=\|K_{\mathfrak{I}%
_{N}}^{h}(x,\cdot)-K_{\mathfrak{I}_{N},M}^{h}(x,\cdot)\|_{L_{2}(-x,x)}$.
\end{theorem}
\begin{proof}
We show the results for the solution $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ (the
proof for $s_{\mathfrak{I}_{N}}(\rho,x)$ is similar). Substitution of the
Fourier-Legendre series (\ref{Fourierexpansioncosine}) in
(\ref{transmcosinegral}) leads us to
\begin{align*}
c_{\mathfrak{I}_{N}}^{h}(\rho,x) & = \cos(\rho x)+\int_{0}^{x}\left(
\sum_{n=0}^{\infty}\frac{g_{n}(x)}{x}P_{2n}\left( \frac{t}{x}\right) \right)
\cos(\rho t)dt\\
& = \cos(\rho x)+\sum_{n=0}^{\infty}\frac{g_{n}(x)}{x}\int_{0}^{x}P_{2n}\left(
\frac{t}{x}\right) \cos(\rho t)dt
\end{align*}
(the exchange of the integral with the summation is due to the fact that the
integral is nothing but the inner product of the series with the function
$\overline{\cos(\rho t)}$ and the series converges in $L_{2}(0,x)$). Using
formula 2.17.7 in \cite[pp. 433]{prudnikov}
\[
\int_{0}^{a}\left\{
\begin{matrix}
P_{2n+1}\left( \frac{y}{a}\right) \cdot\sin(by)\\
P_{2n}\left( \frac{y}{a}\right) \cdot\cos(by)\\
\end{matrix}
\right\} dy= (-1)^{n}\sqrt{\frac{\pi a}{2b}}J_{2n+\delta+\frac{1}{2}}(ab),
\quad\delta=\left\{
\begin{matrix}
1\\
0
\end{matrix}
\right\} , \; a>0,
\]
we obtain the representation (\ref{NSBFcosine}). Take $C>0$ and $\rho
\in\mathbb{C}$ with $|\operatorname{Im}\rho|\leqslant C$. For $M\in\mathbb{N}$
define $G_{\mathfrak{I}_{N},M}^{h}(x,t):= K_{\mathfrak{I}_{N},2M}%
^{h}(x,t)-K_{\mathfrak{I}_{N},2M}^{h}(x,-t)= \sum_{n=0}^{M}\frac{g_{n}(x)}%
{x}P_{2n}\left( \frac{t}{x}\right) $, the $M$-th partial sum of
(\ref{Fourierexpansioncosine}). Then
\[
c_{\mathfrak{I}_{N},M}^{h}(\rho,x) = \cos(\rho x)+\int_{0}^{x}G_{\mathfrak{I}%
_{N},M}^{h}(x,t)\cos(\rho t)dt.
\]
Using the Cauchy-Bunyakovsky-Schwarz inequality we obtain
\begin{align*}
|c_{\mathfrak{I}_{N}}^{h}(\rho,x)-c_{\mathfrak{I}_{N},M}^{h}(\rho,x)| & =
\left| \int_{0}^{x}\left( G_{\mathfrak{I}_{N}}^{h}(x,t)-G_{\mathfrak{I}_{N}%
,M}^{h}(x,t)\right) \cos(\rho t)dt\right| \\
& = \left| \left\langle \overline{G_{\mathfrak{I}_{N}}^{h}%
(x,t)-G_{\mathfrak{I}_{N},M}^{h}(x,t)}, \cos(\rho t) \right\rangle
_{L_{2}(0,x)} \right| \\
& \leqslant\|G_{\mathfrak{I}_{N}}^{h}(x,\cdot)-G_{\mathfrak{I}_{N},M}%
^{h}(x,\cdot)\|_{L_{2}(0,x)}\|\cos(\rho t)\|_{L_{2}(0,x)}.
\end{align*}
Since $\|K_{\mathfrak{I}_{N}}^{h}(x,\cdot)-K_{\mathfrak{I}_{N},2M}^{h}%
(x,\cdot)\|_{L_{2}(-x,x)}=\frac{1}{2}\|G_{\mathfrak{I}_{N}}^{h}(x,\cdot
)-G_{\mathfrak{I}_{N},M}^{h}(x,\cdot)\|_{L_{2}(0,x)}$,
\begin{align*}
\int_{0}^{x}|\cos(\rho t)|^{2}dt & \leqslant\frac{1}{4}\int_{0}^{x}\left(
|e^{i\rho t}|+|e^{-i\rho t}|\right) ^{2}dt \leqslant\frac{1}{2}\int_{0}^{x}
\left( e^{-2t\operatorname{Im}\rho}+ e^{2t\operatorname{Im}\rho}\right) dt\\
& \leqslant \int_{-x}^{x}e^{-2\operatorname{Im}\rho\, t}dt = \frac{\sinh
(2x\operatorname{Im}\rho)}{\operatorname{Im}\rho}%
\end{align*}
and the function $\frac{\sinh(\xi x)}{\xi}$ is monotonically increasing in
both variables when $\xi,x\geqslant0$, we obtain
(\ref{estimatessinecosineNSBF}).
\end{proof}
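For completeness we note that the truncated representations (\ref{NSBFcosineaprox}) and (\ref{NSBFsineaprox}) are immediate to evaluate once the coefficients $g_{n}$, $s_{n}$ are known. The following Python sketch (illustrative only, written for real $\rho$) uses SciPy's spherical Bessel functions; with all coefficients set to zero it reduces to the free solutions $\cos(\rho x)$ and $\sin(\rho x)/\rho$.
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

def nsbf_cos(rho, x, g):
    # partial sum: cos(rho x) + sum_n (-1)^n g_n(x) j_{2n}(rho x)
    out = np.cos(rho * x)
    for n, gn in enumerate(g):
        out = out + (-1)**n * gn * spherical_jn(2 * n, rho * x)
    return out

def nsbf_sin(rho, x, s):
    # partial sum: sin(rho x)/rho + (1/rho) sum_n (-1)^n s_n(x) j_{2n+1}(rho x)
    out = np.sin(rho * x) / rho
    for n, sn in enumerate(s):
        out = out + (-1)**n * sn * spherical_jn(2 * n + 1, rho * x) / rho
    return out

# free case: all coefficients equal to zero
x = np.linspace(0.0, 1.0, 101)
print(np.max(np.abs(nsbf_cos(2.0, x, [np.zeros_like(x)]) - np.cos(2.0 * x))))
\end{verbatim}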
Given $H\in\mathbb{C}$, we look for a pair of solutions $\psi_{\mathfrak{I}%
_{N}}^{H}(\rho,x)$ and $\vartheta_{\mathfrak{I}_{N}}(\rho,x)$ of
(\ref{Schrwithdelta}) satisfying the conditions
\begin{align}
\psi_{\mathfrak{I}_{N}}^{H}(\rho,b)=1, & (\psi_{\mathfrak{I}_{N}}%
^{H})^{\prime}(\rho,b)=-H,\label{conditionspsi}\\
\vartheta_{\mathfrak{I}_{N}}(\rho,b)=0, & \vartheta_{\mathfrak{I}_{N}%
}^{\prime}(\rho,b)=1.\label{conditionstheta}%
\end{align}
\begin{theorem}
The solutions $\psi_{\mathfrak{I}_{N}}^{H}(\rho,x)$ and $\vartheta
_{\mathfrak{I}_{N}}(\rho,x)$ admit the integral representations
\begin{align}
\psi_{\mathfrak{I}_{N}}^{H}(\rho,x) & = \cos(\rho(b-x))+\int_{x}%
^{b}\widetilde{G}_{\mathfrak{I}_{N}}^{H}(x,t)\cos(\rho
(b-t))dt,\label{solpsiintrepresentation}\\
\vartheta_{\mathfrak{I}_{N}}(\rho,x) & = \frac{\sin(\rho(b-x))}{\rho}%
+\int_{x}^{b}\widetilde{S}_{\mathfrak{I}_{N}}(x,t)\frac{\sin(\rho
(b-t))}{\rho}dt,\label{solthetaintrepresentation}%
\end{align}
where the kernels $\widetilde{G}_{\mathfrak{I}_{N}}^{H}(x,t)$ and
$\widetilde{S}_{\mathfrak{I}_{N}}(x,t)$ are defined in $\Omega$ and satisfy
$\widetilde{G}_{\mathfrak{I}_{N}}^{H}(x,\cdot), \widetilde{S}_{\mathfrak{I}%
_{N}}(x,\cdot) \in L_{2}(0,x)$ for all $x\in(0,b]$. In consequence, the
solutions $\psi_{\mathfrak{I}_{N}}^{H}(\rho,x)$ and $\vartheta_{\mathfrak{I}%
_{N}}(\rho,x)$ can be written as NSBF
\begin{equation}
\label{NSBFforpsi}\psi_{\mathfrak{I}_{N}}^{H}(\rho,x)= \cos(\rho
(b-x))+\sum_{n=0}^{\infty}(-1)^{n}\tau_{n}(x)j_{2n}(\rho(b-x)),
\end{equation}
\begin{equation}
\label{NSBFfortheta}\vartheta_{\mathfrak{I}_{N}}(\rho,x)= \frac{\sin
(\rho(b-x))}{\rho}+\frac{1}{\rho}\sum_{n=0}^{\infty}(-1)^{n}\zeta_{n}(x)j_{2n+1}(\rho(b-x)),
\end{equation}
with some coefficients $\{\tau_{n}(x)\}_{n=0}^{\infty}$ and $\{\zeta
_{n}(x)\}_{n=0}^{\infty}$.
\end{theorem}
\begin{proof}
We prove the results for $\psi_{\mathfrak{I}_{N}}^{H}(\rho,x)$ (the proof for
$\vartheta_{\mathfrak{I}_{N}}(\rho,x)$ is similar). Set $y(\rho,x)=
\psi_{\mathfrak{I}_{N}}^{H}(\rho,b-x)$. Note that $y(\rho, 0)=1$, $y^{\prime
}(\rho,0)= H$ and for $\phi\in C_{0}^{\infty}(0,b)$ we have%
\begin{align*}
(y^{\prime\prime}(x)+\rho^{2}y(x), \phi(x))_{C_{0}^{\infty}(0,b)} & = (\psi
_{\mathfrak{I}_{N}}^{H}(\rho,x),\phi^{\prime\prime}(b-x)+\rho^{2}\phi(b-x) )_{C_{0}%
^{\infty}(0,b)}\\
& = (q(x)\psi_{\mathfrak{I}_{N}}^{H}(\rho,x), \phi(b-x))_{C_{0}^{\infty}%
(0,b)}+\sum_{k=0}^{N}\alpha_{k}\psi_{\mathfrak{I}_{N}}^{H}(\rho,x_{k}%
)\phi(b-x_{k})\\
& = (q(b-x)y(x),\phi(x))_{C_{0}^{\infty}(0,b)}+\sum_{k=0}^{N}\alpha
_{k}y(b-x_{k})\phi(b-x_{k}),
\end{align*}
that is, $\psi_{\mathfrak{I}_{N}}^{H}(\rho,x)$ is a solution of
(\ref{Schrwithdelta}) iff $y(x)=\psi_{\mathfrak{I}_{N}}^{H}(\rho,b-x)$ is a
solution of
\begin{equation}
\label{reversedequation}-y^{\prime\prime}(x)+\left( q(b-x)+\sum_{k=0}%
^{N}\alpha_{k}\delta(x-(b-x_{k}))\right) y(x)=\rho^{2}y(x).
\end{equation}
Since $0<b-x_{N}<\cdots<b-x_{0}<b$, equation (\ref{reversedequation}) is of the
type (\ref{Schrwithdelta}) with the point interactions $\mathfrak{I}_{N}%
^{*}=\{(b-x_{N-j},\alpha_{N-j})\}_{j=0}^{N}$ and $\psi_{\mathfrak{I}_{N}}%
^{H}(\rho,b-x)$ is the corresponding solution $c_{\mathfrak{I}_{N}^{*}}%
^{H}(\rho,x)$ for (\ref{reversedequation}). Hence
\begin{equation}
\label{integralrepresentationpsi}\psi_{\mathfrak{I}_{N}}^{H}(\rho,b-x)=
\cos(\rho x)+ \int_{0}^{x}G_{\mathfrak{I}_{N}^{*}}^{H}(x,t)\cos(\rho t)dt
\end{equation}
for some kernel $G_{\mathfrak{I}_{N}^{*}}^{H}(x,t)$ defined on $\Omega$ with
$G_{\mathfrak{I}_{N}^{*}}^{H}(x,\cdot)\in L_{2}(0,x)$ for $x\in(0,b]$.
Thus,
\begin{align*}
\psi_{\mathfrak{I}_{N}}^{H}(\rho,x) & =\cos(\rho(b-x))+\int_{0}^{b-x}%
G_{\mathfrak{I}_{N}^{*}}^{H}(b-x,t)\cos(\rho t)dt\\
& =\cos(\rho(b-x))+\int_{x}^{b}G_{\mathfrak{I}%
_{N}^{*}}^{H}(b-x,b-t)\cos(\rho(b-t))dt,
\end{align*}
where the change of variable $t\mapsto b-t$ was used. Hence we obtain
(\ref{solpsiintrepresentation}) with $\widetilde{G}_{\mathfrak{I}_{N}}%
^{H}(x,t)=G_{\mathfrak{I}_{N}^{*}}^{H}(b-x,b-t)$. In consequence, by Theorem
\ref{ThNSBF1} we obtain (\ref{NSBFforpsi}).
\end{proof}
\begin{remark}
As in Remark \ref{remarkfirstcoeffcosine}
\begin{equation}
\label{firstcoeffpsi}\tau_{0}(x)= \psi_{\mathfrak{I}_{N}}^{H}(0,x)-1
\quad\mbox{and }\; \zeta_{0}(x)=3\left( \frac{\vartheta_{\mathfrak{I}_{N}%
}(0,x)}{b-x}-1\right) .
\end{equation}
\end{remark}
\begin{remark}
\label{remarkentirefunction} Let $\lambda\in\mathbb{C}$ and $\lambda=\rho
^{2}$.
\begin{itemize}
\item[(i)] The functions $\widehat{s}_{k}(\rho,x-x_{k})$ are entire with
respect to $\rho$. Then from (\ref{generalsolcauchy}) $c_{\mathfrak{I}_{N}%
}^{h}(\rho,x)$, $s_{\mathfrak{I}_{N}}(\rho,x)$ and $\psi_{\mathfrak{I}_{N}%
}^{H}(\rho, x)$ are entire as well.
\item[(ii)] Suppose that $q$ is real valued and $\alpha_{0}, \dots, \alpha
_{N}, u_{0}, u_{1}\in\mathbb{R}$. If $u(\lambda,x)$ is a solution of
(\ref{Schrwithdelta}) satisfying $u^{(k)}(\lambda,0)=u_{k}$, $k=0,1$, then by the uniqueness of the solution of the Cauchy
problem, $\overline{u(\lambda,x)}=u(\overline{\lambda},x)$. In particular, for
$\rho, h, H\in\mathbb{R}$, the solutions $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$,
$s_{\mathfrak{I}_{N}}(\rho,x)$ and $\psi_{\mathfrak{I}_{N}}^{H}(\rho, x)$ are
real valued.
\end{itemize}
\end{remark}
\subsection{A recursive integration procedure for the coefficients $\{a_n(x)\}_{n=0}^{\infty}$}
Similarly to the case of the regular Schr\"odinger equation \cite{directandinverse,neumann,neumann2}, we derive formally a recursive integration procedure for computing the Fourier-Legendre coefficients $\{a_n(x)\}_{n=0}^{\infty}$ of the canonical transmutation kernel $K_{\mathfrak{J}_N}^f(x,t)$. Consider the sequence of functions $\sigma_n(x):=x^na_n(x)$ for $n\in \mathbb{N}_0$. According to Remark \ref{remarkcoefficientsaregood}, $\{\sigma_n(x)\}_{n=0}^{\infty}\subset \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_N}\right)$.
\begin{remark}\label{Remarksigmafunctionproperties}
\begin{itemize}
\item[(i)] By Remark \ref{remarkfirstcoeffcosine},
\begin{equation}\label{sigma0}
\sigma_0(x)=\frac{f(x)-1}{2}.
\end{equation}
\item[(ii)] By (\ref{CoeffFourier1formalpowers}), $a_1(x)=\frac{3}{2}\left(\frac{\varphi_f^{(1)}(x)}{x}-1\right)$. Thus, from (\ref{formalpowers}) and (\ref{auxiliaryformalpowers}) we have
\begin{equation}\label{sigma1}
\sigma_1(x)=\frac{3}{2}\left(f(x)\int_0^x\frac{dt}{f^2(t)}-x\right).
\end{equation}
\item[(iii)] For $n\geqslant 2$, $\sigma_n(0)=0$, and by (\ref{CoeffFourier1formalpowers}) we obtain
\begin{align*}
D\sigma_n(x) & = \left(n+\frac{1}{2}\right)\left(\sum_{k=0}^{n}l_{k,n}D\left(x^{n-k}\varphi_f^{(k)}(x)\right)-nx^{n-1}\right)\\
& =\left(n+\frac{1}{2}\right)\left(\sum_{k=0}^{n-1}l_{k,n} (n-k)x^{n-k-1}\varphi_f^{(k)}(x)+\sum_{k=0}^{n}l_{k,n}x^{n-k}D\varphi_f^{(k)}(x)-nx^{n-1}\right).
\end{align*}
By (\ref{formalpowersderivative}) and (\ref{auxiliaryformalpowers}), $D\varphi_f^{(k)}(0)=0$ for $k\geqslant 1$. Hence, $\sigma_n'(0)=0$.
\end{itemize}
\end{remark}
Denote by $c_{\mathfrak{J}_N}^f(\rho,x)$ the solution of (\ref{Schrwithdelta}) satisfying (\ref{cosinegralinitialconds}) with $h=f'(0)$. On each interval $[x_k,x_{k+1}]$, $k=0, \cdots, N$, $c_{\mathfrak{J}_N}^f(\rho,x)$ is a solution of the regular equation (\ref{schrodingerregular}). In \cite[Sec. 6]{neumann} by substituting the Neumann series (\ref{NSBFcosine}) of $c_{\mathfrak{J}_N}^f(\rho,x)$ into Eq. (\ref{schrodingerregular}) it was proved that the functions $\{\sigma_{2n}(x)\}_{n=0}^{\infty}$ must satisfy, at least formally, the recursive relations
\begin{equation}\label{firstrecursiverelation}
\mathbf{L}_q\sigma_{2n}(x) = \frac{4n+1}{4n-3}x^{4n-1}\mathbf{L}_q\left[\frac{\sigma_{2n-2}(x)}{x^{4n-3}}\right], \quad x_k<x<x_{k+1}
\end{equation}
for $k=0,\cdots, N$. Similarly, substitution of the Neumann series (\ref{NSBFsine}) of $s_{\mathfrak{J}_N}(\rho,x)$ into (\ref{schrodingerregular}) leads to the equalities
\begin{equation}\label{secondrecursiverelation}
\mathbf{L}_q\sigma_{2n+1}(x) = \frac{4n+3}{4n-1}x^{4n+1}\mathbf{L}_q\left[\frac{\sigma_{2n-1}(x)}{x^{4n-1}}\right], \quad x_k<x<x_{k+1}.
\end{equation}
Taking into account that $\sigma_n\in \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_N}\right)$ and combining (\ref{firstrecursiverelation}) and (\ref{secondrecursiverelation}) with Remark \ref{Remarksigmafunctionproperties}(iii), we obtain that the functions $\sigma_n(x)$, $n\geqslant 2$, must satisfy (at least formally) the following Cauchy problems
\begin{equation}\label{recurrencerelationsigma}
\begin{cases}
\displaystyle \mathbf{L}_{q,\mathfrak{J}_N}\sigma_n(x) = \frac{2n+1}{2n-3}x^{2n-1}\mathbf{L}_q\left[\frac{\sigma_{n-2}(x)}{x^{2n-3}}\right], \quad 0<x<b, \\
\sigma_n(0) =\sigma'_n(0)=0.
\end{cases}
\end{equation}
\begin{remark}\label{remarkcontinuousquotient}
If $g\in \mathcal{D}_2\left(L_{q,\mathfrak{J}_N}\right)$, then $\frac{g}{f}\in H^2(0,b)$.
Indeed, $\frac{g}{f}\in C[0,b]$, and the jump of the derivative at $x_k$ is given by
\begin{align*}
\left(\frac{g}{f}\right)'(x_k+)- \left(\frac{g}{f}\right)'(x_k-) & = \frac{g'(x_k+)f(x_k)-f'(x_k+)g(x_k)}{f^2(x_k)}-\frac{g'(x_k-)f(x_k)-f'(x_k-)g(x_k)}{f^2(x_k)}\\
& = \frac{1}{f^2(x_k)}\left[\left(g'(x_k+)-g'(x_k-)\right)f(x_k)-g(x_k)\left(f'(x_k+)-f'(x_k-)\right) \right]\\
& = \frac{1}{f^2(x_k)}\left[ \alpha_kg(x_k)f(x_k)-\alpha_kg(x_k)f(x_k) \right]=0.
\end{align*}
Hence $\left(\frac{g}{f}\right)'\in AC[0,b]$, and then $\frac{g}{f}\in H^2(0,b)$.
\end{remark}
\begin{proposition}
The sequence $\{\sigma_n(x)\}_{n=0}^{\infty}$ satisfying the recurrence relation (\ref{recurrencerelationsigma}) for $n\geqslant 2$, with $\sigma_0(x)=\frac{f(x)-1}{2}$ and $\sigma_1(x)= \frac{3}{2}\left(f(x)\int_0^x\frac{dt}{f^2(t)}-x\right)$, is given by
\begin{equation}\label{recurrenceintegralsigma}
\sigma_n(x)= \frac{2n+1}{2n-3}\left( x^2\sigma_{n-2}(x)+2(2n-1)f(x)\theta_n(x)\right), \quad n\geqslant 2,
\end{equation}
where
\begin{equation}
\theta_n(x):= \int_0^x\left(\eta_n(t)-tf(t)\sigma_{n-2}(t)\right)\frac{dt}{f^2(t)}, \quad n\geqslant 2,
\end{equation}
and
\begin{equation}
\eta_n(x):= \int_0^x\left((n-1)f(t)+tf'(t)\right)\sigma_{n-2}(t)dt, \quad n\geqslant 2.
\end{equation}
\end{proposition}
\begin{proof}
Set $g\in \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_N}\right)$ and $n\geqslant 2$. Consider the Cauchy problem
\begin{equation}\label{auxcauchyproblem2}
\begin{cases}
\displaystyle \mathbf{L}_{q,\mathfrak{J}_N}u_n(x) = \frac{2n+1}{2n-3}x^{2n-1}\mathbf{L}_q\left[\frac{g(x)}{x^{2n-3}}\right], \quad 0<x<b, \\
u_n(0) =u'_n(0)=0.
\end{cases}
\end{equation}
By formula (\ref{solutionnonhomogeneouseq}) and the Polya factorization $\mathbf{L}_q= -\frac{1}{f}Df^2D\frac{1}{f}$ we obtain that the unique solution of the Cauchy problem (\ref{auxcauchyproblem2}) is given by
\[
u_n(x)= \frac{2n+1}{2n-3} f(x)\int_0^x \frac{1}{f^2(t)}\left(\int_0^t s^{2n-1} Df^2(s)D\left[ \frac{g(s)}{s^{2n-3}f(s)}\right]ds \right)dt.
\]
Consider an antiderivative $\int s^{2n-1} Df^2(s)D\left[ \frac{g(s)}{s^{2n-3}f(s)}\right]ds$. Integration by parts gives
\begin{align*}
\int s^{2n-1} Df^2(s)D\left[ \frac{g(s)}{s^{2n-3}f(s)}\right]ds & = s^{2n-1}f^2(s)D\left(\frac{g(s)}{s^{2n-3}f(s)}\right)-(2n-1)sf(s)g(s) \\
& \quad + \int \left((2n-1)(2n-2)f(s)+2(2n-1)sf'(s)\right)g(s)ds.
\end{align*}
Note that
\begin{align*}
s^{2n-1}f^2(s)D\left(\frac{g(s)}{s^{2n-3}f(s)}\right) & = s^{2n-1}f^2(s)\frac{D\left(\frac{g(s)}{f(s)}\right)}{s^{2n-3}}-s^{2n-1}f^2(s)\frac{\frac{g(s)}{f(s)}}{s^{4n-6}}(2n-3)s^{2n-4}\\
& = s^2f^2(s)D\left(\frac{g(s)}{f(s)}\right)-(2n-3)sf(s)g(s).
\end{align*}
Since $g\in \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_N}\right)$, by Remark \ref{remarkcontinuousquotient}, $D\left(\frac{g(s)}{f(s)}\right)$ is continuous in $[0,b]$. Thus,
\begin{align*}
\int s^{2n-1} Df^2(s)D\left[ \frac{g(s)}{s^{2n-3}f(s)}\right]ds
& = s^2f^2(s)D\left(\frac{g(s)}{f(s)}\right)-(4n-4)sf(s)g(s)\\
&\quad + 2(2n-1)\int\left((n-1)f(s)+sf'(s)\right)g(s)ds
\end{align*}
is well defined at $s=0$ and is continuous in $[0,b]$. Then we obtain that
\begin{align*}
\Phi(t) & := \int_0^t s^{2n-1} Df^2(s)D\left[ \frac{g(s)}{s^{2n-3}f(s)}\right]ds \\
& = t^2f^2(t)D\left(\frac{g(t)}{f(t)}\right)-(4n-4)tf(t)g(t) + 2(2n-1)H_n[g](t),
\end{align*}
with $H_n[g](t):= \displaystyle \int_0^t \left((n-1)f(s)+sf'(s)\right)g(s)ds$,
is a continuous function in $[0,b]$. Now,
\begin{align*}
\int_0^x \Phi(t)\frac{dt}{f^2(t)} &= \int_0^x t^2D\left[\frac{g(t)}{f(t)}\right]dt-(4n-4)\int_{0}^{x} t\frac{g(t)}{f(t)}dt+2(2n-1)\int_0^x\frac{H_n[g](t)}{f^2(t)}dt\\
& = x^2\frac{g(x)}{f(x)}+2(2n-1)\int_0^x\left[H_n[g](t)-tf(t)g(t)\right]\frac{dt}{f^2(t)}.
\end{align*}
Hence
\begin{equation}\label{auxsol2}
u_n(x)= \frac{2n+1}{2n-3}\left(x^2g(x)+2(2n-1)f(x)\Theta_n[g](x)\right),
\end{equation}
with $\Theta_n[g](x):= \displaystyle \int_0^x\left[H_n[g](t)-tf(t)g(t)\right]\frac{dt}{f^2(t)}$.
Finally, since $\sigma_0, \sigma_1\in \mathcal{D}_2\left(\mathbf{L}_{q,\mathfrak{J}_{N}}\right)$, formula (\ref{recurrenceintegralsigma}) is obtained for all $n\geqslant 2$ by induction, taking $g=\sigma_{n-2}$ in (\ref{auxcauchyproblem2}) and $\eta_n(x)=H_n[\sigma_{n-2}](x)$, $\theta_n(x)=\Theta_n[\sigma_{n-2}](x)$ in (\ref{auxsol2}).
\end{proof}
Integral relations of type (\ref{recurrenceintegralsigma}) are effective for the numerical computation of the partial sums (\ref{NSBFcosineaprox}) and (\ref{NSBFsineaprox}), as seen in \cite{neumann,neumann2}.
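A direct numerical realization of this procedure is sketched below in Python (an illustration only; the grid, the quadrature rule and the way $f^{\prime}$ is supplied are ad hoc choices). In the free case $f\equiv1$ all $\sigma_{n}$ vanish, which serves as a quick check.
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

def cumint(g, x):
    return cumulative_trapezoid(g, x, initial=0.0)

def sigma_sequence(f, fprime, x, nmax):
    # sigma_0, ..., sigma_nmax from (recurrenceintegralsigma)
    sig = [(f - 1.0) / 2.0,
           1.5 * (f * cumint(1.0 / f**2, x) - x)]
    for n in range(2, nmax + 1):
        eta = cumint(((n - 1) * f + x * fprime) * sig[n - 2], x)
        theta = cumint((eta - x * f * sig[n - 2]) / f**2, x)
        sig.append((2 * n + 1) / (2 * n - 3)
                   * (x**2 * sig[n - 2] + 2 * (2 * n - 1) * f * theta))
    return sig

# quick check: f == 1 gives sigma_n == 0 for every n
x = np.linspace(0.0, 1.0, 2001)
sig = sigma_sequence(np.ones_like(x), np.zeros_like(x), x, 6)
print(max(np.max(np.abs(s)) for s in sig))
\end{verbatim}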
\section{Integral representation for the derivative}
Since $e_{\mathfrak{I}_{N}}^{h}(\rho, \cdot)\in AC[0, b]$, it is worthwhile
looking for an integral representation of the derivative of $e_{\mathfrak{I}%
_{N}}^{h}(\rho,x)$. Differentiating (\ref{SoleGral}) we obtain
\begin{align*}
(e_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x) & =\widetilde{e}_{h}^{\prime
}(\rho,x)+\sum_{k=0}^{N}\alpha_{k}\widetilde{e}_{h}(\rho, x_{k})H(x-x_{k}%
)\widehat{s}^{\prime}_{k} (\rho, x-x_{k})\\
& \quad+\sum_{J\in\mathcal{I}_{N}}\alpha_{J} H(x-x_{j_{|J|}})\widetilde
{e}_{h}(\rho,x_{j_{1}})\left( \prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}%
(\rho,x_{j_{l+1}}-x_{j_{l}})\right) \widehat{s}_{j_{|J|}}^{\prime}%
(\rho,x-x_{j_{|J|}}).
\end{align*}
Differentiating (\ref{representationsinegeneral1}) and using that $\widehat
{H}_{k}(x,x)=\frac{1}{2}\int_{0}^{x}q(t+x_{k})dt$, we obtain
\begin{align*}
\widehat{s}_{k}^{\prime}(\rho,x) & = \cos(\rho x)+\frac{1}{2}\frac{\sin(\rho
x)}{\rho}\int_{0}^{x}q(t+x_{k})dt+\int_{0}^{x}\partial_{x}\widehat{H}%
_{k}(x,t)\frac{\sin(\rho t)}{\rho}dt.
\end{align*}
Denote
\begin{equation}
\label{antiderivativeq}w(y,x):= \frac{1}{2}\int_{y}^{x}q(s)ds \quad
\mbox{ for }\; x,y\in[0,b].
\end{equation}
Hence, the derivative $\widehat{s}_{k}^{\prime}(\rho,x-x_{k})$ can be written
as
\begin{equation}
\widehat{s}_{k}^{\prime}(\rho,x-x_{k})= \cos(\rho(x-x_{k}))+\int_{-(x-x_{k}%
)}^{x-x_{k}}\widetilde{K}_{k}^{1}(x,t)e^{i\rho t}dt,
\end{equation}
where $\displaystyle \widetilde{K}_{k}^{1}(x,t)= w(x_{k},x)+\frac{1}{2}%
\int_{|t|}^{x-x_{k}}\partial_{x}\widehat{H}_{k}(x,s)ds$.
On the other hand, differentiation of (\ref{transm1}) and the Goursat
conditions for $\widetilde{K}^{h}(x,t)$ lead to the equality
\begin{equation}
\tilde{e}^{\prime}_{h}(\rho,x)= (i\rho+w(0,x))e^{i\rho x}+h\cos(\rho
x)+\int_{-x}^{x} \partial_{x}\widetilde{K}^{h}(x,t)e^{i\rho t}dt.
\end{equation}
Using the fact that
\[
\cos(\rho A)\int_{-B}^{B}f(t)e^{i\rho t}dt= \int_{-(B+A)}^{B+A}\frac{1}%
{2}\left( \chi_{[-(B+A),B-A]}(t)f(t-A)+\chi_{[A-B,B+A]}(t)f(t+A)\right)
e^{i\rho t}dt
\]
for $A,B>0$ and $f\in L_{2}(\mathbb{R})$ with $\operatorname{Supp}%
(f)\subset[-B,B]$, we obtain
\begin{align*}
\tilde{e}_{h}(\rho, x_{j})\widehat{s}_{k}^{\prime}(\rho,x-x_{k}) & =e^{i\rho
x_{j}}\cos(\rho(x-x_{k}))+\mathcal{F}\left[ \widehat{K}_{x_{j},x_{k}%
}(x,t)\right] ,
\end{align*}
where
\begin{align*}
\widehat{K}_{x_{j},x_{k}}(x,t) & = \chi_{[x_{k}-x-x_{j},x-x_{k}-x_{j}%
]}(t)\widetilde{K}_{k}^{1}(x,t-x_{j})+\chi_{x_{j}}(t)\widetilde{K}^{h}%
(x_{j},t)\ast\chi_{x-x_{k}}(t)\widehat{K}_{k}^{1}(x,t)\\
& \qquad+\frac{1}{2}\chi_{[x_{k}-x_{j}-x,x_{j}-x+x_{k}]}(t)\widehat{K}%
^{h}(x_{j},t-x+x_{k})\\
& \qquad+\frac{1}{2}\chi_{[x-x_{k}-x_{j},x-x_{k}+x_{j}]}(t)\widehat{K}%
^{h}(x_{j},t+x-x_{k}).
\end{align*}
By Lemma \ref{lemaconv} the support of $\widehat{K}_{x_{j},x_{k}}(x,t)$
belongs to $[x_{k}-x-x_{j},x-x_{k}+x_{j}]$. Using the equality
\[
\prod_{l=1}^{|J|-1}\widehat{s}_{j_{l}}(\rho,x_{j_{l+1}}-x_{j_{l}}%
)=\mathcal{F}\left\{ \left( \prod_{l=1}^{|J|-1}\right) ^{\ast}\left(
\chi_{x_{j_{l+1}}-x_{j_{l}}}(t)\widetilde{K}_{k}(x_{j_{l+1}},t)\right)
\right\}
\]
we have
\[
(e_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x)= \left(i\rho+w(0,x)\right)e^{i\rho x}+h\cos(\rho x)+\sum_{k=0}^{N}%
\alpha_{k}H(x-x_{k})e^{i\rho x_{k}}\cos(\rho(x-x_{k}))+\mathcal{F}\left\{
E_{\mathfrak{I}_{N}}^{h}(x,t)\right\}
\]
where
\begin{align*}
E_{\mathfrak{I}_{N}}^{h}(x,t) & = \chi_{x}(t)\partial_{x}\widetilde{K}%
^{h}(x,t)+\sum_{k=0}^{N}\alpha_{k} H(x-x_{k})\widehat{K}_{x_{k},x_{k}}(x,t)\\
& \quad+\sum_{J\in\mathcal{I}_{N}}\alpha_{J} H(x-x_{j_{|J|}}) \widehat
{K}_{x_{j_{1}},x_{j_{|J|}}}(x,t)\ast\left( \prod_{l=1}^{|J|-1}\right) ^{\ast
}\left( \chi_{x_{j_{l+1}}-x_{j_{l}}}(t)\widetilde{K}_{k}(x_{j_{l+1}},t)\right)
.
\end{align*}
Again, by Lemma \ref{lemaconv} the support of $E_{\mathfrak{I}_{N}}^{h}(x,t)$
belongs to $[-x,x]$. Since $e^{i\rho x_{k}}\cos(\rho(x-x_{k}))=\frac{1}%
{2}e^{i\rho x}\left( 1+e^{-2i\rho(x-x_{k})}\right) $, we obtain the following representation.
\begin{theorem}
The derivative $(e_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x)$ admits the
integral representation
\begin{align}
(e_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x) & = \left( i\rho+w(0,x)+\frac
{1}{2}\sigma_{\mathfrak{I}_{N}}(x)\right) e^{i\rho x}+h\cos(\rho x)\nonumber\\
& \quad+\sum_{k=0}^{N}\frac{\alpha_{k}}{2}H(x-x_{k})e^{-2i\rho(x-x_{k})}%
+\int_{-x}^{x}E_{\mathfrak{I}_{N}}^{h}(x,t)e^{i\rho t}%
dt,\label{derivativetransme}%
\end{align}
where $E_{\mathfrak{I}_{N}}^{h}(x,\cdot)\in L_{2}(-x,x)$ for all $x\in(0,b]$.
\end{theorem}
\begin{corollary}
The derivatives of the solutions $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ and
$s_{\mathfrak{I}_{N}}(\rho,x)$ admit the integral representations
\begin{align}
(c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x) & =-\rho\sin(\rho x)+ \left(
h+w(0,x)+\frac{1}{2}\sigma_{\mathfrak{I}_{N}}(x)\right) \cos(\rho
x)\nonumber\\
& \quad+\sum_{k=0}^{N}\frac{\alpha_{k}}{2}H(x-x_{k})\cos(2\rho(x-x_{k}%
))+\int_{0}^{x}M_{\mathfrak{I}_{N}}^{h}(x,t)\cos(\rho
t)dt,\label{derivativetransmcosine}\\
s^{\prime}_{\mathfrak{I}_{N}}(\rho,x) & =\cos(\rho x)+ \left( w(0,x)+\frac
{1}{2}\sigma_{\mathfrak{I}_{N}}(x)\right) \frac{\sin(\rho x)}{\rho}\nonumber\\
& \quad-\sum_{k=0}^{N}\alpha_{k}H(x-x_{k})\frac{\sin(2\rho(x-x_{k}))}{2\rho
}+\int_{0}^{x}R_{\mathfrak{I}_{N}}(x,t)\frac{\sin(\rho t)}{\rho}%
dt,\label{derivativetransmsine}%
\end{align}
where
\begin{align}
M_{\mathfrak{I}_{N}}^{h}(x,t) & = E_{\mathfrak{I}_{N}}^{h}%
(x,t)+E_{\mathfrak{I}_{N}}^{h}(x,-t),\label{kernelderivativecosine}\\
R_{\mathfrak{I}_{N}}^{h}(x,t) & = E_{\mathfrak{I}_{N}}^{h}%
(x,t)-E_{\mathfrak{I}_{N}}^{h}(x,-t),\label{kernelderivativesine}%
\end{align}
defined for $x\in[0,b]$ and $|t|\leqslant x$.
\end{corollary}
\begin{corollary}
\label{CorNSBFforderivatives} The derivatives of the solutions
$c_{\mathfrak{I}_{N}}^{h}(\rho,x)$ and $s_{\mathfrak{I}_{N}}(\rho,x)$ admit
the NSBF representations
\begin{align}
(c_{\mathfrak{I}_{N}}^{h})^{\prime}(\rho,x) & =-\rho\sin(\rho x)+ \left(
h+w(0,x)+\frac{1}{2}\sigma_{\mathfrak{I}_{N}}(x)\right) \cos(\rho
x)\nonumber\\
& \quad+\sum_{k=0}^{N}\frac{\alpha_{k}}{2}H(x-x_{k})\cos(2\rho(x-x_{k}%
))+\sum_{n=0}^{\infty}(-1)^{n}l_{n}(x)j_{2n}(\rho
x),\label{NSBFderivativetransmcosine}\\
s^{\prime}_{\mathfrak{I}_{N}}(\rho,x) & =\cos(\rho x)+ \left( w(0,x)+\frac
{1}{2}\sigma_{\mathfrak{I}_{N}}(x)\right) \frac{\sin(\rho x)}{\rho}\nonumber\\
& \quad-\sum_{k=0}^{N}\alpha_{k}H(x-x_{k})\frac{\sin(2\rho(x-x_{k}))}{2\rho
}+\sum_{n=0}^{\infty}(-1)^{n}r_{n}(x)j_{2n+1}(\rho
x),\label{NSBFderivativetransmsine}%
\end{align}
where $\{l_{n}(x)\}_{n=0}^{\infty}$ and $\{r_{n}(x)\}_{n=0}^{\infty}$ are the
coefficients of the Fourier-Legendre expansion of $M_{\mathfrak{I}_{N}}%
^{h}(x,t)$ and $R_{\mathfrak{J}_{N}}(x,t)$ in terms of the even and odd
Legendre polynomials, respectively.
\end{corollary}
\section{Conclusions}
The construction of a transmutation operator that transmutes the solutions of
the equation $v^{\prime\prime }+\lambda v=0$ into solutions of (\ref{Schrwithdelta}) was
presented. The transmutation operator is obtained from the closed form of the
general solution of equation (\ref{Schrwithdelta}). It was shown how to
construct the image of the transmutation operator on the set of polynomials
with the aid of the SPPS method. A Fourier-Legendre series representation
for the integral transmutation kernel was obtained, together with a
representation for the solutions $c_{\mathfrak{I}_{N}}^{h}(\rho,x)$,
$s_{\mathfrak{I}_{N}}(\rho,x)$ and their derivatives as Neumann series of
Bessel functions, as well as integral recursive relations for the construction of the Fourier-Legendre coefficients. The series (\ref{NSBFcosine}), (\ref{NSBFsine}),
(\ref{NSBFderivativetransmcosine}), (\ref{NSBFderivativetransmsine}) are
useful for solving direct and inverse spectral problems for
(\ref{Schrwithdelta}), as shown for the regular case
\cite{inverse1,directandinverse,neumann,neumann2}.
\section*{Acknowledgments}
Research was supported by CONACYT, Mexico via the project 284470.
\begin{document}
\twocolumn[
\runningtitle{Sample Complexity of DR Q-learning}
\aistatstitle{A Finite Sample Complexity Bound\\ for Distributionally Robust $Q$-learning}
\aistatsauthor{ Shengbo Wang \And Nian Si \And Jose Blanchet \And Zhengyuan Zhou }
\aistatsaddress{ Stanford University \And The University of Chicago \And Stanford University \And New York University} ]
\begin{abstract}
We consider a reinforcement learning setting in which the deployment environment is different from the training environment. Applying a robust Markov decision processes formulation, we extend the distributionally robust $Q$-learning framework studied in \cite{pmlr-v162-liu22a}. Further, we improve the design and analysis of their multi-level Monte Carlo estimator. Assuming access to a simulator, we prove that the worst-case expected sample complexity of our algorithm to learn the optimal robust $Q$-function within an $\epsilon$ error in the sup norm is upper bounded by $\tilde O(|S||A|(1-\gamma)^{-5}\epsilon^{-2}p_{\wedge}^{-6}\delta^{-4})$, where $\gamma$ is the discount rate, $p_{\wedge}$ is the non-zero minimal support probability of the transition kernels and $\delta$ is the uncertainty size. This is the first sample complexity result for the model-free robust RL problem. Simulation studies further validate our theoretical results.
\end{abstract}
\section{Introduction}
Reinforcement learning (RL)~\citep{powell2011approDP, Bertsekas2012control, Szepesvari2012rlAlgo, sutton2018reinforcement}
has witnessed impressive empirical success in simulated environments, with applications spanning domains such as robotics~\citep{Kober2013RLinRoboticsSurvey, Sergey2017DeepRLinRobotics}, computer vision \citep{Sergey2017RLsingleImage, huang2017RLinCV}, finance \citep{li2009americanOption,CHOI2009rlSavingBehavior,Deng2017rlFinanceSignal} and achieving superhuman performance in well-known games such as Go and poker~\citep{alphago2016, alphazero2018}.
However, existing RL algorithms often
make the implicit assumption that the training environment (i.e. a simulator) is the same as the deploying environment, thereby rendering the learned policy \textit{fragile}. This fragility presents a significant impediment for carrying the remarkable success of RL into real environments, because in practice, such discrepancy between training and deploying environments is ubiquitous.
On the one hand, simulator models often cannot capture the full complexity of the real environment, and hence will be mis-specified.
On the other hand, even if a policy is trained directly in a real environment, the new deployment environment may not be the same and hence suffer from distributional shifts.
As an example of the latter,
personalized promotions engine (learned from existing user browsing data collected in one region or market) may need to be deployed in a different region when the company intends to enter a new market.
The new market may have similar but different population characteristics.
Another example occurs in robotics, where, as articulated in~\cite{pmlr-v130-zhou21d} ``a robot trained to perform certain maneuvers (such as walking~\cite{Abbeel2013robotWalking} or folding laundry~\citep{Abbeel2010robotFold}) in an environment can fail catastrophically~\citep{Abbeel2015DARPA} in a slightly different environment, where the terrain landscape (in walking) is slightly altered or the laundry object (in laundry folding) is positioned differently".
\par Motivated by the necessity of policy robustness in RL applications, \cite{pmlr-v130-zhou21d} adapted the distributionally robust (DR) Markov decision processes (MDPs) to a tabular RL setting and proposed a DR-RL paradigm. Subsequent works \cite{yang2021} have improved on the sample complexity bounds, although the optimal bound is still unknown as of this writing. However, all these works adopt a model-based approach, which, as is widely known, is computationally intensive, requires extensive memory storage, and does not generalize to function approximation settings. Motivated by this concern, the very recent work~\cite{pmlr-v162-liu22a} introduced the first distributionally robust $Q$-learning for robust MDPs, thus showing that $Q$-learning can indeed be made distributionally robust. However, an important issue is that the expected number of samples needed to run the algorithm in~\cite{pmlr-v162-liu22a} to converge to a fixed error distributionally robust optimal policy is infinite. As such, this naturally motivates the following question:
\textit{Can we design a distributionally robust $Q$-learning that has finite sample complexity guarantee?}
\subsection{Our Contributions}
In this paper, we extend the MLMC-based distributionally robust Bellman estimator in \cite{pmlr-v162-liu22a} such that the expected sample size of constructing our estimator is of \textit{constant order}. We establish unbiasedness and moment bounds for our estimator in Propositions \ref{prop:unbiased} and \ref{prop:var_sup_bound} that are essential to the complexity analysis. Hinging on these properties, we prove that the expected sample complexity of our algorithm is $\tilde O\crbk{|S||A|(1-\gamma)^{-5}\epsilon^{-2}p_{\wedge}^{-6}\delta^{-4}}$ under rescaled-linear or constant stepsizes, where $|S|$ and $|A|$ are the number of states and actions, $\gamma\in(0,1)$ the discount factor, $\epsilon$ the target error in the infinity norm of the DR $Q$-function, $p_{\wedge}$ the minimal support probability, and $\delta$ the size of the uncertainty set (see Theorems \ref{thm:algo_error_bound} and \ref{thm:sample_complexity}). Our result is based on the finite-sample analysis framework for stochastic approximation (SA) recently established by \cite{chen2020}. To our knowledge, this is the first model-free algorithm and analysis that guarantee solving the DR-RL problem with a finite expected sample complexity. Further, our complexity is tight in $|S||A|$ and nearly tight in the effective horizon $(1-\gamma)^{-1}$ at the same time. Finally, we numerically validate our theoretical predictions and demonstrate the improvements of our algorithm over that in \cite{pmlr-v162-liu22a}.
\subsection{Related Work}
Distributionally robust optimization (DRO) is well-studied in the supervised learning setting; see, e.g., \cite{bertsimas2004price,delage2010distributionally,Hu2012KullbackLeiblerDC,shafieezadeh-abadeh_distributionally_2015,bayraksan2015data,gao2016distributionally,namkoong2016stochastic,duchi2016statistics,staib2017distributionally,shapiro2017distributionally,lam2017empirical,volpi2018generalizing,raginsky_lee,nguyen2018distributionally,yang2020Wasserstein,MohajerinEsfahani2017,zhao2017distributionally,abadeh2018Wasserstein,ZHAO2018262,sinha2018certifiable,gao2018robust,wolfram2018,ghosh2019robust,blanchet2016quantifying,duchi2018learning,lam2019recovering,duchi2019distributionally,ho2020distributionally}. Those works focus on the optimization formulation, algorithms, and statistical properties in settings where labeled data and a pre-specified loss are available. In those settings, vanilla empirical risk minimizers are outperformed by distributionally robust solutions because of either overfitting or distributional shifts.
In recent years, distributionally robust formulations have also found applications in a wide range of research areas, including dimensionality reduction under fairness \cite{vu2022distributionally} and model selection \cite{cisneros2020distributionally}.
Minimax sample complexities of standard tabular RL have been studied extensively in recent years. \cite{azar2013,sidford2018near_opt,pmlr-v125-agarwal20b,li2020breaking} proposed algorithms and proved the optimal upper bound $\tilde \Theta(|S||A|(1-\gamma)^{-3}\epsilon^{-2})$ on the sample complexity to achieve $\epsilon$ error in the model-based tabular RL setting (the matching lower bound is proved in \cite{azar2013}). The complexity of model-free $Q$-learning has also been studied extensively \citep{even2003learning,wainwright2019l_infty,li2021q}; it has been shown by \cite{li2021q} to have a minimax sample complexity of $\tilde \Theta(|S||A|(1-\gamma)^{-4}\epsilon^{-2})$. Nevertheless, variance-reduced variants of $Q$-learning, e.g., \cite{wainwright2019}, achieve the aforementioned model-based sample complexity lower bound $\tilde \Theta(|S||A|(1-\gamma)^{-3}\epsilon^{-2})$.
Recent advances in the sample complexity theory of $Q$-learning and its variants are propelled by breakthroughs in the finite-time analysis of SA. \cite{wainwright2019l_infty} proved a sample path bound for the SA recursion. This enables the variance reduction techniques that achieve the optimal learning rate in \cite{wainwright2019}. In comparison, \cite{chen2020} established finite-sample guarantees for SA under only a second moment bound on the martingale difference noise sequence.
Our work uses the theoretical framework of classical minimax control and robust MDPs; see, e.g., \citet{gonzalez-trejo2002,Iyengar2005, wiesemann2013,huan2010,shapiro2022}. These works establish the concept of distributional robustness in MDPs and derive the distributionally robust Bellman equation.
\par Recently, learning distributionally robust policies from data has gained attention \citep{si2020distributional,pmlr-v130-zhou21d, pmlr-v162-liu22a,yang2021}. Among these works, \cite{si2020distributional} studies the contextual bandit setting, while \citet{pmlr-v130-zhou21d, Panaganti2021, pmlr-v162-liu22a,yang2021} focus on the tabular RL regime.
\section{Distributionally Robust Policy Learning Paradigm}
\subsection{Standard Policy Learning}
Let ${\mathcal{M}}_0 = \left(S, A, R, {\mathcal{P}}_0, {\mathcal{R}}_0, \gamma\right)$ be an MDP, where $ S$, $ A$, and $R\subsetneq \mathbb{R}_{\geq 0}$ are finite state, action, and reward spaces respectively.
${\mathcal{P}}_0 = \set{p_{s,a}, s\in S,a\in A}$ and $\mathcal{R}_0 = \set{\nu_{s,a},s\in S,a\in A}$ are the sets of transition and reward distributions, respectively. $\gamma \in (0,1)$ is the discount factor. Define $r_{\max} = \max\set{r\in R}$ as the maximum reward. We assume that the transition is Markovian, i.e., at each state $s\in S$, if action $a\in A$ is chosen, then the subsequent state is determined by the conditional distribution $p_{s,a}(\cdot)= p(\cdot|s, a)$.
The decision maker will therefore receive a randomized reward $r\sim \nu_{s,a}$.
Let $\Pi$ be the history-dependent policy class (see Section 1 of the supplemental materials for a rigorous construction). For $\pi\in \Pi$, the value function $V^{\pi}(s)$ is defined as:
$$V^{\pi}(s) := \mathbb{E}\sqbkcond{\sum_{t=0}^\infty \gamma^{t}r_t }{s_0 = s}.$$ The optimal value function $
V^*(s) \coloneqq \max_{\pi\in\Pi} V^\pi(s)$,
$\forall s\in S$. It is well known that the optimal value function is the unique solution of the following Bellman equation:
\begin{equation*}
V^*(s) = \max_{a\in A} \left\lbrace\mathbb{E}_{r\sim \nu_{s,a}}[r] + \gamma\mathbb{E}_{s'\sim p_{s,a}}\left[V^*(s')\right]\right\rbrace.
\end{equation*}
An important implication of the Bellman equation is that it suffices to optimize within the stationary Markovian deterministic policy class.
\par The optimal $Q$-function and its Bellman equation are given by:
\begin{equation*}
\begin{aligned}
Q^*(s,a) &:= \mathbb{E}_{r\sim \nu_{s,a}}[r] + \gamma\mathbb{E}_{s'\sim p_{s,a}}\left[V^*(s')\right]\\
&= \mathbb{E}_{r\sim \nu_{s,a}}[r] + \gamma\mathbb{E}_{s'\sim p_{s,a}}\left[\max_{b\in A}Q^*(s',b)\right].
\end{aligned}
\end{equation*}
The optimal policy
$\pi^*(s) = \arg\max_{a\in A}Q^*(s,a)$. Therefore, policy learning in RL environments can be achieved if we can learn a good estimate of $Q^*$.
\subsection{Distributionally Robust Formulation}
We consider a DR-RL setting, where both transition probabilities and rewards are perturbed based on the KL divergence $D_{\textrm{KL}}(P\|Q) := \int_\Omega \log\frac{dP}{dQ}P(d\omega)$ when $P\ll Q$ ($P$ is absolutely continuous w.r.t. $Q$). For each $(s,a)\in S\times A$, we define the KL uncertainty sets centered at $p_{s,a}\in\mathcal{P}_0$ and $\nu_{s,a}\in\mathcal{R}_0$ by $
{\mathcal{P}}_{s,a}(\delta)\coloneqq \left\lbrace p : D_{\textrm{KL}}\left(p\|p_{s,a}\right)\leq \delta\right\rbrace$ and ${\mathcal{R}}_{s,a}(\delta)\coloneqq \left\lbrace \nu: D_{\textrm{KL}}(\nu\|\nu_{s,a})\leq\delta\right\rbrace$.
The parameter $\delta > 0$ controls the size of the uncertainty sets. These uncertainty sets quantify the possible distributional shifts from the reference model $\mathcal{P}_0,\mathcal{R}_0$.
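\par For illustration, the following Python snippet (our own sketch, not part of the paper; the distributions are arbitrary toy values) computes the KL divergence between two finitely supported distributions and checks membership in a KL uncertainty set of radius $\delta$.
\begin{verbatim}
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) for finitely supported p, q with supp(p) inside supp(q)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def in_kl_ball(p, p_ref, delta):
    """Membership test for the set {p : D_KL(p || p_ref) <= delta}."""
    return kl_divergence(p, p_ref) <= delta

p_ref = np.array([0.5, 0.3, 0.2])    # nominal transition distribution
p_new = np.array([0.55, 0.25, 0.2])  # a candidate perturbation
print(kl_divergence(p_new, p_ref), in_kl_ball(p_new, p_ref, delta=0.1))
\end{verbatim}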
\begin{definition}\label{def:DRvalue}
The DR Bellman operator $\mathcal{B}_\delta$ for the value function is defined as the mapping
\begin{equation} \label{def.robust_Bellman_opt}
\begin{aligned}
&\mathcal{B}_\delta(v)(s):=\\
&\max_{a\in A} \inf_{\mbox{$\begin{subarray}{c} p\in{\mathcal{P}}_{s,a}(\delta),\\
\nu\in {\mathcal{R}}_{s,a}(\delta)\end{subarray}$}}
\left\lbrace \mathbb{E}_{r\sim \nu} [r] + \gamma\mathbb{E}_{s'\sim p}\left[v(s')\right]\right\rbrace.
\end{aligned}
\end{equation}
Define the optimal DR value function $V^*_\delta$ as the solution of the DR Bellman equation:
\begin{equation}\label{def.robust_Bellman}
V^*_\delta = \mathcal{B}_\delta(V^*_\delta)
\end{equation}
\end{definition}
\begin{remark}
The definition assumes the existence and uniqueness of a fixed point of the DR Bellman equation. This is a consequence of $\mathcal{B}_\delta$ being a contraction. Moreover, it turns out that under the notion of \textit{rectangularity} \citep{Iyengar2005, wiesemann2013}, this definition is equivalent to the minimax control optimal value
\[
V^*_{\delta}(s)= \sup_{\pi\in\Pi}\inf_{P\in \mathcal{K}^\pi(\delta)}\mathbb{E}_{P}\sqbkcond{\sum_{t=0}^\infty \gamma^t r_t }{s_0 = s}
\]
for some appropriately defined \textit{history-dependent} policy class $\Pi$ and $\pi$-consistent uncertainty set of probability measures $\mathcal{K}^\pi(\delta)$ on the sample path space; cf. \cite{Iyengar2005}. Intuitively, this is the optimal value when the controller chooses a policy $\pi$, an adversary observes this policy and chooses a possibly history-dependent sequence of reward and transition measure within some uncertainty set indexed by a parameter $\delta$ that is consistent with this policy. Therefore, we can interpret $\delta > 0$ as the power of this adversary. The equivalence of minimax control optimal value and Definition \ref{def:DRvalue} suggests the optimality of stationary deterministic Markov control policy and, under such policy, stationary Markovian adversarial distribution choice. We will rigorously discuss this equivalence in Section 1 of the supplemental materials.
\end{remark}
\subsection{Strong Duality}
The r.h.s. of \eqref{def.robust_Bellman_opt} could be hard to work with because the measures underlying the expectations are not directly accessible. To resolve this, we use strong duality:
\begin{lemma} [\cite{Hu2012KullbackLeiblerDC}, Theorem 1] \label{lemma.dual}
Suppose $H(X)$ has a finite moment generating function in a neighborhood of zero. Then for any $\delta >0$,
\begin{equation*}
\begin{aligned}
&\sup_{P: D_{\emph{KL}}(P\|P_0)\leq \delta}\mathbb{E}_P\left[H(X)\right] =\\
&\inf_{\alpha\geq 0}\left\lbrace \alpha\log\mathbb{E}_{P_0}\left[e^{H(X)/\alpha}\right] +\alpha\delta\right\rbrace.
\end{aligned}
\end{equation*}
\end{lemma}
Boundedness of the rewards and of the value function allows us to directly apply Lemma \ref{lemma.dual} to the r.h.s. of \eqref{def.robust_Bellman}. The DR value function $V^{*}_{\delta}$ in fact satisfies the following \textit{dual form} of the DR Bellman equation.
\begin{equation}\label{eq.robust_dual}
\begin{aligned}
&V^{ *}_{\delta}(s) = \\
& \max_{a\in A} \left\lbrace \sup_{\alpha\geq 0}\left\lbrace -\alpha\log\mathbb{E}_{r\sim \nu_{s,a}}\left[e^{-r/\alpha}\right] - \alpha\delta\right\rbrace \right. + \\
& \left. \gamma\sup_{\beta\geq 0}\left\lbrace -\beta\log\mathbb{E}_{s'\sim p_{s,a}}\left[e^{-V^{*}_{\delta}(s')/\beta}\right] - \beta\delta\right\rbrace \right\rbrace.
\end{aligned}
\end{equation}
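\par As a numerical illustration of this dual form (our own sketch, not part of the paper; the grid search over $\alpha$ and the toy distribution are our choices), the robust term $\sup_{\alpha\geq 0}\{-\alpha\log\mathbb{E}_{P_0}[e^{-H/\alpha}]-\alpha\delta\}$, which by Lemma \ref{lemma.dual} equals $\inf_{D_{\textrm{KL}}(P\|P_0)\leq\delta}\mathbb{E}_P[H]$, can be evaluated by a one-dimensional search over $\alpha$:
\begin{verbatim}
import numpy as np

def robust_expectation(h_values, probs, delta, num_alphas=2000):
    """inf_{KL(P||P0) <= delta} E_P[H] via the dual form
    sup_{alpha >= 0} { -alpha*log E_{P0}[exp(-H/alpha)] - alpha*delta }."""
    h = np.asarray(h_values, dtype=float)
    p = np.asarray(probs, dtype=float)
    best = h[p > 0].min()                     # alpha -> 0 limit of the objective
    for a in np.logspace(-4, 4, num_alphas):  # grid search over alpha > 0
        m = (-h / a).max()                    # log-sum-exp stabilization
        log_mgf = m + np.log(np.sum(p * np.exp(-h / a - m)))
        best = max(best, float(-a * log_mgf - a * delta))
    return best

# Toy example: the worst-case mean of a reward over a KL ball of radius 0.1
# is pushed below the nominal mean 0.2*0 + 0.5*1 + 0.3*2 = 1.1.
print(robust_expectation([0.0, 1.0, 2.0], [0.2, 0.5, 0.3], delta=0.1))
\end{verbatim}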
\subsection{Distributionally Robust $Q$-function and its Bellman Equation}
As in the classical policy learning paradigm, we make use of the optimal DR state-action value function, a.k.a. $Q$-function, for solving the DR control problem. The $Q$-function maps $(s,a)$ pairs to the reals, and hence can be identified with an element $Q\in \mathbb{R}^{S\times A}$. We will henceforth assume this identification. Let us define the notation $v(Q)(s) = \max_{b\in A}Q(s,b)$. We proceed to rigorously define the optimal $Q$-function and its Bellman equation.
\begin{definition}
The optimal DR $Q$-function is defined as
\begin{equation}\label{eq.def_delta_Q*}
Q_\delta^*(s,a) := \inf_{\mbox{$\begin{subarray}{c} p\in{\mathcal{P}}_{s,a}(\delta),\\
\nu\in {\mathcal{R}}_{s,a}(\delta)\end{subarray}$}}
\left\lbrace \mathbb{E}_{r\sim \nu} [r] + \gamma\mathbb{E}_{s'\sim p}\left[V^*_\delta(s')\right] \right\rbrace
\end{equation}
where $V^*_\delta$ is the DR optimal value function in Definition \ref{def:DRvalue}.
\end{definition}
By analogy with the Bellman operator, we can define the DR Bellman operator for the $Q$-function as follows:
\begin{definition}
Given $\delta>0$ and $Q\in\mathbb{R}^{ S\times A}$, the \textit{primal form} of the DR Bellman operator ${\mathcal{T}}_\delta:\mathbb{R}^{ S\times A}\to \mathbb{R}^{ S\times A}$ is defined as
\begin{equation}\label{eq.def_op_pr}
\begin{aligned}
&{\mathcal{T}}_\delta(Q)(s,a):= \\
&\inf_{\mbox{$\begin{subarray}{c} p\in{\mathcal{P}}_{s,a}(\delta),\\
\nu\in {\mathcal{R}}_{s,a}(\delta)\end{subarray}$}}
\left\lbrace \mathbb{E}_{r\sim \nu} [r] + \gamma\mathbb{E}_{s'\sim p}\left[v(Q)(s')\right] \right\rbrace\\
\end{aligned}
\end{equation}
The \textit{dual form} of the DR Bellman operator is defined as
\begin{equation}\label{eq.def_op}
\begin{aligned}
&{\mathcal{T}}_\delta(Q)(s,a)\coloneqq\\
& \sup_{\alpha\geq 0}\left\lbrace -\alpha\log\mathbb{E}_{r\sim \nu_{s,a}}\left[e^{-r/\alpha}\right] - \alpha\delta\right\rbrace+ \\
& \gamma\sup_{\beta\geq 0}\left\lbrace -\beta\log\mathbb{E}_{s'\sim p_{s,a}}\left[e^{-v(Q)(s')/\beta}\right] - \beta\delta\right\rbrace.
\end{aligned}
\end{equation}
\end{definition}
The equivalence of the primal and dual form follows from Lemma \ref{lemma.dual}. Note that by definition \eqref{eq.def_delta_Q*} and the Bellman equation \eqref{def.robust_Bellman}, we have $v(Q_{\delta}^*) = V^*_\delta$. So, our definition implies that $Q^{*}_\delta$ is a fixed point of ${\mathcal{T}}_\delta$ and the Bellman equation $Q^{*}_\delta = \mathcal{T}_\delta(Q^{*}_\delta)$.
\par The optimal DR policy can be extracted from the optimal $Q$-function by $\pi_\delta^*(s) = \arg\max_{a\in A}Q_\delta^*(s,a).$
Hence the goal of the DR-RL paradigm is to learn the $\delta$-DR $Q$-function and to extract the corresponding robust policy.
\section{$Q$-Learning in Distributionally Robust RL}
\subsection{A Review of Synchronized $Q$-Learning and Stochastic Approximations}\label{section.review_Q}
Synchronous $Q$-learning estimates the optimal $Q$-function using point samples. The classical synchronous $Q$-learning proceeds as follows. At iteration $k\in \mathbb{Z}_{\geq 0}$, for each $(s,a)\in S\times A$, we draw samples $r\sim \nu_{s,a}$ and $s'\sim p_{s,a}$. Then we perform the $Q$-learning update
\begin{equation}
\begin{aligned}
&Q_{k+1}(s,a)=\\
&(1-\alpha_{k})Q_k(s,a) + \alpha_{k}(r +\gamma v(Q_k)(s'))
\end{aligned}\label{eq.q-learning}
\end{equation}
for some chosen step-size sequence $\set{\alpha_k}$.
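\par For concreteness, a minimal Python sketch of one synchronous update \eqref{eq.q-learning} is given below (our own illustration, not code from the paper; the \texttt{sample} interface standing in for a generative simulator is hypothetical):
\begin{verbatim}
import numpy as np

def synchronous_q_update(Q, sample, gamma, alpha):
    """One synchronous Q-learning step over all (s, a) pairs.
    Q      : array of shape (num_states, num_actions)
    sample : sample(s, a) -> (reward, next_state); hypothetical simulator."""
    Q_new = Q.copy()
    num_states, num_actions = Q.shape
    for s in range(num_states):
        for a in range(num_actions):
            r, s_next = sample(s, a)
            target = r + gamma * Q[s_next].max()
            Q_new[s, a] = (1 - alpha) * Q[s, a] + alpha * target
    return Q_new

# Toy usage: constant reward 1 and uniformly random transitions on 2 states,
# so the Q values converge near 1 / (1 - gamma) = 10.
rng = np.random.default_rng(0)
sample = lambda s, a: (1.0, int(rng.integers(2)))
Q = np.zeros((2, 2))
for _ in range(2000):
    Q = synchronous_q_update(Q, sample, gamma=0.9, alpha=0.05)
print(Q)
\end{verbatim}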
\par Stochastic approximations (SA) for the fixed point of a contraction operator $\mathcal{H}$ refers to the class of algorithms using the update
\begin{equation}\label{eq.sa_update}
X_{k+1} = (1-\alpha_k)X_{k} + \alpha_k\mathcal{H}(X_k) + W_k.
\end{equation}
$\set{W_k}$ is a sequence satisfying $\mathbb{E}[W_{k}|\mathcal{F}_{k-1}] = 0$,
and hence is known as the \textit{martingale difference noise}. The asymptotics of the above recursion are well understood, cf. \cite{kushner2013stochastic}, while its finite-time behavior is discussed in the literature review above. The recursion \eqref{eq.q-learning} of $Q$-learning fits into the SA framework: note that $r+\gamma v(Q)(s')$ is an \textit{unbiased} estimator of $\mathcal{T}(Q)$, where $\mathcal{T}$ is the Bellman operator for the $Q$-function. This representation motivates DR $Q$-learning.
\subsection{Distributionally Robust $Q$-learning}
\par The following result is the foundation for employing a $Q$-learning approach.
\begin{proposition}\label{prop:contraction}
The DR Bellman operator $\mathcal{T}_\delta$ is a $\gamma$-contraction on the Banach space $(\mathbb{R}^{S\times A},\|\cdot\|_\infty)$.
\end{proposition}
\par Given a simulator, a natural estimator for $\mathcal{T}_\delta(Q)$ is the empirical dual Bellman operator: replace the population transition and reward measures in \eqref{eq.def_op} with their empirical versions. However, the nonlinearity in the underlying measure, which can be seen from the dual functional, makes this estimator biased in general. Instead, \cite{pmlr-v162-liu22a} propose an alternative by employing the idea in \cite{blanchet2019unbiased}; i.e., producing unbiased estimates of nonlinear functionals of a probability measure using multi-level randomization. Yet, the number of samples requested in every iteration in \cite{pmlr-v162-liu22a} is infinite in expectation. We improve this by extending the construction to a regime where the expected number of samples used is constant.
\par Before moving forward, we introduce the following notation. Denote the empirical distributions on $n$ samples by $\nu_{s,a,n}$ and $p_{s,a,n}$ respectively; i.e., for $f:U\rightarrow \mathbb{R}$, where $U$ is either $S$ or $R$,
\[
\mathbb{E}_{u\sim \mu_{s,a,n}}f(u) = \frac{1}{n}\sum_{i=1}^nf(u_i)
\]
for $\mu = \nu,p$ and $u_i = r_i,s'_i$. Moreover, we use $\mu^{O}_{s,a,n}$ and $\mu^{E}_{s,a,n}$ to denote the empirical distributions formed by the odd and even samples in $\mu_{s,a,2n}$, respectively. With this notation, we define our estimator:
\begin{definition}
For given $g\in(0,1)$ and $Q\in\mathbb{R}^{ S\times A}$, define the MLMC-DR estimator:
\begin{equation}
\widehat{{\mathcal{T}}}_{\delta,g}(Q)(s,a)\coloneqq\widehat{R}_\delta(s,a)+\gamma \widehat{V}_\delta(Q)(s,a).
\end{equation}
For $\widehat{R}_\delta(s,a)$ and $\widehat{V}_\delta(Q)(s,a)$, we sample $N_1,N_2$ from a geometric distribution $\mathrm{Geo}(g)$ independently, i.e., $\mathbb{P}(N_j=n)=p_n:=g(1-g)^n, n\in\mathbb{Z}_{\geq 0},j=1,2$. Then, we draw $2^{N_1+1}$ samples $r_i\sim \nu_{s,a}$ and $2^{N_2+1}$ samples $s'_i\sim p_{s,a}$. Finally, we compute
\begin{align}
\widehat{R}_\delta(s,a)&\coloneqq r_1+\frac{\Delta_{N_1,\delta}^R}{p_{N_1}},\label{eq.R}\\
\widehat{V}_{\delta}(Q)(s,a)&\coloneqq v(Q)(s_1')+\frac{\Delta_{N_2,\delta}^P(Q)}{p_{N_2}},\label{eq.V}
\end{align}
where
\begin{equation}\label{def.delta_r}
\begin{aligned}
&\Delta_{n,\delta}^R\coloneqq\\
&\sup_{\alpha\geq0}\left\lbrace -\alpha\log\mathbb{E}_{r\sim\nu_{ s,a,2^{n+1}}}\sqbk{e^{-r/\alpha}} - \alpha\delta\right\rbrace-\\
&\frac{1}{2}\sup_{\alpha\geq0}\left\lbrace -\alpha\log\mathbb{E}_{r\sim\nu^{E}_{s,a,2^{n}}}\sqbk{e^{-r/\alpha}} - \alpha\delta\right\rbrace-\\ &\frac{1}{2}\sup_{\alpha\geq0}\left\lbrace -\alpha\log\mathbb{E}_{r\sim\nu^{O}_{s,a,2^{n}}}\sqbk{e^{-r/\alpha}} - \alpha\delta\right\rbrace
\end{aligned}
\end{equation}and
\begin{equation}\label{def.delta_q}
\begin{aligned}
&\Delta_{n,\delta}^P(Q)\coloneqq\\
&\sup_{\beta\geq0}\left\lbrace -\beta\log\mathbb{E}_{s'\sim p_{s,a,2^{n+1}}}\sqbk{e^{-v(Q)(s')/\beta}} - \beta\delta\right\rbrace-\\
&\frac{1}{2}\sup_{\beta\geq0}\left\lbrace -\beta\log\mathbb{E}_{s'\sim p^{E}_{s,a,2^{n}}}\sqbk{e^{-v(Q)(s')/\beta}} - \beta\delta\right\rbrace-\\
&\frac{1}{2}\sup_{\beta\geq0}\left\lbrace -\beta\log\mathbb{E}_{s'\sim p^{O}_{s,a,2^{n}}}\sqbk{e^{-v(Q)(s')/\beta}} - \beta\delta\right\rbrace.
\end{aligned}
\end{equation}
\end{definition}
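\par The following Python sketch (our own illustration, not code from the paper; the simulator interface, the grid search inside the dual functional, and all variable names are assumptions) implements the MLMC-DR estimator for the reward term \eqref{eq.R}; the transition term \eqref{eq.V} is computed in exactly the same way with the rewards replaced by the values $v(Q)(s')$ of the sampled next states.
\begin{verbatim}
import numpy as np

def dual_value(samples, delta, alpha_grid=None):
    """Empirical dual DR functional
    sup_{alpha >= 0} { -alpha*log( mean_i exp(-x_i/alpha) ) - alpha*delta },
    evaluated by grid search; the alpha -> 0 limit equals min_i x_i."""
    x = np.asarray(samples, dtype=float)
    if alpha_grid is None:
        alpha_grid = np.logspace(-4, 4, 1000)
    best = x.min()
    for a in alpha_grid:
        m = (-x / a).max()
        val = -a * (m + np.log(np.mean(np.exp(-x / a - m)))) - a * delta
        best = max(best, float(val))
    return best

def mlmc_dr_reward(sample_reward, delta, g, rng):
    """MLMC-DR estimate of the robust reward term for one (s, a) pair.
    sample_reward(n) -> array of n i.i.d. rewards (hypothetical simulator)."""
    N = int(rng.geometric(g)) - 1          # numpy's geometric starts at 1
    p_N = g * (1.0 - g) ** N
    r = sample_reward(2 ** (N + 1))
    Delta = (dual_value(r, delta)
             - 0.5 * dual_value(r[0::2], delta)   # half of the samples
             - 0.5 * dual_value(r[1::2], delta))  # the other half
    return r[0] + Delta / p_N

rng = np.random.default_rng(1)
sample_reward = lambda n: rng.choice([0.0, 1.0], size=n, p=[0.3, 0.7])
print(mlmc_dr_reward(sample_reward, delta=0.1, g=0.625, rng=rng))
\end{verbatim}
Each $\Delta_{n,\delta}^R$ is the difference between the plug-in dual estimate on $2^{n+1}$ samples and the average of the estimates on its two halves, so randomizing the level $N$ and reweighting by $1/p_{N}$ yields an unbiased estimator in the spirit of \cite{blanchet2019unbiased}.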
Let $\set{\widehat{{\mathcal{T}}}_{\delta,g,k};k\in \mathbb{Z}_{\geq 0}}$ be i.i.d. copies of $\widehat{{\mathcal{T}}}_{\delta,g}$. We construct our DR $Q$-Learning algorithm in Algorithm \ref{alg.Q_learning}.
\begin{algorithm}[ht]
\caption{Multi-level Monte Carlo Distributionally Robust $Q$-Learning (MLMCDR $Q$-learning)}
\label{alg.Q_learning}
\begin{algorithmic}
\STATE {\bfseries Input:} Uncertainty radius $\delta>0$, parameter $g\in (0,1)$, step-size sequence $\set{\alpha_k:k\in \mathbb{Z}_{\geq 0}}$, termination time $T$ (could be random).
\STATE {\bfseries Initialization:} $\widehat{Q}_{\delta,0} \equiv 0$, $k=0$.
\REPEAT
\FOR{every $(s,a)\in S\times A$}
\STATE Sample independent $N_1,N_2\sim \mathrm{Geo}(g)$.
\STATE Independently draw $2^{N_1+1}$ samples $r_i\sim \nu_{s,a}$ and $2^{N_2+1}$ samples $s'_i\sim p_{s,a}$.
\STATE Compute $\widehat{R}_\delta(s,a)$ and $\widehat{V}_\delta(\widehat{Q}_{\delta,k})(s,a)$ using Equation \eqref{eq.R}-\eqref{def.delta_q}.
\ENDFOR
\STATE Compute $\widehat{{\mathcal{T}}}_{\delta,g,k+1}(\widehat{Q}_{\delta,k})=
\widehat{R}_\delta+\gamma \widehat{V}_\delta(\widehat{Q}_{\delta,k}).$
\STATE Perform synchronous $Q$-learning update:
\[
\widehat{Q}_{\delta,k+1}= (1-\alpha_{k})\widehat{Q}_{\delta,k}+\alpha_{k}\widehat{{\mathcal{T}}}_{\delta,g,k+1}(\widehat{Q}_{\delta,k}).\]
$k\leftarrow k+1$.
\UNTIL{$k = T$}
\end{algorithmic}
\end{algorithm}
\begin{remark}
The specific algorithm used in \cite{pmlr-v162-liu22a} only has asymptotic guarantees and requires an infinite number of samples to converge, whereas we propose a variant that yields finite-sample guarantees.
More specifically, our algorithm chooses $g\in(1/2,3/4)$, while theirs chooses $g\in(0,1/2)$. This has important consequences: each iteration of the algorithm in \cite{pmlr-v162-liu22a} requires an infinite number of samples in expectation, while in our algorithm the expected number of samples used until iteration $k$ is $n(g)|S||A|k$, where $n(g)$ does not depend on $\gamma$ or on the MDP instance.
\end{remark}
\section{Algorithm Complexity}
All proofs of the results in this section are relegated to Sections 2--6 of the supplementary materials.
\par Let $(\Omega,\mathcal{F},\set{\mathcal{F}_k}_{k\in\mathbb{Z}_{\geq 0}},\mathbb{P})$ be the underlying filtered probability space, where $\mathcal{F}_{k-1}$ is the $\sigma$-algebra generated by the random variates used before iteration $k$. We motivate our analysis by making the following observations. If we define the noise sequence
\[
W_{k+1}(\widehat{Q}_{\delta,k}) := \widehat{{\mathcal{T}}}_{\delta,g,k+1}(\widehat{Q}_{\delta,k})-{\mathcal{T}}_{\delta}(\widehat{Q}_{\delta,k}),
\]
then the update rule of Algorithm \ref{alg.Q_learning} can be written as
\begin{equation}\label{eq.SA_form_of_DRQ}
\begin{aligned}
\widehat{Q}_{\delta,k+1}=(1-\alpha_k) \widehat{Q}_{\delta,k}+ \alpha_{k}\crbk{{\mathcal{T}}_{\delta}(\widehat{Q}_{\delta,k}) + W_{k+1}(\widehat{Q}_{\delta,k})}.
\end{aligned}
\end{equation}
By construction, we expect that, under suitable conditions, $\widehat{{\mathcal{T}}}_{\delta,g,k+1}(Q)$ is an unbiased estimate of ${\mathcal{T}}_{\delta}(Q)$, and hence $\mathbb{E}[W_{k+1}(\widehat{Q}_{\delta,k})|\mathcal{F}_k] = 0$. Therefore, Algorithm \ref{alg.Q_learning} has an update of the form \eqref{eq.sa_update} and can be analyzed as a stochastic approximation.
\par We proceed to rigorously establish this. First, we introduce the following complexity parameter:
\begin{definition}\label{def.min_supp_prob}
Define the \textit{minimal support probability} as
\small
\[
p_\wedge := \inf_{s,a\in S\times A}\sqbk{\inf_{r\in R:\nu_{s,a}(r) > 0}\nu_{s,a}(r)\wedge\inf_{s'\in S:p_{s,a}(s')>0} p_{s,a}(s')}.
\]
\normalsize
\end{definition}
The intuition for why the complexity of the MDP should depend on this minimal support probability is that, in order to estimate the DR Bellman operator accurately, in the worst case one must observe the entire support of the transition and reward distributions. Therefore, at least $1/p_\wedge$ samples are necessary. See \cite{si2020distributional} for a detailed discussion.
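\par For tabular models, the quantity $p_\wedge$ of Definition \ref{def.min_supp_prob} can be read off directly from the transition and reward probability tables; below is a minimal sketch (our own array-layout convention, not from the paper).
\begin{verbatim}
import numpy as np

def minimal_support_probability(P, R):
    """Smallest nonzero probability over all transition rows P[s, a, :]
    and reward rows R[s, a, :], i.e. p_wedge as defined above."""
    probs = np.concatenate([np.asarray(P, dtype=float).ravel(),
                            np.asarray(R, dtype=float).ravel()])
    return float(probs[probs > 0].min())

P = np.array([[[0.9, 0.1], [0.5, 0.5]],
              [[0.2, 0.8], [1.0, 0.0]]])   # P[s, a, s']
R = np.array([[[0.7, 0.3], [1.0, 0.0]],
              [[0.6, 0.4], [0.5, 0.5]]])   # R[s, a, r]
print(minimal_support_probability(P, R))   # 0.1
\end{verbatim}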
\begin{assumption} \label{assump:var_bound_assumptions}
Assume the following holds:
\begin{enumerate}
\item The uncertainty set size $\delta$ satisfies $\frac{1}{2}p_\wedge\geq 1-e^{-\delta}.$
\item The geometric probability parameter $g\in(0,3/4)$.
\end{enumerate}
\end{assumption}
\begin{remark}The first entry is a technical assumption that ensures the differentiability of the dual form of the robust functional. We use this specific form purely for cleanness of presentation. Moreover, we conjecture that such a restriction is not necessary to obtain the same complexity bounds. See Subsection 6.1 of the supplement for a detailed discussion.
\end{remark}
With Assumption \ref{assump:var_bound_assumptions} in place, we are ready to state our key tools and results. The following propositions underlie our iteration and expected sample complexity analysis:
\begin{proposition}\label{prop:Q_infty_norm_bound}
Let $Q_\delta^*$ be the unique fixed point of the DR Bellman operator $\mathcal{T}_\delta$. Then $\|Q_\delta^*\|_{\infty} \leq r_{\max}({1-\gamma})^{-1}.$
\end{proposition}
\begin{proposition}\label{prop:unbiased} Suppose Assumption \ref{assump:var_bound_assumptions} is in force. For fixed $Q:S\times A\rightarrow \mathbb{R}$, $\widehat{\mathcal{T}}_{\delta,g}(Q)$ is an unbiased estimate of ${\mathcal{T}}_{\delta}(Q)$; i.e., $\mathbb{E}W(Q) = 0$.
\end{proposition}
Proposition \ref{prop:unbiased} guarantees the validity of our construction of the unbiased MLMC-DR Bellman estimator. As explained before, this enables us to establish Algorithm \ref{alg.Q_learning} as SA to the fixed point of $\mathcal{T}_\delta$.
For simplicity, define the log-order term
\begin{equation}\label{eqn:def_tlide_l}
\begin{aligned}
\tilde l =& (3+\log (|S||A||R|)\vee \log (|S|^2|A|))^2\\ &\times \log(11/p_\wedge)^2\frac{4(1-g)}{g(3-4g)}.
\end{aligned}
\end{equation}
\begin{proposition}\label{prop:var_sup_bound} Suppose Assumption \ref{assump:var_bound_assumptions} is in force. For fixed $Q$, there exists a constant $c > 0$ such that
\[
\mathbb{E}\|W(Q)\|_\infty^2\leq\frac{c\tilde l}{\delta^4 p_\wedge^6} \crbk{r_{\max}^2 + \gamma^2\|Q\|_\infty^2}.
\]
\end{proposition}
Proposition \ref{prop:var_sup_bound} bounds the infinity norm squared of the martingale difference noise. It is central to our complexity results in Theorems \ref{thm:algo_error_bound} and \ref{thm:sample_complexity}.
\begin{theorem}\label{thm:algo_error_bound}
Suppose Assumption \ref{assump:var_bound_assumptions} is in force. Run Algorithm \ref{alg.Q_learning} for $k$ iterations to obtain the estimator $\widehat{Q}_{\delta,k}$; then the following holds:
\par Constant stepsize:
there exist $c,c'> 0$ such that if we choose the stepsize sequence
\[
\alpha_k \equiv \alpha\leq \frac{ (1-\gamma)^2\delta^4p_\wedge^6}{c'\gamma^2\tilde l \log (|S||A|)},
\]
then we have
\[
\begin{aligned}
&\mathbb{E}\|\widehat{Q}_{\delta,k}-Q_\delta^*\|^2_\infty \leq \\
&\frac{3r_{\max}^2}{2(1-\gamma)^2}\crbk{1-\frac{(1-\gamma)\alpha}{2}}^k + \frac{c\alpha r_{\max}^2\log(|S||A|)\tilde l }{\delta^4 p_\wedge^6(1-\gamma)^4 }.
\end{aligned}
\]
\par Rescaled linear stepsize: there exist $c,c'> 0$ such that if we choose the stepsize sequence
\[
\alpha_k = \frac{4}{(1-\gamma)(k+K)},
\]
where
\[
K = \frac{c'\tilde l \log(|S||A|)}{\delta^4p_\wedge^6(1-\gamma)^{3}},
\]
then we have
\[
\begin{aligned}
\mathbb{E}\|\widehat{Q}_{\delta,k}-Q_\delta^*\|^2_\infty
\leq \frac{c r_{\max}^2\tilde l\log(|S||A|)\log(k + K) }{\delta^4 p_\wedge^6(1-\gamma)^5(k+K) }.
\end{aligned}
\]
\end{theorem}
We define the iteration complexity as
\begin{align*}
k^*(\epsilon) &:= \inf\{k\geq 1 : \mathbb{E}\|\widehat{Q}_{\delta,k} - Q_\delta^*\|_\infty^2 \leq \epsilon^2\}.
\end{align*}
The proof of Theorem \ref{thm:algo_error_bound} is based on recent advances in the finite-time analysis of stochastic approximation algorithms \citep{chen2020}. Theorem \ref{thm:algo_error_bound} bounds the algorithmic error in terms of the number of iterations completed. This implies an iteration complexity bound, which we make precise below.
\par We now consider the expected number of samples requested from the simulator to compute the MLMC-DR estimator for one $(s,a)$-pair. It depends on the geometric parameter $g$. Denoting it by $n(g)$, we have
\[
n(g) = \mathbb{E}\sqbk{2^{N_1+1}+2^{N_2+1}} = \frac{4g}{2g-1}.
\]
We define the expected sample complexity $n^*(\epsilon)$ as the total expected number of samples used until $k^*(\epsilon)$ iterations:
$n^*(\epsilon) := |S||A|n(g)k^*(\epsilon).$
Note that when $g > 1/2$, $n(g)$ is finite. This finiteness and Theorem \ref{thm:algo_error_bound} would imply a finite expected sample complexity bound.
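\par The identity $n(g)=4g/(2g-1)$ for $g>1/2$ is easy to check numerically; below is a short Monte Carlo sanity check (our own script, not from the paper).
\begin{verbatim}
import numpy as np

def expected_samples_mc(g, num_trials=200_000, seed=0):
    """Monte Carlo estimate of E[2^(N1+1) + 2^(N2+1)],
    N1, N2 ~ Geo(g) on {0, 1, ...}."""
    rng = np.random.default_rng(seed)
    n1 = rng.geometric(g, size=num_trials) - 1  # numpy's geometric starts at 1
    n2 = rng.geometric(g, size=num_trials) - 1
    return float(np.mean(2.0 ** (n1 + 1) + 2.0 ** (n2 + 1)))

for g in (0.625, 0.7):
    # The estimate converges slowly because the sample-size variable is
    # heavy-tailed, but it agrees with the closed form 4g / (2g - 1).
    print(g, expected_samples_mc(g), 4 * g / (2 * g - 1))
\end{verbatim}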
\begin{assumption}\label{assump:g_finite_E_nsample}
In addition to Assumption \ref{assump:var_bound_assumptions}, assume $g\in(1/2,3/4)$.
\end{assumption}
\begin{theorem}\label{thm:sample_complexity}
Suppose Assumptions \ref{assump:var_bound_assumptions} and \ref{assump:g_finite_E_nsample} are in force. The expected sample complexity of Algorithm \ref{alg.Q_learning} for both stepsizes specified in Theorem \ref{thm:algo_error_bound} satisfies
\[
n^*(\epsilon)\lesssim \frac{r_{\max}^2|S||A|}{\delta^4p_\wedge^6(1-\gamma)^5\epsilon^2}.
\]
\end{theorem}
\begin{remark}Theorem \ref{thm:sample_complexity} follows directly from Theorem \ref{thm:algo_error_bound} by choosing the stepsize in the constant stepsize case
\[
\alpha \simeq \frac{ (1-\gamma)^4\delta^4p_\wedge^6}{\gamma^2\tilde l \log (|S||A|)},
\]
where $\simeq$ and $\lesssim$ mean equal and less than or equal, respectively, up to a log factor and universal constants. The choice of the stepsizes, however, depends on $p_{\wedge}$, which is typically unknown a priori. As discussed before, this dependence on $p_{\wedge}$ is an intrinsic source of complexity of the DR-RL problem. So, one direction for future work is to design an efficient procedure to consistently estimate $p_{\wedge}$. Also, our bound has a $\delta^{-4}$ dependence. However, we believe that it should be $O(1)$ as $\delta\downarrow 0$, because the algorithmic behavior then converges to that of classical $Q$-learning.
\end{remark}
The sample complexity bound in Theorem \ref{thm:sample_complexity} is not uniform in $g\in (1/2,3/4)$, as we think of $g$ as a design parameter, not an inherent model parameter. The sample complexity dependence on $g$ is $\frac{4n(g)(1-g)}{g(3-4g)}$, which is minimized at $g \approx 0.64645$. For convenience, we will use $g = 5/8 = 0.625$.
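\par This minimizer can be reproduced by a direct numerical search over $g\in(1/2,3/4)$ (our own quick check, not from the paper):
\begin{verbatim}
import numpy as np

# Dependence of the sample complexity on g: n(g) * 4(1 - g) / (g(3 - 4g)).
g = np.linspace(0.501, 0.749, 100_000)
n_g = 4 * g / (2 * g - 1)
dependence = n_g * 4 * (1 - g) / (g * (3 - 4 * g))
print(g[np.argmin(dependence)])   # approximately 0.6464
\end{verbatim}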
\section{Numerical Experiments}
In this section, we empirically validate our theory using two numerical experiments. Section \ref{sec:hard_MDP} is dedicated to hard MDP instances, which are constructed in \cite{li2021q} to prove the lower bound for standard $Q$-learning. In Section \ref{sec:inventary}, we use the same inventory control problem as in \cite{pmlr-v162-liu22a} to demonstrate the superiority of our algorithm over theirs. More details on the experimental setup are in Section 7 of the supplemental materials.
\subsection{Hard MDPs for $Q$-learning}
\label{sec:hard_MDP}
\begin{figure}[ht]
\centering
\includegraphics[width = 0.65\linewidth]{hard_mdp.png}
\caption{Hard MDP instances transition diagram.}
\label{fig:hard_mdp_instance}
\end{figure}
First, we test the convergence of our algorithm on the MDP in Figure \ref{fig:hard_mdp_instance}. It has 4 states and 2 actions; the transition probabilities given actions 1 and 2 are labeled on the arrows between states. This instance was constructed in \cite{li2021q}, where it is shown that when $p = (4\gamma-1)/(3\gamma)$ the standard non-robust $Q$-learning has sample complexity $\tilde \Theta((1-\gamma)^{-4}\epsilon^{-2})$. We will use $\delta = 0.1$ in the following experiments.
\begin{figure}[ht]\label{fig:hard_conv_test}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width = 0.9\linewidth]{log-log_err-iter500-5000delta0.1gammas.png}
\caption{log-log average error with rescaled-linear stepsize.}
\label{fig:hard_mdp_convergence}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width = 0.9\linewidth]{log_err-iter500-5000delta0.1gammas.png}
\caption{log average error with constant stepsize.}
\label{fig:hard_mdp_convergence_const}
\end{subfigure}
\caption{Convergence of Algorithm \ref{alg.Q_learning} on MDP \ref{fig:hard_mdp_instance}}
\end{figure}
\par Figures \ref{fig:hard_mdp_convergence} and \ref{fig:hard_mdp_convergence_const} show the convergence properties of our algorithm with the rescaled-linear and constant step-sizes, respectively. Figure \ref{fig:hard_mdp_convergence} is a log-log scale plot of the average (across 200 trajectories) error $\|Q_{\delta,k}-Q^*_\delta\|_\infty$ achieved at a given iteration $k$ under the rescaled-linear step-size $\alpha_k = 1/(1+(1-\gamma)k)$. We see that the algorithm is indeed converging. Moreover, Theorem \ref{thm:algo_error_bound} predicts that the slope of each line should be close to $-1/2$, which corresponds to the canonical asymptotic convergence rate $n^{-1/2}$ of stochastic approximations under the Robbins--Monro step-size regime. This is confirmed in Figure \ref{fig:hard_mdp_convergence}. Figure \ref{fig:hard_mdp_convergence_const} is generated using the constant step-size $\alpha_k \equiv 0.008$, and its horizontal axis is in linear scale. We observe that for all three choices of $\gamma$, the averaged errors first decay geometrically and then stay constant as the number of iterations increases. This is also consistent with the prediction of Theorem \ref{thm:algo_error_bound}.
\begin{figure}[t]
\centering
\includegraphics[width =0.9\linewidth]{test_gamma_plt_200_rep,slope.png}
\caption{log averaged error against $\log(1-\gamma)$, the slopes of the regression line for iteration $k = 500,1000,1500$ are $-2.031, -2.007, -2.021$.}
\label{fig:hard_mdp_gamma_dep}
\end{figure}
\par Next, we would like to visualize the $\gamma$-dependence of our algorithm with the rescaled-linear step-size on this hard MDP. Note that if we construct a sequence of hard MDPs with different $(\gamma,Q_{\delta}^*(\gamma),Q^*(\gamma))$, then for fixed iteration $k$, Theorem \ref{thm:algo_error_bound} (ignoring $p_\wedge$ as a function of $\gamma$) implies that $\log \mathbb{E}\|Q_{\delta,k}(\gamma)-Q_\delta^*(\gamma)\|_\infty\lesssim - \frac{5}{2}\log (1-\gamma).$
On the other hand, for the standard $Q$-learning, \cite{li2021q} shows that $\log \|Q_{k}(\gamma)-Q^*(\gamma)\|_\infty \asymp - 2\log (1-\gamma)$, corresponding to a $(1-\gamma)^{-4}$ dependence.
\par Figure \ref{fig:hard_mdp_gamma_dep} plots the average error of the sequence of MLMCDR $Q$-learning runs at iterations $k = 500, 1000, 1500$ against $\log(1-\gamma)$, and performs a linear regression to extract the slope. We see that for all $k$, the slope is very close to $-2$, suggesting a $(1-\gamma)^{-4}$ dependence. Given that our analysis is based on the finite-time analysis of SA algorithms in \cite{chen2020}, which, if applied to the classical $Q$-learning, would also yield a $(1-\gamma)^{-5}$ dependence, we believe the actual worst-case sample complexity is $(1-\gamma)^{-4}$. However, the validity of this conjecture is less clear: the classical non-robust $Q$-learning employs the empirical Bellman operator based on one sample per iteration, which is bounded; on the other hand, our estimator uses a random number of samples per iteration, which is only finite with probability one. This distinction is not visible through the framework of \cite{chen2020} and may result in a rate degradation.
\par As we pointed out earlier, we believe that the complexity dependence on $\delta$ should be $O(1)$ as $\delta\downarrow 0$. We include some experiments supporting this in the appendix.
\subsection{Lost-sale Inventory Control}
\label{sec:inventary}
In this section, we apply Algorithm \ref{alg.Q_learning} to the lost-sale inventory control problem with i.i.d. demand, which is also used in \cite{pmlr-v162-liu22a}.
\par In this model, we consider state and action spaces $S = \set{0,1,\dots,n_s}$ and $A = \set{0,1,\dots,n_a}$ with $n_a\leq n_s$; the feasible state-action pairs are $\set{(s,a)\in S\times A:s + a\leq n_s }$. The demand has support $D = \set{0,1,\dots,n_d}$. We assume that at the beginning of day $t$, we observe the inventory level $s_t$ and place an order of $a_t$ items, which arrive instantly. The cost incurred on day $t$ is
\begin{align*}
C_t =& k\cdot 1\set{A_t > 0}+ h\cdot (S_t+A_t - D_t)_+ \\
&+ p\cdot (S_t+A_t - D_t)_-
\end{align*}
where $k$ is the fixed ordering cost, $h$ is the holding cost per unit of inventory, and $p$ is the lost-sale penalty per unit of unmet demand.
\par For this numerical experiment, we use $\delta = 0.5$, $\gamma = 0.7$, $n_s = n_a = n_d = 7$, $k = 3$, $h = 1$, and $p = 2$. Under the data-collection environment, we assume $D_t \sim \text{Unif}(D)$ i.i.d. across time.
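\par For reference, a minimal Python simulator of one day of this lost-sale inventory model is given below (our own sketch, not from the paper; the lost-sale transition rule $s_{t+1}=(s_t+a_t-D_t)_+$ and the fixed ordering rule in the example are our assumptions).
\begin{verbatim}
import numpy as np

def inventory_step(s, a, demand, k=3, h=1, p=2):
    """One day of the lost-sale inventory model: returns (cost, next_state)."""
    level = s + a - demand
    cost = k * (a > 0) + h * max(level, 0) + p * max(-level, 0)
    next_state = max(level, 0)        # unmet demand is lost (assumed dynamics)
    return cost, next_state

rng = np.random.default_rng(0)
s = 0
for t in range(5):
    a = min(3, 7 - s)                 # a simple (hypothetical) ordering rule
    demand = int(rng.integers(0, 8))  # Unif{0, ..., 7} demand
    cost, s = inventory_step(s, a, demand)
    print(t, a, demand, cost, s)
\end{verbatim}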
\begin{figure}[t]
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width =0.9\linewidth]{avg_avg.png}
\caption{log-log plot of the average error v. the average number of samples at a particular iteration.}
\label{fig:inventory_avg_avg}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width =0.9\linewidth]{sample_complexity.png}
\caption{(Smoothed) algorithm error v. number of samples.}
\label{fig:inventory_sample_at_err}
\end{subfigure}
\caption{Algorithm comparison: inventory model.}
\end{figure}
\par
Figures \ref{fig:inventory_avg_avg} and \ref{fig:inventory_sample_at_err} compare the sample complexity of our algorithm to that of the algorithm proposed in \cite{pmlr-v162-liu22a} ($g\in (1/2,3/4)$ versus $g\in (0,1/2)$). We run 300 independent trajectories and record the errors and the number of samples used at iterations 1000 through 5000. We clearly observe that our algorithm (black line) outperforms the one in \cite{pmlr-v162-liu22a} (red line).
Specifically, Figure \ref{fig:inventory_avg_avg}, the log-log plot, shows a black line (our algorithm) with slope close to $-1/2$. This is consistent with our theory and with Figure \ref{fig:hard_mdp_convergence}. The red line (the algorithm in \cite{pmlr-v162-liu22a}) appears to be affine as well, with a similar slope. However, there are visible jumps (horizontal segments) along the line. This is because the MLMC-DR Bellman estimator requires an infinite expected number of samples per iteration when $g< 1/2$.
Furthermore, the overall performance of our algorithm is significantly better. Figure \ref{fig:inventory_sample_at_err} is a (smoothed) scatter plot of the (log-error, number of samples used) pairs. We can clearly see that not only does our algorithm (black line) have better sample complexity at every error value, but the black line also has significantly less variation compared to the red line (the algorithm in \cite{pmlr-v162-liu22a}). Again, this is because our MLMC estimator has a constant-order expected sample size, in contrast to the infinite expected sample size of the MLMC estimator in \cite{pmlr-v162-liu22a} with parameter $g < 1/2$.
\section{Conclusion}
We establish the first model-free finite sample complexity bound for the DR-RL problem: $\tilde{O}(|S||A|(1-\gamma)^{-5}\epsilon^{-2}\delta^{-4} p_\wedge^{-6})$. Though optimal in $|S||A|$, we believe that the dependence on the other parameters is sub-optimal for Algorithm \ref{alg.Q_learning} and merits further research. Also, a minimax complexity lower bound for this problem would facilitate a better understanding of the algorithm's performance. We leave these for future work.
\acknowledgments{Material in this paper is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-20-1-0397. Additional support is gratefully acknowledged from NSF grants 1915967 and 2118199.
This work is supported in part by National
Science Foundation grant CCF-2106508, and the Digital Twin research grant from Bain \&
Company. Zhengyuan Zhou acknowledges the generous
support from New York University’s 2022-2023 Center for Global Economy and Business
faculty research grant.}
\bibliographystyle{apalike}
|
{
"arxiv_id": "2302.13255",
"language": "en",
"timestamp": "2023-02-28T02:14:22",
"url": "https://arxiv.org/abs/2302.13255",
"yymm": "2302"
} | \section{Introduction}
There is a folklore analogy between the ramification theory of $\ell$-adic Galois representations and the singularities of vector bundles with integrable connections. A general dictionary is that the unramified, tamely ramified, and wildly ramified Galois representations correspond to the removable, regular, and irregular singularities, respectively. Furthermore, an integrable connection with at worst regular singularities is encoded by its topological monodromy representation by the Riemann--Hilbert correspondence. Thus, one expects that the unramifiedness of an $\ell$-adic monodromy representation is related to the triviality of the topological monodromy representation.
On the other hand, the classical N\'eron--Ogg--Shafarevich criterion relates the good reduction of an abelian variety with the unramifiedness of its $\ell$-adic monodromy representation. Guided by the viewpoint of the previous paragraph, we first formulate and prove the corresponding criterion regarding the triviality of topological monodromy representation of a family of abelian varieties, which we call the \emph{archimedean analogue} of the N\'eron--Ogg--Shafarevich criterion. To avoid confusion, from now on, we will refer to the classical N\'eron--Ogg--Shafarevich criterion as the $\ell$-adic N\'eron--Ogg--Shafarevich criterion.
The main purpose of the note is to deduce the $\ell$-adic N\'eron--Ogg--Shafarevich criterion from the archimedean N\'eron--Ogg--Shafarevich criterion. The proof crucially uses the arithmetic toroidal compactification of the moduli of principally polarized abelian varieties as in the work of Faltings--Chai \cite{FC}. This new proof of the $\ell$-adic criterion explicitly realizes the folklore analogy between the ramification theory of $\ell$-adic Galois representations and the singularities of topological monodromy representations.
The idea of the proof is to use the ``degeneration of degenerations.'' Namely, we use the fact that Mumford's degeneration of abelian varieties can realize all semi-abelian degenerations of abelian varieties, and also that such degenerations can be put into a family. Perhaps the most well-known instance of this fact is that, for elliptic curves, the Tate uniformization can be put into a universal family, called the \emph{universal Tate elliptic curve} of Raynaud. The idea of relating $\ell$-adic and topological monodromy via a family of degenerations was prominently used in \cite{Oda}, which studies the Galois representation of a higher-genus curve using information about its topological monodromy.
The article is organized as follows. In \S2, we will first precisely formulate and prove the archimedean N\'eron--Ogg--Shafarevich criterion. Then, in \S3, we will first deduce the $\ell$-adic N\'eron--Ogg--Shafarevich criterion from the archimedean criterion in the case of elliptic curves, using the more familiar universal Tate elliptic curve. In \S4, we will end with the proof of the $\ell$-adic criterion in the general case by using Mumford's construction of degeneration of abelian varieties.
\subsection*{Acknowledgements} We thank John Halliday for pointing out a mistake in a previous version.
\section{Archimedean N\'eron--Ogg--Shafarevich criterion}
We first formulate the ``archimedean analogue'' of the N\'eron--Ogg--Shafarevich criterion, which is considerably easier to prove than the $\ell$-adic N\'eron--Ogg--Shafarevich criterion.
\begin{prop2}[(Archimedean N\'eron--Ogg--Shafarevich criterion)]\label{ArchNOS}Let $f:A\rightarrow D^{\times}$ be a holomorphic family of abelian varieties of dimension $g$ over the punctured disc $D^{\times}$. Then, $f$ extends to a family of abelian varieties over $D$ if and only if the monodromy representation $\rho:\pi_{1}(D^{\times},t_{0})\cong\mathbb{Z}\rightarrow\Aut H^{1}(A_{t_{0}},\mathbb{Z})$ is trivial, where $t_{0}\in D^{\times}$ is a fixed base point.
\end{prop2}
\begin{proof}
If the family extends to the unit disc $D$, the monodromy representation factors through $\pi_{1}(D,t_{0})$, which is trivial. Conversely, suppose the monodromy representation is trivial. We first prove this direction assuming that the family $f:A\rightarrow D^{\times}$ is a family of \emph{principally polarized} abelian varieties. Then, the family $f$ defines a period morphism from $D^{\times}$ modulo the monodromy to the Siegel upper half plane $\mathbb{H}_{g}$. As the monodromy is trivial, the period morphism is a holomorphic map $p:D^{\times}\rightarrow\mathbb{H}_{g}$. As $\mathbb{H}_{g}$ is conformally equivalent to a bounded domain, $0=D-D^{\times}$ is a removable singularity of $p$.
Now let us prove the converse direction in the general case. By Zarhin's trick, there is a family of principally polarized abelian varieties $A'\rightarrow D^{\times}$ whose topological monodromy representation is trivial and which contains $A\subset A'$ as a family of abelian subvarieties. By the previous paragraph, $A'\rightarrow D^{\times}$ extends to a family of principally polarized abelian varieties $\widetilde{A'}\rightarrow D$. Let $\widetilde{A}$ be the closure of $A$ in $\widetilde{A'}$, with its reduced scheme structure. We claim that $\widetilde{A}\rightarrow D$ is a family of abelian varieties, or that $\widetilde{A}_{0}$ is an abelian variety. Note that the fiberwise group structure $\mu:\widetilde{A'}\times_{D}\widetilde{A'}\rightarrow \widetilde{A'}$ restricts to $\mu':\widetilde{A}\times_{D}\widetilde{A}\rightarrow\widetilde{A'}$. Since $\mu'\rvert_{A\times_{D^{\times}}A}$ factors through $A\subset\widetilde{A'}$, $\mu'$ factors through $\widetilde{A}$. This shows that $\widetilde{A}_{0}$ is a reduced closed algebraic subgroup of the abelian variety $\widetilde{A'}_{0}$. Since a reduced connected closed algebraic subgroup of an abelian variety over a characteristic zero field is an abelian variety, it is now sufficient to show that $\widetilde{A}_{0}$ is connected, which is true as it is the continuous image of the closure of a connected set $A\times_{D^{\times}}A$, which is itself connected.
\end{proof}
Using this, we would like to prove the $\ell$-adic N\'eron--Ogg--Shafarevich criterion. From now on, we fix a finite extension $K/\mathbb{Q}_{p}$.
\begin{thm2}[($\ell$-adic N\'eron--Ogg--Shafarevich criterion)]\label{NOS}Let $A/K$ be an abelian variety. Let $\rho:G_{K}\rightarrow\GL(T_{\ell}A)$ be the $\ell$-adic monodromy representation for $\ell\ne p$. Then, $A$ has good reduction if and only if $\rho\rvert_{I_{K}}$ is trivial.
\end{thm2}
The rest of the article will be dedicated to proving Theorem \ref{NOS}. As one direction is immediate, we are left to prove the other nontrivial direction, namely proving the good reduction of $A$ assuming that $\rho\rvert_{I_{K}}$ is trivial.
As the first reduction step, we note that both sides of the statement of Theorem \ref{NOS} are invariant under an unramified base change. Furthermore, by Zarhin's trick, one can assume that $A$ is principally polarized.
Next, we note that only semistable abelian varieties need to be considered, using the geometry of the arithmetic toroidal compactification of the moduli of abelian varieties with level structures.
\begin{prop2}\label{Reduction1}
Let $A/K$ be a principally polarized abelian variety, and $\rho$ be the $\ell$-adic monodromy representation of $A$ for $\ell\ne p$. If $\rho\rvert_{I_{K}}$ is trivial, then $A$ has at worst semistable reduction.
\end{prop2}
\begin{proof}
Let $g$ be the dimension of $A$. By assumption, $A[\ell^{n}]$ is unramified for any $n\ge1$. Take a finite unramified extension $L/K$ over which $A[\ell^{3}]$ splits. Then, $A[\ell^{3}](L)\cong(\mathbb{Z}/\ell^{3}\mathbb{Z})^{2g}$. Fix one such isomorphism $\iota$. Then, $(A,\iota)$ defines a point $P\in A_{g,\ell^{3}}(L)$, where $A_{g,\ell^{3}}$ is the moduli space of principally polarized abelian varieties of dimension $g$ with full level $\ell^{3}$ structure. By \cite[Theorem 7.10]{GIT}, it is known that $A_{g,\ell^{3}}$ is represented by a smooth quasi-projective scheme over $\mathbb{Z}[1/\ell]$. Furthermore, by \cite[Theorem IV.6.7]{FC}, for a good enough choice of auxiliary data, there is a smooth projective $\mathbb{Z}[1/\ell]$-scheme $\overline{A}_{g,\ell^{3}}$, which contains $A_{g,\ell^{3}}$ as a dense open subscheme. By the valuative criterion for properness, the point $P\in A_{g,\ell^{3}}(L)$ extends to an $\mathcal{O}_{L}$-point $\mathcal{P}\in\overline{A}_{g,\ell^{3}}(\mathcal{O}_{L})$.
Recall that there is a semi-abelian scheme $\overline{G}\rightarrow\overline{A}_{g,\ell^{3}}$, extending the universal abelian scheme $G\rightarrow A_{g,\ell^{3}}$. After pulling back this family via the $\mathcal{O}_{L}$-point $\mathcal{P}:\Spec\mathcal{O}_{L}\rightarrow \overline{A}_{g,\ell^{3}}$, we obtain a semi-abelian scheme over $\mathcal{O}_{L}$ whose generic fiber is $A_{L}$. This implies that $A_{L}$ has at worst semistable reduction. As the reduction type is invariant under an unramified base change, we conclude that $A$ has at worst semistable reduction, as desired.
\end{proof}\section{Warm-up: the case of elliptic curves}
Mumford's construction of degenerating abelian varieties in the case of elliptic curves is exactly the \emph{Tate uniformization}. By means of Raynaud's universal Tate elliptic curve, one even knows that the degeneration of elliptic curves can be put into a one-dimensional family. In this section, we show how to prove Theorem \ref{NOS} using the universal Tate elliptic curve, to illustrate the usefulness of a family of degenerations.
\begin{proof}[Proof of Theorem \ref{NOS}, in the case of elliptic curves]
Suppose that an elliptic curve $A$ over $K$ has bad reduction but also has trivial monodromy over the inertia group $I_{K}$. As we know from Proposition \ref{Reduction1}, $A$ has semistable reduction. We will leverage the fact that $A$ has the Tate uniformization, and furthermore that the Tate uniformization is obtained from the \emph{universal Tate elliptic curve} of Raynaud. Our reference for the construction is \cite[\S9.2]{Bosch} and \cite[\S2.5]{Conrad}.
Let $K^{\nr}$ be the maximal unramified extension of $K$. Then, the \emph{universal Tate elliptic curve} is a family $f:\mathfrak{T}\rightarrow\mathfrak{S}=\Spec\mathcal{O}_{K^{\nr}}[[q]]$
such that, if $E$ is an elliptic curve over $K^{\nr}$ with semistable bad reduction, then the (semistable) N\'eron model $\mathfrak{E}$ of $E$ over $\Spec \mathcal{O}_{K^{\nr}}$
is obtained by pulling back $\mathfrak{T}$ along the map $\mathcal{O}_{K^{\nr}}[[q]]\rightarrow\mathcal{O}_{K^{\nr}}$, $q\mapsto q(E)$, the $q$-parameter of $E$. The family $\mathfrak{T}$ used here is obtained by taking the base-change of $\und{\mathrm{Tate}}_{1}$ in \cite[\S2.5]{Conrad}, which is in turn the algebraization of the (formal) universal Tate elliptic curve in \cite[\S9.2]{Bosch}, from $\mathbb{Z}[[q]]$ to $\mathcal{O}_{K^{\nr}}[[q]]$.
Let $\mathfrak{V}=\Spec\mathcal{O}_{K^{\nr}}$ be the closed subscheme of $\mathfrak{S}$ defined by the
map $\mathcal{O}_{K^{\nr}}[[q]]\xrar{q\mapsto q(A)}\mathcal{O}_{K^{\nr}}$, and let $\eta$ be the generic point of $\mathfrak{V}$. Let $i:\mathfrak{V}\hookrightarrow\mathfrak{S}$ be the closed embedding. Then, \[f\rvert_{\mathfrak{V}}:i^{*}\mathfrak{T}\rightarrow\mathfrak{V},\]is a semistable model of $A_{K^{\nr}}$. This in particular means that $f\rvert_{\mathfrak{V}_{\eta}}:(i^{*}\mathfrak{T})_{\eta}\rightarrow\mathfrak{V}_{\eta}$ is identified with $A_{K^{\nr}}\rightarrow\Spec(K^{\nr})$. Upon choosing a geometric generic point $\overline{\eta}:\Spec(\overline{K})\rightarrow\Spec(K^{\nr})\rightarrow\mathfrak{V}$, the pro-$\ell$-part of the monodromy representation \[\rho_{\mathfrak{V}_{\eta},\overline{\eta}}:\pi_{1,\operatorname{\acute{e}t}}(\mathfrak{V}_{\eta},\overline{\eta})_{\ell}\rightarrow\Aut(R^{1}(f\rvert_{{\overline{\eta}}})_{*}\und{{\mathbb{Q}}_{\ell}}),\]is identified with the $\ell$-adic monodromy representation of $A$ restricted to the pro-$\ell$-part of the inertia group $(I_{K})_{\ell}$, \[\rho\rvert_{(I_{K})_{\ell}}:(I_{K})_{\ell}\rightarrow\Aut(H^{1}(A_{\overline{K}},{\mathbb{Q}}_{\ell})).\]
On the other hand, as $A_{K^{\nr}}\rightarrow\Spec(K^{\nr})$ is put in a larger family, the above monodromy representation $\rho_{\mathfrak{V}_{\eta},\overline{\eta}}$ factors through the monodromy over a larger base $\mathfrak{U}=\lbrace q\ne0\rbrace=\Spec\mathcal{O}_{K^{\nr}}((q))\subset\mathfrak{S}$:
\[\xymatrix{\pi_{1,\operatorname{\acute{e}t}}(\mathfrak{V}_{\eta},\overline{\eta})_{\ell}\ar[rd]\ar[rr]^-{\rho_{\mathfrak{V}_{\eta},\overline{\eta}}}&&\Aut(R^{1}(f\rvert_{\mathfrak{V}_{\overline{\eta}}})_{*}\und{\mathbb{Q}_{\ell}})\\&\pi_{1,\operatorname{\acute{e}t}}(\mathfrak{U},\overline{\eta})_{\ell}\ar[ru]_-{\rho_{\mathfrak{U},\overline{\eta}}}}\]namely, the left arrow is the natural map $\pi_{1,\operatorname{\acute{e}t}}(\mathfrak{V}_{\eta},\overline{\eta})_{\ell}\rightarrow\pi_{1,\operatorname{\acute{e}t}}(\mathfrak{U},\overline{\eta})_{\ell}$ via $\mathcal{O}_{K^{\nr}}((q))\xrar{q\mapsto q(A)}K^{\nr}$. Note that, by Abhyankar's lemma, all \'etale $\ell$-covers of both $\mathfrak{U}$ and $\mathfrak{V}_{\eta}$ are Kummer covers, so both $\pi_{1,\operatorname{\acute{e}t}}(\mathfrak{V}_{\eta},\overline{\eta})_{\ell}$ and $\pi_{1,\operatorname{\acute{e}t}}(\mathfrak{U},\overline{\eta})_{\ell}$ are isomorphic to $\mathbb{Z}_{\ell}$. After identifying both with $\mathbb{Z}_{\ell}$, the natural map is multiplication by $\ell^{n}$ for some $n\ge0$.\footnote{This was pointed out to us by John Halliday.} By assumption, $\rho_{\mathfrak{V}_{\eta},\overline{\eta}}=\rho\rvert_{(I_{K})_{\ell}}$ is trivial, so we conclude that $\rho_{\mathfrak{U},\overline{\eta}}$ has finite image.
On the other hand, $\mathfrak{T}$ is smooth and proper over $\mathfrak{U}$, so the specialization map of \'etale fundamental groups identifies the monodromy over $\mathfrak{U}$ at $\overline{\eta}$ with the monodromy over $\mathfrak{U}$ at $\overline{\iota}$, where $\overline{\iota}=\Spec \overline{K^{\nr}((q))}$ is a geometric generic point of $\mathfrak{U}=\Spec\mathcal{O}_{K^{\nr}}((q))$. Thus, the monodromy representation
\[\rho_{\mathfrak{U},\overline{\iota}}:\pi_{1,\operatorname{\acute{e}t}}(\mathfrak{U},\overline{\iota})_{\ell}\rightarrow\Aut(R^{1}(f\rvert_{\overline{\iota}})_{*}\und{\mathbb{Q}_{\ell}}),\]has finite image.
Now $\pi_{1,\operatorname{\acute{e}t}}(\mathcal{O}_{K^{\nr}})=1$, so $\rho_{\mathfrak{U},\overline{\iota}}$ will stay the same even if we base change $\mathcal{O}_{K^{\nr}}$ to a characteristic zero algebraically closed field. We fix an abstract field embedding $j:K^{\nr}\hookrightarrow\mathbb{C}$ and consider the base-change to $\mathbb{C}$ using this embedding. Therefore, the monodromy of the $\mathbb{C}$-base-change
\[\rho_{\mathfrak{U}_{\mathbb{C}},\overline{\iota}_{\mathbb{C}}}:\pi_{1,\operatorname{\acute{e}t}}(\mathfrak{U}_{\mathbb{C}},\overline{\iota}_{\mathbb{C}})_{\ell}=\Gal(\overline{\mathbb{C}((q))}/\mathbb{C}((q)))_{\ell}\rightarrow\Aut(R^{1}(f\rvert_{\overline{\iota}_{\mathbb{C}}})_{*}\und{\mathbb{Q}_{\ell}}),\]has finite image.
The crucial point is that the family $\mathfrak{T}_{\mathbb{C}[[q]]}\rightarrow\Spec\mathbb{C}[[q]]$, obtained by base-changing $f:\mathfrak{T}\rightarrow\mathfrak{S}$ via $\mathcal{O}_{K^{\nr}}\hookrightarrow K^{\nr}\xrar{j}\mathbb{C}$, is the formal germ of a complex-analytic family $f':\mathcal{T}\rightarrow D$ of complex-analytic varieties over the unit disk. One could see this by noticing that the power series defining the Tate uniformization have coefficients in $\mathbb{Z}$ and can be used verbatim to define a complex-analytic family over the punctured unit disk. Another way to realize this is that the aforementioned family induces a map $p:\Spec\mathbb{C}[[q]]\rightarrow X(1)$, where $X(1)$ is the moduli of elliptic curves. As $X(1)$ is an algebraic space, one could take an open unit disk around the image of the closed point under $p$ in $X(1)^{\an}$, where $X(1)^{\an}$ is a Moishezon manifold.
Using the above observation, we see that $\rho_{\mathfrak{U}_{\mathbb{C}},\overline{\iota}_{\mathbb{C}}}$ is the $\ell$-completion of the topological monodromy $\rho_{f'}:\pi_{1}(D^{\times},t_{0})\rightarrow\Aut H^{1}(A_{t_{0}},\mathbb{Z})$, where $t_{0}\in D^{\times}$. As the family $\mathcal{T}\rightarrow D$ is semistable, this local topological monodromy is unipotent. Moreover, by the previous paragraph its image in $\Aut(H^{1}(A_{t_{0}},\mathbb{Q}_{\ell}))$ is finite, and hence the topological monodromy itself has finite image. Since any unipotent matrix in $\GL_{2}(\mathbb{Z})$ is either trivial or has infinite order, we conclude that the topological monodromy over $D^{\times}$ is trivial. By the archimedean N\'eron--Ogg--Shafarevich criterion, Proposition \ref{ArchNOS}, $\mathcal{T}\rvert_{D^{\times}}\rightarrow D^{\times}$ can be extended to a family of abelian varieties $\mathcal{T}'\rightarrow D$. As $\mathcal{T}\rvert_{D^{\times}}\rightarrow D^{\times}$ also extends to the semistable family $\mathcal{T}\rightarrow D$, whose central fiber is not an elliptic curve, this contradicts the fact that there is at most one way of extending a family of elliptic curves over $D^{\times}$ to $D$.
----
Now this is over $\mathbb{C}$, so this is the $\ell$-completion of the topological monodromy of the family $\mathscr{T}$ \emphC{seen as a family over a small puncutured disc} which is the analytification of the formal family we have. We know that the analytification of $\mathscr{T}$ makes sense over a unit punctured disc $\lbrace0<|q|<1\rbrace$ in the complex numbers, so we can consider the topological monodromy of this. As $\pi_{1}(\lbrace0< |q|<1\rbrace)\cong\mathbb{Z}$, we see that the topological monodromy also has to be of finite image. We can see in many ways that we have a contradiction here.\begin{itemize}\item One can see explicitly how cycles go under monodromy using that $\mathscr{T}\cong\mathbb{C}^{\times}/q^{\mathbb{Z}}$ as complex torus. One sees that upon a choice of appropriate basis, the monodromy representation sends $1$ to $\begin{pmatrix}1&1\\0&1\end{pmatrix}$.
\item One can argue abstractly using basic Hodge theory, making use of the archimedean N\'eron--Ogg--Shafarevich. Since the monodromy is quasi-unipotent, we know that over a cyclic cover (say over an $n$-cover) the monodromy becomes unipotent. Then the monodromy up above should be trivial because there is no nontrivial unipotent matrix of finite order. Now the whole family pulled back via $\lbrace |q|<1\rbrace\xrar{q\mapsto q^{n}}\lbrace|q|<1\rbrace$ realizes not only the family over the cyclic cover of the punctured disc but also its singular fiber, so the family over the cyclic cover has trivial monodromy which can also be extended to a family over the whole disc with singular fiber at $0$. There is only one way of extending a family of elliptic curves (due to Deligne, on the level of variation of Hodge structures, and for elliptic curves it's equivalent to thinking in terms of variation of Hodge structures), so it contradicts the archimedean N\'eron--Ogg--Shafarevich criterion Proposition \ref{ArchNOS}.
\end{itemize}\fi
\end{proof}
\iffalse
\section{Degeneration of abelian varieties}
In \cite{FC}, it is proven that any semi-abelian degeneration of abelian varieties can be given by \emph{Mumford's construction of degenerating abelian varieties}.
\begin{thm2}[(Mumford, Faltings--Chai)]Let $S=\Spec R$, where $R$ is an $I$-adic completion of a normal excellent ring. Let $\eta$ be its generic point. Let $G/S$ be a semi-abelian scheme whose generic fiber $G_{\eta}$ is an abelian scheme. Then, it can be associated with the following data:
\begin{itemize}
\item an extension $0\rightarrow T\rightarrow \widetilde{G}\xrar{\pi} A\rar0$, where $A/S$ is an abelian scheme and $T/S$ is a torus;
\item a homomorphism $\iota:\und{Y}\rightarrow\widetilde{G}\otimes\Frac R$, where $\und{Y}$ is an \'etale sheaf on $S$ whose fibers are free abelian groups of rank $\dim_{S}T$;
\item a cubical invertible sheaf $\widetilde{\mathcal{L}}$ on $\widetilde{G}$ induced from a cubical invertible sheaf on $A$;
\item an action $\und{Y}$ on $\widetilde{Y}_{\eta}$ satisfying some positivity condition.
\end{itemize}
Furthermore, $G$ arises as a ``quotient'' of $\widetilde{G}$ by $Y$ in a suitable sense.
\end{thm2}\begin{exam2}
Suppose $E$ is an elliptic curve over $K$ with semistable bad reduction. Then,
\end{exam2}
The idea of our proof of the N\'eron--Ogg--Shafarevich criterion is to exhibit a degeneration over a \emph{two-dimensional base}, so that we could relate the $\ell$-adic monodromy representation with the topological monodromy of the geometric fiber of the degeneration.
--Add more--
\fi
\section{A proof of the N\'eron--Ogg--Shafarevich criterion}
The crucial point in the proof of the previous section is that one can put a semistable elliptic curve into a family of semi-abelian varieties over $\Spec\mathcal{O}_{K^{\nr}}[[q]]$ whose base-change to $\mathbb{C}[[q]]$ is the formal germ of a complex-analytic family of semi-abelian varieties. With this in mind, we can now prove Theorem \ref{NOS} in the general case.
\begin{proof}[Proof of Theorem \ref{NOS}]
As in the proof of Proposition \ref{Reduction1}, there is a point $P\in A_{g,\ell^{3}}(K^{\nr})$ such that $G_{P}\cong A_{K^{\nr}}$, and there is an extension $i:\Spec\mathcal{O}_{K^{\nr}}\rightarrow \overline{A}_{g,\ell^{3}}$. Let $\overline{P}\in A_{g,\ell^{3}}(\overline{\mathbb{F}}_{p})$ be the point in the special fiber of $i$. As $\overline{A}_{g,\ell^{3}}$ is smooth over $\mathbb{Z}[1/\ell]$, the completed local ring of $\overline{A}_{g,\ell^{3}}$ at $\overline{P}$ is \[\widehat{\mathcal{O}}_{\overline{A}_{g,\ell^{3}},\overline{P}}\cong W(\overline{\mathbb{F}}_{p})[[X_{1},\cdots,X_{d}]], \]where $d=\frac{g(g+1)}{2}$. The morphism $i:\Spec\mathcal{O}_{K^{\nr}}\rightarrow\overline{A}_{g,\ell^{3}}$ then induces $\Spf\mathcal{O}_{K^{\nr}}\rightarrow\Spf\widehat{\mathcal{O}}_{\overline{A}_{g,\ell^{3}},\overline{P}}$, or equivalently a map $f:W(\overline{\mathbb{F}}_{p})[[X_{1},\cdots,X_{d}]]\rightarrow\mathcal{O}_{K^{\nr}}$. As its mod $p$ reduction is $\overline{P}$, if we choose a uniformizer $\pi\in\mathcal{O}_{K^{\nr}}$, then $f(X_{i})$ is divisible by $\pi$. Thus, $f$ factors through
\[W(\overline{\mathbb{F}}_{p})[[X_{1},\cdots,X_{d}]]\xrar{X_{i}\mapsto \frac{f(X_{i})}{\pi}X}\mathcal{O}_{K^{\nr}}[[X]]\xrar{X\mapsto \pi}\mathcal{O}_{K^{\nr}}.\]Let $p:\Spec\mathcal{O}_{K^{\nr}}[[X]]\rightarrow \overline{A}_{g,\ell^{3}}$ be the corresponding morphism, and consider $p^{*}\overline{G}\rightarrow\Spec\mathcal{O}_{K^{\nr}}[[X]]$. This is, as before, a one-dimensional family of semi-abelian schemes, which has the following properties.\begin{itemize}\item It specializes to $A_{K^{\nr}}$ at the point given by $X\mapsto\pi$.
\item It is smooth away from the $X=0$ locus.
\item Its base-change to $\mathbb{C}[[X]]$ is the formal germ of a complex-analytic semistable family of abelian varieties over a unit disc.
\end{itemize}
The proof then proceeds exactly as in \S3; the only difference is that we now use the fact that a non-identity unipotent matrix in $\GL_{n}(\mathbb{Z})$ has infinite order for any $n$.
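For completeness, here is the short argument for this fact: if $U=\mathrm{id}+N\in\GL_{n}(\mathbb{Z})$ with $N\neq0$ nilpotent satisfies $U^{k}=\mathrm{id}$ for some $k\geq1$, then
\[0=U^{k}-\mathrm{id}=N\Bigl(k\,\mathrm{id}+\binom{k}{2}N+\cdots+N^{k-1}\Bigr),\]
and the second factor is invertible (it is $k\,\mathrm{id}$ plus a nilpotent matrix), forcing $N=0$; in the rank-two case this is simply $\bigl(\begin{smallmatrix}1&1\\0&1\end{smallmatrix}\bigr)^{k}=\bigl(\begin{smallmatrix}1&k\\0&1\end{smallmatrix}\bigr)$.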
\iffalse
In the proof of the case of elliptic curves, the key was that there is a \emph{one-dimensional degeneration} of degenerating elliptic curves, where the singular fiber is not a degeneration of abelian varieties. Using Mumford's construction, we can similarly form a \emph{one-dimensional degeneration of degenerating abelian varieties} over $\Spf\mathcal{O}_{K^{\nr}}[[q]]$. Suppose that $A$ has semistable bad reduction. We get degeneration data out of the semi-abelian scheme $A$ we have, and in particular $\dim_{S}T>0$ as we have bad reduction. Now consider a family of degenerating abelian varieties over $\Spf\mathcal{O}_{K^{\nr}}[[q]]$ formed by the same degeneration data but $\iota=q\iota_{A}$, the $\iota$ of $A$ scaled by $q$. This is a nonconstant one-dimensional family of general degenerating abelian varieties where the singular fiber at $q=0$ is not actually a degenerating abelian variety. We can then use the proof of the case of elliptic curves word by word; now the last part about topological monodromy will need the archimedean N\'eron--Ogg--Shafarevich criterion, Proposition \ref{ArchNOS}.
\fi
\end{proof}
\iffalse
---------
Note that, unlike $\pi_{1,\operatorname{\acute{e}t}}(\eta,\overline{\eta})_{\ell}\hspace{0.02in}\xrightarrow{\raisebox{-0.8ex}[0ex][0ex]{\hspace{-0.02in}$\sim$\hspace{0.02in}}}\hspace{-0.02in}\pi_{1,\operatorname{\acute{e}t}}(U,\overline{\eta})_{\ell}$, it is not the case that the natural map $\pi_{1,\operatorname{\acute{e}t}}(\iota,\overline{\iota})_{\ell}\rightarrow\pi_{1,\operatorname{\acute{e}t}}(U,\overline{\iota})_{\ell}$ is an isomorphism. Indeed, $\pi_{1,\operatorname{\acute{e}t}}(\iota,\overline{\iota})_{\ell}$ is the $\ell$-part of the absolute Galois group of $K^{\nr}((q))$, so there is a whole zoo of field extensions coming from $q$-part. But again, if we exclude these kinds of field extensions, then it will be the same as $\pi_{1,\eta}(U,\overline{\iota})_{\ell}$. In concise terms, the natural map is an isomorphism when it's restricted to ``the inertia group of $G_{K^{\nr}((q))}$ for $q=0$.''
Let's make this precise. Let $B$ be the strict henselization of $\mathcal{O}_{K^{\nr}}[[q]]$ with respect to the ideal $(q)$. Then $\mathcal{O}_{K^{\nr}}[[q]]\rightarrow B$ gives $\Spec B[1/q]\rightarrow U$, and $\overline{\iota}$ factors through this map. Let $\iota_{B}$ be the generic point of $\Spec B[1/q]$; then $\iota_{B}$ underlies $\overline{\iota}$ too. Then we have natural maps
\[\pi_{1,\operatorname{\acute{e}t}}(\iota_{B},\overline{\iota})_{\ell}\rightarrow\pi_{1,\operatorname{\acute{e}t}}(\Spec B[1/q],\overline{\iota})_{\ell}\rightarrow\pi_{1,\operatorname{\acute{e}t}}(U,\overline{\iota})_{\ell}\]and
\fi
\bibliographystyle{alpha}
|
{
"arxiv_id": "2302.13248",
"language": "en",
"timestamp": "2023-02-28T02:14:07",
"url": "https://arxiv.org/abs/2302.13248",
"yymm": "2302"
} | \section{Introduction}
The magnetoelectric (ME) effect is a phenomenon where magnetization is generated by applying an electric field or, conversely, an electric polarization is generated by applying a magnetic field. This effect appears in various systems lacking both inversion symmetry and time-reversal symmetry. Most research on the ME effect has focused on spin degrees of freedom, e.g., in multiferroics \cite{fiebig2005revival,wang2010multiferroic,tokura2014multiferroics,dong2015multiferroic,fiebig2016evolution}, after the first observation in an antiferromagnet, $\mathrm{Cr_2O_3}$ \cite{1571980074033746432,astrov1960magnetoelectric,PhysRevLett.6.607}. However, besides spin magnetization, orbital magnetization also contributes to the total magnetization that can be induced by an electric field. Such an orbital magnetization induced by an electric field is called the orbital ME effect.
The orbital ME effect has been mainly studied in topological insulators at zero temperature. In topological insulators, it consists of two terms, a non-topological term (Kubo term) and a topological term (the Chern-Simons term) \cite{PhysRevB.78.195424,PhysRevLett.102.146805,Malashevich_2010,PhysRevB.82.245118,PhysRevLett.112.166601}.
The theoretical description of the orbital ME effect in metals at finite temperature is an ongoing problem because the orbital magnetic dipole moment is based on the position operator, $\hat{\bm{r}}$. Since the position operator is ill-defined in periodic systems using the Bloch basis, the direct calculation of the orbital magnetization is complicated. This difficulty also appears when calculating the orbital magnetization in equilibrium. In insulators, this problem can be solved by a formulation based on the Wannier representation \cite{PhysRevLett.95.137205,PhysRevB.74.024408} and by the semiclassical theory \cite{PhysRevLett.95.137204}.
While these two approaches assume zero temperature, subsequent work derived expressions for the orbital magnetization that are applicable even to metals at finite temperature using semiclassical theory \cite{PhysRevLett.97.026603}.
Furthermore, in a fully quantum mechanical approach, in Ref.~\cite{PhysRevLett.99.197202}, the orbital magnetization $M^i$ is defined in the thermodynamic sense as a derivative of the free energy $F$ with respect to the magnetic field $B^i$ ($M^i = \partial F/ \partial B^i$). In practice, it is calculated by the thermodynamic relation $\partial M^i /\partial \mu = \partial N/\partial B^i$. This approach avoids the use of the position operator. However, it is specific to equilibrium and cannot be applied to nonequilibrium states, such as quantum states carrying a current when an electric field is applied.
Recently, the orbital ME effect in metals at finite temperatures has been attracting attention. It has been experimentally observed in monolayer $\mathrm{MoS_2}$ \cite{lee2017valley,PhysRevLett.123.036806} and twisted bilayer graphene \cite{doi:10.1126/science.aaw3780,he2020giant}. Theoretical research is also steadily progressing. The orbital ME effect in metals is partially derived in Refs.~\cite{yoda2015current,PhysRevLett.116.077201,PhysRevB.102.184404,PhysRevB.102.201403,PhysRevB.96.035120}. This effect has also been proposed in superconductors \cite{PhysRevResearch.3.L032012,PhysRevLett.128.217703}. These works cover the extrinsic part, which originates in the change of the Fermi distribution function by an electric field and is proportional to the relaxation time $\tau$. In general, response functions also include an intrinsic part originating in the change of the wave function by an electric field, leading to, e.g., the anomalous Hall conductivity, which is related to the Berry curvature. Thus, we can expect that there is an intrinsic orbital ME effect even in metals at finite temperature, and it has, in fact, been discussed using semiclassical theory \cite{PhysRevB.103.115432,PhysRevB.103.045401}. So far, there are no works calculating the intrinsic orbital ME effect using a fully quantum mechanical approach. However, such an approach, going beyond the semiclassical theory, is essential for understanding quantum effects and for calculating the effect in interacting systems.
In this work, we derive the intrinsic orbital ME response in a fully quantum mechanical approach using the Kubo formula. In Sec.~\ref{formalization}, we discuss the formalization of the orbital ME tensor from the current-current correlation function. We derive the intrinsic orbital ME tensor and discuss the physical meaning of this formula, a relationship with the thermodynamical orbital magnetic quadrupole moment, and the symmetry constraints in Sec.~\ref{sec_IOME}. Finally, we analyze the intrinsic orbital ME response in a model with antiferromagnetic loop-current order with $\mathcal{PT}$-symmetry in Sec.~\ref{model_calculation} and conclude this paper in Sec.~\ref{conclusion}.
\begin{comment}
In general, the linear response function can be divided into two contributions: an intraband part proportional to a relaxation time $\tau$, which is the extrinsic part, and an interband part independent of $\tau$, which is called the intrinsic part, such as the anomalous Hall conductivity coming from the Berry curvature. Such a classification can also be made for the orbital ME effect.
\end{comment}
\section{Formalization} \label{formalization}
In this section, we will discuss the formalization of the orbital ME effect.
To derive it, we use the fact that the orbital ME response tensor is included in the current-current correlation function $\Phi_{JJ}^{ij}(\bm{q},\omega)$.
In the following, we will see how to extract the orbital ME tensor from the correlation function, in particular from its first-order derivative with respect to the wave number, following Ref.~\cite{PhysRevB.82.245118}.
In linear response theory, the current-current correlation function, $\Phi^{ij}_{JJ}(\bm{q},\omega)$, is defined using the current density $\bm{J}_{\bm{q},\omega}$ and a monochromatic electromagnetic field $\bm{A}(\bm{x},t) = \bm{A}_{\bm{q},\omega} e^{-i\omega t + i \bm{q} \cdot \bm{x}}$ as
\begin{eqnarray}
J^i_{\bm{q},\omega} = \Phi^{ij}_{JJ}(\bm{q},\omega) A^j_{\bm{q},\omega}. \label{linear}
\end{eqnarray}
Let us consider the first-order derivative of the correlation function with respect to the wave number, $\Phi^{ij,k}_{JJ}(\omega)=\partial_{q_k}\Phi^{ij}_{JJ}(\bm{q}=0,\omega)$.
To simplify the discussion,
we split $\Phi^{ij,k}_{JJ}(\omega)$ into a time-reversal odd part $\Phi^{\mathrm{(S)}ij,k}_{JJ}(\omega)$, which is symmetric in the indices $i \leftrightarrow j$, and a time-reversal even part
$\Phi^{\mathrm{(A)}ij,k}_{JJ}(\omega)$, which is antisymmetric in the indices
$i \leftrightarrow j$.
We will see that these two parts contain the multipole response tensors, such as the ME tensor, and how to decompose them into these tensors.
First, we consider the antisymmetric part. This part has 9 independent components and can therefore be decomposed in terms of a rank-2 tensor $\alpha_{ij}$ as
\begin{subequations}
\begin{align}
&\Phi^{\mathrm{(A)}ij,k}_{JJ}(\omega)
=
i \varepsilon_{ikl} \alpha_{jl}(\omega) -i \varepsilon_{jkl} \alpha_{il}(\omega) \label{asymm_phi}, \\
& \alpha_{ij}(\omega)
=
-\frac{1}{4i} \varepsilon_{jkl} \Bigl(
2 \Phi^{\mathrm{(A)}ik,l}_{JJ}(\omega)
-
\Phi^{\mathrm{(A)}kl,i}_{JJ}(\omega)
\Bigr).
\end{align}
\end{subequations}
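As a quick consistency check of this decomposition (an illustrative numerical sketch, not part of the derivation; the random tensor and the NumPy implementation are our own), one can construct $\Phi^{\mathrm{(A)}ij,k}_{JJ}$ from an arbitrary rank-2 tensor via Eq.~(\ref{asymm_phi}) and verify that the inversion formula recovers $\alpha_{ij}$:
\begin{verbatim}
import numpy as np

# Levi-Civita symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

rng = np.random.default_rng(0)
alpha = rng.normal(size=(3, 3))        # arbitrary rank-2 tensor alpha_{ij}

# Eq. (asymm_phi): Phi^(A)ij,k = i eps_{ikl} alpha_{jl} - i eps_{jkl} alpha_{il}
phiA = (1j*np.einsum('ikl,jl->ijk', eps, alpha)
        - 1j*np.einsum('jkl,il->ijk', eps, alpha))

# inversion: alpha_{ij} = -(1/4i) eps_{jkl} (2 Phi^(A)ik,l - Phi^(A)kl,i)
alpha_rec = -(1.0/4j)*(2*np.einsum('jkl,ikl->ij', eps, phiA)
                       - np.einsum('jkl,kli->ij', eps, phiA))
print(np.allclose(alpha_rec, alpha))   # True
\end{verbatim}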
Substituting Eq.~(\ref{asymm_phi}) into Eq.~(\ref{linear}) and using $\Phi^{ij}_{JJ}(\bm{q},\omega)=\Phi^{ij,k}_{JJ}(\omega)q_k$ for small wave numbers, the corresponding contribution to the current can be written as
\begin{subequations}
\begin{align}
&\bm{J}^{\mathrm{(A)}}_{\bm{q},\omega} = \bm{J}^{\mathrm{(A)}\bm{B}}_{\bm{q},\omega} + i\bm{q} \times \bm{M}^{\mathrm{(A)}\bm{E}}_{\bm{q},\omega} \\
&J^{\mathrm{(A)}\bm{B} i}_{\bm{q},\omega} = -i\omega P^{\mathrm{(A)}\bm{B} i}_{\bm{q},\omega} = \alpha_{ij}(\omega) B^j_{\bm{q},\omega} \\
&M^{\mathrm{(A)}\bm{E} i}_{\bm{q},\omega} = \frac{1}{i \omega} \alpha_{ji}(\omega) E^j_{\bm{q},\omega} \label{exme_effect},
\end{align}
\end{subequations}
where $\bm{E}_{\bm{q},\omega} = i\omega \bm{A}_{\bm{q},\omega}$ is the electric field, and $\bm{B}_{\bm{q},\omega} = i\bm{q} \times \bm{A}_{\bm{q},\omega}$ is the magnetic field. In Ref.~\cite{PhysRevLett.116.077201}, $\alpha_{ij}$ is evaluated in the range of nonabsorbing frequencies, $\omega \ll \Delta \epsilon_{\mathrm{gap}}$, where it is given by
\begin{eqnarray}
\alpha_{ij}(\omega) = \frac{i \omega \tau}{1- i \omega \tau} \sum_{n} \int \frac{d^3k}{(2\pi)^3} \frac{\partial f(\epsilon_{n\bm{k}})}{\partial k_i} m_{n\bm{k} j} \label{ex_metensor},
\end{eqnarray}
where $\tau$ is the relaxation time, $f(\epsilon_{n\bm{k}})=1/(e^{\beta(\epsilon_{n\bm{k}}- \mu)}+1)$ is the Fermi distribution function of the $n$-th band energy $\epsilon_{n\bm{k}}$, and $\bm{m}_{n\bm{k}}$ is the spin and orbital magnetic moment.
In the long wavelength limit $\bm{q} \to 0$, only $\bm{J}^{\bm{B}}_{0,\omega}$ contributes to the transport current induced by a magnetic field. This phenomenon is called the gyrotropic magnetic effect, and Eq.~(\ref{exme_effect}) is interpreted as the ME effect \cite{PhysRevLett.116.077201}.
This ME effect is extrinsic and originates from the change of the Fermi distribution function. In the following, we will see that there is another contribution, derived from the symmetric part, which is the main result of this paper.
Thus, let us discuss the symmetric part $\Phi^{\mathrm{(S)}ij,k}_{JJ}(\omega)$. This part is known to generate the intrinsic nonreciprocal directional dichroism \cite{PhysRevLett.122.227402}. In the following, we will see that this part also includes the ME tensor.
The symmetric part has 18 independent components. Therefore, it can be decomposed using a traceless rank-2 tensor $\beta_{ij}$ and a totally symmetric rank-3 tensor $\gamma_{ijk}$ \cite{PhysRevB.82.245118}. $\beta_{ij}$ has 8 independent components and $\gamma_{ijk}$ has 10 independent components, so together they account for all 18 independent components, and there is a one-to-one correspondence with the symmetric part:
\begin{subequations}
\begin{align}
& \Phi^{(\mathrm{S})ij,k}_{JJ}(\omega)
=
i \varepsilon_{jkl} \beta_{il}(\omega) + i \varepsilon_{ikl} \beta_{jl}(\omega) + \omega \gamma_{ijk}(\omega) \label{symm_phi} \\
& \beta_{li}(\omega) = \frac{1}{3i} \varepsilon_{ijk} \Phi^{(\mathrm{S})lj,k}_{JJ}(\omega) \label{beta} \\
& \gamma_{ijk}(\omega) = \frac{1}{3\omega} \Bigl( \Phi^{(\mathrm{S})ij,k}_{JJ}(\omega)+\Phi^{(\mathrm{S})jk,i}_{JJ}(\omega)+\Phi^{(\mathrm{S})ki,j}_{JJ}(\omega) \Bigr) . \label{gamma} \nonumber \\
\end{align}
\end{subequations}
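An analogous numerical check works for this decomposition as well (again an illustrative sketch, not part of the derivation): building $\Phi^{\mathrm{(S)}ij,k}_{JJ}$ from a random traceless $\beta_{ij}$ and a random totally symmetric $\gamma_{ijk}$ via Eq.~(\ref{symm_phi}), Eqs.~(\ref{beta}) and (\ref{gamma}) recover both tensors:
\begin{verbatim}
import numpy as np
from itertools import permutations

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

rng = np.random.default_rng(1)
beta = rng.normal(size=(3, 3))
beta -= np.trace(beta)/3.0*np.eye(3)                 # traceless rank-2 tensor
raw = rng.normal(size=(3, 3, 3))
gamma = sum(np.transpose(raw, p) for p in permutations(range(3)))/6.0
omega = 0.7                                          # arbitrary test frequency

# Eq. (symm_phi): Phi^(S)ij,k = i eps_{jkl} beta_{il} + i eps_{ikl} beta_{jl} + w gamma_{ijk}
phiS = (1j*np.einsum('jkl,il->ijk', eps, beta)
        + 1j*np.einsum('ikl,jl->ijk', eps, beta) + omega*gamma)

beta_rec = np.einsum('ijk,ljk->li', eps, phiS)/(3j)               # Eq. (beta)
gamma_rec = (phiS + np.einsum('jki->ijk', phiS)
             + np.einsum('kij->ijk', phiS))/(3*omega)             # Eq. (gamma)
print(np.allclose(beta_rec, beta), np.allclose(gamma_rec, gamma))  # True True
\end{verbatim}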
Substituting Eq.~(\ref{symm_phi}) in Eq.~(\ref{linear}), we obtain
\begin{subequations}
\begin{align}\label{symm_phi_J}
&
\bm{J}^{(\mathrm{S})}_{\bm{q},\omega}
=
\bm{J}^{(\mathrm{S}) \bm{B} \bm{E}}_{\bm{q},\omega} + i\bm{q} \times \bm{M}^{(\mathrm{S})\bm{E}}_{\bm{q},\omega} \\
&
J^{(\mathrm{S}) \bm{B} \bm{E}i}_{\bm{q},\omega}
=
-i\omega \Bigl(
P^{\mathrm{(S)}\bm{B} i}_{\bm{q},\omega}
-
i q_j Q^{\mathrm{(S)}\bm{E} ij}_{\bm{q},\omega}
\Bigr) \\
& P^{\mathrm{(S)\bm{B}}i}_{\bm{q},\omega} = \frac{1}{i\omega} \beta_{ij}(\omega) B^j_{\bm{q},\omega} \\
& Q^{\mathrm{(S)}\bm{E}ij}_{\bm{q},\omega} = -\frac{1}{i\omega} \gamma_{ijk}(\omega) E^k_{\bm{q},\omega} \\
&
M^{(\mathrm{S})\bm{E}i}_{\bm{q},\omega} = \frac{1}{i\omega} \beta_{ji}(\omega) E^{j}_{\bm{q},\omega}.
\end{align}
\end{subequations}
In Eq.~(\ref{symm_phi_J}), $\beta_{ij}$ corresponds to the ME tensor. Thus, the ME response tensor is given by $\alpha_{ij}$ and $\beta_{ij}$.
Furthermore, since $\gamma_{ijk}$ is totally symmetric, it can only be regarded as the response of an electric quadrupole moment and cannot be included in the ME effect. Thus, $\gamma_{ijk}$ is the pure response tensor of the electric quadrupole moment $Q^{\mathrm{(S)}\bm{E} ij}_{\bm{q},\omega}$ induced by an electric field.
The decomposition in Eq.~(\ref{symm_phi}) can also be interpreted as the expansion of the free energy in the electric and magnetic fields, in the path-integral sense \cite{altland_simons_2010}, as
\begin{eqnarray}
F[\tilde{\bm{A}},\bm{A}] &\simeq& \tilde{A}^i_{-\bm{q},-\omega} \Phi^{(S)ij,k}_{JJ}(\omega) q_k A^j_{\bm{q},\omega} \nonumber \\
&=& \tilde{E}^i_{-\bm{q},-\omega} \frac{\beta_{ij}(\omega)}{i \omega} B^j_{\bm{q},\omega}
+ \tilde{B}^i_{-\bm{q},-\omega} \frac{\beta_{ji}(\omega)}{i \omega} E^j_{\bm{q},\omega} \nonumber \\
&& + (\partial_i \tilde{E}^j)_{-\bm{q},-\omega} \frac{- \gamma_{ijk} (\omega)}{i\omega} E^k_{\bm{q},\omega}.
\end{eqnarray}
Here, $X=A, E, B$ denotes the external fields and $\tilde{X}$ the auxiliary fields generating observables such as the current density.
This equation shows that the expression $\beta_{ji}/i\omega$ is identical to the definition $\partial^2 F/\partial \tilde{B}^i_{-\bm{q},-\omega} \partial E^j_{\bm{q},\omega}$, which is exactly the ME tensor. We note that in this definition, the electric field and the magnetic field are interconvertible due to Faraday's law ($\bm{\nabla} \times \bm{E} + \partial \bm{B}/ \partial t =0$). Thus, when focusing on the response of the magnetization to the electric field, all components of $\partial_i E^j$ that can be transformed into a magnetic field must be taken into account. Of course, there are components that cannot be converted into a magnetic field; these components correspond to $\gamma_{ijk}$.
First, we comment on some merits of this definition of the ME tensor. In finite systems, the naively defined polarization, magnetization, multipoles, and their response functions depend on the origin of the coordinate system \cite{10.1093/acprof:oso/9780198567271.001.0001}. However, using the current-current correlation function and its decomposition into the multipole response functions, we can calculate response functions that are independent of the origin and remain well-defined even in bulk systems. In addition, the response functions are guaranteed to be gauge-invariant.
In the above discussion, the vector potential is used to describe the electric field and the magnetic field. Of course, the scalar potential also generates an electric field. In Appendix \ref{derivation_Jn}, we show that the response tensor induced by the scalar potential also contains the information on the orbital ME tensor and results in the same equations as in the following discussion.
The goal of this paper is the derivation of the orbital ME effect originating from $\beta_{ij}$ under a uniform and static electric field. Thus, we need to calculate $\lim_{\omega \to 0} \beta_{ij}(\omega)/i\omega$.
\section{Derivation of the intrinsic orbital ME tensor} \label{sec_IOME}
In this section, we will derive the formula for the intrinsic orbital ME tensor at finite temperatures in periodic systems.
In this paper, we focus on the response under a uniform and static electric field. Thus, in the final step, we will take two limits, $\bm{q} \to 0$ and $\omega \to 0$. In general, interchanging these two limits gives different results. In this paper, we consider the uniform limit (taking $\bm{q} \to 0$ before $\omega \to 0$) to calculate the dynamical response. However, we will see that the intrinsic orbital ME response originates from interband effects. Therefore, there is no need to be cautious about the order of the two limits, unlike in the extrinsic case, because there is no singularity.
\subsection{Set-up}
The Hamiltonian used in this paper is
\begin{eqnarray}
H_0 = \frac{\bm{p}^2}{2m} + V(\bm{x}) + \frac{1}{4m^2} \biggl( \frac{\partial V(\bm{x})}{\partial \bm{x}} \times \bm{p} \biggr) \cdot \bm{\sigma},
\end{eqnarray}
where $m$ is the mass of an electron, $V(\bm{x}) = V(\bm{x} +\bm{a})$ is a periodic potential, and $\bm{\sigma}$ is the vector of Pauli matrices representing the spin degrees of freedom. The third term is the spin-orbit coupling term. In this paper, we use $\hbar=1$. The velocity operator $\bm{v}_0 = i [H_0 , \bm{x}]$ is
\begin{eqnarray}
\bm{v}_0 = \frac{\bm{p}}{m} + \frac{1}{4m^2} \bm{\sigma} \times \frac{\partial V(\bm{x})}{\partial \bm{x}}.
\end{eqnarray}
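For completeness, the third term of $\bm{v}_{0}$ follows from the elementary commutator $[p_{b},x_{i}]=-i\delta_{bi}$ (a short check that is not spelled out above):
\begin{eqnarray}
i\left[\frac{1}{4m^{2}} \biggl( \frac{\partial V(\bm{x})}{\partial \bm{x}} \times \bm{p} \biggr) \cdot \bm{\sigma} , x_{i}\right]
&=& \frac{1}{4m^{2}} \varepsilon_{cab} \frac{\partial V(\bm{x})}{\partial x_{a}} \sigma_{c}\, i[p_{b},x_{i}] \nonumber \\
&=& \frac{1}{4m^{2}} \varepsilon_{ica} \sigma_{c} \frac{\partial V(\bm{x})}{\partial x_{a}}
= \frac{1}{4m^{2}} \biggl( \bm{\sigma} \times \frac{\partial V(\bm{x})}{\partial \bm{x}} \biggr)_{i},
\end{eqnarray}
while $i[\bm{p}^{2}/2m,x_{i}]=p_{i}/m$ and $[V(\bm{x}),x_{i}]=0$.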
Applying a monochromatic electromagnetic field $\bm{A}(\bm{x},t) = \bm{A}_{\bm{q},\omega} e^{-i\omega t + i\bm{q} \cdot \bm{x}}$, we find that the momentum changes as $\bm{p} \to \bm{p} + e \bm{A}(\bm{x},t)$, where $-e$ is the charge of an electron. We here neglect the Zeeman term because we focus on the orbital magnetization. Then, the perturbed Hamiltonian in the first order of the electromagnetic field is
\begin{eqnarray}
H_{A} = \frac{e}{2} \Bigl( \bm{v}_0 \cdot \bm{A}(\bm{x},t) + \bm{A}(\bm{x},t) \cdot \bm{v}_0 \Bigr) .
\end{eqnarray}
The velocity is also changed by the electromagnetic field as
$\bm{v}_{A} = i [H_A ,\bm{x}] = e \bm{A}(\bm{x},t)/m$,
which is the diamagnetic term. The total current operator is given by $\bm{J}(\bm{r},t) = -e \{ \bm{v}_{\mathrm{tot}} , \delta(\bm{r} - \bm{x}) \}/2$, where, unlike $\bm{x}$, $\bm{r}$ is just a coordinate and not an operator, and $\bm{v}_{\mathrm{tot}} = \bm{v}_0 + \bm{v}_A $. This current operator satisfies the equation of continuity $\bm{\nabla}_{\bm{r}} \cdot \bm{J}(\bm{r},t) = - i[H_0 +H_A , n(\bm{r} ) ]$, where $n(\bm{r})=-e \delta(\bm{r} - \bm{x})$ is the electron density.
Fourier transforming this current operator yields
\begin{eqnarray}
\bm{J}_{\bm{q},\omega} &=& \int d^3r\, \bm{J}(\bm{r},t)\, e^{i\omega t -i \bm{q} \cdot \bm{r}} \nonumber \\
&=&-e \biggl( \frac{1}{2} \{ \bm{v}_0 , e^{+i \omega t -i\bm{q} \cdot \bm{x}} \} + \frac{e\bm{A}_{\bm{q},\omega}}{m} \biggr).
\end{eqnarray}
Using the second quantization and the Kubo linear response theory, the symmetric part of the current-current correlation function $\Phi^{(\mathrm{S})ij}_{JJ}(\bm{q},\omega)$ is given by
\begin{widetext}
\begin{eqnarray}
\Phi^{(\mathrm{S})ij}_{JJ} (\bm{q},\omega)
&=&
-e^2 \int_{\mathrm{BZ}} \frac{d^3k}{(2\pi)^3}\sum_{mn} \frac{f(\epsilon_{n\bm{k}+\bm{q}/2}) - f(\epsilon_{m\bm{k}-\bm{q}/2})}{\epsilon_{n\bm{k}+\bm{q}/2} - \epsilon_{m\bm{k}-\bm{q}/2} -( \omega + i\delta )}
\mathrm{Re} \Bigl( \bra{u_{m\bm{k}-\bm{q}/2}} v^i_{\bm{k}} \ket{u_{n\bm{k}+\bm{q}/2}} \bra{u_{n\bm{k}+\bm{q}/2}} v^j_{\bm{k}} \ket{u_{m\bm{k}-\bm{q}/2}} \Bigr) \nonumber \\
&& - e^2 \int_{\mathrm{BZ}} \frac{d^3k}{(2\pi)^3} \sum_{n} \frac{f(\epsilon_{n\bm{k}})}{m} \delta_{ij}.
\end{eqnarray}
Here, we use the following notations:
\begin{equation}
H_0 \ket{\psi_{n\bm{k}}} = \epsilon_{n\bm{k}} \ket{\psi_{n\bm{k}}},\hspace{10pt}
\ket{u_{n\bm{k}}} = e^{-i \bm{k} \cdot \bm{x}} \ket{\psi_{n\bm{k}}},\hspace{10pt}
H_{\bm{k}} = e^{-i \bm{k} \cdot \bm{x}} H_0 e^{i \bm{k} \cdot \bm{x}},\hspace{10pt}
\bm{v}_{\bm{k}} = \frac{\partial H_{\bm{k}}}{\partial \bm{k}} .
\end{equation}
The Bloch wave number $\bm{k}$ lies in the first Brillouin zone (BZ). $\epsilon_{n\bm{k}}$ is the eigenenergy of the $n$-th band, $\ket{\psi_{n\bm{k}}}$ is a Bloch function, $\ket{u_{n\bm{k}}}$ is the periodic part of the Bloch function, and $\bm{v}_{\bm{k}}$ is the velocity operator of the Bloch basis.
Expanding $\Phi^{\mathrm{(S)}ij}_{JJ}(\bm{q},\omega)$ in $\bm{q}$ up to first order and using the relationships in Eqs.~(\ref{beta}) and (\ref{gamma}), we can obtain the uniform and static orbital ME tensor $\chi^{\mathrm{(me)}}_{ij}$ and
the pure electric quadrupole conductivity $\sigma^{\mathrm{(eq)}}_{ijk}$, which represents the contribution from the electric quadrupole moment to the current density.
\subsection{Intrinsic orbital ME tensor}
In this subsection, we present the expressions for the intrinsic orbital ME tensor and the pure electric quadrupole conductivity and discuss the physical meaning of these responses.
The expressions for the intrinsic orbital ME tensor and the pure electric quadrupole conductivity are (see Appendix~\ref{derivation_JJ} for a detailed derivation)
\begin{eqnarray}
&&\chi^{\mathrm{(me)}}_{ij}
=
\lim_{\omega \to 0} \mathrm{Re} \biggl[ \frac{\beta_{ij}(\omega)}{i\omega} \biggr]
=
-e^2\int_{\mathrm{BZ}} \frac{d^3k}{(2\pi)^3}
\sum_{n} f(\epsilon_{n\bm{k}}) \biggl(
\frac{1}{3} \varepsilon_{klj} \partial_{l} g^{ik}_{n\bm{k}} - \sum_{m(\neq n)} \frac{2}{\epsilon_{nm\bm{k}}}
\mathrm{Re} \Bigl[
\mathcal{A}^i_{nm} M^j_{mn}
-\frac{1}{3} \delta_{ij} \mathcal{A}^k_{nm} M^k_{mn}
\Bigr]
\biggr) \label{me_response} \nonumber \\ \\
&& \sigma^{\mathrm{(eq)}}_{ijk}
=
\lim_{\omega \to 0 } \mathrm{Re} \gamma_{ijk}(\omega)
=
e^2 \int_{\mathrm{BZ}} \frac{d^3k}{(2\pi)^3} \sum_{n} \biggl\{
\frac{-f'(\epsilon_{n\bm{k}})}{\delta^2} (\partial_i \epsilon_{n\bm{k}}) (\partial_j \epsilon_{n\bm{k}}) (\partial_k \epsilon_{n\bm{k}})
+
\frac{f(\epsilon_{n\bm{k}})}{3} \Bigl(
\partial_{i}g^{jk}_{n\bm{k}} +\partial_{k}g^{ij}_{n\bm{k}} +\partial_{j}g^{ki}_{n\bm{k}}
\Bigr)
\biggr\}
\label{eq_response}, \nonumber \\
\end{eqnarray}
\end{widetext}
where $\partial_l = \partial/ \partial k_l$, $\epsilon_{nm\bm{k}} = \epsilon_{n\bm{k}} - \epsilon_{m\bm{k}}$ and $f'(\epsilon_{n\bm{k}}) = \partial f(\epsilon_{n\bm{k}})/\partial \epsilon_{n\bm{k}}$. These equations are the main results of this paper.
In the following, we discuss the physical meaning of each term in Eq.~(\ref{me_response}) and Eq.~(\ref{eq_response}).
The first term in Eq.~(\ref{me_response}) and the second term in Eq.~(\ref{eq_response}) include $g^{ij}_{n\bm{k}}$, which is the quantum metric \cite{provost1980riemannian,resta2011insulating},
\begin{eqnarray}
g^{ij}_{n\bm{k}} = \sum_{m (\neq n)} \mathrm{Re} [\mathcal{A}^i_{nm\bm{k}} \mathcal{A}^j_{mn\bm{k}}],
\end{eqnarray}
where $\mathcal{A}^i_{nm\bm{k}} = i \braket{u_{n\bm{k}} | \partial_i u_{m\bm{k}}}$ is the Berry connection.
The quantum metric is a metric
measuring the distance between wave functions on a parameter space (e.g., the Bloch wave number). This metric is an important geometric quantity characterizing quantum states in the Brillouin zone together with the Berry curvature. The quantum metric can be interpreted as an electric quadrupole moment of a wave packet \cite{PhysRevB.99.121111,PhysRevLett.122.227402}, and it also contributes to the thermodynamical electrical quadrupole moment \cite{PhysRevB.102.235149}.
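To make the evaluation of $g^{ij}_{n\bm{k}}$ concrete, one can use the equivalent sum-over-states form $\mathcal{A}^{i}_{nm\bm{k}}=i\bra{u_{n\bm{k}}}\partial_{i}H_{\bm{k}}\ket{u_{m\bm{k}}}/(\epsilon_{m\bm{k}}-\epsilon_{n\bm{k}})$ for $n\neq m$. The following minimal numerical sketch (the two-band Hamiltonian and all parameter values are purely illustrative and are not the model studied in this paper) evaluates $g^{ij}_{n\bm{k}}$ at a single $\bm{k}$ point:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky, M=1.5):
    # generic two-band Bloch Hamiltonian, used only for illustration
    return np.sin(kx)*sx + np.sin(ky)*sy + (M - np.cos(kx) - np.cos(ky))*sz

def quantum_metric(kx, ky, n=0, dk=1e-5):
    e, u = np.linalg.eigh(H(kx, ky))
    # velocity matrices dH/dk_i by central finite differences
    dH = [(H(kx + dk, ky) - H(kx - dk, ky))/(2*dk),
          (H(kx, ky + dk) - H(kx, ky - dk))/(2*dk)]
    g = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            for m in range(len(e)):
                if m == n:
                    continue
                vi = u[:, n].conj() @ dH[i] @ u[:, m]
                vj = u[:, m].conj() @ dH[j] @ u[:, n]
                g[i, j] += np.real(vi*vj)/(e[n] - e[m])**2
    return g

print(quantum_metric(0.3, -0.7))    # symmetric 2x2 matrix g^{ij}_{n k}
\end{verbatim}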
This metric is also included in the pure electric quadrupole conductivity in the second term of Eq.~(\ref{eq_response}). This term represents the electric quadrupole moment induced by the change of the distribution function (see also Eq.~(10) in Ref.~\cite{PhysRevB.102.235149}). On the other hand, the first term in Eq.~(\ref{me_response}) involves the antisymmetric combination of the index of the derivative and one index of the metric.
Using integration by parts, this term can be rewritten as $\varepsilon_{klj} (\partial_{l}\epsilon_{n\bm{k}}) g^{ik}_{n\bm{k}}$ on the Fermi surface.
Because the metric behaves as $g^{ij} \sim x^ix^j$, i.e., it has the same symmetry as the electric quadrupole moment, this term behaves like $x^i (\bm{x}\times \bm{v}^0_{n\bm{k}})^j$. Here, $x^i$ represents the position, and $\bm{v}^0_{n\bm{k}} = \partial \epsilon_{n\bm{k}} / \partial \bm{k}$ is the group velocity of the $n$-th band. Thus, this term corresponds to a magnetic quadrupole moment; it also appears in the thermodynamical orbital magnetic quadrupole moment \cite{PhysRevB.98.020407,PhysRevB.98.060402} and contributes to the orbital ME effect.
The second term in Eq.~(\ref{me_response}) includes
\begin{eqnarray}
\bm{M}_{mn} = \sum_{l(\neq n)} \frac{1}{2} (\bm{v}_{ml\bm{k}} + \bm{v}^0_{n\bm{k}} \delta_{ml} ) \times \bm{\mathcal{A}}_{ln\bm{k}},
\end{eqnarray}
where $\bm{v}_{ml\bm{k}} = \bra{u_{m\bm{k}}} \bm{v}_{\bm{k}} \ket{u_{l\bm{k}}}$ is the matrix element of the velocity operator $\bm{v}_{\bm{k}}$. $\bm{M}_{mn}$ has the form of $\bm{v} \times \bm{r}$, so it can be interpreted as an off-diagonal element of the orbital magnetic moment. The second term in Eq.~(\ref{me_response}) can therefore be interpreted as the change of the orbital magnetization $M$ caused by the perturbation of the wave function by an electric field.
Accordingly, this term includes the Berry connection $\mathcal{A}$, which acts like a polarization conjugate to the electric field. This term also appears in the orbital magnetic quadrupole moment \cite{PhysRevB.98.020407,PhysRevB.98.060402}.
Finally, we comment on the gauge invariance of the obtained equations. The parts of the equation that depend on wave functions are in the form of the off-diagonal Berry connection, $\bm{\mathcal{A}}_{nm\bm{k}}$. The off-diagonal Berry connection is gauge-invariant; therefore, these equations are also gauge-invariant. Because this approach is based on the current-current response function, the gauge invariance should be guaranteed. In addition, we calculate these tensors using a scalar potential instead of the vector potential, and we obtain the same result as Eq.~(\ref{me_response}) and Eq.~(\ref{eq_response}) (see Appendix \ref{derivation_Jn}). Thus, these tensors do not depend on the choice of the gauge of the electromagnetic field.
\subsection{Relation with the orbital magnetic quadrupole moment and symmetry constraints}
The magnetic quadrupole moment is believed to be the origin of the ME effect.
The ME effect requires the breaking of both inversion symmetry and time-reversal symmetry. These are the same conditions as those required for the emergence of odd-parity magnetic multipoles, including the magnetic quadrupole moment.
The ME tensor $\chi^{\mathrm{(me)}}_{ij}$, in general, can be decomposed into three terms, the magnetic monopole moment (the trace of $\chi^{\mathrm{(me)}}_{ij}$), the magnetic toroidal moment (the antisymmetric part of $\chi^{\mathrm{(me)}}_{ij}$), and the magnetic quadrupole moment (the traceless and symmetric part of $\chi^{\mathrm{(me)}}_{ij}$), according to their symmetry \cite{PhysRevB.76.214404,spaldin2008toroidal,PhysRevB.88.094429,PhysRevB.93.195167}. This fact implies that the ME response originates from these multipoles. To solidify this statement, we can show the direct relationship between the orbital magnetic quadrupole moment and the orbital ME response, known as the St\v{r}eda formula. The formula was first presented by St\v{r}eda as the relation between the quantum Hall conductivity and the orbital magnetization in insulators at zero temperature \cite{Streda_1982}. A similar relationship is also valid for the orbital ME response, as discussed in Refs.~\cite{PhysRevB.98.020407,PhysRevB.98.060402}. Our equation satisfies this relationship,
\begin{eqnarray}
\chi^{\mathrm{(me)}}_{ij} = -e \frac{\partial Q^{\mathrm{(m)}}_{ij}}{\partial \mu},
\end{eqnarray}
in insulators at zero temperature, as can be seen by comparing with the orbital magnetic quadrupole moment $Q^{\mathrm{(m)}}_{ij}$ given in Refs.~\cite{PhysRevB.98.060402,PhysRevB.98.020407}.
In addition, the connection between them has also been discussed in quantum wells \cite{PhysRevResearch.2.043060}, and these facts suggest the possibility of detecting orbital magnetic quadrupole moments using the orbital ME effect.
We note that our equation does not include the trace of the orbital ME tensor, which corresponds to the monopole term. This part is known to be a boundary effect and originates from the axion coupling described by the Chern-Simons action, as discussed for three-dimensional topological insulators \cite{PhysRevB.78.195424,PhysRevLett.102.146805,Malashevich_2010,PhysRevB.82.245118}.
This problem also occurs in an identical approach at zero temperature \cite{PhysRevB.82.245118} and in the thermodynamical definition of the orbital magnetic quadrupole moment, which should include the Chern-Simons term to fulfill the St\v{r}eda formula \cite{PhysRevB.98.020407}. On the other hand, the semiclassical approach in Ref.~\cite{PhysRevB.103.115432} successfully derives the Chern-Simons term in non-Chern insulators. However, the extension of this term to general metals and insulators, including Chern insulators, remains an open problem in both the fully quantum approach and the semiclassical theory.
Next, let us discuss symmetry constraints. As mentioned above, the ME effect is nonzero in systems without inversion and time-reversal symmetry. Time-reversal symmetry breaking can also occur in a macroscopic sense, for example in a system with dissipation.
Thus, the extrinsic ME effect in Eq.~(\ref{ex_metensor}) can occur in the DC-limit ($\lim_{\omega \to 0} \alpha_{ij}(\omega)/i\omega$) even if the system described by the Hamiltonian $H_0$ has time-reversal symmetry. However, the intrinsic part in Eq.~(\ref{me_response}) is zero. On the other hand, if the Hamiltonian $H_0$ satisfies $\mathcal{PT}$-symmetry, which is the product of the inversion operation $\mathcal{P}$ and the time-reversal operation $\mathcal{T}$, the orbital magnetic moment $\bm{m}_{n\bm{k}}$ is zero, resulting in a vanishing extrinsic ME effect. In this situation, only the intrinsic orbital ME effect can exist. Thus, $\mathcal{PT}$-symmetric systems, such as systems with antiferromagnetic order, are suitable for the experimental observation of the intrinsic orbital ME effect. In Sec.~\ref{model_calculation}, we will calculate the intrinsic orbital ME effect for a model Hamiltonian with $\mathcal{PT}$-symmetry. In particular, we look at a system with antiferromagnetic loop current order purely originating from the orbital degrees of freedom.
Finally, we comment on two previous works \cite{PhysRevB.103.115432,PhysRevB.103.045401} studying the intrinsic orbital ME effect. They derived equations applicable to two-dimensional systems, including metals, and three-dimensional non-Chern insulators, using the semiclassical theory. This dimensional constraint is attributed to the second Chern form. In our work, we use a fully quantum mechanical approach, and our equations are applicable to arbitrary-dimensional systems, including metals. However, our formula for the intrinsic orbital ME tensor does not include the Chern-Simons term, as mentioned above. In addition, we note that there is a difference between our equation and the equations in Refs.~\cite{PhysRevB.103.115432,PhysRevB.103.045401}, i.e., the $1/3$ factor in Eq.~(\ref{me_response}) is replaced by $1/2$ in those papers.
\section{Model calculation} \label{model_calculation}
\begin{figure*}[t]
\includegraphics[width=0.4\linewidth]{ybco.pdf}
\includegraphics[width=0.4\linewidth]{energy.pdf}
\caption{(Left) The loop current order on the $\mathrm{Cu}$-$\mathrm{O_2}$ plane. The blue circles are Cu atoms, and the red circles are O atoms. The blue and red shaded areas represent the local orbital magnetization with opposite signs along the $z$ axis.
(Right) The band structure of the used model. This model has three bands and includes four Dirac points at energies $E=-0.44t$, $E=-0.27t$, and $E=0.33t$.} \label{ybco}
\end{figure*}
\begin{figure}
\includegraphics[width=1.0\linewidth]{ome2.pdf}
\caption{The intrinsic orbital ME tensor $\chi^{\mathrm{(me)}}_{y'z}$ for different chemical potentials and temperatures. We set $t=1.0, r=1.5t, t'=0.5t$ for the numerical calculation, and we introduce an infinitesimal dissipation $i \delta = 0.001i$ because the response function behaves as a delta function at the Dirac points, as discussed in the main text.
We use $\mu$ in units of $t$ and $\chi^{\mathrm{(me)}}_{y'z}$ in units of $e^2 a/\hbar$.
}
\label{IOME}
\end{figure}
In this section, we calculate the intrinsic orbital ME tensor for a model Hamiltonian. Because the orbital magnetization does not, in principle, require spin degrees of freedom, we consider here a spinless system, focusing on the contribution of the orbital moment. As discussed above, the intrinsic orbital ME effect is dominant for $\mathcal{PT}$-symmetric systems. Thus, we analyze an example with antiferromagnetic loop current order.
Loop current order is a kind of orbital order. Intuitively, electrons circulate locally around loops involving a few sites, inducing local orbital magnetic moments. Loop current order has been studied in cuprates \cite{PhysRevB.55.14554,PhysRevLett.96.197001,li2008unusual,BOURGES2011461,PhysRevLett.111.047005,zhao2017global} and has recently been reported as a candidate order in the Kagome superconductors $\mathrm{AV_3Sb_5}$ ($\mathrm{A} = \mathrm{K}, \mathrm{Rb}, \mathrm{Cs}$) \cite{mielke2022time} and in the Mott insulator $\mathrm{Sr_2IrO_4}$ \cite{zhao2016evidence,jeong2017time,PhysRevX.11.011021}; it is also discussed as an orbital order in twisted bilayer graphene \cite{liu2021orbital,PhysRevX.9.031021,PhysRevB.103.035427,PhysRevX.10.031034}.
In this section, we use a model Hamiltonian for antiferromagnetic loop current order in cuprates, shown in Fig.~\ref{ybco}.
This loop current order belongs to the class of orbital magnetic quadrupole orders.
This Hamiltonian can be written as \cite{PhysRevB.98.060402,BOURGES2011461,PhysRevB.85.155106}
\begin{eqnarray}
H_{\bm{k}} =
\begin{pmatrix}
0 & its_x + ir c_x & its_y + irc_y \\
-its_x -irc_x & 0 & t' s_x s_y \\
-its_y -i rc_y & t' s_x s_y & 0
\end{pmatrix} ,
\end{eqnarray}
where $s_i = \sin (k_i/2)$ and $c_i = \cos(k_i/2)$, and we set the lattice constant $a=1$. This Hamiltonian includes three orbitals without spin degrees of freedom. The basis is $\ket{d}$ on the copper sites, and $\ket{p_x}$ and $\ket{p_y}$ on the oxygen sites. $t$ and $t'$ are hopping parameters, and $r$ is the order parameter of the loop current shown in Fig.~\ref{ybco}.
The band structure of this model is shown in Fig.~\ref{ybco}. This model has four Dirac points at energies $E=-0.44t$, $E=-0.27t$, and $E=0.33t$.
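As an illustration, the band structure in Fig.~\ref{ybco} can be reproduced by diagonalizing $H_{\bm{k}}$ directly. The following minimal numerical sketch (the $\Gamma$--$X$--$M$--$\Gamma$ path and the NumPy implementation are our own choices) uses the same parameters $t=1.0$, $t'=0.5t$, and $r=1.5t$ as in Fig.~\ref{IOME}:
\begin{verbatim}
import numpy as np

def H(kx, ky, t=1.0, tp=0.5, r=1.5):
    # three-band loop-current Hamiltonian (lattice constant a = 1)
    sx, sy = np.sin(kx/2), np.sin(ky/2)
    cx, cy = np.cos(kx/2), np.cos(ky/2)
    return np.array(
        [[0.0,                 1j*t*sx + 1j*r*cx,  1j*t*sy + 1j*r*cy],
         [-1j*t*sx - 1j*r*cx,  0.0,                tp*sx*sy],
         [-1j*t*sy - 1j*r*cy,  tp*sx*sy,           0.0]], dtype=complex)

# band energies along the path Gamma -> X -> M -> Gamma (in units of t)
path = ([(k, 0.0) for k in np.linspace(0.0, np.pi, 100)] +
        [(np.pi, k) for k in np.linspace(0.0, np.pi, 100)] +
        [(k, k) for k in np.linspace(np.pi, 0.0, 100)])
bands = np.array([np.linalg.eigvalsh(H(kx, ky)) for kx, ky in path])
print(bands.min(), bands.max())     # overall bandwidth of the three bands
\end{verbatim}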
For simplicity, we relabel the coordinates $(k_x,k_y)$ as $(k_{x'},k_{y'})$ in the following discussion.
Let us discuss the intrinsic orbital ME tensor in this model. In two-dimensional systems, the diagonal components of $\chi^{\mathrm{(me)}}_{ij}$ vanish. Because this Hamiltonian has $y'$-mirror symmetry, the only nonzero component is $\chi^{\mathrm{(me)}}_{y'z}$. We show $\chi^{\mathrm{(me)}}_{y'z}$ in Fig.~\ref{IOME} and plot its dependence on the temperature and the chemical potential.
Here, we use the parameters $t=1.0$, $t'=0.5t$, and $r=1.5t$ for the numerical calculations, and we introduce an infinitesimal dissipation $i\delta$ as an adiabatic factor.
In the orbital ME response, we can see peak structures at the energies of the Dirac points at low temperatures. This behavior is typical of systems with linear dispersion.
When we analyze $\tilde{Q}^{\mathrm{(m)}}_{y'z}(\mu) = \int ^{\mu}_{-\infty} d\mu' \chi^{\mathrm{(me)}}_{y'z}(\mu')$, which is a part of the orbital magnetic quadrupole moment, for a Dirac Hamiltonian having the same symmetry as our model above \cite{PhysRevB.98.060402},
\begin{eqnarray}
H^{\mathrm{Dirac}}_{\bm{k}} = v' k_{x'} + v_x k_{x'} \sigma_{x} + v_y k_{y'} \sigma_{y},
\end{eqnarray}
we can see that $\tilde{Q}^{\mathrm{(m)}}_{y'z}(\mu)$ shows a step-function behavior and jumps from $e^2v' |v_y|/(16\pi |v_x|)$ to $-e^2v' |v_y|/(16\pi |v_x|)$ at the Dirac points, with an additional logarithmic dependence (see Appendix \ref{dirac_hamiltonian}). Thus, $\chi^{\mathrm{(me)}}_{y'z}(\mu)$ behaves like a delta function at the Dirac points.
\section{Conclusion} \label{conclusion}
In summary, we have derived the intrinsic orbital ME effect within a fully quantum mechanical approach using the Kubo formula. The obtained formula is based on the current-current correlation function, so the response tensor is gauge invariant and does not depend on the origin of the coordinate system. It is well-defined as an observable quantity in bulk systems. The formula is applicable to insulators and metals at zero and finite temperatures and is valid in arbitrary dimensions.
We have shown that the intrinsic ME tensor satisfies the St\v{r}eda formula, which directly relates it to the thermodynamic orbital magnetic quadrupole moment in insulators at zero temperature. The intrinsic part of the orbital ME tensor is dominant in $\mathcal{PT}$-symmetric systems because the extrinsic part vanishes. Thus, we have applied the obtained formula to an antiferromagnetic loop current order proposed for cuprates. We have demonstrated that the intrinsic ME tensor is strongly enhanced at low temperatures, especially around the Dirac points.
The obtained formula can also be applied to other $\mathcal{PT}$-symmetric systems such as antiferromagnets. Because this formula describes the orbital magnetization induced by an electric field, it will be helpful in the study of orbital orders, e.g., loop current orders, beyond spin orders. Furthermore, this formula can be used in first-principles calculations and allows for detailed calculations in real materials.
Finally, shortly before finishing our work, we noticed a related work studying the spatially dispersive natural optical conductivity \cite{pozo2022multipole}. In our paper, we focus on the orbital ME effect.
\section*{Acknowledgements}
K.S. acknowledges support as a JSPS research fellow and is supported by JSPS KAKENHI, Grant No.22J23393. A.K. acknowledges support by JST, the establishment of university fellowships towards the creation of science,
technology, and innovation, Grant Number JPMJFS2123. R.P. is supported by JSPS KAKENHI No.~JP18K03511.
|
{
"arxiv_id": "2302.13194",
"language": "en",
"timestamp": "2023-02-28T02:12:24",
"url": "https://arxiv.org/abs/2302.13194",
"yymm": "2302"
} | \section*{Results}
\subsection*{Theory of de~Broglie wave packets}
\noindent
de~Broglie posited two distinct entities accompanying massive particles: an \textit{internal} `clock' and an \textit{external} `phase wave' \cite{MacKinnon76AJP,Espinosa82AJP}. For a particle of rest mass $m_{\mathrm{o}}$ whose energy is expressed as $E_{\mathrm{o}}\!=\!m_{\mathrm{o}}c^{2}\!=\!\hbar\omega_{\mathrm{o}}$, the internal clock and the infinite-wavelength phase wave coincide at the same de~Broglie frequency $\omega_{\mathrm{o}}$ in the particle's rest frame [Fig.~\ref{Fig:ConceptofDBM}(a)]; here $c$ is the speed of light in vacuum, and $\hbar$ is the reduced Planck constant. When the particle moves at a velocity $v$, the frequencies observed in the rest frame diverge: the internal frequency drops to $\omega\!=\!\omega_{\mathrm{o}}\sqrt{1-\beta_{v}^{2}}$ whereas the phase-wave frequency increases to $\omega\!=\!\omega_{\mathrm{o}}\big/\sqrt{1-\beta_{v}^{2}}$ and takes on a finite wavelength $\lambda$, where $\beta_{v}\!=\!\tfrac{v}{c}$ [Fig.~\ref{Fig:ConceptofDBM}(b)]. The wave number $k\!=\!\tfrac{2\pi}{\lambda}$ for the phase wave is determined by the de~Broglie dispersion relationship $\omega^{2}\!=\!\omega_{\mathrm{o}}^{2}+c^{2}k^{2}$ [Fig.~\ref{Fig:ConceptofDBM}(c)], so that it is a solution to the Klein-Gordon equation. Because de~Broglie phase waves are extended, a particle with a well-defined velocity cannot be localized. Instead, spatially localizing the particle requires introducing an \textit{ad hoc} uncertainty in the particle velocity (a spread from $v$ to $v+\Delta v$) to induce a bandwidth $\Delta\omega$ (from $\omega_{\mathrm{c}}$ to $\omega_{\mathrm{c}}+\Delta\omega$), or $\Delta k$ (from $k_{\mathrm{c}}$ to $k_{\mathrm{c}}+\Delta k$) \cite{deBroglie25,Mackinnon78FP}, thus resulting in a finite-width wave packet that is also a solution to the Klein-Gordon equation [Fig.~\ref{Fig:ConceptofDBM}(c)]. The wave-packet \textit{group velocity} $\widetilde{v}\!=\!1\big/\tfrac{dk}{d\omega}\big|_{\omega_{\mathrm{c}}}\!=\!v$ is equal to the particle velocity, whereas its phase velocity is $v_{\mathrm{ph}}\!=\!\tfrac{\omega}{k}\!=\!\tfrac{c^{2}}{v}$ ($v_{\mathrm{ph}}\widetilde{v}\!=\!c^{2}$; see Methods). However, de~Broglie wave packets are dispersive, $\tfrac{d\widetilde{v}}{d\omega}\!\neq\!0$. Moreover, because there is no upper limit on the exploitable bandwidth [Fig.~\ref{Fig:ConceptofDBM}(c)], de~Broglie wave packets lack an intrinsic length scale; that is, there is no \textit{minimum} wave-packet length that is uniquely identified by the particle parameters (mass $m_{\mathrm{o}}$ and velocity $v$).
\subsection*{Non-dispersive de~Broglie-Mackinnon (dBM) wave packets}
\noindent
Mackinnon proposed an altogether different conception for constructing localized \textit{non-dispersive} wave packets out of de~Broglie phase waves that jettisons the need for introducing an \textit{ad hoc} uncertainty in particle velocity to localize it. Key to this proposal is a Copernican inversion of the roles of particle and observer. Rather than a single privileged observer associated with the rest frame in Fig.~\ref{Fig:ConceptofDBM}(c), Mackinnon considered a continuum of potential observers traveling at physically accessible velocities (from $-c$ to $c$). The wave-packet bandwidth $\Delta k$ that is established in a particular reference frame is a result of the spread in the particle velocity as observed in all these possible frames. Consequently, the particle can be localized, and a unique wave-packet length scale identified, even when its velocity is well-defined.
The physical setting envisioned by Mackinnon is depicted in Fig.~\ref{Fig:ConceptofDBM}(d), where the particle moves at a velocity $v$ and an observer moves at $u$, both with respect to a common rest frame in which the dBM wave packet is constructed. Each potential observer records a different phase-wave frequency and wavelength. The crucial step is that \textit{all} potential observers travelling at velocities $u$ ranging from $-c$ to $c$ report their observations to the selected rest frame. These phase waves are superposed in this frame -- after accounting for Lorentz contraction and time dilation (Methods) -- to yield a wave packet uniquely identified by the particle rest mass $m_{\mathrm{o}}$ and velocity $v$.
Consider first the simple scenario where the particle is at rest with respect to the selected frame ($v\!=\!0$). Each observer reports to the common rest frame a frequency $\omega'\!=\!\omega_{\mathrm{o}}/\sqrt{1-\beta_{u}^{2}}$ and a wave number $k'\!=\!-k_{\mathrm{o}}\beta_{u}/\sqrt{1-\beta_{u}^{2}}$, where $\beta_{u}\!=\!\tfrac{u}{c}$. Accounting for time dilation results in $\omega'\rightarrow\omega\!=\!\omega_{\mathrm{o}}$, and accounting for Lorentz contraction produces $k'\rightarrow k\!=\!-k_{\mathrm{o}}\beta_{u}$. Therefore, the frequency in the rest frame based on the recordings of \textit{all} the observers is $\omega\!=\!\omega_{\mathrm{o}}$, just as in the case of a conventional de~Broglie phase wave, but the wave number now extends over the range from $-k_{\mathrm{o}}$ to $k_{\mathrm{o}}$ as the observer velocity $u$ ranges from $c$ to $-c$ [Fig.~\ref{Fig:ConceptofDBM}(e)]. In other words, the observer velocity $u$ serves as an internal parameter that is swept to establish a new dispersion relationship whose slope is zero, thus indicating a particle at rest $\widetilde{v}\!=\!v\!=\!0$ \cite{Mackinnon78FP,Saari04PRE}. The spectral representation of the support domain for this wave packet is a horizontal line $\omega\!=\!\omega_{\mathrm{o}}$ in $(k,\tfrac{\omega}{c})$-space delimited by the two light-lines $k\!=\!\pm\tfrac{\omega}{c}$ [Fig.~\ref{Fig:ConceptofDBM}(e)]. In contradistinction to conventional de~Broglie wave packets, a physically motivated length scale emerges for the dBM wave packet. The maximum spatial bandwidth is $\Delta k\!=\!2k_{\mathrm{o}}$, which corresponds to a minimum wave-packet length scale of $L_{\mathrm{min}}\!\sim\!\tfrac{\lambda_{\mathrm{o}}}{2}$, where $\lambda_{\mathrm{o}}\!=\!\tfrac{2\pi}{k_{\mathrm{o}}}$. This can be viewed as an `optical theorem', whereby the dBM wave packet for a stationary particle cannot be spatially localized below the associated de~Broglie wavelength $\lambda_{\mathrm{o}}$. Taking an equal-weight superposition across all the wave numbers, the dBM wave packet is $\psi(z;t)\propto e^{-i\omega_{\mathrm{o}}t}\mathrm{sinc}(\tfrac{\Delta k}{\pi}z)$, where $\mathrm{sinc}(x)\!=\!\tfrac{\sin{\pi x}}{\pi x}$ \cite{Mackinnon78FP}.
A similar procedure can be followed when $v\!\neq\!0$, whereupon the frequency and wave number in the selected reference frame are $\omega\!=\!\omega_{\mathrm{o}}(1-\beta_{v}\beta_{u})\big/\sqrt{1-\beta_{v}^{2}}$ and $k\!=\!k_{\mathrm{o}}(\beta_{v}-\beta_{u})\big/\sqrt{1-\beta_{v}^{2}}$, respectively (Methods). Because $v$ is fixed whereas $u$ extends from $-c$ to $c$, a linear dispersion relationship between $\omega$ and $k$ is established, $k\!=\!\tfrac{1}{\beta_{v}}(\tfrac{\omega}{c}-\tfrac{k_{\mathrm{o}}^{2}}{k_{1}})$, where $k_{1}\!=\!k_{\mathrm{o}}/\sqrt{1-\beta_{v}^{2}}$. The slope of the dBM dispersion relationship indicates that $\widetilde{v}\!=\!v$ as in conventional de~Broglie wave packets, but the dBM wave packet is now \textit{non-dispersive}, $\tfrac{d\widetilde{v}}{d\omega}\!=\!0$ [Fig.~\ref{Fig:ConceptofDBM}(f)]. The limits on the spatial and temporal bandwidths for the dBM wave packet are $\Delta k\!=\!2k_{1}$ and $\tfrac{\Delta\omega}{c}\!=\!\beta_{v}\Delta k$, respectively, leading to a reduced characteristic length scale $L_{\mathrm{min}}\!\sim\!\tfrac{\lambda_{\mathrm{o}}}{2}\sqrt{1-\beta_{v}^{2}}$ as a manifestation of Lorentz contraction; a faster particle is more tightly localized. By assigning equal complex amplitudes to all the phase waves associated with this moving particle, the propagation-invariant dBM wave packet is $\psi(z;t)\propto e^{i\beta_{v}\Delta k(z-\widetilde{v}t)}\mathrm{sinc}(\tfrac{\Delta k}{\pi}(z-\widetilde{v}t))$. Crucially, unlike conventional de~Broglie wave packets, the dBM wave packet is \textit{not} a solution to the Klein-Gordon equation, although a modified wave equation can perhaps be constructed for it \cite{Mackinnon78FP}.
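The non-dispersive nature of the dBM wave packet can be illustrated with a short numerical sketch (the equal-weight choice over observer velocities and all numerical values are ours, for illustration only): superposing the phase waves reported by observers with velocities $u\in(-c,c)$ produces an envelope that translates rigidly at $\widetilde{v}=v$ without spreading.
\begin{verbatim}
import numpy as np

c, w0 = 1.0, 2*np.pi                     # units with c = 1; rest frequency omega_o
k0, bv = w0/c, 0.6                       # de Broglie wave number and beta_v = v/c

# phase waves reported by observers with velocities u in (-c, c)
bu = np.linspace(-0.999, 0.999, 2001)
w = w0*(1 - bv*bu)/np.sqrt(1 - bv**2)    # frequencies in the chosen frame
k = k0*(bv - bu)/np.sqrt(1 - bv**2)      # wave numbers in the chosen frame

def packet(z, t):
    # equal-weight superposition of all reported phase waves
    return np.exp(1j*(np.outer(z, k) - w*t)).mean(axis=1)

z, T = np.linspace(-30.0, 30.0, 1001), 10.0
# |psi(z, T)| equals |psi(z - vT, 0)|: rigid translation at v, no spreading
err = np.abs(np.abs(packet(z, T)) - np.abs(packet(z - bv*c*T, 0.0)))
print(err.max())                         # ~ 0 (up to rounding)
\end{verbatim}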
\subsection*{Optical de~Broglie-Mackinnon wave packets in free space}
\noindent
Despite their intrinsic interest from a fundamental point of view, dBM wave packets have remained to date theoretical entities. It has nevertheless been recognized that \textit{optical} waves in free space may provide a platform for their construction \cite{Saari04PRE,ZamboniRached2008PRA}. Because $(1+1)$D optical waves in free space are dispersion-free ($k\!=\!\tfrac{\omega}{c}$ and $v_{\mathrm{ph}}\!=\!\widetilde{v}\!=\!c$), producing optical dBM wave packets requires first adding a transverse coordinate $x$ to enlarge the field dimensionality to $(2+1)$D. The dispersion relationship thus becomes $k_{x}^{2}+k_{z}^{2}\!=\!(\tfrac{\omega}{c})^{2}$, which represents the surface of a `light-cone' \cite{Donnelly93ProcRSLA,Yessenov22AOP}; here $k_{x}$ and $k_{z}$ are the transverse and longitudinal components of the wave vector along $x$ and $z$, respectively. The spectral support of any optical field corresponds to some region on the light-cone surface [Fig.~\ref{Fig:OpticaldBMConcept}(a)]. For a fixed value of $k_{x}\!=\!\pm\tfrac{\omega_{\mathrm{o}}}{c}$, we retrieve the axial dispersion relationship for de~Broglie phase waves $\omega^{2}\!=\!\omega_{\mathrm{o}}^{2}+c^{2}k_{z}^{2}$. A convenient parametrization of the field makes use of the propagation angle $\varphi(\omega)$ with respect to the $z$-axis for the plane wave at a frequency $\omega$, whereupon $k_{x}(\omega)\!=\!\tfrac{\omega}{c}\sin{\varphi(\omega)}$ and $k_{z}(\omega)\!=\!\tfrac{\omega}{c}\cos{\varphi(\omega)}$. \textit{Angular dispersion} is thus introduced into the $(2+1)$D field \cite{Torres10AOP,Fulop10Applications}, and its spectral support on the light-cone surface is a one-dimensional (1D) trajectory. We take \textit{optical} dBM wave packets to be those whose axial dispersion relationship $\omega(k_{z})$ conforms to that of a dBM wave packet. This requires that the projection of the spectral support onto the $(k_{z},\tfrac{\omega}{c})$-plane be linear and delimited by the light-lines $k_{z}\!=\!\pm\tfrac{\omega}{c}$. Indeed, the spectral projections onto the $(k_{z},\tfrac{\omega}{c})$-plane in Fig.~\ref{Fig:OpticaldBMConcept}(a,b) coincide with those in Fig.~\ref{Fig:ConceptofDBM}(e,f).
Consider first a monochromatic field $\omega\!=\!\omega_{\mathrm{o}}$ whose spectral support is the circle at the intersection of the light-cone with a horizontal iso-frequency plane [Fig.~\ref{Fig:OpticaldBMConcept}(a)]. This monochromatic field comprises plane waves of the same frequency $\omega_{\mathrm{o}}$ that travel at angles $\varphi$ extending from 0 to $2\pi$, whose axial wave numbers are $k_{z}(\varphi)\!=\!\pm\sqrt{k_{\mathrm{o}}^{2}-k_{x}^{2}}\!=\!k_{\mathrm{o}}\cos\varphi$ and extend from $-k_{\mathrm{o}}$ to $k_{\mathrm{o}}$. This optical wave packet [Fig.~\ref{Fig:OpticaldBMConcept}(a)] corresponds to the dBM wave packet for a particle in its rest frame [Fig.~\ref{Fig:ConceptofDBM}(e)], and $\varphi$ serves as the new internal parameter to be swept in order to produce the targeted dBM dispersion relationship, corresponding to the observer velocity $u$ in Fig.~\ref{Fig:ConceptofDBM}(e). By setting the spectral amplitudes equal for all the plane-wave components, we obtain $\psi(x,z;t)\propto e^{-i\omega_{\mathrm{o}}t}\,\mathrm{sinc}(\tfrac{\Delta k_{z}}{\pi}\sqrt{x^{2}+z^{2}})$, where $\Delta k_{z}\!=\!2k_{\mathrm{o}}$ [Fig.~\ref{Fig:OpticaldBMConcept}(a)]. Such a wave packet can be produced by a stationary, monochromatic planar dipole placed at the origin of the $(x,z)$-plane. Observing this optical field requires coherent field detectors arranged around the $2\pi$ angle subtended by the dipole, and then communicating the recorded measurements to a central station. This procedure is therefore \textit{not} dissimilar in principle from that envisioned by Mackinnon for the dBM wave packet associated with a stationary particle, in which the measurements recorded by observers traveling at different velocities are communicated to the common rest frame [Fig.~\ref{Fig:ConceptofDBM}(d)].
When the dipole moves at a velocity $v$ along the $z$-axis with respect to stationary detectors encircling it, each constituent plane wave undergoes a different Doppler shift in the rest frame of the detectors. The field still comprises plane waves travelling at angles $\varphi$ extending from 0 to $2\pi$, but each plane wave now has a \textit{different} frequency $\omega$. Nevertheless, the new spectral support for the dBM wave packet on the light-cone is related to that for the stationary monochromatic dipole. Indeed, the Lorentz transformation associated with the relative motion between the source and detectors tilts the horizontal iso-frequency spectral plane in Fig.~\ref{Fig:OpticaldBMConcept}(a) by an angle $\theta$ with respect to the $k_{z}$-axis as shown in Fig.~\ref{Fig:OpticaldBMConcept}(b), where $\tan{\theta}\!=\!\beta_{v}$ \cite{Belanger86JOSAA,Longhi04OE,Saari04PRE,Kondakci18PRL}, thus yielding a tilted ellipse whose projection onto the $(k_{x},\tfrac{\omega}{c})$-plane is:
\begin{equation}\label{Eq:EllipseInFreeSpace}
\frac{k_{x}^{2}}{k_{\mathrm{o}}^{2}}+\frac{(\omega-ck_{1})^{2}}{(\Delta\omega/2)^{2}}=1.
\end{equation}
The spectral projection onto the $(k_{z},\tfrac{\omega}{c})$-plane is now the line $k_{z}\!=\!k_{+}+\tfrac{\omega-\omega_{+}}{\widetilde{v}}\!=\!\tfrac{1}{\beta_{v}}(\tfrac{\omega}{c}-\tfrac{k_{\mathrm{o}}^{2}}{k_{1}})$, where $\widetilde{v}\!=\!c\tan{\theta}\!=\!v$ is the wave-packet group velocity along $z$, $k_{+}\!=\!\tfrac{\omega_{+}}{c}\!=\!k_{\mathrm{o}}\sqrt{\tfrac{1+\beta_{v}}{1-\beta_{v}}}$, and $k_{1}\!=\!k_{\mathrm{o}}/\sqrt{1-\beta_{v}^{2}}$. The spatial and temporal bandwidths are related via $\tfrac{\Delta\omega}{c}\!=\!\beta_{v}\Delta k_{z}$, where $\Delta k_{z}\!=\!2k_{1}$. The plane waves travel in different directions in the $(x,z)$-plane in such a way that their \textit{axial} wave numbers $k_{z}$ reproduce the dBM dispersion relationship [compare Fig.~\ref{Fig:ConceptofDBM}(f) to Fig.~\ref{Fig:OpticaldBMConcept}(b)]. By setting the complex spectral amplitudes constant for all frequencies, we obtain the dBM wave packet (with $\widetilde{v}<c$):
\begin{equation}\label{Eq:OpticaldBM}
\psi(x,z;t)\propto e^{i\beta_{v}\Delta k_{z}(z-\widetilde{v}t)}\;\mathrm{sinc}\left(\frac{\Delta k_{z}}{\pi}\sqrt{x^{2}+(z-\widetilde{v}t)^{2}}\right).
\end{equation}
Two parameters uniquely identify the optical dBM wave packet: the group velocity $\widetilde{v}$ (corresponding to the particle velocity) and the wave number $k_{\mathrm{o}}$ (corresponding to the particle mass). Furthermore, the signature of the dBM wave packet in Eq.~\ref{Eq:OpticaldBM} is its circularly symmetric spatio-temporal profile in $(x,t)$-space in any axial plane $z$. In contrast, all other propagation-invariant wave packets that have been observed in free space are X-shaped \cite{Saari97PRL,Porras03PRE2,Turunen10PO,FigueroaBook14,Yessenov22AOP} and are \textit{not} circularly symmetric. Indeed, truncating the spectrum of the optical dBM wave packet obstructs the formation of the circularly symmetric profile and gives rise instead to the more familiar X-shaped counterpart \cite{ZamboniRached2008PRA,Yessenov22AOP}. The O-shaped spatio-temporal profile as indicated by Eq.~\ref{Eq:OpticaldBM} can be observed only when the full bandwidth -- delimited by the light-lines -- is included.
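As a simple numerical check (not part of the original analysis), the short Python sketch below evaluates the envelope of the wave packet in Eq.~\ref{Eq:OpticaldBM} and verifies that it depends only on the radial coordinate $\sqrt{x^{2}+(z-\widetilde{v}t)^{2}}$, i.e., that the profile is circularly symmetric; the values chosen for $\beta_{v}$ and the wavelength scale are arbitrary illustrative assumptions.
\begin{verbatim}
# Minimal sketch: circular symmetry of the dBM envelope (illustrative values only).
import numpy as np

beta_v = 0.5                    # assumed v/c, arbitrary illustrative value
lam_o  = 1e-6                   # assumed wavelength scale [m]
k_o    = 2*np.pi/lam_o
k_1    = k_o/np.sqrt(1 - beta_v**2)
dkz    = 2*k_1                  # Delta k_z = 2 k_1

def envelope(x, zeta):
    """|psi| in the frame moving at v, with zeta = z - v*t (normalized-sinc convention)."""
    r = np.sqrt(x**2 + zeta**2)
    return np.abs(np.sinc(dkz*r/np.pi))      # = |sin(dkz*r)/(dkz*r)|

r0 = 0.3e-6
print(np.isclose(envelope(r0, 0.0), envelope(0.0, r0)))                       # True
print(np.isclose(envelope(r0/np.sqrt(2), r0/np.sqrt(2)), envelope(r0, 0.0)))  # True
\end{verbatim}
Any pair of points at equal radius from the wave-packet center yields the same amplitude, which is the O-shaped signature discussed above.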
The field in the $(x,z)$-plane recorded by stationary detectors encircling the dipole takes the form shown in Fig.~\ref{Fig:OpticaldBMConcept}(b), as pointed out recently in a thought experiment by Wilczek \cite{WilczekBook}. Despite the conceptual simplicity of this optical scheme for producing dBM wave packets, it nevertheless faces obvious experimental challenges. Encircling an optical dipole moving at a relativistic speed with stationary detectors is far from practical realizability. The more realistic configuration in which the detectors are restricted to a small angular range within the paraxial regime centered on the $z$-axis truncates the recorded field and precludes observation of the O-shaped spatio-temporal profile \cite{ZamboniRached2008PRA,Yessenov22AOP}. For these reasons, it is not expected that the O-shaped dBM wave packet can be observed using spatio-temporally structured optical fields in free space.
\subsection*{Optical de Broglie-Mackinnon wave packets in a dispersive medium}
\noindent
The necessity of including the entire bandwidth delimited by the intersection of the dBM dispersion relationship with the free-space light-cone [Fig.~\ref{Fig:OpticaldBMConcept}(a-b)] presents insurmountable experimental obstacles. Producing paraxial dBM wave packets necessitates confining the spectrum to a narrow range of values of $k_{z}$ centered at a value $k_{z}\!\sim\!k_{\mathrm{o}}\!>\!0$. Crucially, the linear spatio-temporal spectrum projected onto the $(k_{z},\tfrac{\omega}{c})$-plane must remain delimited at both ends by the light-line, to produce the circularly symmetric spatio-temporal wave-packet profile. Clearly these requirements cannot be met in free space. Nevertheless, this challenge can be tackled by exploiting the unique features of optical-wave propagation in the presence of anomalous GVD. Specifically, the light-cone structure is modified in the presence of anomalous GVD so that the curvature of the light-line has the same sign as that of the de~Broglie dispersion relationship [Fig.~\ref{Fig:OpticaldBMConcept}(c)]. In this case, imposing the characteristically linear dBM dispersion relationship produces a spectral support domain on the dispersive light-cone surface that satisfies all the above-listed requirements: (1) $k_{z}\!>\!0$ is maintained throughout the entire span of propagation angles $\varphi(\omega)$; (2) the field simultaneously remains within the paraxial regime; and (3) the spectrum is delimited at both ends by the light-line [Fig.~\ref{Fig:OpticaldBMConcept}(c)], thus yielding a wave packet having a circularly symmetric spatio-temporal profile. The spectral support is in the form of an ellipse at the intersection of the dispersive light-cone with a tilted spectral plane. The center of this ellipse is displaced to a large value of $k_{z}$, and the spectral projection onto the $(k_{z},\tfrac{\omega}{c})$-plane is a line making an angle $\theta$ with the $k_{z}$-axis. The resulting wave packet is propagation-invariant in the dispersive medium and travels at a velocity $\widetilde{v}\!=\!c\tan{\theta}$ independently of the physical parameters of the dispersive medium.
In the anomalous-GVD regime, the wave number is given by $k(\omega)\!=\!n(\omega)\omega/c=k(\omega_{\mathrm{o}}+\Omega)\!\approx\!n_{\mathrm{m}}k_{\mathrm{o}}+\tfrac{\Omega}{\widetilde{v}_{\mathrm{m}}}-\tfrac{1}{2}|k_{2\mathrm{m}}|\Omega^{2}+\cdots$; where $n(\omega)$ is the refractive index, and the following quantities are all evaluated at $\omega\!=\!\omega_{\mathrm{o}}$: $n_{\mathrm{m}}\!=\!n(\omega_{\mathrm{o}})$ is the refractive index, $\widetilde{v}_{\mathrm{m}}\!=\!1\big/\tfrac{dk}{d\omega}\big|_{\omega_{\mathrm{o}}}$ is the group velocity for a plane-wave pulse in the medium, and $k_{2\mathrm{m}}\!=\!\tfrac{d^{2}k}{d\omega^{2}}\big|_{\omega_{\mathrm{o}}}\!=\!-|k_{2\mathrm{m}}|$ is the negative-valued anomalous GVD coefficient \cite{SalehBook07}. The dispersion relationship in the medium $k_{x}^{2}+k_{z}^{2}\!=\!k^{2}$ corresponds geometrically to the surface of the modified dispersive light-cone in Fig.~\ref{Fig:OpticaldBMConcept}(c). Similarly to the free-space scenario, we impose a spectral constraint of the form $k_{z}\!=\!n_{\mathrm{m}}k_{\mathrm{o}}+\tfrac{\Omega}{\widetilde{v}}\!=\!\tfrac{1}{\beta_{v}}\left\{\tfrac{\omega}{c}-k_{\mathrm{o}}(1-n_{\mathrm{m}}\beta_{v})\right\}$ in the medium, where $\Omega\!=\!\omega-\omega_{\mathrm{o}}$ and $\widetilde{v}\!=\!c\tan{\theta}$ is the group velocity of the wave packet [Fig.~\ref{Fig:OpticaldBMConcept}(c)]. The wave-packet spectrum as defined by this constraint is delimited by the light-line at its two ends, both located however in the range $k_{z}\!>\!0$, in contrast to the previous scenarios depicted in Fig.~\ref{Fig:ConceptofDBM}(e,f) and Fig.~\ref{Fig:OpticaldBMConcept}(a,b); see Methods.
The spectral projections onto the $(k_{x},\tfrac{\omega}{c})$ and $(k_{x},k_{z})$ planes of the spectral support on the dispersive light-cone are ellipses (Methods):
\begin{equation}\label{Eq:DispersiveEllipseOmegaKx}
\frac{k_{x}^{2}}{k_{x,\mathrm{max}}^{2}}+\frac{(\omega-\omega_{\mathrm{c}})^{2}}{(\Delta\omega/2)^{2}}=1,\quad \frac{k_{x}^{2}}{k_{x,\mathrm{max}}^{2}}+\frac{(k_{z}-k_{\mathrm{c}})^{2}}{(\Delta k_{z}/2)^{2}}=1,
\end{equation}
where the temporal bandwidth is $\tfrac{\Delta\omega}{c}\!=\!2\tfrac{k_{\mathrm{o}}}{\sigma_{\mathrm{m}}}\tfrac{1-\beta_{v}'}{\beta_{v}^{2}}\!=\!\beta_{v}\Delta k_{z}$, $\sigma_{\mathrm{m}}\!=\!c\omega_{\mathrm{o}}|k_{2\mathrm{m}}|$ is a dimensionless dispersion coefficient, $\beta_{v}'\!=\!\tfrac{\widetilde{v}}{\widetilde{v}_{\mathrm{m}}}$, $k_{x,\mathrm{max}}\!=\!\tfrac{1}{2}\tfrac{\Delta\omega}{c}\sqrt{n_{\mathrm{m}}\sigma_{\mathrm{m}}}$, $\omega_{\mathrm{c}}\!=\!\omega_{\mathrm{o}}-\Delta\omega/2$, $k_{\mathrm{c}}\!=\!n_{\mathrm{m}}k_{\mathrm{o}}-\tfrac{\Delta k_{z}}{2}$, $k_{x,\mathrm{max}}\!\ll\!n_{\mathrm{m}}k_{\mathrm{o}}$, $\Delta k_{z}\!\ll\!n_{\mathrm{m}}k_{\mathrm{o}}$, and $\Delta\omega\!\ll\!\omega_{\mathrm{o}}$. It is crucial to recognize that the ellipse projected onto the $(k_{x},k_{z})$-plane does \textit{not} enclose the origin $(k_{x},k_{z})\!=\!(0,0)$, but is rather displaced to a central value $k_{\mathrm{c}}\!\gg\!\Delta k_{z}$. Therefore, the optical field comprises plane-wave components that propagate only in the forward direction within a small angular range centered on the $z$-axis, and the field thus remains within the paraxial domain. Nevertheless, because the spectrum is delimited at both ends by the curved dispersive light-line, the resulting spatio-temporal profile is circularly symmetric in any axial plane $z$. This wave packet in the dispersive medium thus satisfies all the above-listed desiderata for an optical dBM wave packet, but can be readily synthesized and observed, in contrast to its free-space counterparts. One difficulty, however, arises from the form of $\varphi(\omega)$ in the dispersive medium, which differs fundamentally from that in free space. Each frequency $\omega$ in a free-space optical dBM wave packet is associated with two propagation angles $\pm\varphi(\omega)$. However, each propagation angle $\varphi$ is associated with a single frequency, so that $|\varphi(\omega)|$ is one-to-one. In the optical dBM wave packet in the dispersive medium, each $\omega$ is still associated with two propagation angles $\pm\varphi(\omega)$; but $\varphi(\omega)$ is now two-to-one, so that $\varphi(\omega)$ is folded back on itself [Fig.~\ref{Fig:OpticaldBMConcept}(c)]. To synthesize such a field configuration, a synthesizer capable of sculpting $\varphi(\omega)$ almost arbitrarily is required.
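To make the scale of these quantities concrete, the following rough Python sketch (an illustration, not a re-analysis of the experiment) evaluates the bandwidth relations above using representative values: $|k_{2\mathrm{m}}|\!\sim\!500$~fs$^2$/mm, $\widetilde{v}_{\mathrm{m}}\!\approx\!c$, $n_{\mathrm{m}}\!\approx\!1$, $\lambda_{\mathrm{o}}\!\approx\!1054$~nm, and $\widetilde{v}\!=\!0.9975c$; all of these are assumptions patterned on the parameters quoted in the experimental section below.
\begin{verbatim}
# Rough evaluation of the elliptical spectral-support parameters (representative values).
import numpy as np

c       = 3e8                      # [m/s]
lam_o   = 1054e-9                  # assumed carrier wavelength [m]
omega_o = 2*np.pi*c/lam_o
k_o     = omega_o/c
k2m     = 500e-30/1e-3             # |k_2m| = 500 fs^2/mm -> [s^2/m] (assumed)
n_m     = 1.0                      # assumed effective refractive index
beta_v  = 0.9975                   # v/c
beta_vp = 0.9975                   # v/v_m, assuming v_m ~ c

sigma_m    = c*omega_o*k2m                                # dimensionless dispersion coefficient
dom_over_c = 2*(k_o/sigma_m)*(1 - beta_vp)/beta_v**2      # Delta(omega)/c [rad/m]
dom        = c*dom_over_c                                 # temporal bandwidth [rad/s]
dlam       = lam_o**2*dom/(2*np.pi*c)                     # bandwidth in wavelength [m]
kx_max     = 0.5*dom_over_c*np.sqrt(n_m*sigma_m)          # spatial bandwidth [rad/m]

print(f"Delta-lambda ~ {dlam*1e9:.0f} nm, k_x,max ~ {kx_max*1e-6:.3f} rad/um")
\end{verbatim}
With these assumed inputs the sketch returns a bandwidth of roughly 20~nm and $k_{x,\mathrm{max}}\!\approx\!0.03$~rad/$\mu$m, the same order as the measured $\Delta\lambda\!\approx\!16$~nm and $\Delta k_{x}\!\approx\!0.03$~rad/$\mu$m reported below; the exact numbers are sensitive to the precise values of $\widetilde{v}_{\mathrm{m}}$ and $k_{2\mathrm{m}}$.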
\subsection*{Experimental confirmation}
\noindent\textbf{Setup.} To construct the optical dBM wave packet in free space from a generic pulsed beam in which the spatial and temporal degrees-of-freedom are uncoupled, we introduce angular dispersion by assigning to each wavelength $\lambda$ a particular pair of angles $\pm\varphi(\lambda)$, thereby coupling the spatial and temporal degrees-of-freedom. We carry out this task using a universal angular-dispersion synthesizer \cite{Hall21OE2}, in which a spatial light modulator (SLM) deflects each wavelength from a spectrally resolved laser pulse at prescribed angles, as illustrated in Fig.~\ref{Fig:Setup} (Methods). Because each wavelength $\lambda$ is deflected at $\varphi(\lambda)$ independently of all other wavelengths, $\varphi(\lambda)$ need \textit{not} be one-to-one. Indeed, it can readily be a two-to-one mapping as required for paraxial optical dBM wave packets. The dBM wave packet is formed once all the wavelengths are recombined by a grating to reconstitute the pulsed field. The spatio-temporal spectrum of the synthesized wave packet is acquired by operating on the spectrally resolved field with a spatial Fourier transform and recording the intensity with a CCD camera. This measurement yields the spatio-temporal spectrum projected onto the $(k_{x},\lambda)$-plane, from which we can obtain the spectral projection onto the $(k_{z},\lambda)$-plane. The spatio-temporal envelope $I(x;\tau)$ of the intensity profile at a fixed axial plane $z$ is reconstructed in the frame travelling at $\widetilde{v}$ ($\tau\!=\!t-z/\widetilde{v}$) via linear interferometry exploiting the procedure developed in Refs.~\cite{Kondakci19NC,Yessenov19OE,Bhaduri20NatPhot} (Methods). The dispersive medium exploited in our measurements is formed of a pair of chirped Bragg mirrors providing an anomalous GVD coefficient of $k_{2\mathrm{m}}\!\approx\!-500$~fs$^2$/mm and $\widetilde{v}_{\mathrm{m}}\!\approx\!c$ (Methods).
\noindent\textbf{Measurement results.} We first verify the unique signature of dBM wave packets in the presence of anomalous GVD; namely, the O-shaped spatio-temporal intensity profile at any axial plane after imprinting the dBM dispersion relationship onto the field. In Fig.~\ref{Fig:MeasurementsChangingV} we verify three sought-after features: (1) the closed elliptical spatio-temporal spectrum projected onto the $(k_{x},\lambda)$-plane; (2) the linear spectral projection onto the $(k_{z},\lambda)$-plane, indicating non-dispersive propagation in the dispersive medium; and (3) the circularly symmetric spatio-temporal intensity profile $I(x;\tau)$ reconstructed at a fixed axial plane ($z\!=\!30$~mm). In Fig.~\ref{Fig:MeasurementsChangingV}(a) we plot the measurements for an optical dBM wave packet having a group velocity $\widetilde{v}\!=\!0.9975c$. The temporal bandwidth is constrained to a maximum value of $\Delta\lambda\!\approx\!16$~nm, and the associated spatial bandwidth is $\Delta k_{x}\!\approx\!0.03$~rad/$\mu$m, thus resulting in a pulsewidth $\Delta T\!\approx\!200$~fs at $x\!=\!0$, and a spatial profile width $\Delta x\!\approx\!38$~$\mu$m at $\tau\!=\!0$. The spectral projection onto the $(k_{z},\lambda)$-plane is delimited at both ends by the curved light-line of the dispersive medium. In other words, a larger bandwidth is \textit{in}compatible at this group velocity with propagation invariance in the dispersive medium. Further increase in the bandwidth extends the spectral projection \textit{below} the dispersive light-line, which would contribute only evanescent field components. The measured spatio-temporal profile $I(x;\tau)$ therefore has the smallest dimensions in space and time for a circularly symmetric dBM wave packet compatible with the selected group velocity in the medium.
To the best of our knowledge, this is the first observation of an O-shaped spatio-temporal intensity profile for a dispersion-free wave packet in a linear dispersive medium. Previous realizations of dispersion-free ST wave packets in dispersive media (whether in the normal- or anomalous-GVD regimes) revealed X-shaped spatio-temporal profiles \cite{Hall22LPR} similar to those observed in free space \cite{Saari97PRL,Kondakci17NP,Kondakci19NC} or in non-dispersive dielectrics \cite{Bhaduri20NatPhot}. In these experiments, however, the wave packets were \textit{not} delimited spectrally by the dispersive-medium light-line, which is the prerequisite for the realization of O-shaped optical dBM wave packets.
As mentioned earlier, two parameters characterize a dBM wave packet: the velocity $v$ and the rest mass $m_{\mathrm{o}}$. The corresponding variables associated with the optical dBM wave packet are $\widetilde{v}$ and $\lambda_{\mathrm{o}}$, which can both be readily tuned in our experimental arrangement by changing the functional dependence of $\varphi$ on $\lambda$. In this way we can vary the first parameter; namely, the group velocity $\widetilde{v}$. Increasing the group velocity from $\widetilde{v}\!=\!0.9975c$ [Fig.~\ref{Fig:MeasurementsChangingV}(a)] to $\widetilde{v}\!=\!0.9985c$ [Fig.~\ref{Fig:MeasurementsChangingV}(b)] and then to $\widetilde{v}\!=\!0.999c$ [Fig.~\ref{Fig:MeasurementsChangingV}(c)] reduces the maximum exploitable temporal bandwidth from $\Delta\lambda\!\approx\!16$~nm to $\Delta\lambda\!\approx\!8$~nm and $\Delta\lambda\!\approx\!6$~nm, respectively, while retaining the closed elliptic spectral projection onto the $(k_{x},\lambda)$-plane, the linear spectral projection onto the $(k_{z},\lambda)$-plane, and the associated O-shaped spatio-temporal profile $I(x;\tau)$. The corresponding spatial bandwidths drop to $\Delta k_{x}\!\approx\!0.023$~rad/$\mu$m and $\Delta k_{x}\!\approx\!0.017$~rad/$\mu$m, respectively. In all three dBM wave packets in Fig.~\ref{Fig:MeasurementsChangingV}, we retain a fixed intersection with the dispersive light-line at $\lambda_{\mathrm{o}}\!\approx\!1054$~nm (corresponding to a fixed particle mass), such that reducing $\widetilde{v}$ decreases the wavelength of the second intersection point. The second parameter, the wavelength $\lambda_{\mathrm{o}}$ corresponding to particle rest mass $m_{\mathrm{o}}$ for de~Broglie phase waves, can also be readily tuned [Fig.~\ref{Fig:MeasurementsFixedV}]. Here, the maximum exploitable bandwidth changes as a result of shifting the value of $\lambda_{\mathrm{o}}$ from $\lambda_{\mathrm{o}}\!=\!1054$~nm [Fig.~\ref{Fig:MeasurementsFixedV}(a)] where $\Delta\lambda\!=\!16$~nm, to $\lambda_{\mathrm{o}}\!=\!1055$~nm [Fig.~\ref{Fig:MeasurementsFixedV}(b)] where $\Delta\lambda\!=\!14$~nm, and then to $\lambda_{\mathrm{o}}\!=\!1056$~nm [Fig.~\ref{Fig:MeasurementsFixedV}(c)] where $\Delta\lambda\!=\!12$~nm. Once again, both the spatial and temporal widths of the circularly symmetric O-shaped profile in the $(x,t)$-domain change accordingly.
The Airy wave packet, as mentioned earlier, is the \textit{unique} non-dispersive solution to Schr{\"o}dinger's equation -- no other waveform will do \cite{Unnikrishnan96AJP}. Although Mackinnon obtained a particular `sinc'-function-shaped wave packet \cite{Mackinnon78FP}, this waveform is \textit{not} unique. Indeed, the sinc-function results from combining all the de~Broglie phase waves with equal weights. However, dBM wave packets can in principle take on arbitrary waveforms by associating different magnitudes or phases with the plane-wave components constituting them. We confirm in Fig.~\ref{Fig:ChangingProfile} that the spatio-temporal profile $I(x;\tau)$ of optical dBM wave packets can be modified while remaining propagation invariant in the dispersive medium. First, setting the complex spectral amplitudes equal along the elliptical spectral support, we obtain propagation-invariant circularly symmetric wave packets in the dispersive medium [Fig.~\ref{Fig:ChangingProfile}(a)]. Truncating the ellipse and eliminating the plane-wave components in the vicinity of $k_{x}\!=\!0$ disrupts the formation of the full circular profile, but the wave packet nevertheless propagates invariantly [Fig.~\ref{Fig:ChangingProfile}(b)]. By introducing a $\pi$-step in the spectral phase along $k_{x}$, a spatial null is formed along $x\!=\!0$ in the profile of the dBM wave packet [Fig.~\ref{Fig:ChangingProfile}(c)], whereas introducing the $\pi$-phase-step along $\lambda$ produces a temporal null along $\tau\!=\!0$ [Fig.~\ref{Fig:ChangingProfile}(d)]. Finally, alternating the phases between 0 and $\pi$ in the four quadrants of the spatio-temporal spectral plane $(k_{x},\lambda)$ produces spatial and temporal nulls along $x\!=\!0$ and $\tau\!=\!0$, respectively [Fig.~\ref{Fig:ChangingProfile}(e)]. Despite such variations in their spatio-temporal profiles, all these optical dBM wave packets propagate invariantly in the dispersive medium.
\section*{Discussion}
\noindent
The rapidly evolving versatile techniques for synthesizing optical fields \cite{Forbes21NP,Yessenov22AOP} played a critical role in the realization of dBM wave packets as demonstrated here. This has helped confirm the theoretical proposal made by Mackinnon almost 45 years ago for constructing a non-dispersive wave packet from dispersive de~Broglie phase waves \cite{Mackinnon78FP}. Furthermore, the experimental procedure implemented here points to a general synthesis strategy that extends beyond the particular scenario of dBM wave packets. The overarching theme is that novel dispersion relationships for the axial propagation of a wave packet can be imposed by first adding another dimension to the space, and then exploiting the new dimension to tailor the dispersion relationship before spectral projection back onto the original reduced-dimensionality space.
In the scenario studied here, we start with a $(1+1)$D physical wave in which an axial dispersion relationship $\omega(k_{z})$ is enforced by the dynamics of the wave equation. Increasing the dimensionality of the space from $(1+1)$D to $(2+1)$D by including a transverse coordinate $x$ yields a new dispersion relationship $\omega(k_{x},k_{z})$. In free space, optical wave packets are subject to the constraint $\omega\!=\!ck_{z}$ in $(1+1)$D and $\omega(k_{x},k_{z})\!=\!c\sqrt{k_{x}^{2}+k_{z}^{2}}$ in $(2+1)$D. Now, by judiciously associating each transverse wave number $k_{x}$ with a particular axial wave number $k_{z}$, a reduced-dimensional axial dispersion relationship $\omega_{\mathrm{red.}}(k_{z})$ is obtained: $\omega(k_{x},k_{z})\!=\!\omega(k_{x}(k_{z}),k_{z})\!\mapsto\!\omega_{\mathrm{red.}}(k_{z})$, which can be engineered almost arbitrarily. In the experiment reported here, we employed this strategy to produce a linear dispersion relationship $\omega(k_{z})\!=\!\omega_{\mathrm{o}}+(k_{z}-k_{\mathrm{o}})\widetilde{v}$ projected onto the $(k_{z},\tfrac{\omega}{c})$-plane that deviates from the light-line $\omega\!=\!ck_{z}$. In the presence of anomalous GVD, such a spatio-temporal spectrum is delimited at both ends by the curved light-line in the dispersive medium -- thereby yielding the circularly symmetric spatio-temporal profile characteristic of dBM wave packets. Here, the transverse wave number $k_{x}$ played the role of the observer velocity $u$ in the physical configuration envisioned by Mackinnon [Fig.~\ref{Fig:ConceptofDBM}(d)]. However, one may envision a variety of other scenarios that can be facilitated by this general strategy. For example, besides tuning the group velocity in free space, linear dispersive media, or nonlinear optical materials and structures, one may produce accelerating wave packets \cite{Yessenov20PRLaccel,Hall22OLaccel,Li20SR} whose group velocity changes with propagation in such media. These features have been recently predicted to produce a host of new phenomena related to two-photon emission \cite{Sloan22NatPhys} and relativistic optics \cite{Bliokh12PRA,Caloz20IEEE}.
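The dimensionality-reduction recipe described above can be made concrete with a short sketch (illustrative only): prescribe a target linear law $\omega\!=\!\omega_{\mathrm{o}}+(k_{z}-k_{\mathrm{o}})\widetilde{v}$ and read off, from the free-space light-cone, the transverse wave number $k_{x}$ and propagation angle $\varphi$ that each frequency must be assigned; the numerical values below are assumptions chosen only for illustration.
\begin{verbatim}
# Sketch of the dimensionality-reduction strategy (free-space case, illustrative values).
import numpy as np

c       = 3e8
lam_o   = 1054e-9                   # assumed carrier wavelength [m]
omega_o = 2*np.pi*c/lam_o
k_o     = omega_o/c
v_g     = 0.9975*c                  # target group velocity (assumed)

omega = omega_o + np.linspace(-1.0, 0.0, 5)*2e13   # small band below omega_o [rad/s]
k_z   = k_o + (omega - omega_o)/v_g                # imposed linear axial dispersion law
k_x   = np.sqrt((omega/c)**2 - k_z**2)             # from the light-cone k_x^2 + k_z^2 = (omega/c)^2
phi   = np.degrees(np.arcsin(k_x/(omega/c)))       # propagation angle assigned to each frequency
print(np.round(phi, 3))                            # sub-degree (paraxial) angles
\end{verbatim}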
Intriguingly, the strategy employed here is not constrained to optical waves. Indeed, our approach to spatio-temporal structuring of the field is agnostic with respect to the physical substrate, and can be implemented in principle with acoustic waves, microwaves, surface plasmon polaritons \cite{Schepler20ACSPhot}, electron beams, neutron beams, or other massive particles. In all cases, an added spatial dimension can be exploited to override the intrinsic dispersion relationship of the particular wave phenomenon, thus producing novel propagation dynamics.
The dimensionality of the $(2+1)$D dBM wave packets synthesized here can be further extended to the full-dimensional $(3+1)$D space of $(x,y,z;t)$ by including the second transverse coordinate $y$. This can now be achieved in light of very recent progress in producing so-called 3D ST wave packets that are localized in all dimensions of $(3+1)$D space \cite{Guo21Light,Pang22OE,Yessenov22NC}. Combining this new synthesis methodology with the procedure outlined here for producing dBM wave packets in the anomalous-GVD regime will yield spherically symmetric propagation-invariant pulsed field structures. Such field configurations provide a platform for exploring proposed topological structures associated with polarization (spin texture) \cite{Guo21Light} \textit{without} resorting to stereo-projection onto a 2D plane. Moreover, such spherically symmetric optical dBM wave packets are compatible with coupling to optical fibers and waveguides, thus enabling new opportunities in optical communications, optical signal processing, and nonlinear and quantum optics.
Finally, the ideal spectral constraint underlying optical dBM wave packets implies an exact association between the spatial and temporal frequencies. Such idealized wave packets consequently have infinite energy \cite{Sezginer85JAP}. In any realistic system, however, a spectral uncertainty is inevitably introduced into this association, resulting in a finite-energy wave packet traveling for a finite distance over which it is approximately invariant \cite{Kondakci19OLClassic}. In our experiments, this spectral uncertainty arises from the finite spectral resolution of the diffraction grating employed (Fig.~\ref{Fig:Setup}), which is estimated to be $\approx\!16$~pm, corresponding to a propagation distance of $\approx\!32$~m at a spectral tilt angle $\theta=44.99^{\circ}$ \cite{Yessenov19OE}.
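As a back-of-the-envelope check of this estimate, the sketch below assumes the commonly used relation $L_{\mathrm{max}}\!\approx\!c/(\delta\omega|1-\cot{\theta}|)$ for the propagation distance of a space-time wave packet with spectral uncertainty $\delta\omega$ (this relation is an assumption here, taken from the ST wave-packet literature rather than derived in this paper), together with an operating wavelength of $\approx\!1054$~nm.
\begin{verbatim}
# Propagation-distance estimate from the grating's spectral uncertainty (assumed relation).
import numpy as np

c     = 3e8
lam   = 1054e-9                       # assumed operating wavelength [m]
dlam  = 16e-12                        # spectral uncertainty ~ 16 pm
theta = np.deg2rad(44.99)             # spectral tilt angle

domega = 2*np.pi*c*dlam/lam**2        # spectral uncertainty [rad/s]
L_max  = c/(domega*abs(1 - 1/np.tan(theta)))
print(f"L_max ~ {L_max:.0f} m")       # ~32 m, consistent with the value quoted above
\end{verbatim}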
|
{
"arxiv_id": "2302.13226",
"language": "en",
"timestamp": "2023-02-28T02:13:20",
"url": "https://arxiv.org/abs/2302.13226",
"yymm": "2302"
} | \section{Introduction}
It has been known since the works of Goldreich and Peale in the 1960s that a planet in an eccentric orbit can settle into a spin rate $\omega_{p}$ slightly faster than the synchronous value $\omega_{p} = n$, where $n$ is the orbital mean motion.\cite{Goldreich_TAJ1966,Goldreich_Peale_SpinOrb} In the final spin state, the orbit-averaged tidal torque should equal zero, and this happens for a planetary spin rate greater than the mean-motion angular velocity and less than the periapsis angular velocity. However, if the planet has sufficient permanent mass asymmetry (in addition to the dynamic tidal distortion), then gravity-gradient torques on this asymmetry can dominate the tidal torque and force synchronous rotation. Other resonant spin states, e.g. $\omega_{p}/n = 1/2, 1, 3/2, 2, \ldots$, are also possible, but this paper is entirely focused on the synchronous case.
The classical analysis of Goldreich and Peale\cite{Goldreich_TAJ1966,Goldreich_Peale_SpinOrb} uses a 1D single-axis model of a single rigid body, and does not examine the case in which the body is differentiated. The icy ocean world Europa is well-known to be differentiated into an ice shell and solid interior, dynamically decoupled by a subsurface ocean, and as a result, the simple conclusions of Goldreich and Peale will not apply. This is important because despite a wealth of relevant studies, it is still not known for certain whether or not Europa is in a state of super-synchronous rotation, nor is it known to what degree its ice shell and core can move independently.\cite{Ojakangas1989EuropaPW,Sarid2002PolarWander,GreenbergEuropaRotation2002,Schenk_TPW_circ,Burnett_Icarus} Instead of using Goldreich and Peale's simple 1D model to approach the dynamics of this problem, a slightly more complex 2D model accounting for the independent single-axis rotational and librational movements of the ice shell and core is desirable. Such a model was previously developed by Van Hoolst et al.\cite{VanHoolst_Europa} to study librations of Europa's ice shell, but that model can also be applied to the larger problem of full rotations and tidal settling into synchronous or super-synchronous spin states. In this paper we explore the tidal locking process with this more appropriate model.
This analysis is not only relevant to Europa -- it is potentially relevant to the other icy ocean worlds of the outer solar system as well.
Europa has been a subject of intense interest since the Voyager missions, and this paper follows a long history of research into the orientation and spin history of Europa's icy surface. The Voyager mission revealed a smooth surface nearly devoid of impact craters, with a subsurface ocean suggested as the likely culprit for the rapid resurfacing.\cite{EuropaVoyagerWater1983} Due to the pervasive tectonic features observed on its surface, whose implied background stress field cannot be generated by diurnal tidal-driven stresses alone, it was suggested that Europa could rotate faster than the synchronous angular velocity, producing the necessary large stress fields in the ice shell.\cite{Greenberg1984}
Additional works have subsequently argued for non-synchronous rotation (NSR) by interpretation of tectonic features from both Voyager and Galileo data\cite{HoppaScience1999, Greenberg2003TidalStress,GreenbergEuropaRotation2002,Kattenhorn2002,Geissler1998}, suggesting NSR periods on the order of $10^{4}$ to $10^{5}$ yrs. It is worth noting that there are competing explanations for the source of the background stress field, including ice shell freezing\cite{Nimmo2004} and polar wander\cite{Greenberg2003TidalStress,Ojakangas1989EuropaPW,Sarid2002PolarWander}. Direct observation of NSR has proven quite difficult due to the extremely long expected timescale. Hoppa et al. used pairs of images from Voyager 2 and from Galileo 17 years later to attempt a direct observation of NSR.\cite{Hoppa1999a} They found no signature at all, and from the precision of their measurements, they determined that the period of NSR must be greater than 12000 years. Recently, Burnett and Hayne used Europa's hemispheric color dichotomy, which is produced by anisotropic exogenic processes, to search for a signature of NSR.\cite{Burnett_Icarus} We again found no signature, and our analysis furthermore indicated that the synodic period of NSR should be at least $10^{6}$ yrs.
While many researchers have used observations from geology to argue for or against particular types of motion of Europa's ice shell, there is comparatively little literature focused on the study of Europa's overall spin state from a dynamical perspective. However, there are still some very important studies which were influential for this work. As previously mentioned, the model by Van Hoolst et al.\cite{VanHoolst_Europa} studied the gravitational coupling between Europa's ice shell and core, providing much of the theoretical groundwork for this study. The work of Ojakangas et al. is also quite relevant.\cite{Ojakangas1989EuropaPW} They show that a decoupled ice shell on Europa can become dynamically unstable as it approaches thermal equilibrium, reorienting the shell by 90 degrees about the axis connecting the Jovian and sub-Jovian points. They also present a wealth of insights about the rotational dynamics of Europa's ice shell. Lastly, Bills et al.\cite{Bills_EuropaRotational} provide an authoritative study on the dynamics of Europa's ice shell, with a review of past work on the precession, libration, and potential non-synchronous rotation of Europa. They note that many of the relevant mechanisms and parameters are neither sufficiently well-measured nor well-understood to draw convincing conclusions about Europa's spin state from dynamical arguments alone. For example, they derive an equilibrium NSR rate for Europa for which the (poorly constrained) tidal torque would vanish, and obtain a predicted synodic period of 15 years, noting that this result is far too short to be correct. Clearly the primary role of dynamical models for studying Europa
is to aid understanding of the kinds of processes that could be at work there, and how they produce certain outcomes in Europa's dynamical evolution. It is to this end that this paper extends the classical spin-orbit coupling analysis of Goldreich and Peale to the differentiated icy ocean world Europa.
\section{Methods and Results}
The approach in this work is to examine how the analyses in References~\citenum{Goldreich_TAJ1966} and \citenum{Goldreich_Peale_SpinOrb} are extended to the case in which a planet or satellite is differentiated into a dynamically decoupled ice shell and core. In particular, it is of interest to see when full rotations of the ice shell, core, or both are dynamically permitted, and how the moments of inertia of the ice shell and core influence the final spin state reached under tidal energy dissipation. This work begins with a brief review of the classical analysis.
\subsection{Dynamics of a Spinning Planet}
Consider the situation depicted by Figure~\ref{fig:goldreich1}, discussed in Reference~\citenum{Goldreich_TAJ1966}. An asymmetric satellite orbits with true anomaly $f$ and rotates with rotation angle $\theta$ measured from periapsis to the long axis of the satellite. The angle $\psi$ is also defined, where $\psi = \theta - f$, and bounded libration occurs if $|\psi| < \pi$.
\begin{figure}[h!]
\centering
\includegraphics[width=2.8in]{RigidAsymmPlanet.pdf}
\caption{Libration and spin angles for a rigid asymmetric satellite. The satellite orbits about its focus with true anomaly $f$, rotation angle $\theta$, and libration angle $\psi$.}
\label{fig:goldreich1}
\end{figure}
From Reference~\citenum{Goldreich_TAJ1966}, the dynamics of this problem are given as below for the case of zero dynamic tidal torque:
\begin{equation}
\label{GoldEq1}
C\ddot{\psi} + \frac{3}{2}\frac{(B-A)GM}{r^{3}}\sin{2\psi} = -C\ddot{f}
\end{equation}
where $r$ is the orbit radius, $G$ is the gravitational constant, and $M$ is the mass of the attracting body. Neglecting the effects of eccentricity, the behavior of the averaged angle $\eta = \theta - nt$ is instructive. Its dynamics are given below:
\begin{equation}
\label{GoldEq2}
C\ddot{\eta} + \frac{3}{2}(B-A)n^{2}\sin{2\eta} = 0
\end{equation}
The angle $\psi$ behaves the same as $\eta$ but with additional small oscillations superimposed as a result of the orbital eccentricity. Eq.~\eqref{GoldEq2} is that of a nonlinear pendulum, and admits an energy integral:
\begin{equation}
\label{GoldEq3}
E = \frac{1}{2}C\dot{\eta}^{2} - \frac{3}{4}(B-A)n^{2}\cos{2\eta}
\end{equation}
While $E$ is only conserved for Eq.~\eqref{GoldEq2}, in the absence of the dynamic tidal torque (i.e. if the planet is perfectly rigid), the energy will oscillate only slightly about some conserved mean value for the system given by Eq.~\eqref{GoldEq1}. It turns out that the energy quantity is quite useful for many reasons, and this will be discussed later in the paper.
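As a minimal illustration of this point, the following Python sketch (not part of the original analysis; parameter values are arbitrary) integrates Eq.~\eqref{GoldEq2} and confirms that the energy of Eq.~\eqref{GoldEq3} is conserved along the resulting librational trajectory.
\begin{verbatim}
# Sketch: integrate the averaged pendulum equation and check conservation of E.
import numpy as np
from scipy.integrate import solve_ivp

C, BmA, n = 1.0, 0.0015, 1.0          # C, (B - A), and mean motion (illustrative, nondimensional)

def rhs(t, y):
    eta, etadot = y
    return [etadot, -1.5*(BmA/C)*n**2*np.sin(2.0*eta)]

def energy(y):
    eta, etadot = y
    return 0.5*C*etadot**2 - 0.75*BmA*n**2*np.cos(2.0*eta)

sol = solve_ivp(rhs, (0.0, 2000.0), [0.3, 0.0], rtol=1e-10, atol=1e-12)
E = np.array([energy(sol.y[:, k]) for k in range(sol.y.shape[1])])
print("relative energy drift:", abs(E.max() - E.min())/abs(E[0]))   # very small
\end{verbatim}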
In contrast to the gravitational torque of the primary body on the permanent mass asymmetry of its satellite, the ``dynamic'' tidal torque denotes the torque on the lagging asymmetric bulge raised by the tides. The dynamic tidal torque can be written as below:\cite{SSD_1999,Goldreich_TAJ1966}
\begin{equation}
\label{TidesEq1}
L_{\text{tidal}} = - \tilde{D}\left(\frac{a}{r}\right)^{6}\text{sign}(\dot{\psi})
\end{equation}
\begin{equation}
\label{TidesEq2}
\tilde{D} = \frac{3}{2}\frac{k_{2}}{Q_{s}}\frac{n^{4}}{G}R_{s}^{5}
\end{equation}
where $k_{2}$ is the second degree tidal Love number, $Q_{s}$ is the satellite's specific tidal dissipation factor, $G$ is the gravitational constant, and $R_{s}$ is the satellite radius. It can be shown that this tidal torque influences the total energy as below:
\begin{equation}
\label{GoldEq4}
\dot{E} = L_{\text{tidal}}(t)\dot{\eta} \ \approx \ \overline{L}_{\text{tidal}}\dot{\eta}
\end{equation}
where $\overline{L}_{\text{tidal}}$ is the orbit-averaged tidal torque, and changes to the energy are quite slow, so orbit averaging is appropriate. When the orbit-averaged tidal torque equals zero, there is no further change to the energy on average, and a final spin state is achieved.
\subsection{The Case of a Differentiated Body}
While the single rigid body assumption is suitable for spin-orbit coupling analysis of some worlds such as Mercury and the Moon,\cite{Goldreich_TAJ1966} it is not appropriate for the icy ocean worlds, for which the complex interaction between ice shell and interior should not be discounted, and the shell itself can potentially become significantly displaced from the planetary interior on very short geologic timescales.\cite{Ojakangas1989EuropaPW} The insufficiency of the Goldreich \& Peale model is especially apparent for cases where the motion of the ice shell itself is of interest. Reference~\citenum{VanHoolst_Europa} discusses the case in which an icy satellite is differentiated into an ice shell and solid interior, nearly dynamically decoupled due to the existence of a deep ocean layer. This is depicted in Figure~\ref{fig:EuropaInternal1}. The single-axis rotational equations of motion of the ice shell and interior are given below:
\begin{subequations}
\label{VH1}
\begin{align}
& C_{s}\ddot{\theta}_{s} + \frac{3}{2}\left( B_{s} - A_{s}\right) n^{2}\rho^{-3}\sin{\left( 2\left( \theta_{s} - f\right) \right)} = - K_{G}\sin{\left(2\left( \theta_{s} - \theta_{i}\right)\right)}
\\
& C_{i}\ddot{\theta}_{i} + \frac{3}{2}\left( B_{i} - A_{i}\right) n^{2}\rho^{-3}\sin{\left( 2\left( \theta_{i} - f\right) \right)} = K_{G}\sin{\left(2\left( \theta_{s} - \theta_{i}\right)\right)}
\end{align}
\end{subequations}
where $\rho = r/a$, $K_{G}$ is the constant of gravity-gradient coupling between the ice shell and interior, and $\theta_{s}$ and $\theta_{i}$ are measured from periapsis to the long axis of the ice shell and interior, respectively. The constant $K_{G}$ is given below assuming a solid interior:
\begin{equation}
\label{VH1b}
K_{G} = \frac{4\pi G}{5}\left(\rho_{s}\beta_{s} + (\rho_{0}-\rho_{s})\beta_{0}\right)\left(1 - \frac{\rho_{0}}{\rho_{i}}\right)\left(B_{i} - A_{i}\right)
\end{equation}
where $\rho_{s}$, $\rho_{0}$, and $\rho_{i}$ denote the densities of the shell, ocean, and interior, respectively, and similarly the $\beta$ terms are their respective equatorial flattenings.
Eq. \eqref{VH1} makes some simplifying assumptions. First, only the longitudinal axis of rotation is considered. Also, this model assumes that the ice shell and interior are rigid -- neglecting the time-varying rotational effects on the polar moments of inertia $C_{i}$ and $C_{s}$, the zonal tidal effects on the polar flattening, and also dissipation within the shell for large differential angles $\theta_{s} - \theta_{i}$. Note that the time-varying rotational effects on a body's polar moment of inertia can be computed from the angular velocity $\dot{\theta}$ and the mean rotation rate $\omega$ as below:
\begin{equation}
\label{TVrot1}
\frac{\delta C}{C} = \frac{5}{6}k_{2}\frac{\omega}{\pi G \rho}\left(\dot{\theta} - \omega\right)
\end{equation}
where $\rho$ is the mean density and $\delta C$ is the time-varying deviation induced in $C$. It can be shown that $\delta C/C \sim 10^{-4}$ for Europa, for reasonable values of $k_{2}$ and for the rotational rates considered in this work.\cite{Wahr_EuropaTides, Sotin2009TidesAT} Additionally, Eq.~\eqref{VH1} neglects any dynamic coupling effects from the subsurface ocean -- including viscous torques and pressure gradient torques. Historically it has been argued that for the purely longitudinal case, this coupling will be small in comparison to the gravity-gradient effects -- see e.g. References~\citenum{VanHoolst_Europa} and~\citenum{Ojakangas1989EuropaPW}.
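A quick numerical check of the scale of Eq.~\eqref{TVrot1} is given below; the Love number $k_{2}\!\approx\!0.25$ and mean density $\rho\!\approx\!3000 \ \text{kg} \ \text{m}^{-3}$ are representative assumed values, and the spin rate is taken well above synchronous.
\begin{verbatim}
# Order-of-magnitude check of the time-varying polar moment of inertia (assumed values).
import math

G        = 6.674e-11     # [m^3 kg^-1 s^-2]
k2       = 0.25          # assumed tidal Love number
rho      = 3000.0        # approximate mean density [kg/m^3]
omega    = 2.05e-5       # mean rotation rate ~ orbital mean motion [rad/s]
thetadot = 1.5*omega     # example strongly super-synchronous spin rate

dC_over_C = (5.0/6.0)*k2*omega/(math.pi*G*rho)*(thetadot - omega)
print(f"dC/C ~ {dC_over_C:.1e}")   # ~7e-5, i.e. of order 1e-4, consistent with the text
\end{verbatim}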
While the effects of non-rigidity of the shell and interior, as well as of dynamic coupling through the ocean, could be important, many interesting results and analyses are available under the assumption of a rigid shell and core that interact primarily gravitationally.
The model given by Eq.~\eqref{VH1} was used in analysis in Reference~\citenum{VanHoolst_Europa} to show that the existence and nature of a subsurface ocean and other aspects of Europa's interior can be studied via the librational response using the predictions of their model, which are modified from the classical rigid satellite case. However, there is more analysis that can be done with this model, beyond libration. Namely, full rotation can also be studied using the same equations.
\begin{figure}[h!]
\centering
\includegraphics[]{europa_fig1.pdf}
\caption{An icy ocean world differentiated into an ice shell and solid interior, separated by a global ocean (not to scale). Angles $\theta_{i}$ and $\theta_{s}$ measure the orientation of the solid interior and ice shell, respectively.}
\label{fig:EuropaInternal1}
\end{figure}
The behavior of the averaged angles $\eta_{s} = \theta_{s} - nt$ and $\eta_{i} = \theta_{i} - nt$ is now studied. Making the necessary substitutions and neglecting the effects of orbit eccentricity, Eq.~\eqref{VH1} is transformed:
\begin{subequations}
\label{VH2}
\begin{align}
C_{s}\ddot{\eta}_{s} + \frac{3}{2}\left( B_{s} - A_{s}\right) n^{2}\sin{2\eta_{s}} + K_{G}\sin{\left(2\left( \eta_{s} - \eta_{i}\right)\right)} = \ & 0 \\
C_{i}\ddot{\eta}_{i} + \frac{3}{2}\left( B_{i} - A_{i}\right) n^{2}\sin{2\eta_{i}} - K_{G}\sin{\left(2\left( \eta_{s} - \eta_{i}\right)\right)} = \ & 0
\end{align}
\end{subequations}
This equation is that of two nonlinear pendula coupled by a nonlinear spring. It can be shown that this equation admits an energy integral, similar to the single-pendulum case:
\begin{equation}
\label{VH3}
E = \frac{1}{2}C_{i}\dot{\eta}_{i}^{2} + \frac{1}{2}C_{s}\dot{\eta}_{s}^{2} - \frac{3}{4}\left(B_{i} - A_{i}\right)n^{2}\cos{2\eta_{i}} - \frac{3}{4}\left(B_{s} - A_{s}\right)n^{2}\cos{2\eta_{s}} - \frac{1}{2}K_{G}\cos{\left(2\left(\eta_{s} - \eta_{i}\right)\right)}
\end{equation}
The last term is a coupling term and is a function of both coordinates $\eta_{s}$ and $\eta_{i}$. This type of term has previously been observed for other coupled oscillatory systems and can itself be useful for analysis\cite{DeSousa_CoupledSystemEnergy}.
Similarly to before, the rate of change of the energy induced by dynamic tidal torque is given as below in terms of the separate orbit-averaged tidal torques on the interior and shell:
\begin{equation}
\label{VH4}
\dot{E} \approx \overline{L}_{\text{tidal},i}\dot{\eta}_{i} + \overline{L}_{\text{tidal},s}\dot{\eta}_{s}
\end{equation}
The shared existence of an energy integral for the homogeneous and differentiated reduced satellite rotational equations, and the similarities of Eqs.~\eqref{GoldEq4} and \eqref{VH4}, suggest that an approach similar to the classical analysis of Goldreich and Peale\cite{Goldreich_TAJ1966, Goldreich_Peale_SpinOrb} can also be applied to the case of a differentiated body. The work of Goldreich concludes with simple inequalities relating the planetary moment of inertia imbalance and orbit eccentricity that result in qualitatively different spin behavior (libration vs. full rotation).
It is of interest to see how these conditions are modified for the model of an icy ocean world, and also to see what other insights can be gained for this unique problem.
\subsection{Energy Analysis: Permitted Motions and Final Spin States}
The investigation in Reference~\citenum{Goldreich_TAJ1966} examines the long-term behavior and implications of Eq.~\eqref{GoldEq4} by orbit-averaging assuming a fixed value of $\dot{\theta}$ (and implicitly, small eccentricity), obtaining an averaged expression in terms of a ``turn-around angle'' $\Delta$ indicating critical true anomaly values $f = \pm(\frac{\pi}{2} - \Delta)$ at which $\dot{\psi} = 0$ (in the limiting case where libration is replaced by full rotation, $\Delta = \pm\frac{\pi}{2}$). That expression is given below:
\begin{equation}
\label{Ean1}
\dot{E} = \frac{2\tilde{D}}{\pi}\left(4e\cos{\Delta} - \Delta\right)\dot{\eta}
\end{equation}
This result is then averaged over a period of the motion of $\eta$, and analytic quantities such as the period of $\eta$ are substituted, simplifying to the following condition for a planet to stop rotating super-synchronously and settle into a state of libration:
\begin{equation}
\label{Ean2}
\left(\frac{3}{2}\left(\frac{B-A}{C}\right)\right)^{\frac{1}{2}} > \frac{9.5\pi e^{2}}{2\sqrt{2}}
\end{equation}
If this inequality is not satisfied, the planet will not settle into a libration state, but will continue to rotate super-synchronously.
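As a worked example of Eq.~\eqref{Ean2} (using representative inputs rather than precise values), take the moment-of-inertia asymmetry $(B-A)/C\!\approx\!0.0015$ adopted later in this paper together with an orbital eccentricity $e\!\approx\!0.009$:
\begin{verbatim}
# Worked example of the classical capture condition (representative inputs).
import math

BmA_over_C = 0.0015      # (B - A)/C, value adopted later in this paper
e = 0.009                # representative orbital eccentricity

lhs = math.sqrt(1.5*BmA_over_C)
rhs = 9.5*math.pi*e**2/(2.0*math.sqrt(2.0))
print(f"lhs = {lhs:.4f}, rhs = {rhs:.2e}, settles into libration: {lhs > rhs}")
# lhs ~ 0.047 vs rhs ~ 8.5e-4: the inequality is comfortably satisfied, so a rigid body
# with these parameters would be predicted to settle into libration.
\end{verbatim}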
There are some challenges preventing an exact paralleling of the above simple analysis for the differentiated case. First, assuming significant independent motion is geometrically permitted, there are potentially two timescales at work -- the libration/rotation period of the ice shell and that of the core. Additionally, this problem presents two degrees of freedom, as opposed to the single degree-of-freedom analysis in Reference~\citenum{Goldreich_TAJ1966}. Despite this, some immediate results bounding the spin states based on total energy can be found. To begin, note that generally we expect $C_{i} \gg C_{s}$. This can be easily shown by using the equations of the moments of inertia for a homogeneous sphere of radius $r_{i}$ and thin shell of radius $r_{s}$ and thickness $d$:
\begin{equation}
\label{Ci_moi}
C_{i} = \frac{2}{5}mr_{i}^{2} = \frac{8}{15}\pi r_{i}^{5}\rho_{i}
\end{equation}
\begin{equation}
\label{Cs_moi}
C_{s} = \frac{8}{3}\pi r_{s}^{4}d\rho_{s}
\end{equation}
\begin{equation}
\label{CiOCs}
\frac{C_{i}}{C_{s}} = \frac{1}{5}\frac{r_{i}}{d}\frac{\rho_{i}}{\rho_{s}}
\end{equation}
Substituting, for example, $d \approx 20$ km, $r_{i} \approx 1340$ km, and $\rho_{i}/\rho_{s} \sim 3$ gives $C_{i}/C_{s} \sim 40$. Note that Reference~\citenum{VanHoolst_Europa} gives $\rho_{s} \approx 920 \ \text{kg} \ \text{m}^{-3}$ and $\rho_{i}$ of at least $3000 \ \text{kg} \ \text{m}^{-3}$, and predicts values of $C_{i}/C_{s}$ between 7 and 200. As a result of this imbalance, the motion of the interior tends to be less affected by the shell than vice-versa.
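The following short sketch reproduces this order-of-magnitude estimate from Eqs.~\eqref{Ci_moi}--\eqref{CiOCs}, using the example values quoted above; it also evaluates the full expressions with distinct shell and interior radii for comparison.
\begin{verbatim}
# Order-of-magnitude estimate of C_i/C_s (example values from the text).
import math

d     = 20e3        # shell thickness [m]
r_i   = 1340e3      # interior radius [m]
r_s   = 1560e3      # shell radius [m]
rho_i = 3000.0      # interior density [kg/m^3]
rho_s = 920.0       # shell density [kg/m^3]

C_i = (8.0/15.0)*math.pi*r_i**5*rho_i           # homogeneous sphere
C_s = (8.0/3.0)*math.pi*r_s**4*d*rho_s          # thin spherical shell
print("thin-shell ratio formula:", (1.0/5.0)*(r_i/d)*(rho_i/rho_s))   # ~ 44
print("full expressions        :", C_i/C_s)                           # ~ 24
# Both estimates fall within the 7-200 range predicted by Van Hoolst et al.
\end{verbatim}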
To study how the total system energy relates to the permissible spin states, the concept of the \textit{zero-velocity curve} is borrowed from celestial mechanics (see e.g. Reference~\citenum{Koon:2006rf}). Namely, setting $\dot{\eta}_{s} = \dot{\eta}_{i} = 0$ and examining curves of constant values of $E$, for a given energy level, the range of reachable angles $\{\eta_{i}, \ \eta_{s} \}$ is externally bounded by the curve at that fixed energy value. Figure~\ref{fig:ZVC_ex1} depicts zero-velocity curves for multiple select values of $E$, with dummy values $\alpha_{s} = \frac{3}{2}\left(\frac{B_{s}-A_{s}}{C_{s}}\right)n^{2} = \alpha_{i}=1$, $K_{G} = 4$, $C_{s}=0.5$, and $C_{i} = 10$. Note that as $E$ is decreased from $-2.7$ to $-3.0$, the closed curves contract inward. As $E$ is decreased further, the curves eventually contract to the fixed points at $\left( \eta_{i}, \eta_{s} \right) = (0,0)$ and $\left( \eta_{i}, \eta_{s} \right) = (0,\pm \pi)$. The figure also illustrates that as the energy increases, the ice shell is allowed to fully rotate before it is possible for the interior to do so. This motivates consideration of the critical values of the energy that separate qualitatively different types of behavior.
\begin{figure}[h!]
\centering
\includegraphics[]{ZVC_ex1_SMALL.pdf}
\caption{Example zero-velocity curves for independent ice shell and core, with dummy values $\alpha_{s} = \frac{3}{2}\left(\frac{B_{s}-A_{s}}{C_{s}}\right)n^{2} = \alpha_{i}=1$, $K_{G} = 4$, $C_{s}=0.5$, and $C_{i} = 10$. The zero-velocity curves for a given energy level internally bound the motion of any configuration with less energy. With this property, it is possible to bound and analyze system behavior from the perspective of the energy alone.}
\label{fig:ZVC_ex1}
\end{figure}
To understand why the zero-velocity curves externally bound the admissible angles $\{\eta_{i}, \ \eta_{s} \}$ for a certain energy level, consider the curve at $E = -3$, and trajectories with energy level $E = -3$. From the other zero-velocity curves, it is clear that the curves outside $E = -3$ have higher energy (i.e. they are less negative). If the system were characterized by a choice of angles outside the $E = -3$ curve, the associated potential energy would be greater than $E = -3$, and no choice of velocities (no choice of kinetic energy $T = \frac{1}{2}C_{i}\dot{\eta}_{i}^{2} + \frac{1}{2}C_{s}\dot{\eta}_{s}^{2} \geq 0$) would yield the specified energy $E = -3$. However, points inside the curve have lower potential energy (more negative) and there is an energy ``budget'' for the inherently positive kinetic energy.
There are two critical values of the energy to explore, where the zero-velocity curves self-intersect:
\begin{equation}
\label{ZVC1}
E_{\text{crit,1}} = -\frac{3}{4}(B_{i} - A_{i})n^{2} + \frac{3}{4}(B_{s} - A_{s})n^{2} + \frac{1}{2}K_{G}
\end{equation}
\begin{equation}
\label{ZVC2}
E_{\text{crit,2}} = \frac{3}{4}(B_{i} - A_{i})n^{2} + \frac{3}{4}(B_{s} - A_{s})n^{2} - \frac{1}{2}K_{G}
\end{equation}
For this particular case, it can be shown by examination of the relevant curves in Figure~\ref{fig:ZVC_ex1} that if the total energy $E < E_{\text{crit,1}}$, full super-synchronous rotation of either the ice shell or core is impossible, and both parts will be in states of libration. If $E_{\text{crit,2}} > E > E_{\text{crit,1}}$, the ice shell is free to rotate, but the core is still tidally locked. The core is only free to rotate if $E > E_{\text{crit,2}}$, at which point both the ice shell and core can occupy super-synchronous rotation states. Thus, if the body starts in a high energy fully super-synchronous rotation state, energy loss due to the dynamic tidal torques can tidally lock the interior before the ice shell. The use of the zero-velocity curves to bound admissible behavior is quite an important concept in this paper, allowing the behavior of a system with four states to be studied with a single scalar quantity.
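The following sketch evaluates Eqs.~\eqref{ZVC1}--\eqref{ZVC2} for the dummy parameters used in Figure~\ref{fig:ZVC_ex1} (taking $n=1$ so that the values of $\alpha_{i}$ and $\alpha_{s}$ translate directly into $B-A$ values) and classifies the motions admitted at a few energy levels; it is an illustration of the bookkeeping, not new analysis.
\begin{verbatim}
# Critical energies and admissible motions for the dummy parameters of Figure 3.
BmA_i = (2.0/3.0)*10.0      # from alpha_i = 1.5*(B_i - A_i)/C_i = 1 with C_i = 10, n = 1
BmA_s = (2.0/3.0)*0.5       # from alpha_s = 1 with C_s = 0.5
K_G, n = 4.0, 1.0

E_crit1 = -0.75*BmA_i*n**2 + 0.75*BmA_s*n**2 + 0.5*K_G
E_crit2 = +0.75*BmA_i*n**2 + 0.75*BmA_s*n**2 - 0.5*K_G
print(E_crit1, E_crit2)     # -2.75 and 3.25; note E_crit1 < E_crit2 (weakly coupled ordering)

for E in (-3.0, 0.0, 4.0):
    if E < E_crit1:
        label = "both shell and interior restricted to libration"
    elif E < E_crit2:
        label = "shell rotation admissible, interior still locked"
    else:
        label = "full rotation of both shell and interior admissible"
    print(f"E = {E:+.2f}: {label}")
\end{verbatim}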
\subsection{Application to Europa}
To apply the preceding arguments to Europa, it is necessary to determine the range of possible scalings of the parameters in Eq.~\eqref{VH1}. There is considerable uncertainty in the depth of Europa's ice shell, the depth of its ocean, and its internal composition. We assume 25 km ice shell thickness and 100 km ocean depth, which is consistent with the range of values predicted in recent literature.\cite{Anderson1998,Howell_2021,Wahr_EuropaTides} With a mean radius of $r_{E} \approx 1560$ km, this yields $r_{i} \approx 1435$ km. In keeping with Reference~\citenum{VanHoolst_Europa}, we use $\rho_{s} = 920 \ \text{kg} \ \text{m}^{-3}$ and $\rho_{i} = 3000 \ \text{kg} \ \text{m}^{-3}$. Next, the approximate scale of the gravitational coupling constant needs to be discerned. There is considerable uncertainty in the respective equatorial flattenings of the interior components of Europa, so the scale of $K_{G}$ is determined by a bulk flattening parameter $\tilde{\beta}$ defined below:
\begin{equation}
\label{Kg_scale}
\begin{split}
K_{G} = & \ \frac{4\pi G}{5}\rho_{0}\left(\frac{\rho_{s}}{\rho_{0}}\beta_{s} + (1-\frac{\rho_{s}}{\rho_{0}})\beta_{0}\right)\left(1 - \frac{\rho_{0}}{\rho_{i}}\right)\left(B_{i} - A_{i}\right) \\
= & \ \frac{4\pi G}{5}\rho_{0}\tilde{\beta}\left( B_{i} - A_{i}\right)
\end{split}
\end{equation}
Note $\rho_{0} \approx 1000 \ \text{kg} \ \text{m}^{-3}$ and $G = 6.674 \times 10^{-11} \text{m}^{3}/\text{kg} \ \text{s}^{2}$. For the purpose of analysis of the averaged system given in Eq.~\eqref{VH2}, it is more convenient to work with the following equations with non-dimensional time $\tau = nt$:
\begin{subequations}
\label{VH2_new}
\begin{align}
C_{s}\eta_{s}^{''} + \frac{3}{2}\left( B_{s} - A_{s}\right)\sin{2\eta_{s}} + \frac{K_{G}}{n^{2}}\sin{\left(2\left( \eta_{s} - \eta_{i}\right)\right)} = \ & 0 \\
C_{i}\eta_{i}^{''} + \frac{3}{2}\left( B_{i} - A_{i}\right)\sin{2\eta_{i}} - \frac{K_{G}}{n^{2}}\sin{\left(2\left( \eta_{s} - \eta_{i}\right)\right)} = \ & 0
\end{align}
\end{subequations}
For this system there exists a modified energy integral $\hat{E}$:
\begin{equation}
\label{VH3_new}
\hat{E} = \frac{E}{n^{2}} = \frac{1}{2}C_{i}\eta_{i}^{'2} + \frac{1}{2}C_{s}\eta_{s}^{'2} - \frac{3}{4}\left(B_{i} - A_{i}\right)\cos{2\eta_{i}} - \frac{3}{4}\left(B_{s} - A_{s}\right)\cos{2\eta_{s}} - \frac{1}{2}\hat{K}_{G}\cos{\left(2\left(\eta_{s} - \eta_{i}\right)\right)}
\end{equation}
where $\hat{K}_{G} = K_{G}/n^{2}$ as below:
\begin{equation}
\label{khatG1}
\hat{K}_{G} = \frac{4\pi G}{5n^{2}}\rho_{0}\tilde{\beta}\left( B_{i} - A_{i}\right)
\end{equation}
Then $G\rho_{0} \approx 6.7 \times 10^{-8} \ \text{s}^{-2}$ and $n \approx 2\times 10^{-5} \ \text{s}^{-1}$, thus $\hat{K}_{G} \approx 398 \tilde{\beta} (B_{i} - A_{i})$. Noting that the equatorial flattening is $\beta = (r_{\text{max}} - r_{\text{min}})/r_{\text{max}}$, where $r_{\text{min}}$ and $r_{\text{max}}$ are measured along the equator, we expect the equatorial flattenings to be on the order of $10^{-3}$ (see e.g. Reference~\citenum{Nimmo2007TheGS}). Because the other terms in $\tilde{\beta}$ are just ratios of densities, the bulk flattening parameter $\tilde{\beta}$ should also be on the same order. Conservatively, we can explore the problem for Europa with $\hat{K}_{G}$ at multiple orders in the following range:
\begin{equation}
\label{khatG_range}
0.1(B_{i} - A_{i}) < \hat{K}_{G} < 10(B_{i} - A_{i})
\end{equation}
\subsubsection{Zero-Velocity Curves}
As discussed previously, the curves of constant $E$ with $\eta^{'}_{s} = \eta^{'}_{i} = 0$ demarcate which parts of the parameter space are accessible at various energy levels. There are also two critical values of the energy that serve as the boundaries between qualitatively different behaviors:
\begin{equation}
\label{ZVCnew1}
\hat{E}_{\text{crit,1}} = \frac{E_{\text{crit,1}}}{n^{2}} = -\frac{3}{4}(B_{i} - A_{i}) + \frac{3}{4}(B_{s} - A_{s}) + \frac{1}{2}\hat{K}_{G}
\end{equation}
\begin{equation}
\label{ZVCnew2}
\hat{E}_{\text{crit,2}} = \frac{E_{\text{crit,2}}}{n^{2}} = \frac{3}{4}(B_{i} - A_{i}) + \frac{3}{4}(B_{s} - A_{s}) - \frac{1}{2}\hat{K}_{G}
\end{equation}
where the normalization scheme $\hat{E} = E/n^{2}$ is introduced for convenience to remove the appearance of the orbital mean motion $n$. The value $\hat{E}_{\text{crit,1}}$ is formed when the zero-velocity curves intersect at $\eta_{i} = \pm k\pi$, $k = 0, 1, 2 \ldots$, and $\eta_{s} = \pm l\pi/2$, $l = 1, 3, 5 \ldots$, and the value $\hat{E}_{\text{crit,2}}$ is formed for similar intersections at $\eta_{i} = \pm k\pi/2$, $k = 1, 3, 5 \ldots$ and $\eta_{s} = \pm l\pi/2$, for nonzero odd integers $l$. There is a critical value of $\hat{K}_{G}$ for which $\hat{E}_{\text{crit,1}} = \hat{E}_{\text{crit,2}}$:
\begin{equation}
\label{Khat_G_crit}
\hat{K}_{G}^{*} = \frac{K_{G}^{*}}{n^{2}} = \frac{3}{2}\left(B_{i} - A_{i}\right)
\end{equation}
By careful study of the zero-velocity curves, it can be shown that when $\hat{K}_{G} < \hat{K}_{G}^{*}$, it is possible for the body to decay into a state where the core is tidally locked while the ice shell continues to revolve independently. By contrast, if $\hat{K}_{G} > \hat{K}_{G}^{*}$, the final state will be one of synchronous rotation or simultaneous tidal locking of the ice shell and core. Zero-velocity contours for these two different cases are illustrated with numerical results in Figure \ref{fig:ZVC_final}. Note for this study that we set $C_{i} \equiv 1.0$ for simplicity and assume $(B_{i} - A_{i})/C_{i} = (B_{s} - A_{s})/C_{s} = 0.0015$, and assume $C_{i}/C_{s} = 28$. The choice of $C_{i} \equiv 1$ merely normalizes the dynamic equations, and does not impact the realism of the simulations. It is interesting to note that the value of $\hat{K}_{G}$ of Europa is not known with sufficient precision to determine if it is greater or less than $\hat{K}_{G}^{*}$.
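A short sketch of this threshold for the normalized parameters used here ($C_{i}\equiv1$, $(B_{i}-A_{i})/C_{i}=(B_{s}-A_{s})/C_{s}=0.0015$, $C_{i}/C_{s}=28$) is given below; it simply evaluates Eqs.~\eqref{ZVCnew1}, \eqref{ZVCnew2}, and \eqref{Khat_G_crit} for the two coupling strengths plotted in Figure~\ref{fig:ZVC_final}.
\begin{verbatim}
# Critical coupling and critical energies for the normalized Europa-like parameters.
C_i, gamma, kappa = 1.0, 28.0, 0.0015
C_s = C_i/gamma
BmA_i, BmA_s = kappa*C_i, kappa*C_s

K_crit = 1.5*BmA_i                       # K_G^*/n^2
for K_hat in (0.5*BmA_i, 2.0*BmA_i):     # the two cases of Figure 4
    E1 = -0.75*BmA_i + 0.75*BmA_s + 0.5*K_hat
    E2 = +0.75*BmA_i + 0.75*BmA_s - 0.5*K_hat
    regime = ("weak coupling: independent shell rotation possible" if K_hat < K_crit
              else "strong coupling: shell and interior lock together")
    print(f"K_hat = {K_hat:.2e} ({regime}); E_crit1 = {E1:+.2e}, E_crit2 = {E2:+.2e}")
\end{verbatim}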
\begin{figure}[h!]
\centering
\subfloat[Indep. rotation possible, $\hat{K}_{G} = 0.5(B_{i}-A_{i})$]{\includegraphics[scale=1.0]{ZVC_final_small_kG0p5_v2.pdf}}
\subfloat[Indep. rotation impossible, $\hat{K}_{G} =2.0(B_{i}-A_{i})$]{\includegraphics[scale=1.0]{ZVC_final_small_kG2p0_v2.pdf}}
\caption{Zero-velocity curves with constant values of $E/n^{2}$, where $C_{i} \equiv 1.0$. These plots illustrate the qualitatively different behavior of the zero-velocity curves depending on whether $\hat{K}_{G}$ is less than or greater than the critical value $\hat{K}_{G}^{*}$.}
\label{fig:ZVC_final}
\end{figure}
For the weakly coupled case given in Figure~\ref{fig:ZVC_final}(a), it can be shown that if the total energy $E < E_{\text{crit,1}}$, full super-synchronous rotation of either the ice shell or core is impossible, and both parts will be in states of libration. If $E_{\text{crit,2}} > E > E_{\text{crit,1}}$, the ice shell is free to rotate, but the core is still tidally locked. The core is only free to rotate if $E > E_{\text{crit,2}}$, at which point both the ice shell and core will tend to be in super-synchronous rotation states. For the strongly coupled case given in Figure~\ref{fig:ZVC_final}(b), the ice shell and core will tend to co-rotate, with full rotation of both ice shell and core possible for $E > E_{\text{crit,2}}$. For $E_{\text{crit,1}} > E > E_{\text{crit,2}}$, full revolution of the ice shell requires full revolution of the core, and vice-versa. It is impossible for the core to be bounded while the ice shell continues to revolve, except for very high energy cases with $E > E_{\text{crit,1}}$. When $E > E_{\text{crit,1}}$, the zero-velocity curve boundaries shrink into inaccessible ``islands'' of states that the system cannot occupy, and paths that permit unbounded motion in $\eta_{s}$ with bounded oscillations in $\eta_{i}$ become theoretically possible.
Finally, when $E < E_{\text{crit,2}}$, both the ice shell and core become tidally locked. Note that in some instances the energy level will be sufficiently high to permit rotation of the ice shell and/or core, yet all or part of the system will remain in a bounded rotation state. This is because the energy level only provides limits on \textit{admissible} behavior.
The two aforementioned cases illustrate very different possible dynamical evolutions of the rotating moon, depending on the strength of the gravity gradient torque between ice shell and core. In the case that $\hat{K}_{G} < \hat{K}_{G}^{*}$, energy loss from an initially super-synchronous rotation state can tidally lock the interior before the ice shell. However, if $\hat{K}_{G} > \hat{K}_{G}^{*}$, energy loss from an initially super-synchronous rotation state will eventually result in the simultaneous locking of the interior and ice shell. These cases are illustrated via numerical simulation later in the paper, and these conclusions are summarized graphically in Figure~\ref{fig:Energies}.
\begin{figure}[h!]
\centering
\includegraphics[]{energies_v2.pdf}
\caption{Tidal locking and system energy. As the total energy decreases over time due to tidal dissipation, the accessible configurations of ice shell and interior become increasingly limited. With weak gravitational coupling ($K_{G} < K_{G}^{*}$), the accessible region changes in a way that allows independent ice shell revolutions while the solid interior is tidally locked. With strong gravitational coupling ($K_{G} > K_{G}^{*}$), this is not permitted, and the ice shell and solid interior lock simultaneously.}
\label{fig:Energies}
\end{figure}
\subsubsection{Simulations with the Averaged Dynamics}
Numerical simulations of Eq.~\eqref{VH1} can be used to validate the fundamentals of the preceding energy analysis: the behavior of the system is plotted in the 2D angle space together with the relevant zero-velocity curves to illustrate how the state is bounded. Note that for these simulations the dynamic tidal torques are not considered, so the system conserves energy.
For the first example, consider the case of bounded librations of both ice shell and core, with the energy insufficient to permit full rotations. The relevant physical parameters and initial conditions are given in the first entry of Table \ref{table:SimSet1}. The oscillations of the interior and the ice shell are given in Figure~\ref{fig:ZVCfEx1}(a). The states are also plotted in Figure~\ref{fig:ZVCfEx1}(b), which shows the evolving angle pair $\{\eta_{i}, \eta_{s} \}$ in blue, bounded externally by the black zero-velocity curve for energy $\hat{E} = f(\eta_{i,0}, \eta_{s,0}, \eta_{i,0}^{'}, \eta_{s,0}^{'})$.
\begin{table}[h!]
\centering
\caption{Parameters for Simulations with Averaged Dynamics}
\label{table:SimSet1}
\begin{tabular}{ll|l}
Parameter & Simulation 1 & Simulation 2 \\ \cline{1-3}
\multicolumn{1}{l|}{Polar mom. inert., $C_{i}$} & $C_{i} \equiv 1$ & $C_{i} \equiv 1$ \\
\multicolumn{1}{l|}{$\gamma = C_{i}/C_{s}$} & 28.0 & 28.0 \\
\multicolumn{1}{l|}{$\kappa_{j} = \frac{B_{j}-A_{j}}{C_{j}}$} & $\kappa_{i} = \kappa_{s} = 0.0015$ & $\kappa_{i} = \kappa_{s} = 0.0015$ \\
\multicolumn{1}{l|}{ Grav. coupling, $\hat{K}_{G}$} & $0.5(B_{i}-A_{i})$ & $0.5(B_{i}-A_{i})$ \\
\multicolumn{1}{l|}{Initial orientations $(\eta_{i}, \eta_{s})$, deg} & $0,0$ & $-15, -70$ \\
\multicolumn{1}{l|}{Initial angular velocities $(\dot{\eta}_{i}, \dot{\eta}_{s})$, deg/s} & $10^{-5},10^{-4}$ & $-4\times 10^{-5}, 6.75 \times 10^{-5}$ \\
\multicolumn{1}{l|}{Duration (num. Europa orbits)} & 300 & 600
\end{tabular}
\end{table}
\begin{figure}[h!]
\centering
\subfloat[Rotation Angles]{\includegraphics[scale=0.8]{vanhoolst_new_Feb21_2.pdf}}
\subfloat[State Bounded by Zero-Velocity Curves]{\includegraphics[scale=0.8]{traj_avg_ZVC_Feb21_2.pdf}}
\caption{Zero-velocity curves for a case with tidally locked interior and shell, and no energy dissipation. The zero-velocity curves demarcate the maximum extent of the state $\{\eta_{i}, \eta_{s} \}$.}
\label{fig:ZVCfEx1}
\end{figure}
\begin{figure}[h!]
\centering
\subfloat[Rotation Angles]{\includegraphics[scale=0.8]{vanhoolst_new_Feb21_1.pdf}}
\subfloat[State Bounded by Zero-Velocity Curves]{\includegraphics[scale=0.8]{traj_avg_ZVC_Feb21_1.pdf}}
\caption{Zero-velocity curves for a case with tidally locked interior, unbounded shell movement, and no energy dissipation. Zero-velocity curves demarcate the maximum extent of the state $\{\eta_{i}, \eta_{s} \}$.}
\label{fig:ZVCfEx2}
\end{figure}
For the second example, given in the second entry of Table \ref{table:SimSet1}, the energy level and gravitational coupling constant $\hat{K}_{G}$ are chosen such that the interior is tidally locked but the ice shell is free to revolve. The oscillations of the interior and the ice shell are given in Figure~\ref{fig:ZVCfEx2}(a), and these states are also plotted in Figure~\ref{fig:ZVCfEx2}(b), which shows the angles $\{\eta_{i}, \eta_{s} \}$ bounded externally by the zero-velocity curve for energy $\hat{E} = f(\eta_{i,0}, \eta_{s,0}, \eta_{i,0}^{'}, \eta_{s,0}^{'})$. Note that in this case the zero-velocity curves provide no constraints on the motion of $\eta_{s}$, but restrict the maximum oscillations of $\eta_{i}$, depending on the value of $\eta_{s}$. The behavior of the system for this case is quite complex, but the zero-velocity curves clearly show the fundamental limitations on the motion of the system. There is a simple underlying explanation for the chaotic shell motion when the orientation of the interior is also considered. Figure \ref{fig:DeltaHist} shows the differential angle $\delta = \eta_{s} - \eta_{i}$ between shell and core at the instant the ice shell reverses direction, with wrapping $|\delta| \leq \pi$. Large differential angles between the shell and interior generate a powerful torque on the ice shell, occasionally slowing the ice shell's differential rotation until it reverses direction. The torque is zero for $\delta = 0$, $\delta = \pm\pi/2$, and $\delta = \pm\pi$, explaining the drop-offs in turn-around instances near these values.
\begin{figure}[h!]
\centering
\includegraphics[]{AvDynEx2_DeltaHist2.pdf}
\caption{Histogram of $\delta = \eta_{s} - \eta_{i}$, at the instant the ice shell reverses direction ($\eta_{s}^{'} = 0$), with wrapping $|\delta| \leq \pi$. This shows that the relative orientation of the ice shell and interior strongly governs the behavior of the ice shell, and that the gravitational coupling between shell and core is responsible for the chaotic shell rotations. Generated by simulation of the chaotic solution (second entry of Table \ref{table:SimSet1}) for $5\times 10^{4}$ orbits.}
\label{fig:DeltaHist}
\end{figure}
The complex behavior of the second example given in Figure~\ref{fig:ZVCfEx2} prompts a broader exploration of the possible behaviors exhibited by the system at an energy level $E_{\text{crit},2} > E > E_{\text{crit},1}$, where it is possible for the ice shell to undergo full revolutions independent of the tidally locked interior. Choosing an appropriate energy level $E = \frac{1}{2}\left(E_{\text{crit},1} + E_{\text{crit},2}\right)$ and using the same parameters from the Simulation 2 entry of Table \ref{table:SimSet1}, a Poincaré map is constructed, with the Poincaré surface defined as positive ($\eta_{i}^{'} > 0$) crossings of $\eta_{i} = 0$. The crossings are catalogued primarily from initial points with $\eta_{i}(t_{0}) = 0$, $\eta_{s}^{'}(t_{0}) = 0$, and $\eta_{s}(t_{0}) \in [0, 2\pi]$. The unspecified value of $\eta_{i}^{'}(t_{0})$ can be determined from the system energy. The resulting map is given in Figure~\ref{fig:Poincare1}. The map shows that chaotic motion of the ice shell constitutes a large part of the solution space for the system at this energy level. Whether or not Europa would undergo such chaotic behavior depends on the initial conditions and also the physical parameters governing the system - in particular the strength of gravitational coupling. It is noted that if the gravitational coupling is strengthened, the region of regular libration solutions expands, and the fraction of solutions exhibiting chaotic behavior is reduced. Additionally, the structure of the quasi-periodic solutions depends on the choice of values of $\kappa_{i}$ and $\kappa_{s}$. However, the same broad structure is generally observed: libration-based solutions near $\eta_{s} = 0$ and $\pi$, surrounded by a region of chaos.
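To make the construction of the map concrete, the following sketch (an illustration only; the averaged right-hand side of Eq.~\eqref{VH1} is not reproduced in this section and must be supplied) shows how the positive crossings of $\eta_{i} = 0$ can be collected with an event detector:
\begin{verbatim}
# Minimal sketch: Poincare section at eta_i = 0 with eta_i' > 0.
# `averaged_rhs` is a stand-in for the averaged dynamics of Eq. (VH1),
# returning d/dtau of the state y = [eta_i, eta_s, eta_i', eta_s'].
import numpy as np
from scipy.integrate import solve_ivp

def averaged_rhs(tau, y):
    raise NotImplementedError("supply the averaged dynamics of Eq. (VH1)")

def crossing(tau, y):
    return y[0]               # eta_i = 0 defines the Poincare surface
crossing.direction = 1        # keep only crossings with eta_i' > 0

def poincare_points(y0, n_orbits=2000):
    sol = solve_ivp(averaged_rhs, (0.0, 2.0 * np.pi * n_orbits), y0,
                    events=crossing, rtol=1e-10, atol=1e-12)
    hits = sol.y_events[0]                   # states at the crossings
    eta_s = np.mod(hits[:, 1], 2.0 * np.pi)  # shell angle on [0, 2*pi)
    return eta_s, hits[:, 3]                 # (eta_s, eta_s') map points

# Seeding as described above: eta_i(0) = 0, eta_s'(0) = 0, eta_s(0) swept
# over [0, 2*pi]; eta_i'(0) is then fixed by the chosen system energy.
\end{verbatim}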
\begin{figure}[h!]
\centering
\includegraphics[]{poincare_VanHoolst_MidE_LowKhatG_dense_v2.pdf}
\caption{Poincaré map for the case that the solid interior is restricted to librations, but the shell is potentially free to revolve, and gravitational coupling is weak. Shown are the rotation angle of the ice shell $\eta_{s}$ and the dimensionless rate $\eta_{s}^{'} = \frac{\text{d}}{\text{d}\tau}\left(\eta_{s}\right)$, where $\tau = nt$. The map shows a periodic solution when the shell and core librate in unison, surrounded by quasi-periodic solutions for independent and out-of-phase librations of shell and core. Beyond these regular libration solutions, which form closed curves in the map given sufficient time, the solution space is chaotic. The chaotic solutions are given as scattered and unorganized points outside the libration solution region.}
\label{fig:Poincare1}
\end{figure}
\subsubsection{Simulations with Tidal Locking}
This section uses numerical simulations of the unaveraged and tidally forced dynamics to show, for initially super-synchronous rotation, how energy decay from the dynamic tidal torques can produce different final states, and to verify the conclusions of the energy analysis of this system. The dynamics are simulated with Europa's true anomaly $f$ as the independent variable, $\left( \ \right)^{'} = \frac{\text{d}}{\text{d}f}\left( \ \right)$, and the libration angles $\psi_{j} = \theta_{j} - f$ as the dependent variables, using the following equations:
\begin{subequations}
\label{VHfdim}
\begin{align}
& \psi_{s}^{''} - 2\frac{r}{a}\frac{e\sin{f}}{1-e^{2}}\psi_{s}^{'} + \frac{3}{2}\left(\frac{B_{s} - A_{s}}{C_{s}}\right)\frac{r}{a(1-e^{2})}\sin{2\psi_{s}} = 2\frac{r}{a}\frac{e\sin{f}}{1-e^{2}} - \frac{K_{G}\sin{\left(2\left(\psi_{s} - \psi_{i}\right)\right)}}{C_{s}\dot{f}^{2}} + \frac{L_{\text{tidal},s}}{C_{s}\dot{f}^{2}}\\
& \psi_{i}^{''} - 2\frac{r}{a}\frac{e\sin{f}}{1-e^{2}}\psi_{i}^{'} + \frac{3}{2}\left(\frac{B_{i} - A_{i}}{C_{i}}\right)\frac{r}{a(1-e^{2})}\sin{2\psi_{i}} = 2\frac{r}{a}\frac{e\sin{f}}{1-e^{2}} + \frac{K_{G}\sin{\left(2\left(\psi_{s} - \psi_{i}\right)\right)}}{C_{i}\dot{f}^{2}} + \frac{L_{\text{tidal},i}}{C_{i}\dot{f}^{2}}
\end{align}
\end{subequations}
where $\dot{f} = h/r^{2}$, and the parameters $r, \ a, \ e$ are Europa's time-varying orbital radius, constant semimajor axis, and eccentricity. Lastly, $L_{\text{tidal},j}$ is the dynamic tidal torque acting on body $j$:\cite{Goldreich_TAJ1966}
\begin{equation}
\label{TidesEq1_2B}
L_{\text{tidal},j} = - \tilde{D}_{j}\left(\frac{a}{r}\right)^{6}\text{sign}(\dot{\psi}_{j})
\end{equation}
where traditionally $\tilde{D}$ could be defined in terms of body parameters as below:\cite{SSD_1999}
\begin{equation}
\label{TidesEq2_2B}
\tilde{D} = \frac{3}{2}\frac{k_{2}}{Q_{s}}\frac{n^{4}}{G}R_{s}^{5}
\end{equation}
where $k_{2}$ is the tidal Love number, $Q$ is the tidal dissipation function, and $R_{s}$ is the reference radius. The parameterization of $\tilde{D}$ given by Eq. \eqref{TidesEq2_2B} renders Eq. \eqref{TidesEq1_2B} the MacDonald tidal torque, which assumes a constant lag/lead angle for the tidal bulge. For the simulations which follow, the approach is less granular. The term $\tilde{D}$ is generally chosen to be large enough for tidal locking to occur on reasonable simulation timescales, since the scale of the dynamic tidal torque influences the pace of the system energy loss -- see e.g. Eq. \eqref{VH4}. This is similar to the approach in Reference~\citenum{Goldreich_TAJ1966}. Note that we do not consider the complex long-timescale variations in the eccentricity and semimajor axis of Europa's perturbed orbit in this work. These are discussed in Reference~\citenum{Bills_EuropaRotational} and references therein.
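For concreteness, a direct transcription of Eqs.~\eqref{VHfdim} and \eqref{TidesEq1_2B} into a right-hand-side function is sketched below (an illustrative sketch rather than the code used to generate the results in this paper), assuming the standard two-body relations $r/a = (1-e^{2})/(1+e\cos f)$ and $\dot{f} = n(1+e\cos f)^{2}/(1-e^{2})^{3/2}$, with units normalized so that $n = 1$:
\begin{verbatim}
# Minimal sketch: right-hand side of Eqs. (VHfdim) with the dynamic tidal
# torque of Eq. (TidesEq1_2B), using the true anomaly f as the independent
# variable. Normalized units: mean motion n = 1, C_i = 1.
import numpy as np

def rhs(f, y, e, kappa_s, kappa_i, K_G, C_s, C_i, D_s, D_i):
    psi_s, psi_i, dpsi_s, dpsi_i = y                     # primes are d/df
    r_over_a = (1.0 - e**2) / (1.0 + e * np.cos(f))
    fdot = (1.0 + e * np.cos(f))**2 / (1.0 - e**2)**1.5  # df/dt with n = 1
    forcing = 2.0 * r_over_a * e * np.sin(f) / (1.0 - e**2)
    # Dynamic tidal torques; sign(psi_dot) = sign(psi') since fdot > 0.
    L_s = -D_s * (1.0 / r_over_a)**6 * np.sign(dpsi_s)
    L_i = -D_i * (1.0 / r_over_a)**6 * np.sign(dpsi_i)
    coupling = K_G * np.sin(2.0 * (psi_s - psi_i))   # shell-interior gravity gradient
    ddpsi_s = (forcing * dpsi_s + forcing
               - 1.5 * kappa_s * r_over_a / (1.0 - e**2) * np.sin(2.0 * psi_s)
               - coupling / (C_s * fdot**2) + L_s / (C_s * fdot**2))
    ddpsi_i = (forcing * dpsi_i + forcing
               - 1.5 * kappa_i * r_over_a / (1.0 - e**2) * np.sin(2.0 * psi_i)
               + coupling / (C_i * fdot**2) + L_i / (C_i * fdot**2))
    return [dpsi_s, dpsi_i, ddpsi_s, ddpsi_i]

# Example: pass `rhs` to scipy.integrate.solve_ivp with f spanning the desired
# number of orbits, e.g. f in [0, 2*pi*900] for the first tidal-locking case.
\end{verbatim}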
Two simulations explore tidal locking of Europa for the two fundamental cases of interest -- (1) the case of weak interior gravity gradient torque coupling ($\hat{K}_{G} < \hat{K}_{G}^{*}$) and (2) the case of strong interior gravity gradient torque coupling ($\hat{K}_{G} > \hat{K}_{G}^{*}$). The data for these simulations is given in Table~\ref{table:EuropaDeSpin1}. The angles $\eta_{j}$ and angular rates $\dot{\eta}_{j}$ are used to characterize the initial conditions, but these can easily be converted via $\psi_{j} = \eta_{j} - f + nt$ and $\dot{\psi}_{j} = \dot{\eta}_{j} - \dot{f} + n$. Lastly, note $\psi_{j}^{'} = \frac{1}{\dot{f}}\dot{\psi}_{j}$. These same conversions are used to compute the value of the energy over the course of the simulations. The uncertain historical value of Europa's orbital eccentricity is a free variable in our simulations.
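The conversions stated above translate directly into a small helper (a sketch included for completeness):
\begin{verbatim}
# Minimal sketch: convert initial conditions given as (eta_j, eta_j_dot) into
# the libration-angle states (psi_j, psi_j') used by Eqs. (VHfdim), via
# psi_j = eta_j - f + n*t, psi_j_dot = eta_j_dot - f_dot + n,
# and psi_j' = psi_j_dot / f_dot.
def eta_to_psi(eta, eta_dot, f, f_dot, n, t):
    psi = eta - f + n * t
    psi_dot = eta_dot - f_dot + n
    return psi, psi_dot / f_dot        # (psi_j, psi_j')
\end{verbatim}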
\begin{table}[h!]
\centering
\caption{Parameters for Tidal Locking Simulations}
\label{table:EuropaDeSpin1}
\begin{tabular}{ll|l}
Parameter & Simulation 1 & Simulation 2 \\ \cline{1-3}
\multicolumn{1}{l|}{Polar mom. inert., $C_{i}$} & $C_{i} \equiv 1$ & $C_{i} \equiv 1$ \\
\multicolumn{1}{l|}{$\gamma = C_{i}/C_{s}$} & 28.0 & 28.0 \\
\multicolumn{1}{l|}{$\kappa_{j} = \frac{B_{j}-A_{j}}{C_{j}}$} & $\kappa_{i} = 0.0035$, $\kappa_{s} = 0.0015$ & $\kappa_{i} = 0.0035$, $\kappa_{s} = 0.0015$ \\
\multicolumn{1}{l|}{ Grav. coupling, $\hat{K}_{G}$} & $0.1(B_{i}-A_{i})$ & $2.5(B_{i}-A_{i})$ \\
\multicolumn{1}{l|}{ Dynamic tides, $\tilde{D}_{i}$, $\tilde{D}_{s}$} & $\tilde{D}_{i} = 0.003K_{G}$, $\tilde{D}_{s} = 0.0015K_{G}$ & $\tilde{D}_{i} = 0.005K_{G}$, $\tilde{D}_{s} = 0.0005K_{G}$ \\
\multicolumn{1}{l|}{Initial orientations $(\eta_{i}, \eta_{s})$, deg} & $-80.76, -103.47$ & $0, 10.5$ \\
\multicolumn{1}{l|}{Initial ang. vel. $(\dot{\eta}_{i}, \dot{\eta}_{s})$, deg/s} & $2.72\times 10^{-5}, -1.31\times 10^{-5}$ & $1.65\times 10^{-4}, 1.65 \times 10^{-5}$ \\
\multicolumn{1}{l|}{$\hat{E}_{0}$, $\hat{E}_{\text{crit},1}$, $\hat{E}_{\text{crit},2}$} & 0.0027, -0.0024, 0.0025 & 0.0031, 0.0018, -0.0017 \\
\multicolumn{1}{l|}{Europa orbit} & $a = 670900$ km, $e = 0.0094$, $f_{0} = 0$ & $f_{0} = 0$ \\
\multicolumn{1}{l|}{Duration, Europa orbits} & 900 (Fig.~\ref{fig:TidalDespinEx1}), 14000 (Fig.~\ref{fig:TidalDespinEx1b}) & 800
\end{tabular}
\end{table}
For the first simulation, Europa is placed in an initially super-synchronous spin state, with the ice shell and core revolving in lockstep. After about 280 orbits, the interior tidally locks, and the ice shell transitions from super-synchronous spin to more erratic behavior, characterized by alternating prograde and retrograde rotations.
\begin{figure}[]
\centering
\subfloat[Rotation Angles]{\includegraphics[scale=1.0]{vanhoolst_traj_unavg_Feb22_1.pdf}}
\subfloat[Rotational Energy]{\includegraphics[scale=1.0]{vanhoolst_E_unavg_Feb22_1.pdf}}
\caption{Tidal locking of Europa's core, weak gravity-gradient coupling ($\hat{K}_{G} < \hat{K}_{G}^{*}$), with erratic ice shell movement. This short timescale demonstrates initially super-synchronous rotation, followed by the locking of Europa's core while its ice shell continues to revolve, with $\hat{E}(t) \gg \hat{E}_{\text{crit},1}$.}
\label{fig:TidalDespinEx1}
\end{figure}
The erratic behavior of the ice shell continues until after 9000 orbits, when it tidally locks as well. Figure~\ref{fig:TidalDespinEx1}(a) highlights the initially super-synchronous behavior and the irregular ice shell rotations after the interior tidally locks, and Figure~\ref{fig:TidalDespinEx1}(b) shows that the locking shortly precedes the transition of $\hat{E}(t)$ to below the critical value for core tidal locking, $\hat{E}_{\text{crit},2}$. The full simulation is shown in Figure~\ref{fig:TidalDespinEx1b}. The tidal locking of the ice shell occurs just before the transition of $\hat{E}(t)$ to below the critical value for ice shell tidal locking, $\hat{E}_{\text{crit},1}$. This behavior is expected from the earlier analysis based on the zero-velocity curves in Section 2.4.1. Note that after the ice shell tidally locks, the energy decay rate slows significantly. Also, note that the ice shell rotations do not decouple from the interior in every case with $\hat{K}_{G} < \hat{K}_{G}^{*}$; this behavior is nevertheless quite common, depending on the choices of parameters and the initial conditions of the system. By contrast, in the case of $\hat{K}_{G} > \hat{K}_{G}^{*}$, it cannot occur at all.
\begin{figure}[]
\centering
\subfloat[Rotation Angles]{\includegraphics[scale=1.0]{vanhoolst_traj_unavg_Feb22_1LONG.pdf}}
\subfloat[Rotational Energy]{\includegraphics[scale=1.0]{vanhoolst_E_unavg_Feb22_1LONG.pdf}}
\caption{Tidal locking of Europa's ice shell and core, weak gravity-gradient coupling ($\hat{K}_{G} < \hat{K}_{G}^{*}$). On this long timescale of 14000 Europa orbits ($\sim 136$ yrs), both the core and ice shell tidally lock. These locking instances occur shortly before $E(t)$ decays to the corresponding critical values for locking -- $\hat{E}_{\text{crit},2}$ for the core, and $\hat{E}_{\text{crit},1}$ for the shell.}
\label{fig:TidalDespinEx1b}
\end{figure}
\begin{figure}[]
\centering
\subfloat[Rotation Angles]{\includegraphics[scale=1.0]{vanhoolst_traj_unavg_LargeKG_1.pdf}}
\subfloat[Rotational Energy]{\includegraphics[scale=1.0]{vanhoolst_E_unavg_LargeKG_1.pdf}}
\caption{Tidal locking of Europa's ice shell and core, strong gravity-gradient coupling ($\hat{K}_{G} > \hat{K}_{G}^{*}$). Over 200 Europa orbits, the initially super-synchronous rotation of the core and shell decays to a tidally locked state. Afterwards, librations are steadily reduced. The simultaneous locking occurs at approximately the time when $E(t)$ decays to the corresponding critical value for simultaneous locking, $\hat{E}_{\text{crit},2}$.}
\label{fig:TidalDespinEx2}
\end{figure}
For the second simulation, Europa is again placed in initially super-synchronous rotation with the core and ice shell revolving in unison. However, for this simulation, $\hat{K}_{G} > \hat{K}_{G}^{*}$, and numerical experiments for this case show that the ice shell and core generally tidally lock simultaneously. This is in contrast to the decoupling cases observed with $\hat{K}_{G} < \hat{K}_{G}^{*}$, and in line with the predictions from the earlier analysis with the zero-velocity curves in Section 2.4.1. The tidal locking in the example shown in Figure~\ref{fig:TidalDespinEx2} occurs after about 200 orbits, and coincides with the transition of $\hat{E}(t)$ to the critical value below which super-synchronous rotation is impossible -- $\hat{E}_{\text{crit},2}$.
\subsubsection{Non-synchronous Rotation}
An example with non-synchronous rotation as a final spin state is also provided, to illustrate the variety of final behaviors that are possible with a differentiated Europa. For this case, the simulation parameters are given in Table \ref{table:EuropaNSR1}. Note that Europa's eccentricity is increased to $0.05$ for this simulation. The scale of the dynamic tidal torques is chosen to allow for a reasonable simulation time to reach a final equilibrium state. With this choice of eccentricity, the specified asymmetry in the ice shell is sufficient to expect tidal locking, using Goldreich and Peale's classical result. In particular, the ice shell asymmetry $\kappa_{s} = \frac{B_{s} - A_{s}}{C_{s}}$ given in Table \ref{table:EuropaNSR1} exceeds the classical limit from Goldreich and Peale above which tidal locking is expected: Eq.~\eqref{Ean2} yields a cutoff of $\kappa_{\text{max}} = \frac{B - A}{C} = 0.000464$. However, while $\kappa_{s}$ exceeds this value, $\kappa_{i}$ does not.
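As a quick arithmetic check (a small illustration, not part of the original derivation), the quoted cutoff follows directly from the classical Goldreich--Peale limit $\kappa^{*} = \frac{2}{3}\left(\frac{9.5\pi e^{2}}{2\sqrt{2}}\right)^{2}$, as quoted later in the caption of Figure~\ref{fig:TLParam}, evaluated at $e = 0.05$:
\begin{verbatim}
# Quick arithmetic check of the classical asymmetry cutoff at e = 0.05,
# using the Goldreich--Peale limit quoted with Figure (fig:TLParam):
# kappa* = (2/3) * (9.5*pi*e^2 / (2*sqrt(2)))^2.
import math
e = 0.05
kappa_star = (2.0 / 3.0) * (9.5 * math.pi * e**2 / (2.0 * math.sqrt(2.0)))**2
print(f"kappa* = {kappa_star:.6f}")   # ~0.000464, matching the quoted cutoff
\end{verbatim}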
\begin{table}[h!]
\centering
\caption{Parameters for Non-Synchronous Rotation Simulation}
\label{table:EuropaNSR1}
\begin{tabular}{ll|l}
Parameter & Simulation 1 & \\ \cline{1-3}
\multicolumn{1}{l|}{Polar mom. inert., $C_{i}$} & $C_{i} \equiv 1$ & \\
\multicolumn{1}{l|}{$\gamma = C_{i}/C_{s}$} & 28.0 & \\
\multicolumn{1}{l|}{$\kappa_{j} = \frac{B_{j}-A_{j}}{C_{j}}$} & $\kappa_{i} = 0.0003$, $\kappa_{s} = 0.0006$ & \\
\multicolumn{1}{l|}{ Grav. coupling, $\hat{K}_{G}$} & $0.6(B_{i}-A_{i})$ & \\
\multicolumn{1}{l|}{ Dynamic tides, $\tilde{D}_{i}$, $\tilde{D}_{s}$} & $\tilde{D}_{i} = 0.2K_{G}$, $\tilde{D}_{s} = 0.05K_{G}$ & \\
\multicolumn{1}{l|}{Initial orientations $(\eta_{i}, \eta_{s})$, deg} & $0.0, 0.0$ & \\
\multicolumn{1}{l|}{Initial ang. vel. $(\dot{\eta}_{i}, \dot{\eta}_{s})$, deg/s} & $4.17\times 10^{-5}, 8.34\times 10^{-5}$ & \\
\multicolumn{1}{l|}{$\hat{E}_{0}$, $\hat{E}_{\text{crit},1}$, $\hat{E}_{\text{crit},2}$ ($\times 10^{-3}$)} & 0.3922, -0.1216, 0.1484 & \\
\multicolumn{1}{l|}{Europa orbit} & $a = 670900$ km, $e = 0.05$, $f_{0} = 0$ & \\
\multicolumn{1}{l|}{Duration, Europa orbits} & 5000 &
\end{tabular}
\end{table}
A numerical exploration of this case yields the results in Figure \ref{fig:NSREx1}. Europa does not settle into a tidally locked state in this simulation, despite the large mass asymmetry in the shell. The solid interior lacks sufficient asymmetry, and the ice shell is gravitationally dragged along into a slightly spun-up final non-synchronous rotation state, with super-synchronous final equilibrium energy $E_{f} > E_{c,2}$. This is despite the potential for energy dissipation via the dynamic tides.
\begin{figure}[htb!]
\centering
\subfloat[Rotation Angles]{\includegraphics[scale=1.0]{vanhoolst_traj_unavg_NSR_May1.pdf}}
\subfloat[Rotational Energy]{\includegraphics[scale=1.0]{vanhoolst_E_unavg_NSR_May1.pdf}}
\caption{Non-synchronous rotation of Europa's ice shell and core, weak gravity-gradient coupling ($\hat{K}_{G} < \hat{K}_{G}^{*}$). The energy decreases to a final equilibrium level $E_{f} > E_{c,2}$ and thus Europa does not settle into a tidally locked state.}
\label{fig:NSREx1}
\end{figure}
\subsubsection{Tidal Locking and Ice Shell Asymmetry}
We now explore the end behavior of Europa (tidally locked or non-synchronous rotation) in several large 2D parametric studies with variables $\kappa_{i} = (B_{i} - A_{i})/C_{i}$ and $\kappa_{s} = (B_{s} - A_{s})/C_{s}$. The dynamic tidal torques are made large enough that equilibrium spin states can be achieved in feasible computational time, similar to the approach in Goldreich \& Peale.\cite{Goldreich_TAJ1966, Goldreich_Peale_SpinOrb} Note that an equilibrium spin state is one for which secular changes in system energy have stopped. The parameters used for the simulations are given in Table~\ref{table:TLPS1}.
\begin{table}[h!]
\centering
\caption{Simulation Parameters for Tidal Locking Parameter Studies}
\label{table:TLPS1}
\begin{tabular}{ll|l}
Parameter & Values & \\ \cline{1-3}
\multicolumn{1}{l|}{Polar mom. inert., $C_{i}$} & $C_{i} \equiv 1$ & \\
\multicolumn{1}{l|}{$\gamma = C_{i}/C_{s}$} & 28.0 & \\
\multicolumn{1}{l|}{ Range of $\kappa_{i}$, $\kappa_{s}$} & $0.6 < \frac{\kappa_{i}}{\kappa^{*}} < 1.2$, $0.5 < \frac{\kappa_{s}}{\kappa^{*}} < 6.0$ & \\
\multicolumn{1}{l|}{ Grav. coupling, $\hat{K}_{G}$} & $0.6(B_{i}-A_{i})$ or $1.6(B_{i}-A_{i})$ & \\
\multicolumn{1}{l|}{ Dynamic tides, $\tilde{D}_{i}$, $\tilde{D}_{s}$} & $\tilde{D}_{i} = 0.3K_{G}$, $\tilde{D}_{s} = 0.075K_{G}$ & \\
\multicolumn{1}{l|}{Initial orientations $(\eta_{i}, \eta_{s})$, deg} & $0.0, 0.0$ & \\
\multicolumn{1}{l|}{Initial ang. vel. $(\dot{\eta}_{i}, \dot{\eta}_{s})$, deg/s} & Variable & \\
\multicolumn{1}{l|}{Europa orbit} & $a = 670900$ km, $e = 0.05$, $f_{0} = 0$ & \\
\multicolumn{1}{l|}{Duration, Europa orbits} & 2500 &
\end{tabular}
\end{table}
For these studies, for each point in the grid, the minimum interior angular velocity for unbounded revolution is computed, then scaled by a constant factor to generate a super-synchronous interior rotation state. Similarly, the ice shell rotation state is scaled to ensure super-synchronous rotation of the ice shell. Values are chosen such that the ice shell and core generally revolve in lockstep at the beginning of the simulation. Starting with the case of $\gamma = 28$ and $\hat{K}_{G} = 0.6(B_{i}-A_{i})$, with the initial conditions as specified and with gravitational coupling $K_{G} < K_{G}^{*}$, Europa is initially in the high-energy case as depicted in the top left of Figure~\ref{fig:Energies}.
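The structure of the study can be summarized by the following driver sketch (an illustration only; \texttt{simulate} stands in for an integration of Eqs.~\eqref{VHfdim} over 2500 orbits, which is not reproduced here):
\begin{verbatim}
# Minimal sketch of the 2D parameter-study loop. `simulate` is a stand-in
# returning (E0, Ef, E_crit1) for a given pair of normalized asymmetries;
# the classification Delta_E/(E0 - E_crit1) < -1 follows the text.
import numpy as np

def simulate(kappa_i_over_star, kappa_s_over_star):
    raise NotImplementedError("integrate Eqs. (VHfdim) for 2500 orbits")

kappa_i_grid = np.linspace(0.6, 1.2, 25)   # in units of kappa*
kappa_s_grid = np.linspace(0.5, 6.0, 25)
outcome = np.zeros((kappa_i_grid.size, kappa_s_grid.size))

for a, ki in enumerate(kappa_i_grid):
    for b, ks in enumerate(kappa_s_grid):
        E0, Ef, E_crit1 = simulate(ki, ks)
        outcome[a, b] = (Ef - E0) / (E0 - E_crit1)   # < -1 => tidally locked
\end{verbatim}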
\begin{figure}[h!]
\centering
\includegraphics[]{StudyFigure_WeakCoupling1_Aug6_NEW.pdf}
\caption{Europa's equilibrium spin state vs. moment of inertia asymmetries $\kappa_{i}/\kappa^{*}$ and $\kappa_{s}/\kappa^{*}$, weak gravitational coupling case with $\hat{K}_{G}=0.6(B_{i}-A_{i})$ and $\gamma=28$. Recall $\kappa_{i} = (B_{i}-A_{i})/C_{i}$, $\kappa_{s} = (B_{s}-A_{s})/C_{s}$, and $\kappa^{*} = \frac{2}{3}\left(\frac{9.5\pi e^{2}}{2\sqrt{2}}\right)^{2}$ is the classical limit from Goldreich \& Peale\cite{Goldreich_TAJ1966, Goldreich_Peale_SpinOrb}. For a rigid satellite, $\kappa > \kappa^{*}$ indicates sufficient asymmetry to produce tidal locking. In the figure, $\Delta E = E_{f} - E_{0}$ is the energy decrease over 2500 orbits, and the dark cyan cases with $\Delta E/(E_{0}-E_{c,1}) < -1$ are all tidally locked. The classical tidal locking limit does not apply to Europa, and large ice shell asymmetry enables tidal locking for values of $\kappa_{i} < \kappa^{*}$.}
\label{fig:TLParam}
\end{figure}
The results of this first numerical study are given graphically in Figure~\ref{fig:TLParam}. This figure shows the drop in system energy over the simulation time span, with a clear bifurcation between tidally locked cases (for which $\Delta E/(E_{0}-E_{c,1}) < -1$) and non-synchronous rotation cases (with $\Delta E/(E_{0}-E_{c,1}) \gg -1$). The former are indicated with a dark cyan and the latter with a light green. Furthermore, within the non-synchronous rotation region, the coloration of an individual cell reflects the level of energy loss. The cases with least energy loss are more yellow, and these represent faster equilibrium non-synchronous rotation rates.
By the choice of initial conditions, the moon cannot lose much energy before entering a state of core or combined ice shell and core locking. Note that once $E<E_{\text{crit},2}$, tidal locking becomes inevitable -- equilibrium spin states with energies between $E_{\text{crit},2}$ and $E_{\text{crit},1}$ are completely absent from the numerical results. Overall, it is observed that applying the classical condition given by Eq.~\eqref{Ean2} to the solid interior alone (whose moments of inertia dwarf those of the shell) only roughly approximates the region of bifurcation between tidal locking and non-synchronous rotation. Additionally, as the moment of inertia asymmetry of the ice shell is increased, it becomes more likely for Europa to end up tidally locked. The boundary between tidally locked and NSR solutions migrates to lower values of $\kappa_{i}$ as $\kappa_{s}$ is increased.
To better approximate the transition boundary between non-synchronous and tidally locked final spin states, consider application of the classic condition given by Eq.~\eqref{Ean2} to a differentiated but rigid Europa, where the shell is artificially geometrically fixed to the solid interior. With this assumption that the shell and interior are completely in lockstep, and noting that moments of inertia are additive, we obtain a linear boundary between tidal locking and NSR, $\kappa_{i}^{*} = \kappa^{*}\left(1 + \frac{1}{\gamma} - \frac{1}{\gamma}\frac{{\kappa}_{s}}{\kappa^{*}}\right)$, given as an orange line in Figure~\ref{fig:TLParam}. This linear assumption fails to accurately fit the nonlinear transition boundary, but serves as a rough approximation. The failure of this line to fit the transition boundary in the region with lower values of $\kappa_{s}/\kappa^{*}$ is notable -- the ability of the shell to move independently of the interior turns out to be dynamically important in the end behavior of the system. A shell with a permanent mass asymmetry that is classically thought too small to induce tidal locking is in fact able to induce tidal locking in these nonlinear simulations. We observe however that the linear slope fits the nonlinear simulation results well in the range $3 < \kappa_{s}/\kappa^{*} < 5.5$, as long as $\kappa_{i}$ is high enough to avoid the complex tidal locking behavior exhibited in the lower right corner of the plot.
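The lockstep (rigid-Europa) boundary quoted above is simple to tabulate (a short sketch for illustration):
\begin{verbatim}
# Minimal sketch: the rigid, lockstep tidal-locking boundary
# kappa_i* = kappa* * (1 + 1/gamma - (1/gamma) * kappa_s/kappa*),
# expressed in units of kappa* over the range of shell asymmetries studied.
import numpy as np

gamma = 28.0
kappa_s_over_star = np.linspace(0.5, 6.0, 12)
boundary = 1.0 + (1.0 - kappa_s_over_star) / gamma    # kappa_i*/kappa*
for ks, ki in zip(kappa_s_over_star, boundary):
    print(f"kappa_s/kappa* = {ks:4.2f} -> kappa_i*/kappa* = {ki:.4f}")
\end{verbatim}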
The region in the lower right corner of Figure~\ref{fig:TLParam}, within the range $4 < \frac{\kappa_{s}}{\kappa^{*}} < 6$ and $0.6 < \frac{\kappa_{i}}{\kappa^{*}} < 0.85$, displays more complexity than the rest of the parameter space. In this region, independent rotation of the ice shell plays an especially important role by amplifying the tidal dissipation rate and driving Europa towards a tidally locked state. In this region, the core has much less permanent mass asymmetry than is expected to drive tidal locking, but the shell has a very large permanent mass asymmetry. The result is a complex variation of outcomes depending on the exact relative values of $\kappa_{s}/\kappa^{*}$ and $\kappa_{i}/\kappa^{*}$. Notable is the ``stable tongue" of tidally locked behavior extending deep into the region that is otherwise dominated by NSR. This feature bears some resemblance to the stability diagrams featured in classical and contemporary studies of systems which exhibit parametric resonance. See for example Figure 4 in Reference~\citenum{KovacicEA_ParamResonance}. To extend the analogy, consider the simple damped Mathieu oscillator below with stiffness, damping, and perturbative parameters $\delta$, $c$, and $\varepsilon$, for which $|\varepsilon/\delta| \ll 1$.
\begin{equation}
\label{Mathieu1}
\ddot{x} + c\dot{x} + (\delta + \varepsilon \cos{t})x = 0
\end{equation}
For the case of dissipative damping, in the absence of parametric forcing ($\varepsilon = 0$) this oscillator is stable, i.e. $x(t) \rightarrow 0$ for $\delta > 0$, and unstable for $\delta < 0$. However, when $\varepsilon > 0$, there are some combinations of $\delta$ and $\varepsilon$ which can stabilize the system even when $\delta < 0$. In our numerical study of Europa, we have a more complicated system -- two coupled second-order differential equations with additional forcing terms -- but we still have time-varying coefficients in the dynamics (including time-varying ``stiffness" terms), and we observe a region of mostly unsettled (i.e. non-synchronous) behavior for which there is settling (i.e. tidal locking) for a narrow range of parameters.
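A short numerical experiment (an illustrative sketch only) makes the analogy tangible: integrating Eq.~\eqref{Mathieu1} for a fixed $\delta < 0$ while scanning $\varepsilon$ reveals whether a window of bounded, parametrically stabilized responses exists for the chosen parameters:
\begin{verbatim}
# Illustrative sketch: integrate the damped Mathieu oscillator of
# Eq. (Mathieu1) and flag whether the response stays bounded, scanning the
# parametric amplitude eps for a fixed, statically unstable stiffness delta < 0.
import numpy as np
from scipy.integrate import solve_ivp

def mathieu(t, y, c, delta, eps):
    x, v = y
    return [v, -c * v - (delta + eps * np.cos(t)) * x]

def is_bounded(delta, eps, c=0.05, t_end=200.0, threshold=1e3):
    sol = solve_ivp(mathieu, (0.0, t_end), [1.0, 0.0],
                    args=(c, delta, eps), rtol=1e-8, atol=1e-10)
    return np.max(np.abs(sol.y[0])) < threshold

delta = -0.05
for eps in np.linspace(0.0, 1.0, 11):
    status = "bounded" if is_bounded(delta, eps) else "unbounded"
    print(f"eps = {eps:.1f}: {status}")
\end{verbatim}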
It is instructive to examine two individual cases of interest from this parameter study. First, consider the non-synchronous rotation case with $\kappa_{s}/\kappa^{*} = \kappa_{i}/\kappa^{*} = 0.8$, whose energy plot is given on the left side of Figure \ref{fig:PSind1ab}. The energy settles to an equilibrium level above $\hat{E}_{\text{crit},2}$, and Europa ends up in a state of non-synchronous rotation. By contrast, for the case with $\kappa_{s}/\kappa^{*} = 6$ and $\kappa_{i}/\kappa^{*} = 0.8$, the energy falls to $\hat{E}_{\text{crit},2}$ and lingers until falling below this critical value, at which point tidal locking occurs, and energy is rapidly depleted afterwards. For this particular case, the ice shell and core tidally lock together when $\hat{E}(t) < \hat{E}_{\text{crit},2}$. It seems that regardless of the behavior of the ice shell, if the total energy falls below $E_{\text{crit,2}}$, tidal locking of the entire moon will eventually ensue, because equilibrium spin states with $E_{\text{crit},1} < E < E_{\text{crit},2}$ are generally not observed in these simulations.
\begin{figure}[htb!]
\centering
\subfloat[Non-synchronous rotation case]{\includegraphics[scale=0.95]{EPlot_May6c_LL.pdf}}
\subfloat[Tidal locking case]{\includegraphics[scale=0.95]{EPlot_May6c_HL.pdf}}
\caption{Energy behavior of two cases from the parameter study. On the left, $\kappa_{s}/\kappa^{*} = \kappa_{i}/\kappa^{*} = 0.8$, and the energy does not fall to the critical value needed for tidal locking. On the right, the large moment of inertia asymmetry $\kappa_{s}/\kappa^{*} = 6$ is sufficient to induce tidal locking, despite the sub-critical asymmetry in the interior, $\kappa_{i}/\kappa^{*} = 0.8$.}
\label{fig:PSind1ab}
\end{figure}
Repeating the above parameter study with the same methodology and the same $\gamma=28$, but using a strong gravitational coupling constant $\hat{K}_{G} = 1.6(B_{i}-A_{i}) > \hat{K}_{G}^{*}$, produces Figure \ref{fig:TLParam2}. For this system, the energy loss always enforces simultaneous locking of the initially co-rotating ice shell and core -- see the lower half of Figure~\ref{fig:Energies}. The result is a less interesting landscape of possible solutions, with a fairly linear boundary separating tidal locking from NSR final states. Note, however, that the linear extrapolation, which assumes a differentiated but rigid Europa, again does not correctly reproduce the slope of the boundary line. It provides a decent approximation of the boundary but is once again incorrect in the region with lower $\kappa_{s}/\kappa^{*}$. The source of the mismatch is that the small independent oscillations of the shell rotation angle with respect to the core rotation angle provide an additional avenue for energy dissipation, extending the tidal locking boundary below the classical limit.
\begin{figure}[h!]
\centering
\includegraphics[]{StudyFigure_StrongCoupling_Aug31_NEW.pdf}
\caption{Europa's equilibrium spin state vs. moment of inertia asymmetries $\kappa_{i}/\kappa^{*}$ and $\kappa_{s}/\kappa^{*}$, strong gravitational coupling case with $\hat{K}_{G}=1.6(B_{i}-A_{i})$ and $\gamma=28$. The figure shows $\Delta E/(E_{0}-E_{\text{crit},2})$, where $\Delta E = E_{f} - E_{0}$ is the energy decrease over 2500 orbits, and the dark cyan cases with $\Delta E/(E_{0}-E_{\text{crit},2}) < -1$ are all tidally locked. As opposed to the prior study with weak gravitational coupling, the boundary between tidal locking and non-synchronous rotation is simple and nearly linear in $\kappa_{s}$.}
\label{fig:TLParam2}
\end{figure}
\subsection{Other Torques on Europa's Ice Shell}
Here, to aid in understanding the limitations of our present analysis, we provide a summary of the most commonly discussed significant torques that could also be at work within Europa during general independent motion of the shell and interior. Some of these torques are conservative, and some are dissipative. In the case that a torque is conservative, a new energy function similar to the one introduced in this paper can be developed to account for its effects. In the case that a torque is dissipative, there is a corresponding decrease in total system energy due to the action of the dissipating torque. The dissipation power is the product of the dissipative torques and the instantaneous angular velocity of the associated body.
The first torque to consider is the Poincaré torque, which is exerted when the polar axis of a rotating fluid departs from the axis of figure of the enclosing shell:
\begin{equation}
\label{PoncareTorque1}
\bm{\Gamma}_{\text{p}} = -\int_{S}P\left(\bm{r} \times \hat{\bm{n}}\right)\text{d}S
\end{equation}
where $P$ is the pressure exerted by the fluid at point $\bm{r}$, $\hat{\bm{n}}$ is the local normal vector of the enclosing shell, and the integral is taken over the entire interior surface $S$ of the shell.
In the case of Europa, this term is thus important when the axis of figure of the enclosing shell departs significantly from Europa's rotation axis. This torque is conservative, but should be negligible for our purely NSR analysis. It is sometimes necessary to consider this term in a polar wander analysis -- where the spin axis and axis of figure depart significantly.\cite{Ojakangas1989EuropaPW}
The friction torque is due to the differential angular velocity of the ice shell and interior. It is a dissipative torque, which has previously been assumed to be turbulent:\cite{Ojakangas1988}
\begin{equation}
\label{FrictionTorque2}
\bm{\Gamma}_{\text{Turbulent}} \approx C_{T}\|\bm{\omega}_{i} - \bm{\omega}_{s}\|(\bm{\omega}_{i} - \bm{\omega}_{s})
\end{equation}
where $C_{T} \propto r^{5}$ for radius $r$. The condition for turbulence is $\text{Re} = \frac{ul}{\nu} \gg 300$, and a reorientation timescale of $10^{5} - 10^{6}$ yrs would yield $\text{Re} \sim 10^{5} - 10^{6}$, greatly exceeding the level required for turbulence.\cite{Ojakangas1988} Reference~\citenum{Ojakangas1988} argues that even a more powerful laminar friction torque would still likely be sub-dominant. However, because the friction torque scales with the ice shell reorientation rate, it would act to impede ice shell reorientations above some maximal rate.
The final torque under consideration is the shell dissipation torque. Because the ice shell is not perfectly rigid, there will be some loss of energy due to viscous flow, and potentially shell fracturing. This loss of energy due to work done on the shell implies that there would be a corresponding net torque on the ice shell. Reference~\citenum{Ojakangas1989EuropaPW} gives a rough estimate of this torque for polar wander, assuming a continuous viscous shell and neglecting the effects of deep fractures and other geologically developed structural irregularities. Their expression is adapted for the NSR analysis with general shell and interior angular velocities $\bm{\omega}_{s}$ and $\bm{\omega}_{i}$:
\begin{equation}
\label{ViscousTorque1}
\bm{\Gamma}_{v} \approx -C_{v}\frac{(\bm{\omega}_{s} - \bm{\omega}_{i})}{\|\bm{\omega}_{s} - \bm{\omega}_{i}\|}
\end{equation}
where $C_{v} \sim \frac{4\pi a^{2}d}{l}\left(\frac{T_{m}}{T_{m}-T_{s}}\right)\mu \gamma^{2}$ for average radius $a$, ice depth $d$, melting temperature $T_{m}$, characteristic surface temperature $T_{s}$, elastic modulus $\mu$, and a constant $l$. This is derived by a simplified model dividing the ice shell into an elastically responding outer layer and viscously responding inner layer. Ojakangas shows that this estimate for the torque is massive, and discusses at length the difficulty of accurately estimating the scale of viscous dissipation in a realistic model of the shell. In particular, this dissipative torque could be greatly reduced by deep fractures in the ice, with the strain accommodated by the cracks between fractured elements of the shell instead of in the shell elements themselves. However it is not known to what extent and to what depth Europa's ice shell has historically been fractured.
In general, the inclusion of additional dissipative torques can be addressed by straightforward extension of the approach taken in this paper. The effect on system energy of the non-conservative dynamic tidal torque has been derived previously for the single rigid-body case, and extended to the differentiated case in this work. We can analogously include additional terms in the dynamics representative of the additional dissipative torques of interest, and their effect on the energy dissipation rate is straightforward. An appropriate extension should include the dissipative torque associated with the shell's viscoelastic nature, and could also include the (likely turbulent) friction torques, but the dissipation due to shell non-rigidity is likely the larger of the two. Because this effect removes a significant amount of energy from the system, we would expect the NSR boundaries in Figures~\ref{fig:TLParam} and \ref{fig:TLParam2} to shift. Intuitively, we would expect the regions of tidal locking to generally encroach on the regions of non-synchronous rotation due to the presence of an additional avenue for energy dissipation below the critical threshold for tidal locking. However, these additional coupling torques might also impede the development of the independent oscillations of the shell and core observed in this work. This would result in a greater portion of the total system energy remaining in the rotation of the solid interior, and potentially act as a buffer against the chaotic shell rotation solutions which feature tidal locking of the interior.
\subsection{Non-Rigidity in Europa's Early Ice Shell}
While we leave numerical exploration of the response of a non-rigid ice shell to future work, the implications of non-rigidity are still worth discussing, especially in the context of the kinds of motions predicted by the analyses in this work. Reference~\citenum{Ojakangas1989EuropaPW} discusses the damping effects of viscous dissipation for a non-rigid ice shell that undergoes rapid angular displacement, showing that viscous effects can absorb a large amount of energy, significantly impeding rapid shell motion. The decoupled shell revolution behavior in our work may thus be stalled by dissipation, or may instead quickly impart a massive amount of energy into the shell -- potentially fracturing the ice. The type of reaction depends on the thickness of the ice shell. Since we are exploring tidal locking which could have occurred at any point in Europa's history, the effect on an ice shell much thinner than it is today (representative of an earlier, warmer Europa) is also of interest.
The Maxwell time $\tau_{m}$ generally decreases with depth in the ice shell. Consider also a ``reorientation time" $\tau_{r}$ representative of the timescale for the ice shell to revolve to a longitude 90 degrees offset from that of the interior. For a primordial thin ice shell, the response is dominated by a large $\tau_{m}$. If the reorientation time is significantly under this timescale, $\tau_{r} \ll \tau_{m}$, it could induce broad global fracturing in the shell due to the inability of elastic strain to accommodate the entire deformation. Thus, our solutions that forced the shell into a chaotic spin state (characterized by revolutions independent of the core) could in reality impart a significant amount of energy into the broad fracturing of the shell. Such a geologically violent event could also drastically change the scale and location of the permanent mass asymmetries in the shell, altering $\kappa_{s}$ and additionally defining a new $\eta_{s}$. Depending on the amount of energy released in a cracking incident, such a mechanism of inducing global cracking could present a means of dissipating rotational energy throughout Europa's early history, as the shell cracks, re-freezes, and rotates out of alignment with the core repeatedly. However, note that the chaotic shell rotation solution was only observed for part of the parameter space, in particular for some solutions where $\kappa_{i} < \kappa^{*}$ and $\kappa_{s} \gg \kappa^{*}$. These are cases in which the interior has a small mass asymmetry but the shell has a large asymmetry.
In an alternate scenario, consider a thick ice shell whose response is dominated by the small $\tau_{m}$ of the warm ice deep in the shell, extending down to the shell-ocean interface. For this thick shell, if the reorientation timescale satisfies $\tau_{r} \gg \tau_{m}$, there could be a rapid arresting of independent motion due to dissipation deep within the ice shell. However, deep cracks in the ice could act to reduce this dissipative effect. The degree to which dissipation would be reduced is not certain, because accurate modeling of the global viscoelastic response of the shell is extremely complicated.
\subsection{Mass Asymmetries of Europa's Shell and Interior}
Mass asymmetries in Europa's icy shell and interior may exist due to variations in topography and density, with potentially significant contributions to the moment difference $\kappa = (B - A)/C$. In fact, only the ``frozen-in" asymmetries contribute to $\kappa$, because the hydrostatic figure raised by tides is directly responsible for the nonzero mean torque driving non-synchronous rotation, whereas $\kappa$ controls the restoring torque when the asymmetry is misaligned with the tidal bulge \cite{Goldreich_Peale_SpinOrb}. In the context of the dynamical coupling of the shell and interior, the respective values of $\kappa_{s}$ and $\kappa_{i}$ are thus proportional to the degree-two ($l = m = 2$) quadrupole moments of the shell and interior mass distributions, where the $C_{lm}$ are the spherical harmonic coefficients of the corresponding rigid body's gravity field.
First, we consider small-scale topography. For reasonable values of Europa's ice shell thickness $d$, a circular load with diameter $L$ will remain rigidly supported for timescales $t_\mathrm{R} \sim \eta/\rho g L$, where $\eta$ is the shell's viscosity (Pa s), $\rho$ is the density, and $g$ is the surface gravity. The temperature-dependence of ice viscosity follows an Arrhenius-like relation,
\begin{equation}
\eta(T) = \eta_0 \exp{\left\{\frac{Q_l}{R_\mathrm{gas}}\left(\frac{1}{T}-\frac{1}{T_\mathrm{m}}\right)\right\}}
\end{equation}
where $\eta_0 \approx 2\times 10^{14}~\mathrm{Pa~s}$ is the viscosity at the melting point $T_\mathrm{m} \approx 270~\mathrm{K}$, $Q_l \approx 6\times 10^4~\mathrm{J~mol^{-1}}$ is the activation energy for material diffusion, and $R_\mathrm{gas} = 8.3145~\mathrm{J~mol^{-1}~K^{-1}}$ is the universal gas constant (see e.g. Reference~\citenum{ashkenazy2018dynamics}). Given that the maximum stress beneath a surface load occurs at a depth $z_\mathrm{max} \sim L/3$, with these ice properties and a geothermal gradient $dT/dz = 10~\mathrm{K~km^{-1}}$, we find $t_\mathrm{R} > 10^7~\mathrm{yr}$ (Europa's approximate present-day surface age) for $L < 20~\mathrm{km}$. In other words, topographic loads with horizontal dimension smaller than roughly 15--20 km will be supported by the rigidity of the upper part of the ice shell; broader loads will be supported by isostatic compensation at the base of the shell. Compensation at depth $d$ reduces the net contribution of a load to $(B-A)/C$ by a factor $d/R \sim 0.01$ \cite{Greenberg1984}. A similar analysis applies to the rocky interior, but we lack constraints on its rheological properties. Taking a viscosity $\eta_i \sim 10^{21}~\mathrm{Pa~s}$ (comparable to Earth's mantle), we find that loads of similar size, $L \sim 10$--$20~\mathrm{km}$, are likely to be rigidly supported by the rocky interior.
The mass of the anomaly is $\Delta m \sim L^2\Delta z \rho$, where $\Delta z$ is the topographic anomaly relative to the ellipsoidal surface. Although topography data for Europa are limited, limb profiles suggest variations of $\Delta z \sim 1~\mathrm{km}$ over horizontal baselines of $\sim 10^\circ = 270~\mathrm{km}$ \cite{Nimmo2007TheGS}, while relief across chaos and ridged plains regions was measured to be $\sim 100~\mathrm{m}$ on 10-km baselines\cite{SchenkPappalardo2004}. Therefore, for $L = 20~\mathrm{km}$, we assume $\Delta z \sim 200~\mathrm{m}$, yielding $\Delta m \sim 10^{14}~\mathrm{kg}$. If the mass anomaly is positioned at the sub-Jovian point at the equator, the effective change to the shell's mass asymmetry is $\Delta (B-A)/C \sim \Delta m / m_s \sim (10^{14}~\mathrm{kg})/(10^{22}~\mathrm{kg}) \sim 10^{-8}$. Given the required value \cite{Greenberg1984} $\kappa^* \sim 10^{-6}$, this result indicates Europa's rigidly supported topography is $\sim 100\times$ smaller than needed to achieve the critical value. Again, a similar analysis applies to the rocky interior, but the effects of topography are further reduced, due to the much larger value of the polar moment of inertia $C_i$.
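These order-of-magnitude numbers are straightforward to reproduce (a quick check, with the shell mass $m_{s} \sim 10^{22}~\mathrm{kg}$ taken as quoted above):
\begin{verbatim}
# Order-of-magnitude check of the topographic mass-anomaly estimate:
# Delta_m ~ L^2 * Delta_z * rho and Delta(B-A)/C ~ Delta_m / m_s.
L = 20e3          # load dimension, m
dz = 200.0        # assumed topographic relief, m
rho = 920.0       # ice density, kg m^-3
m_shell = 1e22    # approximate ice shell mass, kg (as quoted in the text)

dm = L**2 * dz * rho
print(f"Delta m      ~ {dm:.1e} kg")            # ~1e14 kg
print(f"Delta(B-A)/C ~ {dm / m_shell:.1e}")     # ~1e-8, well below kappa* ~ 1e-6
\end{verbatim}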
In addition to local mass anomalies, $(B - A)/C$ may be affected by global-scale asymmetries in the shell thickness, or for the interior, a frozen-in ellipsoidal bulge. Thickness variations in Europa's ice shell would affect $\kappa_s$. Ojakangas and Stevenson \cite{Ojakangas1989EuropaThermal} showed that
\begin{equation}
\Delta \left(\frac{B-A}{C}\right) \sim \left(\frac{\rho_\mathrm{m}-\rho_\mathrm{c}}{\overline{\rho}}\right)\left( \frac{\rho_\mathrm{c}}{\rho_\mathrm{m}}\right)\left(\frac{d}{R}\right)\left( \frac{4t_{22}}{R}\right),
\end{equation}
where $\rho_\mathrm{m}$ and $\rho_\mathrm{c}$ are the density of the liquid water `mantle' and icy shell, respectively, $R$ is Europa's mean radius, $d$ is the mean shell thickness, and $t_{lm}$ represents the topography contained in the $(l,m)$ spherical harmonic. Using limb profiles, Nimmo et al. \cite{Nimmo2007TheGS} found a limit on maximum global thickness variations that requires $t_{22} < 1~\mathrm{km}$, or $4t_{22}/R \sim 2\times 10^{-3}$. Therefore, with $\rho_\mathrm{m} = 1000~\mathrm{kg~m^{-3}}$, $\rho_\mathrm{c} = 920~\mathrm{kg~m^{-3}}$, $\overline{\rho} = 990~\mathrm{kg~m^{-3}}$, $d = 20~\mathrm{km}$, $R = 1560~\mathrm{km}$, we find $\Delta (B-A)/C \lesssim 10^{-6}$. Thus, shell thickness variations and associated topography may be barely too small to cause $\kappa_s$ to exceed $\kappa^*$.
The shape of the silicate interior is unknown, either in the past or present. We can make comparisons to similar-sized silicate bodies, such as Io and Earth's Moon. Topography data from the Lunar Orbiter Laser Altimeter (LOLA) (Mazarico et al., 2013) and limb profiles of Io\cite{White2014_IoLimb} indicate almost identical values of $4t_{22}/R \sim 2\times 10^{-4}$. Assuming a silicate crustal density $\rho_\mathrm{c} = 3000~\mathrm{kg~m^{-3}}$ and mantle density $\rho_\mathrm{m} = 3500~\mathrm{kg~m^{-3}}$ (similar to Io's mean density), with a 1 km-thick crustal layer, we find $\Delta (B-A)/C \sim 10^{-8}$. Thus, if the frozen-in shape of Europa's silicate core were similar to Io's or the Moon's, the resulting $\kappa_i$ would be much smaller than $\kappa^*$. However, it should be noted that if Europa's interior were volcanically active throughout its early history, much larger asymmetries are possible. For example, Mars has $4t_{22}/R \sim 10^{-2}$, due to the Tharsis rise; an asymmetry of similar relative thickness on Europa's seafloor would result in $\kappa_i > \kappa^*$.
\section{Discussion and Conclusions}
In this work, we explore the tidal locking process for Europa, differentiated into a solid interior, ocean layer, and ice shell. Classically, as a planet's rotation slows due to energy loss, the ``dynamic" tidal torque, induced by non-rigidity of the satellite, acts to produce a spin state slightly faster than the synchronous angular velocity. This effect is countered by the ``static" torques acting on permanent asymmetries in the body. Our work explores how differentiation of a moon into a gravitationally interacting solid interior and ice shell affects classical conclusions about the evolution of its rotational state. First, we note that the equations of motion previously studied\cite{VanHoolst_Europa} for rigid shell and core interacting by mutual gravitation admit an energy integral. This quantity changes slowly under the action of unmodeled perturbative torques such as the dynamic tidal torque. We find that the spin states of the shell and core are constrained by the instantaneous value of the system's energy state, with critical values of the energy demarcating qualitatively distinct behaviors. Furthermore, the dynamical evolution of the spin state with energy loss is governed by the strength of gravitational coupling between the ice shell and solid interior. Below a critical value of gravitational coupling, the ice shell can revolve independently of the solid interior, and tidal locking of the solid interior precedes an era of erratic independent shell rotation. This behavior is much more common for certain combinations of ice shell and interior moment of inertia asymmetries. Above the critical value of gravitational coupling, the ice shell and interior revolve and tidally lock in lockstep.
Numerical simulation confirms the validity of using the system energy as a lens for studying and describing overall behavior. We show that the 2D (four-state) dynamics of the model are chaotic outside of the region of librations and synchronized revolutions of the shell and interior. We provide several case studies adding the effects of orbital eccentricity and dynamic tidal torque, demonstrating examples of tidal locking for strongly and weakly gravitationally coupled cases, as well as an example showing non-synchronous rotation as a final spin state. With the same higher fidelity model, we then execute large parameter studies over a range of normalized asymmetries $\kappa_{s}/\kappa^{*}$ and $\kappa_{i}/\kappa^{*}$ for both the shell and solid interior, where the normalization parameter $\kappa^{*}$ is the moment of inertia asymmetry $(B - A)/C$ classically required for a rigid satellite to tidally lock.\cite{Goldreich_TAJ1966} Two studies are performed -- one for a gravitational coupling parameter less than its critical value, $K_{G} < K_{G}^{*}$, and one greater, $K_{G} > K_{G}^{*}$. For both studies, Europa is subject to dynamic tidal torques high enough to bring the moon to its final spin state within a reasonable span of simulation time, similar to the procedure used in Reference~\citenum{Goldreich_TAJ1966} for the 1D rigid satellite case. Using the energy parameter introduced in this paper, we efficiently differentiate tidally locked from non-synchronously rotating final spin states across the entire parameter space. Then, we compare the resulting distribution of final spin states to the results predicted by Reference~\citenum{Goldreich_TAJ1966} for a rigid satellite. We note that, particularly for the case of weak gravitational coupling, the differentiation of shell and interior plays an important role in the dynamical evolution of Europa's spin state in ways that are not captured by the rigid moon model. Finally, we discuss neglected torques that are expected to be sub-dominant, and provide commentary on the potential effects of non-rigidity of the ice shell. In the future, viscoelastic extensions of the model explored in this paper could provide additional realism and new insights.
\bibliographystyle{plain}
|
{
"arxiv_id": "2302.13164",
"language": "en",
"timestamp": "2023-02-28T02:11:37",
"url": "https://arxiv.org/abs/2302.13164",
"yymm": "2302"
} | \section{History}\label{hist}
Sensations of warm and cold connect us with the scale of atoms and molecules. Our everyday activities, taking place on the macroscopic scale, are directly and indirectly influenced by activities taking place on the microscopic scale. Attempts to understand heat and to use it as a source of energy on the macroscopic scale gave birth to thermodynamics. From thermodynamics then unfolds the multiscale theory, a general theory of relations among scales. In this Note we begin by listing, in chronological order, results that led to its formulation. In Section \ref{present} we formulate the multiscale theory and in Section \ref{future} we invite the readers to participate in its further development. We focus in this Note on the physical aspects. Mathematical aspects as well as specific illustrations are discussed in detail in the cited literature.
In the historical introduction we follow only one line, namely the line leading to the multiscale theory in which dynamical theories on a larger scale are seen as patterns in the phase portraits corresponding to the dynamical theories on the smaller scale.
The time evolution of particles composing macroscopic systems under investigation is governed by classical mechanics. The collection of all their trajectories is called a microscopic phase portrait. Macroscopic observers see in it only an overall pattern representing results of observations on the larger scale. Unimportant microscopic details are ignored. In this Note we call a description involving \textit{more details an upper description} and one involving \textit{fewer details a lower description}.
We emphasize that the passage to a larger scale (in other words, reduction to a lower theory) is not only a loss (loss of details) but also a gain of new emerging overall features that are not seen in the upper theory. What is the pattern in the phase portrait and how is it recognized?
We begin to answer this question with an allegorical image in which the microscopic phase portrait is seen as an abstract painting. With our innate ability we recognize in it a pattern that is then an allegorical image of the phase portrait of the lower theory. The process of pattern recognition can be seen as
chipping off unimportant details (sweeping away the sand in an archeological excavation) and revealing in this way important overall features.
We now present, in chronological order, a sequence of results that provides more specific guidance on this path to lower theories.
\\
\textit{\textbf{Ludwig Boltzmann, kinetic theory}}
\\
The first attempt to formulate mathematically the emergence of an autonomous macroscopic theory in a microscopic phase portrait was made by Ludwig Boltzmann \cite{Boltzmann}. The macroscopic theory was classical equilibrium thermodynamics and the microscopic theory was classical mechanics of particles composing a dilute ideal gas. Boltzmann begins the pattern recognition process with the following two steps. First, he sees the microscopic phase portrait in a setting in which the one-particle distribution function $f(\rr,\vv)$ plays the role of the state variable ($\rr$ is the position vector and $\vv$ the momentum of one particle). Second, he realizes that particle trajectories are composed of straight lines (free flow) followed by very large but very localized changes due to binary collisions.
Details of the changes can be ignored in the overall view. The process of chipping them off is achieved by considering the binary collisions as
"chemical reactions" in which a pair of two species labeled by two incoming momenta transforms into another pair of two species labeled by other two outgoing momenta. The mechanical origin of the trajectories is retained only in the conservation of the total momentum and the total kinetic energy before and after collisions. The Boltzmann vector field generating the time evolution of $f(\rr,\vv)$ is a sum of a term generating the free flow and a term, called a Boltzmann collision term, coming from the Guldberg-Waage (mass action law) chemical kinetics. By analysing solutions to the Boltzmann equation we indeed find that, by following the time evolution, the pattern (composed of asymptotically reached fixed points) expressing equilibrium thermodynamics is reached. The process of chipping off the unimportant details is achieved by increasing a concave potential, called an entropy (H function function by Boltzmann) that becomes at the final destination the entropy of an ideal gas. Boltzmann has demonstrated for the first time that the new concept of entropy, that emerged previously in the investigation of the transformation of heat to a macroscopic mechanical energy, actually arises in the mathematical formulation of the observed approach to equilibrium states at which their behavior can be well described by an autonomous theory called equilibrium thermodynamics. In other words, experimental observations show that externally unforced an internally unconstrained macroscopic systems can be prepared for using equilibrium thermodynamics to describe and predict their behavior. The entropy
is the potential driving the preparation process. The existence of equilibrium states constitutes a basis, called \textit{0-th law of thermodynamics} \cite{Callen}, of equilibrium thermodynamics.
If we focus our interest only on the equilibrium thermodynamics that is the final outcome of the preparation process, then we can replace the Boltzmann equation with the maximum entropy principle (MaxEnt principle). In this principle the entropy (a real-valued concave function of $f(\rr,\vv)$), its tendency to increase, as well as the constraints (the energy and the number of moles) in the maximization are postulated.
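To make the MaxEnt principle concrete, a minimal numerical sketch follows; it is an illustration only and not part of the theory above: the one-dimensional velocity grid, the prescribed target values, and the use of the SciPy optimizer are assumptions made solely for this example. The entropy $-\int f\ln f$ is maximized subject to prescribed normalization and mean energy; the maximizer is, up to discretization error, a Maxwellian.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Discretized one-dimensional velocity grid (illustrative; m = k_B = 1).
v = np.linspace(-5.0, 5.0, 81)
dv = v[1] - v[0]
energy = 0.5 * v**2                 # kinetic energy per particle

E_target = 0.5                      # prescribed mean energy (constraint)
N_target = 1.0                      # prescribed normalization (constraint)

def neg_entropy(f):                 # -S(f) = int f ln f  (k_B = 1)
    f = np.clip(f, 1e-12, None)
    return np.sum(f * np.log(f)) * dv

constraints = (
    {"type": "eq", "fun": lambda f: np.sum(f) * dv - N_target},
    {"type": "eq", "fun": lambda f: np.sum(f * energy) * dv - E_target},
)

f0 = np.full_like(v, N_target / (v[-1] - v[0]))     # uniform initial guess
res = minimize(neg_entropy, f0, method="SLSQP",
               bounds=[(1e-12, None)] * v.size, constraints=constraints)

f_eq = res.x   # numerically close to a Maxwellian ~ exp(-v^2 / (2 T))
\end{verbatim}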
\\
\textit{\textbf{Willard Gibbs, equilibrium statistical mechanics}}
\\
In Gibbs' analysis \cite{Gibbs} the microscopic phase portrait as well as the macroscopic theory searched in it are the same as in Boltzmann's analysis. The difference is in the enlargement of the class of systems under investigation (not only ideal gases but all physical systems are considered) and in the limitation to a static analysis (i.e. to postulating a single entropy that is assumed to be universally applicable to all microscopic physical systems and to postulating the MaxEnt principle). The time evolution involved in the preparation process (which in Boltzmann's analysis is described by the Boltzmann equation) is in the Gibbs analysis replaced by requiring only the conservation of the energy and of the number of particles. The pattern in the microscopic phase portrait that corresponds to the Gibbs analysis is assumed to be a pattern with maximal disorder. The universal Gibbs entropy is interpreted as a measure of disorder.
The outcome of Gibbs' analysis is a mapping from microscopic Hamiltonians (the Hamiltonian is one of the constraints in the entropy maximization) to fundamental thermodynamic relations. The individual nature of macroscopic systems is expressed in the microscopic theory in the Hamiltonians and in equilibrium thermodynamics in the fundamental thermodynamic relations.
\\
\textit{\textbf{Ilya Prigogine, nonequilibrium thermodynamics}}
\\
How can Boltzmann's analysis be adapted to fluids? The upper theory has to be replaced by classical hydrodynamics; the lower theory remains the classical equilibrium thermodynamics. In classical hydrodynamics the state variables are the same as in classical thermodynamics except that they are fields (i.e. functions of the position vector $\rr$) and in addition there is another field representing the local momentum. It has been shown \cite{Prigogine}, \cite{dGM} that the entropy that drives the classical hydrodynamic time evolution to the classical equilibrium thermodynamics is the entropy $S=\int d\rr s(\epsilon(\rr),n(\rr))$, where $\epsilon(\rr)$ is the local internal energy (i.e. the total local energy minus the local kinetic energy), $n(\rr)$ is the local number of moles, and the dependence of the local entropy $s(\rr)$ on $\epsilon(\rr)$ and $n(\rr)$ is the same as the dependence of the equilibrium entropy on the equilibrium energy and the equilibrium number of moles (assumption of local equilibrium). At the conclusion of the preparation process the local equilibrium entropy $s(\rr)$ (that drives the process) as well as $\epsilon(\rr)$ and $n(\rr)$ become the equilibrium entropy, the equilibrium energy, and the equilibrium number of moles.
A very important additional result has arisen in investigations of the passage from hydrodynamics to the equilibrium thermodynamics. The physical regularity of hydrodynamic equations (i.e. their compatibility with the 0-th law of thermodynamics) has been proven to imply a mathematical regularity (Dirichlet problem is well posed) \cite{Godunov1}, \cite{Godunov2}.
\\
\textit{\textbf{Alfred Clebsch, Vladimir Arnold, complex fluids}}
\\
Is it possible to see the Boltzmann and the Prigogine formulations of the 0-th law of thermodynamics as two particular realizations of a single abstract formulation that could also be used for other mesoscopic dynamic theories? The answer to this question is affirmative.
The right hand side of the Boltzmann equation is a sum of a term expressing the free flow and the Boltzmann collision term. The right hand side of the hydrodynamic equations
is a sum of the Euler term expressing the flow of a continuum and the Navier-Stokes-Fourier term expressing friction. Clebsch \cite{Clebsch} put the Euler term into Hamiltonian form. In Arnold's formulation \cite{Arnold} both the Boltzmann free-flow term in the kinetic equation and the Euler term in the hydrodynamic equations appear as particular realizations of an abstract noncanonical Hamiltonian dynamics. As for the second term on the right hand side, the concept of the dissipation potential (see (\ref{Xi}) below) allows one to formulate the Boltzmann collision term and the Navier-Stokes-Fourier term as two particular realizations of a single abstract formulation. In other words, the dissipations arising in chemical kinetics, in binary collisions, and in fluid flows are different, but all of them can be expressed mathematically as particular realizations of a single concept of dissipation potential (see more in Section \ref{present}).
The unification of formulations of the 0-th law of thermodynamics was also motivated by the emergence of plastic materials and the associated interest in the hydrodynamics of complex fluids. The large polymer macromolecules inside polymeric fluids change their conformations on the same time scale as the hydrodynamic fields evolve.
The necessity to include an internal structure into hydrodynamics makes the standard framework of hydrodynamics (consisting of local conservation laws, also called balance laws) unusable. The extra fields characterizing the internal structure are not, at least in general, conserved. A new framework for expressing the physics involved in flows of complex fluids (in rheology) is needed. It has been suggested \cite{grmContM}, \cite{36} that the abstract Boltzmann equation can provide such a framework.
The state variable $f(\rr,\vv)$ in the Boltzmann equation is replaced in the abstract Boltzmann equation by an unspecified state variable. In the case of complex fluids, the state variable consists of hydrodynamic fields
supplemented with other fields or distribution functions expressing the internal structure. The right hand side of the abstract Boltzmann equation is a sum of the Hamiltonian term and the term generated by a dissipation potential. Such an abstract Boltzmann equation was then called in \cite{40}, \cite{41} the Generic equation (an acronym for General Equation for Non-Equilibrium Reversible-Irreversible Coupling). The first IWNET meeting (International Workshop on Non-Equilibrium Thermodynamics), held in Montreal in the summer of 1996, assembled researchers who began to use the Generic framework in formulations of rheological models.
Many examples of rheological applications of the Generic structure can be found in \cite{39} - \cite{book}.
An equation similar to the Generic equation (called a metriplectic equation) has also been introduced in \cite{lg2} - \cite{lg3} in the context of problems arising in plasma physics. The Generic equation with a particular choice of the dissipation potential (quadratic dissipation potential) becomes the metriplectic equation.
\\
\textit{\textbf{Sydney Chapman, David Enskog, Harold Grad, Lars Onsager, rate time evolution}}
\\
So far, we have looked in the upper phase portrait only for a pattern representing a lower theory on which no time evolution takes place (the classical equilibrium thermodynamics). Now, we turn our attention to the passage from an upper theory involving the time evolution to a lower theory that takes into account fewer details but still involves the time evolution. For example, we can think of the passage from the Boltzmann kinetic theory to hydrodynamics. This enlargement of view makes it also possible to include into our investigations physical systems subjected to external forces (open systems). Indeed, while only externally unforced and internally unconstrained macroscopic systems approach equilibrium states at which the equilibrium thermodynamics can be applied, the observed behavior of externally driven systems can often be faithfully described by mesoscopic theories. For instance, the time evolution of the Rayleigh-B\'{e}nard system (a horizontal thin layer of fluid heated from below) has been found to be well described by hydrodynamic equations. This means that any description involving more details (e.g. the completely microscopic description in which the Rayleigh-B\'{e}nard system is seen as composed of $10^{23}$ particles) has to show approach to the hydrodynamic description.
The archetype example of the passage from an upper mesoscopic theory to a lower mesoscopic theory is the passage from the Boltzmann kinetic theory to hydrodynamics. Two essentially different approaches to this reduction have been introduced, one by Chapman and Enskog \cite{ChapEnsk} (we shall call it the Chapman-Enskog method) and the other by Grad \cite{Grad} (we shall call it Grad's method). Our strategy in the investigation of
the passage to equilibrium thermodynamics was to extract from the example of the Boltzmann theory an abstract structure that can be then carried to a larger class of macroscopic systems. We shall follow the same strategy also in the case of the Chapman-Enskog and Grad's methods.
The reduction upper theory $\rightarrow$ lower theory is seen in the Chapman-Enskog method as a split of the time evolution describing the passage from the upper theory to equilibrium thermodynamics into a "fast" time evolution describing the passage from the upper theory to the lower theory and a "slow" time evolution describing the passage from the lower theory to the equilibrium thermodynamics. The problem is to identify a manifold $\mathcal{M}$ inside the state space $M^{(up)}$ of the upper theory which is invariant (or almost invariant) in the time evolution governing the passage from the upper theory to the equilibrium thermodynamics. The time evolution in the reduced lower theory is then governed by the time evolution in $M^{(up)}$ restricted to $\mathcal{M}$. In the original investigation of Chapman and Enskog the manifold $\mathcal{M}$ is sought by, first, identifying its initial approximation (the manifold composed of local Maxwellian distribution functions) and, second, deforming it in order to improve its invariance. Among many investigations that developed and extended this approach to reduction we mention \cite{GorbK} - \cite{Turkington}.
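For orientation we recall the explicit form of this initial approximation; the notation $n(\rr)$, $\uu(\rr)$, $T(\rr)$ for the local number density, the local mean momentum of one particle, and the local temperature is introduced only for this formula ($\vv$ is, as above, the momentum of one particle):
\begin{equation}
f^{(0)}(\rr,\vv)=\frac{n(\rr)}{\left(2\pi m k_B T(\rr)\right)^{3/2}}\exp\left(-\frac{\left(\vv-\uu(\rr)\right)^2}{2mk_BT(\rr)}\right)
\end{equation}
In this zeroth approximation the manifold $\mathcal{M}$ is thus parametrized by the five local hydrodynamic fields $n$, $\uu$, $T$.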
Taking Grad's viewpoint of the reduction of an upper to a lower theory, we
ask the following question. Is there a way to see the passage to lower theories with the time evolution in the same way as the passage to equilibrium thermodynamics (i.e. to lower theories without the time evolution)? In other words, can the pattern that we search be composed of fixed points instead of invariant manifolds? The key to answering this question is the realization that if we lift the upper time evolution from the upper state space to the time evolution in the space of vector fields on the upper state space then indeed the asymptotically approached fixed point will be the vector field generating the lower time evolution. In our terminology and notation, we shall use the adjective "rate" when addressing the concepts and quantities after making the lift. We thus talk about rate state space (the space of vector fields on the state space), rate time evolution (the upper time evolution lifted to the rate state space), rate phase portrait (collection of all trajectories in the rate phase space), etc.
In Grad's method, the attention is put in the first step on the process in which the macroscopic systems under investigation are prepared for investigating them in the lower theory. The reduced time evolution in the lower theory arises then in the second step as a solution to the equation describing the process. Similarly as in the process of preparation for equilibrium thermodynamics, solutions to equations describing it can also be found by extremizing a potential (rate entropy). Its value in the extremum is then the fundamental rate thermodynamic relation in the lower theory that is inherited from the preparation process.
The static version of the passage to a lower mesoscopic theory turns out to be the Onsager variational principle \cite{Ray}, \cite{OnP}, \cite{OM}, which originally arose from attempts to express the qualitative observation that macroscopic systems subjected to external forces reach states at which their resistance to the forces is minimal.
As the process of preparation for equilibrium thermodynamics (the 0-th law of thermodynamics) provides a dynamical foundation of the MaxEnt principle and of the equilibrium fundamental thermodynamic relation, the process of preparation for the lower theory (the 0-th law of rate thermodynamics) provides a dynamical foundation for the Onsager variational principle and for the fundamental thermodynamic relation in the lower theory. Since Grad's method does not have to be put into the context of the passage to equilibrium thermodynamics, and is thus readily applicable to externally driven systems, the extension of thermodynamics to rate thermodynamics that it offers is an extension of the classical equilibrium thermodynamics to a thermodynamics of systems out of equilibrium that are driven by external forces.
\section{Multiscale Graph}\label{present}
Autonomous mesoscopic theories are represented as vertices $\mathbb{V}$ in an oriented graph.
The vertex $\mathbb{V}^{(bottom)}$ represents equilibrium thermodynamics. No time evolution takes place in $\mathbb{V}^{(bottom)}$. The vertex $\mathbb{V}^{(top)}$ represents the most microscopic theory in which macroscopic systems are seen as composed of $n\sim 10^{23}$ particles. The time evolution in $\mathbb{V}^{(top)}$ is governed by classical mechanics. The vertex $\mathbb{V}^{(up)}$ represents an autonomous mesoscopic theory involving more details than another autonomous mesoscopic theory represented by the vertex $\mathbb{V}^{(down)}$. The time evolution takes place in both $\mathbb{V}^{(up)}$ and $\mathbb{V}^{(down)}$. Reduction of $\mathbb{V}^{(up)}$ to $\mathbb{V}^{(down)}$ is represented as an oriented link $\mathbb{V}^{(up)} \longrightarrow \mathbb{V}^{(down)}$.
In this section we present an outline of the mathematical formulation of the link $\mathbb{V}^{(up)} \longrightarrow \mathbb{V}^{(down)}$.
We concentrate on the physical aspects. The mathematical aspects are discussed in more detail in the cited references.
The state variable in the vertex $\mathbb{V}^{(up)}$ is denoted by $x\in M^{(up)}$, where $M^{(up)}$ denotes the state space used in the vertex $\mathbb{V}^{(up)}$.
\\
\textit{\textbf{Pre-Generic equation }}
\\
Similarly as hydrodynamic equations unfold from the local conservation laws (balance laws) by specifying constitutive relations (expressing fluxes in terms of hydrodynamic fields), the equations governing the time evolution of $x\in M^{(up)}$ that describes the link $\mathbb{V}^{(up)} \longrightarrow \mathbb{V}^{(down)}$, unfold from the pre-Generic equation
\begin{equation}\label{generic}
\dot{x}=LE_x(x)+\Xi_{x^*}(x,X(x^*))
\end{equation}
by specifying the state variable $x$ (giving it a physical interpretation), and by specifying constitutive relations for the quantities
\begin{equation}\label{cr}
(x^*,X,E, L,\Xi)
\end{equation}
entering (\ref{generic}). The quantity $x^*$ is a conjugate state variable. The constitutive relation relates it to the state variable $x\in M^{(up)}$. The quantity $E:M^{(up)}\rightarrow \mathbb{R}$ is the energy.
By $E_x$ we denote $\frac{\partial E}{\partial x}$. If $x$ is an element of an infinite dimensional space then the partial derivative is meant to be an appropriate functional derivative.
The first term on the right hand side of (\ref{generic}) represents the Hamiltonian dynamics. From the physical point of view it is a remnant of $\left(\begin{array}{cc}0&1\\-1&0\end{array}\right)\left(\begin{array}{c}E^{(part)}_{\rr}\\E^{(part)}_{\vv}\end{array}\right)$ that drives the time evolution of particles composing the macroscopic systems; $E^{(part)}$ is the particle energy, $(\rr,\vv)$ are position coordinates and momenta of the particles.
In Eq.(\ref{generic}) the Poisson bivector $\left(\begin{array}{cc}0&1\\-1&0\end{array}\right)$ is replaced with a general Poisson bivector $L(x)$. We recall that $L(x)$ is a Poisson bivector if $<A_x,L(x)B_x>= \{A,B\}$ is a Poisson bracket, $A$ and $B$ are real valued functions of $x$ and $<,>$ is the pairing. The bracket $\{A,B\}$ is a Poisson bracket if $\{A,B\}=-\{B,A\}$ and the Jacobi identity $\{A,\{B,C\}\}+
\{B,\{C,A\}\}+\{C,\{A,B\}\}=0$ holds. The particle energy $E^{(part)}$ is replaced in (\ref{generic}) with the $\mathbb{V}^{(up)}$-energy $E(x)$.
The potential $\Xi:M^{(up)}\times \mathbb{M}^{(up)}\rightarrow \mathbb{R}$ appearing in the second term on the right hand side of (\ref{generic}) is called a dissipation potential; $x\in M^{(up)}$ and $X\in \mathbb{M}^{(up)}$.
The dissipation potential $\Xi(x,X)$ is required to satisfy the following properties:
\begin{eqnarray}\label{Xi}
&&\Xi(x,0)=0\,\,\,\forall x\nonumber \\
&&\Xi\,\,\mbox{reaches its minimum at}\,\,X=0\,\,\,\forall x\nonumber \\
&&\Xi(x,X)\,\,\mbox{is a convex function of}\,\,X\,\,\mbox{in a neighborhood of}\,\,X=0\,\,\,\forall x
\end{eqnarray}
Consequently, in a small neighborhood of $X=0$, the dissipation potential
becomes the quadratic potential $\Xi(x,X)=\frac{1}{2}<X,\Lambda X>$, where $\Lambda(x)$ is a symmetric and positive definite operator. With such quadratic dissipation potential, Eq.(\ref{generic}) becomes $\dot{x}=LE_x+\left(X_{x^*}\right)^T\Lambda X$.
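The origin of the quadratic form can be indicated in one line (a sketch only): since $X=0$ is a minimum of $\Xi$ and $\Xi(x,0)=0$, we have $\Xi_X(x,0)=0$, and the Taylor expansion about $X=0$ gives
\begin{equation}
\Xi(x,X)=\frac{1}{2}<X,\Xi_{XX}(x,0)X>+O(X^3)\equiv\frac{1}{2}<X,\Lambda(x) X>+O(X^3)
\end{equation}
with $\Lambda(x)=\Xi_{XX}(x,0)$ symmetric and, by the convexity required in (\ref{Xi}), positive (semi)definite.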
The quantities $X\in \mathbb{M}^{(up)}$, called thermodynamic forces, are functions of $x^*\in M^{*(up)}$ that are conjugate state variables. If $M^{(up)}$ is a linear space then $M^{*(up)}$ is its conjugate.
Summing up, the requirements placed so far on the constitutive relations specifying (\ref{cr}) are: $L(x)$ is a Poisson bivector and $\Xi(x,X)$ is a dissipation potential satisfying (\ref{Xi}). The quantity $x^*$ is so far an unspecified function of $x$, $X$ is an unspecified function of $x^*$, and $E(x)$ is an unspecified function of $x$ having the physical interpretation of energy.
In the rest of this section we present the constitutive relations (specifications of (\ref{cr})) that make (\ref{generic}) the Generic equation describing the links $\mathbb{V}^{(up)} \longrightarrow \mathbb{V}^{(bottom)}$. In particular, we show the constitutive relations making (\ref{generic}) the Boltzmann kinetic equation. Subsequently, we lift (\ref{generic}) into an equation governing the time evolution of the thermodynamic forces $X$. The lifted equation (we call it a rate Generic equation) governs the time evolution describing the link $\mathbb{V}^{(up)} \longrightarrow \mathbb{V}^{(down)}$. Specifically, we formulate the rate Boltzmann equation.
\\
\textit{\textbf{Generic equation describing $\mathbb{V}^{(up)} \longrightarrow \mathbb{V}^{(bottom)}$}}
\\
The state space $M^{(bottom)}$ used in the equilibrium thermodynamics $\mathbb{V}^{(bottom)}$ is a two dimensional space with elements $(\mathbf{E},\mathbf{N})$ denoting the equilibrium energy and the equilibrium number of moles per unit volume. The individual nature of macroscopic systems is expressed in equilibrium thermodynamics in the fundamental thermodynamic relation $\mathbf{S}=\mathbf{S}(\mathbf{E},\mathbf{N})$, where $\mathbf{S}$ is the equilibrium entropy per unit volume.
The constitutive relations for which (\ref{generic}) becomes an equation describing the process of preparation for $\mathbb{V}^{(bottom)}$ begin with
\begin{equation}\label{upinput}
E= E(x);\,\,\,\,N= N(x);\,\,\,\,S= S(x)
\end{equation}
where $S:M^{(up)}\rightarrow \mathbb{R}$ is the $\mathbb{V}^{(up)}$-entropy. The function $S:M^{(up)}\rightarrow \mathbb{R}$ is assumed to be sufficiently regular and concave. Contrary to the energy $E(x)$ and the number of moles $N(x)$, the entropy is a potential that does not exist in mechanics. It does arise in mechanics (upper mechanics) only in the process of attempting to extract overall features (patterns in the upper phase portrait). Allegorically speaking, we can say that the entropy plays the role of an ambassador of the reduced lower dynamics emerging in the upper dynamics as an overall pattern in its phase portrait.
We combine the potentials (\ref{upinput}) into a new potential
\begin{equation}\label{Phi}
\Phi(x,\mathbf{E}^*,\mathbf{N}^*)=-S(x)+\mathbf{E}^*E(x)+\mathbf{N}^*N(x)
\end{equation}
called $\mathbb{V}^{(up)}$-thermodynamic potential. The role that the quantities $\mathbf{E}^*\in\mathbb{R}$, $\mathbf{N}^*\in\mathbb{R}$ play in $\mathbb{V}^{(up)}\longrightarrow \mathbb{V}^{(down)}$ will be clarified later (see the text after Eq.(\ref{threl})).
Constitutive relations continue with the specification of the conjugate state variable
\begin{equation}\label{xstar}
x^*=S_x
\end{equation}
and with the thermodynamic force
\begin{equation}\label{genericc}
X(x,x^*)=\mathbb{K}(x)x^*
\end{equation}
where $\mathbb{K}$ is a linear operator.
The two geometrical structures transforming gradients $E_x(x)$ and $S_x(x)$ of the potentials $E(x)$ and $S(x)$ (i.e. covectors) into vectors are: the Poisson bivector $L(x)$ and the dissipation potential $\Xi(x,X(x^*))$. These two structures are required to be appropriately degenerate in order that (\ref{generic}) and (\ref{genericc}) imply
\begin{equation}\label{sol1}
\dot{N}=0;\,\,\,\,\dot{E}=0;\,\,\,\,\dot{S}=<x^*,\Xi_{x^*}>|_{x^*=S_x}\geq 0
\end{equation}
This means that in addition to requiring that $L(x)$ is a Poisson bivector we also require that $L(x)$ is a degenerate Poisson bivector in the sense that the entropy $S(x)$ and the number of moles $N(x)$ are its Casimirs (i.e. $\{A,S\}=0$ and $\{A,N\}=0$ for all real valued functions $A(x)$). Moreover, we require that $\Xi(x,X)$ is a dissipation potential satisfying (\ref{Xi}) and in addition $<x^*,\Xi_{x^*}>|_{x^*=E_x}=0$ and $<x^*,\Xi_{x^*}>|_{x^*=N_x}=0$.
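To indicate how the entropy inequality in (\ref{sol1}) follows from these requirements (a sketch; energy and mole-number conservation are obtained analogously from the degeneracies): with $x^*=S_x$ (see (\ref{xstar})) and $X=\mathbb{K}x^*$,
\begin{equation}
\dot{S}=<S_x,\dot{x}>=\{S,E\}+<x^*,\Xi_{x^*}>|_{x^*=S_x}=<x^*,\Xi_{x^*}>|_{x^*=S_x}
\end{equation}
because $S$ is a Casimir of $L$ (so that $\{S,E\}=0$). The non-negativity follows from the convexity of $\Xi$ in $X$ together with $\Xi(x,0)=0$: $<x^*,\Xi_{x^*}>=<X,\Xi_X>\geq \Xi(x,X)-\Xi(x,0)\geq 0$, at least for forces in the neighborhood of $X=0$ in which (\ref{Xi}) guarantees convexity.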
The thermodynamic potential (\ref{Phi}) plays the role of the Lyapunov function for the approach, as $t\rightarrow\infty$, of solutions to (\ref{generic}) to the equilibrium states $\widehat{x}(\mathbf{E}^*,\mathbf{N}^*)$ that are solutions to
\begin{equation}\label{Phi0}
\Phi_x(x,\mathbf{E}^*,\mathbf{N}^*)=0
\end{equation}
Indeed, $\Phi$ is a convex function of $x$ and $\dot{\Phi}\leq 0$.
We refer to \cite{book} for details of the proof. The quantities $\mathbf{E}^*$ and $\mathbf{N}^*$ play the role of the Lagrange multipliers in the maximization of the entropy $S(x)$ subjected to constraints $E(x)$ and $N(x)$. In the standard thermodynamic notation $\mathbf{E}^*=\frac{1}{T}; \mathbf{N}^*=-\frac{\mu}{T}$, where $T$ is the absolute temperature and $\mu$ the chemical potential.
We make two additional observations about solutions to the Generic equation. First, we note that if we are interested only in the final destination of the preparation process (i.e. only in the equilibrium states $\widehat{x}(\mathbf{E}^*,\mathbf{N}^*)$) then we can completely avoid the Generic equation and only use the thermodynamic potential (\ref{Phi}) and solve (\ref{Phi0}). This static method of making the reduction is called the Maximum Entropy principle (MaxEnt principle).
The second observation is about the inheritance that remains in $\mathbb{V}^{(bottom)}$ from the preparation process. At the final destination the thermodynamic potential
\begin{equation}\label{threl}
\mathbf{S}^*(\mathbf{E}^*,\mathbf{N}^*)=\Phi(\widehat{x}(\mathbf{E}^*,\mathbf{N}^*),\mathbf{E}^*,\mathbf{N}^*)
\end{equation}
becomes the Legendre transformation of the fundamental thermodynamic relation $\mathbf{S}=\mathbf{S}(\mathbf{E},\mathbf{N})$ implied in $\mathbb{V}^{(bottom)}$ by the passage from $\mathbb{V}^{(up)}$. If, in addition, we extend $\mathbf{S}=\mathbf{S}(\mathbf{E},\mathbf{N})$ into $\mathbf{S}=\mathbf{S}(\mathbf{E},\mathbf{N},\mathbf{V})$, where $\mathbf{V}$ is the volume and require that all three quantities $\mathbf{S},\mathbf{E},\mathbf{N}$ are extensive (i.e. they become $\lambda \mathbf{S}, \lambda \mathbf{E}, \lambda \mathbf{N}$ if $\mathbf{V}$ changes to $\lambda \mathbf{V}$) then (if we use the standard thermodynamic notation) $S^*(\frac{1}{T},-\frac{\mu}{T})=-S_{\mathbf{V}}(\mathbf{E},\mathbf{N},\mathbf{V})=-\frac{P}{T}$, where $P$ is the pressure.
Finally, we present the specific constitutive relations for which (\ref{generic}) becomes the Boltzmann kinetic equation. The state variable $x$ is the one particle distribution function $f(\rr,\vv)$. Its kinematics is expressed in the Poisson bracket $\{A,B\}= \int d\rr\int d\vv f\left(\frac{\partial A_f}{\partial \rr}\frac{\partial B_f}{\partial \vv}-\frac{\partial B_f}{\partial \rr}\frac{\partial A_f}{\partial \vv}\right)$, where $A(f), B(f)$ are real valued sufficiently regular functions of $f(\rr,\vv)$.
The energy is $E(f)=\int d\rr\int d\vv f \frac{\vv^2}{2m}$ and the number of moles is $N(f)=\int d\rr\int d\vv f$; by $m$ we denote the mass of one particle. The entropy $S(f)$ is the Boltzmann entropy $S(f)=-k_B\int d\rr \int d\vv f \ln f$, where $k_B$ is the Boltzmann constant. The thermodynamic force is $X(f^*,\vv,\vv_1,\vv',\vv'_1)= \mathbb{K}f^*=f^*(\rr,\vv')+f^*(\rr,\vv'_1)-f^*(\rr,\vv)-f^*(\rr,\vv_1)$. The dissipation potential is
\begin{equation}\label{XiB}
\Xi(f,X)=\int d\rr\int d\vv\int d\vv_1\int d\vv'\int d\vv'_1 W(f,\vv,\vv_1,\vv',\vv'_1)\left(e^X+e^{-X}-2\right)
\end{equation}
where $W\geq 0$, and $W\neq 0$ only if $\vv+\vv_1=\vv'+\vv'_1$ and $\vv^2+(\vv_1)^2=(\vv')^2+(\vv'_1)^2$. Moreover, $W$ is symmetric with respect to the interchange of $\vv\rightarrow \vv_1$ and $\vv'\rightarrow \vv'_1$ and with respect to the interchange of $(\vv,\vv_1)$ and $(\vv',\vv'_1)$. The fundamental thermodynamic relation (\ref{threl}) implied by the above constitutive relations is indeed the fundamental thermodynamic relation representing an ideal gas in $\mathbb{V}^{(bottom)}$.
Details of calculation can be found in \cite{book}.
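To indicate only the first step of that calculation (in the notation introduced above): the MaxEnt condition (\ref{Phi0}) reads
\begin{equation}
\Phi_f=k_B\left(\ln f+1\right)+\mathbf{E}^*\frac{\vv^2}{2m}+\mathbf{N}^*=0,
\qquad\mbox{i.e.}\qquad
\widehat{f}(\rr,\vv)=\exp\left(-1-\frac{\mathbf{N}^*}{k_B}-\frac{\mathbf{E}^*}{k_B}\,\frac{\vv^2}{2m}\right)
\end{equation}
which, with $\mathbf{E}^*=\frac{1}{T}$, is the Maxwell distribution; inserting $\widehat{f}$ into (\ref{threl}) then yields the ideal-gas fundamental thermodynamic relation.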
\\
\textit{\textbf{Rate Generic equation describing $\mathbb{V}^{(up)} \longrightarrow \mathbb{V}^{(down)}$}}
\\
The vector field generating the time evolution (\ref{generic}) involves thermodynamic forces $X$. In order that Eq.(\ref{generic}) be an autonomous time evolution equation the forces $X$ have to be expressed in terms of $x$. In other words, Eq.(\ref{generic}) has to be closed. We look for an equation that governs the time evolution of $X$. Its solution will be the closure. From the physical point of view, making the lift means turning to the stage of the time evolution that precedes the time evolution of $x$ governed by the closed Eq.(\ref{generic}).
The equation governing the time evolution of $X$
\begin{equation}\label{rgeneric}
\dot{X}=-\mathbb{G}\Psi_X(X,J)
\end{equation}
arises in lifts of the pre-Generic equation (\ref{generic}) to higher dimensional spaces in which $x^*$ and $X$ are considered to be independent state variables. Geometrical as well as the physical aspects of the lifting process are discussed in \cite{ogul1}, \cite{ogul2}. Here we only briefly indicate the passage from (\ref{generic}) to (\ref{rgeneric}). For the sake of simplicity, we take $X=x^*$. Equation (\ref{generic}) can be cast into the form $\dot{x}=\left(<x^*,LE_x> + \Xi(x,x^*)\right)_{x^*}$. By replacing $\dot{x}$ on the left hand side of this equation with $\mathbb{G}^{-1}\dot{x}^*$, we arrive at (\ref{rgeneric}) with $J=LE_x$. The resulting equation (\ref{rgeneric}) is then considered to be the equation governing the time evolution of $x^*$. The quantity $x$ in it is considered to be a parameter. We note that if $x^*=S_x$, i.e. if $x^*$ is the conjugate state variable, then $\mathbb{G}=-(S_{xx})^{-1}$, where $S_{xx}$ is the Hessian of the entropy $S(x)$, and the rate Generic equation is an equivalent reformulation of (\ref{generic}).
Since $X$ is a force entering the vector field generating the time evolution of $x$, we call (\ref{rgeneric}) a rate Generic equation.
The rate thermodynamic potential $\Psi(X,J)$ appearing in (\ref{rgeneric}) is given by
\begin{equation}\label{Psi}
\Psi(X,J)=-\mathfrak{S}(X)+<X,J>
\end{equation}
where the rate entropy is $\mathfrak{S}(x,X)=\Xi(x,X)$. The fluxes $J$ play the role of Lagrange multipliers (the same role as $\mathbf{E}^*$ and $\mathbf{N}^*$ play in the thermodynamic potential (\ref{Phi})). The quantities used as state variables in $\mathbb{V}^{(down)}$ enter in $J$.
The operator $\mathbb{G}$ is a positive definite operator.
In the particular case of the Boltzmann equation, $X:\mathbb{R}^{12}\rightarrow \mathbb{R}; (\vv,\vv_1,\vv',\vv'_1)\mapsto X$, the dissipation potential $\Xi$ is given in (\ref{XiB}), and $\mathbb{G}=\mathbb{K}G\mathbb{K}^T$, where $-G$ is the inverse of the Hessian of the Boltzmann entropy.
The term $<X,J> $ is the power associated with the collisions.
The rate thermodynamic potential $\Psi$ plays in the rate Generic equation (\ref{rgeneric}) the same role as the thermodynamic potential $\Phi$ plays in the Generic equation (\ref{generic}). First, $\Psi$ plays
the role of the Lyapunov function for the approach of $X$ to the rate equilibrium states $\widehat{X}(J)$ that are solutions to
\begin{equation}\label{Psieq}
\Psi_X(X,J)=0
\end{equation}
This means that if our interest is limited to the vertex $\mathbb{V}^{(down)}$ obtained in the passage $\mathbb{V}^{(up)}\longrightarrow \mathbb{V}^{(down)}$ then it is enough to know the rate thermodynamic potential $\Psi$ and to solve (\ref{Psieq}). This static version of the link $\mathbb{V}^{(up)}\longrightarrow \mathbb{V}^{(down)}$, known as Onsager's variational principle, has arisen previously \cite{Ray}, \cite{OnP}, \cite{Doi} in attempts to express the observation that macroscopic systems subjected to external forces reach states at which the effect of the forces is minimal. In our analysis this principle arises as a static version of the 0-th law of multiscale dynamics, which states that both $\mathbb{V}^{(up)}$ and $\mathbb{V}^{(down)}$ exist as autonomous theories. If the macroscopic system is not prevented by external forces or internal constraints from reaching the vertex $\mathbb{V}^{(bottom)}$ then the dissipation potential $\Xi$ is closely related to the entropy production $<x^*,\Xi_{x^*}>$ (see (\ref{sol1})) and consequently the extremization of the rate thermodynamic potential $\Psi$ is a minimization of the entropy production subjected to constraints. Note that the switch from maximization to minimization when we switch from the entropy to the rate entropy (production of entropy) is a consequence of the negative definiteness of the Hessian of the entropy (concavity of the entropy).
The second role that the thermodynamic potential plays is to provide the vertex $\mathbb{V}^{(down)}$ with the fundamental thermodynamic relation $\mathfrak{S}^*(J)=\Psi(\widehat{X}(J),J)$. Its possible applications and its relation to the fundamental thermodynamic relation in $\mathbb{V}^{(bottom)}$ remain to be investigated.
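A simple illustration (a sketch for the quadratic case only): for a quadratic rate entropy $\mathfrak{S}(X)=\frac{1}{2}<X,\Lambda X>$ with $\Lambda$ symmetric and positive definite, the rate equilibrium states and the inherited rate fundamental relation are explicit:
\begin{equation}
\Psi_X=-\Lambda X+J=0\;\;\Rightarrow\;\;\widehat{X}(J)=\Lambda^{-1}J,
\qquad
\mathfrak{S}^*(J)=\Psi(\widehat{X}(J),J)=\frac{1}{2}<J,\Lambda^{-1}J>
\end{equation}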
\section{Open Problems}\label{future}
Keeping with the tradition of IWNET meetings, we turn at the end to the future and suggest open problems. Almost all the problems listed below have already been mentioned in the cited references, but they all need further clarification and illustration.
\\
\textit{\textbf{Boundary conditions}}
\\
Macroscopic physical systems that we have considered so far in this Note were either infinite or surrounded by boundaries with periodic boundary conditions. If the boundaries are present then it is necessary to take into account the physics taking place on them. The boundaries and their interaction with the bulk play the role of external forces acting on the bulk.
The physics taking place on the boundaries is expressed in an upper vertex $\mathbb{V}^{(boundary)}$, the physics taking place in the bulk in a lower vertex $\mathbb{V}^{(bulk)}$. \textit{\textbf{The boundary conditions addressing the boundary-bulk coupling are expressed in the rate link $\mathbb{V}^{(boundary)}\longrightarrow \mathbb{V}^{(bulk)}$}}. The link is a rate link since boundaries act on the time evolution in the bulk as forces. They enter the bulk in the vector fields. The time evolution on the boundary will thus be governed by (\ref{rgeneric}) and the Lagrange multipliers in the rate thermodynamic potential entering it will be the boundary conditions (fluxes on the boundaries).
The static version of this viewpoint of boundary conditions is the Onsager variational principle adapted to boundary conditions. In other words, we suggest to regard the time evolution equation in $\mathbb{V}^{(bulk)}$ equipped with boundary conditions as a closed part of a hierarchy addressing both the bulk and the boundary physics. The discarded part of the hierarchy (addressing the boundary physics and its coupling with the bulk) has the form of (\ref{rgeneric}) and its solution is the closure (the boundary conditions).
We emphasize that if the existence of boundary conditions is well established (we can call it 0-th law of boundary thermodynamics) then the above view of boundary conditions is as well founded as, for instance, the classical equilibrium thermodynamics (that is based on the 0-th law of equilibrium thermodynamics).
The 0-th law of boundary thermodynamics in the mesoscopic theory $\mathbb{V}^{(bulk)}$ is the experimentally observed existence of boundary conditions. The boundary conditions exist if the boundary physics and its coupling to the bulk can be expressed by supplying the governing equations in $\mathbb{V}^{(bulk)}$ with boundary conditions and if their solutions are found to agree with results of experimental observations.
The static version of this formulation of boundary conditions has already been introduced and illustrated on specific examples in \cite{SW}, \cite{GrK}.
\\
\textit{\textbf{Temperature }}
\\
The importance of the role that the temperature plays in our everyday life makes the word "temperature" appear also, for instance, in sentences like "the meeting of two leaders lowered the temperature". Can we define the temperature in a way that embraces all such meanings?
We suggest the following definition. \textit{\textbf{Temperature is a measure of internal energy, its measurements involve a process of equilibration. Different meanings that can be given to "internal energy" and "equilibration" lead to different meanings of the temperature.}}
For the equilibrium (or local equilibrium) internal energy, the equilibration made by maximizing the equilibrium (or local equilibrium) entropy, and the thermodynamic walls that freely pass or prevent passing the internal energy, the above definition becomes the standard definition of the temperature. The macroscopic system whose standard temperature is measured is put into contact with another macroscopic system called a thermometer. The two systems are separated by a thermodynamic wall that freely passes the internal energy and both are separated from the exterior by a thermodynamic wall that prevents its passing. The thermometer is a system with a known fundamental thermodynamic relation. This means that the thermometer is able to translate the temperature into another quantity that can be directly seen or felt. For example, if the thermometer is a human body then the temperature is read in the intensity of the nervous reactions. The process of equilibration is an essential part of the notion of the temperature. It brings into the definition the entropy and distinguishes the internal energy from the total energy. The internal energy is the part of the total energy that is out of our direct control (except by the thermodynamic walls that can pass it or prevent its passing). In the standard meaning of the temperature, the overall volume, the number of moles, and the overall macroscopic velocity are considered to be directly controlled from the exterior. The macroscopic work and the overall macroscopic kinetic energy are thus not included in the internal energy.
In the system that is surrounded by a wall that prevents passing the internal energy, its evolution is determined only by the equilibration (by the evolution of the entropy).
The meaning of the temperature can be changed by changing the internal energy and the equilibration. The internal energy can be changed for instance by getting a part of the internal structure (e.g. the conformation of macromolecular chains) under our direct control. The new internal energy is then the original internal energy minus the energy that is expressed in terms of the internal structure that we can control. Measurements of the new temperature would still however need walls that pass or prevent passing only the new internal energy.
Other interesting changes in the meaning of the temperature could be those caused by changing the equilibration. The equilibration (and the entropy associated with it) that we considered above was the one involved in the link $\mathbb{V}^{(up)}\longrightarrow \mathbb{V}^{(bottom)}$. We have seen, however, that the link $\mathbb{V}^{(up)}\longrightarrow \mathbb{V}^{(down)}$, where both vertices represent well established autonomous theories, also involves an equilibration. For example, let $\mathbb{V}^{(up)}$ be kinetic theory, $\mathbb{V}^{(down)}$ hydrodynamics, and $\mathbb{V}^{(up)}\longrightarrow \mathbb{V}^{(down)}$ the rate link. The internal energy in this case is a rate energy and the entropy the rate entropy. Since the equilibration involved in $\mathbb{V}^{(up)}\longrightarrow \mathbb{V}^{(down)}$ occurs also in open, externally driven, and far-from-equilibrium systems, this temperature is well suited for such systems. Finding the walls that pass or prevent passing the rate energies, however, still remains a challenge.
An extensive review of temperatures in which the equilibration is generated by standard entropies can be found in \cite{DavidJou}. A possibility to introduce temperatures measuring time variations of the internal energy and the equilibrations that occur also in externally driven systems and are driven by rate entropies
has been suggested previously in \cite{RLiliana}, and in \cite{Umberto}.
Finally, we return to the beginning of this comment and suggest a possible "social temperature". The internal energy is a collection of conspiracy theories, the thermometer is a group selected for polling, the thermodynamic walls are those that freely pass or prevent passing the gossip (electronic messages), and the fundamental thermodynamic relation that makes the temperature visible are answers to polling questions (questions about happiness, political leaning and similar questions about the state of mind influenced by the conspiracy theories). The equilibration is made by spreading the gossip about the conspiracy theories. The entropy in this illustration is a measure of information about the conspiracy theories.
\\
\textit{\textbf{Criticality}}
\\
Let the macroscopic system under investigation be fixed. For example let it be one liter of water. Some mesoscopic theories are applicable (they faithfully describe the observed behavior of the water) and some are not. Among the former are the theories represented by the vertices $\mathbb{V}^{(top)}$, $\mathbb{V}^{(bottom)}$, and hydrodynamics. Among the latter are the Boltzmann kinetic theory and solid mechanics. In order to make the distinction visible in the graph, we imagine the vertices representing the theories that are applicable as illuminated and the remaining vertices as dark.
Let now the system under investigation be subjected to external influences. For example, we begin to heat the water. Eventually, the water changes into gas.
When observing the phase change in the phase portraits of the illuminated vertices, we see in them very large changes. In particular, we note two phenomena. First, the entropy generating the phase portrait inside the vertex hydrodynamics begins to lose its concavity. Second, we see a significant increase of fluctuations indicating a shortening of the links. We recall that the length of the links is a measure of the separability of the mesoscopic theories. In addition, we begin to see also changes in the illumination of the vertices. The vertex representing hydrodynamics becomes dark and the Boltzmann kinetic theory becomes lit up. In the critical region, where the liquid and gas become essentially indistinguishable, a large number of vertices in the graph become slightly illuminated. In general, by approaching the critical point we expect to see the graph shrinking and uniformly illuminated.
We now explore consequences of these observations. First, we turn to the loss of concavity of the entropy seen in the mathematical description of the approach to the critical region.
Along with the many difficulties that such a loss brings, there is one emerging advantage.
In the critical region the entropy is universal. Outside the critical region both the entropies and the rate entropies are, in mesoscopic theories, different for different systems. They are in fact the quantities, among others like for instance energies, in which the individual nature of the systems is expressed. But in the vicinity of critical points all entropies or rate entropies are essentially the same. This advantage was foreseen by Landau \cite{Landau2} and was used to formulate his theory of critical phenomena. The universality of the functions near critical points has been rigorously proven in \cite{ArnoldC}.
The second observation made in critical regions is the loss of autonomy of vertices in the graph. The loss of autonomy is manifested by a significant increase of fluctuations. This observation
offers a new definition of critical points.
\textit{\textbf{Critical points are defined as points at which no pattern in phase portraits and rate phase portraits can be recognized }}. This truly multiscale definition of critical points is the definition that emerged in the renormalization group theory of critical points \cite{Wilson}. The motivation for the renormalization group theory was the finding that the implications of the Landau theory mentioned in the previous paragraph do not exactly agree with experimental observations of the critical behavior.
The renormalization group theory of critical phenomena and associated with it the lack-of-pattern definition of critical points has been originally formulated \cite{Wilson} inside the setting of the Gibbs equilibrium statistical mechanics. The formulation
inside the setting of the Landau theory has been introduced in \cite{Grcrit1}, \cite{Grcrit2}. We briefly describe the essential points. First, the static MaxEnt link $\mathbb{V}^{(up)}\rightarrow \mathbb{V}^{(bottom)}$ is formulated in a neighborhood of the critical point where the $\mathbb{V}^{(up)}$-entropy loses its concavity; $\mathbb{V}^{(up)}$ is a vertex representing a theory involving more details than the vertex $\mathbb{V}^{(bottom)}$. Let $P^{(up)}$ be the set of $\mathbb{V}^{(up)}$ state variables as well as the parameters entering the $\mathbb{V}^{(up)}$-entropy. In the second step we do the same thing but with the vertex $\mathbb{V}^{(upp)}$ replacing $\mathbb{V}^{(up)}$. The vertex $\mathbb{V}^{(upp)}$ represents a theory that is an upper theory vis-\`{a}-vis the theory represented by the vertex $\mathbb{V}^{(up)}$. It is a refinement of the $\mathbb{V}^{(up)}$-theory. For example, in the illustration worked out in \cite{Grcrit1}, \cite{Grcrit2} a one-component system is made into a two-component system by simply putting another colour on some of the particles. The colour does not change their properties in any way. What changes is only our perception of the system. We are taking a sharper view but the system remains exactly the same.
The energy remains unchanged, only the entropy changes.
In the third step we make the static MaxEnt link $\mathbb{V}^{(upp)}\rightarrow \mathbb{V}^{(up)}$ (again only in a neighborhood of the critical point where the $\mathbb{V}^{(upp)}$-entropy loses its concavity). As a result, we arrive back at the $\mathbb{V}^{(up)}$-theory but with $\overline{P^{(up)}}$ replacing $P^{(up)}$. If $\overline{P^{(up)}}$ is the same as $P^{(up)}$ then this means that there is no pattern and thus that the point is critical in the sense of the lack-of-pattern definition of critical points. Details and a specific illustration in the context of the van der Waals theory can be found in \cite{Grcrit1}, \cite{Grcrit2}.
The observation of large fluctuations, and thus of the lack of autonomy of vertices in the critical region, also makes it possible to address the relation between the loss of concavity of the entropy and the loss of convexity of the rate entropy in the critical region. As explained in Section \ref{present}, the rate time evolution driven by the rate entropy precedes the time evolution driven by the entropy.
We conjecture that the critical behavior will be seen in both the time evolution and the rate time evolution, in both the entropy and the rate entropy.
Externally driven systems are prevented from reaching equilibrium states, and thus the vertex $\mathbb{V}^{(bottom)}$ remains dark for such systems. The critical behavior manifests itself in externally driven systems as an occurrence of a bifurcation in the vertex $\mathbb{V}$. We conjecture that the bifurcation seen in $\mathbb{V}$ will also be seen in the loss of convexity of the rate entropy.
\\
\textit{\textbf{Hierarchies}}
\\
We have already noted in Section \ref{present} that casting the upper vector fields into hierarchies can be interpreted as the first step in formulating rate links directed from upper theories to lower theories. The part of the hierarchy that is cut off plays the role of the rate Generic equation that represents the dynamical version of the Onsager variational principle. Its solution is the closure of the part of the hierarchy that is retained.
We shall make here additional comments.
\textit{Structure preserving hierarchies }
If the upper vector field $(vf)^{up}$ has a structure involving several elements then we can put into the hierarchy form only some of the elements and keep the remaining intact. The resulting hierarchy then clearly keeps the structure of $(vf)^{up}$ and may also be more suitable for the pattern recognition in its phase portrait.
For example, let $(vf)^{up}$ be the Hamiltonian vector field (i.e. $(vf)^{up}=LE_x$; $L$ is the upper Poisson bivector, $E(x)$ is the upper energy, $x$ is the upper state variable).
We can put into the hierarchy form only the Poisson bivector $L(x)$ and leave the energy $E(x)$ untouched. Such structure-hierarchy form of the BBGKY hierarchy is worked out in \cite{GrBBGKY} and of the Grad hierarchy in \cite{GrGrad}.
The process of casting $L$ into the hierarchy is as easy as (if not easier than) casting the whole vector field $LE_x$ into the hierarchy. We illustrate it on the example of the Grad hierarchy. The projection $M^{up}\rightarrow M^{down}$ is in this example: $f(\rr,\vv)\mapsto \cc(\rr)=(c^{(0)}(\rr), c^{(1)}_i(\rr), c^{(2)}_{ij}(\rr), c^{(3)}_{ijk}(\rr),...)$ \\= $(\int d\vv f, \int d\vv v_i f, \int d\vv v_i v_j f, \int d\vv v_i v_j v_k f,...); i,j,k=1,2,3$. The standard Grad hierarchy is constructed by multiplying the Hamiltonian part $-\frac{\partial (v_l f)}{\partial r_l}$ of the Boltzmann vector field by $v_i, v_iv_j,...$ and integrating the products over $\vv$. The structure-Grad hierarchy is obtained by substituting $\frac{\delta}{\delta f(\rr,\vv)}$ in the Poisson bracket \\$\{A,B\}=\int d\rr \int d\vv f\left(\frac{\partial A_f}{\partial r_l}\frac{\partial B_f}{\partial v_l}-\frac{\partial B_f}{\partial r_l}\frac{\partial A_f}{\partial v_l} \right)$ with $\left(\frac{\delta}{\delta c^{(0)}}+v_i\frac{\delta}{\delta c^{(1)}_i}+v_iv_j\frac{\delta}{\delta c^{(2)}_{ij}}+...\right)$. As a result we arrive at the bracket $\{A,B\}$ in which $A$ and $B$ are real valued functions of the Grad moments. With such a bracket we then write $LE_{\cc}$, where $E(\cc)$ is an energy (a function of $\cc$) that we are free to choose to express the individual nature of the system under investigation.
Details of calculations can be found in \cite{GrGrad}.
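As a purely numerical illustration of the projection $f(\rr,\vv)\mapsto \cc(\rr)$, the first few Grad moments at a single spatial point can be computed as in the following sketch; the velocity grid, the drifting-Maxwellian test data, and the use of NumPy are assumptions made only for this example.
\begin{verbatim}
import numpy as np

# Velocity grid (3D) at one spatial point; a drifting Maxwellian as test
# data (m = k_B = 1; all values are illustrative).
v1d = np.linspace(-6.0, 6.0, 49)
dv = v1d[1] - v1d[0]
vx, vy, vz = np.meshgrid(v1d, v1d, v1d, indexing="ij")

n, u, T = 1.0, np.array([0.5, 0.0, 0.0]), 1.2
f = n * (2.0*np.pi*T)**-1.5 * np.exp(
        -((vx-u[0])**2 + (vy-u[1])**2 + (vz-u[2])**2) / (2.0*T))

w = dv**3                                    # volume of one velocity cell
c0 = np.sum(f) * w                           # c^(0)   = int dv f
c1 = np.array([np.sum(vi*f) for vi in (vx, vy, vz)]) * w        # c^(1)_i
c2 = np.array([[np.sum(vi*vj*f) for vj in (vx, vy, vz)]
               for vi in (vx, vy, vz)]) * w                      # c^(2)_ij

# For this Maxwellian: c0 ~ n, c1 ~ n*u, trace(c2) ~ n*(|u|^2 + 3*T).
\end{verbatim}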
The structure-Grad hierarchy has at least two advantages. First, the energy in the Boltzmann equation as well as in its standard-Grad hierarchy reformulation is only the kinetic energy. From the physical point of view, the investigation is thus limited to ideal gases. On the other hand, the choice of the energy $E(\cc)$ in the structure-Grad hierarchy is unconstrained. Any energy expressed in terms of the one-particle distribution function $f(\rr,\vv)$ or, in the reformulation into the structure-Grad hierarchy, in terms of the lower state variable $\cc(\rr)$ can be chosen.
The second advantage is that both the free flow term in the Boltzmann equation and its structure-hierarchy reformulations possess manifestly the Hamiltonian structure.
\textit{Domains of applicability}
The domains of applicability of the unclosed and the closed hierarchies can be different. For example, the standard infinite Grad hierarchy is an equivalent reformulation of the Boltzmann kinetic equation and thus its domain of applicability is an ideal gas.
The domain of applicability of the closed 5-moment hierarchy is much larger (it includes simple fluids). Such a change of domains of applicability can happen because the patterns recognized in the upper phase portrait are
often determined by only some features of the upper vector fields. The same patterns may arise in many very different upper phase portraits. For example, in the case of the Grad hierarchy, the 5-moment equations are essentially the same in the standard Grad hierarchy and in the structure-Grad hierarchy. This means that the determining feature of the closed equations is the kinematics (the Poisson bivector $L$) and not the particular choice of the energy.
\textit{Existence of closures}
Another interesting question about the closures is the question about their existence. Do patterns in the upper phase portrait exist? In the Grad hierarchy the closure to the first five moments exists. Its existence is supported by mathematical results about the Lie group expressing kinematics \cite{Ograd} and also by the existence of hydrodynamics as an autonomous dynamical theory that has arisen on the basis of its own experimental observations. On the other hand, a closure to more than five moments need not exist. Mathematical results about the kinematics \cite{Ograd} in fact indicate it. Also, there is no well established autonomous hydrodynamics involving higher order Grad fields. There are, of course, well established generalized hydrodynamic theories of complex fluids that involve extra fields and distribution functions. But the complexity in such fluids is due to the presence of an internal structure. The kinetic-theory origin of such theories is the Kirkwood kinetic theory \cite{Kirkwood1}, \cite{Kirkwood2} (see also \cite{Bird}) that unfolds from the BBGKY hierarchy and not from the Boltzmann kinetic theory. The complexities expressed in the higher Grad moments $\cc(\rr)$ are complexities of velocity correlations. They arise and become visible in complex flows, in particular in turbulent flows. There is no autonomous hydrodynamics-like theory of complex flows. Observations of turbulent flows do not indicate the existence of any autonomous extended hydrodynamics. For example, the Kolmogorov cascade of energies of turbulent flows does not show anything like it.
\textit{Two classes of Grad's moments}
The fields serving as state variables in the classical hydrodynamics divide naturally into two classes. In the first class are the fields of the mass density $\rho(\rr)$ and the momentum $\uu(\rr)$ (entering the macroscopic mechanics) and in the second class is the field of the internal energy $\epsilon(\rr)$ (entering the microscopic internal structure). It seems natural to keep this division also in extensions of the classical hydrodynamics. For example, in the Cattaneo extension \cite{Cattaneo} the state variables split into $(\rho(\rr),\uu(\rr))$ and $(\epsilon(\rr),\qq(\rr))$, where $\qq(\rr)$ is the heat flux. With such a split it is then possible to formulate a Lagrangian version of the Cattaneo hydrodynamics as motion of two fluid particles. One is mechanical and the other caloric (see details in \cite{GT}). Grad's hierarchy with the extra structure provided by splitting Grad's moments into two classes is systematically investigated in \cite{RA}.
\section{Concluding Remarks}
The founding stone of the multiscale theory is the \textit{\textbf{0-th law of multiscale dynamics}}. The 0-th law of equilibrium thermodynamics guarantees the existence of equilibrium states and equilibrium thermodynamics. The 0-th law of multiscale dynamics extends it to the existence of autonomous mesoscopic dynamical theories and multiscale dynamics. The multiscale theory organizes the autonomous dynamical theories into an oriented graph. The vertices $\mathbb{V}$ in the graph are the autonomous dynamical theories. Every two vertices are connected by a link. Every link $\mathbb{V}^{(up)}\longrightarrow \mathbb{V}^{(down)}$ has an orientation, it is directed from the vertex $\mathbb{V}^{(up)}$ representing a theory involving more details to the vertex $\mathbb{V}^{(down)}$ representing a theory involving less details. Every link has also a length that is a measure of the separability of the two vertices that it connects.
From the physical point of view, the link $\mathbb{V}^{(up)}\longrightarrow \mathbb{V}^{(down)}$ represents the process (the time evolution taking place in the state space of $\mathbb{V}^{(up)}$) in which the macroscopic systems under investigation are prepared for being investigated in the theory represented by the vertex $\mathbb{V}^{(down)}$. If the time evolution is lifted from the state space to the time evolution in the vector fields on the state space then the link is called a rate link. While the vertices are all different one from the other, the links share a universal structure. The time evolution that they represent is in all links generated by an entropy (a rate entropy in rate links). The concept of the entropy has thus its origin in the 0-th law of multiscale dynamics. The entropy enters the time evolution represented by the links $\mathbb{V}^{(up)}\longrightarrow \mathbb{V}^{(down)}$ as an "ambassador" of the pattern representing the vertex $\mathbb{V}^{(down)}$ in the $\mathbb{V}^{(up)}$ phase portrait. In particular, if $\mathbb{V}^{(down)}$ is $\mathbb{V}^{(bottom)}$, the entropy generating the preparation process, if evaluated at the pattern reached in the preparation process, becomes the equilibrium entropy introduced originally in the equilibrium thermodynamics in its 2-nd law.
\\
\\
\textbf{Acknowledgement}
\\
I would like to thank O\v{g}ul Esen, V\'{a}clav Klika, Hans Christian \"{O}ttinger, Michal Pavelka, and Henning Struchtrup for stimulating discussions.
\\
|
{
"arxiv_id": "2302.13158",
"language": "en",
"timestamp": "2023-02-28T02:11:28",
"url": "https://arxiv.org/abs/2302.13158",
"yymm": "2302"
} | \section{Introduction}
Considerable interest has recently emerged with the enforcement of
essential boundary conditions~\cite{sukumar2022} and contact detection
algorithms~\cite{lai2022} using approximate distance functions (ADFs).
Realistic contact frameworks for finite-strain problems make use of
three classes of algorithms that are often coupled (see~\cite{wriggerscontact}):
\begin{enumerate}
\item Techniques of unilateral constraint enforcement. Examples of these
are direct elimination (also reduced-gradient algorithms), penalty
and barrier methods, and Lagrange multiplier methods with the complementarity
condition~\cite{simo1992c}. Stresses from the underlying discretization
are often used to assist the normal condition with Nitsche's method~\cite{nitsche1970}.
\item Frictional effects (and more generally,
constitutive contact laws)
and cone-complementarity forms~\cite{kanno2006,areias2015c}.
Solution paradigms are augmented Lagrangian methods for friction~\cite{laursen1993},
cone-projection techniques~\cite{desaxce1998} and the so-called yield-limited
algorithm for friction~\cite{jones2000}.
\item Discretization methods. Of concern are distance calculations, estimation
of tangent velocities and general discretization arrangements. In
contemporaneous use are (see~\cite{wriggerscontact}) node-to-edge/face,
edge/face-to-edge/face and mortar discretizations.
\end{enumerate}
In terms of contact enforcement and friction algorithms, finite displacement
contact problems are typically addressed with well-established contact
algorithms, often derived from solutions developed by groups working
on variational inequalities and nonsmooth analysis. However, in the
area of contact discretization and the related area of contact kinematics
\cite{simo1985}, there are still challenges to be addressed in terms
of ensuring the complete robustness of implicit codes. One of the
earliest papers on contact discretization was by Chan and Tuba \cite{chan1971},
who considered contact with a plane and used symmetry to analyze cylinder-to-cylinder
contact. Francavilla and Zienkiewicz \cite{francavilla1975} later
proposed an extension to node-to-node contact in small strains, allowing
for direct comparison with closed-form solutions. The extension to
finite strains requires further development, and the logical development
was the node-to-segment approach, as described in the work of Hallquist
\cite{hallquist1985}. Although node-to-segment algorithms are known
to entail many deficiencies, most of the drawbacks have been addressed.
Jumps and discontinuities resulting from nodes sliding between edges
can be removed by smoothing and regularization \cite{neto2014}. Satisfaction
of the patch test, which is fundamental for convergence, can be enforced
by careful penalty weighting \cite{zavarise2009b,zavarise2009}. It is well known
that single-pass versions of the node-to-segment method result in
interference and double-pass can produce mesh interlocking, see \cite{puso2004,puso2004b}.
This shortcoming has eluded all attempts at a solution and has motivated
the development of surface-to-surface algorithms. One of the first
surface-to-surface algorithms was introduced by Simo, Wriggers, and
Taylor \cite{simo1985}. Zavarise and Wriggers \cite{zavarise1998}
presented a complete treatment of the 2D case and further developments
were supported by parallel work in nonconforming meshes, see \cite{kim2001}.
A review article on mortar methods for contact problems, where stabilization
is discussed, was presented in \cite{laursen2012}, and an exhaustive
detailing of most geometric possibilities of contact was given by Farah, Wall and
Popp \cite{farah2018}. The latter paper reveals the significant effort
that is required to obtain a robust contact algorithm. An interesting
alternative approach to contact discretization has been proposed by
Wriggers, Schr\"oder, and Schwarz \cite{wriggers2013}, who use an intermediate
mesh with a specialized anisotropic hyperelastic law to represent
contact interactions. In the context of large, explicit codes, Kane
et al. \cite{kane1999} introduced an interference function, or gap,
based on overlapping volumes, allowing non-smooth contact to be
dealt with in traditional smooth-based algorithms.
In addition to these general developments, there have been specialized
algorithms for rods, beams, and other structures. Litewka et al. \cite{litewka2013}
as well as Neto et al. \cite{neto2016,neto2017}, have produced efficient
algorithms for beam contact. For large-scale analysis of beams, cables
and ropes, Meier et al. \cite{meier2017} have shown how beam specialization
can be very efficient when a large number of entities is involved.
Recently, interest has emerged in using approximate distance
functions~\cite{wolff2013,liu2020cm,aguirre2020,lai2022} as alternatives to algorithms
that heavily rely on computational geometry. These algorithms
resolve the non-uniqueness of the projection, but still require
geometric computations.
In~\cite{wolff2013}, Wolff and Bucher proposed a
local construction to obtain distances inside any object, but it still
requires geometric calculations, especially for the integration scheme
along surfaces. Liu et al.~\cite{liu2020cm} have combined finite
elements with distance potential discrete element method (DEM) in
2D within an explicit integration framework. A geometric-based distance
function is constructed and contact forces stem from this construction.
In~\cite{aguirre2020}, the analysis of thin rods is performed using
classical contact but closed-form contact detection is achieved by
a signed-distance function defined on a voxel-type grid. In~\cite{lai2022},
a pre-established set of shapes is considered, and a distance
function is defined for each particle in a DEM setting,
followed by a projection.
In the context
of computer graphics and computational geometry, Macklin et al.~\cite{macklin2020}
introduced an element-wise local optimization algorithm to determine
the closest distance between the signed-distance-function
isosurface and face elements. Although close to what is proposed here,
no solution to a partial differential equation (PDE)
is proposed and significant geometric calculations are still required.
In this paper, we adopt a different method, which aims to be more general
and less geometric-based. This is possible due to the equivalence
between the solution of the Eikonal equation and the distance
function~\cite{belyaev2015}.
It is worth noting that very efficient linear algorithms are available
to solve regularized Eikonal equations, as discussed by Crane
et al.~\cite{crane2013}. The work in~\cite{crane2013}
provides a viable solution
for contact detection in computational mechanics. Applications
extend beyond contact mechanics, and they provide a solution for the
long-standing issue of imposing essential boundary conditions in
meshfree methods~\cite{sukumar2022}.
We solve a partial differential equation (PDE) that produces an ADF
(approximate distance function) that converges to the distance function
as a length parameter tends to zero. The relation between the screened
Poisson equation (also identified as Helmholtz-like equation), which
is adopted in computational damage and fracture~\cite{peerlings,peerlings2001}
and the Eikonal equation is well understood~\cite{guler2014}. Regularizations
of the Eikonal equation such as the geodesics-in-heat~\cite{crane2013} and the
screened Poisson equation are discussed by Belyaev and Fayolle~\cite{belyaev2020}.
We take advantage of the latter for computational contact mechanics.
Specifically, the proposed algorithm solves well-known shortcomings
of geometric-based contact enforcement:
\begin{enumerate}
\item Geometric calculations are reduced to the detection of a target element
for each incident node.
\item The gap function $g(\bm{x})$ is continuous, since the solution of
the screened Poisson equation is a continuous function.
\item The contact force direction is unique and obtained as the gradient
of $g(\bm{x})$.
\item Since the Courant--Beltrami penalty function is adopted, contact force
is continuous in the normal direction.
\end{enumerate}
Concerning the importance of solving the uniqueness problem, Konyukhov
and Schweizerhof~\cite{konyukhov2008} have shown that considerable
computational geometry must be in place to ensure uniqueness and existence
of node-to-line projection. Another important computational effort
was presented by Farah et al.~\cite{farah2018} to geometrically address
all cases in 3D with mortar interpolation. Extensions to higher dimensions
require even more intricate housekeeping. When compared with the geometrical
approach to the distance function~\cite{aguirre2020,lai2022}, the
algorithm is much simpler at the cost of a solution of an additional
PDE. Distance functions can be generated by level set solutions of
the transport equation~\cite{russo2000}.
The remainder of this paper is organized as follows. In Section \ref{sec:Approximate-distance-function},
the approximate distance function is introduced as the solution of
a regularization of the Eikonal equation. In Section \ref{sec:Discretization},
details concerning the discretization are introduced. The overall
algorithm, including the important step-control, is presented in Section
\ref{sec:Algorithm-based-on}. Verification and validation examples
are shown in Section \ref{sec:Numerical-tests} in both 2D and 3D.
Finally, in Section \ref{sec:Conclusions}, we present the advantages
and shortcomings of the present algorithm, as well as suggestions
for further improvements.
\section{Approximate distance function (ADF)}
\label{sec:Approximate-distance-function}
Let $\Omega\subset\mathbb{R}^{d}$ be a deformed configuration of
a given body in $d$-dimensional space and $\Omega_{0}$ the corresponding undeformed
configuration. The boundaries of these configurations are $\Gamma$
and $\Gamma_{0}$, respectively. Let us consider deformed coordinates
of an arbitrary point $\bm{x}\in\Omega$ and a
specific point, called incident, with coordinates $\bm{x}_{I}$. For
reasons that will become clear, we also consider a \emph{potential
}function $\phi\left(\bm{x}_{I}\right)$, which is the solution of
a scalar PDE. We are concerned with an approximation of the signed-distance
function. The so-called \emph{gap function }is now introduced as a
differentiable function $g\left[\phi\left(\bm{x}_{I}\right)\right]$
such that~\cite{wriggerscontact}:
\begin{equation}
g\left[\phi\left(\bm{x}_{I}\right)\right]\begin{cases}
<0 & \quad\bm{x}_{I}\in\Omega\\
=0 & \quad\bm{x}_{I}\in\Gamma\\
>0 & \quad\bm{x}_{I}\notin\Omega\cup\Gamma
\end{cases}.
\label{eq:gtriple}
\end{equation}
If a unique normal $\bm{n}\left(\bm{x}_{I}\right)$ exists for $\bm{x}_{I}\in\Gamma$,
we can decompose the gradient of $g\left[\phi\left(\bm{x}_{I}\right)\right]$
into parallel ($\parallel$) and orthogonal ($\perp$) terms: $\nabla g\left[\phi\left(\bm{x}_{I}\right)\right]=\left\{ \nabla g\left[\phi\left(\bm{x}_{I}\right)\right]\right\} _{\parallel}+\left\{ \nabla g\left[\phi\left(\bm{x}_{I}\right)\right]\right\} _{\perp}$,
with $\left\{ \nabla g\left[\phi\left(\bm{x}_{I}\right)\right]\right\} _{\perp}\sslash\bm{n}\left(\bm{x}_{I}\right)$.
Since $g\left[\phi\left(\bm{x}_{I}\right)\right]=0$ for $\bm{x}_{I}\in\Gamma$ we
have $\nabla g\left[\phi\left(\bm{x}_{I}\right)\right]\sslash\bm{n}\left(\bm{x}_{I}\right)$
on those points. In the frictionless case, the normal contact force component
is identified as $f_{c}$ and contact conditions correspond to the
following complementarity conditions~\cite{simo1992c}:
\begin{align}
g\left[\phi\left(\bm{x}_{I}\right)\right]f_{c} & =0 , \nonumber \\
f_{c} & \geq 0 , \label{eq:gplu}\\
g\left[\phi\left(\bm{x}_{I}\right)\right] & \geq 0 , \nonumber
\end{align}
or equivalently, $\left\langle f_{c}-g\left[\phi\left(\bm{x}_{I}\right)\right]\right\rangle -f_{c}=0$,
where $\left\langle x\right\rangle =\nicefrac{\left(x+|x|\right)}{2}$.
The vector form of the contact force is given by $\bm{f}_{c}=f_{c}\nabla g\left[\phi\left(\bm{x}_{I}\right)\right]$.
Assuming that a function $g\left[\phi\left(\bm{x}_{I}\right)\right]$, and its first
and second derivatives are available from the solution of a PDE, we
approximately satisfy conditions (\ref{eq:gplu}) by defining the
contact force as follows:
\begin{equation}
\bm{f}_{c}=f_{c}\nabla g\left[\phi\left(\bm{x}_{I}\right)\right]=\kappa\min\left\{ 0,g\left[\phi\left(\bm{x}_{I}\right)\right]\right\} ^{2}\nabla g\left[\phi\left(\bm{x}_{I}\right)\right]\qquad(\kappa>0), \label{eq:fcx}
\end{equation}
where $\kappa$ is a penalty parameter for the Courant--Beltrami
function~\cite{courant1943,beltrami1969}. The Jacobian of this force is given by:
\begin{equation}
\bm{J}=\kappa\min\left\{ 0,g\left[\phi\left(\bm{x}_{I}\right)\right]\right\} ^{2}\nabla
\otimes \nabla g\left[\phi\left(\bm{x}_{I}\right)\right]+2\kappa\min\left\{ 0,g\left[\phi\left(\bm{x}_{I}\right)\right]\right\} \nabla g\left[\phi\left(\bm{x}_{I}\right)\right]\otimes\nabla g\left[\phi\left(\bm{x}_{I}\right)\right].
\label{eq:stiffn}
\end{equation}
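For illustration, a minimal Python/NumPy sketch of the penalty force (\ref{eq:fcx}) and its Jacobian (\ref{eq:stiffn}) is given below. The interface is an assumption on our part: the gap and its first and second derivatives are taken as precomputed NumPy arrays (they are obtained from the ADF as described next), and this is a sketch rather than the actual \texttt{SimPlas} implementation.
\begin{verbatim}
import numpy as np

def contact_force_and_jacobian(g, grad_g, hess_g, kappa):
    """Courant-Beltrami penalty force (Eq. fcx) and its Jacobian (Eq. stiffn).

    g      : scalar gap value g[phi(x_I)]
    grad_g : (d,) gradient of the gap
    hess_g : (d, d) Hessian of the gap
    kappa  : penalty parameter (> 0)
    """
    gm = min(0.0, g)                    # active only under penetration (g < 0)
    f_c = kappa * gm**2                 # scalar normal force magnitude
    force = f_c * grad_g                # vector contact force
    jac = f_c * hess_g + 2.0 * kappa * gm * np.outer(grad_g, grad_g)
    return force, jac
\end{verbatim}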
Varadhan~\cite{varadhan1967} established that the solution of the
screened Poisson equation:
\begin{subequations}\label{eq:cln2}
\begin{align}
c_{L} \nabla^2 \phi(\bm{x}) - \phi(\bm{x}) &= 0\quad\textrm{in } \Omega \label{eq:cln2-a}\\
\phi (\bm{x}) & =1 \quad \textrm{on } \Gamma \label{eq:cln2-b}
\end{align}
\end{subequations}
produces an ADF given by $-c_{L}\log\left[\phi\left(\bm{x}\right)\right]$. This property has been recently studied by Guler et al.~\cite{guler2014}
and Belyaev et al.~\cite{belyaev2015,belyaev2020}. The exact distance
is obtained as a limit:
\begin{equation}
d\left(\bm{x}\right)=-\lim_{c_{L}\rightarrow0}c_{L}\log\left[\phi\left(\bm{x}\right)\right],\label{eq:ddeterm}
\end{equation}
which is the solution of the Eikonal equation. Proof of this limit is provided in Varadhan~\cite{varadhan1967}.
We transform the ADF into a signed ADF by introducing a sign for outer
($+$) or inner ($-$) points:
\begin{equation}
g\left(\bm{x}\right)=\pm c_{L}\log\left[\phi\left(\bm{x}\right)\right].\label{eq:gx}
\end{equation}
Note that if the plus sign is adopted in (\ref{eq:gx}), then inner
points will result in a negative gap $g(\bm{x})$. The gradient of
$g(\bm{x})$ results in
\begin{equation}
\nabla g\left(\bm{x}\right)=\pm\frac{c_{L}}{\phi\left(\bm{x}\right)}\nabla\phi\left(\bm{x}\right)\label{eq:nablag}
\end{equation}
\\
and the Hessian of $g(\bm{x})$ is obtained in a straightforward manner:
\begin{equation}
\nabla\otimes\nabla\left[g\left(\bm{x}\right)\right]=\pm\frac{c_{L}}{\phi\left(\bm{x}\right)}\left\{ \nabla\otimes\nabla\left[\phi\left(\bm{x}\right)\right]-\frac{1}{\phi\left(\bm{x}\right)}\nabla\phi\left(\bm{x}\right)\otimes\nabla\phi\left(\bm{x}\right)\right\}.
\label{eq:n2g}
\end{equation}
Note that $c_L$ is the square of a characteristic length, i.e.\ $c_L=l_c^2$, which is here taken as a solution parameter.
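As a complement, the following Python/NumPy sketch evaluates the signed gap (\ref{eq:gx}), its gradient (\ref{eq:nablag}) and its Hessian (\ref{eq:n2g}) from $\phi$ and its derivatives at the incident point. The function signature and the choice of the plus (outer) sign are illustrative assumptions, not the implementation of~\cite{Areias2022cgithub}.
\begin{verbatim}
import numpy as np

def signed_adf(phi, grad_phi, hess_phi, c_L, sign=+1.0):
    """Signed ADF gap and derivatives from the screened-Poisson field phi.

    phi      : value of phi at the incident point (0 < phi <= 1)
    grad_phi : (d,) gradient of phi
    hess_phi : (d, d) Hessian of phi
    c_L      : solution parameter of the screened Poisson equation
    sign     : +1 for outer points, -1 for inner points (Eq. gx)
    """
    g = sign * c_L * np.log(phi)                              # Eq. (gx)
    grad_g = sign * (c_L / phi) * grad_phi                    # Eq. (nablag)
    hess_g = sign * (c_L / phi) * (
        hess_phi - np.outer(grad_phi, grad_phi) / phi)        # Eq. (n2g)
    return g, grad_g, hess_g
\end{verbatim}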
Using a test field $\delta\phi(\bm{x})$, the weak form
of (\ref{eq:cln2}) is written as
\begin{equation}
\int_{\Omega}c_{L}\nabla\phi\left(\bm{x}\right)\cdot\nabla\delta\phi\left(\bm{x}\right)\dif V+\int_{\Omega}\phi\left(\bm{x}\right)\delta\phi\left(\bm{x}\right)\dif V=\int_{\Gamma}c_{L}\delta\phi\,\nabla\phi\left(\bm{x}\right)\cdot\bm{n}\left(\bm{x}\right)\dif A ,
\label{eq:weakclphi}
\end{equation}
where $\textrm{d}A$ and $\textrm{d}V$ are differential area and volume
elements, respectively.
Since an essential boundary condition is imposed on
$\Gamma$ such that $\phi\left(\bm{x}\right)=1$ for $x\in\Gamma$,
it follows that
$\delta \phi(\bm{x}) = 0$ on $\Gamma$ and the right-hand-side
of (\ref{eq:weakclphi}) vanishes.
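For concreteness, a minimal sketch of the element contribution to the weak form (\ref{eq:weakclphi}) for a linear triangle is given below (Python/NumPy). The routine name and data layout are hypothetical; global assembly and the imposition of $\phi=1$ on boundary nodes are left to the surrounding finite element code.
\begin{verbatim}
import numpy as np

def screened_poisson_tri_element(xe, c_L):
    """Element matrix of c_L grad(phi).grad(dphi) + phi dphi on a linear triangle.

    xe  : (3, 2) nodal coordinates of the triangle (deformed configuration)
    c_L : square of the characteristic length l_c
    """
    x, y = xe[:, 0], xe[:, 1]
    # signed area and constant shape-function gradients of N_1, N_2, N_3
    area = 0.5 * ((x[1] - x[0]) * (y[2] - y[0]) - (x[2] - x[0]) * (y[1] - y[0]))
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]]) / (2.0 * area)
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]]) / (2.0 * area)
    grads = np.stack([b, c], axis=1)                  # (3, 2) gradients of N_K
    K = c_L * area * grads @ grads.T                  # diffusion (stiffness) term
    M = area / 12.0 * (np.ones((3, 3)) + np.eye(3))   # consistent mass term
    return K + M

# Assembly sums these 3x3 blocks into the global matrix; rows and columns of
# boundary nodes are then constrained to enforce phi = 1 on Gamma, so that the
# boundary term of the weak form vanishes as noted above.
\end{verbatim}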
\section{Discretization}
\label{sec:Discretization}
In an interference condition, each interfering (incident) node, with coordinates
$\bm{x}_{I}$, will fall within a given continuum element with nodal coordinates
$\bm{x}_{N}$. The parent-domain coordinates $\bm{\xi}$ of the incident node
therefore depend on the element nodal coordinates. Parent-domain coordinates are given
by:
\begin{subequations}
\begin{align}
\bm{\xi}_{I} &= \underset{\bm{\xi}}{\textrm{argmin}}\left[\left\Vert \bm{x}\left(\bm{\xi}\right)-\bm{x}_{I}\right\Vert \right], \label{eq:xi} \\
\intertext{and it is straightforward to show that for a triangle,}
\xi_{I1} &= \frac{x_{I2}\left(x_{11}-x_{31}\right)+x_{12}x_{31}-x_{11}x_{32}+x_{I1}\left(x_{32}-x_{12}\right)}{x_{11}\left(x_{22}-x_{32}\right)+x_{12}\left(x_{31}-x_{21}\right)+x_{21}x_{32}-x_{22}x_{31}}, \label{eq:xi1} \\
\xi_{I2} &=\frac{x_{I2}\left(x_{21}-x_{11}\right)+x_{12}x_{21}-x_{11}x_{22}+x_{I1}\left(x_{12}-x_{22}\right)}{x_{11}\left(x_{22}-x_{32}\right)+x_{12}\left(x_{31}-x_{21}\right)+x_{21}x_{32}-x_{22}x_{31}}, \label{eq:xi2}
\end{align}
\end{subequations}
with similar expressions for a tetrahedron~\cite{Areias2022cgithub}.
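A small Python/NumPy sketch of this computation for a triangle is shown below. Instead of hard-coding (\ref{eq:xi1})--(\ref{eq:xi2}), it solves the equivalent $2\times2$ linear system, for which Cramer's rule reproduces the closed-form expressions; the node ordering and shape-function convention are assumptions on our part.
\begin{verbatim}
import numpy as np

def parent_coordinates_triangle(xn, x_I):
    """Parent coordinates (xi_1, xi_2) of an incident point in a linear triangle.

    xn  : (3, 2) nodal coordinates x_K of the target triangle
    x_I : (2,) coordinates of the incident node
    Solves x_I = x_1 + xi_1 (x_2 - x_1) + xi_2 (x_3 - x_1).
    """
    A = np.column_stack((xn[1] - xn[0], xn[2] - xn[0]))
    xi = np.linalg.solve(A, x_I - xn[0])
    # shape functions of the linear triangle; 0 <= N_K <= 1 means x_I is inside
    N = np.array([1.0 - xi[0] - xi[1], xi[0], xi[1]])
    return xi, N
\end{verbatim}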
The continuum element interpolation is as follows:
\begin{equation}
\bm{x}\left(\bm{\xi}\right)=\sum_{K=1}^{d+1}N_{K}\left(\bm{\xi}\right)\bm{x}_{K}=\bm{N}\left(\bm{\xi}\right)\cdot\bm{x}_{N},\label{eq:xxi}
\end{equation}
where $N_{K}\left(\bm{\xi}\right)$ with $K=1,\ldots,d+1$
are the shape functions of a triangular or tetrahedral element.
Therefore, \eqref{eq:xi} can be written as:
\begin{equation}
\bm{\xi}_{I} = \underset{\bm{\xi}}{\textrm{argmin}}\left[\left\Vert \bm{N}\left(\bm{\xi}\right)\cdot\bm{x}_{N}-\bm{x}_{I}\right\Vert \right]=\bm{a}_N\left(\bm{x}_C\right)\label{eq:firstian}
\end{equation}
In (\ref{eq:firstian}), we group the continuum node coordinates and the
incident node coordinates in a single array $\bm{x}_{C}=\left\{ \begin{array}{cc}
\bm{x}_{N} & \bm{x}_{I}\end{array}\right\} ^{T}$ with cardinality $\#\bm{x}_{C}=d\left(d+2\right)$. We adopt the notation $\bm{x}_{N}$ for the coordinates of the continuum
element. For triangular and tetrahedral discretizations, $\bm{\xi}_{I}$
is a function of $\bm{x}_{N}$ and $\bm{x}_{I}$:
\begin{equation}
\bm{\xi}_{I}=\bm{a}_{N}\left(\left\{ \begin{array}{c}
\bm{x}_{N}\\
\bm{x}_{I}
\end{array}\right\} \right)=\bm{a}_{N}\left(\bm{x}_{C}\right). \label{eq:xian}
\end{equation}
The first and
second derivatives of $\bm{a}_{N}$ with respect to $\bm{x}_{C}$
make use of the following notation:
\begin{align}
\bm{A}_{N}\left(\bm{x}_{C}\right) & =\frac{\dif\bm{a}_{N}\left(\bm{x}_{C}\right)}{\dif\bm{x}_{C}}\qquad
\left( d\times\left[d\left(d+2\right)\right] \right), \label{eq:firstderivative}\\
\mathcal{A}_{N}\left(\bm{x}_{C}\right) & =\frac{\dif\bm{A}_{N}\left(\bm{x}_{C}\right)}{\dif\bm{x}_{C}}\qquad
\left( d\times\left[d\left(d+2\right)\right]^{2} \right). \label{eq:secondderivative}
\end{align}
Source code for these functions is available in \texttt{Github}~\cite{Areias2022cgithub}. A mixed formulation is adopted with equal-order interpolation for
the displacement $\bm{u}$ and the function $\phi$. For a set of
nodal displacements $\bm{u}_{N}$ and nodal potential values $\bm{\phi}_{N}$:
\begin{subequations}
\begin{align}
\bm{u}\left(\bm{x}_{C}\right) & =\bm{N}_{u}\left(\bm{x}_{C}\right)\cdot\bm{u}_{N},
\label{eq:uxi}\\
\phi\left(\bm{x}_{C}\right) & =\bm{N}_{\phi}\left(\bm{x}_{C}\right)\cdot\bm{\phi}_{N},
\label{eq:phixi} \\
\intertext{where in three dimensions}
\bm{N}_{u}\left(\bm{x}_{C}\right) & =\left[\begin{array}{ccccc}
\cdots & N_{K}\left[\bm{a}_{N}\left(\bm{x}_{C}\right)\right] & 0 & 0 & \cdots\\
\cdots & 0 & N_{K}\left[\bm{a}_{N}\left(\bm{x}_{C}\right)\right] & 0 & \cdots\\
\cdots & 0 & 0 & N_{K}\left[\bm{a}_{N}\left(\bm{x}_{C}\right)\right] & \cdots
\end{array}\right],\label{eq:shu}\\
\bm{N}_{\phi}\left(\bm{x}_{C}\right) & =\left[\begin{array}{ccc}
\cdots & N_{K}\left[\bm{a}_{N}\left(\bm{x}_{C}\right)\right] & \cdots\end{array}\right]^{T}
,
\end{align}
\end{subequations}
where $\bm{\xi}_{I}=\bm{a}_{N}\left(\bm{x}_{C}\right)$. First and
second derivatives of $N_{K}\left[\bm{a}_{N}\left(\bm{x}_{C}\right)\right]$
are determined from the chain rule:
\begin{align}
\frac{\dif N_{K}}{\dif\bm{x}_{C}} & =\frac{\dif N_{K}}{\dif\bm{\xi}_{I}}\cdot\bm{A}_{N}\left(\bm{x}_{C}\right),\label{eq:dndxc}\\
\frac{\dif^{2}N_{K}}{\dif\bm{x}_{C}^{2}} & =\bm{A}_{N}^{T}\left(\bm{x}_{C}\right)\cdot\frac{\dif^{2}N_{K}}{\dif\bm{\xi}_{I}^{2}}\cdot\bm{A}_{N}\left(\bm{x}_{C}\right)+\frac{\dif N_{K}}{\dif\bm{\xi}_{I}}\cdot\mathcal{A}_{N}\left(\bm{x}_{C}\right).
\end{align}
For the test function of the incident point, we have
\begin{equation}
\delta\phi\left(\bm{x}_{C}\right) =\bm{N}_{\phi}\left(\bm{x}_{C}\right)\cdot\delta\boldsymbol{\phi}_{N}+\bm{\phi}_{N}\cdot\frac{\dif\bm{N}_{\phi}\left(\bm{x}_{C}\right)}{\dif\bm{x}_{C}}\cdot\delta\bm{x}_{C}.\label{eq:dphi}
\end{equation}
For linear continuum elements, the
second variation of $\phi\left(\bm{\xi}\right)$ is given by the following rule:
\begin{align}
\dif\delta\phi\left(\bm{x}_{C}\right) =\, &\, \delta\bm{\phi}_{N}\cdot\frac{\dif\bm{N}_{\phi}\left(\bm{x}_{C}\right)}{\dif\bm{x}_{C}}\cdot\dif\bm{x}_{C}+\dif\bm{\phi}_{N}\cdot\frac{\dif\bm{N}_{\phi}\left(\bm{x}_{C}\right)}{\dif\bm{x}_{C}}\cdot\delta\bm{x}_{C} \nonumber \\
& +\bm{\phi}_{N}\cdot\frac{\dif^{2}\bm{N}_{\phi}\left(\bm{x}_{C}\right)}{\dif\bm{x}_{C}^{2}}:\left(\delta\bm{x}_{C}\otimes\dif\bm{x}_{C}\right)\label{eq:sv}
\end{align}
Since the gradient of $\phi$ makes use of the continuum part of the
formulation, we obtain:
\begin{align}
\nabla\phi\left(\bm{\xi}\right) & =\bm{\phi}_{N}\cdot\underbrace{\frac{\dif\boldsymbol{N}_{\phi}\left(\bm{\xi}\right)}{\dif\bm{\xi}}\cdot\bm{j}^{-1}}_{\nicefrac{\dif\bm{N}_{\phi}}{\dif\bm{x}}},\qquad\nabla\delta\phi\left(\bm{\xi}\right)=\delta\bm{\phi}_{N}\cdot\frac{\dif\boldsymbol{N}_{\phi}\left(\bm{\xi}\right)}{\dif\bm{\xi}}\cdot\bm{j}^{-1}\label{eq:gphijac}
\end{align}
\\
where $\bm{j}$ is the Jacobian matrix in the deformed configuration.
The element residual and stiffness are obtained from these quantities
and available in \texttt{Github}~\cite{Areias2022cgithub}. Use is
made of the \texttt{Acegen}~\cite{korelc2002} add-on
to \texttt{Mathematica}~\cite{mathematica}
to obtain the source code for the final expressions.
\section{Algorithm and step control}
All nodes are considered candidates and all elements are targets.
A simple bucket-sort strategy is adopted to reduce the computational
cost. In addition, step control is required to avoid a change of target
during Newton-Raphson iterations. The screened Poisson equation is solved
for all bodies in the analyses. Figure~\ref{fig:Definition-of-a} shows
the simple mesh overlapping under consideration. The resulting algorithm
is straightforward. Main characteristics are:
\begin{itemize}
\item All nodes are candidate incident nodes.
\item All elements are generalized targets.
\end{itemize}
\label{sec:Algorithm-based-on}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{mesh}
\caption{Relevant quantities for the definition of a contact discretization
for element $e$.}\label{fig:Definition-of-a}
\end{figure}
The modifications required for a classical nonlinear Finite Element
software (in our case \texttt{SimPlas}~\cite{simplascode}) to include this
contact algorithm are modest. Algorithm~\ref{alg:Contact-algorithm-based}
presents the main tasks. In this Algorithm, blue boxes denote the
contact detection task, which here is limited to:
\begin{enumerate}
\item Detect nearest neighbor (in terms of distance) elements for each node.
A bucket ordering is adopted.
\item Find if the node is interior to any of the neighbor elements by use
of the shape functions for triangles and tetrahedra. This is performed
in closed form.
\item If changes occurred in the targets, update the connectivity table
and the sparse matrix addressing. Gustavson's algorithm~\cite{gustavson1978}
is adopted to perform the updating in the assembling process.
\end{enumerate}
In terms of detection, the following algorithm is adopted (a minimal sketch of steps 1--2 is given after the list):
\begin{enumerate}
\item Find all exterior faces, as faces sharing only one tetrahedron.
\item Find all exterior nodes, as nodes belonging to exterior faces.
\item Insert all continuum elements and all exterior nodes in two bucket
lists. Deformed coordinates of nodes and deformed coordinates of element
centroids are considered.
\item Cycle through all exterior nodes
\begin{enumerate}
\item Determine the element bucket from the node coordinates
\item Cycle through all elements ($e$) in the $3^{3}=27$ buckets surrounding
the node
\begin{enumerate}
\item If the distance from the node to the element centroid is greater than
twice the edge size, go to the next element
\item Calculate the projection on the element ($\bm{\xi}_{I}$) and the
corresponding shape functions $\bm{N}\left(\bm{\xi}_{I}\right)$.
\item If $0\leq N_{K}\left(\bm{\xi}_{I}\right)\leq1$ then $e$ is the target
element. If the target element has changed, then flag the solver for connectivity update.
\end{enumerate}
\end{enumerate}
\end{enumerate}
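A minimal Python sketch of steps 1--2 above (identification of exterior faces and exterior nodes of a tetrahedral mesh) is given below; the mesh data layout is an assumption made for illustration.
\begin{verbatim}
from collections import Counter
from itertools import combinations

def exterior_faces_and_nodes(tets):
    """Exterior faces (shared by a single tetrahedron) and exterior nodes.

    tets : iterable of 4-tuples of node ids, one tuple per tetrahedron
    """
    count = Counter()
    for tet in tets:
        for face in combinations(sorted(tet), 3):   # the four triangular faces
            count[face] += 1
    exterior_faces = [f for f, n in count.items() if n == 1]
    exterior_nodes = {node for f in exterior_faces for node in f}
    return exterior_faces, exterior_nodes
\end{verbatim}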
Since the algorithm assumes a fixed connectivity table during Newton
iterations, a verification is required after each converged iteration
to check if targets have changed since last determined. If this occurs,
a new iteration sequence with revised targets is performed.
\begin{algorithm}
\caption{\label{alg:Contact-algorithm-based}Staggered contact algorithm based on the
approximate distance function. $\lambda$ is the load/displacement factor.}
\begin{centering}
\includegraphics[width=0.6\textwidth]{flowchart}
\par\end{centering}
\end{algorithm}
The only modification required to a classical FEM code is the solution of the screened-Poisson equation, the green box in Algorithm \ref{alg:Contact-algorithm-based}. The cost of this solution is negligible when compared with the nonlinear solution process since the equation is linear and scalar. It is equivalent to the cost of a steady-state heat conduction solution. Note that this corresponds to a staggered algorithm.
\section{Numerical tests}
\label{sec:Numerical-tests}
Numerical examples are solved with our in-house software,
\texttt{SimPlas}~\cite{simplascode}, using the new node-to-element contact element.
Only triangles and tetrahedra are assessed at this time, which provide
an exact solution for $\bm{\xi}_{I}$. \texttt{Mathematica}~\cite{mathematica}
with the add-on \texttt{Acegen}~\cite{korelc2002} is employed to obtain the
specific source code. All runs are quasi-static and make use of a Neo-Hookean model. If $\bm{C}$ is the right Cauchy-Green tensor, then
\[
\bm{S}=2\nicefrac{\dif\psi\left(\bm{C}\right)}{\dif\bm{C}}
\]
where $\psi\left(\bm{C}\right)=\frac{\mu}{2}\left(\bm{C}:\bm{I}-3\right)-\mu\log\left(\sqrt{\det\bm{C}}\right)+\frac{\chi}{2}\log^{2}\left(\sqrt{\det\bm{C}}\right)$
with $\mu=\frac{E}{2(1+\nu)}$ and $\chi=\frac{E\nu}{(1+\nu)(1-2\nu)}$
being constitutive properties.
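For reference, differentiating the stated $\psi(\bm{C})$ gives $\bm{S}=\mu(\bm{I}-\bm{C}^{-1})+\chi\ln(\sqrt{\det\bm{C}})\,\bm{C}^{-1}$; a small Python/NumPy sketch of this constitutive evaluation is given below as a check of the law, not as the \texttt{Acegen}-generated code.
\begin{verbatim}
import numpy as np

def neo_hookean_stress(C, E, nu):
    """Second Piola-Kirchhoff stress S = 2 dpsi/dC for the compressible
    neo-Hookean energy used in the numerical tests.

    C : (3, 3) right Cauchy-Green tensor
    """
    mu = E / (2.0 * (1.0 + nu))
    chi = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    Cinv = np.linalg.inv(C)
    lnJ = 0.5 * np.log(np.linalg.det(C))      # log(sqrt(det C))
    return mu * (np.eye(3) - Cinv) + chi * lnJ * Cinv
\end{verbatim}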
\subsection*{Patch test satisfaction}
We employ a corrected penalty so that the contact patch test is satisfied in most cases. This is an important subject for convergence of computational contact solutions and has been addressed here with a similar solution to the one discussed by Zavarise and co-workers \cite{zavarise2009,zavarise2009b}.
We remark that this is not a general solution, and in some cases our formulation may fail to pass the patch test. Figure \ref{fig:pts} shows the effect of using a penalty weighted by the edge projection, see \cite{zavarise2009b}. However, this is not a universal solution.
\begin{figure}
\centering
\includegraphics[clip,width=0.8\textwidth]{patchcontact}
\caption{Patch test satisfaction by using a corrected penalty.}
\label{fig:pts}
\end{figure}
\subsection{Two-dimensional compression}
We begin with a quasi-static two-dimensional test, as shown in Figure~\ref{fig:Illustrative-problem-(2D)},
using three-node triangles. This test consists of a compression of
four polygons (identified as part 3) in Figure \ref{fig:Illustrative-problem-(2D)}
by a deformable rectangular punch (part 1 in the same Figure). The
`U' (part 2) is considered rigid but still participates in the
approximate distance function (ADF) calculation. To avoid an underdetermined
system, a small damping term is used, specifically $40$ units with
$\mathsf{L}^{-2}\mathsf{M}\mathsf{T}^{-1}$ ISQ dimensions. Algorithm
\ref{alg:Contact-algorithm-based} is adopted with a pseudo-time increment
of $\Delta t=0.003$ for $t\in[0,1]$.
For $h=0.020$, $h=0.015$ and $h=0.010$, Figure \ref{fig:Distance-interference} shows a sequence of deformed meshes and the contour plot of $\phi(\bm{x})$. The robustness of the algorithm is excellent, at the cost of some interference for coarse meshes. To further inspect the interference, the contour lines for $g(\bm{x})$ are shown in Figure \ref{fig:col}. We note that coarser meshes produce smoothed-out vertex representation, which causes the interference displayed in Figure \ref{fig:Distance-interference}. Note that $g(\bm{x})$ is determined from
$\phi(\bm{x})$.
\begin{figure}
\centering
\includegraphics[clip,width=0.8\textwidth]{distancesgeo}
\caption{Two-dimensional verification problem. Consistent units are used.}
\label{fig:Illustrative-problem-(2D)}
\end{figure}
\begin{figure}
\begin{centering}
\begin{tabular}{cccc}
\multicolumn{4}{c}{$\overline{v}=0.00$}\tabularnewline
\multicolumn{4}{c}{\includegraphics[width=0.3609\textwidth]{distance0}\hspace{1cm}\includegraphics[width=0.4941\textwidth]{distance0phi}}\tabularnewline
& $\overline{v}=3.24$ & $\overline{v}=5.30$ & $\overline{v}=5.30$, $\phi$\tabularnewline
{\Large{}\rotatebox{90}{$h=0.020$}} & \includegraphics[width=0.27702\textwidth]{distance2coarse} & \includegraphics[width=0.27702\textwidth]{distance3coarse} & \includegraphics[width=0.3807\textwidth]{distance4coarse}\tabularnewline
{\Large{}\rotatebox{90}{$h=0.015$}} & \includegraphics[width=0.27702\textwidth]{distance2intermediate} & \includegraphics[width=0.27702\textwidth]{distance3intermediate} & \includegraphics[width=0.3807\textwidth]{distance4intermediate}\tabularnewline
{\Large{}\rotatebox{90}{$h=0.010$}} & \includegraphics[width=0.27702\textwidth]{distance2fine} & \includegraphics[width=0.27702\textwidth]{distance3fine} & \includegraphics[width=0.3807\textwidth]{distance4fine}\tabularnewline
\end{tabular}
\par\end{centering}
\caption{\label{fig:Distance-interference}Two-dimensional compression: sequence of deformed meshes and contour plot of $\phi(\bm{x})$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{distancecontours}
\caption{\label{fig:col}Gap ($g(\bm{x})$) contour lines for the four meshes using $l_{c}=0.3$ consistent units.}
\end{figure}
Using the gradient of $\phi\left(\bm{x}\right)$, the contact direction
is obtained for $h=0.02$ as shown in Figure~\ref{fig:Directions-obtained-as}.
We can observe that for the star-shaped figure, vertices are poorly
represented, since small gradients are present due to uniformity of
$\phi\left(\bm{x}\right)$ in these regions. The effect of mesh refinement
is clear, with finer meshes producing a sharper growth of reaction
when all four objects are in contact with each other. In contrast,
the effect of the characteristic length $l_{c}$ is not noticeable.
Concerning the effect of $l_c$ on the fields $\phi(\bm{x})$ and $g(\bm{x})$, Figure~\ref{fig:lcphig} shows that, although $\phi(\bm{x})$ is strongly affected by the length parameter, $g(\bm{x})$ shows very similar spatial distributions, albeit with different peaks. The effects of $h$ and $l_c$ on the displacement/reaction behavior are shown in Figure \ref{fig:Effect-of-}. The mesh size $h$ has a marked effect on the results up to $h=0.0125$, whereas the effect of $l_c$ is much weaker.
\begin{figure}
\begin{centering}
\subfloat[$h=0.020$]{\begin{centering}
\includegraphics[width=0.45\textwidth]{distancearrowscoarse}
\par\end{centering}
}
\par\end{centering}
\begin{centering}
\subfloat[$h=0.015$]{\begin{centering}
\includegraphics[width=0.45\textwidth]{distancearrowsintermediate}
\par\end{centering}
}
\par\end{centering}
\begin{centering}
\subfloat[$h=0.010$]{\begin{centering}
\includegraphics[width=0.45\textwidth]{distancearrowsfine}
\par\end{centering}
}
\par\end{centering}
\caption{\label{fig:Directions-obtained-as}Directions obtained as $\nabla\phi\left(\bm{\xi}\right)$
for $h=0.020,$ $0.015$ and $0.010$.}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[width=0.8\textwidth]{philogs}
\par\end{centering}
\caption{\label{fig:lcphig}Effect of $l_{c}$ in the form of $\phi(\bm{x})$ and $g(\bm{x})$.}
\end{figure}
\begin{figure}
\begin{centering}
\subfloat[]{\begin{centering}
\includegraphics[width=0.50\textwidth]{reacts}
\par\end{centering}
}\subfloat[]{\begin{centering}
\includegraphics[width=0.50\textwidth]{reacts2}
\par\end{centering}
}
\par\end{centering}
\caption{\label{fig:Effect-of-}Effect of $h$ in (a) and effect of $l_{c}$ in
(b) in the displacement/reaction behavior.}
\end{figure}
\FloatBarrier
\subsection{Three-dimensional compression}
In three dimensions, the algorithm is in essence the same. Compared
with geometric-based algorithms, it significantly reduces the coding
for the treatment of particular cases (node-to-vertex, node-to-edge
in both convex and non-convex arrangements). The determination of
coordinates for each incident node is now performed on each tetrahedron,
but the remaining tasks remain unaltered. We test the geometry shown
in Figure \ref{fig:Three-dimensional-...} with the following objectives:
\begin{itemize}
\item Assess the extension to the 3D case. Geometrical nonsmoothness
is introduced with a cone and a wedge.
\item Quantify interference as a function of $l_{c}$ and $\kappa$ as well
as the strain energy evolution.
\end{itemize}
Deformed configurations and contour plots of $\phi\left(\bm{x}\right)$
for this problem are presented in Figure \ref{fig:Three-dimensional-...},
and the corresponding CAD files are available on \texttt{Github}~\cite{Areias2022cgithub}.
A cylinder, a cone and a wedge are compressed between two blocks.
Dimensions of the upper and lower blocks are $10\times12\times2$
consistent units (the upper block is deformable whereas the lower
block is rigid) and initial distance between blocks is $8$ consistent
units. Length and diameter of the cylinder are $7.15$
and $2.86$ (consistent units), respectively. The cone has a height of $3.27$ consistent
units and a radius of $1.87$. Finally, the wedge has a width of $3.2$,
a radius of $3.2$ and a swept angle of $30$ degrees. A compressible
Neo-Hookean law is adopted with the following properties:
\begin{itemize}
\item Blocks: $E=5\times10^{4}$ and $\nu=0.3$.
\item Cylinder, cone and wedge:~$E=1\times10^{5}$ and $\nu=0.3$.
\end{itemize}
\begin{figure}
\begin{centering}
\subfloat[]{\begin{centering}
\includegraphics[width=0.7\textwidth]{distance3dgeo}
\par\end{centering}
}
\par\end{centering}
\begin{centering}
\subfloat[]{\begin{centering}
\includegraphics[width=0.6\textwidth]{star3d3}
\par\end{centering}
}
\par\end{centering}
\caption{Data for the three-dimensional compression
test. (a) Undeformed and deformed configurations ($h=0.025$). For geometric
files, see~\cite{Areias2022cgithub}.
(b) Detail of contact of cone with lower block and with the wedge. Each object is identified by a different color.}
\label{fig:Three-dimensional-...}
\end{figure}
The analysis of gap violation, $v_{\max}=\sup_{\bm{x}\in\Omega}\left[-\min\left(0,g\right)\right]$
as a function of pseudotime $t\in[0,1]$ is especially important for
assessing the robustness of the algorithm with respect to parameters
$l_{c}$ and $\kappa$. For the interval $l_{c}\in[0.05,0.4]$, the
effect of $l_{c}$ is not significant, as can be observed in
Figure~\ref{fig:Effect-of--1}. Some spikes are noticeable around $t=0.275$
for $l_{c}=0.100$ when the wedge penetrates the cone. Since $\kappa$
is constant and all objects are compressed towards the end of the simulation, the gap violation grows in that stage. In terms of $\kappa$, effects
are the same as in classical geometric-based contact. In terms of
strain energy, higher values of $l_{c}$ result in lower values of
strain energy. This is to be expected, since smaller gradient values
are obtained and the contact force herein is proportional to the product
of the gradient and the penalty parameter. Convergence for the strain
energy as a function of $h$ is presented in Figure~\ref{fig:Effect-of--3}.
It is noticeable that $l_{c}$ has a marked effect near the end of
the compression, since it affects the contact force.
\begin{figure}
\centering
\subfloat[]{\begin{centering}
\includegraphics[width=0.50\textwidth]{gaps}
\par\end{centering}
}\subfloat[]{\begin{centering}
\includegraphics[width=0.50\textwidth]{gaps2}
\par\end{centering}
}
\caption{(a) Effect of $l_{c}$ on the maximum gap evolution
over pseudotime $(\kappa=0.6\times10^{6}$, $h=0.030)$ and
(b) Effect of $\kappa$ on the maximum gap evolution over pseudotime
$(l_{c}=0.2$).}
\label{fig:Effect-of--1}
\end{figure}
\begin{figure}
\centering
\subfloat[]{\begin{centering}
\includegraphics[width=0.50\textwidth]{energy}
\par\end{centering}
}\subfloat[]{\begin{centering}
\includegraphics[width=0.50\textwidth]{energy2}
\par\end{centering}
}
\caption{(a) Effect of $l_{c}$ on the strain energy ($h=0.3$) and
(b) Effect of $h$ on the strain energy $(l_{c}=0.2$).
$\kappa=0.6\times10^{6}$ is used.}
\label{fig:Effect-of--3}
\end{figure}
\subsection{Two-dimensional ironing benchmark}
This problem was proposed by Yang et al.~\cite{yang2005} in
the context of surface-to-surface mortar discretizations.
Figure~\ref{fig:Ironing-problem:-relevant}
shows the relevant geometric and constitutive data, according to~\cite{yang2005}
and~\cite{hartmann2009}. We compare the present approach with the
results of these two studies in Figure~\ref{fig:Ironing-problem:-results}.
Differences exist in the magnitude of forces, and we infer that
this is due to the continuum finite element technology. We use finite
strain triangular elements with a compressible neo-Hookean law~\cite{bonet2008}.
The effect of $l_{c}$ is observed in Figure~\ref{fig:Ironing-problem:-results}.
As before, only a slight effect
is noted in the reaction forces. We use the one-pass version of our
algorithm, where the square indenter has the master nodes and targets
are all elements in the rectangle. Note that, since the cited work
includes friction, we use here a simple model based on a regularized
tangential law with a friction coefficient $\mu_{f}=0.3$.
\begin{figure}
\begin{centering}
\includegraphics[width=0.95\textwidth]{ironing2d}
\par\end{centering}
\caption{\label{fig:Ironing-problem:-relevant}Ironing benchmark in 2D: relevant
data and deformed mesh snapshots.}
\end{figure}
\begin{figure}
\begin{centering}
\subfloat[]{\begin{centering}
\includegraphics[width=0.50\textwidth]{ironing}
\par\end{centering}
}\subfloat[]{\selectlanguage{american}
\begin{centering}
\includegraphics[width=0.50\textwidth]{ironing2}
\par\end{centering}
}
\par\end{centering}
\caption{\label{fig:Ironing-problem:-results}Ironing problem in 2D. Results
for the load in terms of pseudotime are compared to the values reported
in Yang et al.~\cite{yang2005} and Hartmann et al.~\cite{hartmann2009}. $\kappa=0.6\times10^{6}$ is used. (a) Effect of $h$ in the evolution of the horizontal ($R_{x}$) and vertical ($R_{y}$) reactions and (b) Effect of $l_{c}$ in the evolution of the horizontal ($R_{x}$) and
vertical ($R_{y}$) reactions for $h=0.1667$).}
\end{figure}
\subsection{Three-dimensional ironing benchmark}
We now perform a test of a cubic indenter on a soft block. This
problem was proposed by Puso and Laursen~\cite{puso2004,puso2004b}
to assess a mortar formulation based on averaged normals. The frictionless
version is adopted~\cite{puso2004b}, but we choose the most demanding
case: $\nu=0.499$ and the cubic indenter. Relevant data is presented
in Figure~\ref{fig:I}. The rigid $1\times1\times1$
block is located at $1$ unit from the edge and is first moved down
$1.4$ units. After this, it moves longitudinally $4$ units inside
the soft block. The soft block is analyzed with two distinct meshes:
$4\times6\times20$ divisions and $8\times12\times40$ divisions.
Use is made of one plane of symmetry. A comparison with the vertical
force in~\cite{puso2004b} is performed (see also~\cite{puso2004}
for a clarification concerning the force components). We allow some
interference to avoid locking with tetrahedra. In~\cite{puso2004b},
Puso and Laursen employed mixed hexahedra, which are more flexible
than the crossed-tetrahedra we adopt here.
Figure~\foreignlanguage{american}{\ref{fig:3D-ironing:-cube}
shows the comparison between the proposed approach and the mortar
method by Puso and Laursen}~\cite{puso2004b}. Oscillations are caused by the jumps in the gradient of $\phi(\bm{x})$, and hence in the contact normal, across element boundaries, a consequence of the classical $\mathcal{C}^0$ finite-element discretization. Although the oscillations can be observed, the present approach is
simpler than the one in Puso and Laursen.
\begin{figure}
\selectlanguage{american}
\begin{centering}
\includegraphics[width=0.85\textwidth]{ironing3dgeo}
\par\end{centering}
\selectlanguage{english}
\caption{\label{fig:I}3D ironing: cube over soft block. Relevant data and
results. }
\selectlanguage{american}
\selectlanguage{english}
\end{figure}
\selectlanguage{american}
\begin{figure}
\begin{centering}
\includegraphics[width=0.7\textwidth]{puso}
\par\end{centering}
\caption{\label{fig:3D-ironing:-cube}3D ironing: cube over soft block. Vertical
reactions compared with results in Puso and Laursen~\cite{puso2004b}. }
\end{figure}
\clearpage
\section{Conclusions}
We introduced a discretization and gap definition for a contact algorithm
based on the solution of the screened Poisson equation. After a log-transformation, this is equivalent to the solution of a regularized Eikonal equation
and therefore provides a distance to any obstacle or set of obstacles.
This approximate distance function is smooth and is differentiated
to obtain the contact force. This is combined with a Courant--Beltrami
penalty to ensure a differentiable force along the normal direction.
These two features are combined with a step-control algorithm that
ensures a stable target-element identification. The algorithm avoids
most of the geometrical calculations and housekeeping, and is able
to solve problems with nonsmooth geometry. Very robust behavior is
observed and two difficult ironing benchmarks (2D and 3D) are solved
with success. Concerning the selection of the length-scale parameter
$l_{c}$, which yields the exact distance in the limit $l_{c}\to0$, we found
that it should be the smallest value that is compatible with the solution
of the screened Poisson equation. Too small an $l_{c}$ will produce
poor results for $\phi\left(\bm{x}\right)$. Newton-Raphson convergence
was found to be stable, as well as nearly independent of $l_{c}$. In terms of further developments, a $\mathcal{C}^2$ meshless discretization is important to reduce the oscillations caused by normal jumps and we plan to adopt the cone projection method developed in \cite{areias2015c} for frictional problems.
\label{sec:Conclusions}
\bibliographystyle{unsrt}
|
{
"arxiv_id": "2302.13269",
"language": "en",
"timestamp": "2023-02-28T02:14:43",
"url": "https://arxiv.org/abs/2302.13269",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
With the rapid growth in the number of online videos, objective video quality assessment (VQA) is gaining a great deal of interest from researchers. In recent years, although opinion-aware VQA approaches~\cite{tlvqm,fastvqa,vsfa,cnntlvqm} have been extensively explored, they rely on large amounts of training data with expensive human subjective scores~\cite{pvq,cvd,kv1k,vqc} and are typically not easily adaptable to new datasets. How to alleviate the burden of costly training data and build a robust VQA method capable of evaluating any given video is thus an urgent issue to study.
\begin{figure}
\centering
\includegraphics[width=0.93\linewidth]{BVI-1.pdf}
\vspace{-9pt}
\caption{Visualization of the criteria of the three independent metrics in \textbf{BUONA-VISTA}. The pipeline is shown in Fig.~\ref{fig:1}.}
\label{fig:criterion}
\vspace{-18pt}
\end{figure}
\let\thefootnote\relax\footnotetext{$^1$Nanyang Technological University; $^2$Sensetime Research.}
\let\thefootnote\relax\footnotetext{$^*$Code available at {{\textit{https://github.com/QualityAssessment/BVQI.}}}}
In recent years, a few studies have been conducted on \textbf{opinion-unaware VQA}~\cite{niqe,ilniqe,brisque,tpqi}, which typically rely on empirical criteria instead of regression on opinion data. For example, NIQE~\cite{niqe} measures the \textit{spatial} naturalness of images by comparing them with distributions of pristine natural contents (Fig.~\ref{fig:criterion}(a)). TPQI~\cite{tpqi}, inspired by knowledge of the human visual system, measures the \textit{temporal} naturalness of videos through the inter-frame curvature on perceptual domains~\cite{primaryv1,lgn}. These opinion-unaware VQA methods are based on \textbf{\textit{low-level}} criteria, ignoring human perception of video semantics. Moreover, many existing studies have noticed that natural authentic distortions~\cite{spaq,paq2piq,ytugc} or aesthetic-related issues~\cite{vsfa,dover} commonly occur in in-the-wild videos and impact human quality perception. These issues are hardly captured with low-level criteria, but could be better extracted with semantic-aware deep neural features~\cite{cnntlvqm,fastervqa,videval,dbcnn,bvqa2021,mdtvsfa}.
In this paper, we propose a semantic-aware criterion to tackle these high-level quality issues in an unsupervised manner. With Contrastive Language-Image Pre-training (CLIP)~\cite{clip}, we are able to calculate the affinity between visual features and any given texts. Based on CLIP, we measure whether the visual features of a video are more similar to the text features of positive (\textit{e.g. high quality}) or negative (\textit{e.g. low quality}) text descriptions (Fig.~\ref{fig:criterion}(c)), which acts as a semantic-aware quality criterion mostly focusing on aesthetic-related quality issues and high-level human quality perception. With this new criterion, we design the Semantic Affinity Index as a semantic-aware zero-shot VQA index to assess authentic distortions and aesthetic issues.
Furthermore, we design a Gaussian normalization followed by sigmoid rescaling~\cite{vqeg} to aggregate the Semantic Affinity Index with the low-level spatial and temporal naturalness metrics, composing the overall Blind Unified Opinion-Unaware Video Quality Index via Semantic and Technical Metric Aggregation (\textbf{BUONA-VISTA}).
In general, our contributions are three-fold:
\begin{enumerate} [topsep=0pt,itemsep=0pt,parsep=0pt]
\renewcommand{\labelenumi}{\theenumi)}
\item We introduce a novel text-prompted Semantic Affinity Index for opinion-unaware VQA. It incorporates antonym-differential affinity and multi-prompt aggregation to accurately match human quality perception.
\item We introduce gaussian normalization and sigmoid rescaling strategies to align and aggregate the Semantic Affinity Index with low-level spatial and temporal technical metrics into the \textbf{BUONA-VISTA} quality index.
\item BUONA-VISTA significantly outperforms existing zero-shot VQA indexes (by $>$20\%), and shows superior robustness compared with opinion-aware VQA methods.
\end{enumerate}
\section{The Proposed Method}
In this section, we introduce the three metrics with different criteria that make up the proposed video quality index, including the CLIP-based Semantic Affinity Index ($\mathrm{Q}_A$, Sec.~\ref{sec:qa}), and two technical naturalness metrics: the Spatial Naturalness Index ($\mathrm{Q}_S$, Sec.~\ref{sec:qs}), and the Temporal Naturalness Index ($\mathrm{Q}_T$, Sec.~\ref{sec:qt}). The three indexes are sigmoid-rescaled and aggregated into the proposed \textbf{BUONA-VISTA} quality index. The overall pipeline of the index is illustrated in Fig.~\ref{fig:1}.
\begin{figure*}
\centering
\includegraphics[width=0.97\textwidth]{BVI-2.pdf}
\vspace{-10pt}
\caption{The overall pipeline of BUONA-VISTA, including (a) Semantic Affinity Index, (b) Spatial Naturalness index, and (c) Temporal Naturalness Index. The three indexes are remapped and aggregated to the final BUONA-VISTA index.}
\label{fig:1}
\vspace{-15pt}
\end{figure*}
\subsection{The Semantic Affinity Index ($\mathrm{Q}_A$)}
\label{sec:qa}
To extract semantic-related quality issues in VQA, we utilize CLIP to extract the \textbf{Semantic Affinity Index} ($\mathrm{Q}_A$) as follows.
\paragraph{Aesthetic-specific Data Preparation.}
As the semantic branch of BUONA-VISTA aims at authentic distortions and aesthetic issues, which are usually insensitive to resolutions or frame rates, we follow the data preparation in DOVER~\cite{dover} to perform \textbf{\textit{spatial down-sampling}} and \textbf{\textit{temporal sparse frame sampling}} on the original video. We denote the downsampled aesthetic-specific view of the video as $\mathcal{V} = \{V_i|_{i=0}^{N-1}\}$, where $V_i$ is the $i$-th frame (in total $N$ frames sampled) of the down-sampled video, with spatial resolution $224\times224$, aligned with the spatial scale during the pre-training of CLIP~\cite{clip}.
\paragraph{Affinity between Video and Texts.} Given any text prompt $T$, the visual ($\mathrm{E}_v$) and textual ($\mathrm{E}_t$) encoders in CLIP extract $\mathcal{V}$ and $T$ into implicit visual ($f_{v,i}$) and textual ($f_t$) features:
\begin{equation}
f_{v,i} = \mathrm{E}_v(V_i)|_{i = 0}^{N-1}; ~~~~~~
f_{t}^{T} = \mathrm{E}_t(T)
\end{equation}
Then, the semantic affinity $\mathrm{A}(\mathcal{V}, T)$ between $\mathcal{V}$ (aesthetic view of the video) and text $T$ is defined as follows:
\begin{equation}
\mathrm{A}(\mathcal{V}, T) = (\sum_{i=0}^{N-1}\frac{f_{v,i}\cdot f_{t}^{T}}{\Vert f_{v,i}\Vert\Vert f_{t}^{T}\Vert})/N
\end{equation}
where the $\cdot$ denotes the dot product of two vectors.
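A minimal PyTorch sketch of this affinity computation is given below. It assumes the publicly released \texttt{clip} package with the CLIP-ResNet-50 weights (consistent with the implementation details reported in the experiments section); the prompt string and tensor shapes are illustrative.
\begin{verbatim}
import torch, clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("RN50", device=device)      # CLIP-ResNet-50 weights

def affinity(frames, text):
    """Mean cosine similarity A(V, T) between N frame features and a prompt.

    frames : (N, 3, 224, 224) preprocessed frames of the aesthetic view
    text   : a single text prompt, e.g. "high quality"
    """
    with torch.no_grad():
        f_v = model.encode_image(frames.to(device))                 # (N, D)
        f_t = model.encode_text(clip.tokenize([text]).to(device))   # (1, D)
    f_v = f_v / f_v.norm(dim=-1, keepdim=True)
    f_t = f_t / f_t.norm(dim=-1, keepdim=True)
    return (f_v @ f_t.T).mean().item()           # average over the N frames
\end{verbatim}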
\paragraph{Antonym-Differential Affinity.} In general, a video with good quality should have higher affinity to \textbf{\green{positive}} quality-related descriptions or feelings ($T_+$, \textit{e.g. ``high quality", ``a good photo", ``clear"}), and lower affinity to \bred{negative} quality-related text descriptions ($T_-$, \textit{e.g. ``low quality", ``a bad photo", ``unclear"}, antonyms of $T_+$). Therefore, we introduce the Antonym-Differential affinity index ($\mathrm{DA}$), \textit{i.e.} whether the video has higher affinity to positive or negative texts (Fig.~\ref{fig:criterion}(c)), as the semantic criterion for zero-shot VQA:
\begin{equation}
\mathrm{DA}(\mathcal{V},T_{+},T_{-}) = \mathrm{A}(\mathcal{V}, T_{+}) - \mathrm{A}(\mathcal{V}, T_{-})
\label{eq:ad}
\end{equation}
\paragraph{Multi-Prompt Aggregation.} As we would like to extract both authentic distortions (which can hardly be detected by NIQE or other low-level indexes) and aesthetic-related issues in the semantic quality index, we aggregate two different antonym pairs: \textbf{1)} \textit{high quality}$\leftrightarrow$\textit{low quality} ($T_{+,0},T_{-,0}$); \textbf{2)} \textit{a good photo}$\leftrightarrow$\textit{a bad photo} ($T_{+,1},T_{-,1}$). Following the advice of VQEG~\cite{vqeg}, we conduct sigmoid remapping to map the two scores into the range $[0,1]$ (which is practically similar to human perceptual scales) and sum the remapped scores into the final Semantic Affinity Index ($\mathrm{Q}_{A}$), formalized as follows:
\begin{equation}
\mathrm{Q}_{A} = \sum_{i=0}^1{\frac{1}{1+e^{-\mathrm{DA}(\mathcal{V},T_{+,i},T_{-,i})}}}
\label{eq:sigmoida}
\end{equation}
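Building on the (hypothetical) \texttt{affinity} helper sketched above, the antonym-differential affinity of Eq.~(\ref{eq:ad}) and the remapped index of Eq.~(\ref{eq:sigmoida}) can be written as:
\begin{verbatim}
import math

PROMPT_PAIRS = (("high quality", "low quality"),
                ("a good photo", "a bad photo"))

def semantic_affinity_index(frames, prompt_pairs=PROMPT_PAIRS):
    """Differential affinities of the two prompt pairs, sigmoid-remapped and summed."""
    q_a = 0.0
    for pos, neg in prompt_pairs:
        da = affinity(frames, pos) - affinity(frames, neg)   # differential affinity DA
        q_a += 1.0 / (1.0 + math.exp(-da))                   # sigmoid remap and sum
    return q_a
\end{verbatim}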
\subsection{The Spatial Naturalness Index ($\mathrm{Q}_S$)}
\label{sec:qs} In addition to the powerful semantic affinity index, we also utilize the NIQE~\cite{niqe} index, the first completely-blind quality index, to detect the traditional types of \textbf{technical distortions}, such as \textit{Additive White Gaussian Noise (AWGN) and JPEG compression artifacts}. These distortions are very likely to occur in real-world videos during compression or transmission. To align the different indexes, we \textit{normalize} the raw NIQE scores ($\mathrm{Q}_{\mathrm{NIQE}}$ for $V_i$) to a standard Gaussian distribution $N(0,1)$ and rescale them with a sigmoid\footnote{As lower raw NIQE/TPQI scores mean better quality, we use the negative sigmoid-like remapping $\frac{1}{1+e^x}$ instead of $\frac{1}{1+e^{-x}}$ here (Eq.~\ref{eq:sigmoids}) and in Eq.~\ref{eq:sigmoidt}.} to get the frame-wise naturalness index ($\mathrm{N}_i$):
\begin{equation}
\mathrm{N}_i = \frac{1}{1 + e^{\frac{\mathrm{Q}_{\mathrm{NIQE},i}-\overline{\mathrm{Q}_{\mathrm{NIQE}}}}{\sigma(\mathrm{Q}_{\mathrm{NIQE}})}}}
\label{eq:sigmoids}
\end{equation}
where $\overline{\mathrm{Q}_\mathrm{NIQE}}$ and $\sigma(\mathrm{Q}_\mathrm{NIQE})$ are the \textit{mean} and \textit{standard deviation} of the raw NIQE scores in the whole set, respectively. Then, following~\cite{tlvqm,videval,fastervqa}, we sample one frame per second (\textit{1fps}) and calculate the overall \textbf{Spatial Naturalness Index} ($\mathrm{Q}_S$) with the sampled frame $V_{F_k}$ in the $k$-th second as follows:
\begin{equation}
\mathrm{Q}_S = \sum_{k=0}^{S_0} \mathrm{N}_{F_k} / S_0
\end{equation}
where $S_0$ is the overall duration of the video.
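The normalization and rescaling above can be sketched in NumPy as follows, assuming the raw NIQE scores of the 1fps-sampled frames of every video in the evaluated set have been precomputed (the array layout is our assumption):
\begin{verbatim}
import numpy as np

def spatial_naturalness(niqe_per_video):
    """Gaussian-normalize raw NIQE scores over the whole set, apply the
    negative sigmoid (lower NIQE means better quality), and average the
    frame-wise indexes of each video into its Q_S.

    niqe_per_video : list of 1-D arrays of raw NIQE scores (one per video)
    """
    all_scores = np.concatenate(niqe_per_video)
    mean, std = all_scores.mean(), all_scores.std()
    q_s = []
    for scores in niqe_per_video:
        n = 1.0 / (1.0 + np.exp((scores - mean) / std))
        q_s.append(n.mean())
    return np.array(q_s)
\end{verbatim}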
\begin{table*}
\footnotesize
\caption{Benchmark evaluation on the proposed BUONA-VISTA, compared with other Opinion-Unaware Quality Indexes.}\label{table:eva}
\label{table:vqc}
\setlength\tabcolsep{6.6pt}
\renewcommand\arraystretch{1.05}
\footnotesize
\centering
\vspace{-8pt}
\resizebox{\textwidth}{!}{\begin{tabular}{l|cc|cc|cc|cc}
\hline
\textbf{Dataset} & \multicolumn{2}{c|}{LIVE-VQC} & \multicolumn{2}{c|}{KoNViD-1k} & \multicolumn{2}{c|}{YouTube-UGC} & \multicolumn{2}{c}{CVD2014} \\ \hline
Methods
& SRCC$\uparrow$ & PLCC$\uparrow$ & SRCC$\uparrow$ & PLCC$\uparrow$ & SRCC$\uparrow$ & PLCC$\uparrow$ &SRCC$\uparrow$ & PLCC$\uparrow$ \\ \hline
\rowcolor{lightgray} \multicolumn{9}{l}{\textbf{Opinion-Aware Methods:} } \\ \hdashline
\rowcolor{lightgray} TLVQM (TIP, 2019) \cite{tlvqm} & 0.799 & 0.803 & 0.773 & 0.768 & 0.669 & 0.659 & 0.830 & 0.850 \\
\rowcolor{lightgray} VSFA (ACMMM, 2019) \cite{vsfa} & 0.773 & 0.795 & 0.773 & 0.775 & 0.724 & 0.743 & 0.870 & 0.868 \\
\rowcolor{lightgray} VIDEVAL (TIP, 2021) \cite{videval} & 0.752 & 0.751 & 0.783 & 0.780 & 0.779 & 0.773 & 0.832 & 0.854 \\
\hdashline
\multicolumn{9}{l}{\textbf{Existing Opinion-Unaware (\textit{zero-shot}) Approaches:} } \\\hdashline
(\textit{Spatial}) NIQE (Signal Processing, 2013)~\cite{niqe} & 0.596 & 0.628 & 0.541 & \blue{0.553} & 0.278 & 0.290 & \blue{0.492} & \blue{0.612} \\
(\textit{Spatial}) IL-NIQE (TIP, 2015)~\cite{ilniqe} & 0.504 & 0.544 & 0.526 & 0.540 & \blue{0.292} & \blue{0.330} & 0.468 & 0.571\\
(\textit{Temporal}) VIIDEO (TIP, 2016)~\cite{viideo} & 0.033 & 0.215 & 0.299 & 0.300 & 0.058 & 0.154 & 0.149 & 0.119 \\
(\textit{Temporal}) TPQI (ACMMM, 2022)~\cite{tpqi} & \blue{0.636} & \blue{0.645} & \blue{0.556} & 0.549 & 0.111 & 0.218 & 0.408 & 0.469 \\
\hdashline
\rowcolor{lightpink} \textbf{BUONA-VISTA (Ours, \textit{zero-shot})} & \bred{0.784} & \bred{0.794} & \bred{0.760} & \bred{0.760} & \bred{0.525} & \bred{0.556} & \bred{0.740} & \bred{0.763}\\
\rowcolor{lightpink} \textit{Improvements to Existing Best} & 23\% & 23\% & 37\% & 38\% & 80\% & 69\% & 50\% & 25\% \\
\hline
\end{tabular}}
\vspace{-13pt}
\end{table*}
\subsection{The Temporal Naturalness Index ($\mathrm{Q}_T$)}
\label{sec:qt}
While $\mathrm{Q}_A$ and $\mathrm{Q}_S$ can cover different types of spatial quality issues, they are unable to cover distortions in the temporal dimension, such as \textit{shaking}, \textit{stalls}, or \textit{unsmooth camera movements}, which are well recognized~\cite{cnn+lstm,tlvqm,deepvqa,discovqa} to affect human quality perception. In general, all these temporal distortions can be summarized as non-smooth inter-frame changes between adjacent frames, and can be captured via the recently-proposed TPQI~\cite{tpqi}, which is based on the neural-domain curvature across three consecutive frames. Specifically, the curvatures can be computed via the simulated neural responses on the primary visual cortex (V1,~\cite{primaryv1}) and lateral geniculate nucleus (LGN,~\cite{lgn}) domains, as follows:
\begin{equation}
\mathrm{Q}_{\mathrm{TPQI}} = \frac{\log\left(\frac{1}{M-2}\sum_{j=1}^{M-2}\mathbb{C}^\mathrm{V1}_j\right) + \log\left(\frac{1}{M-2}\sum_{j=1}^{M-2}\mathbb{C}^\mathrm{LGN}_j\right)}{2}
\end{equation}
where $M$ is the total number of frames in the whole video, and $\mathbb{C}^\mathrm{V1}_j$ and $\mathbb{C}^\mathrm{LGN}_j$ are the V1-domain and LGN-domain curvatures at frame $j$, respectively. The \textbf{Temporal Naturalness Index} ($\mathrm{Q}_T$) is then mapped from the raw scores via Gaussian normalization and sigmoid rescaling:
\begin{equation}
\mathrm{Q}_T = \frac{1}{1 + e^{\frac{\mathrm{Q}_{\mathrm{TPQI}}-\overline{\mathrm{Q}_{\mathrm{TPQI}}}}{\sigma(\mathrm{Q}_{\mathrm{TPQI}})}}}
\label{eq:sigmoidt}
\end{equation}
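For concreteness, this Gaussian-normalization-plus-sigmoid step can be sketched in a few lines of NumPy (a minimal illustration with our own helper name; the raw per-video $\mathrm{Q}_{\mathrm{TPQI}}$ scores are assumed to be pre-computed):
\begin{verbatim}
import numpy as np

def rescale_qt(q_tpqi_raw):
    # Gaussian normalization over the evaluation set: subtract the mean
    # and divide by the standard deviation of the raw scores.
    z = (q_tpqi_raw - np.mean(q_tpqi_raw)) / np.std(q_tpqi_raw)
    # Sigmoid rescaling into [0, 1]; the positive exponent follows the
    # rescaling equation above, so a larger raw score yields a smaller Q_T.
    return 1.0 / (1.0 + np.exp(z))

# q_t = rescale_qt(np.array([...]))   # one raw TPQI score per video
\end{verbatim}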
\begin{table*}[]
\setlength\tabcolsep{4.4pt}
\renewcommand\arraystretch{1.16}
\caption{Comparing the cross-dataset performances of existing opinion-aware approaches with \textbf{\textit{zero-shot}} BUONA-VISTA (which requires no training at all). BUONA-VISTA is notably more robust than these approaches.} \label{tab:crossvsbv}
\vspace{-17pt}
\center
\footnotesize
\resizebox{\linewidth}{!}{\begin{tabular}{l|cc|cc|cc|cc|cc|cc}
\hline
\textbf{Train} on (\textit{None} for BUONA-VISTA) & \multicolumn{4}{c|}{{KoNViD-1k}} & \multicolumn{4}{c|}{{LIVE-VQC}} & \multicolumn{4}{c}{{Youtube-UGC}} \\ \hline
\textbf{Test} on & \multicolumn{2}{c|}{{LIVE-VQC}} & \multicolumn{2}{c|}{{Youtube-UGC}} & \multicolumn{2}{c|}{{KoNViD-1k}} & \multicolumn{2}{c|}{{Youtube-UGC}} & \multicolumn{2}{c|}{{LIVE-VQC}} & \multicolumn{2}{c}{{KoNViD-1k}} \\ \hline
\textit{} & SRCC$\uparrow$ & PLCC$\uparrow$ & SRCC$\uparrow$ & PLCC$\uparrow$ & SRCC$\uparrow$ & PLCC$\uparrow$ & SRCC$\uparrow$ & PLCC$\uparrow$ & SRCC$\uparrow$ & PLCC$\uparrow$ & SRCC$\uparrow$ & PLCC$\uparrow$ \\
\hline
CNN-TLVQM (2020,MM)\cite{cnntlvqm} & 0.713 & 0.752 & \gray{NA} & \gray{NA} & 0.642 & 0.631 & \gray{NA} & \gray{NA} & \gray{NA} & \gray{NA} & \gray{NA} & \gray{NA} \\
GST-VQA (2021, TCSVT)\cite{gstvqa} & 0.700 & 0.733 & \gray{NA} & \gray{NA} & \blue{0.709} & 0.707 & \gray{NA} & \gray{NA} & \gray{NA} & \gray{NA} & \gray{NA} & \gray{NA} \\
VIDEVAL (2021, TIP)\cite{videval} & 0.627 & 0.654 & 0.370 & 0.390 & 0.625 & 0.621 & 0.302 & 0.318 & 0.542 & 0.553 & 0.610 & 0.620 \\
MDTVSFA (2021, IJCV)\cite{mdtvsfa} & \blue{0.716} & \blue{0.759} & \blue{0.408} & \blue{0.443} & 0.706 & \blue{0.711} & \blue{0.355} & \blue{0.388} & \blue{0.582} & \blue{0.603} & 0.649 & 0.646 \\ \hdashline
\rowcolor{lightpink} \textbf{BUONA-VISTA (\textit{zero-shot})} & \bred{0.784}&\bred{0.794}&\bred{0.525}&\bred{0.556}&\bred{0.760}&\bred{0.760}&\bred{0.525}&\bred{0.556}&\bred{0.784}&\bred{0.794}&\bred{0.760}&\bred{0.760}\\ \hline
\end{tabular}}
\vspace{-7pt}
\end{table*}
\subsection{\textit{BUONA-VISTA} Index: Metric Aggregation}
As we aim to design a robust opinion-unaware perceptual quality index, we directly aggregate all the indexes by summing the scale-aligned scores, without regression on any VQA dataset. As $\mathrm{Q}_A$, $\mathrm{Q}_S$ and $\mathrm{Q}_T$ have already been Gaussian-normalized and sigmoid-rescaled in Eq.~\ref{eq:sigmoida}, Eq.~\ref{eq:sigmoids} and Eq.~\ref{eq:sigmoidt} respectively, all three metrics lie in the range $[0,1]$, and the overall unified \textbf{BUONA-VISTA} index $\mathrm{Q}_\text{Unified}$ is defined as:
\begin{equation}
\mathrm{Q}_\text{Unified} = \mathrm{Q}_A + \mathrm{Q}_S + \mathrm{Q}_T
\end{equation}
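As a minimal sketch (with our own helper names; the orientations of the raw $\mathrm{Q}_A$ and $\mathrm{Q}_S$ scores follow Eq.~\ref{eq:sigmoida} and Eq.~\ref{eq:sigmoids}, which are not repeated here, so the flag below is only an assumed placeholder for those signs), the whole training-free aggregation is:
\begin{verbatim}
import numpy as np

def rescale(raw, higher_is_better=True):
    # Gaussian normalization followed by sigmoid rescaling into [0, 1].
    # The flag encodes whether a larger raw score should map to a larger
    # index value; it stands in for the signs in the per-index equations.
    z = (raw - np.mean(raw)) / np.std(raw)
    return 1.0 / (1.0 + np.exp(-z if higher_is_better else z))

def buona_vista(raw_a, raw_s, raw_t):
    # No regression on VQA data: the scale-aligned indexes are summed,
    # so the unified index lies in [0, 3].
    return (rescale(raw_a) + rescale(raw_s)
            + rescale(raw_t, higher_is_better=False))
\end{verbatim}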
In the next section, we will conduct several experimental studies to prove the effectiveness of each separate index and the rationality of the proposed aggregation strategy.
\section{Experimental Evaluations}
\subsection{Implementation Details}
Because the three indexes target different quality-related issues, the inputs of the three branches differ. For $\mathrm{Q}_A$, the video is spatially downsampled to $224\times 224$ via a bicubic~\cite{bicubic} downsampling kernel, and temporally sub-sampled to $N=32$ uniform frames~\cite{dover}. For $\mathrm{Q}_S$, the video retains its original spatial resolution but temporally keeps only $S_0$ uniform frames, where $S_0$ is the duration of the video in seconds. For $\mathrm{Q}_T$, all videos are spatially downsampled to $270\times480$ (to keep the aspect ratio), with all frames fed into the neural response simulator. $\mathrm{Q}_A$ is computed with Python 3.8 and PyTorch 1.7, using the official CLIP-ResNet-50~\cite{he2016residual} weights. $\mathrm{Q}_S$ and $\mathrm{Q}_T$ are computed with Matlab R2022b.
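A minimal sketch of the branch inputs described above (frame selection and resizing only; the function names are ours, the video is assumed to be decoded upstream, and the interpolation mode for the $\mathrm{Q}_T$ input is our assumption):
\begin{verbatim}
import numpy as np
import torch
import torch.nn.functional as F

def uniform_indices(num_frames, n):
    # n uniformly spaced frame indices over the whole video
    return torch.as_tensor(np.linspace(0, num_frames - 1, n).round(),
                           dtype=torch.long)

def prepare_inputs(video, duration_s):
    # video: float tensor of shape (T, 3, H, W); duration_s: length in seconds
    t = video.shape[0]
    # Q_A input: N = 32 uniform frames, bicubically resized to 224 x 224
    clip_in = F.interpolate(video[uniform_indices(t, 32)],
                            size=(224, 224), mode="bicubic",
                            align_corners=False)
    # Q_S input: one uniform frame per second, at the original resolution
    niqe_in = video[uniform_indices(t, duration_s)]
    # Q_T input: all frames, resized to 270 x 480
    tpqi_in = F.interpolate(video, size=(270, 480), mode="bilinear",
                            align_corners=False)
    return clip_in, niqe_in, tpqi_in
\end{verbatim}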
\subsection{Evaluation Settings}
\paragraph{Evaluation Metrics.} Following common practice, we use two metrics: the Spearman Rank-order Correlation Coefficient (SRCC), which evaluates the monotonicity between predicted quality scores and human opinions, and the Pearson Linear Correlation Coefficient (PLCC), which evaluates linear accuracy.
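Both coefficients are available in standard statistics packages; the following sketch (our own helper, using SciPy) shows how they are computed from predicted scores and subjective scores:
\begin{verbatim}
from scipy import stats

def srcc_plcc(pred, mos):
    # SRCC: rank correlation, i.e. prediction monotonicity w.r.t. opinions
    srcc = stats.spearmanr(pred, mos).correlation
    # PLCC: linear correlation, i.e. prediction accuracy.  (In many VQA
    # studies a logistic mapping is fitted before computing PLCC; that
    # step is omitted in this sketch.)
    plcc = stats.pearsonr(pred, mos)[0]
    return srcc, plcc
\end{verbatim}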
\paragraph{Benchmark Datasets.} To better evaluate the performance of the proposed BUONA-VISTA under different in-the-wild settings, we choose four different datasets, including CVD2014~\cite{cvd} (234 videos, with lab-collected authentic distortions during capturing), LIVE-VQC~\cite{vqc} (585 videos, recorded by smartphones), KoNViD-1k~\cite{kv1k} (1200 videos, collected from social media platforms), and YouTube-UGC~\cite{ytugc,ytugccc} (1131 available videos, non-natural videos collected from YouTube with categories \textit{Gaming/Animation/Lyric Videos}).
\begin{table*}[]
\caption{Ablation Studies (I): effects of different indexes in the proposed BUONA-VISTA, on three natural video datasets.}
\vspace{-8pt}
\renewcommand\arraystretch{1.10}
\resizebox{\textwidth}{!}{
\begin{tabular}{ccc|cc|cc|cc}
\hline
\multicolumn{3}{c|}{Different Indexes in BUONA-VISTA} & \multicolumn{2}{c|}{LIVE-VQC} & \multicolumn{2}{c|}{KoNViD-1k} & \multicolumn{2}{c}{CVD2014} \\ \hline
Semantic Affinity ($\mathrm{Q}_A$) & Spatial Naturalness ($\mathrm{Q}_S$) & Temporal Naturalness ($\mathrm{Q}_T$) & SRCC$\uparrow$ & PLCC$\uparrow$ & SRCC$\uparrow$ & PLCC$\uparrow$ & SRCC$\uparrow$ & PLCC$\uparrow$ \\
\hline
\cmark & & & 0.629 & 0.638 & 0.608 & 0.602 & 0.685 & 0.692 \\
& \cmark & & 0.593 & 0.615 & 0.537 & 0.528 & 0.489 & 0.558 \\
& & \cmark & 0.690 & 0.682& 0.577 & 0.569&0.482&0.498 \\
\cdashline{1-9}
\cmark & \cmark & & 0.692 & 0.712 & 0.718 & 0.713 & 0.716 & 0.731 \\
& \cmark & \cmark & 0.749 & 0.753 & 0.670 & 0.672 & 0.618 & 0.653 \\
\cmark & & \cmark & 0.767 & 0.768 & 0.704 & 0.699 & 0.708 & 0.725\\
\cdashline{1-9}
\rowcolor{lightpink} \cmark & \cmark & \cmark & \bred{0.784} & \bred{0.794} & \bred{0.760} & \bred{0.760} & \bred{0.740} & \bred{0.763} \\
\hline
\end{tabular}}
\label{tab:ablation}
\vspace{-4mm}
\end{table*}
\begin{table*}[]
\caption{Ablation Studies (IV): effects of different text prompts and the proposed multi-prompt aggregation strategy.}
\vspace{-8pt}
\renewcommand\arraystretch{1.22}
\setlength\tabcolsep{6pt}
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
& \multicolumn{3}{c|}{Overall Performance of BUONA-VISTA} & \multicolumn{4}{c}{Performance of Semantic Affinity Index Only} \\
\hline
{Dataset} & {LIVE-VQC} & {KoNViD-1k} & {CVD2014} & {LIVE-VQC} & {KoNViD-1k} & {CVD2014} & {YouTube-UGC} \\ \hline
\textbf{Prompt Pairs} & SRCC$\uparrow$/PLCC$\uparrow$ & SRCC$\uparrow$/PLCC$\uparrow$ & SRCC$\uparrow$/PLCC$\uparrow$ &SRCC$\uparrow$/PLCC$\uparrow$ & SRCC$\uparrow$/PLCC$\uparrow$ & SRCC$\uparrow$/PLCC$\uparrow$ & SRCC$\uparrow$/PLCC$\uparrow$ \\ \hline
\textbf{(a)} \textit{[high $\leftrightarrow$low] quality} & 0.768/0.775 & 0.725/0.725 & 0.738/0.757 &0.560/0.575&0.477/0.472&\bred{0.728}/\bred{0.729}&0.539/0.564\\
\textbf{(b)} \textit{a [good$\leftrightarrow$bad] photo} & 0.778/0.785 & 0.727/0.727 & 0.653/0.686 &0.608/0.581&0.586/0.551&0.507/0.512&0.473/0.458 \\
\hdashline
\rowcolor{lightpink} \textbf{(a)}+\textbf{(b)} \textit{Aggregated} & \bred{0.784}/\bred{0.794} & \bred{0.760}/\bred{0.760} & \bred{0.740}/\bred{0.763} &\bred{0.629}/\bred{0.638}&\bred{0.609}/\bred{0.602}&0.686/0.693&\bred{0.585}/\bred{0.606} \\
\hline
\end{tabular}}
\label{table:prompt}
\vspace{-14pt}
\end{table*}
\subsection{Benchmark Comparison}
We compare BUONA-VISTA with both existing opinion-unaware (zero-shot) and opinion-aware VQA methods in Tab.~\ref{table:eva}/\ref{tab:crossvsbv} to evaluate the accuracy and robustness of the proposed index.
\paragraph{Comparison with Opinion-Unaware Approaches.} The proposed BUONA-VISTA quality index is notably better than any existing opinion-unaware quality index, with \textbf{\textit{at least 20\%}} improvement. Specifically, on the three natural VQA datasets (LIVE-VQC, KoNViD-1k and CVD2014), it reaches almost 0.8 PLCC/SRCC, which is on par with or better than some opinion-aware approaches. On the non-natural dataset (YouTube-UGC), with the assistance of the Semantic Affinity index, the proposed BUONA-VISTA shows an extraordinary \textbf{80\%} improvement over all semantic-unaware zero-shot quality indexes, and \textit{for the first time} provides reasonable quality predictions on this dataset. These results demonstrate that, without any training, the proposed BUONA-VISTA achieves leapfrog improvements over existing metrics and can be widely applied as a robust real-world video quality metric.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{BVI-3_compressed.pdf}
\vspace{-18pt}
\caption{Videos with \textit{best/worst} quality in perspective of three separate indexes, and the overall BUONA-VISTA index. All demo videos are appended in \textbf{\textit{supplementary materials}}.}
\label{fig:vis}
\vspace{-18pt}
\end{figure}
\paragraph{Comparison with Opinion-Aware Approaches.} Though it is extremely difficult, if not impossible, for a zero-shot index to surpass opinion-aware approaches, BUONA-VISTA has largely bridged the gap between zero-shot and supervised methods. Moreover, opinion-aware methods face the extra challenge of over-fitting to specific datasets. As shown by comparing their cross-dataset performance with the results of BUONA-VISTA in Tab.~\ref{tab:crossvsbv}, the proposed \textbf{\textit{zero-shot}} BUONA-VISTA can outperform existing opinion-aware methods when they do not train and test on the same set of videos and opinions, further proving the robustness of the proposed BUONA-VISTA.
\begin{table}[]
\caption{Ablation Studies (II): effects of different indexes in the proposed BUONA-VISTA on YouTube-UGC dataset.}
\vspace{-8pt}
\centering
\renewcommand\arraystretch{1.0}
\setlength\tabcolsep{14pt}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{ccc|cc}
\hline
\multicolumn{3}{c|}{Indexes in BUONA-VISTA} & \multicolumn{2}{c}{YouTube-UGC} \\ \hline
$\mathrm{Q}_A$ & $\mathrm{Q}_S$ & $\mathrm{Q}_T$ & SRCC$\uparrow$ & PLCC$\uparrow$ \\
\hline
\cmark & & & 0.585 & \textbf{0.606} \\
\cmark & \cmark & & \textbf{0.589} & 0.604 \\
\rowcolor{lightpink} \cmark & \cmark & \cmark & 0.525 & 0.556 \\ \hdashline
& \cmark & & 0.240 & 0.153 \\
& & \cmark & 0.133 & 0.141 \\
\hline
\end{tabular}}
\vspace{-10pt}
\label{table:ugc}
\end{table}
\begin{table}[]
\caption{Ablation Studies (III): comparison of different aggregation strategies in the proposed BUONA-VISTA.}
\vspace{-8pt}
\renewcommand\arraystretch{1.22}
\setlength\tabcolsep{3pt}
\resizebox{\linewidth}{!}{
\begin{tabular}{c|c|c|c}
\hline
{\textbf{Aggregation}} & {LIVE-VQC} & {KoNViD-1k} & {CVD2014} \\ \hline
Metric & SRCC$\uparrow$/PLCC$\uparrow$ & SRCC$\uparrow$/PLCC$\uparrow$ & SRCC$\uparrow$/PLCC$\uparrow$ \\ \hline
\textit{Direct Addition} & 0.760/0.750 & 0.675/0.660 & 0.664/0.699 \\
\textit{Linear} + \textit{Addition} & 0.776/0.760 & 0.720/0.710 & 0.700/0.729 \\ \hdashline
\textit{Sigmoid} + \textit{Multiplication} & 0.773/0.729 & 0.710/0.679 & 0.692/0.661 \\
\hdashline
\rowcolor{lightpink} \textit{Sigmoid} + \textit{Addition} & \bred{0.784}/\bred{0.794} & \bred{0.760}/\bred{0.760} & \bred{0.740}/\bred{0.763} \\
\hline
\end{tabular}}
\label{table:agg}
\vspace{-10pt}
\end{table}
\subsection{Qualitative Studies}
In the qualitative studies, we visualize snapshots of the videos with the highest or lowest score in each separate index and in the overall BUONA-VISTA index. As shown in Fig.~\ref{fig:vis}, the \textbf{(a)} Semantic Affinity index is highly related to \textbf{\textit{aesthetics}}, while the \textbf{(b)} Spatial Naturalness index focuses on spatial textures (\textit{sharp}$\leftrightarrow$\textit{blurry}) and the \textbf{(c)} Temporal Naturalness index focuses on temporal variations (\textit{stable}$\leftrightarrow$\textit{shaky}), aligning with the aforementioned criteria of the three indexes. We also append the original videos of these examples in our \textbf{\textit{supplementary materials}}.
\subsection{Ablation Studies}
In the ablation studies, we first discuss the effects of the different quality indexes, Semantic Affinity, Spatial Naturalness and Temporal Naturalness (Sec.~\ref{sec:ablsi}), on both natural and non-natural datasets. We then discuss the effects of the aggregation strategies (Sec.~\ref{sec:ablas}). Moreover, we evaluate the effects of different prompt pairs and the proposed multi-prompt aggregation (Sec.~\ref{sec:ablprompt}).
\subsubsection{Effects of Separate Indexes}
\label{sec:ablsi}
\paragraph{Evaluation on Natural Datasets.} When evaluating the effects of the separate indexes, we divide the four datasets into two parts: the first part consists of LIVE-VQC, KoNViD-1k and CVD2014, which we categorize as \textbf{natural datasets}, as they do not contain computer-generated contents or movie-like edited and stitched videos. We list the results of different settings in Tab.~\ref{tab:ablation}, where all three indexes contribute notably to the final accuracy of the proposed BUONA-VISTA, proving that semantic-related quality issues, traditional spatial distortions and temporal distortions are all important for building a robust estimation of human quality perception. Specifically, on CVD2014, where videos only contain authentic distortions introduced during capturing, the Semantic Affinity ($\mathrm{Q}_A$) index shows the largest contribution; on LIVE-VQC, the dataset commonly agreed to contain the most temporal distortions, the Temporal Naturalness ($\mathrm{Q}_T$) index contributes most to the overall accuracy. These results support our aforementioned claims on the separate concerns of the three indexes.
\paragraph{Evaluation on YouTube-UGC.} On YouTube-UGC, as shown in Tab.~\ref{table:ugc}, the Spatial Naturalness index cannot improve the final performance of BUONA-VISTA, while the Temporal Naturalness index even leads to an 8\% performance drop. As YouTube-UGC consists of long-duration (20-second) videos and almost every video is made up of multiple scenes, we suspect this degradation comes from scene transitions, where the temporal curvature is very large but does not correspond to degraded quality. In future work, we will consider detecting scene transitions and computing the Temporal Naturalness index only within the same scene.
\subsubsection{Effects of Aggregation Strategies}
\label{sec:ablas}
We evaluate the effects of aggregation strategies in Tab.~\ref{table:agg}, by comparing different rescaling strategies (\textit{Linear} denotes Gaussian normalization only, and \textit{Sigmoid} denotes Gaussian normalization followed by sigmoid rescaling) and different fusion strategies (\textit{addition} ($+$) or \textit{multiplication} ($\times$)). The results demonstrate that both Gaussian normalization and sigmoid rescaling contribute to the final performance of the aggregated index, and that \textit{addition} is better than \textit{multiplication}.
\subsubsection{Effects of Different Text Antonym Pairs}
\label{sec:ablprompt}
In Tab.~\ref{table:prompt}, we discuss the effects of different text antonym pairs as $T_{+}$ and $T_{-}$ in Eq.~\ref{eq:ad}. We notice that \textit{[high$\leftrightarrow$low] quality} achieves very good performance on CVD2014, where content diversity is negligible and the major concern is \textbf{authentic distortion}. For LIVE-VQC and KoNViD-1k (with diverse aesthetics), however, the \textit{[good$\leftrightarrow$bad] photo} prompt shows higher accuracy. These results suggest that different datasets have different quality concerns, while aggregating the two antonym pairs yields stable improvements in overall performance on all datasets, proving the effectiveness of the proposed multi-prompt aggregation strategy.
\section{Conclusion}
In this paper, we propose BUONA-VISTA, a robust zero-shot opinion-unaware video quality index for in-the-wild videos, which aligns and aggregates a CLIP-based, text-prompted semantic affinity index with traditional technical metrics on the spatial and temporal dimensions. The proposed BUONA-VISTA achieves unprecedented performance among opinion-unaware video quality indexes and demonstrates better robustness than opinion-aware VQA approaches across different datasets. We hope the proposed robust video quality index can serve as a reliable and effective metric in related research on videos and contribute to real-world applications.
{\small
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13196",
"language": "en",
"timestamp": "2023-02-28T02:12:34",
"url": "https://arxiv.org/abs/2302.13196",
"yymm": "2302"
} | \section{Introduction}
A subgroup, $H$, of a totally disconnected, locally compact (t.d.l.c.) group $G$ is \emph{flat} if there is a compact open subgroup, $U\leq G$ that is minimising for every $x\in H$, that is, $U$ realises the minimum value of the set of positive integers
$$
\left\{[xVx^{-1} : xVx^{-1}\cap V] \mid V\leq G\text{ compact and open}\right\}.
$$
This minimum value is called the \emph{scale of $x$} and denoted $s_G(x)$. The \emph{uniscalar subgroup} of $H$ is $H_u := \left\{h\in H\mid hUh^{-1} = U\right\}$ and $H/H_u$ is a free abelian group, \cite[Corollary 6.15]{SimulTriang}. The \emph{flat-rank} of $H$ is the rank of this free abelian group. Every singly generated subgroup, $\langle x\rangle$, is flat, with flat-rank equal to $0$ if there is a compact open subgroup $U\leq G$ normalised by $x$, and equal to $1$ otherwise. This paper is concerned with finding subgroups of $G$ having flat-rank greater than $1$.
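For example, let $G = \mathbf{Q}_p\rtimes\langle t\rangle$, where $t$ acts on $\mathbf{Q}_p$ by multiplication by $p^{-1}$. Taking $U = \mathbf{Z}_p$ gives
$$
[tUt^{-1} : tUt^{-1}\cap U] = [p^{-1}\mathbf{Z}_p : \mathbf{Z}_p] = p
\quad\text{and}\quad
[t^{-1}Ut : t^{-1}Ut\cap U] = [p\mathbf{Z}_p : p\mathbf{Z}_p] = 1,
$$
so that $s_G(t^{-1}) = 1$ and, since $s_G(t)/s_G(t^{-1})$ equals the module of conjugation by $t$, which is $p$, also $s_G(t) = p$. Hence $U$ is minimising for every element of $\langle t\rangle$, the uniscalar subgroup of $\langle t\rangle$ is trivial and $\langle t\rangle$ has flat-rank $1$.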
Known examples of flat groups having flat-rank greater than $1$ occur in cases where the ambient group $G$ has additional structure with which the flat group is associated. For example, if $G$ is $\mathrm{PSL}_n(\mathbf Q_p)$, the group of diagonal matrices is flat and has flat-rank $n-1$, equal to the Lie rank of $\mathrm{PSL}_n(\mathbf Q_p)$, see \cite[Example 6.11]{SimulTriang}. That is a special case of a general result about semisimple algebraic groups over local fields that is itself a consequence of the fact that, if $G$ is the automorphism group of a building and the building has an apartment with geometric rank $r$, then $G$ has an abelian subgroup with flat-rank $r$, see \cite[Theorems A \& B]{BaumRemyWill} and \cite[Theorem~1]{Cap_Hag}. These results, showing that the flat-rank coincides with familiar notions of rank in important classes of groups, motivate the study of flat subgroups and their flat-rank in general t.d.l.c.~groups.
There is no method for finding flat groups of higher rank in general t.d.l.c.~groups however. In the case where $G$ is a Lie group over a local field and $x\in G$, $\mathrm{Ad}(x)$ is a linear operator on the Lie algebra of $G$ and the algebra of operators generated by $\mathrm{Ad}(x)$ is commutative. This commutative algebra is the Lie algebra of an abelian subgroup of $G$ whose flat-rank may be greater than $1$ and, moreover, is equal to the number of absolute values of eigenvalues of $\mathrm{Ad}(x)$, which may be found with the aid of the Cayley-Hamilton Theorem. While this method does not extend to general t.d.l.c.~groups, it is the case that if the centraliser of $x$, $C(x)$, contains $y$ with $s_{C(x)}(y)>1$, then the subgroup $\langle x,y\rangle$ is flat and has flat-rank equal to that of $\langle x\rangle$ augmented by $1$. The main result shown below, Theorem~\ref{thm:extend_flat}, improves on this idea by replacing the centraliser of $x$ by the larger \emph{Levi subgroup} for $x$, that is, the set of all $y$ such that $\left\{x^nyx^{-n}\right\}_{n\in\mathbf{Z}}$ is precompact. It thus relates flat groups to Levi and parabolic subgroups in $G$, which were defined in \cite{ContractionB} in analogy with the similarly named subgroups of algebraic groups. The flat group found contains $y$ and a modification of $x$.
The article has the following structure. Basic results about the scale and flat groups are recalled in \S\ref{sec:scale} and in a couple of cases sharpened. Preliminary results linking Levi subgroups and minimising subgroups are shown in \S\ref{sec:Levi} and \S\ref{sec:Main} gives the proof of the main theorem. A symmetrisation of the relation `$y$ is in the Levi subgroup of $x$' is suggested by the proof of the main theorem, and this symmetrised relation is explored in \S\ref{sec:symmetric_relation}.
\section{Scale techniques}
\label{sec:scale}
This section collects results about the scale of an automorphism and related ideas that are used in the proof of the main theorem. The terms used here have in some cases only been introduced since the papers cited were written. Lemma~\ref{lem:tidy_criterion} strengthens known results and Theorem~\ref{thm:is_flat} is new (albeit fairly obvious).
\subsection{Tidy subgroups}
\label{sec:tidy}
A structural characterisation of subgroups minimising for an automorphism is given in \cite{FurtherTidy} in terms of tidiness of the subgroup, and tidiness of $U$ for $\alpha$ is defined in \cite{Structure94} in terms of the subgroups $U_+ = \bigcap_{k\in\mathbf{N}} \alpha^k(U)$ and $U_- := \bigcap_{k\in\mathbf{N}} \alpha^{-k}(U)$ as follows. (Zero is a natural number in this definition.)
\begin{definition}
Let $\alpha\in\Aut(G)$ and $U$ be a compact open subgroup of $G$. Then $U$ is \emph{tidy for $\alpha$} if and only if
\begin{description}
\item[TA] $U = U_+U_-$ and
\label{defn:TA}
\item[TB] $U_{++} := \bigcup_{k\in\mathbf{N}} \alpha^k(U_+)$ is closed.
\label{defn:TB}
\end{description}
\end{definition}
Then \cite[Theorem~3.1]{FurtherTidy} says that $U\leq G$ is minimising for $\alpha$ if and only if it is tidy for $\alpha$. Moreover, $\alpha(U_+)\geq U_+$ and $s(\alpha) = [\alpha(U_+) : U_+]$ if $U$ is tidy for $\alpha$.
The compact open subgroup $U$ is said to be \emph{tidy above} for $\alpha$ if it satisfies {\bf TA} and \emph{tidy below} if it satisfies {\bf TB}.
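For example, if $G = \mathbf{Q}_p$ and $\alpha(\xi) = p^{-1}\xi$, then $U = \mathbf{Z}_p$ satisfies $U_+ = \bigcap_{k\geq0}p^{-k}\mathbf{Z}_p = \mathbf{Z}_p$ and $U_- = \bigcap_{k\geq0}p^{k}\mathbf{Z}_p = \{0\}$, so that {\bf TA} holds, $U_{++} = \bigcup_{k\geq0}p^{-k}\mathbf{Z}_p = \mathbf{Q}_p$ is closed, and $s(\alpha) = [\alpha(U_+):U_+] = [p^{-1}\mathbf{Z}_p:\mathbf{Z}_p] = p$.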
The following is \cite[Lemma 1]{Structure94}.
\begin{lemma}
\label{lem:tidy_above}
Let $U$ be a compact open subgroup of $G$ and $\alpha\in \Aut(G)$. Then there is $n\geq0$ such that $\bigcap_{k=0}^n \alpha^k(U)$ is tidy above for $\alpha$.
\end{lemma}
It follows immediately from the definition that, if $U$ is minimising for $\alpha$, then so is $\alpha(U)$, and the same thus holds for subgroups tidy for $\alpha$. On the other hand, it may be shown, see \cite[Lemma 10]{Structure94}, that the intersection of two tidy subgroups for $\alpha$ is tidy for $\alpha$. The same thus holds for minimising subgroups and this fact will be used in what follows. Given $U$ tidy for $\alpha$, the subgroup $U_0 := \bigcap_{k\in\mathbf{Z}} \alpha^k(U) = U_+\cap U_-$, which is stable under $\alpha$, will appear frequently. Another fact which will be used is that
\begin{equation}
\label{eq:tidiness properties}
\bigcap_{k=0}^n \alpha^k(U) = U\cap \alpha^n(U) = U_+\alpha^n(U_-) = \alpha^n(U_-)U_+.
\end{equation}
The following is essentially \cite[Lemma 9]{Structure94}, although part \eqref{lem:tidy_criterion2} obtains the same conclusion under a weaker hypothesis and a proof is given for completeness.
\begin{lemma}
\label{lem:tidy_criterion}
Let $\alpha\in \Aut(G)$ and suppose that $U$ is tidy for $\alpha$.
\begin{enumerate}
\item If $u\in U$ and $\{\alpha^n( u )\}_{n\in\mathbf{N}}$ has an accumulation point, then $u\in U_-$. \label{lem:tidy_criterion1}
\item If $u\in \alpha^{j_1}(U)\dots \alpha^{j_l}(U)$ for some sequence of integers $j_1 < j_2 < \dots< j_l$ and $\{\alpha^n (u)\}_{n\in\mathbf{Z}}$ is contained in a compact set, then $u\in U_0$. \label{lem:tidy_criterion2}
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{lem:tidy_criterion1} Since $U$ is tidy, $u = u_+u_-$ with $u_\pm\in U_\pm$. If $u_+$ is not in $U_0$, then $\{\alpha^n(u_+)\}_{n\in\mathbf{N}}$ has no accumulation point. On the other hand, $\{\alpha^n(u_-)\}_{n\in\mathbf{N}}$ is contained in the compact set $U_-$. Hence, if $\{\alpha^n(u)\}_{n\in\mathbf{N}}$ has an accumulation point, then $u_+\in U_0$ and $u\in U_-$.
\eqref{lem:tidy_criterion2} Since $U$ is tidy and $j_1 < j_2 < \dots< j_l$, we have that
$$
\alpha^{j_1}(U)\dots \alpha^{j_l}(U) = \alpha^{j_1}(U_-) \alpha^{j_l}(U_+).
$$
Hence $u = u_-u_+$ with $u_-\in \alpha^{j_1}(U_-)$ and $u_+\in \alpha^{j_l}(U_+)$. Then the argument used in \eqref{lem:tidy_criterion1} shows that $\{\alpha^{-n}(u)\}_{n\in\mathbf{N}}$ is not contained in a compact set if $u_-\not\in U_0$, and $\{\alpha^{n}(u)\}_{n\in\mathbf{N}}$ is not contained in a compact set if $u_+\not\in U_0$. Hence, if $\{\alpha^{n}(u)\}_{n\in\mathbf{Z}}$ is contained in a compact set, then $u$ belongs to $U_0$.
\end{proof}
\subsection{Special subgroups}
Minimising, or tidy, subgroups for an automorphism $\alpha$ are not unique in general. Certain other subgroups of $G$, which are uniquely defined in terms of the dynamics of the action of $\alpha$ on $G$, are related to the scale and tidy subgroups.
Let $\alpha\in\Aut(G)$. The \emph{contraction group for $\alpha$} is
\begin{align*}
\con{\alpha} &:= \left\{ x\in G \mid \alpha^n(x)\to1 \text{ as }n\to\infty\right\},\\
\intertext{the \emph{parabolic subgroup} is}
\prb{\alpha} &:= \left\{ x\in G \mid \{\alpha^n(x)\}_{n\in\mathbf{N}} \text{ has compact closure}\right\}\\
\intertext{ and the \emph{Levi subgroup} is}
\lev{\alpha} &:= \prb{\alpha}\cap \prb{\alpha^{-1}} = \left\{ x\in G \mid \{\alpha^n(x)\}_{n\in\mathbf{Z}} \text{ has compact closure}\right\}.\\
\intertext{See \cite[\S3]{ContractionB} for these definitions and the motivations of these names for the subgroups. Contraction subgroups are linked to the scale through \cite[Proposition~3.21]{ContractionB}, which shows that $s(\alpha)$ is equal to the scale of the restriction of $\alpha$ to $\con{\alpha^{-1}}^-$, the closure of the contraction subgroup for $\alpha^{-1}$. Contraction subgroups are not closed in general, but \cite[Theorem~3.32]{ContractionB} shows that $\con{\alpha}$ is closed if and only if the \emph{nub subgroup, $\nub{\alpha}$,} is trivial, where}
\nub{\alpha} &:= \left\{x\in G\mid x\in \con{\alpha}\cap \prb{\alpha^{-1}}\right\}^-.
\end{align*}
In contrast, it is shown in \cite[Proposition~3]{Structure94} that $\prb{\alpha}$, and hence $\lev{\alpha}$ too, is always a closed subgroup of $G$.
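For example, if $G = GL(2,\mathbf{Q}_p)$ and $\alpha$ is conjugation by the diagonal matrix with diagonal entries $p$ and $1$, then conjugation multiplies the $(1,2)$-entry by $p$ and the $(2,1)$-entry by $p^{-1}$, so $\con{\alpha}$ is the group of upper unitriangular matrices, $\prb{\alpha}$ is the group of upper triangular matrices, $\lev{\alpha}$ is the group of diagonal matrices and $\nub{\alpha}$ is trivial.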
Several results about the contraction and Levi subgroups that are used below will now be recalled. The first is easily verified.
\begin{lemma}
\label{lem:con_alpha^n}
Let $\alpha\in\Aut(G)$ and $n\geq1$. Then $\con{\alpha^n} = \con{\alpha}$ and $\lev{\alpha^n} = \lev{\alpha}$.
\end{lemma}
The next is a corollary of Lemma~\ref{lem:tidy_criterion}.
\begin{lemma}
\label{lem:levi_stable}
Let $V$ be tidy for $\alpha$ and put $U := V\cap \lev{\alpha}$. Then $\alpha(U) = U$.
\end{lemma}
The nub subgroup is related to tidiness in \cite[Corollary~4.2]{nub_2014}, which shows that $U$ is tidy below for $\alpha$ if and only if $\nub{\alpha}\leq U$. Combined with Lemma~\ref{lem:tidy_above} and the fact that $\nub{\alpha}$ is stable under $\alpha$, this implies the following.
\begin{lemma}
\label{lem:TA+nub-->tidy}
If the compact open subgroup $U$ satisfies $U\geq \nub{\alpha}$, then there is $n\geq0$ such that $\bigcap_{k=0}^n \alpha^k(U)$ is tidy for $\alpha$.
\end{lemma}
The nub of $\alpha$ is also the largest closed subgroup of $G$ on which $\alpha$ acts ergodically, and the intersection of all subgroups tidy for $\alpha$, as shown in \cite[Theorem~4.1]{nub_2014}.
The proof of the next lemma is a straightforward verification.
\begin{lemma}
\label{lem:incl_nub(a)}
Let $\alpha\in\Aut(G)$. Then $\nub{\alpha}\triangleleft \lev{\alpha}$.
\end{lemma}
The final two lemmas in this subsection use the nub and contraction groups to prove technical facts used in \S\ref{sec:symmetric_relation}.
\begin{lemma}
\label{lem:include_stableV}
Let $\alpha\in\Aut(G)$ and $\widetilde{V}$ be a compact $\alpha$-stable subgroup of $G$. Then there is a compact, open subgroup, $V$, of $G$ that is tidy for $\alpha$ and such that $\widetilde{V}\leq V$.
\end{lemma}
\begin{proof}
The subgroup $\widetilde{V}$ is contained in $\lev{\alpha}$ and so $\widetilde{V}$ normalises $\nub{\alpha}$ by Lemma~\ref{lem:incl_nub(a)}. Hence $\widetilde{V}\nub{\alpha}$ is a compact, $\alpha$-stable subgroup of $G$. Choose a compact, open $U\leq G$ that contains $\widetilde{V}\nub{\alpha}$. (To see that such $U$ exists, choose any compact open subgroup, $W$ say, of $G$ and let $U$ be the product of $\widetilde{V}\nub{\alpha}$ and $\bigcap \{vWv^{-1} \mid v\in \widetilde{V}\nub{\alpha}\}$.) Let ${V} = \bigcap_{k=0}^n \alpha^k(U)$ be the subgroup of $U$ tidy for $\alpha$ shown to exist in Lemma~\ref{lem:TA+nub-->tidy}. Then $\widetilde{V}\leq V$ because $\widetilde{V}$ is $\alpha$-stable.
\end{proof}
\begin{lemma}
\label{lem:unbounded_sequence}
Let $\alpha\in\Aut(G)$ and let $v\in \con{\alpha}\setminus \nub{\alpha}$. Then for every $m>0$ and every compact and $\alpha$-stable subgroup $\widetilde{V}$ of $G$, and every sequence $\{w_k\}_{k\in\mathbf{N}}\subset \widetilde{V}$, the sequence
$$
\left\{ v\alpha(v)\dots \alpha^k(v)w_k\alpha^{m+k}(v)^{-1} \dots \alpha^{m+1}(v)^{-1}\alpha^m(v)^{-1}\right\}_{k\in\mathbf{N}}
$$
is discrete and non-compact.
\end{lemma}
\begin{proof}
Choose $V$ tidy for $\alpha$ and suppose, using Lemma~\ref{lem:include_stableV}, that $\widetilde{V}\leq V$. Replacing $V$ by $\alpha^n(V)$ for some $n\in\mathbf{Z}$, it may be further supposed that $v\in V_+\setminus \alpha^{-1}(V_+)$. Then
\begin{align*}
& v\alpha(v)\dots \alpha^k(v)w_k\in \alpha^k(V_+)\\
\intertext{and }
& \alpha^{m+k}(v)^{-1} \dots \alpha^{m+1}(v)^{-1}\alpha^m(v)^{-1}\in \alpha^{m+k}(V_+)\setminus \alpha^{m+k-1}(V_+).
\end{align*}
Hence the product is in $\alpha^{m+k}(V)\setminus \alpha^{m+k-1}(V)$ for every $k$ and the claim follows.
\end{proof}
Later sections concern inner automorphisms $\alpha_x : y\mapsto xyx^{-1}$, and the corresponding contraction, Levi and nub subgroups are denoted by $\con{x}$, $\lev{x}$ and $\nub{x}$ respectively.
\subsection{Flat groups and roots}
The main results in this article are about flat groups of automorphisms, that is, groups of automorphisms, $H$, for which there is a compact, open subgroup $U\leq G$ that is minimising for every $\alpha\in H$. As seen in \S\ref{sec:tidy}, this is equivalent to $U$ being tidy for every $\alpha\in H$.
Many definitions and results about flat groups carry over directly from corresponding ideas for single automorphisms. Of relevance to this article is the subgroup $U_{H0} := \bigcap_{\alpha\in H} \alpha(U)$, defined for any compact open subgroup $U$ but of particular interest if $U$ is tidy. Also, the \emph{lower nub of $H$} is
$$
\lnub{H}:= \left\langle\nub{\alpha} \mid \alpha\in H\right\rangle^-.
$$
The lower nub is defined in \cite{Reid_DynamicsNYJ_2016}, where it is observed that it is contained in every subgroup tidy for $H$, and is therefore compact, but that $\lnub{H}$ may be properly contained in the intersection of the subgroups tidy for $H$. The latter intersection is defined in \cite{Reid_DynamicsNYJ_2016} to be the \emph{nub of $H$}, whence the name \underline{lower} nub for the closed subgroup generated by the nubs of elements of $H$.
The \emph{Levi subgroup for $H$} may also be defined as follows
$$
\lev{H} := \left\{x\in G \mid \{\alpha(x)\mid \alpha\in H\}^-\text{ is compact}\right\}.
$$
Then $\lev{H}$ is a closed subgroup of $G$, see \cite[Lemma~2.1.8]{flat_Bernoulli}. It is clear that $\lev{H}$ is contained in the intersection of all $\lev{\alpha}$ with $\alpha\in H$, but these groups need not be equal, see \cite[Remark~2.1.9]{flat_Bernoulli}.
Compactness of $\lnub{H}$ and its stability under $H$ imply that $\lnub{H}\leq \lev{H}$. The next lemma then follows from Lemma~\ref{lem:incl_nub(a)}.
\begin{lemma}
\label{lem:incl_nub}
Suppose that $H\leq \Aut(G)$ is flat. Then $\lnub{H}\triangleleft \lev{H}$. Hence, if $U$ is a compact subgroup of $\lev{H}$, then $U\lnub{H}$ is a compact subgroup of $\lev{H}$.
\end{lemma}
Since $U_{H0}$ is compact and $H$-stable, $U_{H0}\leq U\cap \lev{H}$, and Lemma~\ref{lem:tidy_criterion} implies the reverse inclusion in the following
\begin{lemma}
\label{lem:flat_tidy_criterion}
Let $H\leq \Aut(G)$ be flat and suppose that $U$ is tidy for $H$. Then $U\cap \lev{H} = U_{H0}$.
\end{lemma}
It is convenient to state the next couple of results in terms of inner automorphisms and to identify the flat group $H$ with the subgroup of $G$ inducing the inner automorphisms. This is consistent with how they are used later. The next result is \cite[Theorem~5.9]{SimulTriang} in the case when $K=\{\ident\}$ and may be proved in the general case by the argument given for \cite[Proposition~20]{BaumRemyWill}.
\begin{theorem}
\label{thm:is_flat}
Suppose that $x_1$, \dots, $x_n\in G$ and the compact group $K\leq G$ satisfy
\begin{itemize}
\item $x_iKx_i^{-1} = K$ and
\item $[x_i,x_j]\in K$.
\end{itemize}
Then $\langle x_i, K \mid i\in\{1,\dots,n\}\rangle$ is flat.
\end{theorem}
\begin{definition}
\label{defn:uniscalar}
The \emph{uniscalar subgroup}\footnote{The term uniscalar was introduced by T. W. Palmer in \cite[Definition 12.3.25]{PalmerII}.} of the flat group $H$ is
$$
H_u := \left\{\alpha\in H \mid s(\alpha)=1=s(\alpha^{-1})\right\} = \left\{\alpha \in H \mid \alpha(V) =V\right\}
$$
with $V$ any subgroup minimising for $H$.
\end{definition}
Then $H_u$ is normal in $H$ and $H/H_u$ is a free abelian group, by \cite[Theorem~6.18]{SimulTriang}. For an alternative proof, see \cite[Theorem~A]{flat_Bernoulli}. The \emph{flat rank} of $H$ is the rank of this free abelian group. The next result is new but, as we shall see, its proof is straightforward.
\begin{proposition}
\label{prop:frk_equal}
Let $H = \langle y_1,\dots, y_n\rangle \leq G$ be flat and $V$ be tidy for $H$. Let $L = \langle z_1,\dots, z_n\rangle \leq G$ be another subgroup of $G$ and suppose that for every word $\mathbf{w}(a_1,\dots,a_n)$ in the free group $\langle a_1,\dots, a_n\rangle$, we have
\begin{equation}
\label{eq:identity}
\mathbf{w}(z_1,\dots,z_n)V \mathbf{w}(z_1,\dots,z_n)^{-1} = \mathbf{w}(y_1,\dots,y_n)V\mathbf{w}(y_1,\dots,y_n)^{-1}.
\end{equation}
Then $L$ is flat and $\frk{L} = \frk{H}$.
\end{proposition}
\begin{proof}
To show that $L$ is flat, it suffices to show that $[zVz^{-1} : zVz^{-1}\cap V] = s(z)$ for every $z\in L$, because that implies that $V$ is minimising for $L$. To this end, let $z\in L$ and suppose that $z = \mathbf{w}(z_1,\dots,z_n)$ with $\mathbf{w}\in \langle a_1,\dots, a_n\rangle$. Put $y = \mathbf{w}(y_1,\dots,y_n)$ in $H$. Then for each $n\in\mathbf{N}$ we have, by Equation~\eqref{eq:identity},
$$
[z^nVz^{-n} : z^nVz^{-n}\cap V] = [y^nVy^{-n} : y^nVy^{-n}\cap V],
$$
and the latter is equal to $s(y)^n$ because $V$ is tidy for $H$. Hence
$$
\lim_{n\to\infty} [z^nVz^{-n} : z^nVz^{-n}\cap V]^{\frac{1}{n}} = s(y)
$$
and it follows, by \cite[Theorem~7.7]{Moller}, that $s(z) = s(y)$. Then, again by Equation~\eqref{eq:identity},
$$
[zVz^{-1} : zVz^{-1}\cap V] = s(z)
$$
and $V$ is minimising for $z$.
The map $\phi : y\mapsto yVy^{-1}$, which sends $H$ to the set of compact open subgroups of $G$, is a bijection $H/H_u\to \phi(H)$. Since, as we now know, $V$ is tidy for $L$ too, the map $\psi : z\mapsto zVz^{-1}$ is also a bijection $L/L_u \to \psi(L)$. Equation~\eqref{eq:identity} implies that $\phi(H) = \psi(L)$ and so $\phi$ and $\psi$ induce a bijection $H/H_u\to L/L_u$. Moreover, if $\mathbf{w}_1$ and $\mathbf{w}_2$ are two words in the free group $\langle a_1,\dots, a_n\rangle$, and $h_i = \mathbf{w}_i(y_1,\dots,y_n)$ and $l_i = \mathbf{w}_i(z_1,\dots, z_n)$ are the corresponding elements of $H$ and $L$, then~\eqref{eq:identity} implies that $\phi(h_1h_2) = \psi(l_1l_2)$ and hence that the bijection $H/H_u \to L/L_u$ is a group isomorphism. The free abelian groups $H/H_u$ and $L/L_u$ therefore have the same rank.
\end{proof}
\section{The Levi subgroup and tidy subgroups for a flat group}
\label{sec:Levi}
Preliminary results on links between the Levi subgroup and subgroups tidy for a flat group are derived in this section. They will be used in the next section in the proof of the main theorem. Many of these results may have independent interest. The first shows that the property of being tidy for a flat group $H$ is virtually invariant under conjugation by elements of $\lev{H}$.
\begin{proposition}
\label{prop:Htilde}
Let $H\leq G$ be flat, $y\in \lev{H}$ and $V$ be a compact open subgroup of $G$ tidy for $H$. Then $$
L := \left\{x\in H\mid xyx^{-1}\in yV \right\}
$$
is a finite index subgroup of $H$ and $yVy^{-1}$ is tidy for $L$.
\end{proposition}
\begin{proof}
Suppose that $x\in L$. Then $xyx^{-1} = yv$ with $v\in V$ and, for each $h\in H$,
\begin{equation}
\label{eq:Htilde}
h(xyx^{-1})h^{-1} = h(yv)h^{-1} = (hyh^{-1}) (hvh^{-1})
\end{equation}
for every $h\in H$. Hence $hvh^{-1}$ lies in the compact set $\left((y^H)^{-1}\,y^H\right)^-$, where $y^H := \{hyh^{-1}\mid h\in H\}$, and, since $V$ is tidy for $H$, Lemma~\ref{lem:flat_tidy_criterion} implies that $v\in V_{H0}$. Let $x_1,x_2\in L$. Then $x_1yx_1^{-1} = yv_1$ and $x_2^{-1}yx_2 = yv_2$ with $v_1,v_2\in V_{H0}$ and
$$
(x_1x_2^{-1})y(x_1x_2^{-1})^{-1} = x_1 yv_2 x_1^{-1} = y(v_1 x_1 v_2x_1^{-1})
$$
with $v_1 x_1 v_2x_1^{-1} \in V_{H0}$. Hence $x_1x_2^{-1}\in L$ and $L$ is a group.
Since $y\in\lev{H}$, $\{xyx^{-1}V\mid x\in H\}$ is a finite set of $V$-cosets and it follows, again using Equation~\eqref{eq:Htilde}, that $H$ is covered by a finite number of $L$ cosets.
For each $x\in L$, there is $v\in V_{H0}$ such that
$$
x(yVy^{-1})x^{-1} = (xyx^{-1})(xVx^{-1})(xyx^{-1})^{-1} = yv(xVx^{-1})v^{-1}y^{-1} = y(xVx^{-1})y^{-1}.
$$
Hence
\begin{align*}
&\phantom{=}\ \,[x(yVy^{-1})x^{-1} : x(yVy^{-1})x^{-1}\cap yVy^{-1}] \\
&= [y(xVx^{-1})y^{-1}:y(xVx^{-1})y^{-1}\cap yVy^{-1}]\\
&= [xVx^{-1}:xVx^{-1}\cap V] = s(x)
\end{align*}
and $yVy^{-1}$ is tidy for $x$. Therefore $yVy^{-1}$ is tidy for $L$.
\end{proof}
The proof of Proposition~\ref{prop:Htilde} in fact shows the following stronger assertion.
\begin{corollary}
\label{cor:Htilde}
Let $y$ and $L$ be as in Proposition~\ref{prop:Htilde}. Then $xyx^{-1}y^{-1}\in V_{H0}$ for every $x\in L$.
\end{corollary}
In the next section, a flat group $H$ will be augmented by adding an element from its Levi subgroup. The following proposition helps to find a subgroup tidy for this element.
\begin{proposition}
\label{prop:nub_in_lev}
Let $H\leq G$ be flat and $y\in\lev{H}$. Then $\nub{y}\leq \lev{H}$.
\end{proposition}
\begin{proof}
Choose $V$ to be tidy for $H$. Then Lemma~\ref{lem:tidy_above} shows that there is $n\geq0$ such that $V_1 := \bigcap_{k=0}^n y^kVy^{-k}$ is tidy above for $y$; and Proposition~\ref{prop:Htilde} that there is a finite index subgroup, $H_1$, of $H$ such that $V_1$ is tidy for $H_1$. Set $V_2 = y^{-1}V_1y\cap V_1$. Then, by Proposition~\ref{prop:Htilde} again, there is a finite index subgroup, $H_2$, of $H_1$ such that $V_2$ is tidy for $H_2$ and $H_3 := \left\{x\in H_2\mid xyx^{-1}\in yV_2\right\}$ has finite index in $H_2$. Consider $c\in\nub{y}$ and let $x\in H_3$. Then $xcx^{-1}\in \nub{xyx^{-1}}$. We have $xyx^{-1}\in yV_2$ because $x\in H_3$ and so, by \cite[Corollary 3.4]{CapReidW_Titscore}, there is $r\in V_1$ such that $\nub{xyx^{-1}} = r\nub{y}r^{-1}$. Hence $xcx^{-1}$ is in the compact set $V_1\nub{y}V_1$ for every $x\in H_3$. Since $H_3$ has finite index in $H$, by Proposition~\ref{prop:Htilde}, it follows that $\{xcx^{-1}\}_{x\in H}$ has compact closure and hence that $c\in \lev{H}$.
\end{proof}
The previous argument shows that, if it is supposed only that $c\in\con{y}$ (rather than $\nub{y}$) and $x\in H_3$, then $xcx^{-1}$ belongs to $t\con{y}t^{-1}$ for some $t\in V$, by \cite[Corollary 3.2]{CapReidW_Titscore}. However, the remainder of the argument does not apply and it is not true in general that $\con{y}\leq\lev{H}$, as may be seen by considering $H = \langle y\rangle$ for example.
Augmenting a flat group $H$ involves modifying a subgroup, $U$, tidy for $H$. The following lemma facilitates one such modification.
\begin{lemma}
\label{lem:nub_in_tidy}
Suppose that $U$ is tidy for $x\in G$ and that $C$ is a compact subgroup of $G$ such that $CU = UC$ and $\{x^nCx^{-n}\}_{n\in\mathbf{Z}}\subseteq (xUx^{-1})C$. Then $UC$ is a compact open subgroup of $G$ that is tidy for $x$.
\end{lemma}
\begin{proof}
Since $U$ is compact and open and $C$ is compact, $UC$ is a compact open subset of $G$ and the condition $UC=CU$ implies that it is a group. Tidiness of $UC$ for $x$ follows from the fact that
$$
[x(UC)x^{-1} : x(UC)x^{-1}\cap UC] = [xUx^{-1} : xUx^{-1}\cap U] = s(x).
$$
To see this, consider the map
$$
xUx^{-1}/(xUx^{-1}\cap U) \to x(UC)x^{-1} / (x(UC)x^{-1}\cap UC)
$$
defined by $w(xUx^{-1}\cap U) \mapsto w(x(UC)x^{-1}\cap UC)$. This map is well-defined because $xUx^{-1}\cap U\leq x(UC)x^{-1}\cap UC$ and is onto because
$$
x(UC)x^{-1} = (xUx^{-1})(xCx^{-1}) \subseteq (xUx^{-1})C.
$$
The map is one-to-one because, if
$$
w_1,w_2\in xUx^{-1}\text{ and }w_2^{-1}w_1 \in x(UC)x^{-1}\cap UC,
$$
then $w_2^{-1}w_1 = wc$ with $w\in U$ and $c\in C$. Hence $c = w^{-1}w_2^{-1}w_1$, which belongs to $U(xUx^{-1})$, and $\{x^ncx^{-n}\}_{n\in \mathbf{Z}}$ has compact closure because it is a subset of $(xUx^{-1})C$. Then, since $U$ is tidy for $x$, Lemma~\ref{lem:tidy_criterion}\eqref{lem:tidy_criterion2} shows that $w^{-1}w_2^{-1}w_1 =: w_0$ belongs to $U_0$, and hence that $w_2^{-1}w_1 = ww_0\in U\cap xUx^{-1}$.
\end{proof}
\begin{corollary}
\label{cor:nub_in_tidy}
The hypotheses of Lemma~\ref{lem:nub_in_tidy} imply that $\{x^nCx^{-n}\}_{n\in\mathbf{Z}}\subseteq U_0C$.
\end{corollary}
\begin{proof}
The hypotheses imply that, if $c\in C$ and $k\in\mathbf{Z}$, then $x^kcx^{-k} = uc_1$ with $u\in xUx^{-1}$ and $c_1\in C$. Then
$$
\{x^nux^{-n}\}_{n\in\mathbf{Z}} = \{x^n(x^kcx^{-k}c_1^{-1})x^{-n}\}_{n\in\mathbf{Z}} \subseteq ((xUx^{-1})C)^2,
$$
which is compact. Hence $u\in U_0$ by Lemma~\ref{lem:tidy_criterion}\eqref{lem:tidy_criterion2}.
\end{proof}
The following proposition shows that a subgroup tidy for $y\in\lev{H}$ and $H$ may be found at the expense of passing to a finite index subgroup, $F$, of $H$. Note that this proposition does not imply that the subgroup found is tidy for $\langle y, F\rangle$ however. That is done in the next section under additional hypotheses.
\begin{proposition}
\label{prop:tidy_for_both}
Let $H\leq G$ be flat and $y\in \lev{H}$. Then there are a compact open subgroup, $V$, of $G$ and a finite index subgroup, $F\leq H$, such that $V$ is tidy for $F$ as well as for $y$.
\end{proposition}
\begin{proof}
Let $W$ be a subgroup of $G$ tidy for $H$. Then, since $\nub{y}$ is compact and $W$ is open, there are $c_i\in\nub{y}$, $i\in\{1,\dots,k\}$, such that
$$
\bigcap\left\{cWc^{-1}\mid c\in\nub{y}\right\} = \bigcap_{i=1}^k c_iWc_i^{-1}.
$$
By Proposition~\ref{prop:nub_in_lev}, each $c_i\in\lev{H}$ and hence, by Proposition~\ref{prop:Htilde}, there is a finite index subgroup $H_i\leq H$ such that $c_iWc_i^{-1}$ is tidy for $H_i$. Then $W_1 := \bigcap_{i=1}^k c_iWc_i^{-1}$ is tidy for $\bigcap_{i=1}^k H_i$ and is normalised by $\nub{y}$. Set $W_2 = \nub{y}W_1$. Then $W_2$ is a compact open subgroup of $G$ that contains $\nub{y}$.
We shall see that $W_2$ is tidy for a finite index subgroup of $\bigcap_{i=1}^k H_i$. Since $\nub{y}\leq \lev{H}$, by Proposition~\ref{prop:nub_in_lev}, and is compact, there is, by Proposition~\ref{prop:Htilde}, a finite index subgroup, $E$, of $\bigcap_{i=1}^k H_i$ such that $x\nub{y}x^{-1}\subseteq \nub{y}W_1$ for every $x\in E$. Corollary~\ref{cor:Htilde} shows that, in fact, $x\nub{y}x^{-1}\subseteq \nub{y}(W_1)_{E0}$ for every $x\in E$. Therefore, taking $U = W_1$, $C = \nub{y}$, the hypotheses of Lemma~\ref{lem:nub_in_tidy} are satisfied for every $x\in E$ and we conclude that $W_2$ is tidy for every such $x$.
Finally, choose $n$ sufficiently large that $V := \bigcap_{k=0}^n y^kW_2y^{-k}$ is tidy above for~$y$. Such $n$ exists by Lemma~\ref{lem:tidy_above}. Then, since $\nub{y}\leq W_2$, Lemma~\ref{lem:TA+nub-->tidy} shows that $V$ is tidy for $y$. By Proposition~\ref{prop:Htilde}, there is for each $k\in\{0,\dots,n\}$ a finite index subgroup, $E_k$, of $E$ such that $y^kW_2y^{-k}$ is tidy for $E_k$. Then $V$ is tidy for the subgroup $F:= \bigcap_{k=0}^n E_k$ which has finite index in $E$, and hence in $H$.
\end{proof}
The final proposition in this section derives a useful property of subgroups tidy above for an element of $G$.
\begin{proposition}
\label{prop:conj_into_U0}
Let $U$ be tidy above for $y$ and suppose that $z\in UyU$. Then there is $u\in U$ such that $uzu^{-1}\in yU_0$.
\end{proposition}
\begin{proof}
Let $v,w\in U$ be such that $z = vyw$. Since $U$ is tidy above for $y$, we have that $v=v_-v_+$ with $v_\pm\in U_\pm$. Then $vyw = v_-yw'$ with $w' = y^{-1}v_+yw\in U$. Repeating the argument for $w'$ shows that it may be supposed that
$$
z = vyw\text{ with }v\in U_-\text{ and }w\in U_+.
$$
Next, setting $v_0=v$ and $w_0=w$, we recursively construct $v_i\in y^{i}U_-y^{-i}$ and $w_i\in U_+$ such that
$$
v_i^{-1}(v_iyw_i)v_i = v_{i+1}yw_{i+1}.
$$
Suppose that $v_i$ and $w_i$ have been constructed for some $i$. Then
$$
v_i^{-1}(v_iyw_i)v_i = yw_iv_i = yv_i'w_i'
$$
with $v_i'\in y^{i}U_-y^{-i}$ and $w_i'\in U_+$ because $w_iv_i \in U_+ \left(y^iU_-y^{-i}\right) = \left(y^iU_-y^{-i}\right)U_+$ by Equation~\eqref{eq:tidiness properties}. Setting $v_{i+1} = yv_i'y^{-1}$ and $w_{i+1}=w_i'$ continues the construction. Let $u_i = v_0v_1\dots v_{i-1}$. Then for each $i\geq1$ we have
$$
u_i(vyw)u_i^{-1} = v_iyw_i
$$
with $v_i\in y^iU_-y^{-i}$ and $w_i\in U_+$. Since $U$, $U_-$ and $U_+$ are compact, the sequences $\{u_i\}_{i\geq1}$, $\{v_i\}_{i\geq0}$ and $\{w_i\}_{i\geq0}$ have limit points $\hat{u}\in U$, $\hat{v}\in \bigcap_{k\geq0} y^kU_-y^{-k} = U_0$ and $\hat{w}\in U_+$ respectively with $\hat{u}(vyw)\hat{u}^{-1} = \hat{v}y\hat{w}$. This may be further simplified to
$$
\hat{u}(vyw)\hat{u}^{-1} = y\hat{w}
$$
with $\hat{w}\in U_+$ because $y^{-1}\hat{v}y \in U_0$ and $U_0\leq U_+$.
Repeating the construction of the previous paragraph: starting with $\hat{w}_0 = \hat{w}$ and defining $\hat{w}_{i+1} = y^{-1}\hat{w}_iy$ for $i\geq0$; and then defining $\hat{u}_i = \hat{w}_{i-1}\dots \hat{w}_0$ for $i\geq1$ yields
$$
\hat{u}_i\hat{u}(vyw)\hat{u}^{-1}\hat{u}_i^{-1} = y\hat{w}_i
$$
with $\hat{w}_i\in y^{-i}U_+y^i$. Since $U_+$ is compact, the sequences $\{\hat{u}_i\}_{i\geq1}$ and $\{\hat{w}_i\}_{i\geq1}$ have limit points $\doublehat{u}\in U$ and $u_0\in \bigcap_{i\geq0} y^{-i}U_+y^i = U_0$ respectively such that
$$
\doublehat{u}\hat{u}(vyw)\hat{u}^{-1}\doublehat{u}^{-1} = yu_0.
$$
Therefore the claim holds with $u = \doublehat{u}\hat{u}$.
\end{proof}
\section{Groups with flat rank greater than $1$}
\label{sec:Main}
In this section it is seen that a flat subgroup, $H$, of $G$ for which there is $y\in\lev{H}$ with scale greater than~$1$ may, under certain hypotheses, be modified to produce a subgroup with flat rank increased by $1$. One example is given that illustrates the method and another that shows its limitations.
The hypotheses are that $H$ has the form $\langle x_i, K \mid i\in\{1,\dots,n\}\rangle$ described in Theorem~\ref{thm:is_flat}. The theorem shows how to produce a group $\langle y, z_i, L\mid i\in\{1,\dots,n\}\rangle$ that is also flat and may have rank greater than that of $H$. The group produced by the theorem also has the required form, as does the singly generated group $\langle y\rangle$. Hence, the method may be iterated to produce groups with higher flat rank so long as there is a non-uniscalar element in the Levi subgroup of the flat group last produced. In the statement of the theorem, $s_{\lev{H}}(y)$ denotes the scale of $y$ in the group $\lev{H}$, which may be less than the scale of $y$ in $G$.
\begin{theorem}
\label{thm:extend_flat}
Let $G$ be a t.d.l.c.~group, let $x_i\in G$, $i\in\{1,\dots,n\}$, and let $K$ be a compact subgroup of $G$ such that:
\begin{itemize}
\item $x_iKx_i^{-1} = K$; and
\item $[x_i,x_j]\in K$.
\end{itemize}
Denote the flat group $\langle x_i, K \mid i\in\{1,\dots,n\}\rangle$ by $H$ and suppose that $y\in\lev{H}$.
Then there are $z_i\in G$ and a compact group $L\leq G$ such that for all $i\in\{1,\dots,n\}$:
\begin{enumerate}
\item $z_iLz_i^{-1} = L$, $yLy^{-1}=L$;
\label{thm:extend_flat2}
\item $[z_i,z_j]\in L$ and $[z_i,y]\in L$; and
\label{thm:extend_flat3}
\item $\con{z_i}=\con{x_i}$ and $\lev{z_i}=\lev{x_i}$
\label{thm:extend_flat1}
\end{enumerate}
Hence $\langle y, z_i, L\mid i\in\{1,\dots,n\}\rangle$ is flat. If $s_{\lev{H}}(y)>1$, then
$$
\frk{\langle y, z_i, L\mid i\in\{1,\dots,n\}\rangle} = \frk{\langle x_i, K \mid i\in\{1,\dots,n\}\rangle}+1.
$$
\end{theorem}
\begin{proof}
Proposition~\ref{prop:tidy_for_both} shows that there is $V\leq G$ compact, open and tidy both for $y$ and a finite index subgroup, $F$, of $H$. Let $U := V\cap \lev{H}$. Then $U$ is a compact open subgroup of $\lev{H}$ that is tidy for $y\in\lev{H}$, and Lemma~\ref{lem:levi_stable} shows that $xUx^{-1} = U$ for every $x\in F$.
We establish \eqref{thm:extend_flat2} and \eqref{thm:extend_flat3} through a series of claims. For these, note that the hypotheses that $[x_i,x_j]\in K$ and that $K$ is a compact subgroup normalised by each $x_i$ imply that $H\leq \lev{H}$. As a consequence of this, $z_i$ and $L$ may be defined within $\lev{H}$. With this in mind, it will be convenient to suppose that $G = \lev{H}$, and hence that $U=V$ and $H$ is uniscalar, while proving \eqref{thm:extend_flat2} and \eqref{thm:extend_flat3}.
\begin{Claim}
\label{claim:exists_mi}
For each $i\in\{1,\dots,n\}$ there is $n_i>0$ such that
$$
x_i^{n_i}Ux_i^{-n_i} = U \text{ and } x_i^{n_i}yx_i^{-n_i}\in yU.
$$
\end{Claim}
\begin{proof}
Since $F$ has finite index in $H$, there is $m_i>0$ such that $x_i^{m_i}\in F$ and hence that $x_i^{m_i}Ux_i^{-m_i} = U$. Proposition~\ref{prop:Htilde} shows that there is a finite index subgroup $L\leq \langle x_i^{m_i} \mid i\in\{1,\dots,n\}\rangle$ such that $xyx^{-1}\in yU$ for every $x\in L$. Hence there is $n_i$, a multiple of $m_i$, such that $ x_i^{n_i}Ux_i^{-n_i} = U$ and $x_i^{n_i}yx_i^{-n_i}\in yU$.
\end{proof}
Next, Proposition~\ref{prop:conj_into_U0} shows that for each $i\in\{1,\dots,n\}$ there is $u_i\in U$ such that
$$
u_ix_i^{n_i}yx_i^{-n_i}u_i^{-1}\in yU_0.
$$
Put $z_i = u_ix_i^{n_i}$. Then it follows from this and Claim~\ref{claim:exists_mi}.~that
\begin{Claim}
\label{claim:zy_commutator}
$z_iyz_i^{-1}\in yU_0$ and $z_iUz_i^{-1} = U$ for every $i\in\{1,\dots,n\}$.
\end{Claim}
Having defined the elements $z_i$, the subgroup $L$ will be defined next. Towards this, let $K'$ be the smallest closed subgroup satisfying, for all $i,j,k$:
\begin{itemize}
\item $[x_i^{n_i},x_j^{n_j}]\in K'$ and
\item $x_k^{n_k}K'x_k^{-n_k} = K'$.
\end{itemize}
Then $K'$ is densely generated by a normal subgroup of $\langle x_i^{n_i}\mid i\in\{1,\dots,n\}\rangle$ and so $wUw^{-1} = U$ for all $w\in K'$ by Claim~\ref{claim:exists_mi}. Furthermore, $K'$ is contained in $K$ and therefore is compact. Hence $UK'$ is a compact group stable under conjugation by all $x_i^{n_i}$. Since $U$ is normalised by all $x_i^{n_i}$, $UK'$ is also stable under conjugation by all $z_k$ and $[z_i,z_j] = [u_ix_i^{n_i},u_jx_j^{n_j}]\in UK'$ for all $i,j,k\in\{1,\dots,n\}$. Hence
\begin{Claim}
\label{claim:comm_in_K''}
The smallest closed subgroup, $K''$, satisfying that $[z_i,z_j]\in K''$ and $z_kK''z_k^{-1} = K''$
for all $i,j,k\in\{1,\dots,n\}$ is a compact subgroup of $UK'$.
\end{Claim}
It follows from Claim~\ref{claim:zy_commutator}.~and invariance of $U_0$ under conjugation by $y$ that $z_iy^lz_i^{-1}\in y^lU_0$ and hence that $z_i(y^lUy^{-l})z_i^{-1} = y^lUy^{-l}$ for every $l\in\mathbf{Z}$. Hence $z_iU_0z_i^{-1} = U_0$ for each $i\in\{1,\dots,n\}$ and $zU_0z^{-1} = U_0$ for every $z\in K''$. Therefore $K''U_0$ is a group, which is compact because $K''$ and $U_0$ are. Define $L = K''U_0$. We are now ready to prove parts \eqref{thm:extend_flat2} and \eqref{thm:extend_flat3}.
\begin{Claim}
\label{claim:L}
For every $i,j\in\{1,\dots,n\}$
$$
[z_i,y]\in L,\ [z_i,z_j]\in L\text{ and }yLy^{-1} = L = z_iLz_i^{-1}.
$$
\end{Claim}
\begin{proof}[Proof of claim] That $[z_i,y]\in L$ is because $[z_i,y]\in U_0$ by Claim~\ref{claim:zy_commutator}., and that $[z_i,z_j]\in L$ is because $[z_i,z_j]\in K''$ by definition, see Claim~\ref{claim:comm_in_K''}. Since, as was just shown, $z_iU_0z_i^{-1} = U_0$ and, by definition, $z_iK''z_i^{-1} = K''$, it follows that $z_iLz_i^{-1} = L$ for each $i\in\{1,\dots,n\}$. The definition of $U_0$ implies that $yU_0y^{-1} = U_0$. Finally, $K''$ is generated by elements of the form $z_{k_1}\dots z_{k_r}[z_i,z_j]z_{k_r}^{-1}\dots z_{k_1}^{-1}$ and for any such element we have
$$
y(z_{k_1}\dots z_{k_r}[z_i,z_j]z_{k_r}^{-1}\dots z_{k_1}^{-1})y^{-1}\in z_{k_1}\dots z_{k_r}[z_i,z_j]z_{k_r}^{-1}\dots z_{k_1}^{-1} U_0
$$
because $[z_i,y]\in U_0$ by Claim \ref{claim:zy_commutator}.~and $U_0$ is normalised by $z_1$, \dots, $z_n$. Therefore $yK''y^{-1}\leq K''U_0 = L$.
\end{proof}
The remainder of the proof returns to the general case, in which $G$ is not assumed to equal $\lev{H}$ and $V$ may differ from $U$. To see that $\con{z_i} = \con{x_i}$, recall from Lemma~\ref{lem:con_alpha^n} that $\con{x_i^{n_i}} = \con{x_i}$ and consider $c\in G$. Then, since $x_i^{n_i}$ normalises $U$, we have for each $r\in\mathbf{N}$ that
$$
z_i^rcz_i^{-r} = (u_ix_i^{n_i})^rc(u_ix_i^{n_i})^{-r} = t_r (x_i^{rn_i}cx_i^{-rn_i})t_r^{-1}
$$
with $t_r\in U$. Since the open normal subgroups of $V$ form a base of neighbourhoods of $1$, it follows that $c\in \con{z_i}$ if and only if $c\in\con{x_i^{n_i}}$. That $\lev{z_i} = \lev{x_i}$ holds by the same argument and the fact that $\lev{x_i^{n_i}} = \lev{x_i}$.
The claim that $\langle y, z_i, L \mid i\in\{1,\dots,n\}\rangle$ is flat follows from Theorem~\ref{thm:is_flat} and~\eqref{thm:extend_flat2} and~\eqref{thm:extend_flat3}. The statement about the flat ranks will follow from
\begin{Claim}
\label{claim:fltrk}
$\frk{\langle z_i,L\mid i\in\{1,\dots,n\}\rangle} = \frk{H}$.
\end{Claim}
\begin{proof} Since $V\leq G$ is tidy for $\langle x_i^{n_i}\mid i\in\{1,\dots,n\}\rangle$, we have, for each word $\mathbf{w}(a_1,\dots,a_n)$ in the free group $\langle a_1,\dots, a_n\rangle$,
$$
\mathbf{w}(z_1,\dots,z_n) = \mathbf{w}(x_1^{n_1},\dots,x_n^{n_n})u
$$
with $u\in U$ because each $x_i^{n_i}$ normalises $U$ for each $i\in\{1,\dots,n\}$. Hence
$$
\mathbf{w}(z_1,\dots,z_n)U \mathbf{w}(z_1,\dots,z_n)^{-1} = \mathbf{w}(x_1^{n_1},\dots,x_n^{n_n})U\mathbf{w}(x_1^{n_1},\dots,x_n^{n_n})^{-1}.
$$
Therefore, by Proposition~\ref{prop:frk_equal},
$$
\frk{\langle z_i\mid i\in\{1,\dots,n\}\rangle} = \frk{\langle x_i^{n_i}\mid i\in\{1,\dots,n\}\rangle}.
$$
This suffices to prove the claim because $L$ and $K$ are contained in the uniscalar subgroups of the flat groups $\langle z_i,L\mid i\in\{1,\dots,n\}\rangle$ and $H$.
\end{proof}
Now we are ready to show that the flat rank increases by $1$ if $s_{\lev{H}}(y)>1$. If that is the case, then $U_{y++} := \bigcup_{k\in\mathbf{Z}} y^k U_{y+}y^{-k}$ is a closed, non-compact subgroup of $\lev{H}$ because $U$ is tidy for $y$ in $\lev{H}$. Furthermore, since $y^kUy^{-k}$ is stable under conjugation by $z_i$, $U_{y++}$ is also stable under conjugation by $z_i$ for each $i\in\{1,\dots,n\}$. Therefore $\langle y, z_i\mid i\in\{1,\dots,n\}\rangle$ acts on $U_{y++}$ by conjugation. The map $\alpha_x : u\mapsto xux^{-1}$ is an automorphism of $U_{y++}$, and
$$
x\mapsto \alpha_x : \langle y, z_i\mid i\in\{1,\dots,n\}\rangle \to \Aut(U_{y++})
$$
is a homomorphism. Denoting the modular function on $\Aut(U_{y++})$ by $\Delta$, define $\rho : \langle y, z_i\mid i\in\{1,\dots,n\}\rangle \to (\mathbf{R},+)$ by
$$
\rho(x) = \log\Delta(\alpha_x), \qquad x\in \langle y, z_i\mid i\in\{1,\dots,n\}\rangle.
$$
Then $\rho$ is a homomorphism with $\langle z_i\mid i\in\{1,\dots,n\}\rangle\leq \ker\rho$ because $U_{y++}$ is contained in $\lev{H}$, which is equal to $\lev{\langle z_i\mid i\in\{1,\dots,n\}\rangle}$. On the other hand, $\rho$ maps onto a subgroup of $(\mathbf{R},+)$ isomorphic to $(\mathbf{Z},+)$ because $y$ is not uniscalar. Therefore the rank of $\langle y,z_i,L\mid i\in\{1,\dots,n\}\rangle$ modulo its uniscalar subgroup is $\frk{H}+1$.
\end{proof}
Since finitely generated abelian groups are flat, by Theorem~\ref{thm:is_flat}, Theorem~\ref{thm:extend_flat} specialises to the following.
\begin{corollary}
Let $G$ be a t.d.l.c.~group, let $x_i\in G$, $i\in\{1,\dots, n\}$, be commuting elements and denote $H = \langle x_1,\dots, x_n\rangle$. Suppose that $y\in C(H)$. Then $\langle H,y\rangle$ is flat. If $s_{C(H)}(y)>1$, then the flat-rank of $\langle H,y\rangle$ is equal to the flat-rank of $H$ plus~$1$.
\end{corollary}
The following example illustrates the steps in the proof of Theorem~\ref{thm:extend_flat}.
\begin{example}
\label{examp:illustrate}
Let $G = GL(3,\mathbf{Q}_p)$ and $H = \langle x\rangle$ with
$$
x = \left(\begin{matrix}
1 & a & 0\\ 0 & 1 & 0\\ 0 & 0 & p
\end{matrix}\right) \text{ and }a\in\mathbf{Q}_p\setminus\{0\}.
$$
(We will use that $|a|_p$, the $p$-adic absolute value of $a$, is a power of $p$.) Then $H$ is flat and
$$
\lev{H} = \left\{\left(\begin{matrix}
a_{11} & a_{12} & 0\\ a_{21} & a_{22} & 0\\ 0 & 0 & a_{33}
\end{matrix}\right) \mid a_{ij}\in\mathbf{Q}_p,\ (a_{11}a_{22}-a_{12}a_{21})a_{33} \ne 0\right\}.
$$
Since $s(x) = p^2$, $H$ has flat rank equal to $1$.
The element
$$
y = \left(\begin{matrix}
p & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1
\end{matrix}\right)
$$
belongs to $\lev{H}$ and $s_{\lev{H}}(y) = p$.
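Both scale values can be read off from eigenvalues if one takes for granted the familiar formula for the scale of conjugation on $GL(m,\mathbf{Q}_p)$: if $g$ has eigenvalues $\lambda_1,\dots,\lambda_m$, then $s(g) = \prod\left\{|\lambda_i\lambda_j^{-1}|_p \mid |\lambda_i\lambda_j^{-1}|_p>1\right\}$. The eigenvalues of $x$ are $1$, $1$ and $p$, so exactly two pairs have ratio of absolute value greater than $1$ and
$$
s(x) = |p^{-1}|_p\cdot|p^{-1}|_p = p^2,
$$
while conjugation by $y$ on $\lev{H}$ only involves the eigenvalues $p$ and $1$ of its upper $2\times2$ block, whence $s_{\lev{H}}(y) = |p^{-1}|_p = p$.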
We apply the steps of the construction to this $H$ and $y$. To begin, the group
$$
V = \left\{\begin{pmatrix}
a_{11} & pa_{12} & pa_{13}\\ a_{21} & a_{22} & pa_{23}\\ a_{31} & a_{32} & a_{33}
\end{pmatrix}\in GL(3,\mathbf{Q}_p)\mid a_{ij}\in\mathbf{Z}_p,\ |a_{11}a_{22}a_{33}|_p=1\right\}
$$
is tidy for both $H$ and $y$ and
$$
U = V\cap \lev{H} = \left\{\begin{pmatrix}
a_{11} & pa_{12} & 0\\ a_{21} & a_{22} & 0\\ 0 & 0 & 1
\end{pmatrix}\in GL(3,\mathbf{Q}_p)\mid a_{ij}\in\mathbf{Z}_p,\ |a_{11}a_{22}|_p=1\right\}.
$$
Then choosing $n$ to be a power of $p$ with $n\geq p^2|a|_p$, so that $na\in p^2\mathbf{Z}_p$, ensures that $x^nUx^{-n} = U$ and $x^nyx^{-n}\in yU$ as in Claim~\ref{claim:exists_mi}. Next, put
$$
u = \begin{pmatrix}
1 & -na & 0\\ 0 & 1 & 0\\ 0 & 0 & 1
\end{pmatrix}.
$$
Then $u\in U$ and $ux^nyx^{-n}u^{-1} = y$. Put $z = ux^n$. Then
$$
z = \begin{pmatrix}
1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & p^n
\end{pmatrix},
$$
and Claim~\ref{claim:zy_commutator} holds and the subgroups $K''$ and $L$ in Claims~\ref{claim:comm_in_K''} and~\ref{claim:L} are trivial because $y$ and $z$ commute. Hence $\langle y, z\rangle$ is flat and has flat rank equal to $2$.
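Indeed, a direct computation on the upper $2\times 2$ blocks confirms the identities used above:
$$
x^nyx^{-n} = \begin{pmatrix}
p & (1-p)na & 0\\ 0 & 1 & 0\\ 0 & 0 & 1
\end{pmatrix}
\quad\text{and}\quad
y^{-1}x^nyx^{-n} = \begin{pmatrix}
1 & (1-p)na/p & 0\\ 0 & 1 & 0\\ 0 & 0 & 1
\end{pmatrix},
$$
so that $x^nyx^{-n}\in yU$ precisely because $na\in p^2\mathbf{Z}_p$, and conjugating $x^nyx^{-n}$ by $u$ removes the remaining off-diagonal entry, giving $ux^nyx^{-n}u^{-1} = y$.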
\end{example}
The proof of Theorem~\ref{thm:extend_flat} and construction in Example~\ref{examp:illustrate} pass from the given flat group $H$ to a finite index subgroup on the way to constructing the new flat group with increased rank. That could have been avoided in Example~\ref{examp:illustrate} by choosing a different subgroup $V$ tidy for $H$ and $y$. The next example shows that passing to a finite index subgroup cannot always be avoided.
\begin{example}
Define $x,y\in \Aut(SL(2,\mathbf{Q}_p)\times \mathbf{Q}_p)$ by
\begin{align*}
x(A,c) &= \left(\begin{pmatrix}
0&1\\1&0
\end{pmatrix}A\begin{pmatrix}
0&1\\1&0
\end{pmatrix},pc\right)\\
\text{ and } y(A,c) &= \left(\begin{pmatrix}
p&0\\0&1
\end{pmatrix}A\begin{pmatrix}
p&0\\0&1
\end{pmatrix}^{-1}, c\right),\qquad A\in SL(2,\mathbf{Q}_p),\ c\in \mathbf{Q}_p,
\end{align*}
and let $G = \langle x,y\rangle\ltimes \left(SL(2,\mathbf{Q}_p)\times \mathbf{Q}_p\right)$. Then $\langle x^2,y\rangle$ is finitely generated and abelian, and hence flat. In fact, if $U\leq SL(2,\mathbf{Q}_p)$ is the subgroup
$$
\left\{\begin{pmatrix} a& pb\\ c&d \end{pmatrix}\mid a,b,c,d\in \mathbf{Z}_p,\ ad-pbc =1\right\},
$$
then $U\times \mathbf{Z}_p$ is tidy for $\langle x^2,y\rangle$. However, $\langle x,y\rangle$ is not flat because, for example, its abelianisation is finite.
\end{example}
The next example shows that the construction in Theorem~\ref{thm:extend_flat} might not apply because it may happen that, while $\lev{H}$ contains elements of a flat group with larger flat rank, it does not contain a contraction subgroup for any element of the larger flat group.
\begin{example}
Let $x_1$ and $x_2$ be the automorphisms of $\mathbf{Q}_p^2$ given by
$$
x_1(\xi_1,\xi_2) = (p\xi_1,p\xi_2)\text{ and }x_2(\xi_1,\xi_2) = (p\xi_1,p^{-1}\xi_2),\qquad (\xi_1,\xi_2)\in\mathbf{Q}_p^2,
$$
and let $A := \langle x_1,x_2\rangle$. Then $A$ is a flat subgroup of $G := \mathbf{Q}_p^2\rtimes A$ having $V := \mathbf{Z}_p^2\rtimes \{0\}$ as a tidy subgroup. The uniscalar subgroup of $A$ is trivial and so the flat rank of $A$ is equal to the rank of $A$, which is $2$.
Let $H := \langle x_1\rangle \leq A$. Then the flat rank of $H$ is equal to $1$. However, $\lev{H} = A$ which is discrete and hence uniscalar, that is, $s_{\lev{H}}(y)=1$ for every $y\in A$. Hence no $y$ can be chosen from $\lev{H}$ that can be used in the construction in Theorem~\ref{thm:extend_flat}.
\end{example}
The final example shows that the construction in Theorem~\ref{thm:extend_flat} might not apply because it may happen that, while $\lev{H}$ contains a contraction subgroup for an element in a flat group with larger flat rank, it does not contain elements of the larger flat group.
\begin{example}
Let $A := \langle a,b\rangle$ be the free group on generators $a$ and $b$ and define a homomorphism $\phi : A\to \Aut(\mathbf{Q}_p^2)$ by
$$
\phi(a)(\xi_1,\xi_2) = (p\xi_1,\xi_2)\text{ and }\phi(b)(\xi_1,\xi_2) = (\xi_1,p\xi_2),\qquad (\xi_1,\xi_2)\in\mathbf{Q}_p^2.
$$
Then $A$ is a flat subgroup of $G := \mathbf{Q}_p^2\rtimes A$ having $\mathbf{Z}_p^2\rtimes \{0\}$ as tidy subgroup. The uniscalar subgroup of $A$ is the commutator subgroup of $\langle a,b\rangle$ and the flat rank of $A$ is equal to $2$.
Let $H = \langle a\rangle\leq A$. Then the flat rank of $H$ is equal to $1$. However, while we have $\lev{H} = \{0\}\times \mathbf{Q}_p = \con{b}$ and $s(b^{-1}|_{\lev{H}}) = p$, the set of $a$-conjugates of $b^{-1} $ is infinite and discrete, and hence does not have compact closure. Hence $b^{-1}$ does not belong to $\lev{H}$.
\end{example}
\section{Precompact conjugation orbits and a symmetric relation}
\label{sec:symmetric_relation}
The relation \emph{commutes with}, as in `$x_1$ commutes with $x_2$', is reflexive and symmetric on the elements of a group. The weaker relation \emph{in the Levi subgroup of}, as in `$x_1\in \lev{x_2}$', is reflexive but is not symmetric. Theorem~\ref{thm:extend_flat} and ideas in its proof suggest a still weaker reflexive relation that is symmetric. That relation, called \emph{vague commutation} and denoted $x_1Vx_2$, is explored in this section. Unfortunately, symmetry of the relation is achieved at the expense of $\{x_1\in G \mid x_1V x_2\}$ being a group.
\begin{definition}
Let $G$ be a t.d.l.c.~group. For $x_1,x_2\in G$ say that \emph{$x_1$ vaguely commutes with $x_2$}, and write $x_1Vx_2$, if there are $n>0$ and a compact subgroup, $U$, of $G$ normalised by $x_1^n$ and $u\in U$ such that $ux_1^{n}\in\lev{x_2}$.
\end{definition}
It is clear that the relation $V$ is reflexive, and the next proposition shows that it is symmetric as well.
\begin{proposition}
\label{prop:relation_is_symmetric}
Suppose that $x_1,x_2\in G$ and that $x_1Vx_2$. Then there are, for each $i=1,2$, integers $n_i>0$, compact subgroups, $U_i$, normalised by $x_i^{n_i}$ and elements $u_i\in U_i$, such that $z_i := u_ix_i^{n_i}$ commute modulo a compact subgroup, $L$, satisfying that $z_iLz_i^{-1} = L$.
Moreover, $x_2Vx_1$.
\end{proposition}
\begin{proof}
Since $x_1$ vaguely commutes with $x_2$, there are $n_1>0$ and a compact subgroup, $U_1$, of $G$ normalised by $x_1^{n_1}$ and $u_1\in U_1$ such that $u_1x_1^{n_1}\in\lev{x_2}$. Then $H := \langle x_2\rangle$ and $y := u_1x_1^{n_1}$ satisfy the hypotheses of Theorem~\ref{thm:extend_flat}. Hence there are $n_2>0$ and a compact subgroup, $U_2$, normalised by $x_2^{n_2}$ and $u_2\in U_2$, and a compact subgroup, $L$, such that, setting $z_i = u_ix_i^{n_i}$: $z_1Lz_1^{-1} = L = z_2Lz_2^{-1}$ and $[z_1,z_2] \in L$, as claimed.
It follows that $u_2x_2^{n_2}\in\lev{u_1x_1^{n_1}}$ and thus that
$$
\left\{(u_1x_1^{n_1})^k (u_2x_2^{n_2})(u_1x_1^{n_1})^{-k}\mid k\in\mathbf{Z}\right\}
$$
has compact closure. Since $u_1\in U_1$, which is normalised by $x_1^{n_1}$, we have that $(u_1x_1^{n_1})^k = \tilde{u}_kx_1^{kn_1}$ for each $k$ with $\tilde{u}_k\in U_1$. Since $U_1$ is compact, it follows that
$$
\left\{x_1^{kn_1} (u_2x_2^{n_2}) x_1^{-kn_1}\mid k\in\mathbf{Z}\right\}
$$
has compact closure, and hence so does $\left\{x_1^{k} (u_2x_2^{n_2}) x_1^{-k}\mid k\in\mathbf{Z}\right\}$. Therefore $u_2x_2^{n_2}$ belongs to $\lev{x_1}$ and $x_2$ vaguely commutes with $x_1$.
\end{proof}
The \emph{vague centraliser} of $x_2$ is
$$
\left\{ x_1\in G \mid x_1\text{ vaguely commutes with }x_2\right\}.
$$
The vague centraliser of $x_2$ is thus the analogue of the centraliser of $x_2$ for the relation of commutation, and of $\lev{x_2}$ for the `belongs to $\lev{x_2}$' relation.
\begin{example}
\label{examp:vague_centraliser}
Let $\Aut(T_{q+1})$ be the automorphism group of a regular tree $T_{q+1}$ with valency $q+1$ and $q\geq2$. Let $x_1$ and $x_2$ be translations, that is, automorphisms having infinite orbits, with axes $\ell_1$ and $\ell_2$, see \cite{Tits_arbre}. Then $x_1^kx_2x_1^{-k}$ is a translation with axis $x_1^k.\ell_2$, and the same is true if $x_2$ is replaced by $ux_2^n$ for any $n>0$ and $u$ in a compact subgroup normalised by $x_2^n$. Hence $\{x_1^k(ux_2^n)x_1^{-k}\}_{k\in\mathbf{Z}}$ does not have compact closure unless $\ell_1 = \ell_2$, and it follows that $x_1$ and $x_2$ do not vaguely commute unless this holds. On the other hand, if $x_1$ is an elliptic element of $\Aut(T_{q+1})$, that is, if $x_1$ has finite orbits in $T_{q+1}$, then $\langle x_1\rangle^-$ is compact. Hence so is $\{x_1^kx_2x_1^{-k}\}^-_{k\in\mathbf{Z}}$,
and $x_1$ and $x_2$ vaguely commute.
These considerations thus show that the vague centraliser of $x$ is:
\begin{itemize}
\item $\Aut(T_{q+1})$, if $x$ is elliptic; and
\item $\{\text{translations with axis }\ell\} \cup \{\text{elliptic elements}\}$, if $x$ is a translation with axis $\ell$.
\end{itemize}
\end{example}
Example \ref{examp:vague_centraliser} shows that, while the centraliser of $x_2$ and $\lev{x_2}$ are subgroups of~$G$, the vague centraliser is not in general. In order to expand on the example, we make a few more definitions.
\begin{definition}
\label{defn:Centre-like_sets}
Let $G$ be a totally disconnected, locally compact group.
\begin{enumerate}
\item ${Per}(G) = \{x\in G\mid \langle x\rangle^-\text{ is compact}\}$.
\item $x\in G$ is an \emph{${FC}^-$-element} if the conjugacy class of $x$ has compact closure, and the set of ${FC}^-$-elements in $G$ is denoted ${FC}^-(G)$.
\item The \emph{approximate centraliser of $G$} is the set of all $x\in G$ such that $\lev{x} = G$ and is denoted $AC(G)$.
\item The \emph{approximate centre of $G$} is equal to $\bigcap\left\{\lev{y}\mid y\in G\right\}$ and is denoted $AZ(G)$.
\item The \emph{vague centre} of $G$ is the set of all $x\in G$ whose vague centraliser is equal to $G$, and is denoted $VZ(G)$.
\end{enumerate}
\end{definition}
In Example \ref{examp:vague_centraliser}, ${Per}(G)$ is equal to the set of elliptic automorphisms of $T_{q+1}$, and this set is equal to the approximate centraliser $AC(G)$ and to the vague centre $VZ(G)$. The set of ${FC}^-$-elements ${FC}^-(G)$ and the approximate centre $AZ(G)$ are equal to $\{\ident\}$.
The next result states the order relationships between the sets in Definition~\ref{defn:Centre-like_sets} and records their properties.
\begin{figure}[h]
\begin{tikzpicture}
\path (-1.5,-1.5) node(A) [rectangle] {${Per}(G)$}
(1.5,1.5) node(B) [rectangle] {$AC(G)$}
(3,-1.5) node(C) [rectangle] {$Z(G)$}
(4.5,0) node(D) [rectangle] {${FC}^-(G)$}
(3,3) node(F) [rectangle] {$VZ(G)$}
(4.5,1.5) node(E) [rectangle] {$AZ(G)$};
\draw[semithick] (A) -- (B) -- (F);
\draw[semithick] (B) -- (C) -- (D) -- (E) -- (F);
\end{tikzpicture}
\caption{Subsets of $G$ related to the centre}\label{fig:Centres}
\end{figure}
\begin{proposition}
\label{prop:vague_centraliser}
Let $G$ be a totally disconnected, locally compact group. The subsets of $G$ described in Definition~\ref{defn:Centre-like_sets} are ordered by inclusion as shown in the Hasse diagram in Figure~\ref{fig:Centres}. Moreover:
\begin{enumerate}
\item all subsets are invariant under conjugation;
\item $Z(G)$, ${FC}^-(G)$ (provided that $G$ is compactly generated) and $AZ(G)$ are closed normal subgroups of $G$; and
\item $\text{Per}(G)$, $AC(G)$ and $VZ(G)$ are closed subsets of $G$.
\end{enumerate}
The scales of all elements in $VZ(G)$ are equal to $1$.
\end{proposition}
\begin{proof} The inclusions may be seen as follows: Proposition~\ref{prop:relation_is_symmetric} shows that $AC(G)$ and $AZ(G)$ are contained in $VZ(G)$; $AC(G)$ contains $Per(G)$ because compactness of $\langle x\rangle^-$ implies that $\{x^kyx^{-k}\}^-_{k\in\mathbf{Z}}$ is equal to the set of conjugates of $y$ by $\langle x\rangle^-$, and contains $Z(G)$ because $\{x^kyx^{-k}\}^-_{k\in\mathbf{Z}} = \{y\}$ if $x\in Z(G)$; $AZ(G)$ contains $FC^-(G)$ because $\{y^kxy^{-k}\}^-_{k\in\mathbf{Z}}\subseteq \{wxw^{-1}\}^-_{w\in G}$ for every $y\in G$; and $FC^-(G)$ contains $Z(G)$ because singleton conjugacy classes are compact.
Since each of the statements defining the sets in Definition~\ref{defn:Centre-like_sets} is invariant under conjugation, the sets so defined are too.
For the further properties of the sets, it is well-known that the centre is a closed subgroup in every topological group; that $FC^-(G)$ is closed if $G$ is compactly generated is \cite[Theorem~2]{Moller_FC}. That $AZ(G)$ is a closed subgroup holds because it is the intersection of the closed subgroups $\lev{y}$, $y\in G$. That these subgroups are normal holds because, as just seen, they are closed under conjugation. That ${Per}(G)$ is closed is \cite[Theorem~2]{Wi:tdHM}. The proofs that $AC(G)$ and $VZ(G)$ are also closed mirror the argument used for $Per(G)$, which uses the fact that the group elements are uniscalar.
Assume, as will shortly be proved, that all elements $x\in VZ(G)$ are uniscalar and consider $y\in VZ(G)^-$. To see that $y\in VZ(G)$, it must be shown that for every $y_2\in G$ there are $n>0$, a compact subgroup, $U$, normalised by $y_2^n$ and $u\in U$ such that $uy_2^n\in\lev{y}$. We have that $s(y) = 1 = s(y^{-1})$, because the scale is continuous, and hence there is a compact, open $W\leq G$ normalised by $y$. Since $yW$ is a neighbourhood of $y$, we may choose $x\in VZ(G)\cap yW$. Then there are $n>0$, a compact subgroup, $U$, normalised by $y_2^n$ and $u\in U$ such that $uy_2^n\in\lev{x}$ because $x\in VZ(G)$. Since $yW = xW$ and $x$ normalises $W$, there are elements $w_k\in W$, for all $k\in\mathbf{Z}$, such that $y^k = w_kx^k$. Hence, for every $k\in\mathbf{Z}$,
$$
y^k(uy_2^n)y^{-k} = w_k x^k(uy_2^n)x^{-k}w_k^{-1}\in W\{x^j(uy_2^n)x^{-j}\}_{j\in\mathbf{Z}}W,
$$
which is compact. Hence $uy_2^n\in \lev{y}$ as desired and $y\in VZ(G)$. The proof that $AC(G)$ is closed is essentially the same. Every element of $AC(G)$ is uniscalar because it is contained in $VZ(G)$. Given $y\in AC(G)^-$, there is $W$ normalised by $y$ and $x\in AC(G)\cap yW$. Then $\lev{x}= \lev{y}$ and the fact that every $y_2\in G$ belongs to $\lev{x}$ implies that $y_2$ also belongs to $\lev{y}$.
The proof will now be completed by showing that every element of $VZ(G)$ is uniscalar. Let $x_1$ in $G$ have $s(x_1)>1$, choose $v_+\in\con{x_1}\setminus\nub{x_1}$, and put $x_2 = v_+x_1$. Then for every $n>0$, every compact group $U$ normalised by $x_1^n$ and $u\in U$, and for all $k\geq0$, we have
\begin{align*}
&\phantom{=}\ x_2^{k+1}(ux_1^n)x_2^{-k-1} = (v_+x_1)^{k+1} (ux_1^n)(v_+x_1)^{-k-1} \\
&= v_+\dots (x_1^kv_+x_1^{-k})(x_1^{k+1}ux_1^{-k-1})(x_1^{n+k}v_+x_1^{-n-k})^{-1}\dots (x_1^nv_+x_1^{-n})^{-1}x_1^n.
\end{align*}
Hence the sequence $\{x_2^{k+1} (ux_1^n)x_2^{-k-1}\}_{k\geq0}$ is discrete and non-compact, by Lemma~\ref{lem:unbounded_sequence}. In particular, $ux_1^n$ does not belong to $\lev{x_2}$ for any $n$ and $u$, and $x_1$ and $x_2$ do not vaguely commute. Since for any $x_1$ with $s(x_1)>1$ there is $x_2$ that does not vaguely commute with $x_1$, every element in $VZ(G)$ is uniscalar.
\end{proof}
\begin{remark}
(1) If $x_1\in \text{Per}(G)$, then $x_2\in \lev{x_1}$, and hence $x_2$ vaguely commutes with $x_1$, for every $x_2$ in $G$. Proposition~\ref{prop:relation_is_symmetric} then implies that every $x_1\in \text{Per}(G)$ vaguely commutes with every $x_2\in G$. It is instructive to see this directly: consider $x_1\in \text{Per}(G)$ and $x_2\in G$, and take $n=1$, $U = \langle x_1\rangle^-$ and $u=x_1^{-1}$; then $ux_1 = 1_G$, which certainly belongs to $\lev{x_2}$. \\
(2) The subgroup $FC^-(G)$ need not be closed if $G$ is a t.d.l.c.~group that is not compactly generated. The example in \cite[Proposition~3]{Tits_FC} is a group $G$ in which $FC^-(G)$ is proper and dense. If $G$ is a connected locally compact group, then $FC^-(G)$ is closed, by \cite[Corollary~1]{Tits_FC}, and more information about the structure of this subgroup is given there. The structure of general $[FC]^-$-groups is described in \cite[Theorem 3D]{Robertson_FCbar}, see \cite[Theorem 2.2]{Liukkonen_FCbar} for a proof and also \cite[12.6.10]{PalmerII}.\\
(3) The proofs that $VZ(G)$ and $AC(G)$ are closed are modelled on the argument used in \cite[Theorem~2]{Wi:tdHM} to show that $Per(G)$ is closed, and that theorem answers a question posed by Karl Heinrich Hofmann.\\
(4) The class of all compactly generated, topologically simple, non-discrete locally compact groups is denoted by $\mathcal{S}$, and it is an open question whether any t.d.l.c.~group in $\mathcal{S}$ can be uniscalar. It follows from Proposition~\ref{prop:vague_centraliser} that a sharper version of this question is whether any group $G$ in $\mathcal{S}$ can be equal to $VZ(G)$, $AC(G)$ or $Per(G)$. Any group $G$ equal to $Per(G)$ has the property, stronger than uniscalarity, of being \emph{anisotropic}, which means that all contraction subgroups for inner automorphisms are trivial. Note that discrete, finitely generated, simple groups with $G=Per(G)$ exist because, for discrete groups, $Per(G)$ is the set of torsion elements and there are simple groups among the counterexamples to Burnside's conjecture.
\end{remark}
|
{
"arxiv_id": "2302.13184",
"language": "en",
"timestamp": "2023-02-28T02:12:02",
"url": "https://arxiv.org/abs/2302.13184",
"yymm": "2302"
} | \section[#2]{#2\sectionmark{#1}}\sectionmark{#1}}
\newcommand{\SectionTOC}[3]{\section[#1]{#3\sectionmark{#2}}\sectionmark{#2}}
\newenvironment{abstract}%
{\large\begin{center}%
\bfseries{Abstract} \end{center}}
\usepackage{tocloft}
\newcommand*{\insertlofspace}{%
\addtocontents{lof}{\protect\addvspace{10pt}}%
}
\usepackage{anyfontsize}
\usepackage{wrapfig}
\usepackage{hhline}
\usepackage{pdflscape}
\usepackage{placeins}
\usepackage{fontawesome5}
\usepackage{mdframed}
\usepackage{minibox}
\usepackage[super]{nth}
\usepackage[labelfont=bf,textfont=small]{caption}
\usepackage{subcaption}
\makeatletter
\newcommand\footnoteref[1]{\protected@xdef\@thefnmark{\ref{#1}}\@footnotemark}
\makeatother
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{mathtools}
\usepackage{mathrsfs}
\usepackage{amsbsy}
\usepackage{faktor}
\usepackage{relsize}
\usepackage{xfrac}
\usepackage{bbm}
\usepackage{dsfont}
\usepackage{fancyvrb}
\usepackage{amsthm}
\definecolor{links}{rgb}{0,0.3,0}
\definecolor{mylinks}{rgb}{0.8,0.2,0}
\usepackage[ pdfencoding=auto, colorlinks=true, citecolor=blue, urlcolor=links, linkcolor=mylinks, pdfpagelabels, bookmarks, hyperindex, hyperfigures, pdftitle={Hypergeometric Feynman integrals}, pdfauthor={R. P. Klausen}]{hyperref}
\usepackage[nameinlink, noabbrev]{cleveref}
\crefformat{equation}{{\color{mylinks}(#2#1#3)}}
\usepackage{thmtools}
\newtheoremstyle{owntheorem}%
{}{}%
{\itshape}{}%
{}{:}%
{1em}%
{\textbf{\thmname{#1}\thmnumber{ #2}}\thmnote{ [#3]}}
\newtheoremstyle{owndefinition}%
{}{}%
{}{}%
{}{:}%
{1em}%
{\textbf{\thmname{#1}\thmnumber{ #2}}\thmnote{ [#3]}}
\theoremstyle{owntheorem}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{cor}[theorem]{Corollary}
\newtheorem{conjecture}[theorem]{Conjecture}
\theoremstyle{owndefinition}
\newtheorem{defn}[theorem]{Definition}
\newtheorem*{remark}{Remark}
\makeatletter
\def\@endtheorem{\hfill{\lower0.3ex\hbox{\ensuremath{\triangle}}}\endtrivlist\@endpefalse }
\makeatother
\newtheoremstyle{ownexample}{}{}{}{}{}{:}{1em}{\textbf{\thmname{#1}\thmnumber{ #2}}\thmnote{ [#3]}}
\theoremstyle{ownexample}
\newtheorem{example}[theorem]{Example}
\usepackage{url}
\usepackage{wasysym}
\usepackage{enumerate}
\usepackage{rotating}
\usepackage{multirow}
\usepackage{array}
\usepackage{makecell}
\usepackage{longtable}
\usepackage{lscape}
\usepackage{listings}
\lstset{basicstyle=\footnotesize\ttfamily}
\usepackage{tikz}
\usetikzlibrary{calc}
\usetikzlibrary{arrows}
\usetikzlibrary{shapes}
\usetikzlibrary{positioning}
\usepackage{tablefootnote}
\usepackage{blkarray}
\usepackage{authblk}
\usepackage{spverbatim}
\usepackage{pgothic}
\usepackage{egothic}
\usepackage{yfonts}
\DeclareMathAlphabet{\mathpgoth}{OT1}{pgoth}{m}{n}
\DeclareMathAlphabet{\mathesstixfrak}{U}{esstixfrak}{m}{n}
\DeclareMathAlphabet{\mathboondoxfrak}{U}{BOONDOX-frak}{m}{n}
\DeclareMathAlphabet{\mathgoth}{U}{ygoth}{m}{n}
\numberwithin{equation}{section}
\newcommand{\mathop{}\!\mathrm{d}}{\mathop{}\!\mathrm{d}}
\newcommand{\mathop{}\!\mathrm{D}}{\mathop{}\!\mathrm{D}}
\makeatletter
\newcommand{\spx}[1]{%
\if\relax\detokenize{#1}\relax
\expandafter\@gobble
\else
\expandafter\@firstofone
\fi
{^{#1}}%
}
\makeatother
\newcommand\pd[3][]{\frac{\partial\spx{#1}#2}{\partial#3\spx{#1}}}
\newcommand\tpd[3][]{\tfrac{\partial\spx{#1}#2}{\partial#3\spx{#1}}}
\newcommand\dpd[3][]{\dfrac{\partial\spx{#1}#2}{\partial#3\spx{#1}}}
\newcommand{\md}[6]{\frac{\partial\spx{#2}#1}{\partial#3\spx{#4}\partial#5\spx{#6}}}
\newcommand{\tmd}[6]{\tfrac{\partial\spx{#2}#1}{\partial#3\spx{#4}\partial#5\spx{#6}}}
\newcommand{\dmd}[6]{\dfrac{\partial\spx{#2}#1}{\partial#3\spx{#4}\partial#5\spx{#6}}}
\newcommand{\od}[3][]{\frac{\mathop{}\!\mathrm{d}\spx{#1}#2}{\mathop{}\!\mathrm{d}#3\spx{#1}}}
\newcommand{\tod}[3][]{\tfrac{\mathop{}\!\mathrm{d}\spx{#1}#2}{\mathop{}\!\mathrm{d}#3\spx{#1}}}
\newcommand{\dod}[3][]{\dfrac{\mathop{}\!\mathrm{d}\spx{#1}#2}{\mathop{}\!\mathrm{d}#3\spx{#1}}}
\newcommand{\genericdel}[4]{%
\ifcase#3\relax
\ifx#1.\else#1\fi#4\ifx#2.\else#2\fi\or
\bigl#1#4\bigr#2\or
\Bigl#1#4\Bigr#2\or
\biggl#1#4\biggr#2\or
\Biggl#1#4\Biggr#2\else
\left#1#4\right#2\fi
}
\newcommand{\del}[2][-1]{\genericdel(){#1}{#2}}
\newcommand{\set}[2][-1]{\genericdel\{\}{#1}{#2}}
\let\cbr\set
\newcommand{\sbr}[2][-1]{\genericdel[]{#1}{#2}}
\let\intoo\del
\let\intcc\sbr
\newcommand{\intoc}[2][-1]{\genericdel(]{#1}{#2}}
\newcommand{\intco}[2][-1]{\genericdel[){#1}{#2}}
\newcommand{\eval}[2][-1]{\genericdel.|{#1}{#2}}
\newcommand{\envert}[2][-1]{\genericdel||{#1}{#2}}
\let\abs\envert
\newcommand{\sVert}[1][0]{%
\ifcase#1\relax
\rvert\or\bigr|\or\Bigr|\or\biggr|\or\Biggr|
\fi
}
\newcommand{\enVert}[2][-1]{\genericdel\|\|{#1}{#2}}
\let\norm\enVert
\newcommand{\fullfunction}[5]{%
\begin{array}{@{}r@{}r@{}c@{}l@{}}
#1 \colon & #2 & {}\longrightarrow{} & #3 \\
& #4 & {}\longmapsto{} & #5
\end{array}
}
\newcommand{\fd}[2][]{\,\Delta^{#1}\!#2}
\newcommand{\fpd}[3][]{\,\Delta_{#2}^{#1}#3}
\newcommand{\operatorname{diag}}{\operatorname{diag}}
\newcommand{\operatorname{Ann}}{\operatorname{Ann}}
\newcommand{\operatorname{Mon}}{\operatorname{Mon}}
\newcommand{\operatorname{lm}}{\operatorname{lm}}
\newcommand{\operatorname{Disc}}{\operatorname{Disc}}
\newcommand{\operatorname{lc}}{\operatorname{lc}}
\newcommand{\operatorname{lt}}{\operatorname{lt}}
\newcommand{\operatorname{NF}}{\operatorname{NF}}
\newcommand{\operatorname{char}}{\operatorname{char}}
\newcommand{\operatorname{spoly}}{\operatorname{spoly}}
\newcommand{\operatorname{Arg}}{\operatorname{Arg}}
\renewcommand\Re{\operatorname{Re}}
\renewcommand\Im{\operatorname{Im}}
\newcommand{\operatorname{Tr}}{\operatorname{Tr}}
\newcommand{\operatorname{vol}}{\operatorname{vol}}
\newcommand{\operatorname{Csc}}{\operatorname{Csc}}
\newcommand{\operatorname{rank}}{\operatorname{rank}}
\newcommand{\operatorname{Sing}}{\operatorname{Sing}}
\newcommand{\operatorname{codim}}{\operatorname{codim}}
\newcommand{\operatorname{Li}}{\operatorname{Li}}
\newcommand{\operatorname{Conv}}{\operatorname{Conv}}
\newcommand{\operatorname{Cone}}{\operatorname{Cone}}
\newcommand{\operatorname{relint}}{\operatorname{relint}}
\newcommand{\operatorname{Adj}}{\operatorname{Adj}}
\newcommand{\operatorname{Newt}}{\operatorname{Newt}}
\newcommand{\operatorname{Aff}}{\operatorname{Aff}}
\newcommand{\operatorname{Vert}}{\operatorname{Vert}}
\newcommand{\operatorname{Dep}}{\operatorname{Dep}}
\newcommand{\operatorname{Val}}{\operatorname{Val}}
\newcommand{\operatorname{gr}}{\operatorname{gr}}
\newcommand{\operatorname{in}}{\operatorname{in}}
\newcommand{\operatorname{sign}}{\operatorname{sign}}
\newcommand{\operatorname{Gale}}{\operatorname{Gale}}
\newcommand{\operatorname{ord}}{\operatorname{ord}}
\newcommand{\operatorname{Sol}}{\operatorname{Sol}}
\newcommand{\mathop{}\!\mathrm{d}^d}{\mathop{}\!\mathrm{d}^d}
\newcommand{\lideal}[2][]{_{#1}\left< #2 \right>}
\newcommand{\subs}[2]{\left.#1\right|_{#2}}
\newcommand{\shftmi}[1]{\mathbf{#1}^-}
\newcommand{\shftpl}[1]{\mathbf{#1}^+}
\newcommand{\shftplh}[1]{\hat{\mathbf{#1}}^+}
\newcommand{\opn}[1]{\mathbf{n}_{#1}}
\newcommand{\shftpm}[1]{\mathbf{#1}^\pm}
\newcommand{\cfaktor}[1][m]{A^{(#1 i)}_a A^b_{(#1 j)} \left(1+\delta^{(#1 i)}\right)}
\newcommand{\intmeas}[2][L]{\prod_{#2 = 1}^{#1} \frac{\d^d k_{#2}}{i\pi^{d/2}}}
\newcommand{\order}[1]{\mathcal{O} \!\left( #1 \right)}
\newcommand{\bigslant}[2]{{\raisebox{.2em}{\ensuremath{#1}}\!\!\left/\raisebox{-.2em}{\ensuremath{#2}}\right.}}
\newcommand{\normalslant}[2]{#1/#2}
\newcommand{\contraction}[2]{#1/#2}
\newcommand{A^{(mi)}_a s_{(mj)}}{A^{(mi)}_a s_{(mj)}}
\newcommand{A^{(mi)}_a s^{\text{ext}}_{(mj)}}{A^{(mi)}_a s^{\text{ext}}_{(mj)}}
\newcommand{{G_{z}(x)}}{{G_{z}(x)}}
\newcommand{{G_{\mathbf{u}}(x)}}{{G_{\mathbf{u}}(x)}}
\newcommand{\mathcal{A}}{\mathcal{A}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{A}_\sigma}{\mathcal{A}_\sigma}
\newcommand{\mathcal{A}_{\bar{\sigma}}}{\mathcal{A}_{\bar{\sigma}}}
\newcommand{z_\sigma}{z_\sigma}
\newcommand{z_{\bar\sigma}}{z_{\bar\sigma}}
\newcommand{{\underline{\nu}}}{{\underline{\nu}}}
\newcommand{\Cc}[1][f]{\overline{\mathcal C}_{#1}^{\mathrm c}}
\newcommand{\mathbb{A}^n_{\mathbb{K}}}{\mathbb{A}^n_{\mathbb{K}}}
\newcommand{\mathbb{P}^n_{\mathbb{K}}}{\mathbb{P}^n_{\mathbb{K}}}
\newcommand{\mathcal{U}}{\mathcal{U}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{G}}{\mathcal{G}}
\newcommand{\mathcal{S}}{\mathcal{S}}
\newcommand{(\mathbb C^*)^n}{(\mathbb C^*)^n}
\newcommand{\widehat{\mathcal T}}{\widehat{\mathcal T}}
\newcommand{{\bar\tau}}{{\bar\tau}}
\newcommand{\mink}[1]{\mathgoth{#1}}
\newcommand{\frac{d}{2}}{\frac{d}{2}}
\newcommand{\Var}{\mathlarger{\mathbb{V}}}
\newcommand{\HypF}[3]{ {_2}F_1 \!\left.\left(\genfrac{}{}{0pt}{}{#1}{#2}\right| #3 \right) }
\newcommand{\MatFunc}[4]{ #1 \left.\left(\genfrac{}{}{0pt}{}{#2}{#3}\right| #4 \right) }
\newcommand{\HypFr}[3]{ {_2}\mathbf{F}_1 \!\left.\left(\genfrac{}{}{0pt}{}{#1}{#2}\right| #3 \right) }
\newcommand{\HypFpq}[5]{ {_{#1}}F_{#2} \!\left.\left(\genfrac{}{}{0pt}{}{#3}{#4}\right| #5 \right) }
\newcommand{\StirlingFirst}[2]{\begin{bmatrix} #1 \\ #2 \end{bmatrix}}
\newcommand{\StirlingFirstSmall}[2]{\begin{bsmallmatrix} #1 \\ #2 \end{bsmallmatrix}}
\newcommand{\StirlingSecond}[2]{\begin{Bmatrix} #1 \\ #2 \end{Bmatrix}}
\newcommand{\StirlingSecondSmall}[2]{\begin{Bsmallmatrix} #1 \\ #2 \end{Bsmallmatrix}}
\newcommand{\Ssum}[2]{S^{(#1)}_{#2} }
\newcommand{\Zsum}[2]{Z^{(#1)}_{#2} }
\newcommand{\Binomial}[2]{\begin{pmatrix} #1 \\ #2 \end{pmatrix}}
\newcommand{\with}{\qquad \textrm{with} \qquad}
\newcommand{\comma}{\quad \textrm{ ,}}
\newcommand{\point}{\quad \textrm{ .}}
\newcommand{\monthword}[1]{\ifcase#1\or \or \or \or April\fi}
\newcommand*\circled[1]{\tikz[baseline=(char.base)]{
\node[shape=circle,draw,inner sep=2pt] (char) {#1};}}
\newcommand{\vspace{\baselineskip}}{\vspace{\baselineskip}}
\newcommand*{\vcenteredhbox}[1]{\begingroup \setbox0=\hbox{#1}\parbox{\wd0}{\box0}\endgroup}
\newcommand{\softwareName}[1]{\textsl{#1}}
\newcommand{completely non-vanishing }{completely non-vanishing }
\newcommand{completely non-vanishing}{completely non-vanishing}
\newcommand{Horn-Kapranov-parameterization }{Horn-Kapranov-parameterization }
\newcommand{Horn-Kapranov-parameterization}{Horn-Kapranov-parameterization}
\input{glossaryEntries.tex}
\newcommand{\topologyDescr}[5]{\multirow{#1}{*}{\begin{tabular}{>{\centering\arraybackslash}p{.9cm} >{\centering\arraybackslash}p{.9cm} >{\centering\arraybackslash}p{.9cm}} \multicolumn{3}{c}{#2} \\ $ #3 $ & $ #4 $ & $ #5 $ \end{tabular}}}
\newcommand{\topologyDescrTikz}[7]{%
\multirow{#1}{*}{
\makecell{
\begin{minipage}{0.3cm}
\hspace{-0.7cm}\rotatebox{90}{\textit{#2}}
\end{minipage}%
\begin{minipage}{2cm}
\centering \begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=#7] #6
\end{tikzpicture}
\end{minipage} \\[1cm]
\vspace*{\fill}
\begin{tabular}{@{}>{\centering\arraybackslash}p{.9cm} >{\centering\arraybackslash}p{.9cm} >{\centering\arraybackslash}p{1cm}@{}}
$ L=#3 $ & $ n=#4 $ & $ m=#5 $
\end{tabular}
}%
}
}
\newcommand{\SymanzikSpec}[4]{\multirow{2}{*}{$ #1 $} & \multirow{2}{*}{$ #2 $} &\multirow{2}{*}{$ #3 $} &\multirow{2}{*}{$ #4 $} &}
\newcommand{\mmultirow}[1]{\multirow{2}{*}{$ #1 $}}
\newcommand{\tmultirow}[1]{\multirow{2}{*}{#1}}
\newcommand{\triangsF}[6]{#1 & #2 & #3 & #4 & #5 \\}
\newcommand{\triangsG}[5]{& & & & & & #1 & #2 & #3 & #4 & #5 \\}
\newcommand{\textsuperscript{$\dagger$}}{\textsuperscript{$\dagger$}}
\newcommand{\textsuperscript{$\ddagger$}}{\textsuperscript{$\ddagger$}}
\title{Hypergeometric Feynman Integrals}
\author{René Pascal Klausen}
\date{\today}
\begin{document}
\newgeometry{bindingoffset=10mm, hmargin=25mm, vmargin=25mm}
\frontmatter
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\begin{titlepage}
{\centering
\rule{\textwidth}{1.6pt}\vspace*{-\baselineskip}\vspace*{2pt}
\rule{\textwidth}{0.4pt}\\[\baselineskip]
{\Huge\MakeUppercase{Hypergeometric Feynman \\[0.2\baselineskip] integrals}}\\[0.4\baselineskip]
\rule{\textwidth}{0.4pt}\vspace*{-\baselineskip}\vspace{3.2pt}
\rule{\textwidth}{1.6pt}\\[1.5\baselineskip]
{\Large René Pascal Klausen \!\raisebox{0.5em}{\scalebox{0.75}{\href{mailto:klausen@physik.hu-berlin.de}{\faEnvelope[regular]}}}\hspace{2pt}, \nth{25} February 2023 \par}
\vspace{2\baselineskip}
}
{\noindent This is a slightly updated version of my PhD thesis (the original version is dated \nth{3} August 2022), which I defended at the Institute of Physics at Johannes Gutenberg University Mainz on the \nth{24} of November 2022. The referees were Christian Bogner (Johannes Gutenberg University Mainz) and Dirk Kreimer (Humboldt University of Berlin).\par}
\vspace{\baselineskip}
\vfill
{\centering\bfseries\Large Abstract}
\vspace{\baselineskip}
\currentpdfbookmark{Abstract}{bm:abstract}
In this thesis we will study Feynman integrals from the perspective of $\mathcal{A}$-hypergeometric functions, a generalization of hypergeometric functions which goes back to Gelfand, Kapranov, Zelevinsky (GKZ) and their collaborators. This point of view was recently initiated by the works \cite{DeLaCruzFeynmanIntegralsAhypergeometric2019} and \cite{KlausenHypergeometricSeriesRepresentations2019}. Inter alia, we want to provide here a concise summary of the mathematical foundations of $\mathcal{A}$-hypergeometric theory in order to substantiate this viewpoint. This overview will concern aspects of polytopal geometry, multivariate discriminants as well as holonomic $D$-modules.
As we will subsequently show, every scalar Feynman integral is an $\mathcal{A}$-hypergeometric function. Furthermore, all coefficients of the Laurent expansion as appearing in dimensional and analytical regularization can be expressed by $\mathcal{A}$-hypergeometric functions as well. By applying the results of GKZ we derive an explicit formula for series representations of Feynman integrals. Those series representations take the form of Horn hypergeometric functions and can be obtained for every regular triangulation of the Newton polytope $\operatorname{Newt}(\mathcal{U}+\mathcal{F})$ of the sum of Symanzik polynomials. Those series can be of higher dimension, but converge fast for certain kinematical regions, which also allows an efficient numerical application. We will sketch an algorithmic approach which evaluates Feynman integrals numerically by means of these series representations. Further, we will examine possible issues which can arise in a practical usage of this approach and provide strategies to solve them. As an illustrative example we will present series representations for the fully massive sunset Feynman integral.
Moreover, the $\mathcal{A}$-hypergeometric theory enables us to give a mathematically rigorous description of the analytic structure of Feynman integrals (also known as Landau variety) by means of principal $A$-determinants and $A$-discriminants. This description of the singular locus will also comprise the various second-type singularities. Furthermore, we will find contributions to the singular locus occurring in higher loop diagrams, which seem to have been overlooked in previous approaches. By means of the Horn-Kapranov-parameterization we also provide a very efficient way to determine parameterizations of Landau varieties. We will illustrate those methods by determining the Landau variety of the dunce's cap graph. We furthermore present a new approach to study the sheet structure of multivalued Feynman integrals by use of coamoebas.
\end{titlepage}
\renewcommand{\thefootnote}{\arabic{footnote}}
\pagestyle{empty}
\restoregeometry
\origdoublepage
\pagestyle{plain}
\cleardoublepage
\currentpdfbookmark{Contents}{bm:toc}
\tableofcontents
\clearpage
\currentpdfbookmark{List of figures}{bm:tof}
\listoffigures
\vfill
The figures in this thesis were generated with \softwareName{TikZ} \cite{TantauTikZPGFManual2021} and \softwareName{Mathematica} \cite{WolframResearchIncMathematicaVersion12}.
\pagestyle{withoutSections}
\mainmatter
\chapter{Introduction} \insertlofspace \label{ch:Introduction}
The perspective of modern physics on the fundamental interactions of nature is formulated in terms of quantum field theories (QFTs). Tacitly, one often assumes perturbative quantum field theories when talking about QFT. This is because, with the exception of a few toy models, QFTs are only accessible perturbatively, i.e.\ we consider interactions in an infinitesimal neighbourhood of free QFTs. In the framework of perturbative QFTs, Feynman integrals are indispensable building blocks for almost every prediction within these theories.
Hence, we are confronted with the problem of solving a huge number of Feynman integrals. But what does it actually mean to ``solve'' a Feynman integral? Of course, one is interested in a numerical result when comparing it with experimental data. However, on all the intermediate steps to this final result, analytical solutions of those integrals are preferred. This is less a question of elegance and more a need to understand the structure beyond Feynman integrals\footnote{Analytical expressions are also preferred in the renormalization procedure. However, one can also consider renormalized Feynman integrals so that an analytical intermediate solution can in principle be omitted.}. Hence, to get a deeper understanding of Feynman integrals and their role in perturbative QFTs, it is essential to investigate their functional relationships. Thus, ``solving Feynman integrals'' actually stands for rewriting Feynman integrals in terms of other functions. Clearly, the goal is to know more about these rewritten functions, and we should be able to efficiently evaluate these functions numerically.
A very successful example of a function class for rewriting Feynman integrals is the class of so-called multiple polylogarithms and their generalizations, e.g.\ elliptic polylogarithms \cite{RemiddiHarmonicPolylogarithms2000, GoncharovMultiplePolylogarithmsMixed2001, BrownMasslessHigherloopTwopoint2009, PanzerFeynmanIntegralsHyperlogarithms2015, BognerSymbolicIntegrationMultiple2012, BognerFeynmanIntegralsIterated2015, HiddingAllOrdersStructure2019}. Thus, multiple polylogarithms and related functions appear for many Feynman integrals as coefficients in a Laurent expansion in dimensional and analytical regularization. However, this does not apply to all Feynman integrals. This is our main motivation for proposing another class of functions for rewriting Feynman integrals herein: $\mathcal{A}$-hypergeometric functions. These functions are in general less easy to handle than multiple polylogarithms, but we will show that every Feynman integral can be treated within this functional class. Indeed, the coefficients in a Laurent expansion of the Feynman integral as well as the Feynman integral itself belong to the class of $\mathcal{A}$-hypergeometric functions. In doing so, we will take a closer look at two aspects in this thesis: on the one hand, we want to indicate areas where we can benefit from knowledge about this class of functions in the investigation of Feynman integrals, and on the other hand we will discuss possibilities for an efficient numerical evaluation.\bigskip
Describing Feynman integrals by hypergeometric functions is by no means a new idea. Already in the early days of calculating Feynman amplitudes, it was proposed by Regge to consider Feynman integrals as a kind of generalized hypergeometric functions \cite{ReggeAlgebraicTopologyMethods1968}, where the singularities of those hypergeometric functions coincide with the Landau variety. Later on, Kashiwara and Kawai \cite{KashiwaraHolonomicSystemsLinear1976} showed that Feynman integrals indeed satisfy holonomic differential equations, where the singularities of those holonomic differential equations are determined by the Landau variety.
Apart from characterizing the Feynman integral by ``hypergeometric'' partial differential equation systems, many applications determine the Feynman integral as a generalized hypergeometric series. Usually, the often used Mellin-Barnes approach \cite{BergereAsymptoticExpansionFeynman1974,SmirnovFeynmanIntegralCalculus2006} results in Pochhammer series ${}_pF_q$, Appell functions, Lauricella functions and related functions by applying the residue theorem \cite{BoosMethodCalculatingMassive1991}. Furthermore, for arbitrary one-loop Feynman integrals it is known that they can always be represented by a small set of hypergeometric series \cite{FleischerNewHypergeometricRepresentation2003,PhanScalar1loopFeynman2019}.
Thirdly, the Feynman integral may be expressed by ``hypergeometric'' integrals like the generalized Meijer $G$- or Fox $H$-functions \cite{BuschmanFunctionAssociatedCertain1990, Inayat-HussainNewPropertiesHypergeometric1987, Inayat-HussainNewPropertiesHypergeometric1987a}. The connection between Feynman integrals and hypergeometric functions has been investigated over decades and a summary of these results can be found in \cite{KalmykovHypergeometricFunctionsTheir2008}.
Thus, there arise three different notions of the term ``hypergeometric'' in the Feynman integral calculus, where every notion generalizes different characterizations of the ordinary Gaussian hypergeometric function ${_2}F_1(a,b,c|z)$. In the late 1980s Gelfand, Kapranov, Zelevinsky (GKZ) and collaborators \cite{GelfandCollectedPapersVol1989, GelfandHolonomicSystemsEquations1988, GelfandDiscriminantsPolynomialsSeveral1991, GelfandGeneralizedEulerIntegrals1990, GelfandGeneralHypergeometricSystems1992, GelfandDiscriminantsResultantsMultidimensional1994, GelfandHypergeometricFunctionsToric1991, GelfandAdiscriminantsCayleyKoszulComplexes1990, GelfandHypergeometricFunctionsToral1989, GelfandDiscriminantsPolynomialsMany1990, GelfandEquationsHypergeometricType1988, GelfandNewtonPolytopesClassical1990} were starting to develop a comprehensive method to generalize the notion of ``hypergeometric'' functions in a consistent way\footnote{Indeed, Feynman integrals from QED were one of the motivations for Gelfand to develop generalized hypergeometric differential equations. However, this connection does not seem to have been pursued further by Gelfand. Moreover, it was indicated in \cite{GolubevaReggeGelfandProblem2014}, that also Regge's conjecture \cite{ReggeAlgebraicTopologyMethods1968} was influenced by Gelfand.}. Those functions are called $\mathcal{A}$-hypergeometric functions and are defined by a special holonomic system of partial differential equations. As Gelfand, Kapranov and Zelevinsky illustrated with Euler integrals, the GKZ approach not only generalizes the concept of hypergeometric functions but can also be used for analyzing and solving integrals \cite{GelfandGeneralizedEulerIntegrals1990}.\bigskip
For physicists the GKZ perspective is not entirely new. Already in the 1990s, string theorists applied the $\mathcal{A}$-hypergeometric approach in order to calculate certain period integrals and worked out the mirror symmetry \cite{HosonoMirrorSymmetryMirror1995, HosonoMirrorSymmetryMirror1995a}. Recently, the GKZ approach was also used to obtain differential equations for the Feynman integral from the maximal cut \cite{VanhoveFeynmanIntegralsToric2018}. Furthermore, $\mathcal{A}$-hypergeometric functions can also be used in several other branches of physics \cite{GolubevaReggeGelfandProblem2014}. Still, the approach of Gelfand, Kapranov and Zelevinsky is not yet common practice among physicists.\bigskip
However, Feynman integrals have recently begun to be considered from the perspective of GKZ. In 2016 Nasrollahpoursamami showed that the Feynman integral satisfies a differential equation system which is isomorphic to a GKZ system \cite{NasrollahpoursamamiPeriodsFeynmanDiagrams2016}. Independently of each other, the fact that Feynman integrals are $\mathcal{A}$-hypergeometric was also shown directly in 2019 by de la Cruz \cite{DeLaCruzFeynmanIntegralsAhypergeometric2019} and Klausen \cite{KlausenHypergeometricSeriesRepresentations2019} based on the Lee-Pomeransky representation \cite{LeeCriticalPointsNumber2013} of the Feynman integral. Furthermore, explicit series representations for all Feynman integrals admitting unimodular triangulations of $\operatorname{Newt}(\mathcal{U}+\mathcal{F})$ were given \cite{KlausenHypergeometricSeriesRepresentations2019}. In the manner of these two works \cite{DeLaCruzFeynmanIntegralsAhypergeometric2019,KlausenHypergeometricSeriesRepresentations2019}, a number of examples were presented in detail by \cite{FengGKZhypergeometricSystemsFeynman2020}. Shortly afterwards, a series of articles considered Feynman integrals by use of GKZ methods developed in string theory \cite{KlemmLoopBananaAmplitude2020, BonischFeynmanIntegralsDimensional2021, BonischAnalyticStructureAll2021}, which were applied mainly to the banana graph family. Moreover, the Landau variety of Feynman integrals was considered from the $\mathcal{A}$-hypergeometric perspective for banana graphs in \cite{BonischAnalyticStructureAll2021} and for arbitrary Feynman graphs in \cite{KlausenKinematicSingularitiesFeynman2022}. \bigskip
This thesis builds on the two mentioned articles \cite{KlausenHypergeometricSeriesRepresentations2019} and \cite{KlausenKinematicSingularitiesFeynman2022}. However, we will include several major generalizations of these works, and we also want to provide a more substantial overview of the mathematical theory behind them.
In particular, we will show that every Feynman integral belongs to the class of $\mathcal{A}$-hypergeometric functions. Moreover, every coefficient in a Laurent expansion as appearing in dimensional or analytical regularization can be expressed by $\mathcal{A}$-hypergeometric functions. Furthermore, we will give an explicit formula for a multivariate Horn series representation of generalized Feynman integrals for every regular (unimodular as well as non-unimodular) triangulation. Inter alia, this allows us to evaluate Feynman integrals numerically very efficiently for convenient kinematic regions.
Further, we will connect the Landau variety with principal $A$-determinants and $A$-discriminants. This allows us to describe Landau varieties, including second-type singularities, with mathematical rigour. In doing so, we will find certain contributions to the singular locus of Feynman integrals in higher loops which seem to have been overlooked so far. The simplest example where those additional contributions appear is the so-called dunce's cap graph. By its connection to the $\mathcal{A}$-hypergeometric theory we can also give a very efficient parameterization of Landau varieties. In addition, we will sketch an approach to describe parts of the monodromy structure of Feynman integrals by means of a simpler geometric object, which is known as coamoeba. \bigskip
We will start this thesis with a comprehensive summary of the mathematical fundament in \cref{ch:AHypergeometricWorld}, which in our opinion is essential for understanding the following chapters. We will try to keep this chapter as short as possible without compromising understanding. For hurried readers we recommend \cref{sec:AhypergeometricNutshell}, which summarizes the main aspects of \cref{ch:AHypergeometricWorld}. After a mathematical introduction, we will continue with a physical introduction in \cref{ch:FeynmanIntegrals}. Thereby, we will focus mainly on the various representations of Feynman integrals in parametric space and recall several aspects of them. For experienced readers, \cref{sec:FeynmanIntegralsAsAHyp} may be sufficient, which connects Feynman integrals and $\mathcal{A}$-hypergeometric functions.
In \cref{ch:seriesRepresentations} we will introduce series representations of Feynman integrals based on $\mathcal{A}$-hypergeometric theory. Apart from general questions, we will also discuss some features as well as possible difficulties which can arise in the evaluation of Feynman integrals by means of those series representations. In particular, we also explain the steps that would be necessary in an algorithmic implementation. \Cref{ch:singularities} is devoted to the study of Landau varieties (or more generally to the singular locus) of Feynman integrals from the $\mathcal{A}$-hypergeometric perspective. In this chapter we will develop a mathematically rigorous description of the singular locus by means of the principal $A$-determinant and introduce the coamoeba.
\clearpage
\currentpdfbookmark{Acknowledgements}{bm:acknowledgements}
\vspace*{3\baselineskip}
\section*{\centering Acknowledgements}
\vspace{\baselineskip}
I would like to express my deepest gratitude to Christian Bogner for his great encouragement and support, which led to this thesis. I owe him the invaluable freedom to work on my own research topic, and I am grateful to him for always encouraging and supporting me in all my ideas. Furthermore, I also wish to thank him for his time to supervise my work, despite difficult circumstances. I would like to extend my sincere thanks to Dirk Kreimer for warmly welcoming me into his group at Humboldt university, for never wavering in his support and for his help in all organizational issues.
I very much appreciated the stimulating email discussions with Uli Walther about general questions on $\mathcal{A}$-hypergeometric theory and I would like to thank him for his invitation to the $\mathcal{A}$-hypergeometric conference. As well, I have also benefited from the discussions with Christoph Nega and Leonardo de la Cruz about their personal perspective on GKZ. Moreover, I also benefited from the discussions with Konrad Schultka about toric geometry, with David Prinz about renormalization, the discussions with Marko Berghoff and Max Mühlbauer as well as with Erik Panzer about Landau varieties and related questions. Further, I wish to thank Ruth Britto for the enlightening discussions and for inviting me to Dublin. I would also like to thank Stefan Theisen for his constructive comments on series representations of $\mathcal{A}$-hypergeometric functions. I also wish to express my sincere thanks to Erik Panzer and Michael Borinsky for initiating the very inspiring conference about ``Tropical and Convex Geometry and Feynman integrals'' at ETH Zürich.
This research was supported by the International Max Planck Research School for Mathematical and Physical Aspects of Gravitation, Cosmology and Quantum Field Theory and by the cluster of excellence ``Precision Physics, Fundamental Interactions and Structure of Matter'' (PRISMA+) at Johannes-Gutenberg university of Mainz.
\newpage
\pagestyle{fancyStandard}
\chapter{The \texorpdfstring{$\mathcal{A}$}{A}-hypergeometric world} \label{ch:AHypergeometricWorld}
\section{Why \texorpdfstring{$\mathcal{A}$}{A}-hypergeometric systems?} \label{sec:AhypergeometricNutshell}
Remarkably often, functions labelled as ``hypergeometric'' appear in the calculation of Feynman integrals. For example, Gauss' hypergeometric function ${_2}F_1(a,b,c | z)$ shows up in the $1$-loop self-energy graph, the $2$-loop graph with three edges (also known as the sunset graph) can be written in terms of hypergeometric Lauricella functions \cite{BerendsClosedExpressionsSpecific1994}, the hypergeometric Appell $F_1$ function appears in the triangle graph and the hypergeometric Lauricella-Saran function is used in the $1$-loop box graph \cite{FleischerNewHypergeometricRepresentation2003}, to name a few. At first sight, all these hypergeometric functions appear to be a rather loose collection of unrelated functions. Thus, it is a justified question what all these functions have in common and what the general meaning of the term ``hypergeometric'' is. The subsequent question is then whether all Feynman integrals are ``hypergeometric'' in that general sense.
Going back to a talk of Tullio Regge \cite{ReggeAlgebraicTopologyMethods1968}, it has long been supposed that the second question has an affirmative answer and that the appearance of hypergeometric functions in Feynman calculus is not just arbitrary. Regge also gave a suggestion as to which characteristics those general hypergeometric functions have to satisfy, which was later made more precise by Sato \cite{SatoRecentDevolpmentHyperfunction1975} and by Kashiwara and Kawai \cite{KashiwaraConjectureReggeSato1976} in the framework of microlocal analysis. All these approaches had in common that they were based on holonomic $D$-modules, i.e.\ roughly speaking ``well-behaved'' systems of linear partial differential equations. \bigskip
A comprehensive investigation of the notion of generalized hypergeometric functions based on specific holonomic $D$-modules was then initiated by Gelfand, Graev, Kapranov and Zelevinsky in the late 1980s under the term $\mathcal{A}$-hypergeometric functions. Thereby, $\mathcal{A}$ is a finite configuration of vectors in $\mathbb Z^{n+1}$ which determines the type of the hypergeometric function. Additionally, an $\mathcal{A}$-hypergeometric function depends also on a parameter $\beta\in\mathbb C^{n+1}$ and variables $z\in\mathbb C^{|\mathcal{A}|}$.
It will turn out that the $\mathcal{A}$-hypergeometric functions fit perfectly into Regge's idea of hypergeometric functions. Further, with the $\mathcal{A}$-hypergeometric theory in mind we can answer both questions: firstly, what do all hypergeometric functions have in common, and secondly we can show that indeed any scalar Feynman integral without tadpoles is an $\mathcal{A}$-hypergeometric function. In that process, the vector configuration $\mathcal{A}$ will be determined by the Feynman graph, the parameter $\beta$ depends on the spacetime dimension and the indices of propagators and the variables $z$ turn out to be quotients of external momenta and masses. Therefore, the $\mathcal{A}$-hypergeometric functions cover the structure of Feynman integrals very naturally. Moreover, $\mathcal{A}$-hypergeometric theory has a very constructive nature, and we can make use of several features of this theory in the calculation and structural investigation of Feynman integrals.\bigskip
Thus, an $\mathcal{A}$-hypergeometric perspective on Feynman integrals could be very fruitful for physicists, and this whole chapter is devoted to giving an overview of the features of $\mathcal{A}$-hypergeometric functions. Before entering the $\mathcal{A}$-hypergeometric world in all its details, we want to give a first glimpse by highlighting the most important cornerstones.\bigskip
By the $\mathcal{A}$-hypergeometric system $H_\mathcal{A}(\beta)$ we will understand the holonomic $D$-ideal in the Weyl algebra $D=\mathbb C \langle z_1,\ldots,z_N,\partial_1,\ldots,\partial_N\rangle$ generated by
\begin{align}
\{ \partial^u - \partial^v \,\rvert\, \mathcal{A} u = \mathcal{A} v, \, u,v\in\mathbb N^N \} \cup \langle \mathcal{A} \theta + \beta \rangle \label{eq:AHypIdeal1}
\end{align}
where $\mathcal{A}\in\mathbb Z^{(n+1)\times N}$ is understood as a matrix representation of a non-degenerate, acyclic vector configuration $\mathcal{A}\subset\mathbb Z^{n+1}$, $\beta\in\mathbb C^{n+1}$ is a complex parameter and $\theta$ is the Euler operator in $D$. Every holomorphic solution of those systems will be called an $\mathcal{A}$-hypergeometric function. It will turn out that for generic $\beta$ the holonomic rank of $H_\mathcal{A}(\beta)$ is given by the volume of the convex polytope $\operatorname{Conv}(A)$, where $A$ is the dehomogenized point configuration of $\mathcal{A}$. Hence, for a given triangulation $\mathcal{T}$ of $\operatorname{Conv}(A)$, we can construct a basis of the solution space $\operatorname{Sol}(H_\mathcal{A}(\beta))$ by assigning to each maximal cell of the triangulation as many independent solutions as the volume of that cell. This allows us to write $\mathcal{A}$-hypergeometric functions in terms of e.g.\ Horn series, Mellin-Barnes integrals or Euler integrals. Also, the singular locus of $H_\mathcal{A}(\beta)$, which in the application to Feynman integrals is essentially the Landau variety, gets a very natural expression in the $\mathcal{A}$-hypergeometric world in terms of principal $A$-determinants $E_A(f)$ and $A$-discriminants $\Delta_A(f)$
\begin{align}
\operatorname{Sing}(H_\mathcal{A}(\beta)) = \Var (E_A(f)) = \bigcup_{\emptyset\neq \tau\subseteq\operatorname{Conv}(A)} \Var (\Delta_{A\cap\tau} (f_\tau)) \point
\end{align}
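For a first orientation (and anticipating notions that are only introduced in the sections below), the classical example behind these statements is Gauss' function: for the point configuration $A = \{(0,0),(1,0),(0,1),(1,1)\}\subset\mathbb Z^2$ with homogenization
\begin{align*}
\mathcal{A} = \begin{pmatrix}
1 & 1 & 1 & 1\\ 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 1
\end{pmatrix} ,
\end{align*}
the ideal \eqref{eq:AHypIdeal1} is generated by the single toric operator $\partial_1\partial_4-\partial_2\partial_3$ together with the three Euler operators. The square $\operatorname{Conv}(A)$ has normalized volume $2$, matching the two-dimensional solution space of Gauss' hypergeometric equation, and for generic $\beta$ the solutions can be written, up to monomial prefactors, in terms of ${_2}F_1$ in the variable $\frac{z_2z_3}{z_1z_4}$ or its inverse, depending on the triangulation chosen. The singular locus is the union of the four coordinate hyperplanes, contributed by the vertices of the square, and of $\Var(z_1z_4-z_2z_3)$, the vanishing locus of the $A$-discriminant of $f = z_1 + z_2x_1 + z_3x_2 + z_4x_1x_2$.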
In the following sections we will go through all the details step by step. Starting with the underlying geometric spaces, we will continue with convex polyhedra, which cover the combinatorial structure of $\mathcal{A}$-hypergeometric functions. After a short introduction to $A$-discriminants and holonomic $D$-modules we are ready to state $\mathcal{A}$-hypergeometric systems and sketch their properties and their relations to $A$-discriminants.
\section{Affine and projective space} \label{sec:affineSpace}
The study of polynomials in several variables, as they appear for example in Feynman integrals, leads almost automatically to algebraic geometry. Therefore, we want to start by introducing the very basic notions of this subject adapted to $\mathcal{A}$-hypergeometric systems. This summary is oriented towards \cite{HartshorneAlgebraicGeometry1977, HolmeRoyalRoadAlgebraic2012}, to which we also refer for further reading. \bigskip
Let $\mathbb K$ be a field. For convenience we will most often assume that $\mathbb K$ is algebraically closed, and we will typically use the complex numbers $\mathbb C$ or a convenient subfield. The \textit{affine $n$-space} $\mathbb{A}^n_{\mathbb{K}}$ over $\mathbb K$ is simply the set of all $n$-tuples with elements from $\mathbb K$
\begin{align}
\gls{AnK} := \{ (x_1,\ldots,x_n) \, \rvert \, x_i\in \mathbb K \quad \textrm{for } i=1,\ldots,n \} \point
\end{align}
Thus, the affine $n$-space $\mathbb{A}^n_{\mathbb{K}}$ is the vector space $\mathbb K^n$ as a set. However, $\mathbb{A}^n_{\mathbb{K}}$ will have different morphisms than $\mathbb K^n$, which is the reason why we want to use clearly separated terms. In particular, we do not want to have a distinguished point such as the origin of $\mathbb K^n$. However, we can treat relative positions of points in $\mathbb{A}^n_{\mathbb{K}}$ by referring to the corresponding points of $\mathbb K^n$. \bigskip
By $\gls{polyRing}$ we denote the coordinate ring of $\mathbb{A}^n_{\mathbb{K}}$, i.e.\ the ring of polynomials in the variables $x_1,\ldots,x_n$. Thus, we consider elements of $\mathbb K[x_1,\ldots,x_n]$ as functions from $\mathbb{A}^n_{\mathbb{K}}$ to $\mathbb K$. For a subset of polynomials $S\subseteq \mathbb K[x_1,\ldots,x_n]$ we define the zero locus
\begin{align}
\gls{variety} := \left\{ x\in \mathbb{A}^n_{\mathbb{K}} \, \rvert\, f(x) = 0 \quad\text{for all } f\in S \right\} \subseteq \mathbb{A}^n_{\mathbb{K}}
\end{align}
as the \textit{affine variety} generated by $S$. Since $\Var(S)$ depends only on the ideal generated by $S$ and since, by Hilbert's basis theorem, every such ideal is finitely generated, we can restrict ourselves to considering only ideals $S$ or even merely finite sets of generators. It is not hard to see that the union and the intersection of affine varieties are again affine varieties \cite{HartshorneAlgebraicGeometry1977}
\begin{align}
\Var(S_1) \cup \Var(S_2) &= \Var(S_1 \cdot S_2) \\
\Var(S_1) \cap \Var(S_2) &= \Var(S_1 \cup S_2)
\end{align}
where $S_1\cdot S_2:=\{f\cdot g \,\rvert\, f\in S_1, g\in S_2\}$. Therefore, by declaring these varieties to be the closed sets, we can define a topology on $\mathbb{A}^n_{\mathbb{K}}$, the so-called \textit{Zariski topology}. Note that also the empty set and the full space are varieties, generated by the polynomials $1$ and $0$, respectively. An affine variety is called \textit{irreducible} if it cannot be written as the union of two proper subvarieties\footnote{Two different definitions of affine varieties are common in the literature. Several authors assume affine varieties always to be irreducible. They would call our notion of affine varieties an affine algebraic set.}. \bigskip
Laurent polynomials can be treated in an analogous way. The so-called \textit{algebraic torus} $\gls{Csn}:= (\mathbb C\setminus \{0\})^n$ is an affine variety in $\mathbb C^{2n}$ and \textit{Laurent monomials} in the variables $x_1,\ldots,x_n$ are nothing else than the characters of $(\mathbb C^*)^n$
\begin{align}
(\mathbb C^*)^n \rightarrow \mathbb C^*, \qquad x \mapsto x^a := x_1^{a_1} \cdots x_n^{a_n} \label{eq:LaurentMonomial}
\end{align}
with the exponent $a=(a_1,\ldots,a_n)\in\mathbb Z^n$. Here and in the following we will make use of a multi-index notation as indicated in \cref{eq:LaurentMonomial}. A \textit{Laurent polynomial} is a finite linear combination of these monomials and can be uniquely written as
\begin{align}
f(x)=f(x_1,\ldots,x_n) = \sum_{a\in A} z_a x^a \in \mathbb C[x_1^{\pm 1},\ldots,x_n^{\pm 1}]
\end{align}
where $A\subset \mathbb Z^n$ is a finite set of non-repeating points and the coefficients $z_a\in\mathbb C$ are complex numbers, not identically zero. We will call the subset $A$ the \textit{support} of $f$. The space of all Laurent polynomials with fixed monomials from $A$ but with indeterminate coefficients $z_a$ will be denoted by $\gls{CA}$. Thus, the set of coefficients $\{z_a\}_{a\in A}$ are coordinates of $\mathbb C^A$. Considering polynomials with fixed monomials but with variable coefficients is often referred to as the ``$A$-philosophy'' \cite{GelfandDiscriminantsResultantsMultidimensional1994}. \bigskip
The \textit{dimension} of an irreducible affine variety $V$ will be defined as the largest integer $d$ such that there exists a chain of irreducible closed subsets $V_0,\ldots,V_d$
\begin{align}
V_0 \subsetneq V_1 \subsetneq \ldots \subsetneq V_d = V \point
\end{align}
The dimension of an affine variety is then the highest dimension of its irreducible components. This definition agrees with the definition via the Krull dimension of its coordinate ring. For the \textit{Krull dimension} of a commutative ring $R$ we consider chains of prime ideals $I_0 \subsetneq \ldots \subsetneq I_l$ in the ring $R$. The Krull dimension $\gls{KrullDimension}$ is then the supremum of the lengths $l$ of all chains of prime ideals in $R$. We can extend the definition of the Krull dimension also to ideals $I\subseteq R$ by setting the dimension of an ideal to be the Krull dimension of its coordinate ring $\normalslant{R}{I}$. Thus, if the variety is generated by the ideal $I\subseteq R$, we have $\dim \Var(I) = \dim_{\operatorname{Kr}}(I) = \dim_{\operatorname{Kr}}(\normalslant{R}{I})$. This also implies that the dimension of the affine space (as a variety) agrees with the dimension of its corresponding vector space $\dim \mathbb{A}^n_{\mathbb{K}} = \dim \mathbb K^n = n$. We refer to \cite{GreuelSingularIntroductionCommutative2002} for further details. \bigskip
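For illustration, consider the ideal $I=(x_1 x_2)\subset\mathbb C[x_1,x_2]$. The variety $\Var(I)$ is the union of the two coordinate axes, each of which is irreducible of dimension $1$. Correspondingly, a longest chain of prime ideals in $\normalslant{\mathbb C[x_1,x_2]}{I}$ is induced by $(x_1)\subsetneq (x_1,x_2)$, such that $\dim \Var(I) = \dim_{\operatorname{Kr}}(I) = 1$. \bigskip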
For elements $a^{(1)},\ldots,a^{(k)}\in \mathbb{A}^n_{\mathbb{K}}$ of an affine space we will call
\begin{align}
\lambda_1 a^{(1)} + \ldots + \lambda_k a^{(k)} \qquad \text{with}\qquad \sum_{j=1}^k \lambda_j = 1, \quad \lambda_j\in\mathbb K
\end{align}
an \textit{affine combination}. Points $a^{(1)},\ldots,a^{(k)}\in\mathbb{A}^n_{\mathbb{K}}$ are called \textit{affinely independent} if the only linear combination $\lambda_1 a^{(1)} + \ldots + \lambda_k a^{(k)} = 0$ with $\sum_{j=1}^k \lambda_j = 0$ is the trivial one $\lambda_1=\ldots=\lambda_k=0$, and \textit{affinely dependent} otherwise. Similarly, for a subset $S\subseteq \mathbb K$ we denote by
\begin{align}
\gls{affineHull} := \left\{\lambda_1 a^{(1)} + \ldots + \lambda_k a^{(k)} \,\Big\rvert\, \lambda_j \in S, \ \sum_{j=1}^k \lambda_j = 1\right\}
\end{align}
the \textit{affine span} or \textit{affine hull} generated by the elements $a^{(1)},\ldots,a^{(k)}\in\mathbb{A}^n_{\mathbb{K}}$ over $S$. A discrete subgroup of an affine space $\mathbb{A}^n_{\mathbb{K}}$ is called an \textit{affine lattice} if it spans the full space $\mathbb{A}^n_{\mathbb{K}}$. Further, a map $f: \mathbb{A}^n_{\mathbb{K}} \rightarrow \mathbb A_{\mathbb K}^m$ between two affine spaces is called an \textit{affine transformation} if it preserves all affine combinations, i.e.\ $f(\sum_j \lambda_j a^{(j)}) = \sum_j \lambda_j f(a^{(j)})$ whenever $\sum_j \lambda_j = 1$. Every affine map $f(a) = l(a) + b$ can be split into a linear map $l(a)$ and a translation $b\in \mathbb A_{\mathbb K}^m$. \bigskip
Since every affine hull is a translated linear hull, we define the dimension of an affine hull to be the dimension of its corresponding linear hull. However, there are at most $n+1$ affinely independent elements in $\mathbb{A}^n_{\mathbb{K}}$. A set of $n+1$ affinely independent elements $a^{(0)},\ldots,a^{(n)}$ is called a \textit{basis} or a \textit{barycentric frame} of the affine space $\mathbb{A}^n_{\mathbb{K}}$, since it spans the whole affine space $\mathbb{A}^n_{\mathbb{K}}$ over $\mathbb K$. Thus, for a given basis $a^{(0)},\ldots,a^{(n)}$ we can write every element $a\in \mathbb{A}^n_{\mathbb{K}}$ as an affine combination of that basis and we will call the corresponding tuple $(\lambda_0,\ldots,\lambda_{n})$ the \textit{barycentric coordinates} of $a$. \bigskip
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=2]
\draw[thick] (0,-1,0) -- (0,-.5,0);
\draw[thick,fill=white] (-.5,-.5,-1.5) -- ++(2,0,0) -- ++(0,0,2) -- ++(-2,0,0) -- cycle;
\draw[thick] (0,-.5,0) -- (0,0,0);
\draw[thick,fill=white] (-.5,0,-1.5) -- ++(2,0,0) -- ++(0,0,2) -- ++(-2,0,0) -- cycle;
\draw[thick] (0,0,0) -- (0,0.5,0);
\draw[thick,fill=white] (-.5,.5,-1.5) -- ++(2,0,0) -- ++(0,0,2) -- ++(-2,0,0) -- cycle;
\draw[thick] (0,.5,0) -- (0,1,0);
\draw[thick,fill=white] (-.5,1,-1.5) -- ++(2,0,0) -- ++(0,0,2) -- ++(-2,0,0) -- cycle;
\draw[thick,->] (0,1,0) -- (0,2,0);
\coordinate[circle, fill, inner sep = 1pt, label=right:$0$] (O) at (0,0,0);
\coordinate[label=$\mathbb{A}^n_{\mathbb{K}}$] (l1) at (2.4,2,2);
\coordinate[label=$\mathbb{A}^n_{\mathbb{K}}$] (l2) at (2.4,1.5,2);
\coordinate[label=$\mathbb K^n$] (l3) at (2.4,1,2);
\coordinate[label=$\mathbb{A}^n_{\mathbb{K}}$] (l4) at (2.4,.5,2);
\coordinate[label=$\vdots$] (l5) at (2.4,2.5,2);
\end{tikzpicture}
\end{center}
\caption[Embedding of an affine space $\mathbb{A}^n_{\mathbb{K}}$ in a vector space $\mathbb K^{n+1}$]{Embedding of affine spaces $\mathbb{A}^n_{\mathbb{K}}$ in a vector space $\mathbb K^{n+1}$.} \label{fig:affineEmbedding}
\end{figure}
These coordinates indicate that we can naturally identify the affine space $\mathbb{A}^n_{\mathbb{K}}$ with a hyperplane in the vector space $\mathbb K^{n+1}$. Thus, we consider the vector space $\mathbb K^{n+1} = \mathbb K^n \cup (\mathbb K^* \times \mathbb{A}^n_{\mathbb{K}})$ consisting of the slice $\mathbb K^n$ containing the origin and the remaining slices, each corresponding to an affine space $\mathbb{A}^n_{\mathbb{K}}$, see also \cref{fig:affineEmbedding}. Since all slices which do not contain the origin behave the same, we can without loss of generality identify $\mathbb{A}^n_{\mathbb{K}}$ with the slice $\{1\}\times \mathbb{A}^n_{\mathbb{K}}$ of $\mathbb K^{n+1}$. Therefore, we can accomplish the embedding by adding an extra coordinate
\begin{align}
a = (a_1,\ldots,a_n) \mapsto (1,a_1,\ldots,a_n) \label{eq:homogenization} \point
\end{align}
This will enable us to treat affine objects with the methods of linear algebra. Since points lying on a common hyperplane of $\mathbb K^{n+1}$ correspond to exponents of quasi-homogeneous polynomials, we will call the map of \cref{eq:homogenization} \textit{homogenization}. For a finite subset of points $A=\{a^{(1)},\ldots,a^{(N)}\}\subset \mathbb{A}^n_{\mathbb{K}}$, we will write $\mathcal A = \{(1,a^{(1)}),\ldots,(1,a^{(N)})\} \subset \mathbb K^{n+1}$ for its homogenized version. Whenever it is convenient, we will denote by $\mathcal{A}$ also the $(n+1)\times N$ matrix collecting the elements of the subset $\mathcal{A}\subset\mathbb K^{n+1}$ as columns. \bigskip
Closely related to the affine space is the projective space. The \textit{projective space} $\gls{PnK}$ is the set of equivalence classes in $\mathbb K^{n+1}\setminus\{0\}$, where two elements $p,q\in \mathbb K^{n+1}\setminus\{0\}$ are equivalent if there exists a number $\lambda\in\mathbb K^*$ such that $p=\lambda q$. Thus, points of the projective space can be described by homogeneous coordinates. A point $p\in\mathbb{P}^n_{\mathbb{K}}$ is associated to the homogeneous coordinates $\gls{homogeneousCoordinates}$ if an arbitrary element of the equivalence class of $p$ is described in the vector space $\mathbb K^{n+1}$ by the coordinates $(s_0,\ldots,s_n)$. Note that the homogeneous coordinates are not unique, as they can be multiplied by any element $\lambda\in\mathbb K^*$. \bigskip
Furthermore, we can decompose the projective space $\mathbb{P}^n_{\mathbb{K}}= \mathbb{A}^n_{\mathbb{K}} \cup \mathbb P^{n-1}_{\mathbb K}$ into an affine space and a projective space of lower dimension. In coordinates this decomposition means that in the case $s_0\neq 0$ we can choose without loss of generality $s_0=1$, which defines the aforementioned affine hyperplane in $\mathbb K^{n+1}$. The points with $s_0=0$ are sometimes referred to as ``points at infinity'', since the coordinates $\left[1 : \frac{s_1}{s_0} : \ldots : \frac{s_n}{s_0} \right]$ diverge as $s_0\rightarrow 0$; they form the projective space of lower dimension described by the remaining coordinates $[s_1 : \ldots : s_n]$. Analogously to affine varieties we can define projective varieties as subsets of $\mathbb{P}^n_{\mathbb{K}}$. Affine varieties can be completed to projective varieties by adding those ``points at infinity''.
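For illustration, the simplest case is the projective line $\mathbb P^1_{\mathbb C} = \mathbb A^1_{\mathbb C}\cup\mathbb P^0_{\mathbb C}$: a point $[s_0:s_1]$ with $s_0\neq 0$ is identified with $\frac{s_1}{s_0}\in\mathbb A^1_{\mathbb C}$, while the single remaining point $[0:1]$ is the point at infinity, which reproduces the familiar picture of the Riemann sphere.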
\section{Convex polyhedra and triangulations} \label{sec:ConvexPolytopes}
Feynman integrals have a very rich combinatorial structure owing to the underlying Feynman graphs. In the parametric perspective on Feynman integrals this combinatorics is carried by the Symanzik polynomials. Focussing on their extreme monomials, we will consider the Newton polytopes of Symanzik polynomials. Thus, we will uniquely attach a convex polytope to every Feynman integral and it will turn out that the analytic structure of Feynman integrals depends on the shape of that polytope. Thus, the study of those polytopes will lead us to the convergence region of Feynman integrals (\cref{thm:FIconvergence}), as well as to the poles and the meromorphic continuation in the spacetime dimension $d$ and the indices of propagators $\nu$ (\cref{thm:meromorphicContinuation}), which are useful in dimensional and analytic regularization. Moreover, a triangulation of these polytopes will result in series representations of Feynman integrals in terms of Horn hypergeometric series (\cref{thm:FeynSeries}). And finally the set of all triangulations of these polytopes determines the extreme monomials of the defining polynomial of the Landau variety (\cref{thm:NewtSec}) and provides a nice factorization of it (\cref{thm:pAdet-factorization}).
In the following subsection we start by recalling the most basic terms of convex polytopes, which generalize the intuitive perspective to higher dimensions. We will continue with a more technical perspective on convex polyhedra, which will reveal some structure of the underlying oriented matroids. In the last two parts of this section we will give an overview of triangulations and the structure of the set of all triangulations. The whole section is mostly based on \cite{ZieglerLecturesPolytopes1995,ThomasLecturesGeometricCombinatorics2006,DeLoeraTriangulations2010}, to which we refer for further reading. We also recommend \cite{BrondstedIntroductionConvexPolytopes1983, HenkBasicPropertiesConvex2017, BrunsPolytopesRingsKtheory2009, GrunbaumConvexPolytopesSecond2003, SchrijverTheoryLinearInteger1986} for more detailed descriptions.
\subsection{Convex polytopes from point configurations} \label{ssec:PolytopesPointConfigs}
Let $\mathbb A_{\mathbb R}^n$ be the real affine $n$-space. By adding the standard inner product $ u^\top\cdot v := \sum_{i=1}^n u_i v_i$ to the associated vector space $\mathbb R^n$ of $\mathbb A_{\mathbb R}^n$, the real affine $n$-space $\mathbb A_{\mathbb R}^n$ becomes the \textit{affine Euclidean space}. By a slight abuse of notation we will denote such an affine Euclidean space also by $\mathbb R^n$.
Every finite subset of labelled points in that space will be called a \textit{point configuration} $A \subset\mathbb R^n$. In general, points can be repeated in a point configuration, but labels are unique. We will usually use the natural numbers $\{1,\ldots,N\}$ to label those points. Arranging the elements of $A$ as columns produces a matrix, which we will also denote by $A\in\mathbb R^{n\times N}$. Although the point configuration is invariant under a reordering of columns, we will usually sort them in ascending order of their labels for convenience. \bigskip
For a point configuration $A=\{a^{(1)},\ldots,a^{(N)}\}\subset\mathbb R^n$ of $N$ points we will call
\begin{align}
\gls{Conv} := \left\{ \lambda_1 a^{(1)} + \ldots + \lambda_N a^{(N)} \,\Big\rvert\, \lambda_j \in\mathbb R, \lambda_j \geq 0, \sum_{j=1}^N \lambda_j = 1 \right\} \subset \mathbb R^n \label{eq:VPolytope}
\end{align}
the \textit{convex hull} of $A$ or the \textit{convex polytope} generated by $A$. If additionally all points $a^{(j)} \in \mathbb Z^n$ lie on an affine lattice, $\operatorname{Conv}(A)$ is called a \textit{convex lattice polytope}. As we will never consider any non-convex polytopes, we will call them simply ``polytopes'' and ``lattice polytopes'', respectively.
Thus, convex hulls can be seen as a special case of affine hulls and we consequently set the \textit{dimension} of a polytope $P = \operatorname{Conv} (A)=\operatorname{Aff}_{\mathbb R_+}(A)$ to be the dimension of the affine hull $\operatorname{Aff}_{\mathbb R}(A)$ defined in \cref{sec:affineSpace}. By using the homogenization of $A$ described in \cref{sec:affineSpace} (see also \cref{ssec:vectorConfigurations}), we can relate the dimension of a polytope to a matrix rank
\begin{align}
\dim (\operatorname{Conv} (A)) = \operatorname{rank} (\mathcal{A}) - 1 \point
\end{align}
If a polytope $P\subset \mathbb R^n$ has dimension $n$ it is called \textit{full dimensional} and \textit{degenerate} otherwise. In most cases we want to assume full dimensional polytopes and adjust the dimension of the ambient space $\mathbb R^n$ if necessary.
The simplest possible polytope of dimension $n$ consists of $n+1$ vertices. We will call such a polytope an $n$-\textit{simplex}. By the \textit{standard $n$-simplex} we understand the full dimensional simplex generated by the standard unit vectors of $\mathbb R^{n}$ and the origin. \bigskip
A subset $\tau\subseteq P$ of a polytope is called a \textit{face} of $P$ if there exists a linear map $\phi : \mathbb R^n \rightarrow \mathbb R$ such that \begin{align}
\gls{tauface} = \big\{ p \in P \, \rvert \, \phi(p) \geq \phi(q) \quad\text{for all } q\in P \ \big\} \subseteq P \comma \label{eq:facedef}
\end{align}
i.e.\ the map $\phi$ is maximized on $P$ exactly at the points of $\tau$. Every face $\tau$ is itself a polytope generated by a subset of points of $A$. Whenever it is convenient we will identify with $\tau$ also this subset of $A$, as well as the subset of $\{1,\ldots,N\}$ labelling the elements of $A$ corresponding to this subset. Note that by construction the full polytope $P$ and the empty set are faces of $P$ as well. Faces of dimension $0$, $1$ and $\dim(P)-1$ will be called \textit{vertices}, \textit{edges} and \textit{facets}, respectively. We will denote the set of all vertices by $\gls{Vert}$. Furthermore, the smallest face containing a point $p\in P$ is called the \textit{carrier} of $p$ and the polytope without its proper faces is called the \textit{relative interior} $\gls{relint}$.\bigskip
For full dimensional polytopes $P$ we want to introduce a \textit{volume} $\gls{vol}\in\mathbb R_{\geq 0}$, which is normalized such that the standard $n$-simplex has volume equal to $1$. In other words, this volume is related to the standard Euclidean volume $\operatorname{vol}_E (P)$ by a factorial of the dimension $\operatorname{vol} (P) = n! \operatorname{vol}_E (P)$. In particular, for simplices $P_\bigtriangleup$ generated by a point configuration $A=\{a^{(1)},\ldots,a^{(n+1)}\}\subset\mathbb R^n$ the volume is given by the determinant of the homogenized point configuration $\operatorname{vol} (P_\bigtriangleup) = |\det \mathcal{A}|$. The volume of a degenerate polytope will be set to zero. When restricting to lattice polytopes $P$, the volume $\operatorname{vol}(P)\in \mathbb Z_{\geq 0}$ is always a non-negative integer.\bigskip
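For example, the simplex generated by $A = \{(0,0),(2,0),(0,1)\}\subset\mathbb R^2$ has Euclidean volume $\operatorname{vol}_E = 1$ and therefore $\operatorname{vol} = 2!\cdot 1 = 2$, in agreement with the determinant of its homogenized point configuration
\begin{align}
\left|\det \mathcal{A}\right| = \left|\det \begin{pmatrix} 1 & 1 & 1 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}\right| = 2 \point
\end{align}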
The special interest in polytopes for this work results from the following construction which connects polytopes and polynomials. For a Laurent polynomial $f(x) = \sum_{a\in A} z_a x^a\in\mathbb C[x_1^{\pm 1},\ldots,x_n^{\pm 1}]$ we define its \textit{Newton polytope}
\begin{align}
\gls{Newt} := \operatorname{Conv} \!\left(\left\{a\in A \,\rvert\, z_a\not\equiv 0\right\}\right) \subset \mathbb R^n
\end{align}
as the convex hull of its exponents. Furthermore, for a face $\tau\subseteq\operatorname{Newt}(f)$ of a Newton polytope, we define the \textit{truncated polynomial} with respect to $\tau$ as
\begin{align}
\gls{truncpoly} := \sum_{a\in A\cap \tau} z_a x^a \label{eq:truncatedPolynomialDefinition}
\end{align}
consisting only of the monomials corresponding to that face $\tau$. \bigskip
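For example, the Laurent polynomial $f(x_1,x_2) = z_1 + z_2 x_1 + z_3 x_2 + z_4 x_1 x_2$ has the unit square as its Newton polytope, $\operatorname{Newt}(f) = \operatorname{Conv}\!\left(\{(0,0),(1,0),(0,1),(1,1)\}\right)$. The linear map $\phi(\mu)=\mu_2$ is maximized on the edge $\tau$ spanned by $(0,1)$ and $(1,1)$, and the corresponding truncated polynomial is $f_\tau = z_3 x_2 + z_4 x_1 x_2$. \bigskip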
\subsection{Vector configurations and convex polyhedra} \label{ssec:vectorConfigurations}
We will slightly generalize the point configurations from the previous section. Let $\mathbb R^{n+1}$ denote the Euclidean vector space and $(\mathbb R^{n+1})^\vee := \operatorname{Hom}(\mathbb R^{n+1},\mathbb R)$ its dual vector space. A finite collection of labelled elements from $\mathbb R^{n+1}$ will be called a \textit{vector configuration} and we will denote such a vector configuration by the symbol $\mathcal{A}\subset\mathbb R^{n+1}$. As described in \cref{sec:affineSpace} we can always embed $n$-dimensional affine spaces into $(n+1)$-dimensional vector spaces by homogenization. Thus, every homogenized point configuration is a vector configuration, even though we can also consider vector configurations not originating from a point configuration. As before we will denote by $\mathcal{A}\in\mathbb R^{(n+1)\times N}$ also the matrix constructed from the elements of $\mathcal{A}\subset\mathbb R^{n+1}$ considered as column vectors. \bigskip
The analogue of polytopes for vector configurations $\mathcal{A} = \{ a^{(1)},\ldots, a^{(N)} \} \subset\mathbb R^{n+1}$ are \textit{(convex) cones}
\begin{align}
\gls{Cone} := \left\{ \lambda_1 a^{(1)} + \ldots + \lambda_N a^{(N)} \,\Big\rvert\, \lambda_j \in\mathbb R, \lambda_j \geq 0 \right\} \subset \mathbb R^{n+1} \point
\end{align}
Similar to polytopes, we can introduce faces of cones, i.e.\ subsets where a linear functional $\phi\in(\mathbb R^{n+1})^\vee$ is maximized or minimized. In contrast to the polytopal faces, maximal or minimal values will always be equal to zero. Furthermore, the empty set will not always be a face. Faces of dimension $1$ will be called \textit{rays}. \bigskip
A fundamental result of convex geometry is the following statement which is often also called the ``main theorem''.
\begin{theorem}[Weyl-Minkowski theorem for cones \cite{ZieglerLecturesPolytopes1995}] \label{thm:WeylMinkowski}
A subset $C\subseteq\mathbb R^{n+1}$ is a convex cone $C=\operatorname{Cone}(\mathcal{A})$ if and only if it is an intersection of finitely many closed linear halfspaces
\begin{align}
C = P(M,0) := \left\{ \mu\in\mathbb R^{n+1} \,\rvert\, M \mu \leq 0 \right\} \subseteq \mathbb R^{n+1} \comma
\end{align}
where $M\in\mathbb R^{k\times (n+1)}$ is a real matrix and $M\mu \leq 0$ is understood componentwise $(M\mu)_i \leq 0$ for all $i=1,\ldots,k$.
\end{theorem}
Therefore, there are two equivalent representations of cones. \Cref{thm:WeylMinkowski} can be proven iteratively via Fourier-Motzkin elimination \cite{ZieglerLecturesPolytopes1995}. Note that the alternative representation of cones as intersections of halfspaces also shows that the intersection of two cones is again a cone. \bigskip
From \cref{thm:WeylMinkowski} one can also derive Farkas' lemma, which is known in many variants and which is very useful when working with inequalities.
\begin{lemma}[Farkas' lemma, see e.g.\ \cite{ZieglerLecturesPolytopes1995}] \label{lem:Farkas}
Let $\mathcal{A}\in\mathbb R^{(n+1)\times N}$ be an arbitrary matrix and $b\in\mathbb R^{n+1}$. Then precisely one of the following assertions is true
\begin{enumerate}[i)]
\item there exists a vector $\lambda\in\mathbb R^N$ such that $\mathcal{A}\lambda = b$ and $\lambda\geq 0$
\item there exists a vector $m\in\mathbb R^{n+1}$ such that $m^\top\mathcal{A} \leq 0$ and $m^\top b > 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof roughly follows \cite{SchrijverTheoryLinearInteger1986}. Note first that not both statements can be true, as
\begin{align}
0 < m^\top b = m^\top (\mathcal{A} \lambda) = (m^\top \mathcal{A}) \lambda \leq 0
\end{align}
gives a contradiction.
The first statement i) states that $b$ lies in the cone $C = \operatorname{Cone}(\mathcal{A})$. From \cref{thm:WeylMinkowski} we know that there are vectors $m_1,\ldots,m_k\in\mathbb R^{n+1}$ such that $C = \{\mu\in\mathbb R^{n+1} \,\rvert\, m_1^\top\mu \leq 0, \ldots, m_k^\top\mu \leq 0\} $. As every column of $\mathcal{A}$ corresponds to a point in the cone, we surely have $m_i^\top\mathcal{A} \leq 0$ for all $i=1,\ldots,k$. On the other hand, i) is false if and only if $b\notin C$, and in that case there has to be at least one $m_i$ such that $m_i^\top b > 0$.
\end{proof}
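In practice, the two alternatives of Farkas' lemma can be distinguished by a linear feasibility problem: alternative i) holds if and only if the linear program below is feasible, otherwise alternative ii) guarantees the existence of a separating vector $m$. A minimal sketch using \softwareName{scipy}, with an arbitrary illustrative choice of $\mathcal{A}$ and $b$:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Decide which alternative of Farkas' lemma holds for given A and b:
# (i) there exists lambda >= 0 with A lambda = b  <=>  the LP is feasible.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=b,
              bounds=[(0, None)] * A.shape[1])
print("alternative i)" if res.success else "alternative ii)")
\end{verbatim}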
A vector configuration $\mathcal{A}$ is called \textit{acyclic} (occasionally also \textit{pointed}) if there is no non-negative dependence, i.e.\ there is no $\lambda\in\mathbb R^{N}_{\geq 0}\setminus\{0\}$ which satisfies $\mathcal{A} \lambda = 0$. By construction, every homogenized point configuration is acyclic, since every dependence vector $\lambda$ has to satisfy $\sum_{j=1}^N \lambda_j = 0$, which allows no solution $\lambda\in\mathbb R^{N}_{\geq 0}\setminus\{0\}$. In contrast to acyclic configurations, we call $\mathcal{A}$ \textit{totally cyclic} if the cone $\operatorname{Cone}(\mathcal{A})$ equals the linear span of $\mathcal{A}$. By means of the following variant of Farkas' lemma we can give an alternative characterization of acyclic vector configurations.
\begin{cor} \label{cor:farkas}
Let $\mathcal{A}\in\mathbb R^{(n+1)\times N}$ be a real matrix. Then precisely one of the two assertions is true
\begin{enumerate}[i)]
\item there exists a vector $\lambda \in \mathbb R^N$ with $\mathcal{A}\lambda=0$, $\lambda_j \geq 0$ and $\lambda \neq 0$
\item there exists a linear functional $h\in(\mathbb R^{n+1})^\vee$ such that $h\mathcal{A} > 0$
\end{enumerate}
\end{cor}
\begin{proof}
Let $t>0$ be an arbitrary positive real number. Then the first statement i) can be formulated as
\begin{align}
& \exists \lambda \in \mathbb R^N \text{ such that } \mathcal{A}\lambda \leq 0, \, - \mathcal{A} \lambda \leq 0 ,\, \lambda \geq 0 \text{ and } (t,\ldots,t) \cdot \lambda >0 \nonumber\\
\Leftrightarrow & \exists \lambda\in\mathbb R^N \text{ such that } \lambda^\top \cdot (\mathcal{A}^\top,-\mathcal{A}^\top,-\mathbbm{1})\leq 0 \text{ and } t \ \lambda^\top \mathbf{1} > 0
\end{align}
where $\mathbbm{1}$ denotes the identity matrix and $\mathbf{1}=(1,\ldots,1)^\top$ denotes the column vector of ones. Now we can apply Farkas' \cref{lem:Farkas}. Therefore, the first statement is equivalent to
\begin{align}
\Leftrightarrow & \nexists x,y\in\mathbb R^{n+1},z\in\mathbb R^N \text{ such that } \mathcal{A}^\top x - \mathcal{A}^\top y - \mathbbm{1} z = t \ \mathbf{1} \text{ and } x\geq 0, y\geq 0, z\geq 0 \nonumber\\
\Leftrightarrow & \nexists \widetilde h\in\mathbb R^{n+1}, z\in\mathbb R^N \text{ such that } \mathcal{A}^\top \widetilde h = t \ \mathbf{1} + \mathbbm{1} z \text{ and } z\geq 0 \point
\end{align}
This shows the assertion, as we can choose any $t > 0$.
\end{proof}
Therefore, acyclic configurations can also be characterized by the existence of a linear functional $h$ with $h\mathcal{A} > 0$. In other words, every acyclic vector configuration can be scaled in such a way that it becomes a homogenized point configuration. We visualize this fact in \cref{fig:acyclicCone}. Thus, we can dehomogenize acyclic vector configurations in the following way. Let $h\in(\mathbb R^{n+1})^\vee$ be any linear functional with $h\mathcal{A}>0$. By scaling the vectors of $\mathcal{A} = \{a^{(1)},\ldots,a^{(N)}\}$ with the map $a^{(j)} \mapsto \frac{a^{(j)}}{h\cdot a^{(j)}}$ we obtain a vector configuration describing points on the hyperplane $\{\mu\in\mathbb R^{n+1} \,\rvert\, h \mu = 1\} \cong \mathbb A^n_{\mathbb R}$, which can be considered as an affine point configuration in $\mathbb R^n$. \bigskip
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=2]
\coordinate (O) at (0.5,0);
\coordinate (A) at (.8,1.4);
\coordinate (B) at (1.8,1.2);
\coordinate (C) at (2.9,1.5);
\coordinate (D) at (2,1.8);
\coordinate (E) at (1.3,1.7);
\draw[thick] (0,1) -- (3,1) -- (4,2) -- (1,2) -- (0,1);
\draw[thick] (A) -- (B) -- (C) -- (D) -- (E) -- (A);
\coordinate (AX) at ($(O)!1.8!(A)$);
\coordinate (BX) at ($(O)!2!(B)$);
\coordinate (CX) at ($(O)!1.5!(C)$);
\coordinate (DX) at ($(O)!1.5!(D)$);
\coordinate (EX) at ($(O)!1.5!(E)$);
\draw[thick,->] (O) -- (AX);
\draw[thick,->] (O) -- (BX);
\draw[thick,->] (O) -- (CX);
\draw[thick,->] (O) -- (DX);
\draw[thick,->] (O) -- (EX);
\draw[thick,->] (.3,1.1) -- ++(0,.3);
\coordinate[label=right:$h$] (h) at (.3,1.25);
\end{tikzpicture}
\end{center}
\caption[Acyclic vector configurations]{Example of an acyclic vector configuration with five vectors in $\mathbb R^3$. We can see the two characterizations of acyclic vector configurations. On the one hand, the only non-negative linear combination of these vectors resulting in the origin is the trivial combination. On the other hand, we can construct a hyperplane intersecting every vector in exactly one point. Thus, we can scale an acyclic vector configuration in such a way that it becomes a homogenized point configuration generating a polytope.}
\label{fig:acyclicCone}
\end{figure}
By the homogenization and dehomogenization procedures we can transfer \cref{thm:WeylMinkowski} from cones to polytopes. We call an intersection of finitely many closed halfspaces in $\mathbb R^n$
\begin{align}
\gls{PMb} := \left\{ \mu\in\mathbb R^n \,\rvert\, M \mu \leq b \right\} \subseteq \mathbb R^n \label{eq:HPolytope}
\end{align}
with $M \in \mathbb R^{k\times n}$ and $b\in\mathbb R^k$ a \textit{(convex) polyhedron}. We will usually assume that \cref{eq:HPolytope} contains no redundant inequalities, i.e.\ $P(M,b)$ changes if we remove an inequality. It can be shown that bounded polyhedra are equivalent to polytopes, which is also known as the Weyl-Minkowski theorem for polytopes \cite{ZieglerLecturesPolytopes1995}. Thus, we have two different ways to represent polytopes: either by their vertices \cref{eq:VPolytope} or by their facets \cref{eq:HPolytope}. \bigskip
The conversion from the vertex representation to the facet representation is called the facet enumeration problem, and the converse direction the vertex enumeration problem. There are various implementations providing algorithms for these enumeration problems, e.g.\ the C-library \softwareName{lrslib} \cite{AvisLrslib} or the program \softwareName{polymake} \cite{GawrilowPolymakeFrameworkAnalyzing2000}. In \cref{sec:SoftwareTools} we provide tips for the usage of these programs.
Although there is no analytic solution of the enumeration problem in general, for simplices it is nearly trivial. For the conversion in that case let $A = \{ a^{(1)},\ldots,a^{(n+1)}\} \subset \mathbb R^n$ be the generating set of a full dimensional simplex. Therefore, we have $\det\mathcal{A}\neq 0$ and we can invert the homogenized point configuration $\mathcal{A}$
\begin{align}
P_\bigtriangleup = \operatorname{Conv}(A) = \left\{\mu\in\mathbb R^n \, \left| \, \begin{pmatrix} 1 \\ \mu \end{pmatrix} = \mathcal{A} k, k \geq 0 \right\}\right. = \left\{ \mu\in\mathbb R^n \, \left| \, \mathcal{A}^{-1} \begin{pmatrix} 1 \\ \mu \end{pmatrix} \geq 0 \right\}\right.
\end{align}
which transforms the vertex representation into the facet representation. Furthermore, a vertex of a simplex is the intersection of $n$ facets. Thus, a vertex $v$ of $P_\bigtriangleup$ solves $n$ of the rows of the $(n+1)\times(n+1)$ linear system $\mathcal{A}^{-1} x = 0$. Clearly, the $i$-th column of $\mathcal{A}$ solves the system $\mathcal{A}^{-1} x =0$ except for the $i$-th row. Hence, the intersection of $n$ facets is the $i$-th column of $\mathcal{A}$, where $i$ is the index belonging to the facet which is not involved in the intersection. For a simplex this means that the $i$-th facet is opposite to the $i$-th vertex. Therefore, the $i$-th row of $\mathcal{A}^{-1} \begin{psmallmatrix} 1 \\ \mu \end{psmallmatrix} = 0$ describes the facet which is opposite to the vertex defined by the $i$-th column of $\mathcal{A}$. \bigskip
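Numerically, this conversion amounts to a single matrix inversion. A minimal sketch using \softwareName{numpy}, with an arbitrary illustrative simplex:
\begin{verbatim}
import numpy as np

# Vertices of a full dimensional simplex in R^2 as columns of A.
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
calA = np.vstack([np.ones(A.shape[1]), A])   # homogenization
M = np.linalg.inv(calA)

# The i-th row of M (1, mu)^T >= 0 is the facet opposite to the i-th vertex.
mu = np.array([0.5, 0.25])                   # a test point
print(M @ np.concatenate(([1.0], mu)) >= 0)  # all True <=> mu lies in the simplex
\end{verbatim}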
\subsection{Gale duality} \label{ssec:GaleDuality}
In the previous section we have seen that the linear dependences determine whether a vector configuration is acyclic or not. However, linear dependences are much more powerful and reveal the structure of an underlying oriented matroid. We will give a very short overview of the connection to oriented matroids and refer to \cite{ZieglerLecturesPolytopes1995} for a more detailed description.\bigskip
As before, let $\mathcal{A} = \{a^{(1)},\ldots,a^{(N)} \} \subset\mathbb R^{n+1}$ be a vector configuration spanning $\mathbb R^{n+1}$ as a vector space, for example a full dimensional homogenized point configuration. The set of all \textit{linear dependences} between those vectors generates a linear subspace of $\mathbb R^N$
\begin{align}
\gls{Dep} := \left\{ \lambda\in\mathbb R^N \,\Big\vert\, \sum_{j=1}^N \lambda_j a^{(j)} = 0 \right \} \subseteq \mathbb R^N \label{eq:linearDep}
\end{align}
of dimension $r:=N-n-1$, which is nothing else than the kernel of $\mathcal{A}$. As seen above, we are mainly interested in whether the components of these linear dependences are positive, zero or negative. Therefore, let $\operatorname{sign} : \mathbb R^N \rightarrow \{-,0,+\}^N$ be the componentwise sign function, and for any vector $\lambda\in\mathbb R^N$ we call the set of its non-zero components its \textit{support}. The elements of $\operatorname{sign}(\operatorname{Dep}(\mathcal{A}))$ are called the \textit{signed vectors of $\mathcal{A}$}. Furthermore, the elements of $\operatorname{sign}(\operatorname{Dep}(\mathcal{A}))$ having minimal, non-empty support are called the \textit{signed circuits of $\mathcal{A}$}. \bigskip
As a further application of Farkas' lemma we can read off the face structure from the subspace of linear dependences.
\begin{lemma}[{similar results can be found e.g.\ in \cite[ch. 5.4]{GrunbaumConvexPolytopesSecond2003} or \cite[ch. 5]{ThomasLecturesGeometricCombinatorics2006}}] \label{lem:face-kernel}
Let $A=\{a^{(1)},\ldots,a^{(N)}\}\subset\mathbb R^n$ be a point configuration and $\mathcal{A}\subset\mathbb R^{n+1}$ its homogenization. Then $\tau$ is a (proper) face of $\operatorname{Conv}(A)$ if and only if there is no positive dependence, i.e.\ there is no $l\in\operatorname{Dep}(\mathcal{A})$ with $l_j\geq 0$ for $j\notin\tau$ and $\{l_j\}_{j\notin\tau}\neq 0$.
\end{lemma}
\begin{proof}
Let $A_\tau = \{ a^{(j)} \}_{j\in\tau}$ and $A_{\bar\tau} = \{ a^{(j)} \}_{j\notin\tau}$ be the subsets (matrices) collecting the elements (columns) corresponding to $\tau$ and its complement, respectively. We use the same convention for $l_\tau$ and $l_{\bar\tau}$. The existence of a positive dependence means
\begin{align}
\exists l \in \mathbb R^N \text{ such that } A_\tau l_\tau + A_{\bar\tau} l_{\bar\tau} = 0,\ \sum_{j=1}^N l_j = 0,\, l_{\bar\tau} \geq 0 \text{ and } l_{\bar\tau} \neq 0 \comma
\end{align}
where $l_{\bar\tau}\geq 0$ is understood componentwise. By writing $l_\tau = x - y$ with $x\geq 0$ and $y\geq 0$ we can reformulate this to
\begin{align}
\Leftrightarrow \exists x,y\in\mathbb R^{|\tau|},\ l_{\bar\tau}\in\mathbb R^{|{\bar\tau}|},\ & r\in\mathbb R \text{ with } x\geq 0,\ y\geq 0,\ l_{\bar\tau}\geq 0,\ r>0 \text{ such that} \nonumber\\
& \begin{pmatrix}
A_\tau & -A_\tau & A_{\bar\tau} \\
\mathbf{1}^\top & -\mathbf{1}^\top &\mathbf{1}^\top \\
0 & 0 &\mathbf{1}^\top
\end{pmatrix} \begin{pmatrix} x \\ y \\ l_{\bar\tau} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ r \end{pmatrix} \point
\end{align}
Hence, we can apply \cref{lem:Farkas}. Thus, the non-existence of a positive dependence is equivalent to
\begin{align}
&\exists m\in\mathbb R^n, \exists s,t\in\mathbb R \text{ such that } m^\top A_\tau = s \mathbf{1}^\top,\, m^\top A_{\bar\tau} \leq (s-t) \mathbf{1}^\top \text{ and } t>0 \nonumber \\
\Leftrightarrow & \exists m\in\mathbb R^n, \exists s\in\mathbb R \text{ such that } m^\top A_\tau = s \mathbf{1}^\top \text{ and } m^\top A_{\bar\tau} < s\mathbf{1}^\top \point
\end{align}
Since one can easily convince oneself that we can restrict the definition in \cref{eq:facedef} to the elements of $A$, this is nothing else than the definition of a face, where the linear functional $m^\top$ is maximized on the points of $A_\tau$.
\end{proof}
Analogously to the linear dependences, we call the subspace of linear forms
\begin{align}
\gls{Val} := \left\{ \phi \mathcal{A} \,\rvert\, \phi \in \left(\mathbb R^{n+1}\right)^\vee \right \} \subseteq \left(\mathbb R^{N}\right)^\vee
\end{align}
the space of \textit{linear evaluations}, which has dimension $n+1$. The set $\operatorname{sign}(\operatorname{Val}(\mathcal{A}))$ will be called the \textit{signed covectors of $\mathcal{A}$} and its elements with a minimal, non-empty support are called the \textit{signed cocircuits of $\mathcal{A}$}.
These signed vectors and covectors together with the signed circuits and cocircuits can be associated with an oriented matroid of $\mathcal{A}$. Without going into detail we will focus on two aspects of oriented matroids only: the duality and the operations deletion and contraction. For more information about the connection to oriented matroids we refer to \cite{ZieglerLecturesPolytopes1995}. \bigskip
Note that $\operatorname{Dep}(\mathcal{A})$ and $\operatorname{Val}(\mathcal{A})$ are orthogonal complements of each other, i.e.\ every linear functional $\phi\in\operatorname{Val}(\mathcal{A})$ vanishes on every vector of $\operatorname{Dep}(\mathcal{A})$ and vice versa. Thus, these subspaces are dual to each other, which justifies the naming ``covectors'' and ``cocircuits'' from above. We will call this duality the \textit{Gale duality} and the transformation between these spaces the \textit{Gale transform}. Thus, a Gale dual of $\mathcal{A}$ is a vector configuration $\gls{Bb}$ such that the linear dependences of $\mathcal{A}$ are the linear evaluations of $\mathcal{B}$ and vice versa.
More specifically, we will call a basis $\mathcal{B}\subset \mathbb R^N$ of the vector subspace $\operatorname{Dep}(\mathcal{A})$ a Gale dual or a Gale transform of $\mathcal{A}$. Gale duals are not unique, as we can choose any basis of the space $\operatorname{Dep}(\mathcal{A})$. However, all possible Gale duals are related by invertible linear transformations and we write $\gls{Gale}$ for the set of all Gale duals of $\mathcal{A}$. Note that if the vector configuration $\mathcal{A}$ is acyclic, every Gale dual is totally cyclic and vice versa. As before, we will associate with $\mathcal{B}$ also a matrix $\mathcal{B}\in\mathbb R^{N\times r}$. But contrary to the vector configuration $\mathcal{A}$, we regard $\mathcal{B}$ as a collection of row vectors and we denote its elements by $\{b_1,\ldots,b_N\}$. The vector configuration of the rows of $\mathcal{B}$ is called a \textit{Gale diagram}.
\begin{example} \label{ex:GaleAppell}
Consider the following point configuration of $6$ points in $\mathbb R^3$
\begin{align}
A = \begin{pmatrix}
0 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 1 & 1 \\
\end{pmatrix} \label{eq:PtConfigAppell}
\end{align}
which turns out to generate the $\mathcal{A}$-hypergeometric system of the Appell $F_1$ function. The corresponding polytope $P=\operatorname{Conv}(A)$ is depicted in \cref{fig:PolytopeAppell}.
\begin{figure}
\centering
\begin{subfigure}{.48\textwidth}
\centering
\begin{tikzpicture}[scale=1]
\coordinate[circle, fill, inner sep = 2pt, label=below:$1$] (A) at (0,0);
\coordinate[circle, fill, inner sep = 2pt, label=below:$2$] (B) at (-1.3,-1.3);
\coordinate[circle, fill, inner sep = 2pt, label=below:$3$] (C) at (2,0);
\coordinate[circle, fill, inner sep = 2pt, label=above:$4$] (D) at (0,2);
\coordinate[circle, fill, inner sep = 2pt, label=above:$5$] (E) at (-1.3,.7);
\coordinate[circle, fill, inner sep = 2pt, label=above:$6$] (F) at (2,2);
\draw[thick] (E) -- (B) -- (C) -- (F);
\draw[thick] (D) -- (E) -- (F) -- (D);
\draw[thick,dashed] (A) -- (B);
\draw[thick,dashed] (C) -- (A);
\draw[thick,dashed] (A) -- (D);
\end{tikzpicture}
\caption{Polytope} \label{fig:PolytopeAppell}
\end{subfigure}
\begin{subfigure}{.48\textwidth}
\centering
\begin{tikzpicture}[scale=2]
\draw[thick,->] (0,0) -- (1,1) node[right] {$b_1$};
\draw[thick,->] (0,0) -- (0,-1) node[below] {$b_2$};
\draw[thick,->] (0,0) -- (-1,0) node[left] {$b_3$};
\draw[thick,->] (0,0) -- (-1,-1) node[below] {$b_4$};
\draw[thick,->] (0,0) -- (0,1) node[above] {$b_5$};
\draw[thick,->] (0,0) -- (1,0) node[right] {$b_6$};
\end{tikzpicture}
\caption{Gale diagram} \label{fig:GaleDualAppell}
\end{subfigure}
\caption[Polytope and Gale diagram for Appell $F_1$ function]{Polytope and Gale diagram of the point configuration \cref{eq:PtConfigAppell}, which is related to the Appell $F_1$ function.}
\end{figure}
As pointed out above, the Gale transform is not unique. For example we can choose
\begin{align}
\mathcal{B}^\top = \begin{pmatrix}
1 & 0 & -1 & -1 & 0 & 1 \\
1 & -1 & 0 & -1 & 1 & 0
\end{pmatrix} \point
\end{align}
By means of \cref{lem:face-kernel} we can read off the faces also from the Gale dual (\cref{fig:GaleDualAppell}). The cofaces of $\operatorname{Conv}(A)$, i.e.\ the complements of faces of $\operatorname{Conv}(A)$, are precisely those subsets whose corresponding elements of the Gale dual span a cone containing the origin $0$ in its relative interior. E.g.\ $\operatorname{Cone}(b_2,b_5)$ contains the origin in its relative interior. Hence, the points $1,3,4,6$ generate a face of $\operatorname{Conv}(A)$. In contrast, $\operatorname{Cone}(b_5,b_6)$ contains the origin only on its boundary. Therefore, the points $1,2,3,4$ do not generate a face of $\operatorname{Conv}(A)$.
\end{example}
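The Gale dual of \cref{ex:GaleAppell} can also be obtained by computing a basis of the kernel of the homogenized point configuration, e.g.\ with \softwareName{sympy}. A minimal sketch:
\begin{verbatim}
from sympy import Matrix

# Homogenized point configuration of the Appell F1 example.
calA = Matrix([[1, 1, 1, 1, 1, 1],
               [0, 1, 0, 0, 1, 0],
               [0, 0, 1, 0, 0, 1],
               [0, 0, 0, 1, 1, 1]])

kernel = calA.nullspace()     # basis of Dep(A) as column vectors
B = Matrix.hstack(*kernel)    # 6 x 2 matrix; its rows form a Gale diagram
print(B.T)
\end{verbatim}
Since Gale duals are unique only up to invertible linear transformations, the computed basis may differ from the matrix $\mathcal{B}^\top$ given above by such a transformation.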
For a vector configuration $\mathcal{A}\subset\mathbb R^{n+1}$ containing an element with label $j$, we mean by the \textit{deletion} \gls{deletion} simply the vector subconfiguration where we removed the element corresponding to the label $j$. Its dual operation, the \textit{contraction} \gls{contraction}, can be understood as a projection of $\mathcal{A}$ to a hyperplane not containing the element $a^{(j)}$. Thus, let $c\in\mathbb R^{n+1}$ with $c^\top a^{(j)} \neq 0$ be a vector defining a hyperplane. Then,
\begin{align}
\contraction{\mathcal{A}}{j} := (\pi \mathcal{A})\setminus j \,\text{ with }\, \pi := \mathbbm{1} - a^{(j)} \frac{1}{c^\top \cdot a^{(j)}} c^\top
\end{align}
is such a projection to a hyperplane. As a Gale dual $\mathcal{B}$ of $\mathcal{A}$ is nothing else than a matrix satisfying $\mathcal{A} \mathcal{B} = 0$, $\mathcal{B}$ will also be a Gale dual of $\pi \mathcal{A}$. Thus, it is not hard to see the duality of deletion and contraction \cite{DeLoeraTriangulations2010}
\begin{align}
\operatorname{Gale}(\contraction{\mathcal{A}}{j}) = \operatorname{Gale}(\mathcal{A})\setminus j \point
\end{align}
\subsection{Triangulations of polyhedra} \label{ssec:TriangulationsPolyhedra}
For the combinatorial structure of Feynman graphs CW-complexes are of special significance, see e.g.\ \cite{BerghoffGraphComplexesFeynman2021}. The same holds for $\mathcal{A}$-hypergeometric functions, where we will consider subdivisions of polytopes. A \textit{subdivision} $\mathcal{S}$ of a point configuration $A\subset\mathbb R^n$ is a set of polytopes $\sigma$, which are called \textit{cells}, such that
\begin{enumerate}[(i)]
\item if $\sigma\in\mathcal{S}$ then also every face of $\sigma$ is contained in $\mathcal{S}$,
\item for all $\sigma,\tau\in\mathcal{S}$, the intersection $\sigma\cap\tau$ is either a face of both or empty,
\item the union of all $\sigma\in \mathcal{S}$ is equal to $\operatorname{Conv}(A)$.
\end{enumerate}
Note that this definition does not require all points of $A$ to be used. If additionally all cells are simplices we call the subdivision a \textit{triangulation} $\gls{Tt}$. By $\gls{hatT}$ we refer to the set of maximal cells of a triangulation $\mathcal{T}$, i.e.\ those cells which are not contained in other cells. If all $\sigma\in \widehat{\mathcal T}$ are simplices of volume $1$ (i.e.\ $|\det\mathcal{A}_\sigma|=1$) the triangulation is called \textit{unimodular}.
We say that a subdivision $\mathcal{S}$ refines $\mathcal{S}^\prime$, in symbols $\gls{refinement}$, if for every cell $c\in\mathcal{S}$ there exists a cell $c^\prime\in\mathcal{S}^\prime$ containing it, i.e.\ $c\subseteq c^\prime$. Therefore, we can organize the subdivisions by their refinements into a partially ordered set (poset). \bigskip
The underlying structure of subdivisions and triangulations are polyhedral complexes. We call a set $\mathcal K$ of polyhedra a \textit{polyhedral complex} if it satisfies
\begin{enumerate}[i)]
\item if $P\in\mathcal K$ and $F\subseteq P$ is a face of $P$, then $F\in\mathcal K$,
\item $P\cap Q$ is a face of both $P$ and $Q$ for all $P,Q\in\mathcal K$.
\end{enumerate}
Thus, a triangulation is a simplicial polyhedral complex of polytopes covering $\operatorname{Conv}(A)$. Also the set of all faces of a polytope is a polyhedral complex. A subset $\mathcal K^\prime\subseteq\mathcal K$ of a polyhedral complex $\mathcal K$ which is itself a polyhedral complex is called a \textit{polyhedral subcomplex} of $\mathcal K$. \bigskip
The definition of polyhedral complexes is not restricted to polytopes, i.e.\ bounded polyhedra. We call a polyhedral complex consisting only of cones a \textit{(polyhedral) fan}. A fan is \textit{complete} if it covers $\mathbb R^n$. In particular, we can associate a fan to the faces of a polytope as follows. Let $P\subset\mathbb R^n$ be a polytope and $x\in P$ a point of it. The set of linear functionals
\begin{align}
\gls{NPx} := \{ \phi\in(\mathbb R^n)^\vee \,\rvert\, \phi x \geq \phi y \quad \forall y\in P\}
\end{align}
is called the \textit{outer normal cone of $x$}. By considering the definition of a face \cref{eq:facedef} we see that the outer normal cone does not change for points in the relative interior of a face $\tau\subseteq P$. Consequently, we will write $N_P(\tau)$. Note that $N_P(x)$ is full dimensional if and only if $x\in\operatorname{Vert}(P)$. The collection of all outer normal cones of a polytope will be called the \textit{outer normal fan of $P$}
\begin{align}
\gls{NP} := \{ N_P(\tau) \,\rvert\, \tau \,\text{ is a face of } \,P \} = \{ N_P(x) \,\rvert\, x\in P \} \subseteq (\mathbb R^n)^\vee \point
\end{align}
Analogously, we understand by the \textit{inner normal cone} the negative of the outer normal cone, and the \textit{inner normal fan} is the set of all inner normal cones. \bigskip
After the formal description of subdivisions by polyhedral complexes, we want to show several ways to construct them. Let $A = \{a^{(1)},\ldots,a^{(N)}\}\subset\mathbb R^n$ be a point configuration and $\gls{omega} : A \rightarrow \mathbb R$ a \textit{height} function, assigning a real number to every point of $A$. The so-called lifted point configuration $A^\omega := \{ (\omega_1, a^{(1)}), \ldots, (\omega_N,a^{(N)})\}$, where $\omega_j:=\omega(a^{(j)})$, describes a polytope $P^\omega = \operatorname{Conv}(A^\omega)\subset\mathbb R^{n+1}$ in dimension $n+1$. We call a face of $P^\omega$ \textit{visible from below} or a \textit{lower face} if the face is generated by a linear functional $\phi\in(\mathbb R^{n+1})^\vee$ whose first coordinate is negative. By the projection $\pi: A^\omega\rightarrow A$, forgetting the first coordinate, we project all lower faces down to $\operatorname{Conv}(A)$ (see \cref{fig:regularTriangulation}). These projected lower faces satisfy all conditions of a subdivision and we will denote such a subdivision by\glsadd{regSubdivision}\glsunset{regSubdivision} $\mathcal{S}(A,\omega)$. Moreover, if all projected lower faces are simplices, we have constructed a triangulation. All subdivisions and triangulations which can be generated by such a lift construction are called \textit{regular subdivisions} and \textit{regular triangulations}, respectively. Regular triangulations will become of great importance in the following and it can be shown that every convex polytope always admits a regular triangulation \cite{DeLoeraTriangulations2010}.
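This lift-and-project construction can be carried out by an ordinary convex hull computation: one lifts the points, computes the hull of the lifted configuration and keeps the facets whose outward normal points downwards in the lifting direction. A minimal sketch using \softwareName{scipy}, for the illustrative point configuration $A=\{(1,0),(0,1),(1,1),(2,0)\}$ with height $\omega=(0,0,1,0)$ (for convenience the height is appended as the last instead of the first coordinate; for non-generic $\omega$ the hull algorithm may itself triangulate non-simplicial lower faces):
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

A = np.array([[1, 0], [0, 1], [1, 1], [2, 0]], dtype=float)  # points as rows
omega = np.array([0.0, 0.0, 1.0, 0.0])                       # height function

lifted = np.hstack([A, omega[:, None]])   # lift: append omega as last coordinate
hull = ConvexHull(lifted)

# Keep the facets visible from below: outward normal negative in lift direction.
lower = [list(simplex) for simplex, eq in zip(hull.simplices, hull.equations)
         if eq[-2] < 0]
print(lower)  # maximal cells (0-based labels) of the regular triangulation
\end{verbatim}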
\begin{figure}[tb]
\centering
\begin{tikzpicture}[scale=2]
\coordinate[label=below left:$a^{(1)}$] (1) at (0,0,0);
\coordinate[label=below right:$a^{(2)}$] (2) at (3.6,0,0.5);
\coordinate[label=right:$a^{(3)}$] (3) at (3,0,-2.5);
\coordinate[label=left:$a^{(4)}$] (4) at (0,0,-2);
\coordinate[label=below:$\hspace{-.4cm}a^{(5)}$] (5) at (2.5,0,-0.85);
\coordinate[label=left:$a_\omega^{(1)}$] (1h) at ($(1) + (0,2.2,0)$);
\coordinate[label=right:$a_\omega^{(2)}$] (2h) at ($(2) + (0,1.7,0)$);
\coordinate[label=above right:$a_\omega^{(3)}$] (3h) at ($(3) + (0,1.6,0)$);
\coordinate[label=above left:$a_\omega^{(4)}$] (4h) at ($(4) + (0,2,0)$);
\coordinate[label=below:$\hspace{-.7cm}a_\omega^{(5)}$] (5h) at ($(5) + (0,1,0)$);
\coordinate (I1) at (intersection of 1h--2h and 3h--5h);
\coordinate (I2) at (intersection of 1h--2h and 4h--5h);
\draw[thick] (1) -- (2) -- (3) -- (4) -- (1);
\draw[dashed] (1) -- (1h); \draw[dashed] (2) -- (2h); \draw[dashed] (3) -- (3h); \draw[dashed] (4) -- (4h); \draw[dashed] (5) -- (5h);
\fill[gray!50] (1h) -- (4h) -- (5h);
\fill[gray!70] (3h) -- (4h) -- (5h);
\fill[gray!120] (2h) -- (3h) -- (5h);
\fill[gray!90] (1h) -- (2h) -- (5h);
\draw[thick] (1h) -- (2h) -- (3h) -- (4h) -- (1h);
\draw[thick] (1h) -- (5h); \draw[thick] (2h) -- (5h); \draw[thick] (3h) -- (I1); \draw[thick] (4h) -- (I2);
\foreach \x in {1,2,3,4,5,1h,2h,3h,4h,5h} \fill (\x) circle (1pt);
\draw[thick] (1) -- (5); \draw[thick] (2) -- (5); \draw[thick] (3) -- (5); \draw[thick] (4) -- (5);
\end{tikzpicture}
\caption[Constructing regular triangulations]{Regular triangulation of a point configuration $A=\{a^{(1)},\ldots,a^{(5)}\}\subset\mathbb R^2$. We denote $a_\omega^{(j)} = (\omega_j,a^{(j)})$ for the lifted points. The projection of the lower faces of the lifted point configuration to the polytope $\operatorname{Conv}(A)$ generates the regular triangulation.} \label{fig:regularTriangulation}
\end{figure}
Similarly, we can define a regular subdivision $\mathcal{S}(\mathcal{A},\omega)$ of vector configurations $\mathcal{A}$. However, for a subdivision of a vector configuration we have to replace convex hulls by cones in the definitions above. Thus, a subdivision of a vector configuration is a polyhedral complex of cones covering $\operatorname{Cone}(\mathcal{A})$. According to \cite{SturmfelsGrobnerBasesConvex1995} we can reformulate regular subdivisions as follows. For a convenient height $\omega\in\mathbb R^N$ a regular subdivision $\mathcal{S}(\mathcal{A},\omega)$ of $\mathcal{A}=\{a^{(1)},\ldots,a^{(N)}\}$ consists of all subsets $\sigma\subseteq\{1,\ldots,N\}$ such that there exists a linear functional $c\in(\mathbb R^{n+1})^\vee$ with
\begin{align}
c \cdot a^{(j)} &= \omega_j \qquad\text{for}\quad j \in \sigma \label{eq:regTriangVector1} \\
c \cdot a^{(j)} &< \omega_j \qquad\text{for}\quad j \notin \sigma \point \label{eq:regTriangVector2}
\end{align}
For acyclic vector configurations this produces a regular subdivision for any height $\omega\in\mathbb R^N$. Moreover, this subdivision agrees with the regular subdivision of the dehomogenized vector configuration $A\subset\mathbb R^n$. If $\omega$ is sufficiently generic, $\mathcal{S}(\mathcal{A},\omega)$ will be a regular triangulation. However, for non-acyclic vector configurations not all heights will produce a subdivision, and we have to restrict ourselves to non-negative heights $\omega\in\mathbb R_{\geq 0}^N$ in that case \cite{DeLoeraTriangulations2010}.\bigskip
Another, iterative construction of triangulations of point configurations is the so-called \textit{placing triangulation}. Consider a face $\tau$ of a convex polytope $P\subset \mathbb R^n$ and an arbitrary point in the relative interior of that face $x\in\operatorname{relint} (\tau)$. The face $\tau$ is \textit{visible} from another point $p\notin \tau$ if the line segment $[x,p]$ intersects $P$ only at $x$. With the concept of visibility we can construct triangulations iteratively. Let $\mathcal{T}$ be a (regular) triangulation of the point configuration $A$. Then the set
\begin{align}
\mathcal{T}^\prime = \mathcal{T} \cup \{ \tau \cup \{p\} \, | \, \tau \in \mathcal{T} \textrm{ and } \tau \textrm{ is visible from } p \}
\end{align}
is a (regular) triangulation of the point configuration $A \cup p$ \cite{DeLoeraTriangulations2010}. Thus, by starting with a triangulation of an arbitrary point of $A$ we can add the remaining points step by step to the previous triangulation. The order in which the points are added determines the triangulation.
The placing triangulation is slightly more convenient for algorithmic use. However, more efficient algorithms make use of the connection to oriented matroids. We refer to \cite{RambauTOPCOMTriangulationsPoint2002} for a discussion of the efficiency of such algorithms and their implementation in the software \softwareName{Topcom}. Also the comprehensive software \softwareName{polymake} \cite{GawrilowPolymakeFrameworkAnalyzing2000} includes algorithms to construct triangulations. We will discuss those possibilities in \cref{sec:SoftwareTools}. \bigskip
Let $\mathcal{T}$ be a triangulation of a point configuration $A$ and $\mathcal{T}^\prime$ a triangulation of $A^\prime$. $\mathcal{T}^\prime$ is a \textit{subtriangulation} of $\mathcal{T}$, in symbols $\mathcal{T}^\prime \subseteq \mathcal{T}$, if every cell of $\mathcal{T}^\prime$ is contained in $\mathcal{T}$. In particular, this implies $A^\prime\subseteq A$. In other words, $\mathcal{T}^\prime$ is a subcomplex of $\mathcal{T}$. The placing triangulation shows that for all point configurations $A$, $A^\prime$ with $A^\prime\subseteq A$ there are (regular) triangulations $\mathcal{T}$, $\mathcal{T}^\prime$ with $\mathcal{T}^\prime\subseteq \mathcal{T}$. However, not every triangulation $\mathcal{T}$ of the point configuration $A$ has a subtriangulation corresponding to $A^\prime$. But for regular triangulations, we will always find consistent triangulations in the following sense.
\begin{lemma}[triangulations of deleted point configurations \cite{DeLoeraTriangulations2010}] \label{lem:subtriangulations}
Let $\mathcal{T}$ be a regular triangulation (subdivision) of a point configuration $A\subset \mathbb R^n$ and $a^{(j)}\in A$ a point with the label $j$. Then there is a regular triangulation (subdivision) $\mathcal{T}^\prime$ of $A\setminus j$, using all simplices (cells) of $\mathcal{T}$ which do not contain $a^{(j)}$.
\end{lemma}
Note that \cref{lem:subtriangulations} demands the triangulations to be regular. Except for several special cases, e.g.\ $n=2$, the lemma does not necessarily hold for non-regular triangulations.
\begin{figure}
\centering
\begin{subfigure}{.45\textwidth}
\centering
\begin{tikzpicture}[scale=1]
\draw[step=1cm,gray,very thin] (-0.1,-0.1) grid (2.2,1.7);
\draw[thick,->] (0,0) -- (2.2,0) node[anchor=north west] {$\mu_1$};
\draw[thick,->] (0,0) -- (0,1.7) node[anchor=south east] {$\mu_2$};
\coordinate[circle, fill, inner sep = 1pt, label=below:$1$] (A) at (1,0);
\coordinate[circle, fill, inner sep = 1pt, label=left:$2$] (B) at (0,1);
\coordinate[circle, fill, inner sep = 1pt, label=above:$3$] (C) at (1,1);
\coordinate[circle, fill, inner sep = 1pt, label=below:$4$] (D) at (2,0);
\draw[thick] (A) -- (B) -- (C) -- (D) -- (A);
\draw[thick] (B) -- (D);
\end{tikzpicture}
\caption{regular triangulation $\mathcal{S}(A,\omega)$ of $A$ generated by $\omega = (0,0,1,0)$}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\begin{tikzpicture}[scale=1]
\draw[step=1cm,gray,very thin] (-0.1,-0.1) grid (2.2,1.7);
\draw[thick,->] (0,0) -- (2.2,0) node[anchor=north west] {$\mu_1$};
\draw[thick,->] (0,0) -- (0,1.7) node[anchor=south east] {$\mu_2$};
\coordinate[circle, fill, inner sep = 1pt, label=below:$1$] (A) at (1,0);
\coordinate[circle, fill, inner sep = 1pt, label=left:$2$] (B) at (0,1);
\coordinate[circle, fill, inner sep = 1pt, label=above:$3$] (C) at (1,1);
\coordinate[circle, fill, inner sep = 1pt, label=below:$4$] (D) at (2,0);
\draw[thick] (A) -- (B) -- (C) -- (D) -- (A);
\draw[thick] (A) -- (C);
\end{tikzpicture}
\caption{regular triangulation $\mathcal{S}(A,\omega)$ of $A$ generated by $\omega = (0,0,0,1)$}
\end{subfigure}
\caption[Example of regular triangulations]{Example of the two possible regular triangulations of the point configuration $A = \{ (1,0) ; (0,1) ; (1,1) ; (2,0) \}$ with heights $\omega$ generating those triangulations. This example turns out to describe the $1$-loop self-energy Feynman integral with one mass, see \cref{ex:1loopbubbleA}. The possible proper subtriangulations of these two triangulations are simply the single simplices themselves.}
\end{figure}
\subsection{Secondary polytopes and secondary fans} \label{ssec:secondaryPolytope}
In the last part about convex polyhedra we will study the structure of the set of all subdivisions. For every triangulation $\mathcal{T}$ of a full dimensional point configuration $A\subset\mathbb R^n$ we will introduce the \textit{weight map} (occasionally also known as \textit{GKZ-vector}) $\gls{weight} :A\rightarrow \mathbb Z_{\geq 0}$
\begin{align}
\varphi_\mathcal{T} (a) := \!\!\!\!\!\!\!\!\!\! \sum_{\substack{\sigma\in \mathcal{T} \,\,\text{s.t.}\\ a\in\operatorname{Vert}(\operatorname{Conv}(\sigma))}} \!\!\!\!\!\!\!\!\!\! \operatorname{vol} \!\left(\operatorname{Conv}(\sigma)\right) = \varphi_{\widehat{\mathcal T}} (a)
\end{align}
which is the sum of the volumes of all simplices having $a$ as a vertex. We define degenerate polytopes to have volume zero, which is the reason why we only have to consider the full dimensional simplices $\widehat{\mathcal T}$. We write $\varphi_\mathcal{T}(A) = \left(\varphi_\mathcal{T}(a^{(1)}),\ldots,\varphi_\mathcal{T}(a^{(N)})\right)$ for the image of $A$. Note that two distinct regular triangulations also have different weights $\varphi_\mathcal{T}(A)$, whereas two distinct non-regular triangulations may have the same weight \cite{DeLoeraTriangulations2010}.
The weights themselves define a further polytope of dimension $r:=N-n-1$, which is the so-called \textit{secondary polytope} $\Sigma(A)$. It is the convex hull of all weights
\begin{align}
\gls{SecPoly} := \operatorname{Conv}\!\left(\left\{\varphi_\mathcal{T}(A) \, | \, \mathcal{T} \text{ is a triangulation of } A\right\}\right) \subset\mathbb R^N \point
\end{align}
The vertices of the secondary polytope $\Sigma(A)$ correspond precisely to the regular triangulations $\mathcal{T}$ of $A$. Moreover, the refinement poset of regular subdivisions is isomorphic to the face lattice of $\Sigma(A)$ \cite{DeLoeraTriangulations2010}. We will demonstrate this with the example shown on the book cover of \cite{GelfandDiscriminantsResultantsMultidimensional1994}.
\begin{example} \label{ex:5pointsPlane}
Let $A\subset\mathbb R^2$ be the following point configuration
\begin{align}
A = \begin{pmatrix}
0 & 2 & 2 & 0 & 1 \\
0 & 0 & 2 & 2 & 1
\end{pmatrix}
\end{align}
of five points forming a rectangle in the plane. There are three possible triangulations. Note that not all points of $A$ have to be used in a triangulation. These three triangulations have the weights $\varphi_{\mathcal{T}_1} = (8,4,8,4,0)$, $\varphi_{\mathcal{T}_2} = (4,8,4,8,0)$ and $\varphi_{\mathcal{T}_3} = (4,4,4,4,8)$. These weights generate the secondary polytope $\Sigma (A)\subset\mathbb R^5$, whose actual dimension is only $2$. Thus, $\Sigma(A)$ is a $2$-dimensional triangle in $\mathbb R^5$ as depicted in \cref{fig:SecondaryPolytope5pointsPlane}. As mentioned above, the vertices of $\Sigma(A)$ correspond to the regular triangulations of $A$. The edges of $\Sigma(A)$ correspond to regular subdivisions of $A$ which have the triangulations as their only strict refinements; in this example these are the regular subdivisions consisting of $4$ points. The full secondary polytope corresponds to the trivial subdivision.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.3]
\coordinate (T1) at (0,0); \coordinate (T2) at (15,0); \coordinate (T3) at (7.5,10);
\draw[thick] (T1) -- (T2) -- (T3) -- cycle;
\coordinate (A5) at ($(T1) + (-1.5,-1.5)$);
\coordinate[circle, fill, inner sep = 1pt] (A1) at ($(A5) + (-1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (A2) at ($(A5) + (1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (A3) at ($(A5) + (1,.8)$);
\coordinate[circle, fill, inner sep = 1pt] (A4) at ($(A5) + (-1,.8)$);
\draw[thick] (A1) -- (A2) -- (A3) -- (A4) -- (A1); \draw[thick] (A1) -- (A3);
\coordinate[circle, fill, inner sep = 1pt] (B5) at ($(T1)!0.5!(T2) + (0,-1.5)$);
\coordinate[circle, fill, inner sep = 1pt] (B1) at ($(B5) + (-1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (B2) at ($(B5) + (1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (B3) at ($(B5) + (1,.8)$);
\coordinate[circle, fill, inner sep = 1pt] (B4) at ($(B5) + (-1,.8)$);
\draw[thick] (B1) -- (B2) -- (B3) -- (B4) -- (B1); \draw[thick] (B1) -- (B3);
\coordinate[circle, fill, inner sep = 1pt] (C5) at ($(T2) + (1.5,-1.5)$);
\coordinate[circle, fill, inner sep = 1pt] (C1) at ($(C5) + (-1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (C2) at ($(C5) + (1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (C3) at ($(C5) + (1,.8)$);
\coordinate[circle, fill, inner sep = 1pt] (C4) at ($(C5) + (-1,.8)$);
\draw[thick] (C1) -- (C2) -- (C3) -- (C4) -- (C1);
\draw[thick] (C1) -- (C5); \draw[thick] (C2) -- (C5); \draw[thick] (C3) -- (C5); \draw[thick] (C4) -- (C5);
\coordinate[circle, fill, inner sep = 1pt] (D5) at ($(T2)!0.5!(T3) + (2.3,-.2)$);
\coordinate[circle, fill, inner sep = 1pt] (D1) at ($(D5) + (-1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (D2) at ($(D5) + (1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (D3) at ($(D5) + (1,.8)$);
\coordinate[circle, fill, inner sep = 1pt] (D4) at ($(D5) + (-1,.8)$);
\draw[thick] (D1) -- (D2) -- (D3) -- (D4) -- (D1); \draw[thick] (D2) -- (D4);
\coordinate (E5) at ($(T3) + (0,1.5)$);
\coordinate[circle, fill, inner sep = 1pt] (E1) at ($(E5) + (-1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (E2) at ($(E5) + (1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (E3) at ($(E5) + (1,.8)$);
\coordinate[circle, fill, inner sep = 1pt] (E4) at ($(E5) + (-1,.8)$);
\draw[thick] (E1) -- (E2) -- (E3) -- (E4) -- (E1); \draw[thick] (E2) -- (E4);
\coordinate (F5) at ($(T1)!0.5!(T3) + (-2.3,-.2)$);
\coordinate[circle, fill, inner sep = 1pt] (F1) at ($(F5) + (-1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (F2) at ($(F5) + (1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (F3) at ($(F5) + (1,.8)$);
\coordinate[circle, fill, inner sep = 1pt] (F4) at ($(F5) + (-1,.8)$);
\draw[thick] (F1) -- (F2) -- (F3) -- (F4) -- (F1);
\coordinate[circle, fill, inner sep = 1pt] (G5) at ($(T1)!0.5!(T2) + (0,4.8)$);
\coordinate[circle, fill, inner sep = 1pt] (G1) at ($(G5) + (-1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (G2) at ($(G5) + (1,-.8)$);
\coordinate[circle, fill, inner sep = 1pt] (G3) at ($(G5) + (1,.8)$);
\coordinate[circle, fill, inner sep = 1pt] (G4) at ($(G5) + (-1,.8)$);
\draw[thick] (G1) -- (G2) -- (G3) -- (G4) -- (G1);
\end{tikzpicture}
\caption[Example of a secondary polytope $\Sigma(A)$]{The secondary polytope $\Sigma(A)$ from \cref{ex:5pointsPlane} together with the regular subdivisions. Vertices of the secondary polytope correspond to regular triangulations, whereas edges of $\Sigma(A)$ correspond to subdivisions into polytopes consisting of four points. The trivial subdivision is equal to the full secondary polytope.} \label{fig:SecondaryPolytope5pointsPlane}
\end{figure}
\end{example}
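The weight vectors appearing in \cref{ex:5pointsPlane} can also be checked by a short computation. The following \softwareName{Python} sketch (a minimal illustration using \softwareName{NumPy}, not taken from any of the cited software packages) implements the weight map with normalized simplex volumes; the assignment of the labels $\mathcal{T}_1$, $\mathcal{T}_2$, $\mathcal{T}_3$ to the two diagonal triangulations and to the triangulation using the interior point is a choice made here purely for illustration.
\begin{verbatim}
# computing GKZ weight vectors phi_T(A) for the five-point example
import numpy as np

A = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]      # points a^(1),...,a^(5)

def normalized_volume(simplex):
    """n! times the Euclidean volume of Conv(simplex)."""
    p0 = np.array(simplex[0], dtype=float)
    M = np.array([np.array(p, dtype=float) - p0 for p in simplex[1:]])
    return abs(np.linalg.det(M))

def weight_vector(triangulation):
    """phi_T(A) for a triangulation given by 1-based vertex labels."""
    phi = [0.0] * len(A)
    for sigma in triangulation:
        vol = normalized_volume([A[i - 1] for i in sigma])
        for i in sigma:
            phi[i - 1] += vol
    return phi

T1 = [(1, 2, 3), (1, 3, 4)]                        # diagonal 1-3
T2 = [(1, 2, 4), (2, 3, 4)]                        # diagonal 2-4
T3 = [(1, 2, 5), (2, 3, 5), (3, 4, 5), (4, 1, 5)]  # using the interior point

for name, T in [("T1", T1), ("T2", T2), ("T3", T3)]:
    print(name, weight_vector(T))
# T1 [8.0, 4.0, 8.0, 4.0, 0.0]
# T2 [4.0, 8.0, 4.0, 8.0, 0.0]
# T3 [4.0, 4.0, 4.0, 4.0, 8.0]
\end{verbatim}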
We want to consider this structure in more detail. Let $\mathcal{A}\subset\mathbb R^{n+1}$ be a vector configuration and $\mathcal{T}$ a regular subdivision of it. We call the set of heights $\omega$ generating $\mathcal{T}$ or a coarser subdivision
\begin{align}
\gls{SecCone} := \left\{\omega\in\mathbb R^N \,\rvert\, \mathcal{T} \preceq \mathcal S(\mathcal{A},\omega)\right\}
\end{align}
the \textit{secondary cone} of $\mathcal{T}$ in $\mathcal{A}$. The secondary cone $\Sigma C(\mathcal{A},\mathcal{T})$ is a polyhedral convex cone and it is full dimensional if and only if $\mathcal{T}$ is a regular triangulation \cite{DeLoeraTriangulations2010}. Furthermore, $\Sigma C(\mathcal{A},\mathcal{T}^\prime)$ is a proper face of $\Sigma C(\mathcal{A},\mathcal{T})$ if and only if $\mathcal{T} \prec \mathcal{T}^\prime$. Therefore, the relative interior of the secondary cone describes the heights generating the regular subdivision $\mathcal{T}$, i.e.\ $\operatorname{relint} \!\left(\Sigma C(\mathcal{A},\mathcal{T})\right) = \{\omega\in\mathbb R^N \,\rvert\, \mathcal{T} = \mathcal S(\mathcal{A},\omega)\}$. This set is known as \textit{relatively open secondary cone}. The set of all secondary cones is called the secondary fan
\begin{align}
\gls{SecFan} = \big\{ \Sigma C(\mathcal{A},\mathcal{T}) \,\rvert\, \mathcal{T} \text{ is a regular subdivision of } \mathcal{A} \big\} \point
\end{align}
It can be shown that $\Sigma F (\mathcal{A})$ is a polyhedral fan, which is complete if and only if $\mathcal{A}$ is acyclic. Moreover, when $\mathcal{A}$ is the homogenized point configuration of $A$, $\Sigma F(\mathcal{A})$ is the inner normal fan of the secondary polytope $\Sigma(A)$. Thus, by the secondary fan, we get the aforementioned relation between the secondary polytope and the refinement poset of subdivisions, which is encoded in the secondary cones. However, the secondary polytope is not full dimensional. Therefore, we have to find the right projection to connect the possible heights $\omega$ of subdivisions to the secondary polytope. \bigskip
It turns out that the Gale dual provides the right projection from heights to the secondary structure. Let $\mathcal{A}=\{a^{(1)},\ldots,a^{(N)}\}\subset\mathbb R^{n+1}$ be a full dimensional vector configuration and $\sigma\subseteq\{1,\ldots,N\}$ be a set of labels corresponding to a full dimensional simplex of $\mathcal{A}$. We write $\bar\sigma=\{1,\ldots,N\}\setminus \sigma$ for the complement of $\sigma$ and $\mathcal{A}_\sigma = (a^{(j)})_{j\in\sigma}$ as well as $\mathcal{A}_{\bar{\sigma}} = (a^{(j)})_{j\notin\sigma}$. Eliminating the linear forms $c$ from \cref{eq:regTriangVector1} and \cref{eq:regTriangVector2} we see that
\begin{align}
- \omega_\sigma \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} + \omega_{\bar\sigma} > 0 \label{eq:subdivisionCondGale}
\end{align}
describes a necessary and sufficient condition for $\sigma\in\mathcal{S}(\mathcal{A},\omega)$, where we use the same nomenclature $\omega_\sigma := (\omega_j)_{j\in\sigma}$ and $\omega_{\bar\sigma} := (\omega_j)_{j\notin\sigma}$. Note that $\mathcal{B} = \begin{psmallmatrix} - \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} \\ \mathbbm 1 \end{psmallmatrix}$ is a possible Gale dual of $\mathcal{A}$, where the first rows correspond to $\sigma$ and the last rows correspond to $\bar\sigma$. Hence, we can equivalently write $\omega \mathcal{B} > 0$ instead of \cref{eq:subdivisionCondGale}. This relation can be extended to any Gale dual $\mathcal{B}\in\operatorname{Gale}(\mathcal{A})$ as follows. Denote by\glsadd{betaMap}\glsunset{betaMap} $\beta : \mathbb R^N \rightarrow \mathbb R^r$ the projection $\omega \mapsto \omega \mathcal{B}$ and let $\mathcal{B}_{\bar\sigma} = (b_i)_{i\in\bar\sigma}$ be the rows of $\mathcal{B}$ corresponding to $\bar\sigma$. Then we have \cite{DeLoeraTriangulations2010}
\begin{align}
\sigma\in\mathcal{S}(\mathcal{A},\omega) \, \Leftrightarrow \, \beta(\omega) \in \operatorname{relint} \!\left(\operatorname{Cone} (\mathcal{B}_{\bar\sigma})\right) \point \label{eq:subdivisionCondGaleBeta}
\end{align}
Therefore, the (maximal) cells of regular subdivisions $\mathcal{S}(\mathcal{A},\omega)$ are directly related to the structure of the Gale diagram. Furthermore, regular subdivisions of $\mathcal{A}$ correspond to the intersections of cones spanned by subconfigurations of $\mathcal{B}$. Those intersections will also be called \textit{chambers}\footnote{To be precise, a \textit{relatively open chamber} is a minimal, non-empty intersection of cones corresponding to subconfigurations of the Gale dual $\mathcal{B}$. A \textit{closed chamber} is defined to be the closure of a relatively open chamber. The set of all closed chambers is called the \textit{chamber complex} or the \textit{chamber fan} \cite{DeLoeraTriangulations2010}.} of $\mathcal{B}$. These chambers are the projections $\beta(\Sigma C(\mathcal{A},\mathcal{T}))$ of the secondary cones into the Gale diagram. Hence, by projecting the secondary fan via the map $\beta$ we obtain the so-called chamber complex, which can thus be constructed directly from the Gale diagram. In this sense the Gale diagram contains all the essential information about the secondary fan. We will demonstrate this fact with the following example.
\begin{example} \label{ex:SecondaryAppell}
We will continue \cref{ex:GaleAppell} of six points in $\mathbb R^3$. This point configuration has $6$ triangulations, all of which are regular and unimodular \par
{\centering
\vspace{\baselineskip}
\begin{tabular}{ccp{.8cm}cc}
I: & $\{1,2,3,4\}, \{2,3,4,5\}, \{3,4,5,6\}$ & & IV: & $\{1,2,3,5\}, \{1,4,5,6\}, \{1,3,5,6\}$ \\
II: & $\{1,2,3,4\}, \{2,4,5,6\}, \{2,3,4,6\}$ & & V: & $\{2,4,5,6\}, \{1,2,4,6\}, \{1,2,3,6\}$ \\
III: & $\{3,4,5,6\}, \{1,3,4,5\}, \{1,2,3,5\}$ & & VI: & $\{1,4,5,6\}, \{1,2,3,6\}, \{1,2,5,6\}$
\end{tabular}
\vspace{\baselineskip} \par
}
\noindent where the numbers stand for the labels of the points and the curly braces denote the full dimensional simplices. The triangulations can be constructed either by hand, considering \cref{fig:PolytopeAppell}, or by using a software program, e.g.\ \softwareName{Topcom} \cite{RambauTOPCOMTriangulationsPoint2002}. Using the same Gale dual as in \cref{ex:GaleAppell}, the Gale diagram is depicted in \cref{fig:AppellGaleTriangs}, where we also denote the cones/chambers generating the triangulations. For example, the intersection of the cones $\operatorname{Cone}(b_5,b_6) \cap\operatorname{Cone}(b_1,b_6)\cap\operatorname{Cone}(b_1,b_2)$ is the chamber generating the triangulation I. Thus, any height vector $\omega$ with $\omega \mathcal{B}$ lying inside this intersection generates the triangulation I. Triangulations which are ``neighbours'' in the Gale diagram, i.e.\ whose chambers share a common facet, are said to be related by a flip. Triangulations which are related by a flip share all except two of their simplices \cite{DeLoeraTriangulations2010}.
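As a computational cross-check of this chamber description (and of \cref{eq:subdivisionCondGaleBeta}), the following \softwareName{Python}/\softwareName{NumPy} sketch locates a height vector $\omega$ in the chamber complex. It assumes the Gale dual rows $b_1,\ldots,b_6$ exactly as drawn in \cref{fig:AppellGaleTriangs}; the helper functions are ad hoc and the particular $\omega$ is merely an illustrative choice.
\begin{verbatim}
# which regular triangulation does a height vector omega generate?
import numpy as np

B = np.array([[1, 1], [0, -1], [-1, 0],
              [-1, -1], [0, 1], [1, 0]], dtype=float)   # rows b_1,...,b_6

triangulations = {                      # full dimensional simplices (labels 1..6)
    "I":   [{1, 2, 3, 4}, {2, 3, 4, 5}, {3, 4, 5, 6}],
    "II":  [{1, 2, 3, 4}, {2, 4, 5, 6}, {2, 3, 4, 6}],
    "III": [{3, 4, 5, 6}, {1, 3, 4, 5}, {1, 2, 3, 5}],
    "IV":  [{1, 2, 3, 5}, {1, 4, 5, 6}, {1, 3, 5, 6}],
    "V":   [{2, 4, 5, 6}, {1, 2, 4, 6}, {1, 2, 3, 6}],
    "VI":  [{1, 4, 5, 6}, {1, 2, 3, 6}, {1, 2, 5, 6}],
}

def in_relint_cone(p, rows):
    """Is p in the relative interior of the cone spanned by two Gale rows?"""
    s, t = np.linalg.solve(np.array(rows).T, p)
    return s > 1e-9 and t > 1e-9

def regular_triangulation(omega):
    p = omega @ B                       # beta(omega)
    for name, simplices in triangulations.items():
        complements = [sorted(set(range(1, 7)) - s) for s in simplices]
        if all(in_relint_cone(p, [B[i - 1] for i in c]) for c in complements):
            return name
    return None                         # omega generates a coarser subdivision

omega = np.array([2, 0, 0, 0, 0, 1], dtype=float)
print(regular_triangulation(omega))     # "I": beta(omega) = (3, 2)
\end{verbatim}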
\begin{figure}[thb]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\begin{tikzpicture}[scale=2]
\draw[thick,->] (0,0) -- (1,1) node[right] {$b_1$};
\draw[thick,->] (0,0) -- (0,-1) node[below] {$b_2$};
\draw[thick,->] (0,0) -- (-1,0) node[left] {$b_3$};
\draw[thick,->] (0,0) -- (-1,-1) node[below] {$b_4$};
\draw[thick,->] (0,0) -- (0,1) node[above] {$b_5$};
\draw[thick,->] (0,0) -- (1,0) node[right] {$b_6$};
\node at (.7,.3){I};
\node at (.3,.7){II};
\node at (.5,-.5){III};
\node at (-.3,-.7){IV};
\node at (-.5,.5){V};
\node at (-.7,-.3){VI};
\end{tikzpicture}
\caption{Gale diagram of the point configuration \cref{eq:PtConfigAppell} together with the cones generating the six triangulations of $\operatorname{Conv}(A)$.} \label{fig:AppellGaleTriangs}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}{.45\textwidth}
\centering
\begin{tikzpicture}[scale=2]
\coordinate[circle,inner sep=1pt,fill,label=left:I] (I) at (0,0);
\coordinate[circle,inner sep=1pt,fill,label=below left:II] (II) at (1,-1);
\coordinate[circle,inner sep=1pt,fill,label=above left:III] (III) at (0,1);
\coordinate[circle,inner sep=1pt,fill,label=above right:IV] (IV) at (1,1);
\coordinate[circle,inner sep=1pt,fill,label=below right:V] (V) at (2,-1);
\coordinate[circle,inner sep=1pt,fill,label=right:VI] (VI) at (2,0);
\draw[thick] (I) -- (II) -- (V) -- (VI) -- (IV) -- (III) -- (I);
\draw[thick,->] ($(I)!0.5!(II)$) -- +(.3,.3) node[midway,above left] {$b_1$};
\draw[thick,->] ($(II)!0.5!(V)$) -- +(0,.3) node[midway,left] {$b_5$};
\draw[thick,->] ($(V)!0.5!(VI)$) -- +(-.3,0) node[midway,above] {$b_3$};
\draw[thick,->] ($(VI)!0.5!(IV)$) -- +(-.3,-.3) node[midway,above left] {$b_4$};
\draw[thick,->] ($(IV)!0.5!(III)$) -- +(0,-.3) node[midway,right] {$b_2$};
\draw[thick,->] ($(III)!0.5!(I)$) -- +(.3,0) node[midway,above] {$b_6$};
\end{tikzpicture}
\caption{The secondary polytope $\Sigma(A)$ of the point configuration of \cref{eq:PtConfigAppell} together with its inner normal fan, visualized by the arrows. As seen in this example the secondary fan $\Sigma F(\mathcal{A})$ and the inner normal fan of $\Sigma(A)$ coincide.} \label{fig:AppellSecPolyFan}
\end{subfigure}
\caption[Gale diagram with regular triangulations and secondary polytope for the Appell $F_1$ function]{Gale diagram with regular triangulations and secondary polytope for the Appell $F_1$ function corresponding to \cref{eq:PtConfigAppell}.}
\end{figure}
Furthermore, we can calculate the weights of these triangulations\par
{\centering
\vspace{\baselineskip}
\begin{tabular}{llp{1.5cm}ll}
I: & $(1,2,3,3,2,1)$ & & IV: & $(3,1,2,1,3,2)$ \\
II: & $(1,3,2,3,1,2)$ & & V: & $(2,3,1,2,1,3)$ \\
III: & $(2,1,3,2,3,1)$ & & VI: & $(3,2,1,1,2,3)$ .
\end{tabular}
\vspace{\baselineskip} \par
}
\noindent The convex hull of these weights is the secondary polytope, which is a polytope of dimension $2$ in $\mathbb R^6$. On a convenient subspace one obtains the representation shown in \cref{fig:AppellSecPolyFan}. One can clearly see the connection between the secondary fan and the Gale diagram: the secondary fan is the inner normal fan of the secondary polytope, visualized by the arrows, and these arrows agree with the Gale diagram.
However, we want to remark that the considered examples show the simplest non-trivial case, where $r=N-n-1 = 2$. In these cases the Gale diagram is a diagram in the plane $\mathbb R^2$, and the intersection of cones of subconfigurations of $\mathcal{B}$ works out very simply. Thus, for $r=2$ the chambers are spanned directly by the Gale diagram. In consequence, point configurations with $r=2$ can have at most $N$ regular triangulations. Furthermore, in that case there are no non-regular triangulations \cite{DeLoeraTriangulations2010}. For point configurations with $r=N-n-1>2$ the intersection of cones is much more involved. The procedure described above also works for those cases; however, the chamber complex cannot be read off from the Gale diagram as easily as in the case $r=2$.
\end{example}
\sectionmark{$A$-discriminants, $A$-resultants \& principal $A$-determinants}
\section[\texorpdfstring{$A$}{A}-discriminants, \texorpdfstring{$A$}{A}-resultants and principal \texorpdfstring{$A$}{A}-determinants]{$A$-discriminants, $A$-resultants and\\ principal $A$-determinants}
\sectionmark{$A$-discriminants, $A$-resultants \& principal $A$-determinants}
\label{sec:ADiscriminantsReultantsPrincipalADets}
We are often interested in the question whether a system of simultaneous polynomial equations
\begin{align}
f_0 (x_1,\ldots,x_n) = \ldots = f_n (x_1,\ldots,x_n) = 0 \label{eq:polysys}
\end{align}
has a solution in a given (algebraically closed) field $\mathbb K$ or if it is inconsistent. That question can be answered in general by calculating a Gröbner basis of the ideal generated by $f_0,\ldots,f_n$: as a consequence of Hilbert's weak Nullstellensatz, a system of polynomial equations is inconsistent if and only if the corresponding reduced Gröbner basis is equal to $\{1\}$. Unfortunately, the calculation of Gröbner bases can be hopelessly complicated, and computer algebra systems may fail even in comparatively simple examples. This problem gets even harder if we want to vary the coefficients in the polynomials of \cref{eq:polysys}. Resultants, instead, can answer this question much more efficiently. In general, a resultant is a polynomial in the coefficients of the polynomials $f_0,\ldots,f_n$ which vanishes whenever the system \cref{eq:polysys} has a common solution. \bigskip
However, the theory of multivariate resultants comes with several subtleties. We have to distinguish between \textit{classical multivariate resultants} (also known as \textit{dense resultants}) and \textit{(mixed) $A$-resultants} (or \textit{sparse resultants}). The classical multivariate resultant applies to $n$ homogeneous polynomials in $n$ variables, where every polynomial consists of all possible monomials of a given degree, and it detects common solutions in projective space $\mathbb P^{n-1}_{\mathbb K}$. In contrast, the $A$-resultant is usually used if the polynomials do not consist of all monomials of a given degree. For a system of $n+1$ polynomials in $n$ variables, the $A$-resultant is a custom-made polynomial and reveals common solutions which are ``mostly'' located in $(\mathbb C^*)^n$. However, what we accept as a ``solution'' in the latter case is slightly subtle, and we will give a precise definition below. Note that the classical multivariate resultant is a special case of the $A$-resultant \cite{CoxUsingAlgebraicGeometry2005}. Furthermore, we want to distinguish between the case where all polynomials $f_0,\ldots,f_n$ have the same monomial structure, i.e.\ they all have the same support $A$, and the mixed case where the polynomials $f_0,\ldots,f_n$ have different monomial structures defined by several supports $A_0,\ldots,A_n$.
Closely related to resultants are discriminants, which determine whether a polynomial $f$ has a multiple root. This is equivalent to asking whether there is a point at which the polynomial $f$ and its first derivatives vanish simultaneously. Hence, discriminants also play an important role in identifying singular points of algebraic varieties.
In the following we will sketch various key features of the theory of $A$-resultants and $A$-discriminants, which were mainly introduced in a series of articles by Gelfand, Kapranov and Zelevinsky \cite{GelfandAdiscriminantsCayleyKoszulComplexes1990, GelfandDiscriminantsPolynomialsMany1990, GelfandNewtonPolytopesClassical1990, GelfandDiscriminantsPolynomialsSeveral1991} in the study of $A$-hypergeometric functions \cite{GelfandHypergeometricFunctionsToral1989, GelfandGeneralizedEulerIntegrals1990, GelfandGeneralHypergeometricSystems1992} and were collected in \cite{GelfandDiscriminantsResultantsMultidimensional1994}. For an introduction to $A$-resultants as well as the classical multivariate resultants we refer to \cite{CoxUsingAlgebraicGeometry2005} and \cite{SturmfelsSolvingSystemsPolynomial2002}.
\subsection{Mixed \texorpdfstring{$(A_0,\ldots,A_n)$}{(A\unichar{"2080},...,A\unichar{"2099})}-resultants and \texorpdfstring{$A$}{A}-resultants}
The key idea of resultants is to specify coefficients and variables in a system of polynomial equations and eliminate the variables from it. Hence, resultants are a main tool in elimination theory. We will summarize the basic definitions and several properties of the multivariate resultants, which can be found in \cite{PedersenProductFormulasResultants1993, SturmfelsNewtonPolytopeResultant1994, SturmfelsIntroductionResultants1997, SturmfelsSolvingSystemsPolynomial2002, CoxUsingAlgebraicGeometry2005, GelfandDiscriminantsResultantsMultidimensional1994}. \bigskip
Let $A_0,\ldots, A_{n} \subset \mathbb Z^n$ be finite subsets of the affine lattice $\mathbb Z^n$ and for every set $A_i$ we will consider the corresponding Laurent polynomial
\begin{align}
f_i(x)=f_i(x_1,\ldots,x_n) = \sum_{a\in A_i} z^{(i)}_a x^a \in \mathbb C [x_1^{\pm 1}, \ldots, x_n^{\pm 1}]\point
\end{align}
For simplicity, we will assume that the supports $A_0,\ldots,A_n$ jointly generate the affine lattice $\mathbb Z^n$. Furthermore, by $P_i := \operatorname{Conv}(A_i)=\operatorname{Newt}(f_i)$ we denote the Newton polytope of $f_i$. According to \cite{SturmfelsNewtonPolytopeResultant1994, SturmfelsIntroductionResultants1997, SturmfelsSolvingSystemsPolynomial2002}, we will call a configuration $A_0,\ldots,A_n$ \textit{essential} if
\begin{align}
\dim\!\left( \sum_{j=0}^n P_j \right) = n \qquad \text{and} \qquad \dim\!\left(\sum_{j\in J} P_j\right) \geq |J| \quad \text{for every } J \subsetneq\{0,\ldots,n\} \comma \label{eq:essential}
\end{align}
where the sum of polytopes denotes the Minkowski sum and $|J|$ is the cardinality of the proper subset $J$. If all polytopes $P_i$ are $n$-dimensional, the conditions in \cref{eq:essential} are trivially satisfied.
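For concrete supports, the conditions in \cref{eq:essential} are easy to check numerically, since the dimension of a Minkowski sum equals the rank of the stacked difference vectors of its summands. The following \softwareName{Python} sketch is a minimal, ad hoc illustration of this check (the helper functions are not taken from any of the cited references); it is applied to three copies of the support $\{(2,0),(1,1),(0,0)\}$, which will reappear in the example at the end of this subsection.
\begin{verbatim}
# checking the "essential" condition for given supports A_0,...,A_n
from itertools import combinations
import numpy as np

def minkowski_dim(supports):
    """dim of the Minkowski sum of Conv(S) for S in supports."""
    diffs = [np.array(S[1:], dtype=float) - np.array(S[0], dtype=float)
             for S in supports if len(S) > 1]
    return np.linalg.matrix_rank(np.vstack(diffs)) if diffs else 0

def is_essential(supports, n):
    if minkowski_dim(supports) != n:                 # first condition
        return False
    idx = range(len(supports))
    return all(minkowski_dim([supports[j] for j in J]) >= len(J)
               for k in range(1, len(supports))      # proper subsets J
               for J in combinations(idx, k))

A = [(2, 0), (1, 1), (0, 0)]
print(is_essential([A, A, A], n=2))                  # True
\end{verbatim}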
In order to define the general resultants, we are interested in the set of coefficients $z_a^{(i)}$ for which there exists a solution of $f_0(x) = \ldots = f_n(x)=0$ in $x\in(\mathbb C^*)^n$. In other words we consider the following set in $\prod_{i=0}^n \mathbb C^{A_i}$
\begin{align}
\mathscr Z = \left\{ (f_0,\ldots,f_{n}) \in \prod_i \mathbb C^{A_i} \,\rvert\, \Var(f_0,\ldots,f_{n})\neq\emptyset \text{ in } (\mathbb C^*)^{n} \right\} \subseteq \prod_{i=0}^n \mathbb C^{A_i} \point \label{eq:defMixedAresultantsZ}
\end{align}
Furthermore, by $\overline{\mathscr Z}$ we will denote the Zariski closure of $\mathscr Z$. The \textit{mixed $(A_0,\ldots,A_n)$-resultant} $\gls{mixedARes}\in\mathbb Z[\{\{z^{(i)}_a\}_{a\in A_i}\}_{i = 0,\ldots,n}]$ is an irreducible polynomial in the coefficients of the polynomials $f_0,\ldots,f_n$. In the case where $\overline{\mathscr Z}$ describes a hypersurface in $\prod_i \mathbb C^{A_i}$ we define $R_{A_0,\ldots,A_n}(f_0,\ldots,f_n)$ to be the minimal defining polynomial of this hypersurface $\overline{\mathscr Z}$. Otherwise, i.e.\ if $\operatorname{codim} \overline{\mathscr Z} \geq 2$, we set $R_{A_0,\ldots,A_n}(f_0,\ldots,f_n)=1$. The mixed $(A_0,\ldots,A_n)$-resultant always exists and is uniquely defined up to a sign, as was shown in \cite{GelfandDiscriminantsResultantsMultidimensional1994}.
Further, we have $\operatorname{codim} \overline{\mathscr Z} = 1$ if and only if there exists a unique subset of $A_0,\ldots,A_n$ which is essential \cite{SturmfelsNewtonPolytopeResultant1994}. In that case the mixed $(A_0,\ldots,A_n)$-resultant coincides with the resultant of that essential subset.\bigskip
As a word of warning, the mixed $(A_0,\ldots,A_n)$-resultants do not only detect common solutions of $f_0 = \ldots = f_n = 0$ with $x\in(\mathbb C^*)^n$. Due to the Zariski closure in the definition of the resultants, the mixed $(A_0,\ldots,A_n)$-resultants may also describe solutions outside of $(\mathbb C^*)^n$, e.g.\ ``roots at infinity''.\bigskip
If all polynomials $f_0,\ldots,f_n$ have the same monomial structure, i.e.\ $A_0 = \ldots = A_{n} =: A$, we will call $\gls{ARes}:=R_{A,A,\ldots,A}(f_0,\ldots,f_{n})$ simply the \textit{$A$-resultant}. The $A$-resultants satisfy a natural transformation law.
\begin{lemma}[Transformation law of $A$-resultants \cite{GelfandDiscriminantsResultantsMultidimensional1994}] \label{lem:trafoAres}
Consider the polynomials $f_0,\ldots, f_n\in\mathbb C^A$ and let $D$ be an invertible $(n+1)\times (n+1)$ matrix. For the transformation $g_i = \sum_{j=0}^{n} D_{ij} f_j$ for $i=0,\ldots,n$ we have
\begin{align}
R_A(g_0,\ldots,g_n) = \det (D)^{\operatorname{vol} (P)} R_A (f_0,\ldots,f_n)
\end{align}
where $P = \operatorname{Conv} (A)$.
\end{lemma}
In particular, for linear forms $g_0,\ldots,g_n$ with $g_i:= \sum_{j=0}^{n} D_{ij}x_j$ in homogenization, this lemma implies $R_A(g_0,\ldots,g_n) = \det(D)$. This result also extends to all cases where $A$ forms a simplex \cite{GelfandDiscriminantsResultantsMultidimensional1994}, which we want to demonstrate with an example.
\begin{example}
Consider the system of polynomials
\begin{align}
f &= a_1 x_1^2 + a_2 x_1 x_2 + a_3 \nonumber \\
g &= b_1 x_1^2 + b_2 x_1 x_2 + b_3 \label{eq:ExampleRes}\\
h &= c_1 x_1^2 + c_2 x_1 x_2 + c_3 \point \nonumber
\end{align}
Since $A = \{(2,0),(1,1),(0,0)\}$ generates a full dimensional polytope the configuration is essential, and we will have $\operatorname{codim} \overline {\mathscr Z} = 1$. By eliminating the variables $x_1,x_2$ from the system $f=g=h=0$ we will obtain the $A$-resultant $R_A(f,g,h)$. Alternatively, we can make use of \cref{lem:trafoAres}. It follows that $R_A(f,g,h) = \det (D)^{\operatorname{vol} (P)} R_A(x_1^2,x_1 x_2,1)$, where $D$ is the coefficient matrix of \cref{eq:ExampleRes}. As the resultant of $x_1^2,x_1x_2,1$ is obviously equal to $1$, the $A$-resultant $R_A(f,g,h)$ is simply given by the determinant of $D$
\begin{align}
R_A(f,g,h) = - a_3 b_2 c_1 + a_2 b_3 c_1 + a_3 b_1 c_2 - a_1 b_3 c_2 - a_2 b_1 c_3 + a_1 b_2 c_3 \point
\end{align}
\end{example}
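The expanded determinant can be reproduced quickly, for instance with \softwareName{SymPy}; the following minimal sketch merely expands $\det(D)$ and serves as a sanity check of the expression above rather than a general resultant computation.
\begin{verbatim}
import sympy as sp

a1, a2, a3, b1, b2, b3, c1, c2, c3 = sp.symbols('a1:4 b1:4 c1:4')
D = sp.Matrix([[a1, a2, a3],
               [b1, b2, b3],
               [c1, c2, c3]])
print(sp.expand(D.det()))
# a1*b2*c3 - a1*b3*c2 - a2*b1*c3 + a2*b3*c1 + a3*b1*c2 - a3*b2*c1
\end{verbatim}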
\subsection{\texorpdfstring{$A$}{A}-discriminants} \label{ssec:Adiscriminants}
Closely related to the $A$-resultant is the so-called $A$-discriminant. For a given polynomial $f\in\mathbb C[x_1^{\pm 1},\ldots,x_n^{\pm 1}]$ it describes when the hypersurface $\{f=0\}$ is singular. Equivalently, the $A$-discriminant determines whether $f$ has multiple roots. Let $A\subset \mathbb Z^n$ be the support of the polynomial $f(x) = \sum_{a\in A} z_a x^a$ and consider
\begin{align}
\nabla_0 = \left\{ f\in\mathbb C^A \,\rvert\, \Var\!\left(f,\frac{\partial f}{\partial x_1},\ldots,\frac{\partial f}{\partial x_n}\right) \neq\emptyset \text{ in } (\mathbb C^*)^n \right\} \subseteq \mathbb C^A \label{eq:Adisc}
\end{align}
the set of polynomials $f\in\mathbb C^A$ for which there exists a solution $x \in (\mathbb C^*)^n$ such that $f$ and its first derivatives vanish simultaneously. In analogy to the $A$-resultant, if the Zariski closure of $\nabla_0$ has codimension $1$, we will set the \textit{$A$-discriminant} $\gls{ADisc}\in\mathbb Z [\{z_a\}_{a\in A}]$ of $f$ as the minimal defining polynomial of the hypersurface $\overline \nabla_0$. For higher codimensions $\operatorname{codim}\!\left(\overline{\nabla}_0\right) > 1$ we will fix the $A$-discriminant to be $1$. Configurations $A$ for which $\Delta_A(f)=1$ are called \textit{defective}. Combinatorial criteria for defective configurations can be found in \cite{EsterovNewtonPolyhedraDiscriminants2010, CurranRestrictionADiscriminantsDual2006, DickensteinTropicalDiscriminants2007}. By definition, the $A$-discriminant is an irreducible polynomial in the coefficients $\{z_a\}_{a\in A}$, which is uniquely determined up to a sign \cite{GelfandDiscriminantsResultantsMultidimensional1994}.
\bigskip
\begin{example} \label{ex:cubicDisc}
Consider the cubic polynomial in one variable $f = z_0 + z_1 x + z_2 x^2 + z_3 x^3$ with its support $A = \{0,1,2,3\}$. Its $A$-discriminant is given (up to a sign) by
\begin{align}
\Delta_A(f) = z_1^2 z_2^2 - 4 z_0 z_2^3 - 4 z_1^3 z_3 + 18 z_0 z_1 z_2 z_3 - 27 z_0^2 z_3^2
\end{align}
which can be calculated either by eliminating $x$ from $f(x) = \pd{f(x)}{x} = 0$ or by the use of a convenient mathematical software program e.g.\ \softwareName{Macaulay2} \cite{GraysonMacaulay2SoftwareSystem} with additional libraries \cite{StaglianoPackageComputationsClassical2018, StaglianoPackageComputationsSparse2020}. Thus, the equations $f(x) = \pd{f(x)}{x} = 0$ have a common solution for $x\neq 0$ if and only if $\Delta_A(f) = 0$.
\end{example}
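The elimination described in \cref{ex:cubicDisc} can also be carried out, for example, with \softwareName{SymPy}. The sketch below is a minimal illustration: the univariate resultant of $f$ and $\partial f/\partial x$ reproduces $\Delta_A(f)$ up to a factor of the leading coefficient and a sign, and the built-in discriminant agrees with the expression above.
\begin{verbatim}
import sympy as sp

x, z0, z1, z2, z3 = sp.symbols('x z0 z1 z2 z3')
f = z0 + z1*x + z2*x**2 + z3*x**3

res = sp.resultant(f, sp.diff(f, x), x)   # eliminates x from f = f' = 0
print(sp.factor(res))                     # z3 * Delta_A(f), up to a sign

print(sp.expand(sp.discriminant(f, x)))
# equals Delta_A(f) from above (the ordering of terms may differ)
\end{verbatim}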
However, many polynomials share the same $A$-discriminant. Consider two finite subsets $A\subset\mathbb Z^n$ and $A'\subset\mathbb Z^m$, which are related by an injective, affine transformation $T:\mathbb Z^n \rightarrow \mathbb Z^m$ with $T(A)=A'$. Then the transformation induced by $T$ also connects $\Delta_A$ with $\Delta_{A'}$, as was shown in \cite{GelfandDiscriminantsResultantsMultidimensional1994}. Thus, the $A$-discriminant only depends on the affine geometry of $A$. For example, consider a configuration $\mathcal{A}\subset\mathbb Z^{n+1}$ which generates a homogeneous polynomial $\tilde f\in\mathbb C^\mathcal{A}$ and another configuration $A\subset \mathbb Z^n$ that arises by removing the first entry of every element of $\mathcal{A}$. Then we have the dehomogenization map
\begin{align}
\mathbb C^{\mathcal{A}} \rightarrow \mathbb C^A, \qquad \tilde f(x_0,\ldots,x_n) \mapsto f(x_1,\ldots,x_n) = \tilde f(1,x_1,\ldots,x_n) \point
\end{align}
This map is affine and injective, and we can identify both discriminants $\Delta_{\mathcal{A}}(\tilde f)=\Delta_A (f)$. Similarly, we obtain for a finite subset $A\subset \mathbb Z^n$ and its homogenization $\mathcal{A}\subset\mathbb Z^{n+1}$ the same discriminants.\bigskip
From the definition \cref{eq:Adisc} it can be seen that $\Delta_A(f)$ has to be a homogeneous polynomial. Additionally, $\Delta_A(f)$ is even quasi-homogeneous for any weight defined by a row of $A$ \cite{GelfandDiscriminantsResultantsMultidimensional1994}. Removing these homogeneities leads us to the \textit{reduced $A$-discriminant} $\gls{redADisc}$. Let $\mathcal{A}\subset\mathbb Z^{n+1}$ be the homogenization of the support $A$ and $\mathcal{B}\in\operatorname{Gale}(\mathcal{A})$ a Gale dual of $\mathcal{A}$. Then we can introduce ``effective'' variables\glsadd{effectiveVars}\glsunset{effectiveVars}
\begin{align}
y_j = \prod_{i=1}^N z_i^{b_{ij}} \qquad\text{for}\quad j=1,\ldots, r \label{eq:effectiveVarsADisc}
\end{align}
where $b_{ij}$ denotes the elements of the Gale dual $\mathcal{B}$. The $A$-discriminant can always be rewritten as $\Delta_A(f) = z^{\Lambda} \Delta_\mathcal{B}(f)$, where the reduced $A$-discriminant $\Delta_\mathcal{B}(f)$ is an inhomogeneous polynomial in the effective variables $y_1,\ldots,y_r$ and $\Lambda\in\mathbb Z^N$ defines a factor $z^\Lambda$. We will usually choose the smallest $\Lambda$ such that $\Delta_\mathcal{B}$ is a polynomial.\bigskip
\begin{example}[Continuation of \cref{ex:cubicDisc}] \label{ex:cubicGale}
We will consider the cubic polynomial in one variable from \cref{ex:cubicDisc}. As a possible choice for the Gale dual $\mathcal{B}$ of the homogenized point configuration $\mathcal{A}\subset\mathbb Z^2$ we will use
\begin{align}
\mathcal B = \begin{pmatrix}
1 & 2 \\
-2 & -3\\
1 & 0\\
0 & 1
\end{pmatrix}\text{ ,} \qquad y_1= \frac{z_0 z_2}{z_1^2}, \qquad y_2 = \frac{z_0^2 z_3}{z_1^3} \point \label{eq:exEffective}
\end{align}
Thus, we can rewrite the $A$-discriminant
\begin{align}
\Delta_A(f) = \frac{z_1^6}{z_0^2} \left(27 y_2^2 + 4 y_1^3 + 4 y_2 - y_1^2 - 18 y_1y_2\right) = \frac{z_1^6}{z_0^2} \Delta_\mathcal{B}(f)
\end{align}
as a reduced discriminant $\Delta_\mathcal{B}(f)\in\mathbb Z[y_1,y_2]$.
\end{example}
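The splitting into the prefactor $z_1^6/z_0^2$ and the reduced discriminant can be verified by re-substituting the effective variables from \cref{eq:exEffective}, e.g.\ with \softwareName{SymPy}; note that the overall sign differs, which is consistent with $\Delta_A$ being defined only up to a sign.
\begin{verbatim}
import sympy as sp

z0, z1, z2, z3 = sp.symbols('z0 z1 z2 z3')

Delta_A = (z1**2*z2**2 - 4*z0*z2**3 - 4*z1**3*z3
           + 18*z0*z1*z2*z3 - 27*z0**2*z3**2)

y1 = z0*z2/z1**2              # effective variables from eq:exEffective
y2 = z0**2*z3/z1**3
Delta_B = 27*y2**2 + 4*y1**3 + 4*y2 - y1**2 - 18*y1*y2

print(sp.simplify(z1**6/z0**2 * Delta_B + Delta_A))   # 0
\end{verbatim}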
Except for special cases where $A$ forms a simplex or a circuit \cite{GelfandDiscriminantsResultantsMultidimensional1994}, the determination of the $A$-discriminant can be a very intricate issue for larger polynomials, and calculations quickly get out of hand. Fortunately, there is an indirect description of $A$-discriminants, which was invented by Kapranov \cite{KapranovCharacterizationAdiscriminantalHypersurfaces1991} with slight adjustments in \cite{CuetoResultsInhomogeneousDiscriminants2006}. This so-called \textit{Horn-Kapranov-parameterization} provides a very efficient way to study discriminants. Let $\mathcal S\subset(\mathbb C^*)^r$ be the hypersurface defined by the reduced $A$-discriminant $\{\Delta_\mathcal{B}(f)=0\}$. Then this hypersurface $\mathcal S$ can be parameterized by the map $\psi :\mathbb P^{r-1}_{\mathbb C} \rightarrow (\mathbb C^*)^r$, where $\psi$ is given by
\begin{align}
\gls{HKP} = \left( \prod_{i=1}^N \left(\sum_{j=1}^r b_{ij} t_j \right)^{b_{i1}}, \ldots, \prod_{i=1}^N \left(\sum_{j=1}^r b_{ij} t_j \right)^{b_{ir}}\right) \label{eq:HKPpsi}
\end{align}
and $b_{ij}$ are again the elements of a Gale dual $\mathcal B$ of $\mathcal{A}$. Hence, knowing only a Gale dual, we can very quickly obtain an indirect representation of the $A$-discriminant.
\begin{example}[Continuation of \cref{ex:cubicGale}] \label{ex:cubicParameterization}
For the example of the cubic polynomial in one variable, we obtain with the Gale dual from \cref{eq:exEffective}
\begin{align}
\psi [t_1 : t_2] = \left( \frac{t_1 + 2 t_2}{(2t_1+3t_2)^2} t_1 , - \frac{(t_1 + 2 t_2)^2}{(2t_1+3t_2)^3} t_2\right) \point
\end{align}
Since $[t_1 : t_2]$ are homogeneous coordinates, defined only up to scaling, we can set $t_2=1$ without loss of generality. Hence, the statement of the Horn-Kapranov-parameterization is that for every $t_1\in\mathbb C$ we can identify
\begin{align}
y_1 = \frac{t_1 + 2 }{(2t_1+3)^2} t_1, \qquad y_2 = - \frac{(t_1 + 2)^2}{(2t_1+3)^3}
\end{align}
as the points characterizing the hypersurface $\Delta_\mathcal{B}(f)(y_1,y_2) = 0$ or equivalently the hypersurface $\Delta_A(f)(z_0,z_1,z_2,z_3)=0$ by means of the relations \cref{eq:exEffective}.
\end{example}
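That $\psi$ indeed parameterizes the discriminantal hypersurface can be confirmed symbolically. The following \softwareName{SymPy} sketch substitutes $\psi[t_1:1]$ into the reduced discriminant from \cref{ex:cubicGale} and simplifies the result to zero.
\begin{verbatim}
import sympy as sp

t1, y1, y2 = sp.symbols('t1 y1 y2')

Delta_B = 27*y2**2 + 4*y1**3 + 4*y2 - y1**2 - 18*y1*y2

psi1 = (t1 + 2)*t1 / (2*t1 + 3)**2        # psi[t1 : 1]
psi2 = -(t1 + 2)**2 / (2*t1 + 3)**3

print(sp.simplify(Delta_B.subs({y1: psi1, y2: psi2})))   # 0
\end{verbatim}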
Moreover, Kapranov showed \cite{KapranovCharacterizationAdiscriminantalHypersurfaces1991} that the map $\psi$ is the inverse of the (logarithmic) Gauss map, which is defined for an arbitrary hypersurface $\mathcal S_g = \{y\in(\mathbb C^*)^r \, |\, g(y)=0\}$ as $\gamma : (\mathbb C^*)^r \rightarrow \mathbb P^{r-1}_{\mathbb C}$, with $\gamma(y) = [ y_1 \partial_1 g(y) : \ldots : y_r \partial_r g(y)]$ for all regular points of $\mathcal S_g$. It is a remarkable fact that the hypersurfaces $\mathcal S_g$ which have a birational Gauss map are precisely those hypersurfaces defined by reduced $A$-discriminants \cite{KapranovCharacterizationAdiscriminantalHypersurfaces1991, CuetoResultsInhomogeneousDiscriminants2006}. \bigskip
To conclude this section, we want to mention the relation between $A$-discriminants and the mixed $(A_0,\ldots,A_n)$-resultants, which is also known as Cayley's trick.
\begin{lemma}[Cayley's trick \cite{GelfandDiscriminantsResultantsMultidimensional1994}] \label{lem:CayleysTrick}
Let $A_0,\ldots,A_{n}\subset \mathbb Z^n$ be finite subsets jointly generating $\mathbb Z^n$ as an affine lattice. By $f_i\in\mathbb C^{A_i}$ we denote the corresponding polynomials of the sets $A_i$. Then we have
\begin{align}
R_{A_0,\ldots,A_{n}} (f_0,\ldots,f_{n}) = \Delta_A \!\left(f_{0}(x) + \sum_{i=1}^n y_i f_i(x)\right)
\end{align}
where $A\subset \mathbb Z^{2n}$ is defined as the support of the polynomial $f_{0}(x) + \sum_{i=1}^n y_i f_i(x)\in \mathbb C[x_1^{\pm 1},\ldots,x_{n}^{\pm 1},y_1,\ldots,y_n]$.
\end{lemma}
Thus, every mixed $(A_0,\ldots,A_n)$-resultant can always be reduced to the case of an $A$-discriminant. Hence, the $A$-discriminant is the more general object of the two.
\subsection{Principal \texorpdfstring{$A$}{A}-determinants} \label{ssec:principalAdet}
The last object we want to introduce from the book of Gelfand, Kapranov and Zelevinsky \cite{GelfandDiscriminantsResultantsMultidimensional1994} is the principal $A$-determinant, which is a special $A$-resultant. Once again, we consider a finite subset $A\subset \mathbb Z^n$, and we will assume for the sake of simplicity that $\operatorname{Aff}_{\mathbb Z} (A) = \mathbb Z^n$. By $f=\sum_{a\in A} z_a x^a\in\mathbb C^A$ we denote the polynomial corresponding to $A$. The \textit{principal $A$-determinant} is then defined as the following $A$-resultant\footnote{A rather subtle point about this definition is the order of evaluation. A generic $A$-resultant $R_A(g_0, \ldots ,g_n)$ is determined first and the corresponding polynomials $g_0=f, g_1 = x_1 \pd{f}{x_1}, \ldots, g_n = x_n \pd{f}{x_n}$ are inserted second. I would like to thank Simon Telen for pointing this out to me.}
\begin{align}
\gls{pAdet} := R_A\!\left(f,x_1 \pd{f}{x_1}, \ldots,x_n \pd{f}{x_n}\right) \in \mathbb Z [\{z_a\}_{a\in A}] \point \label{eq:principalAdetResultant}
\end{align}
Thus, the principal $A$-determinant is a polynomial with integer coefficients depending on $\{z_a\}_{a\in A}$, which is uniquely determined up to a sign \cite{GelfandDiscriminantsResultantsMultidimensional1994}. It indicates when the system of polynomial equations $f = x_1 \pd{f}{x_1} = \ldots = x_n \pd{f}{x_n}=0$ has a common solution.\bigskip
In the application to $\mathcal{A}$-hypergeometric functions and also to Feynman integrals, it will turn out that the principal $A$-determinant is the central object when considering the analytic structure. We will study the analytic structure, which for the Feynman integral is also known as the Landau variety, in more detail in \cref{ch:singularities}. In this section we will focus on two main properties of the principal $A$-determinant that will be crucial when applying it to the analytic structure.
The first property we want to discuss here is the decomposition of the principal $A$-determinant into a product of several $A$-discriminants.
\begin{theorem}[{Prime factorization of principal $A$-determinant \cite[ch. 10, thm. 1.2]{GelfandDiscriminantsResultantsMultidimensional1994}}] \label{thm:pAdet-factorization}
The principal $A$-determinant can be written as a product of $A$-discriminants
\begin{align}
E_A(f) = \pm \prod_{\tau \subseteq \operatorname{Newt} (f)} \Delta_{A\cap\tau} (f_\tau)^{\mu(A,\tau)} \comma \label{eq:pAdet-factorization}
\end{align}
where the product is over all faces $\tau$ of the Newton polytope $\operatorname{Newt} (f)$ and $ \mu(A,\tau) \in\mathbb N_{> 0}$ are certain integers, called the multiplicity of $A$ along $\tau$. The exact definition of these multiplicities is not crucial for the following, which is why we refer to \cite{GelfandDiscriminantsResultantsMultidimensional1994, ForsgardZariskiTheoremMonodromy2020} at this point.
\end{theorem}
Most often we are only interested in the roots of the principal $A$-determinant. Therefore, we want to define a \textit{simple principal $A$-determinant} according to \cite{ForsgardZariskiTheoremMonodromy2020} where all multiplicities $\mu(A,\tau)$ in \cref{eq:pAdet-factorization} are set to $1$
\begin{align}
\gls{spAdet} = \pm \prod_{\tau \subseteq \operatorname{Newt} (f)} \Delta_{A\cap\tau} (f_\tau)
\end{align}
which generates the same variety as $E_A(f)$. \bigskip
\begin{example}[principal $A$-determinants of homogeneous polynomials]
To illustrate the principal $A$-determinant we will recall an example from \cite{GelfandDiscriminantsResultantsMultidimensional1994}. Let $\tilde f\in\mathbb C[x_0,\ldots,x_n]$ be a homogeneous polynomial consisting of all monomials of a given degree $d\geq 1$. The support of $\tilde f$ will be called $\mathcal{A}\subset\mathbb Z^{n+1}$ and its Newton polytope $\operatorname{Newt}(\tilde f)\subset \mathbb R^{n+1}$ is an $n$-dimensional simplex, having $2^{n+1}-1$ faces. Moreover, the faces $\tau\subseteq\operatorname{Newt}(\tilde f)$ are generated by all non-empty subsets of the set of vertices $\{0,\ldots,n\}$, and one can show that all multiplicities $\mu(\mathcal{A},\tau)$ are equal to one \cite{GelfandDiscriminantsResultantsMultidimensional1994}. Thus, we get
\begin{align}
E_\mathcal{A}(\tilde f) = \pm \prod_{\emptyset\neq \tau \subseteq \{0,\ldots,n\}} \Delta_{\mathcal{A}\cap \tau} (\tilde f_{\tau}) \label{eq:pAdetHom} \point
\end{align}
Note that in this particular example we can write $\tilde f_\tau$ as the polynomial $\tilde f$ with all variables $x_i$ set to zero that do not belong to $\tau$, i.e.\ $\tilde f_{\tau} (x) = \tilde f(x)|_{x_i=0, i\notin \tau}$. That decomposition is exactly the behaviour we would naively expect in the polynomial equation system
\begin{align}
\tilde f (x) = x_0 \frac{\partial \tilde f(x)}{\partial x_0} = \ldots = x_n \frac{\partial \tilde f(x)}{\partial x_n} = 0 \point \label{eq:expAdetPolySystem}
\end{align}
That is, in \cref{eq:expAdetPolySystem} we can consider each combination of how the variables $x_0,\ldots,x_n$ can vanish separately. However, it should be remarked that this behaviour is not true in general, since multivariate division is not necessarily unique. Hence, when not considering simplices, we rather have to take the truncated polynomials into account as described in \cref{thm:pAdet-factorization}. \bigskip
\end{example}
Another noteworthy result of Gelfand, Kapranov and Zelevinsky is the connection between the principal $A$-determinant and the triangulations of $A$.
\begin{theorem}[Newton polytopes of principal $A$-determinants \cite{GelfandNewtonPolytopesClassical1990, SturmfelsNewtonPolytopeResultant1994, GelfandDiscriminantsResultantsMultidimensional1994}] \label{thm:NewtSec}
The Newton polytope of the principal $A$-determinant and the secondary polytope coincide
\begin{align}
\operatorname{Newt}(E_A(f)) = \Sigma(A) \point
\end{align}
Further, if $\mathcal{T}$ is a regular triangulation of $A$, then the coefficient of the monomial $\prod_{a\in A} z_a^{\varphi_\mathcal{T}(a)}$ in $E_A(f)$ is equal to
\begin{align}
\pm \prod_{\sigma\in \widehat{\mathcal T}} \operatorname{vol}(\sigma)^{\operatorname{vol}(\sigma)} \point
\end{align}
The relative signs between the coefficients in $E_A(f)$ can also be determined \cite[chapter 10.1G]{GelfandDiscriminantsResultantsMultidimensional1994}.
\end{theorem}
Thus, knowing all regular triangulations we can approximate the form of the principal $A$-determinant. As \cref{thm:NewtSec} gives us the extreme monomials, we can make a suitable ansatz for the principal $A$-determinant. The unknown coefficients of the monomials corresponding to potential interior points of the Newton polytope $\operatorname{Newt}(E_A(f))$ can then be determined by the Horn-Kapranov-parameterization.
Further, the theorem explains the typically appearing coefficients\footnote{Based on that theorem, Gelfand, Kapranov and Zelevinsky \cite{GelfandDiscriminantsResultantsMultidimensional1994} speculate on a connection between discriminants and probability theory. Their starting point for this consideration is the ``entropy-like'' expressions such as $\prod v_i^{v_i}=e^{\sum v_i \log v_i}$ for the coefficients of the principal $A$-determinant, as well as in the Horn-Kapranov-parameterization. In the latter case we can define an ``entropy'' $S := \ln\!\left (\prod_{l=1}^r \psi_l^{t_l}\right) = \sum_{i=1}^N \rho_i(t) \ln(\rho_i(t))$ where $\rho_i(t):= \sum_{j=1}^r b_{ij} t_j$. To the author's knowledge, more rigorous results about such a potential relation are missing. However, there are further connections known between tropical toric geometry and statistical thermodynamics, as presented in \cite{KapranovThermodynamicsMomentMap2011, PassareAmoebasComplexHypersurfaces2013}. In any case, a fundamental understanding of such a relation could be very inspiring from a physical point of view.} in principal $A$-determinants and Landau varieties, like $1=1^1$, $4=2^2$, $27=3^3$, $256=4^4$, etc.
\begin{example}[Continuation of \cref{ex:cubicParameterization}] \label{ex:cubicPrincipalAdetFromTriangs}
We will continue the example from \cref{ssec:Adiscriminants}. From the set $A=(0,1,2,3)$ we can determine $4$ triangulations with the weights $\varphi_{\mathcal{T}_1} = (3,0,0,3)$, $\varphi_{\mathcal{T}_2}=(1,3,0,2)$, $\varphi_{\mathcal{T}_3}=(2,0,3,1)$ and $\varphi_{\mathcal{T}_4}=(1,2,2,1)$. Adding the only possible interior point $(2,1,1,2)$, we obtain the following ansatz by use of \cref{thm:NewtSec}
\begin{align}
E_A(f) = 27 z_0^3 z_3^3 + 4 z_0 z_1^3 z_3^2 + 4 z_0^2 z_2^3 z_3 - z_0 z_1^2 z_2^2 z_3 + \alpha z_0^2 z_1 z_2 z_3^2
\end{align}
where we have to determine $\alpha\in\mathbb Z$. By considering the faces of $\operatorname{Newt}(f)$ we can split $E_A(f)$ into discriminants
\begin{align}
E_A(f) = z_0 z_3 \Delta_A(f) = z_0 z_3 \frac{z_1^6}{z_0^2} \Delta_\mathcal{B}(f)
\end{align}
with the reduced $A$-discriminant $\Delta_\mathcal{B}(f) = 27 y_2^2 + 4y_1^3 +4y_2 -y_1^2 + \alpha y_1 y_2$ with the same conventions as in \cref{ex:cubicGale}. By the Horn-Kapranov-parameterization (see \cref{ex:cubicParameterization}) we can calculate those points $(y_1,y_2)$ which satisfy $\Delta_\mathcal{B}(f)=0$. Choosing for example $t_1=-1$, we obtain the point $(y_1,y_2)=(-1,-1)$, which leads to $\alpha=-18$. Hence, we determined the principal $A$-determinant $E_A(f)$ by means of the triangulations of the point configuration $A$ and the Horn-Kapranov-parameterization.
\end{example}
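The determination of the coefficient $\alpha$ can also be delegated to the computer. The following \softwareName{SymPy} sketch evaluates the ansatz for the reduced discriminant at the Horn-Kapranov point corresponding to $t_1=-1$ and solves for the unknown coefficient.
\begin{verbatim}
import sympy as sp

y1, y2, alpha, t1 = sp.symbols('y1 y2 alpha t1')
ansatz = 27*y2**2 + 4*y1**3 + 4*y2 - y1**2 + alpha*y1*y2

p1 = ((t1 + 2)*t1/(2*t1 + 3)**2).subs(t1, -1)    # = -1
p2 = (-(t1 + 2)**2/(2*t1 + 3)**3).subs(t1, -1)   # = -1

print(sp.solve(ansatz.subs({y1: p1, y2: p2}), alpha))   # [-18]
\end{verbatim}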
In general, we can replace $E_A(f)$ by means of Cayley's trick (\cref{lem:CayleysTrick}) by one single $A$-discriminant in order to simplify the usage of the Horn-Kapranov-parameterization. However, we have to mention that the number of vertices of the secondary polytope -- or equivalently the number of regular triangulations -- grows very fast. To determine the Landau variety in the application to Feynman integrals (see \cref{ch:singularities}) of the $2$-point, $3$-loop Feynman diagram (also known as the $3$-loop banana), we have to consider $78\,764$ possible regular triangulations. Hence, the principal $A$-determinant will have more than $78\,764$ monomials. The $2$-loop double-edged triangle graph (or dunce's cap) generates as many as $885\,524$ triangulations. Further numbers of regular triangulations for Feynman diagrams can be found in \cref{sec:AppendixCharacteristics}. Nevertheless, this approach could be faster than the direct calculation of principal $A$-determinants by standard algorithms, since very efficient methods are known for triangulations \cite{DeLoeraTriangulations2010, RambauTOPCOMTriangulationsPoint2002}. However, we want to remark that the triangulations are not the only way to construct the secondary polytope. We refer to \cite{BilleraConstructionsComplexitySecondary1990} for details.
\section{Holonomic \texorpdfstring{$D$}{D}-modules} \label{sec:holonomicDmodules}
The theory of $\mathcal{A}$-hypergeometric functions will be expressed in terms of solutions of systems of linear partial differential equations. We will therefore recall certain basic notions of $D$-modules. In this short overview we will forgo recourse to the notion of sheaves. This section is oriented towards \cite{SaitoGrobnerDeformationsHypergeometric2000, TakayamaGrobnerBasisRings2013, SattelbergerDModulesHolonomicFunctions2019}, to which we refer for a more detailed description together with \cite{OakuComputationCharacteristicVariety1994}. For a general introduction to the subject of $D$-modules we suggest \cite{BjorkAnalyticDModulesApplications1993}.
The consideration of $D$-modules takes place in the \textit{Weyl algebra}
\begin{align}
\gls{Weyl} = \mathbb K \langle z_1,\ldots,z_N, \partial_1,\ldots,\partial_N \rangle
\end{align}
where $\mathbb K$ is any field and all generators are supposed to commute except for $\partial_i z_i = z_i\partial_i + 1$. The Weyl algebra is isomorphic to the ring of differential operators on the affine space $\mathbb A_{\mathbb K}^N$ \cite{SaitoGrobnerDeformationsHypergeometric2000}, which is the reason for our special interest. However, when we want to consider differential operators with rational functions as coefficients we will use the so-called \textit{rational Weyl algebra}
\begin{align}
R = \mathbb K (z_1,\ldots,z_N) \langle \partial_1,\ldots,\partial_N\rangle
\end{align}
where $\mathbb K(z_1,\ldots,z_N)$ is the field of rational functions and $\partial_i r(z) = r(z)\partial_i + \pd{r(z)}{z_i}$ is the analogous commutation relation for any rational function $r(z)\in\mathbb K(z_1,\ldots,z_N)$. Note that $D$ is a subalgebra of $R$.
Any left module over the ring $D$ will be called a $D$-module $M$. Many function spaces can be considered as $D$-modules, for example the ring of polynomials $\mathbb K[z_1,\ldots,z_N]$, the space of formal power series $\mathbb K[[z_1,\ldots,z_N]]$ or the space of holomorphic functions $\mathcal O(U)$ on an open set $U\subseteq \mathbb C^N$ by using the natural identification of $\partial_i$ as a derivative and $z_i$ as a multiplication
\begin{align}
\bullet : D \times M \rightarrow M,\quad \partial_i \bullet f = \pd{f}{z_i}, \quad z_i\bullet f = z_i f \quad \text{ for } f\in M \point
\end{align}
In order to distinguish the module action $\bullet$ from the multiplication in the Weyl algebra $\cdot : D \times D \rightarrow D$ we will use different symbols. Thus, a system of linear differential equations with polynomial coefficients can be identified with a left ideal $I\subset D$ in the Weyl algebra. The \textit{solution space} for such an ideal $I\subset D$ is the $\mathbb C$-vector space
\begin{align}
\gls{Sol}= \{ f\in\mathcal O(U) \,\rvert\, P \bullet f = 0 \quad \forall\, P\in I \}
\end{align}
where $\mathcal O(U)$ is the $D$-module of holomorphic functions on the domain $U\subseteq\mathbb C^N$.
\begin{example} \label{ex:D1ideal}
To illustrate the definitions above we will consider an example in the well-known univariate case. Thus, let $I_1 = \langle (z \partial + 1)^2 (z\partial - 2) \rangle=\langle z^3 \partial^3 + 3 z^2\partial^2 - 2 z \partial - 2\rangle$ be a $D$-ideal. It is not hard to solve the corresponding differential equation, and we obtain $\operatorname{Sol}(I_1) = \mathbb C \{ z^{-1}, z^{-1}\ln(z), z^2 \}$. Of course, in order to have a well-defined solution space, we consider $z\in U$ where $U\subseteq \mathbb C^*$ is a simply connected domain.
\end{example}
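One can quickly convince oneself of this solution space by letting the generator of $I_1$ act on the candidate functions. The following \softwareName{SymPy} sketch applies the expanded operator $z^3\partial^3 + 3z^2\partial^2 - 2z\partial - 2$ to all three functions and simplifies the results to zero.
\begin{verbatim}
import sympy as sp

z = sp.symbols('z', positive=True)

def P(f):
    """action of z^3 d^3 + 3 z^2 d^2 - 2 z d - 2 on a function f(z)"""
    return (z**3*sp.diff(f, z, 3) + 3*z**2*sp.diff(f, z, 2)
            - 2*z*sp.diff(f, z) - 2*f)

for f in (1/z, sp.log(z)/z, z**2):
    print(sp.simplify(P(f)))   # 0, 0, 0
\end{verbatim}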
As seen in this example, the solutions are not necessarily entire functions; they may have singularities. Let us first recall the situation in the univariate case to get an intuition for the multivariate generalization. Let $P=c_m(z) \partial^m + \ldots +c_1(z) \partial + c_0(z)\in D$ be a differential operator in a single variable $z\in\mathbb C$ with polynomials $c_i(z)\in\mathbb C[z]$ as coefficients and $c_m(z) \not\equiv 0$. We will call the roots of the leading coefficient $c_m(z)$ the singular points of $P$, and the set of all those roots is known as the singular locus $\operatorname{Sing}(D\cdot P)$. Standard existence theorems \cite{CoddingtonTheoryOrdinaryDifferential2012,SaitoGrobnerDeformationsHypergeometric2000} then state that for a simply connected domain $U\subseteq\mathbb C\setminus\operatorname{Sing}(D\cdot P)$ there exist holomorphic functions on $U$ which are solutions of $P\bullet f=0$. Moreover, the dimension of the solution space (for holomorphic functions on $U$) will be equal to $m$. Therefore, the singular locus describes potential singularities in the analytic continuation of the solutions. In the univariate case one usually distinguishes further between irregular and regular singular points. By Frobenius' method, one can construct series solutions for the latter. A compact summary of this method in the univariate case can be found e.g.\ in \cite{CattaniThreeLecturesHypergeometric2006}. \bigskip
We will now turn to the multivariate case. Every element $P\in D$ has a \textit{normally ordered expression}
\begin{align}
P = \sum_{(\alpha,\beta) \in E} c_{\alpha\beta} z^\alpha \partial^\beta \label{eq:normallyOrderedEx}
\end{align}
where we use the multi-index notation as usual and $c_{\alpha\beta} \in \mathbb K^*$. Since we have more than one generator in the Weyl algebra $D$, generators can enter with different weights when introducing an order on $D$. Hence, the vector $(u,v)\in\mathbb R^{2N}$ will be a \textit{weight} for the Weyl algebra $D$, where we associate $u$ with the weights of the generators $z_1,\ldots,z_N$ and $v$ with the weights of the generators $\partial_1,\ldots,\partial_N$. In doing so, we can define an \textit{order} for every element $P\in D$ by $\gls{order} := \max_{(\alpha,\beta)\in E} (u\alpha+v\beta)$, where $E$ refers to the normally ordered expression \cref{eq:normallyOrderedEx}. The order allows us to define a filtration $\ldots \subseteq F_{-1} \subseteq F_0\subseteq F_1 \subseteq \ldots$ of $D$ by
\begin{align}
F_m = \left\{ P\in D \,\rvert\, \operatorname{ord}(P)\leq m \right\} \point
\end{align}
If $u_i\geq 0$ and $v_i\geq 0$ the filtration is bounded from below, $\{0\}\subseteq F_0 \subseteq \ldots$, and we will always assume this case in this work. The \textit{associated graded ring} $\gls{assocGrRing}$ of this filtration will be generated by
\begin{align}
\{z_1,\ldots,z_N\} \cup \{ \partial_i \,\rvert\, u_i+v_i=0\} \cup \{ \xi_i \,\rvert\, u_i + v_i > 0 \}
\end{align}
where we replace the generators $\partial_i$ in the Weyl algebra $D$ by commuting generators $\xi_i$ whenever $u_i+v_i > 0$. For example, we have $\operatorname{gr}_{(\mathbf 0,\mathbf 0)}(D) = D$ and $\operatorname{gr}_{(\mathbf 0, \mathbf 1)}(D) = \mathbb K [z,\xi] = \mathbb K[z_1,\ldots,z_N,\xi_1,\ldots,\xi_N]$, where $\mathbf 0:=(0,\ldots,0)$ and $\mathbf 1:=(1,\ldots,1)$ stand for the constant zero and the constant one vector, respectively.
The order allows us also to define an analogue of the leading term. By the \textit{initial form} of an element from the Weyl algebra $P\in D$ we understand the following element in $\operatorname{gr}_{(u,v)} (D)$
\begin{align}
\operatorname{in}_{(u,v)} (P) = \sum_{\substack{ (\alpha,\beta)\in E \\ \alpha u +\beta v = \operatorname{ord} (P)}} c_{\alpha\beta} \prod_{i:\ u_i+v_i > 0} z_i^{\alpha_i} \xi_i^{\beta_i} \prod_{i:\ u_i+v_i = 0} z_i^{\alpha_i} \partial_i^{\beta_i} \in\operatorname{gr}_{(u,v)} (D) \point \label{eq:initalForm}
\end{align}
Furthermore, for any left ideal $I\subseteq D$ we define $\gls{initialuv} := \mathbb K \cdot \{ \operatorname{in}_{(u,v)} (P) \,\rvert\, P \in I \} \subseteq \operatorname{gr}_{(u,v)} (D)$ as the \textit{initial ideal} of $I$ with respect to $(u,v)$. \bigskip
However, $\operatorname{ord}(P)$ defines only a partial order on $D$ because different elements of the Weyl algebra can have the same order. A total order $\prec$ is called a \textit{term order}, if (i) $z^\alpha\partial^\beta \prec z^{\alpha^\prime} \partial^{\beta^\prime}$ implies $z^{\alpha+s}\partial^{\beta+t} \prec z^{\alpha^\prime+s} \partial^{\beta^\prime+t}$ for any $(s,t)\in\mathbb N^{2N}$ and (ii) $1=z^0\partial^0$ is the smallest element with respect to $\prec$. When $z^\alpha\partial^\beta$ is the largest monomial of $P\in D$ in the normally ordered expression \cref{eq:normallyOrderedEx} with respect to a term order $\prec$, we will call $z^\alpha\xi^\beta \in\mathbb K[z,\xi]$ the \textit{initial monomial} $\operatorname{in}_\prec(P)$ of $P$. By a slight abuse of notation, we call the ideal of all initial monomials $\operatorname{in}_\prec(I) := \mathbb K \cdot \{ \operatorname{in}_\prec (P) \,\rvert\, P \in I \}\subseteq\mathbb K[z,\xi]$ the \textit{initial ideal} with respect to $\prec$. The monomials which do not lie in $\operatorname{in}_\prec(I)$ are called the \textit{standard monomials} of $I$. The number of standard monomials can be finite or infinite. \bigskip
Analogously to the commutative case, we call a generating set $G=\{g_1,\ldots,g_m\}$ of the ideal $I\subseteq D$ a \textit{Gröbner basis} of $I$ with respect to a term order $\prec$, if the initial monomials $\{\operatorname{in}_\prec(g_1),\ldots,\operatorname{in}_\prec(g_m)\}$ generate the initial ideal $\operatorname{in}_\prec (I)$. Note that $\operatorname{in}_\prec (I)$ is an ideal in the commutative ring $\mathbb K[z,\xi]$, whereas $I\subseteq D$ is an ideal in a noncommutative ring. Let $\prec$ be a term order which sorts monomials by their order $\operatorname{ord}(P)$ whenever these orders are different, and by a further term order, e.g.\ a lexicographic order, whenever they are equal. A Gröbner basis $G$ with respect to such a term order satisfies $\langle\operatorname{in}_{(u,v)}(G)\rangle = \operatorname{in}_{(u,v)} (I)$ \cite{SaitoGrobnerDeformationsHypergeometric2000}. The benefit of Gröbner bases is that every element of $I$ has a standard representation in terms of the Gröbner basis, and we can reduce ideal operations to operations on the Gröbner basis. We refer to \cite{SaitoGrobnerDeformationsHypergeometric2000} for the determination of those Gröbner bases by an extended version of Buchberger's algorithm. For actual calculations of Gröbner bases we recommend the use of software, e.g.\ \softwareName{Macaulay2} \cite{GraysonMacaulay2SoftwareSystem} or \softwareName{Singular} \cite{DeckerSingular421Computer2021}. We included several examples on the usage of those programs in \cref{sec:SoftwareTools}. \bigskip
In the following we will specify the weight to $(\mathbf 0,\mathbf 1)$, i.e.\ we will give the generators $\partial_i$ the weight $1$, whereas the generators $z_i$ get the weight zero. In this case the associated graded algebra is a commutative polynomial ring $\operatorname{gr}_{(\mathbf0,\mathbf 1)}(D) = \mathbb K [z,\xi]$. For this specific weight, the initial forms $\operatorname{in}_{(\mathbf 0,\mathbf 1)}(P)$ are also called \textit{principal symbols} and the initial ideal is known as \textit{characteristic ideal} $\operatorname{in}_{(\mathbf 0,\mathbf 1)}(I)\subseteq \mathbb K[z,\xi]$. Furthermore, the variety generated by this ideal is called the \textit{characteristic variety}
\begin{align}
\gls{char} := \Var\!\left(\operatorname{in}_{(\mathbf 0,\mathbf 1)}(I)\right) \subseteq \mathbb A_{\mathbb K}^{2N} \point
\end{align}
\begin{example} \label{ex:D2ideal}
In order to illustrate these definitions, we will study two more examples. The first will be the ideal $I_2 = \langle z_1 \partial_1 - 1, z_2\partial_2^2 + \partial_1^2 \rangle$. Its (reduced) Gröbner basis is $\{z_1\partial_1 - 1, \partial_1^2, z_2\partial_2^2\}$, which can be determined by suitable software, e.g.\ \softwareName{Macaulay2} \cite{GraysonMacaulay2SoftwareSystem} or \softwareName{Singular} \cite{DeckerSingular421Computer2021}. Thus, its characteristic ideal is given by $\operatorname{in}_{(\mathbf 0,\mathbf 1)}(I_2) = \langle z_1 \xi_1, \xi_1^2, z_2\xi_2^2\rangle$.
\end{example}
\begin{example} \label{ex:D3ideal}
For a slightly more extensive example with $N=3$ consider $I_3 = \langle \partial_1\partial_3 - \partial_2^2, z_1\partial_1 + z_2\partial_2+z_3\partial_3 + 2, z_2 \partial_2 + 2 z_3 \partial_3 - 1\rangle$. The Gröbner basis with respect to a degree reverse lexicographic ordering can be calculated with \softwareName{Macaulay2} or \softwareName{Singular} and one obtains
\begin{align}
\{&\partial_2^2-\partial_1\partial_3, z_2\partial_2+2z_3\partial_3-1, z_1\partial_1-z_3\partial_3+3, z_2\partial_1\partial_3+2z_3\partial_2\partial_3, \nonumber \\
&\quad 2z_1z_3\partial_2\partial_3+z_2z_3\partial_3^2-2z_2\partial_3, z_2^2z_3\partial_3^2-4z_1z_3^2\partial_3^2-2z_2^2\partial_3-2z_1z_3\partial_3\} \point
\end{align}
Therefore, its characteristic ideal is given by
\begin{align}
\operatorname{in}_{(\mathbf 0,\mathbf 1)} (I_3) &= \langle \xi_2^2-\xi_1\xi_3, z_2\xi_2+2z_3\xi_3, z_1\xi_1-z_3\xi_3, z_2\xi_1\xi_3 + 2z_3\xi_2\xi_3, \nonumber \\
&\quad 2z_1z_3\xi_2\xi_3 + z_2z_3\xi_3^2, z_2^2z_3\xi_3^2 - 4z_1z_3^2\xi_3^2 \rangle \nonumber \\
& = \langle \xi_1\xi_3 - \xi_2^2, z_1\xi_1 - z_3\xi_3, z_2 \xi_2 + 2 z_3 \xi_3 \rangle \point
\end{align}
\end{example}
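The last equality above, i.e.\ the simplification of the generating set of $\operatorname{in}_{(\mathbf 0,\mathbf 1)}(I_3)$, only involves commutative polynomial algebra and can therefore be cross-checked with any computer algebra system. The following small \softwareName{Python} sketch (assuming the \softwareName{sympy} library; it is meant as an illustration and is independent of the \softwareName{Macaulay2} and \softwareName{Singular} examples in \cref{sec:SoftwareTools}) verifies the mutual ideal membership of both generating sets:
\begin{verbatim}
from sympy import symbols, groebner

z1, z2, z3, x1, x2, x3 = symbols("z1 z2 z3 xi1 xi2 xi3")

# principal symbols of the Groebner basis elements of I_3
gens_large = [x2**2 - x1*x3, z2*x2 + 2*z3*x3, z1*x1 - z3*x3,
              z2*x1*x3 + 2*z3*x2*x3, 2*z1*z3*x2*x3 + z2*z3*x3**2,
              z2**2*z3*x3**2 - 4*z1*z3**2*x3**2]
# claimed simplified generating set
gens_small = [x1*x3 - x2**2, z1*x1 - z3*x3, z2*x2 + 2*z3*x3]

vars_ = (z1, z2, z3, x1, x2, x3)
G_small = groebner(gens_small, *vars_, order="grevlex")
G_large = groebner(gens_large, *vars_, order="grevlex")

# mutual ideal membership implies equality of the two ideals
assert all(G_small.contains(g) for g in gens_large)
assert all(G_large.contains(g) for g in gens_small)
print("both generating sets define the same characteristic ideal")
\end{verbatim}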
The dimension of the characteristic variety will be crucial for the behaviour of the $D$-ideal. The so-called \textit{weak fundamental theorem of algebraic analysis}, going back to \cite{BernshteinAnalyticContinuationGeneralized1973}, states that for an ideal $I\subsetneq D$ the inequality
\begin{align}
\dim (\operatorname{char} (I)) \geq N \label{eq:weakFTAA}
\end{align}
holds, i.e.\ the dimension of the characteristic variety is at least $N$. Moreover, it can be shown that even every irreducible component of $\operatorname{char}(I)$ has dimension at least $N$, which is the statement of the \textit{strong fundamental theorem of algebraic analysis} \cite{SaitoGrobnerDeformationsHypergeometric2000, KomatsuMicrofunctionsPseudodifferentialEquations1973}. We will call a $D$-ideal $I$ \textit{holonomic} if its characteristic variety has the minimal dimension $N$. Holonomic $D$-ideals behave in many respects much better than non-holonomic $D$-ideals. A module $M=\normalslant{D}{I}$ is called holonomic if the ideal $I$ is holonomic.
Furthermore, we can classify ideals $I$ by the number of their standard monomials. For this we slightly change our perspective and consider the $D$-ideal as an ideal of the rational Weyl algebra $R$. For any term order $\prec$, we call the number of standard monomials of $\operatorname{in}_\prec(I)$ the \textit{holonomic rank} $\operatorname{rank} (I)$ of $I\subseteq R$. We will see in \cref{thm:CKKT} that the holonomic rank generalizes the order of ordinary differential equations. Note that if $I$ is holonomic, then its holonomic rank is finite \cite{SaitoGrobnerDeformationsHypergeometric2000}. However, the converse is not necessarily true.
\begin{example}
We continue \cref{ex:D1ideal,ex:D2ideal,ex:D3ideal}. All ideals $I_1$, $I_2$ and $I_3$ are holonomic ideals. For a convenient term order $\prec$ we have $\operatorname{in}_\prec(I_1) = \langle \xi^3 \rangle$ as the initial ideal of $I_1$ in the rational Weyl algebra $R$. Thus, its standard monomials are $\{1,\xi,\xi^2\}$ and its holonomic rank is $3$. Note that the holonomic rank equals the order of the differential operator.
For the second example, the initial ideal in the rational Weyl algebra will be $\operatorname{in}_\prec(I_2) = \langle \xi_1,\xi_1^2,\xi_2^2\rangle$. Therefore, the standard monomials are $\{1,\xi_2\}$ and the holonomic rank is $2$.
For the last example with $N=3$, we obtain $\operatorname{rank}(I_3) = 2$, where the standard monomials are $\{1,\xi_3\}$.
\end{example}
With the holonomic rank we have introduced a generalization of the order of ordinary differential equations. The last object we have to generalize to the multivariate case is the singular locus. Let $\pi:\mathbb C^{2N} \rightarrow \mathbb C^N$ denote the projection on the first coordinates $(z,\xi)\mapsto z$. The Zariski closure of the projection of the characteristic variety with the trivial solution $\xi_1=\ldots=\xi_N=0$ removed is defined to be the \textit{singular locus}
\begin{align}
\gls{Sing} := \overline{ \pi (\operatorname{char}(I)\setminus\{\xi=0\})} \subseteq \mathbb C^N \point \label{eq:SingLocusDef}
\end{align}
If $I$ is a holonomic $D$-ideal, the singular locus $\operatorname{Sing}(I) \subsetneq \mathbb C^N$ is a proper subset of $\mathbb C^N$ \cite{SattelbergerDModulesHolonomicFunctions2019}. \bigskip
We can now turn to the central existence and uniqueness theorem for partial differential equations. It is a special case of the Cauchy-Kovalevskaya-Kashiwara theorem.
\begin{theorem}[Cauchy-Kovalevskaya-Kashiwara \cite{SaitoGrobnerDeformationsHypergeometric2000}] \label{thm:CKKT}
Let $I\subseteq D$ be a holonomic $D$-ideal and $U\subset \mathbb C^N\setminus\operatorname{Sing}(I)$ a simply connected domain. Then the dimension of the $\mathbb C$-vector space of holomorphic solutions on $U$ is equal to its holonomic rank
\begin{align}
\dim\!\left(\operatorname{Sol}(I)\right) = \operatorname{rank} (I) \point
\end{align}
\end{theorem}
Therefore, the notions from the univariate case also transfer to the multivariate case. However, the involved objects are much harder to determine in most cases, as the calculation of Gröbner bases can be very time-consuming. We will conclude this section by continuing the examples.
\begin{example}
Let $I_1$, $I_2$ and $I_3$ be the $D$-ideals defined in \cref{ex:D1ideal,ex:D2ideal,ex:D3ideal}, respectively. As expected, the singular locus for the first example is simply $\operatorname{Sing}(I_1) = \Var (z)$. Thus, $z=0$ is the only singular point for $I_1$ and also the only possible singularity of its solutions. The situation for $I_2$ is quite similar, and we obtain $\operatorname{Sing}(I_2) = \Var (z_2)$.
For the last example we have $\operatorname{Sing}(I_3) = \Var (z_1 z_3 (z_2^2 - 4z_1 z_3))$, which can be determined e.g.\ by \softwareName{Macaulay2} or by eliminating $\xi$ from $\operatorname{char}(I_3)$. Note that the singular locus is generated by the principal $A$-determinant of the quadratic polynomial $f = z_1 + z_2 x + z_3 x^2$. Thus, we have $E_A(f) = z_1 z_3 (z_2^2 - 4z_1 z_3)$. In the following section it will turn out that this is not just an arbitrary coincidence but rather an instance of a more general correspondence between principal $A$-determinants and singular loci.
\end{example}
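The discriminant appearing in this factorization can be computed directly, e.g.\ with the following small \softwareName{Python}/\softwareName{sympy} sketch (purely illustrative); multiplied by the vertex truncations $z_1$ and $z_3$ it reproduces the generator of $\operatorname{Sing}(I_3)$ stated above:
\begin{verbatim}
from sympy import symbols, discriminant, factor

x, z1, z2, z3 = symbols("x z1 z2 z3")
f = z1 + z2*x + z3*x**2

disc = discriminant(f, x)     # z2**2 - 4*z1*z3
print(disc)
print(factor(z1*z3*disc))     # z1*z3*(z2**2 - 4*z1*z3), the generator of Sing(I_3)
\end{verbatim}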
\section{\texorpdfstring{$\mathcal{A}$}{A}-hypergeometric systems} \label{sec:AHypSystems}
Since the first hypergeometric function was studied by Euler and Gauss more than 200 years ago, many different generalizations of hypergeometric functions were introduced: Pochhammer series ${}_pF_q$, Appell's, Lauricella's and Kampé-de-Fériet functions, to name a few. Those functions can be characterized in three different ways: by series representations, by integral representations and as solutions of partial differential equations. Starting with Gauss' hypergeometric function ${_2}F_1$ we have
\begin{align}
{_2}F_1 (a,b,c|z) = \sum_{k\geq 0} \frac{(a)_k(b)_k}{(c)_k} \frac{z^k}{k!} \label{eq:GaussSeries}
\end{align}
as a series representation. We call $\gls{Pochhammer} := \frac{\Gamma(a+k)}{\Gamma(a)}$ the Pochhammer symbol, where we assume the appropriate limit if both $\Gamma$-functions have a pole.
Alternatively, there are many integral representations known for Gauss' hypergeometric function, e.g.\ \cite{OlverNISTHandbookMathematical2010, BerkeschEulerMellinIntegrals2013}
\begin{align}
{_2}F_1 (a,b,c|z) &= \frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)} \int_0^1 \frac{t^{b-1} (1-t)^{c-b-1}}{(1-zt)^a} \mathop{}\!\mathrm{d} t \label{eq:GaussEuler}\\
{_2}F_1 (a,b,c|z) &= \frac{\Gamma(c)^2}{\Gamma(a)\Gamma(b)\Gamma(c-a)\Gamma(c-b)} \int_{\mathbb R^2_+} \frac{x_1^{a-1}x_2^{b-1}}{\left(1+x_1+x_2+(1-z)x_1x_2\right)^c} \mathop{}\!\mathrm{d} x_1 \mathop{}\!\mathrm{d} x_2 \label{eq:GaussEulerMellin}\\
{_2}F_1 (a,b,c|z) &= \frac{\Gamma(c)}{\Gamma(a)\Gamma(b)} \frac{1}{2\pi i} \int_{-i\infty}^{i\infty} \frac{\Gamma(a+t)\Gamma(b+t)}{\Gamma(c+t)} \Gamma(-t) (-z)^t \mathop{}\!\mathrm{d} t \label{eq:GaussMellinBarnes}
\end{align}
on suitable domains; these representations are known as the Euler integral, the Euler-Mellin integral and the Mellin-Barnes integral, respectively. Finally, Gauss' hypergeometric function can also be considered as a solution of the differential equation
\begin{align}
\big[ z(z-1) \partial^2_z + ((a+b+1)z-c)\partial_z + ab\big] \bullet {_2}F_1(a,b,c|z) = 0 \point \label{eq:GaussODE}
\end{align}
We can reformulate this differential equation also in a more symmetric form, which shows the connection to \cref{eq:GaussSeries}
\begin{align}
\big[ (\theta_z+a)(\theta_z+b) - (\theta_z+c)\partial_z \big] \bullet {_2}F_1(a,b,c|z) = 0 \label{eq:GaussODE2}
\end{align}
where $\theta_z:= z\partial_z$ is the Euler operator.
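That \cref{eq:GaussODE2} is indeed just a rewriting of \cref{eq:GaussODE} can be verified by expanding the Euler operators, e.g.\ with the following small \softwareName{Python}/\softwareName{sympy} sketch (purely illustrative, assuming \softwareName{sympy} is available):
\begin{verbatim}
from sympy import symbols, Function, diff, expand

z, a, b, c = symbols("z a b c")
F = Function("F")(z)
theta = lambda g: z*diff(g, z)        # Euler operator theta_z = z d/dz

# (theta+a)(theta+b) F - (theta+c) F', cf. eq. (GaussODE2)
op1 = theta(theta(F) + b*F) + a*(theta(F) + b*F) - (theta(diff(F, z)) + c*diff(F, z))
# z(z-1) F'' + ((a+b+1) z - c) F' + a b F, cf. eq. (GaussODE)
op2 = z*(z - 1)*diff(F, z, 2) + ((a + b + 1)*z - c)*diff(F, z) + a*b*F

print(expand(op1 - op2))              # 0, i.e. both operators agree
\end{verbatim}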
Therefore, there are in principle three different branches to generalize the notion of a hypergeometric function: by generalizing the series representation \cref{eq:GaussSeries}, the various integral representations \cref{eq:GaussEuler}, \cref{eq:GaussEulerMellin}, \cref{eq:GaussMellinBarnes} or by generalizing the differential equations \cref{eq:GaussODE}, \cref{eq:GaussODE2}. However, these generalizations do not necessarily agree. Nonetheless, we would like to have the three different kinds of representation also for generalized hypergeometric functions. \bigskip
The most general series representation goes back to Horn \cite{HornUberHypergeometrischeFunktionen1940} and was later investigated by Ore and Sato (a summarizing discussion can be found in \cite{GelfandGeneralHypergeometricSystems1992}). A \textit{Horn hypergeometric series} is a multivariate power series in the variables $y_1,\ldots,y_r\in\mathbb C$
\begin{align}
\sum_{k\in\mathbb N_0^r} c(k) y^k \label{eq:DefHornHypergeometric}
\end{align}
where ratios of the coefficients $\frac{c(k+e_i)}{c(k)}$ are supposed to be rational functions in $k_1,\ldots,k_r$. By $e_1,\ldots,e_r$ we denote the elements of the standard basis in Euclidean space. Thus, the coefficients $c(k)$ can essentially be represented by a product of Pochhammer symbols\footnote{By the property of the Pochhammer symbols to satisfy $(a)_n^{-1} = (-1)^n (1-a)_{-n}$ for $n\in\mathbb Z$ one can convert Pochhammer symbols in the denominator to Pochhammer symbols in the numerator and vice versa. The most general form of those coefficients $c(k)$ is given by the Ore-Sato theorem \cite{GelfandGeneralHypergeometricSystems1992}.} $\prod_i (a_i)_{l_i(k)}$. Here, $a_i\in\mathbb C$ are complex numbers and $l_i(k)$ are integer linear combinations of the summation indices $k_1,\ldots,k_r$. Since derivatives of Pochhammer symbols with respect to $a_i$ can be expressed by a sum of other Pochhammer symbols, derivatives of Horn hypergeometric functions with respect to their parameters are again Horn hypergeometric functions \cite{BytevDerivativesHorntypeHypergeometric2017}. For further studies of Horn hypergeometric functions we refer to \cite{SadykovHypergeometricFunctionsSeveral2002}. However, we want to remark that for $r>2$ not all Horn hypergeometric functions can be expressed as a solution of a holonomic $D$-ideal \cite{GelfandGeneralHypergeometricSystems1992}. \bigskip
Another option to generalize the notion of hypergeometric functions is to start with the integral representations. This was worked out by Aomoto, among others, who generalized the Euler integral representation to integrals over products of linear forms raised to certain powers \cite{AomotoStructureIntegralsPower1977}. However, the integration region for multivariate hypergeometric integrals can be very intricate. We refer to \cite{AomotoTheoryHypergeometricFunctions2011} for a comprehensive overview of those hypergeometric integrals. \bigskip
Finally, we can choose the differential equation as starting point for a generalized meaning of hypergeometric functions. This approach was initiated in the late 1980s by Gelfand, Graev, Kapranov, Zelevinsky and collaborators \cite{GelfandCollectedPapersVol1989, GelfandHypergeometricFunctionsToric1991, GelfandGeneralizedEulerIntegrals1990, GelfandGeneralHypergeometricSystems1992, GelfandDiscriminantsResultantsMultidimensional1994}. Due to their dependence on a finite set of lattice vectors $\mathcal{A}$, they are known as $\mathcal{A}$-hypergeometric functions or occasionally as Gelfand-Kapranov-Zelevinsky (GKZ) hypergeometric functions.
$\mathcal{A}$-hypergeometric functions are defined as solutions of certain holonomic $D$-ideals and include Aomoto's hypergeometric functions \cite{OpdamMultivariableHypergeometricFunctions2001}, as well as all holonomic Horn hypergeometric functions \cite{CattaniThreeLecturesHypergeometric2006}. And vice versa, we can express $\mathcal{A}$-hypergeometric functions also in terms of series or integrals as in the case of Gauss' hypergeometric function. The numerous representations for Feynman integrals thus appear naturally in the light of $\mathcal{A}$-hypergeometric functions. In the application to Feynman integrals we can identify a generalization of \cref{eq:GaussEulerMellin} with the parametric representation of Feynman integrals, and we will find its connection to the Mellin-Barnes representations \cref{eq:GaussMellinBarnes} which are ubiquitous in Feynman calculus (see e.g.\ \cite{SmirnovFeynmanIntegralCalculus2006}) in \cref{thm:MellinBarnesRepresentation}. Also, a relation to Euler type integrals \cref{eq:GaussEuler} can be given, which we will elaborate in \cref{sec:EulerIntegrals}. Furthermore, the whole \cref{ch:seriesRepresentations} will be devoted to the connection of Feynman integrals to hypergeometric series representations.
The theory of $\mathcal{A}$-hypergeometric functions not only characterizes those functions, it also gives us deep insights into the structure of hypergeometric functions. This can also be profitable in the application to Feynman integrals, which we want to demonstrate by the examination of the singular locus, also known as the Landau variety. Moreover, $\mathcal{A}$-hypergeometric functions are connected to many branches in mathematics, e.g.\ combinatorics, number theory, motives, and Hodge theory (see exemplarily \cite{ReicheltAlgebraicAspectsHypergeometric2020, RobertsHypergeometricMotives2021}). Therefore, we may reasonably hope that these various connections also contain useful insights for Feynman integrals. \bigskip
In this section we will introduce $\mathcal{A}$-hypergeometric systems, and we will work out a representation of $\mathcal{A}$-hypergeometric functions by multivariate series. Moreover, we will draw their connection to $A$-discriminants. The following collection can only give a small glimpse of this rich theory. We refer to \cite{GelfandCollectedPapersVol1989, GelfandGeneralHypergeometricSystems1992, AomotoTheoryHypergeometricFunctions2011, SaitoGrobnerDeformationsHypergeometric2000, StienstraGKZHypergeometricStructures2005, CattaniThreeLecturesHypergeometric2006, VilenkinGelFandHypergeometric1995} for a more detailed description.
\subsection{Basic properties of \texorpdfstring{$\mathcal{A}$}{A}-hypergeometric systems} \label{ssec:BasicAhypergeometricSystems}
Let $\mathcal{A}=\{a^{(1)},\ldots,a^{(N)}\}\subset \mathbb Z^{n+1}$ be a finite subset of lattice points, which span $\mathbb R^{n+1}$ as a vector space $\operatorname{span}_{\mathbb R}(\mathcal{A}) = \mathbb R^{n+1}$. In particular, we always consider the case $n+1\leq N$. As before, we will denote by $\mathcal{A}\in\mathbb Z^{(n+1)\times N}$ also the matrix generated by the elements of the subset $\mathcal{A}$ as column vectors. Therefore, $\operatorname{span}_{\mathbb R}(\mathcal{A}) = \mathbb R^{n+1}$ is nothing else than the requirement that $\mathcal{A}$ has full rank. Further, we will assume that there exists a linear map $h:\mathbb Z^{n+1}\rightarrow \mathbb Z$, such that $h(a)=1$ for any $a\in\mathcal{A}$. Thus, all elements of $\mathcal{A}$ lie on a common hyperplane off the origin, which allows us to consider $\mathcal{A}$ as describing points of an affine space in homogenization. We write $A\subset\mathbb Z^n$ for the dehomogenized point configuration of $\mathcal{A}$ (see \cref{ssec:vectorConfigurations}). Equivalently to requiring such a linear map $h$, we can demand $f=\sum_{a\in\mathcal{A}} z_a x^a$ to be a quasi-homogeneous polynomial. The integer kernel of $\mathcal{A}$ will be denoted by
\begin{align}
\gls{Ll} := \ker_{\mathbb Z}(\mathcal{A}) = \left\{ (l_1,\ldots,l_N)\in\mathbb Z^N \, \big\rvert \, l_1 a^{(1)} + \ldots + l_N a^{(N)} = 0 \right\} = \operatorname{Dep}(\mathcal{A})\cap\mathbb Z^N \point
\end{align}
By the notions of \cref{ssec:GaleDuality} this is the space of integer linear dependences and every Gale dual of $\mathcal{A}$ provides a basis of $\mathbb L$.\bigskip
The $\mathcal{A}$-hypergeometric system will be generated by a left ideal in the Weyl algebra $D := \langle z_1,\ldots,z_N,\allowbreak\partial_1,\ldots,\partial_N\rangle$, which consists of two types of differential operators, called toric and homogeneous operators
\begin{align}
\gls{toricOp} &:= \prod_{l_j>0} \partial_j^{l_j} - \prod_{l_j<0} \partial_j^{-l_j} \qquad\text{for}\quad l\in\mathbb L \label{eq:toricOperators}\\
\gls{homOp} &:= \sum_{j=1}^N a_i^{(j)} z_j \partial_j + \beta_i \qquad\text{for}\quad i=0,\ldots,n \label{eq:homogeneousOperators}
\end{align}
where $\gls{GKZpar}\in\mathbb C^{n+1}$ is an arbitrary complex parameter vector. We call the $D$-ideal
\begin{align}
\gls{GKZIdeal} = \sum_{i=0}^{n} D\cdot E_i(\beta) + \sum_{l\in\mathbb L} D\cdot \square_l \label{eq:AhypIdeal}
\end{align}
the \textit{$\mathcal{A}$-hypergeometric ideal}. Note that this definition agrees with the definition in \cref{eq:AHypIdeal1}. We will call the latter part $\gls{toricId} = \sum_{l\in\mathbb L} D\cdot \square_l$ the \textit{toric ideal}; its generators $\square_l$ also generate an ideal in the (commutative) ring $\mathbb K[\partial_1,\ldots,\partial_N]$. The $D$-module of equivalence classes $\gls{GKZModule} = \normalslant{D}{H_\mathcal{A}(\beta)}$ is referred to as the \textit{$\mathcal{A}$-hypergeometric system} or the \textit{$\mathcal{A}$-hypergeometric module}\footnote{Note that there is a unique isomorphism between $\mathcal{A}$-hypergeometric systems and the $n$-th relative de Rham cohomology group $\mathbb H^n$, i.e.\ $\mathcal M_\mathcal{A}(\beta) \cong \mathbb H^n$, see \cite[prop. 2.3]{ChestnovMacaulayMatrixFeynman2022}.}. Holomorphic solutions on convenient domains $U\subseteq\mathbb C^N$ of this system of differential equations will be called \textit{$\mathcal{A}$-hypergeometric functions}. Thus, $\mathcal{A}$-hypergeometric functions are the elements of $\operatorname{Sol} (H_\mathcal{A}(\beta))$.\bigskip
One of the most important properties of the hypergeometric ideals \cref{eq:AhypIdeal} is that they are always holonomic. Therefore, the dimension of $\operatorname{Sol}(H_\mathcal{A}(\beta))$ will be finite and the definition of $\mathcal{A}$-hypergeometric functions is meaningful. Furthermore, the holonomic rank can be characterized by the volume of the polytope $\operatorname{Conv}(A)$. In the following theorem, we collect the most essential properties of $\mathcal{A}$-hypergeometric systems, which summarize the results of more than ten years of research in this field.
\begin{theorem}[Holonomic rank of $\mathcal{A}$-hypergeometric systems \cite{GelfandHolonomicSystemsEquations1988, GelfandEquationsHypergeometricType1988, GelfandGeneralizedEulerIntegrals1990, AdolphsonHypergeometricFunctionsRings1994, SaitoGrobnerDeformationsHypergeometric2000, MatusevichHomologicalMethodsHypergeometric2004}] \label{thm:HolRankAHyp}
The $\mathcal{A}$-hypergeometric ideal $H_\mathcal{A}(\beta)$ is always a holonomic $D$-ideal. Its holonomic rank is bounded from below by the volume of a polytope
\begin{align}
\operatorname{rank} (H_\mathcal{A}(\beta)) \geq \operatorname{vol} (\operatorname{Conv} (A)) \point \label{eq:AHypHolRank}
\end{align}
For generic values of $\beta\in\mathbb C^{n+1}$ equality holds in \cref{eq:AHypHolRank}. Furthermore, equality in \cref{eq:AHypHolRank} holds for all values of $\beta\in\mathbb C^{n+1}$ if and only if $I_\mathcal{A} $ is Cohen-Macaulay.
\end{theorem}
We will justify the statement for (very) generic $\beta$ by an explicit construction in \cref{ssec:GammaSeries}.\bigskip
By the restriction to \textit{generic} values $\beta\in\mathbb C^{n+1}$ we mean that $\operatorname{rank} (H_\mathcal{A}(\beta)) = \operatorname{vol} (\operatorname{Conv} (A))$ holds except for values of $\beta$ in a proper Zariski closed subset of $\mathbb C^{n+1}$. Hence, the set of generic points of $\mathbb C^{n+1}$ is a non-empty Zariski open set, which is dense in $\mathbb C^{n+1}$. Occasionally, we consider the stronger restriction \textit{very generic}, which describes elements from a countable intersection of non-empty Zariski open sets. Typically, the non-generic values of $\beta$ are located at certain integer points. We will specify this statement in \cref{ssec:GammaSeries}. \bigskip
\begin{example}[Gauss' hypergeometric function] \label{ex:GaussAhyp}
We want to consider Gauss' hypergeometric function from \cref{sec:AHypSystems} in the context of $\mathcal{A}$-hypergeometric systems. For this function the $\mathcal{A}$-hypergeometric system will be generated by four points in $\mathbb Z^3$
\begin{align}
\mathcal{A} = \begin{pmatrix}
1 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1
\end{pmatrix} , \qquad \beta = \begin{pmatrix} a + b - c + 1 \\ a \\ b \end{pmatrix} \point \label{eq:ABetaGauss}
\end{align}
The lattice of linear dependences of $\mathcal{A}$ is given by $\mathbb L = (1,-1,-1,1)\ \mathbb Z$. Hence, the $\mathcal{A}$-hypergeometric ideal $H_\mathcal{A}(\beta)$ will be spanned by the differential operators
\begin{align}
\partial_1 \partial_4 - \partial_2\partial_3 = 0, & \qquad z_1\partial_1 + z_2\partial_2 + z_3\partial_3 + z_4\partial_4 + (a+b-c+1) = 0 \\
z_2\partial_2 + z_4\partial_4 + a = 0, & \qquad z_3\partial_3 + z_4\partial_4 + b = 0 \point
\end{align}
We can combine these four differential operators into a single one
\begin{align}
z_4 \left(\frac{z_4}{z_2z_3} - \frac{1}{z_1} \right) \partial_4^2 + \left( \frac{1+a+b}{z_2z_3} z_4 - \frac{c}{z_1}\right) \partial_4 + \frac{ab}{z_2z_3} = 0 \point \label{eq:GaussAhypDiff}
\end{align}
Thus, for $z_1 = z_2 = z_3 = 1$ we obtain the differential equation for Gauss' hypergeometric function \cref{eq:GaussODE}. Hence, Gauss' hypergeometric function ${_2}F_1(a,b,c|z)$ is an $\mathcal{A}$-hypergeometric function $\Phi(1,1,1,z)$, with $\Phi\in\operatorname{Sol}(H_\mathcal{A}(\beta))$, where $\mathcal{A}$ and $\beta$ are given by \cref{eq:ABetaGauss}. The hypergeometric differential equation has two basic solutions, which agrees with the $\mathcal{A}$-hypergeometric description, since $\operatorname{vol}\!\left(\operatorname{Conv}(A)\right)=2$.
\end{example}
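As a small illustrative cross-check (assuming the \softwareName{Python} libraries \softwareName{sympy} and \softwareName{mpmath}; this is not part of the workflow of \cref{sec:SoftwareTools}) one can compute the lattice $\mathbb L$ and verify numerically that ${_2}F_1(a,b,c|z)$ satisfies \cref{eq:GaussAhypDiff} at $z_1=z_2=z_3=1$:
\begin{verbatim}
from sympy import Matrix
import mpmath as mp

# integer kernel of A: one basis vector (1, -1, -1, 1), i.e. L = (1,-1,-1,1) Z
A = Matrix([[1, 1, 1, 1], [0, 1, 0, 1], [0, 0, 1, 1]])
print(A.nullspace())

# residual of z(z-1) F'' + ((a+b+1) z - c) F' + a b F for F = 2F1(a,b,c|z)
a, b, c, z = 0.3, 1.7, 2.4, 0.25
F = lambda t: mp.hyp2f1(a, b, c, t)
res = z*(z - 1)*mp.diff(F, z, 2) + ((a + b + 1)*z - c)*mp.diff(F, z) + a*b*F(z)
print(abs(res))   # numerically zero
\end{verbatim}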
At first sight, it seems that we have introduced too many variables in $\mathcal{A}$-hyper\-geometric systems as it appears in \cref{ex:GaussAhyp}. However, closer inspection of this example shows that there is actually only one effective variable $y = \frac{z_1z_4}{z_2 z_3}$. It is the beautiful observation of Gelfand, Kapranov and Zelevinsky that the introduction of these extra variables will greatly simplify the hypergeometric structures \cite{StienstraGKZHypergeometricStructures2005}. Note that this behaviour is the same as observed for the $\beta$-projection of secondary fans (see \cref{ssec:secondaryPolytope}) and the reduced $\mathcal{A}$-discriminants of \cref{ssec:Adiscriminants}.\bigskip
A simple but nevertheless useful property of $\mathcal{A}$-hypergeometric functions is that they obey certain shift relations.
\begin{lemma}
Let $\Phi(z)\in\operatorname{Sol}\!\left(H_\mathcal{A}(\beta)\right)$ be an $\mathcal{A}$-hypergeometric function. Then its derivative $\pd{\Phi(z)}{z_j} \in \operatorname{Sol}\!\left(H_\mathcal{A}\!\left(\beta + a^{(j)}\right)\right)$ is an $\mathcal{A}$-hypergeometric function, where the parameter $\beta$ is shifted by $a^{(j)}$.
\end{lemma}
\begin{proof}
The generator $\partial_j$ commutes with the operators $\square_l$ of equation \cref{eq:toricOperators}. By the use of the commutator relations of the Weyl algebra we can easily verify $\partial_j E_i(\beta) = E_i\!\left(\beta + a_i^{(j)}\right) \partial_j$, which shows the assertion.
\end{proof}
Another simple but useful consequence of the existence of a linear map $h : \mathbb Z^{n+1}\rightarrow \mathbb Z$ with $h(a)=1$ for all $a\in\mathcal{A}$ is the following observation.
\begin{lemma} \label{lem:HomogenityRelationsForAa}
Let $\mathcal{A} = \{a^{(1)},\ldots,a^{(N)} \} \subset\mathbb Z^{n+1}$ be a vector configuration satisfying the conditions for an $\mathcal{A}$-hypergeometric system (i.e.\ $\operatorname{span}_{\mathbb R}(\mathcal{A}) = \mathbb R^{n+1}$ and all elements of $\mathcal{A}$ lying on a common hyperplane off the origin) and let $\sigma \subseteq \{1,\ldots,N\}$ be an index set of cardinality $n+1$, such that $\mathcal{A}_\sigma := \{ a^{(j)} \}_{j\in\sigma}$ is also full dimensional. In other words, $\sigma$ describes a full dimensional simplex from points of $\mathcal{A}$. Then we have
\begin{align}
\sum_{i\in\sigma} \left(\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\right)_{ij} = 1 \label{eq:AA1a}
\end{align}
for all $j\in\bar\sigma$, where $\bar\sigma$ is the complement of $\sigma$. If additionally $\mathcal{A}$ is a homogenized point configuration, i.e.\ $\mathcal{A}$ is of the form $\mathcal{A} = \begin{psmallmatrix} 1 & \ldots & 1 \\ & A & \end{psmallmatrix}$, it follows
\begin{align}
\sum_{i\in\sigma} \left(\mathcal{A}_\sigma^{-1}{\underline{\nu}}\right)_i = \nu_0 \label{eq:AA1b}
\end{align}
for any complex vector ${\underline{\nu}} = (\nu_0,\nu_1,\ldots,\nu_n)\in\mathbb C^{n+1}$.
\end{lemma}
\begin{proof}
$\mathcal{B}=\begin{psmallmatrix} -\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} \\ \mathbbm{1}\end{psmallmatrix}\in\mathbb{Q}^{N\times r}$ is a possible Gale dual of $\mathcal{A}$, which can be verified by the direct computation $\mathcal{A} \mathcal{B} = 0$. From the existence of a linear form $h$ with $h(a) = 1$ for all $a\in\mathcal{A}$ it follows in particular that $\sum_j \mathcal{B}_{jk}=0$ and thereby equation \cref{eq:AA1a}. The second statement \cref{eq:AA1b} follows from $\sum_{j,k} (\mathcal{A}_\sigma)_{ij} (\mathcal{A}_\sigma^{-1})_{jk} {\underline{\nu}}_k={\underline{\nu}}_i$ applied to the row $i=0$ of $\mathcal{A}_\sigma$, which consists of ones only.
\end{proof}
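For a concrete illustration of \cref{lem:HomogenityRelationsForAa} we can check \cref{eq:AA1a} and \cref{eq:AA1b} for the point configuration of \cref{ex:GaussAhyp} with $\sigma=\{1,2,3\}$. The following \softwareName{Python}/\softwareName{sympy} sketch (purely illustrative; columns are indexed starting from $0$ in the code) performs this check:
\begin{verbatim}
from sympy import Matrix, ones

A = Matrix([[1, 1, 1, 1], [0, 1, 0, 1], [0, 0, 1, 1]])
Asig = A.extract([0, 1, 2], [0, 1, 2])   # columns of sigma = {1,2,3}
Abar = A.extract([0, 1, 2], [3])         # remaining column, sigma bar = {4}

M = Asig.inv() * Abar
print(ones(1, 3) * M)                    # [[1]]: the column sums equal 1, cf. (AA1a)

nu = Matrix([5, -2, 7])                  # an arbitrary test vector (nu_0, nu_1, nu_2)
print(sum(Asig.inv() * nu))              # 5 = nu_0, cf. (AA1b)
\end{verbatim}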
As seen in \cref{thm:HolRankAHyp}, $\mathcal{A}$-hypergeometric systems are in a certain sense well-behaved, and we can construct a basis of their solution space $\operatorname{Sol}(H_\mathcal{A}(\beta))$. There are several ways to construct such bases. In the following section we will give a basis in terms of multivariate power series, which was the first type of solutions of $\mathcal{A}$-hypergeometric systems, introduced in \cite{GelfandHolonomicSystemsEquations1988}. Thereby, the observations in \cref{lem:HomogenityRelationsForAa} will help us to simplify the convergence conditions of those series. There are many alternatives for constructing a basis of the solution space of $\mathcal{A}$-hypergeometric systems. We refer to \cite{Matsubara-HeoLaplaceResidueEuler2018} for a collection of various types of bases and to \cite{SaitoGrobnerDeformationsHypergeometric2000} for a systematic treatment of series solutions.
\vspace{.3\baselineskip}
\subsection{\texorpdfstring{$\Gamma$}{\unichar{"0393}}-series}\label{ssec:GammaSeries}
\vspace{.3\baselineskip}
Let $\mathcal{A}\in\mathbb Z^{(n+1)\times N}$ be a full rank integer matrix with $n+1\leq N$ and $\mathbb {L} = \operatorname{ker}_\mathbb{Z} (\mathcal{A})$ its kernel with $\operatorname{rank} (\mathbb L) = N-n-1 =: r$ as before. Then for every $\gamma\in\mathbb{C}^N$ the formal series invented in \cite{GelfandHolonomicSystemsEquations1988}\glsadd{gammaSer}\glsunset{gammaSer}
\begin{align}
\varphi_\gamma(z) = \sum_{l\in\mathbb L} \frac{z^{l+\gamma}}{\Gamma(\gamma+l+1)} \label{eq:GammaSeriesOriginalDef}
\end{align}
is called a \textit{$\mathit\Gamma$-series}. We recall the use of a multi-index notation, i.e.\ we write $z^{l+\gamma} = \prod_{i=1}^N z_i^{l_i+\gamma_i}$ and $\Gamma(\gamma+l+1) = \prod_{i=1}^N \Gamma(\gamma_i+l_i+1)$. For an appropriate choice of $\gamma$, those $\Gamma$-series are formal solutions of the $\mathcal{A}$-hypergeometric systems \cref{eq:AhypIdeal}.
\begin{lemma}[$\Gamma$-series as formal solutions of GKZ hypergeometric systems \cite{GelfandHolonomicSystemsEquations1988,GelfandHypergeometricFunctionsToral1989}]
Let $\gamma\in\mathbb C^N$ be a complex vector satisfying $\mathcal{A} \gamma + \beta = 0$. Then the $\Gamma$-series $\varphi_\gamma(z)$ is a formal solution of the $\mathcal{A}$-hypergeometric system
\begin{align*}
H_\mathcal{A} (\beta) \bullet \varphi_\gamma(z)=0 \point
\end{align*}
\end{lemma}
\begin{proof}
For any $u\in\mathbb N_0^N$ and any $w\in\mathbb C^N$ we have $\left(\frac{\partial}{\partial z}\right)^u z^w = \frac{\Gamma(w+1)}{\Gamma(w-u+1)} z^{w-u}$ (with an appropriate limit, respectively). Furthermore, one can add an element of $\mathbb L$ to $\gamma$ without changing the $\Gamma$-series. Every element of $\mathbb L$ can be written as $u-v$, where $u,v\in\mathbb N_0^N$ are two vectors satisfying $\mathcal{A} u = \mathcal{A} v$. Therefore, we have
\begin{align}
\partial^u \bullet \varphi_{\gamma}(z) &= \sum_{l\in\mathbb L} \frac{z^{l+\gamma-u}}{\Gamma(\gamma+l-u +1)} = \varphi_{\gamma-u}(z) = \varphi_{\gamma-v}(z) = \partial^v \bullet \varphi_{\gamma}(z)
\end{align}
which shows that the $\Gamma$-series fulfil the toric operators \cref{eq:toricOperators}. For the homogeneous operators \cref{eq:homogeneousOperators} one considers
\begin{align}
\sum_{j=1}^N a^{(j)} z_j \partial_j \bullet \varphi_\gamma(z) = \sum_{l\in\mathbb L} \left(\sum_{j=1}^N a^{(j)} (\gamma_j+l_j)\right) \frac{z^{l+\gamma}}{\Gamma(\gamma+l+1)} = - \beta \varphi_\gamma(z) \point
\end{align}
\end{proof}
The restriction $\mathcal{A} \gamma + \beta=0$ allows in general many choices of $\gamma$. Let $\sigma \subseteq \{1,\ldots,N\}$ be an index set with cardinality $n+1$, such that the matrix $\mathcal{A}_\sigma := \{a^{(j)}\}_{j\in \sigma}$ restricted to columns corresponding to $\sigma$ is non-singular $\det (\mathcal{A}_\sigma) \neq 0$. Denote by $\bar\sigma = \{1,\ldots,N\} \setminus \sigma$ the complement of $\sigma$ and $\mathcal{A}_{\bar{\sigma}}:=\{a^{(j)}\}_{j\in \bar\sigma}$. If one sets $\gamma_\sigma = -\mathcal{A}_\sigma^{-1} (\beta + \mathcal{A}_{\bar{\sigma}} k)$ and $\gamma_{\bar\sigma} = k$ the condition $\mathcal{A} \gamma + \beta = 0$ is satisfied for any $k\in\mathbb{C}^{r}$.
On the other hand we can split the lattice $\mathbb L = \left\{ l\in\mathbb Z^N \,\rvert\, \mathcal{A} l =0\right\}$ in the same way $\mathcal{A}_\sigma l_\sigma + \mathcal{A}_{\bar{\sigma}} l_{\bar\sigma}=0 $ and obtain a series only over $l_{\bar\sigma}$
\begin{align}
\varphi_{\gamma_\sigma} (z) &= \sum_{\substack{l_{\bar\sigma}\in\mathbb Z^r \\ \textrm{s.t. } \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} l_{\bar\sigma}\in\mathbb Z^{n+1} }} \frac{z_\sigma^{-\mathcal{A}_\sigma^{-1} (\beta+\mathcal{A}_{\bar{\sigma}} k + \mathcal{A}_{\bar{\sigma}} l_{\bar\sigma}) } z_{\bar\sigma}^{k+l_{\bar\sigma}}}{\Gamma\!\left(-\mathcal{A}_\sigma^{-1}(\beta+\mathcal{A}_{\bar{\sigma}} k +\mathcal{A}_{\bar{\sigma}} l_{\bar\sigma})+1\right) \Gamma(k+l_{\bar\sigma}+1)} \point
\end{align}
In order to simplify the series we will restrict $k$ to integer vectors $k\in\mathbb Z^r$; then all terms with $(k+l_{\bar\sigma})_i\in\mathbb{Z}_{< 0}$ vanish due to the poles of the $\Gamma$-function. The $\Gamma$-series now depends on $k$ and $\sigma$\glsadd{gammaSer}\glsunset{gammaSer}
\begin{align}
\varphi_{\sigma,k}(z) =z_\sigma^{-\mathcal{A}_\sigma^{-1}\beta} \sum_{\lambda\in\Lambda_k} \frac{z_\sigma^{-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda} z_{\bar\sigma}^\lambda}{\lambda!\ \Gamma\!\left(-\mathcal{A}_\sigma^{-1}(\beta+\mathcal{A}_{\bar{\sigma}} \lambda)+1\right)} \label{eq:GammaPowerSeriesVarphi}
\end{align}
where $\gls{Lambdak}=\left\{ k + l_{\bar\sigma} \in \mathbb{N}_0^r \,\rvert\, \mathcal{A}_{\bar{\sigma}} l_{\bar\sigma} \in \mathbb Z \mathcal{A}_\sigma \right\}\subseteq \mathbb{N}_0^r$. Therefore, the $\Gamma$-series is turned into a power series in the variables\glsadd{effectiveVars}
\begin{align}
y_j = \frac{(z_{\bar\sigma})_j}{\prod_i (z_\sigma)_i^{(\mathcal{A}_\sigma^{-1} \mathcal{A}_{\bar{\sigma}})_{ij}}} \quad\text{for } j=1,\ldots,r \text{ .}\label{eq:effectiveVarsGammaSeries}
\end{align}
Note that these are the same ``effective'' variables, which we introduced in \cref{eq:effectiveVarsADisc}.
However, for certain values of $\beta\in\mathbb C^{n+1}$ some of the series \cref{eq:GammaPowerSeriesVarphi} may vanish. A $\Gamma$-series $\varphi_{\sigma,k}$ vanishes if and only if for every $\lambda\in\Lambda_k$ the vector $\mathcal{A}_\sigma^{-1}(\beta+\mathcal{A}_{\bar{\sigma}}\lambda)$ has at least one (strictly) positive integer component. Hence, a $\Gamma$-series will vanish only if $\beta$ takes values on one of countably many hyperplanes. Therefore, we will demand $\beta$ to be very generic in order to avoid those cases. Note that the set of very generic $\beta$ is dense in $\mathbb C^{n+1}$.\bigskip
Next, we will consider the dependence of $\varphi_{\sigma,k}$ on the choice of $k\in\mathbb Z^r$. Note first that $\Lambda_k = \Lambda_{k+k'}$ if and only if $\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} k' \in \mathbb Z^{n+1}$. Furthermore, there is always such a positive integer vector $k'\in\mathbb N^r$ satisfying $\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} k' \in \mathbb Z^{n+1}$, e.g.\ by choosing $k'\in |\det(\mathcal{A}_\sigma)|\cdot \mathbb N^r$. Thus, we can shift every $k\in\mathbb Z^r$ to non-negative integers, which is why we want to restrict our consideration to $k\in\mathbb N_0^r$. Moreover, the cardinality of the set $\left\{\Lambda_k \,\rvert\, k \in \mathbb Z^r\right\} = \left\{\Lambda_k \,\rvert\, k \in \mathbb N_0^r\right\}$ is given by $|\det(\mathcal{A}_\sigma)|$, which is nothing else than the volume of $\operatorname{Conv}(\mathcal{A}_\sigma)$ \cite{Fernandez-FernandezIrregularHypergeometricDmodules2009}. In order to generate a basis of the solution space $\operatorname{Sol}\!\left(H_\mathcal{A}(\beta)\right)$, we want to pick those elements from $\left\{\Lambda_k \,\rvert\, k \in \mathbb N_0^r\right\}$ such that the resulting $\Gamma$-series $\varphi_{\sigma,k}$ are linearly independent. Thus, let $k^{(1)},\ldots,k^{(s)}$ be representatives of $\bigslant{\mathbb Z^{n+1}}{\mathbb Z\mathcal{A}_\sigma}$, i.e.\
\begin{align}
\bigslant{\mathbb Z^{n+1}}{\mathbb Z\mathcal{A}_\sigma} = \left\{ \left[\mathcal{A}_{\bar{\sigma}} k^{(j)}\right] \, \rvert \, j = 1, \ldots, s = \operatorname{vol}\!\left(\operatorname{Conv} (\mathcal{A}_\sigma)\right) \right\} \point \label{eq:representativesK}
\end{align}
Hence, $\left\{\Lambda_{k^{(j)}} \,\rvert\, j=1,\ldots,s\right\}$ defines a partition of $\mathbb N_0^r$ \cite{Fernandez-FernandezIrregularHypergeometricDmodules2009}:
\begin{align}
\Lambda_{k^{(i)}} \cap \Lambda_{k^{(j)}} = \emptyset\quad \text{for all} \quad i\neq j \quad\text{and} \qquad \bigcup_{j=1}^s \Lambda_{k^{(j)}} = \mathbb N^r_0 \point \label{eq:LambdaPartition}
\end{align}
Therefore, the $\Gamma$-series $\varphi_{\sigma,k^{(1)}},\ldots,\varphi_{\sigma,k^{(s)}}$ have disjoint supports and are linearly independent if none of them is identically zero, which we have excluded by assuming very generic $\beta$.
\begin{example}
We want to illustrate the construction of partitions of $\mathbb N_0^r$ by means of the following example. Let
\begin{align}
\mathcal{A} = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 0 & 2 & 0 \\
0 & 1 & 0 & 2
\end{pmatrix}
\end{align}
be a vector configuration and $\sigma = \{1,3,4\}$ an index set, where the corresponding full dimensional simplex has volume $\operatorname{vol}\!\left(\operatorname{Conv}(\mathcal{A}_\sigma)\right) = |\det \mathcal{A}_\sigma | = 2$. As we will later see, this is a specific configuration for a fully massive bubble self-energy graph. Considering the inverse of $\mathcal{A}_\sigma$, we find that the sublattice $\mathbb Z \mathcal{A}_\sigma = (\mathbb Z,\mathbb Z,2\mathbb Z)^\top$ contains only even integers in its last component. Hence, the quotient group $\bigslant{\mathbb Z^3}{\mathbb Z \mathcal{A}_\sigma} = \{ (\mathbb Z,\mathbb Z,2\mathbb Z)^\top, (\mathbb Z,\mathbb Z,2\mathbb Z + 1)^\top \}$ consists of the two equivalence classes with even and odd integers in the last component. Thus, we can choose the representatives $[0,0,0]^\top, [0,0,1]^\top$ or equivalently $[0,0,0]^\top, [1,0,1]^\top$. Due to \cref{eq:representativesK}, these representatives will be generated by $k^{(1)} = 0$ and $k^{(2)} = 1$. The two resulting summation regions $\Lambda_{k^{(1)}} = 2\mathbb N_0$ and $\Lambda_{k^{(2)}} = 2\mathbb N_0+1$ consist of the even and odd natural numbers, respectively.
\end{example}
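This partition can also be obtained algorithmically by checking the lattice condition in the definition of $\Lambda_k$ directly. The following small \softwareName{Python}/\softwareName{sympy} sketch (purely illustrative) reproduces the two summation regions of the example:
\begin{verbatim}
from sympy import Matrix

A = Matrix([[1, 1, 1, 1], [1, 0, 2, 0], [0, 1, 0, 2]])
Asig = A.extract([0, 1, 2], [0, 2, 3])   # columns of sigma = {1,3,4}
Abar = A.extract([0, 1, 2], [1])         # remaining column, sigma bar = {2}

print(abs(Asig.det()))                   # 2 = vol(Conv(A_sigma))

def in_ZA_sigma(m):
    # check whether m * A_sigmabar lies in the sublattice Z A_sigma
    return all(x.is_integer for x in Asig.inv() * (Abar * m))

for k in (0, 1):
    print(k, [lam for lam in range(10) if in_ZA_sigma(lam - k)])
# output: 0 [0, 2, 4, 6, 8] and 1 [1, 3, 5, 7, 9], i.e. even and odd numbers
\end{verbatim}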
Up to now, $\Gamma$-series were only treated as formal series. Therefore, we want to fill this gap and examine the convergence of $\Gamma$-series. We will show in the following that there is a non-vanishing convergence radius $R\in\mathbb R^r_{>0}$, such that \cref{eq:GammaPowerSeriesVarphi} converges absolutely for $|y_j|< R_j$ with $j=1,\ldots,r$. For this purpose we have to estimate the summands of $\Gamma$-series. As an application of Stirling's formula, one can state the following lemma.
\begin{lemma}[Bounds of $\Gamma$-functions (similar to \cite{StienstraGKZHypergeometricStructures2005})] \label{lem:gammaest}
For every $C\in\mathbb C$ there are constants $\kappa,R\in\mathbb R_{>0}$ independent of $M$, such that
\begin{align}
\frac{1}{|\Gamma (C+M)|} \leq \kappa R^{|M|} |M|^{-M} \label{eq:gammest}
\end{align}
for all $M\in\mathbb Z$.
\end{lemma}
\begin{proof}
Firstly, consider the non-integer case $C\notin \mathbb{Z}$. For $M>0$ we have
\begin{align}
|\Gamma(C+M)| &= |\Gamma(C)| \prod_{j=0}^{M-1} |C+j| \geq |\Gamma(C)| \prod_{j=0}^{M-1} \big| |C|- j\big| \nonumber\\
&= |\Gamma(C)| \prod_{j=1}^M j \left|\frac{|C|-j+1}{j}\right| \geq M! \ Q^M |\Gamma(C)|
\end{align}
where $Q=\min_{j\geq 1} \left|\frac{|C|-j+1}{j}\right| > 0$ and we used a variation of the triangle inequality $|a+b| \geq \big| |a| - |b|\big|$ (also known as the reverse triangle inequality). With Stirling's approximation one obtains further
\begin{align}
|\Gamma(C+M)| &\geq |\Gamma(C)| \sqrt{2\pi} Q^M M^{M+\frac{1}{2}} e^{-M} \geq |\Gamma(C)| \sqrt{2\pi} \left(\frac{Q}{e}\right)^M M^M \point
\end{align}
In contrast, for $M<0$ using the triangle inequality $|a-b| \leq |a| + |b|$ we have
\begin{align}
|\Gamma(C+M)| &= |\Gamma(C)| \prod_{j=1}^{|M|} \left|\frac{1}{C-j}\right| \geq |\Gamma(C)| \prod_{j=1}^{|M|} \left|\frac{1}{|C|+j}\right| \geq |\Gamma(C)| \prod_{j=1}^{|M|} \left|\frac{1}{|C|+|M|}\right| \nonumber\\
&= \left(1+\frac{|C|}{|M|}\right)^{-|M|} |M|^{M} |\Gamma(C)| \geq |\Gamma(C)| \left(1+|C|\right)^{-|M|} |M|^{M} \point
\end{align}
By setting $\kappa = |\Gamma(C)|^{-1}$ and $R=\max \left( 1+|C|,e Q^{-1}\right)$ one can combine both cases to equation \cref{eq:gammest}. The case $M=0$ is trivially satisfied, where we set $0^0:=1$.
Consider now the case where $C\in\mathbb Z$. If $C+M\leq 0$ the $\Gamma$-function has a pole and the lemma is satisfied automatically. For $C+M\geq 1$ we have $\Gamma(C+M) = \prod_{j=0}^{C+M-2} (j+1) \geq \prod_{j=0}^{C+M-2} \left(j+\frac{1}{2}\right) = \frac{\Gamma(C+M-\frac{1}{2})}{\Gamma(\frac{1}{2})} = \frac{1}{\sqrt \pi} \Gamma(C+M-\frac{1}{2})$ which reduces to the non-integer case with $C^\prime=C-\frac{1}{2}\notin \mathbb Z$.
\end{proof}
To apply this estimate to a product of $\Gamma$-functions the following lemma is helpful.
\begin{lemma} \label{lem:apowera}
Let $a_1,\ldots,a_N\in\mathbb{R}_{>0}$ be a set of positive real numbers. Then it holds
\begin{align}
\left(\sum_{i=1}^N a_i\right)^{\sum_{i=1}^N a_i} \geq \prod_{i=1}^N a_i^{a_i} \geq \left(\frac{1}{N} \sum_{i=1}^N a_i\right)^{\sum_{i=1}^N a_i} \point
\end{align}
\end{lemma}
\begin{proof}
The left inequality is trivially true. The right inequality will be proven by considering different cases of $N$. For $N=2$ we may assume without loss of generality $\frac{a_1}{a_2} =: \rho \geq 1$. The inequality is then equivalent to the Bernoulli inequality
\begin{align}
a_1^{a_1} a_2^{a_2} \geq \left(\frac{a_1+a_2}{2}\right)^{a_1+a_2} \Leftrightarrow \left(1+\frac{\rho-1}{\rho+1}\right)^{\rho+1} \geq \rho \point
\end{align}
For $N=2^j$ with $j\in\mathbb N$ the lemma can be reduced to the case $N=2$ by iterated use. All other cases can be reduced to a $2^j$-case by adding copies of the mean value $\mu := \frac{1}{N} \sum_{i=1}^N a_i$
\begin{align}
\left(\mu^\mu\right)^{2^j-N} \prod_{i=1}^N a_i^{a_i} &\geq \left(\frac{\sum_{i=1}^N a_i + (2^j-N)\mu}{2^j}\right)^{\sum_{i=1}^N a_i + (2^j-N)\mu} \nonumber \\
&= \left(\frac{\sum_{i=1}^N a_i}{N}\right)^{\sum_{i=1}^N a_i} \mu^{(2^j-N)\mu}
\end{align}
with $2^j-N>0$.
\end{proof}
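The inequality can also be quickly checked numerically on random samples, e.g.\ with the following plain \softwareName{Python} sketch (purely illustrative):
\begin{verbatim}
import math, random

random.seed(1)
for _ in range(1000):
    a = [random.uniform(0.01, 10.0) for _ in range(random.randint(1, 8))]
    s, n = sum(a), len(a)
    middle = math.prod(x**x for x in a)
    assert s**s >= middle >= (s/n)**s
print("inequality verified on all random samples")
\end{verbatim}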
Combining the estimations in \cref{lem:gammaest} and \cref{lem:apowera}, we can show the convergence of $\Gamma$-series by a convergent majorant.
\begin{theorem}[Convergence of $\Gamma$-series \cite{GelfandHolonomicSystemsEquations1988, GelfandHypergeometricFunctionsToral1989,StienstraGKZHypergeometricStructures2005}] \label{thm:GammaConverge}
There is always a positive real tuple $R\in\mathbb R_{>0}^r$, such that the series $\varphi_{\sigma,k}$ \cref{eq:GammaPowerSeriesVarphi} converges absolutely for any $y\in\mathbb C^r$ with $|y_j| < R_j$ for $j=1,\ldots,r$.
\end{theorem}
\begin{proof}
The series in equation \cref{eq:GammaPowerSeriesVarphi} can be written in the form
\begin{align}
\sum_{\lambda\in\Lambda_k} \frac{y^\lambda}{\Gamma(\mathbf 1_N + C + \mathcal{B}\lambda)}
\end{align}
where $\mathcal{B} =\begin{psmallmatrix} -\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} \\ \mathbbm{1}_r \end{psmallmatrix}\in\mathbb Q^{N\times r}$ is a Gale dual of $\mathcal{A}$, $y$ are the effective variables from \cref{eq:effectiveVarsGammaSeries} and $C = (-\mathcal{A}_\sigma^{-1}\beta,\mathbf 0_r)$. From the definition of $\Lambda_k$ we see that $\mathcal{B} \lambda\in\mathbb Z^N$ for all $\lambda\in\Lambda_k$. Thus, one can estimate by \cref{lem:gammaest}
\begin{align}
\left| \prod_{i=1}^N \frac{1}{\Gamma(1 + C_i + \sum_{j=1}^r b_{ij} \lambda_j)}\right| \leq \kappa \prod_{i=1}^N R_i^{\left|\sum_{j=1}^r b_{ij}\lambda_j\right|} \left|\sum_{j=1}^r b_{ij}\lambda_j\right|^{-\sum_{j=1}^r b_{ij} \lambda_j} \comma
\end{align}
where $b_{ij}$ are the components of $\mathcal{B}$. Furthermore, by \cref{lem:HomogenityRelationsForAa} we also have $\sum_{i=1}^N b_{ij} = 0$, which implies $D:=\sum_{b_{ij}>0} b_{ij}\lambda_j = - \sum_{b_{ij}<0} b_{ij}\lambda_j = \frac{1}{2} \sum_{i,j} \left| b_{ij}\lambda_j\right|$. With \cref{lem:apowera} we can continue to estimate
\begin{align}
\prod_{i=1}^N \left|\sum_{j=1}^r b_{ij}\lambda_j\right|^{-\sum_{j=1}^r b_{ij} \lambda_j} \leq D^D \ N_+^D D^{-D} = N_+^D
\end{align}
by splitting the product into the $N_+$ factors where $\sum_{j=1}^r b_{ij}\lambda_j>0$ is positive and the $N_-$ factors where $\sum_{j=1}^r b_{ij}\lambda_j<0$ is negative. With $R_{\textrm{max}}=\max_i (R_i)$ we obtain
\begin{align}
\left| \prod_{i=1}^N \frac{1}{\Gamma(1 + C_i+\sum_{j=1}^r b_{ij} \lambda_j)}\right| \leq \kappa N_+^D R_{\textrm{max}}^{2D} \point
\end{align}
Thus, the $\Gamma$-series will be bounded by a geometric series and there is always a non-vanishing region of absolute convergence.
\end{proof}
Applying this result to the variables $z_1,\ldots,z_N$, a $\Gamma$-series $\varphi_{\sigma,k}$ converges absolutely if those variables satisfy
\begin{align}
\ln |y_j| = \sum_{i=1}^N b_{ij} \ln |z_i| < \rho_j \label{eq:convergenceConditionLog}
\end{align}
for $j=1,\ldots,r$ where $\rho_j\in\mathbb R$ are real numbers and $b_{ij}$ denote the elements of the Gale dual $\mathcal{B} = \mathcal{B}(\sigma) =\begin{psmallmatrix} -\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} \\ \mathbbm{1}_r \end{psmallmatrix}$. Let
\begin{align}
C(\sigma) = \left\{ \omega\in\mathbb R^N \,\rvert\, \omega\mathcal{B}(\sigma) \geq 0 \right\}
\end{align}
be the cone generated by the Gale dual $\mathcal{B}(\sigma)$. As assumed, $\mathcal{A}$ is an acyclic vector configuration and therefore $\mathcal{B}$ is totally cyclic (see \cref{ssec:GaleDuality}). Thus, there is a vector $p\in\mathbb R^N$ such that $- p \mathcal{B}(\sigma) = \rho$, and we can reformulate the convergence condition \cref{eq:convergenceConditionLog} into points of a translated cone
\begin{align}
\left(-\ln|z_1|,\ldots,-\ln|z_N|\right) \in \operatorname{relint}\!\left(C(\sigma)\right) + p \point \label{eq:logMapTranslatedCone}
\end{align}
Comparing the cone $C(\sigma)$ with the results from \cref{eq:subdivisionCondGale} and \cref{eq:subdivisionCondGaleBeta} we obtain $\operatorname{relint}\!\left(C(\sigma)\right) = \{\omega\in\mathbb R^N \,\rvert\, \sigma\in \mathcal{S}(\mathcal{A},\omega)\}$. Therefore, for any regular triangulation $\mathcal{T} = \mathcal{S}(\mathcal{A},\omega)$ of $\mathcal{A}$ generated by a height $\omega\in\mathbb R^N$ we consider the intersection $C(\mathcal{T}) := \cap_{\sigma\in\mathcal{T}} C(\sigma)$ which is nothing else than a secondary cone. Hence, $C(\mathcal{T})$ is full dimensional \cite{DeLoeraTriangulations2010} and thus also the intersection of translated cones is full dimensional. Therefore, there is a common convergence region of all $\{\varphi_{\sigma,k}\}_{\sigma\in\mathcal{T}}$. Note that regularity of triangulations is necessary, as otherwise $C(\mathcal{T})$ will not be full dimensional \cite{DeLoeraTriangulations2010}. Furthermore, all those series will be linearly independent. Recall from \cref{thm:HolRankAHyp} that the holonomic rank for generic $\beta$ is given by the volume of $\operatorname{Conv}(A)$. Hence, by collecting all $\Gamma$-series $\varphi_{\sigma,k}$ corresponding to maximal cells $\sigma$ of a triangulation $\mathcal{T}$ and varying $k$ according to \cref{eq:representativesK} we will obtain exactly $\operatorname{vol}(\operatorname{Conv}(A))$ independent series $\varphi_{\sigma,k}$. We will combine all these results in the following theorem.
\begin{theorem}[Solution space with $\Gamma$-series \cite{GelfandHolonomicSystemsEquations1988, GelfandHypergeometricFunctionsToral1989, GelfandGeneralHypergeometricSystems1992}] \label{thm:SolutionSpaceGammaSeries}
Let $\mathcal{T}$ be a regular triangulation of the vector configuration $\mathcal{A}\subset\mathbb Z^{n+1}$ and $\beta\in\mathbb C^{n+1}$ very generic with respect to every $\sigma\in \widehat{\mathcal T}$, where $\widehat{\mathcal T}$ are the maximal cells of $\mathcal{T}$. Further, let $\gls{Ksigma} = \left\{ k^{(1)},\ldots, k^{(\operatorname{vol}(\operatorname{Conv}(\mathcal{A}_\sigma)))} \right\}\subset \mathbb N_0^r$ be a set of representatives of $\bigslant{\mathbb Z^{n+1}}{\mathbb Z\mathcal{A}_\sigma}$ for any $\sigma\in\widehat{\mathcal T}$ according to \cref{eq:representativesK}. Then the set of power series
\begin{align}
\left\{ \left\{ \varphi_{\sigma,k} \right\}_{k\in K_\sigma} \right\}_{\sigma\in\widehat{\mathcal T}}
\end{align}
is a basis of the solution space $\operatorname{Sol}\!\left(H_\mathcal{A}(\beta)\right)$ and all those power series have a common, non-empty domain of absolute convergence.
\end{theorem}
Above we only showed that a non-empty, common domain of convergence exists. The investigation of the exact shape of this domain starts from \cref{eq:logMapTranslatedCone} and leads to the introduction of the so-called \textit{amoeba}. For further reading about amoebas we refer to \cite[ch. 6.1]{GelfandDiscriminantsResultantsMultidimensional1994}, \cite{PassareSingularitiesHypergeometricFunctions2005, ForsbergLaurentDeterminantsArrangements2000}. See also \cref{fig:ComplexLogarithmMaps}.\bigskip
For very generic $\beta\in\mathbb C^{n+1}$ we can normalize the $\Gamma$-series\glsadd{norGammaSer}\glsunset{norGammaSer}
\begin{align}
\phi_{\sigma,k} (z) := \Gamma(-\mathcal{A}_\sigma^{-1}\beta+1)\ \varphi_{\sigma,k}(z) = z_\sigma^{-\mathcal{A}_\sigma^{-1}\beta} \sum_{\lambda\in\Lambda_k} \frac{z_\sigma^{-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda} z_{\bar\sigma}^\lambda}{\lambda!\ (1 -\mathcal{A}_\sigma^{-1}\beta)_{-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} \lambda}}
\end{align}
such that the first term in the series is equal to $1$. Note that this definition of $\Gamma$-series agrees with the definition in \cite{SaitoGrobnerDeformationsHypergeometric2000}.
In particular, for unimodular triangulations ($|\det \mathcal{A}_\sigma| = 1$) we can simplify the $\Gamma$-series further. Note that in this case $\Lambda_k = \mathbb N_0^r$ for any $k\in\mathbb Z^r$ since $\mathcal{A}_\sigma^{-1}\in\mathbb Z^{(n+1)\times (n+1)}$, and we obtain\glsadd{norGammaSer}\glsunset{norGammaSer}
\begin{align}
\phi_{\sigma} (z) = z_\sigma^{-\mathcal{A}_\sigma^{-1}\beta} \sum_{\lambda\in\mathbb N_0^r} \frac{(\mathcal{A}_\sigma^{-1}\beta)_{\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda}}{\lambda!} \frac{z_{\bar\sigma}^\lambda}{(-z_\sigma)^{\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda} }
\end{align}
by the properties of Pochhammer symbols. \bigskip
A slight variation of $\Gamma$-series are the \textit{Fourier $\Gamma$-series} \cite{StienstraGKZHypergeometricStructures2005} where we replace the variables $z_j \mapsto e^{2\pi i w_j}$ for $j=1,\ldots,N$. Those Fourier $\Gamma$-series are more flexible than the original definition \cref{eq:GammaSeriesOriginalDef}. This replacement will simplify the convergence criterion \cref{eq:convergenceConditionLog} and also considerations about the monodromy of $\mathcal{A}$-hypergeometric functions are more accessible. In this context we refer also to the coamoeba (see \cref{sec:Coamoebas}), which adopts the spirit of this idea.
\subsection{Singular locus of \texorpdfstring{$\mathcal{A}$}{A}-hypergeometric systems} \label{ssec:SingularLocusAHyp}
In the previous section we constructed a basis of the solution space in terms of power series. However, we restricted the domain of these functions in order to ensure the convergence of these series. In this section we will ask for the analytic continuation of solutions of $H_\mathcal{A}(\beta)$, i.e.\ we will look for a maximal domain. By the Cauchy-Kovalevskaya-Kashiwara theorem (\cref{thm:CKKT}) we have to consider the singular locus of $H_\mathcal{A}(\beta)$ for the analytic continuation. Let us first remark that we can restrict ourselves to the codimension $1$ part of the singular locus, since all singularities in higher codimensions are removable due to Riemann's second removable singularity theorem \cite{KaupHolomorphicFunctionsSeveral1983, BerkeschZamaereSingularitiesHolonomicityBinomial2014}.
This section will also establish the link between $\mathcal{A}$-hypergeometric systems and $A$-discriminants. This connection was developed in a series of articles by Gelfand, Kapranov and Zelevinsky, mainly in \cite{GelfandEquationsHypergeometricType1988, GelfandAdiscriminantsCayleyKoszulComplexes1990, GelfandHypergeometricFunctionsToric1991, GelfandDiscriminantsResultantsMultidimensional1994}. A major part of this correspondence was also shown in \cite{AdolphsonHypergeometricFunctionsRings1994}, on which the following deduction is mostly based. A generalization of this relation can be found in \cite{BerkeschZamaereSingularitiesHolonomicityBinomial2014} and \cite{SchulzeIrregularityHypergeometricSystems2006}.
Recall that for $\mathcal{A}$-hypergeometric systems we always assume that $\mathcal{A}$ describes points lying on a common hyperplane off the origin (see \cref{ssec:BasicAhypergeometricSystems}). This implies a certain regularity of $H_\mathcal{A}(\beta)$, i.e.\ local solutions have at worst logarithmic singularities near the singular locus \cite{CattaniThreeLecturesHypergeometric2006}. This is a generalization of the behaviour of regular singular points in ordinary differential equations (see also \cref{sec:holonomicDmodules}). \bigskip
As a first step we want to establish a connection between the faces of $\operatorname{Conv}(\mathcal{A})$ and the characteristic variety of $H_\mathcal{A}(\beta)$. The following two lemmata are inspired by \cite{AdolphsonHypergeometricFunctionsRings1994} with slight adjustments.
\begin{lemma} \label{lem:face-characteristic}
For every point $(\hat z,\hat \xi)\in\operatorname{char}\!\left(H_\mathcal{A}(\beta)\right)$ of the characteristic variety, there exists a unique face $\tau\subseteq\operatorname{Conv}(\mathcal{A})$ such that $\hat \xi_j\neq 0$ if and only if $j\in\tau$.
\end{lemma}
\begin{proof}
The case $\hat\xi = (0,\ldots,0)$ is trivially satisfied by $\tau=\emptyset$, and we will exclude this case in the following. Denote by $\emptyset\neq J\subseteq\{1,\ldots,N\}$ the index set for which $\hat\xi_j \neq 0$ for all $j\in J$ and let $\tau$ be the carrier of $J$, i.e.\ the smallest face of $\operatorname{Conv}(\mathcal{A})$ containing all points with labels in $J$. Thus, we want to show $J=\tau$. Let us first show that $J$ spans affinely the supporting hyperplane of $\tau$, i.e.\ that $\operatorname{Conv}(J)$ and $\tau$ have the same dimension\footnote{One can see easily, that $\tau\subseteq\operatorname{Aff}(J)$ follows from $\dim(\operatorname{Conv}(J))=\dim(\tau)$. Note first, that $J\subseteq\tau$ implies also $\operatorname{Aff}(J)\subseteq\operatorname{Aff}(\tau)$. When $\dim(\operatorname{Conv}(J))=k$, there are $k+1$ affinely independent points in $\operatorname{Conv}(J)$ which span a basis of $\operatorname{Aff}(J)$. If $\tau\not\subseteq\operatorname{Aff}(J)$ there would be more than $k+1$ affinely independent points in $\tau$, which gives a contradiction.}.
Suppose that $\dim(\tau) > \dim (\operatorname{Conv}(J))$. Then we can find two points $\alpha,\beta\in\tau\setminus J$ with $\hat\xi_\alpha = \hat\xi_\beta=0$, such that the line segment from $\alpha$ to $\beta$ has an intersection point with $\operatorname{Conv}(J)$. Thus, there exist a rational number $0<\gamma<1$ and rational numbers $\lambda_j\geq 0$ describing this intersection point
\begin{align}
\gamma a^{(\alpha)} + (1-\gamma) a^{(\beta)} = \sum_{j\in J} \lambda_j a^{(j)} \qquad \text{with} \quad \sum_{j\in J} \lambda_j = 1 \point
\end{align}
Denote by $m\in\mathbb Z_{>0}$ the least common multiple of all denominators of $\gamma$ and $\lambda_j$ for all $j\in J$. Then we can generate an element in $\mathbb L$ or in $H_\mathcal{A}(\beta)$, respectively
\begin{align}
\square = \partial_\alpha^{m\gamma} \partial_\beta^{m(1-\gamma)} - \prod_{j\in J} \partial_j^{m\lambda_j} \in H_\mathcal{A}(\beta) \point
\end{align}
Since its principal symbol
\begin{align}
\operatorname{in}_{(\mathbf 0,\mathbf 1)} (\square) = \hat\xi_\alpha^{m\gamma} \hat\xi_\beta^{m(1-\gamma)} - \prod_{j\in J} \hat\xi_j^{m\lambda_j}
\end{align}
has to vanish for all values $(\hat z,\hat \xi)\in\operatorname{char}(H_\mathcal{A}(\beta))$ we get a contradiction, since $\hat\xi_\alpha=\hat\xi_\beta=0$ and $\hat\xi_j\neq 0$ for all $j\in J$.
Hence, $\operatorname{Conv}(J)$ and $\tau$ have the same dimension, and thus they share also the same supporting hyperplane. The second step will be to show equality $J=\tau$. Let $k\in\tau$ be an arbitrary point of the face $\tau$. We then have to prove that $\hat\xi_k\neq 0$. Since $\tau$ lies in the affine span of $J$, we will find some rational numbers $\lambda_j$ such that
\begin{align}
a^{(k)} = \sum_{j\in J} \lambda_j a^{(j)} = \sum_{\substack{j\in J \\ \lambda_j<0}} \lambda_j a^{(j)} + \sum_{\substack{j\in J \\ \lambda_j > 0}} \lambda_j a^{(j)} \qquad \text{with} \quad \sum_{j\in J} \lambda_j = 1 \point
\end{align}
Again, let $m\in\mathbb Z_{>0}$ be the least common multiple of denominators of all $\lambda_j$ with $j\in J$, which will generate an element in $\mathbb L$, and we obtain
\begin{align}
\square = \partial_{k}^m \prod_{\substack{j\in J \\ \lambda_j<0}} \partial_j^{-m \lambda_j} - \prod_{\substack{j\in J \\ \lambda_j > 0}} \partial_j^{m\lambda_j} \in H_\mathcal{A}(\beta) \point
\end{align}
Both terms have the same order, since $1-\sum_{\lambda_j<0} \lambda_j = \sum_{\lambda_j > 0} \lambda_j$, which results in a principal symbol
\begin{align}
\operatorname{in}_{(\mathbf 0,\mathbf 1)}(\square) = \xi_{k}^m \prod_{\substack{j\in J \\ \lambda_j<0}} \xi_j^{-m \lambda_j} - \prod_{\substack{j\in J \\ \lambda_j > 0}} \xi_j^{m\lambda_j} \point
\end{align}
Thus, $\hat\xi_{k}^m \prod_{\lambda_j<0} \hat\xi_j^{-m \lambda_j} = \prod_{\lambda_j> 0} \hat\xi_j^{m\lambda_j}$. By assumption, $\hat\xi_{j}\neq 0$ for all $j\in J$, and therefore also $\hat\xi_{k}\neq 0$.
\end{proof}
In order to give a relation between $A$-discriminants and the characteristic varieties, we will associate to every finite subset $\mathcal{A}\subset\mathbb Z^{n+1}$ a multivariate polynomial
\begin{align}
f_z(x) = \sum_{a^{(j)}\in\mathcal{A}} z_j x^{a^{(j)}} \in \mathbb C[x_0^{\pm 1},\ldots,x_n^{\pm 1}] \point
\end{align}
Recall, that for every face $\tau\subseteq\operatorname{Conv}(\mathcal{A})$ we understand by\glsadd{truncpoly}
\begin{align}
f_{\tau,z}(x) = \sum_{j\in\tau} z_j x^{a^{(j)}} \in \mathbb C[x_0^{\pm 1},\ldots,x_n^{\pm 1}]
\end{align}
the truncated polynomial with respect to the face $\tau$.
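As a minimal illustration (with a configuration chosen purely for this purpose), take $n=1$ and $\mathcal{A} = \{(1,0),(1,1),(1,2)\}\subset\mathbb Z^2$, such that
\begin{align}
f_z(x) = z_1 x_0 + z_2 x_0 x_1 + z_3 x_0 x_1^2 \point
\end{align}
The polytope $\operatorname{Conv}(\mathcal{A})$ is a line segment, and its proper, non-empty faces are the two vertices. The corresponding truncated polynomials are $f_{\tau,z} = z_1 x_0$ and $f_{\tau,z} = z_3 x_0 x_1^2$, respectively, while the truncation with respect to the full polytope returns $f_z$ itself.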
\begin{lemma} \label{lem:characteristic-disc}
Let $\mathcal{A}\subset\mathbb Z^{n+1}$ be a vector configuration describing points on a hyperplane off the origin and let $\emptyset\neq\tau\subseteq\operatorname{Conv}(\mathcal{A})$ be an arbitrary face. Then the following two statements are equivalent:
\begin{enumerate}[i)]
\item the point $(\hat z,\hat \xi)\in\operatorname{char}\!\left(H_\mathcal{A}(\beta)\right)$ is a point of the characteristic variety and $\tau$ is the face of $\operatorname{Conv}(\mathcal{A})$ corresponding to this point according to \cref{lem:face-characteristic}, i.e.\ $\hat \xi_j \neq 0$ if and only if $j\in\tau$
\item the polynomials $\frac{\partial f_{\tau,\hat z}}{\partial x_0},\ldots,\frac{\partial f_{\tau,\hat z}}{\partial x_n}$ have a common zero in $x\in (\mathbb C^\star)^{n+1}$.
\end{enumerate}
\end{lemma}
\begin{proof}
``$ii)\Rightarrow i)$'': Let $\hat x\in (\mathbb C^\star)^{n+1}$ be a common solution of $\frac{\partial f_{\tau,\hat z}}{\partial x_0}=\ldots=\frac{\partial f_{\tau,\hat z}}{\partial x_n}=0$ which implies
\begin{align}
\hat x_i \frac{\partial f_{\tau,\hat z}}{\partial x_i}(\hat x) = \sum_{j\in \tau} a_i^{(j)} \hat z_j \hat x^{a^{(j)}} = 0 \point
\end{align}
Consider the principal symbol of the homogeneous operators $E_i(\beta)\in H_\mathcal{A}(\beta)$ from \cref{eq:homogeneousOperators}. By setting $\hat\xi_j=0$ for all $j\notin\tau$ and $\hat\xi_j = \hat x^{a^{(j)}}$ for all $j\in\tau$ we obtain
\begin{align}
\operatorname{in}_{(\mathbf 0,\mathbf 1)}\!\left(E_i(\beta)\right)(\hat z,\hat\xi) = \sum_{a^{(j)}\in\mathcal{A}} a_i^{(j)} \hat z_j \hat\xi_j = 0\point
\end{align}
It remains to prove that $\operatorname{in}_{(\mathbf 0,\mathbf 1)}(\square_l) (\hat z,\hat \xi) = 0$ for all $l\in\mathbb L$, where $\square_l$ was defined in \cref{eq:toricOperators}. Since all points described by $\mathcal{A}$ lie on a hyperplane off the origin, all monomials in $\square_l$ have the same order. Therefore, we have to show
\begin{align}
\prod_{l_j>0} \hat\xi_j^{l_j} = \prod_{l_j<0} \hat\xi_j^{-l_j} \quad\text{for all}\quad l\in\mathbb L \point \label{eq:xi=xi}
\end{align}
According to \cref{lem:face-kernel}, if $\tau\subseteq\operatorname{Conv}(\mathcal{A})$ is a face\footnote{Since all points of $\mathcal{A}$ lie on a common hyperplane, the faces of $\operatorname{Conv}(A)$ and $\operatorname{Conv}(\mathcal{A})$ are in one-to-one correspondence, where $A$ is a dehomogenization of $\mathcal{A}$.}, then $l_{\bar\tau} := (l_j)_{j\notin\tau}$ is either zero, $l_{\bar\tau} = 0$, or it contains positive as well as negative components (or it is empty if $\tau=\operatorname{Conv}(\mathcal{A})$). Thus, in the first case we insert $\hat\xi_j=\hat x^{a^{(j)}}$ for all $j\in\tau$
\begin{align}
\hat x^{\sum_{l_j>0} l_j a^{(j)}} = \hat x^{-\sum_{l_j<0} l_j a^{(j)}}
\end{align}
which is true, since all $l\in\mathbb L$ satisfy $\sum_j l_j a^{(j)} = \sum_{l_j>0} l_j a^{(j)} + \sum_{l_j<0} l_j a^{(j)} = 0$. In the second case, there are elements with $l_j<0$ as well as with $l_j>0$ corresponding to points outside of $\tau$ and equation \cref{eq:xi=xi} is trivially satisfied by $0=0$.\bigskip
``$i)\Rightarrow ii)$'': If $(\hat z,\hat\xi)\in\operatorname{char}\!\left(H_\mathcal{A}(\beta)\right)$, then
\begin{align}
\operatorname{in}_{(\mathbf 0,\mathbf 1)}\!\left(E_i(\beta)\right)(\hat z,\hat\xi) = \sum_{a^{(j)}\in\mathcal{A}} a_i^{(j)} \hat z_j \hat\xi_j = \sum_{j\in\tau} a_i^{(j)} \hat z_j \hat\xi_j = 0 \point
\end{align}
Thus, $\frac{\partial f_{\tau,\hat z}}{\partial x_0},\ldots,\frac{\partial f_{\tau,\hat z}}{\partial x_n}$ have a common zero in $\hat x\in(\mathbb C^*)^{n+1}$ if the system of equations
\begin{align}
\hat x^{a^{(j)}} = \hat \xi_j \qquad\text{for all}\quad j\in\tau \label{eq:xxi}
\end{align}
has a solution in $\hat x\in(\mathbb C^*)^{n+1}$. Hence, we have to show that it is impossible to construct a contradicting equation by combining the equations of \cref{eq:xxi}. In other words for all integers $l_j\in \mathbb Z$ satisfying
\begin{align}
\sum_{j\in\tau} l_j a^{(j)} = 0 \label{eq:lcomb}
\end{align}
we have to show that $\prod_{j\in\tau} (\hat\xi_j)^{l_j} = 1$. Note, that \cref{eq:lcomb} directly gives rise to an element in $\mathbb L$, by setting the remaining $l_j=0$ for all $j\notin\tau$. Therefore, we can construct
\begin{align}
\square = \prod_{l_j>0} \partial_j^{l_j} - \prod_{l_j<0} \partial_j^{-l_j} \in H_\mathcal{A}(\beta) \point \label{eq:squarel}
\end{align}
Again, by the fact that all points described by $\mathcal{A}$ lie on a common hyperplane off the origin, both terms in \cref{eq:squarel} have the same order. Thus,
\begin{align}
\operatorname{in}_{(\mathbf 0,\mathbf 1)}(\square) = \prod_{l_j>0} \xi_j^{l_j} - \prod_{l_j<0} \xi_j^{-l_j} \comma
\end{align}
which completes the proof, since $\operatorname{in}_{(\mathbf 0,\mathbf 1)}(\square)(\hat z,\hat \xi) = 0$ together with $\hat\xi_j\neq 0$ for all $j\in\tau$ implies $\prod_{j\in\tau} \hat\xi_j^{l_j} = 1$.
\end{proof}
Combining \cref{lem:characteristic-disc} with the results from \cref{sec:ADiscriminantsReultantsPrincipalADets} we can directly conclude the following theorem.
\begin{theorem}[Singular locus of $\mathcal{A}$-hypergeometric systems \cite{GelfandAdiscriminantsCayleyKoszulComplexes1990, AdolphsonHypergeometricFunctionsRings1994}] \label{thm:SingularLocusPrincipalAdet}
Let $\mathcal{A}\subset\mathbb Z^{n+1}$ be a finite subset, which spans $\mathbb R^{n+1}$ as a vector space and describes points on a common hyperplane off the origin in $\mathbb R^{n+1}$. Furthermore, let $f=\sum_{a^{(j)}\in \mathcal{A}} z_j x^{a^{(j)}}\in\mathbb C[x_0^{\pm 1},\ldots,x_n^{\pm 1}]$ be the corresponding polynomial to $\mathcal{A}$. Then we have the equality
\begin{align}
\operatorname{Sing}\!\left(H_\mathcal{A}(\beta)\right) = \Var\!\left(E_\mathcal{A}(f)\right) \point
\end{align}
\end{theorem}
\begin{proof}
Let us assume for the moment that $\mathcal{A}$ has the form of a homogenized point configuration, i.e.\ $\mathcal{A}=\{(1,a^{(1)}),\ldots,(1,a^{(N)})\}$ where $A=\{a^{(1)},\ldots,a^{(N)}\}\subset\mathbb Z^n$. Then statement ii) in \cref{lem:characteristic-disc} is equivalent to the existence of a common zero of $f_{\tau,\hat z}, \pd{f_{\tau,\hat z}}{x_1},\ldots,\pd{f_{\tau,\hat z}}{x_n}$ in $x\in (\mathbb C^*)^n$. Thus, according to its definition \cref{eq:SingLocusDef}, the singular locus $\operatorname{Sing}\!\left(H_\mathcal{A}(\beta)\right)$ is then given by the Zariski closure of
\begin{align}
\bigcup_{\emptyset\neq \tau\subseteq\operatorname{Conv}(A)} \left\{ \hat z \in\mathbb C^N \,\rvert\, \Var\!\left(f_{\tau,\hat z},\frac{\partial f_{\tau,\hat z}}{\partial x_1},\ldots,\frac{\partial f_{\tau,\hat z}}{\partial x_n}\right) \neq\emptyset \text{ in } (\mathbb C^*)^n \right\} \point
\end{align}
But this is nothing else than the union of $A$-discriminants
\begin{align}
\operatorname{Sing}(H_\mathcal{A}(\beta)) = \bigcup_{\emptyset\neq \tau\subseteq\operatorname{Conv}(A)} \Var\!\left(\Delta_{A\cap\tau} (f_\tau)\right) \point
\end{align}
The application of \cref{thm:pAdet-factorization} concludes the proof for $\mathcal{A}=\{(1,a^{(1)}),\ldots,(1,a^{(N)})\}$. If $\mathcal{A}$ is not of that form we will always find a non-singular matrix $D$, such that $D\mathcal{A}$ will have this form. However, the $\mathcal{A}$-hypergeometric system as well as the $A$-discriminants are independent of such a transformation and thus the theorem applies also to all the other configurations $\mathcal{A}$.
\end{proof}
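To give a simple explicit case (the quadratic configuration used above for illustration): for $\mathcal{A} = \{(1,0),(1,1),(1,2)\}$, i.e.\ the quadratic $f = z_1 + z_2 x_1 + z_3 x_1^2$ after dehomogenization, the principal $A$-determinant is known to be $E_\mathcal{A}(f) = z_1 z_3 \!\left(z_2^2 - 4 z_1 z_3\right)$ up to an irrelevant constant factor. Hence, \cref{thm:SingularLocusPrincipalAdet} predicts
\begin{align}
\operatorname{Sing}\!\left(H_\mathcal{A}(\beta)\right) = \Var\!\left(z_1 z_3 \left(z_2^2 - 4 z_1 z_3\right)\right) \comma
\end{align}
where $z_2^2-4z_1z_3$ is the discriminant of the quadratic and the monomial prefactor $z_1 z_3$ collects the contributions of the proper faces in the factorization of \cref{thm:pAdet-factorization}.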
Thus, we have characterized the singular locus of $\mathcal{A}$-hypergeometric systems, which describes the possible singularities of the $\mathcal{A}$-hypergeometric functions, by the principal $A$-determinant. In general, it is a hard problem to calculate these principal $A$-determinants. However, by the Horn-Kapranov-parameterization we have a way to describe these possible singularities very efficiently in an indirect manner.
\chapter{Feynman integrals}
\label{ch:FeynmanIntegrals}
\label{sec:FeynmanIntegralsIntro}
After introducing the mathematical basis in the previous chapter, we now come to the actual main object of this investigation: the Feynman integral. To contextualize Feynman integrals, we will briefly sketch its origins in quantum field theory (QFT). For a detailed introduction to the topic of Feynman integrals we refer to \cite{WeinzierlFeynmanIntegrals2022, WeinzierlArtComputingLoop2006, SmirnovFeynmanIntegralCalculus2006, SmirnovAnalyticToolsFeynman2012, BognerMathematicalAspectsFeynman2009}.
The prototypical way to study quantum field theories experimentally is by scattering experiments, where particles are brought to collision at very high energies. Therefore, the scattering of particles is also inherent in the theoretical description of QFTs, and the probability for certain events in such a scattering process is described by an operator called the \textit{$S$-matrix}. By means of the LSZ-reduction formula, the calculation of $S$-matrix elements reduces to the calculation of \textit{Green's functions} or \textit{correlation functions} (see e.g.\ \cite{SchwartzQuantumFieldTheory2014}). Except for a very few toy models in low spacetime dimensions, the calculation of those Green's functions is only feasible by a perturbative approach. Thus, we will treat the Green's function as a formal power series in a coupling constant $g$, which describes the strength of the interaction.
It was already observed in the early days of QFT \cite{FeynmanSpaceTimeApproachQuantum1949} that the many terms arising in such a perturbation series can be represented and ordered by certain graphs\footnote{Richard Feynman introduced his graphs for the first time in 1948 during the Pocono conference about QED \cite{KaiserPhysicsFeynmanDiagrams2005}. Although Feynman graphs have structural similarities to diagrams of Wentzel \cite{WentzelSchwereElektronenUnd1938} and Stueckelberg \cite{StueckelbergRemarqueProposCreation1941}, their genuine connection to perturbation theory was fundamentally new.}. Depending on the given Lagrangian density $\mathcal L$ defining the QFT, those \textit{Feynman graphs} can consist of different types of edges and vertices. The map turning those graphs into algebraic expressions in the perturbation series is known as the \textit{Feynman rules}. As the Feynman rules for vertices usually contain the coupling constant $g$, more complex Feynman graphs stand for higher order terms in the perturbation series. Thus, to predict the outcome of a scattering process we calculate all possible Feynman graphs of a given theory up to a certain order. Thereby, Feynman graphs can be understood as a depiction of the possible ways in which particles can interact with each other.
It has been known for a long time \cite{DysonDivergencePerturbationTheory1952} that the perturbation series diverges for most QFTs. However, a finite number of terms of this series may still be a good approximation (in the sense of an asymptotic expansion). Indeed, the results obtained from this procedure are in surprisingly good agreement with experimental data. This overwhelming success of the predictions of QFT is also the reason why -- despite the mathematical and fundamental difficulties (e.g.\ Haag's theorem) -- it is worthwhile to further explore and develop QFT. \bigskip
Before introducing the language of graphs in more detail in the following \cref{sec:FeynmanGraphs}, we will summarize the most important characteristics of Feynman graphs. These graphs should be understood as networks, where we will assign a momentum flowing through every edge. By a momentum we mean a $d$-dimensional vector\glsadd{mink}\glsunset{mink} $\mink p = (p^0,\ldots, p^{d-1})$ in Minkowski space. Thus, the scalar product of two such momenta $\mink p$, $\mink q$ is given by
\begin{align}
\mink p \cdot \mink q = p^0 q^0 - p^1 q^1 - \ldots - p^{d-1} q^{d-1} \point
\end{align}
From these momenta every edge in the graph obtains an orientation. However, those orientations are not fixed and can be flipped by a change of the sign of the corresponding momentum. Furthermore, we will distinguish between internal and external edges. The \textit{external edges} are those, which are incident to a pendant vertex (i.e.\ a vertex of degree $1$) and are also known as \textit{legs} in the terminology of Feynman graphs. The momenta assigned to the legs are called \textit{external momenta}, and they are assumed to be the variables given by the experimental setting. For convenience, we will usually choose the external momenta to be incoming.
The internal edges are the essential part for a Feynman integral. Following the notion, that Feynman graphs represent possible ways of interaction of elementary particles, those internal edges are said to represent ``virtual particles'', as they carry a momentum which is off-shell, i.e.\ the assigned momentum does not satisfy the energy-momentum relation.
Depending on the considered theory, edges and vertices may also carry additional weights or colorings. These decorations of graphs constitute a further combinatorial difficulty but pose no general obstacle. However, we will not discuss those cases. \bigskip
As an example, we will state the Feynman rules for $\phi^4$-theory, one of the simplest possible theories, consisting of a single type of scalar particle with mass $m$ and the Lagrangian density $\mathcal L = \frac{1}{2} (\partial_\mu \phi)^2 - \frac{m^2}{2} \phi^2 - \frac{g}{4!} \phi^4$. For a summary of the Feynman rules appearing in the standard model, we refer to \cite{RomaoResourceSignsFeynman2012}, which also collects the different choices of signs and prefactors.
\begin{example}[Feynman rules of $\phi^4$-theory in momentum space] \label{ex:phi4theory}
\hspace{0cm}
\begin{enumerate}[I)]
\item Obey momentum conservation at every vertex (except for the pendant vertices). This implies also an overall momentum conservation of the external momenta.
\item for every internal edge:
\begin{tikzpicture}
\coordinate[draw, shape=circle, fill=black, scale=.5] (A) at (0,0);
\coordinate[draw, shape=circle, fill=black, scale=.5] (B) at (3,0);
\coordinate (a) at (1,0.3); \coordinate (b) at (2,0.3);
\draw[thick] (A) -- (B);
\draw[thick,->] (a) -- node [above] {$\mink q$} (b);
\end{tikzpicture}
${\displaystyle \qquad \longmapsto\quad\frac{-i}{- \mink q^2+m^2}}$
\item for every vertex: \hspace{1cm} \vcenteredhbox{\begin{tikzpicture}[scale=0.5]
\coordinate (A) at (-1,-1);
\coordinate (B) at (1,-1);
\coordinate (C) at (1,1);
\coordinate (D) at (-1,1);
\coordinate[draw, shape=circle, fill=black, scale=.5] (O) at (0,0);
\draw[thick] (A) -- (C); \draw[thick] (B) -- (D);
\end{tikzpicture}}
${\displaystyle \qquad \longmapsto\quad i g}$
\item integrate over every undetermined (internal) momentum $\mink k$: ${\displaystyle \int_{\mathbb R^d} \frac{\mathop{}\!\mathrm{d}^d \mink k}{(2\pi)^d}}$
\end{enumerate}
\end{example}
Hence, for the Green's function we will sum all possible Feynman graphs, which can be built from these rules weighted with an additional symmetry factor. However, we can make three major simplifications in this sum. First of all, we can omit all vacuum graphs, i.e.\ we neglect all those graphs, which have no legs, since their contribution can be comprised in a normalization factor. Second, we will exclude all disconnected graphs in our considerations. In general a disconnected graph would evaluate to the product of its components. However, the contribution of disconnected graphs to the perturbation series is usually disregarded in most QFTs by the cluster decomposition principle\footnote{The cluster decomposition principle is based on locality considerations, that assume for disconnected graphs to represent separated processes which do not influence each other. This is equivalent to claim that the $S$-matrix contains no singularities worse than poles and branch points besides one single overall momentum conservation $\delta$-distribution \cite{WeinbergQuantumTheoryFields1995}. However, it is by no means clear whether the cluster decomposition principle is always fulfilled (see also the discussion in \cite{SchwartzQuantumFieldTheory2014, WeinbergWhatQuantumField1997}). For example, the cluster decomposition principle could be violated by colour confinement in QCD \cite{LowdonConditionsViolationCluster2016}. Note that in the case of vacuum components, these can also be neglected by the aforementioned normalization factor.} \cite{WeinbergQuantumTheoryFields1995}. Third, we will restrict ourselves to the so-called \textit{amputated graphs} or \textit{truncated graphs}. Thus, we will not consider any graph, where a leg joins a subgraph of self-energy type (i.e.\ a subgraph having only two legs). Those cases are also ruled out by LSZ-reduction formula \cite{SchwartzQuantumFieldTheory2014, ItzyksonQuantumFieldTheory1980}.
The amputated graphs are closely related to the so-called \textit{$1$-particle irreducible} graphs ($1$PI) or \textit{bridgefree} graphs, i.e.\ graphs which remain connected when cutting an arbitrary edge. In general, the Feynman integral of a $1$-particle reducible graph evaluates to the product of the components obtained by cutting its bridges. Even though $1$-particle reducible graphs contribute to the Green's function, we want to exclude them from this consideration. Due to the factorization property, this is not a restriction of generality.
Also, the contributions of graphs having a cut vertex can be factorized into their components. Thereby, a \textit{cut vertex} is a vertex which increases the number of components when it is removed. According to \cite{SmirnovAnalyticToolsFeynman2012} we will call a graph without any cut vertex a \textit{$1$-vertex irreducible graph} ($1$VI). If additionally at least one of those components is a vacuum graph, we will call it a \textit{tadpole-like} graph. If such a vacuum component not only has no legs but also no masses, it is independent of any variable. Therefore, we call those types of components \textit{scaleless}. Scaleless components can be renormalized to zero. Furthermore, tadpole-like graphs can also be omitted due to the renormalization procedure\footnote{In \cite{SchwartzQuantumFieldTheory2014} and \cite[cor. 3.49]{PrinzAlgebraicStructuresCoupling2021} this was shown for a subclass of the tadpole-like graphs. However, one can directly extend their results to the entire class of tadpole-like graphs using \cite[prop. 3.48]{PrinzAlgebraicStructuresCoupling2021}.}. Thus, we will lastly also omit all tadpole-like graphs from the Green's function. \bigskip
\begin{figure}
\centering
\begin{subfigure}{.3\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\draw (A) -- (B);
\draw (-.7,.7) -- (A); \draw (-.7,0) -- (A); \draw (-.7,-.7) -- (A);
\draw (2.7,.7) -- (B); \draw (2.7,0) -- (B); \draw (2.7,-.7) -- (B);
\end{tikzpicture}
\caption{a tree graph}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\draw (1,0) circle (1);
\draw (-.7,.7) -- (A); \draw (-.7,-.7) -- (A);
\draw (2.7,.7) -- (B); \draw (2.7,-.7) -- (B);
\end{tikzpicture}
\caption{a $1$-loop graph}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\draw (1,0) circle (1);
\draw (-.7,.7) -- (A); \draw (-.7,-.7) -- (A);
\draw (2.7,.7) -- (B); \draw (2.7,-.7) -- (B);
\coordinate[dot] (C) at (4,0);
\draw (3.3,.7) -- (C); \draw (4.7,.7) -- (C);
\draw (3.3,-.7) -- (C); \draw (4.7,-.7) -- (C);
\end{tikzpicture}
\caption{a disconnected graph}
\end{subfigure}
\vspace{.7cm}
\begin{subfigure}{.3\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\draw (-1,0) circle (1);
\draw (1,0) circle (1);
\end{tikzpicture}
\caption{a vacuum graph}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\draw (1,0) circle (1);
\draw (A) -- (B);
\draw (-.7,0) -- (A);
\draw (2.7,0) -- (B);
\end{tikzpicture}
\caption{a $2$-loop graph}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\coordinate[dot] (C) at (3,0);
\coordinate[dot] (D) at (5,0);
\draw (1,0) circle (1); \draw (4,0) circle (1);
\draw (A) -- (B) -- (C) -- (D);
\draw (-.7,0) -- (A);
\draw (5.7,0) -- (D);
\end{tikzpicture}
\caption{a $1$-particle reducible graph (also a non-amputated graph)}
\end{subfigure}
\vspace{.7cm}
\begin{subfigure}{.3\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\coordinate[dot] (C) at (4,0);
\draw (1,0) circle (1); \draw (3,0) circle (1);
\draw (-.7,.7) -- (A); \draw (-.7,-.7) -- (A);
\draw (4.7,.7) -- (C); \draw (4.7,-.7) -- (C);
\end{tikzpicture}
\caption{a $1$-vertex reducible graph}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\draw (0,1) circle (1);
\draw (-1.7,0) -- (A);
\draw (1.7,0) -- (A);
\end{tikzpicture}
\caption{a tadpole graph} \label{fig:tadpoleGraph}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\coordinate[dot] (C) at (1,-1);
\coordinate[dot] (D) at (3,1);
\coordinate[dot] (E) at (3,-1);
\draw (1,0) circle (1); \draw (3,0) circle (1);
\draw (C) arc[start angle=0, end angle=90, radius=1];
\draw (D) arc[start angle=130, end angle=230, radius=1.305];
\draw (E) arc[start angle=-50, end angle=50, radius=1.305];
\draw (-.7,0) -- (A);
\draw (1,-1.7) -- (C);
\end{tikzpicture}
\caption{a tadpole-like graph}
\end{subfigure}
\caption[Examples of Feynman graphs in $\phi^4$-theory]{Examples of certain Feynman graphs in $\phi^4$-theory. Pendant vertices are not drawn explicitly.}
\end{figure}
Our special interest here is the last Feynman rule from \cref{ex:phi4theory}, which also appears analogously for any other QFT. Hence, we will obtain certain integrals in the evaluation of Feynman graphs whenever the graph contains a loop. A graph with $n$ internal edges and $L$ undetermined internal momenta $\mink k_1,\ldots, \mink k_L$ results in a \textit{Feynman integral} of the form
\begin{align}
\int_{\mathbb R^{d\times L}} \left(\prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d \mink k_j}{i\pi^{d/2}} \right) \prod_{i=1}^n \frac{1}{- \mink q_i^2 + m_i^2} \comma \label{eq:FeynmanMomSpMinkowski}
\end{align}
where $\mink q_i$ is the (Minkowski-)momentum flowing through the edge $e_i$ and $m_i$ is the corresponding mass. For convenience reasons we adjusted the prefactor in \cref{eq:FeynmanMomSpMinkowski} slightly, see also \cite{WeinzierlFeynmanIntegrals2022}. As momenta and masses come with a specific unit, we often introduce an additional parameter in \cref{eq:FeynmanMomSpMinkowski} to make the Feynman integral dimensionless, which becomes important in the renormalization procedure. For the sake of notational simplicity we will omit this parameter.
Since the denominator in the integrand vanishes on the integration contour, the integral in \cref{eq:FeynmanMomSpMinkowski} is ill-defined. This issue is typically solved by introducing a small imaginary part $-i\epsilon$ with $\epsilon>0$ in the denominator of \cref{eq:FeynmanMomSpMinkowski}. As we will assume $\mink q_i$ and $m_i$ to be real valued in the physical application, the poles of the integrand can be avoided for generic external momenta in that way. Equivalently, we can slightly deform the integration contour. This idea will be made more explicit in \cref{sec:Coamoebas} by means of coamoebas. However, there are certain cases where this procedure fails and we have to expect singularities depending on the external momenta and the masses. The precise description of those singularities is very subtle, and the whole \cref{ch:singularities} is devoted to examining the analytic behaviour caused by those singularities. \bigskip
To exclude the problems arising from these singularities for now, we will consider a slightly different version of \cref{eq:FeynmanMomSpMinkowski} where we replace the Minkowskian kinematics by Euclidean kinematics. Therefore, we will
define the \textit{Feynman integral} of a Feynman graph $\Gamma$ to be
\begin{align}
\gls{FI} = \int_{\mathbb R^{d\times L}} \left(\prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d k_j}{\pi^{d/2}}\right) \prod_{i=1}^n \frac{1}{(q_i^2+m_i^2)^{\nu_i}} \label{eq:FeynmanMomSp} \comma
\end{align}
where the momentum $q_i$\glsadd{internalmomenta}\glsunset{internalmomenta} attached to an edge $e_i$ is now a $d$-dimensional Euclidean vector, i.e.\ $q_i^2 := (q_i^0)^2 + \ldots + (q_i^{d-1})^2$. Additionally, we introduced \textit{indices} $\nu = (\gls{nu})$ for the propagators, which will turn out to be convenient, e.g.\ for the application in the so-called integration-by-parts (IBP) methods \cite{ChetyrkinIntegrationPartsAlgorithm1981, TkachovTheoremAnalyticalCalculability1981} and general linear relations of Feynman integrals \cite{BitounFeynmanIntegralRelations2019} as well as in the analytical regularization \cite{SpeerGeneralizedFeynmanAmplitudes1969}. We will see in \cref{sec:DimAnaReg} and \cref{sec:FeynmanIntegralsAsAHyp} the dependence of the Feynman integral on those indices. \bigskip
For the Feynman integral \cref{eq:FeynmanMomSp} we will treat the indices $\nu$ and the spacetime dimension $d$ as parameters. Although they are originally restricted to positive integer values, we will meromorphically continue the Feynman integral $\mathcal I_\Gamma$ to complex values $\nu\in\mathbb C^n$ and $d\in\mathbb C$ in \cref{sec:DimAnaReg}.
Apart from the singularities arising from poles of the integrand, which we avoided for the massive case by the Euclidean kinematics, we may also have to worry about the behaviour of the integrand for large momenta $|k_j|\rightarrow \infty$. A divergence stemming from large momenta is called a \textit{UV-divergence}. In the massless case $m_i=0$, we may also have divergences for small momenta, which are called \textit{IR-divergences}. It will turn out that the convergence behaviour can be controlled by the parameters $\nu$ and $d$. For now, we will treat the Feynman integral \cref{eq:FeynmanMomSp} as a formal integral, and we will answer the question of convergence in \cref{sec:DimAnaReg}.
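To illustrate how these parameters control convergence, consider the simplest example, a single massive edge closed into a loop as in the tadpole graph of \cref{fig:tadpoleGraph}. In this case $n=L=1$ and $q_1=k_1$, and \cref{eq:FeynmanMomSp} reduces to a standard integral, quoted here for orientation:
\begin{align}
\int_{\mathbb R^{d}} \frac{\mathop{}\!\mathrm{d}^d k_1}{\pi^{d/2}} \frac{1}{(k_1^2+m_1^2)^{\nu_1}} = \frac{\Gamma\!\left(\nu_1-\frac{d}{2}\right)}{\Gamma(\nu_1)} \left(m_1^2\right)^{\frac{d}{2}-\nu_1} \point
\end{align}
The integral converges for $2\Re(\nu_1) > d$, and after meromorphic continuation the UV-divergence shows up as poles of $\Gamma\!\left(\nu_1-\frac{d}{2}\right)$. For $m_1=0$ the integrand is scaleless and the integral diverges for every choice of $d$ and $\nu_1$.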
The external momenta $p = (p_1,\ldots,p_m)^\top$ and the internal masses $m = (m_1,\ldots,m_n)^\top$ are considered as variables of \cref{eq:FeynmanMomSp}. Analogously to the parameters, we will analytically continue those variables to complex values, which will be accomplished in \cref{ch:singularities}. However, for this section we will assume these variables to be real. Instead of the $d$-dimensional momenta $p_1,\ldots,p_m$ it will often be more convenient to consider the Feynman integral as depending on the scalar products $s_{ij} = p_i p_j$ of those external momenta. This will become more apparent through the parametric representations in \cref{sec:ParametricFeynmanIntegrals}.\bigskip
When the momenta (or rather their scalar products) are continued to complex values, we can also relate \cref{eq:FeynmanMomSpMinkowski} with \cref{eq:FeynmanMomSp}. By inserting imaginary values for the zeroth component of every momentum, we change the Euclidean vectors to Minkowskian vectors. This technical ``trick'' is known as \textit{Wick rotation}. Therefore, by the analytic continuation of the momenta to complex values, the Euclidean Feynman integral \cref{eq:FeynmanMomSp} will also include the Minkowskian case \cref{eq:FeynmanMomSpMinkowski}. \bigskip
The Feynman integrals in the form of \cref{eq:FeynmanMomSpMinkowski} and \cref{eq:FeynmanMomSp} are known as \textit{scalar Feynman integrals}. However, when considering more complex theories, Feynman integrals may acquire a tensorial structure, i.e.\ the Feynman integrals contain additional momenta in the numerator. It has long been known that those tensorial Feynman integrals can be reduced to a linear combination of scalar integrals. Therefore, we can restrict our considerations fully to the case of \cref{eq:FeynmanMomSp}. We will present a method for such a reduction at the end of \cref{sec:ParametricFeynmanIntegrals} by \cref{thm:TensorReduction}.
\section{Feynman graphs}
\label{sec:FeynmanGraphs}
As pointed out in the previous section, Feynman integrals are based on graphs. Therefore, we will briefly recall the most important notions of graph theory related to Feynman graphs. Further information about classical graph theory can be found e.g.\ in \cite{HarayGraphTheory1969, TutteGraphTheory1984, ThulasiramanHandbookGraphTheory2016}, and we refer to \cite{NakanishiGraphTheoryFeynman1971} for a comprehensive introduction to Feynman graphs. \cite{BapatGraphsMatrices2014} has its main focus on graph matrices, and for a treatment of Feynman graph polynomials we suggest \cite{BognerFeynmanGraphPolynomials2010}.
Oriented graphs as we will introduce them in this section are very closely related to convex polytopes from \cref{sec:ConvexPolytopes}. For example there is also a version of Farkas' \cref{lem:Farkas} applying to oriented graphs \cite{BachemLinearProgrammingDuality1992}. The underlying reason is, that both objects can be described via oriented matroids \cite{BachemLinearProgrammingDuality1992, BjornerOrientedMatroids1999}. However, we will introduce graphs without relying on oriented matroids, to keep this summary short. \bigskip
As aforementioned, a Feynman graph consists of internal and external edges. Since the external edges do not contribute to the following combinatorial considerations, we will omit them in our description to keep the notation simple. Thus, for our purpose a graph $\Gamma=(E,V,\varphi)$ consists of a set of $n$ edges $E=\{e_1,\ldots,e_n\}$, a set of $m$ vertices $V=\{v_1,\ldots,v_m\}$ and an incidence map $\varphi : E \rightarrow V \times V$ relating edges $e$ and vertices $u,v$ by $\varphi(e) = (u,v)$. We will orient the edges, and consequently we will distinguish between the \textit{start point} $u$ and the \textit{end point} $v$ of an edge $e$. Except for graphs containing an edge whose start point and end point coincide, it is convenient to formulate the incidence relations by an $m\times n$ matrix
\begin{align}
I_{ij} = \begin{cases}
+ 1 & v_i \text{ start point of } e_j \\
-1 & v_i \text{ end point of } e_j \\
0 & e_j \text{ not incident with } v_i
\end{cases}
\end{align}
called the \textit{incidence matrix} \gls{incidence}. It is not hard to show that those incidence matrices are totally unimodular (i.e.\ the determinant of every non-singular square submatrix of $I$ equals $\pm 1$) \cite[lem. 2.6]{BapatGraphsMatrices2014}, \cite[thm. 8.13]{ThulasiramanHandbookGraphTheory2016} and that their rank is given by
\begin{align}
\operatorname{rank} (I) = m - b_0
\end{align}
with the number of vertices $m$ and the number of connected components \gls{b0} of the graph \cite[thm. 2.3]{BapatGraphsMatrices2014}, \cite[cor. 8.1]{ThulasiramanHandbookGraphTheory2016}. \bigskip
A graph $\Gamma^\prime=(E^\prime,V^\prime,\varphi|_{E^\prime})$ satisfying $E^\prime\subseteq E$ and $V^\prime\subseteq V$ with an incidence relation $\varphi|_{E^\prime}$ restricted to $E^\prime$ is called a \textit{subgraph} of $\Gamma=(E,V,\varphi)$, symbolically $\Gamma^\prime \subseteq\Gamma$. In the case $V^\prime = V$ the subgraph is said to be \textit{spanning}. By a \textit{loop}\footnote{We use the nomenclature which is ubiquitous in the theory of Feynman graphs. However, in graph theory this object is usually called a \textit{circuit}, whereas a ``loop'' in graph theory is what physicists often call a ``tadpole'' or a ``self-loop''.} of $\Gamma$ we understand a subgraph where every vertex is incident with exactly two (not necessarily distinct) edges. A \textit{tadpole} refers to a loop consisting of only one edge. A graph $\Gamma$ which does not contain any loop is called a \textit{forest}. Consequently, a \textit{tree} is a forest consisting of only one component. We denote the set of all spanning forests with $k$ components of a given graph by \gls{spanT}.
A tree $T$ has one vertex more than edges, $|E_T|+1=|V_T|$. Therefore, for a connected graph $\Gamma$ we have to delete at least $L=|E|-|V|+1$ edges to turn $\Gamma$ into a spanning tree. For a graph consisting of $b_0$ components we similarly obtain at least $L=|E|-|V|+b_0$ edges which have to be deleted to turn $\Gamma$ into a spanning forest.\bigskip
Let \gls{circuits} be the set of all loops of a graph $\Gamma$. In addition to the orientation of edges, we will also introduce an (arbitrary) orientation of every loop and define
\begin{align}
C_{ij} = \begin{cases}
+ 1 & e_j \in \mathscr C_i \text{ and } \mathscr C_i, e_j \text{ have the same orientation} \\
- 1 & e_j \in \mathscr C_i \text{ and } \mathscr C_i, e_j \text{ have the opposite orientation} \\
0 & e_j \notin \mathscr C_i
\end{cases} \label{eq:loopmatrix}
\end{align}
the $r\times n$ \textit{loop matrix} or \textit{circuit matrix}. \bigskip
Similar to the linear evaluations and linear dependences of \cref{ssec:GaleDuality} we obtain the following fact.
\begin{lemma} \label{lem:IncidenceLoopOrthogonality}
For a graph without tadpoles, the incidence matrix and the loop matrix are orthogonal
\begin{align}
I \cdot C^\top = 0 \point
\end{align}
\end{lemma}
\begin{proof}
By definition, the term $I_{ij}C_{kj}$ is only non-zero if the vertex $v_i$ is incident with an edge contained in the loop $\mathscr C_k$. In other words, the vertex $v_i\in\mathscr C_k$ is contained in the loop. However, if this is the case, there are exactly two edges incident with $v_i$ which are both in $\mathscr C_k$. By the sign conventions the two contributions will always have different sign and sum up to zero.
\end{proof}
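As a small running example (a graph chosen purely for illustration), consider the ``bubble'' graph consisting of two vertices $v_1,v_2$ joined by two parallel edges $e_1,e_2$, both oriented from $v_1$ to $v_2$. Its incidence matrix and the loop matrix of the single loop $\mathscr C_1$ (oriented along $e_2$) read
\begin{align}
I = \begin{pmatrix} +1 & +1 \\ -1 & -1 \end{pmatrix} \comma \qquad C = \begin{pmatrix} -1 & +1 \end{pmatrix} \comma
\end{align}
such that $\operatorname{rank}(I) = 1 = m - b_0$ and indeed $I\cdot C^\top = 0$.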
Let $T\in\mathscr T_1$ be a spanning tree of the graph $\Gamma$. By slight abuse of notation we will denote by $T$ and $\Gamma$ also the edge set of the tree and the graph, respectively. Furthermore, let $\gls{chord} = \{e_1,\ldots,e_L\} = \Gamma\setminus T$ be its complement, which is also known as \textit{cotree} or \textit{chord set}. For every two vertices in the tree $T$, there will be a unique path in $T$ connecting these vertices. Therefore, for every edge $e_j\in T^*$ we can uniquely construct a loop $\mathscr C_j := P_j \cup e_j$, where $P_j$ is the path in $T$ connecting the start and the end point of $e_j$. For convenience we will choose the orientation of $\mathscr C_j$ in the same direction as $e_j$.
For a chord set $T^*$, we call the set $\mathscr C_1,\ldots,\mathscr C_L$ of loops constructed in that way a \textit{fundamental set of loops}. By definition, every loop in this set will have at least one edge which is not contained in any other loop $\mathscr C_j \nsubseteq \cup_{k\neq j} \mathscr C_k$. Furthermore, one can show that for a maximal set of loops this property is also sufficient to be a fundamental set of loops \cite{NakanishiGraphTheoryFeynman1971}. We will denote by \gls{fundloop} the loop matrix \cref{eq:loopmatrix} restricted to rows corresponding to the loops of a fundamental set generated by the chord set $T^*$. Owing to its construction this matrix will have the form
\begin{align}
C_{T^*} = \begin{pmatrix}
\, (C_{T^*})_T, & \!\!\!\!\mathbbm 1 \,\,
\end{pmatrix} \label{eq:LoopMatrixUnit}
\end{align}
where the columns of the unit matrix correspond to edges in $T^*$, whereas $(C_{T^*})_T$ denotes the restriction of columns corresponding to edges in $T$. Similar to the incidence matrix, it can be shown that any fundamental loop matrix $C_{T^*}$ is totally unimodular \cite[lem. 5.12]{BapatGraphsMatrices2014} \cite[thm. 8.15]{ThulasiramanHandbookGraphTheory2016}.
Moreover, it is not hard to deduce a representation of $(C_{T^*})_T$ in terms of the incidence matrix by means of \cref{lem:IncidenceLoopOrthogonality}, whenever the graph does not contain tadpoles. Let $I'$ be the incidence matrix, where we removed $b_0$ rows such that $I'$ has full rank. With the same convention as in \cref{eq:LoopMatrixUnit} we will split $I'= (I'_{T}, I'_{T^*})$ into columns corresponding to $T$ and to $T^*$. We will then obtain \cite[thm. 5.6]{BapatGraphsMatrices2014}
\begin{align}
C_{T^*} = \begin{pmatrix}
\, - I'^\top_{T^*}\!\left(I'^\top_{T}\right)^{-1}, & \!\!\!\!\mathbbm 1\,\,
\end{pmatrix} \point \label{eq:LoopMatrixUnitIncidence}
\end{align}
Furthermore, \cref{eq:LoopMatrixUnit} shows that the fundamental loop matrix $C_{T^*}$ has always full rank $L$. Hence, $L$ is also a lower bound for the rank of the full loop matrix $\operatorname{rank} (C) \geq L$. On the other hand, \cref{lem:IncidenceLoopOrthogonality} and the rank-nullity theorem lead to an upper bound $\operatorname{rank} (C) \leq n - \operatorname{rank} (I) = L$. Thus, we have $\operatorname{rank} (C) = L$ and every fundamental set of loops $C_{T^*}$ will provide a basis for the space of all loops. We call the dimension of the space of loops the \textit{loop number} \gls{loop}. From a topological point of view, $L$ is the first Betti number of $\Gamma$.
\begin{lemma}[Minors of the loop matrix {\cite[lem 5.9]{BapatGraphsMatrices2014}} ] \label{lem:LoopMatrixMinors}
Let $\Gamma=(E,V,\varphi)$ be a connected graph with loop number $L$ and $S\subseteq E$ be a subset of $L$ edges. The restriction $(C_{T^*})_S$ of a fundamental loop matrix $C_{T^*}$ to columns of $S$ is non-singular if and only if $\Gamma\setminus S$ forms a spanning tree.
\end{lemma}
\begin{proof}
``$\Leftarrow$'': By assumption, $\Gamma\setminus S$ is a spanning tree, so $S$ is a chord set. Therefore, we can also construct a fundamental set of loops from $S$ with a fundamental loop matrix $C_S$. By a change of basis we will always find a non-singular matrix $R$ such that $C_{T^*} = R\, C_S$. Using the representation \cref{eq:LoopMatrixUnit} for $C_S$, we find $\left(C_{T^*}\right)_S = R$.
``$\Rightarrow$'': Assume that $\Gamma\setminus S$ is not a spanning tree. Then $\Gamma\setminus S$ has to contain a loop\footnote{The case of a forest or a non-spanning tree can be excluded by analysing the relation $L=|E|-|V|+b_0$ for $\Gamma$ and $\Gamma\setminus S$.}. Choose a fundamental set of loops of $\Gamma$ containing this loop and let $C_{T_0^*}$ be the corresponding fundamental loop matrix. Again, we will find a basis transformation $C_{T^*} = R\, C_{T_0^*}$ with a non-singular matrix $R$. By construction, there is a loop in the fundamental set which has no edges from $S$. Therefore, $\left(C_{T_0^*}\right)_S$ has a row of zeros. Hence, $\left(C_{T_0^*}\right)_S$ is singular and thus also $\left(C_{T^*}\right)_S$ is singular.
\end{proof}
By means of this lemma about the minors of $C_{T^*}$, we are able to show the main result of this section, which will establish a connection between graphs and polynomials. For this purpose we will introduce a variable $x_i$ for every edge $e_i$ of the graph.
\begin{theorem}[Matrix-tree theorem] \label{thm:MatrixTree}
Let $X = \operatorname{diag} (x_1,\ldots,x_n)$ be the diagonal matrix of the edge variables. Then
\begin{align}
\det\!\left(C_{T^*} X C_{T^*}^\top \right) = \sum_{T \in \mathscr T_1} \prod_{e\notin T} x_e \label{eq:MatrixTree1}
\end{align}
is independent of the chosen chord set $T^*$.
\end{theorem}
\begin{proof}
By the Cauchy-Binet formula we can split the determinant into
\begin{align}
\det\!\left(C_{T^*} X C_{T^*}^\top \right) = \sum_S \det\!\left(C_{T^*} X\right)_S \det\!\left(C_{T^*}\right)_S = \sum_S \left(\det\!\left(C_{T^*}\right)_S \right)^2 \prod_{e\in S} x_e
\end{align}
where the sum goes over all edge subsets $S\subseteq E$ of length $L$. Due to \cref{lem:LoopMatrixMinors} the sum will reduce to terms where $\Gamma\setminus S$ is a spanning tree. Total unimodularity of $C_{T^*}$ concludes the proof.
\end{proof}
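As an explicit check on the simple two-edge example (two vertices joined by parallel edges $e_1,e_2$), choosing the chord set $T^*=\{e_2\}$ gives the fundamental loop matrix $C_{T^*}=\begin{pmatrix} -1 & 1 \end{pmatrix}$ and hence
\begin{align}
\det\!\left(C_{T^*} X C_{T^*}^\top\right) = x_1 + x_2 \comma
\end{align}
matching the contributions of the two spanning trees $\{e_1\}$ and $\{e_2\}$ on the right hand side of \cref{eq:MatrixTree1}.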
The polynomial on the right hand side of \cref{eq:MatrixTree1} will be introduced in the next section as the first Symanzik polynomial. It is also known as the dual Kirchhoff polynomial, due to a similar polynomial that appeared in Kirchhoff's analysis of electrical networks \cite{KirchhoffUeberAuflosungGleichungen1847}. This coincidence comes as no surprise, as electrical networks and Feynman graphs are structurally very similar. We refer to \cite{SeshuLinearGraphsElectrical1961} for a summary of graphs in electrical networks, which also contains most of the results that are of significance for Feynman graphs. For the various ways to represent Symanzik polynomials we suggest \cite{BognerFeynmanGraphPolynomials2010}.
The matrix-tree theorem (\cref{thm:MatrixTree}) is often stated in a slightly different variant.
\begin{cor}
Let $I'$ be the incidence matrix of a tadpole free graph, where we removed $b_0$ rows to obtain a full ranked matrix. Further, let $\hat{\mathscr L} = I' X^{-1} I'^\top$ be the so-called reduced, dual Laplacian. Then we have
\begin{align}
\sum_{T \in \mathscr T_1} \prod_{e\notin T} x_e = \det (X) \det (\hat{\mathscr L}) \point
\end{align}
\end{cor}
\begin{proof}
Using the representation \cref{eq:LoopMatrixUnitIncidence} we can write
\begin{align}
\det\!\left(C_{T^*} X C_{T^*}^\top\right) = \det\!\left(X_{T^*}\right) \det\!\left[ X_{T^*}^{-1} I'^\top_{T^*} \left(I'^\top_{T}\right)^{-1} X_T \left(I'_{T}\right)^{-1} I'_{T^*} + \mathbbm 1 \right]
\end{align}
where we split the diagonal matrix $X$ in the same way, i.e.\ $X_{T} = \operatorname{diag}(x_i)_{e_i\in T}$ and $X_{T^*} = \operatorname{diag}(x_i)_{e_i\in T^*}$. Using Sylvester's determinant identity $\det(\mathbbm 1 + AB) = \det(\mathbbm 1 + BA)$ we can rewrite
\begin{align}
&\det\!\left(X_{T^*}\right) \det\!\left[ X_T \left(I'_{T}\right)^{-1} I'_{T^*} X_{T^*}^{-1} I'^\top_{T^*} \left(I'^\top_{T}\right)^{-1} + \mathbbm 1 \right] \nonumber \\
&= \det\!\left(X_{T^*}\right) \det\!\left(X_{T}\right) \det\!\left[ \left(I'_{T}\right)^{-1} \left( I'_{T} X_{T}^{-1} I'^\top_T + I'_{T^*} X_{T^*}^{-1} I'^\top_{T^*} \right) \left(I'^\top_{T}\right)^{-1} \right] \nonumber \\
&= \det(X) \det\!\left[ I'_{T} X_{T}^{-1} I'^\top_T + I'_{T^*} X_{T^*}^{-1} I'^\top_{T^*} \right] = \det (X) \det\!\left(I' X^{-1} I'^\top\right) \comma
\end{align}
where we additionally used $\det\!\left(I'_{T}\right)=\pm 1$, which holds since $I'_{T}$ is a non-singular square submatrix of the totally unimodular matrix $I'$.
\end{proof}
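As a quick consistency check on the same two-edge example (two vertices joined by parallel edges $e_1,e_2$): removing the row of one vertex gives $I'=\begin{pmatrix} 1 & 1 \end{pmatrix}$, hence $\hat{\mathscr L} = x_1^{-1} + x_2^{-1}$ and
\begin{align}
\det(X)\det(\hat{\mathscr L}) = x_1 x_2 \left(\frac{1}{x_1}+\frac{1}{x_2}\right) = x_1 + x_2 \comma
\end{align}
in agreement with \cref{eq:MatrixTree1}.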
Similar to the deletion and contraction of vector configurations (see \cref{ssec:GaleDuality}), we can introduce the operations deletion and contraction on graphs. Let $e_i\in E$ be an edge of the graph $\Gamma=(E,V,\varphi)$. Then $\contraction{\Gamma}{e_i}$ denotes the \textit{contraction}\glsadd{contraction} of $e_i$, which means that we shrink the edge $e_i$ such that the endpoints of $e_i$ become a single vertex. For the fundamental loop matrix $C_{T^*}$ of $\Gamma$ this implies that we remove the column corresponding to $e_i$ to arrive at the fundamental loop matrix of $\contraction{\Gamma}{e_i}$. Denote by $\mathcal{U}(\Gamma)=\det\!\left(C_{T^*}XC_{T^*}^\top\right)$ the polynomial of equation \cref{eq:MatrixTree1} for a graph $\Gamma$. From the previous considerations we immediately obtain
\begin{align}
\mathcal{U} (\contraction{\Gamma}{e_i}) = \mathcal{U}(\Gamma)\big|_{x_i=0} \point \label{eq:UContracted}
\end{align}
By $\Gamma\setminus e_i$ we mean the \textit{deletion}\glsadd{deletion} of $e_i$, i.e.\ $\Gamma\setminus e_i = \Gamma(E\setminus e_i,V,\varphi |_{E\setminus e_i})$. Assuming that $\Gamma$ is bridgefree, we can choose a chord set $T^*$ such that $e_i\in T^*$. Then the fundamental loop matrix $C_{T^*}$ of $\Gamma \setminus e_i$ can be obtained from that of $\Gamma$ by removing the column and the row corresponding to $e_i$.
For a bridgefree graph, the polynomial $\mathcal{U}(\Gamma)$ will be linear in each variable $x_i$. Therefore, we obtain for its derivative
\begin{align}
\pd{\mathcal{U}}{x_i} = \sum_{\substack{T\in \mathscr T_1 \\ e_i\notin T}} \prod_{\substack{e\notin T \\ e \neq e_i}} x_e = \mathcal{U} (\Gamma\setminus e_i) \label{eq:UDeleted}
\end{align}
where the spanning trees of $\Gamma$ not containing $e_i$ are precisely those spanning trees of $\Gamma\setminus e_i$. When $\Gamma$ is a bridgefree graph, we can combine \cref{eq:UContracted} and \cref{eq:UDeleted} to
\begin{align}
\mathcal{U} (\Gamma) = \mathcal{U} (\contraction{\Gamma}{e_i} ) + x_i \, \mathcal{U} (\Gamma \setminus e_i) \label{eq:UContractedDeleted}
\end{align}
which is known as ``deletion-and-contraction'' relation.
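For the same two-edge example this relation is easily verified: contracting $e_1$ leaves a single tadpole edge $e_2$, whose only spanning tree is the remaining vertex without edges, so $\mathcal{U}(\contraction{\Gamma}{e_1}) = x_2$, while deleting $e_1$ leaves a tree with $\mathcal{U}(\Gamma\setminus e_1) = 1$, and indeed $\mathcal{U}(\Gamma) = x_1 + x_2 = \mathcal{U}(\contraction{\Gamma}{e_1}) + x_1\, \mathcal{U}(\Gamma\setminus e_1)$.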
\section{Parametric Feynman integrals} \label{sec:ParametricFeynmanIntegrals}
In this section we will present various integral representations of the Feynman integral. For this reason we will attach so-called \textit{Schwinger parameters} or \textit{Feynman parameters} $x_i$ to every edge $e_i$ of a Feynman graph $\Gamma$ and express the Feynman integral \cref{eq:FeynmanMomSp} as an integral over those parameters $\gls{Schwingers}$. Those parameter integrals will contain certain graph polynomials, which will allow us to use the many tools of algebraic geometry. Therefore, parametric representations are much more convenient than the momentum space representation \cref{eq:FeynmanMomSp} of the Feynman integral for the purpose of this thesis. Further, the parametric representations will enable us to continue the spacetime dimension $d$ to complex numbers, which will be used in the dimensional regularization (see \cref{sec:DimAnaReg}). In addition, parametric representations are also helpful when reducing tensorial Feynman integrals to scalar Feynman integrals, which we will demonstrate in the end of this section. The parametric Feynman integral will also be the starting point for the application of hypergeometric theory.
For general Feynman graphs, parametric Feynman integrals were first considered by Chisholm \cite{ChisholmCalculationMatrixElements1952}. In this procedure certain polynomials appear which can be considered as graph polynomials; this was found by Symanzik \cite{SymanzikDispersionRelationsVertex1958} and was later proven by Nakanishi \cite{NakanishiIntegralRepresentationsScattering1961}. We refer to \cite{NakanishiGraphTheoryFeynman1971} for a comprehensive presentation of all these results and to \cite{BognerFeynmanGraphPolynomials2010} for a modern summary. \bigskip
There are several ways to deduce parametric representations from \cref{eq:FeynmanMomSp}. In any case we will use certain integral representations of the integrand of \cref{eq:FeynmanMomSp} to change from momentum to parameter space. The simplest way to perform this rewriting is ``Schwinger's trick''. Alternatively, one can also use ``Feynman's trick'', which we include in the appendix (\cref{lem:FeynmanTrick}) for the sake of completeness.
\begin{lemma}[Schwinger's trick] \label{lem:SchwingersTrick}
Let $D_1,\ldots,D_n$ be positive, real numbers $D_i>0$ and $\nu\in\mathbb C^n$ with $\Re(\nu_i)>0$. Then we have the following identity
\begin{align}
\frac{1}{\prod_{i=1}^n D_i^{\nu_i}} = \frac{1}{\Gamma(\nu)} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} e^{-\sum_{i=1}^n x_i D_i} \point \label{eq:SchwingersTrick}
\end{align}
As before, we use a multi-index notation to keep formulas short. This implies $\Gamma(\nu):= \prod_{i=1}^n \Gamma(\nu_i)$ and $\mathop{}\!\mathrm{d} x\,x^{\nu-1} := \prod_{i=1}^n \mathop{}\!\mathrm{d} x_i\,x_i^{\nu_i-1}$.
\end{lemma}
\begin{proof}
The lemma follows directly from the integral representation of the $\Gamma$-function $\Gamma(\nu_i) = \int_0^\infty \mathop{}\!\mathrm{d} x_i \, x_i^{\nu_i-1} e^{-x_i}$ by substituting $x_i \mapsto D_i x_i$.
\end{proof}
After applying Schwinger's trick to the momentum space representation \cref{eq:FeynmanMomSp}, there will be certain Gaussian integrals left.
\begin{lemma}[Gaussian integrals] \label{lem:GaussianIntegrals}
Let $M$ be a real, positive-definite $(L\times L)$-matrix. Then
\begin{align}
\int_{\mathbb R^{d\times L}} \left(\prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d k_j}{\pi^{d/2}} \right) e^{-k^\top\! M k - 2 Q^\top k} = e^{Q^\top\! M^{-1} Q} \det(M)^{-\frac{d}{2}}
\end{align}
holds for any $Q\in\mathbb R^{d\times L}$, i.e.\ for any collection of $L$ vectors $Q_1,\ldots,Q_L\in\mathbb R^d$.
\end{lemma}
\begin{proof}
Since $M$ is positive-definite, it is in particular non-singular, and we can substitute $k\mapsto k - M^{-1} Q$ to remove the linear part, which results in
\begin{align}
e^{Q^\top\! M^{-1} Q} \int_{\mathbb R^{d\times L}} \left(\prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d k_j}{\pi^{d/2}} \right) e^{-k^\top\! M k} \point
\end{align}
Furthermore, we can diagonalize the matrix $M=S R S^\top$, where $R=\operatorname{diag}(r_1,\ldots,r_L)$ consists of the eigenvalues of $M$ and $S$ is orthogonal, $S^{-1}=S^\top$. Substituting $l = S^\top\! k$ we obtain
\begin{align}
e^{Q^\top\! M^{-1} Q} \int_{\mathbb R^{d\times L}} \left(\prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d l_j}{\pi^{d/2}} \right) e^{- l^\top\! R l } = e^{Q^\top\! M^{-1} Q} \prod_{i=1}^L r_i^{-\frac{d}{2}} = e^{Q^\top\! M^{-1} Q} \det(M)^{-\frac{d}{2}}
\end{align}
where the integration with respect to $l$ factorizes into $L$ Gaussian integrals $\int_{\mathbb R^d} \frac{\mathop{}\!\mathrm{d}^d l_i}{\pi^{d/2}} e^{-r_i l_i^2} = r_i^{-d/2}$.
\end{proof}
Combining \cref{lem:SchwingersTrick} and \cref{lem:GaussianIntegrals} we can give the first parametric integral representation of Feynman integrals of this section. As aforementioned, we attach a momentum $q_i$\glsadd{internalmomenta}\glsunset{internalmomenta} to every edge $e_i$. Since we impose momentum conservation at every vertex, $q_i$ will consist of a linear combination of external momenta $p_1,\ldots,p_m$ and a linear combination of undetermined loop momenta $k_1,\ldots,k_L$. A possible choice can be made by a fundamental loop matrix $C_{T^*}$ for any chord set $T^*$ as
\begin{align}
q_i = \sum_{j=1}^L k_j \!\left(C_{T^*}\right)_{ji} + \widehat q_i \label{eq:qjhatqj}
\end{align}
where we denote by $\widehat q_i$\glsadd{internalmomentaHat}\glsunset{internalmomentaHat} the part coming from external momenta.
\begin{theorem}[Schwinger representation] \label{thm:SchwingerRepresentation}
Let $\Gamma$ be a (1PI) Feynman graph and $C_{T^*}$ its fundamental loop matrix with respect to any chord set $T^*$. Further, $X=\operatorname{diag}(x_1,\ldots,x_n)$ collects the Schwinger parameters and $\widehat q=(\widehat q_1,\ldots,\widehat q_n)$ represent the amount of external momenta on every edge $e_1,\ldots,e_n$ according to \cref{eq:qjhatqj}. When the Feynman integral $\mathcal I_\Gamma (d,\nu, p, m)$ from \cref{eq:FeynmanMomSp} converges absolutely, it can be rewritten as
\begin{align}
\mathcal I_\Gamma(d,\nu,p,m) = \frac{1}{\Gamma(\nu)} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \mathcal{U}^{-\frac{d}{2}} e^{-\frac{\mathcal{F}}{\mathcal{U}}} \comma \label{eq:SchwingerRepresentation}
\end{align}
where we will assume $\Re(\nu_i) >0$. Thereby, the polynomials $\mathcal{U}\in\mathbb R [x_1,\ldots,x_n]$ and $\mathcal{F}\in\mathbb R [x_1,\ldots,x_n]$ are given by
\begin{align}
\gls{Uu} = \det (M) \qquad \text{and} \qquad \gls{Ff} = \det (M) (J-Q^\top M^{-1} Q) \label{eq:SymanzikPolynomialsMatrices}
\end{align}
with $\gls{M}=C_{T^*} X C_{T^*}^\top$, $\gls{Q}=C_{T^*}X\widehat q$ and $\gls{J}=\sum_{i=1}^n x_i \!\left(\widehat q_i^2 + m_i^2\right)$.
\end{theorem}
\begin{proof}
Starting with the momentum space representation \cref{eq:FeynmanMomSp} and using Schwinger's trick (\cref{lem:SchwingersTrick}) we arrive at
\begin{align}
\mathcal I_\Gamma(d,\nu,p,m) = \frac{1}{\Gamma(\nu)} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \int_{\mathbb R^{d\times L}} \left(\prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d k_j}{\pi^{d/2}}\right) e^{-\sum_{i=1}^n x_i D_i} \label{eq:SchwingerRepresentationProof1}
\end{align}
with the inverse propagators\glsadd{propagators}\glsunset{propagators} $D_i = q_i^2 + m_i^2$. Sorting the exponent in \cref{eq:SchwingerRepresentationProof1} into terms that are quadratic, linear, and constant in $k$, we obtain by means of \cref{eq:qjhatqj}
\begin{align}
\gls{Lambda} :=\sum_{i=1}^n x_i D_i &= k^\top\! C_{T^*} X C_{T^*}^\top\! k + 2 k^\top\! C_{T^*} X \widehat q + \sum_{i=1}^n x_i \!\left(\widehat q_i^2 + m_i^2\right) \nonumber \\
& = k^\top\! M k + 2 Q^\top\! k + J \point \label{eq:LambdaxDrelation}
\end{align}
By construction $M$ is symmetric. Moreover, by the same Cauchy-Binet argument as in the proof of \cref{thm:MatrixTree}, every leading principal minor of $M$ is a sum of monomials in the Schwinger parameters with non-negative coefficients and with at least one non-vanishing term, since the rows of $C_{T^*}$ are linearly independent. Thus, $M$ is positive-definite inside the integration region $x\in (0,\infty)^n$, and we can apply \cref{lem:GaussianIntegrals} to obtain \cref{eq:SchwingerRepresentation}.
\end{proof}
\begin{theorem}[Symanzik polynomials] \label{thm:SymanzikPolynomialsTreeForest}
The polynomials appearing in \cref{thm:SchwingerRepresentation} can be rewritten as
\begin{align}
\gls{Uu} &= \sum_{T\in\mathscr T_1} \prod_{e\notin T} x_e \label{eq:FirstSymanzik} \\
\gls{Ff} &= \sum_{F\in\mathscr T_2} s_F \prod_{e\notin F} x_e + \mathcal{U} (x) \sum_{i=1}^n x_i m_i^2 \label{eq:SecondSymanzik}
\end{align}
where $s_F = \left(\sum_{e_i\notin F} \pm q_i\right)^2 = \left(\sum_{e_i\notin F} \pm \widehat q_i\right)^2$ is the squared sum of signed momenta of the cut edges. Denote by $F_1$ and $F_2$ the two components of the spanning $2$-forest $F$. Then we choose the signs in such a way that the momenta on $e$ with $e\notin F$ flow from $F_1$ to $F_2$. Thus, $s_F$ is the squared total momentum flowing from $F_1$ to $F_2$. By momentum conservation this is equivalent to $s_F = \left(\sum_{a\in V_{F_1}} p_a\right)^2$, the squared sum of external momenta flowing into $F_1$ (or equivalently into $F_2$). The polynomials $\mathcal{U} (x)$ and $\mathcal{F} (x)$ are known as the \emph{first} and the \emph{second Symanzik polynomial}.
\end{theorem}
\begin{proof}
The representation \cref{eq:FirstSymanzik} was already shown in \cref{thm:MatrixTree}. For the representation of \cref{eq:SecondSymanzik} the proof is oriented towards \cite{NakanishiGraphTheoryFeynman1971}. Note, that the massless part of \cref{eq:SecondSymanzik} as well as the massless part of the representation in \cref{eq:SymanzikPolynomialsMatrices} is quadratic in the external momenta. Thus, we will write $\sum_{a,b} W_{ab} p_a p_b$ for the massless part of \cref{eq:SymanzikPolynomialsMatrices} and $\sum_{a,b} \widetilde W_{ab} p_a p_b$ for the massless part of \cref{eq:SecondSymanzik}. Hence, we want to compare the coefficients $W_{ab}$ and $\widetilde W_{ab}$. For this reason we can choose arbitrary external momentum configurations, e.g.\ we can set all but two external momenta to zero. Therefore, it is sufficient to show equality in the case $p_a = - p_b$.
Let us add a new edge $e_0$ linking the external vertices\footnote{Without loss of generality we can assume that $v_a\neq v_b$, since the case $v_a=v_b$ corresponds to a ``tadpole-like'' graph, where the massless part of \cref{eq:SymanzikPolynomialsMatrices} and \cref{eq:SecondSymanzik} vanishes identically.} $v_a$ and $v_b$ (in this direction). We will call the resulting graph $\Gamma_0 = \Gamma \cup \{e_0\}$. To construct a fundamental loop matrix of $\Gamma_0$ we will select a chord set $T_0^*=T^*\cup\{e_0\}$, where $T^*$ is a chord set of $\Gamma$. The loop $\mathscr C_0$ will then consist of $e_0$ and a path in $\Gamma$ connecting the vertices $v_a$ and $v_b$. Therefore, we will have
\begin{align}
\widehat q_i = \left(C_{T_0^*}\right)_{0,i} p_b \point \label{eq:hatqi}
\end{align}
That implies $Q_i = \sum_{j=1}^n \!\left(C_{T^*}\right)_{ij} x_j \widehat q_j = p_b M^0_{0,i}$ where $M^0 = C_{T_0^*} X C_{T_0^*}^\top$ is the corresponding matrix for the graph $\Gamma_0$.
On the other hand, we can write the massless part of \cref{eq:SymanzikPolynomialsMatrices} as\glsadd{Ff0}\glsunset{Ff0} $\mathcal{F}_0 = - Q^\top\! \operatorname{Adj}(M) Q + \det(M) \sum_{i=1}^n x_i \widehat q_i^2 = \det (V)$, where
\begin{align}
V = \begin{pmatrix}
\sum_{i=1}^n x_i \widehat q_i^2 & Q \\
Q^\top & M
\end{pmatrix}
\end{align}
by using Laplace expansion. Inserting \cref{eq:hatqi}, the matrix $V$ coincides with $\left. M^{0}\right|_{x_0 = 0}$ up to multiplying its first row and its first column by $p_b$. Using \cref{thm:MatrixTree} again we obtain
\begin{align}
\det (V) = \left. p_b^2 \det(M^{0})\right|_{x_0 = 0} = p_b^2 \sum_{\substack{T\in\mathscr T_1(\Gamma_0) \\ e_0 \in T}} \prod_{e\notin T} x_e = p_b^2 \sum_{\substack{F\in \mathscr T_2(\Gamma) \\ v_a\in V_{F_1},\, v_b\in V_{F_2}}} \prod_{e\notin F} x_e
\end{align}
where we used that setting $x_0=0$ removes all terms belonging to spanning trees of $\Gamma_0$ which do not contain $e_0$, and that every spanning tree of $\Gamma_0$ containing $e_0$ turns, after removing $e_0$, into a spanning $2$-forest of $\Gamma$ separating $v_a$ and $v_b$. For the momentum configuration $p_a = -p_b$ these are precisely the spanning $2$-forests with $s_F = p_b^2$, while all other spanning $2$-forests carry $s_F = 0$, which shows the equality of the coefficients $W_{ab}$ and $\widetilde W_{ab}$.
\end{proof}
From \cref{eq:FirstSymanzik} and \cref{eq:SecondSymanzik} it is obvious that the Symanzik polynomials are homogeneous of degree $L$ and $L+1$, respectively. In addition, the first Symanzik polynomial $\mathcal{U}$ and the massless part of the second Symanzik polynomial $\mathcal{F}_0$ are linear in each Schwinger parameter. The complete second Symanzik polynomial $\mathcal{F}$ is at most quadratic in every Schwinger parameter. Besides the representations in \cref{eq:FirstSymanzik}, \cref{eq:SecondSymanzik} and \cref{eq:SymanzikPolynomialsMatrices}, many alternative ways to write Symanzik polynomials are known. We refer to \cite{BognerFeynmanGraphPolynomials2010, NakanishiGraphTheoryFeynman1971} for an overview.
Alternatively, we can also consider the second Symanzik polynomial as the discriminant of $\Lambda(k,p,x)\, \mathcal{U}(x)$ with respect to $k$, where $\Lambda(k,p,x)$ was defined in \cref{eq:LambdaxDrelation}. Thus, we obtain the second Symanzik polynomial by eliminating $k$ in $\Lambda(k,p,x)\, \mathcal{U}(x)$ by means of the equation $\frac{\partial \Lambda}{\partial k_j} = 0$, as noted in \cite{EdenAnalyticSmatrix1966}.
Since $\mathcal{F}_0 := \sum_{F\in\mathscr T_2} s_F \prod_{e\notin F} x_e$ can be reduced to the determinant of a matrix $C_{T_0^*} X C_{T_0^*}^\top$, the formulas \cref{eq:UContracted}, \cref{eq:UDeleted} and \cref{eq:UContractedDeleted} also apply to $\mathcal{F}_0$.\bigskip
We already restricted ourselves to $1$PI graphs (i.e.\ bridge-free graphs) in \cref{thm:SchwingerRepresentation}. Another special case occurs when the graph $\Gamma$ contains a cut vertex, i.e.\ if $\Gamma$ is a $1$-vertex reducible graph. Whenever a Feynman graph contains such cut vertices, the Feynman integral factorizes into the Feynman integrals of the components which are held together by the cut vertices. This can be seen easily from the momentum space representation \cref{eq:FeynmanMomSp}, as we can choose the loop momenta separately for all those components. If additionally one of these components is a vacuum graph (i.e.\ it contains no legs), the value of this component is independent of the external momenta, and we call those graphs tadpole-like. We already excluded this kind of graphs from the perturbation series in the beginning of \cref{sec:FeynmanIntegralsIntro}. If such a component not only has no legs but also no masses, it is independent of any variable and is scaleless. Consequently, the second Symanzik polynomial of every scaleless graph or subgraph vanishes, $\mathcal{F}\equiv 0$. Hence, Feynman integrals containing a scaleless subgraph connected only by a cut vertex will diverge (in any representation and for any choice of $d$ and $\nu$). In \cref{sec:DimAnaReg} we will derive a similar result. Furthermore, we will show that this is the only case where the Feynman integral diverges for any choice of $d$ and $\nu$. The problem of divergences of scaleless graphs can also be ``cured'' within renormalization procedures. By a convenient choice of the counterterm, the renormalization procedure maps any tadpole-like graph to zero \cite{SchwartzQuantumFieldTheory2014}.\bigskip
From \cref{thm:SchwingerRepresentation} we can deduce another representation of the Feynman integral in parameter space.
\begin{lemma}[Feynman representation] \label{lem:FeynmanRepresentation}
If the Feynman integral \cref{eq:SchwingerRepresentation} converges absolutely, we can rewrite it as the following integral
\begin{align}
\mathcal I_\Gamma(d,\nu,p,m) = \frac{\Gamma(\omega)}{\Gamma(\nu)} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \delta\big(1-H(x)\big) \frac{\mathcal{U}^{\omega-\frac{d}{2}}}{\mathcal{F}^\omega} \label{eq:FeynmanParSpFeynman}
\end{align}
where $\gls{sdd} = \sum_{i=1}^n \nu_i - L\frac{d}{2}$ is called the \textit{superficial degree of divergence} and $H(x) = \sum_{i=1}^n h_i x_i$ is an arbitrary hyperplane with $h_i\geq 0$ not all zero. $\delta(x)$ denotes Dirac's $\delta$-distribution, and we will assume $\Re (\omega)>0$ and $\Re(\nu_i)>0$.
\end{lemma}
\begin{proof}
We will insert $1 = \int_0^\infty \mathop{}\!\mathrm{d} t \, \delta(t-H(x))$ in the integral \cref{eq:SchwingerRepresentation}, which results in
\begin{align}
\mathcal I_\Gamma(d,\nu,p,m) = \frac{1}{\Gamma(\nu)} \int_0^\infty \mathop{}\!\mathrm{d} t \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \frac{1}{t} \delta\left(1-\sum_{i=1}^n h_i \frac{x_i}{t} \right) \mathcal{U}^{-\frac{d}{2}} e^{-\frac{\mathcal{F}}{\mathcal{U}}} \point
\end{align}
Due to the homogeneity of $\mathcal{U}$ and $\mathcal{F}$, a substitution $x_i \mapsto t x_i$ leads to
\begin{align}
\mathcal I_\Gamma(d,\nu,p,m) = \frac{1}{\Gamma(\nu)} \int_0^\infty \mathop{}\!\mathrm{d} t \, t^{\omega-1} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \delta\big(1-H(x)\big) \mathcal{U}^{-\frac{d}{2}} e^{- t \frac{\mathcal{F}}{\mathcal{U}}} \point
\end{align}
The integration with respect to $t$ can be performed by \cref{eq:SchwingersTrick}.
\end{proof}
The freedom in the choice of a hyperplane $H(x)$ in \cref{eq:FeynmanParSpFeynman} is sometimes referred to as the Cheng-Wu theorem and expresses the projective nature of the integral due to the homogeneity of the Symanzik polynomials. We can also use this freedom to remove one of the Symanzik polynomials from the integrand, as done for the first Symanzik polynomial in the following corollary. However, note that the evaluation of the $\delta$-distribution will produce terms of $\mathcal{U}(\Gamma \setminus e_i)$ and $\mathcal{U}(\contraction{\Gamma}{e_i})$.
\begin{cor}
In case of absolute convergence, the Feynman integral can also be expressed as
\begin{align}
\mathcal I_\Gamma(d,\nu,p,m) = L \frac{\Gamma(\omega)}{\Gamma(\nu)} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x \, x^{\nu-1} \delta\big(1-\mathcal{U}(x)\big) \mathcal{F}^{-\omega} \label{eq:FeynmanRepresentationCorollary} \point
\end{align}
\end{cor}
\begin{proof}
Similar to the proof of \cref{lem:FeynmanRepresentation} we will insert $1 = \int_0^\infty \mathop{}\!\mathrm{d} t \, \delta\big(t - \mathcal{U}(x)\big)$ for $x\in (0,\infty)^n$ in the representation \cref{eq:SchwingerRepresentation}. The remaining steps are completely analogous to the proof of \cref{lem:FeynmanRepresentation}, where we now substitute $x_i\mapsto t^{1/L} x_i$. The relation \cref{eq:UContractedDeleted} can be used to evaluate the integration of the $\delta$-distribution.
\end{proof}
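As a simple illustration of \cref{lem:FeynmanRepresentation} and the freedom in choosing $H(x)$, consider again the assumed one-loop bubble graph with $\mathcal{U} = x_1 + x_2$ and $\mathcal{F} = p^2 x_1 x_2 + (x_1+x_2)\!\left(m_1^2 x_1 + m_2^2 x_2\right)$. Choosing $H(x) = x_1 + x_2$, the first Symanzik polynomial equals one on the support of the $\delta$-distribution, and \cref{eq:FeynmanParSpFeynman} reduces to the textbook Feynman parameter integral
\begin{align*}
\mathcal I_\Gamma(d,\nu,p,m) = \frac{\Gamma(\omega)}{\Gamma(\nu_1)\Gamma(\nu_2)} \int_0^1 \mathop{}\!\mathrm{d} x\, x^{\nu_1-1} (1-x)^{\nu_2-1} \left[ p^2 x(1-x) + m_1^2 x + m_2^2 (1-x)\right]^{-\omega}
\end{align*}
with $\omega = \nu_1+\nu_2-\frac{d}{2}$. Since $\mathcal{U}$ is the sum of all Schwinger parameters for any $1$-loop graph, this choice coincides here with the choice $\delta\big(1-\mathcal{U}(x)\big)$ of \cref{eq:FeynmanRepresentationCorollary}.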
Clearly, those integrals do not converge for every value of $\nu\in\mathbb C^n$ and $d\in\mathbb C$. Moreover, \cref{eq:SchwingerRepresentation} and \cref{eq:FeynmanParSpFeynman} will in general have different convergence regions. However, it is only important that their convergence regions intersect (which was shown implicitly in the proof of \cref{lem:FeynmanRepresentation}). Afterwards we will continue \cref{eq:SchwingerRepresentation} and \cref{eq:FeynmanParSpFeynman} meromorphically to the whole complex plane. The convergence as well as the meromorphic continuation will be discussed in more detail in \cref{sec:DimAnaReg}.\bigskip
The previous parametric representations contained two polynomials. With the following parametric Feynman integral we will give a representation which contains only one polynomial. This representation was first noted in \cite{LeeCriticalPointsNumber2013} and will be very convenient when applying hypergeometric theory. It was also found independently in \cite{NasrollahpoursamamiPeriodsFeynmanDiagrams2016}.
\begin{lemma}[Lee-Pomeransky representation \cite{LeeCriticalPointsNumber2013,BitounFeynmanIntegralRelations2019}] \label{lem:LeePomeranskyRepresentation}
In case of absolute convergence we can rewrite the Feynman integral as
\begin{align}
\mathcal I_\Gamma(d,\nu,p,m) = \frac{\Gamma\left(\frac{d}{2}\right)}{\Gamma\left(\frac{d}{2}-\omega\right)\Gamma(\nu)} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \mathcal{G}^{-\frac{d}{2}} \label{eq:LeePomeranskyRepresentation}
\end{align}
where $\mathcal{G} = \mathcal{U} + \mathcal{F}$\glsadd{Gg}\glsunset{Gg} denotes the sum of the first and the second Symanzik polynomial. We will assume\footnote{Those restrictions can later be relaxed by meromorphic continuation, see \cref{sec:DimAnaReg}.} that $\Re\!\left(\frac{d}{2}\right)>0$, $\Re\!(\nu_i)>0$ and $\Re\left(\frac{d}{2}-\omega\right)>0$.
\end{lemma}
\begin{proof}
Note that for every real, positive $D>0$ we have
\begin{align}
D^{-\omega} = \frac{\Gamma(\alpha)}{\Gamma(\alpha-\omega)\Gamma(\omega)} \int_0^\infty \mathop{}\!\mathrm{d} t \, t^{\omega-1} (1 + D t)^{-\alpha} \label{eq:LeePomTrick}
\end{align}
from the Euler beta function, where we also assume $\Re(\alpha)>0$, $\Re(\omega)>0$ and $\Re(\alpha-\omega)>0$. Applying \cref{eq:LeePomTrick} with $D=\frac{\mathcal{F}}{\mathcal{U}}$ and $\alpha=\frac{d}{2}$ to the representation \cref{eq:FeynmanParSpFeynman} we obtain
\begin{align}
\mathcal I_\Gamma(d,\nu,p,m) = \frac{\Gamma\left(\frac{d}{2}\right)}{\Gamma\left(\frac{d}{2}-\omega\right)\Gamma(\nu)} \int_0^\infty \mathop{}\!\mathrm{d} t\,t^{\omega-1} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x \, x^{\nu-1} \delta\big(1-H(x)\big) (\mathcal{U} + t \mathcal{F})^{-\frac{d}{2}} \point
\end{align}
Again, by a substitution $x_i = \frac{y_i}{t}$ we will arrive at the assertion.
\end{proof}
Hence, the Feynman integral can be expressed as the Mellin transform of a polynomial up to a certain power. For this reason, integrals of the form \cref{eq:LeePomeranskyRepresentation} are often called \textit{Euler-Mellin integrals}, which were systematically investigated in \cite{NilssonMellinTransformsMultivariate2010, BerkeschEulerMellinIntegrals2013}.
On a more abstract level, \cref{lem:LeePomeranskyRepresentation} can be considered as an application of the ``Cayley trick'' (\cref{lem:CayleysTrick}) for integrals. Indeed, we will see in \cref{sec:FeynmanIntegralsAsAHyp} that \cref{eq:LeePomeranskyRepresentation} arises from the Cayley embedding of \cref{eq:FeynmanParSpFeynman}.\bigskip
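As a minimal consistency check of \cref{lem:LeePomeranskyRepresentation}, consider the assumed one-loop tadpole graph with a single massive edge, for which $\mathcal{U} = x_1$ and $\mathcal{F} = m_1^2 x_1^2$, i.e.\ $\mathcal{G} = x_1 + m_1^2 x_1^2$. The resulting one-dimensional Euler-Mellin integral can be evaluated directly by means of the Euler beta function,
\begin{align*}
\mathcal I_\Gamma(d,\nu_1,m_1) = \frac{\Gamma\!\left(\frac{d}{2}\right)}{\Gamma(d-\nu_1)\,\Gamma(\nu_1)} \int_0^\infty \mathop{}\!\mathrm{d} x_1\, x_1^{\nu_1-\frac{d}{2}-1} \left(1+m_1^2 x_1\right)^{-\frac{d}{2}} = \frac{\Gamma\!\left(\nu_1-\frac{d}{2}\right)}{\Gamma(\nu_1)} \left(m_1^2\right)^{\frac{d}{2}-\nu_1} \comma
\end{align*}
which agrees with the well-known momentum space result for the massive tadpole.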
Instead of the external momenta $p$ and the masses $m$, the parametric representations suggest another choice for the variables of the Feynman integral. Namely, it is much more convenient to use the coefficients of the Symanzik polynomials as the variables of Feynman integrals. Hence, we will write\glsadd{Gg}
\begin{align}
\mathcal{G} = \sum_{a\in A} z_a x^a = \sum_{j=1}^N z_j \prod_{i=1}^n x_i^{a^{(j)}_i} \in\mathbb C[x_1,\ldots,x_n]\label{eq:Gsupport}
\end{align}
where $A=\left\{a^{(1)},\ldots,a^{(N)}\right\}\subset\mathbb Z_{\geq 0}^n$ is the set of exponents and $z\in\mathbb C^N$ are the new variables of the Feynman integral. To avoid redundancy we will always assume that $z_j\not\equiv 0$ and that all elements of $A$ are pairwise distinct. Therefore, we will change our notation of the Feynman integral to $\mathcal I_\Gamma(d,\nu,z)$.
In \cref{eq:Gsupport} we have in fact introduced a generalization of Feynman integrals through the back door. Equation \cref{eq:Gsupport} also assigns coefficients to the first Symanzik polynomial, and it is implicitly assumed that the coefficients in the second Symanzik polynomial are independent of each other. We will call such an extension of the Feynman integral a \textit{generalized Feynman integral}. By specifying the variables $z$ to the physically relevant cases, we can always recover the original case. However, it should not go unmentioned that such a limit from generalized Feynman integrals to physical Feynman integrals is not always unproblematic. We will discuss this problem in more detail in the following sections. In particular, it is not guaranteed that specific representations of generalized Feynman integrals, e.g.\ in terms of power series or integrals, also converge for the physically relevant specification of the variables $z$, although the analytic continuation of those power series or integral representations will still lead to finite values.\bigskip
As before, we will write $\mathcal{A} = \left\{\left(1,a^{(1)}\right),\ldots,\left(1,a^{(N)}\right)\right\}$ for the homogenized point configuration of $A$ (see \cref{sec:affineSpace} and \cref{ssec:vectorConfigurations}). In addition, we will also transfer the parameters $d$ and $\nu$ to the vector space $\mathbb C^{n+1}$. Therefore, we will write
\begin{align}
\gls{nuu} := (\nu_0, \nu)\in \mathbb C^{n+1} \qquad\text{with}\quad \nu_0 := \frac{d}{2} \point \label{eq:nuuDefinition}
\end{align}
As we will see later, the spacetime dimension $d$ and the indices $\nu$ play a similar role in parametric Feynman integrals. Therefore, it is meaningful from a mathematical perspective to combine those parameters into a single parameter ${\underline{\nu}}$. Their similar mathematical role is also the reason why, instead of using dimensional regularization, one can regularize the integral via the indices, as done in analytic regularization \cite{SpeerGeneralizedFeynmanAmplitudes1969}. We will develop this consideration in more detail in the following two \cref{sec:DimAnaReg,sec:FeynmanIntegralsAsAHyp}.
Hence, the (generalized) Feynman integral finally becomes $\gls{gFI}$, where the part of the graph structure which is essential for the evaluation of the Feynman integral is contained in the vector configuration $\mathcal{A}\in\mathbb Z^{(n+1)\times N}$. The ${\underline{\nu}}\in\mathbb C^{n+1}$ are the parameters and $z\in\mathbb R^N$ are the variables, encoding the dependence on external momenta and masses according to \cref{eq:Gsupport}. \bigskip
It is a genuine aspect of hypergeometric functions to be also representable by Mellin-Barnes integrals. By use of a multivariate version of Mellin's inversion theorem, we will derive such a representation of the Feynman integral.
\begin{theorem}[Mellin-Barnes representation] \label{thm:MellinBarnesRepresentation}
Let $\sigma\subset \{1,\ldots,N\}$ be an index subset with cardinality $n+1$, such that the matrix $\mathcal{A}$ restricted to columns of $\sigma$ is invertible, $\det(\mathcal{A}_\sigma) \neq 0$. Then the Feynman integral can be written as a multi-dimensional Mellin-Barnes integral
\begin{align}
\mathcal I_\mathcal{A} ({\underline{\nu}}, z) = \frac{1}{\Gamma(\nu_0-\omega)\Gamma(\nu)} \frac{ z_\sigma^{-\mathcal{A}_\sigma^{-1}{\underline{\nu}}}}{\left|\det(\mathcal{A}_\sigma)\right|} \int_\gamma \frac{\mathop{}\!\mathrm{d} t}{(2\pi i)^r} \Gamma( t) \Gamma\!\left(\mathcal{A}_\sigma^{-1}{\underline{\nu}}-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} t\right) z_{\bar\sigma}^{- t} z_\sigma^{\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} t} \label{eq:MellinBarnesRepresentation}
\end{align}
wherever this integral converges. The set $\bar\sigma:=\{1,\ldots,N\} \setminus \sigma$ denotes the complement of $\sigma$, containing $r:=N-n-1$ elements. Restrictions of vectors and matrices to those index sets are defined similarly, $ z_\sigma := (z_i)_{i\in\sigma}$, $ z_{\bar\sigma} := (z_i)_{i\in\bar\sigma}$, $\mathcal{A}_{\bar{\sigma}} := (a_i)_{i\in\bar\sigma}$. Every component of the integration contour $\gamma\in\mathbb C^r$ goes from $-i\infty$ to $i\infty$ such that the poles of the integrand are separated.
\end{theorem}
\begin{cor} \label{cor:MellinBarnesRepresentation}
Let $N=n+1$, or in other words let $\mathcal{A}$ be a square matrix, or equivalently let $\operatorname{Newt}(\mathcal{G})=\operatorname{Conv}(A)$ form a simplex. If the Feynman integral $\mathcal I_\mathcal{A}({\underline{\nu}}, z)$ from \cref{eq:LeePomeranskyRepresentation} converges absolutely, it can be expressed by a combination of $\Gamma$-functions
\begin{align}
\mathcal I_\mathcal{A} ({\underline{\nu}}, z) = \frac{1}{\Gamma(\nu_0-\omega)\Gamma(\nu)} \frac{\Gamma(\mathcal{A}^{-1}{\underline{\nu}})}{ \left|\det (\mathcal{A})\right|} z^{-\mathcal{A}^{-1}{\underline{\nu}}} \point \label{eq:MellinBarnesCorollary}
\end{align}
\end{cor}
\begin{proof}
For simplicity, we will write $\mathcal I_\mathcal{A}({\underline{\nu}},z) = \frac{1}{\Gamma(\nu_0-\omega)\Gamma(\nu)} \gls{gFJ}$. Then by Schwinger's trick (\cref{lem:SchwingersTrick}) we can reformulate \cref{eq:LeePomeranskyRepresentation} as
\begin{align}
\mathcal J_\mathcal{A}({\underline{\nu}}, z) = \int_{\mathbb{R}^{n+1}_+} \mathop{}\!\mathrm{d} x_0 \, x_0^{\nu_0-1} \mathop{}\!\mathrm{d} x \, x^{\nu-1} e^{-x_0 \mathcal{G}} \comma
\end{align}
where $\nu_0=\frac{d}{2}$ was defined in \cref{eq:nuuDefinition}. Writing $\underline{x} = (x_0, x)$, ${\underline{\nu}} = (\nu_0,\nu)$ and using the Cahen-Mellin integral representation of the exponential function one obtains
\begin{align}
\mathcal J_\mathcal{A}({\underline{\nu}}, z) = \int_{\mathbb{R}^{n+1}_+} \mathop{}\!\mathrm{d} \underline{x} \, \underline{x}^{{\underline{\nu}}-1} \int_{\delta+i\mathbb R^{n+1}} \frac{\mathop{}\!\mathrm{d} u}{(2\pi i)^{n+1}} \Gamma(u) z_\sigma^{-u} \underline{x}^{-\mathcal{A}_\sigma u} \int_{\eta+i\mathbb R^r} \frac{\mathop{}\!\mathrm{d} t}{(2\pi i)^r} \Gamma(t) z_{\bar\sigma}^{-t} \underline{x}^{-\mathcal{A}_{\bar{\sigma}} t}
\end{align}
with $u\in\mathbb{C}^{n+1}$, $t\in\mathbb{C}^r$, some arbitrary positive numbers $\delta_i >0$, $\eta_i>0$ and where we split the polynomial $\mathcal{G}$ into a $\sigma$ and a $\bar\sigma$ part. By a substitution $u\mapsto \mathcal{A}_\sigma^{-1} u^\prime$ it is
\begin{align}
\mathcal J_\mathcal{A}({\underline{\nu}},z) = \left|\det\!\left(\mathcal{A}_\sigma^{-1}\right)\right| &\int_{\eta+i\mathbb R^{r}} \frac{\mathop{}\!\mathrm{d} t}{(2\pi i)^r} \Gamma(t) z_{\bar\sigma}^{-t} \nonumber \\
& \: \int_{\mathbb{R}^{n+1}_+}\! \mathop{}\!\mathrm{d} \underline{x} \, \int_{\mathcal{A}_\sigma \delta+i \mathcal{A}_\sigma \mathbb R^{n+1}} \frac{\mathop{}\!\mathrm{d} u^\prime}{(2\pi i)^{n+1}} \Gamma\!\left(\mathcal{A}_\sigma^{-1}u^\prime\right) z_\sigma^{-\mathcal{A}_\sigma^{-1}u^\prime} \underline{x}^{{\underline{\nu}}-u^\prime -\mathcal{A}_{\bar{\sigma}} t- 1} \:\text{.}
\end{align}
Since the matrix $\mathcal{A}_\sigma$ contains only positive values, the integration region remains the same $\mathcal{A}_\sigma \delta + i \mathcal{A}_\sigma \mathbb R^{n+1} \simeq \delta^\prime+i\mathbb R^{n+1}$ with some other positive numbers $\delta^\prime\in\mathbb R^{n+1}_{>0} $, which additionally have to satisfy $\mathcal{A}_\sigma^{-1}\delta^\prime >0$. By Mellin's inversion theorem \cite{AntipovaInversionMultidimensionalMellin2007} only the $t$-integrations remain and one obtains equation \cref{eq:MellinBarnesRepresentation}.
Thereby, the integration contour has to be chosen such that the poles are separated from each other, in order to satisfy $\mathcal{A}_\sigma^{-1}\delta^\prime>0$. More specifically, this means that the contour $\gamma$ has the form $c+i\mathbb R^{r}$, where $c\in\mathbb R^{r}_{>0}$ satisfies $\mathcal{A}_\sigma^{-1}{\underline{\nu}} - \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} c>0$. Clearly, in order for such $c$ to exist, the possible values of the parameters ${\underline{\nu}}$ are restricted.

The proof of the corollary is a special case where one does not have to introduce the integrals over $t$. The existence of the inverse $\mathcal{A}^{-1}$ will be ensured by \cref{thm:FIconvergence}.
\end{proof}
A more general version of this theorem, with an independent proof, can be found in \cite[thm. 5.6]{BerkeschEulerMellinIntegrals2013}. In \cite{SymanzikCalculationsConformalInvariant1972} a similar technique was used to obtain Mellin-Barnes representations of Feynman integrals. It should not go unmentioned that the precise contours of Mellin-Barnes integrals in higher dimensions contain many subtleties. We refer to \cite{ParisAsymptoticsMellinBarnesIntegrals2001} for general aspects in the study of Mellin-Barnes integrals. Furthermore, Mellin-Barnes integrals are well suited to investigate the monodromy of $\mathcal{A}$-hypergeometric functions \cite{BeukersMonodromyAhypergeometricFunctions2013}.
According to \cite{HaiConvergenceProblemCertain1995}, integrals of the form \cref{eq:MellinBarnesRepresentation} are also known as multivariate Fox's $H$-functions, where also convergence criteria of those functions can be found. The connection between Feynman integrals and Fox's $H$-function was studied before \cite{Inayat-HussainNewPropertiesHypergeometric1987, Inayat-HussainNewPropertiesHypergeometric1987a, BuschmanFunctionAssociatedCertain1990}.
We have to remark that the representation \cref{eq:MellinBarnesRepresentation} is not necessarily suited for an efficient computation of Feynman integrals. There are much more involved methods to derive more specific Mellin-Barnes representations for certain types of Feynman integrals \cite{UsyukinaRepresentationThreepointFunction1975, BergereAsymptoticExpansionFeynman1974, SmirnovFeynmanIntegralCalculus2006}. The advantage of \cref{eq:MellinBarnesRepresentation} is rather the simplicity of the representation, especially in the situation of \cref{cor:MellinBarnesRepresentation}, which will be of great importance when constructing series representations later.
Feynman integrals which satisfy the conditions of \cref{cor:MellinBarnesRepresentation} are those of the so-called massless ``banana graphs'', i.e.\ graphs consisting of $L$ loops and having the minimal number $L+1$ of edges. However, we can apply \cref{cor:MellinBarnesRepresentation} also to any Euler-Mellin integral with exactly one monomial more than variables, i.e.\ where $\operatorname{Newt}(\mathcal{G})$ forms a simplex.
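For instance, for the simplest such case, the massless one-loop bubble graph, we assume $\mathcal{G} = z_1 x_1 + z_2 x_2 + z_3 x_1 x_2$ with the physical specification $z_1 = z_2 = 1$ and $z_3 = p^2$. Then $N = n+1 = 3$ with
\begin{align*}
\mathcal{A} = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} \comma \qquad \mathcal{A}^{-1}{\underline{\nu}} = \begin{pmatrix} \nu_0 - \nu_2 \\ \nu_0 - \nu_1 \\ \nu_1 + \nu_2 - \nu_0 \end{pmatrix} \comma \qquad \left|\det (\mathcal{A})\right| = 1 \comma
\end{align*}
and \cref{eq:MellinBarnesCorollary} immediately reproduces the familiar closed form
\begin{align*}
\mathcal I_\mathcal{A} ({\underline{\nu}}, z) = \frac{\Gamma(\nu_0-\nu_1)\,\Gamma(\nu_0-\nu_2)\,\Gamma(\nu_1+\nu_2-\nu_0)}{\Gamma(2\nu_0-\nu_1-\nu_2)\,\Gamma(\nu_1)\,\Gamma(\nu_2)} \left(p^2\right)^{\nu_0-\nu_1-\nu_2} \point
\end{align*}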
\Cref{cor:MellinBarnesRepresentation} can alternatively be shown using Ramanujan's master theorem. For this purpose, one splits $\mathcal{G}^{-\nu_0}$ by means of the multinomial theorem $(m_1 + \ldots + m_N)^s = \sum_{k_1=0}^\infty \cdots \sum_{k_{N-1}=0}^\infty \frac{(-1)^{|k|} (-s)_{|k|}}{k!} m_1^{k_1} \cdots m_{N-1}^{k_{N-1}} m_N^{s-|k|}$ and then applies a multivariate version of Ramanujan's master theorem \cite{GonzalezGeneralizedRamanujanMaster2011}.\bigskip
Let us illustrate the application of \cref{thm:MellinBarnesRepresentation} with a small example.
\begin{example} \label{ex:1loopbubbleA}
Consider the self-energy $1$-loop $2$-point function with one mass (see \cref{fig:bubble1}) having the Symanzik polynomials $\mathcal{U}=x_1 + x_2$ and $\mathcal{F}=(m_1^2+p^2) x_1 x_2 + m_1^2 x_1^2$ in Euclidean kinematics. Thus, the matrix $\mathcal{A}$ and the vector $z$ are given by
\begin{align}
\mathcal{A} = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 0 & 1 & 2 \\
0 & 1 & 1 & 0
\end{pmatrix} \qquad z = (1,1,m_1^2+p^2,m_1^2) \point
\end{align}
\begin{figure}[bt]
\begin{center}
\includegraphics[width=.38\textwidth, trim = 0 1.5cm 0 0.5cm,clip]{Graphics/bubble1}
\end{center}
\caption[The $1$-loop self-energy Feynman graph with one mass]{The $1$-loop self-energy Feynman graph with one mass (self-energy/bubble graph).} \label{fig:bubble1}
\end{figure}
Choosing an index set $\sigma = \{1,2,3\}$, the corresponding Feynman integral in the Mellin-Barnes representation of \cref{thm:MellinBarnesRepresentation} is given by
\begin{align}
\mathcal J_\mathcal{A} ({\underline{\nu}},z) &= z_1^{-\nu_0+\nu_2} z_2^{-\nu_0+\nu_1} z_3^{\nu_0-\nu_1-\nu_2} \int_{\delta-i\infty}^{\delta+i\infty} \frac{\mathop{}\!\mathrm{d} t}{2\pi i} \Gamma(t) \Gamma(\nu_0-\nu_1+t)\nonumber\\
&\qquad \qquad \Gamma(\nu_0-\nu_2-t) \Gamma(-\nu_0+\nu_1+\nu_2-t) \left(\frac{z_2 z_4}{z_1 z_3}\right)^{-t} \comma
\end{align}
where we write again $\mathcal J_\mathcal{A}({\underline{\nu}},z) = \Gamma(\nu_0-\omega)\Gamma(\nu) \mathcal I_\mathcal{A}({\underline{\nu}},z)$ to omit the prefactor. For the correct contour prescription the poles have to be separated, such that there exist values $\delta$ satisfying $\max \{0,-\nu_0+\nu_1\} < \delta < \min \{\nu_0-\nu_2,-\nu_0+\nu_1+\nu_2\}$. Therefore, we can extract $4$ conditions on the values of ${\underline{\nu}}$. As we see in this example, these conditions are equivalent to demanding $\Re(\nu)\in\operatorname{relint}\!\left(\Re(\nu_0)\operatorname{Newt}(\mathcal{G})\right)$, where $\Re(\nu_0)\operatorname{Newt}(\mathcal{G})$ is the Newton polytope of $\mathcal{G}$ dilated by $\Re(\nu_0)$ (see \cref{sec:ConvexPolytopes}). Hence, $\operatorname{Newt}(\mathcal{G})$ has to be full dimensional in order to allow values for $\delta$. As we will see in \cref{sec:DimAnaReg} the full dimensionality of $\operatorname{Newt}(\mathcal{G})$ relates directly to the convergence of $\mathcal I_\mathcal{A}({\underline{\nu}},z)$. In this case, by Cauchy's theorem the integral evaluates simply to a Gaussian hypergeometric function
\begin{align}
\mathcal J_\mathcal{A} ({\underline{\nu}},z) &= \frac{\Gamma(2\nu_0-\nu_1-\nu_2)\Gamma(\nu_2)\Gamma(\nu_0-\nu_2)\Gamma(-\nu_0+\nu_1+\nu_2)}{\Gamma(\nu_0)} \nonumber\\
& z_1^{-\nu_0+\nu_2} z_2^{-\nu_0+\nu_1} z_3^{\nu_0-\nu_1-\nu_2} \HypF{\nu_0-\nu_2,-\nu_0+\nu_1+\nu_2}{\nu_0}{1-\frac{z_2 z_4}{z_1z_3}} \point
\end{align}
After restoring the original prefactors and coefficients $z_1=z_2=1$, $z_3=p^2+m_1^2$, $z_4=m_1^2$ and $\nu_0=\frac{d}{2}$ it agrees with the expected result
\begin{align}
\mathcal I_\Gamma (\nu_1,\nu_2,d,m_1^2,p^2) &= \frac{\Gamma\left(\frac{d}{2}-\nu_2\right)\Gamma\left(-\frac{d}{2}+\nu_1+\nu_2\right)}{\Gamma(\nu_1)\Gamma\left(\frac{d}{2}\right)} \nonumber\\
& (p^2+m_1^2)^{\frac{d}{2}-\nu_1-\nu_2} \HypF{\frac{d}{2}-\nu_2,-\frac{d}{2}+\nu_1+\nu_2}{\frac{d}{2}}{1-\frac{m_1^2}{p^2+m_1^2}} \point
\end{align}
\end{example}
\bigskip
As was mentioned in the beginning of this chapter, scalar Feynman integrals arise only in the simplest possible QFTs and in general Feynman integrals will carry an additional Lorentz-tensorial structure. More precisely, in a more general QFT we also have to expect terms in the numerator of \cref{eq:FeynmanMomSp}, i.e.\ Feynman integrals appear in the form
\begin{align}
\mathcal I_\Gamma (s,d,\nu,p,m) = \int_{\mathbb R^{d\times L}} \left(\prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d k_j}{\pi^{d/2}} \right) \frac{q_1^{s_1} \cdots q_n^{s_n}}{D_1^{\nu_1} \cdots D_n^{\nu_n}} \label{eq:FeynmanMomSpTensor}
\end{align}
where $s_i\in\mathbb N_0^d$, $q_i$ is the momentum of the edge $e_i$, $D_i=q_i^2+m_i^2$ are the inverse propagators and we will also use a multi-index notation $q_i^{s_i} = \prod_{\mu=1}^d \!\left(\!\left(q_i\right)^\mu\right)^{(s_i)^\mu}$ for Lorentzian vectors. We deviate here from the common representation of Lorentz tensors by small Greek letters as indices, so that the symbols do not become overloaded with indices. Thus, for a fixed value of $s\in\mathbb N_0^{d\times n}$, equation \cref{eq:FeynmanMomSpTensor} represents one element of a Lorentz tensor. Moreover, equation \cref{eq:FeynmanMomSpTensor} often shows up restricted to the loop momenta $k_1,\ldots,k_L$ in the numerator. Note that \cref{eq:FeynmanMomSpTensor} also includes this case. By a choice of a chord set $T^*$, we can assign the momenta $q_i=k_i$ for edges $e_i\in T^*$. The remaining momenta can be excluded by setting $s_i=0$ whenever $e_i\notin T^*$.
However, one can always reduce those integrals to a linear combination of scalar Feynman integrals. For $1$-loop integrals such a reduction has been known for a long time \cite{PassarinoOneloopCorrectionsAnnihilation1979}: one writes the tensorial Feynman integral as a linear combination of the physically relevant tensors and finds the coefficients by considering certain special cases. Another idea to reduce tensorial Feynman integrals by certain differential operators was developed in \cite{DavydychevSimpleFormulaReducing1991} for $1$-loop integrals and extended to higher loops in \cite{TarasovConnectionFeynmanIntegrals1996, TarasovGeneralizedRecurrenceRelations1997}. Thus, we can consider \cref{eq:FeynmanMomSpTensor} as a derivative
\begin{align}
& \mathcal I_\Gamma (s,d,\nu,p,m) = \left. \prod_{i=1}^n \left(\pd{}{c_i}\right)^{s_i} \int_{\mathbb R^{d\times L}} \left(\prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d k_j}{\pi^{d/2}} \right) \frac{e^{c_1 q_1 + \ldots + c_n q_n}}{D_1^{\nu_1} \cdots D_n^{\nu_n}}\right|_{c=0} \label{eq:FeynmanTensorDerivative}
\end{align}
Repeating the rewriting from momentum space to parametric representation (\cref{thm:SchwingerRepresentation}) with the additional exponential function, one can see that all those tensorial Feynman integrals can be expressed by means of scalar Feynman integrals with shifted parameters.
\begin{theorem}[Tensor reduction, similar to \cite{TarasovConnectionFeynmanIntegrals1996, TarasovGeneralizedRecurrenceRelations1997}] \label{thm:TensorReduction}
Let $\mathcal I_\Gamma(s,d,\nu,p,m)$ be the tensorial Feynman integral defined in \cref{eq:FeynmanMomSpTensor} for a fixed value of $s\in\mathbb N_0^{d\times n}$. Then there exist polynomials $\alpha_1,\ldots,\alpha_t\in\mathbb Q[p,m]$ and vectors $\rho_1,\ldots,\rho_t\in\mathbb Z^n_{\geq0}$ such that
\begin{align}
\mathcal I_\Gamma(s,d,\nu,p,m) = (-1)^{|s|} \sum_{j=1}^t \alpha_j(p,m)\ (\nu)_{\rho_j}\ \mathcal I_\Gamma (d+ 2 |s|, \nu+\rho_j,p,m) \label{eq:tensorReduction}
\end{align}
i.e.\ the tensorial Feynman integral is a linear combination of scalar Feynman integrals with shifted parameters. Here $(\nu)_{\rho_j} := \frac{\Gamma(\nu+\rho_j)}{\Gamma(\nu)}$ denotes the Pochhammer symbol and $|s|:= \sum_{i=1}^n\sum_{\mu=1}^d (s_i)^\mu \in \mathbb N$ is the number of derivatives in \cref{eq:FeynmanTensorDerivative}.
\end{theorem}
\begin{proof}
As a first step, we will show that
\begin{align}
\overline {\mathcal I}_\Gamma (d,\nu,p,m,c) := \int_{\mathbb R^{d\times L}} \left(\prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d k_j}{\pi^{d/2}} \right) \frac{e^{c^\top\! q}}{D_1^{\nu_1} \cdots D_n^{\nu_n}} = \frac{1}{\Gamma(\nu)} \int_{\mathbb R_+^n} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \mathcal{U}^{-\frac{d}{2}} e^{-\frac{\overline \mathcal{F}}{\mathcal{U}}} \label{eq:tensorReductionStep1}
\end{align}
i.e.\ that we can reduce $\overline{\mathcal I}_\Gamma (d,\nu, p, m, c)$ to a scalar Feynman integral with slightly different external momenta and masses, where $\overline \mathcal{F}(p,m,c) = \mathcal{F}(\overline p,\overline m)$ with $\overline p = p + \frac{1}{2} I X^{-1} c$ and $\overline m_i^2 = m_i^2 - \frac{c_i^2}{4x_i^2}$. This is analogous to the derivation of \cref{thm:SchwingerRepresentation}. Applying Schwinger's trick (\cref{lem:SchwingersTrick}) to the left hand side of \cref{eq:tensorReductionStep1} we obtain
\begin{align}
\overline {\mathcal I}_\Gamma (d,\nu,p,m,c) = \frac{1}{\Gamma(\nu)} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\,x^{\nu-1} \int_{\mathbb R^{d\times L}} \left(\prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d k_j}{\pi^{d/2}} \right) e^{-\overline\Lambda} \comma
\end{align}
where $\overline\Lambda = \Lambda + c^\top\! q = k^\top\! M k + 2 \overline Q^\top\! k + \overline J$ with $\overline Q:= Q + \frac{1}{2} \left(C_{T^*}\right) c$ and $\overline J := J + c^\top\! \hat q$. Further, we used the relation \cref{eq:qjhatqj} and $\Lambda, Q, M$ and $J$ were defined in \cref{thm:SchwingerRepresentation}. Applying \cref{lem:GaussianIntegrals}, we will get
\begin{align}
\overline{\mathcal I}_\Gamma (d,\nu, p, m, c) = \frac{1}{\Gamma(\nu)} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \mathcal{U}^{-\frac{d}{2}} e^{-\frac{\overline \mathcal{F}}{\mathcal{U}}}
\end{align}
where $\overline \mathcal{F}(p,m,c) := \det (M) \left(-\overline Q^\top M^{-1} \overline Q + \overline J\right)$. Hence, it remains to show $\overline \mathcal{F}(p,m,c) = \mathcal{F}(\overline p,\overline m)$, which means in particular to show $\overline Q (p) = Q(\overline p)$ and $\overline J(p,m) = J (\overline p,\overline m)$. Due to \cref{lem:IncidenceLoopOrthogonality} it is $I q = I \widehat q = p$. Hence, we will get $\widehat{\overline q } = \widehat q + \frac{1}{2} X^{-1} c$ from $I\left(\widehat{\overline q} - \widehat q\right) = \overline p - p$. Inserting $\widehat{\overline q}$ and $\overline m$ in the definitions of $Q$ and $J$ concludes the statement of \cref{eq:tensorReductionStep1}.
Therefore, the tensorial Feynman integral can be expressed as a derivative with respect to $c$ of $\overline{\mathcal I}_\Gamma (d,\nu, p, m, c)$
\begin{align}
\mathcal I_\Gamma(s,d,\nu,p,m) = \frac{1}{\Gamma(\nu)} \int_{\mathbb R_+^n} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \mathcal{U}^{-\frac{d}{2}} \prod_{i=1}^n \left.\left(\pd{}{c_i}\right)^{s_i} e^{-\frac{\overline\mathcal{F}}{\mathcal{U}}}\right|_{c=0} \point \label{eq:tensorReductionStep2}
\end{align}
Note that $\overline \mathcal{F}(p,m,c)$ is a polynomial in $x$ and $c$, because $\det(M) M^{-1} = \operatorname{Adj}(M)$, $\overline Q$ and $\overline J$ contain only polynomials in $x$ and $c$. Hence, derivatives of $\overline\mathcal{F}(p,m,c)$ with respect to $c$ are polynomials in $x$ and $c$ as well. Therefore, we will write
\begin{align}
h(x) := \left.\prod_{i=1}^n \left(\pd{}{c_i}\right)^{s_i} \overline\mathcal{F}(p,m,c)\right|_{c=0} = \sum_{j=1}^t \alpha_j(p,m)\ x^{\rho_j} \point \label{eq:tensorReductionStep3}
\end{align}
The insertion of \cref{eq:tensorReductionStep3} in \cref{eq:tensorReductionStep2}, together with $\overline\mathcal{F}(p,m,c)|_{c=0} = \mathcal{F}(p,m)$ will show the assertion. The polynomial $h(x)$ from \cref{eq:tensorReductionStep3} will specify the polynomials $\alpha_1,\ldots,\alpha_t$ and the vectors $\rho_1,\ldots,\rho_t$ in the linear combination \cref{eq:tensorReduction}.
\end{proof}
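As a minimal illustration of \cref{thm:TensorReduction}, consider the assumed one-loop bubble graph with the routing $q_1 = k$, $q_2 = k+p$ and a single power of $k^\mu$ in the numerator, i.e.\ $|s|=1$. Here the derivative \cref{eq:tensorReductionStep3} produces a single monomial $h(x) \propto x_2\, p^\mu$, i.e.\ $\rho_1 = (0,1)$ and $\alpha_1 = \pm p^\mu$ (the overall sign depends on the chosen sign conventions for the momentum routing), and \cref{eq:tensorReduction} reduces to the single term
\begin{align*}
\int_{\mathbb R^{d}} \frac{\mathop{}\!\mathrm{d}^d k}{\pi^{d/2}}\, \frac{k^\mu}{\left(k^2+m_1^2\right)^{\nu_1} \left((k+p)^2+m_2^2\right)^{\nu_2}} = -\, p^\mu\, \nu_2\, \mathcal I_\Gamma\!\left(d+2, \nu+\rho_1, p, m\right) \comma
\end{align*}
which can also be verified directly by completing the square in the Schwinger parametrization: the shift of the loop momentum replaces $k^\mu$ by $-\frac{x_2}{x_1+x_2}\, p^\mu$, and the extra factor $\frac{x_2}{\mathcal{U}}$ precisely shifts $\nu_2\to\nu_2+1$ and $d\to d+2$ together with the Pochhammer factor $(\nu)_{\rho_1}=\nu_2$.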
An advanced version of such a tensor reduction was proposed in \cite{KreimerQuantizationGaugeFields2013, KreimerPropertiesCorollaPolynomial2012}, where a specific differential operator turns scalar Feynman integrals into the corresponding Feynman integrals of a gauge theory. This differential operator can be constructed from a graph polynomial known as the \textit{corolla polynomial}.\bigskip
Therefore, when focusing on scalar Feynman integrals (with general dimension), one can still describe the full range of Feynman integrals. Hence, in the following discussion we will restrict ourselves to scalar Feynman integrals only.
\section{Dimensional and analytic regularization}
\label{sec:DimAnaReg}
In the definition of Feynman integrals in momentum space, as well as in their reformulation in parameter space, we have omitted a discussion of the convergence of those integrals. Sufficient criteria for the absolute convergence of the Feynman integral in momentum space \cref{eq:FeynmanMomSp} can be derived by power counting. Hence, the ($1$PI) Feynman integral in momentum space with Euclidean kinematics \cref{eq:FeynmanMomSp} converges absolutely if the superficial degree of divergence $\omega(\gamma) = \sum_{e_i\in\gamma} \Re(\nu_i) - L_\gamma \Re\!\left(\frac{d}{2}\right) >0$ is positive for every subgraph $\gamma\subseteq\Gamma$ \cite{WeinbergHighEnergyBehaviorQuantum1960}, where $L_\gamma$ denotes the loop number of $\gamma$ and where we have to assume massive edges to exclude IR-divergences. An extension of this result which also holds for massless edges was given in \cite{LowensteinPowerCountingTheorem1975}. The convergence region for the Feynman parametric representation \cref{eq:FeynmanParSpFeynman} was worked out in \cite{SpeerUltravioletInfraredSingularity1975}. A short summary of those results can be found in \cite{PanzerFeynmanIntegralsHyperlogarithms2015}. Here, we will instead discuss the convergence of Feynman integrals in the Lee-Pomeransky representation \cref{eq:LeePomeranskyRepresentation}, where we can give a necessary and sufficient condition for absolute convergence. These convergence criteria will also rely nicely on polytopes.\bigskip
The following theorem is mostly a direct implication of the work of \cite{NilssonMellinTransformsMultivariate2010, BerkeschEulerMellinIntegrals2013, SchultkaToricGeometryRegularization2018} and proofs can be found there.
\begin{theorem}[following from {\cite[thm. 2.2]{BerkeschEulerMellinIntegrals2013}}, the second statement was proven in {\cite[thm. 3.1]{SchultkaToricGeometryRegularization2018}}] \label{thm:FIconvergence}
Consider the Feynman integral in the Lee-Pomeransky representation \cref{eq:LeePomeranskyRepresentation} with the conventions from \cref{eq:Gsupport} and \cref{eq:nuuDefinition} in the Euclidean region $\Re (z_j)>0$ and with positive dimension $\Re (\nu_0) >0$. Then the Feynman integral converges absolutely if the real parts of $\nu$, scaled componentwise by the real part of $\nu_0=\frac{d}{2}$, lie inside the relative interior of the Newton polytope of $\mathcal{G}$
\begin{align}
\Re(\nu)/\Re(\nu_0) \in \operatorname{relint} (\operatorname{Newt}(\mathcal{G})) \point \label{eq:ConvergenceCriteriaFeynmanNewton}
\end{align}
Furthermore, if the Newton polytope $\operatorname{Newt}(\mathcal{G})$ is not full dimensional, the Feynman integral does not converge absolutely for any choice of $\nu_0\in\mathbb C$ and $\nu\in\mathbb C^n$.
\end{theorem}
Hence, parametric Feynman integrals are well-defined, with the exception of the case where the Newton polytope $\operatorname{Newt}(\mathcal{G})$ is not full dimensional. A degenerate Newton polytope $\operatorname{Newt}(\mathcal{G})$ means that $\mathcal{G}$ is quasi-homogeneous, or equivalently that the homogenized point configuration $\mathcal{A}$ does not have full rank. For Feynman integrals this case occurs if $\Gamma$ is a scaleless graph or contains a scaleless subgraph, i.e.\ a massless subgraph without legs which is connected to the remaining part of $\Gamma$ only by a cut vertex. As mentioned before, for those Feynman graphs the momentum space representation also diverges for any choice of $d$ and $\nu$. As those graphs are excluded by the renormalization procedure \cite{StermanIntroductionQuantumField1993}, they are not relevant in a physical treatment. Therefore, the parametric space representation \cref{eq:LeePomeranskyRepresentation} and the momentum space representation \cref{eq:FeynmanMomSp} are regularizable by $d$ or $\nu$ for the same graphs, albeit the convergence regions may differ.\bigskip
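As a minimal check of \cref{thm:FIconvergence} consider again the assumed one-loop tadpole with a single edge. For a massive edge one has $\mathcal{G} = x_1 + m_1^2 x_1^2$, such that $\operatorname{Newt}(\mathcal{G}) = [1,2]\subset\mathbb R$ is full dimensional and \cref{eq:ConvergenceCriteriaFeynmanNewton} demands $1 < \Re(\nu_1)/\Re(\nu_0) < 2$. This agrees with the elementary analysis
\begin{align*}
\int_0^\infty \mathop{}\!\mathrm{d} x_1\, x_1^{\nu_1-1} \left(x_1 + m_1^2 x_1^2\right)^{-\nu_0} < \infty \quad\Leftrightarrow\quad \Re(\nu_0) < \Re(\nu_1) < 2\,\Re(\nu_0) \comma
\end{align*}
since the integrand behaves as $x_1^{\nu_1-\nu_0-1}$ at the origin and as $x_1^{\nu_1-2\nu_0-1}$ at infinity. For a massless edge one has $\mathcal{G} = x_1$, the Newton polytope degenerates to the point $\{1\}$, and indeed $\int_0^\infty \mathop{}\!\mathrm{d} x_1\, x_1^{\nu_1-\nu_0-1}$ diverges for every choice of $\nu_0$ and $\nu_1$, in accordance with the scaleless case discussed above.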
Note that we can also reformulate \cref{eq:ConvergenceCriteriaFeynmanNewton} by the polytope representation \cref{eq:HPolytope} as $b_j \Re (\nu_0) - m_j^\top \cdot \Re (\nu) > 0$ for all $j=1,\ldots,k$, where $m_j^\top$ denotes the $j$-th row of $M$. However, these regions of convergence cover only a small part of the domain to which we can analytically continue the Feynman integral representations, even though the Feynman integral is not an entire function. We can distinguish two different kinds of singularities which will appear in the analytic continuation: singularities in the parameters ${\underline{\nu}}$ and singularities in the variables $z$. For the parameters ${\underline{\nu}}$ these singularities are known as UV- and IR-divergences, and the Feynman integral has only poles in these parameters ${\underline{\nu}}$. The singularities with respect to the variables $z$ are known as Landau singularities and will only appear if we leave the Euclidean region $\Re(z_j)>0$. Their nature is much more intricate than that of the parametric singularities, and we will discuss them extensively in \cref{ch:singularities}. For now, we will avoid them by restricting ourselves to the Euclidean region.
The possible poles in the parameters ${\underline{\nu}}$ can simply be described by the facets of the Newton polytope $\operatorname{Newt}(\mathcal{U}+\mathcal{F})$ by a clever use of integration by parts based on the representation \cref{eq:LeePomeranskyRepresentation}. The following theorem is an application of \cite[thm. 2.4, rem. 2.6]{BerkeschEulerMellinIntegrals2013}.
\begin{theorem}[Meromorphic continuation in parameters ${\underline{\nu}}$ \cite{KlausenHypergeometricSeriesRepresentations2019, BerkeschEulerMellinIntegrals2013}] \label{thm:meromorphicContinuation}
Describe the non-degenerate Newton polytope $\operatorname{Newt}(\mathcal{G})$ as an intersection of a minimal number of half-spaces according to equation \cref{eq:HPolytope} and assume that the components of $m_j^\top\in\mathbb Z^n$ and $b_j\in\mathbb Z$ are relatively prime. Then any Feynman integral $\mathcal I_\Gamma({\underline{\nu}},z)$ in the Euclidean region $\Re(z_j)>0$ can be written as
\begin{align}
\mathcal I_\mathcal{A}({\underline{\nu}},z) = \gls{PhiMero} \frac{\prod_{j=1}^k \Gamma( b_j \nu_0 - m_j^\top\! \cdot \nu)}{\Gamma(\nu_0-\omega)\Gamma(\nu)} \label{eq:meromorphicContinuation}
\end{align}
where $\Phi_\mathcal{A}({\underline{\nu}},z)$ is an entire function with respect to ${\underline{\nu}}\in\mathbb C^{n+1}$. As before we use $\nu_0 = \frac{d}{2}$ and ${\underline{\nu}} = (\nu_0,\nu)$.
\end{theorem}
Hence, we can continue the Feynman integral meromorphically with respect to its parameters $d,\nu$, and we can easily give a necessary condition for its poles. Similar results were also found in \cite{SpeerUltravioletInfraredSingularity1975} based on the Feynman representation \cref{eq:FeynmanParSpFeynman}. Note that we can apply the results of \cref{thm:FIconvergence,thm:meromorphicContinuation} also to the Feynman representation \cref{eq:FeynmanParSpFeynman}, e.g.\ by choosing the hyperplane $H(x) = x_n$.
By means of \cref{thm:meromorphicContinuation} we see that we can avoid the poles of the Feynman integral by considering non-rational values of ${\underline{\nu}}$. More specifically, it is sufficient to consider either $d$ to be non-rational or $\nu$ to be non-rational. These two options are also known as \textit{dimensional regularization} \cite{EtingofNoteDimensionalRegularization1999, BolliniDimensionalRenorinalizationNumber1972, THooftRegularizationRenormalizationGauge1972} and \textit{analytic regularization} \cite{SpeerAnalyticRenormalization1968, SpeerGeneralizedFeynmanAmplitudes1969}.
\begin{example}
Consider \cref{ex:1loopbubbleA} from above, which corresponds to \cref{fig:bubble1}. For the relative interior of the Newton polytope one obtains from the facet representation the region of convergence (with $\Re(\nu_0)>0$)
\begin{align*}
& \qquad \Re(\nu_0-\nu_2) > 0 & \Re(-\nu_0+\nu_1+\nu_2)>0 \\
& \qquad \Re(\nu_2) > 0 & \Re(2\nu_0-\nu_1-\nu_2) > 0
\end{align*}
which enables us to separate the poles of the Feynman integral in the $\Gamma$ functions
\begin{align*}
\mathcal I_\mathcal{A} ({\underline{\nu}},z) = \Phi_\mathcal{A} ({\underline{\nu}},z) \frac{\Gamma(-\nu_0+\nu_1+\nu_2) \Gamma(\nu_0-\nu_2)}{\Gamma(\nu_1)}
\end{align*}
with an entire function $\Phi_\mathcal{A}({\underline{\nu}},z)$. \Cref{fig:exampleMeromorphicContinuation} shows the original convergence region, as well as the meromorphic continuation. In the case $\nu_1=\nu_2=1$ and $\nu_0=\frac{d}{2}=2-\epsilon$ we will obtain
\begin{align}
\mathcal I_\mathcal{A} ({\underline{\nu}},z) = \frac{\Phi^{(0)}_\mathcal{A}({\underline{\nu}},z)}{\epsilon} + \Phi^{(1)}_\mathcal{A}({\underline{\nu}},z) + \left( \Phi^{(2)}_\mathcal{A}({\underline{\nu}},z) + \zeta(2) \Phi^{(0)}_\mathcal{A}({\underline{\nu}},z) \right) \epsilon + \mathcal O(\epsilon^2)
\end{align}
with $\Phi^{(i)}_\mathcal{A}({\underline{\nu}},z) = \frac{1}{i!} \left.\frac{\partial^i \Phi_\mathcal{A}({\underline{\nu}},z)}{\partial\epsilon^i}\right|_{\epsilon=0}$ denoting the Taylor coefficients of $\Phi_\mathcal{A}({\underline{\nu}},z)$ around $\epsilon=0$.
\begin{figure}
\centering
\begin{tikzpicture}
\draw[thick,->] (0,0) -- (4.8,0) node[anchor=north west] {$\frac{\Re(\nu_1)}{\Re(\nu_0)}$};
\draw[thick,->] (0,0) -- (0,3.8) node[anchor=south east] {$\frac{\Re(\nu_2)}{\Re(\nu_0)}$};
\coordinate[circle,inner sep=1pt,fill] (A) at (1,0);
\coordinate[circle,inner sep=1pt,fill] (B) at (2,0);
\coordinate[circle,inner sep=1pt,fill] (C) at (1,1);
\coordinate[circle,inner sep=1pt,fill] (D) at (0,1);
\filldraw [thick, fill=gray, fill opacity = 0.5] (A) coordinate (GeneralStart) -- ++(1,0) -- ++(-1,1) -- ++(-1,0) -- ++(1,-1) -- cycle;
\draw (0.1,-2.1) -- (-2.1,0.1);
\draw (-0.9,-2.1) -- (-2.1,-0.9);
\draw ($(A) + (-2,0)$) -- ++(2.1,-2.1) -- ($(D) + (-2,0)$) -- ++(-0.1,0.1);
\draw ($(A) + (-1,0)$) -- ++(2.1,-2.1) -- ($(D) + (-1,0)$) -- ++(-1.1,1.1);
\draw ($(A) + (0,0)$) -- ++(2.1,-2.1) -- ($(D) + (0,0)$) -- ++(-2.1,2.1);
\draw ($(A) + (1,0)$) -- ++(2.1,-2.1) -- ($(D) + (1,0)$) -- ++(-2.1,2.1);
\draw ($(A) + (2,0)$) -- ++(1.1,-1.1) -- ($(D) + (2,0)$) -- ++(-2.1,2.1);
\draw ($(A) + (3,0)$) -- ++(0.1,-0.1) -- ($(D) + (3,0)$) -- ++(-2.1,2.1);
\draw (1.9,3.1) -- (4.1,0.9);
\draw (2.9,3.1) -- (4.1,1.9);
\draw ($(A) + (0,-2)$) -- ++(-3.1,0) -- ($(B) + (0,-2)$) -- ++(2.1,0);
\draw ($(A) + (0,-1)$) -- ++(-3.1,0) -- ($(B) + (0,-1)$) -- ++(2.1,0);
\draw ($(A) + (0,0)$) -- ++(-3.1,0) -- ($(B) + (0,0)$) -- ++(2.1,0);
\draw ($(A) + (0,1)$) -- ++(-3.1,0) -- ($(B) + (0,1)$) -- ++(2.1,0);
\draw ($(A) + (0,2)$) -- ++(-3.1,0) -- ($(B) + (0,2)$) -- ++(2.1,0);
\draw ($(A) + (0,3)$) -- ++(-3.1,0) -- ($(B) + (0,3)$) -- ++(2.1,0);
\end{tikzpicture}
\caption[Meromorphic continuation of Feynman integrals w.r.t. parameters ${\underline{\nu}}$]{The original convergence region of the Feynman integral is the gray shaded tetragon. By meromorphic continuation one can extend the Feynman integral to the whole plane. The lines characterize the poles in the parameters. We omitted the cancellations of $\Gamma$-functions with the denominator $\Gamma(\nu_0-\omega)\Gamma(\nu)$ in \cref{eq:meromorphicContinuation}. Thus, this figure shows rather the poles of $\mathcal J_\mathcal{A}({\underline{\nu}},z)$.} \label{fig:exampleMeromorphicContinuation}
\end{figure}
\end{example}
Hence, \cref{thm:meromorphicContinuation} not only vindicates dimensional and analytical regularization, it will also allow us in the $\epsilon$-expansion around integer values of $\nu_0=\frac{d}{2}$ to focus on the Taylor expansion of $\Phi_\mathcal{A}$ instead of a Laurent expansion of $\mathcal I_\mathcal{A}$. Thus, one can determine the coefficients in the $\epsilon$-expansion by differentiating, which makes the procedure much easier.\bigskip
The regularization of Feynman integrals is an intermediate step in the renormalization procedure, which makes the divergences visible. However, we would like to remark that there are also renormalization procedures in which this intermediate step does not have to be carried out explicitly. Renormalization is essential for perturbative QFTs to be formulated in a meaningful way. This is because the original Lagrangian density $\mathcal L$ contains certain ambiguities that are resolved with renormalization. As noted in \cite{DelamotteHintRenormalization2004}, these ambiguities, rather than the divergences of Feynman integrals, are the main reason that renormalization is required in pQFTs. Nevertheless, we do not intend to give an introduction to the very extensive field of renormalization here. Instead, we will refer exemplarily to \cite{DelamotteHintRenormalization2004} for an elementary but illustrative introduction, to \cite{CollinsRenormalizationIntroductionRenormalization1984} for a classical overview and to \cite{ConnesRenormalizationQuantumField1999, ConnesRenormalizationQuantumField2001} for a mathematically rigorous treatment.
\section{Feynman integrals as \texorpdfstring{$\mathcal{A}$}{A}-hypergeometric functions} \label{sec:FeynmanIntegralsAsAHyp}
It is one of the first observations in the calculation of simple Feynman amplitudes that Feynman integrals evaluate to classical hypergeometric functions in many cases. This observation led Tullio Regge to the conjecture that Feynman integrals are always hypergeometric functions in a general sense \cite{ReggeAlgebraicTopologyMethods1968}. For such a generalization of hypergeometric functions, he suggested taking the analytic behaviour of Feynman integrals as a starting point. In the language of $D$-modules, he proposed that Feynman integrals are holonomic functions whose singular locus is given by the Landau variety. As Regge also noted, this criterion can be transferred to certain partial differential equations, which can be understood as generalized Picard-Fuchs equations.
This idea was later refined by Kashiwara and Kawai \cite{KashiwaraHolonomicSystemsLinear1976}, who showed that Feynman integrals are indeed holonomic functions, i.e.\ they always satisfy holonomic differential equations (see \cref{sec:holonomicDmodules} for the basic notions of $D$-modules). For the $1$-loop case Regge's idea was partly worked out by Kershaw \cite{KershawFeynmanAmplitudesPower1973} and Wu \cite{WuGeneralizedEulerPochhammerIntegral1974}. However, this development happened at a time when there was no consistent theory of general hypergeometric functions. Such a theory was only started in the late '80s by Gelfand, Kapranov and Zelevinsky (GKZ) and their collaborators (we summarized their approach in \cref{sec:AHypSystems}). As already remarked in \cite{GolubevaReggeGelfandProblem2014}, the GKZ theory is the logical extension of Regge's ideas. Thus, we can give a revisited view on Regge's idea by means of GKZ theory, and within this framework Regge's conjecture can also be confirmed. \bigskip
Besides the general question of a sufficient functional class for Feynman integrals, specific hypergeometric functions also play an important role in many approaches to the calculation of Feynman integrals. Typically, those hypergeometric functions appear in the often used Mellin-Barnes approach \cite{UsyukinaRepresentationThreepointFunction1975, BergereAsymptoticExpansionFeynman1974, SmirnovFeynmanIntegralCalculus2006}. This appearance is a consequence of Mellin-Barnes representations with integrands consisting of products of $\Gamma$-functions, which can be identified with the hypergeometric Fox $H$-functions \cite{Inayat-HussainNewPropertiesHypergeometric1987, Inayat-HussainNewPropertiesHypergeometric1987a, BuschmanFunctionAssociatedCertain1990}. Further, there are different techniques for the representation of $1$-loop integrals in terms of simple hypergeometric functions \cite{BoosMethodCalculatingMassive1991, FleischerNewHypergeometricRepresentation2003, PhanScalar1loopFeynman2019}, which rely on those Mellin-Barnes representations by means of the residue theorem. Beyond the $1$-loop case not many results are known, which is also due to the fact that multivariate Mellin-Barnes integrals can be highly non-trivial \cite{ParisAsymptoticsMellinBarnesIntegrals2001}. We also refer to \cite{KalmykovHypergeometricFunctionsTheir2008} for a review of hypergeometric functions appearing in Feynman integral calculus. \bigskip
As mentioned before, the Gelfand-Kapranov-Zelevinsky approach is a convenient framework to examine the correspondence between hypergeometric functions and Feynman integrals. It was already stated by Gelfand and his collaborators themselves that ``practically all integrals which arise in quantum field theory'' \cite{GelfandHypergeometricFunctionsToral1989} can be treated with this approach. However, this insight seems to have been forgotten for a long time and was not pursued further. In 2014, the connection between Feynman integrals and GKZ-hypergeometric theory was mentioned and discussed again by Golubeva \cite{GolubevaReggeGelfandProblem2014}. In \cite{NasrollahpoursamamiPeriodsFeynmanDiagrams2016} it was proven that Feynman integrals satisfy a system of differential equations which is isomorphic to an $\mathcal{A}$-hypergeometric system. Recently, the fact that scalar Feynman integrals are $\mathcal{A}$-hypergeometric functions was shown independently in 2019 by \cite{DeLaCruzFeynmanIntegralsAhypergeometric2019} and \cite{KlausenHypergeometricSeriesRepresentations2019}, based on the Lee-Pomeransky representation \cref{eq:LeePomeranskyRepresentation}.
\begin{theorem}[Feynman integrals as $\mathcal{A}$-hypergeometric functions] \label{thm:FeynmanAHypergeometric}
Consider a generalized scalar Feynman integral $\mathcal I_\mathcal{A}({\underline{\nu}},z)$ in the representation \cref{eq:LeePomeranskyRepresentation}. ${\underline{\nu}}\in\mathbb C^{n+1}$ was defined in \cref{eq:nuuDefinition} and $\mathcal{A}\in\mathbb Z^{(n+1)\times N}$ is the homogenization of $A$, which was defined by \cref{eq:Gsupport}, i.e.\ we interpret $A$ as a set of column vectors building an $n\times N$ integer matrix and adding the row $(1,\ldots,1)$. Then $\mathcal I_\mathcal{A}({\underline{\nu}},z)$ is $\mathcal{A}$-hypergeometric, i.e.\ it satisfies the $\mathcal{A}$-hypergeometric system \cref{eq:AhypIdeal}
\begin{align}
H_\mathcal{A} ({\underline{\nu}}) \bullet \mathcal I_\mathcal{A}({\underline{\nu}},z)=0 \point
\end{align}
Thus, the generalized Feynman integral is an $\mathcal{A}$-hypergeometric function.
\end{theorem}
\begin{proof}
Instead of $\mathcal I_\mathcal{A}({\underline{\nu}},z)$ we will consider $\mathcal J_\mathcal{A}({\underline{\nu}},z) := \Gamma(\nu_0-\omega)\Gamma(\nu)\, \mathcal I_\mathcal{A}({\underline{\nu}},z)$ to avoid unnecessary prefactors. Firstly, we want to show that Feynman integrals satisfy the toric operators $\square_l$ for all $l\in\mathbb L$ or equivalently $ \left\{\partial^u - \partial^v \,\rvert\, \mathcal{A} u = \mathcal{A} v,\: u,v\in\mathbb N^N\right\}$. Derivatives of the Feynman integral with respect to $z$ result in
\begin{align}
\partial^u \bullet \Gamma(\nu_0) \int_{\mathbb R^{n}_+} \mathop{}\!\mathrm{d} x \, x^{\nu-1}\mathcal{G}^{-\nu_0} = (-1)^{|u|}\, \Gamma\!\left(\nu_0+|u|\right) \int_{\mathbb{R}^n_+} \mathop{}\!\mathrm{d} x \, x^{\nu-1} x^{A u} \mathcal{G}^{-\nu_0- |u|}
\end{align}
where $|u|:= \sum_i u_i$. Since $\mathcal{A}$ contains the row $(1,1,\ldots,1)$, the relation $\mathcal{A} u = \mathcal{A} v$ implies $|u|=|v|$ as well as $A u = A v$. Therefore, one obtains the same right hand side for $v$, which shows $\square_l \bullet \mathcal J_\mathcal{A}({\underline{\nu}},z) = 0$.
Secondly, we want to show that Feynman integrals satisfy the homogeneous operators $E_i({\underline{\nu}})$ for $i=0,\ldots,n$. For this purpose note that $\mathcal J_\mathcal{A}\!\left({\underline{\nu}},s^{a_{b}^{(1)}} \! z_1,\ldots,s^{a_{b}^{(N)}} \! z_N\right) = \Gamma(\nu_0) \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \mathcal{G} (x_1,\ldots,s x_b,\ldots,x_n)^{-\nu_0}$. After a substitution $x_b \mapsto \frac{1}{s} x_b$ for $s>0$ it is
\begin{align}
\mathcal J_\mathcal{A}\!\left({\underline{\nu}},s^{a_{b}^{(1)}} \! z_1,\ldots,s^{a_{b}^{(N)}} \! z_N\right) = s^{-\nu_b} \mathcal J_\mathcal{A} ({\underline{\nu}},z) \point \label{eq:FeynAhypProof}
\end{align}
Differentiating \cref{eq:FeynAhypProof} with respect to $s$ and setting $s=1$ concludes the proof for $i=1,\ldots,n$. The case $i=0$ follows in the same way, since scaling all variables $z_j\mapsto s z_j$ simultaneously rescales $\mathcal{G}\mapsto s\,\mathcal{G}$ and therefore $\mathcal J_\mathcal{A}({\underline{\nu}},z)\mapsto s^{-\nu_0}\, \mathcal J_\mathcal{A}({\underline{\nu}},z)$.
\end{proof}
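For instance, for the one-mass bubble of \cref{ex:1loopbubbleA} the lattice $\mathbb L = \ker_{\mathbb Z}\mathcal{A}$ is generated by the single vector $l = (-1,1,-1,1)$, since the second and the fourth column of $\mathcal{A}$ sum to the same vector as the first and the third column. Hence, writing $\partial_j := \pd{}{z_j}$, the toric part of the $\mathcal{A}$-hypergeometric system consists of the single operator
\begin{align*}
\square_l \bullet \mathcal I_\mathcal{A}({\underline{\nu}},z) = \left(\partial_2 \partial_4 - \partial_1 \partial_3\right) \bullet \mathcal I_\mathcal{A}({\underline{\nu}},z) = 0 \comma
\end{align*}
in accordance with \cref{thm:FeynmanAHypergeometric}.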
Hence, as conjectured by Regge and already suggested by Gelfand and his collaborators, every scalar, generalized Feynman integral in Euclidean kinematics satisfies the $\mathcal{A}$-hypergeometric system and can be treated within the framework of GKZ.
This fact is not quite surprising, as the parametric representations \cref{eq:FeynmanParSpFeynman} and \cref{eq:LeePomeranskyRepresentation} both belong to the class of Euler-Mellin integrals \cite{BerkeschEulerMellinIntegrals2013}, which are defined as Mellin transforms of products of polynomials raised to certain powers. Like every Euler-Mellin integral, the Feynman integral is thus an $\mathcal{A}$-hypergeometric function.
Therefore, we can also write an $\mathcal{A}$-hypergeometric system for the Feynman representation \cref{eq:FeynmanParSpFeynman} in the following way. Without loss of generality we will set $x_n=1$ by evaluating the $\delta$-distribution in \cref{eq:FeynmanParSpFeynman}. Denote by $A_\mathcal{U}$ and $A_\mathcal{F}$ the supports of the first and the second Symanzik polynomial after setting $x_n=1$. In doing so, we can construct the following matrix
\begin{align}
\mathcal{A}^\prime = \begin{pmatrix}
1 & \cdots & 1 & 0 & \cdots & 0\\
0 & \cdots & 0 & 1 & \cdots & 1\\
\\
& A_\mathcal{U} & & & A_\mathcal{F} &\\
\\
\end{pmatrix} \label{eq:AprimeUF}
\end{align}
which defines together with $\beta = \left(\frac{d}{2}-\omega,\omega,\nu_1,\ldots,\nu_{n-1}\right)$ the $\mathcal{A}$-hypergeometric system $H_{\mathcal{A}^\prime}(\beta)$ of \cref{eq:FeynmanParSpFeynman}. A matrix of the form \cref{eq:AprimeUF} is also known as \textit{Cayley embedding} of $A_\mathcal{U}$ and $A_\mathcal{F}$ \cite[def. 9.2.11]{DeLoeraTriangulations2010}. As expected the $\mathcal{A}$-hypergeometric systems for \cref{eq:FeynmanParSpFeynman} and \cref{eq:LeePomeranskyRepresentation} are equivalent, which can be verified by the matrix
\begin{align}
T = \left(\begin{array}{cccc}
L+1 & -1 & \cdots & -1 \\
-L & 1 & \cdots & 1 \\
0 & & & 0 \\
\vdots & \multicolumn{2}{c}{\smash{ \scalebox{1.5}{$\mathbbm{1}$}}_{n-1} } & \vdots \\
0 & & & 0
\end{array}\right)\quad\text{,}\qquad
T^{-1} = \left(\begin{array}{ccccc}
1 & 1 & 0 & \cdots & 0 \\
0 & 0 & & & \\
\vdots & \vdots & \multicolumn{3}{c}{\smash{\scalebox{1.5}{$\mathbbm{1}$}}_{n-1} } \\
0 & 0 & & \\
L & L+1 & -1 & \cdots & -1
\end{array}\right)
\end{align}
which transforms $\mathcal{A}^\prime = T \mathcal{A}$ and $\beta = T {\underline{\nu}}$. Moreover, by Laplace expansion we see that $T$ is a unimodular matrix, whence $\operatorname{Conv}(\mathcal{A})$ and $\operatorname{Conv}(\mathcal{A}^\prime)$ are equivalent polytopes. According to \cite{BerkeschEulerMellinIntegrals2013}, when working with the representation \cref{eq:FeynmanParSpFeynman} we will consider the Newton polytope $\operatorname{Newt}\!\left(\left(\mathcal{U}\cdot\mathcal{F}\right)\!|_{x_n=1}\right) = \operatorname{Newt}\!\left(\mathcal{U}|_{x_n=1}\right) + \operatorname{Newt}\!\left(\mathcal{F}|_{x_n=1}\right)$ instead of $\operatorname{Newt}(\mathcal{G})$, where the sum denotes the Minkowski addition. For the general relation between the Cayley embedding and Minkowski sums we refer to \cite[lem. 3.2]{HuberCayleyTrickLifting2000}. In the following we will mostly prefer the Lee-Pomeransky representation \cref{eq:LeePomeranskyRepresentation} due to its simpler structure. \bigskip
Feynman integrals form only a subclass of $\mathcal{A}$-hypergeometric functions. We want to list certain characteristics that distinguish Feynman integrals within this class of arbitrary $\mathcal{A}$-hypergeometric functions. This characterization does not claim to be exhaustive. When considering $\mathcal{A}$-hypergeometric functions, the behaviour is determined by the vector configuration $\mathcal{A}\subset\mathbb Z^{n+1}$ or equivalently by the Newton polytope $\operatorname{Newt}(\mathcal{G})$. Therefore, we will examine the special properties of these objects in the case where $\mathcal{A}$ comes from a scalar Feynman integral. Obviously, from the definitions of $\mathcal{U}$ and $\mathcal{F}$ \cref{eq:FirstSymanzik}, \cref{eq:SecondSymanzik}, the entries of the matrix $\mathcal{A}$ are restricted to $\mathcal{A}\in \{0;1;2\}^{(n+1)\times N}$. In addition, every column of $\mathcal{A}$ contains at most one entry equal to $2$. In the case of massless Feynman integrals, $\operatorname{Newt}(\mathcal{G})$ will even be a $0/1$-polytope, i.e.\ $\mathcal{A}\in\{0;1\}^{(n+1)\times N}$. Furthermore, due to the homogeneous degrees of $\mathcal{U}$ and $\mathcal{F}$, all points of $\operatorname{Newt}(\mathcal{G})$ are arranged on two parallel hyperplanes of $\mathbb R^{n}$. These hyperplanes are of the form $\left\{\mu\in\mathbb R^n \,\rvert\, \sum_{i=1}^n \mu_i = L \right\}$ and $\left\{\mu\in\mathbb R^n \,\rvert\, \sum_{i=1}^n \mu_i = L+1 \right\}$, respectively. Thus, $\operatorname{Newt}(\mathcal{G})$ is compressed in one direction. As a further consequence, $\operatorname{Newt}(\mathcal{G})$ has no interior points. \bigskip
For fully massive Feynman integrals it was noticed in \cite{TellanderCohenMacaulayPropertyFeynman2021} that every monomial of $\mathcal{F}_0$ is contained in the massive part of the second Symanzik polynomial $\mathcal{U}\sum_i x_i m_i^2$. Hence, we will have
\begin{align}
\operatorname{Newt}(\mathcal{F}) = \operatorname{Newt}\!\left(\mathcal{U} \sum_{i=1}^n x_i m_i^2\right) = \operatorname{Newt}(\mathcal{U}) + \Delta_n
\end{align}
where $\Delta_n = \operatorname{Conv}(e_1,\ldots,e_n)$ is the $(n-1)$-simplex and where the sum denotes the Minkowski addition. Consequently, we obtain
\begin{align}
\operatorname{Newt}(\mathcal{G}) = \operatorname{Newt}\!\left(\mathcal{U} \left(1 + \sum_{i=1}^n x_i m_i^2\right)\right) = \operatorname{Newt}(\mathcal{U}) + \tilde\Delta_n
\end{align}
with the $n$-simplex $\tilde\Delta_n = \operatorname{Conv}(0,e_1,\ldots,e_n)$. Thus, it is remarkable that for fully massive graphs the precise form of the second Symanzik polynomial does not play a role for most properties of the Feynman integral. Only when specializing the variables $z$ to their physical values do we have to take the second Symanzik polynomial into account. \bigskip
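As a concrete illustration of this point, consider the fully massive $1$-loop bubble with $\mathcal{U}=x_1+x_2$, whose Lee-Pomeransky polynomial reappears in \cref{ex:SeriesRepresentationNonUnimod}. There one finds
\begin{align}
\operatorname{Newt}(\mathcal{G}) = \operatorname{Conv}(e_1,e_2) + \operatorname{Conv}(0,e_1,e_2) = \operatorname{Conv}\!\left(e_1,e_2,e_1+e_2,2e_1,2e_2\right) \comma
\end{align}
which collects exactly the exponent vectors of the five monomials $x_1$, $x_2$, $x_1x_2$, $x_1^2$, $x_2^2$ of $\mathcal{G}$, irrespective of the precise (non-vanishing) values of the masses and of the external momentum.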
It was conjectured\footnote{In fact, it was even conjectured that Feynman configurations always admit unimodular triangulations, which is a slightly stronger statement.} in \cite{KlausenHypergeometricSeriesRepresentations2019} that Feynman configurations are always \textit{normal}, i.e.\ that they satisfy
\begin{align}
k \operatorname{Newt}(\mathcal{G}) \cap \mathbb Z^n = (k-1) \operatorname{Newt}(\mathcal{G}) \cap \mathbb Z^n + \operatorname{Newt}(\mathcal{G})\cap\mathbb Z^n
\end{align}
for any $k\in\mathbb N$, which is also known as the \textit{integer decomposition property}. This property is particularly interesting because it implies that the toric ideal $I_\mathcal{A}$ defined by $\operatorname{Newt}(\mathcal{G})=\operatorname{Conv}(\mathcal{A})$ is Cohen-Macaulay.
This conjecture was proven for two specific classes of Feynman graphs in \cite{TellanderCohenMacaulayPropertyFeynman2021} and was extended considerably in \cite{WaltherFeynmanGraphsMatroids2022}.
\begin{theorem}[Cohen-Macaulayness for Feynman integrals {\cite[thm. 3.1 and thm. 3.4]{TellanderCohenMacaulayPropertyFeynman2021},\cite[thm. 4.3, thm. 4.5 and thm. 4.9]{WaltherFeynmanGraphsMatroids2022}}] \label{thm:CohenMacaulayFeynman}
Let $\Gamma$ be a (1PI), (1VI) Feynman graph with sufficiently generic\footnote{``Sufficiently generic'' means here that the external momenta should not lead to a cancellation of monomials in $\mathcal{G}$, which could happen outside the Euclidean region for specific momenta.} external momenta. Then the Feynman configuration is normal in each of the following cases:
\begin{enumerate}[a)]
\item all edges of $\Gamma$ are massive
\item all edges of $\Gamma$ are massless
\item every $2$-forest of $\Gamma$ induces a non-zero term in $\mathcal{G}$.
\end{enumerate}
The last case c) can be rephrased as follows: for every internal vertex of $\Gamma$ there is a path to an external vertex consisting of massive edges only.
\end{theorem}
As seen in \cref{thm:HolRankAHyp}, the Cohen-Macaulayness of the toric ideal $I_\mathcal{A}$ ensures that the holonomic rank is given by $\operatorname{vol}(\operatorname{Newt}(\mathcal{G}))$ for all values of ${\underline{\nu}}$ and not only for generic ones. Therefore, the $\mathcal{A}$-hypergeometric system is well-behaved also at the points ${\underline{\nu}}\in\mathbb Z^{n+1}$ and has no rank jumps there. Thus, even though the Feynman integral may diverge for $d\rightarrow 4$, the structure of the Feynman integral remains the same in this limit.\bigskip
As a further characterization of Feynman configurations, we would like to draw attention to the insight found in \cite{SchultkaToricGeometryRegularization2018}. According to it, the polytope $\operatorname{Newt}(\mathcal{U}\mathcal{F}) = \operatorname{Newt}(\mathcal{U}) + \operatorname{Newt}(\mathcal{F})$ is a generalized permutahedron, which means that all edges of $\operatorname{Newt}(\mathcal{U}\mathcal{F})$ are parallel to an edge of the form $e_i-e_j$ for all $i,j\in \{1,\ldots,n\}$. In addition, Symanzik polynomials satisfy certain useful relations going beyond \cref{eq:UContractedDeleted} as stated in \cite[prop. 4.11]{SchultkaToricGeometryRegularization2018}, \cite{BrownFeynmanAmplitudesCosmic2017}.\bigskip
To conclude this chapter we want to highlight the most important points for the following. For a ($1$PI) and ($1$VI) Feynman graph, the scalar Feynman integral in the Euclidean region $\Re(z_j)>0$ is a meromorphic function in the parameters ${\underline{\nu}}=\left(\frac{d}{2},\nu_1,\ldots,\nu_n\right)\in\mathbb C^{n+1}$. Moreover, the generalized Feynman integral is an $\mathcal{A}$-hyper\-ge\-o\-me\-tric function and the (physical) Feynman integral is a certain restriction of those $\mathcal{A}$-hypergeometric functions. As we will see in \cref{ch:singularities} we can relax the restriction to the Euclidean region after a rigorous treatment of kinematic singularities. Furthermore, in \cref{cor:MellinBarnesRepresentation} we found a class of Feynman-like integrals which provides a simple solution in terms of $\Gamma$-functions. These integrals will be helpful to fix the boundary values for the $\mathcal{A}$-hypergeometric systems.
\chapter{Series representations} \label{ch:seriesRepresentations}
In this chapter we will be concerned with power series representations of Feynman integrals. It will be found\footnote{That Feynman integrals can always be expressed by Horn hypergeometric functions has been assumed for a long time, with good reasons, see e.g.\ \cite{KalmykovFeynmanDiagramsDifferential2009}. However, a rigorous proof has been lacking so far.} that any given generalized Feynman integral $\mathcal I_\mathcal{A}({\underline{\nu}},z)$ can be expressed in terms of Horn hypergeometric functions \cref{eq:DefHornHypergeometric} for every regular triangulation of the Newton polytope $\operatorname{Newt}(\mathcal{G})$. Hence, we will generate a set of different series representations for each Feynman integral. This will be done by considering the $\Gamma$-series solutions of $\mathcal{A}$-hypergeometric systems from \cref{ssec:GammaSeries}. To fix the Feynman integral as a specific element in the solution space $\operatorname{Sol}(H_\mathcal{A}({\underline{\nu}}))$ we will make use of \cref{cor:MellinBarnesRepresentation}, which provides the boundary values for the $\mathcal{A}$-hypergeometric system. In this way, we are able to give a closed formula for series representations of generalized Feynman integrals in \cref{thm:FeynSeries}.\bigskip
After discussing the series representations for generalized Feynman integrals, we will answer the question of how one can transform statements about the generalized Feynman integral into statements about the Feynman integral restricted to physical values (\cref{sec:AnalyticContinuation}). Furthermore, we will present techniques for the Laurent expansion of those series in their parameters ${\underline{\nu}}$ in \cref{sec:epsilonExpansion}, which is necessary in dimensional and analytic regularization.
Since the handling of multivariate series can be very elaborate, we will provide a small set of tools for the treatment of those series in \cref{sec:ManipulationSeries}. Those techniques can in principle also be used for a symbolic reduction of Horn hypergeometric series to multiple polylogarithms and related functions. However, series representations are notably efficient in numerical evaluations (\cref{sec:numerics}). In convenient kinematical regions those series converge very fast, and one can give a sufficient approximation after a few summands.\bigskip
In principle there are many known ways to span the solution space of $\mathcal{A}$-hyper\-ge\-o\-me\-tric systems \cite{Matsubara-HeoLaplaceResidueEuler2018}. Thus, one can also use the $\mathcal{A}$-hypergeometric theory to write the Feynman integral in terms of Euler integrals or Laplace integrals, to name a few. We will address those alternative approaches in \cref{sec:EulerIntegrals}.
The procedure presented in this chapter is not only restricted to Feynman integrals. One can derive series representations for any Euler-Mellin integral. We will show some of these applications appearing in Feynman calculus in \cref{sec:periodMarginal}. In order to illustrate the method of Horn hypergeometric series, we will conclude this chapter by deriving a series representation of the fully massive sunset graph in \cref{sec:ExampleSunset}.
\section[Series representations for generalized Feynman integrals]{Series representations for generalized Feynman \\ integrals}
\sectionmark{... for generalized Feynman integrals}
\label{sec:seriesRepresentationsSEC}
As it was observed in \cref{thm:FeynmanAHypergeometric}, every generalized Feynman integral is an $\mathcal{A}$-hypergeometric function. Thus, we can directly apply the results on $\mathcal{A}$-hypergeometric systems from \cref{ch:AHypergeometricWorld}. In addition to the genericity of the variables $z$, we will for the time being also assume generic\footnote{As before, we will call ${\underline{\nu}}\in \mathbb C^{n+1}$ generic, if ${\underline{\nu}} \in D \subseteq \mathbb C^{n+1}$ attains only values in a non-empty, Zariski open set $D$. This means in particular that $D$ is dense in $\mathbb C^{n+1}$. When working with $\Gamma$-series \cref{eq:GammaSeriesDefCh4}, we will often consider a slightly stronger restriction, where we assume ${\underline{\nu}}$ to be in a countable intersection of non-empty, Zariski open sets. In other words, we want ${\underline{\nu}}$ to attain values in the complement of a countable union of hyperplanes of $\mathbb C^{n+1}$. This slightly stronger version is sometimes referred to as ``very generic''. Hence, for very generic ${\underline{\nu}}$, we can assume that $\mathcal{A}_\sigma^{-1}{\underline{\nu}}+\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} \lambda$ will not contain any integer coordinate for all $\lambda\in\Lambda_k$, see also \cref{thm:meromorphicContinuation}. Note that for the assumptions in \cref{thm:HolRankAHyp} generic values of ${\underline{\nu}}$ are sufficient.} values of ${\underline{\nu}}\in \mathbb C^{n+1}$ in order to avoid singularities of the $\Gamma$-functions. The latter assumption also excludes the possibility of rank jumps, i.e.\ the holonomic rank of $H_\mathcal{A}({\underline{\nu}})$ is precisely given by the volume of $\operatorname{Newt}(\mathcal{G})$ according to \cref{thm:HolRankAHyp}. However, as stated in \cref{thm:CohenMacaulayFeynman}, the toric ideal generated by Feynman integrals is Cohen-Macaulay in many cases. Therefore, there will be no rank jumps even for non-generic ${\underline{\nu}}$ for most Feynman integrals. Hence, this is not a serious restriction for Feynman integrals and is only made for simplicity in the handling of $\Gamma$-functions.\bigskip
In \cref{ssec:GammaSeries} we already constructed series solutions for any $\mathcal{A}$-hypergeometric system. In particular, a basis of the solution space $\operatorname{Sol}(H_\mathcal{A}({\underline{\nu}}))$ was provided by $\Gamma$-series. Thus, let $\mathcal{T}$ be a regular triangulation of the Newton polytope $\operatorname{Newt}(\mathcal{G})=\operatorname{Newt}(\mathcal{U}+\mathcal{F})$ and $\widehat{\mathcal T}$ the set of maximal cells of $\mathcal{T}$. Then according to \cref{thm:SolutionSpaceGammaSeries} we can write every element of $\operatorname{Sol}(H_\mathcal{A}({\underline{\nu}}))$ as a linear combination of $\Gamma$-series and especially the Feynman integral. For the sake of simplicity, we will drop the prefactors of the Feynman integral \cref{eq:LeePomeranskyRepresentation} and consider instead
\begin{align}
\gls{gFJ} := \Gamma (\nu_0-\omega)\Gamma(\nu) \mathcal I_\mathcal{A}({\underline{\nu}},z) = \Gamma(\nu_0)\int_{\mathbb R^n_+} \! \mathop{}\!\mathrm{d} x\, x^{\nu-1} \mathcal{G}^{-\nu_0} \label{eq:DefinitionFeynmanJ}
\end{align}
where the corresponding definitions can be found in \cref{eq:Gsupport} and \cref{eq:nuuDefinition}. Hence, the Feynman integral can be written as a linear combination of $\Gamma$-series
\begin{align}
\mathcal J_\mathcal{A}({\underline{\nu}},z) = \sum_{\sigma\in\widehat{\mathcal T}} \sum_{k\in K_\sigma} \gls{Cfactors} \varphi_{\sigma,k}({\underline{\nu}},z) \label{eq:LinearCombinationGammaSeries}
\end{align}
where $K_\sigma=\left\{k^{(1)},\ldots,k^{(s)}\right\}$ is a set of representatives of $\bigslant{\mathbb Z^{n+1}}{\mathbb Z\mathcal{A}_\sigma} = \big\{\!\left[\mathcal{A}_{\bar{\sigma}} k^{(j)}\right] \, \big\rvert \, j = 1,\ldots, s = \operatorname{vol}(\operatorname{Conv}(\mathcal{A}_\sigma))\big\}$ for every simplex $\sigma\in\widehat{\mathcal T}$ according to \cref{eq:representativesK}. Thereby $\varphi_{\sigma,k}$ denote the $\Gamma$-series which were defined in \cref{ssec:GammaSeries}, i.e.\
\begin{align}
\varphi_{\sigma,k} ({\underline{\nu}},z) = z_\sigma^{-\mathcal{A}_\sigma^{-1}{\underline{\nu}}} \sum_{\lambda\in\Lambda_k} \frac{z_\sigma^{-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} \lambda} z_{\bar\sigma}^\lambda}{\lambda! \ \Gamma\!\left(1-\mathcal{A}_\sigma^{-1}{\underline{\nu}} - \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} \lambda\right)} \label{eq:GammaSeriesDefCh4}
\end{align}
with $\Lambda_k=\left\{ k + m \in \mathbb{N}_0^r \, \rvert\, \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} m \in \mathbb Z^{n+1} \right\}\subseteq \mathbb{N}_0^r$. By $\bar\sigma = \{1,\ldots,N\}\setminus\sigma$ we denote the complement of $\sigma$, and $\mathcal{A}_\sigma$, $\mathcal{A}_{\bar{\sigma}}$, $z_\sigma$, $z_{\bar\sigma}$ and all related objects indicate the restriction to the columns corresponding to $\sigma$ and $\bar\sigma$, respectively. Further, we will assume Euclidean kinematics $\Re(z_j)>0$ in a convenient region, such that all $\Gamma$-series $\varphi_{\sigma,k}$ converge. As pointed out in \cref{ssec:GammaSeries}, such regions always exist.
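Let us note in passing that in the unimodular case $\left|\det(\mathcal{A}_\sigma)\right|=1$ the quotient $\bigslant{\mathbb Z^{n+1}}{\mathbb Z\mathcal{A}_\sigma}$ is trivial; then $K_\sigma=\{0\}$ and $\Lambda_0=\mathbb N_0^r$, so that every maximal cell $\sigma\in\widehat{\mathcal T}$ contributes exactly one $\Gamma$-series to \cref{eq:LinearCombinationGammaSeries}.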
Therefore, in order to get a series representation of Feynman integrals, we have to determine the meromorphic functions $C_{\sigma,k} ({\underline{\nu}})$ of \cref{eq:LinearCombinationGammaSeries}. This can be done by considering specific values for $z$ in \cref{eq:LinearCombinationGammaSeries}. In order to get sufficient boundary values also for non-unimodular triangulations $\mathcal{T}$, we have to include derivatives of $\mathcal J_\mathcal{A}({\underline{\nu}},z)$ with respect to $z$. As aforementioned, derivatives with respect to $z$ will correspond to a shift in the parameters ${\underline{\nu}}$, and we will have
\begin{align}
\partial^u \varphi_{\sigma,k} = \varphi_{\sigma,k-u_{\bar\sigma}} ({\underline{\nu}} + \mathcal{A} u,z)
\end{align}
with $\partial^u = \prod_{i=1}^N \left(\pd{}{z_i}\right)^{u_i}$ and $u\in\mathbb N_{\geq 0}^N$. Up to a sign, the derivatives of the Feynman integral likewise amount to a shift of the parameters ${\underline{\nu}}$
\begin{align}
\partial^u \mathcal J_\mathcal{A}({\underline{\nu}},z) = (-1)^{|u|} \mathcal J_\mathcal{A} ({\underline{\nu}}+\mathcal{A} u,z) \label{eq:FeynmanJDerivative}
\end{align}
where $|u| := \sum_{i=1}^N u_i$, which follows directly from the definition \cref{eq:DefinitionFeynmanJ}. \bigskip
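For instance, for a single derivative, $u=e_j$, relation \cref{eq:FeynmanJDerivative} can be verified in one line: since $\partial_{z_j} \mathcal{G}^{-\nu_0} = -\nu_0\, x^{\mathcal{A} e_j}\, \mathcal{G}^{-\nu_0-1}$ and $\nu_0\,\Gamma(\nu_0)=\Gamma(\nu_0+1)$, one finds
\begin{align}
\pd{}{z_j} \mathcal J_\mathcal{A}({\underline{\nu}},z) = - \Gamma(\nu_0+1) \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1}\, x^{\mathcal{A} e_j}\, \mathcal{G}^{-\nu_0-1} = - \mathcal J_\mathcal{A}({\underline{\nu}}+\mathcal{A} e_j,z) \comma
\end{align}
and iterating this step yields \cref{eq:FeynmanJDerivative} for general $u\in\mathbb N_{\geq 0}^N$.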
For the purpose of considering boundary values of \cref{eq:LinearCombinationGammaSeries} where certain variables $z_j$ are set to zero, we will examine the behaviour of $\Gamma$-series when they are restricted to subconfigurations in the following slightly technical lemma. As we will see below, Feynman integrals transmit their meromorphic functions $C_{\sigma,k}({\underline{\nu}})$ to simpler Feynman integrals. This enables us to reduce every Feynman integral to the case described in \cref{cor:MellinBarnesRepresentation} and derive an analytic expression of the functions $C_{\sigma,k}({\underline{\nu}})$.
\begin{lemma}[linear combinations for subtriangulations] \label{lem:linCombSubtriangs}
Let $\mathcal{A}\in\mathbb Z^{(n+1)\times N}$ and $\mathcal{A}^\prime := \mathcal{A}\setminus i \in\mathbb Z^{(n+1)\times (N-1)}$ be two acyclic, full dimensional vector configurations with $r=N-n-1>1$. Further, let $\mathcal{T}$ be a regular triangulation of $\mathcal{A}$ and $\mathcal{T}^\prime$ a regular triangulation of $\mathcal{A}^\prime$, such that $\mathcal{T}^\prime$ is a subtriangulation of $\mathcal{T}$, i.e.\ $\mathcal{T}^\prime \subseteq \mathcal{T}$. Moreover, we assume that the representatives $K_\sigma$ and $K^\prime_\sigma$ are chosen compatibly with each other for all $\sigma\not\ni i$, i.e.\ for every ${k^\prime}^{(j)}\in K_\sigma^\prime$ we construct $k^{(j)}\in K_\sigma$ by adding $k^{(j)}_i = 0$ as the $i$-th component. Under the assumption $\lim_{z_i\rightarrow 0} \mathcal J_\mathcal{A}({\underline{\nu}},z) = \mathcal J_{\mathcal{A}^\prime}({\underline{\nu}},z^\prime)$ we then have an equality of the meromorphic functions from \cref{eq:LinearCombinationGammaSeries}
\begin{align}
C_{\sigma,k}({\underline{\nu}}) = C^\prime_{\sigma,k^\prime} ({\underline{\nu}}) \qquad\text{for all } \sigma \in \widehat{\mathcal T}^\prime
\end{align}
where the primed objects are all related to the system $H_{\mathcal{A}^\prime}({\underline{\nu}})$.
\end{lemma}
\begin{proof}
Due to the relation of derivatives and shifts in the parameters \cref{eq:FeynmanJDerivative} we can extend $\lim_{z_i\rightarrow 0} \mathcal J_\mathcal{A}({\underline{\nu}},z) = \mathcal J_{\mathcal{A}^\prime}({\underline{\nu}},z^\prime)$ to
\begin{align}
\lim_{z_i\rightarrow 0} \partial^u \mathcal J_\mathcal{A}({\underline{\nu}},z) = (-1)^{|u|} \mathcal J_{\mathcal{A}^\prime} ({\underline{\nu}} + \mathcal{A} u,z^\prime) \stackrel{u_i=0}{=} \partial^{u^\prime} \mathcal J_{\mathcal{A}^\prime} ({\underline{\nu}} , z^\prime)
\end{align}
where we use $z^\prime := (z_j)_{j\neq i}$ as before for the variables of the system $H_{\mathcal{A}^\prime}({\underline{\nu}})$ and equivalently $u^\prime := (u_j)_{j\neq i}$. So the task will be to compare the limits of $\partial^u \mathcal J_\mathcal{A}({\underline{\nu}},z)$ with the limits of $\Gamma$-series $\partial^u \varphi_{\sigma,k}({\underline{\nu}},z)$. For the latter we have to distinguish between the two cases $\sigma\ni i$ and $\sigma\not\ni i$. Starting with the second case we have
\begin{align}
\lim_{z_i\rightarrow 0} \partial^u \varphi_{\sigma,k}({\underline{\nu}},z) \stackrel{i\notin\sigma}{=} z_\sigma^{- \mathcal{A}_\sigma^{-1}({\underline{\nu}}+\mathcal{A} u)} \displaystyle\sum_{\substack{\lambda\in\Lambda_{k-u_{\bar\sigma}} \\ \text{with } \lambda_i = 0}} \frac{z_\sigma^{-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} \lambda} z_{\bar\sigma}^\lambda}{\lambda! \ \Gamma\!\left(1-\mathcal{A}_\sigma^{-1} ({\underline{\nu}} + \mathcal{A} u + \mathcal{A}_{\bar{\sigma}} \lambda)\right)} \label{eq:derivativeGammaSeriesLimit} \point
\end{align}
Note that there does not necessarily exist a $\lambda\in\Lambda_{k-u_{\bar\sigma}}$ satisfying $\lambda_i=0$. In this case the sum in \cref{eq:derivativeGammaSeriesLimit} is empty and $\lim_{z_i\rightarrow 0} \partial^u \varphi_{\sigma,k}({\underline{\nu}},z) = 0$. To avoid this case, let us assume that we have chosen $u$ in such a way that $(u_{\bar\sigma})_i = k_i$. This agrees with the previously formulated assumptions $(u_{\bar\sigma})_i = 0$ and $k_i=0$. Hence, we can reformulate the summation region
\begin{align}
\left\{\lambda\in\Lambda_{k-u_{\bar\sigma}} \,\rvert\, \lambda_i = 0 \right\} &= \left\{ k^\prime - u_{\bar\sigma}^\prime + m^\prime \in \mathbb{N}_0^{r-1} \, \rvert\, \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}^\prime m^\prime \in \mathbb Z^{n+1} \right\} \times \left\{\lambda_i = 0\right\} \nonumber \\
& = \Lambda^\prime_{k^\prime - u_{\bar\sigma}^\prime} \times \left\{\lambda_i = 0 \right\}
\end{align}
where the primed objects belong to the vector configuration $\mathcal{A}^\prime$. Thus, we will obtain for $i\notin\sigma$ with $(u_{\bar\sigma})_i = k_i=0$
\begin{align}
\lim_{z_i\rightarrow 0} \partial^u \varphi_{\sigma,k} ({\underline{\nu}},z) = \varphi^\prime_{\sigma,k^\prime - u_{\bar\sigma}^\prime} ({\underline{\nu}}+\mathcal{A} u, z^\prime) = \partial^{u^\prime} \varphi^\prime_{\sigma,k^\prime} ({\underline{\nu}},z^\prime) \comma
\end{align}
which is nothing else than the $\Gamma$-series we would expect for a system $\mathcal{A}^\prime$. Therefore, in the linear combination \cref{eq:LinearCombinationGammaSeries} we will have
\begin{align}
\lim_{z_i\rightarrow 0} & \partial^u \mathcal J_\mathcal{A}({\underline{\nu}},z) = \partial^{u^\prime} \mathcal J_{\mathcal{A}^{\prime}} ({\underline{\nu}}, z^\prime) \\&= \sum_{\sigma\in\widehat{\mathcal T}^\prime} \sum_{k^\prime\in K^\prime_\sigma} C_{\sigma,k^\prime}({\underline{\nu}}) \partial^{u^\prime} \varphi^\prime_{\sigma,k^\prime}({\underline{\nu}},z^\prime) + \sum_{\substack{ \sigma\in\widehat{\mathcal T} \\ \text{s.t. } i\in\sigma}} \sum_{k\in K_\sigma} C_{\sigma,k} ({\underline{\nu}}) \lim_{z_i\rightarrow 0}\partial^{u^\prime} \varphi_{\sigma,k}({\underline{\nu}},z) \point \nonumber
\end{align}
Since the $\Gamma$-series $\varphi^\prime_{\sigma,k^\prime}$ with $\sigma\not\ni i$ already describe the full $\mathcal{A}^\prime$-hypergeometric system $H_{\mathcal{A}^\prime}({\underline{\nu}})$, the $\Gamma$-series $\lim_{z_i\rightarrow 0} \varphi_{\sigma,k}$ with $i\in\sigma$ are either linearly dependent on the latter or they vanish. Suppose they were linearly dependent. This would imply in particular that the limit of the series
\begin{align}
\lim_{z_i\rightarrow 0} \sum_{\lambda\in\Lambda_{k-u_{\bar\sigma}}} \frac{z_\sigma^{-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda} z_{\bar\sigma}^\lambda}{\lambda! \ \Gamma\!\left(1-\mathcal{A}_\sigma^{-1} ({\underline{\nu}} + \mathcal{A} u) - \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda\right)} \quad\text{with } i\in\sigma
\end{align}
is finite. On the other hand, there exists a (full dimensional) region $D\subset\mathbb C^{n+1}$ such that $\left(\mathcal{A}_\sigma^{-1}{\underline{\nu}}\right)_i < 0$ for all ${\underline{\nu}}\in D$. Therefore, we have $\lim_{z_i\rightarrow 0} \partial^u \varphi_{\sigma,k}({\underline{\nu}},z)=0$ for all ${\underline{\nu}}\in D$. Standard arguments of complex analysis then show that $\lim_{z_i\rightarrow 0} \partial^u \varphi_{\sigma,k}({\underline{\nu}},z)$ vanishes for all generic values of ${\underline{\nu}}$ \cite[thm. 4.8, ch. 2]{SteinComplexAnalysis2003}. Therefore, the assumption of linear dependence was false, and all the $\Gamma$-series $\partial^u \varphi_{\sigma,k}$ with $\sigma\ni i$ vanish identically in the limit $z_i\rightarrow 0$. Note that these series may not be defined in that limit; it is rather the analytic continuation of the $\Gamma$-series which vanishes then.
Hence, the $\Gamma$-series of $\mathcal{A}$ with $\sigma \not\ni i$ will transform to the $\Gamma$-series of $\mathcal{A}^\prime$ in the limit $z_i\rightarrow 0$, whereas the $\Gamma$-series with $\sigma\ni i$ will vanish. Thus, the meromorphic functions $C_{\sigma,k}({\underline{\nu}})$ will stay the same in such a limit.
\end{proof}
Therefore, the set of $\Gamma$-series for the vector configuration $\mathcal{A}$ becomes the set of $\Gamma$-series for the vector configuration $\mathcal{A}^\prime$ in the limit $z_i\rightarrow 0$ if the considered triangulations are compatible. Such a behaviour was also mentioned in \cite{GelfandGeneralHypergeometricSystems1992}. Thus, the meromorphic functions $C_{\sigma,k}({\underline{\nu}})$ for a Feynman integral with vector configuration $\mathcal{A}$ are transmitted to those for a vector configuration $\mathcal{A}^\prime$, and we can reduce the determination of the functions $C_{\sigma,k}({\underline{\nu}})$ to simpler situations.
Note that those pairs of regular triangulations as considered in \cref{lem:linCombSubtriangs} always exist, as argued at the end of \cref{ssec:TriangulationsPolyhedra}. E.g.\ we can construct $\mathcal{T}$ from $\mathcal{T}^\prime$ as a placing triangulation with respect to the point labelled by $i$. Also the assumed choice of representatives $K_\sigma$ and $K^\prime_\sigma$ in \cref{lem:linCombSubtriangs} is always possible, as the quotient groups $\normalslant{\mathbb Z^{n+1}}{\mathbb Z \mathcal{A}_\sigma}$ and $\normalslant{\mathbb Z^{n+1}}{\mathbb Z \mathcal{A}_\sigma^\prime}$ obviously coincide for all $\sigma\not\ni i$. \bigskip
Thus, we will determine the meromorphic functions $C_{\sigma,k} ({\underline{\nu}})$ by considering Euler-Mellin integrals corresponding to subtriangulations. In this way one can define ancestors and descendants of Feynman integrals by deleting or adding monomials to the Lee-Pomeransky polynomial $\mathcal{G}$. E.g.\ the massless one-loop bubble graph is a descendant of the one-loop bubble graph with one mass, which itself is a descendant of the fully massive one-loop bubble. Those ancestors and descendants do not necessarily correspond to Feynman integrals in the original sense, since one can also consider polynomials $\mathcal{G}$ which are not connected to graph polynomials any more. For an arbitrary acyclic vector configuration $\mathcal{A}$, we can choose any full dimensional simplex of a triangulation of $\mathcal{A}$ as a possible descendant. Hence, the trivial triangulation of that simplex and the triangulation of $\mathcal{A}$ are compatible in the sense of \cref{lem:linCombSubtriangs}. In doing so, one can relate the prefactors $C_{\sigma,k} ({\underline{\nu}})$ to the problem where only one simplex is involved. This consideration results in the following theorem.
\vspace{2em}
\begin{theorem}[Series representation of Feynman integrals] \label{thm:FeynSeries}
Let $\mathcal{T}$ be a regular triangulation of the Newton polytope $\operatorname{Newt}(\mathcal{G})$ corresponding to a generalized Feynman integral $\mathcal I_\mathcal{A}({\underline{\nu}},z)$ and $\widehat{\mathcal T}$ its maximal cells. Then the generalized Feynman integral can be written as
\begin{align}
\mathcal I_\mathcal{A}({\underline{\nu}},z) = \frac{1}{\Gamma(\nu_0-\omega)\Gamma(\nu)} \sum_{\sigma\in \widehat{\mathcal T}} \frac{z_\sigma^{-\mathcal{A}_\sigma^{-1}{\underline{\nu}}}}{\left|\det (\mathcal{A}_\sigma)\right|} \sum_{\lambda\in\mathbb N_0^r} \frac{(-1)^{|\lambda|}}{\lambda!} \Gamma\!\left(\mathcal{A}_\sigma^{-1}{\underline{\nu}}+\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda\right) z_\sigma^{-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda} z_{\bar\sigma}^\lambda \label{eq:SeriesRepresentationFeynmanIntegral}
\end{align}
where $r=N-n-1$. In order to avoid poles of the Feynman integral, we will assume ${\underline{\nu}}\in\mathbb C^{n+1}$ to be very generic. All series in \cref{eq:SeriesRepresentationFeynmanIntegral} have a common region of absolute convergence.
\end{theorem}
\begin{proof}
As pointed out above, we only have to determine the meromorphic functions $C_{\sigma,k}({\underline{\nu}})$ which were defined by \cref{eq:LinearCombinationGammaSeries}. Due to \cref{lem:linCombSubtriangs} this can be accomplished by considering simpler cases. In particular, we want to reduce it to the case of simplices\footnote{The reduction to a simplex (i.e.\ $r=1$) does not satisfy the original assumptions which were considered in \cref{lem:linCombSubtriangs}. However, by duplicating an arbitrary vertex in $\mathcal{A}^\prime$ and adjusting the corresponding $z$-variable we can treat also simplices by \cref{lem:linCombSubtriangs}. Hence, we will split a monomial of $\mathcal{G}$ into two equal monomials with halved coefficients.}, i.e.\ we examine the limit $z_{\bar\sigma} \rightarrow 0$. Note that $\lim_{z_{\bar\sigma}\rightarrow 0} \partial^u \varphi_{\sigma,k}({\underline{\nu}},z)$ does not vanish if and only if $0\in\Lambda_{k-u_{\bar\sigma}}$, since only the term with $\lambda=0$ survives this limit. This is satisfied if $k=u_{\bar\sigma}$, which follows directly from the definition of $\Lambda_{k-u_{\bar\sigma}}$. On the other hand, if $0\notin\Lambda_{k-u_{\bar\sigma}}$, then $k$ and $u_{\bar\sigma}$ correspond to different equivalence classes described by $K_\sigma$. Therefore, choosing $u_\sigma=0$ we obtain
\begin{align}
\lim_{z_{\bar\sigma} \rightarrow 0} \partial^{(0,u_{\bar\sigma})} \varphi_{\sigma,k}({\underline{\nu}},z) = \delta_{k,u_{\bar\sigma}} \frac{z_\sigma^{-\mathcal{A}_\sigma^{-1}({\underline{\nu}}+\mathcal{A}_{\bar{\sigma}} k)}}{\Gamma\!\left(1- \mathcal{A}_\sigma^{-1} ( {\underline{\nu}} + \mathcal{A}_{\bar{\sigma}} k)\right)} \label{eq:proofSeriesA}
\end{align}
for all $u_{\bar\sigma},k\in K_\sigma$ where $\delta_{k,u_{\bar\sigma}}$ denotes the Kronecker delta.
If $\operatorname{Conv}(\mathcal{A}_\sigma)$ describes only a simplex, we can give a simple analytic expression for the Feynman integral $\mathcal I_{\mathcal{A}_\sigma}$ by means of \cref{cor:MellinBarnesRepresentation}. For the derivative we obtain by \cref{eq:FeynmanJDerivative} and \cref{eq:MellinBarnesCorollary}
\begin{align}
\lim_{z_{\bar\sigma} \rightarrow 0} \partial^{(0,u_{\bar\sigma})} \mathcal J_\mathcal{A}({\underline{\nu}},z) = (-1)^{|u_{\bar\sigma}|} \frac{\Gamma(\mathcal{A}_\sigma^{-1}{\underline{\nu}} + \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} u_{\bar\sigma})}{\left|\det (\mathcal{A}_\sigma)\right|} z_\sigma^{-\mathcal{A}_\sigma^{-1}{\underline{\nu}} - \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} u_{\bar\sigma}} \point \label{eq:proofSeriesB}
\end{align}
Comparing the coefficients of \cref{eq:proofSeriesA} and \cref{eq:proofSeriesB} we will get
\begin{align}
\frac{(-1)^{|k|}}{\left|\det(\mathcal{A}_\sigma)\right|} \Gamma\!\left(\mathcal{A}_\sigma^{-1}{\underline{\nu}} + \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} k\right) = \frac{C_{\sigma,k}({\underline{\nu}})}{\Gamma\!\left(1-\mathcal{A}_\sigma^{-1}{\underline{\nu}}-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} k\right)}\point
\end{align}
By means of Euler's reflection formula, the fact that $\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}(\lambda-k)\in\mathbb Z^{n+1}$ for all $\lambda\in\Lambda_k$, and \cref{lem:HomogenityRelationsForAa} we have the equality
\begin{align}
\frac{\Gamma\!\left(\mathcal{A}_\sigma^{-1} {\underline{\nu}} +\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} k\right) \Gamma\!\left(1-\mathcal{A}_\sigma^{-1}{\underline{\nu}} - \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} k\right)}{\Gamma\!\left(\mathcal{A}_\sigma^{-1}{\underline{\nu}} + \mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda\right)\Gamma\!\left(1-\mathcal{A}_\sigma^{-1}{\underline{\nu}}-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda\right)} = (-1)^{\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} (\lambda-k)} = (-1)^{|\lambda|-|k|} \point
\end{align}
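Here the reflection formula enters in the shifted form
\begin{align}
\Gamma(x+m)\, \Gamma(1-x-m) = (-1)^m\, \Gamma(x)\, \Gamma(1-x) \qquad\text{for } m\in\mathbb Z \comma
\end{align}
applied to each component, and the last equality uses that the components of $\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} (\lambda-k)$ sum up to $|\lambda|-|k|$ by \cref{lem:HomogenityRelationsForAa}.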
This results in
\begin{align}
\mathcal J_\mathcal{A}({\underline{\nu}},z) = \sum_{\sigma\in \widehat{\mathcal T}} \sum_{k\in K_\sigma} \frac{z_\sigma^{-\mathcal{A}_\sigma^{-1}{\underline{\nu}}}}{\left|\det (\mathcal{A}_\sigma)\right|} \sum_{\lambda\in\Lambda_k} \frac{(-1)^{|\lambda|}}{\lambda!} \Gamma\!\left(\mathcal{A}_\sigma^{-1}{\underline{\nu}}+\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda\right) z_\sigma^{-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda} z_{\bar\sigma}^\lambda \point
\end{align}
Since the summands have no $k$ dependence any more, we can combine the summation over $k$. As $K_\sigma$ describes a partition of $\mathbb N_0^r$ according to \cref{eq:LambdaPartition} we obtain \cref{eq:SeriesRepresentationFeynmanIntegral}. The common convergence region was shown in \cref{thm:GammaConverge}.
\end{proof}
\Cref{thm:FeynSeries} works for regular unimodular as well as for regular non-unimodular triangulations and extends the results of \cite{KlausenHypergeometricSeriesRepresentations2019}. Although there are fewer series in the non-unimodular case (we obtain a Horn hypergeometric series for every simplex in the triangulation), we will mostly prefer the unimodular case. For those triangulations we always have $\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\in\mathbb Z^{(n+1)\times r}$, such that the summation indices appear only in integer combinations in the $\Gamma$-functions. This simplifies the following treatments, e.g.\ the Laurent expansion in dimensional regularization. However, the rational combinations of summation indices appearing in the non-unimodular case are no general obstacle and can be cured by splitting the series into $|\det(\mathcal{A}_\sigma)|$ many series with summation regions $\Lambda_k$ for $k\in K_\sigma$.\bigskip
We want to emphasize that many Feynman integrals admit unimodular triangulations. Therefore, unimodular triangulations will be the relevant case in most applications. As we have not found any Feynman integral yet which does not admit a unimodular triangulation, we formulated the conjecture that all Feynman integrals admit unimodular triangulations \cite{KlausenHypergeometricSeriesRepresentations2019}. The only exceptions are certain momentum constraints, where monomials in the second Symanzik polynomial drop out. E.g.\ the fully massive $1$-loop bubble with $\mathcal{U}=x_1+x_2$ and $\mathcal{F} = (p^2+m_1^2+m_2^2) x_1 x_2 + m_1^2 x_1^2 + m_2^2 x_2^2$ does not admit a unimodular triangulation in the special case $p^2 + m_1^2 + m_2^2 = 0$ with $m_1^2 \neq 0$, $m_2^2\neq 0$. As usual (see e.g.\ \cite{SchultkaToricGeometryRegularization2018}), we want to exclude those situations, where external momenta have specific, fixed values. Hence, we treat the external momenta as variables that are set to specific values only after renormalization.
The conjecture that any Feynman integral (without the aforementioned specific momentum constraints) admits at least one unimodular triangulation is neither proven nor disproven. A first step towards answering this question was done in \cite{TellanderCohenMacaulayPropertyFeynman2021} and \cite{WaltherFeynmanGraphsMatroids2022}. In these two articles the slightly weaker conjecture that all Feynman integrals generate Cohen-Macaulay ideals was shown for a wide class of Feynman graphs (see also \cref{thm:CohenMacaulayFeynman}). However, we want to emphasize that the existence of unimodular triangulations is not necessary for series representations, as seen in \cref{thm:FeynSeries}.\bigskip
In order to illustrate those series representations we will present two small examples. In \cref{sec:ExampleSunset} we will show a more extensive example to sketch the scope of this approach.
\begin{example} \label{ex:1loopbubbleB}
To exemplify the series representation in the unimodular case, we want to continue \cref{ex:1loopbubbleA} of the $1$-loop bubble graph with one mass corresponding to \cref{fig:bubble1}. The vector configuration for this Feynman graph was given by
\begin{align}
\mathcal{A} = \begin{pmatrix}
1 & 1 & 1 & 1 \\
1 & 0 & 1 & 2 \\
0 & 1 & 1 & 0
\end{pmatrix} \qquad z = (1,1,p^2+m_1^2,m_1^2) \point
\end{align}
For the triangulation $\widehat\mathcal{T}_1 = \big\{\{1,2,4\},\{2,3,4\}\big\}$ one obtains the series representation
\begin{align}
\mathcal J_\mathcal{A}({\underline{\nu}},z) &= z_1^{-2 \nu_0 + \nu_1 + 2 \nu_2} z_2^{-\nu_2} z_4^{\nu_0 - \nu_1 - \nu_2} \sum_{\lambda\in\mathbb N_0} \frac{1}{\lambda !} \left(-\frac{z_1 z_3}{z_2 z_4}\right)^\lambda \Gamma (\nu_2+\lambda ) \nonumber\\
&\qquad\qquad \Gamma (2 \nu_0-\nu_1-2 \nu_2-\lambda ) \Gamma (-\nu_0+\nu_1+\nu_2+\lambda ) \nonumber \\
&\qquad + z_4^{\nu_2-\nu_0} z_2^{-2 \nu_0+\nu_1+\nu_2} z_3^{2 \nu_0-\nu_1-2 \nu_2} \sum_{\lambda\in\mathbb N_0} \frac{1}{\lambda !} \left(-\frac{z_1 z_3}{z_2 z_4}\right)^\lambda \Gamma (\nu_0-\nu_2+\lambda ) \nonumber \\
&\qquad\qquad \Gamma (-2 \nu_0+\nu_1+2 \nu_2-\lambda ) \Gamma (2 \nu_0-\nu_1-\nu_2+\lambda ) \point \label{eq:exBubble1MassSeriesRepr}
\end{align}
In the physically relevant case $z = (1,1,p^2+m_1^2,m_1^2)$ and ${\underline{\nu}} = (2-\epsilon,1,1)$ one can easily evaluate the series
\begin{align}
&\mathcal I_\mathcal{A}\!\left(2-\epsilon,1,1,1,1,m_1^2+p^2,m_1^2\right) = (m_1^2)^{-\epsilon} \frac{\Gamma(1-2\epsilon)\Gamma(\epsilon)}{\Gamma(2-2\epsilon)}\ \HypF{1,\epsilon}{2\epsilon}{\frac{ m_1^2+p^2}{m_1^2}} \nonumber \\
&\qquad + \left(m_1^2+p^2\right)^{1-2\epsilon} \left(-p^2\right)^{-1+\epsilon} \Gamma(1-\epsilon) \Gamma(-1+2\epsilon) \label{eq:Example1loopBubble2F1}
\end{align}
which agrees with the expected result. The power series representation obtained from the triangulation $\widehat\mathcal{T}_2=\big\{\{1,2,3\},\{1,3,4\}\big\}$ will contain $-\frac{z_2 z_4}{z_1z_3}$ as its argument. Hence, depending on the kinematic region we like to consider, it may be worthwhile for numerical reasons to take a different triangulation. For a region where $0 \ll |p^2|$ we should prefer $\mathcal{T}_2$, whereas $\mathcal{T}_1$ is convenient when $|p^2| \approx m_1^2$. The series representation for $\mathcal{T}_2$, as well as the former result in \cref{ex:1loopbubbleA}, are equivalent to \cref{eq:exBubble1MassSeriesRepr} by transformation rules of the ${_2}F_1$ function \cref{eq:2F1trafoA}.
\end{example}
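The representation \cref{eq:exBubble1MassSeriesRepr} can easily be cross-checked numerically against the closed form \cref{eq:Example1loopBubble2F1}. The following small \softwareName{Python} sketch (relying on \softwareName{SciPy}; it is only meant as an illustration, and the kinematic point $p^2=-0.75$, $m_1^2=1$, $\epsilon=0.3$ is chosen ad hoc such that $\big|\frac{z_1z_3}{z_2z_4}\big|<1$) truncates the two series of \cref{eq:exBubble1MassSeriesRepr} and compares the result with the hypergeometric expression:
\begin{verbatim}
from scipy.special import gamma, hyp2f1

eps = 0.3
nu0, nu1, nu2 = 2 - eps, 1.0, 1.0
psq, msq = -0.75, 1.0                       # ad hoc Euclidean point with p^2 < 0
z1, z2, z3, z4 = 1.0, 1.0, psq + msq, msq   # z = (1, 1, p^2 + m^2, m^2)

# truncated Gamma-series of eq. (exBubble1MassSeriesRepr); both series share y
y, J_series = -z1*z3/(z2*z4), 0.0
for lam in range(40):
    J_series += (z1**(-2*nu0+nu1+2*nu2) * z2**(-nu2) * z4**(nu0-nu1-nu2)
                 * y**lam / gamma(lam+1) * gamma(nu2+lam)
                 * gamma(2*nu0-nu1-2*nu2-lam) * gamma(-nu0+nu1+nu2+lam))
    J_series += (z4**(nu2-nu0) * z2**(-2*nu0+nu1+nu2) * z3**(2*nu0-nu1-2*nu2)
                 * y**lam / gamma(lam+1) * gamma(nu0-nu2+lam)
                 * gamma(-2*nu0+nu1+2*nu2-lam) * gamma(2*nu0-nu1-nu2+lam))
# divide by Gamma(nu0 - omega) Gamma(nu) = Gamma(2 - 2 eps) to recover I_A
I_series = J_series / (gamma(2*nu0 - nu1 - nu2) * gamma(nu1) * gamma(nu2))

# closed form of eq. (Example1loopBubble2F1)
I_closed = (msq**(-eps) * gamma(1-2*eps) * gamma(eps) / gamma(2-2*eps)
            * hyp2f1(1, eps, 2*eps, (msq+psq)/msq)
            + (msq+psq)**(1-2*eps) * (-psq)**(-1+eps)
            * gamma(1-eps) * gamma(-1+2*eps))

print(I_series, I_closed)   # the two values agree within the truncation error
\end{verbatim}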
\begin{example} \label{ex:SeriesRepresentationNonUnimod}
To illustrate how \cref{thm:FeynSeries} works in the non-unimodular case we want to consider the $1$-loop bubble graph with two masses. Hence, we will have the Lee-Pomeransky polynomial $\mathcal{G}=\mathcal{U}+\mathcal{F}$
\begin{align}
\mathcal{G} = x_1 + x_2 + \left(p^2 + m_1^2 + m_2^2\right) x_1 x_2 + m_1^2 x_1^2 + m_2^2 x_2^2 \comma
\end{align}
where the corresponding data for $\mathcal{A}$ and $z$ are
\begin{align}
\mathcal{A} = \begin{pmatrix}
1 & 1 & 1 & 1 & 1\\
1 & 0 & 1 & 2 & 0 \\
0 & 1 & 1 & 0 & 2
\end{pmatrix} \qquad z = (1,1,p^2+m_1^2+m_2^2,m_1^2,m_2^2) \point
\end{align}
The vector configuration $\mathcal{A}$ has five (regular) triangulations, where two of them are non-unimodular. We will choose the triangulation $\widehat{\mathcal T} = \big\{\{1,2,5\},\{1,4,5\}\big\}$, where the simplex $\sigma_2 = \{1,4,5\}$ has volume $2$. By means of those data we can put together the series representation according to \cref{eq:SeriesRepresentationFeynmanIntegral}
\begin{align}
\mathcal J_\mathcal{A} ({\underline{\nu}},z) &= z_1^{-\nu_1} z_2^{-2\nu_0 + 2\nu_1 + \nu_2} z_5^{\nu_0 - \nu_1 - \nu_2} \sum_{\lambda\in\mathbb N_0^2} \frac{1}{\lambda!} \left(-\frac{z_2 z_3}{z_1 z_5}\right)^{\lambda_1} \!\left(-\frac{z_2^2 z_4}{z_1^2 z_5}\right)^{\lambda_2} \!\! \Gamma (\nu_1 + \lambda_1 + 2 \lambda_2) \nonumber \\
&\quad \Gamma(2 \nu_0 - 2 \nu_1 - \nu_2 -\lambda_1 - 2 \lambda_2) \Gamma(- \nu_0 + \nu_1 + \nu_2 + \lambda_1 + \lambda_2) \nonumber \\
& + \frac{1}{2} z_1^{-2\nu_0 + \nu_1 + \nu_2} z_4^{\nu_0 - \nu_1 - \frac{\nu_2}{2}} z_5^{-\frac{\nu_2}{2}} \sum_{\lambda\in\mathbb N_0^2} \frac{1}{\lambda!} \left(-\frac{z_2 \sqrt{z_4}}{z_1 \sqrt{z_5}}\right)^{\lambda_1} \!\left(-\frac{z_3}{\sqrt{z_4 z_5}}\right)^{\lambda_2} \nonumber \\
&\quad \Gamma\!\left(2\nu_0 - \nu_1 - \nu_2 + \lambda_1\right) \Gamma\!\left(\frac{\nu_2}{2} + \frac{\lambda_1}{2} + \frac{\lambda_2}{2} \right) \Gamma\!\left(- \nu_0 + \nu_1 + \frac{\nu_2}{2} - \frac{\lambda_1}{2} + \frac{\lambda_2}{2} \right) \point
\end{align}
For the physical case $z=(1,1,p^2+m_1^2+m_2^2,m_1^2,m_2^2)$, ${\underline{\nu}}=(2-\epsilon,1,1)$ we can rewrite
\begin{align}
\mathcal I_\mathcal{A} &= \frac{1}{\Gamma(2-2\epsilon)} \left\{
z_5^{-\epsilon} \sum_{\lambda\in\mathbb N_0^2} \frac{(\lambda_1 + 2 \lambda_2)!}{\lambda!} \Gamma(1-2\epsilon -\lambda_1 - 2 \lambda_2) \Gamma(\epsilon + \lambda_1 + \lambda_2) \left(-y_1\right)^{\lambda_1} \left(-y_2\right)^{\lambda_2} \right. \nonumber \\
& + \left. \frac{z_4^{\frac{1}{2}-\epsilon} }{2 \sqrt{z_5}} \sum_{\lambda\in\mathbb N_0^2} \frac{(-1)^{|\lambda|}}{\lambda!} \Gamma\!\left(2-2\epsilon + \lambda_1\right) \Gamma\!\left(\frac{1+\lambda_1+\lambda_2}{2}\right) \Gamma\!\left(-\frac{1}{2} + \epsilon + \frac{\lambda_2 - \lambda_1}{2} \right) y_2^{\frac{\lambda_1}{2}} y_3^{\frac{\lambda_2}{2}} \right\}
\end{align}
with $y_1 = \frac{z_3}{z_5} = \frac{p^2+m_1^2+m_2^2}{m_2^2}$, $y_2 = \frac{z_4}{z_5} = \frac{m_1^2}{m_2^2}$ and $y_3 = \frac{z_3^2}{z_4 z_5} = \frac{(p^2+m_1^2+m_2^2)^2}{m_1^2m_2^2}$. In order to avoid half-integer summation indices in the second series one can split this series into two parts as indicated by \cref{eq:representativesK}. Thus, we have $\mathbb Z\mathcal{A}_{\sigma_2} = (\mathbb Z,\mathbb Z,2\mathbb Z)^\top$ for the lattice spanned by the columns of $\mathcal{A}_{\sigma_2}$ and therefore $\normalslant{\mathbb Z\mathcal{A}}{\mathbb Z\mathcal{A}_{\sigma_2}} = \left\{ [0,0,0]^\top ; [1,0,1]^\top\right\}$, which results in $K_{\sigma_2} = \left\{ (0,0)^\top ; (1,0)^\top \right\}$ as a possible choice. Hence, we have to split the second series into the two summation regions $\Lambda_{k_1} = \begin{Bsmallmatrix} 2\mathbb N_0 \\ 2\mathbb N_0 \end{Bsmallmatrix} \cup \begin{Bsmallmatrix} 2\mathbb N_0 + 1\\ 2\mathbb N_0 + 1\end{Bsmallmatrix}$ and $\Lambda_{k_2} = \begin{Bsmallmatrix} 2\mathbb N_0 + 1\\ 2\mathbb N_0 \end{Bsmallmatrix} \cup \begin{Bsmallmatrix} 2\mathbb N_0\\ 2\mathbb N_0 + 1\end{Bsmallmatrix}$. This is a partition of $\mathbb N_0^2$ which turns the half-integer combinations of summation indices into integer combinations.
\end{example}
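The claimed splitting can also be verified directly: the matrix $\mathcal{A}_{\sigma_2}^{-1}\mathcal{A}_{\bar\sigma}$ maps $\lambda$ to an integer vector precisely when $\lambda_1+\lambda_2$ is even. A short illustrative check (assuming \softwareName{NumPy}, with $0$-indexed columns) reads:
\begin{verbatim}
import numpy as np

A = np.array([[1, 1, 1, 1, 1],
              [1, 0, 1, 2, 0],
              [0, 1, 1, 0, 2]], dtype=float)
sigma, sigmabar = [0, 3, 4], [1, 2]      # columns {1,4,5} and {2,3} of the example
M = np.linalg.solve(A[:, sigma], A[:, sigmabar])   # A_sigma^{-1} A_sigmabar
for lam1 in range(4):
    for lam2 in range(4):
        v = M @ np.array([lam1, lam2])
        is_integral = np.allclose(v, np.round(v))
        print((lam1, lam2), "integral:", is_integral,
              "| lam1+lam2 even:", (lam1 + lam2) % 2 == 0)
\end{verbatim}
which reproduces the partition of $\mathbb N_0^2$ into $\Lambda_{k_1}$ and $\Lambda_{k_2}$ given above.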
Thus, we found series representations for generalized Feynman integrals for every regular triangulation of $\operatorname{Newt}(\mathcal{G})$. We want to recall from \cref{ssec:TriangulationsPolyhedra} that every point/vector configuration has at least one regular triangulation. Typically, a Feynman graph admits many different possibilities to triangulate its corresponding Newton polytope. Therefore, one usually obtains a large number of those series representations. We included the number of triangulations for certain Feynman graphs in the appendix in \cref{tab:characteristics}. It is not surprising that there are a lot of different series representations, since hypergeometric functions satisfy many transformation formulas and can be converted into other hypergeometric functions. Therefore, for the sake of numerical computations one can choose a series representation which converges fast for the given kinematics, such that we can evaluate the Feynman integral numerically by considering the first summands of every series. In fact, one can directly construct a convenient triangulation by means of the height (see \cref{ssec:TriangulationsPolyhedra}).
\begin{lemma} \label{lem:heightEffectiveVars}
Let $\omega=-\ln |z| = (-\ln |z_1|,\ldots,-\ln |z_N|)$ be a height vector of a regular triangulation $\mathcal{T}$ of an acyclic vector configuration $\mathcal{A}$. Then the effective variables $y=z_{\bar\sigma} z_\sigma^{-\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}}\in\mathbb C^r$ for every simplex $\sigma \in\widehat{\mathcal T}$ will satisfy $|y_j|<1$ for $j=1,\ldots,r$.
\end{lemma}
\begin{proof}
According to \cref{eq:subdivisionCondGaleBeta} we have
\begin{align}
\sigma \in \mathcal{S}\!\left(\mathcal{A},-\ln |z|\right) \Leftrightarrow - \ln |z| \cdot \mathcal{B}(\sigma) > 0
\end{align}
with $\mathcal{B}(\sigma) = \begin{psmallmatrix} -\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}} \\ \mathbbm 1 \end{psmallmatrix}$ a Gale dual of $\mathcal{A}$ constructed from the simplex $\sigma$ as before. Since $y = z^{\mathcal{B}(\sigma)}$, the condition $-\ln|z|\cdot\mathcal{B}(\sigma)>0$ states precisely that $\ln|y_j|<0$, i.e.\ $|y_j|<1$, for $j=1,\ldots,r$, which is the assertion.
\end{proof}
Note that height vectors $\omega$ will in general only produce a regular subdivision. Only for generic heights $\omega$ is the regular subdivision $\mathcal{S}(\mathcal{A},\omega)$ a triangulation. Thus, for specific values of $z$ we will not always find an optimal triangulation, and it is not guaranteed that we can find a triangulation which results in power series with all $|y_j|<1$. In this case the series have to be continued analytically by means of hypergeometric transformation rules, which we consider in the subsequent \cref{sec:AnalyticContinuation}. Additionally, it is often worthwhile to use classical series accelerations, e.g.\ Euler's transformation, to speed up the numerical calculation of those series. \Cref{sec:numerics} will take a closer look at numerical issues.
However, \cref{lem:heightEffectiveVars} will give us a simple guideline to construct appropriate triangulations, and we can perturb $\omega=-\ln|z|$ slightly in order to generate an almost optimal triangulation, whenever $-\ln|z|$ is not generic enough.
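A minimal sketch of this guideline (assuming \softwareName{NumPy}; the kinematic point is the same ad hoc one as above, i.e.\ $z=(1,1,0.25,1)$ for the one-mass bubble of \cref{ex:1loopbubbleB}, and columns are $0$-indexed) checks the condition $-\ln|z|\cdot\mathcal{B}(\sigma)>0$ of \cref{eq:subdivisionCondGaleBeta} for the two triangulations $\mathcal{T}_1$ and $\mathcal{T}_2$:
\begin{verbatim}
import numpy as np

A = np.array([[1, 1, 1, 1],
              [1, 0, 1, 2],
              [0, 1, 1, 0]], dtype=float)
z = np.array([1.0, 1.0, 0.25, 1.0])      # (1, 1, p^2 + m^2, m^2) with p^2 = -0.75
height = -np.log(np.abs(z))

def gale_column(sigma, jbar):
    # column of B(sigma): -A_sigma^{-1} a_jbar on sigma, +1 at position jbar
    col = np.zeros(A.shape[1])
    col[list(sigma)] = -np.linalg.solve(A[:, list(sigma)], A[:, jbar])
    col[jbar] = 1.0
    return col

triangulations = {"T1": [(0, 1, 3), (1, 2, 3)], "T2": [(0, 1, 2), (0, 2, 3)]}
for name, simplices in triangulations.items():
    ok = all(height @ gale_column(sigma, jbar) > 0
             for sigma in simplices
             for jbar in set(range(A.shape[1])) - set(sigma))
    print(name, "all effective variables |y_j| < 1:", ok)
\end{verbatim}
For this kinematic point the check selects $\mathcal{T}_1$, in accordance with the discussion in \cref{ex:1loopbubbleB}.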
\section{Analytic continuation of series representations} \label{sec:AnalyticContinuation}
Up to this point, most of the theorems were statements about the generalized Feynman integral, i.e.\ we assumed the coefficients of Symanzik polynomials $z$ to be generic. When we want to consider non-generic values of $z$, as necessary in a physical application, we will have two possible strategies. The first option is to specialize the $\mathcal{A}$-hypergeometric system, then calculate the restriction to a coordinate subspace and find solutions of this restricted system. In principle, it is known how to restrict $\mathcal{A}$-hypergeometric systems to coordinate subspaces and one can find a comprehensive description\footnote{Typically, one considers restrictions where the variables vanish, $z_j=0$. This is no general obstacle, as one can change the $D$-ideal by a convenient coordinate transformation $z_j\mapsto z_j-1$.} in \cite[sec. 5.2]{SaitoGrobnerDeformationsHypergeometric2000}. However, it can be algorithmically very hard to calculate those restriction ideals, and for practical applications these algorithms seem hopelessly slow. Another drawback of that strategy is that we will lose the information about the explicit form of the meromorphic functions $C_{\sigma,k}({\underline{\nu}})$ from \cref{thm:FeynSeries}. We refer to \cite{WaltherAlgorithmicComputationRham1998,OakuLocalizationAlgorithmModules1999,OakuAlgorithmsDmodulesRestriction1998} for algorithms, as well as to the algorithms included in \softwareName{Singular} \cite{GreuelSINGULARComputerAlgebra2009}.
A similar strategy was also used in \cite{KlemmLoopBananaAmplitude2020} for the computation of marginal banana graphs. Instead of computing the restriction ideal by the above-mentioned algorithms, the authors made ansatzes for the restriction ideal based on the first terms of the series solutions of the generic case.\bigskip
The second option is to calculate solutions of the generic $\mathcal{A}$-hypergeometric system and to specialize the $\mathcal{A}$-hypergeometric functions afterwards by analytic continuation. This way is substantially faster, as one can use well-known relations of hypergeometric functions for the implementation of the analytic continuation. Hence, we will consider the series representation due to \cref{thm:FeynSeries} and calculate the limit from generic $z$ to the physically relevant values as specified by the Symanzik polynomials. In particular, we will set the variables corresponding to the first Symanzik polynomial $\mathcal{U}$ equal to $1$, and certain variables related to the second Symanzik polynomial $\mathcal{F}$ may acquire equal values. In this limit the convergence behaviour of the $\Gamma$-series can change. Consider a region $D\subseteq \mathbb C^{n+1}$ where the Feynman integral \cref{eq:LeePomeranskyRepresentation} has no poles for ${\underline{\nu}}\in D$ and assume that the masses and momenta of the Feynman integral do not correspond to a Landau singularity (see \cref{ch:singularities}). Due to \cref{thm:meromorphicContinuation} the Feynman integral then has an analytic continuation and hence also the linear combination of $\Gamma$-series \cref{eq:LinearCombinationGammaSeries} has a (finite) analytic continuation. Thus, in the limit from generic to physical values only two issues can arise: a) every series converges separately, but they do not have a common convergence region anymore, or b) certain $\Gamma$-series diverge, but the linear combination is still finite. In the first case a) the convergence criteria for the effective variables of the $\Gamma$-series $y = z_{\bar\sigma} z_\sigma^{-\mathcal{A}_\sigma^{-1} \mathcal{A}_{\bar{\sigma}}}$ contradict each other for the different simplices $\sigma\in \widehat{\mathcal T}$ due to the specialization of the variables $z$. In the second case b) certain variables $y_j$ become constant (usually equal to $1$) after the limit to physical values of $z$, which can be outside of the convergence region. In practice, problem a) can usually be avoided by the choice of an appropriate triangulation (see \cref{lem:heightEffectiveVars}). However, in the following we will discuss this case as well, since it is solved analogously to b).
Note that the analytic continuation with respect to the variables $z$ of the Feynman integral with ${\underline{\nu}}\in D$ is unique as long as the variables $z$ are in the Euclidean region $\Re(z_j)>0$. Therefore, we will assume Euclidean kinematics in this chapter in order that also the $\Gamma$-series have a unique analytic continuation. Outside of the Euclidean region, the Feynman integral will be potentially multi-valued, which will be discussed in \cref{ch:singularities}.
Fortunately, analytic continuations of $\Gamma$-series can be calculated explicitly by means of well-known transformation formulas of standard hypergeometric functions. We can always arrange the $\Gamma$-series in an arbitrary way from inner to outer power series. Therefore, we can sort these Horn hypergeometric series such that all series with certain convergence issues appear as the innermost series and focus on them separately. Note that each of these partial series is itself a Horn hypergeometric series. Let us start by discussing the case in which this inner series is a power series in a single variable. We can then distinguish between the situation where we have to transform $y_j \mapsto \frac{1}{y_j}$ to solve issues coming from case a) and the situation where we have to transform $y_j \mapsto y_j-1$ to solve problem b). Hence, we have to look at transformation formulas of hypergeometric functions of those types. Relationships between hypergeometric functions have always played an important role in Feynman calculus, and we refer exemplarily to \cite{KniehlFindingNewRelationships2012}.\bigskip
We will start with the simplest, but also most often appearing case. Hence, consider that the innermost series where convergence issues appear is of the type of a ${_2}F_1$ hypergeometric function. Those transformation formulas are well known \cite[sec. 15.8]{OlverNISTHandbookMathematical2010}
\begin{align}
\HypF{a,b}{c}{t} &= \frac{\Gamma(c)\Gamma(b-a)}{\Gamma(b)\Gamma(c-a)} (-t)^{-a} \HypF{a,a-c+1}{a-b+1}{\frac{1}{t}} \nonumber \\
& + \frac{\Gamma(c)\Gamma(a-b)}{\Gamma(a)\Gamma(c-b)} (-t)^{-b} \HypF{b,b-c+1}{b-a+1}{\frac{1}{t}} \label{eq:2F1trafoA} \\
\HypF{a,b}{c}{t} &= \frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)} \HypF{a,b}{a+b-c+1}{1-t} \nonumber \\
& + (1-t)^{c-a-b} \frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)} \HypF{c-a,c-b}{c-a-b+1}{1-t} \point\label{eq:2F1trafoB}
\end{align}
Note that the analytic continuations by means of those formulas are again of Horn hypergeometric type, as the prefactors of \cref{eq:2F1trafoA} and \cref{eq:2F1trafoB} consist of $\Gamma$-functions.
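Since these connection formulas are central for the analytic continuation, a quick numerical sanity check can be reassuring. The following illustrative snippet (assuming \softwareName{SciPy}; the parameter point is arbitrary, chosen such that $c-a-b\notin\mathbb Z$) verifies \cref{eq:2F1trafoB}:
\begin{verbatim}
from scipy.special import gamma, hyp2f1

a, b, c, t = 0.3, 1.7, 2.4, 0.35
lhs = hyp2f1(a, b, c, t)
rhs = (gamma(c)*gamma(c-a-b)/(gamma(c-a)*gamma(c-b)) * hyp2f1(a, b, a+b-c+1, 1-t)
       + (1-t)**(c-a-b) * gamma(c)*gamma(a+b-c)/(gamma(a)*gamma(b))
       * hyp2f1(c-a, c-b, c-a-b+1, 1-t))
print(lhs, rhs)   # both values agree up to floating-point accuracy
\end{verbatim}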
We will illustrate the application of those transformation formulas with an example.
\begin{example}
For the $2$-loop sunset graph with two different masses in dimensional regularization, inter alia there appears the $\Gamma$-series
\begin{align}
\phi_2 =& \sum_{k\in\mathbb N_0^4} (1-\epsilon)_{k_3+k_4} (\epsilon)_{k_1+2 k_2+k_3} (\epsilon -1)_{-k_1-k_2+k_4} (2-2 \epsilon)_{k_1-k_3-k_4} \nonumber \\
&\qquad \frac{1}{k_1!\, k_2!\, k_3!\, k_4!} \left(-\frac{z_1 z_6}{z_5 z_2}\right)^{k_1} \!\left(-\frac{z_4 z_6}{z_5^2}\right)^{k_2} \! \left(-\frac{z_2 z_7}{z_3 z_5}\right)^{k_3} \! \left(-\frac{z_2 z_8}{z_3 z_6}\right)^{k_4} \label{eq:example2MassesSunsetHypTrafoPhi2}
\end{align}
where one has to consider the limit $(z_1,z_2,z_3,z_4,z_5,z_6,z_7,z_8)\rightarrow (1,1,1,m_2^2,m_1^2+m_2^2+p_1^2,m_1^2,m_2^2,m_1^2)$. For simplicity, we dropped prefactors in \cref{eq:example2MassesSunsetHypTrafoPhi2} and $(a)_n = \Gamma(a+n)/\Gamma(a)$ denotes the Pochhammer symbol as before. In this limit we will have $\left(-\frac{z_2 z_8}{z_3 z_6}\right)^{k_4} \rightarrow \left(-1\right)^{k_4}$, which is not in the convergence region for small values of $\epsilon>0$ any more. Therefore, we evaluate the $k_4$ series carefully and write
\begin{align}
\phi_2 &= \lim_{t\rightarrow 1} \sum_{(k_1,k_2,k_3)\in\mathbb N_0^3} (1-\epsilon)_{k_3} (\epsilon-1)_{-k_1-k_2} (2-2 \epsilon)_{k_1-k_3} (\epsilon)_{k_1+2 k_2+k_3} \frac{1}{k_1!\, k_2!\, k_3!} \nonumber \\
& \qquad\left(-y_1\right)^{k_1} \left(-y_1 y_2\right)^{k_2} \left(-y_2\right)^{k_3} \HypF{-\epsilon+k_3+1,\epsilon-k_1-k_2-1}{2 \epsilon-k_1+k_3-1}{t}
\end{align}
where $y_i=\frac{ m_i^2}{m_1^2+m_2^2+p_1^2}$. With the transformation formula \cref{eq:2F1trafoB} for the ${_2}F_1$ hypergeometric function, one can split the series in a convergent and a divergent part
\begin{align}
&\phi_2 = \!\sum_{(k_1,k_2,k_3)\in\mathbb N_0^3} \! \frac{\Gamma (k_2+2 \epsilon -1) \Gamma (-k_1+k_3+2 \epsilon -1) }{ \Gamma (-k_1+3 \epsilon -2) \Gamma (k_2+k_3+\epsilon )} (1-\epsilon )_{k_3} (\epsilon -1)_{-k_1-k_2} (2-2 \epsilon )_{k_1-k_3} \nonumber \\
& \quad (\epsilon )_{k_1+2 k_2+k_3} \frac{(-y_1)^{k_1} (-y_2)^{k_3} (-y_1 y_2)^{k_2}}{k_1!\, k_2!\, k_3!} + \lim_{t\rightarrow 1} \sum_{(k_1,k_2,k_3,k_4)\in\mathbb N_0^4} \frac{(1-t)^{k_2+k_4+2 \epsilon -1} }{k_1!\, k_2!\, k_3!\, k_4! } \nonumber \\
& \quad \frac{\Gamma (1 - 2\epsilon - k_2) \Gamma (2\epsilon - 1 - k_1 + k_3) \Gamma (\epsilon + k_2 + k_3 + k_4) \Gamma (2\epsilon + k_2) \Gamma (3\epsilon - 2 - k_1 + k_4)}{ \Gamma (1 - \epsilon + k_3) \Gamma (-1 + \epsilon - k_1 - k_2) \Gamma (\epsilon + k_2 + k_3) \Gamma(2\epsilon + k_2 + k_4)\Gamma (- 2 + 3\epsilon - k_1)} \nonumber \\
& \quad (\epsilon )_{k_1+2 k_2+k_3} (1-\epsilon )_{k_3} (\epsilon -1)_{-k_1-k_2} (2-2 \epsilon )_{k_1-k_3} (-y_1)^{k_1} (-y_2)^{k_3} (-y_1 y_2)^{k_2} \nonumber \allowdisplaybreaks \\
& =\sum_{(k_1,k_2,k_3)\in\mathbb N_0^3} \frac{\Gamma (k_2+2 \epsilon -1) \Gamma (-k_1+k_3+2 \epsilon -1) }{ \Gamma (-k_1+3 \epsilon -2) \Gamma (k_2+k_3+\epsilon )} (1-\epsilon )_{k_3} (\epsilon -1)_{-k_1-k_2} (2-2 \epsilon )_{k_1-k_3} \nonumber \\
& \quad (\epsilon )_{k_1+2 k_2+k_3} \frac{(-y_1)^{k_1} (-y_2)^{k_3} (-y_1 y_2)^{k_2}}{k_1!\, k_2!\, k_3!} + \lim_{t\rightarrow 1} (1-t)^{2 \epsilon -1} \!\sum_{(k_1,k_3)\in\mathbb N_0^2}\! \frac{ (-y_1)^{k_1} (-y_2)^{k_3}}{k_1!\, k_3!} \nonumber \\
& \quad \frac{\Gamma (-2 \epsilon +1) \Gamma (-k_1+k_3+2 \epsilon -1)}{ \Gamma (1-\epsilon) \Gamma (\epsilon -1)} (\epsilon )_{k_1+k_3} (2-2 \epsilon )_{k_1-k_3} \point
\end{align}
Note that Horn hypergeometric functions are trivial to evaluate at zero, where we have $\HypF{a,b}{c}{0}=1$. Comparing the divergent part with the other $\Gamma$-series which occur in the calculation of the sunset graph with two masses, one finds another divergent series which exactly cancels this divergence. This cancellation must always occur, since the linear combination has to be finite.
\end{example}
Although these identities of the ${_2}F_1$ functions fix most issues appearing in practice, more complicated situations can potentially arise. The next more general type of innermost series are the ${}_{p+1}F_p$ hypergeometric functions, whose transformation formulas were considered in \cite{BuhringGeneralizedHypergeometricFunctions1992}. We have the recursion
\begin{align}
\HypFpq{p+1}{p}{a_1,\ldots,a_{p+1}}{b_1,\ldots,b_p}{t} = \sum_{k=0}^\infty \tilde A_k^{(p)}(a,b)\ \HypF{a_1,a_2}{|b|-|a|+a_1+a_2+k}{t} \label{eq:pFqReduction}
\end{align}
where $\tilde A_k^{(p)}(a,b)$ are rational functions in $a$ and $b$. Expressions for these functions $\tilde A_k^{(p)}(a,b)$ can be found in \cite[sec. 2]{BuhringGeneralizedHypergeometricFunctions1992}. Hence, the case where transformation formulas of ${}_{p+1}F_p$ functions are needed can be reduced to the ${_2}F_1$ case.\bigskip
As a next step we consider any Horn hypergeometric function depending on one variable, where the summation index appears only as an integer multiple in the $\Gamma$-functions. By the identities of Pochhammer symbols
\begin{align}
(a)_{-n} &= (-1)^n \frac{1}{(1-a)_n} \qquad \text{for } n\in\mathbb Z \label{eq:PochhammerIdentityA} \\
(a)_{m+n} &= (a)_m (a+m)_n \label{eq:PochhammerIdentityB} \\
(a)_{mn} &= m^{mn} \prod_{j=0}^{m-1} \left(\frac{a+j}{m}\right)_n \qquad \text{for } m\in\mathbb Z_{>0} \label{eq:PochhammerIdentityC}
\end{align}
one can always bring such a Horn hypergeometric series into the form of a ${}_pF_q$ function. Due to \cref{lem:HomogenityRelationsForAa}, in particular equation \cref{eq:AA1a}, only functions with $p=q+1$ will appear, which relates back to the previously discussed situation.
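For instance, for $m=2$ the identity \cref{eq:PochhammerIdentityC} reads
\begin{align}
(a)_{2n} = 4^n \left(\frac{a}{2}\right)_n \left(\frac{a+1}{2}\right)_n \comma
\end{align}
so a factor $\Gamma(a+2k)$ in the summand can be replaced by $\Gamma(a)\, 4^k \left(\frac{a}{2}\right)_k \left(\frac{a+1}{2}\right)_k$, which contributes the Pochhammer parameters $\frac{a}{2}$ and $\frac{a+1}{2}$ to the resulting ${}_pF_q$ function, with the factor $4^k$ absorbed into its argument.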
In the most general case we expect Horn hypergeometric functions in many variables. Again, we will assume that summation indices only appear as linear integer combinations, which is always the case for unimodular triangulations and which can be achieved for non-unimodular triangulations as well by a convenient splitting of the series. The analytic continuation of those functions can be accomplished iteratively, as the analytically continued functions are again of hypergeometric type. This is due to the rationality of the coefficients in \cref{eq:2F1trafoA}, \cref{eq:2F1trafoB} and \cref{eq:pFqReduction}.\bigskip
Therefore, one can derive convergent series representations also for physical Feynman integrals by considering convenient transformation formulas of hypergeometric functions. Since those transformation formulas amount to a simple replacement of $\Gamma$-functions, the analytic continuation can be carried out in an algorithmically efficient manner.\bigskip
We want to remark that the limit from generic to specific values of $z$ may reduce the dimension of $\operatorname{Sol}(H_\mathcal{A}({\underline{\nu}}))$. Hence, it may happen that $\Gamma$-series become linearly dependent after such a limit (as e.g.\ in \cref{sec:ExampleSunset}). However, this is no general obstacle and shows only that the series representations are not necessarily optimal, in the sense that they are not always the simplest possible series representation of the given Feynman integral. In other words, the holonomic rank of the restriction ideal may differ from that of the original ideal. We refer to \cite{BitounFeynmanIntegralRelations2019}, where a similar decrease of the dimension of a solution space was described, which also occurred in the specialization from generic to physical values of $z$.
\section{Laurent expansion of hypergeometric series} \label{sec:epsilonExpansion}
As outlined in \cref{sec:DimAnaReg}, one has to renormalize Feynman integrals to fix certain ambiguities. In addition, renormalization also removes the divergences in Feynman integrals. Hence, in this process it is necessary for most renormalization schemes to handle the singularities of the Feynman integrals by regularization. Therefore, in the widely used dimensional and analytic regularization (see \cref{sec:DimAnaReg}) one is rather interested in the Laurent expansion of a Feynman integral around certain integer values of ${\underline{\nu}}$ than in the Feynman integral itself. Namely, in dimensional regularization we assume all indices to be integer values and consider the spacetime dimension to be close to an integer, i.e.\ we set $\nu_i\in\mathbb Z_{>0}$ for $i=1,\ldots,n$, $\nu_0=\frac{d}{2} = \frac{D}{2} - \epsilon$ with $\frac{D}{2}\in\mathbb Z_{>0}$ (usually $D=4$) and expand around $\epsilon =0$. In analytic regularization one would instead fix $\nu_0 = \frac{D}{2}\in\mathbb Z_{>0}$ and introduce a single parameter $\epsilon$ controlling the distance of $\nu\in(\mathbb C\setminus\mathbb Z)^n$ to integer values.\bigskip
Due to \cref{thm:meromorphicContinuation} one can relate this task to the Taylor expansion of the hypergeometric series representation. Thus, one simply has to differentiate the Horn hypergeometric series with respect to their parameters ${\underline{\nu}}$. As pointed out in \cite{BytevDerivativesHorntypeHypergeometric2017}, those derivatives of Horn hypergeometric series are again Horn hypergeometric series of higher degree. By the identities of Pochhammer symbols \cref{eq:PochhammerIdentityA}, \cref{eq:PochhammerIdentityB} and \cref{eq:PochhammerIdentityC} one can reduce all\footnote{We will assume for simplicity that all summation indices in \cref{eq:SeriesRepresentationFeynmanIntegral} appear only as integer combinations. This is true for all unimodular triangulations due to $\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\in\mathbb Z^{(n+1)\times r}$. For non-unimodular triangulations one can always find a partition of the summation region to arrive at this situation, similar to \cref{ex:SeriesRepresentationNonUnimod}.} derivatives to two cases \cite{BytevDerivativesHorntypeHypergeometric2017}
\begin{align}
\frac{\partial}{\partial \alpha} \sum_{n=0}^\infty f(n) (\alpha)_n t^n &= t \sum_{k=0}^\infty\sum_{n=0}^\infty f(n+k+1) \frac{(\alpha+1)_{n+k} (\alpha)_k}{(\alpha+1)_k}\ t^{n+k} \label{eq:HornDerivativeA} \\
\frac{\partial}{\partial \alpha} \sum_{n=0}^\infty f(n) (\alpha)_{-n} t^n &= -t \sum_{k=0}^\infty\sum_{n=0}^\infty f(n+k+1) \frac{(\alpha)_{-n-k-1} (\alpha)_{-k-1}}{(\alpha)_{-k}}\ t^{n+k} \point \label{eq:HornDerivativeB}
\end{align}
Thus, Horn hypergeometric functions do not only appear as series representations of Feynman integrals, but also in every coefficient of the Laurent expansion of those Feynman integrals. Therefore, the class of Horn hypergeometric functions is sufficient to describe all Feynman integrals as well as their Laurent expansions. \bigskip
However, it seems much more efficient to expand the $\Gamma$-functions in the series representation around $\epsilon = 0$ instead of determining the derivatives by \cref{eq:HornDerivativeA} and \cref{eq:HornDerivativeB}. For this purpose we want to introduce the (unsigned) \textit{Stirling numbers of the first kind}. Those numbers $\StirlingFirstSmall{n}{k}$ \glsadd{StirlingFirst}\glsunset{StirlingFirst} count the permutations of $\{1,\ldots,n\}$ with precisely $k$ cycles. We can define them recursively by
\begin{align}
\StirlingFirst{n+1}{k} = n \StirlingFirst{n}{k} + \StirlingFirst{n}{k-1} \quad\text{with}\quad \StirlingFirst{0}{0} = 1, \quad \StirlingFirst{n}{0} = \StirlingFirst{0}{k} = 0 \quad \text{for } n,k\in\mathbb N_{>0} \point \label{eq:StirlingFirstRecursion}
\end{align}
Stirling numbers are related to many other functions. For example, they can be expressed by the $Z$-sums introduced in \cite{MochNestedSumsExpansion2002}, which are in this case also known as Euler-Zagier sums appearing in the study of multiple zeta values \cite{ZagierValuesZetaFunctions1994}. We will consider this relation in more detail in \cref{sec:ManipulationSeries}. Furthermore, the generating functions of the Stirling numbers are the Pochhammer symbols
\begin{align}
(\alpha)_n = \prod_{j=0}^{n-1} (\alpha+j) = \sum_{k=0}^n \StirlingFirst{n}{k} \alpha^k \label{eq:StirlingPochhammer} \point
\end{align}
In particular, we have for $n\in\mathbb N_{>0}$
\begin{align}
&\StirlingFirst{n}{0} = 0\, , \quad \StirlingFirst{n}{1} = (n-1)!\, , \quad \StirlingFirst{n}{2} = (n-1)!\, H_{n-1}\, , \quad \StirlingFirst{n}{3} = \frac{(n-1)!}{2!} \left(H_{n-1}^2 - H_{n-1}^{(2)}\right) \label{eq:StirlingPartPos}
\end{align}
where $\gls{harmonicNumber} := \sum_{i=1}^n \frac{1}{i^k}$ denotes the generalized harmonic number of order $k$ and $H_{n} := H_n^{(1)}$. We collected further relations in \cref{sec:StirlingNumbers}. Those relations can be derived efficiently by a recursion with respect to $k$ \cite{AdamchikStirlingNumbersEuler1997}
\begin{align}
&\StirlingFirst{n}{k} = \frac{(n-1)!}{(k-1)!}\, w(n,k-1) \nonumber \\
&\text{with } w(n,0) = 1 \, , \quad w(n,k) = \sum_{i=0}^{k-1} (1-k)_i\, H_{n-1}^{(i+1)}\, w(n,k-1-i) \point \label{eq:StirlingNumbersRecursionPos}
\end{align}
By means of \cref{eq:StirlingPochhammer} one can extend the Stirling numbers also to negative $n$. In this case we will have \cite{BransonExtensionStirlingNumbers1996, KnuthTwoNotesNotation1992}
\begin{align}
\StirlingFirst{-n}{k} = \frac{(-1)^n}{n!} \sum_{i=1}^n \frac{(-1)^{i+1}}{i^k} \begin{pmatrix} n \\ i \end{pmatrix} \quad\text{for } n\in\mathbb N_{\geq 0}, k\in\mathbb N_{>0} \label{eq:StirlingFirstNegativeDef}
\end{align}
and in particular
\begin{align}
&\StirlingFirst{-n}{0} = \frac{(-1)^n}{n!}\, , \quad \StirlingFirst{-n}{1} = \frac{(-1)^n}{n!} H_n\, , \quad \StirlingFirst{-n}{2} = \frac{(-1)^n}{2!\,n!} \left(H_{n}^2 + H_{n}^{(2)}\right) \label{eq:StirlingPartNeg} \point
\end{align}
This list is continued in \cref{sec:StirlingNumbers}. The Stirling numbers with negative and positive integers $n$ are the natural analytic continuations of each other. In particular, one can show that definition \cref{eq:StirlingFirstNegativeDef} satisfies the recursion \cref{eq:StirlingFirstRecursion}. Hence, when writing \cref{eq:StirlingPartNeg} by means of the more general functions $n! = \Gamma(n+1)$, $H_n=\psi^{(0)} (n+1) + \gamma$ and $H_n^{(r)} = \zeta(r) - \frac{(-1)^r}{(r-1)!} \psi^{(r-1)} (n+1)$ for $r>1$, we obtain \cref{eq:StirlingPartPos} in the corresponding limits. By these replacements we can also continue the Stirling numbers to complex arguments. However, such a generalization is not necessary in the following approach.\bigskip
By those definitions we can expand the $\Gamma$-functions with respect to a small parameter $|\epsilon|\ll 1$ appearing in the (unimodular) series representation \cref{eq:SeriesRepresentationFeynmanIntegral} by
\begin{align}
\Gamma(n+\epsilon) = \Gamma(\epsilon) \left( \StirlingFirst{n}{0} + \StirlingFirst{n}{1} \epsilon + \StirlingFirst{n}{2} \epsilon^2 + \StirlingFirst{n}{3} \epsilon^3 + \ldots \right) \qquad n\in\mathbb Z \label{eq:GammaSeriesExpansionStirling}
\end{align}
which works for positive as well as for negative integers $n$. Equation \cref{eq:GammaSeriesExpansionStirling} is a direct consequence of \cref{eq:StirlingPochhammer}. A similar expansion in terms of $S$-sums and $Z$-sums was suggested in \cite{MochNestedSumsExpansion2002}, where the cases $n\in\mathbb Z_{>0}$ and $n\in \mathbb Z_{<0}$ were treated separately. The expansion \cref{eq:GammaSeriesExpansionStirling} is especially appropriate for a numerical evaluation, as we do not have to distinguish between positive and negative cases in a symbolic preprocessing. We will consider both aspects below: a numerical evaluation of those series in \cref{sec:numerics} and a symbolic evaluation by means of $Z$-sums in \cref{sec:ManipulationSeries}. In \cref{sec:StirlingNumbers} we have collected further useful facts about Stirling numbers.
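As a quick consistency check of \cref{eq:GammaSeriesExpansionStirling}, take $n=3$: the expansion terminates and reproduces the exact relation
\begin{align}
\Gamma(3+\epsilon) = \Gamma(\epsilon)\, (\epsilon)_3 = \Gamma(\epsilon)\, \epsilon (1+\epsilon)(2+\epsilon) = \Gamma(\epsilon) \left( 2\epsilon + 3 \epsilon^2 + \epsilon^3 \right) \comma
\end{align}
in agreement with $\StirlingFirstSmall{3}{0}=0$, $\StirlingFirstSmall{3}{1}=2!=2$, $\StirlingFirstSmall{3}{2}=2!\, H_2=3$ and $\StirlingFirstSmall{3}{3}=1$ from \cref{eq:StirlingPartPos}. For negative $n$ the expansion does not terminate.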
\section{Manipulation of series} \label{sec:ManipulationSeries}
In this section we want to collect several aspects of how to treat multivariate series as appearing in \cref{thm:FeynSeries}. However, this overview is not intended to be exhaustive, and we will only give a small number of useful relations together with references for further information and algorithms. In particular, the methods presented below do not claim to provide an efficient rewriting of the Horn type series into multiple polylogarithms and related functions for more complex examples.\bigskip
As seen in the previous \cref{sec:epsilonExpansion}, Stirling numbers $\StirlingFirstSmall{n}{k}$ behave slightly differently for positive and negative $n\in\mathbb Z$. Therefore, it will be necessary for a symbolic processing of the Laurent coefficients of the Feynman integral to determine the signs in the Stirling numbers. In order to avoid fixed integer shifts in the argument of Stirling numbers it is advisable to normalize the $\Gamma$-functions before introducing the Stirling numbers. Thus, we will shift $\Gamma$-functions by means of $\Gamma(n+c) = (n)_c \Gamma(n)$ before applying \cref{eq:GammaSeriesExpansionStirling}. Thereby, we are left with Stirling numbers depending only on a linear combination of summation indices. In order to generate definite signs of these linear combinations, we can use series rearrangement techniques. To determine for example the signs in a Stirling number $\StirlingFirstSmall{i-j}{k}$ we can split a double series into the cases $i<j$, $i>j$ and $i=j$, i.e.\
\begin{align}
\sum_{j=a}^n \sum_{i=a}^n f(i,j) = \sum_{j=a}^n \sum_{i=a}^{j-1} f(i,j) + \sum_{i=a}^n \sum_{j=a}^{i-1} f(i,j) + \sum_{j=a}^n f(j,j) \label{eq:splittingNested}
\end{align}
which holds for $n\in\mathbb N \cup \{\infty\}$. The structure of relations like \cref{eq:splittingNested} is also known as a \textit{quasi-shuffle product} \cite{WeinzierlFeynmanIntegrals2022}. In particular, for infinite sums we can also rearrange the summands instead of splitting the summation region, which results in the following identities \cite{SrivastavaTreatiseGeneratingFunctions1984}
\begin{align}
\sum_{j=0}^\infty \sum_{i=0}^\infty f(i,j) &= \sum_{j=0}^\infty \sum_{i=0}^j f(i,j-i) \\
\sum_{j=0}^\infty \sum_{i=0}^j f(i,j) &= \sum_{j=0}^\infty \sum_{i=0}^\infty f(i,j+i) = \sum_{i=0}^\infty \sum_{j=0}^i f(i-j,i) \point
\end{align}
More complicated situations of indeterminate signs in Stirling numbers can be resolved by applying \cite{SrivastavaTreatiseGeneratingFunctions1984}
\begin{align}
\sum_{j=0}^\infty \sum_{i=0}^\infty f(i,j) &= \sum_{j=0}^\infty \sum_{i=0}^{\left\lfloor\frac{j}{m}\right\rfloor} f(i,j-mi)
\end{align}
where $\gls{floor}$ denotes the greatest integer less than or equal to $x$ and $m\in\mathbb N_{>0}$. All those identities can be used recursively to disentangle more involved combinations of indeterminate signs in Stirling numbers. \bigskip
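Such rearrangements are easy to get wrong in the index bookkeeping; a minimal Python check (purely illustrative) of \cref{eq:splittingNested} for finite $n$ reads
\begin{verbatim}
def f(i, j):
    return 1.0 / (1.0 + i + 2 * j + i * j)   # an arbitrary test summand

n, a = 7, 0
lhs = sum(f(i, j) for j in range(a, n + 1) for i in range(a, n + 1))
rhs = (sum(f(i, j) for j in range(a, n + 1) for i in range(a, j))
       + sum(f(i, j) for i in range(a, n + 1) for j in range(a, i))
       + sum(f(j, j) for j in range(a, n + 1)))
print(abs(lhs - rhs) < 1e-12)   # True
\end{verbatim}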
It is often also worthwhile to split off the first summands of each series, since they behave differently from the remaining series due to the positive Stirling numbers \cref{eq:StirlingPartPos}. This can be done by an application of
\begin{align}
\sum_{N\geq i_k \geq \ldots \geq i_1 \geq 0} \hspace{-1em} f(i_1,\ldots,i_k) \hspace{.5em} &= \hspace{-1em} \sum_{N\geq i_k \geq \ldots \geq i_1 \geq 1} \hspace{-1em} f(i_1,\ldots,i_k) \hspace{.5em} + \hspace{-1em} \sum_{N\geq i_k \geq \ldots \geq i_2 \geq 1} \hspace{-1em} f(0,i_2,\ldots,i_k) \nonumber \\
& + \ldots + \sum_{N\geq i_k\geq 1} f(0,\ldots,0,i_k) + f(0,\ldots,0) \point
\end{align}
After these steps we arrive at nested sums containing three types of factors in their summands: a) powers of variables, b) Horn rational expressions in the summation indices, stemming from the factorials and the normalization of $\Gamma$-functions, and c) harmonic numbers. We will call an expression $c(k)$ \textit{Horn rational} in $k$ when $\frac{c(k+e_i)}{c(k)}$ is a rational function in $k$ for all directions $i$ (see also \cref{sec:AHypSystems}). There are different techniques known to simplify those sums. For example, Zeilberger's algorithm, also known as creative telescoping \cite{ZeilbergerMethodCreativeTelescoping1991, Petkovsek1996}, can be used to find expressions for finite sums consisting of certain Horn rational summands. In principle, one can use this technique also for multivariate sums. However, it is then not very efficient \cite{ZeilbergerMethodCreativeTelescoping1991}. Hence, this algorithm is mostly restricted to a single summation where no harmonic numbers are involved.
Certain cases can also be solved by introducing integral expressions of the harmonic numbers, see e.g.\ \cite{AdamchikStirlingNumbersEuler1997}. Further, we want to refer to \cite{BorweinEvaluationsKfoldEuler1996} where different techniques are used to find expressions for those sums.\bigskip
Another approach was suggested in \cite{MochNestedSumsExpansion2002}, where so-called $Z$-sums were introduced as nested sums
\begin{align}
\gls{Zsum} := \sum_{n \geq i_k > \ldots > i_1 > 0} \frac{y_1^{i_1} \cdots y_k^{i_k}}{i_1^{m_1} \cdots i_k^{m_k}}
\end{align}
where $n$ can also be infinite and $m_j\in\mathbb N_{\geq 0}$. Note that we adapted the notation from \cite{MochNestedSumsExpansion2002} slightly. On the one hand, those $Z$-sums can be understood as generalizations of multiple polylogarithms\footnote{Note that the order of variables in multiple polylogarithms as well as in multiple zeta functions differs across the literature. We follow the definitions of \cite{ZagierValuesZetaFunctions1994, GoncharovMultiplePolylogarithmsMixed2001}.}
\begin{align}
\Zsum{\infty}{m_1, \ldots, m_k}(y_1, \ldots, y_k) = \operatorname{Li}_{m_1, \ldots, m_k} (y_1, \ldots, y_k) \point
\end{align}
On the other hand, they also generalize the so-called Euler-Zagier sums, and we write $\gls{Zsum1} := \Zsum{n}{m_1, \ldots, m_k}(1, \ldots, 1)$ for short. Euler-Zagier sums \cite{ZagierValuesZetaFunctions1994} appear in the study of multiple $\zeta$-values due to $\Zsum{\infty}{m_1, \ldots, m_k} = \zeta(m_1,\ldots,m_k)$.
Stirling numbers as defined in \cref{sec:epsilonExpansion} are special cases of Euler-Zagier sums, and we have
\begin{align}
\StirlingFirst{n}{k+1} = (n-1)! \ \Zsum{n-1}{\underbrace{\scriptstyle 1,\ldots,1}_{k}} \quad\text{for } n,k\in\mathbb Z_{>0} \point
\end{align}
Stirling numbers with negative first arguments $n$ will correspond to $S$-sums, which were also defined in \cite{MochNestedSumsExpansion2002}, see \cref{sec:StirlingNumbers} for details. Note that $Z$-sums are also closely related to harmonic numbers $H_n^{(k)} = \Zsum{n}{k}$ and their generalizations \cite{VermaserenHarmonicSumsMellin1999}.
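For numerical experiments it can be convenient to have the truncated nested sums available directly. The following Python sketch (illustrative only; it merely sums the definition and does not implement any of the dedicated algorithms mentioned below) evaluates a $Z$-sum term by term.
\begin{verbatim}
def zsum(n, m, y):
    # Z(n; m_1,...,m_k; y_1,...,y_k), recursing on the outermost index i_k
    if not m:                            # depth k = 0
        return 1.0
    return sum(y[-1]**i / i**m[-1] * zsum(i - 1, m[:-1], y[:-1])
               for i in range(1, n + 1))

print(zsum(10, [1], [1.0]))              # harmonic number H_10
print(zsum(2000, [2], [1.0]))            # approaches zeta(2) = pi^2/6
print(zsum(400, [1, 0], [1.0, 0.5]))     # truncation of Li_{1,0}(1, 1/2)
\end{verbatim}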
As pointed out in \cite{MochNestedSumsExpansion2002}, $Z$-sums obey a nice algebraic structure for a fixed upper bound $n$. In particular, products of $Z$-sums with the same upper bound $n$ can be expressed in terms of single $Z$-sums by an iterative use of \cref{eq:splittingNested}. Moreover, certain sums over (Horn) rational functions, powers of variables and $Z$-sums can be converted into linear combinations of $Z$-sums as well \cite{MochNestedSumsExpansion2002}. Implementations of those algorithms are available in \cite{WeinzierlExpansionHalfintegerValues2004, MochXSummerTranscendentalFunctions2006}. It should be mentioned that the known algorithms only cover a small part of the possible situations.
Hence, apart from very simple Feynman integrals it is currently not possible to convert Horn type hypergeometric series from \cref{eq:SeriesRepresentationFeynmanIntegral} efficiently into multiple polylogarithms and related functions by existing algorithms. Nevertheless, we will provide a small set of tools for this problem in \cref{sec:StirlingNumbers}. However, very efficient algorithms are already known \cite{PanzerFeynmanIntegralsHyperlogarithms2015, BrownMasslessHigherloopTwopoint2009, BognerSymbolicIntegrationMultiple2012, BognerFeynmanIntegralsIterated2015} for those Feynman integrals which can be evaluated by means of multiple polylogarithms. Therefore, for symbolic expressions of the Feynman integral, the hypergeometric approach should be understood as an alternative that can be used when the current algorithms do not apply. Furthermore, the hypergeometric approach is well suited for a fast numerical evaluation.
\begin{example}
To illustrate the techniques from above in a very simple case, we want to continue \cref{ex:1loopbubbleB} and write the Laurent expansion of the Gauss hypergeometric function appearing there in terms of multiple polylogarithms. In \cref{eq:Example1loopBubble2F1} we had the hypergeometric function
\begin{align}
\HypF{1,\epsilon}{2\epsilon}{t} &= \sum_{k\geq 0} (-t)^k (\epsilon)_k\, (1-2\epsilon)_{-k} = \sum_{k\geq 0} (-t)^k \frac{2\epsilon+k}{2\epsilon} \left( \StirlingFirst{k}{0} + \StirlingFirst{k}{1} \epsilon \right. \nonumber \\
& \qquad \left. + \StirlingFirst{k}{2} \epsilon^2 + \order{\epsilon^3} \!\right) \!\left( \StirlingFirst{-k}{0} - \StirlingFirst{-k}{1} 2 \epsilon + \StirlingFirst{-k}{2} 4 \epsilon^2 + \order{\epsilon^3} \! \right) \nonumber \\
& = \frac{1}{\epsilon} c_{-1} + c_0 + c_1 \epsilon + c_2 \epsilon^2 + c_3 \epsilon^3 + \order{\epsilon^4}
\end{align}
where we used \cref{eq:PochhammerIdentityA} and introduced Stirling numbers via \cref{eq:GammaSeriesExpansionStirling} after a normalization of $\Gamma$-functions. We then find
\begin{align}
c_{-1} &= \frac{1}{2} \sum_{k\geq 0} (-t)^k k \StirlingFirst{k}{0} \StirlingFirst{-k}{0} = 0 \\
c_0 &= \sum_{k\geq 0} (-t)^k \left\{ \StirlingFirst{k}{0} \StirlingFirst{-k}{0} - k \StirlingFirst{k}{0} \StirlingFirst{-k}{1} + \frac{1}{2} k \StirlingFirst{k}{1} \StirlingFirst{-k}{0} \right\} \nonumber \\
&= 1 + \frac{1}{2} \sum_{k\geq 1} t^k = 1 + \frac{1}{2} \operatorname{Li}_0(t)
\end{align}
where we treated the first summand with $k=0$ separately and inserted the special representations of Stirling numbers from \cref{eq:StirlingPartPos} and \cref{eq:StirlingPartNeg}. By the same procedure we will obtain
\begin{align}
c_1 &= \sum_{k\geq 1} t^k \left( \frac{1}{k} - H_k + \frac{1}{2} H_{k-1} \right) = - \frac{1}{2} \sum_{k\geq 1} t^k H_{k-1} = -\frac{1}{2} \operatorname{Li}_{1,0}(1,t)
\end{align}
where we used $H_{k}^{(i)} = H_{k-1}^{(i)} + \frac{1}{k^i}$. For the next coefficient we have
\begin{align}
c_2 &= \sum_{k\geq 1} t^k \left( - \frac{2}{k} H_k + H_k^2 + H_k^{(2)} + \frac{1}{k} H_{k-1} - H_{k-1} H_k + \frac{1}{4} H_{k-1}^2 - \frac{1}{4} H_{k-1}^{(2)} \right) \nonumber \\
&= \sum_{k\geq 1} t^k \left( \frac{3}{4} H_{k-1}^{(2)} + \frac{1}{4} H_{k-1}^2 \right) = \frac{3}{4} \operatorname{Li}_{2,0}(1,t) + \frac{1}{4} \sum_{k\geq 1} t^k \left( 2 \sum_{i=1}^{k-1} \sum_{j=1}^{i-1} \frac{1}{ij} + \sum_{i=1}^{k-1} \frac{1}{i^2} \right) \nonumber \\
&= \operatorname{Li}_{2,0}(1,t) + \frac{1}{2} \operatorname{Li}_{1,1,0} (1,1,t)
\end{align}
where we additionally made use of \cref{eq:splittingNested}. By the same techniques we can continue
\begin{align}
c_3 = -\frac{1}{2} \operatorname{Li}_{1,1,1,0}(1,1,1,t) - \operatorname{Li}_{1,2,0}(1,1,t) -\operatorname{Li}_{2,1,0} (1,1,t) - 2\operatorname{Li}_{3,0} (1,t) \point
\end{align}
\end{example}
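The first coefficients can also be cross-checked numerically. The following sketch (assuming the Python library mpmath; purely illustrative) compares the truncated sums for $c_0$ and $c_1$ with the Gauss hypergeometric function evaluated at a small value of $\epsilon$.
\begin{verbatim}
from mpmath import mp, mpf, hyp2f1

mp.dps = 30
t, K = mpf('0.3'), 300

# c_0 = 1 + (1/2) sum_{k>=1} t^k,  c_1 = -(1/2) sum_{k>=1} t^k H_{k-1}
c0, c1, Hkm1 = mpf(1), mpf(0), mpf(0)
for k in range(1, K):
    c0 += t**k / 2
    c1 -= t**k * Hkm1 / 2
    Hkm1 += mpf(1) / k        # H_k, used as H_{k-1} in the next iteration

eps = mpf('1e-6')
F = hyp2f1(1, eps, 2 * eps, t)   # finite as eps -> 0 since c_{-1} = 0
print(F - c0)                    # of order eps
print((F - c0) / eps - c1)       # of order eps
\end{verbatim}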
We want to conclude this section with a remark on another promising but not yet mature method. As every coefficient in the Laurent expansion is again a Horn hypergeometric function, one can apply the machinery of $\mathcal{A}$-hypergeometric systems a second time. We refer to \cite{SadykovHypergeometricFunctionsSeveral2002} for a comprehensive treatment of Horn hypergeometric functions within $D$-modules. In comparison to the original $\mathcal{A}$-hypergeometric system for the generalized Feynman integral $H_\mathcal{A}({\underline{\nu}})$, those new $\mathcal{A}$-hypergeometric systems describe (a part of) a single coefficient in the Laurent expansion and no longer contain additional variables. Hence, these second $\mathcal{A}$-hypergeometric systems describe the problem in an efficient manner. Note further that a Horn hypergeometric series collapses to a Horn hypergeometric series of reduced depth when setting a variable equal to zero. Hence, one can once again find boundary values by means of simpler cases. However, one source of potential difficulties in this approach lies in the singularity structure of the Horn hypergeometric functions \cite{PassareSingularitiesHypergeometricFunctions2005}, i.e.\ one has to ensure that all Horn hypergeometric functions are on their principal sheets.
\section{Notes on numerical evaluation} \label{sec:numerics}
The series representations from \cref{thm:FeynSeries} are very suitable for a numerical evaluation of Feynman integrals. Therefore, we will collect in this section certain considerations on numerics from the perspective of a practitioner. As described in \cref{lem:heightEffectiveVars} we can choose a regular triangulation which is convenient for the considered kinematical region. Furthermore, if the restriction to physical values produces convergence issues for those series, we can solve these problems by the methods from \cref{sec:AnalyticContinuation}. Hence, after the Laurent expansion described in \cref{sec:epsilonExpansion} we will obtain series of the form
\begin{align}
\sum_{k\in\mathbb N^r_0} c(k) \frac{y^k}{k!} \label{eq:seriesNumerics}
\end{align}
where $|y_i|<1$. The function $c(k)$ is a product of Stirling numbers and additionally may contain certain factorials (which are Stirling numbers as well). Hence, we can calculate an approximation of those series simply by summing the first terms. Even this naive first attempt often yields surprisingly precise results after only a few terms. Classical series acceleration techniques enhance this approach and are worthwhile when $|y_i| \approx 1$. For example, one can speed up the convergence rate by Euler's transformation. This is particularly appropriate since the series \cref{eq:seriesNumerics} are typically alternating. Hence, we will use the identity
\begin{align}
&\sum_{k=0}^\infty (-1)^k f(k) = \sum_{k=0}^\infty \frac{(-1)^k}{2^{k+1}} \fd[k]{f(0)} \comma \qquad \text{where } \label{eq:EulerSummation} \\
&\fd[k]{f(0)} = \sum_{i=0}^k (-1)^i \begin{pmatrix} k \\ i \end{pmatrix} f(k-i) = \sum_{i=0}^k (-1)^{k+i} \begin{pmatrix} k \\ i \end{pmatrix} f(i) \point \label{eq:kthFiniteDifference}
\end{align}
Thereby $\gls{fd}$ is known as the \textit{$k$-th forward difference}, i.e.\ we have $\fd{f(i)} = f(i+1)-f(i)$ and $ \fd[k]{f(i)} = \Delta^{k-1} \!\left(\fd{f(i)}\right)$. Hence, we can transform an alternating series into a series where subsequent summands are suppressed by a power of $\frac{1}{2}$. The series acceleration \cref{eq:EulerSummation} can be applied iteratively to multivariate series such as \cref{eq:seriesNumerics}. Moreover, there are many more techniques for series acceleration known. We refer to \cite{CohenConvergenceAccelerationAlternating2000} for an overview. \bigskip
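A minimal Python sketch of Euler's transformation \cref{eq:EulerSummation} (purely illustrative) could look as follows; it repeatedly forms forward-difference columns and weights the entries $\fd[k]{f(0)}$ by $2^{-(k+1)}$.
\begin{verbatim}
import math

def euler_transform(f, kmax=30):
    # approximates sum_{k>=0} (-1)^k f(k) via eq. (EulerSummation)
    diffs = [f(k) for k in range(kmax + 1)]      # column Delta^0 f(0..kmax)
    total, sign = 0.0, 1.0
    for k in range(kmax + 1):
        total += sign * diffs[0] / 2**(k + 1)    # (-1)^k Delta^k f(0) / 2^(k+1)
        sign = -sign
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return total

# example: sum_{k>=0} (-1)^k / (k+1) = log(2)
print(euler_transform(lambda k: 1.0 / (k + 1)), math.log(2))
print(sum((-1)**k / (k + 1) for k in range(31)))  # naive partial sum
\end{verbatim}
With $30$ terms the transformed sum is accurate to roughly ten digits in this example, whereas the naive partial sum still carries an error of order $10^{-2}$.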
When truncating the series \cref{eq:seriesNumerics}, the first question should be how many terms we take into account. We will denote by
\begin{align}
S(K) = S(K_1,\ldots,K_r) := \sum_{k_1=0}^{K_1} \cdots \sum_{k_r=0}^{K_r} f(k)
\end{align}
the partial sum of \cref{eq:seriesNumerics} or an accelerated version of \cref{eq:seriesNumerics}, respectively. The relative increase for a summation step in direction $i$ will be denoted by
\begin{align}
t_i(K) := \left| \frac{f(K+e_i)}{S(K)}\right| \point
\end{align}
A good criterion to truncate the series is to stop the summation in direction $i$ when $t_i(K)$ becomes less than a given tolerance threshold. To avoid possible effects from the first summands it is advisable to also fix a minimal number of terms.
Another frequently used criterion for series truncation is to set a tolerance threshold for the difference of two summands $\fpd{i}{f(K)} = f(K+e_i) - f(K)$. Depending on the series, one can relate this difference threshold to an estimate of the error term \cite{CalabreseNoteAlternatingSeries1962, PinskyAveragingAlternatingSeries1978, BradenCalculatingSumsInfinite1992}.\bigskip
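For illustration, a schematic Python version of such a truncation for a two-dimensional series of the form \cref{eq:seriesNumerics} could read as follows; here the relative-increase criterion is applied to an entire new row or column of the truncation rectangle rather than to a single summand, and all names are chosen ad hoc.
\begin{verbatim}
import math

def truncated_sum_2d(term, tol=1e-12, kmin=3, kmax=400):
    # sums term(k1, k2) over a growing rectangle [0..K1] x [0..K2]
    K1 = K2 = kmin
    total = sum(term(k1, k2) for k1 in range(K1 + 1) for k2 in range(K2 + 1))
    grew = True
    while grew and max(K1, K2) < kmax:
        grew = False
        row = sum(term(K1 + 1, k2) for k2 in range(K2 + 1))  # new row, direction 1
        if abs(row / total) > tol:
            total, K1, grew = total + row, K1 + 1, True
        col = sum(term(k1, K2 + 1) for k1 in range(K1 + 1))  # new column, direction 2
        if abs(col / total) > tol:
            total, K2, grew = total + col, K2 + 1, True
    return total

# example: sum_{k1,k2} y1^k1 y2^k2 / (k1! k2!) = exp(y1 + y2)
approx = truncated_sum_2d(
    lambda a, b: 0.3**a * 0.4**b / (math.factorial(a) * math.factorial(b)))
print(approx, math.exp(0.7))
\end{verbatim}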
As mentioned before, the summands of \cref{eq:seriesNumerics} consist only of a product of Stirling numbers (see \cref{sec:epsilonExpansion}). As we need only a small number of those Stirling numbers, we suggest creating a table of those numbers instead of computing them repeatedly. This will further enhance the evaluation speed. Depending on the data types used, it may be reasonable to replace $\StirlingFirstSmall{n}{k} = (n-1)!\ \sigma (n,k-1)$ for $n,k\in\mathbb Z_{>0}$ and $\StirlingFirstSmall{-n}{k} = \frac{(-1)^n}{(n)!} \sigma(-n,k)$ for $n,k\in\mathbb Z_{\geq 0}$ and store $\sigma(n,k)$ instead. It can be shown that $\sigma(n,k)$ grows only moderately for $n\rightarrow\infty$ (see \cref{sec:StirlingNumbers}).\bigskip
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\tikzstyle{every node}=[draw, rounded corners, inner sep = 15pt, outer sep = 5pt, align=center]
\node (1) at (0,0) {construct a regular triangulation \\ choose a height according to \cref{lem:heightEffectiveVars}};
\node (2) [below=of 1] {construct the series representation \\ due to \cref{thm:FeynSeries}};
\node (3) [below=of 2] {analytic continuation w.r.t. $z$ \\ (\cref{sec:AnalyticContinuation})};
\node (4) [below=of 3] {Laurent expansion by \\ Stirling numbers (\cref{sec:epsilonExpansion})};
\node (5) [below=of 4] {numerical evaluation of series};
\draw[->, thick] (1) -- (2); \draw[->, thick] (2) -- (3); \draw[->, thick] (3) -- (4); \draw[->, thick] (4) -- (5);
\end{tikzpicture}
\caption[Structure of a numerical evaluation of Feynman integrals by means of series representations]{Structure of a numerical evaluation of Feynman integrals by means of series representations.}
\label{fig:structureEvaluation}
\end{figure}
By those methods one can construct a very efficient way to evaluate Feynman integrals numerically. \Cref{fig:structureEvaluation} shows the different steps of such an evaluation. Let us briefly comment on each step from an algorithmic point of view. The construction of a single triangulation by means of a given height vector is algorithmically a relatively simple task, and implementations can be found in various software programs (see \cref{sec:SoftwareTools} for details). We want to remark that while it is simple to construct a single triangulation, it is very intricate to determine all triangulations of a given configuration. For the next step, the construction of the series representation, one only has to calculate the Gale duals for each simplex, which is mainly an inversion of an $(n+1)\times (n+1)$ integer matrix. The complexity of the analytic continuation of those series, which is necessary for all but the simplest Feynman integrals, depends highly on the problem. However, it seems that many Feynman integrals can be handled even by the ${_2}F_1$ transformation rules \cref{eq:2F1trafoA} and \cref{eq:2F1trafoB}. Hence, this step is in most cases a simple replacement of algebraic expressions. However, an efficient implementation of this step seems to us to be the comparatively greatest difficulty in an algorithmic realization. After the analytic continuation one would usually expand the Feynman integral in a small parameter $\epsilon$ for dimensional or analytic regularization. The algorithmic effort consists of a simple replacement of algebraic expressions and a reordering of terms in powers of $\epsilon$. Further, it is worthwhile to simplify all those terms where Stirling numbers $\StirlingFirstSmall{n}{k}$ appear with $n\geq 0$. Last but not least, the computation of the series heavily depends on the kinematical region. Thus, for $|y_i|\ll 1$ the series converges very fast. However, by series acceleration one can also achieve good results when $|y_i|\approx 1$.
\section{Euler integrals and other representations} \label{sec:EulerIntegrals}
There are different ways to span the solution space $\operatorname{Sol}(H_\mathcal{A}({\underline{\nu}}))$ of an $\mathcal{A}$-hypergeometric system. In principle, we can use all types of functions which belong to the class of $\mathcal{A}$-hypergeometric functions to generate this space. Hence, we will also have various possibilities to write the Feynman integral. All these representations will have slightly different characteristics. Therefore, the investigation of the various formulations of Feynman integrals is potentially helpful for certain aspects of those integrals. When spanning $\operatorname{Sol}(H_\mathcal{A}({\underline{\nu}}))$ by $\Gamma$-series (see \cref{ssec:GammaSeries}) we will obtain the series representation of \cref{thm:FeynSeries}. Another type of functions which can be used to span the solution space are Laplace integrals $\int_\gamma \mathop{}\!\mathrm{d} x\, x^{\nu-1} e^{-\mathcal{G}(x)}$. We refer to \cite{Matsubara-HeoLaplaceResidueEuler2018} for the construction of integration cycles $\gamma$. In this reference one can also find a direct transformation between $\Gamma$-series and Laplace integrals. Hence, we can also adopt the prefactors $C_{\sigma,k}({\underline{\nu}})$ which were derived in \cref{sec:seriesRepresentationsSEC}.
A further option to generate $\operatorname{Sol}(H_\mathcal{A}({\underline{\nu}}))$ are Mellin-Barnes integrals. For Feynman integrals this was derived in \cref{thm:MellinBarnesRepresentation}. In general and more extensively, those representations were investigated in \cite{BeukersMonodromyAhypergeometricFunctions2013, BerkeschEulerMellinIntegrals2013, Matsubara-HeoMellinBarnesIntegralRepresentations2018}. We want to remark that Mellin-Barnes integrals are particularly useful when considering the monodromy of Feynman integrals \cite{BeukersMonodromyAhypergeometricFunctions2013}.\bigskip
We can also give a representation of Feynman integrals by so-called Euler integrals. Since the Feynman integral in parametric space \cref{eq:FeynmanParSpFeynman} is an Euler integral itself, this type of integral plays a central role in many approaches, see e.g.\ \cite{AbreuGeneralizedHypergeometricFunctions2019}. To find a relation between Horn type series and Euler integrals one can use Kummer's method \cite{AomotoTheoryHypergeometricFunctions2011}. Hence, we will use a convenient integral representation of the summand and interchange summation and integration. A classical result for iterated integrals going back to Kronecker and Lagrange is the following \cite[sec. 12.5]{WhittakerCourseModernAnalysis2021}
\begin{align}
\int_{\Delta^n} \mathop{}\!\mathrm{d} t \, t^{\alpha-1} f(t_1+\ldots+t_n) = \frac{\Gamma(\alpha)}{\Gamma(|\alpha|)} \int_0^1 \mathop{}\!\mathrm{d} \tau \, \tau^{|\alpha|-1} f(\tau)
\end{align}
where $t=(t_1,\ldots,t_n)\in\mathbb R^n$, $\alpha=(\alpha_1,\ldots,\alpha_n)\in\mathbb C^n$ with $\Re(\alpha_i)>0$, $|\alpha|:=\sum_{i=1}^n \alpha_i$ and $\Delta^n := \left\{ t\in\mathbb R^n \, |\, t_i > 0,\, \sum_{i=1}^n t_i < 1 \right\}$ denotes an $n$-simplex. Especially, for $f(\tau) = (1-\tau)^{\alpha_0-1}$ we will find a generalization of the beta function
\begin{align}
\int_{\Delta^n} \mathop{}\!\mathrm{d} t \, t^{\alpha-1} \left(1-\sum_{i=1}^n t_i \right)^{\alpha_0-1} = \frac{\Gamma(\underline\alpha)}{\Gamma(|\underline\alpha|)} \label{eq:DirichletIntegral}
\end{align}
where $\underline\alpha = (\alpha_0,\alpha)\in\mathbb C^{n+1}$ with $\Re(\underline\alpha_i)>0$. Once again we make use of a multi-index notation for compact expressions (see for example \cref{lem:SchwingersTrick}). We can use this integral representation of a product of $\Gamma$-functions to derive the following theorem.
\begin{theorem}[Euler integrals \cite{AomotoTheoryHypergeometricFunctions2011}] \label{thm:EulerHorn}
We have the following relation between Horn hypergeometric series and Euler integrals for all $\gamma\notin \mathbb Z$
\begin{align}
&\sum_{k\in\mathbb N_0^r} \frac{y^k}{k!} \prod_{i=1}^{n} \Gamma\!\left(\alpha_i + \sum_{j=1}^r B_{ij} k_j\right) = \nonumber\\
&\qquad \Gamma(\gamma)\Gamma(1+|\alpha|-\gamma) \int_{\Delta^n} \mathop{}\!\mathrm{d} t \, t^{\alpha-1} \left(1-\sum_{i=1}^n t_i \right)^{-\gamma} \left( 1+ \sum_{j=1}^r y_j \prod_{i=1}^n t_i^{B_{ij}} \right)^{-\gamma} \label{eq:EulerRepresentation}
\end{align}
whenever $\Re(\alpha_i)>0$ and $\sum_{i=1}^n B_{ij} = 1$ for $j=1,\ldots,r$. Furthermore, $\alpha$ is assumed to be generic such that the expressions converge absolutely.
\end{theorem}
\begin{proof}
The proof follows \cite{AomotoTheoryHypergeometricFunctions2011}. Introducing $1=\Gamma(1-\gamma-|k|) \Gamma(1-\gamma-|k|)^{-1}$ we can reformulate the left hand side of \cref{eq:EulerRepresentation} by means of \cref{eq:DirichletIntegral} to
\begin{align}
\Gamma(1+|\alpha|-\gamma) \int_{\Delta^n} \mathop{}\!\mathrm{d} t\, t^{\alpha-1} \left(1-\sum_{i=1}^n t_i \right)^{-\gamma} \sum_{k\in\mathbb N_0^r} \frac{y^k t^{Bk}}{k! \, \Gamma(1-\gamma-|k|)} \point \label{eq:EulerProof1}
\end{align}
For the remaining series in \cref{eq:EulerProof1} note that
\begin{align}
\sum_{k\in\mathbb N_0^r} \frac{x^k}{k! \, \Gamma(1-\gamma-|k|)} =\frac{(1+|x|)^{-\gamma}}{\Gamma(1-\gamma)} \label{eq:EulerProof2}
\end{align}
which can be shown by an induction over $r$. For $r=1$ this follows from the binomial theorem. The induction step can be shown again by the binomial theorem. Inserting \cref{eq:EulerProof2} into \cref{eq:EulerProof1} concludes the proof.
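For illustration, the case $r=1$ reads explicitly: by \cref{eq:PochhammerIdentityA} one has $\Gamma(1-\gamma-k)^{-1} = (-1)^k (\gamma)_k\, \Gamma(1-\gamma)^{-1}$ and therefore
\begin{align}
\sum_{k\in\mathbb N_0} \frac{x^k}{k! \, \Gamma(1-\gamma-k)} = \frac{1}{\Gamma(1-\gamma)} \sum_{k\in\mathbb N_0} \frac{(\gamma)_k}{k!} (-x)^k = \frac{(1+x)^{-\gamma}}{\Gamma(1-\gamma)}
\end{align}
by the binomial series.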
\end{proof}
The representation \cref{eq:EulerRepresentation} will simplify whenever there is a row $i$ with $B_{ij}=1$ for all $j=1,\ldots,r$, i.e.\ there is already a factor of the form $\Gamma(\gamma+|k|)$. However, when $B$ is (a part of) a Gale dual of an acyclic vector configuration this will only happen in the case $r=1$ (see \cref{ssec:GaleDuality}). Moreover, one can extend \cref{thm:EulerHorn} also to general values for $\alpha\in\mathbb C^n$. In this case the integration region $\Delta^n$ has to be regularized by twisted cycles as described in \cite[sec. 3.2.5]{AomotoTheoryHypergeometricFunctions2011}. A similar result for general $\alpha\in\mathbb C^n$ was derived in \cite[thm. 5.3]{Matsubara-HeoLaplaceResidueEuler2018} with an alternative description of integration cycles.
Note that the Horn hypergeometric series from \cref{thm:FeynSeries} always satisfy the condition $\sum_i B_{ij}=1$ due to \cref{lem:HomogenityRelationsForAa}. Hence, by rewriting the series from \cref{thm:FeynSeries} by means of \cref{thm:EulerHorn} we can also give an alternative representation of Feynman integrals in terms of Euler integrals. Since the components of $\mathcal{A}_\sigma^{-1}{\underline{\nu}}$ may also appear with negative values, the integration region of those Euler integrals can be very intricate.\bigskip
Besides the various integral representations, there are also different series solutions known for $\mathcal{A}$-hypergeometric systems. In this context we want to mention the canonical series \cite{SaitoGrobnerDeformationsHypergeometric2000}, which are power series with additional logarithmic terms. By those canonical series one can even span solution spaces of more general $D$-modules than $\mathcal{A}$-hypergeometric systems. This method is a direct generalization of Frobenius' method to the multivariate case. Hence, they are series solutions around the singular locus of the considered $D$-module. For the construction of those canonical series one has to determine the roots of the indicial ideal $\operatorname{ind}_w\!\left(H_\mathcal{A}({\underline{\nu}})\right) = R \cdot \operatorname{in}_{(-w,w)} \!\left(H_\mathcal{A}({\underline{\nu}})\right) \cap \mathbb C[\theta]$, where $\theta=(z_1\partial_1,\ldots,z_N\partial_N)$ is the Euler operator and the other objects were defined in \cref{sec:holonomicDmodules}. In the case of generic parameters ${\underline{\nu}}$ one can simplify $\operatorname{ind}_w\!\left(H_\mathcal{A}({\underline{\nu}})\right)$ to the fake indicial ideal. For details we refer to \cite{SaitoGrobnerDeformationsHypergeometric2000}. The $\Gamma$-series introduced in \cref{ssec:GammaSeries} can be seen as a special case of canonical series in which no logarithms appear.
\section{Periods and marginal Feynman integrals} \label{sec:periodMarginal}
The mechanism developed above to create series representations is not restricted to the Feynman integral in the Lee-Pomeransky representation. It is a method which can be used for all integrals of Euler-Mellin type, i.e.\ for Mellin transforms of powers of polynomials \cite{BerkeschEulerMellinIntegrals2013}. Hence, there are also further applications in the Feynman integral calculus. Since the $\mathcal{A}$-hypergeometric system for the representation \cref{eq:FeynmanParSpFeynman} is equivalent to the one of \cref{eq:LeePomeranskyRepresentation}, as pointed out in \cref{sec:FeynmanIntegralsAsAHyp}, the series representations will be the same. However, in the case where one of the Symanzik polynomials drops out of \cref{eq:FeynmanParSpFeynman} due to certain constraints on $d$ and $\nu$, we obtain a simpler series representation. The case where the superficial degree of divergence vanishes, $\omega=0$, and only the first Symanzik polynomial $\mathcal{U}$ remains is often called the \textit{period}\footnote{Here ``period'' is meant in the sense of Kontsevich and Zagier \cite{KontsevichPeriods2001}, i.e.\ a complex number whose real and imaginary parts are expressible as absolutely convergent integrals of rational functions with rational coefficients over real domains defined by polynomial inequalities with rational coefficients. Indeed, every term in the $\epsilon$-expansion of a Feynman integral is a period when restricting the kinematic invariants and masses to rational numbers \cite{BognerMathematicalAspectsFeynman2009, BognerPeriodsFeynmanIntegrals2009, BrownPeriodsFeynmanIntegrals2010}. However, the term ``period'' often refers to the special case of Feynman integrals where only the first Symanzik polynomial is included in representation \cref{eq:FeynmanParSpFeynman}.}, whereas the case $\omega=\frac{d}{2}$ where only the second Symanzik polynomial $\mathcal{F}$ remains is known as the \textit{marginal Feynman integral} \cite{BourjailyBoundedBestiaryFeynman2019}. For instance, all ``banana''-graphs are marginal for $\nu_i=1$ and $d=2$. This special circumstance was used in \cite{KlemmLoopBananaAmplitude2020}.
It is important to remark that periods and marginal Feynman integrals do not only appear in special configurations which force \cref{eq:FeynmanParSpFeynman} to drop a polynomial. Those integrals also contribute to terms of the $\epsilon$-expansion of the Feynman integral in dimensional regularization (see \cref{sec:DimAnaReg}). Hence, it is meaningful to consider periods and marginal Feynman integrals also for general $d$ and $\nu$.\bigskip
\begin{example}[Periods of $1$-loop graphs]
For example, we can calculate the periods of all $1$-loop graphs (see \cref{fig:1loopFamily}) with the previously proposed approach, i.e.\ we consider
\begin{align}
\gls{period} := \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} \delta (1-x_n) \, \mathcal{U}^{-\nu_0} \quad\text{with}\quad \mathcal{U} = \sum_{i=1}^n x_i \point
\end{align}
After evaluating the $\delta$-distribution we will find
\begin{align}
\mathcal{A} = \begin{pmatrix}
1 & \mathbf 1_{n-1}^\top \\
\mathbf 0_{n-1} & \mathbbm 1_{n-1}
\end{pmatrix} \text{ , } \quad \mathcal{A}^{-1} = \begin{pmatrix}
1 & - \mathbf 1_{n-1}^\top \\
\mathbf 0_{n-1} & \mathbbm 1_{n-1}
\end{pmatrix}
\end{align}
and $\det (\mathcal{A}) = 1$. As before, we denote by $\mathbf 0_{n-1}$, $\mathbf 1_{n-1}$ the constant zero and constant one column vectors of size $n-1$, respectively. Since $\mathcal{A}$ is a square matrix, those period integrals for $1$-loop graphs satisfy the conditions of \cref{cor:MellinBarnesRepresentation}. Therefore, we will have
\begin{align}
\mathcal P_\Gamma ({\underline{\nu}}) = \frac{\Gamma\!\left(\mathcal{A}^{-1}{\underline{\nu}}\right)}{\Gamma(\nu_0)\, |\det (\mathcal{A})|} = \frac{\Gamma\!\left(\nu_0 - \sum_{i=1}^n \nu_i\right) \Gamma(\nu)}{\Gamma(\nu_0)}
\end{align}
for all $1$-loop graphs $\Gamma$.
\end{example}
\begin{figure}[tb]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, scale=1.5]
\draw (0,0) circle (1);
\coordinate[dot] (1) at (40:1);
\coordinate[dot] (2) at (65:1);
\coordinate[dot] (3) at (90:1);
\coordinate[dot] (4) at (115:1);
\coordinate[dot] (5) at (140:1);
\draw (1) -- node[pos=0.7, above, xshift=-4pt] {\footnotesize $p_{n-1}$} (40:2);
\draw (2) -- node[pos=0.7, above, xshift=-3pt] {\footnotesize $p_{n}$} (65:2);
\draw (3) -- node[pos=0.7, left, xshift=3pt, yshift=4pt] {\footnotesize $p_{1}$} (90:2);
\draw (4) -- node[pos=0.7, below, xshift=-5pt, yshift=5pt] {\footnotesize $p_{2}$} (115:2);
\draw (5) -- node[pos=0.7, below, xshift=-4pt, yshift=2pt] {\footnotesize $p_{3}$} (140:2);
\node at (52.5:1.15) {$n$};
\node at (77.5:1.15) {$1$};
\node at (102.5:1.15) {$2$};
\node at (127.5:1.15) {$3$};
\node at (160:1.5) {$\cdot$};
\node at (168:1.5) {$\cdot$};
\node at (176:1.5) {$\cdot$};
\node at (184:1.5) {$\cdot$};
\end{tikzpicture}
\caption{Family of $1$-loop graphs with $n$ edges.} \label{fig:1loopFamily}
\end{subfigure} %
\begin{subfigure}{.45\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, scale=1.5]
\draw (0,0) circle (1);
\coordinate[dot] (A) at (-1,0);
\coordinate[dot] (B) at (1,0);
\draw (A) -- node[above,midway] {$p$} ++(-0.7,0);
\draw (B) -- node[above,midway] {$p$} ++(0.7,0);
\draw (0,-0.4) ++(158.2:1.077) arc (158.2:21.8:1.077);
\draw (0,+0.4) ++(338.2:1.077) arc (338.2:201.8:1.077);
\node at (0,1.15) {$1$};
\node at (0,0.83) {$2$};
\node at (0,0.2) {$\vdots$};
\node at (0,-0.49) {$n-1$};
\node at (0,-0.82) {$n$};
\end{tikzpicture}
\caption{Family of banana graphs with $n=L+1$ edges.} \label{fig:bananaFamily}
\end{subfigure}
\caption[Feynman graphs for $1$-loop graphs and banana graphs]{Feynman graphs for $1$-loop graphs and banana graphs.}
\end{figure}
\begin{example}[Periods of $L$-loop banana graphs] \label{ex:periodsLloopBanana}
In the same way one can give an analytic expression for the period of every $L$-loop banana graph (see \cref{fig:bananaFamily}), i.e.\ graphs which consist of $n=L+1$ edges, two legs and have the first Symanzik polynomial
\begin{align}
\mathcal{U} = x_1 \cdots x_n \left(\frac{1}{x_1} + \ldots + \frac{1}{x_n} \right) \point \label{eq:BananaUSymanzik}
\end{align}
After evaluation of the $\delta$-distribution which sets $x_n=1$, we will obtain
\begin{align}
\mathcal{A} = \begin{pmatrix}
1 & \mathbf 1_L^\top \\
\mathbf 1_L & \mathbf 1_{L\times L} - \mathbbm 1_L
\end{pmatrix} \text{ , } \quad \mathcal{A}^{-1} = \begin{pmatrix}
-L+1 & \mathbf 1_L^\top \\
\mathbf 1_L & - \mathbbm 1_L
\end{pmatrix}
\end{align}
which can be verified by multiplication, and $\mathbf 1_{L\times L}$ denotes the constant $L\times L$ matrix consisting only of ones. By Laplace expansion along the second column we obtain $\det\!\left(\mathcal{A}_L\right) = - \det\!\left(\mathcal{A}_{L-1}\right)$, where $\mathcal{A}_L$ and $\mathcal{A}_{L-1}$ stand for the vector configurations of an $L$-loop and an $(L-1)$-loop banana graph, respectively. Therefore, we find by induction $|\det(\mathcal{A})|=1$. Hence, we obtain
\begin{align}
\mathcal P_\Gamma({\underline{\nu}}) = \frac{\Gamma\!\left(\! (1-L) \nu_0 + \displaystyle\sum_{i=1}^{n-1} \nu_i \right) \displaystyle\prod_{i=1}^{n-1} \Gamma(\nu_0-\nu_i)}{\Gamma(\nu_0)} \point
\end{align}
\end{example}
We want to remark that it seems in general not very appropriate to determine periods by hypergeometric functions, since we have to evaluate the hypergeometric functions at unit argument, which is typically a very non-generic point. Moreover, there are very efficient alternatives to calculate periods, e.g.\ \cite{BrownPeriodsFeynmanIntegrals2010, PanzerFeynmanIntegralsHyperlogarithms2015, SchnetzQuantumPeriodsCensus2009, SchnetzGraphicalFunctionsSinglevalued2014}. The above examples have only been included for the purpose of showing possible alternative applications.
\begin{example}[marginal $L$-loop banana graphs with one mass]
The counterpart of period Feynman integrals are the so-called marginal Feynman integrals, i.e.\ Feynman integrals where the first Symanzik polynomial drops out of \cref{eq:FeynmanParSpFeynman}. Another example which satisfies the condition of \cref{cor:MellinBarnesRepresentation} and therefore allows an analytic expression is the class of marginal $L$-loop banana graphs with one mass, i.e.\ we consider the family of integrals
\begin{align}
\mathcal K_\Gamma ({\underline{\nu}},z) := \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x \, x^{\nu-1} \delta(1-x_n)\, \mathcal{F}^{-\nu_0} \quad \text{with}\quad \mathcal{F} = p^2 x_1 \cdots x_n + m_n^2 x_n \mathcal{U}
\end{align}
where $\mathcal{U}$ was given in \cref{eq:BananaUSymanzik}. Note that $\mathcal{F}|_{x_n=1}$ has the same monomials as $\mathcal{U}|_{x_n=1}$. Therefore, we can adopt the results from \cref{ex:periodsLloopBanana} by including the variables $z=(p^2+m_n^2,m_n^2,\ldots,m_n^2)$. Hence, we obtain
\begin{align}
\mathcal K_\Gamma ({\underline{\nu}},z) = \frac{\Gamma\!\left(\! (1-L) \nu_0 + \displaystyle\sum_{i=1}^{n-1} \nu_i \right) \displaystyle\prod_{i=1}^{n-1} \Gamma(\nu_0-\nu_i)}{\Gamma(\nu_0)} \left(\frac{p^2+m_n^2}{m_n^2}\right)^{L\nu_0 - \sum_{i=1}^{n-1}\nu_i} (p^2+m_n^2)^{-\nu_0} \ \text{.}
\end{align}
\end{example}
\begin{example}[marginal massive $1$-loop bubble]
As another example of a marginal Feynman integral we present, once again for the purpose of illustration, the $1$-loop bubble, i.e.\ the integral
\begin{align}
\gls{marginal} = \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x \, x^{\nu-1} \delta(1-x_2)\, \mathcal{F}^{-\nu_0}
\end{align}
where $\mathcal{F} = (p^2 + m_1^2+m_2^2) x_1 x_2 + m_1^2 x_1^2 + m_2^2 x_2^2$ and which generates
\begin{align}
\mathcal{A} = \begin{pmatrix}
1 & 1 & 1 \\
0 & 1 & 2
\end{pmatrix}\quad\text{,} \qquad z = (m_2^2,p^2+m_1^2+m_2^2,m_1^2) \point
\end{align}
When choosing the triangulation $\widehat{\mathcal T} = \{\{1,2\},\{2,3\}\}$ this will result in
\begin{align}
\mathcal K_\Gamma ({\underline{\nu}},z) &= \frac{1}{\Gamma(\nu_0)} \left[ z_1^{-\nu_0+\nu_1} z_2^{-\nu_1} \sum_{\lambda\in\mathbb N_0} \frac{\Gamma(\nu_0-\nu_1-\lambda)\Gamma(\nu_1+2\lambda)}{\lambda!} (-y)^\lambda \right. \nonumber \\
&\qquad \left. + z_2^{-2\nu_0+\nu_1} z_3^{\nu_0-\nu_1} \sum_{\lambda\in\mathbb N_0} \frac{\Gamma(2\nu_0-\nu_1+2\lambda)\Gamma(-\nu_0+\nu_1-\lambda)}{\lambda!} (-y)^\lambda \right] \nonumber \\
&= \frac{1}{\Gamma(\nu_0)} \left[ z_1^{-\nu_0+\nu_1} z_2^{-\nu_1} \Gamma(\nu_0-\nu_1)\Gamma(\nu_1)\ \HypF{\frac{\nu_1}{2},\frac{1+\nu_1}{2}}{1-\nu_0+\nu_1}{4y} \right. \nonumber \\
&\qquad \left. + z_2^{-2\nu_0+\nu_1} z_3^{\nu_0-\nu_1} \Gamma(2\nu_0-\nu_1)\Gamma(-\nu_0+\nu_1)\ \HypF{\nu_0-\frac{\nu_1}{2},\frac{1}{2}+\nu_0-\frac{\nu_1}{2}}{1+\nu_0-\nu_1}{4y} \right]
\end{align}
where $y=\frac{z_1z_3}{z_2^2} = \frac{m_1^2m_2^2}{\left(p^2+m_1^2+m_2^2\right)^2}$.
\end{example}
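The closed form of the preceding example can be cross-checked numerically. The following sketch (assuming the Python library mpmath and picking arbitrary non-exceptional values for ${\underline{\nu}}$ and the kinematics; purely illustrative) compares the one-dimensional integral with the two ${_2}F_1$ terms.
\begin{verbatim}
from mpmath import mp, mpf, gamma, hyp2f1, quad, inf

mp.dps = 25
nu0, nu1 = mpf('3.3'), mpf('1.2')
m1sq, m2sq, psq = mpf('0.2'), mpf('0.3'), mpf('1.0')
z1, z2, z3 = m2sq, psq + m1sq + m2sq, m1sq
y = z1 * z3 / z2**2

# the marginal integral with x2 = 1
lhs = quad(lambda x: x**(nu1 - 1) * (z1 + z2*x + z3*x**2)**(-nu0), [0, inf])

# the two Gauss hypergeometric terms of the series representation
rhs = (z1**(nu1 - nu0) * z2**(-nu1) * gamma(nu0 - nu1) * gamma(nu1)
         * hyp2f1(nu1/2, (1 + nu1)/2, 1 - nu0 + nu1, 4*y)
       + z2**(nu1 - 2*nu0) * z3**(nu0 - nu1) * gamma(2*nu0 - nu1) * gamma(nu1 - nu0)
         * hyp2f1(nu0 - nu1/2, mpf('0.5') + nu0 - nu1/2, 1 + nu0 - nu1, 4*y)) / gamma(nu0)

print(lhs, rhs)    # both numbers should agree within the working precision
\end{verbatim}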
Last but not least, we want to mention that the so-called \textit{stringy integrals} \cite{Arkani-HamedStringyCanonicalForms2021} also belong to the class of Euler-Mellin integrals. These stringy integrals are generalizations of (open) string amplitudes and can also be treated by the series approach presented in this chapter.
\section{Series representation for the fully massive sunset graph}
\label{sec:ExampleSunset}
We want to conclude this chapter about series representations with an extensive example to illustrate the methods stated above as well as to show the scope of this approach. For this reason we consider the sunset Feynman integral with three different masses according to \cref{fig:sunset}. The corresponding Feynman graph consists of $n=3$ edges, and the Lee-Pomeransky polynomial includes $N=10$ monomials
\begin{align}
\mathcal{G} &= x_1 x_2 + x_1 x_3 + x_2 x_3 + \left(m_1^2+m_2^2+m_3^2+p^2\right) x_1 x_2 x_3 \nonumber \\
&\qquad + m_1^2 x_1^2 (x_2+x_3) + m_2^2 x_2^2 (x_1+x_3) + m_3^2 x_3^2 (x_1+x_2) \point
\end{align}
\begin{figure}[ht]
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=1.5]
\draw (0,0) circle (1);
\coordinate[dot] (A) at (-1,0);
\coordinate[dot] (B) at (1,0);
\draw (A) -- node[above] {$p$} ++(-0.7,0);
\draw (B) -- node[above] {$p$} ++(+0.7,0);
\draw (A) -- (B);
\node at (0,1.15) {$m_1$};
\node at (0,0.15) {$m_2$};
\node at (0,-0.85) {$m_3$};
\end{tikzpicture}
\caption[$2$-loop self-energy Feynman graph (``sunset'')]{The $2$-loop $2$-point function (sunset graph) with three different masses.}
\label{fig:sunset}
\end{figure}
In the representation of equation \cref{eq:Gsupport} we encode this polynomial by
\begin{align}
\mathcal{A} &= \begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 0 & 1 & 0 & 1 & 2 & 1 & 2 \\
1 & 0 & 1 & 1 & 0 & 2 & 1 & 0 & 2 & 1 \\
1 & 1 & 0 & 2 & 2 & 1 & 1 & 1 & 0 & 0
\end{pmatrix} \\
z &= (1,1,1,m_3^2,m_3^2,m_2^2,m_1^2+m_2^2+m_3^2+p_1^2,m_1^2,m_2^2,m_1^2) \point
\end{align}
The rank of the kernel of $\mathcal{A}$ is equal to $r=N-n-1=6$, and therefore we expect $6$-dimensional $\Gamma$-series. Moreover, the Newton polytope $\operatorname{Newt}(\mathcal{G})=\operatorname{Conv} (A)$ has the volume $\operatorname{vol}(\operatorname{Newt}(\mathcal{G}))=10$, which can e.g.\ be calculated with \softwareName{polymake} \cite{GawrilowPolymakeFrameworkAnalyzing2000}, see \cref{sec:SoftwareTools}. This leads to $10$ basis solutions, and there are $826$ different regular triangulations of the Newton polytope $\operatorname{Newt}(\mathcal{G})$, of which $466$ are unimodular. We choose the unimodular triangulation (calculated with \softwareName{Topcom} \cite{RambauTOPCOMTriangulationsPoint2002})
\begin{align}
\widehat{\mathcal T}_{152} &= \big\{\{3,6,7,9\},\{3,7,9,10\},\{3,7,8,10\},\{2,5,7,8\},\{2,3,7,8\},\nonumber \\
&\qquad \{2,4,5,7\},\{1,4,6,7\},\{1,2,4,7\},\{1,3,6,7\},\{1,2,3,7\}\big\} \label{eq:sunsetTriang}
\end{align}
in order to obtain series which converge fast for highly relativistic kinematics $m_i^2 \ll m_1^2+m_2^2+m_3^2+p^2$. Further, we set $\nu_i=1$ and $d=4-2\epsilon$.
In the limit $z\rightarrow(1,1,1,m_3^2,m_3^2,m_2^2,m_1^2+m_2^2+m_3^2+p_1^2,m_1^2,m_2^2,m_1^2)$ the series $\phi_1$, $\phi_3$, $\phi_5$, $\phi_6$, $\phi_8$ and $\phi_9$ are divergent for small values of $\epsilon>0$. We write $\phi_i := \phi_{\sigma_i} ({\underline{\nu}},z)$ for short, where the numbering of the simplices $\sigma_i$ follows \cref{eq:sunsetTriang}. By the method described in \cref{sec:AnalyticContinuation} one can split each of these series by the transformation formula for the ${_2}F_1$ Gauss hypergeometric function into a convergent and a divergent part. The divergent parts of these series cancel each other. In doing so, the resulting $\Gamma$-series acquire linear dependences, and the dimension of the solution space reduces from $10$ to $7$.
By applying all these steps one arrives at the following series representation of the fully massive sunset integral
\begin{align}
\mathcal I_\mathcal{A} ({\underline{\nu}},z) &= \frac{s^{1-2\epsilon}}{\Gamma(3-3\epsilon)} \big[ y_2^{1-\epsilon} \phi_1^\prime + (y_1 y_2)^{1-\epsilon} \phi^\prime_2 + y_1^{1-\epsilon} \phi^\prime_3 + (y_1 y_3)^{1-\epsilon} \phi^\prime_4 + y_1^{1-\epsilon} \phi^\prime_5 \nonumber \\
&\qquad + y_3^{1-\epsilon} \phi^\prime_6 + (y_2 y_3)^{1-\epsilon} \phi^\prime_7 + y_3^{1-\epsilon} \phi^\prime_8 + y_2^{1-\epsilon} \phi^\prime_9 + \phi^\prime_{10} \big] \comma
\end{align}
where we adapted the notation of $\Gamma$-series slightly for convenience. Those $\Gamma$-series are given by
\begin{align}
&\phi^\prime_1 = \sum_{k_2,k_3,k_4,k_5,k_6=0}^\infty\frac{ (-y_2)^{k_2} (-y_3)^{k_3} (-y_2 y_3)^{k_4} (-y_1 y_2)^{k_5} (-y_1)^{k_6} }{k_2!\, k_3!\, k_4!\, k_5!\, k_6!} \nonumber\\
&\quad \Gamma (k_2-3 \epsilon +3) \Gamma (k_2+k_3+k_4-k_6-2 \epsilon +3) \Gamma (k_3-k_5-k_6-\epsilon +1) \nonumber \\
&\quad \frac{\Gamma (k_2+k_3+2 k_4+2 k_5+k_6+\epsilon ) \Gamma (k_4+k_5+2 \epsilon -1) \Gamma (-k_2-k_3-k_4+k_6+2 \epsilon -2)}{\Gamma (k_2+k_4+k_5-\epsilon +2) \Gamma (k_3+k_4-k_6+\epsilon )} \allowdisplaybreaks \nonumber\\[\baselineskip]
&\phi^\prime_2 = \sum_{k_1,k_2,k_3,k_4,k_5,k_6=0}^\infty\frac{ (-y_1)^{k_1+k_5} (-y_2)^{k_2+k_6} (-y_1 y_3)^{k_3} (-y_2 y_3)^{k_4}}{k_1!\, k_2!\, k_3!\, k_4!\, k_5!\, k_6!} \nonumber\\
&\quad \Gamma (k_1+k_2+2 k_3+2 k_4+k_5+k_6+1) \Gamma (k_1+k_2-3 \epsilon +3) \nonumber \\
&\quad \Gamma (-k_2-k_4+k_5-k_6+\epsilon -1) \Gamma (-k_1-k_3-k_5+k_6+\epsilon -1) \allowdisplaybreaks \nonumber\\[\baselineskip]
&\phi^\prime_3 = \sum_{k_1,k_3,k_4,k_5,k_6=0}^\infty\frac{(-y_1)^{k_1} (-y_1 y_3)^{k_3} (-y_3)^{k_4} (-y_1 y_2)^{k_5} (-y_2)^{k_6} }{k_1!\, k_3!\, k_4!\, k_5!\, k_6!} \nonumber\\
&\quad \Gamma (k_1-3 \epsilon +3) \Gamma (k_1+k_3+k_4-k_6-2 \epsilon +3) \Gamma (k_4-k_5-k_6-\epsilon +1) \nonumber \\
&\quad \frac{\Gamma (k_1+2 k_3+k_4+2 k_5+k_6+\epsilon ) \Gamma (k_3+k_5+2 \epsilon -1) \Gamma (-k_1-k_3-k_4+k_6+2 \epsilon -2)}{\Gamma (k_1+k_3+k_5-\epsilon +2) \Gamma (k_3+k_4-k_6+\epsilon )} \allowdisplaybreaks \nonumber\\[\baselineskip]
&\phi^\prime_4 = \sum_{k_1,k_2,k_3,k_4,k_5,k_6=0}^\infty\frac{ (-y_1)^{k_1+k_3} (-y_3)^{k_2+k_6} (-y_1 y_2)^{k_4} (-y_2 y_3)^{k_5}}{k_1!\, k_2!\, k_3!\, k_4!\, k_5!\, k_6!} \nonumber\\
&\quad \Gamma (k_1+k_2+k_3+2 k_4+2 k_5+k_6+1) \Gamma (k_1+k_2-3 \epsilon +3) \nonumber \\
&\quad \Gamma (-k_2+k_3-k_5-k_6+\epsilon -1) \Gamma (-k_1-k_3-k_4+k_6+\epsilon -1) \allowdisplaybreaks \nonumber\\[\baselineskip]
&\phi^\prime_5 = \sum_{k_1,k_2,k_3,k_4,k_5=0}^\infty\frac{ (-y_1)^{k_1} (-y_1 y_3)^{k_2} (-y_3)^{k_3} (-y_1 y_2)^{k_4} (-y_2)^{k_5} }{k_1!\, k_2!\, k_3!\, k_4!\, k_5!} \nonumber\\
&\quad \Gamma (k_1+k_2+k_3-k_5-2 \epsilon +2) \Gamma (-k_2-k_3+k_5-\epsilon +1) \Gamma (-k_1-k_2-k_4+\epsilon -1) \nonumber \\
&\quad \frac{\Gamma (k_1+2 k_2+k_3+2 k_4+k_5+\epsilon ) \Gamma (k_2+k_4+2 \epsilon -1) \Gamma (-k_1-k_2-k_3+k_5+2 \epsilon -1)}{\Gamma (-k_3+k_4+k_5+\epsilon ) \Gamma (-k_1+3 \epsilon -2)} \allowdisplaybreaks \nonumber\\[\baselineskip]
&\phi^\prime_6 = \sum_{k_2,k_3,k_4,k_5,k_6=0}^\infty\frac{ (-y_3)^{k_2} (-y_2)^{k_3} (-y_1)^{k_4} (-y_2 y_3)^{k_5} (-y_1 y_3)^{k_6} }{k_2!\, k_3!\, k_4!\, k_5!\, k_6!} \nonumber\\
&\quad \Gamma (k_2-3 \epsilon +3) \Gamma (k_2+k_3-k_4+k_5-2 \epsilon +3) \Gamma (k_3-k_4-k_6-\epsilon +1) \nonumber \\
&\quad \frac{\Gamma (k_2+k_3+k_4+2 k_5+2 k_6+\epsilon ) \Gamma (-k_2-k_3+k_4-k_5+2 \epsilon -2) \Gamma (k_5+k_6+2 \epsilon -1)}{\Gamma (k_2+k_5+k_6-\epsilon +2) \Gamma (k_3-k_4+k_5+\epsilon )} \allowdisplaybreaks \nonumber\\[\baselineskip]
&\phi^\prime_7 = \sum_{k_1,k_2,k_3,k_4,k_5,k_6=0}^\infty\frac{ (-y_2)^{k_1+k_3}(-y_3)^{k_2+k_5} (-y_1 y_2)^{k_4} (-y_1 y_3)^{k_6}}{k_1!\, k_2!\, k_3!\, k_4!\, k_5!\, k_6!} \nonumber\\
&\quad \Gamma (k_1+k_2+k_3+2 k_4+k_5+2 k_6+1) \Gamma (k_1+k_2-3 \epsilon +3) \nonumber \\
&\quad \Gamma (-k_1-k_3-k_4+k_5+\epsilon -1) \Gamma (-k_2+k_3-k_5-k_6+\epsilon -1) \allowdisplaybreaks \nonumber\\[\baselineskip]
&\phi^\prime_8 = \sum_{k_1,k_3,k_4,k_5,k_6=0}^\infty\frac{(-y_3)^{k_1} (-y_2)^{k_3} (-y_1)^{k_4} (-y_2 y_3)^{k_5} (-y_1 y_3)^{k_6} }{k_1!\, k_3!\, k_4!\, k_5!\, k_6!} \nonumber\\
&\quad \Gamma (k_1+k_3-k_4+k_5-2 \epsilon +2) \Gamma (-k_3+k_4-k_5-\epsilon +1) \Gamma (-k_1-k_5-k_6+\epsilon -1) \nonumber \\
&\quad \frac{\Gamma (k_1+k_3+k_4+2 k_5+2 k_6+\epsilon ) \Gamma (-k_1-k_3+k_4-k_5+2 \epsilon -1) \Gamma (k_5+k_6+2 \epsilon -1)}{\Gamma (-k_3+k_4+k_6+\epsilon ) \Gamma (-k_1+3 \epsilon -2)} \allowdisplaybreaks \nonumber\\[\baselineskip]
&\phi^\prime_9 = \sum_{k_1,k_2,k_3,k_4,k_6=0}^\infty\frac{ (-y_2)^{k_1} (-y_3)^{k_2} (-y_2 y_3)^{k_3} (-y_1 y_2)^{k_4} (-y_1)^{k_6} }{k_1!\, k_2!\, k_3!\, k_4!\, k_6!} \nonumber\\
&\quad \Gamma (k_1+k_2+k_3-k_6-2 \epsilon +2) \Gamma (-k_2-k_3+k_6-\epsilon +1) \Gamma (-k_1-k_3-k_4+\epsilon -1) \nonumber \\
&\quad \frac{\Gamma (k_1+k_2+2 k_3+2 k_4+k_6+\epsilon ) \Gamma (k_3+k_4+2 \epsilon -1) \Gamma (-k_1-k_2-k_3+k_6+2 \epsilon -1)}{\Gamma (-k_2+k_4+k_6+\epsilon ) \Gamma (-k_1+3 \epsilon -2)} \allowdisplaybreaks \nonumber\\[\baselineskip]
&\phi^\prime_{10} = \sum_{k_1,k_2,k_3,k_4,k_5,k_6=0}^\infty\frac{ (-y_3)^{k_1+k_2} (-y_2)^{k_3+k_5} (-y_1)^{k_4+k_6} }{k_1!\, k_2!\, k_3!\, k_4!\, k_5!\, k_6!} \nonumber\\
&\quad \Gamma (k_2-k_3+k_4-k_5-\epsilon +1) \Gamma (k_1+k_3-k_4-k_6-\epsilon +1) \nonumber \\
&\quad \Gamma (-k_1-k_2+k_5+k_6-\epsilon +1) \Gamma (k_1+k_2+k_3+k_4+k_5+k_6+2 \epsilon -1)
\end{align}
with $y_i = \frac{m_i^2}{m_1^2+m_2^2+m_3^2+p^2}$ and $s=m_1^2+m_2^2+m_3^2+p^2$. All these series converge for small values of $y_i$, and the series representation can be obtained by a very simple algorithm, which is a straightforward implementation of the steps described in the previous sections. In fact, some of these $\Gamma$-series are related to each other. One can reduce the whole system to only $\phi^\prime_1, \phi^\prime_2, \phi^\prime_5$ and $\phi^\prime_{10}$ by the relations
\begin{align}
\phi^\prime_1 (y_1,y_2,y_3) &= \phi^\prime_3 (y_2,y_1,y_3) = \phi^\prime_6 (y_1,y_3,y_2) \\
\phi^\prime_2 (y_1,y_2,y_3) &= \phi^\prime_4 (y_1,y_3,y_2) = \phi^\prime_7 (y_3,y_2,y_1) \\
\phi^\prime_5 (y_1,y_2,y_3) &= \phi^\prime_8 (y_2,y_3,y_1) = \phi^\prime_9 (y_2,y_1,y_3) \point
\end{align}
By these relations one can also verify the expected symmetry of the Feynman integral under permutations of $y_1$, $y_2$ and $y_3$.
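As a simple plausibility check, the truncated sums can be evaluated numerically with a few lines of code. The following minimal Python sketch (an illustration only, assuming the \softwareName{mpmath} library; the truncation order $K$ and the numerical values of $\epsilon$ and $y_i$ are chosen arbitrarily) evaluates the truncated series $\phi^\prime_{10}$ term by term:
\begin{verbatim}
# Sketch: truncated numerical evaluation of the Gamma-series phi'_10.
from itertools import product
from mpmath import mp, gamma, factorial

mp.dps = 30                                          # working precision
eps = mp.mpf('0.1')                                  # illustrative epsilon
y1, y2, y3 = map(mp.mpf, ('0.05', '0.07', '0.09'))   # small y_i
K = 4                                                # truncation per index

phi10 = mp.mpf(0)
for k1, k2, k3, k4, k5, k6 in product(range(K), repeat=6):
    pref = ((-y3)**(k1 + k2) * (-y2)**(k3 + k5) * (-y1)**(k4 + k6)
            / (factorial(k1) * factorial(k2) * factorial(k3)
               * factorial(k4) * factorial(k5) * factorial(k6)))
    gammas = (gamma(k2 - k3 + k4 - k5 - eps + 1)
              * gamma(k1 + k3 - k4 - k6 - eps + 1)
              * gamma(-k1 - k2 + k5 + k6 - eps + 1)
              * gamma(k1 + k2 + k3 + k4 + k5 + k6 + 2*eps - 1))
    phi10 += pref * gammas

print(phi10)
\end{verbatim}
The remaining $\phi^\prime_i$ can be evaluated in the same way, which also allows a direct numerical test of the symmetry relations above.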
In order to expand the Feynman integral $\mathcal I_\mathcal{A}$ for small values of $\epsilon>0$ one can use the methods described in \cref{sec:epsilonExpansion}. The correctness of these results was checked numerically against \softwareName{Fiesta} \cite{SmirnovFIESTAOptimizedFeynman2016} for arbitrary kinematics and masses satisfying $y_i< 0.5$, up to order $\epsilon^2$. For small values of $y_i$ the resulting series converge fast, so that a good approximation is already obtained from the first few summands.
\chapter{Kinematic singularities}
\label{ch:singularities}
Feynman integrals are usually understood as functions depending on various observables and parameters. Even though the physical observables do not take complex values in measurements, these Feynman integrals can only be understood consistently on complex domains. By considering Feynman integrals as complex functions and examining their analytic properties, surprising connections were found, for example dispersion relations and Cutkosky's rules \cite{CutkoskySingularitiesDiscontinuitiesFeynman1960, BlochCutkoskyRulesOuter2015}. As conjectured for the first time by T. Regge, it seems that these connections are not just arbitrary and indicate a more fundamental relation between the monodromy group and the fundamental group for Feynman integrals in the context of Picard-Lefschetz theory (see e.g.\ \cite{SpeerGenericFeynmanAmplitudes1971, PonzanoMonodromyRingsClass1969} for Regge's perspective). Apart from these conceptual questions, the analytic structure also plays an important role in many practical approaches,
for example sector decomposition \cite{BinothNumericalEvaluationMultiloop2004, AnastasiouEvaluatingMultiloopFeynman2007, BorowkaNumericalEvaluationMultiloop2013}, Steinmann relations \cite{CahillOpticalTheoremsSteinmann1975} or certain methods in QCD \cite{LibbyMassDivergencesTwoparticle1978}.
Hence, formally speaking, a Feynman integral maps a Feynman graph $\Gamma$ containing loops to a multivalued function $\mathcal I_\Gamma({\underline{\nu}},z)$, which depends on several variables $z$ and parameters ${\underline{\nu}}$. As specific representations of these functions $\mathcal I_\Gamma$, we can write down different kinds of integrals (see \cref{sec:ParametricFeynmanIntegrals}), each valid only on a restricted domain. Alternatively, we can also express $\mathcal I_\Gamma$ by series representations as done in \cref{thm:FeynSeries} or by means of various other functions (see \cref{sec:EulerIntegrals}). Thus, we do not want to use the term ``Feynman integral'' to refer to individual integrals, but rather to the common analytic continuation of these representations to a maximal domain for the parameters and the variables.
In the process of analytic continuation to complex numbers there will arise two kinds of singularities: singularities in the parameters ${\underline{\nu}}$ and singularities in the variables $z$. The first type was already discussed in \cref{sec:DimAnaReg}. These singularities are only poles and $\mathcal I_\Gamma({\underline{\nu}},z)$ will be a meromorphic function with respect to ${\underline{\nu}}\in\mathbb C^{n+1}$. The analytic behaviour with respect to ${\underline{\nu}}$ was fully described by \cref{thm:meromorphicContinuation}. Considerably more difficult is the situation for the variables $z$ of the Feynman integral. We will find certain combinations of momenta $p$ and masses $m$ for which the Feynman integral fails to be analytic or differentiable. This chapter will be devoted to the study of those singularities, which we will call \textit{kinematic singularities} or \textit{Landau singularities}.
Up to now, we restricted ourselves to Euclidean kinematics $\Re (z_j) >0$. Equivalently, we assumed that norms of momenta are real numbers, e.g.\ for those $\left| \sum_{e_i\notin F} \pm \widehat q_i \right| \in\mathbb R$ appearing in the description of Symanzik polynomials (\cref{thm:SymanzikPolynomialsTreeForest}). Therefore, when taking also the Minkowskian region into account, we have to consider those norms to be complex or more generally, we have to assume $z\in\mathbb C^N$. Hence, the analytic continuation of variables $z$ to complex numbers is indispensable when considering Minkowskian momenta\footnote{At first sight, it seems sufficient to continue the variables from $z_j\in\mathbb R_{>0}$ to the real numbers $z_j\in\mathbb R$. However, for real numbers we will expect poles in the integrands of Feynman integrals, e.g.\ in \cref{eq:FeynmanParSpFeynman}. Hence, we have to elude those poles by going to the complex plane. Therefore, we have to shift the integration contour in the complex region, or equivalently we can assume the variables $z_j\in\mathbb C$ to be complex. A minimal version of introducing complex numbers to \cref{eq:FeynmanMomSp} is the so-called $i\varepsilon$ prescription. We will elaborate on this in \cref{sec:Coamoebas}.}. \bigskip
Before studying the kinematic singularities on the level of Feynman integrals, let us first approach this subject from a different angle, which will motivate the appearance of thresholds. This perspective was developed in the '60s, e.g.\ in \cite{GunsonUnitarityMassShell1965, CosterPhysicalRegionDiscontinuity1969, CosterPhysicalRegionDiscontinuity1970}, and we will recall the very concise summary in \cite{HannesdottirWhatVarepsilonSmatrix2022}. As aforementioned in \cref{sec:FeynmanIntegralsIntro} the $S$-matrix is the central object of interest, describing the probabilities for certain events in a scattering experiment. From the conservation of probabilities $S^\dagger S=\mathbbm 1$ and the separation of the trivial scattering $\gls{Smatrix}=\mathbbm 1 + i \gls{Tmatrix}$, we will obtain
\begin{align}
T T^\dagger = \frac{1}{i} \left(T - T^\dagger\right) = 2 \Im (T) \label{eq:OpticalTheorem} \comma
\end{align}
which is often referred to as the optical theorem. From \cref{eq:OpticalTheorem} we obtain $T^\dagger = (\mathbbm 1 +i T)^{-1} T = i \sum_{k\geq 0} (-iT)^{k+1}$ by means of a Neumann series. Hence, we have
\begin{align}
\Im (T) = \frac{1}{2i} \left(T - T^\dagger\right) = - \frac{1}{2} \sum_{k\geq 2}(-iT)^k \label{eq:SMatrixImSeries} \point
\end{align}
Every term in this series stands for a sequence of $k$ separated, non-trivial scattering processes. In this manner, \cref{eq:SMatrixImSeries} expresses the fact that we typically cannot determine in a scattering experiment whether it is a single scattering process or a chain of such processes\footnote{This holds independently of the indistinguishability of particles in QFTs and is simply an effect of the experimental setup.}. This chain of processes is connected by real (on-shell) intermediate particles. Hence, such a chain only contributes if it is kinematically allowed, i.e.\ if there is enough center-of-mass energy to produce these intermediate particles. Therefore, from \cref{eq:SMatrixImSeries} we expect ``jumps'' in the imaginary part of $T$ when exceeding certain center-of-mass energies, such that a new chain of processes becomes kinematically allowed.
\begin{figure}[hbt]
\centering
\begin{subfigure}{\textwidth}
\centering
\begin{tikzpicture}[thick, scale=1.5]
\filldraw[draw=black, line width=0.75mm, fill = gray!30] (1,0) rectangle ++(1,1);
\filldraw[draw=black, line width=0.75mm, fill = gray!30] (3,0) rectangle ++(1,1);
\filldraw[draw=black, line width=0.75mm, fill = gray!30] (5,0) rectangle ++(1,1);
\filldraw[draw=black, line width=0.75mm, fill = gray!30] (7,0) rectangle ++(1,1);
\draw [-latex, line width=0.75mm] (0,0.25) -- ++(0.6,0);
\draw [-, line width=0.75mm] (0,0.25) -- ++(1,0);
\draw [-latex, line width=0.75mm] (0,0.75) -- ++(0.6,0);
\draw [-, line width=0.75mm] (0,0.75) -- ++(1,0);
\draw [-latex, line width=0.75mm] (2,0.25) -- ++(0.6,0);
\draw [-, line width=0.75mm] (2,0.25) -- ++(1,0);
\draw [-latex, line width=0.75mm] (2,0.5) -- ++(0.6,0);
\draw [-, line width=0.75mm] (2,0.5) -- ++(1,0);
\draw [-latex, line width=0.75mm] (2,0.75) -- ++(0.6,0);
\draw [-, line width=0.75mm] (2,0.75) -- ++(1,0);
\draw [-latex, line width=0.75mm] (4,0.25) -- ++(0.6,0);
\draw [-, line width=0.75mm] (4,0.25) -- ++(1,0);
\draw [-latex, line width=0.75mm] (4,0.75) -- ++(0.6,0);
\draw [-, line width=0.75mm] (4,0.75) -- ++(1,0);
\draw [-latex, line width=0.75mm] (6,0.2) -- ++(0.6,0);
\draw [-, line width=0.75mm] (6,0.2) -- ++(1,0);
\draw [-latex, line width=0.75mm] (6,0.4) -- ++(0.6,0);
\draw [-, line width=0.75mm] (6,0.4) -- ++(1,0);
\draw [-latex, line width=0.75mm] (6,0.6) -- ++(0.6,0);
\draw [-, line width=0.75mm] (6,0.6) -- ++(1,0);
\draw [-latex, line width=0.75mm] (6,0.8) -- ++(0.6,0);
\draw [-, line width=0.75mm] (6,0.8) -- ++(1,0);
\draw [-latex, line width=0.75mm] (8,0.25) -- ++(0.6,0);
\draw [-, line width=0.75mm] (8,0.25) -- ++(1,0);
\draw [-latex, line width=0.75mm] (8,0.75) -- ++(0.6,0);
\draw [-, line width=0.75mm] (8,0.75) -- ++(1,0);
\end{tikzpicture}
\caption{Illustration of a chain of processes producing a normal threshold.} \label{fig:normalThreshold}
\end{subfigure} \\
\vspace{1cm}
\begin{subfigure}{\textwidth}
\centering
\begin{tikzpicture}[thick, scale=1.5]
\filldraw[draw=black, line width=0.75mm, fill = gray!30] (1,0) rectangle ++(1,1.5);
\filldraw[draw=black, line width=0.75mm, fill = gray!30] (3,-0.1) rectangle ++(1,0.7);
\filldraw[draw=black, line width=0.75mm, fill = gray!30] (3,0.9) rectangle ++(1,0.7);
\filldraw[draw=black, line width=0.75mm, fill = gray!30] (5,0) rectangle ++(1,1);
\filldraw[draw=black, line width=0.75mm, fill = gray!30] (7,0) rectangle ++(1,1.5);
\draw [-latex, line width=0.75mm] (0,0.4) -- ++(0.6,0);
\draw [-, line width=0.75mm] (0,0.4) -- ++(1,0);
\draw [-latex, line width=0.75mm] (0,1.1) -- ++(0.6,0);
\draw [-, line width=0.75mm] (0,1.1) -- ++(1,0);
\draw [-latex, line width=0.75mm] (2,0.1) -- ++(0.6,0);
\draw [-, line width=0.75mm] (2,0.1) -- ++(1,0);
\draw [-latex, line width=0.75mm] (2,0.3) -- ++(0.6,0);
\draw [-, line width=0.75mm] (2,0.3) -- ++(1,0);
\draw [-latex, line width=0.75mm] (2,1.2) -- ++(0.6,0);
\draw [-, line width=0.75mm] (2,1.2) -- ++(1,0);
\draw [-latex, line width=0.75mm] (4,0.25) -- ++(0.6,0);
\draw [-, line width=0.75mm] (4,0.25) -- ++(1,0);
\draw [-latex, line width=0.75mm] (4,1.3) -- ++(1.6,0);
\draw [-, line width=0.75mm] (4,1.3) -- ++(3,0);
\draw [-latex, line width=0.75mm] (6,0.25) -- ++(0.6,0);
\draw [-, line width=0.75mm] (6,0.25) -- ++(1,0);
\draw [-latex, line width=0.75mm] (6,0.75) -- ++(0.6,0);
\draw [-, line width=0.75mm] (6,0.75) -- ++(1,0);
\draw [-latex, line width=0.75mm] (8,0.4) -- ++(0.6,0);
\draw [-, line width=0.75mm] (8,0.4) -- ++(1,0);
\draw [-latex, line width=0.75mm] (8,1.1) -- ++(0.6,0);
\draw [-, line width=0.75mm] (8,1.1) -- ++(1,0);
\end{tikzpicture}
\caption{Illustration of a ``chain'' of processes producing an anomalous threshold.} \label{fig:anomalousThreshold}
\end{subfigure}
\caption[Illustration of normal/anomalous thresholds from $S$-matrix theory]{Illustration of normal/anomalous thresholds from $S$-matrix theory. The figures are oriented towards \cite{CosterPhysicalRegionDiscontinuity1969,CosterPhysicalRegionDiscontinuity1970}. Every rectangle stands for a specific type of a process, i.e.\ a sum of Feynman graphs.} \label{fig:thresholds}
\end{figure}
We can classify those chains of scattering processes into two different types. In the first situation all real intermediate particles leaving one process join the next process (\cref{fig:normalThreshold}). Jumps in $\Im (T)$ relating to this situation are called \textit{normal thresholds}. They appear if the energy exceeds a squared sum of particle masses in the considered theory. The second possible situation we can imagine is that the outgoing intermediate particles participate in several distinct further processes (\cref{fig:anomalousThreshold}). The analysis of those chains is much more involved, and jumps in $\Im (T)$ relating to this situation are called \textit{anomalous thresholds}.
Since the transfer matrix $T$ is built by algebraic expressions related to Feynman graphs, we will find the same analytic behaviour also on the level of Feynman integrals. Thus, Feynman integrals will also have singularities with respect to their variables $z$, whenever the energies of incoming momenta allow particles to go on-shell. Hence, those thresholds will also be apparent in Feynman integrals, and we will focus mainly on the anomalous thresholds, which are also known as Landau singularities or kinematic singularities. \bigskip
To begin with, we want to have a look at the kinematic singularities from the perspective of momentum space Feynman integrals \cref{eq:FeynmanMomSp}, where we now allow the norms of momenta to be complex, $|q_i|\in\mathbb C$. In these integrals singularities may appear if some inverse propagators $D_i=q_i^2+m_i^2$ vanish and additionally the integration contour is trapped in such a way that we cannot avoid the singularity by deforming the contour in the complex plane. These situations are called pinches, and if they appear, the equations
\begin{align}
x_i D_i = 0 \qquad \text{for all} \quad i=1,\ldots,n \label{eq:MomLandau1}\\
\frac{\partial}{\partial k_j} \sum_{i=1}^n x_i D_i = 0 \qquad \text{for all} \quad j=1,\ldots,L \label{eq:MomLandau2}
\end{align}
have a solution for $x\in\mathbb C^n\setminus\{0\}$ and $k\in\mathbb C^{L\times d}$. We will call every point $z$ admitting such a solution a \textit{Landau critical point}. Landau critical points do not depend on the choice of internal momenta or their orientation.\bigskip
The equations \cref{eq:MomLandau1}, \cref{eq:MomLandau2} are called \textit{Landau equations} and were independently derived in 1959 by Bjorken \cite{BjorkenExperimentalTestsQuantum1959}, Landau \cite{LandauAnalyticPropertiesVertex1959} and Nakanishi \cite{NakanishiOrdinaryAnomalousThresholds1959}. We recommend \cite{MizeraCrossingSymmetryPlanar2021} for a comprehensive summary of over $60$ years of research on Landau's analysis and restrict ourselves to a very short historical overview. For a summary of the first steps of this subject from the 1960s we refer to the monograph of Eden et al. \cite{EdenAnalyticSmatrix1966}. A much more mathematically profound investigation was carried out by Pham et al. in terms of homology theory \cite{HwaHomologyFeynmanIntegrals1966, PhamSingularitiesIntegrals2011}. Pham's techniques have recently been brought back into focus by Bloch and Kreimer \cite{BlochCutkoskyRulesOuter2015}. An alternative approach avoiding the introduction of homology theory was initiated by Regge, Ponzano, Speer and Westwater \cite{PonzanoMonodromyRingsClass1969, SpeerGenericFeynmanAmplitudes1971}. Their work was also the starting point for a mathematical treatment due to Kashiwara and Kawai \cite{KashiwaraConjectureReggeSato1976}, Sato \cite{SatoRecentDevolpmentHyperfunction1975} and Sato et al. \cite{SatoHolonomyStructureLandau1977}, which are all heavily based on $D$-modules. Currently, there is a renewed interest in Landau varieties, and we refer to \cite{CollinsNewCompleteProof2020, MuhlbauerMomentumSpaceLandau2020, MizeraLandauDiscriminants2021} for a selection of modern approaches as well as \cite{BonischAnalyticStructureAll2021, BonischFeynmanIntegralsDimensional2021}, where the analytic structure of specific Feynman integrals was studied in the context of differential equations by methods of topological string theory on Calabi-Yau manifolds. However, as already mentioned in \cite{MizeraCrossingSymmetryPlanar2021, CollinsNewCompleteProof2020} there is dismayingly little known about kinematic singularities. \bigskip
Strictly speaking, the Landau equations \cref{eq:MomLandau1}, \cref{eq:MomLandau2} are neither necessary nor sufficient conditions for a singularity of the analytically continued Feynman integral. Thus, there are on the one hand singularities which do not correspond to a solution of the Landau equations. Those singularities are often called second-type singularities or non-Landau singularities and were found for the first time in \cite{CutkoskySingularitiesDiscontinuitiesFeynman1960}. On the other hand, not all solutions of the Landau equations result in a singularity \cite{CollinsNewCompleteProof2020}. However, the Landau equations are necessary and sufficient for the appearance of a trapped contour \cite{CollinsNewCompleteProof2020} and can be a necessary condition for certain restrictions on Feynman integrals. Apart from the distinction between normal thresholds and anomalous thresholds, certain further distinctions are common. Singularities with all $x_i\neq 0$ in \cref{eq:MomLandau1}, \cref{eq:MomLandau2} are known as \textit{leading singularities}, and we will further distinguish between solutions with real $x_i\geq 0$ and general complex $x_i\in\mathbb C$. Landau critical points corresponding to solutions with $x\in (\mathbb{C}\setminus\mathbb R_{>0})^n$ are also known as \textit{pseudo thresholds}.\bigskip
As pointed out above, the Landau equations are understood to determine when internal (virtual) particles go on-shell. Hence, the Feynman diagram then describes an interaction between real particles with a specific lifetime \cite{ColemanSingularitiesPhysicalRegion1965}. The extraordinary importance of the Landau singularities for Feynman integrals also derives from various methods which construct the whole Feynman integral on the basis of these singularities. All these methods are more or less rooted in the optical theorem and the corresponding unitarity cuts, introduced by Cutkosky \cite{CutkoskySingularitiesDiscontinuitiesFeynman1960} shortly after Landau's article. However, it should be mentioned that Cutkosky's rules remain unproven to this day. We refer to \cite{BlochCutkoskyRulesOuter2015} for the recent progress towards a rigorous proof of Cutkosky's rules. \bigskip
In this chapter, we want to take a look at kinematic singularities from the perspective of $\mathcal{A}$-hypergeometric systems. In particular, we can combine this with the considerations about the singular locus of $\mathcal{A}$-hypergeometric systems $\operatorname{Sing}(H_\mathcal{A}({\underline{\nu}}))$ from \cref{ssec:SingularLocusAHyp}. This new point of view enables us to give a mathematically rigorous description of kinematic singularities. Furthermore, we will also notice certain discrepancies with the classical treatment of Landau varieties. Inter alia, it appears that in the common factorization of Landau varieties into leading Landau varieties of subgraphs certain non-trivial contributions were overlooked. Moreover, by means of the results in $\mathcal{A}$-hypergeometric theory, most considerations can be reduced to polytopal combinatorics instead of algebraic topology as done in \cite{PhamSingularitiesIntegrals2011, HwaHomologyFeynmanIntegrals1966, MuhlbauerMomentumSpaceLandau2020, BrownPeriodsFeynmanIntegrals2010}. We also want to draw attention to the very interesting work of \cite{MizeraLandauDiscriminants2021}, published shortly after the article \cite{KlausenKinematicSingularitiesFeynman2022} that constitutes the basis of this chapter.\bigskip
We will begin in \cref{sec:LandauVarieties} with a discussion of the Landau variety, which is the central object for the analytic structure of Feynman integrals, and we will relate this variety to principal $A$-determinants. Within this framework we will also notice overlooked contributions to Landau varieties for graphs beyond one loop or the banana family. Moreover, this relation to the principal $A$-determinant enables us to give an efficient but indirect determination of Landau varieties by means of the Horn-Kapranov-parameterization. However, the Landau variety will not describe all kinematic singularities, and we will consider all remaining singularities, also known as second-type singularities, in \cref{sec:2ndtypeSingularities}. Using those methods, we will determine the Landau variety of the dunce's cap graph in \cref{sec:ExampleDuncesCap} as an example.
Last but not least we will give a glimpse of the monodromy structure of Feynman integrals. Since kinematic singularities result in a non-trivial monodromy, Feynman integrals become multivalued functions. Unfortunately, the sheet structure of Feynman integrals is usually very intricate, and in \cref{sec:Coamoebas} we will propose a related, slightly simpler concept called the coamoeba.
\section{Landau varieties} \label{sec:LandauVarieties}
Landau varieties are the central objects in the study of kinematic singularities. Unfortunately, Landau varieties come with several subtleties. In this section we want to give a definition of Landau varieties, and we will also discuss several of those subtleties. Furthermore, we want to relate Landau varieties to principal $A$-determinants. This will allow us to draw various consequences from the $\mathcal{A}$-hypergeometric theory to Landau varieties. In particular, we will see certain discrepancies in the classical approach of treating Landau varieties. But before giving a definition of Landau varieties, we will start with a reformulation of the Landau equations.\bigskip
The Landau equations stated in \cref{eq:MomLandau1}, \cref{eq:MomLandau2} involve the integration variables in momentum space as well as the integration variables of parametric space. There are also equivalent equations, which are stated in the parametric variables only. As aforementioned, the second Symanzik polynomial $\mathcal{F} (p,x)$ can be written as a discriminant of $\Lambda (k,p,x)\, \mathcal{U}(x)$ with respect to the loop momenta $k$, where $\Lambda(k,p,x) = \sum_{i=1}^n x_i D_i$ was defined in \cref{eq:LambdaxDrelation}. Hence, it is immediately clear that Landau's equations \cref{eq:MomLandau1}, \cref{eq:MomLandau2} will be conditions on the second Symanzik polynomial $\mathcal{F}$. Instead of eliminating $k$ from \cref{eq:MomLandau1}, \cref{eq:MomLandau2}, one can also derive the Landau equations in parametric space directly from the parametric integral representations \cite{NakanishiOrdinaryAnomalousThresholds1959}.
\begin{lemma}[Parametric space Landau equations \cite{NakanishiGraphTheoryFeynman1971, EdenAnalyticSmatrix1966}]
\label{lem:ParLandau}
Under the assumption $\mathcal{U}(x)\neq 0$, a point $z\in\mathbb C^N$ is a Landau critical point, if and only if the equations
\begin{align}
x_i \pd{\mathcal{F}}{x_i} &= 0 \quad \text{for } i=1,\ldots,n \label{eq:LandauEqParameterSpA} \\
\mathcal{F} &= 0 \label{eq:LandauEqParameterSpB}
\end{align}
have a solution in $x\in\mathbb P^{n-1}_{\mathbb C}$. Solutions with $\mathcal{U}(x)=0$ are connected with the second-type singularities, which we will examine in \cref{sec:2ndtypeSingularities}.
\end{lemma}
\begin{proof}
``$\Rightarrow$'':
Consider $\Lambda=k^\top\! M k + 2 Q^\top\! k + J$ from equation \cref{eq:LambdaxDrelation}. We find
\begin{align}
\pd{\Lambda(k,p,x)}{k} &= 2 Mk + 2 Q^\top \point
\end{align}
By the assumption $\mathcal{U}\neq 0$, $M$ is invertible and thus $\pd{\Lambda}{k_j}=0$ from \cref{eq:MomLandau2} implies $k=-M^{-1}Q^\top$. Inserting this equation for $k$ in \cref{eq:LambdaxDrelation} and comparing with \cref{eq:SymanzikPolynomialsMatrices} we find $\Lambda(-M^{-1}Q^\top\!,p,x) = \mathcal{F}/\mathcal{U}$. Hence, we have shown that $\mathcal{F}$ is the discriminant of $\mathcal{U} \Lambda$ with respect to $k$. Therefore, $\Lambda=0$ and $\pd{\Lambda}{k_j}=0$ implies $\mathcal{F}=0$ and furthermore
\begin{align}
x_j \pd{\mathcal{F}}{x_j} = x_j \frac{\partial}{\partial x_j} \left( \mathcal{U} \Lambda \right) = \mathcal{U} x_j D_j = 0 \point
\end{align}
``$\Leftarrow$'': Since the equations \cref{eq:MomLandau1},\cref{eq:MomLandau2} contain more variables than in parameter space, we can always find a value $k^\prime$, such that $\Lambda(k^\prime,p,x) = \mathcal{F}/\mathcal{U}$, without restricting the possible solutions for $x$. This can also be justified by the fact that the Feynman integral \cref{eq:FeynmanMomSp} is invariant under linear transformations of loop momenta. We conclude
\begin{align}
\frac{\partial\Lambda}{\partial k'} &= 2Mk^\prime + 2Q^\top = 0\\
x_j D_j &= x_j \frac{\partial \Lambda}{\partial x_j} = x_j \frac{\partial \mathcal{U}^{-1}}{\partial x_j} \mathcal{F} + x_j \frac{\partial \mathcal{F}}{\partial x_j} \mathcal{U} = 0 \point
\end{align}
\end{proof}
Note that by Euler's theorem one of the $n+1$ equations in \cref{lem:ParLandau} is redundant, which is the reason why we look for solutions in projective space. \bigskip
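To illustrate \cref{lem:ParLandau} in the simplest case, the following small \softwareName{SymPy} sketch (purely illustrative and not part of the toolchain used in this thesis; it assumes the $1$-loop bubble graph with Euclidean conventions, i.e.\ $\mathcal{F} = p^2 x_1 x_2 + (x_1+x_2)(m_1^2 x_1 + m_2^2 x_2)$) eliminates the Schwinger parameters from \cref{eq:LandauEqParameterSpA}, \cref{eq:LandauEqParameterSpB} by a resultant:
\begin{verbatim}
# Sketch: parametric Landau equations for the 1-loop bubble graph.
import sympy as sp

x1, x2, p2, m1, m2 = sp.symbols('x1 x2 p2 m1 m2')

# Second Symanzik polynomial of the bubble (Euclidean conventions assumed)
F = p2*x1*x2 + (x1 + x2)*(m1**2*x1 + m2**2*x2)

# Leading singularity: F = dF/dx1 = 0 with x in P^1 (dF/dx2 = 0 then
# follows from Euler's theorem), so set x2 = 1 and eliminate x1.
F1 = F.subs(x2, 1)
landau = sp.factor(sp.resultant(F1, sp.diff(F1, x1), x1))
print(landau)
# Up to a monomial in the masses, this factors into
#   (p2 + (m1 + m2)**2) * (p2 + (m1 - m2)**2),
# i.e. the normal threshold p^2 = -(m1+m2)^2 and the
# pseudo threshold p^2 = -(m1-m2)^2.
\end{verbatim}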
According to \cite{MizeraLandauDiscriminants2021, BrownPeriodsFeynmanIntegrals2010, PhamSingularitiesIntegrals2011, MuhlbauerMomentumSpaceLandau2020} we will call the variety defined by the Zariski closure of all Landau critical points the \textit{Landau variety} $\mathcal L(\mathcal I_\Gamma)$. Due to Riemann's second removable singularity theorem \cite{KaupHolomorphicFunctionsSeveral1983}, all singularities corresponding to a part of the Landau variety with $\operatorname{codim} \mathcal L(\mathcal I_\Gamma) > 1$ are removable singularities. Hence, we can focus on the codimension one part of $\mathcal L(\mathcal I_\Gamma)$, which we will denote by $\gls{Landau1}$. Based on the Landau equations in parameter space (\cref{lem:ParLandau}), we can directly read off the following relation between the Landau variety and the principal $A$-determinant (see \cref{ssec:principalAdet}). The following theorem can also be understood as an alternative definition of Landau varieties.
\begin{theorem}[Landau variety] \label{thm:LandauVar}
Let $\mathcal{F}\in\mathbb C[x_1,\ldots,x_n]$ be the second Symanzik polynomial of a Feynman graph $\Gamma$ and let $\mathcal{A}_\mathcal{F}\subset \mathbb Z^n$ be the support of $\mathcal{F}$, i.e.\ $\mathcal{F} = \sum_{a^{(j)}\in \mathcal{A}_\mathcal{F}} z_j x^{a^{(j)}}$. The Landau variety is given by the (simple) principal $A$-determinant of $\mathcal{F}$
\begin{align}
\mathcal L_1(\mathcal I_\Gamma) = \Var\!\left( E_{\mathcal{A}_\mathcal{F}}(\mathcal{F})\right) = \Var\!\left( \widehat E_{\mathcal{A}_\mathcal{F}}(\mathcal{F})\right) \point
\end{align}
In particular, $\mathcal L_1(\mathcal I_\Gamma)$ is independent of the parameters ${\underline{\nu}}$.
\end{theorem}
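As a quick illustration of \cref{thm:LandauVar} (a textbook example, included here only for orientation), consider again the $1$-loop bubble graph. Its second Symanzik polynomial is the binary quadric $\mathcal{F} = z_1 x_1^2 + z_2 x_1 x_2 + z_3 x_2^2$ with $z_1 = m_1^2$, $z_2 = p^2+m_1^2+m_2^2$ and $z_3 = m_2^2$, and its (simple) principal $A$-determinant is, up to sign, the product of the $A$-discriminant of $\mathcal{F}$ with the contributions of the two vertices of $\operatorname{Newt}(\mathcal{F})$:
\begin{align}
\widehat E_{\mathcal{A}_\mathcal{F}}(\mathcal{F}) = \pm\, z_1 z_3 \left(z_2^2 - 4 z_1 z_3\right) = \pm\, m_1^2\, m_2^2 \left(p^2+(m_1+m_2)^2\right)\left(p^2+(m_1-m_2)^2\right) \point
\end{align}
The last two factors are the normal threshold $p^2=-(m_1+m_2)^2$ and the pseudo threshold $p^2=-(m_1-m_2)^2$, while the factors $m_i^2$ stem from the vertices of $\operatorname{Newt}(\mathcal{F})$ and correspond to non-leading singularities.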
Usually, one splits the calculation of Landau singularities into a leading singularity with all $x_i\neq 0$ in \cref{eq:LandauEqParameterSpA}, \cref{eq:LandauEqParameterSpB} and non-leading singularities with $x_i=0$ for $i\in I$, where $\emptyset\neq I\subsetneq \{1,\ldots,n\}$ \cite{EdenAnalyticSmatrix1966}. Every non-leading singularity can be interpreted as a leading singularity of the subgraph where all edges corresponding to $I$ are contracted. This is due to \cref{eq:UContracted} and its corresponding identity for $\mathcal{F}_0$. Hence, setting a Schwinger parameter $x_i$ in the second Symanzik polynomial $\mathcal{F}$ to zero is equivalent to considering the subgraph where the corresponding edge $e_i$ is contracted. Note that the second Symanzik polynomial vanishes if the edge $e_i$ corresponds to a tadpole. This procedure seems at first glance to be a natural distinction of cases appearing in \cref{eq:LandauEqParameterSpA} and \cref{eq:LandauEqParameterSpB} and works in special cases. However, when considering systems of polynomials in several variables such as \cref{eq:LandauEqParameterSpA}, \cref{eq:LandauEqParameterSpB}, we should rather consider the ideal generated by these polynomials \cite{SturmfelsSolvingSystemsPolynomial2002}. Due to the more intricate situation with multivariate polynomials, we cannot expect primary decomposition \cite{GreuelSingularIntroductionCommutative2002} to work so naively, except for special cases. Hence, in general we will not expect such a simple decomposition of Landau varieties $\mathcal L_1(\mathcal I_\Gamma)$.
When comparing with the results of \cref{ssec:principalAdet}, we will find a similar but in general different splitting. According to \cref{thm:pAdet-factorization} we have a factorization
\begin{align}
\widehat E_{\mathcal{A}_\mathcal{F}}(\mathcal{F}) = \pm \prod_{\tau\subseteq \operatorname{Newt}(\mathcal{F})} \Delta_{\mathcal{A}_\mathcal{F}\cap\tau} (\mathcal{F}_\tau) \label{eq:LandauFactorizationPrincipalADet}
\end{align}
into $A$-discriminants, where $\mathcal{F}_\tau$ denotes the truncated polynomial of $\mathcal{F}$ defined by \cref{eq:truncatedPolynomialDefinition} and the product runs over all faces $\tau$ of $\operatorname{Newt}(\mathcal{F})$. The decomposition into subgraphs and the one into faces of $\operatorname{Newt}(\mathcal{F})$ coincide if the second Symanzik polynomial $\mathcal{F}$ contains all monomials of a given degree. However, in general the subgraph procedure will miss certain contributions.
\begin{lemma} \label{lem:SubgraphPolynomialsVsTruncated}
For an index set $I\subseteq\{1,\ldots,n\}$ we call $\gls{FIx} :=\mathcal{F}(x)|_{\{x_i=0\,|\, i\in I\}}$ the subgraph polynomial associated to $I$. Every subgraph polynomial is also a truncated polynomial $\mathcal{F}_\tau$ for some face $\tau\subseteq\operatorname{Newt}(\mathcal{F})$. The converse is true if $\mathcal{F}$ contains all monomials of degree $L+1$. However, the converse is not true in general.
\end{lemma}
\begin{proof}
Let $\phi(p) = - \sum_{i=1}^n b_i p_i$ be a linear form with $b_i=1$ for all $i\in I$ and $b_i=0$ otherwise. This linear form takes its maximal value $\max \phi(p) =0$ for precisely those points $p\in\mathbb R_+^n$ with $p_i=0$ for $i\in I$. Since all points of $\operatorname{Newt}(\mathcal{F})$ are contained in the positive orthant $\mathbb R_+^n$, such a linear map $\phi$ defines the corresponding face $\tau$ according to \cref{eq:facedef}.
In the case where $\mathcal{F}$ contains all possible monomials of a given degree, the Newton polytope is a simplex with $n$ vertices. The faces of this simplex are trivially in a one-to-one correspondence with the subsets of $\{1,\ldots,n\}$.
\end{proof}
Thus, beyond $1$-loop graphs and banana graphs, which contain all monomials of a given degree in the second Symanzik polynomial, one may get additional singularities from the truncated polynomials, which are missed by the subgraph approach. Remarkably, there are also non-trivial factors missed by the classical subgraph approach, i.e.\ there are $A$-discriminants of missed truncated polynomials which are neither $1$ nor contained in another $A$-discriminant. For example, the Landau variety of the dunce's cap graph will contain a factor which has the shape of a Landau variety of a $1$-loop bubble graph. This contribution is overlooked by the subgraph approach, see \cref{sec:ExampleDuncesCap}. Hence, this observation describes a serious and unexpected issue in the current understanding of Landau varieties beyond $1$-loop and banana graphs. Moreover, those additional, overlooked singularities can also appear on the principal sheet, i.e.\ we will find solutions $x\in\mathbb R^n_{> 0}$ (see \cref{sec:ExampleDuncesCap}).
That there is a serious issue in the decomposition of the Landau variety in the classical approach with subgraphs was also indicated in \cite{LandshoffHierarchicalPrinciplePerturbation1966} and further discussed in \cite{BoylingHomologicalApproachParametric1968}\footnote{I would like to thank Marko Berghoff who brought these two articles to my attention.}. By means of the principal $A$-determinant, this problem can now be cleared up and the correct decomposition of Landau varieties can be easily described by the truncated polynomials of $\mathcal{F}$. \bigskip
Apart from those general questions, the relation between Landau varieties and principal $A$-determinants leads also to a very efficient tool to determine Landau varieties. By means of Horn-Kapranov-parameterization (see \cref{ssec:Adiscriminants}) one can compute a parameterization of the Landau variety very fast. If we decompose the Landau variety into its irreducible components $\mathcal L_1 (\mathcal I_\Gamma) = \bigcup_\tau \mathcal L_1^{(\tau)} (\mathcal I_\Gamma)$ in the sense of \cref{eq:LandauFactorizationPrincipalADet}, every component corresponds to an $A$-discriminant. Hence, we can write these components as the image of a parameterization $\psi^{(\tau)}$ defined in \cref{eq:HKPpsi}
\begin{align}
\mathcal L_1^{(\tau)} (\mathcal I_\Gamma) &= \psi^{(\tau)} \!\left(\mathbb P^{r-1}_{\mathbb C} \right) \qquad\text{with} \nonumber \\
\psi^{(\tau)} [t_1 : \ldots : t_r ] &= \left( \prod_{i=1}^N \left(\sum_{j=1}^r b^{(\tau)}_{ij} t_j \right)^{b^{(\tau)}_{i1}}, \ldots, \prod_{i=1}^N \left(\sum_{j=1}^r b^{(\tau)}_{ij} t_j \right)^{b^{(\tau)}_{ir}}\right) \label{eq:LandauHKPpsi}
\end{align}
where $b^{(\tau)}_{ij}$ are the components of a Gale dual of $\mathcal{A}\cap\tau\in\mathbb Z^{(n+1)\times N}$ and $r=N-n-1$. Hence, one only has to determine Gale duals of the corresponding vector configurations $\mathcal{A}\cap\tau$. Since Gale duals can be determined very efficiently, this is particularly pleasing, as the existing results for Landau varieties in the literature are limited to a very small number of graphs \cite{OliveSingularitiesScatteringAmplitudes1962,IslamLeadingLandauCurves1966,RiskAnalyticityEnvelopeDiagrams1968}. Recently, another new tool for the efficient determination of Landau varieties was published \cite{MizeraLandauDiscriminants2021}, which also extends the set of calculable graphs significantly. We will present the scope of the Horn-Kapranov-parameterization with an example in \cref{sec:ExampleDuncesCap}. However, we have to recall that the Horn-Kapranov-parameterization only generates an indirect representation of Landau varieties. Depending on the purpose, a parameterization can be even more useful than the generating polynomial of the Landau variety. But turning \cref{eq:LandauHKPpsi} into a defining equation of the variety by eliminating the parameters is still a very costly task.\bigskip
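To indicate how little input this requires, the following minimal \softwareName{SymPy} sketch (an illustration only; the binary quadric of the bubble graph, $\mathcal{F} = z_1 x_1^2 + z_2 x_1 x_2 + z_3 x_2^2$, is chosen as the simplest possible example) computes a Gale dual as a kernel vector of $\mathcal{A}_\mathcal{F}$ and evaluates the Horn-Kapranov map \cref{eq:LandauHKPpsi}:
\begin{verbatim}
# Sketch: Horn-Kapranov parameterization for the A-discriminant of the
# binary quadric F = z1*x1^2 + z2*x1*x2 + z3*x2^2 (bubble graph example).
import sympy as sp

# Support of F, with the homogenizing row of ones on top
A = sp.Matrix([[1, 1, 1],
               [2, 1, 0],
               [0, 1, 2]])

# A Gale dual: here the kernel of A is 1-dimensional, r = 1
b = A.nullspace()[0]                  # -> (1, -2, 1)^T

# Horn-Kapranov map for r = 1: the torus-invariant combination
# z1*z3/z2^2 on the discriminant locus equals prod_i (b_i*t)^(b_i).
t = sp.symbols('t')
psi = (b[0]*t)**b[0] * (b[1]*t)**b[1] * (b[2]*t)**b[2]
print(sp.simplify(psi))               # -> 1/4, i.e. z2^2 = 4*z1*z3
\end{verbatim}
The output reproduces the well-known discriminant $z_2^2 - 4 z_1 z_3$ of the quadric; for larger graphs one proceeds in the same way, face by face, with the Gale duals of $\mathcal{A}\cap\tau$.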
We further want to recall the observation of \cref{thm:NewtSec}, which relates the Newton polytope of the principal $A$-determinant $E_{\mathcal{A}_\mathcal{F}}(\mathcal{F})$ to the secondary polytope $\Sigma(\mathcal{A}_\mathcal{F})$. As sketched in \cref{ssec:principalAdet}, one can use this relation to determine the defining equation of the Landau variety by means of all regular triangulations of $\operatorname{Conv}(\mathcal{A}_\mathcal{F})$. We refer to \cref{ex:cubicPrincipalAdetFromTriangs} for an illustration of this idea. Furthermore, this connection also gives a lower bound for the number of monomials in the defining polynomial of $\mathcal L_1(\mathcal I_\Gamma)$. Thus, this polynomial will contain at least as many monomials as there are regular triangulations of $\operatorname{Newt}(\mathcal{F})$. Even though this estimation is far from being a sharp bound\footnote{The number of regular triangulations counts only the extreme monomials of the defining polynomial of $\mathcal L_1(\mathcal I_\Gamma)$, i.e.\ the vertices of $\operatorname{Newt}(E_{\mathcal{A}_\mathcal{F}}(\mathcal{F}))$.}, the number of these triangulations grows very fast (see \cref{tab:characteristics}). \bigskip
Another direct consequence of the relation to $\mathcal{A}$-hypergeometric theory is that singularities with respect to $z$ cannot be worse than logarithmic singularities. This can be seen from the representation of Feynman integrals in the neighbourhood of the singular locus by means of canonical series solutions \cite{SaitoGrobnerDeformationsHypergeometric2000}. Hence, $\mathcal{A}$-hypergeometric systems are ``regular'' in a generalized sense of regular singular points of linear ordinary differential equations \cite[sec. 2.3]{CattaniThreeLecturesHypergeometric2006}. A similar result was also shown in \cite[sec. 7.2]{HannesdottirWhatVarepsilonSmatrix2022}.\bigskip
However, we need to point out a further subtlety in the definition of Landau varieties as well as in \cref{thm:LandauVar}. In both places we will assume that the coefficients in the second Symanzik polynomial $z\in\mathbb C^N$ are generic. In the physically relevant case, there will be relations among these coefficients, and they are not necessarily generic. Hence, the application of these extra relations will restrict the Landau variety to a subspace. This is not only an issue in the parametric representation, which involves the Symanzik polynomials. It also appears in momentum space, where the external momenta are treated as vectors in $d$-dimensional Minkowskian space. Thus, there cannot be more than $d$ independent external momenta, and additionally we impose overall momentum conservation. If the variables are not generic, it can occur that a factor in the defining equation of the Landau variety is identically zero. In this case the Landau variety would cover the whole space. On the other hand, we know that there cannot be a singularity with unbounded functional value for all points $z\in\mathbb C^N$ due to the convergence considerations from \cref{sec:DimAnaReg}. Thus, in the limit to the physically relevant case, we want to exclude such ``overall singularities'' in order to make the other singularities apparent\footnote{The problem of a vanishing defining equation of the Landau variety in the presence of physical relations between variables has been known for a long time and is usually ignored, as we will do with \cref{eq:LandauPrincipalADetPhysical}, see e.g.\ \cite[sec. 2.10]{EdenAnalyticSmatrix1966} and also \cref{ex:1loopSecondtype}. However, a deeper understanding of this behaviour would be desirable, and we would identify this as one of the most uncharted areas in this subject.}. Therefore, we want to define
\begin{align}
\gls{pAdetPh} := \, \pm \hspace{-2em} \prod_{\substack{\tau\subseteq\operatorname{Newt}(\mathcal{F}) \\ \left.\Delta_{\mathcal{A}_\mathcal{F}\cap\tau}(\mathcal{F}_\tau)\right|_{z\rightarrow z^{(ph)}} \neq 0}} \hspace{-1em} \Delta_{\mathcal{A}_\mathcal{F}\cap\tau}(\mathcal{F}_\tau) \label{eq:LandauPrincipalADetPhysical}
\end{align}
a simple principal $A$-determinant which contains only the physically relevant parts of the principal $A$-determinant, i.e.\ we omit the factors which vanish after inserting the physical restrictions $z^{(ph)}$ on the variables. Equivalently, we define $\gls{Landau1Ph} := \Var\!\left( \widehat E_{\mathcal{A}_\mathcal{F}}^{ph}(\mathcal{F})\right)$ as the physically relevant Landau variety. Note that the Landau variety $\mathcal L_1(\mathcal I_\Gamma)$ is independent of the parameters ${\underline{\nu}}$, whereas the physical Landau variety $\mathcal L_1^{ph}(\mathcal I_\Gamma)$ may in principle depend on the parameters ${\underline{\nu}}$, since the relations between the variables $z$ may depend on ${\underline{\nu}}$. For example, specific choices of the spacetime dimension $d$ can change the relations between the external momenta.\bigskip
The fact that the singularities of Feynman integrals are related to principal $A$-determinants comes as no surprise. As aforementioned, the singular locus of an $\mathcal{A}$-hypergeometric function will always be generated by a principal $A$-determinant. Comparing the Landau variety $\mathcal L_1(\mathcal I_\Gamma)$ from \cref{thm:LandauVar} with the results of \cref{ssec:SingularLocusAHyp} about $A$-hypergeometric functions (especially \cref{thm:SingularLocusPrincipalAdet}), we would rather expect $\Var(E_{A_\mathcal{G}}(\mathcal{U}+\mathcal{F}))$ instead of $\mathcal L_1(\mathcal I_\Gamma)$ to be the singular locus of $\mathcal I_\Gamma$. The relation between these two varieties can be seen directly from the factorization of the principal $A$-determinant.
\begin{lemma} \label{lem:LandauVarietyContainedInSing}
The Landau variety is contained in the singular locus of the $A$-hyper\-ge\-o\-me\-tric function:
\begin{align}
\mathcal L_1 (\mathcal I_\Gamma) = \Var\big(E_{\mathcal{A}_\mathcal{F}}(\mathcal{F})\big) \subseteq \Var \big(E_{A_\mathcal{G}}(\mathcal{U}+\mathcal{F})\big) = \operatorname{Sing}(H_{\mathcal{A}_\mathcal{G}}({\underline{\nu}})) \point
\end{align}
\end{lemma}
\begin{proof}
$\mathcal{U}$ and $\mathcal{F}$ are homogeneous polynomials of different degrees. Therefore, $\operatorname{Newt}(\mathcal{U}+\mathcal{F})$ has points on two different, parallel hyperplanes and thus $\operatorname{Newt}(\mathcal{U})$ and $\operatorname{Newt}(\mathcal{F})$ are two facets of $\operatorname{Newt}(\mathcal{U}+\mathcal{F})$. By \cref{thm:pAdet-factorization} we see that $\Var (E_{A_\mathcal{G}}(\mathcal{U}+\mathcal{F})) = \Var (E_{\mathcal{A}_\mathcal{F}}(\mathcal{F})) \cup \Var (E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})) \cup \Var (\Delta_{A_\mathcal{G}}(\mathcal{U}+\mathcal{F})) \cup \Var(R)$, where the remaining polynomial $R$, corresponds to all discriminants coming from proper, mixed faces, i.e.\ faces $\tau\subsetneq \operatorname{Newt}(\mathcal{G})$ having points of $\mathcal{U}$ and $\mathcal{F}$.
\end{proof}
Thus, \cref{lem:LandauVarietyContainedInSing} shows what we already indicated above: the Landau variety does not cover all\footnote{Those singularities of the Feynman integral which are not contained in $\mathcal L_1(\mathcal I_\Gamma)$ have nothing to do with the overlooked singularities in $\mathcal L_1(\mathcal I_\Gamma)$ mentioned above. The overlooked singularities discussed above come from the fact that the usually assumed decomposition into subgraphs is in general not the correct approach. The singularities discussed here come from the fact that the second Symanzik polynomial $\mathcal{F}$ only represents a part of the Feynman integral.} kinematic singularities of the Feynman integral, and in general $\Var(E_{\mathcal{A}_\mathcal{F}}(\mathcal{F}))$ will be a proper subvariety of $\Var (E_{A_\mathcal{G}}(\mathcal{G}))$. Based on the prime factorization of the principal $A$-determinant, we will divide the singular locus of the Feynman integral $\Var (E_{A_\mathcal{G}}(\mathcal{G}))$ into four parts
\begin{align}
\Var \big(E_{A_\mathcal{G}}(\mathcal{G})\big) = \Var \big(\widehat E_{A_\mathcal{G}}(\mathcal{G})\big) = \Var \big(\widehat E_{\mathcal{A}_\mathcal{F}}(\mathcal{F})\big) \cup \Var \big(\widehat E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})\big) \cup \Var \big(\Delta_{A_\mathcal{G}}(\mathcal{G})\big) \cup \Var (R) \point
\end{align}
Namely, we factorize $\widehat E_{A_\mathcal{G}}(\mathcal{G})$ into a polynomial $\widehat E_{\mathcal{A}_\mathcal{F}}(\mathcal{F})$ generating the classical Landau variety according to \cref{thm:LandauVar}, a polynomial $\widehat E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})$ which is constant in the physically relevant case, and a polynomial $\Delta_{A_\mathcal{G}}(\mathcal{G})$, which we will associate with the second-type singularities. The remaining polynomial
\begin{align}
\gls{R} := \prod_{\substack{\tau\subsetneq \operatorname{Newt}(\mathcal{U}+\mathcal{F})\\ \tau \nsubseteq \operatorname{Newt} (\mathcal{U}), \tau \nsubseteq \operatorname{Newt} (\mathcal{F})}} \Delta_{A\cap\tau} (\mathcal{G}_\tau) \label{eq:defRproperMixedFaces}
\end{align}
will correspond to second-type singularities of subgraphs, and we will call the roots of $R$ the \textit{mixed type singularities of proper faces}. In the following section, we will analyze step by step these further contributions to the singular locus.\bigskip
\section{Second-type singularities} \label{sec:2ndtypeSingularities}
As aforementioned, the defining polynomial of the singular locus $\operatorname{Sing}(H_\mathcal{A}({\underline{\nu}}))$ splits into several $A$-discriminants. With the $A$-discriminant $\Delta_{A_\mathcal{G}}(\mathcal{U}+\mathcal{F})$ we will associate the so-called second-type singularities \cite{CutkoskySingularitiesDiscontinuitiesFeynman1960, FairlieSingularitiesSecondType1962}. We have to remark that the notion of second-type singularities differs slightly across the literature. Moreover, there is very little known about second-type singularities. Usually, a distinction is made between pure second-type singularities and mixed second-type singularities \cite{EdenAnalyticSmatrix1966, NakanishiGraphTheoryFeynman1971}. The pure second-type singularities do not depend on masses and can be expressed by Gram determinants, whereas the mixed ones appear at higher loop orders. Second-type singularities are slightly better understood in momentum space, where they are endpoint singularities at infinity \cite{MuhlbauerMomentumSpaceLandau2020}. In parametric space, second-type singularities are connected to the case where $\mathcal{U}=0$ \cite[sec. 16]{NakanishiGraphTheoryFeynman1971}.\bigskip
In our approach we will call the variety generated by $\Delta_{A_\mathcal{G}}(\mathcal{G})$ the second-type singularities. By introducing a new variable $x_0$, we can change to the homogeneous setting $\Delta_{\widetilde A_\mathcal{G}}(x_0 \mathcal{U} + \mathcal{F})$ which has the same discriminant, since there is an appropriate injective, affine map connecting $A_\mathcal{G}$ with $\widetilde A_\mathcal{G}$ according to \cref{ssec:Adiscriminants}. Writing the corresponding polynomial equation system explicitly, the $A$-discriminant $\Delta_{A_\mathcal{G}}(\mathcal{G})$ is the defining polynomial of the closure of the set of all coefficients $z\in\mathbb C^N$, such that the equations
\begin{align}
\mathcal{U} = 0, \quad \mathcal{F}_0 = 0, \quad \pd{\mathcal{F}_0}{x_i} + \pd{\mathcal{U}}{x_i} \left( x_0 + \sum_{j=1}^n x_j m_j^2 \right) = 0 \quad\text{for}\quad i=1,\ldots,n \label{eq:secondtype} \point
\end{align}
have a solution for $(x_0,x)\in(\mathbb C^*)^{n+1}$. As before, we denote by $\mathcal{F}_0$ the massless part of the second Symanzik polynomial. Not all conditions in \cref{eq:secondtype} are independent, because the polynomial $x_0\mathcal{U}+\mathcal{F}$ is again homogeneous. Thus, we can drop one equation from \cref{eq:secondtype}. These equations for second-type singularities \cref{eq:secondtype} agree with the description in \cite[sec. 16]{NakanishiGraphTheoryFeynman1971}.
\begin{example}[\nth{2} type singularities of all banana graphs]
Consider the family of massive $L$-loop $2$-point functions, which are also called banana graphs (see \cref{fig:bananaFamily}). These graphs have $n=L+1$ edges and the Symanzik polynomials
\begin{align}
\mathcal{U} = x_1 \cdots x_n \left(\frac{1}{x_1} + \ldots + \frac{1}{x_n} \right)\text{ , } \qquad \mathcal{F}_0 = p^2 x_1\cdots x_n \point
\end{align}
Applying the conditions of \cref{eq:secondtype} we will find the second-type singularity for all banana graphs to be
\begin{align}
p^2 = 0 \point
\end{align}
\end{example}
\begin{example}[\nth{2} type singularities of all $1$-loop graphs] \label{ex:1loopSecondtype}
A massive $1$-loop graph with $n$ edges (see \cref{fig:1loopFamily}) has the Symanzik polynomials
\begin{align}
\mathcal{U} = x_1 + \ldots + x_n \qquad \mathcal{F}_0 = \sum_{1\leq i < j \leq n} u_{ij} x_i x_j
\end{align}
where $u_{ij} := \left(\sum_{k=i}^{j-1} p_k\right)^2$ for $i<j$ defines the dependence on the external momenta. We will set $u_{ij}=u_{ji}$ for $i>j$ and $u_{ii}=0$. By these definitions we obtain $\pd{\mathcal{U}}{x_j} = 1$ and $\pd{\mathcal{F}_0}{x_j} = \sum_{i\neq j} u_{ij} x_i$ for the derivatives. Since we can drop one equation from \cref{eq:secondtype} due to the homogeneity of $x_0 \mathcal{U} +\mathcal{F}$, we obtain a system of linear equations in the $1$-loop case. Eliminating $x_0$ by subtracting equations, we can combine these conditions into a determinant
\begin{align}
\begin{vmatrix}
\hspace{.3cm} 1 & 1 & \cdots & 1 & \\
\multicolumn{4}{c}{\multirow{2}{*}{$\bigl(u_{ij} - u_{jn}\bigr)_{\scalebox{0.7}{$\substack{1\leq i \leq n-1 \\ 1\leq j \leq n}$}}$}} \\
&
\end{vmatrix} = 0 \label{eq:1loopSecondtype}
\end{align}
as the (not necessarily irreducible) defining polynomial of the second-type singularity for all $1$-loop graphs. By the same argument as used in \cite[sec. 16]{NakanishiGraphTheoryFeynman1971}, the condition \cref{eq:1loopSecondtype} is equivalent to the vanishing of the Gram determinant of the external momenta, which is usually taken as the defining condition of the pure second-type singularity \cite{FairlieSingularitiesSecondType1962, EdenAnalyticSmatrix1966}. Note that in the $1$-loop case, the second-type singularity does not depend on the masses. Furthermore, for higher $n$, the external momenta satisfy certain relations, since there cannot be more than $d$ linearly independent vectors in $d$-dimensional Minkowski space. In addition, the external momenta satisfy an overall conservation law. Thus, for\footnote{One can even find that the Gram determinant vanishes for $n > d$, see \cite[sec. 2.10]{EdenAnalyticSmatrix1966}.} $n > d + 1$ the condition \cref{eq:1loopSecondtype} is satisfied for all physical external momenta. Hence, we will remove this contribution to the singular locus whenever $n > d+1$, when we restrict ourselves to the physically relevant case as done by \cref{eq:LandauPrincipalADetPhysical}. We want to emphasize again that this removal of ``unwanted zeros'' is not specific to our approach. As can be seen from this example, the same phenomenon also appears in the ``classical way'' of treating Landau singularities.
\end{example}
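The equivalence between \cref{eq:1loopSecondtype} and the Gram determinant condition can easily be checked in small cases. The following \softwareName{SymPy} sketch (an illustration only; it assumes $n=3$, the conventions $u_{ii}=0$, $u_{ij}=u_{ji}$ from above, and momentum conservation $p_1+p_2+p_3=0$, so that $(p_1+p_2)^2=u_{13}$) verifies that the determinant equals four times the Gram determinant of $p_1$ and $p_2$:
\begin{verbatim}
# Sketch: check eq:1loopSecondtype for n = 3 (triangle graph).
import sympy as sp

u12, u13, u23 = sp.symbols('u12 u13 u23')   # u_ij as defined in the text

# Matrix of eq:1loopSecondtype for n = 3: first row of ones,
# then rows (u_ij - u_j3) for i = 1, 2 and j = 1, 2, 3 (with u_ii = 0).
M = sp.Matrix([[1,          1,         1      ],
               [0 - u13,    u12 - u23, u13 - 0],
               [u12 - u13,  0 - u23,   u23 - 0]])
det = sp.expand(M.det())

# Gram determinant of p1, p2 expressed through the u_ij:
# p1^2 = u12, p2^2 = u23, (p1+p2)^2 = u13  =>  2 p1.p2 = u13 - u12 - u23
gram = u12*u23 - sp.Rational(1, 4)*(u13 - u12 - u23)**2

print(sp.simplify(det - 4*gram))            # -> 0
\end{verbatim}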
As the next contribution to the singular locus $\operatorname{Sing}(H_\mathcal{A}({\underline{\nu}}))$ we will consider the principal $A$-determinant of the first Symanzik polynomial $E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})$. Since the coefficients of the first Symanzik polynomial are all equal to one, the principal $A$-determinant $E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})$ can be either $0$ or $\pm 1$. Note that singularities corresponding to $E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})=0$ can only be pseudo thresholds, i.e.\ all solutions of $\mathcal{U} = \pd{\mathcal{U}}{x_1} = \ldots = \pd{\mathcal{U}}{x_n} = 0$ have to satisfy $x\notin \mathbb R_{>0}^n$. Moreover, we can determine $E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})$ also for certain classes of graphs.
\begin{lemma} \label{lem:EAUfor1LoopAndBanana}
For all $1$-loop graphs and all $L$-loop banana graphs we obtain
\begin{align}
E_{\mathcal{A}_\mathcal{U}}(\mathcal{U}) = 1 \point
\end{align}
\end{lemma}
\begin{proof}
Note that in both cases $\operatorname{Newt}(\mathcal{U})$ describes a simplex. Since $\mathcal{U}$ has no free coefficients, $E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})$ is either $0$ or $\pm 1$. Therefore, in order to prove $E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})=1$ it is sufficient to show that $\mathcal{U}=\pd{\mathcal{U}}{x_1} = \ldots=\pd{\mathcal{U}}{x_n}=0$ leads to a contradiction. For $1$-loop graphs this contradiction is obvious since $\pd{\mathcal{U}}{x_i}=1$.
For $L$-loop banana graphs consider the relation following from \cref{eq:UContractedDeleted}
\begin{align}
\mathcal{U} - x_i \pd{\mathcal{U}}{x_i} = \frac{x_1 \cdots x_n}{x_i} = 0
\end{align}
which has no solution for $x\in(\mathbb C^*)^n$.
\end{proof}
We want to emphasize that \cref{lem:EAUfor1LoopAndBanana} does not hold for general Feynman graphs. The dunce's cap graph is the simplest example resulting in a vanishing principal $A$-determinant of the first Symanzik polynomial. Note that a vanishing of $E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})$ does not correspond to a singularity with unbounded functional value, due to the convergence considerations of \cref{sec:DimAnaReg}. Hence, singularities stemming from $E_{\mathcal{A}_\mathcal{U}}(\mathcal{U})$ can be justifiably omitted in the physical approach, as their singular behaviour only shows up when we add variables to the first Symanzik polynomial, i.e.\ they only affect the neighbourhood of $z_i=1$. Therefore, we will exclude these overall contributions to the singular locus in the spirit of \cref{eq:LandauPrincipalADetPhysical} if we consider the physically relevant case. Thus, we will set $E_{\mathcal{A}_\mathcal{U}}^{ph}(\mathcal{U})=1$. \bigskip
The last contribution $R$ to the singular locus of Feynman integrals defined in \cref{eq:defRproperMixedFaces} comes from discriminants of proper, mixed faces, i.e.\ proper faces of $\operatorname{Newt}(\mathcal{U}+\mathcal{F})$ which are neither completely contained in $\operatorname{Newt}(\mathcal{F})$ nor in $\operatorname{Newt}(\mathcal{U})$. These $A$-discriminants can be associated with second-type singularities of subgraphs, as can be observed in the following example.
\begin{example}[proper, mixed singularities for the triangle graph] \label{ex:triangleGraphProperMixed}
\begin{figure}
\centering
\begin{subfigure}{.45\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=1.5]
\coordinate[dot] (A) at (-1,0);
\coordinate (A1) at (-1.5,-0.5);
\coordinate[dot] (B) at (1,0);
\coordinate (B1) at (1.5,-0.5);
\coordinate[dot] (C) at (0,1.73);
\coordinate (C1) at (0,2.5);
\draw (A) -- node[below] {$3$} (B);
\draw (B) -- node[right] {$1$} (C);
\draw (C) -- node[left] {$2$} (A);
\draw (A1) -- (A);
\draw (B1) -- (B);
\draw (C1) -- (C);
\node at (A1) [below left = 0.7mm of A1] {$p_1$};
\node at (B1) [below right = 0.7mm of B1] {$p_2$};
\node at (C1) [above = 0.7mm of C1] {$p_3$};
\end{tikzpicture}
\caption{Triangle graph}\label{fig:TriangleGraph}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, shape=circle, fill=black, scale=.5}, scale=1.5]
\draw[thin,->] (0,0,0) -- (2.4,0,0) node[anchor=north west] {$\mu_1$};
\draw[thin,->] (0,0,0) -- (0,2.4,0) node[anchor=south east] {$\mu_2$};
\draw[thin,->] (0,0,0) -- (0,0,2.4) node[anchor=south east] {$\mu_3$};
\coordinate[dot] (U1) at (1,0,0);
\coordinate[dot] (U2) at (0,1,0);
\coordinate[dot] (U3) at (0,0,1);
\coordinate[dot] (F1) at (0,1,1);
\coordinate[dot] (F2) at (1,0,1);
\coordinate[dot] (F3) at (1,1,0);
\coordinate[dot] (F4) at (2,0,0);
\coordinate[dot] (F5) at (0,2,0);
\coordinate[dot] (F6) at (0,0,2);
\path[fill=gray!70, draw, opacity=0.5] (1,0,0) -- (0,1,0)--(0,0,1)--cycle;
\path[fill=gray!70, draw, opacity=0.2] (2,0,0) -- (0,2,0)--(0,0,2)--cycle;
\draw (U1) -- (F4); \draw (U2) -- (F5); \draw (U3) -- (F6);
\draw (U1) -- (U2) -- (U3) -- (U1);
\draw (F4) -- (F5) -- (F6) -- (F4);
\end{tikzpicture}
\caption{Newton polytope $\operatorname{Newt} (\mathcal{U}+\mathcal{F})$ corresponding to the triangle graph}\label{fig:NewtonPolytopeTriangleGraph}
\end{subfigure}
\caption[Feynman graph and Newton polytope of the triangle graph]{Feynman graph and Newton polytope of the triangle graph as treated in \cref{ex:triangleGraphProperMixed}. The Newton polytope has one face of codimension zero, five faces of codimension one, nine faces of codimension two and six faces of codimension three. The two parallel triangles in this Newton polytope correspond to $\mathcal{U}$ and $\mathcal{F}$, respectively.}
\end{figure}
We will determine the discriminants of proper, mixed faces of the triangle graph (see \cref{fig:TriangleGraph}). The Symanzik polynomials for the triangle graph are given by
\begin{align}
\mathcal{U} &= x_1 + x_2 + x_3\\
\mathcal{F} &= p_1^2 x_2 x_3 + p_2^2 x_1 x_3 + p_3^2 x_1 x_2 + \mathcal{U} \left(x_1 m_1^2 + x_2 m_2^2 + x_3 m_3^2\right) \point
\end{align}
The corresponding Newton polytope $\operatorname{Newt}(\mathcal{U}+\mathcal{F})$ has $21$ faces in total: one face is the Newton polytope $\operatorname{Newt}(\mathcal{U}+\mathcal{F})$ itself, $7$ faces correspond to faces of $\operatorname{Newt}(\mathcal{U})$, another $7$ correspond to faces of $\operatorname{Newt}(\mathcal{F})$ and the remaining $6$ faces are proper, mixed faces (see \cref{fig:NewtonPolytopeTriangleGraph}). The truncated polynomials corresponding to the $6$ proper, mixed faces are, up to permutations $1\leftrightarrow 2 \leftrightarrow 3$, given by
\begin{align}
r_1 &= m_2^2 x_2^2 + m_3^2 x_3^2 + x_2 + x_3 + \left(p_1^2+m_2^2+m_3^2\right) x_2 x_3 \\
h_1 &= m_1^2 x_1^2 + x_1 \point
\end{align}
As $h_1 = \pd{h_1}{x_1} = 0$ has no common solution, we have $\Delta(h_1) = \Delta(h_2) = \Delta(h_3) = 1$ for the $A$-discriminant of $h_1$ as well as for its symmetric permutations. The polynomial $r_1$ is nothing other than the sum of Symanzik polynomials of a $1$-loop bubble graph. Hence, the solutions of $r_1=\pd{r_1}{x_2}=\pd{r_1}{x_3} = 0$ describe the second-type singularity of the bubble graph, and we obtain $p_1^2=0$, and similarly for all the permutations. Thus, we get
\begin{align}
R = p_1^2 p_2^2 p_3^2
\end{align}
as the contribution to the singular locus of the triangle graph from proper, mixed faces.
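The constraint $p_1^2=0$ can also be reproduced by a direct elimination. A minimal sketch, assuming \softwareName{SymPy} and using the auxiliary symbol \texttt{p1sq} for $p_1^2$:
\begin{verbatim}
import sympy as sp

x2, x3 = sp.symbols('x2 x3')
m2, m3 = sp.symbols('m2 m3', positive=True)
p1sq = sp.Symbol('p1sq')   # stands for p_1^2

r1 = m2**2*x2**2 + m3**2*x3**2 + x2 + x3 + (p1sq + m2**2 + m3**2)*x2*x3
system = [r1, sp.diff(r1, x2), sp.diff(r1, x3)]

print(sp.solve(system, [x2, x3, p1sq], dict=True))
# the generic branch with x2, x3 != 0 forces p1sq -> 0
\end{verbatim}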
\end{example}
We want to mention that also for the discriminants of proper, mixed faces $R$ there can be contributions which are identically zero in the physically relevant case. The simplest example where such a behaviour appears is the $2$-loop $2$-point function, also known as the sunset graph. Again, we will define a physically relevant singular locus, where we have to remove these overall contributions.
Although the contribution of $R$ to the singular locus of Feynman integrals is usually not considered in the literature, we want to remark that it can give a non-trivial contribution to the singular locus.
\section{Landau variety of the double-edged triangle graph} \label{sec:ExampleDuncesCap}
In order to demonstrate the methods described before, we want to calculate the Landau variety of a massive $2$-loop $3$-point function according to \cref{fig:duncescap}, which is known under various names, e.g.\ ``double-edged triangle graph'', ``dunce's cap'', ``parachute'' or ``ice cream cone''. To our knowledge this Landau variety has not been published before. This graph is particularly interesting, since it is the simplest graph which does not belong to the cases discussed in \cref{lem:SubgraphPolynomialsVsTruncated} and \cref{lem:EAUfor1LoopAndBanana}. Using the Horn-Kapranov-parameterization we will give the leading Landau variety in a parametrized form. Compared with the standard methods of eliminating variables from the Landau equations, the Horn-Kapranov-parameterization can be computed very fast. We have to mention that the reason for the effectiveness of the Horn-Kapranov-parameterization lies in a different representation of the result. To determine the defining polynomial of the Landau variety from the parametrized form is still a very time-consuming task. However, for many approaches the parametrized form of Landau varieties can be even more convenient, since the parameterization specifies the Landau singularities directly. \bigskip
\begin{figure}[bht]
\centering
\includegraphics[trim=0 0 0 1.5cm, clip]{Graphics/duncescap}
\caption{Double-edged triangle graph or ``dunce's cap'' graph} \label{fig:duncescap}
\end{figure}
For the Feynman graph of the dunce's cap graph in \cref{fig:duncescap} the Symanzik polynomials are given by
\begin{align}
\mathcal{U} &= (x_1+x_2)(x_3+x_4) + x_3 x_4 \\
\mathcal{F} &= s_1 x_1 x_2 (x_3 + x_4) + s_2 x_1 x_3 x_4 + s_3 x_2 x_3 x_4 \nonumber \\
& + m_1^2 x_1^2 (x_3 + x_4) + m_2^2 x_2^2 (x_3 + x_4) + m_3^2 x_3^2 (x_1 + x_2 + x_4) + m_4^2 x_4^2 (x_1 + x_2 + x_3)
\end{align}
where we abbreviate $s_1 = p_1^2 + m_1^2 + m_2^2$, $s_2= p_2^2 + m_1^2 + m_3^2 + m_4^2$ and $s_3 = p_3^2 + m_2^2 + m_3^2 + m_4^2$. The Newton polytope $\operatorname{Newt}(\mathcal{U}+\mathcal{F})$ has $85$ faces in total. Hence, we expect $85$ contributions to the singular locus. Among these faces, $33$ belong to $\operatorname{Newt}(\mathcal{F})$, $19$ to $\operatorname{Newt}(\mathcal{U})$ and $32$ are proper, mixed faces. We will concentrate on the Landau variety and consider only the faces of $\operatorname{Newt}(\mathcal{F})$. Recalling \cref{eq:LandauFactorizationPrincipalADet} we will have
\begin{align}
\mathcal L_1 (\mathcal I_\Gamma) = \hspace{-1em} \bigcup_{\tau\subseteq\operatorname{Newt}(\mathcal{F})} \hspace{-1em} \mathcal L_1^{(\tau)} (\mathcal I_\Gamma) \qquad\text{with}\quad \mathcal L_1^{(\tau)} (\mathcal I_\Gamma) = \Var \!\left( \Delta_{\mathcal{A}\cap\tau} (\mathcal{F}_\tau) \right)
\end{align}
as the decomposition of the Landau variety $\mathcal L_1 (\mathcal I_\Gamma)$. We will treat the leading singularity $\Delta_\mathcal{A}(\mathcal{F})$ by means of the Horn-Kapranov-parameterization, whereas the $A$-discriminants of all the other truncated polynomials will be determined by \softwareName{Macaulay2}.
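Before turning to the individual faces, the first Symanzik polynomial stated above can be cross-checked independently via the matrix-tree theorem. A minimal sketch, assuming \softwareName{SymPy}, with an edge labelling chosen to reproduce the polynomial $\mathcal{U}$ above (a triangle whose edge between the vertices $1$ and $2$ is doubled):
\begin{verbatim}
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
# dunce's cap: triangle on vertices 0,1,2 with the edge between 1 and 2 doubled
edges = [((0, 1), x1), ((0, 2), x2), ((1, 2), x3), ((1, 2), x4)]

L = sp.zeros(3, 3)                 # weighted Laplacian with edge weights 1/x_e
for (u, v), xe in edges:
    w = 1/xe
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

kirchhoff = L[1:, 1:].det()        # delete row and column of vertex 0
U = sp.expand(x1*x2*x3*x4 * kirchhoff)
print(U)   # x1*x3 + x1*x4 + x2*x3 + x2*x4 + x3*x4 = (x1+x2)(x3+x4) + x3*x4
\end{verbatim}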
Starting with the leading singularity, we can read off the points generating the monomials of $\mathcal{F}=\sum_{a^{(j)}\in\mathcal{A}} z_j x^{a^{(j)}}$
\begin{align}
\mathcal{A} &= \left(
\begin{array}{cccccccccccccc}
1 & 1 & 1 & 0 & 2 & 2 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 & 0 & 2 & 2 & 0 & 1 & 0 & 0 & 1 & 0 \\
1 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 2 & 2 & 2 & 0 & 0 & 1 \\
0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 2 & 2 & 2 \\
\end{array} \right) \\
z &= (s_1,s_1,s_2,s_3,m_1^2,m_1^2,m_2^2,m_2^2,m_3^2,m_3^2,m_3^2,m_4^2,m_4^2,m_4^2) \label{eq:DuncesCapZvars} \point
\end{align}
The leading Landau variety is nothing other than the hypersurface $\{\Delta_{\mathcal{A}}(\mathcal{F}) = 0\}$, and we will determine this hypersurface by a convenient parameterization. For this Horn-Kapranov-parameterization we need a Gale dual of $\mathcal{A}$, and there are several possible choices, e.g.\
\begin{align}
\mathcal B = \resizebox{0.5\hsize}{!}{$ \left(
\begin{array}{cccccccccc}
1 & 1 & 1 & 0 & -1 & -1 & 0 & -1 & 0 & -1 \\
0 & -1 & -1 & 1 & 1 & 1 & -1 & 0 & -1 & 0 \\
-1 & 0 & -1 & -1 & 0 & -1 & 1 & 1 & -1 & -1 \\
-1 & -1 & 0 & -1 & -1 & 0 & -1 & -1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{array}\right) $} \point
\end{align}
According to \cref{ssec:Adiscriminants} a Gale dual directly gives rise to a parameterization of the singular locus of generalized Feynman integrals. Let $z_1,\ldots,z_{14}$ be the coefficients of a generalized Feynman integral, i.e.\ coefficients of the second Symanzik polynomial $\mathcal{F}$. Then, the generic discriminant hypersurface $\{\Delta_{\mathcal{A}_\mathcal{F}}(\mathcal{F})=0\}$ is parametrized by $t\in\mathbb P_{\mathbb C}^{10-1}$
\begin{align}
y_1 &= \frac{z_1 z_{14}}{z_3 z_4} = \frac{R_1 t_1}{R_3 R_4} \quad,\qquad
y_2 = \frac{z_1 z_{13}}{z_2 z_4} = - \frac{R_1 t_2}{R_2 R_4} \quad,\qquad
y_3 = \frac{z_1 z_{12}}{z_2 z_3} = - \frac{R_1 t_3}{R_2 R_3} \nonumber\\
y_4 &= \frac{z_2 z_{11}}{z_3 z_4} = \frac{R_2 t_4}{R_3 R_4} \quad,\qquad
y_5 = \frac{z_2 z_{10}}{z_1 z_4} = - \frac{R_2 t_5}{R_1 R_4} \quad,\qquad
y_6 = \frac{z_2 z_9}{z_1 z_3} = - \frac{R_2 t_6}{R_1 R_3} \nonumber\\
y_7 &= \frac{z_3 z_8}{z_2 z_4} = \frac{R_3 t_7}{R_2 R_4} \quad,\qquad
y_8 = \frac{z_3 z_7}{z_1 z_4} = \frac{R_3 t_8}{R_1 R_4} \quad,\qquad
y_9 = \frac{z_4 z_6}{z_2 z_3} = \frac{R_4 t_9}{R_2 R_3} \nonumber\\
y_{10} &= \frac{z_4 z_5}{z_1 z_3} = \frac{R_4 t_{10}}{R_1 R_3} \label{eq:duncescapHKPgeneric}
\end{align}
with
\begin{align}
R_1 &= t_1+t_2+t_3-t_5-t_6-t_8-t_{10} \ \text{,}\qquad
R_2 = -t_2-t_3+t_4+t_5+t_6-t_7-t_9 \\
R_3 &= t_1+t_3+t_4+t_6-t_7-t_8+t_9+t_{10} \ \text{,}\quad
R_4 = t_1+t_2+t_4+t_5+t_7+t_8-t_9-t_{10} \nonumber \ \text{.}
\end{align}
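The parameterization \cref{eq:duncescapHKPgeneric} can be generated mechanically from the Gale dual: each row $b_i$ of $\mathcal B$ defines a linear form $\langle b_i,t\rangle$, and each column defines a monomial in these forms. A minimal sketch, assuming \softwareName{SymPy}; note that $R_3$ and $R_4$ above are the negatives of the linear forms $\langle b_3,t\rangle$ and $\langle b_4,t\rangle$, which accounts for the explicit signs in \cref{eq:duncescapHKPgeneric}:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t1:11')          # projective parameters t1, ..., t10
top = [[ 1,  1,  1,  0, -1, -1,  0, -1,  0, -1],
       [ 0, -1, -1,  1,  1,  1, -1,  0, -1,  0],
       [-1,  0, -1, -1,  0, -1,  1,  1, -1, -1],
       [-1, -1,  0, -1, -1,  0, -1, -1,  1,  1]]
bottom = [[1 if i == 9 - k else 0 for i in range(10)] for k in range(10)]
B = sp.Matrix(top + bottom)      # the Gale dual from above

lin = B * sp.Matrix(t)           # the 14 linear forms <b_i, t>
y = [sp.cancel(sp.Mul(*[lin[i]**B[i, j] for i in range(14)]))
     for j in range(10)]
print(y[0])                      # Horn-Kapranov parameterization of y_1
\end{verbatim}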
However, up to now this Horn-Kapranov-parameterization gives the zero locus of discriminants for polynomials with generic coefficients. Thus, in order to adjust this parameterization to the physically relevant case we have to include the constraints given by the relations between the coefficients of the Symanzik polynomials \cref{eq:DuncesCapZvars}. This can be accomplished e.g.\ by \softwareName{Mathematica} \cite{WolframResearchIncMathematicaVersion12} using the command ``Reduce'' or by \softwareName{Macaulay2} \cite{GraysonMacaulay2SoftwareSystem}. For algorithmic reasons it can be more efficient to reduce the effective variables $y_1,\ldots,y_{10}$, the linear forms $R_1,\ldots,R_4$ and the parameters $t_1,\ldots,t_{10}$ step by step. In doing so, we can eliminate $4$ parameters, and the Landau variety splits into two components
\begin{align}
\frac{m_2^2}{m_1^2} &= \frac{t_6^2 t_8}{t_5^2 t_{10}} \quad,\qquad
\frac{m_3^2}{m_1^2} = -\frac{t_6^2 t_9}{\left(t_5+t_6\right) \left(t_9+t_{10}\right)t_{10} } \quad,\qquad
\frac{m_4^2}{m_1^2} = -\frac{t_3 t_6 t_{10}}{\left(t_5+t_6\right) \left(t_9+t_{10}\right)t_9 } \nonumber\\
\frac{s_1}{m_1^2} &= -\frac{t_5 \left(t_6 t_9+t_3 t_{10}\right)+t_6 \left(t_6 t_9+t_8 t_9+t_3 t_{10}+t_9t_{10}\right)}{t_5 t_9 t_{10}} \nonumber \\
\frac{s_2}{m_1^2} &= -\frac{t_5 \left(t_9+t_{10}\right) \left(t_6 t_9+t_3 t_{10}\right) + t_6 \left[t_6 t_9^2+t_8 t_9 \left(t_9+t_{10}\right) -t_{10} \left(t_9^2+t_{10} t_9-t_3 t_{10}\right)\right]}{\left(t_5+t_6\right) \left(t_9+t_{10}\right)t_9 t_{10}} \nonumber \\
\frac{s_3}{m_1^2} &= -\frac{t_6 \left[t_6 \left(t_9+t_{10}\right) \left(t_6 t_9-t_8 t_9+t_3 t_{10} + t_9 t_{10}\right)+t_5 \left(t_6 t_9^2+t_3 t_{10}^2\right)\right]}{\left(t_5+t_6\right) \left(t_9+t_{10}\right) t_5 t_9 t_{10} } \label{eq:duncesCapResult1}
\end{align}
and
\begin{align}
\frac{m_2^2}{m_1^2} &= \frac{t_6^2 t_8}{t_5^2 t_{10}} \quad,\qquad
\frac{m_3^2}{m_1^2} = \frac{t_6^2 t_9}{t_4 t_{10}^2} \quad,\qquad
\frac{m_4^2}{m_1^2} = \frac{t_6^2}{t_4 t_9} \quad,\qquad
\frac{s_1}{m_1^2} = \frac{t_6 \left(\left(t_4-t_9\right) t_{10} -t_8 t_9\right)}{t_5 t_9 t_{10}} \nonumber \\
\frac{s_2}{m_1^2} &= -\frac{t_6 \left[t_{10} \left(t_4 \left(t_9+t_{10}\right)+t_9 \left(2 t_6+t_9+t_{10}\right)\right)-t_8 t_9 \left(t_9+t_{10}\right)\right]}{t_4 t_9 t_{10}^2} \nonumber\\
\frac{s_3}{m_1^2} &= -\frac{t_6^2 \left[t_8 t_9 \left(t_9+t_{10}\right)+t_{10} \left(t_4 \left(t_9+t_{10}\right)-t_9 \left(-2 t_5+t_9+t_{10}\right)\right)\right]}{t_4 t_5 t_9 t_{10}^2} \point \label{eq:duncesCapResult2}
\end{align}
Hence, the leading Landau variety of the dunce's cap graph will be given by the values of \cref{eq:duncesCapResult1} and \cref{eq:duncesCapResult2} for all values $[t_3 : t_5 : t_6 : t_8 : t_9 : t_{10}]\in\mathbb P^{6-1}_{\mathbb C}$ and $[t_4 : t_5 : t_6 : t_8 : t_9 : t_{10}]\in\mathbb P^{6-1}_{\mathbb C}$, respectively. We refrain from renaming the parameters $t$ in order to ensure the reproducibility of the results from \cref{eq:duncescapHKPgeneric}. \bigskip
The remaining $32$ proper faces of $\operatorname{Newt}(\mathcal{F})$ result in the contributions to the non-leading part of the Landau variety. We will determine them by means of a \softwareName{Macaulay2} \cite{GraysonMacaulay2SoftwareSystem} routine, which can be found in \cref{ssec:Macaulay2} and which is based on \cite{StaglianoPackageComputationsSparse2020, StaglianoPackageComputationsClassical2018}.
For the $7$ faces of $\operatorname{Newt}(\mathcal{F})$ with codimension $1$ we obtain the following truncated polynomials $B_1,\ldots,B_7$ with their $A$-discriminants
\begin{align}
B_1 &= s_3 x_2 x_3 x_4 + m_2^2 x_2^2 (x_3+x_4) + m_3^2 x_3^2 (x_2+x_4) + m_4^2 x_4^2 (x_2+x_3) = \mathcal{F}|_{x_1=0} \nonumber \\
&\Rightarrow \Delta(B_1) = m_2^4 m_3^4 m_4^4 \left(s_3-m_2^2-m_3^2-m_4^2\right)^2 \nonumber \\
&\qquad \left(s_3^{4} - 8 s_3^{2} m_2^2 m_3^2 + 16m_2^4m_3^4-8s_3^{2}m_2^2m_4^2-8s_3^{2}m_3^2m_4^2 + 64s_3m_2^2m_3^2m_4^2 \right. \nonumber \\
&\qquad \left. - 32m_2^4m_3^2m_4^2-32m_2^2m_3^4m_4^2+16m_2^4m_4^4-32m_2^2m_3^2m_4^4+16m_3^4m_4^4\right) \allowdisplaybreaks\\
B_2 &= s_2 x_1 x_3 x_4 + m_1^2 x_1^2 (x_3+x_4) + m_3^2 x_3^2 (x_1+x_4) + m_4^2 x_4^2 (x_1+x_3) = \mathcal{F}|_{x_2=0} \nonumber \\
&\Rightarrow \Delta(B_2) = m_1^4 m_3^4 m_4^4 \left( s_2-m_1^2-m_3^2-m_4^2 \right)^2 \nonumber \\
&\qquad \left( s_2^{4}-8s_2^{2}m_1^2m_3^2+16m_1^4m_3^4-8s_2^{2}m_1^2m_4^2-8s_2^{2}m_3^2m_4^2 + 64s_2m_1^2m_3^2m_4^2 \right. \nonumber \\
&\qquad \left. - 32m_1^4m_3^2m_4^2-32m_1^2m_3^4m_4^2+16m_1^4m_4^4-32m_1^2m_3^2m_4^4+16m_3^4m_4^4 \right) \allowdisplaybreaks\\
B_3 &= x_4 [ s_1 x_1 x_2 + m_1^2 x_1^2 + m_2^2 x_2^2 + m_4^2 x_4 (x_1+x_2)] = \mathcal{F}|_{x_3=0} \nonumber \\
&\Rightarrow \Delta(B_3) = m_4^4(s_1-m_1^2-m_2^2) \allowdisplaybreaks\\
B_4 &= (x_3+x_4) ( s_1 x_1 x_2 + m_1^2 x_1^2 + m_2^2 x_2^2) \Rightarrow \Delta(B_4) = s_1^2 - 4m_1^2m_2^2 \allowdisplaybreaks\\
B_5 &= m_3^2 x_3^2 (x_1 + x_2 + x_4) \Rightarrow \Delta(B_5) = 1 \allowdisplaybreaks\\
B_6 &= x_3 ( s_1 x_1 x_2 + m_1^2 x_1^2 + m_2^2 x_2^2 + m_3^2 x_3 (x_1+x_2)) = \mathcal{F}|_{x_4=0} \nonumber \\
&\Rightarrow \Delta(B_6) = m_3^4(s_1-m_1^2-m_2^2) \allowdisplaybreaks \\
B_7 &= m_4^2 x_4^2 (x_1 + x_2 + x_3) \Rightarrow \Delta(B_7) = 1 \point
\end{align}
The truncated polynomials $B_1,B_2,B_3$ and $B_6$ would also be present in the approach of subgraphs, i.e.\ we can generate those polynomials also by setting certain variables $x_i$ to zero. However, for the other polynomials, which cannot be generated by subgraphs, we will also obtain non-trivial contributions. Note that the $A$-discriminant of $B_4$ will vanish when considering general solutions in $x\in(\mathbb C^*)^n$. The discriminant for $B_4$ given above is for the restriction to solutions with positive real part.
From the faces of $\operatorname{Newt}(\mathcal{F})$ with codimension $2$ we will obtain $15$ truncated polynomials, which are listed below together with their $A$-discriminants
\begin{align}
C_1 &= x_3 x_4 (m_3^2 x_3 + m_4^2 x_4) = \mathcal{F}|_{x_1=x_2=0} \Rightarrow \Delta(C_1) = 1 \allowdisplaybreaks\\
C_2 &= x_2 x_4 (m_2^2 x_2 + m_4^2 x_4) = \mathcal{F}|_{x_1=x_3=0} \Rightarrow \Delta(C_2) = 1 \allowdisplaybreaks\\
C_3 &= m_2^2 x_2^2 (x_3 + x_4) \Rightarrow \Delta(C_3) = 1 \allowdisplaybreaks\\
C_4 &= m_3^2 x_3^2 (x_2 + x_4) \Rightarrow \Delta(C_4) = 1 \allowdisplaybreaks\\
C_5 &= x_2 x_3 (m_2^2 x_2 + m_3^2 x_3) = \mathcal{F}|_{x_1=x_4=0} \Rightarrow \Delta(C_5) = 1 \allowdisplaybreaks\\
C_6 &= m_4^2 x_4^2 (x_2 + x_3) \Rightarrow \Delta(C_6) = 1 \allowdisplaybreaks\\
C_7 &= x_1 x_4 (m_1^2 x_1 + m_4^2 x_4) = \mathcal{F}|_{x_2=x_3=0} \Rightarrow \Delta(C_7) = 1 \allowdisplaybreaks\\
C_8 &= m_1^2 x_1^2 (x_3 + x_4) \Rightarrow \Delta(C_8) = 1 \allowdisplaybreaks\\
C_9 &= m_3^2 x_3^2 (x_1 + x_4) \Rightarrow \Delta(C_9) = 1 \allowdisplaybreaks\\
C_{10} &= x_1 x_3 (m_1^2 x_1 + m_3^2 x_3) = \mathcal{F}|_{x_2=x_4=0} \Rightarrow \Delta(C_{10}) = 1 \allowdisplaybreaks\\
C_{11} &= m_4^2 x_4^2 (x_1 + x_3) \Rightarrow \Delta(C_{11}) = 1 \allowdisplaybreaks\\
C_{12} &= x_4 (s_1 x_1 x_2 + m_1^2 x_1^2 + m_2^2 x_2^2) \Rightarrow \Delta(C_{12}) = s_1^{2}-4 m_1^2 m_2^2 \allowdisplaybreaks\\
C_{13} &= m_4^2 x_4^2 (x_1 + x_2) \Rightarrow \Delta(C_{13}) = 1 \allowdisplaybreaks\\
C_{14} &= x_3 (s_1 x_1 x_2 + m_1^2 x_1^2 + m_2^2 x_2^2) \Rightarrow \Delta(C_{14}) = s_1^{2}-4 m_1^2 m_2^2 \allowdisplaybreaks\\
C_{15} &= m_3^2 x_3^2 (x_1 + x_2) \Rightarrow \Delta(C_{15}) = 1 \point
\end{align}
The polynomials $C_1, C_2, C_5, C_7$ and $C_{10}$ also appear as Symanzik polynomials of subgraphs. Remarkably, the truncated polynomials $C_{12}$ and $C_{14}$, which do not correspond to subgraph polynomials, result in non-trivial factors. These truncated polynomials have the shape of graph polynomials of a $1$-loop bubble graph. Their contribution to the singular locus is also not contained in any other discriminant coming from subgraph polynomials. Hence, those contributions are overlooked by the classical approach. Moreover, for $\Re(s_1)<0$ those additional singularities are on the principal sheet, i.e.\ they allow a solution with $x\in\mathbb R^n_{>0}$.
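Indeed, up to the monomial prefactor, $C_{12}$ and $C_{14}$ contain the binary quadric $m_1^2 x_1^2 + s_1 x_1 x_2 + m_2^2 x_2^2$, whose $A$-discriminant is the classical discriminant of a quadratic polynomial. A one-line check, assuming \softwareName{SymPy}:
\begin{verbatim}
import sympy as sp

x, s1, m1, m2 = sp.symbols('x s1 m1 m2')
# dehomogenize m1^2 x1^2 + s1 x1 x2 + m2^2 x2^2 by setting x2 = 1
print(sp.discriminant(m1**2*x**2 + s1*x + m2**2, x))   # -> s1**2 - 4*m1**2*m2**2
\end{verbatim}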
The remaining faces of codimension $3$ are the vertices of $\operatorname{Newt}(\mathcal{F})$. They result in
\begin{align}
D_1 &= m_3^2 x_3^2 x_4 \Rightarrow \Delta(D_1) = m_3^2 \qquad\quad D_6 = m_3^2 x_3^2 x_2 \Rightarrow \Delta(D_6) = m_3^2 \allowdisplaybreaks\\
D_2 &= m_4^2 x_4^2 x_3 \Rightarrow \Delta(D_2) = m_4^2 \qquad\quad D_7 = m_1^2 x_1^2 x_4 \Rightarrow \Delta(D_7) = m_1^2 \allowdisplaybreaks\\
D_3 &= m_2^2 x_2^2 x_4 \Rightarrow \Delta(D_3) = m_2^2 \qquad\quad D_8 = m_4^2 x_4^2 x_1 \Rightarrow \Delta(D_8) = m_4^2 \allowdisplaybreaks\\
D_4 &= m_4^2 x_4^2 x_2 \Rightarrow \Delta(D_4) = m_4^2 \qquad\quad D_9 = m_1^2 x_1^2 x_3 \Rightarrow \Delta(D_9) = m_1^2 \allowdisplaybreaks\\
D_5 &= m_2^2 x_2^2 x_3 \Rightarrow \Delta(D_5) = m_2^2 \qquad\quad D_{10} = m_3^2 x_3^2 x_1 \Rightarrow \Delta(D_{10}) = m_3^2 \point
\end{align}
Therefore, expressed in terms of graphs, we will obtain the following contributions to the Landau variety of the dunce's cap graph
\begin{align}
\widehat E_A (\mathcal{F}) &= \Delta \!\left( %
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,-1);
\coordinate[dot] (C) at (2,1);
\draw (A) -- node[above] {$1$} (C);
\draw (A) -- node[below] {$2$} (B);
\draw (C) arc[start angle=135, end angle=225, radius=1.4142];
\draw (B) arc[start angle=-45, end angle=45, radius=1.4142];
\node at (1.3,0) {$3$};
\node at (2.7,0) {$4$};
\draw (A) -- node[below] {$p_1$} ++(-1,0);
\draw (B) -- node[below right,shift={(0,.1)}] {$p_3$} ++(0.2,-.7);
\draw (C) -- node[above right,shift={(0,.1)}] {$p_2$} ++(0.2,.7);
\end{tikzpicture} %
\right) \cdot \Delta \!\left( %
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\coordinate (A1) at (1,1);
\coordinate (A2) at (1,-1);
\draw (A) -- node[above] {$3$} (B);
\draw (1,0) circle (1); \node at (A1) [above = 0.5mm of A1] {$2$}; \node at (A2) [below = 0.5mm of A2] {$4$};
\draw (A) -- node[below] {$p_3$} ++(-1,0);
\draw (B) -- node[above,shift={(0,.1)}] {$p_1$} ++(1,1);
\draw (B) -- node[below,shift={(0,-.1)}] {$p_2$} ++(1,-1);
\end{tikzpicture} %
\right) \cdot \Delta \!\left( %
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\coordinate (A1) at (1,1);
\coordinate (A2) at (1,-1);
\draw (A) -- node[above] {$3$} (B);
\draw (1,0) circle (1); \node at (A1) [above = 0.5mm of A1] {$1$}; \node at (A2) [below = 0.5mm of A2] {$4$};
\draw (A) -- node[below] {$p_2$} ++(-1,0);
\draw (B) -- node[above,shift={(0,.1)}] {$p_1$} ++(1,1);
\draw (B) -- node[below,shift={(0,-.1)}] {$p_3$} ++(1,-1);
\end{tikzpicture}
\right) \cdot \nonumber \\
& \Delta \!\left(
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\coordinate (A1) at (1,1);
\coordinate (A2) at (1,-1);
\coordinate (A3) at (3.2,0);
\draw (1,0) circle (1); \node at (A1) [above = 0.5mm of A1] {$1$}; \node at (A2) [below = 0.5mm of A2] {$2$};
\draw (A) -- node[below] {$p_1$} ++(-1,0);
\draw (B) -- node[above,shift={(0,.6)}] {$p_2$} ++(0,1.3);
\draw (B) -- node[below,shift={(0,-.6)}] {$p_3$} ++(0,-1.3);
\draw (2.6,0) circle (0.6); \node at (A3) [left = 0.5mm of A3] {$4$};
\end{tikzpicture} %
\right) \cdot \Delta \!\left( %
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\coordinate (A1) at (1,1);
\coordinate (A2) at (1,-1);
\coordinate (A3) at (3.2,0);
\draw (1,0) circle (1); \node at (A1) [above = 0.5mm of A1] {$1$}; \node at (A2) [below = 0.5mm of A2] {$2$};
\draw (A) -- node[below] {$p_1$} ++(-1,0);
\draw (B) -- node[above,shift={(0,.6)}] {$p_2$} ++(0,1.3);
\draw (B) -- node[below,shift={(0,-.6)}] {$p_3$} ++(0,-1.3);
\draw (2.6,0) circle (0.6); \node at (A3) [left = 0.5mm of A3] {$3$};
\end{tikzpicture} %
\right) \cdot \Delta \!\left( %
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate[dot] (B) at (2,0);
\coordinate (A1) at (1,1);
\coordinate (A2) at (1,-1);
\coordinate (A3) at (4,0);
\draw (1,0) circle (1); \node at (A1) [above = 0.5mm of A1] {$1$}; \node at (A2) [below = 0.5mm of A2] {$2$};
\draw (A) -- node[below] {$p_1$} ++(-1,0);
\draw (B) -- node[above,shift={(0,.1)}] {$p_2$} ++(1,1);
\draw (B) -- node[below,shift={(0,-.1)}] {$p_3$} ++(1,-1);
\end{tikzpicture} %
\right) \cdot\nonumber \\
& \Delta \!\left( %
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate (A1) at (1.6,0);
\draw (0.8,0) circle (.8); \node at (A1) [left = 0.5mm of A1] {$1$};
\draw (A) -- node[left,shift={(-.3,0)}] {$p_2$} ++(-.7,0);
\draw (A) -- node[left,shift={(-.3,.3)}] {$p_1$} ++(-.7,.7);
\draw (A) -- node[left,shift={(-.3,-.3)}] {$p_3$} ++(-.7,-.7);
\end{tikzpicture} %
\right) \cdot \Delta \!\left( %
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate (A1) at (1.6,0);
\draw (0.8,0) circle (.8); \node at (A1) [left = 0.5mm of A1] {$2$};
\draw (A) -- node[left,shift={(-.3,0)}] {$p_2$} ++(-.7,0);
\draw (A) -- node[left,shift={(-.3,.3)}] {$p_1$} ++(-.7,.7);
\draw (A) -- node[left,shift={(-.3,-.3)}] {$p_3$} ++(-.7,-.7);
\end{tikzpicture} %
\right) \cdot \Delta \!\left( %
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate (A1) at (1.6,0);
\draw (0.8,0) circle (.8); \node at (A1) [left = 0.5mm of A1] {$3$};
\draw (A) -- node[left,shift={(-.3,0)}] {$p_2$} ++(-.7,0);
\draw (A) -- node[left,shift={(-.3,.3)}] {$p_1$} ++(-.7,.7);
\draw (A) -- node[left,shift={(-.3,-.3)}] {$p_3$} ++(-.7,-.7);
\end{tikzpicture} %
\right) \cdot \Delta \!\left( %
\begin{tikzpicture}[baseline={([yshift=-.5ex]current bounding box.center)}, thick, dot/.style = {draw, shape=circle, fill=black, scale=.3}, every node/.style={scale=0.8}, scale=0.7]
\coordinate[dot] (A) at (0,0);
\coordinate (A1) at (1.6,0);
\draw (0.8,0) circle (.8); \node at (A1) [left = 0.5mm of A1] {$4$};
\draw (A) -- node[left,shift={(-.3,0)}] {$p_2$} ++(-.7,0);
\draw (A) -- node[left,shift={(-.3,.3)}] {$p_1$} ++(-.7,.7);
\draw (A) -- node[left,shift={(-.3,-.3)}] {$p_3$} ++(-.7,-.7);
\end{tikzpicture} %
\right) \label{eq:DuncesCapDecompositionGraphical}
\end{align}
where we suppressed multiplicities of those $A$-discriminants, and we dropped all the $A$-discriminants which are trivial. The first $5$ factors would also be expected by considering subgraphs. In contrast to the approach of subgraphs, we will additionally obtain contributions in terms of bubble graphs and tadpole graphs. Those parts are missed within the classical approach. That the singularities corresponding to the truncated polynomials $C_{12}$ and $C_{14}$ indeed give rise to a threshold of the Feynman integral was found in \cite{AnastasiouTwoloopAmplitudesMaster2007}. \bigskip
We want to mention that it is by no means clear or trivial that the overlooked contributions have the shape of discriminants of graph polynomials. In this particular example we find that they do behave like discriminants of graph polynomials. Furthermore, we can summarize the diagrams involved in \cref{eq:DuncesCapDecompositionGraphical} as follows: we take all subdiagrams which arise from the dunce's cap graph by shrinking edges and deleting tadpoles. Whether such a behaviour also applies in general can only be speculated about at this point. \bigskip
\section{Coamoebas and Feynman's \texorpdfstring{$i\varepsilon$}{i\unichar{"03B5}} prescription} \label{sec:Coamoebas}
In the previous sections we only asked whether there are values of $p$ and $m$ such that the Symanzik polynomials have a multiple root anywhere in the complex plane. However, when applied to Feynman integral representations, e.g.\ the Lee-Pomeransky representation \cref{eq:LeePomeranskyRepresentation} (or more generally to Euler-Mellin integrals), we should only worry about multiple roots that lie inside the integration region $\mathbb R^n_+$. Thus, in this section we want to study the position of singularities in the space of Schwinger parameters $x$. The other main aspect of this section concerns the multivalued structure of Feynman integrals, which is closely related to the previous question.
In order to better investigate the nature of the multiple roots, we will introduce the concept of coamoebas. The coamoeba was invented by Mikael Passare as an object related to the amoeba, which goes back to Gelfand, Kapranov and Zelevinsky \cite{GelfandDiscriminantsResultantsMultidimensional1994}. Amoebas as well as coamoebas provide a link between algebraic geometry and tropical geometry \cite{NisseAmoebasCoamoebasLinear2016}. We suggest \cite{KazarnovskiiNewtonPolytopesTropical2021} for a recent survey about tropical geometry and its connection to toric varieties. The relations between coamoebas and Euler-Mellin integrals were investigated in \cite{NilssonMellinTransformsMultivariate2010, BerkeschEulerMellinIntegrals2013}, which constitute the key references for this section. For further reading about coamoebas we refer to \cite{NisseGeometricCombinatorialStructure2009, JohanssonCoamoebas2010, ForsgardHypersurfaceCoamoebasIntegral2012, JohanssonArgumentCycleCoamoeba2013, PassareDiscriminantCoamoebasHomology2012, NisseHigherConvexityCoamoeba2015, ForsgardOrderMapHypersurface2015, ForsgardTropicalAspectsReal2015}.\bigskip
One of the main reasons that only very little is known about Landau varieties is the very intricate multivalued sheet structure of Feynman integrals. Hence, the purpose of this section is to suggest the study of a slightly simpler object. For many aspects, e.g.\ the monodromy of Feynman integrals, it turns out that the coamoeba provides deep insights into the sheet structure of Feynman integrals. Even though coamoebas are simpler objects than the sheet structure itself, they are still challenging and difficult to compute. However, coamoebas seem somewhat more accessible, as we will sketch at the end of this section.
This section does not intend to give a complete answer to those questions and should rather be seen as the first step of a bigger task. We will focus here on a very basic overview of the mathematical foundations of those objects and clarify some first questions.\bigskip
But before introducing coamoebas, let us recall a very classical idea for handling the multivaluedness of Feynman integrals. As aforementioned, one often introduces a small imaginary part in the denominators of the Feynman integral in momentum space \cref{eq:FeynmanMomSp}. This so-called \textit{Feynman's $i\varepsilon$ prescription} is a kind of minimal way to complexify the momenta and masses in the Feynman integral. When considering squared momenta and masses to be real numbers, we can avoid poles in the integrand of \cref{eq:FeynmanMomSp} by introducing a small imaginary part. On the other hand, one can understand this $i\varepsilon$ also as an implementation of the time ordering \cite[sec. 6.2]{SchwartzQuantumFieldTheory2014}, whence the $i\varepsilon$ prescription is often related to causality \cite{HannesdottirWhatVarepsilonSmatrix2022}. Thus, we consider
\begin{align}
\gls{FeynEps} = \int_{\mathbb R^{d\times L}} \prod_{j=1}^L \frac{\mathop{}\!\mathrm{d}^d k_j}{\pi^{d/2}} \prod_{i=1}^n \frac{1}{(q_i^2+m_i^2-i\varepsilon)^{\nu_i}} \comma \label{eq:FeynmanMomSpEpsilon}
\end{align}
where $m_i^2\in\mathbb R_{\geq 0}$ and $q_i^2 \in \mathbb R$, and replace \cref{eq:FeynmanMomSp} by $\lim_{\varepsilon\rightarrow 0^+} \mathcal I_\Gamma^\varepsilon$. The advantage of this procedure is that the Feynman integral, now defined by $\lim_{\varepsilon\rightarrow 0^+} \mathcal I_\Gamma^\varepsilon$, becomes a single-valued function. Hence, this definition is nothing other than the choice of a principal sheet, determined by the direction from which we want to approach the branch cut. Therefore, we will denote the discontinuity at those branch cuts by
\begin{align}
\gls{Disc} = \lim_{\varepsilon\rightarrow 0^+} \left(\mathcal I_\Gamma^\varepsilon (d,\nu,p,m) - \mathcal I_\Gamma^{-\varepsilon} (d,\nu,p,m) \right) \label{eq:discontinuity1}
\end{align}
according to \cite{CutkoskySingularitiesDiscontinuitiesFeynman1960}. Note that on the level of the transfer matrix $T$ the discontinuity is related to the imaginary part as seen in \cref{eq:OpticalTheorem}. By means of Schwarz' reflection principle, this behaviour can also be extended to individual Feynman integrals in many cases. We refer to \cite[sec. 4.4]{HannesdottirWhatVarepsilonSmatrix2022} for a discussion of this procedure.
In equation \cref{eq:FeynmanMomSpEpsilon} we can absorb the introduction of $i\varepsilon$ by the transformation $m_i^2 \mapsto m_i^2 - i\varepsilon$. Hence, we can simply determine the $i\varepsilon$-prescribed parametric Feynman integral representations, and we will obtain
\begin{align}
\gls{FeynGenEps} &= \frac{\Gamma(\nu_0)}{\Gamma(\nu_0-\omega)\Gamma(\nu)} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x \, x^{\nu-1} \left(\mathcal{U}+\mathcal{F} - i \varepsilon \mathcal{U} \sum_{i=1}^n x_i\right)^{-\nu_0} \nonumber \\
&\stackrel{|\varepsilon|\ll 1}{=} \frac{\Gamma(\nu_0)}{\Gamma(\nu_0-\omega)\Gamma(\nu)} \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x \, x^{\nu-1} \left(\mathcal{U}+\mathcal{F} - i \varepsilon\right)^{-\nu_0} \label{eq:LeePomeranskyRepresentationEpsilon}
\end{align}
as the equivalent for the generalized Feynman integral in the representation \cref{eq:LeePomeranskyRepresentation}. When $|\varepsilon|\ll 1$ (as we usually want to assume), we can drop the factor $\mathcal{U}\sum_{i=1}^n x_i$, which is positive inside the integration region $(0,\infty)^n$. Hence, the $i\varepsilon$ prescription describes a possibility to circumvent poles in the integrand. However, this happens at the price of a limit that must subsequently be carried out.\bigskip
We want to suggest an alternative to Feynman's $i\varepsilon$ prescription, which occurs in the study of Euler-Mellin integrals \cite{NilssonMellinTransformsMultivariate2010,BerkeschEulerMellinIntegrals2013}. This variant coincides with the $i\varepsilon$ prescription in the limit $\varepsilon\rightarrow 0^+$. However, it avoids the need to take such a limit. Since we want to make the argument slightly more general, let us introduce Euler-Mellin integrals for that purpose
\begin{align}
\gls{EM} := \Gamma(t) \int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x \, x^{s-1} f(x)^{-t} = \Gamma(t) \int_{\mathbb R^n} \mathop{}\!\mathrm{d} w \, e^{s\cdot w} f\left(e^w\right)^{-t} \label{eq:EulerMellin}
\end{align}
where $s\in \mathbb C^n$ and $t\in \mathbb C^k$ are complex parameters. Further, we denote $f^{-t}:= f_1^{-t_1}\cdots f_k^{-t_k}$ for powers of polynomials $f_i\in\mathbb C[x_1,\ldots,x_n]$, and we write $f:=f_1 \cdots f_k$. Parametric Feynman integrals can be treated as a special case of Euler-Mellin integrals with $k=1$ in the Lee-Pomeransky representation \cref{eq:LeePomeranskyRepresentation} or with $k=2$ in the Feynman representation \cref{eq:FeynmanParSpFeynman} for a convenient hyperplane $H(x)$.\bigskip
Following \cite{BerkeschEulerMellinIntegrals2013}, a polynomial $f\in\mathbb C[x_1,\ldots,x_n]$ with fixed coefficients is called \textit{completely non-vanishing on} $X\subseteq \mathbb C^n$ if for all faces $\tau\subseteq\operatorname{Newt}(f)$ of the Newton polytope, the truncated polynomials $f_\tau$ do not vanish on $X$. We can consider the vanishing of some truncated polynomial as a necessary condition in the approach of the principal $A$-determinant. Thus, if a polynomial $f$ is completely non-vanishing on a set $X$, the roots of the principal $A$-determinant $E_A(f)$ correspond to solutions $x\notin X$ outside of this set. \bigskip
\begin{figure}[tb]
\centering
\begin{tikzpicture}[thick, dot/.style = {draw, cross out, scale=.5}, scale=1]
\draw[thin,->] (-2,0) -- (4.3,0) node[anchor=north west] {$\Re (x)$};
\draw[thin,->] (0,-1) -- (0,3) node[anchor=south east] {$\Im (x)$};
\draw (0,0) -- (4.3,0);
\draw (0,0) -- (4,1.5);
\draw (1,0) arc[start angle=0, end angle=20.56, radius=1];
\node at (0.84,0.14) {\footnotesize $\theta$};
\node [anchor=west] (note1) at (5,0.5) {$\operatorname{Arg}^{-1}(0)=\mathbb R^n_+$};
\draw [-stealth] (note1) to[out=180, in=70] (4.1,0.1);
\node [anchor=west] (note2) at (4.5,2) {$\operatorname{Arg}^{-1}(\theta)$};
\draw [-stealth] (note2) to[out=180, in=70] (3.8,1.525);
\coordinate[dot] (a1) at (1.5,2);
\coordinate[dot] (a2) at (0,2.2);
\coordinate[dot] (a3) at (-1.4,0);
\coordinate[dot] (a4) at (0.8,-0.8);
\coordinate[dot] (a5) at (2,0);
\node [anchor=north] (note3) at (3,4) {roots of $f(x)$};
\draw [-stealth] (note3) to[out=270, in=60] (a1);
\end{tikzpicture}
\caption[Sketch of the idea behind the $\theta$-analogue Euler-Mellin integrals]{Sketch of the idea behind the $\theta$-analogue Euler-Mellin integrals. We allow the integration contour to rotate in the complex space $\mathbb C^n$. The value of $\mathscr M_f^\theta(s,t)$ changes only if the integration contour $\operatorname{Arg}^{-1}(\theta)$ crosses zeros of $f(x)$. Hence, $\mathscr M_f^\theta(s,t)$ is locally constant in $\theta$. In particular, when roots hit the original integration region $\mathbb R^n_+$ one has to consider a $\theta$-analogue Euler-Mellin integral instead. For this picture, we would expect $5$ connected components in the complement of the closure of the coamoeba.} \label{fig:SketchThetaEM}
\end{figure}
Analogously to the Feynman integral, Euler-Mellin integrals \cref{eq:EulerMellin} become ill-defined when $f$ is not completely non-vanishing on the positive orthant $(0,\infty)^n$. Hence, in \cite{BerkeschEulerMellinIntegrals2013, NilssonMellinTransformsMultivariate2010} a slightly more general version of Euler-Mellin integrals was introduced, where we rotate the original integration contour by an angle $\theta=(\theta_1,\ldots,\theta_n)$ in the complex plane
\begin{align}
\gls{EMtheta} := \Gamma(t) \int_{\operatorname{Arg}^{-1} (\theta)} \mathop{}\!\mathrm{d} x\, x^{s-1} f(x)^{-t} = e^{is\theta}\, \Gamma(t) \int_{\mathbb R_+^n} \mathop{}\!\mathrm{d} x\, x^{s-1} f\!\left(x e^{i\theta}\right)^{-t} \label{eq:EulerMellinTheta}
\end{align}
with the component-wise argument map $\gls{Arg} := (\arg x_1,\ldots,\arg x_n)$. For short we write $f\!\left(x e^{i\theta}\right) := f\!\left(x_1 e^{i\theta_1},\ldots,x_n e^{i\theta_n}\right)$, and we will call \cref{eq:EulerMellinTheta} the \textit{$\theta$-analogue Euler-Mellin integral}. Deforming the integration contour slightly in cases where poles of the integrand hit the integration contour is the same procedure as Feynman's $i\varepsilon$ prescription. To illustrate the idea behind $\theta$-analogue Euler-Mellin integrals we sketch those integration contours in \cref{fig:SketchThetaEM} for a one-dimensional case. By choosing $\theta=(\varepsilon,\ldots,\varepsilon)$ the $\theta$-analogue Euler-Mellin integral and the $i\varepsilon$-prescribed Feynman integral coincide for small $\varepsilon$
\begin{align}
\mathcal I_\mathcal{A}^\varepsilon ({\underline{\nu}},z) &= \frac{\Gamma(\nu_0)}{\Gamma(\nu)\Gamma\left(\nu_0-\omega\right)}\int_{\operatorname{Arg}^{-1}(\varepsilon,\ldots,\varepsilon)} \mathop{}\!\mathrm{d} x\, x^{\nu-1} (\mathcal{U}+\mathcal{F})^{-\nu_0} \nonumber\\
&=\frac{\Gamma(\nu_0)}{\Gamma(\nu)\Gamma\left(\nu_0-\omega\right)}\int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} e^{i\varepsilon(\omega-\nu_0)}(\mathcal{U} e^{-i\varepsilon} +\mathcal{F})^{-\nu_0} \nonumber\\
&\stackrel{|\varepsilon|\ll 1}{=} \frac{\Gamma(\nu_0)}{\Gamma(\nu)\Gamma\left(\nu_0-\omega\right)}\int_{\mathbb R^n_+} \mathop{}\!\mathrm{d} x\, x^{\nu-1} (\mathcal{U} + \mathcal{F} - i\varepsilon)^{-\nu_0} \label{eq:iepsilonThetaEM}
\end{align}
where we used the homogeneity of the Symanzik polynomials. As we are only interested in the limit $\varepsilon\rightarrow 0^+$, we will assume that $\varepsilon$ is small enough\footnote{Obviously, \cref{eq:iepsilonThetaEM} holds only if the $\theta$-analogue Euler-Mellin integral $\mathscr M_\mathcal{G}^\varepsilon ({\underline{\nu}},z)$ exists for small values of $\varepsilon$. Furthermore, we expect in general that the different parametric Feynman representations have distinct $\varepsilon$-domains in their $\theta$-analogue extension.}.\bigskip
This alternative description by means of a rotated integration contour has several advantages. Note that the $\theta$-analogue Euler-Mellin integral is locally constant in $\theta$. Hence, the values of $\mathcal I^\varepsilon_\mathcal{A}$ and $\mathcal I^{\varepsilon^\prime}_\mathcal{A}$ can only differ if there are roots of $\mathcal{G}(x)$ in the segment of $\mathbb C^n$ spanned by $\operatorname{Arg}^{-1}(\varepsilon,\ldots,\varepsilon)$ and $\operatorname{Arg}^{-1}(\varepsilon^\prime,\ldots,\varepsilon^\prime)$. This follows directly from the residue theorem by closing the integration contour at infinity, since the integrand of Euler-Mellin integrals vanishes sufficiently fast when $|x|$ tends to infinity. Therefore, we will find regions in which a variation of $\varepsilon$ leaves the Feynman integral $\mathcal I_\mathcal{A}^\varepsilon$ constant. In particular, we do not have to take limits to determine the discontinuity, i.e.\ we will now have
\begin{align}
\operatorname{Disc} \mathcal I_\mathcal{A} ({\underline{\nu}},z) = \mathcal I_\mathcal{A}^\varepsilon ({\underline{\nu}},z) - \mathcal I_\mathcal{A}^{-\varepsilon} ({\underline{\nu}},z)
\end{align}
instead of \cref{eq:discontinuity1}, where $\varepsilon$ has to be sufficiently small (but not infinitesimally small). This can reduce the complexity of determining discontinuities significantly. \bigskip
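The local constancy in $\theta$ and the finite jump at a threshold can be made explicit numerically in a one-dimensional toy case. The following sketch (assuming \softwareName{NumPy}/\softwareName{SciPy}; the polynomial $f$ and the angle $\varphi$ are chosen ad hoc and do not correspond to a Feynman integrand) evaluates \cref{eq:EulerMellinTheta} for $s=t=1$ while rotating the contour past a root of $f$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

phi = 0.6                                 # f has the pair of roots exp(+-i*phi)
f = lambda u: (u - np.exp(1j*phi)) * (u - np.exp(-1j*phi))

def euler_mellin(theta, s=1.0, t=1.0):
    # e^{i s theta} int_0^oo x^{s-1} f(x e^{i theta})^{-t} dx   (Gamma(1) = 1)
    g = lambda x: x**(s - 1) * f(x*np.exp(1j*theta))**(-t)
    re = quad(lambda x: g(x).real, 0, np.inf)[0]
    im = quad(lambda x: g(x).imag, 0, np.inf)[0]
    return np.exp(1j*s*theta) * (re + 1j*im)

for theta in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(theta, euler_mellin(theta))
# the value is numerically constant for theta < phi and jumps once theta > phi;
# the difference of the two constants is the discontinuity, no limit required
\end{verbatim}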
As seen above, in order to get well-defined integrals in \cref{eq:EulerMellinTheta} we have to track the poles of the integrand. Denote by $\gls{Zf} := \left\{ x\in(\mathbb C^*)^n \,\rvert\, f(x) = 0 \right\}$ the set of (non-zero) roots of a polynomial $f$. To analyze when these poles meet the integration contour it is natural to consider
\begin{align}
\gls{Coamoeba} := \operatorname{Arg}(\mathcal Z_f) \subseteq \mathbb T^n \label{eq:defCoamoeba}
\end{align}
the argument of the zero locus. We will call the set $\mathcal C_f$ the \textit{coamoeba} of $f$, which is motivated by the fact that the coamoeba is the imaginary counterpart of the amoeba (see \cref{fig:ComplexLogarithmMaps}). Since the argument map is a periodic function we can restrict the discussion without loss of generality to the $n$-dimensional real torus $\gls{RTorus} := \left(\normalslant{\mathbb R}{2\pi\mathbb Z}\right)^n$. Moreover, the closure of the coamoeba is closely related to the complete non-vanishing of $f$, as we will see in the following lemma.
\begin{lemma}[\cite{BerkeschEulerMellinIntegrals2013,JohanssonCoamoebas2010}] \label{lem:CoamoebasAndCNV}
For $\theta\in\mathbb T^n$, the polynomial $f(x)$ is completely non-vanishing on $\operatorname{Arg}^{-1} (\theta)$ if and only if $\theta \notin \overline{\mathcal C_f}$. Equivalently, we have
\begin{align}
\overline{\mathcal C_f} = \bigcup_{\tau\subseteq\operatorname{Newt}(f)} \mathcal C_{f_\tau} \point
\end{align}
\end{lemma}
\begin{figure}[t]
\centering
\begin{tikzpicture
\node (C) at (0,0) {$\mathbb C^n$};
\node (R1) at (2,0) {$\mathbb R^n$};
\node (R2) at (-2,0) {$\mathbb R^n$};
\node (CS) at (0,2) {$(\mathbb C^*)^n$};
\draw[->] (C) to node[below] {\footnotesize $\Im$} (R1);
\draw[->] (C) to node[below] {\footnotesize $\Re$} (R2);
\draw[->] (C) to node[right] {\footnotesize $\operatorname{Exp}$} (CS);
\draw[->] (CS) to node[above right] {\footnotesize $\operatorname{Arg}$} (R1);
\draw[->] (CS) to node[above left] {\footnotesize $\operatorname{Log}$} (R2);
\end{tikzpicture}
\caption[Relations between the multivariate complex logarithm]{Relations between the multivariate complex logarithm \cite{NilssonMellinTransformsMultivariate2010}. We write $\operatorname{Log} (x) := (\ln |x_1|,\ldots,\ln |x_n|)$ for the logarithm of absolute values. For the set of roots $\mathcal Z_f$ we will call $\operatorname{Log}(\mathcal Z_f)$ the amoeba of $f$ and $\operatorname{Arg}(\mathcal Z_f)$ the coamoeba of $f$.} \label{fig:ComplexLogarithmMaps}
\end{figure}
Applied to Feynman integrals, this lemma gives a criterion for whether the limit of the $i\varepsilon$-prescribed Feynman integral \cref{eq:LeePomeranskyRepresentationEpsilon}, $\lim_{\varepsilon\rightarrow 0^+} \mathcal I^\varepsilon_\mathcal{A}({\underline{\nu}},z)$, differs from the original Feynman integral $\mathcal I_\mathcal{A}({\underline{\nu}},z)$.
\begin{lemma} \label{lem:nonphysical}
Assume that $z\in\Var(E_A(\mathcal{G}))$ is a singular point. If $0\notin \overline{\mathcal C_\mathcal{G}}$, the common solution of $\mathcal{G}=x_1 \pd{\mathcal{G}}{x_1} = \ldots = x_n \pd{\mathcal{G}}{x_n} = 0$ generating the singular point $z$ according to \cref{sec:ADiscriminantsReultantsPrincipalADets} does not lie on the integration contour $x \notin\mathbb R^n_+$. The same is true, when $\mathcal{G}$ is replaced by $\mathcal{F}$, which applies to the Landau variety $\mathcal L_1 (\mathcal I_\Gamma)$.
\end{lemma}
\begin{proof}
The lemma follows obviously from the previous \cref{lem:CoamoebasAndCNV}, which states in particular that $0\notin\overline{\mathcal C_\mathcal{G}}$ is equivalent to $\mathcal{G}(x)$ being completely non-vanishing on $\mathbb R^n_+$. The latter means nothing more than that $\mathcal{G}_\tau(x)=0$ has no solutions with $x\in\mathbb R^n_+$ for all faces $\tau\subseteq\operatorname{Newt}(\mathcal{G})$. The application of \cref{thm:pAdet-factorization} concludes the proof.
\end{proof}
Singular points corresponding to multiple roots outside the original integration contour $\mathbb R^n_+$ are also known as \textit{pseudo thresholds}. Thus, \cref{lem:nonphysical} provides a criterion for those pseudo thresholds. We have to remark that \cref{lem:nonphysical} is not necessarily an effective criterion to determine pseudo thresholds. However, certain approximations of coamoebas are known, which may result in more effective criteria. We will present an overview of those approximations below.\bigskip
Hence, we are mainly interested in the complement of the closure of the coamoeba $\gls{Cc}:=\mathbb T^n\setminus\overline{\mathcal C_f}$, because only for points $\theta\in\Cc$ are the $\theta$-analogue Euler-Mellin integrals well-defined. It is well known \cite{NisseGeometricCombinatorialStructure2009} that this complement $\Cc$ consists of a finite number of connected, convex components, and we will denote such a connected component by $\Theta$. Moreover, the number of connected components of $\Cc$ is bounded by $\operatorname{vol} (\operatorname{Newt} (f))$. As pointed out above, the $\theta$-analogue Euler-Mellin integral \cref{eq:EulerMellinTheta} only depends on the choice of a connected component $\Theta$, as one can see simply by a homotopic deformation of the integration contour. Thus, for all $\theta\in\mathbb T^n$ inside the same connected component $\Theta\subset \Cc$, the value of $\mathscr M^\theta_f(s,t)$ stays the same, whereas the value may change for another component $\Theta^\prime$. Therefore, we can also write $\mathscr M^\Theta_f(s,t)$ instead of $\mathscr M^\theta_f(s,t)$ from \cref{eq:EulerMellinTheta}.
Connected components of $\Cc$ also relate to different branches of the Feynman integral. Therefore, the connected components $\Theta$ generate the multivalued sheet structure in a certain sense. For the connection between coamoebas and homology groups we refer to \cite{PassareDiscriminantCoamoebasHomology2012, NisseHigherConvexityCoamoeba2015}. However, we have to remark that the coamoeba may not generate the full fundamental group, as the coamoeba cannot distinguish between poles of the integrand having the same argument, i.e.\ points which lie on the same ray.
\begin{example}[Coamoeba for the $1$-loop self-energy graph with one mass] \label{ex:CoamoebaBubble1Mass}
To illustrate the concept of coamoebas we will consider the $1$-loop self-energy graph with one massive edge for different kinematic regions. The Lee-Pomeransky polynomial and the simple principal $A$-determinant are given by
\begin{align}
\mathcal{G} &= x_1 + x_2 + (p^2+m_1^2) x_1 x_2 + m_1^2 x_1^2 \\
\widehat E_A(\mathcal{G}) &= p^2 m_1^2 (p^2+m_1^2)
\end{align}
for this particular graph. Therefore, with $m_1^2\neq 0$ we will expect two singular points $p^2=0$ and $p^2=-m_1^2$. Recall the conventions for momenta from \cref{sec:FeynmanIntegralsIntro}, i.e.\ we will allow $p^2$ to be negative when not restricting ourselves to Euclidean kinematics. Thus, when crossing those thresholds the structure of the coamoeba changes, which can be seen explicitly in \cref{fig:CoamoebaShellBubble1Mass}. \bigskip
Thereby, every point $(\theta_1,\theta_2)\in\mathbb T^2$ which does not lie in the closure of the coamoeba defines an integration contour of the Feynman integral $\mathscr M_\mathcal{G}^\theta ({\underline{\nu}},z)$. As aforementioned, the value of $\mathscr M_\mathcal{G}^\theta({\underline{\nu}},z)$ does not change for two angles $\theta$ and $\theta^\prime$ which are in the same connected component of $\Cc[\mathcal{G}]$. For the coamoebas in \cref{fig:coamoebaBubble1MassA} to \cref{fig:coamoebaBubble1MassE}, the complement $\Cc[\mathcal{G}]$ has only one connected component. However, when exceeding the threshold $p^2=-m_1^2$ the complement of the closure of the coamoeba consists of two components (\cref{fig:coamoebaBubble1MassF}). Thus, in the first cases we would expect a single analytic continuation, whereas we will expect two different analytic continuations in the region $p^2<-m_1^2$. This agrees with the actual behaviour of the Feynman integral, which is shown in \cref{fig:Bubble1MassBranches}.
Furthermore, we can also see the application of \cref{lem:nonphysical}. For the threshold $p^2 = 0$ the origin $\theta_1=\theta_2=0$ does not lie in the closure of the coamoeba, $0\notin\overline{\mathcal C_\mathcal{G}}$. Therefore, $p^2=0$ is only a pseudo threshold, because we do not necessarily have to change the original integration contour $\mathbb R^2_+ = \operatorname{Arg}^{-1}(0,0)$.
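Since $\mathcal{G}$ is linear in $x_2$, the zero locus in this example can be parameterized by $x_1$ alone, which makes it easy to sample the coamoeba numerically. A minimal sketch, assuming \softwareName{NumPy} and \softwareName{Matplotlib}, with the ad hoc numerical values $m_1^2=2$ and $p^2=-10$ for the region $0<m_1^2<-p^2$:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

m2, p2 = 2.0, -10.0                          # m_1^2 and p^2 (ad hoc values)

# solve G = x1 + x2 + (p^2+m^2) x1 x2 + m^2 x1^2 = 0 for x2
r = np.exp(np.linspace(-8, 8, 700))
th = np.linspace(-np.pi, np.pi, 700)
R, T = np.meshgrid(r, th)
x1 = R * np.exp(1j*T)
x2 = -(x1 + m2*x1**2) / (1 + (p2 + m2)*x1)

ok = np.isfinite(x2) & (np.abs(x2) > 1e-12)  # stay in (C^*)^2
plt.plot(np.angle(x1[ok]), np.angle(x2[ok]), ',')
plt.xlabel(r'$\theta_1$'); plt.ylabel(r'$\theta_2$')
plt.show()
\end{verbatim}
The resulting scatter plot should reproduce the qualitative picture of \cref{fig:coamoebaBubble1MassF}, up to sampling density.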
\begin{figure}
\centering
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble1MassCoamoeba/bubbleshellAm2=2-p2=100.png}
\caption{$-p^2<0$} \label{fig:coamoebaBubble1MassA}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble1MassCoamoeba/bubbleshellDm2=2-p2=1.png}
\caption{$-p^2<0$}
\end{subfigure} \\[.5\baselineskip]
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble1MassCoamoeba/bubbleshellFm2=2-p2=0.png}
\caption{$p^2=0$}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble1MassCoamoeba/bubbleshellHm2=2-p2=-0,5.png}
\caption{$0<-p^2<m_1^2$}
\end{subfigure} \\[.5\baselineskip]
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble1MassCoamoeba/bubbleshellKm2=2-p2=-1,9.png}
\caption{$0<-p^2<m_1^2$} \label{fig:coamoebaBubble1MassE}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble1MassCoamoeba/bubbleshellMm2=2-p2=-10.png}
\caption{$0<m_1^2<-p^2$} \label{fig:coamoebaBubble1MassF}
\end{subfigure}
\caption[Coamoeba $\mathcal C_\mathcal{G}$ of the $1$-loop self-energy graph with one mass]{Coamoeba $\mathcal C_\mathcal{G}$ of the $1$-loop self-energy graph with one mass discussed in \cref{ex:CoamoebaBubble1Mass}. From \cref{fig:coamoebaBubble1MassA} to \cref{fig:coamoebaBubble1MassF} the momentum $-p^2$ is increased. Whenever a threshold is exceeded, the structure of the coamoeba changes. All those coamoebas are drawn only on the principal domain $\mathbb T^2$. As an example, we show the coamoeba for the case $p^2>0$ on a larger domain in \cref{fig:CoamoebaShellBubble1MassBigPicture}. The blue lines depict the shell $\mathcal H_\mathcal{G}$ of the coamoeba.} \label{fig:CoamoebaShellBubble1Mass}
\end{figure}
\begin{figure}
\centering\includegraphics[width=.5\textwidth]{Graphics/Bubble1MassCoamoeba/bubbleshellBigPicturem2=2-p2=6.png}
\caption[Coamoeba $\mathcal C_\mathcal{G}$ of the $1$-loop self-energy graph with one mass (larger region)]{Coamoeba $\mathcal C_\mathcal{G}$ of the $1$-loop self-energy graph with one mass for the domain $(-3\pi,3\pi)^2$. Depicted is a coamoeba for the kinematical region $p^2 > 0$. The blue lines show the shell $\mathcal H_\mathcal{G}$ of the coamoeba.} \label{fig:CoamoebaShellBubble1MassBigPicture}
\end{figure}
\begin{figure}
\centering\includegraphics[width=.5\textwidth]{Graphics/Bubble1MassCoamoeba/BubbleOneMassThreshold.png}
\caption[Real and imaginary part of $\mathcal I_\Gamma$ for the $1$-loop self-energy graph with one mass]{Real (solid line) and imaginary part (dashed line) of $\mathcal I_\Gamma$ for the $1$-loop self-energy graph with one mass. The imaginary part of $\mathcal I_\Gamma$ splits into two branches at the threshold $p^2+m_1^2=0$. That observation is consistent with \cref{fig:CoamoebaShellBubble1Mass}: when crossing this particular threshold, $\Cc[\mathcal{G}]$ acquires two connected components. Therefore, we also expect two analytic continuations.} \label{fig:Bubble1MassBranches}
\end{figure}
\end{example}
As aforementioned, the coamoeba is not very easily accessible in its original definition \cref{eq:defCoamoeba}. However, there are certain ways to approximate coamoebas more efficiently. As seen in \cref{lem:CoamoebasAndCNV}, the closure of the coamoeba $\overline{\mathcal C_f}$ is the union of all coamoebas of truncated polynomials of $f$. Restricting ourselves to the faces of dimension $1$ we obtain the \textit{shell} of $f$ \cite{JohanssonCoamoebas2010, NisseGeometricCombinatorialStructure2009, ForsgardOrderMapHypersurface2015}
\begin{align}
\gls{shell} = \bigcup_{\substack{\tau\subseteq\operatorname{Newt}(f) \\ \dim (\tau) = 1}} \mathcal C_{f_\tau} \point
\end{align}
It is known that each cell of the hyperplane arrangement of $\mathcal H_f$ contains at most one connected component of $\Cc$ \cite{ForsgardHypersurfaceCoamoebasIntegral2012}. Moreover, every line segment with endpoints in $\Cc$ intersecting $\overline{\mathcal C}_f$ also has to intersect a coamoeba $\mathcal C_{f_\tau}$ of a truncated polynomial $f_\tau$ with $\dim(\tau)=1$ \cite[lem. 2.3]{ForsgardOrderMapHypersurface2015}. This behaviour also explains the name ``shell''. Thus, $\mathcal H_f$ carries the rough structure of $\mathcal C_f$. \bigskip
\begin{example}[Shell for the $1$-loop self-energy graph with one mass] \label{ex:ShellBubble1Mass}
We want to continue \cref{ex:CoamoebaBubble1Mass} and determine the shell $\mathcal H_\mathcal{G}$ for the $1$-loop self-energy graph with one mass. There are four truncated polynomials corresponding to a face $\tau\subseteq\operatorname{Newt}(\mathcal{G})$ having dimension one
\begin{gather}
\mathcal{G}_{\tau_1} (x) = x_1 + x_2\ \text{,}\quad \mathcal{G}_{\tau_2} (x) = x_1 + m_1^2 x_1^2 \\
\mathcal{G}_{\tau_3} (x) = (p^2+m_1^2) x_1 x_2 + m_1^2 x_1^2\ \text{,}\quad \mathcal{G}_{\tau_4} (x) = x_2 + (p^2+m_1^2) x_1 x_2 \point
\end{gather}
Replacing the variables $x_i \mapsto r_i e^{i\theta_i}$ with $r_i\in\mathbb R_{>0}$ and $\theta_i\in\mathbb T$ we can directly read off the coamoebas
\begin{align}
\mathcal C_{\mathcal{G}_{\tau_1}} &= \big\{ \theta\in\mathbb T^2 \,\big\rvert\, \theta_1-\theta_2 \in (2\mathbb Z +1)\pi \big\} \\
\mathcal C_{\mathcal{G}_{\tau_2}} &= \big\{ \theta\in\mathbb T^2 \,\big\rvert\, \theta_1 \in (2\mathbb Z +1)\pi \big\} \\
\mathcal C_{\mathcal{G}_{\tau_3}} &= \begin{cases}
\big\{ \theta\in\mathbb T^2 \,\big\rvert\, \theta_1-\theta_2 \in (2\mathbb Z +1)\pi \big\} & \text{if}\ \frac{p^2+m_1^2}{m_1^2} > 0 \\
\big\{ \theta\in\mathbb T^2 \,\big\rvert\, \theta_1-\theta_2 \in (2\mathbb Z) \pi \big\} & \text{if}\ \frac{p^2+m_1^2}{m_1^2} < 0
\end{cases} \\
\mathcal C_{\mathcal{G}_{\tau_4}} &= \begin{cases}
\big\{ \theta\in\mathbb T^2 \,\big\rvert\, \theta_1 \in (2\mathbb Z +1)\pi \big\} & \text{if}\ p^2+m_1^2 > 0 \\
\big\{ \theta\in\mathbb T^2 \,\big\rvert\, \theta_1 \in (2\mathbb Z)\pi \big\} & \text{if}\ p^2+m_1^2 < 0
\end{cases}
\end{align}
where we assume $m_1^2 \in \mathbb R_{>0}$ and $p^2\in\mathbb R$. We have depicted the shell together with its coamoeba in \cref{fig:CoamoebaShellBubble1Mass}. Thus, we also see a structural change in the shell when exceeding the threshold $p^2=-m_1^2$. In particular, we can read off from the shell that below this threshold (i.e.\ for $p^2+m_1^2>0$) there can be at most one connected component in $\Cc[\mathcal{G}]$, whereas above the threshold (for $p^2+m_1^2<0$) at most two connected components of $\Cc[\mathcal{G}]$ are possible. Hence, we obtain without much effort (the shell is always an arrangement of hyperplanes) the rough structure of the coamoeba.
\end{example}
\vspace{\baselineskip}
Another way to approximate the coamoeba is the so-called \textit{lopsided coamoeba} $\mathcal L \mathcal C_f$. According to \cite{ForsgardOrderMapHypersurface2015} we can characterize the lopsided coamoeba as
\begin{align}
\gls{lop} := \left\{ \theta\in\mathbb T^n \, \big\rvert \, \exists t\in\mathbb R^N_{>0}\ \text{ s.t.}\ t_1 e^{i [\arg(z_1) + a^{(1)} \cdot \theta ]} + \ldots + t_N e^{i [\arg(z_N) + a^{(N)} \cdot \theta ]} = 0 \right\}
\end{align}
where $z_j\in\mathbb C$ and $a^{(j)}\in\mathbb Z^{n}$ refer to the representation of the polynomial $f(x) = \sum_{j=1}^N z_j x^{a^{(j)}}$. Therefore, we only have to consider a system of linear equations to determine whether a point $\theta\in\mathbb T^n$ belongs to the lopsided coamoeba. This is considerably simpler than the situation for coamoebas, where we have to consider in general non-linear systems $\mathcal C_f = \{\theta\in\mathbb T^n \,\rvert\, \exists r\in\mathbb R^n_{>0}\ \text{s.t.}\ f(r_1 e^{i\theta_1},\ldots,r_n e^{i\theta_n}) = 0 \}$. The lopsided coamoeba is a coarser version of the coamoeba and contains the coamoeba, $\mathcal C_f \subseteq \mathcal L \mathcal C_f$. Furthermore, each connected component of $\Cc$ contains at most one connected component of $\overline{\mathcal L \mathcal C}_f^c$ \cite[prop. 2.2.10]{ForsgardHypersurfaceCoamoebasIntegral2012}. However, we will not necessarily find all connected components of $\Cc$ by considering $\overline{\mathcal L \mathcal C}_f^c$. We immediately obtain the following extension of \cref{lem:nonphysical}.
\begin{lemma} \label{lem:nonphysicalLopsided}
Assume that $z\in\Var(E_A(\mathcal{G}))$ is a singular point. If $0\notin \overline{\mathcal L \mathcal C_\mathcal{G}}$, the common solution of $\mathcal{G}=x_1 \pd{\mathcal{G}}{x_1} = \ldots = x_n \pd{\mathcal{G}}{x_n} = 0$ generating the singular point $z$ according to \cref{sec:ADiscriminantsReultantsPrincipalADets} does not lie on the integration contour, $x \notin\mathbb R^n_+$. The same is true when $\mathcal{G}$ is replaced by $\mathcal{F}$, which applies to the Landau variety $\mathcal L_1 (\mathcal I_\Gamma)$.
\end{lemma}
\begin{proof}
This follows trivially from \cref{lem:nonphysical} and $\overline{\mathcal C_\mathcal{G}} \subseteq \overline{\mathcal L \mathcal C_\mathcal{G}}$.
\end{proof}
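For illustration, the membership test described above can be cast as a small linear feasibility problem. The following Python sketch is our own illustration and not taken from the cited references; the interface \texttt{in\_lopsided\_coamoeba}, the example values and the use of \texttt{scipy}'s \texttt{linprog} are assumptions. It checks whether a given $\theta$ lies in $\mathcal L \mathcal C_f$ by asking for a strictly positive $t$ with $\sum_j t_j e^{i[\arg(z_j)+a^{(j)}\cdot\theta]}=0$; since this condition is invariant under rescaling $t$, it suffices to demand $t_j\geq 1$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def in_lopsided_coamoeba(z, A, theta):
    """Check whether theta lies in the lopsided coamoeba of
    f(x) = sum_j z_j x^{a^{(j)}}  (sketch, hypothetical interface).

    z     : complex coefficients, shape (N,)
    A     : integer exponent vectors a^{(j)}, shape (N, n)
    theta : point on the torus, shape (n,)
    """
    z = np.asarray(z, dtype=complex)
    A = np.asarray(A, dtype=float)
    theta = np.asarray(theta, dtype=float)
    # phases arg(z_j) + a^{(j)} . theta
    phi = np.angle(z) + A @ theta
    # feasibility: sum_j t_j e^{i phi_j} = 0 with t_j > 0;
    # by rescaling we may equivalently demand t_j >= 1
    A_eq = np.vstack([np.cos(phi), np.sin(phi)])   # real and imaginary part
    b_eq = np.zeros(2)
    res = linprog(c=np.zeros(len(z)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(1, None)] * len(z), method="highs")
    return res.status == 0   # 0 = a feasible (hence strictly positive) t exists

# example: 1-loop self-energy polynomial with one mass,
# G = x1 + x2 + (p^2+m1^2) x1 x2 + m1^2 x1^2  (illustrative values)
p2, m2 = 100.0, 2.0
z = [1, 1, p2 + m2, m2]
A = [[1, 0], [0, 1], [1, 1], [2, 0]]
print(in_lopsided_coamoeba(z, A, theta=[0.1, 0.1]))
\end{verbatim}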
There is also an alternative description of the lopsided coamoeba. Let $\operatorname{Tri}(f)$ be the set of trinomials, i.e.\ all polynomials which can be formed by removing all but three monomials from $f$. Then we have the following identity \cite[prop. 2.2.5]{ForsgardHypersurfaceCoamoebasIntegral2012}
\begin{align}
\overline{\mathcal L \mathcal C_f} = \bigcup_{g\in\operatorname{Tri}(f)} \overline{\mathcal C_g} \point
\end{align}
where we only have to determine the roots of trinomials. Furthermore, lopsided coamoebas can also be seen as the union of all coamoebas $\mathcal C_f$, where we scale the coefficients $z_j$ of $f$ by positive real numbers \cite[lem. 3.8]{ForsgardOrderMapHypersurface2015}. Hence, the lopsided coamoeba is in particular sensitive to a change of signs in the coefficients of $f$.
\begin{figure}
\centering
\begin{subfigure}{.45\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble1MassCoamoeba/lopsidedbubble1massAm2=2-p2=100.png}
\caption{$p^2+m_1^2>0$}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble1MassCoamoeba/lopsidedbubble1massMm2=2-p2=-10.png}
\caption{$p^2+m_1^2<0$}
\end{subfigure}
\caption[Lopsided coamoeba $\mathcal L\mathcal C_\mathcal{G}$ of the $1$-loop self-energy graph with one mass]{The lopsided coamoeba $\mathcal L \mathcal C_\mathcal{G}$ of the $1$-loop self-energy graph with one mass. Unlike the coamoeba (\cref{fig:CoamoebaShellBubble1Mass}) we can only detect two different shapes when varying $p^2\in\mathbb R$ and $m_1^2\in\mathbb R_{>0}$. However, we can see that the threshold $p^2=0$ corresponds only to a pseudo threshold, since the origin $\theta_1=\theta_2=0$ is not contained in the closure of the lopsided coamoeba. The blue lines show the shell $\mathcal L\mathcal H_\mathcal{G}$ of the lopsided coamoeba.} \label{fig:LopsidedcoamoebaBubble1Mass}
\end{figure}
Similar to the shell of coamoebas, one can also define a shell of lopsided coamoebas \cite{ForsgardHypersurfaceCoamoebasIntegral2012}, which reads as
\begin{align}
\gls{lopshell} = \bigcup_{g\in\operatorname{Bin}(f)} \mathcal C_g
\end{align}
where $\operatorname{Bin}(f)$ denotes the set of all binomials of $f$. As roots of binomials are trivial, one can always give a simple algebraic expression for the lopsided shell $\mathcal L \mathcal H_f$. Note that the boundary of the lopsided coamoeba $\overline{\mathcal L \mathcal C}_f$ is contained in the shell of lopsided coamoebas $\mathcal L \mathcal H_f$ \cite[prop. 3.6]{ForsgardOrderMapHypersurface2015}. Hence, one can determine lopsided coamoebas effectively by their shells. In \cref{fig:LopsidedcoamoebaBubble1Mass} we continue \cref{ex:CoamoebaBubble1Mass,ex:ShellBubble1Mass} related to \cref{fig:CoamoebaShellBubble1Mass} for the lopsided coamoeba. One can observe that the lopsided coamoeba changes only at the second threshold $p^2+m_1^2=0$. However, by means of the much simpler lopsided coamoeba we can already detect that $p^2=0$ is a pseudo threshold: since the origin is not contained in the closure of the lopsided coamoeba $\overline{\mathcal L\mathcal C}_\mathcal{G}$ for $p^2=0$ and $m_1^2>0$, the origin is also not contained in the closure of the coamoeba due to $\overline{\mathcal L \mathcal C}^c_\mathcal{G} \subseteq \overline{\mathcal C}^c_\mathcal{G}$. \bigskip
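As a small illustration of this construction (our own Python sketch, not code from the cited references; the helper name \texttt{lopsided\_shell} is hypothetical), note that a binomial $z_j x^{a^{(j)}} + z_k x^{a^{(k)}}$ vanishes for positive radii precisely when the two monomials have opposite phase, which yields the hyperplane family $(a^{(j)}-a^{(k)})\cdot\theta \equiv \arg(z_k)-\arg(z_j)+\pi \pmod{2\pi}$:
\begin{verbatim}
import numpy as np
from itertools import combinations

def lopsided_shell(z, A):
    """Hyperplane data of the lopsided shell of f(x) = sum_j z_j x^{a^{(j)}}.

    Each binomial z_j x^{a^{(j)}} + z_k x^{a^{(k)}} vanishes for some positive
    radii iff the two monomials have opposite phases, i.e.
        (a^{(j)} - a^{(k)}) . theta = arg(z_k) - arg(z_j) + pi   (mod 2*pi).
    Returns a list of (normal vector, offset) pairs for these hyperplane families.
    """
    z = np.asarray(z, dtype=complex)
    A = np.asarray(A, dtype=float)
    shell = []
    for j, k in combinations(range(len(z)), 2):
        normal = A[j] - A[k]
        offset = (np.angle(z[k]) - np.angle(z[j]) + np.pi) % (2 * np.pi)
        shell.append((normal, offset))
    return shell

# example: 1-loop self-energy polynomial with one mass, p^2 + m1^2 > 0
for normal, offset in lopsided_shell([1, 1, 102.0, 2.0],
                                     [[1, 0], [0, 1], [1, 1], [2, 0]]):
    print(f"{normal} . theta = {offset:.3f} (mod 2*pi)")
\end{verbatim}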
We want to give a second example to illustrate the behaviour of coamoebas and lopsided coamoebas further.
\begin{example}[Coamoebas for the $1$-loop self-energy graph with two masses] \label{eq:CoamoebaBubble2Masses}
Let us add a second mass to the previous example, and consider
\begin{align}
\mathcal{U} = x_1 + x_2 \qquad \mathcal{F} = (p^2+m_1^2+m_2^2) x_1 x_2 + m_1^2 x_1^2 + m_2^2 x_2^2 \point
\end{align}
We can take two different perspectives and either consider the Euler-Mellin integral $\mathscr M_{\mathcal{U}+\mathcal{F}}^\theta ({\underline{\nu}},z)$ according to \cref{eq:LeePomeranskyRepresentation} or the Euler-Mellin integral $\mathscr M_{\tilde \mathcal{U}\tilde \mathcal{F}}^\theta (\beta,z)$ where $\tilde \mathcal{U}=\mathcal{U}|_{x_n=1}$, $\tilde \mathcal{F} = \mathcal{F}|_{x_n=1}$ and $\beta = (\nu_0-\omega,\omega,\nu_1,\ldots,\nu_{n-1})$ according to \cref{eq:FeynmanParSpFeynman}. The advantage of $\mathscr M_{\tilde \mathcal{U}\tilde \mathcal{F}}^\theta (\beta,z)$ is that it has one Schwinger parameter fewer than $\mathscr M_{\mathcal{U}+\mathcal{F}}^\theta ({\underline{\nu}},z)$. Thus, the coamoeba of the former is an object in $\mathbb T$, whereas the coamoeba of the latter lives in $\mathbb T^2$. However, both have the same singular locus, since their $\mathcal{A}$-hypergeometric systems are equivalent according to \cref{sec:FeynmanIntegralsAsAHyp}
\begin{align}
\widehat E_A (\mathcal{U}+\mathcal{F}) &= \pm p^2 m_1^2 m_2^2 \left[\!\left( p^2+m_1^2+m_2^2\right)^2 - 4m_1^2m_2^2\right] \\
\widehat E_{\tilde A} (\tilde \mathcal{U} \tilde \mathcal{F}) &= \pm p^4 m_1^2 m_2^2 \left[\!\left( p^2+m_1^2+m_2^2\right)^2 - 4m_1^2m_2^2\right] \point
\end{align}
Hence, with $m_1^2\neq 0$, $m_2^2\neq 0$ we expect thresholds at $p^2=0$ and $p^2 = - (m_1 \pm m_2)^2$. We show the corresponding coamoebas $\mathcal C_\mathcal{G}$ in \cref{fig:CoamoebaShellBubble2MassG} as well as $\mathcal C_{\tilde \mathcal{U}\tilde\mathcal{F}}$ in \cref{fig:CoamoebaShellBubble2MassUF}. The main difference between those coamoebas lies in the behaviour for $p^2 < - (m_1+m_2)^2$, where $\mathcal C_\mathcal{G}$ does not allow us to choose small values $\theta=(\varepsilon,\varepsilon)$ for $\mathscr M_{\mathcal{U}+\mathcal{F}}^\theta ({\underline{\nu}},z)$, whereas a neighbourhood of the origin is still not contained in $\mathcal C_{\tilde \mathcal{U}\tilde\mathcal{F}}$. In both coamoebas we can identify the thresholds $p^2=0$ and $p^2=-(m_1-m_2)^2$ as pseudo thresholds by means of \cref{lem:nonphysical}.
\begin{figure}
\centering
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble2MassCoamoeba/bubble2mass-b1=1-b2=4-p2=5.png}
\caption{$p^2>0$} \label{fig:coamoebaBubble2MassA}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble2MassCoamoeba/bubble2mass-b1=1-b2=4-p2=-0,9.png}
\caption{$0<-p^2<(m_1-m_2)^2$} \label{fig:coamoebaBubble2MassB}
\end{subfigure} \\[.5\baselineskip]
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble2MassCoamoeba/bubble2mass-b1=1-b2=4-p2=-1,1.png}
\caption{$(m_1-m_2)^2<-p^2<(m_1+m_2)^2$} \label{fig:coamoebaBubble2MassC}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble2MassCoamoeba/bubble2mass-b1=1-b2=4-p2=-3.png}
\caption{$(m_1-m_2)^2<-p^2<(m_1+m_2)^2$} \label{fig:coamoebaBubble2MassD}
\end{subfigure} \\[.5\baselineskip]
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble2MassCoamoeba/bubble2mass-b1=1-b2=4-p2=-8,9.png}
\caption{$(m_1-m_2)^2<-p^2<(m_1+m_2)^2$} \label{fig:coamoebaBubble2MassE}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble2MassCoamoeba/bubble2mass-b1=1-b2=4-p2=-9,1.png}
\caption{$(m_1+m_2)^2<-p^2$} \label{fig:coamoebaBubble2MassF}
\end{subfigure}
\caption[Coamoeba $\mathcal C_\mathcal{G}$ of the $1$-loop self-energy graph with two masses]{Coamoeba $\mathcal C_\mathcal{G}$ of the $1$-loop self-energy graph with two masses discussed in \cref{eq:CoamoebaBubble2Masses}. From \cref{fig:coamoebaBubble2MassA} to \cref{fig:coamoebaBubble2MassF} the momentum $-p^2$ is increased. Whenever a threshold is exceeded the structure of the coamoeba changes. In \cref{fig:coamoebaBubble2MassA,fig:coamoebaBubble2MassB} $\Cc[\mathcal{G}]$ consists of one component, in \cref{fig:coamoebaBubble2MassC,fig:coamoebaBubble2MassD,fig:coamoebaBubble2MassE} $\Cc[\mathcal{G}]$ contains three components, whereas for \cref{fig:coamoebaBubble2MassF} there are two components in $\Cc[\mathcal{G}]$. The blue lines depict the shell $\mathcal H_\mathcal{G}$ of the coamoeba. Remarkably, in \cref{fig:coamoebaBubble2MassF} the closure of the coamoeba $\overline{\mathcal C_\mathcal{G}}$ contains the full neighbourhood of the origin.} \label{fig:CoamoebaShellBubble2MassG}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.45\textwidth}
\centering\includegraphics[width=5cm,trim=0cm 2cm 0cm 2cm, clip]{Graphics/Bubble2MassCoamoeba/coamoebaBubble2massesUF-b1=1-b2=4-p2=5.png}
\caption{$-p^2<(m_1-m_2)^2$}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering\includegraphics[width=5cm,trim=0cm 2cm 0cm 2cm, clip]{Graphics/Bubble2MassCoamoeba/coamoebaBubble2massesUF-b1=1-b2=4-p2=-3.png}
\caption{$(m_1-m_2)^2 < -p^2 < (m_1+m_2)^2$}
\end{subfigure} \\
\begin{subfigure}{.45\textwidth}
\centering\includegraphics[width=5cm,trim=0cm 2cm 0cm 2cm, clip]{Graphics/Bubble2MassCoamoeba/coamoebaBubble2massesUF-b1=1-b2=4-p2=-8.png}
\caption{ $(m_1-m_2)^2 < -p^2 < (m_1+m_2)^2$}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering\includegraphics[width=5cm,trim=0cm 2cm 0cm 2cm, clip]{Graphics/Bubble2MassCoamoeba/coamoebaBubble2massesUF-b1=1-b2=4-p2=-9,1.png}
\caption{$-p^2>(m_1+m_2)^2$}
\end{subfigure}
\caption[Coamoeba $\mathcal{C}_{\tilde{\mathcal{U}} \tilde{\mathcal{F}}}$ of the $1$-loop self-energy graph with two masses]{Coamoeba $\mathcal C_{\tilde\mathcal{U}\tilde\mathcal{F}}$ of the $1$-loop self-energy graph with two masses discussed in \cref{eq:CoamoebaBubble2Masses}. In $\mathcal C_{\tilde\mathcal{U}\tilde\mathcal{F}}$ we only observe the two thresholds $p^2=-(m_1\pm m_2)^2$, where we can determine $p^2=-(m_1-m_2)^2$ to be a pseudo threshold. Note that after exceeding the second threshold $p^2=-(m_1+m_2)^2$, the neighbourhood of the origin does not belong to $\overline{\mathcal C_{\tilde\mathcal{U}\tilde\mathcal{F}}}$. However, the number of connected components in $\Cc[\tilde\mathcal{U}\tilde\mathcal{F}]$ coincides with the number of connected components in $\Cc[\mathcal{G}]$ from \cref{fig:CoamoebaShellBubble2MassG}.} \label{fig:CoamoebaShellBubble2MassUF}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.45\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble2MassCoamoeba/bubble2massLopsided-b1=1-b2=4-p2=-3.png}
\caption{$p^2+m_1^2+m_2^2>0$}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering\includegraphics[width=.75\textwidth]{Graphics/Bubble2MassCoamoeba/bubble2massLopsided-b1=1-b2=4-p2=-8.png}
\caption{$p^2+m_1^2+m_2^2<0$}
\end{subfigure}
\caption[Lopsided coamoeba $\mathcal L\mathcal C_{\mathcal{G}}$ of the $1$-loop self-energy graph with two masses]{Lopsided coamoeba $\mathcal L \mathcal C_{\mathcal{G}}$ of the $1$-loop self-energy graph with two masses discussed in \cref{eq:CoamoebaBubble2Masses}. The lopsided coamoeba changes only when passing $p^2+m_1^2+m_2^2 = 0$, i.e.\ when the coefficients of $\mathcal{G}$ change their signs. However, one can conclude from the lopsided coamoeba that $p^2=-(m_1-m_2)^2$ corresponds to a pseudo threshold. The blue lines depict the lopsided shell $\mathcal L\mathcal H_\mathcal{G}$.}
\end{figure}
\end{example}
At the end of this chapter we want to give a comprehensive answer to the question of analytic continuations of Feynman integrals. We already discussed the analytic continuation with respect to the parameters ${\underline{\nu}}$ in \cref{sec:DimAnaReg}. We also discussed the singular locus $\operatorname{Sing}(H_\mathcal{A}({\underline{\nu}}))$ with respect to the variables $z$ in \cref{sec:LandauVarieties,sec:2ndtypeSingularities} and introduced the $\theta$-analogue, which applies whenever $\mathcal{G}(x)$ is not completely non-vanishing on $(0,\infty)^n$. Hence, we want to complete the picture with a result of \cite{BerkeschEulerMellinIntegrals2013}: the $\theta$-analogue of a generalized Feynman integral has a multivalued analytic continuation to all points except the singular locus.
\begin{theorem}[{analytic continuation of $\theta$-analogue, generalized Feynman integrals \cite[thm. 4.2]{BerkeschEulerMellinIntegrals2013}}]
Let $\operatorname{Sing}(H_\mathcal{A}({\underline{\nu}})) = \Var( E_A(\mathcal{G})) \subset \mathbb C^N$ be the singular locus of $\mathscr M_\mathcal{G}^\Theta$, $z\in\mathbb C^N\setminus\operatorname{Sing}(H_\mathcal{A}({\underline{\nu}}))$ a point outside the singular locus and $\Theta$ a connected component of $\mathbb T^n\setminus\overline{\mathcal C_\mathcal{G}}$. Then $\mathscr M_\mathcal{G}^\Theta({\underline{\nu}},z)$ has a multivalued analytic continuation to $\mathbb C^{n+1}\times (\mathbb C^N\setminus\operatorname{Sing}(H_\mathcal{A}({\underline{\nu}})))$, which is everywhere $\mathcal{A}$-hypergeometric.
\end{theorem}
Hence, we can analytically continue the generalized, $\theta$-analogue Feynman integral in its parameters ${\underline{\nu}}$ as well as in its variables $z$, for any connected component $\Theta$ of $\Cc[\mathcal{G}]$. \bigskip
We want to conclude this section with a short recapitulation of the main results. We have shown that the $\theta$-analogue Euler-Mellin integrals (when they exist for small values of $\theta$) provide an alternative to Feynman's $i\varepsilon$ prescription. These $\theta$-analogue Euler-Mellin integrals help us to investigate the nature of the solutions of the Landau equations. Thereby, the behaviour of $\theta$-analogue Euler-Mellin integrals is determined by the coamoeba, i.e.\ the coamoeba shows which contours we can take in the Feynman integral. Hence, we can find the properties of Landau singularities by considering the coamoeba, e.g.\ we can give conditions for pseudo thresholds and we can determine the discontinuity of Feynman integrals by means of the components of $\Cc[\mathcal{G}]$. Moreover, certain aspects of coamoebas can be approximated by their shells and by the lopsided coamoeba.\bigskip
\pagestyle{withoutSections}
\chapter{Conclusion and outlook}
The aim of this thesis was to characterize Feynman integrals in the context of $\mathcal{A}$-hypergeometric theory. This point of view is quite recent and was initiated inter alia by one of the articles on which this thesis is based. However, the connection between Feynman integrals and general hypergeometric functions is a long-standing topic of research going back to Tullio Regge. As we have illustrated above, $\mathcal{A}$-hypergeometric functions are the appropriate framework to address and answer these questions. In particular, we showed in \cref{thm:FeynmanAHypergeometric} that every generalized, scalar Feynman integral is an $\mathcal{A}$-hypergeometric function. Thereby, $\mathcal{A}\in\mathbb Z^{(n+1)\times N}$ is given as the homogenized vector configuration of the support $A$ of the Lee-Pomeransky polynomial $\mathcal{G}=\mathcal{U}+\mathcal{F} = \sum_{a^{(j)}\in A} z_j x^{a^{(j)}}$, and the GKZ-parameter ${\underline{\nu}}=\left(\frac{d}{2},\nu_1,\ldots,\nu_n\right)\in\mathbb C^{n+1}$ is determined by the spacetime dimension $d$ and the indices $\nu_i$ of the Feynman integral. Hence, the Feynman integral $\mathcal I_\mathcal{A}({\underline{\nu}},z)$ depends on only three objects: the vector configuration $\mathcal{A}\in\mathbb Z^{(n+1)\times N}$, which is determined by the graph topology; the parameters ${\underline{\nu}}\in\mathbb C^{n+1}$, which are especially important in dimensional and analytical regularization; and the variables $z\in\mathbb C^N$, which encode the dependence on masses and external momenta. The characterization of Feynman integrals as $\mathcal{A}$-hypergeometric functions means that every generalized, scalar Feynman integral $\mathcal I_\mathcal{A}({\underline{\nu}},z)$ satisfies the following two types of partial differential equations
\begin{align}
\left(\sum_{j=1}^N a^{(j)} z_j \partial_j + {\underline{\nu}}\right) \bullet \mathcal I_\mathcal{A}({\underline{\nu}},z) &= 0 \\
\left(\prod_{l_j>0} \partial_j^{l_j} - \prod_{l_j<0} \partial_j^{-l_j}\right) \bullet \mathcal I_\mathcal{A}({\underline{\nu}},z) &= 0
\end{align}
for all $l\in\mathbb L = \operatorname{Dep}(\mathcal{A})\cap\mathbb Z^N$, which generate the holonomic $D$-module $H_\mathcal{A}({\underline{\nu}})$. From the theory of holonomic $D$-modules, one can determine the dimension of the solution space $\operatorname{Sol}(H_\mathcal{A}({\underline{\nu}}))$. Thus, the dimension of the vector space of holomorphic functions satisfying this system of partial differential equations is given by the volume of the Newton polytope $\operatorname{Newt}(\mathcal{G})$, whenever ${\underline{\nu}}$ is generic (\cref{thm:CKKT,thm:HolRankAHyp}). For many classes of Feynman integrals, this holds even for non-generic ${\underline{\nu}}$ (\cref{thm:CohenMacaulayFeynman}). Furthermore, it was shown that every coefficient in a Laurent expansion, as appearing in dimensional or analytical regularization, can also be expressed by $\mathcal{A}$-hypergeometric functions (\cref{sec:epsilonExpansion}).\bigskip
From the property of being $\mathcal{A}$-hypergeometric, we also derived series representations for every Feynman integral. Thus, for each regular triangulation $\mathcal{T}$ of the Newton polytope $\operatorname{Newt}(\mathcal{G})$ one obtains a series representation of the following form (\cref{thm:FeynSeries})
\begin{align}
\mathcal I_\mathcal{A}({\underline{\nu}},z) = \frac{1}{\Gamma(\nu_0-\omega)\Gamma(\nu)} \sum_{\sigma\in \widehat{\mathcal T}} \frac{z_\sigma^{-\mathcal{A}_\sigma^{-1}{\underline{\nu}}}}{|\det (\mathcal{A}_\sigma)|} \sum_{\lambda\in\mathbb N_0^{N-n-1}} \!\!\!\! \frac{(-1)^{|\lambda|}}{\lambda!} \Gamma\!\left(\mathcal{A}_\sigma^{-1}{\underline{\nu}}+\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda\right) \frac{z_{\bar\sigma}^\lambda}{z_\sigma^{\mathcal{A}_\sigma^{-1}\mathcal{A}_{\bar{\sigma}}\lambda}} \ \text{.}
\end{align}
Power series of this type are also known as Horn hypergeometric functions. As there are in general many different ways to triangulate a polytope, there also exist many different series representations for a single Feynman integral, which can be connected by hypergeometric transformations. These series representations are particularly suited to efficient numerical evaluation, since one can choose the representations which converge fast for a given kinematic region. With a view to practical usage of this concept, we discussed possible obstacles which can appear in the concrete evaluation and gave strategies to overcome them. In particular, we offered a simple method to determine the analytic continuation in the case of non-generic values of $z$ (\cref{sec:AnalyticContinuation}), as they appear for more complex Feynman graphs. An implementation of this method sketched in \cref{ch:seriesRepresentations} is already in progress and is planned for publication.
Besides Horn hypergeometric series, the GKZ approach also allows the representation of Feynman integrals in terms of other hypergeometric functions, e.g.\ Euler integrals, which we discussed briefly in \cref{sec:EulerIntegrals}.\bigskip
Hence, we have characterized the Feynman integral in the three different guises in which hypergeometric functions can appear: as various types of integrals, as solutions of specific partial differential equations and as a certain type of series. From the perspective of general hypergeometric functions, the common ground of these representations is the vector configuration $\mathcal{A}\in\mathbb Z^{(n+1)\times N}$, which is also the only information from the topology of the Feynman graph that has an influence on the Feynman integral. The possibility of characterizing Feynman integrals in these three different ways is one of the key features of $\mathcal{A}$-hypergeometric functions. \bigskip
Besides numerical applications, there are also structurally very interesting implications for the Feynman integral from $\mathcal{A}$-hypergeometric theory. Thus, we have investigated the kinematic singularities of scalar Feynman integrals from the $\mathcal{A}$-hypergeometric perspective. This point of view provides a mathematically rigorous description of those singularities by means of principal $A$-determinants. More precisely, it turns out that the singular locus of Feynman integrals is the variety defined by the principal $A$-determinant of the sum of the Symanzik polynomials $\mathcal{G}=\mathcal{U}+\mathcal{F}$ (\cref{thm:SingularLocusPrincipalAdet})
\begin{align}
\operatorname{Sing}(H_\mathcal{A}({\underline{\nu}})) = \Var (E_A(\mathcal{G})) = \Var (\widehat E_A(\mathcal{G}))
\end{align}
or by the simple principal $A$-determinant $\widehat E_A(\mathcal{G})$, respectively. Thereby, principal $A$-determinants $E_A(\mathcal{G})\in\mathbb Z [z_1,\ldots,z_N]$ are polynomials in the coefficients of the polynomial $\mathcal{G}$, uniquely determined up to a sign.
Every principal $A$-determinant $E_A(\mathcal{G})$ can be factorized into $A$-discriminants, where each one corresponds to a face of $\operatorname{Newt}(\mathcal{G})$ (\cref{thm:pAdet-factorization}). Further, we can divide the $A$-discriminants for $\widehat E_A(\mathcal{G})$ into four different parts
\begin{align}
\widehat E_A (\mathcal{G}) = \pm \vspace{-1em} \prod_{\tau\subseteq \operatorname{Newt}(\mathcal{G})} \Delta_{A\cap\tau} (\mathcal{G}_\tau) = \pm \, \Delta_A(\mathcal{G}) \cdot \widehat E_{\mathcal{A}_\mathcal{F}}(\mathcal{F}) \cdot \widehat E_{\mathcal{A}_\mathcal{U}}(\mathcal{U}) \cdot R \point
\end{align}
We have shown that the $A$-discriminant of the full polytope $\Delta_A(\mathcal{G})$ can be identified with the so-called second-type singularities (\cref{sec:2ndtypeSingularities}). The principal $A$-determinant of the second Symanzik polynomial $\mathcal{F}$ can be associated with the Landau variety, i.e.\ $\mathcal L_1 (\mathcal I_\Gamma) = \Var \Big(\widehat E_{\mathcal{A}_\mathcal{F}}(\mathcal{F}) \Big)$ (\cref{thm:LandauVar}). The polynomial $R$ contains all remaining $A$-discriminants coming from proper, mixed faces of $\operatorname{Newt}(\mathcal{G})$ and corresponds to second-type singularities of subgraphs, whereas $\widehat E_{\mathcal{A}_\mathcal{U}} (\mathcal{U})$ has no influence on the singular locus when considering the restriction to physically relevant values of $z$. Remarkably, we have found that the singular locus of Feynman integrals (especially the Landau variety $\mathcal L_1(\mathcal I_\Gamma)$) includes parts which were overlooked in previous approaches. This is due to the fact that not all truncated polynomials have an equivalent polynomial coming from a subgraph. However, this difference has an impact only for Feynman graphs beyond $1$-loop or banana graphs (\cref{lem:SubgraphPolynomialsVsTruncated}). This may be the reason why these forgotten singularities were not detected earlier. As an example, we presented the Landau variety of the dunce's cap graph in \cref{sec:ExampleDuncesCap}, where one can also observe those additional contributions.\bigskip
From the perspective of $\mathcal{A}$-hypergeometric theory, the $\mathcal{A}$-hypergeometric functions are often said to be ``quantizations'' of $A$-discriminants \cite{GelfandDiscriminantsResultantsMultidimensional1994}. Here, ``quantization'' should not be confused with the quantization in physics, which is why the use of this term may be misleading in this context. However, the relation between $A$-discriminants and $\mathcal{A}$-hypergeometric functions has certain formal similarities with the quantization procedure in physics. Thus, if one maps the $A$-discriminants from the commutative polynomial ring in a specific manner to a non-commutative Weyl algebra, one obtains a partial differential equation system which precisely characterizes the $\mathcal{A}$-hypergeometric function. In this sense, Feynman integrals are the ``quantization'' of Landau varieties. \bigskip
By means of \cref{thm:NewtSec} we revealed an unexpected connection between Landau varieties and the set of all triangulations of $\operatorname{Newt}(\mathcal{F})$ going back to \cite{GelfandDiscriminantsResultantsMultidimensional1994}. It states that the Newton polytope of $E_{\mathcal{A}_\mathcal{F}}(\mathcal{F})$ coincides with the secondary polytope
\begin{align}
\operatorname{Newt}(E_{\mathcal{A}_\mathcal{F}}(\mathcal{F})) = \Sigma(A_\mathcal{F}) \point
\end{align}
The same holds also for the principal $A$-determinant $E_A(\mathcal{G})$ and the secondary polytope $\Sigma(A_\mathcal{G})$. This relation can be understood against the background of series representations, where the structure of variables -- and thus also their singularities -- depends on triangulations. \bigskip
Apart from the description of the singular locus, we also introduced a powerful tool to determine the singular locus: the Horn-Kapranov parameterization. This parameterization relies, in the same vein, on the above-mentioned relation between the singular locus and series representations. Thus, the calculation of a Gale dual is sufficient to obtain a parameterization of the hypersurface defined by an $A$-discriminant. Clearly, such a parametric representation of a variety differs from a representation via defining polynomials. However, it can be even more convenient for many approaches, since it describes the singularities directly. Bearing in mind that a representation of Landau varieties by a defining polynomial would require a prohibitive effort for almost all Feynman graphs, we want to advertise the use of the Horn-Kapranov parameterization.\bigskip
Finally, in order to study the monodromy of Feynman integrals, we introduced an Euler-Mellin integral with a rotated integration contour. We showed that this \mbox{$\theta$-analogue} Euler-Mellin integral coincides with Feynman's $i\varepsilon$ prescription in the limit $\varepsilon\rightarrow 0^+$, whenever this limit exists. However, these $\theta$-analogue Euler-Mellin integrals have several advantages compared to the $i\varepsilon$ prescription. In particular, in the descriptions of those $\theta$-analogue Euler-Mellin integrals one does not have to take a limit to discuss the discontinuity at a branch cut.
Moreover, the behaviour of $\theta$-analogue Euler-Mellin integrals and the monodromy of Feynman integrals are substantially determined by the coamoeba of the polynomial $\mathcal{G}$. From the shape of the coamoeba of $\mathcal{G}$, we can also infer the nature of the singularities of Feynman integrals, i.e.\ we can identify pseudo thresholds. We also sketched several ways to approximate the coamoeba in order to derive efficient algorithms. However, the application of coamoebas to Feynman integrals leaves many questions open and will surely be a worthwhile focus for future research.\bigskip
We would like to conclude this thesis with an incomplete list of questions and ideas that we think are of interest and that might be answered with the help of $\mathcal{A}$-hypergeometric theory. First of all, it would be desirable to extend the knowledge about kinematic singularities and the monodromy of Feynman integrals. In particular, we would like to find a clear description of what happens to the singular locus when the variables take non-generic values, since this case is usually left out in classical treatments of the subject. We would also like to find the dimension of the solution space when considering non-generic values. Further, it would be interesting to know whether the truncated polynomials which contribute to the singular locus $\operatorname{Sing}(H_\mathcal{A}({\underline{\nu}}))$ always have a graphical equivalent, as was the case in the example of \cref{sec:ExampleDuncesCap}. To study the monodromy of Feynman integrals, we would also like to investigate the coamoeba further.
Moreover, it would be interesting to apply the $\mathcal{A}$-hypergeometric approach also at other stages of perturbative QFT. For instance, we can consider the GKZ systems for each Horn hypergeometric series appearing in the Laurent expansion of a Feynman integral. This would give an alternative way to analytically continue those series, as well as a method to find simpler representations. Further, it would be promising to apply the $\mathcal{A}$-hypergeometric approach also to the $S$-matrix itself, which goes back to a suggestion in \cite{KawaiMicrolocalStudySmatrix1975, SatoRecentDevolpmentHyperfunction1975}.
As Feynman integrals form only a subclass of $\mathcal{A}$-hypergeometric functions, it would be interesting to see whether this subclass can be characterized further. For example, there is the legitimate hypothesis that Feynman integrals are always Cohen-Macaulay (see \cref{thm:CohenMacaulayFeynman}) when external momenta are considered to be generic.
Furthermore, we would like to gain an understanding of linear relations among Feynman integrals from the $\mathcal{A}$-hypergeometric perspective. Very recently, a method was proposed to generate linear relations by means of Pfaffian systems coming from $\mathcal{A}$-hypergeometric systems \cite{ChestnovMacaulayMatrixFeynman2022}. Usually, one constructs partial differential equations in the kinematic variables from those linear relations. Hence, it could be inspiring to see this relation at the more formal level of GKZ. Thus, one would have to consider the relation\footnote{The connection between the $D$-module $H_\mathcal{A}({\underline{\nu}})$ and a shift algebra, shifting the values of ${\underline{\nu}}$ by integer amounts, is already indicated by \cref{eq:FeynmanJDerivative}. In this context we also refer to \cite[ch. 5]{SaitoGrobnerDeformationsHypergeometric2000}.} between the $D$-module $H_\mathcal{A}({\underline{\nu}})$ and the $s$-parametric annihilator $\operatorname{Ann}(\mathcal{G}^s)$ generating all linear relations among Feynman integrals \cite{BitounFeynmanIntegralRelations2019}.
To connect the hypergeometric perspective more strongly to the graph perspective, it would be fruitful to consider the graph operations of deletion and contraction at the level of Feynman integrals. \bigskip
With the series representations and the singular locus of Feynman integrals we studied only two particular aspects arising from the connection between $\mathcal{A}$-hypergeometric theory and Feynman integrals. Since Feynman integrals and $\mathcal{A}$-hypergeometric functions were mainly developed independently of each other, we strongly believe that physicists and mathematicians can learn much from each other in this area.
\pagestyle{fancyStandard}
|
{
"arxiv_id": "2302.13160",
"language": "en",
"timestamp": "2023-02-28T02:11:31",
"url": "https://arxiv.org/abs/2302.13160",
"yymm": "2302"
} | \section{Conclusion}
\label{Conclusion}
Random projection trees (rpTrees) generalize the concept of $k$d-trees by splitting along random directions, instead of restricting the splits to the existing dimensions. An rpForest is a collection of rpTrees. The intuition is that if one tree separates two true neighbors, this is less likely to happen in other rpTrees. Previous studies suggested using the random direction with the maximum dispersion of points to get better splits. However, it was not clear how the dispersion of points would affect $k$-nn search in a large rpForest. We conducted experiments to investigate whether the dispersion of points affects $k$-nn search in rpForests with a varying number of trees. We are confident in recommending that, if the number of trees is sufficient to achieve a small error rate, one is better off using random directions regardless of the dispersion of points. Given a sufficient number of rpTrees, this method is faster than the other methods while delivering the same $k$-nn search accuracy.
A possible extension of this work is to test how the examined method performs in problems other than $k$-nn search, such as spectral clustering. We would also like to explore whether there are metrics other than the dispersion of points that improve $k$-nn search with a smaller computational footprint.
\section{Experiments and discussions}
\label{Experiments}
\begin{figure*}
\centering
\includegraphics[width=0.24\textwidth]{Dataset-01.png}
\includegraphics[width=0.24\textwidth]{Dataset-02.png}
\includegraphics[width=0.24\textwidth]{Dataset-03.png}
\includegraphics[width=0.24\textwidth]{Dataset-04.png}
\caption{Synthetic datasets used in the experiments; from left to right \texttt{Dataset 1} to \texttt{Dataset 4}; source: \cite{Zelnik2005Self,ClusteringDatasets}.}
\label{Fig:Fig-07}
\end{figure*}
\begin{table*}
\centering
\caption{Properties of tested datasets. $N =$ number of samples, $d =$ number of dimensions (i.e., features).}
\begin{tabular}{l rrr | l rrr | l rrr}
\hline\hline \\ [-1.5ex]
\multicolumn{4}{c}{Synthetic datasets} &
\multicolumn{4}{c}{Small real datasets} &
\multicolumn{4}{c}{Large real datasets}\\ [0.5ex]
\hline
& $N$ & $d$ & source &
& $N$ & $d$ & source &
& $N$ & $d$ & source\\
Dataset 1 & 303 & 2 & \cite{Zelnik2005Self} &
iris & 150 & 4 & \cite{scikit-learn} &
mGamma & 19020 & 10 & \cite{Dua2019UCI}\\
Dataset 2 & 238 & 2 & \cite{Zelnik2005Self} &
wine & 178 & 13 & \cite{scikit-learn} &
Cal housing & 20640 & 8 & \cite{scikit-learn}\\
Dataset 3 & 622 & 2 & \cite{Zelnik2005Self} &
BC-Wisc. & 569 & 30 & \cite{scikit-learn} &
credit card & 30000 & 24 & \cite{Dua2019UCI}\\
Dataset 4 & 788 & 2 & \cite{ClusteringDatasets} &
digits & 1797 & 64 & \cite{scikit-learn} &
CASP & 45730 & 9 & \cite{Dua2019UCI}\\
\hline
\end{tabular}
\label{Table:Table-01}
\end{table*}
\begin{figure*}
\centering
\begin{tabular}{c | c }
\hline \\ [-1.5ex]
$T \in \{1, 2, 3, 4, 5\}$ &
$T \in \{10, 20, 40, 60, 80, 100\}$ \\ [0.5ex]
\hline
\raisebox{-\totalheight}{\includegraphics[width=0.49\textwidth]{synthetic-1.png}} &
\raisebox{-\totalheight}{\includegraphics[width=0.49\textwidth]{synthetic-10.png}} \\
\hline
\end{tabular}
\caption{Average missing rate $\overline{m}$ and average distance error $\overline{d}_k$ after testing the four methods on four synthetic datasets: \texttt{Dataset 1}, \texttt{Dataset 2}, \texttt{Dataset 3}, and \texttt{Dataset 4}. Each box represents an average over 100 runs. (Best viewed in color)}
\label{Fig:Fig-synthetic}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{c | c }
\hline \\ [-1.5ex]
$T \in \{1, 2, 3, 4, 5\}$ &
$T \in \{10, 20, 40, 60, 80, 100\}$ \\ [0.5ex]
\hline
\raisebox{-\totalheight}{\includegraphics[width=0.49\textwidth]{small-1.png}} &
\raisebox{-\totalheight}{\includegraphics[width=0.49\textwidth]{small-10.png}} \\
\hline
\end{tabular}
\caption{Average missing rate $\overline{m}$ and average distance error $\overline{d}_k$ after testing the four methods on four small real datasets: \texttt{iris}, \texttt{wine}, \texttt{BC-Wisc.}, and \texttt{digits}. Each box represents an average over 100 runs. (Best viewed in color)}
\label{Fig:Fig-small}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{c | c }
\hline \\ [-1.5ex]
$T \in \{1, 2, 3, 4, 5\}$ &
$T \in \{10, 20, 40, 60, 80, 100\}$ \\ [0.5ex]
\hline
\raisebox{-\totalheight}{\includegraphics[width=0.49\textwidth]{large-1.png}} &
\raisebox{-\totalheight}{\includegraphics[width=0.49\textwidth]{large-10.png}} \\
\hline
\end{tabular}
\caption{Average missing rate $\overline{m}$ and average distance error $\overline{d}_k$ after testing the four methods on four large real datasets: \texttt{mGamma}, \texttt{Cal housing}, \texttt{credit card}, and \texttt{CASP}. Each box represents an average over 10 runs. (Best viewed in color)}
\label{Fig:Fig-Large}
\end{figure*}
The experiments were designed to investigate the relationship between the dispersion of points ($\sigma_r$) when splitting tree nodes and the number of trees ($T$). There are a couple of factors that could affect the $k$-nearest neighbor search in rpForests. The first factor is the dataset used, which includes the number of samples $n$ and the number of dimensions $d$. The second factor that affects $k$-nn search is the value of $k$, which indicates how many neighbors should be returned by the algorithm. To draw a consistent conclusion on the relationship between the dispersion of points and the number of trees, we have to test the method on datasets from different applications. We also have to repeat the experiments using different values of $k$.
The evaluation metrics used in our experiments are the ones proposed by Yan et al. \cite{Yan2021Nearest,Yan2018Nearest}. The first metric, in equation \ref{Eq-002}, measures the number of neighbors missed by the algorithm. The second metric (shown in equation \ref{Eq-003}) measures the distance to the $k^{\text{th}}$ neighbor found by the algorithm. Naturally, if the algorithm makes a mistake, the distance to the $k^{\text{th}}$ neighbor it returns will be larger than the distance to the true $k^{\text{th}}$ neighbor.
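The exact definitions are those given in equations \ref{Eq-002} and \ref{Eq-003}. As a rough sketch only (our own illustration with hypothetical names, assuming the missing rate is the fraction of true neighbors not retrieved and the distance error is the gap between the $k^{\text{th}}$ retrieved distance and the true $k^{\text{th}}$ distance), the two metrics can be computed for a single query point along the following lines:
\begin{verbatim}
import numpy as np

def knn_quality(X, query, found_idx, k):
    """Rough sketch of the two evaluation metrics (hypothetical implementation;
    the exact definitions are those of Yan et al.).

    X         : data matrix, shape (n, d); the query is assumed not to be a row of X
    query     : query point, shape (d,)
    found_idx : indices of the k neighbors returned by the rpForest search
    """
    dists = np.linalg.norm(X - query, axis=1)
    true_idx = np.argsort(dists)[:k]               # exact k nearest neighbors
    # missing rate: fraction of true neighbors not retrieved
    missing = len(set(true_idx) - set(found_idx)) / k
    # distance error: gap between the k-th retrieved and the true k-th distance
    d_found = np.max(dists[list(found_idx)])
    d_true = dists[true_idx[-1]]
    return missing, d_found - d_true
\end{verbatim}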
The code used to produce the experiments is available at \url{https://github.com/mashaan14/RPTree}. To make our experiments reproducible, we used standardized datasets and off-the-shelf machine learning libraries. The properties of the tested datasets are shown in Table\ \ref{Table:Table-01}. The functions to perform random projection and principal component analysis (PCA) in our experiments are the ones provided by the scikit-learn library in python \cite{scikit-learn, sklearn_api}. All experiments were coded in python 3 and run on a machine with 20 GB of memory and a 3.10 GHz Intel Core i5-10500 CPU.
\subsection{Experiments on synthetic and real data}
\label{SyntheticRealData}
In the first part of this experiment, we used four synthetic datasets with different distributions of data points. The value of $k$ was set to 5 and the rpTree node $W$ was not allowed to have fewer than 20 data points. The results of this experiment are shown in Figure\ \ref{Fig:Fig-synthetic}. An observation that persisted across all datasets is that all methods performed better with a large number of rpTrees. The missing rate remained low for all methods when $T>20$, and so did the distance error.
\texttt{Method 1} has the highest missing rate when $T=1$ in three of the four datasets. This is due to the randomness involved in this method: since it does not maximize the dispersion of points before splitting them, the chance that two true neighbors are placed in different tree branches increases. When we increased the number of trees to $T>20$, \texttt{Method 1} delivered similar performance to the other competing methods, as shown in the right side of Figure\ \ref{Fig:Fig-synthetic}. Since \texttt{Method 1} involves fewer computations than the other methods, we recommend using it when $T>20$.
The second part of this experiment uses the small real datasets shown in Table\ \ref{Table:Table-01}. The same settings as for the synthetic datasets were used here: $k=5$ and the capacity of the node $W$ was not allowed to drop below 20 data points. The results of this experiment are shown in Figure\ \ref{Fig:Fig-small}, where all boxes are averages of 100 runs.
We continue to make the same observation as in the last experiment. When the number of trees is large ($T>20$), it does not matter which method is used to construct the rpTrees. So the user is better off using the original method, \texttt{Method 1}, because it involves fewer computations.
We also noticed that in the last dataset, the \texttt{digits} dataset, the methods needed a large number of trees, usually above 20, to maintain a low missing rate, as shown in the right side of Figure\ \ref{Fig:Fig-small}. This was not the case in the previous three datasets \texttt{iris}, \texttt{wine}, and \texttt{breast cancer}, where the missing rate dropped and stabilized well before $T=20$. This can be explained by the larger number of data points in the \texttt{digits} dataset compared with the other three datasets.
The last part of this experiment was to test the competing methods on large real datasets: \texttt{mGamma}, \texttt{Cal housing}, \texttt{credit card}, and \texttt{CASP}. The properties of these datasets are shown in Table\ \ref{Table:Table-01}. The results of this experiment are shown in Figure\ \ref{Fig:Fig-Large}, in which all boxes represent an average of 10 runs.
With the \texttt{mGamma} dataset, the $k$-nn search was ineffective when $T=1$, regardless of the method used. The competing methods have some runs where the average missing rate $\overline{m}=1$, indicating that the methods could not retrieve any of the true neighbors when $T=1$. As we increase the number of trees to $1 < T \le 5$, we notice that $\overline{m}$ drops below $0.2$, which means the methods were able to retrieve some of the true neighbors. With $T > 20$, $\overline{m}$ and $\overline{d}_k$ stabilize around 0, which means the methods are able to retrieve all true neighbors, and the distance to the $k^{\text{th}}$ neighbor found by the methods equals the true distance.
The observation we made with the \texttt{mGamma} dataset when increasing the value of $T$ also applies to the other datasets in Figure\ \ref{Fig:Fig-Large}: \texttt{Cal housing}, \texttt{credit card}, and \texttt{CASP}. We also observed that some runs with the \texttt{credit card} dataset had a high $\overline{m}$ value even with a large number of trees ($20 < T \le 40$). This can be explained by the high dimensionality of this dataset ($d=24$), which affects the $k$-nn search. But with $T > 60$, the average missing rate $\overline{m}$ stabilizes around zero for all competing methods.
\begin{table*}
\centering
\caption{Results of running the t-test on the samples collected from different methods using large real datasets: \texttt{mGamma}, \texttt{Cal housing}, \texttt{credit card}, and \texttt{CASP}. $T$ is the number of trees; (-) indicates that the samples have identical averages.}
\includegraphics[width=\textwidth,height=20cm,keepaspectratio]{Hypothesis-testing.pdf}
\label{Table:Hypothesis-testing}
\end{table*}
\subsection{Hypothesis testing}
\label{Hypothesis-testing}
We performed a statistical significance test to evaluate the samples we collected from our experiment. The hypothesis testing was formulated as follows:
\begin{itemize}
\item $H_0$: $\mu_A = \mu_B$ (maximizing the points' dispersion does not change $k$-nn search results with $T>20$)
\item $H_1$: $\mu_A \ne \mu_B$ (maximizing the points' dispersion changes $k$-nn search results with $T>20$)
\end{itemize}
\noindent
$\mu_A$ is the average missing rate $\overline{m}$ using \texttt{Method 1}, and $\mu_B$ is the average missing rate $\overline{m}$ using \texttt{Method 2}, \texttt{Method 3}, or \texttt{Method 4}.
We used the function (\texttt{scipy.stats.ttest\_ind}) \cite{2020SciPy-NMeth} to perform the t-test. The results are shown in Table\ \ref{Table:Hypothesis-testing}. In most cases the collected samples have identical averages, which is indicated by a dash (-). In some cases, particularly for the \texttt{credit card} dataset, the averages were not identical. However, the p-value was greater than $0.05$, which is not statistically significant. Based on the results shown in Table\ \ref{Table:Hypothesis-testing}, we fail to reject the null hypothesis $H_0$.
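For reference, the test was invoked along these lines (a minimal sketch; the arrays hold illustrative per-run average missing rates for two methods at a fixed $T$, not the values from our experiments):
\begin{verbatim}
import numpy as np
from scipy.stats import ttest_ind

# per-run average missing rates for two methods at the same number of trees T
# (illustrative numbers only)
missing_method_1 = np.array([0.02, 0.01, 0.00, 0.03, 0.01])
missing_method_2 = np.array([0.01, 0.02, 0.01, 0.02, 0.00])

res = ttest_ind(missing_method_1, missing_method_2)
print(f"t = {res.statistic:.3f}, p = {res.pvalue:.3f}")
if res.pvalue > 0.05:
    print("fail to reject H0: no statistically significant difference")
\end{verbatim}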
\subsection{Experiments using a varying value of $k$}
\label{Varyk}
In this experiment we fixed the dataset factor and varied the value of the returned nearest neighbors (i.e., $k$). In the previous experiment the value of $k$ was fixed to $5$, and the number of data points in an rpTree node $W$ was not allowed to be fewer than $20$. These are the same settings used by Yan et al. \cite{Yan2021Nearest}. With a varying value of $k$ we had to increase the capacity of the node $W$ to 30 instead of $20$. This was sufficient to cover the tested $k$ values, which are $k \in \{7,9,11,13,15,17,19,21\}$. The results are shown in Figure\ \ref{Fig:Fig-12}.
In general, with an increasing number of rpTrees $T$ in the rpForest, it does not matter which method for the dispersion of points is used. We also observed that with smaller values of $k$ (roughly $k < 11$), the missing rate drops faster as we increase the number of trees $T$. This can be explained by the capacity of the node $W$. With larger $k$ values we are approaching the capacity of $W$, which was set to 30. This increases the chance of missing some neighbors, as the method requires all $k$ nearest neighbors to be present in the current node $W$. This is not the case when $k$ is small, because these few $k$ nearest neighbors have a higher chance of ending up in the same node $W$. Consequently, we need only a few trees for small $k$ values, and a sufficiently large number of trees (ideally $T > 20$) for large $k$ values.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{Results-Digits-k.png}
\caption{Average missing rate $\overline{m}$ after testing different values of $k$ on the \texttt{digits} dataset. Each point in the plot represents an average over 10 runs. (Best viewed in color)}
\label{Fig:Fig-12}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{Results-Digits-time.png}
\caption{Average running time after testing the four methods on the \texttt{digits} dataset. Each bar represents an average over 100 runs. We used a machine with 20 GB of memory and a 3.10 GHz Intel Core i5-10500 CPU. (Best viewed in color)}
\label{Fig:Fig-13}
\end{figure*}
\subsection{Experiments on running time}
\label{RunningTime}
The running time experiment (shown in Figure\ \ref{Fig:Fig-13}) measures the time taken by each method to: 1) construct the rpForest, and 2) return the $k$ nearest neighbors. The value of $k$ was set to 5, and the minimum capacity of an rpTree node $W$ was set to 20 data points. The dataset used in this experiment was the \texttt{digits} dataset with 1797 samples and 64 dimensions.
\texttt{Method 1}, which is the original rpTree algorithm \cite{Dasgupta2008Random}, was the fastest across all experiments. This was not a surprise, since this method picks only one random direction and projects the data points onto it. On the contrary, \texttt{Method 2} and \texttt{Method 3} attempt to maximize the dispersion of points using expensive computations before splitting these points into different tree branches. \texttt{Method 4} tries to do the same using the expensive process of principal component analysis (PCA).
A final remark on this experiment is why \texttt{Method 4} is slightly faster than \texttt{Method 2} and substantially faster than \texttt{Method 3}. This can be explained by the for loops involved in these two methods. \texttt{Method 2} uses a single for loop that runs $nTry$ times to find the direction with the maximum dispersion. \texttt{Method 3} uses three for loops, each of which runs $nTry$ times, to find the direction with the maximum dispersion and then adds noise to it to maximize the dispersion. Meanwhile, \texttt{Method 4} uses the native PCA implementation in scikit-learn, which relies on an optimized library to find the principal components.
If we couple the findings from this experiment with the findings from the previous two experiments, we are confident in recommending the use of the original rpTree algorithm \cite{Dasgupta2008Random}. It is true that this method might produce a random direction that splits two true neighbors. But with an increasing number of trees ($T>20$) this risk is mitigated, since additional trees minimize the risk of splitting two true neighbors. Using 100 rpTrees, \texttt{Method 1} is faster than \texttt{Method 2}, \texttt{Method 3}, and \texttt{Method 4} by 100\%, 300\%, and 100\% respectively. This performance gain comes with a similar $k$-nearest neighbor missing rate, making \texttt{Method 1} the recommended method for $k$-nn search in rpForests.
\section{Introduction}\label{Introduction}
$k$-nearest neighbor ($k$-nn) search is one of the fundamental tasks in machine learning. Given a set of $n$ points in $d$ dimensions $X = \{x_1, x_2, \cdots, x_n \}$, the $k$-nn search problem is defined as the problem of finding the subset of $X$ containing the $k$ nearest neighbors of a test point $x_i$ \cite{hart2000pattern}. Machine learning algorithms make decisions based on sample matching, which is usually done using $k$-nearest neighbor search. It has been used in several machine learning tasks such as time series classification \cite{Gweon2021Nearest,Kowsar2022Shape} and spectral clustering \cite{Alshammari2021Refining,Yuan2020Spectral,Kim2021KNN}.
In their extensive review of $k$-nn search methods, Muja and Lowe \cite{Muja2014Scalable} identified three categories of $k$-nn search methods: partitioning trees, hashing techniques and neighboring graph techniques. Performing $k$-nearest neighbor search on partitioning trees has been the go-to option for most machine learning libraries (e.g., the scikit-learn library in python \cite{scikit-learn, sklearn_api} and the knnsearch function in Matlab \cite{MATLABknnsearch}). These libraries implement a special data structure named the k-dimensional tree (or $k$d-tree for short) \cite{Bentley1975Multidimensional,Friedman1977Algorithm,Ram2013Which}. When branching the $k$d-tree, the median point of one dimension divides the points into two parts. At the next branch, the next dimension is used to divide the remaining points into two branches. This process of median splits along alternating dimensions continues until the leaves of the tree contain only a few data points. It restricts the splits in a $k$d-tree to the existing dimensions, which makes $k$d-trees vulnerable to the curse of dimensionality. In other words, a $k$d-tree increasingly needs more tree levels to decrease the vector quantization (VQ) error and produce good partitioning \cite{Dasgupta2008Random,Freund2008Learning,Dasgupta2015Randomized}. The introduction of random projection trees (rpTrees) generalizes the concept of the $k$d-tree. Instead of cutting along an existing dimension, the points are projected onto a random direction and split at a random split point. This eases the need for more tree levels as the number of points grows. Random projection forests (rpForests) were introduced to overcome the errors made by single rpTrees, the intuition being that the aggregated results are more reliable \cite{Yan2018Nearest,Yan2021Nearest}.
There are two factors that affect the performance of $k$-nearest neighbor search in rpForests. The first is the dispersion of points along the projected dimension. Intuitively, the more we spread the points along this random axis, the less likely we are to separate two neighbors. The second factor that affects $k$-nn search in rpForests is the number of trees. If two neighbors were separated in one tree, there is a higher chance that they will be placed together in other trees.
The purpose of this article is to investigate the effect of maximizing the points' dispersion on $k$-nn search in rpForests. The $k$-nn search results were evaluated using two metrics proposed by Yan et al. \cite{Yan2018Nearest} and Yan et al. \cite{Yan2021Nearest}. Our experiments revealed that the gains from attempting to maximize the dispersion of points are outweighed by the growth of computations as the number of trees increases. We also found that as the number of trees increases, maximizing the dispersion does not improve the quality of $k$-nn search in rpForests. When the number of trees is large ($T>20$), we are better off using the original rpTree algorithm, that is, picking a random direction regardless of the dispersion and using it to split the points.
This paper is organized as follows: in the next section we review the methods used for $k$-nn search and the advancements on rpTrees. In section \ref{ProposedApproach}, we present our approach to performing $k$-nn search in rpForests, how we construct rpForests, and how the $k$-nn search results are aggregated from different trees. In section \ref{Experiments}, the experiments are explained and discussed.
\section{Related work}
\label{RelatedWork}
$k$-nearest neighbors can be found via exact or approximate search. The difference is that approximate search is faster, but unlike exact search, it is not guaranteed to return the true nearest neighbors. In some cases, exact search can be faster, for example when using time series data \cite{rakthanmanon2012searching}. In the $d$-dimensional case, prestructuring is a common way to perform approximate $k$-nn search and reduce the computational burden \cite{hart2000pattern}. Prestructured $k$-nn search techniques can be classified into two categories: 1) searching hash-based indices, or 2) searching hierarchical tree indices \cite{Ram2013Which}.
Locality-sensitive hashing (LSH) is one of the widely used methods in the hashing category \cite{Andoni2008Near}. The idea behind LSH is simple: instead of searching the data space, the search takes place in a hash table, which is faster and more efficient. A query point $p$ gets a hash key and falls into a hash bucket within the search space; the neighbors in that bucket are returned by the $k$-nn search. A number of studies have successfully applied LSH. A kernelized LSH has been introduced to accommodate kernel functions and visual features \cite{Kulis2009Kernelized}, and LSH was used to detect multi-source outliers \cite{Ma2021Outlier} and to build a k-means-like clustering method \cite{Mau2021LSH}. In LSH, the $k$-nn search results are largely dependent on the content of the hash table. The challenge is how to design a hash function that accurately encodes the data into a hash table.
\begin{figure*}
\centering
\includegraphics[width=\textwidth,height=20cm,keepaspectratio]{Fig-01.png}
\caption{An example of an rpTree; points in blue are placed in the left child and points in orange are placed in the right child. (Best viewed in color)}
\label{Fig:Fig-01}
\end{figure*}
Performing $k$-nearest neighbor search in partitioning trees overcomes the challenges posed by hash-based methods. Partitioning trees do not require a hash function, and building the index is computationally efficient. $k$d-trees are widely used for $k$-nn search. However, the splits have to be made along the existing dimensions, which makes $k$d-trees vulnerable to the curse of dimensionality. rpTrees are a generalization of $k$d-trees: instead of splitting along an existing dimension, the split occurs along a random direction selected uniformly (as shown in Figure\ \ref{Fig:Fig-01}). Let $W$ be a node in an rpTree; all points in $W$ are projected onto a direction $\overrightarrow{r}$ chosen at random. Along the direction $\overrightarrow{r}$, a split point $c$ is selected randomly between the $\frac{1}{4}$ and $\frac{3}{4}$ quantiles of the projected points. Unlike the median split in a $k$d-tree, which occurs exactly at the median, $c$ is a perturbed split. If a projected point $x$ is less than $c$, it is placed in the left child $W_L$; otherwise, $x$ is placed in the right child $W_R$. Equation\ \ref{Eq-001} shows the placement rules in rpTrees.
\begin{equation}
\begin{split}
W_L=\{x \in W : x \cdot \overrightarrow{r} < c\} \\
W_R=\{x \in W : x \cdot \overrightarrow{r} \ge c\}.
\end{split}
\label{Eq-001}
\end{equation}
The construction process of an rpTree consists of two steps: 1) projecting the points in $W$ onto a random direction $\overrightarrow{r}$, and 2) picking a split point $c$ at random along $\overrightarrow{r}$. Ideally, these two steps should not separate two true neighbors into two different tree branches. To minimize this risk, (Yan et al., 2018) \cite{Yan2018Nearest} and (Yan et al., 2021) \cite{Yan2021Nearest} suggested that one should pick a number of random directions and project onto the one that yields the maximum dispersion of points. They also suggested using enough rpTrees in an rpForest to rectify errors produced by some rpTrees. However, the relationship between the dispersion of points ($\sigma_r$) and the number of trees ($T$) is not clear. In this work, we used four methods to construct rpForests, each of which has its own strategy for handling the dispersion of points ($\sigma_r$).
\section{$k$-nearest neighbor search in random projection forests}
\label{ProposedApproach}
$k$-nearest neighbor search in random projection forests (rpForests) is influenced by two factors: the dispersion of points ($\sigma_r$) along a random direction and the number of rpTrees in rpForest ($T$). The relationship between these two factors was not covered in recent studies. Therefore, we designed our experiments to investigate how these two factors affect $k$-nn search in rpForests.
A single rpTree is the basic building block of an rpForest. Performing $k$-nn search in rpForests involves: 1) building the rpTrees in $O(n\log n)$ time, 2) traversing the search sample $x$ from the root node of each tree to a leaf node in $O(n\log n)$ time, and 3) aggregating the results across all trees and finding the nearest neighbors, where $n$ is the number of data points.
To investigate the relationship between the dispersion of points ($\sigma_r$) and the number of rpTrees in an rpForest ($T$), we designed four methods to build rpTrees. The first method is the original rpTree algorithm developed by (Dasgupta and Freund, 2008) \cite{Dasgupta2008Random}. It picks a random direction regardless of the dispersion of points ($\sigma_r$). The second method of rpTree construction is the one developed by (Yan et al., 2021) \cite{Yan2021Nearest}. They suggested picking a number of random directions and using the one that yields the maximum dispersion of points. We developed the third method, in which we modified the method of (Yan et al., 2021) \cite{Yan2021Nearest} by introducing two further steps. In these steps, we rotate the direction with maximum dispersion to check whether a larger dispersion of points can be obtained. The last method is the optimal case, which is guaranteed to produce the maximum dispersion of points: applying principal component analysis (PCA) at each node of the rpTree \cite{sproull1991refinements,Pearson1901On,abdi2010principal}.
\subsection{Building rpTrees}
\label{BuildingrpTrees}
Building an rpTree involves four main steps: 1) projecting the $n$ data points in the current node $W$ onto a random direction $\overrightarrow{r}$, 2) picking a split point $c$ randomly between the $1^{\text{st}}$ and $3^{\text{rd}}$ quartiles, 3) placing the data points whose projections are less than $c$ into the left child $W_L$ and all other points into the right child $W_R$, and 4) recursively performing steps 1 to 3 until the leaf nodes contain fewer points than a predetermined integer. Algorithm \ref{Alg:Alg-01} shows the steps we used to construct the rpTrees.
\begin{algorithm}
\caption{The recursive function used to construct $rpTrees$}\label{Alg:Alg-01}
\begin{algorithmic}[1]
\State Let $X$ be the set of points to be split in a tree $t$, and let $n_0$ be the maximum number of points allowed in a leaf
\Function{partition}{$X$}
\If{$ \lvert X \rvert < n_0$}\label{algln2}
\State \textbf{return} $makeNode(X)$
\Else
\State \textcolor{blue}{// \textit{the used method (\texttt{Method 1} to \texttt{Method 4}) selects $\overrightarrow{r}$}}
\State Generate a random direction $\overrightarrow{r}$
\State Project the points in $X$ onto $\overrightarrow{r}$ and let $V$ be the resulting projections
\State Pick $c \in V$ between the $1^{\text{st}}$ and $3^{\text{rd}}$ quartiles
\State $X_L=\{x \in X : V(x) < c\}$
\State $X_R=\{x \in X : V(x) \ge c\}$
\State $W_L \gets partition(X_L)$
\State $W_R \gets partition(X_R)$
\State \textbf{return} $makeNode(\overrightarrow{r}, c, W_L, W_R)$
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
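For concreteness, the following is a minimal Python sketch of the construction in Algorithm~\ref{Alg:Alg-01} using \texttt{Method 1} (a single, uniformly random direction). The node representation, the leaf-size threshold \texttt{max\_leaf}, and the guard against degenerate splits are our own illustrative choices rather than part of the original implementations.

\begin{verbatim}
import numpy as np

def build_rptree(X, max_leaf=10, rng=None):
    """Recursively build an rpTree over the rows of X (Method 1)."""
    rng = np.random.default_rng(0) if rng is None else rng
    if len(X) < max_leaf:
        return {"leaf": True, "points": X}
    r = rng.normal(size=X.shape[1])
    r /= np.linalg.norm(r)                  # random unit direction
    v = X @ r                               # projections onto r
    lo, hi = np.quantile(v, [0.25, 0.75])
    c = rng.uniform(lo, hi)                 # split point between the quartiles
    left, right = X[v < c], X[v >= c]
    if len(left) == 0 or len(right) == 0:   # degenerate split: stop early
        return {"leaf": True, "points": X}
    return {"leaf": False, "r": r, "c": c,
            "left": build_rptree(left, max_leaf, rng),
            "right": build_rptree(right, max_leaf, rng)}
\end{verbatim}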
\begin{figure}
\centering
\includegraphics[width=0.235\textwidth]{Method-01-01.png}
\includegraphics[width=0.235\textwidth]{Method-01-02.png}
\caption{A split made by \texttt{Method 1}; (left) picking a random direction; (right) splitting the points along that direction; the black point along the projection line is the split point. (Best viewed in color)}
\label{Fig:Fig-02}
\end{figure}
The original rpTree algorithm (which will be referred to as \texttt{Method 1} for the rest of this paper) suggests picking a random direction regardless of the dispersion of points. It is the fastest method in our experiments because it performs the random projection only once and moves on to the second step of splitting the data points. Figure\ \ref{Fig:Fig-02} illustrates how this method performs the projection and splitting of the points in a node $W$; it shows that \texttt{Method 1} projects completely at random, and the resulting splits may not be perfect.
\begin{figure}
\centering
\includegraphics[width=0.235\textwidth]{Method-02-01.png}
\includegraphics[width=0.235\textwidth]{Method-02-02.png}
\caption{\texttt{Method 2} chooses the projection dimension with the maximum dispersion; (left) measuring the dispersion of points on three random directions; (right) splitting the points along the direction of maximum dispersion; the black point along the projection line is the split point. (Best viewed in color)}
\label{Fig:Fig-03}
\end{figure}
\texttt{Method 2} in our experiments is the method developed by (Yan et al., 2021) \cite{Yan2021Nearest}. They suggested picking a number of random directions (they used the symbol $nTry$) and then using the one that yields the maximum dispersion of points. Naturally, it requires more time than \texttt{Method 1} because it performs the random projection $nTry$ times instead of once. An example of a node $W$ split by \texttt{Method 2} is shown in Figure\ \ref{Fig:Fig-03}.
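The direction-selection step of \texttt{Method 2} can be sketched as follows; measuring dispersion by the standard deviation of the projected points is our own illustrative choice of dispersion measure.

\begin{verbatim}
import numpy as np

def max_dispersion_direction(X, n_try=3, rng=np.random.default_rng(0)):
    """Among n_try random directions, return the one maximizing dispersion."""
    best_r, best_disp = None, -np.inf
    for _ in range(n_try):
        r = rng.normal(size=X.shape[1])
        r /= np.linalg.norm(r)
        disp = np.std(X @ r)          # dispersion of the projected points
        if disp > best_disp:
            best_r, best_disp = r, disp
    return best_r, best_disp
\end{verbatim}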
\begin{figure*}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=.98\linewidth]{Method-03-01.png}
\caption{}
\label{Fig:Fig-04-a}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=.98\linewidth]{Method-03-02.png}
\caption{}
\label{Fig:Fig-04-b}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=.98\linewidth]{Method-03-03.png}
\caption{}
\label{Fig:Fig-04-c}
\end{subfigure}
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=.98\linewidth]{Method-03-04.png}
\caption{}
\label{Fig:Fig-04-d}
\end{subfigure}
\caption{\texttt{Method 3} attempts to maximize the dispersion of points using three attempts; then it splits the points along the direction of maximum dispersion; the black point along the projection line is the split point. (Best viewed in color)}
\label{Fig:Fig-04}
\end{figure*}
We developed \texttt{Method 3} by modifying \texttt{Method 2}. Instead of just picking $nTry$ random directions, we extended the method by tuning the direction with the maximum dispersion. The tuning is done by adding Gaussian noise (with a small $\sigma$) to the projection matrix in order to increase the dispersion. Unlike repeating the random projection, adding noise requires fewer computations. Figure\ \ref{Fig:Fig-04} shows an example of this method. Initially, we picked three directions at random, as in Figure\ \ref{Fig:Fig-04-a}; the yellow line has the maximum dispersion of points with $dispersion=0.3592$. The first step of direction tuning is adding noise with $\sigma=0.1$ to the projection matrix. This yields the yellow line with a better dispersion of $0.3915$, shown in Figure\ \ref{Fig:Fig-04-b}. The second step of tuning is adding noise with $\sigma=0.01$ to the projection matrix. This gives the winning direction, shown as a yellow line with $dispersion=0.3960$ in Figure\ \ref{Fig:Fig-04-c}. The split along this winning direction is shown in Figure\ \ref{Fig:Fig-04-d}.
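The tuning steps of \texttt{Method 3} can be sketched as follows, starting from the best of the $nTry$ random directions (e.g., the output of \texttt{max\_dispersion\_direction} above) and perturbing it with Gaussian noise of scales $\sigma = 0.1$ and $\sigma = 0.01$; keeping a perturbation only when it improves the dispersion is our own illustrative reading of the tuning rule.

\begin{verbatim}
import numpy as np

def tune_direction(X, r, sigmas=(0.1, 0.01), rng=np.random.default_rng(1)):
    """Perturb direction r with Gaussian noise, keeping improvements in dispersion."""
    best_r, best_disp = r, np.std(X @ r)
    for sigma in sigmas:
        cand = best_r + rng.normal(scale=sigma, size=r.shape)
        cand /= np.linalg.norm(cand)
        disp = np.std(X @ cand)
        if disp > best_disp:
            best_r, best_disp = cand, disp
    return best_r, best_disp
\end{verbatim}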
\begin{figure}
\centering
\includegraphics[width=0.235\textwidth]{Method-04-01.png}
\includegraphics[width=0.235\textwidth]{Method-04-02.png}
\caption{\texttt{Method 4} uses PCA to find the projection dimension with the maximum dispersion; (left) the first principal component of the data points in 2D; (right) splitting the points along the first principal component; the black point along the projection line is the split point. (Best viewed in color)}
\label{Fig:Fig-05}
\end{figure}
The optimal case when looking for the maximum dispersion of points is to run principal component analysis (PCA) \cite{sproull1991refinements,Pearson1901On,abdi2010principal}. PCA is guaranteed to return the direction along which the data points have the maximum spread. The computational complexity of PCA is $O(d^2n+d^3)$. The method that uses PCA to project the data points is referred to as \texttt{Method 4} in our experiments. An example of \texttt{Method 4} is shown in Figure\ \ref{Fig:Fig-05}.
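A sketch of the direction used by \texttt{Method 4}: the first principal component of the points in the current node, computed here via the SVD of the centered data.

\begin{verbatim}
import numpy as np

def pca_direction(X):
    """First principal component of the rows of X (direction of maximum spread)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[0]
\end{verbatim}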
\subsection{Searching for nearest neighbors in rpTrees}
\label{SearchingrpTrees}
Once all rpTrees are constructed, we are ready to perform $k$-nearest neighbor search in random projection forests (rpForests). Each rpTree node $W$ must store the projection hyperplane $\overrightarrow{r}$ used to project the data points in that node. It also has to store the split point $c$ used to split the points into the left child $W_L$ and the right child $W_R$. When searching for the $k$-nearest neighbors of a test sample $x$, we traverse $x$ from the root node all the way down to a leaf node. There are two steps involved when traversing $x$ down an rpTree: 1) use the stored projection hyperplane $\overrightarrow{r}$ to project $x$ onto it, and 2) use the stored split point $c$ to place $x$ in the left or right child. An example of walking a test sample down an rpTree is shown in Figure\ \ref{Fig:Fig-06}.
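Assuming trees built with the \texttt{build\_rptree} sketch from Section~\ref{BuildingrpTrees}, walking a query down a single rpTree can be sketched as follows; each internal node stores its direction $\overrightarrow{r}$ and split point $c$, and the leaf that the query lands in supplies that tree's candidate neighbors.

\begin{verbatim}
def query_leaf(node, x):
    """Traverse query x down an rpTree and return the points in its leaf."""
    while not node["leaf"]:
        side = "left" if x @ node["r"] < node["c"] else "right"
        node = node[side]
    return node["points"]
\end{verbatim}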
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth,height=20cm,keepaspectratio]{Fig-06.png}
\caption{$k$-nn search in rpTree; a test sample depicted by $\times$ is traversed down the tree and traced left if it sits in a blue cluster and right if otherwise. (Best viewed in color)}
\label{Fig:Fig-06}
\end{figure}
An rpForest is a collection of rpTrees, each of which is constructed by one of the methods explained in the previous section (\texttt{Method 1} to \texttt{Method 4}). The $k$-nearest neighbors returned from all rpTrees in the rpForest are aggregated into one set. After removing the duplicate points, the nearest neighbors are returned using the Euclidean distance computed by \texttt{scipy.spatial.distance.cdist} \cite{2020SciPy-NMeth}. In case of a tie (i.e., two neighbors having the same distance from the test sample $x$), the rank of the neighbors is decided by \texttt{numpy.argsort} \cite{harris2020array}.
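A sketch of the aggregation step, reusing the \texttt{build\_rptree} and \texttt{query\_leaf} helpers from the sketches above; as described, duplicates are removed, \texttt{scipy.spatial.distance.cdist} supplies the Euclidean distances, and \texttt{numpy.argsort} settles ties by its stable ordering.

\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def knn_in_forest(forest, x, k):
    """Aggregate leaf candidates from every tree and rank them by distance to x."""
    candidates = np.vstack([query_leaf(tree, x) for tree in forest])
    candidates = np.unique(candidates, axis=0)       # drop duplicate points
    d = cdist(x[None, :], candidates)[0]
    return candidates[np.argsort(d)[:k]]

# example: a forest of T = 20 trees built with Method 1
# forest = [build_rptree(X, rng=np.random.default_rng(t)) for t in range(20)]
# neighbors = knn_in_forest(forest, X[0], k=5)
\end{verbatim}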
To evaluate the results returned by $k$-nearest neighbor search in rpForests, we used two metrics proposed by (Yan et al., 2018) \cite{Yan2018Nearest} and (Yan et al., 2021) \cite{Yan2021Nearest}. The first metric is the fraction of the $k$ true neighbors missed by the $k$-nn search in rpForests, where the true neighbors are retrieved using a brute-force method. Let $x_1, x_2, \cdots, x_n$ be the data points in a dataset. For a point $x_i$, the number of true neighbors missed by the $k$-nn search is denoted by $m_i$. The average missing rate $\overline{m}$ is defined as:
\begin{equation}
\overline{m}=\frac{1}{nk} \sum_{i=1}^n m_i\ .
\label{Eq-002}
\end{equation}
The second metric measures the discrepancy between the distance to the true $k^{\text{th}}$ neighbor $d_k(i)$ and the distance to the $k^{\text{th}}$ neighbor found by the algorithm $\hat{d}_k(i)$. The average distance error $\overline{d}_k$ is defined as:
\begin{equation}
\overline{d}_k=\frac{1}{n} \sum_{i=1}^n \hat{d}_k(i) - d_k(i)\ .
\label{Eq-003}
\end{equation} |
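Assuming a callback \texttt{approx\_knn(x, k)} that returns the indices of the $k$ neighbors found by the rpForest, and taking the brute-force distances as ground truth, the two metrics in Eq.~\ref{Eq-002} and Eq.~\ref{Eq-003} can be computed as in the following sketch.

\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def evaluate_knn(X, approx_knn, k):
    """Average missing rate (m-bar) and average distance error (d-bar_k)."""
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)               # exclude each point itself
    true_idx = np.argsort(D, axis=1)[:, :k]   # brute-force true neighbors
    miss, dist_err = 0.0, 0.0
    for i in range(len(X)):
        found = approx_knn(X[i], k)           # indices returned by the rpForest
        miss += len(set(true_idx[i]) - set(found))
        d_true = D[i, true_idx[i, k - 1]]     # distance to the true k-th neighbor
        d_hat = np.sort(D[i, found])[-1]      # distance to the found k-th neighbor
        dist_err += d_hat - d_true
    return miss / (len(X) * k), dist_err / len(X)
\end{verbatim}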
{
"arxiv_id": "2302.13214",
"language": "en",
"timestamp": "2023-02-28T02:13:10",
"url": "https://arxiv.org/abs/2302.13214",
"yymm": "2302"
} |
\section{Attention Algorithm}\label{sec:low_rank}
In this section, we show how to use polynomial approximations for the exponential function in order to approximately perform attention computations.
In Section~\ref{sec:low_rank:definition}, we define the type of low-rank matrix approximation which we will use. In Section~\ref{sec:low_rank:from_polynomial_to_matrix}, we show how polynomial approximations can give rise to such low-rank matrix approximations. In Section~\ref{sec:low_rank:bounded}, we bound the entries of the matrix $Q K^\top \in \mathbb{R}^{n \times n}$ (before converting it to the attention matrix) to confirm that our polynomial approximation applies. In Section~\ref{sec:low_rank:key_lemma}, we state our main technique for approximating the attention matrix. In Section~\ref{sec:low_rank:from_A_to_D}, we show how to control the error propagation from $A$ to the rescaling matrix $D$. In Section~\ref{sec:low_rank:from_D_A_to_attention}, we further explain how to control the error propagation from $D$ and $A$ to the resulting attention matrix. Finally, in Section~\ref{sec:low_rank:complex}, we conclude our general algorithm, and in Section~\ref{sec:low_rank:proof_of_informal}, we appropriately select the parameters to achieve almost linear time.
\subsection{Matrix Low-Rank Approximation}\label{sec:low_rank:definition}
\begin{definition}\label{def:epsilon_g_approx}
Let $r \geq 1$ denote a positive integer. Let $\epsilon \in (0,0.1)$ denote an accuracy parameter.
Given a matrix $A \in \mathbb{R}^{n \times n}_{\geq 0}$, we say $\widetilde{A} \in \mathbb{R}^{n \times n}_{\geq 0}$ is an $(\epsilon,r)$-approximation of $A$ if
\begin{itemize}
\item $\widetilde{A} = U_1 \cdot U_2^\top$ for some matrices $U_1, U_2 \in \mathbb{R}^{n \times r}$ (i.e., $\widetilde{A}$ has rank at most $r$), and
\item $| \widetilde{A}_{i,j} - A_{i,j} | \leq \epsilon \cdot A_{i,j}$ for all $(i,j) \in [n]^2$.
\end{itemize}
\end{definition}
\subsection{From Low Degree Polynomials to Low Rank Matrices}\label{sec:low_rank:from_polynomial_to_matrix}
\begin{lemma}\label{lem:rank_is_small}
Let $M = X Y^\top \in \mathbb{R}^{n \times n}$ denote a matrix with
$X, Y \in \mathbb{R}^{n \times d}$. Let $P(x)$ denote a degree-$g$ polynomial, and define $r = { 2(g+d) \choose 2 g }$.
There is an algorithm that runs in $O(n r g)$ time and, given as input the matrices $X$ and $Y$, constructs matrices $U_1, U_2 \in \mathbb{R}^{n \times r}$ such that $P(M) = U_1 U_2^\top$. (Here, $P(M)$ denotes the entry-wise application of $P$ to $M$.)
\end{lemma}
\begin{proof}
Let $P(x) $ denote the degree-$g$ polynomial. Expand it in terms of its coefficients as
\begin{align*}
P(x) = \sum_{i=0}^g c_i \cdot x^i.
\end{align*}
Consider the function $\mathsf{K} : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ defined by, for $u,v \in \mathbb{R}^d$,
\begin{align*}
\mathsf{K}(u,v) := P( \langle u , v\rangle ).
\end{align*}
$\mathsf{K}$ is a degree-$2g$ polynomial in the $2d$ entries $u_1, \cdots u_d, v_1, \cdots, v_d$ of the vectors $u,v$.
Define the set $V$ of its variables,
\begin{align*}
V: = \{ u_1, \cdots, u_d, v_1, \cdots, v_d \}.
\end{align*}
Let ${\cal F}$ denote the set of functions
\begin{align*}
{\cal F} := \left\{ f : V \rightarrow \{0,1,2,\cdots, 2g\} ~|~ \sum_{v \in V} f(v) \leq 2 g \right\}.
\end{align*}
We can count that
\begin{align*}
|{\cal F}| = { 2d + 2 g \choose 2 g }.
\end{align*}
Hence, there are coefficients $c_t \in \mathbb{R}$ for each $t \in {\cal F}$ such that
\begin{align*}
\mathsf{K}(u,v) = \sum_{t \in {\cal F}} c_t \cdot \prod_{v \in V} v^{t(v)}.
\end{align*}
Define
\begin{align*}
V_u := \{ u_1, \cdots, u_d \}
\end{align*}
and
\begin{align*}
V_v = V \backslash V_u.
\end{align*}
We define $\phi_u : \mathbb{R}^d \rightarrow \mathbb{R}^{|{\cal F}|}$ by, for any $t \in {\cal F}$,
\begin{align*}
\phi_u(u_1, \cdots, u_d)_t = c_t \cdot \prod_{u_i \in V_u} u_i^{t(u_i)}.
\end{align*}
Similarly, we define $\phi_v: \mathbb{R}^d \rightarrow \mathbb{R}^{|{\cal F}|}$ by, for any $t \in {\cal F}$,
\begin{align*}
\phi_v(v_1, \cdots, v_d)_t = \prod_{v_i \in V_v} v_i^{t(v_i)}.
\end{align*}
Thus, we have
\begin{align*}
\mathsf{K}(u,v) = \langle \phi_u(u) , \phi_v(v) \rangle.
\end{align*}
For $i \in [n]$, let $X_i \in \mathbb{R}^d$ denote the $i$-th row of $X$, and let $Y_i \in \mathbb{R}^d$ denote the $i$-th row of $Y$.
Our algorithm can thus construct
\begin{itemize}
\item the matrix $U_1 \in \mathbb{R}^{n \times |\cal F|}$ whose $i$-th row is the vector $\phi_u(X_i)$ for $i \in [n]$, and
\item the matrix $U_2 \in \mathbb{R}^{n \times |{\cal F}|}$ whose $i$-th row is the vector $\phi_v(Y_i)$ for $i \in [n]$.
\end{itemize}
Each entry of these matrices can be constructed by multiplying together at most $2g$ variables, so these $n \times r$ matrices can be constructed in time $O(nrg)$ as desired.
\end{proof}
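To make Lemma~\ref{lem:rank_is_small} concrete, the following Python/NumPy sketch builds a valid pair of factors and checks $P(XY^\top) = U_1 U_2^\top$ on random data. For simplicity it enumerates exponent patterns over the $d$ coordinates directly via the multinomial expansion (giving rank $\binom{d+g}{g} \leq r$), rather than over all $2d$ variables as in the proof; the helper name and the test polynomial are illustrative.

\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement
from math import factorial

def monomial_factors(X, Y, coeffs):
    """U1, U2 with U1 @ U2.T == P(X @ Y.T) entry-wise, P(x) = sum_i coeffs[i] x**i."""
    d = X.shape[1]
    g = len(coeffs) - 1
    patterns = [np.bincount(np.array(c, dtype=int), minlength=d)
                for deg in range(g + 1)
                for c in combinations_with_replacement(range(d), deg)]
    U1 = np.empty((X.shape[0], len(patterns)))
    U2 = np.empty((Y.shape[0], len(patterns)))
    for t, alpha in enumerate(patterns):
        deg = int(alpha.sum())
        multinom = factorial(deg) / np.prod([factorial(int(a)) for a in alpha])
        U1[:, t] = coeffs[deg] * multinom * np.prod(X ** alpha, axis=1)
        U2[:, t] = np.prod(Y ** alpha, axis=1)
    return U1, U2

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
coeffs = [1.0, 1.0, 0.5]                         # P(x) = 1 + x + 0.5 x^2
U1, U2 = monomial_factors(X, Y, coeffs)
P_exact = sum(c * (X @ Y.T) ** i for i, c in enumerate(coeffs))
assert np.allclose(U1 @ U2.T, P_exact)
\end{verbatim}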
\subsection{Matrix \texorpdfstring{$Q K^\top$}{} Has Bounded Entries}\label{sec:low_rank:bounded}
\begin{lemma}[Bounded entry]\label{lem:M_bounded_entry}
Suppose $B \geq 1$ and matrices $Q, K \in \mathbb{R}^{n \times d}$ have $\|Q \|_{\infty} \leq B$ and $\| K \|_{\infty} \leq B$. Then, we have
\begin{align*}
\| Q K^\top / d \|_{\infty} \leq B^2.
\end{align*}
\end{lemma}
\begin{proof}
For each $(i,j) \in [n] \times [n]$, we have
\begin{align*}
| (Q K^\top)_{i,j} |
= & ~ | \sum_{l=1}^d Q_{i,l} K_{j,l} | \\
\leq & ~ d \cdot \| Q \|_{\infty} \cdot \| K \|_{\infty} \\
\leq & ~ d \cdot B^2,
\end{align*}
Dividing by $d$ gives $\| Q K^\top / d \|_{\infty} \leq B^2$, as desired.
\end{proof}
\subsection{Key Lemma}\label{sec:low_rank:key_lemma}
Our key lemma shows that, even though the attention matrix $A$ may have full rank, it has a low-rank approximation that is easy to compute:
\begin{lemma}\label{lem:wt_A_small_rank}
Suppose $Q, K \in \mathbb{R}^{n \times d}$, with $\| Q \|_{\infty} \leq B$, and $\| K \|_{\infty} \leq B$. Let $A:=\exp(QK^\top /d) \in \mathbb{R}^{n \times n}$. For accuracy parameter $\epsilon \in (0,1)$, there is a positive integer $g$ bounded above by
\begin{align*}
g = O \Big( \max \Big\{ \frac{\log(1/\epsilon)}{ \log(\log(1/\epsilon)/B) }, B^2 \Big\} \Big),
\end{align*}
and a positive integer $r$ bounded above by
\begin{align*}
r \leq { 2(g+ d) \choose 2g }
\end{align*}
such that:
There is a matrix $\widetilde{A} \in \mathbb{R}^{n \times n}$ that is an $(\epsilon,r)$-approximation (Definition~\ref{def:epsilon_g_approx}) of $A \in \mathbb{R}^{n \times n}$.
Furthermore, the matrices $U_1$ and $U_2$ defining $\widetilde{A}$ can be computed in $O(n \cdot r \cdot g)$ time.
\end{lemma}
\begin{proof}
Let $M:= Q K^\top / d $.
From Lemma~\ref{lem:M_bounded_entry}, we know that $\| M \|_{\infty} \leq B^2$.
Thus, applying Corollary~\ref{cor:aa22_from_-B_to_B} (with bound $B^2$ on its entries), there is a degree-$g$ polynomial $P$ such that the matrix $\widetilde{A} = P(M)$ is an $(\epsilon,r)$-approximation to $A$ (See the definition of $(\epsilon,r)$-approximation in Definition~\ref{def:epsilon_g_approx}.)
We can then compute $U_1, U_2$ using Lemma~\ref{lem:rank_is_small}, which gives the bound
\begin{align*}
r \leq { 2(g+d) \choose 2g } .
\end{align*}
This completes the proof.
\end{proof}
\subsection{From \texorpdfstring{$A$}{} to \texorpdfstring{$D$}{}}\label{sec:low_rank:from_A_to_D}
\begin{lemma}\label{lem:error_from_A_to_D}
Let $A \in \mathbb{R}^{n \times n}$ be any matrix whose entries are all positive and let $\epsilon_A \in (0,0.1)$ be any parameter.
Let $\widetilde{A} \in \mathbb{R}^{n \times n}$ be any matrix such that, for all $(i,j) \in [n] \times [n]$, we have
\begin{align*}
| \widetilde{A}_{i,j} -A_{i,j} | \leq \epsilon_A \cdot A_{i,j}.
\end{align*}
Define the matrices $D, \widetilde{D} \in \mathbb{R}^{n \times n}$ by $D = \diag(A {\bf 1}_n)$ and $\widetilde{D} =\diag(\widetilde{A} {\bf 1}_n)$.
Then, for all $i \in [n]$ we have
\begin{align*}
| \widetilde{D}_{i,i} - D_{i,i} | \leq \epsilon_A \cdot D_{i,i}.
\end{align*}
\end{lemma}
\begin{proof}
We calculate that
\begin{align*}
| \widetilde{D}_{i,i} - D_{i,i} |
= & ~ |\sum_{j=1}^n \widetilde{A}_{i,j} - \sum_{j=1}^n A_{i,j} | \\
\leq & ~ \sum_{j=1}^n | \widetilde{A}_{i,j} - A_{i,j} | \\
\leq & ~ \sum_{j=1}^n \epsilon_A A_{i,j} \\
= & ~ \epsilon_A \cdot D_{i,i}.
\end{align*}
This completes the proof.
\end{proof}
\subsection{From \texorpdfstring{$A$}{} and \texorpdfstring{$D$}{} to Attention Matrix}\label{sec:low_rank:from_D_A_to_attention}
\begin{lemma}\label{lem:error_from_D_A_to_attention}
Let $\epsilon_A, \epsilon_D \in (0,0.1)$ and $B>1$ be parameters, and let $V \in \mathbb{R}^{n \times d}$ denote a matrix with $\| V \|_{\infty} \leq B$. Let $A \in \mathbb{R}^{n \times n}$ be any matrix whose entries are all positive, and let $\widetilde{A} \in \mathbb{R}^{n \times n}$ be a matrix such that, for all $(i,j) \in [n] \times [n]$ we have
\begin{align*}
| \widetilde{A}_{i,j} -A_{i,j} | \leq \epsilon_A \cdot A_{i,j}.
\end{align*}
Let $D, \widetilde{D} \in \mathbb{R}^{n \times n}$ be any diagonal matrices with positive entries on their diagonals, with the property that, for all $i \in [n]$, we have
\begin{align*}
| \widetilde{D}_{i,i} - D_{i,i} | \leq \epsilon_D \cdot D_{i,i}.
\end{align*}
Then, we have
\begin{align*}
\| \widetilde{D}^{-1} \widetilde{A} V - D^{-1} A V \|_{\infty} \leq ( \epsilon_A + \epsilon_D )\cdot B.
\end{align*}
\end{lemma}
\begin{proof}
We have
\begin{align}\label{eq:split_error_into_two_parts}
\| \widetilde{D}^{-1} \widetilde{A} V - D^{-1} A V \|_{\infty}
\leq \| \widetilde{D}^{-1} \widetilde{A} V - D^{-1} \widetilde{A} V \|_{\infty} + \| D^{-1} \widetilde{A} V - D^{-1} A V \|_{\infty}.
\end{align}
We now bound each of these two terms separately.
First, for each $(i,j) \in [n] \times [d]$,
\begin{align}\label{eq:bound_error_part1}
| (\widetilde{D}^{-1} \widetilde{A} V - D^{-1} \widetilde{A} V)_{i,j} |
= & ~ | \sum_{l=1}^n (\widetilde{D}^{-1}_{i,i} - D^{-1}_{i,i} ) \cdot \widetilde{A}_{i,l} \cdot V_{l,j} | \notag \\
\leq & ~ \sum_{l=1}^n | (\widetilde{D}^{-1}_{i,i} - D^{-1}_{i,i}) \cdot \widetilde{A}_{i,l} | \cdot \| V \|_{\infty} \notag \\
= & ~ \sum_{l=1}^n | \frac{D_{i,i} - \widetilde{D}_{i,i} }{D_{i,i} \widetilde{D}_{i,i} } \widetilde{A}_{i,l} | \cdot \| V \|_{\infty} \notag \\
\leq & ~ \epsilon_D \cdot \sum_{l=1}^n | \widetilde{D}_{i,i}^{-1} \widetilde{A}_{i,l} | \cdot \| V \|_{\infty} \notag \\
= & ~ \epsilon_D \cdot | \sum_{l=1}^n \widetilde{D}_{i,i}^{-1} \widetilde{A}_{i,l} | \cdot \| V \|_{\infty} \notag \\
= & ~ \epsilon_D \cdot \| V \|_{\infty} \notag \\
\leq & ~ \epsilon_D \cdot B
\end{align}
where the second step follows from the triangle inequality, the fourth step follows from $|(D_{i,i}-\widetilde{D}_{i,i}) / D_{i,i}| \leq \epsilon_D$, the fifth step follows from $\widetilde{D}_{i,i}^{-1} > 0$ and $\widetilde{A}_{i,l} > 0$, and the last step follows from our assumption on $V$.
Second, for each $(i,j) \in [n] \times [d]$,
\begin{align}\label{eq:bound_error_part2}
| (D^{-1} \widetilde{A} V - D^{-1} A V)_{i,j} |
= & ~ | \sum_{l=1}^n D_{i,i}^{-1} ( \widetilde{A}_{i,l} - A_{i,l}) \cdot V_{l,j} | \notag \\
\leq & ~ \sum_{l=1}^n | D_{i,i}^{-1} | \cdot | ( \widetilde{A}_{i,l} - A_{i,l}) | \cdot \| V \|_{\infty} \notag \\
= & ~ \sum_{l=1}^n D_{i,i}^{-1} \cdot | ( \widetilde{A}_{i,l} - A_{i,l}) | \cdot \| V \|_{\infty} \notag \\
\leq & ~ \sum_{l=1}^n D_{i,i}^{-1} \cdot \epsilon_A A_{i,l} \cdot B \notag \\
= & ~ \epsilon_A \cdot B,
\end{align}
where the second step follows from the triangle inequality, the third step follows from $D_{i,i}^{-1} > 0$, the fourth step follows from $|\widetilde{A}_{i,l} - A_{i,l}| \leq \epsilon_A \cdot A_{i,l}$ and $\| V \|_{\infty} \leq B$, and the last step follows from the definition of $D_{i,i}$.
The result follows by combining Eq.~\eqref{eq:split_error_into_two_parts}, and two inequalities (Eq.~\eqref{eq:bound_error_part1} and Eq.~\eqref{eq:bound_error_part2}).
\end{proof}
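The two error-propagation lemmas can be checked numerically. The sketch below perturbs a positive matrix $A$ entry-wise by a relative $\epsilon_A$, forms $D$ and $\widetilde{D}$ as the corresponding row sums, and verifies both the $\epsilon_A$ bound of Lemma~\ref{lem:error_from_A_to_D} and the $(\epsilon_A+\epsilon_D)B$ bound of Lemma~\ref{lem:error_from_D_A_to_attention}; the sizes and tolerances are illustrative.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, eps_A, B = 50, 8, 1e-3, 2.0
A = rng.uniform(0.1, 1.0, size=(n, n))                      # positive matrix
A_t = A * (1 + eps_A * rng.uniform(-1, 1, size=(n, n)))     # |A~ - A| <= eps_A * A
V = rng.uniform(-B, B, size=(n, d))

D = A.sum(axis=1)                      # diagonal of D = diag(A 1_n)
D_t = A_t.sum(axis=1)                  # diagonal of D~ = diag(A~ 1_n)
eps_D = np.max(np.abs(D_t - D) / D)
assert eps_D <= eps_A + 1e-12          # error propagation from A to D

err = np.max(np.abs(A_t / D_t[:, None] @ V - A / D[:, None] @ V))
assert err <= (eps_A + eps_D) * B      # error propagation from D and A to attention
print(eps_D, err)
\end{verbatim}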
\subsection{Main Upper Bound}\label{sec:low_rank:complex}
\begin{theorem}\label{thm:complex_main_upper_bound}
For positive integers $n,d$ and real parameters $\epsilon>0$ and $B>1$, there are positive integers $g = \Theta( \max\{ \frac{\log(1/\epsilon)}{\log (\log(1/\epsilon) /B) }, B^2\})$
and $r = { 2(g+d) \choose 2d}$ such that:
There is an algorithm (Algorithm~\ref{alg:main}) that runs in $O({\cal T}_{\mathrm{mat}}(n,r,d) + nrg)$ time to solve $\mathsf{AAttC}(n,d,B,\epsilon)$ (Definition~\ref{def:AAttC}).
\end{theorem}
\begin{proof}
The running time of each step is shown in Algorithm~\ref{alg:main}; its running time follows from Lemma~\ref{lem:wt_A_small_rank}.
Its correctness follows from Lemma~\ref{lem:error_from_A_to_D} and Lemma~\ref{lem:error_from_D_A_to_attention}.
\end{proof}
\begin{algorithm}[!ht]\caption{Our Algorithm}\label{alg:main}
\begin{algorithmic}[1]
\Procedure{PolyAttention}{$Q \in \mathbb{R}^{n \times d},K \in \mathbb{R}^{n \times d},V \in \mathbb{R}^{n \times d}, n, d, B, \epsilon$} \Comment{Theorem~\ref{thm:informal_main_upper_bound}}
\State \Comment{$\epsilon$ is the output accuracy parameter}
\State \Comment{$\|Q\|_{\infty}, \| K \|_{\infty}, \| V \|_{\infty} \leq B$}
\State $g \gets O( \max\{ \frac{\log(1/\epsilon)}{\log(\log(1/\epsilon) / B)} , B^2\} )$
\State $r \gets {2(g+d) \choose 2d}$
\State Construct $U_1, U_2 \in \mathbb{R}^{n \times r}$ via Lemma~\ref{lem:wt_A_small_rank} \Comment{$O(nrg)$ time}
\State $\widetilde{w} \gets U_1 \cdot (U_2^\top {\bf 1}_n)$ \Comment{$O(n r)$ time}
\State $\widetilde{D}^{-1} = \diag( \widetilde{w}^{-1} )$ \Comment{$O(n)$ time}
\State Compute $U_2^\top V \in \mathbb{R}^{r \times d}$ \Comment{Takes ${\cal T}_{\mathrm{mat}}(r,n,d)$ time}
\State Compute $U_1 \cdot (U_2^\top V)$ \Comment{ ${\cal T}_{\mathrm{mat}}(n,r,d)$ time}
\State $T \gets \widetilde{D}^{-1} \cdot (U_1 \cdot (U_2^\top V)) $ \Comment{$O(nd)$ time}
\State \Return $T$ \Comment{$T \in \mathbb{R}^{n \times d}$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
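As a sanity check on the structure of Algorithm~\ref{alg:main}, the sketch below implements the same pipeline in Python/NumPy, but with the plain degree-$g$ Taylor polynomial of $\exp$ standing in for the polynomial from Corollary~\ref{cor:aa22_from_-B_to_B}; the degree and the test sizes are illustrative and are not the parameter choices of Theorem~\ref{thm:complex_main_upper_bound}.

\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement
from math import factorial

def taylor_exp_factors(Q, K, g):
    """U1, U2 with U1 @ U2.T ~= exp(Q @ K.T / d), via the degree-g Taylor polynomial."""
    n, d = Q.shape
    Qs = Q / d                                   # absorb the 1/d rescaling into one factor
    pats = [np.bincount(np.array(c, dtype=int), minlength=d)
            for deg in range(g + 1)
            for c in combinations_with_replacement(range(d), deg)]
    # exp's Taylor coefficient 1/deg! times the multinomial coefficient equals 1/prod(alpha!)
    U1 = np.stack([np.prod(Qs ** a, axis=1) / np.prod([factorial(int(x)) for x in a])
                   for a in pats], axis=1)
    U2 = np.stack([np.prod(K ** a, axis=1) for a in pats], axis=1)
    return U1, U2

def poly_attention(Q, K, V, g=8):
    """Mirrors the structure of PolyAttention: the n x n matrix A is never formed."""
    U1, U2 = taylor_exp_factors(Q, K, g)
    w = U1 @ (U2.T @ np.ones(K.shape[0]))        # approximate row sums (diagonal of D)
    return (U1 @ (U2.T @ V)) / w[:, None]        # D^{-1} (A V) via the rank-r factors

# sanity check against the exact computation on a small instance
rng = np.random.default_rng(0)
n, d = 64, 4
Q, K, V = (rng.uniform(-0.5, 0.5, size=(n, d)) for _ in range(3))
A = np.exp(Q @ K.T / d)
exact = (A / A.sum(axis=1, keepdims=True)) @ V
print(np.max(np.abs(poly_attention(Q, K, V) - exact)))   # small approximation error
\end{verbatim}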
\subsection{Proof of \texorpdfstring{Theorem~\ref{thm:informal_main_upper_bound}}{}}\label{sec:low_rank:proof_of_informal}
\begin{theorem}[Upper bound, formal statement of Theorem~\ref{thm:informal_main_upper_bound}]\label{thm:formal_main_upper_bound}
$\mathsf{AAttC}(n,d = O(\log n),B = o(\sqrt{\log n}),\epsilon_a = 1/\poly(n))$ can be solved in time ${\cal T}_{\mathrm{mat}}(n,n^{o(1)},d) = n^{1+o(1)}$.
\end{theorem}
\begin{proof}
If we select the parameters
\begin{align*}
B = & ~ o(\sqrt{\log n}) , ~~~ \epsilon = 1/\poly(n), ~~~ d = O(\log n)
\end{align*}
in Theorem~\ref{thm:complex_main_upper_bound}, then we see that
\begin{align*}
g= & ~ O( \max\{ \frac{ \log(1/\epsilon) }{ \log( \log(1/\epsilon) / B ) }, B^2 \} ) \\
= & ~ O( \max\{ \frac{ \log(n) }{ \log( \log(n) / B ) }, B^2 \} ) \\
= & ~ O( \max\{ \frac{\log n}{\log \log n} ,o(\log n) \} ) \\
= & ~ o(\log n),
\end{align*}
where the second step follows from $\epsilon= 1/\poly(n)$ and the third step follows from $B = o(\sqrt{\log n})$.
Since $g = o(\log n)$, let us write $g = (\log n) / f$ for some $f = \omega(1)$. We thus have that
\begin{align*}
r = { 2(d + g) \choose 2g} \leq \left( \frac{e(d+g)}{g} \right)^{2g} \leq 2^{O(g \log ((\log n)/g))} \leq 2^{O(\log n \log (f) / f)} < 2^{o(\log n)} < n^{o(1)}.
\end{align*}
The second step follows from the generic bound $\binom{a}{b} \leq (e a / b)^b$ for $1 \leq b \leq a$, and the third step uses that $d = O(\log n)$.
Since $d, r, g$ are all bounded by $n^{o(1)}$, our final running time is $n^{1 + o(1)}$ as desired.
\end{proof}
\section{Conclusion}\label{sec:conclusion}
\section{Error}\label{sec:error}
\section{Hardness}\label{sec:hard}
In this section, we prove our fine-grained lower bound for attention computation.
In Section~\ref{sec:hard:eth}, we state the Strong Exponential Time Hypothesis ($\mathsf{SETH}$), the main hardness assumption we will use. In Section~\ref{sec:hard:ann}, we define the approximate nearest neighbor search problem, and its known hardness assuming $\mathsf{SETH}$. Finally, in Section~\ref{sec:hard:our}, we give a reduction from approximate nearest neighbor search to attention computation, which implies our hardness result.
\subsection{Fine-Grained Hypotheses}\label{sec:hard:eth}
The Strong Exponential Time Hypothesis ({\sf SETH}) was introduced by Impagliazzo and Paturi \cite{ip01} over 20 years ago. It is a strengthening of the $\mathsf{P} \neq \mathsf{NP}$ conjecture, which asserts that our current best $\mathsf{SAT}$ algorithms are roughly optimal:
\begin{hypothesis}[Strong Exponential Time Hypothesis ({\sf SETH})]
For every $\epsilon > 0$ there is a positive integer $k \geq 3$ such that $k$-$\mathsf{SAT}$ on formulas with $n$ variables cannot be solved in $O(2^{(1-\epsilon )n})$ time, even by a randomized algorithm.
\end{hypothesis}
\noindent{\sf SETH} is a popular conjecture which has been used to prove fine-grained lower bounds for a wide variety of algorithmic problems. See, for instance, the survey~\cite{w18}.
\subsection{Nearest Neighbor Search}\label{sec:hard:ann}
We will make use of a known relationship between $\mathsf{SETH}$ and approximate nearest neighbor search.
\begin{definition}[Approximate Hamming Nearest Neighbor Search $(\mathsf{ANN})$]
For a parameter $\epsilon>0$, in the $(1+\epsilon)$-Approximate Hamming Nearest Neighbor Search problem for $n$ vectors of dimension $d$, we are given as input two sets $A, B \subset \{0,1\}^d$ with $|A|=|B|=n$, and our goal is to find an $a^* \in A$ and $b^* \in B$ satisfying $\| a^* - b^*\|_0 \leq (1 + \epsilon) \cdot \min_{a \in A, b \in B} \| a - b \|_0$.
\end{definition}
(This is sometimes called the `bichromatic' $\mathsf{ANN}$ problem, and a monochromatic version has also been studied; see, for instance, \cite{sm19}.) Rubinstein \cite{r18} showed that for certain parameters, it is impossible to substantially improve on the straightforward quadratic-time algorithm for $\mathsf{ANN}$ assuming $\mathsf{SETH}$:
\begin{lemma}[\cite{r18}]\label{lem:r18}
Assuming {\sf SETH}, for every $q >0$, there are $\epsilon \in (0,1)$ and $C > 0$ such that $(1+\epsilon)$-Approximate Hamming Nearest Neighbor Search in dimension $d = C \log n$ requires $\Omega(n^{2-q})$ time.
\end{lemma}
\begin{remark}\label{rem:complement_transformation}
We may assume that Lemma~\ref{lem:r18} holds even in the special case where each input vector from $A$ and $B$ has half its entries equal to $0$ and half equal to $1$. Indeed, for any vector $a \in \{0,1\}^d$, we can construct a new vector $\widetilde{a} \in \{0,1\}^{2d}$ given by $\widetilde{a} = \begin{bmatrix} a^\top & \overline{a}^\top \end{bmatrix}^\top$. Here $\overline{a} \in \{0,1\}^d$ is the binary complement of vector $a$, i.e., $\overline{a}_i = 1-a_i$ for all $i \in [d]$. Thus, $\| \widetilde{a} \|_0 =d$. We can similarly construct a new vector $\widetilde{b} \in \{0,1\}^{2d}$ for each $b \in B$. After this transformation, for any $a \in A$ and $b \in B$, we have $\| \widetilde{a} - \widetilde{b} \|_0 = 2 \cdot \| a - b \|_0$, so it suffices to find an approximate nearest neighbor among these transformed vectors.
\end{remark}
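A small Python sketch of the padding in Remark~\ref{rem:complement_transformation}, checking that every padded vector has exactly half of its $2d$ entries equal to $1$ and that all pairwise Hamming distances double; the sizes are illustrative.

\begin{verbatim}
import numpy as np

def pad_with_complement(A):
    """Map each row a in {0,1}^d to [a, 1-a] in {0,1}^{2d}."""
    return np.hstack([A, 1 - A])

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(4, 6))
B = rng.integers(0, 2, size=(4, 6))
Ap, Bp = pad_with_complement(A), pad_with_complement(B)
assert np.all(Ap.sum(axis=1) == 6)                         # half of the 2d = 12 entries are 1
ham = np.abs(A[:, None, :] - B[None, :, :]).sum(axis=2)    # original Hamming distances
ham_pad = np.abs(Ap[:, None, :] - Bp[None, :, :]).sum(axis=2)
assert np.all(ham_pad == 2 * ham)
\end{verbatim}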
For convenience of analysis, we define a gap version of the approximate nearest neighbor search problem, ${\sf Gap}{\rm-}{\sf ANN}(n,d,t,\epsilon)$.
\begin{definition}[Gap approximate nearest neighbor search (${\sf Gap}{\rm-}{\sf ANN}(n,d,t,\epsilon)$)]\label{def:GapANN}
Let $n, d$ denote two positive integers. Let $t > 0$ denote a threshold parameter. Let $\epsilon$ denote an accuracy parameter.
Given two sets of points $A = \{ a_1, \cdots, a_n \} \subset \{0,1\}^d$ and $B = \{ b_1, \cdots, b_n \} \subset \{0,1\}^d$:
For each $i\in [n]$, we need to distinguish the following two cases (note that since the points are binary, $\| a_i - b_j \|_0 = \| a_i - b_j \|_2^2$):
\begin{itemize}
\item Case 1. There exists a $ j \in [n]$ such that $\| a_i - b_j \|_0 < t$.
\item Case 2. For all $j \in [n]$ we have $\| a_i - b_j \|_2^2 \geq (1+\epsilon)\cdot t$.
\end{itemize}
\end{definition}
An algorithm for ${\sf Gap}{\rm-}{\sf ANN}(n,d,t,\epsilon)$ can be called $\log(nd)$ times to binary search for the answer to $\mathsf{ANN}$, so Lemma~\ref{lem:r18} holds as well for ${\sf Gap}{\rm-}{\sf ANN}(n,d,t,\epsilon)$.
\subsection{Hardness Result}\label{sec:hard:our}
In the remainder of this section, we prove our lower bound for attention computation:
\begin{theorem}[Main Result, formal version of Theorem~\ref{thm:informal_main_lower_bound}]\label{thm:formal_main_lower_bound}
Assuming {\sf SETH}, for every sufficiently small $q >0$, there are constants
$C > 0$ and $C_{\alpha}>0$ and $C_{\beta} > 1$ such that Approximate Attention Computation $\mathsf{AAttC}$ (Definition~\ref{def:AAttC}) for parameters $(n, d = C \log n, B = C_{\beta} \sqrt{\log n}, \epsilon_a = n^{- C_{\alpha}})$ requires $\Omega(n^{2-q})$ time.
\end{theorem}
\begin{proof}
This follows from combining Lemma~\ref{lem:r18} (hardness for approximation nearest neighbor search) and Lemma~\ref{lem:reduce_GapANN_to_AAttC} (a reduction from approximate nearest neighbor search to approximate attention computation) which we prove below.
\end{proof}
\begin{lemma}\label{lem:reduce_GapANN_to_AAttC}
For any constant $C_\gamma \in (0,0.1)$: For every $\epsilon>0$ and $C>0$, there exist constants $C_a > 0$ and $C_b>0$ such that, if $\mathsf{AAttC}$ (Definition~\ref{def:AAttC}) for parameters $(2n, d = 2C \log n, B = C_b \sqrt{\log n}, \epsilon_a = n^{- C_a})$ can be solved in time $T$, then ${\sf Gap}{\rm-}{\sf ANN}(n,d = C \log n,t, \epsilon)$ (Definition~\ref{def:GapANN}) can be solved in time $O(T + n^{2 - C_{\gamma}})$.
\end{lemma}
\begin{proof}
We give an algorithm with the stated running time for ${\sf Gap}{\rm-}{\sf ANN}(n,d = C \log n,t, \epsilon)$. Let $c > 0$ be a parameter we will choose later (it will be a function of $C$ and $C_\gamma$). Our algorithm will proceed to one of two cases depending on the value of $t$. If $t < c \log n$, then we will use one algorithm which runs in time $O(n^{2 - C_\gamma})$. Otherwise, if $t \geq c \log n$, we will use another algorithm which runs in time $O(T)$.
{\bf Case 1}: $t < c \log n$.
Let $a_1, \cdots, a_n, b_1, \cdots, b_n \in \{0,1\}^d$ be the input vectors to ${\sf Gap}{\rm-}{\sf ANN}$, and let $t \in [0,d]$ denote the target distance. Recall that $d = C \log n$.
In this $t < c \log n$ case, we will simply brute-force for the answer in the following way:
We first store the vectors $b_1, \cdots, b_n$ in a lookup table, then for each $i \in [n]$, we iterate over all vectors $b' \in \{0,1\}^d$ which have Hamming distance at most $t$ from $a_i$ and check whether $b'$ is in the lookup table. This determines whether there is a $b \in B$ at distance at most $t$ from $a_i$, as desired.
For each $i \in [n]$, we need to iterate over ${d \choose t}$ choices for the vector $b'$, so the total running time will be $O(n \cdot {d \choose t})$. By standard bounds on binomial coefficients, we know that
\begin{align*}
n \cdot {d \choose t}
\leq & ~ n \cdot { C \log n \choose c \log n } \\
\leq & ~ n^{1 + f(C,c)}
\end{align*}
for some function $f: \mathbb{R}_{>0} \times \mathbb{R}_{>0} \rightarrow \mathbb{R}_{>0}$ with the property that, for any fixed $C>0$, we have
\begin{align*}
\lim_{c\rightarrow 0} f(C,c) = 0.
\end{align*}
We can thus pick a sufficiently small constant $c >0$, depending only on $C_{\gamma}$ and $C$ such that $f(C,c) < 1-C_\gamma$ and this entire brute-force takes $O(n^{2-C_\gamma})$ time.
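A Python sketch of this brute force for Case 1: the $b$ vectors go into a hash set, and for each $a_i$ we enumerate all vectors within Hamming distance $t$ by flipping every subset of at most $t$ coordinates; the data sizes are illustrative.

\begin{verbatim}
import numpy as np
from itertools import combinations

def has_neighbor_within(a_list, b_list, t):
    """For each a, decide whether some b lies within Hamming distance t (brute force)."""
    table = {bytes(b) for b in b_list}                  # lookup table of the b vectors
    answers = []
    for a in a_list:
        d = len(a)
        found = False
        for s in range(t + 1):                          # flip every subset of <= t coordinates
            for flip in combinations(range(d), s):
                bp = a.copy()
                bp[list(flip)] ^= 1
                if bytes(bp) in table:
                    found = True
                    break
            if found:
                break
        answers.append(found)
    return answers

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(5, 10), dtype=np.uint8)
B = rng.integers(0, 2, size=(5, 10), dtype=np.uint8)
print(has_neighbor_within(A, B, t=2))
\end{verbatim}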
{\bf Case 2:} $t \geq c \log n$.
Let $a_1, \cdots, a_n, b_1, \cdots,b_n \in \{0,1\}^d$ denote the input of ${\sf Gap}{\rm-}{\sf ANN}(n,d,t,\epsilon)$ (Definition~\ref{def:GapANN}), and recall from Remark~\ref{rem:complement_transformation} that we may assume each has half its entries $0$ and half its entries $1$. We will explain how to construct an Attention matrix using this instance.
Let $C_0 \geq c$ be such that
\begin{align}\label{eq:def_t_C_0_logn}
t:= C_0 \log n.
\end{align}
Let $\beta > 0$ and $\widetilde{d} \geq d$ denote parameters we will choose later (see Eq.~\eqref{eq:def_beta} and Eq.~\eqref{eq:def_wt_n_wt_d}, respectively). Define $\tau >0$ by \begin{align}\label{eq:def_tau}
\tau: = \exp(\beta).
\end{align}
Intuitively, our goal in picking these parameters is that $\tau$ will be an upper bound on entries of the attention matrix, i.e., we will have:
$$\tau \geq \max_{i \in [n], j \in [n]} \exp( \beta \langle a_i, b_j \rangle / \widetilde{d} ).$$
We will make use of an algorithm for the $\mathsf{AAttC}(\widetilde{n}, \widetilde{d}, B, \epsilon_a)$ problem, for the following parameters:
\begin{align}\label{eq:def_wt_n_wt_d}
\widetilde{n} := 2n, ~~~~ \widetilde{d} := 2d,
\end{align}
\begin{align}\label{eq:def_B_C_b}
B:= C_b \sqrt{\log n}, \text{~~~where~~~} C_b := \sqrt{40 C / (C_0 \epsilon)} ,
\end{align}
\begin{align}\label{eq:def_epsilon_a_C_a}
\epsilon_a : = n^{-C_a}, \text{~~~where~~~} C_a: = 2 + C_b^2 (1 + C_0/C).
\end{align}
Furthermore, set
\begin{align}\label{eq:def_beta}
\beta := B^2.
\end{align}
We define $Q \in \mathbb{R}^{\widetilde{n} \times \widetilde{d}}$ and $K \in \mathbb{R}^{\widetilde{n} \times \widetilde{d}}$ as
\begin{align*}
Q: = \sqrt{\beta} \cdot \begin{bmatrix}
a_1^\top & {\bf 1}_d^\top \\
a_2^\top & {\bf 1}_d^\top \\
\vdots & \vdots \\
a_n^\top & {\bf 1}_d^\top \\
{\bf 0}_d^\top & {\bf 1}_d^\top \\
{\bf 0}_d^\top & {\bf 1}_d^\top \\
\vdots & \vdots \\
{\bf 0}_d^\top & {\bf 1}_d^\top
\end{bmatrix}
\text{~~~and~~~}
K := \sqrt{\beta} \cdot \begin{bmatrix}
b_1^\top & {\bf 0}_d^\top \\
b_2^\top & {\bf 0}_d^\top \\
\vdots & \vdots \\
b_n^\top & {\bf 0}_d^\top \\
{\bf 0}_d^\top & {\bf 1}_d^\top \\
{\bf 0}_d^\top & {\bf 1}_d^\top \\
\vdots & \vdots \\
{\bf 0}_d^\top & {\bf 1}_d^\top
\end{bmatrix}.
\end{align*}
Since each entry of $Q$ and $K$ is either $\sqrt{\beta}$ or $0$, it follows that
\begin{align*}
\| Q \|_{\infty} \leq & ~ \sqrt{\beta} = B \\
\| K \|_{\infty} \leq & ~ \sqrt{\beta} = B \\
\| Q K^\top / \widetilde{d} \|_{\infty} \leq & ~ \frac{\beta \cdot\widetilde{d}}{\widetilde{d}} = \beta = B^2.
\end{align*}
Using the above construction of $Q \in \mathbb{R}^{\widetilde{n} \times \widetilde{d}}$ and $K \in \mathbb{R}^{\widetilde{n} \times \widetilde{d}}$, we note that
\begin{align*}
A:=\exp(QK^\top / \widetilde{d} ) \in \mathbb{R}^{\widetilde{n} \times \widetilde{n}}
\end{align*}
is given by
\begin{align*}
A= \begin{bmatrix}
\exp( \beta \langle a_1, b_1\rangle / \widetilde{d}) & \exp( \beta \langle a_1, b_2 \rangle / \widetilde{d}) & \cdots & \exp( \beta \langle a_1, b_n \rangle / \widetilde{d}) & \tau & \tau & \cdots & \tau \\
\exp(\beta \langle a_2, b_1\rangle / \widetilde{d}) & \exp(\beta \langle a_2, b_2 \rangle / \widetilde{d}) & \cdots & \exp(\beta \langle a_2, b_n \rangle / \widetilde{d}) & \tau & \tau & \cdots & \tau \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
\exp(\beta \langle a_n, b_1\rangle / \widetilde{d}) & \exp(\beta \langle a_n, b_2 \rangle / \widetilde{d}) & \cdots & \exp(\beta \langle a_n, b_n \rangle / \widetilde{d}) & \tau & \tau & \cdots & \tau \\
0 & 0 & \cdots & 0 & \tau & \tau & \cdots & \tau\\
0 & 0 & \cdots & 0 & \tau & \tau & \cdots & \tau\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & \tau & \tau & \cdots & \tau\\
\end{bmatrix}.
\end{align*}
(Note that we do not explicitly compute all the entries of $A$ in our algorithm; we will make use of it only through calling our algorithm for the Attention problem.)
For each $(i,j) \in [n] \times [n]$, we know that
\begin{align}\label{eq:upper_bound_A_i_j}
A_{i,j}
= & ~ \exp( \beta \langle a_i, b_j \rangle / \widetilde{d} ) \notag \\
\leq & ~ \exp( \beta \| a_i \|_{\infty} \cdot \| b_j\|_{\infty} \cdot d /\widetilde{d} ) \notag \\
\leq & ~ \exp( \beta ) \notag \\
= & ~ \tau
\end{align}
where the first step follows from definition of $A \in \mathbb{R}^{\widetilde{n} \times \widetilde{n}}$, the second step follows from $\langle a_i, b_j \rangle \leq \| a_i \|_{\infty} \cdot \| b_j \|_{\infty} d$, the third step follows from $d < \widetilde{d}$ (see Eq.~\eqref{eq:def_wt_n_wt_d}), and the last step follows from definition of $\tau$ (see Eq.~\eqref{eq:def_tau}).
On the other hand, we know that for each $(i,j) \in [n] \times [n]$,
\begin{align}\label{eq:lower_bound_A_i_j}
A_{i,j} \geq 0
\end{align}
since it is an exponential of an entry of $Q K^\top / \widetilde{d}$.
Using Eq.~\eqref{eq:upper_bound_A_i_j} and Eq.~\eqref{eq:lower_bound_A_i_j}, combined with our expression for $A$, it thus follows that
\begin{align*}
n \tau \leq (A {\bf 1}_{\widetilde{n}})_i \leq 2n \tau, ~~~\forall i \in [\widetilde{n} ].
\end{align*}
Since $D_{i,i} = (A {\bf 1}_{\widetilde{n}})_i$, we thus know that
\begin{align*}
n \tau \leq D_{i,i} \leq 2 n \tau, ~~~\forall i \in [\widetilde{n}].
\end{align*}
Choose the vector $v\in \mathbb{R}^{\widetilde{n}}$ defined as
\begin{align*}
v = \begin{bmatrix} {\bf 1}_n \\ {\bf 0}_n\end{bmatrix}.
\end{align*}
We define $\widetilde{t}$ as
\begin{align}\label{eq:def_wt_t}
\widetilde{t}:= \frac{1}{3}\exp(0.25 \beta (1-t/d))/(2n \tau) .
\end{align}
We can show that $\widetilde{t} \geq \epsilon_a$ as follows:
\begin{align*}
\widetilde{t} = & ~ \frac{1}{6n} \exp( 0.25 \beta(1-t/d) - \beta ) \\
= & ~ \frac{1}{6n} \exp( -0.75\beta - 0.25 \beta t /d ) \\
= & ~ \frac{1}{6n} \exp( -0.75\beta - 0.25 \beta C_0/ C ) \\
= & ~ \frac{1}{6} \exp( -0.75\beta - 0.25 \beta C_0/C - \log n ) \\
= & ~ \frac{1}{6} \exp( -0.75 C_b^2 \log n - 0.25 C_b^2 (C_0/C) \log n - \log n ) \\
\geq & ~ n^{-C_a} \\
= & ~ \epsilon_a,
\end{align*}
where the first step follows from the definition of $\widetilde{t}$ (Eq.~\eqref{eq:def_wt_t}) and $\tau = \exp(\beta)$ (Eq.~\eqref{eq:def_tau}), the second step follows from simple algebra, the third step follows from $t = C_0\log n$ (Eq.~\eqref{eq:def_t_C_0_logn}) and $d = C\log n$ (the assumption in the Lemma statement), the fifth step follows from the choice of $\beta$ and $B$ (Eq.~\eqref{eq:def_beta} and Eq.~\eqref{eq:def_B_C_b}), the sixth step follows from the choice of $C_a$ (Eq.~\eqref{eq:def_epsilon_a_C_a}), and the last step follows from the definition of $\epsilon_a$ (Eq.~\eqref{eq:def_epsilon_a_C_a}).
Since $\widetilde{t} \geq \epsilon_a$, if we run an algorithm for Approximate Attention Computation (Definition~\ref{def:AAttC}) $\mathsf{AAttC}(\widetilde{n}, \widetilde{d}, B , \epsilon_a)$, where we pick $V$ to be a matrix with one column equal to $v$ and all remaining entries $0$, we can output a vector $u \in \mathbb{R}^{\widetilde{n}}$ (the corresponding column of the output) such that, for all $i \in [\widetilde{n}]$,
\begin{align*}
| u_i - (D^{-1} A v)_i | < \widetilde{t}.
\end{align*}
Note that using Remark~\ref{rem:complement_transformation}, we have
\begin{align*}
\| a_i \|_2^2 / d = & ~ 0.5, ~~~ \forall i \in [n], \\
\| b_j \|_2^2 / d = & ~ 0.5, ~~~ \forall j \in [n].
\end{align*}
Therefore, for any $(i,j) \in [n] \times [n]$,
\begin{align*}
\frac{1}{d} \langle a_i , b_j \rangle
= & ~ \frac{1}{2d} ( \| a_i \|_2^2 + \| b_j \|_2^2 - \| a_i - b_j \|_2^2 ) \\
= & ~ \frac{1}{2d} ( 0.5 d + 0.5 d - \| a_i - b_j \|_2^2 ) \\
= & ~ 0.5 - 0.5 \| a_i -b_j \|_2^2 / d,
\end{align*}
where the second step follows from $\| a_i \|_2^2 = \| b_j \|_2^2 = d/2$, and the last step follows from simple algebra.
Recall that our goal is to determine, for each $i \in [n]$, whether there is a $j\in [n]$ such that $\| a_i - b_j \|_2^2 \leq t$, or whether $\| a_i - b_j \|_2^2 \geq (1 + \epsilon)t$ for all $j \in [n]$. We will show next that we can distinguish these two cases by checking whether $u_i$ is greater than or less than the value $\widetilde{t}_0 := 2 \widetilde{t}$.
{\bf Case 2a.}
If there exists an $(i,j) \in [n] \times [n]$ such that $\| a_i - b_j \|_2^2 \leq t$, then
\begin{align*}
\beta \langle a_i, b_j \rangle / \widetilde{d}
= & ~ 0.5 \cdot \beta \langle a_i, b_j \rangle /d \\
\geq & ~ 0.25 \cdot \beta (1 - t/d),
\end{align*}
where the first step follows from $2d = \widetilde{d}$ (see Eq.~\eqref{eq:def_wt_n_wt_d}).
This means that
\begin{align*}
u_i
\geq & ~ \exp ( 0.25 \beta (1- t/d) ) / ( 2 n\tau ) - \widetilde{t} \\
= & ~ 3 \widetilde{t} - \widetilde{t} \\
= & ~ 2 \widetilde{t} \\
= & ~ \widetilde{t}_0,
\end{align*}
where the first step follows from $D_{i,i} \leq 2n\tau$ and the lower bound on $\beta \langle a_i, b_j \rangle / \widetilde{d}$ above, the second step follows from the definition of $\widetilde{t}$ (see Eq.~\eqref{eq:def_wt_t}), and the last step follows from the definition of $\widetilde{t}_0$.
{\bf Case 2b.}
If for all $(i,j) \in [n] \times [n]$, we have $\| a_i - b_j \|_2^2 > t(1+\epsilon)$, this implies
\begin{align*}
\beta \langle a_i, b_j \rangle / \widetilde{d} \leq 0.25 \beta \cdot (1 - t(1+\epsilon)/d ).
\end{align*}
Then, for all $i \in [n]$,
\begin{align*}
u_i < & ~ ( n \cdot \exp(0.25 \beta (1-(1+\epsilon) t/d)) ) / (n \tau) + \widetilde{t} \\
= & ~ \exp ( 0.25 \beta (1- t/d) ) / ( 2 n\tau ) \cdot (2n/ \exp(0.25\beta \epsilon t /d)) + \widetilde{t} \\
= & ~ 3 \widetilde{t} \cdot (2n/ \exp(0.25\beta \epsilon t /d)) + \widetilde{t} \\
\leq & ~ 3 \widetilde{t} \cdot \frac{1}{3} + \widetilde{t} \\
= & ~ 2\widetilde{t} \\
= & ~ \widetilde{t}_0,
\end{align*}
where the first step follows from $D_{i,i} \geq n\tau$ and the upper bound on $\beta \langle a_i, b_j \rangle / \widetilde{d}$ above, the second step follows from simple algebra, the third step follows from the definition of $\widetilde{t}$ (see Eq.~\eqref{eq:def_wt_t}), the fourth step follows from the calculation in Eq.~\eqref{eq:gap_term} below, and the last step follows from $\widetilde{t}_0 = 2\widetilde{t}$.
Finally, by our choice of $\beta$ and $t$, we can see that
\begin{align}\label{eq:gap_term}
\exp(0.25 \beta \epsilon t /d )
= & ~ \exp( (0.25 \beta \epsilon C_0 \log n ) / d) \notag \\
= & ~ \exp( 0.25 \beta \epsilon C_0 / C) \notag \\
= & ~ \exp(10 \log n) \notag\\
\geq & ~ 6 n,
\end{align}
where the first step follows from $t = C_0 \log n$ (Eq.~\eqref{eq:def_t_C_0_logn}), the second step follows from $d= C\log n$, the third step follows from $\beta = B^2$ (Eq.~\eqref{eq:def_beta}) and the choice of $B$ (Eq.~\eqref{eq:def_B_C_b}), and the last step holds since $n^{10} \geq 6n$ for all $n \geq 2$. Thus, comparing $u_i$ with the threshold $\widetilde{t}_0$ distinguishes the two cases of ${\sf Gap}{\rm-}{\sf ANN}$, which completes the reduction and the proof.
\end{proof}
\section{Introduction}
Large language models (LLMs) such as Transformer \cite{transformer17}, BERT \cite{bert18}, GPT-3 \cite{gpt3_20}, PaLM \cite{palm22}, and OPT \cite{opt_22} can process natural language more effectively than smaller models or traditional algorithms. This means that they can understand and generate more complex and nuanced language, which can be useful for a variety of tasks such as language translation, question answering, and sentiment analysis. LLMs can also be adapted to multiple purposes without needing to be retrained from scratch.
Their power is particularly exemplified by the recent success of ChatGPT, a chat software by OpenAI built on top of GPT-3 \cite{chatgpt}.
The key technical backbone of LLMs is the \emph{attention matrix} \cite{transformer17,gpt1_18,bert18,gpt2_19,gpt3_20}. An attention matrix is a square matrix whose rows and columns correspond to words or ``tokens'', and whose entries correspond to the correlations between these tokens in natural text. The attention matrix is then used to calculate the importance of each input token in a sequence when producing an output. In an attention mechanism, each input token is given a weight or score, which reflects its importance or relevance to the current output being generated. These scores are calculated based on a comparison between the current output state and the input states, using a similarity function.
More formally, the attention matrix is defined as follows.
Let $Q \in \mathbb{R}^{n \times d}$ be the matrix of query tokens, and $K \in \mathbb{R}^{n \times d}$ be the matrix of key tokens. (We focus here on the case when $d = n^{o(1)}$, so $d \ll n$.)
The attention matrix is an $n \times n$ matrix $A$ where the rows and columns correspond to the input tokens in the sequence. Each entry in the matrix represents the attention weight or score between a particular input token (query token $Q$) and a particular output token (key token $K$). The diagonal entries of the matrix represent self-attention scores, which measure the importance of each token with respect to itself.
The major bottleneck to speeding up LLM operations (in the case of modeling long sequences with large $n$) is the time to perform attention matrix computations \cite{transformer17,gpt1_18,bert18,gpt2_19,gpt3_20,linformer20,reformer20}. These computations ask us to multiply the attention matrix $A$ with another value token matrix $V \in \mathbb{R}^{n \times d}$.
We formally define Attention computation as follows. Throughout this paper, we write $\exp$ to denote the \emph{entry-wise} exponential for matrices.
\begin{definition}[Exact Attention Computation $\mathsf{EAttC}(n,d)$]\label{def:EAttC}
Given three matrices $Q,K, V \in \mathbb{R}^{n \times d}$, output the $n \times d$ matrix $\Att(Q,K,V)$ defined by
\begin{align*}
\Att(Q,K,V) : = D^{-1} A V
\end{align*}
where $A \in \mathbb{R}^{n \times n}$ and diagonal matrix $D \in \mathbb{R}^{n \times n}$ are defined as
\begin{align*}
A:= \exp(Q K^\top /d), \text{~~~and~~~} D: = \diag(A {\bf 1}_n).
\end{align*}
\end{definition}
The straightforward algorithm for this problem computes the matrix $A$ and then performs the multiplications $D^{-1} A V$, in time $n^{2 + o(1)}$. Since $A$ is an $n \times n$ matrix with $n^2$ entries, it is impossible to improve on this much while explicitly computing the matrix $A$. However, the input to the problem is not $A$, but rather the three matrices $Q,K,V$ which each have only $n^{1 + o(1)}$ entries. An algorithm which only \emph{implicitly} makes use of $A$, without explicitly computing all its entries, could hope to run in almost linear time!
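In Python/NumPy, this straightforward quadratic-time computation is only a few lines (shown purely for illustration):

\begin{verbatim}
import numpy as np

def exact_attention(Q, K, V):
    """Att(Q, K, V) = D^{-1} A V with A = exp(Q K^T / d) and D = diag(A 1_n)."""
    A = np.exp(Q @ K.T / Q.shape[1])       # the n x n attention matrix (the bottleneck)
    return A / A.sum(axis=1, keepdims=True) @ V
\end{verbatim}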
In this paper, we investigate the possibility of accelerating attention computations in this way. The two main questions we address are:
\begin{itemize}
\item {\bf Q1.} When can we perform attention computations in almost linear time $n^{1 + o(1)}$?
\item {\bf Q2.} When can we prove that subquadratic-time algorithms for attention computations are \emph{impossible}?
\end{itemize}
In most LLMs, it suffices to \emph{approximately} perform attention computations throughout the inference process as long as there are reasonable precision guarantees \cite{sparse_transformer19,reformer20,linformer20,dkod20,kvpf20,cdw+21,cdl+22}.
We therefore focus here on approximate attention computation, which can potentially be performed even faster than exact computation. Mathematically, we define the {\it approximate} version of $\mathsf{EAttC}$ as follows.
\begin{definition}[Approximate Attention Computation $\mathsf{AAttC}(n,d, B, \epsilon_a)$]\label{def:AAttC}
Let $\epsilon_a >0$ and $B > 0$ be parameters. Given three matrices $Q,K, V \in \mathbb{R}^{n \times d}$, with the guarantees that $\| Q \|_{\infty} \leq B$, $\| K \|_{\infty} \leq B$, and $\| V \|_{\infty} \leq B$, output a matrix $T \in \mathbb{R}^{n \times d}$ which is approximately equal to $D^{-1} A V$, meaning, $$\| T - D^{-1} A V \|_{\infty} \leq \epsilon_a.$$ Here, for a matrix $M \in \mathbb{R}^{n \times n}$, we write $\| M \|_{\infty}:=\max_{i,j} | M_{i,j} |$.
\end{definition}
Again, the straightforward algorithm for this problem runs in time $O(n^2 d) \leq n^{2 + o(1)}$, but the input size is only $O(nd) \leq n^{1 + o(1)}$. Our goal is to investigate when faster algorithms are possible in terms of the parameters $d, B$, and $\epsilon_a$.
\subsection{Our Results}
We focus on the natural setting where $d = O(\log n)$ (the setting where we model long sequences) and $\epsilon_a = 1/\poly(n)$ (low enough error so that attention computations over an entire network can be combined). Our main results show that whether or not there is a fast algorithm for $\mathsf{AAttC}$ critically depends on $B$, the magnitudes of the entries in the input matrices.
We first show a lower bound, that when $B \geq \Omega(\sqrt{\log n})$, it is impossible to design a truly subquadratic-time algorithm. Our lower bound makes use of the Strong Exponential Time Hypothesis ($\mathsf{SETH}$)~\cite{ip01}, a popular conjecture \cite{w18} from the area of fine-grained complexity regarding the time required to solve $k$-SAT. (See Section~\ref{sec:hard} below where we discuss $\mathsf{SETH}$ in more detail.)
\begin{theorem}[Lower bound, informal version of Theorem~\ref{thm:formal_main_lower_bound}]\label{thm:informal_main_lower_bound}
Assuming $\mathsf{SETH}$, for every $q>0$, there are constants $C,C_a,C_b>0$ such that: there is no $O(n^{2-q})$ time algorithm for the problem $\mathsf{AAttC}(n,d = C \log n,B= C_b \sqrt{\log n},\epsilon_a = n^{-C_a})$.
\end{theorem}
Our second complementary result is a new algorithm, showing that when $B < o(\sqrt{\log n})$, the problem can be solved very efficiently, in almost linear time.
\begin{theorem}[Upper bound, informal version of Theorem~\ref{thm:formal_main_upper_bound}]\label{thm:informal_main_upper_bound}
There is an algorithm (Algorithm~\ref{alg:main}) that solves $\mathsf{AAttC}(n,d = O(\log n),B = o(\sqrt{\log n}),\epsilon_a = 1/\poly(n))$ in time $ n^{1+o(1)}$.
\end{theorem}
Our Theorems~\ref{thm:informal_main_lower_bound} and \ref{thm:informal_main_upper_bound} show that the attention computation problem $\mathsf{AAttC}$ exhibits a very tight transition at $B = \Theta(\sqrt{\log n})$ from almost linear time to trivial quadratic time. When $B < o(\sqrt{\log n})$ is smaller, the problem can be solved in almost linear time $n^{1 + o(1)}$ in the input size, using our algorithm for Theorem~\ref{thm:informal_main_upper_bound}. When $B \geq \Omega(\sqrt{\log n})$ is greater, our algorithm from Theorem~\ref{thm:informal_main_upper_bound} no longer applies, and furthermore our lower bound from Theorem~\ref{thm:informal_main_lower_bound} shows that it is \emph{impossible} to solve the problem in truly subquadratic time, no matter what algorithmic techniques one uses (assuming $\mathsf{SETH}$).
It has been observed in LLM implementations in practice that computations are much faster when one assumes that the matrix entries are bounded or can be well-approximated using a small number of bits (see, e.g., \cite[Section 2]{bert8bit_19} and \cite[Section 3.2.1]{kvpf20}). Our work can be viewed as giving a theoretical explanation for this phenomenon, and helping to explain why techniques like quantization \cite{bert8bit_19} and low-degree polynomial approximation \cite{kvpf20} have been so effective in practice.
{\bf Related Work.}
A recent work by Zandieh, Han, Daliri, and Karbasi~\cite{zhdk23} was the first to give an algorithm with provable guarantees for attention approximation. Their algorithm makes use of locality sensitive hashing (LSH) techniques \cite{ckns20} which, as we will discuss next, is quite different from our algorithm for Theorem~\ref{thm:informal_main_upper_bound} which uses the polynomial method \cite{acss20,aa22}.
In the case when $d = o(\log^2 n)$, they achieve a running time of roughly $O(n^{1.17} \cdot d / \epsilon_r^2)$, where $\epsilon_r$ is a \emph{relative} error parameter (which is similar, though not exactly the same, as our $\epsilon_a$ from Definition~\ref{def:AAttC}). In particular, their algorithm applies for larger $d$ than ours (we require $d = O(\log n)$), but we achieve almost linear time $n^{1 + o(1)}$ (whereas their running time is bounded below by $\Omega(n^{1.17})$), and our algorithm can handle any polynomial error $\epsilon_a = 1/\poly(n)$ (whereas they require $\epsilon_r \geq 1/n^{o(1)}$ to not increase the running time by a polynomial factor).
It is natural to wonder whether further improvements are possible by combining our techniques with those of~\cite{zhdk23}. However, our lower bound of Theorem~\ref{thm:informal_main_lower_bound} shows that our algorithm of Theorem~\ref{thm:informal_main_upper_bound} is already essentially tight and cannot be substantially improved.
\subsection{Technique Overview}
Our high-level approach is to make use of similarities between attention computation and other computational problems related to Kernel Density Estimation (KDE). Such a relationship was investigated by recent work \cite{tby+19,zhdk23}. In particular,~\cite{zhdk23} was inspired to apply LSH techniques to attention computation because of the prevalence of LSH in KDE algorithms~\cite{cs17,bcis18,cs19, ckns20}. The main conceptual idea behind our results is that different techniques from the KDE literature, other than LSH, can be modified to apply in this setting and yield tight algorithms and lower bounds.
To design our algorithm for Theorem~\ref{thm:informal_main_upper_bound}, we instead build off of a different line of work on KDE which makes use of the `polynomial method in algorithm design'. Suppose $M \in \mathbb{R}^{n \times n}$ is a matrix, $f : \mathbb{R} \to \mathbb{R}$ is a function, and let $f(M)$ denote the matrix one gets by applying $f$ entry-wise to $M$. The polynomial method is a technique for finding low-rank approximations of $f(M)$. It shows that if $M$ has low rank, and if $f$ can be approximated by a low-degree polynomial, then the matrix $f(M)$ is very close to a low-rank matrix whose low-rank decomposition can be computed efficiently.
To use this to solve $\mathsf{AAttC}$, we make use of a recent result which bounds the degree required to approximate the exponential function by a polynomial~\cite{aa22} in order to find a low-rank approximation of the attention matrix $A$. Prior work~\cite{acss20,acm+20,aa22} applied these polynomials in a similar way to solve the Gaussian KDE problem; our main observation is that by an appropriate rescaling, this approach can be modified to apply to $\mathsf{AAttC}$ as well.
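To make this low-rank mechanism concrete, the following is a small \texttt{numpy} sketch of the idea (it is not the algorithm behind Theorem~\ref{thm:informal_main_upper_bound}, which relies on the tighter polynomial of~\cite{aa22}; here $\exp$ is simply replaced by its truncated Taylor series, and all parameter values are illustrative). The point is that the entrywise-exponentiated matrix factors through explicit monomial feature maps, so the attention output can be formed without ever materializing an $n\times n$ matrix.
\begin{verbatim}
import itertools, math
import numpy as np

def taylor_features(X, g):
    # phi(q) . phi(k) = sum_{j=0}^{g} (q . k)^j / j!, the degree-g Taylor series of exp(q . k)
    n, d = X.shape
    blocks = [np.ones((n, 1))]
    for j in range(1, g + 1):
        cols = [np.prod(X[:, idx], axis=1) for idx in itertools.product(range(d), repeat=j)]
        blocks.append(np.stack(cols, axis=1) / math.sqrt(math.factorial(j)))
    return np.concatenate(blocks, axis=1)      # rank of the approximation <= sum_j d^j

rng = np.random.default_rng(0)
n, d, g = 256, 3, 6
Q = rng.uniform(-1, 1, (n, d)) / d             # bounded entries, so |Q_i . K_j| stays O(1)
K = rng.uniform(-1, 1, (n, d)) / d
V = rng.normal(size=(n, 4))

A = np.exp(Q @ K.T)                            # exact attention matrix (quadratic time)
out_exact = (A / A.sum(axis=1, keepdims=True)) @ V

PhiQ, PhiK = taylor_features(Q, g), taylor_features(K, g)
num = PhiQ @ (PhiK.T @ V)                      # never materializes the n x n matrix
den = PhiQ @ (PhiK.T @ np.ones(n))
print("max error:", np.abs(out_exact - num / den[:, None]).max())
\end{verbatim}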
The proof of our lower bound Theorem~\ref{thm:informal_main_lower_bound} builds off of another line of work on the fine-grained complexity of KDE problems \cite{bis17, acss20, aa22}. The main idea is to give a fine-grained reduction from the well-studied problem of Approximate Nearest Neighbor search $\mathsf{ANN}$. In $\mathsf{ANN}$, one is given as input $n$ vectors of dimension $d$, and an error parameter $\epsilon>0$, and the goal is to find a pair of vectors whose distance is at most $(1 + \epsilon)$ times the \emph{minimum} distance between any pair of the vectors. The straightforward algorithm for $\mathsf{ANN}$ runs in quadratic time, and it is known that it is impossible to solve $\mathsf{ANN}$ in truly subquadratic time assuming $\mathsf{SETH}$~\cite{r18}.
In order to prove our lower bound, we show that $\mathsf{AAttC}$ can be used to solve $\mathsf{ANN}$. The key idea is that, if the matrices $Q$ and $K$ from $\mathsf{AAttC}$ are formed by concatenating the input vectors to the $\mathsf{ANN}$ problem, then the nearest neighbor vectors correspond to the largest entries of the attention matrix $A$. It is not immediately clear that $\mathsf{AAttC}$ can be used to detect large entries of $A$, since the output is rescaled by the matrix $D^{-1}$, but we show that this can be overcome with some modifications to the input vectors which approximately balance the rows of $A$. Prior work~\cite{bis17, acss20, aa22} used a very similar approach to give lower bounds for KDE problems, although KDE doesn't involve any rescaling factors.
\section{Preliminaries}\label{sec:preli}
We work in the standard real-RAM model and assume arithmetic operations on real numbers can be performed in constant time in our algorithms.
We use ${\cal T}_{\mathrm{mat}}(a,b,c)$ to denote the time to multiply an $a \times b$ matrix with another $b \times c$ matrix. In fact, we will only make use of the straightforward, practical bound ${\cal T}_{\mathrm{mat}}(a,b,c) \leq O(abc)$. In principle, fast theoretical matrix multiplication algorithms could be used instead to improve this bound and speed up our algorithms here (in exchange for making them less practical). That said, because of our parameter settings\footnote{We will make use of ${\cal T}_{\mathrm{mat}}(n,n^{o(1)}, n^{o(1)})$, which can be solved straightforwardly in time $n^{1 + o(1)}$, and which cannot be solved much faster since it has input size $n^{1 + o(1)}$.}, we will see that faster matrix multiplication could only improve low-order terms in our running times.
For any positive integer $n$, we use $[n]$ to denote the set $\{1,2,\cdots, n\}$.
For a matrix $M$, we write $\| M \|_{\infty}$ to denote its $\ell_{\infty}$ norm, i.e., $\| M \|_{\infty}:= \max_{i,j} |M_{i,j}|$. For a matrix $M$, we use $M^\top$ to denote its transpose.
We use ${\bf 1}_n$ to denote a length-$n$ vector whose entries are all $1$s. We use ${\bf 0}_n$ to denote a length-$n$ vector whose entries are all $0$s.
For any matrix $A \in \mathbb{R}^{n \times n}$, we use $\exp(A) \in \mathbb{R}^{n \times n}$ to denote the matrix where $\exp(A)_{i,j} = \exp(A_{i,j})$. In other words, all the $\exp()$ operators in this paper are applied entry-wise to matrices. In particular, we will not use matrix exponentials in this paper.
For a vector $x \in \mathbb{R}^n$, we use $\| x \|_0$ to denote its number of non-zero entries, we use $\| x \|_1$ to denote its $\ell_1$ norm, i.e., $\| x \|_1 := \sum_{i=1}^n |x_i|$, and we use $\| x \|_2$ to denote its $\ell_2$ norm, i.e., $\| x \|_2 := (\sum_{i=1}^n |x_i|^2)^{1/2}$. For a vector $x$, we use $x^\top$ to denote its transpose.
\subsection{Additive Error for Polynomial Approximation}
Our algorithm for attention computation will critically make use of a polynomial approximation for the exponential function. In particular, we use the following tight construction from previous work \cite{aa22}.
\begin{lemma}[\cite{aa22}]\label{lem:aa22}
Let $B > 1$ and let $\epsilon \in (0,0.1)$.
There is a polynomial $P: \mathbb{R} \rightarrow \mathbb{R}$ of degree $g := \Theta\left( \max \left\{ \frac{\log(1/\epsilon)}{ \log( \log(1/\epsilon) / B ) }, B \right\}\right)$ such that for all $x \in [0,B]$, we have
\begin{align*}
|P(x) - \exp(x)| < \epsilon.
\end{align*}
Moreover, $P$ can be computed efficiently: its coefficients are rational numbers with $\poly(g)$-bit integer numerators and denominators which can be computed in $\poly(g)$ time.
\end{lemma}
\subsection{From Additive Error to Relative Error}
We note that in our setting, Lemma~\ref{lem:aa22} can be used to give a relative error approximation as well:
\begin{corollary}\label{cor:aa22_from_-B_to_B}
Let $B > 1$ and let $\epsilon \in (0,0.1)$.
There is a polynomial $P: \mathbb{R} \rightarrow \mathbb{R}$ of degree $g := \Theta( \max\{ \frac{\log(1/\epsilon)}{ \log( \log(1/\epsilon) / B ) } , B \} )$ such that for all $x \in [-B,B]$, we have
\begin{align*}
|P(x) - \exp(x)| < \epsilon \cdot \exp(x).
\end{align*}
\end{corollary}
\begin{proof}
By Lemma~\ref{lem:aa22}, there is a polynomial $Q : \mathbb{R} \to \mathbb{R}$ of degree $g = \Theta( \max\{ \frac{\log(1/\epsilon)}{ \log( \log(1/\epsilon) / B ) } ,B \} )$ such that, for all $y \in [0,2B]$ we have $|Q(y) - \exp(y)| \leq \epsilon$. Our desired polynomial is the rescaled $P(x) := Q(x+B) / \exp(B)$. Indeed, for any $x \in [-B,B]$, we have $\exp(x) \geq \exp(-B)$, and so
\begin{align*}
| P(x) - \exp(x) | =&~ | Q(x+B) / \exp(B) - \exp(x) | \\
=&~ | Q(x+B) - \exp(x+B) | / \exp(B) \\
\leq & ~ \epsilon / \exp(B) \\
\leq & ~ \epsilon \cdot \exp(x),
\end{align*}
as desired.
\end{proof}
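As a quick numerical sanity check of this rescaling trick, the toy snippet below fits a polynomial to $\exp$ on $[0,2B]$ (a Chebyshev least-squares fit is used as a stand-in for the explicit construction of Lemma~\ref{lem:aa22}) and verifies that $P(x)=Q(x+B)/\exp(B)$ achieves small \emph{relative} error on $[-B,B]$; the values of $B$ and the degree are illustrative.
\begin{verbatim}
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

B, deg = 4.0, 24                                  # toy values, not the degree bound of the lemma
xs = np.linspace(0.0, 2 * B, 4001)
Q = Chebyshev.fit(xs, np.exp(xs), deg)            # additive-error approximation on [0, 2B]
print("additive error of Q on [0,2B]:", np.abs(Q(xs) - np.exp(xs)).max())

x = np.linspace(-B, B, 4001)                      # the rescaling used in the proof
P = Q(x + B) / np.exp(B)
print("relative error of P on [-B,B]:", (np.abs(P - np.exp(x)) / np.exp(x)).max())
\end{verbatim}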
\section{Introduction}
\label{sec:Intro}
One of the big questions we are still asking in the literature is why the universe is accelerating \citep{Acceleration1,Acceleration2}. To answer this question, a variety of theoretical models (modified gravity or dark energy) have been developed~\cite{Clifton2012,Brax2018}. These models are largely degenerate with each other, and in order to break the degeneracy, we aim to simultaneously measure the expansion and structure growth histories of our universe~\cite{weinbergreview,Joyce2016,Koyama2016}. For the former, cosmic probes like standard candles~\cite{Riess2022}, rulers~\cite{Eisenstein05}, sirens~\cite{Abbott2017} and the time delay technique~\cite{Wong2020,Treu2022} guarantee an accurate and precise measurement; for the latter, we can detect the cosmic peculiar velocity field.
The peculiar velocity of a galaxy is its velocity component that deviates from the Hubble flow. It is induced by the gravitational attraction generated by the small scale inhomogeneity of our universe. The linear continuity equation directly relates the peculiar velocity field, in particular its divergence, to the matter density growth rate, making it an unbiased tracer of the underlying matter density field. As a result, the peculiar velocity field is an important additional probe for the multi-tracer cosmological analysis~\cite{Koda13,Howlett17,Munchmeyer2019} and a unique cosmic probe which helps understand the fundamental physics such as inflation~\cite{Munchmeyer2019}, dark energy~\cite{Okumura22}, modified gravity~\cite{Roncarelli2018,PanZ2019,Mitchell2021}, massive neutrino \citep{Mueller2015,Whitford2022,ZhouSR2022} and so on.
The current peculiar velocity detection methodologies can be divided into three categories. The first is through the redshift space distortion (RSD) effect~\cite{Jackson72,Kaiser87}. This method indirectly detects the velocity field through its anisotropic imprint on the galaxy clustering pattern in redshift space. Having the highest detection significance of the three, the RSD effect has been successfully utilized by various galaxy surveys to measure the structure growth history and constrain cosmological models~\cite{Peacock01,Tegmark02,Tegmark04,Samushia12,Guzzo08,Blake11b,Blake12,Alam2017,Hector2020,Tamone2020,HouJM2021}. The second kind of method depends on an extra measurement that reveals the real space distance of a galaxy, which is then subtracted from the measured redshift space distance to unveil its peculiar velocity. This extra distance measure can come from the Tully--Fisher~\cite{Tully1977} or Fundamental Plane relation~\cite{Djorgovski1987}, Type Ia supernovae~\citep{Zhang08a}, gravitational wave events~\cite{Abbott2017} and so on. Although their utility is currently limited either by large intrinsic uncertainties or by the small number of detected events, the next generation of surveys will improve on these downsides and make the so-called peculiar velocity surveys good complements to the RSD effect~\citep{Howlett17,Kim2020,Adams2020}. Lastly, the third method is the kinetic Sunyaev Zel'dovich (kSZ) effect that we will discuss in this paper.
The kSZ effect is a secondary CMB anisotropy in which the temperature of CMB photons is slightly changed by their inverse Compton scattering off of free electrons with a bulk motion. It detects the momentum field of free electrons in our universe, as illustrated by
\begin{equation}
\label{eq:deltaT_kSZ}
\frac{\delta T_{\rm kSZ}(\hat{n})}{T_0} = -\sigma_{\rm T}\int dl n_{\rm e} \left(\frac{{{\bf v}}_{\rm e}\cdot \hat{n}}{c}\right) \,.
\end{equation}
Here $T_0\simeq 2.7255\rm K$ is the averaged CMB temperature, $\sigma_{\rm T}$ is the Thomson-scattering cross-section, $c$ is the speed of light, $\hat{n}$ is the unit vector along the line of sight (LOS), $n_{\rm e}$ is the physical free electron number density, ${\bf v}_{\rm e}$ is the proper peculiar velocity of free electrons, defined to be positive for those recessional objects, and the integration $\int dl$ is along the LOS given by $\hat{n}$.
As a result, cosmological applications of the kSZ effect are two-fold. First it measures the (ionized) baryon distribution in our universe and can be used to constrain the Epoch-of-Reionization (EoR) history~\citep{Alvarez2016,ChenN2023} or the gas distribution within halos~\citep{Schaan16,Sugiyama18} and filaments~\cite{ZhengY2023}. In particular, since $\delta T_{\rm kSZ}(\hat{n})$ is linearly proportional to $n_{\rm e}$ and independent of the gas temperature, it is well suited to search for missing baryons \citep{ZhengY2023,Carlos2015,Shao2016,Lim2020,Jonas2021} which are believed to mainly reside in low density and low temperature environments. Second, it directly measures the LOS peculiar velocity field, which in turn helps constrain different aspects of the cosmological model~\citep{Okumura22}, e.g., Copernican principle~\citep{Zhang11b}, primordial non-Gaussianity~\citep{Munchmeyer2019,Kumar2022}, dark energy~\citep{Pen2014}, modified gravity~\citep{Roncarelli2018,Zheng20,Mitchell2021}, massive neutrino~\citep{Roncarelli2017} and so on.
We focus on the late-time kSZ effect in this work, which denotes the kSZ effect after the EoR. It can be probed via cross-correlations between CMB data and tracers of large-scale structure at different redshifts, a technique known as the kSZ tomography. Since its first measurement one decade ago~\citep{Hand12}, the current detection significance by adopting this technique is around $4\sim7\sigma$, with combinations of different CMB data and galaxy catalogs, e.g., SPT+DES~\citep{Soergel16}, ACT+SDSS~\citep{Schaan2021,Calafut2021}, Planck/WMAP+unWISE~\citep{Kusiak2021}, Planck+DESI(photo-z)~\citep{Chen2022}, et al. With the next generation of surveys,
signal-to-noises (S/N's) of such measurements can reach $\mathcal{O}(100)$ and beyond~\citep{Sugiyama18,Smith2018}.
The major contribution to the kSZ signal detected by kSZ tomography comes from free electrons in and around dark matter halos, and within the foreground galaxy catalog, most CMB photons are assumed to encounter only one large halo during their journey to us. In this case, Eq.~(\ref{eq:deltaT_kSZ}) can be recast as
\begin{equation}
\label{eq:deltaT_kSZ2}
\frac{\delta T_{\rm kSZ}(\hat{n}_i)}{T_0} = -\frac{\tau_{{\rm T},i}}{c}{\bf v}_i\cdot\hat{n}_i\approx-\frac{\tau_{\rm T}}{c}{\bf v}_i\cdot\hat{n}_i\,,
\end{equation}
where $\tau_{{\rm T},i}=\int dl\sigma_{\rm T}n_{{\rm e},i}$ is the optical depth of the $i$th halo and $\tau_{\rm T}$ is the average optical depth of the halo sample. Eq.~(\ref{eq:deltaT_kSZ2}) shows that $\tau_{\rm T}$ fully couples with the galaxy velocity field and modulates its overall amplitude in the kSZ measurement. As a result, in a real space kSZ observation it is difficult to decouple $\tau_{\rm T}$ from those cosmological parameters controlling the velocity field amplitude, such as the linear growth rate $f\equiv d\ln D/d\ln a$, where $D$ is the linear growth factor and $a$ is the scale factor. This leads to the so-called $\tau_{\rm T}-f$ degeneracy in kSZ cosmology and to the fact that little cosmological information can be extracted solely from kSZ observations without concrete knowledge of the optical depth. The degeneracy is rigorous on linear scales in real space, as we prove in Appendix~\ref{app:degeneracy_proof} using the continuity equation.
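For orientation, the single-halo amplitude implied by Eq.~(\ref{eq:deltaT_kSZ2}) is easy to evaluate. The snippet below uses $\tau_{\rm T}=4.5\times10^{-5}$, typical of the galaxy samples considered later in this paper, and an assumed LOS peculiar velocity of $300\,{\rm km/s}$ (an illustrative value):
\begin{verbatim}
T0 = 2.7255e6        # mean CMB temperature in micro-K
c = 299792.458       # speed of light in km/s
tau_T = 4.5e-5       # average optical depth of a CMASS-like halo sample
v_los = 300.0        # assumed LOS peculiar velocity in km/s
print(-T0 * tau_T * v_los / c, "micro-K")   # ~ -0.12 micro-K per halo
\end{verbatim}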
This degeneracy can be illustrated in alternative ways. For example, in some works the $\tau_{\rm T}$ parameter is alternatively expressed as a constant velocity bias $\tilde{b}_v$ in the linearly reconstructed radial velocity field $\tilde{v}_{\hat{n}}$ via the kSZ tomography~\citep{Smith2018},
\begin{equation}
\label{eq:vel_recons}
\tilde{v}_{\hat{n}}({\bf k}) = \mu\frac{\tilde{b}_vafH}{k} \delta({\bf k})+({noise})\,.
\end{equation}
Here $\tilde{v}_{\hat{n}}\equiv\tilde{{\bf v}}\cdot\hat{n}$, and $\mu\equiv\hat{{\bf k}}\cdot\hat{n}$ denotes the cosine of the angle between ${\bf k}$ and the LOS. In the cosmological inference, $\tilde{b}_v$ is then marginalized over, which loosens the constraints on parameters related to the amplitude of the velocity power spectrum, such as $f$.
Several ideas were proposed to break this degeneracy and enhance the scientific return of the kSZ cosmology: (1) it was proposed in~\citep{Sugiyama2016} that the synergy between kSZ effect and redshift space distortion (RSD) effect can break this degeneracy; (2) a comprehensive model \citep{Flender2017} or empirical relationship \citep{Alonso2016,Battaglia2016,Soergel2018} was constructed to predict the value of $\tau_{\rm T}$; (3) extra observations like fast radio burst can directly measure the optical depth \citep{Madhavacheril2019}; (4) studying the three-point statistic of the pairwise velocity field can avoid the $\tau_{\rm T}$ dependency in a cosmological inference~\cite{Kuruvilla2022}; and so on.
Although innovative and promising, all of these ideas except method (4) rely on extra information on $\tau_{\rm T}$ or $f$ from other observations. In contrast, we explore in this work the possibility of breaking this degeneracy spontaneously with the observed kSZ effect itself in a two-point statistic. A joint analysis combining the two- and three-point statistics is left to future work. The idea is simple. Since we actually apply the kSZ tomography technique in redshift space, where the velocity field leaves its own imprint on the galaxies' redshift space positions, the extra information on $f$ can be provided by the RSD effect encoded in the observed kSZ effect itself, and the $\tau_{\rm T}-f$ degeneracy can be spontaneously broken.
In the following sections, we will first use the Fisher matrix technique to reveal the severe $\tau_{\rm T}-f$ degeneracy on linear scales. By combining the dipole and octopole of the pairwise kSZ power spectrum in redshift space, the degeneracy can be broken to a limited extent. Then, by developing non-linear kSZ models, we extend the Fisher matrix analysis to non-linear scales and find that the degeneracy can be broken there to a much greater degree. This is the main result of this paper, and it is further consolidated by an independent Monte Carlo Markov Chain (MCMC) analysis on mock galaxy catalogs from simulations. In summary, we will show how $\tau_{\rm T}$ and $f$ can be simultaneously constrained with the kSZ effect alone if we work on non-linear scales in redshift space, which naturally benefits the future cosmological applications of this effect.
The rest of this paper will be organized as follows. In Sec.~\ref{sec:models} we present the developed theoretical kSZ models. In Sec.~\ref{sec:mock} we introduce the mock galaxy catalogs constructed from simulations. In Sec.~\ref{sec:method} we present the methodologies adopted in our work, including the measurements of the pairwise kSZ power spectra from the mock catalogs, the Fisher matrix analysis and the MCMC analysis. In Sec.~\ref{sec:result} we show the results and Sec.~\ref{sec:discussion} is dedicated to conclusion and discussion.
\section{Models}
\label{sec:models}
The kSZ tomography technique can be implemented with various estimators. We refer readers to~\cite{Smith2018} for a good summary. Although in this work the idea of the degeneracy breaking is justified with the widely adopted \textit{pairwise kSZ estimator}, the main conclusion will also hold for others.
Biased tracers, such as galaxies living in dark matter halos containing free electrons, tend to move toward each other due to the mutual gravitational attraction. This peculiar kinematic pattern leaves a distinct feature in the CMB map, which can be captured by the pairwise kSZ estimator~\cite{Hand12}. With this estimator, the density weighted pairwise kSZ correlation function in redshift space and its Fourier counterpart can be expressed as~\cite{Sugiyama2017}
\begin{equation}
\xi^{\rm s}_{\rm kSZ}({\bf s}) = \left\langle -\frac{V}{N^2} \sum_{i,j}\left[\delta T_{\rm kSZ}(\hat{n}_i)-\delta T_{\rm kSZ}(\hat{n}_j)\right]\delta_{\rm D}({{\bf s}}-{{\bf s}}_{ij}) \right\rangle \,,
\label{eq:xi_kSZ}
\end{equation}
\begin{equation}
P^{\rm s}_{\rm kSZ}({\bf k}) = \left\langle -\frac{V}{N^2} \sum_{i,j}\left[\delta T_{\rm kSZ}(\hat{n}_i)-\delta T_{\rm kSZ}(\hat{n}_j)\right]e^{-i {\bf k}\cdot {{\bf s}}_{ij}} \right\rangle \,,
\label{eq:P_kSZ}
\end{equation}
where $V$ is the survey volume, $N$ is the number of galaxies, and ${{\bf s}}_{ij}={{\bf s}}_i-{{\bf s}}_j$ is the pair separation vector in redshift space. We have assumed here that $\tau_{\rm T}$ is not correlated with the galaxy density and velocity fields.
Since the $\tau_{\rm T}-f$ degeneracy and the way it is broken are more transparent in Fourier space, we will work with power spectrum from now on. Inserting Eq.~(\ref{eq:deltaT_kSZ2}) into Eq.~(\ref{eq:P_kSZ}), we have
\begin{eqnarray}
P^{\rm s}_{\rm kSZ}({\bf k}) &\simeq& \left(\frac{T_0\tau_{\rm T}}{c}\right)\left\langle \frac{V}{N^2} \sum_{i,j}\left[{\bf v}_i\cdot\hat{n}_i-{\bf v}_j\cdot\hat{n}_j\right]e^{-i {\bf k}\cdot {{\bf s}}_{ij}} \right\rangle \, \nonumber \\
&\equiv& \left(\frac{T_0\tau_{\rm T}}{c}\right)P^{\rm s}_{\rm pv}({\bf k})\,.
\label{eq:P_pv}
\end{eqnarray}
Here $P^{\rm s}_{\rm pv}$ is the density-weighted pairwise LOS velocity power spectrum in redshift space.
It is proven in \cite{Sugiyama2016,Sugiyama2017} that in redshift space, $P^{\rm s}_{\rm pv}$ is related to the density power spectrum $P^{\rm s}$ by a simple relation
\begin{equation}
P^{\rm s}_{\rm pv}({\bf k}) =\left(i\frac{aHf}{k\mu}\right) \frac{\partial}{\partial f} P^{\rm s}({\bf k})\,,
\label{eq:P_kSZ_der}
\end{equation}
where $a$ is the scale factor and $H$ is the Hubble parameter at redshift $z$. This relation holds for any tracer, such as dark matter particles, halos, or galaxies. Since observationally we measure the kSZ effect in redshift space, our kSZ models will be based on this formula.
The detailed derivation of this relationship, including its extensions to higher order moments of density-weighted velocity field, can be found in~\cite{Sugiyama2016,Sugiyama2017}. In Appendix~\ref{app:eq_ksz_der_proof} we provide a simple proof of Eq.~(\ref{eq:P_kSZ_der}) itself.
\subsection{Linear model}
\label{subsec:linear_model}
\begin{figure
\begin{center}
\includegraphics[width=\columnwidth]{fig/dPdtheta.pdf}
\end{center}
\caption{The absolute values of the derivatives of $P^{\ell=1,3}_{\rm kSZ}$ with respect to $\tau_{\rm T}$ and $f$ calculated by the linear model ({\it top panel}) and the non-linear `NL-Lor' kSZ model ({\it bottom panel}). The linear bias $b_L$ is adopted in the calculations.}
\label{fig:derivative}
\end{figure}
First we derive the linear model of $P^{\rm s}_{\rm kSZ}$. Inserting the linear RSD model of $P^{\rm s}$~\cite{Kaiser87}
\begin{equation}
P^{\rm s}(k,\mu) = \left(b_L+f\mu^2\right)^2P_{\rm lin}(k)
\end{equation}
into Eqs.~(\ref{eq:P_pv}) and ~(\ref{eq:P_kSZ_der}), we obtain
\begin{equation}
\label{eq:P_kSZ_lin}
P^{\rm s,lin}_{\rm kSZ}(k,\mu)=\left(\frac{T_0\tau_{\rm T}}{c}\right)2iaHf\mu(b_L+f\mu^2)\frac{P_{\rm lin}(k)}{k}\,.
\end{equation}
Here $b_L$ is the linear galaxy bias, $P_{\rm lin}(k)$ is the linear dark matter density power spectrum at the redshift $z$. The multipoles of a power spectrum are defined as
\begin{equation}
\label{eq:multipoles}
P_\ell(k) = \frac{2\ell+1}{2}\int^1_{-1}d\mu P(k,\mu){\cal L}_\ell\,,
\end{equation}
in which ${\cal L}_\ell$ is the Legendre polynomial. Then the dipole and the octopole of $P^{\rm s,lin}_{\rm kSZ}$ are
\begin{eqnarray}
\label{eq:P_kSZ_lin_l1}
P^{\ell=1,\rm lin}_{\rm kSZ}&=&2iaH\left(\frac{T_0}{c}\right)\left(b_Lf\tau_{\rm T}+\frac{3}{5}f^2\tau_{\rm T}\right)\frac{P_{\rm lin}}{k}\,, \\
\label{eq:P_kSZ_lin_l3}
P^{\ell=3,\rm lin}_{\rm kSZ}&=&2iaH\left(\frac{T_0}{c}\right)\left(\frac{2}{5}f^2\tau_{\rm T}\right)\frac{P_{\rm lin}}{k}\,.
\end{eqnarray}
As shown in Eq.~(\ref{eq:P_kSZ_lin}), on linear scales $P^{\rm s}_{\rm kSZ}$ can be decomposed into two terms of $\tau_{\rm T}$ and $f$ ($f\tau_{\rm T}\mu$ and $f^2\tau_{\rm T}\mu^3$) with different $\mu$ dependence. Likewise, the dipole in Eq.~(\ref{eq:P_kSZ_lin_l1}) contains terms of $f\tau_{\rm T}$ and $f^2\tau_{\rm T}$, while the octopole in Eq.~(\ref{eq:P_kSZ_lin_l3}) only contains $f^2\tau_{\rm T}$. This anisotropic property of the redshift space kSZ power spectrum, induced by the RSD effect, can thus spontaneously break the $\tau_{\rm T}-f$ degeneracy; this illustrates the main idea of this paper on linear scales.
Despite this qualitative demonstration, the quantitative level of the degeneracy breaking is limited on linear scales. For example, since $b_{\rm L}$ is usually larger than 1 in Eq.~(\ref{eq:P_kSZ_lin}) and $f\mu^2$ is always smaller than 1, the term $\propto b_{\rm L}f\tau_{\rm T}$ dominates the linear power spectrum and diminishes the level of the degeneracy breaking on linear scales. This situation is illustrated in the top panel of Fig.~\ref{fig:derivative}, where we plot the absolute values of the derivatives $\partial P^{\ell=1,3}_{\rm kSZ}/\partial(f,\tau_{\rm T})$ of the linear model. The ways in which the linear $P^{\ell=1,3}_{\rm kSZ}$ respond to $f$ and $\tau_{\rm T}$ are quite similar to each other at all scales, demonstrating the difficulty of breaking the degeneracy on linear scales, or when the linear model is (wrongly) applied on non-linear scales.
By the same argument, we notice that the linear bias $b_L$ is also degenerate with $f$ and $\tau_{\rm T}$ in Eq.~(\ref{eq:P_kSZ_lin}). Although this degeneracy can be broken in a similar way, we focus on the degeneracy between $\tau_{\rm T}$ and $f$ in this work, and assume an {\it a priori} known bias term directly measured from simulations. Furthermore, we assume a unity velocity bias $b_v\equiv P_{\delta\theta}/P_{\delta\delta}=1$ in this work~\cite{Zheng14b,Junde18vb}, where $\theta\equiv-\nabla\cdot{\bf v}/(aHf)$.
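As a quick numerical cross-check of Eqs.~(\ref{eq:P_kSZ_lin})--(\ref{eq:P_kSZ_lin_l3}), the short script below projects the angular part of the linear model onto the Legendre basis and recovers the $b_Lf+\frac{3}{5}f^2$ and $\frac{2}{5}f^2$ coefficients; the adopted values of $b_L$ and $f$ are purely illustrative.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

b_L, f = 2.0, 0.75          # illustrative values (b_L ~ 2 as measured later; f(z~0.5) ~ 0.75)
angular = lambda mu: f * mu * (b_L + f * mu**2)   # mu-dependence of the linear model, common prefactors dropped

for ell in (1, 3):
    L_ell = Legendre.basis(ell)
    coeff, _ = quad(lambda mu: 0.5 * (2 * ell + 1) * angular(mu) * L_ell(mu), -1.0, 1.0)
    print(ell, coeff)
print("expected:", b_L * f + 0.6 * f**2, 0.4 * f**2)
\end{verbatim}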
\subsection{Non-linear models}
\label{subsec:nonlinear_models}
The situation is quite different on non-linear scales where the linear kSZ model does not work. For deriving the non-linear pairwise kSZ model, we start from a phenomenological RSD model, with a Kaiser term and a Finger-of-God (FoG) term~\cite{Scoccimarro04,Zhangrsd}:
\begin{equation}
P^{\rm s}(k,\mu) = \left(b+fW\mu^2\right)^2P_{\delta\delta}(k)D^{\rm FoG}(k\mu\sigma_v)\,.
\label{eq:rsd}
\end{equation}
Here $b=b(k)$ is the non-linear galaxy bias, $P_{\delta\delta}(k)$ is the non-linear dark matter density power spectrum at redshift $z$, and $W(k)\equiv P_{\delta\theta}/P_{\delta\delta}$ quantifies the non-linear correlation between density and velocity fields~\cite{Zhangrsd}.
Motivated by the perturbation theory and verified with N-body simulations, $W(k)$ can be described by the following fitting function~\cite{Zheng13}
\begin{equation}
\label{eq:W}
W(k) = \frac{1}{1+\Delta\alpha(z)\Delta^2_{\delta\delta}(k,z)}\,,
\end{equation}
where $\Delta^2_{\delta\delta}(k,z) = k^3P_{\delta\delta}/2\pi^2$ is the dimensionless density power spectrum, and the redshift dependence of $\Delta\alpha$ can be fitted well by a linear relation~\cite{Zheng13}
\begin{equation}
\label{eq:Delta_alpha}
\Delta\alpha(z) = \Delta\alpha(z=0)+0.082z = 0.331+0.082z\,.
\end{equation}
In order to compensate for the inaccuracy of our simple Kaiser-term modelling, we allow the functional form of $D^{\rm FoG}$ to vary. We try Gaussian and Lorentzian forms of $D^{\rm FoG}$, namely
\begin{eqnarray}
\label{eq:FoG}
D^{\rm FoG}(k\mu\sigma_v)=
\left\{
\begin{array}{cc}
\exp(-k^2\mu^2\sigma_v^2/H^2)\,\,\,\, {\rm Gaussian}\,, \\
(1+k^2\mu^2\sigma_v^2/H^2)^{-1}\,\,\,\, {\rm Lorentzian}\,.
\end{array}
\right.
\end{eqnarray}
Here $\sigma_v^2$ is the LOS velocity dispersion and $\sigma_v^2=\int f^2H^2P_{\rm lin}(k)dk/6\pi^2$ in linear theory. It will be treated as a free parameter in the model fitting and absorbs systematic errors originating from the model inaccuracies.
By substituting Eqs.~(\ref{eq:rsd}) and~(\ref{eq:FoG}) into Eq.~(\ref{eq:P_kSZ_der}), we derive the corresponding formula of $P^{\rm s}_{\rm kSZ}(k,\mu)$~\footnote{When making the derivative, we adopt this expression for $\sigma_v$: $\sigma_v^2=\int f^2H^2P_{\delta\delta}(k)dk/6\pi^2$}. For Gaussian $D^{\rm FoG}$~\cite{Zheng20},
\begin{eqnarray}
\label{eq:P_kSZ_Gau}
P^{\rm s,Gau}_{\rm kSZ}(k,\mu)&=&\left(\frac{T_0\tau_{\rm T}}{c}\right)2iaHf\mu(b+fW\mu^2)\frac{P_{\delta\delta}(k)}{k} \nonumber \\
&&\times S_{\rm G}(k,\mu) \exp(-k^2\mu^2\sigma_v^2/H^2)\,, \\
S_{\rm G}(k,\mu)&=&W-(b/f+W\mu^2)k^2\sigma_v^2/H^2\,.\nonumber
\end{eqnarray}
For Lorentzian $D^{\rm FoG}$,
\begin{eqnarray}
\label{eq:P_kSZ_Lor}
P^{\rm s,Lor}_{\rm kSZ}(k,\mu)&=&\left(\frac{T_0\tau_{\rm T}}{c}\right)2iaHf\mu(b+fW\mu^2)\frac{P_{\delta\delta}(k)}{k} \nonumber \\
&&\times S_{\rm L}(k,\mu) \frac{1}{1+k^2\mu^2\sigma_v^2/H^2}\,, \\
S_{\rm L}(k,\mu)&=&W-(b/f+W\mu^2)\frac{k^2\sigma_v^2/H^2}{1+k^2\mu^2\sigma_v^2/H^2}\,.\nonumber
\end{eqnarray}
As pointed out in~\cite{Zheng20}, the shape kernel $S(k,\mu)$ is a consequence of the competition between the Kaiser and FoG effects. As $k$ increases, $S(k,\mu)$ decreases from 1 to $-\infty$ and induces the turnover of the sign of $P^{\ell=1}_{\rm kSZ}$ at quasi-linear scales as shown on the top panel of Fig.~\ref{fig:Pkl1l3}.
Depending on which FoG term we choose, and if we set $W=1$ or to be Eq.~(\ref{eq:W}), the non-linear models can be divided into 4 categories in this work, and they are labelled as `NL-Gau', `NL-Gau-W', `NL-Lor', and `NL-Lor-W' separately.
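To illustrate how the shape kernel drives the dipole sign change discussed above, the sketch below evaluates the $\mu$-integral of the NL-Lor model with $W=1$ and locates the turnover scale numerically; the values of $b$, $f$ and $\sigma_v$, as well as the simple $H=100\,h\,$km/s/Mpc convention, are illustrative assumptions rather than the settings used in our fits.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

b, f = 2.0, 0.75
H = 100.0                        # km/s/(Mpc/h), so sigma_v/H is in Mpc/h (illustrative convention)
sigma_v = 350.0                  # km/s, of the order of the linear-theory dispersion

def dipole_integrand(k, mu):
    x2 = (k * mu * sigma_v / H) ** 2
    S_L = 1.0 - (b / f + mu**2) * (k * sigma_v / H) ** 2 / (1.0 + x2)   # shape kernel, W = 1
    return mu * (b + f * mu**2) * S_L / (1.0 + x2)                      # mu-dependent part of NL-Lor

def dipole(k, n_mu=4001):
    mu = np.linspace(-1.0, 1.0, n_mu)
    return 1.5 * 2.0 * np.mean(mu * dipole_integrand(k, mu))            # (3/2) int_{-1}^{1} dmu L_1 * (...)

k_turn = brentq(dipole, 0.02, 1.0)
print(f"dipole changes sign near k ~ {k_turn:.2f} h/Mpc")
\end{verbatim}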
Compared to the linear model, the $k$ and $\mu$ dependence of $P^{\rm s}_{\rm kSZ}$ in the non-linear models is much more complicated. In the bottom panel of Fig.~\ref{fig:derivative}, we display the absolute values of the derivatives $\partial P^{\ell=1,3}_{\rm kSZ}/\partial(f,\tau_{\rm T})$ of the NL-Lor model. The four derivatives differ from one another at $0.1h/{\rm Mpc}<k<1h/{\rm Mpc}$, and these differences enable an easier separation of $f$ and $\tau_{\rm T}$ when we work on non-linear scales. This behavior underpins the main idea of the spontaneous $\tau_{\rm T}-f$ degeneracy breaking in this paper, and we will show how it works in Sec.~\ref{sec:result}.
\section{Mock catalogs}
\label{sec:mock}
\begin{table*}[t]
\begin{tabular}{cccccccccc}
\hline\hline
&FWHM & Noise & Galaxy & Redshift & $V$ & $\bar{n}_{\rm g}$ & $M_{\rm avg}$ & $\tau_{\rm T}$ \\
&$[{\rm arcmin}]$ & $[\mu K\mathchar`-{\rm arcmin}]$ & & & $[(h^{-1}\,{\rm Gpc})^3]$ & $[(h^{-1}\,{\rm Mpc})^{-3}]$ & $[h^{-1}\, M_{\odot}]$ &\\
\hline
CMB-S4 + BOSS & $1$ & $2$ & LRG & 0.5 $(0.43<z<0.70)$ & 4.0 & $3.2\times10^{-4}$ & $2.6\times10^{13}$ & $4.5\times10^{-5}$\\
CMB-S4 + DESI & $1$ & $2$ & LRG & 0.8 $(0.65<z<0.95)$ & 13.5 & $2\times10^{-4}$ & $2.6\times10^{13}$ & $6\times10^{-5}$\\
\hline\hline
\end{tabular}
\caption{The assumed survey specifications. The first two columns show the beam size and detector noise of a CMB-S4-like CMB experiment; we assume an ideal beam size of 1 arcmin. The third to eighth columns show the type of galaxies, the redshift $z$, the comoving survey volume $V$, the mean comoving number density $\bar{n}_{\rm g}$, the average halo mass $M_{\rm avg}$ measured from the simulation data, and the average optical depth $\tau_{\rm T}$ of the galaxy catalog calculated assuming a Gaussian projected gas profile~\cite{Zheng20}.}
\label{table:survey}
\end{table*}
\begin{figure
\begin{center}
\includegraphics[width=\columnwidth]{./fig/bias.pdf}
\end{center}
\caption{The measured galaxy density bias $b(k)=P_{\rm gm}/P_{\rm mm}$ from simulations. }
\label{fig:bias}
\end{figure}
In the following sections we will compare our models to mock galaxy catalogs and reveal the $\tau_{\rm T}-f$ degeneracy breaking. The mocks are constructed from the GR part of the ELEPHANT simulation suite~\cite{Cautun2018} run by the \texttt{ECOSMOG} code \citep{ECOSMOG,Bose2017}. The simulation has a box size of $1024{\rm Mpc}/h$ and a dark matter particle number of $1024^3$. There are in total five independent realizations run under the $\rm \Lambda CDM$ model, and the background expansion is quantified by the WMAP9 cosmology \citep{WMAP9}, $\{\Omega_{\rm b}, \Omega_{\rm CDM}, h, n_s, \sigma_8 \} = \{0.046, 0.235, 0.697, 0.971, 0.82\}$.
We analyze two mock catalogs from simulation snapshots of $z=0.5$ and $z=0.8$ in this work. They are constructed to separately mimic the CMASS sample of the BOSS survey \citep{Manera2013} and the LRG galaxy sample of the DESI survey between $z= 0.65$ and $z=0.95$ \citep{DESI16I, Sugiyama18}. The methodology for generating the galaxy mocks is as follows.
At $z=0.5$, the mock galaxy catalogs are generated by applying the HOD model suggested in \citep{ZhengZheng2007} to the dark matter halo catalogs generated by \texttt{ROCKSTAR}~\citep{Rockstar}. The best-fitted HOD parameters are from the CMASS data \citep{Manera2013}. At $z=0.8$, we simply select all dark matter halos with $M>10^{13}M_\odot/h$ to represent the LRG galaxies. In Table~\ref{table:survey} we list some details of the mock catalogs, and a more thorough description of the simulations and catalogs can be found in \citep{Cautun2018,Cesar2019,Alam2021}.
In Fig.~\ref{fig:bias} we show the measured density biases $b(k)$ of the constructed mocks, evaluated by the ratio between the measured galaxy-matter cross-power spectrum and the matter-matter auto-power spectrum, $b(k)=P_{\rm gm}/P_{\rm mm}$. The bias measured by this estimator deviates from the true $b(k)$ by a correlation coefficient $\eta \le 1$ between the galaxy and matter density fields. This coefficient tends to be unity at linear scales, and its impact at small scales will be partially accounted for by the nuisance parameter $\sigma_v$ in our model fitting. In a realistic data analysis, the potential systematic errors coming from $\eta$ will be avoided by a robust model of $b(k)$~\cite{McDonald_bias,Hector_bias,Beutler_bias,Zheng16c}. We estimated the linear density bias $b_L$ by averaging $b(k)$ at $k<0.06h/{\rm Mpc}$, and the measured $b_L =\{2.00,2.48\}$ at $z=\{0.5,0.8\}$.
\section{Methodology}
\label{sec:method}
In this section we present the methodologies we apply for calculating $P^\ell_{\rm kSZ}(k,\mu)$ from simulations and for forecasting the model parameter constraints in this work.
\subsection{Pairwise kSZ multipoles}
\begin{figure
\begin{center}
\includegraphics[width=\columnwidth]{./fig/Pkl1l3.pdf}
\end{center}
\caption{The pairwise kSZ multipoles with error bars. $\Delta_{\rm kSZ}^{\ell=1,3}(k)\equiv i^\ell k^3P_{\rm kSZ}^{\ell=1,3}/(2\pi^2)$. The dashed lines represent the linear model and the solid lines show the NL-Lor model.}
\label{fig:Pkl1l3}
\end{figure}
As the first step, we move the mock galaxies from their real to redshift space positions with the following formula:
\begin{equation}
\label{eq:mapping}
{\bf s}={\bf r}+\frac{{\bf v} \cdot \hat{n}}{a(z)H(z)}\hat{n}\,,
\end{equation}
where {\bf r} is the real space position and {\bf s} is the redshift space position.
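A minimal implementation of this mapping (plane-parallel LOS along the $z$-axis, periodic box) might look as follows; the box size, cosmology and random inputs are illustrative.
\begin{verbatim}
import numpy as np

def real_to_redshift_space(pos, vel, a, H, box, los=(0.0, 0.0, 1.0)):
    # s = r + (v . nhat) / (a H) nhat, wrapped back into the periodic box
    los = np.asarray(los)
    v_los = vel @ los
    return (pos + np.outer(v_los / (a * H), los)) % box

z = 0.5
H_z = 100.0 * np.sqrt(0.281 * (1 + z) ** 3 + 0.719)    # km/s/(Mpc/h), WMAP9-like flat LCDM
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1024.0, size=(10_000, 3))       # Mpc/h
vel = rng.normal(0.0, 300.0, size=(10_000, 3))         # km/s
s = real_to_redshift_space(pos, vel, a=1.0 / (1 + z), H=H_z, box=1024.0)
\end{verbatim}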
For the computation of $P^{\rm s}_{\rm pv}(k,\mu)$, we adopt a field-based estimator which is equivalent to the particle-based one in Eq.~(\ref{eq:P_pv})~\citep{Sugiyama2016},
\begin{equation}
(2\pi)^3\delta_{\rm D}({\bf k}+{\bf k}')P^{\rm s}_{\rm pv}({\bf k})=\left<p_{\rm s}({\bf k})\delta_{\rm s}({\bf k}')-\delta_{\rm s}({\bf k})p_{\rm s}({\bf k}')\right> \,,
\label{eq:P_pv_field}
\end{equation}
where $p_{\rm s}({\bf k})$ and $\delta_{\rm s}({\bf k})$ are the Fourier counterparts of the radial momentum field $p_{\rm s}({{\bf s}})=[1+\delta_{\rm s}({{\bf s}})][{\bf v}({{\bf s}})\cdot \hat{n}]$ and the density field $\delta_{\rm s}({{\bf s}})$ respectively. We sample the $p_{\rm s}({\bf s})$ and $\delta_{\rm s}({\bf s})$ fields on $1024^3$ regular grids using the nearest-grid-point (NGP) method and calculate the $p_{\rm s}({\bf k})$ and $\delta_{\rm s}({\bf k})$ fields by the fast Fourier transform (FFT). Eq.~(\ref{eq:P_pv_field}) shows that the $P^{\rm s}_{\rm pv}$ measurement is in fact a cross-correlation between the momentum and density fields.
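A bare-bones sketch of this estimator (NGP assignment plus FFTs, plane-parallel LOS, no window or shot-noise corrections) is given below; binning the result into $(k,\mu)$ shells with Legendre weights to obtain the dipole and octopole is omitted for brevity, and the grid size and normalization conventions are illustrative.
\begin{verbatim}
import numpy as np

def cross_spectrum_p_delta(pos_s, v_los, box, ngrid=128):
    # NGP-sample the density and radial momentum fields, then form the cross-spectrum
    idx = np.floor(pos_s / box * ngrid).astype(int) % ngrid
    delta = np.zeros((ngrid,) * 3)
    mom = np.zeros((ngrid,) * 3)
    np.add.at(delta, tuple(idx.T), 1.0)
    np.add.at(mom, tuple(idx.T), v_los)
    nbar = delta.mean()
    delta = delta / nbar - 1.0
    mom = mom / nbar                       # grids (1 + delta_s) v . nhat

    dk = np.fft.rfftn(delta)
    pk = np.fft.rfftn(mom)
    vol_fac = (box / ngrid) ** 6 / box**3  # toy FFT normalization
    # <p(k) delta(-k) - delta(k) p(-k)> = 2i Im[p(k) delta*(k)];
    # return the imaginary part, i.e. P_pv up to the conventional factor of i
    return 2.0 * np.imag(pk * np.conj(dk)) * vol_fac
\end{verbatim}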
The measured optical depth of a halo depends on its projected gas density profile and the adopted aperture photometry (AP) filter. Following \cite{Sugiyama18}, we assume a Gaussian gas density profile, and choose an AP filter radius $\theta_{\rm c}$ that can maximize the kSZ detection S/N. We choose a CMB-S4-like survey as our baseline CMB survey~\citep{CMBS42019}, and assume that it has a full overlap between the sky coverage of BOSS and DESI. The chosen specification of the CMB-S4-like survey is shown in Table~\ref{table:survey}. We refer readers to~\cite{Sugiyama18} (Eq. (17) therein) and the appendix of~\cite{Zheng20} for the details of the $\tau_{\rm T}$ calculation. As a result, the estimated $\tau_{\rm T}$'s of two galaxy mocks are separately $\tau_{\rm T} = 4.5\times10^{-5}$ at $z=0.5$ with $\theta_{\rm c} = 1.37'$, and $\tau_{\rm T} = 6.0\times10^{-5}$ at $z=0.8$ with $\theta_{\rm c} = 1.12'$. They will be set to be the fiducial values of $\tau_{\rm T}$ in the following analysis. Since $\tau_{\rm T}$ only affects the overall amplitude of $P^{\rm s}_{\rm kSZ}$, we do not expect that the uncertainty of its estimation will affect the conclusion of this paper.
We also calculate the covariance matrix of $P^{\ell=1,3}_{\rm kSZ}(k)$ theoretically. The detailed methodology is presented in Appendix~\ref{app:covariance}. Although we only consider the Gaussian part of the $P^{\rm \ell=1,3}_{\rm kSZ}$ covariance in our analysis, we find in Appendix~\ref{app:covariance} that this theoretical covariance coincides with the one measured from 100 N-body simulations and remains accurate even at scales reaching $k\sim5\,h/{\rm Mpc}$. Therefore we do not expect this simplification in the covariance matrix calculation to significantly bias our results.
In Fig.~\ref{fig:Pkl1l3}, we plot the measured pairwise kSZ multipoles, $\Delta^{\ell=1,3}_{\rm kSZ}\equiv i^\ell k^3P^{\ell=1,3}_{\rm kSZ}/(2\pi^2)$ (hereafter we will use $P^{\ell}_{\rm kSZ}$ and $\Delta^{\ell}_{\rm kSZ}$ interchangeably to refer to the kSZ power spectrum multipole). On the top panel, the shape kernel $S(k,\mu)$ controls the turnover of $\Delta^{\ell=1}_{\rm kSZ}$ from being negative to positive at quasi-linear scales, showing the significant impact of the FoG effect in the non-linear kSZ effect. For both $\Delta^{\ell=1,3}_{\rm kSZ}$, the linear model deviates from the measurement at very large scales of $k\sim0.05h/{\rm Mpc}$, while our non-linear model can account for the non-linear evolution of the measurement in a much better way.
\subsection{Fisher matrix}
\label{subsec:fisher}
We adopt the Fisher matrix technique to predict the constraints that future surveys will impose on the cosmological parameters around a fiducial cosmology. The Fisher matrix is calculated by
\begin{equation}
\label{eq:fisher_m}
F_{ij} = \frac{\partial X^{\rm T}}{\partial \theta_i}C^{-1}\frac{\partial X}{\partial \theta_j}+(prior)\,.
\end{equation}
Given the vector of model parameters $\Theta = \{\theta_i\}$, $X=X(\Theta)$ is the vector of the measured quantities. $C$ is the covariance matrix, assumed to be independent of $\Theta$. The \textit{prior} term is usually a diagonal matrix with diagonal elements $1/\sigma_i^2$, where $\sigma_i^2$ is the variance of the Gaussian prior on the $i$th parameter. Although we will apply flat priors $[a_i,a_i+b_i]$ in the MCMC analysis, in the Fisher matrix calculation we practically apply a Gaussian prior with $(b_i/2)^2$ as its effective variance.
In an ideal case of the Gaussian likelihood, $\Delta \theta_i\ge(F^{-1})^{1/2}_{ii}$ according to the Cram\'er–Rao inequality. Therefore we can place a firm lower limit on the parameter error bar $\Delta \theta_i$ that can be attained from surveys. We will quote this lower limit as the 1-$\sigma$ parameter constraint forecast hereafter.
In the following calculation of $F_{ij}$, we choose $\Theta=\{f,\log_{10}\tau_{\rm T},\sigma_v\}$ with $f \in [0,1]$, $\log_{10} \tau_{\rm T} \in [-7, -3]$ and $\sigma_v \in [0,1000]$. $X(\Theta)$ is chosen to be $P^{\ell=1,3}_{\rm kSZ}(k)$ at $0.035h/{\rm Mpc}\le k\le k_{\rm max}$ and $k_{\rm max}$ is a certain cut-off scale we can vary. $P^{\ell=1,3}_{\rm kSZ}(k)$ is theoretically evaluated under the same WMAP9 cosmology as the ELEPHANT simulations.
The fiducial values of $\sigma_v$ are chosen to be the averages of the best fitted $\sigma_v$ at different cut-off scales in the MCMC analysis. The non-linear density power spectrum $P_{\delta\delta}$ is calculated by the Halofit operator of~\cite{Mead2015} embedded in the COLIBRI~\footnote{https://github.com/GabrieleParimbelli/COLIBRI} python package. The derivative $\partial X/\partial\theta_i$ is evaluated numerically by varying $\theta^0_i$, the fiducial value of $\theta_i$, with $\pm \lambda\theta^0_i$ ($\lambda=0.01$) and then
\begin{equation}
\frac{\partial X}{\partial\theta_i} = \frac{X(\theta^0_i+\lambda\theta^0_i)-X(\theta^0_i-
\lambda\theta^0_i)}{2\lambda\theta^0_i}\,.
\end{equation}
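In code, this amounts to only a few lines. The helper below (illustrative; $X(\Theta)$ can be supplied by any of the models above) builds $F_{ij}$ with central differences and adds the effective Gaussian priors, after which the forecast errors are $\sqrt{(F^{-1})_{ii}}$:
\begin{verbatim}
import numpy as np

def fisher_matrix(model, theta0, cov, prior_sigma=None, lam=0.01):
    # model(theta) -> stacked data vector, e.g. P_kSZ^{l=1,3}(k); cov = its covariance
    theta0 = np.asarray(theta0, dtype=float)
    cinv = np.linalg.inv(cov)
    derivs = []
    for i in range(theta0.size):
        dt = lam * theta0[i]
        tp, tm = theta0.copy(), theta0.copy()
        tp[i] += dt
        tm[i] -= dt
        derivs.append((model(tp) - model(tm)) / (2.0 * dt))
    F = np.array([[di @ cinv @ dj for dj in derivs] for di in derivs])
    if prior_sigma is not None:
        F += np.diag(1.0 / np.asarray(prior_sigma) ** 2)
    return F

# Cramer-Rao forecast: sigma_i >= sqrt((F^{-1})_{ii})
# sigmas = np.sqrt(np.diag(np.linalg.inv(fisher_matrix(model, theta0, cov, prior_sigma))))
\end{verbatim}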
\subsection{Monte Carlo analysis}
\label{subsec:MCMC}
In this subsection, we describe the MCMC method used to fit all four non-linear models presented in Sec.~\ref{subsec:nonlinear_models} against the measured $\Delta^{\ell=1,3}_{\rm kSZ}$ from the simulations.
The fitted parameters are $f$, $\log_{10} \tau_{\rm T}$ and $\sigma_v$, with uniform priors of $f \in [0,1]$, $\log_{10}\tau_{\rm T} \in [-7, -3]$ and $\sigma_v \in [0,1000]$ .
The likelihood function we choose takes the common form
\begin{equation}
\mathcal{L}_\ell(k_\mathrm{max}) \propto \exp{[-\chi^2_\ell(k_\mathrm{max})/2]} \label{eq:likelihood}
\end{equation}
\noindent where
\begin{equation}
\chi^2_\ell(k_{\mathrm{max}}) = \sum_{k=k_{\rm min}}^{k_{\mathrm{max}}} \frac{\left[\Delta_{\mathrm{kSZ, model}}^\ell(k) - \Delta_{\mathrm{kSZ, data}}^\ell(k) \right]^2}{{\sigma^{\ell}_{k}}^2}\,. \label{eq:chi2}
\end{equation}
\noindent Here $k_{\rm min}=0.035h/{\rm Mpc}$, and $k_{\mathrm{max}}$ denotes the maximum $k$ mode used in the fit between the models and the mock data. As justified in Appendix~\ref{app:covariance}, we ignore the off-diagonal elements of the covariance matrix and only use the diagonal variance ${\sigma^{\ell}_{k}}^2$ in evaluating $\chi^2_\ell$.
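For concreteness, a minimal sketch of the log-posterior corresponding to Eqs.~(\ref{eq:likelihood}) and~(\ref{eq:chi2}), with the flat priors quoted above, is given below; the random-walk Metropolis driver is only a stand-in for whichever sampler one prefers.
\begin{verbatim}
import numpy as np

BOUNDS = [(0.0, 1.0), (-7.0, -3.0), (0.0, 1000.0)]    # flat priors on f, log10 tau_T, sigma_v

def log_posterior(theta, k, data, sigma, model):
    if any(not (lo <= t <= hi) for t, (lo, hi) in zip(theta, BOUNDS)):
        return -np.inf
    resid = model(k, *theta) - data                    # stacked Delta_kSZ^{l=1,3}(k)
    return -0.5 * np.sum((resid / sigma) ** 2)         # diagonal covariance only

def metropolis(log_post, theta0, n_steps, step, rng=np.random.default_rng(0), **kw):
    chain = [np.asarray(theta0, dtype=float)]
    lp = log_post(chain[0], **kw)
    for _ in range(n_steps):
        prop = chain[-1] + step * rng.normal(size=chain[-1].size)
        lp_prop = log_post(prop, **kw)
        if np.log(rng.uniform()) < lp_prop - lp:
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)
\end{verbatim}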
\section{Results}
\label{sec:result}
\begin{figure*}[t]
\includegraphics[width=0.98\columnwidth]{./fig/fisher_linear_z0d5.pdf}
\includegraphics[width=\columnwidth]{./fig/fisher_linear_z0d8.pdf}
\caption{Fisher matrix results for the linear model. The solid contours denote the Fisher matrix analysis combining $P^{\ell=1}_{\rm kSZ}$ and $P^{\ell=3}_{\rm kSZ}$, and the dashed contours are for those with $P^{\ell=1}_{\rm kSZ}$ only. Different colors represent different cut-off scales $k_{\rm max}$ in the Fisher matrix calculation.}
\label{fig:fisher_linear}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=\columnwidth]{./fig/fisher_nonlinear_z0d5.pdf}
\includegraphics[width=\columnwidth]{./fig/fisher_nonlinear_z0d8.pdf}
\caption{Same as Fig.~\ref{fig:fisher_linear}, except for the NL-Lor model.}
\label{fig:fisher_nonlinear}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=1.7\columnwidth]{fig/contour-z0d5-b_array-P_new.pdf}
\caption{MCMC results for the NL-Lor model at $z=0.5$. The solid contours denote the MCMC analysis combining $P^{\ell=1}_{\rm kSZ}$ and $P^{\ell=3}_{\rm kSZ}$, and the dashed contours are for those with $P^{\ell=1}_{\rm kSZ}$ only. The marginalized 1-D PDFs of $f$ and $\log_{10}\tau_{\rm T}$ are shown on the top left and bottom right panels respectively. Different colors represent different cut-off scales $k_{\rm max}$ in the MCMC fitting.}
\label{fig:MCMC_z0d5}
\end{figure*}
\begin{figure*}[t]
\includegraphics[width=1.7\columnwidth]{fig/contour-z0d8-b_array-P_new.pdf}
\caption{Same as Fig.~\ref{fig:MCMC_z0d5}, but at $z=0.8$}
\label{fig:MCMC_z0d8}
\end{figure*}
In this section we will first present results of the Fisher matrix analysis, showing the $\tau_{\rm T}-f$ degeneracy breaking in redshift space, in particular at non-linear scales. Then this main result will be further validated by the MCMC analysis.
\subsection{Fisher matrix results}
\label{subsec:fisher_result}
In Fig.~\ref{fig:fisher_linear}, we display the Fisher matrix constraints on $\log_{10}\tau_{\rm T}$ and $f$ with the linear model at different cut-off scales.
By comparing the results using both $\Delta^{\ell=1,3}_{\rm kSZ}$ (`$\ell=1+\ell=3$' case) with those using only $\Delta^{\ell=1}_{\rm kSZ}$ (`$\ell=1$' case), we confirm that the RSD effect encoded in the measured $P^{\rm s}_{\rm kSZ}$ can break the $\tau_{\rm T}-f$ degeneracy and shrink the contour length along the degeneracy direction. Yet as discussed in Sec.~\ref{subsec:linear_model}, in linear theory the degeneracy is difficult to break further when we increase the cut-off scale in the analysis. This is confirmed in Fig.~\ref{fig:fisher_linear}, where the contours do not shrink notably as the cut-off scale $k_{\rm max}$ ranges from $0.1h/{\rm Mpc}$ to $0.5h/{\rm Mpc}$.
For a realistic estimation of the $\tau_{\rm T}-f$ contour on non-linear scales, we take a further step and utilize the non-linear models of Sec.~\ref{subsec:nonlinear_models} in the analysis. All four models give similar results. Since the `NL-Lor' model turns out to be the relatively most accurate of these variants in the following MCMC analysis (as detailed in Appendix~\ref{app:accuracy}), we show its Fisher matrix contours in Fig.~\ref{fig:fisher_nonlinear}.
The further $\tau_{\rm T}-f$ degeneracy breaking due to the non-linear evolution of the density and velocity fields becomes manifest in Fig.~\ref{fig:fisher_nonlinear}. Although the width of the contour perpendicular to the degeneracy direction is enlarged due to the additional marginalized nuisance parameter $\sigma_v$, the degeneracy direction gradually rotates from $k_{\rm max}=0.1h/{\rm Mpc}$ to $k_{\rm max}=0.5h/{\rm Mpc}$, and the length of the contour considerably decreases. This is clear evidence that, due to the different sensitivities of $P^{\rm s}_{\rm kSZ}$ with respect to $\tau_{\rm T}$ and $f$ generated by the non-linear evolution, we can robustly break the $\tau_{\rm T}-f$ degeneracy on non-linear scales in redshift space.
By comparing the left and right panels of Fig.~\ref{fig:fisher_nonlinear}, we find a stronger degeneracy breaking on the left. This is probably due to the lower density bias of the $z=0.5$ mock compared to that of the $z=0.8$ one. The lower bias implies that the $bf\tau_{\rm T}$ term is less dominant in $P^{\rm s}_{\rm kSZ}$, so that the $\tau_{\rm T}-f$ degeneracy maintained by this term is easier to break. This suggests that the $\tau_{\rm T}-f$ degeneracy breaking will be more efficient for galaxy samples with low density biases in a kSZ observation.
We also compare the `$\ell=1+\ell=3$' case with the `$\ell=1$' one in Fig.~\ref{fig:fisher_nonlinear}. The improvement of the parameter fitting by including $\Delta^{\ell=3}_{\rm kSZ}$ is larger for lower $k_{\rm max}$, and becomes smaller towards higher $k_{\rm max}$. This indicates that the S/N of the kSZ measurement is dominated by $\Delta^{\ell=1}_{\rm kSZ}$ at small scales.
Finally, we notice that the forecasts for $f$ include unphysical results of $f>1$ in Figs.~\ref{fig:fisher_linear} and~\ref{fig:fisher_nonlinear}. This is because we practically (wrongly) apply an equivalent Gaussian prior with $0.5^2$ as the variance and $f_{\rm fid}(z)$ as the mean when we include the prior of $f$ in the Fisher matrix analysis. This unphysical phenomenon will be avoided in the MCMC results of Figs.~\ref{fig:MCMC_z0d5} and~\ref{fig:MCMC_z0d8}.
\subsection{MCMC fitting results}
In order to confirm the Fisher matrix results in a more practical situation, we confront the non-linear theoretical models to the measured $\Delta^{\ell=1,3}_{\rm kSZ}$ of the mock galaxies using the MCMC technique in this section. We simultaneously fit $\{\log_{10}\tau_{\rm T},f,\sigma_v\}$ in the analysis and the fitting results of $\{\log_{10}\tau_{\rm T},f\}$ are shown in Figs.~\ref{fig:MCMC_z0d5} and~\ref{fig:MCMC_z0d8}.
As we can see from the lower left panels of Figs.~\ref{fig:MCMC_z0d5} and~\ref{fig:MCMC_z0d8}, the fitted MCMC contours gradually rotate their degeneracy directions as the adopted $k_{\rm max}$ increases, while the contour sizes shrink. This is consistent with the Fisher matrix forecasts, and it further consolidates our statement that the non-linear structure evolution in redshift space robustly breaks the $\tau_{\rm T}-f$ degeneracy in a spontaneous way.
We also compare MCMC results using both $\Delta^{\ell=1,3}_{\rm kSZ}$ with those using only $\Delta^{\ell=1}_{\rm kSZ}$. Similar to the Fisher matrix analysis, the additional information of $\Delta^{\ell=3}_{\rm kSZ}$ helps tighten the $\tau_{\rm T}-f$ constraint at small $k_{\rm max}$ cases, while the improvement diminishes when $k_{\rm max}$ is large.
There is a clear difference between MCMC and Fisher matrix results. In Figs.~\ref{fig:MCMC_z0d5} and~\ref{fig:MCMC_z0d8}, there are evident cuts at $f=1$ on the $\log_{10}\tau_{\rm T}-f$ contours. This is due to the correct implementation of the flat $f\in[0,1]$ prior in the MCMC analysis.
Furthermore, a unique advantage of the MCMC analysis is that besides the precision forecast, it can test the model accuracy as well. As shown in Figs.~\ref{fig:MCMC_z0d5} and~\ref{fig:MCMC_z0d8}, the NL-Lor model is accurate within roughly $1-2\sigma$ level in predicting $\log_{10}\tau_{\rm T}$ and $f$ even at non-linear scales of $k>0.3h/{\rm Mpc}$. Considering that our models are simple and phenomenological, this level of accuracy is encouraging and we expect that further refinements on the current models will improve their accuracy. The detailed accuracy comparison of different non-linear models is shown in Appendix~\ref{app:accuracy}.
\section{Conclusion and discussion}
\label{sec:discussion}
In a late-time kSZ effect observation, we apply the kSZ tomography technique in redshift space where the velocity field leaves its own distinct imprint on the galaxies' redshift space positions. The extra information of the linear growth rate $f$ can thus be provided by the RSD effect encoded in the observed kSZ effect itself, allowing for the spontaneous breaking of the $\tau_{\rm T}-f$ degeneracy. We adopt the Fisher matrix and MCMC techniques to validate this idea in this work. Both methods yield positive and consistent results, finding that the level of this degeneracy breaking
is further enhanced on non-linear scales due to the non-linear evolution of the density and velocity
fields. This result highlights the importance of the redshift space analysis of the pairwise kSZ effect in a kSZ observation, in particular on non-linear scales.
In order to fully explore its cosmological information, an accurate non-linear model of the kSZ effect in redshift space is needed. We develop a phenomenological model of the density-weighted pairwise kSZ power spectrum in this work. For a DESI+CMB-S4-like survey combination, its fitted values of the linear growth rate and the optical depth are accurate within $1-2\sigma$ ranges of the fiducial ones, even down to small scales around $k=0.5h/{\rm Mpc}$.
Alternative models of the pairwise kSZ effect exist in the literature. For example, in \cite{Sugiyama2016}, the authors applied $P^{\rm s}(k)$ calculated by Lagrangian perturbation theory at zeroth (Zel'dovich approximation) and one-loop order to Eq.~(\ref{eq:P_kSZ_der}) and derived $P^{\rm s}_{\rm pv}$. The model worked well for dark matter and can be used for halos by incorporating scale-dependent density and velocity bias models. In \cite{Okumura2014}, the authors studied the density-weighted velocity statistics in redshift space with the distribution function approach (DFA), in which one-loop SPT and a free parameter describing the non-linear velocity dispersion were adopted. This model has been successfully utilized to describe the observed pairwise kSZ correlation function from BOSS+ACT~\cite{DeBernardis17}. Furthermore, a deep learning technique has been proposed to reconstruct the velocity field from the kSZ effect, avoiding an independent estimation of the optical depth~\cite{WangYY2021}.
As a summary, we expect that the $\tau_{\rm T}-f$ degeneracy can be robustly broken by carefully studying the pairwise kSZ effect in redshift space and on non-linear scales. In this sense, the prospect of the kSZ effect being an important cosmological probe is promising.
\section*{Acknowledgments}
We acknowledge helpful discussions with R.~L.~Greene, S.~A.~Kivelson, S.~Raghu, B.~S.~Shastry, and J.~Zaanen.
This work was supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences,
Division of Materials Sciences and Engineering.
EWH was supported by the Gordon and Betty Moore Foundation EPiQS Initiative through the grants GBMF 4305 and GBMF 8691.
Computational work was performed on the Sherlock cluster at Stanford University and on resources of the National Energy Research Scientific Computing Center, supported by the U.S. DOE, Office of Science, under Contract no. DE-AC02-05CH11231.
The data and analysis routines (Jupyter/Python) needed to reproduce the figures can be found at \url{https://doi.org/10.5281/zenodo.7250161}.
\section{Perturbative calculations} \label{integral}
\subsection{Derivation of Polarization operator }
We start with the definition of the polarization operator,
\begin{eqnarray}
\Pi_{s,s'}(q,\omega)&=&-i \int \frac{dk}{2\pi}\frac{d\omega'}{2\pi}G^s_0(k,\omega') G^{s'}_0(k-q,\omega'-\omega).
\end{eqnarray}
Inserting the expressions of the propagators,
\begin{eqnarray}
\Pi_{s,s'}(q,\omega)&=&-i \int \frac{dk}{2\pi}\frac{d\omega'}{2\pi} \frac{1}{\omega'-s\epsilon(k)+i\delta \text{sgn}(\omega')} \frac{1}{\omega'-\omega-s'\epsilon(k-q)+i\delta \text{sgn}(\omega'-\omega)}.
\end{eqnarray}
Note that in the integral, the effect of $\text{sgn}(\omega')$ is the same as that of $s$. Therefore we define
\begin{eqnarray}
f_{s,s'}(k,q)=\int_{-\infty}^{\infty} \frac{d\omega'}{2\pi} \frac{1}{\omega'-s\epsilon(k)+is\delta } \frac{1}{\omega'-\omega-s'\epsilon(k-q)+i s' \delta }
\end{eqnarray}
Now it is purely a matter of applying the residue theorem from complex analysis. This gives
\begin{eqnarray}
&&\text{if } s=s',\quad f_{s,s'}(k,q)=0\\
&&\text{if } s=- s', \quad f_{s,s'}(k,q)= is \frac{1}{-\omega+s\epsilon(k)+s\epsilon(k-q)-i s\delta}=-i \frac{1}{s\omega- \epsilon(k)- \epsilon(k-q)+i \delta}\nonumber
\end{eqnarray}
Thus one reaches the integral expression,
\begin{eqnarray}
\Pi_{s,s'}(q,\omega)=-(\sigma^{s,s'}_x)\int \frac{dk}{2\pi} \frac{1}{s\omega- \epsilon(k)- \epsilon(k-q)+i \delta}.
\end{eqnarray}
One may use the energy expression and redefine the integration variable $k-q/2 \to k$,
\begin{eqnarray}
\label{17}
\Pi_{s,-s}(q,\omega)
&=&- \int_{-\infty}^{+\infty} \frac{dk}{ 2\pi} \frac{1}{ s\omega-2(k-q/2)^2- q^2/2+i \delta}=\int_{-\infty}^{+\infty} \frac{dk}{ 4\pi} \frac{1}{k^2+q^2/4-s \omega/2-i \delta } \nonumber.
\end{eqnarray}
Below we give an analysis based on the sign of $q^2/4-s\omega/2$.
\begin{itemize}
\item If $q^2/4-s\omega/2> 0$, $\delta$ is not important and can be taken as zero. One may need the equation below,
\begin{eqnarray}
\int_{-\infty}^{+\infty} \frac{dz}{2\pi} \frac{1}{z^2+ a^2}=\int_{-\infty}^{+\infty} \frac{dz}{2\pi} \frac{1}{(z+ia)(z-ia)}
\end{eqnarray}
Deforming the contour to the upper arc, one finds
\begin{eqnarray}
\Pi_{s,-s}(q,\omega)
&=& \frac{1}{ 2 \sqrt{ -2s\omega+ q^2} },\quad \text{ if } q^2/4-s\omega/2> 0
\end{eqnarray}
\item If $q^2/4-s\omega/2< 0$, the polarization operator may be rewritten as
\begin{eqnarray}
\Pi_{s,-s}(q,\omega)
&=&\int_{-\infty}^{\infty} \frac{dk}{ 4\pi} \frac{1}{k^2-a^2-i \delta } = \frac{i}{4 } \frac{1}{ \sqrt{|q^2/4-s\omega/2|} }
\end{eqnarray}
where $a^2=|q^2/4-s \omega/2|$
\end{itemize}
Therefore, one can reach a simple expression,
\begin{eqnarray}
\Pi_{s,s'}(q,\omega)= \frac{1}{4} \frac{|s-s'|}{\sqrt{ - 2s\omega + q^2 }}.
\end{eqnarray}
This is Eq.~4 in the main text.
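As a quick numerical check of this closed form (taking $\epsilon(k)=k^2$, $s=+$ and $s'=-$), one can evaluate the $k$-integral directly; as noted above, $\delta$ can be dropped in the regime $q^2/4-s\omega/2>0$, and the chosen values of $q$ and $\omega$ are illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def pi_plus_minus(q, w):
    # Pi_{+,-}(q, w) = -int dk/(2 pi) [w - k^2 - (k - q)^2]^(-1), no pole when q^2/4 - w/2 > 0
    integrand = lambda k: -1.0 / (w - k**2 - (k - q) ** 2) / (2.0 * np.pi)
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

q, w = 1.0, 0.3                                # q^2/4 - w/2 = 0.1 > 0
print(pi_plus_minus(q, w))                     # numerical integral
print(1.0 / (2.0 * np.sqrt(q**2 - 2.0 * w)))   # closed form with s = +1
\end{verbatim}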
\subsection{Derivation of Self-energy}
Correction to the green function may be obtained by inserting the self-energy,
\begin{eqnarray}
\delta G_+(k,\omega)= G_+(k,\omega) \Sigma_+(k,\omega) G_+(k,\omega).
\end{eqnarray}
At the one-loop level, the self-energy is approximated by
\begin{eqnarray}
\Sigma_+(k,\omega)\simeq -i u_0^2 \int \frac{d\nu}{2\pi }\int \frac{dq }{ 2\pi } G_{-}(k-q,\omega-\nu) \Pi_{+,-}(q,\nu)
\end{eqnarray}
Now we extract the imaginary part of $\Sigma_+$.
\begin{eqnarray}
\text{Im} \Sigma_+(k,\omega)\equiv F_1+F_2;
\end{eqnarray}
where $F_{1/2}$ involves the product of two imaginary/real parts of $G$ and $\Pi$,
\begin{eqnarray}
F_1&=&\int \frac{d\nu}{2\pi }\int \frac{dq }{ 2\pi } \text{Im} G_{-}(k-q,\omega-\nu) \text{Im}\Pi_{+,-}(q,\nu),\\
F_2&=& -\int \frac{d\nu}{2\pi }\int \frac{dq }{ 2\pi } \text{Re} G_{-}(k-q,\omega-\nu) \text{Re}\Pi_{+,-}(q,\nu).
\end{eqnarray}
Below we evaluate two integrals of $F_1$ and $F_2$.
{\bf Evaluation of $F_1$.} Note that $\text{Im}G_{-}(k-q,\omega-\nu)=\alpha \pi\delta(\omega-\nu+ \epsilon_{k-q})$ and
\begin{eqnarray}
\text{Im} \Pi_{+-}(q,\nu)=\frac{1}{2\sqrt{2}u\sqrt{ \nu - q^2/2 }}\Theta( \nu - q^2/2)
\end{eqnarray}
Then one may obtain
\begin{eqnarray}
F_1= \frac{ u_0^2 }{2\sqrt{2} } \int \frac{dq }{ 2\pi } \int \frac{d\nu}{2\pi }
\pi\delta(\omega-\nu+ \epsilon_{k-q}) \frac{1}{\sqrt{ \nu - q^2/2 }}\Theta( \nu - q^2/2).
\end{eqnarray}
The delta function can be integrated out firstly,
\begin{eqnarray}
F_1= \frac{ u_0^2 }{4\sqrt{2} } \int \frac{dq }{ 2\pi }
\frac{1}{\sqrt{ \omega + (k-q)^2- q^2/2 }}\Theta( \omega + (k-q)^2- q^2/2) \nonumber
\end{eqnarray}
Use the identity below
\begin{eqnarray}
(k-q)^2-\frac{q^2}{2}=\frac{1}{2}(q-2k )^2-k^2
\end{eqnarray}
Then $F_1$ (denoted $J_+$ below) can be expressed as
\begin{eqnarray}
J_+= \frac{u_0^2}{4\sqrt{2} } \int \frac{dq }{ 2\pi }
\frac{1 }{\sqrt{ \omega + q^2/2-k^2 }}\Theta( \omega + q^2/2-k^2)
\end{eqnarray}
Now consider two different scenarios based on the sign of $\omega/u-k^2$.
{\it (a)} $\omega/u-k^2 \geq 0$. In this case, define $Q^2=2(\omega/u-k^2)$. Then the integral becomes
\begin{eqnarray}
J_+=\frac{u_0^2}{2 } \int_{0}^{\infty} \frac{dq }{ 2\pi }
\frac{ 1}{\sqrt{ q^2 +Q^2 }} =\frac{u_0^2}{2 }\ln \frac{\Lambda+\sqrt{\Lambda^2+Q^2}}{Q} ,\quad Q\geq 0
\end{eqnarray}
where $\Lambda$ is the UV cutoff. For $\Lambda\gg Q$ we may approximate the result as
\begin{eqnarray}
J_+\simeq \frac{u_0^2}{2}\ln
\frac{ \Lambda}{\sqrt{\omega/u-k^2} }
\end{eqnarray}
One may observe a blow-up at the on-shell condition $\omega=uk^2$, which is traced back to an infrared divergence of the integral. It could be avoided if we had $f(q)\simeq q^z$ with $z\geq 1/2$. Even in the off-shell case, $J_+$ is still logarithmically large.
{\it (b)} $\omega/u-k^2 \leq 0$. In this case, write $Q^2=2(k^2-\omega/u)$. The integral becomes
\begin{eqnarray}
J_+= \frac{u_0^2}{2 } \int_{Q}^{\infty} \frac{dq }{ 2\pi }
\frac{1}{ \sqrt{ q^2 -Q^2 }} = \frac{u_0^2}{2}\cosh^{-1}\left(
\frac{\Lambda}{Q}\right) \simeq \frac{u_0^2}{2} \ln \frac{\Lambda}{\sqrt{k^2-\omega/u}}
\end{eqnarray}
In summary, we find an expression for $J_+$ valid for arbitrary sign of $\omega/u-k^2$,
\begin{eqnarray}
J_+\simeq \frac{u_0^2}{2}\ln
\frac{ \Lambda}{\sqrt{|\omega/u-k^2|}}.
\end{eqnarray}
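To make the logarithmic behavior explicit, one may evaluate the $q$-integral numerically and check that, up to an additive constant, it grows as $\ln(\Lambda/\sqrt{|\omega/u-k^2|})$ near the on-shell point. The sketch below uses hypothetical values of $\Lambda$ and $k$, sets $u=u_0=1$, and drops overall prefactors.
\begin{verbatim}
# Check that the q-integral entering F_1 grows like ln(Lambda/sqrt(|w - k^2|))
# as w -> k^2 (u = u_0 = 1; Lambda and k are hypothetical; prefactors dropped).
import numpy as np
from scipy.integrate import quad

Lambda, k = 100.0, 1.0

def q_integral(w):
    f = lambda q: 1.0 / np.sqrt(w + q**2 / 2 - k**2)   # positive for w > k^2
    val, _ = quad(f, 0.0, Lambda)
    return val

for w in [1.1, 1.01, 1.001, 1.0001]:
    # the difference should approach a constant, confirming the log scaling
    print(w, q_integral(w) - np.sqrt(2) * np.log(Lambda / np.sqrt(w - k**2)))
\end{verbatim}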
{\bf Evaluation of $F_2$.}
In the previous calculation we only needed the imaginary parts.
The real part of the polarization is
\begin{eqnarray}
\text{Re} \Pi_{+-}(q,\omega)= \frac{1}{2\sqrt{2}\sqrt{ - \omega/u+ q^2/2 }} \Theta(- \omega/u+ q^2/2 )
\end{eqnarray}
The real part of the propagator is given by the principal value (which amounts to removing the pole contribution),
\begin{eqnarray}
\text{Re}G_{-\alpha}(k,\omega)=P \frac{1}{\omega+\alpha uk^2/2}.
\end{eqnarray}
With the variable change $x=q^2/2-\nu $, one may write the integral expression,
\begin{eqnarray}
F_2\simeq -u_0^2\int \frac{dx}{2\pi }\int \frac{dq }{ 2\pi } P \frac{1}{\omega+ x+ k^2/2- kq} \frac{1}{2\sqrt{2} \sqrt{ x}} \Theta(x ).
\end{eqnarray}
As the first step, we consider the integral over $q$,
\begin{eqnarray}
\label{26}
\int \frac{dq }{ 2\pi } P \frac{1}{\omega+ x+ k^2/2-ukq}=\frac{1}{ k}\int \frac{dq }{ 2\pi } P \frac{1}{\omega/ k+x/k+k/2- q} .
\end{eqnarray}
Denoting $Y=\omega/uk+x/k+k/2$, one may write Eq.~\ref{26} as
\begin{eqnarray}
\frac{1}{uk}\int \frac{dq }{ 2\pi } P \frac{1}{Y- q} =-\frac{1}{uk}(\int_{-\Lambda}^{Y- \epsilon}+\int^\Lambda_{Y+ \epsilon} )\frac{dq }{ 2\pi } \frac{1}{q-Y}
\end{eqnarray}
Notice that the integral above vanishes in the limit $\Lambda/Y \rightarrow \infty$. In the asymptotic regime $\Lambda/Y\gg 1$, Eq.~\ref{26} reduces to
\begin{eqnarray}
-\frac{1}{uk} \ln \frac{\Lambda-Y}{\Lambda+Y}= -\frac{1}{uk} \ln \left(
1-\frac{2Y}{\Lambda+Y}
\right)\simeq \frac{1}{uk} \frac{2Y}{\Lambda }.
\end{eqnarray}
Thus in this limit, $\text{Im}\Sigma_\alpha(k,\omega)$ is small.
Now consider the opposite limit $k=0$, in which $Y$ goes to infinity. The integral expression for $F_2$ is then simple,
\begin{eqnarray}
F_2\simeq -u_0^2\int \frac{dx}{2\pi }\int \frac{dq }{ 2\pi } P \frac{1}{\omega+ x } \frac{1}{2\sqrt{2} \sqrt{ x}} \Theta(x ).
\end{eqnarray}
The integrand no longer depends on $q$, so the $q$ integral immediately gives a factor $\Lambda/\pi$,
\begin{eqnarray}
F_2\simeq -u_0^2 \frac{\Lambda}{\pi} \int_0^\infty \frac{dx}{2\pi } P \frac{1}{\omega+x} \frac{1}{2\sqrt{2}\sqrt{ x}}.
\end{eqnarray}
Now we replace the variable $x=z^2$,
\begin{eqnarray}
F_2= -u_0^2\frac{\Lambda}{2 \sqrt{2}\pi} \int_{-\infty}^\infty \frac{dz}{2\pi } \frac{1}{\omega+z^2} .
\end{eqnarray}
When $\omega>0$, the integral above gives
\begin{eqnarray}
F_2=-u_0^2\frac{\Lambda}{4 \sqrt{2}\pi} \frac{1}{\sqrt{\omega}}
\end{eqnarray}
When $\omega<0$, $F_2$ vanishes. To see this, consider
\begin{eqnarray}
F_2 \propto \int_{0}^\infty \frac{dz}{2\pi } P\frac{1}{-|\omega|+z^2}=\left( \int_{0}^{\sqrt{|\omega|} -\delta}\frac{dz}{2\pi } +\int^{\infty}_{\sqrt{|\omega|} +\delta}\frac{dz}{2\pi } \right) \frac{1}{-|\omega|+z^2}
\end{eqnarray}
Recall that $\int \frac{dz}{z^2-a^2}=\frac{1}{2a} \ln \left|\frac{z-a}{z+a}\right|$.
Thus the two pieces combine to $-\frac{1}{2\sqrt{|\omega|}} \ln \frac{\delta}{2\sqrt{|\omega|}}+\frac{1}{2\sqrt{|\omega|}} \ln \frac{\delta}{2\sqrt{|\omega|}} =0$. Therefore one obtains the expression of $F_2$,
\begin{eqnarray}
F_2=-u_0^2\frac{\Lambda}{4 \sqrt{2}\pi} \frac{1}{\sqrt{|\omega|}} \Theta(\omega)
\end{eqnarray}
Combining $F_1$ and $F_2$, one can obtain Eq.~5 in the main text.
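The frequency integral behind $F_2$ can also be checked numerically: for $\omega>0$ it reproduces the $1/\sqrt{\omega}$ behavior, while for $\omega<0$ the principal value vanishes. The sketch below drops all overall prefactors and uses hypothetical numbers for $\omega$ and the cutoff.
\begin{verbatim}
# Check of the z-integral in F_2 (prefactors dropped, hypothetical omega values):
# for omega > 0 it equals pi/sqrt(omega); for omega < 0 the principal value is ~0.
import numpy as np
from scipy.integrate import quad

def z_integral(omega, cutoff=1e3, eps=1e-4):
    if omega > 0:
        val, _ = quad(lambda z: 1.0 / (omega + z**2), -cutoff, cutoff)
        return val
    a = np.sqrt(-omega)
    # principal value: excise symmetric windows of width eps around z = +/- a
    pieces = [(-cutoff, -a - eps), (-a + eps, a - eps), (a + eps, cutoff)]
    return sum(quad(lambda z: 1.0 / (omega + z**2), lo, hi)[0] for lo, hi in pieces)

print(z_integral(0.5), np.pi / np.sqrt(0.5))   # ~4.44 vs ~4.44
print(z_integral(-0.5))                        # ~0 up to cutoff corrections
\end{verbatim}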
\section{Renormalization group}
In the momentum-shell method, the RG is implemented by (1) integrating out the fast modes, (2) obtaining the effective Lagrangian, and (3) introducing rescaled parameters and field operators (chosen so as to keep the Gaussian action invariant).
We start from the bare theory defined by its Lagrangian/action and separate the field operator into slow and fast modes,
\begin{eqnarray}
\psi(k)=\psi_>(k)+\psi_<(k),\quad >: \Lambda/s<k<\Lambda ,\quad <: 0<k<\Lambda/s.
\end{eqnarray}
The operator $\psi_>(k)$ is defined by $\psi_>(k)=\psi(k)$ if $\Lambda/s<k<\Lambda$, and zero otherwise; $\psi_<(k)$ is defined analogously. Then the partition function can be written as
\begin{eqnarray}
Z=\int D[\psi_>] D[\psi_<] e^{S(\psi_>,\psi_<)},\quad S(\psi_>,\psi_<)=S_0(\psi_>)+S_0(\psi_<)+S_{\text{int}}(\psi_>,\psi_<)
\end{eqnarray}
The effective action for $\psi_<$ is obtained by integrating out the fast modes,
\begin{eqnarray}
Z=\int D[\psi_<]e^{S_0(\psi_<)} \int D[\psi_>] e^{S_0(\psi_>)} e^{S_{\text{int}}(\psi_>,\psi_<)}=Z_> \int D[\psi_<] e^{S_0(\psi_<)} \langle e^{S_{\text{int}}(\psi_>,\psi_<)} \rangle_>
\end{eqnarray}
This equation determines the effective Lagrangian of $\psi_<$.
The average of the interaction exponential is taken with respect to the Gaussian action of $\psi_>$. The technical task is to evaluate this average perturbatively in $S_{\text{int}}$, which follows from
\begin{eqnarray}
\log \langle e^{S_{\text{int}}(\psi_>,\psi_<)} \rangle_> \simeq \langle S_{\text{int}} \rangle_>+ \frac{\langle S_{\text{int}}^2 \rangle_>-\langle S_{\text{int}} \rangle^2_>}{2}+ O(S^3_{\text{int}} )
\end{eqnarray}
This result follows generally from the cumulant expansion; here the dependence of $S$ on the field operators is suppressed. The effective Lagrangian/action can then be obtained perturbatively,
\begin{eqnarray}
S_{\text{eff}}(\psi_<)\simeq S_0(\psi_<)+\langle S_{\text{int}} \rangle_>+ \frac{\langle S_{\text{int}}^2 \rangle_>-\langle S_{\text{int}} \rangle^2_>}{2}+...
\end{eqnarray}
Here we keep terms only up to second order. One can carry out higher-order perturbation theory, which is equivalent to drawing connected Feynman diagrams. Although the first and second orders are written separately, they may contribute to the same coupling constant, and similar overlaps occur at higher orders; one should therefore be careful when arguing that higher-order perturbations do not change the results.
To study the RG flow of the coupling constants in $S_{\text{eff}}(\psi_<)$, one introduces rescaled momenta, frequencies, and field operators,
\begin{eqnarray}
\label{new_v}
k'=s k,\quad \omega'=s^z \omega, \quad \psi'(k')=s^{-(2z+1)/2}\psi_<(k)
\end{eqnarray}
Here $k/\omega$ denote the old momentum/frequency (note that $k$ lies only in the slow momentum region $<$) and $k'/\omega'$ are the new ones. Performing the change of variables of Eq.~\ref{new_v} in $S_{\text{eff}}$, one observes that $S_0$ is invariant under this rescaling (or, more precisely, the non-interacting parameters are invariant). The other coupling constants must likewise be expressed in terms of this new set of variables and fields. Below we treat the theory perturbatively.
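As a small consistency check of the power counting used below, the tree-level rescaling of the quartic coupling can be reproduced symbolically. The sketch below only encodes the bookkeeping of Eq.~\ref{new_v} (three leftover momentum/frequency integrals after the delta functions and four field operators); it is not tied to any specific potential.
\begin{verbatim}
# Tree-level power counting for the quartic coupling under k'=sk, w'=s^z w,
# psi' = s^{-(2z+1)/2} psi_<: the coupling rescales as s^{z-1}.
import sympy as sp

s, z = sp.symbols('s z', positive=True)
measure = s**(-3 * (z + 1))     # three remaining dk dw pairs after the delta functions
fields = s**(2 * (2 * z + 1))   # four fields, each contributing s^{+(2z+1)/2}
scaling = sp.simplify(measure * fields)
print(scaling)                          # -> s**(z - 1)
print(scaling.subs(z, 1), scaling.subs(z, 2))   # marginal at z=1, relevant at z=2
\end{verbatim}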
\subsection{First order in interaction}
The first-order perturbation involves the average,
\begin{eqnarray}
\label{first_order}
\langle S_{\text{int}} \rangle_{0>} &=&\frac{1}{2!2!}\prod_{i=1,2,3,4}\left[
\int_{|k|<\Lambda } \frac{dk_i}{(2\pi)}\int_{-\infty}^{+\infty} \frac{d\omega_i}{2\pi}
\right]u_{i_4 i_3 i_2 i_1}(4321) \langle \bar\psi_{i_4}(4) \bar \psi_{i_3}(3) \psi_{i_2}(2) \psi_{i_1} (1)\rangle_{0>}\nonumber \\
&\times& 2\pi \delta(k_4+k_3-k_2-k_1)2\pi \delta(\omega_4+\omega_3-\omega_2-\omega_1)
\end{eqnarray}
Express 4-point correlation into fast and slow modes,
\begin{eqnarray}
\langle \bar\psi_{i_4} \bar \psi_{i_3} \psi_{i_2} \psi_{i_1} \rangle_{0>} = \langle (\bar\psi_{i_4,<}+\bar\psi_{i_4,>} ) (\bar\psi_{i_3,<}+\bar\psi_{i_3,>} ) ( \psi_{i_2,<}+ \psi_{i_2,>} ) ( \psi_{i_1,<}+ \psi_{i_1,>} ) \rangle_{0>}
\end{eqnarray}
Expanding the brackets, the non-zero contributions fall into three types: (1) four slow fields, giving the tree-level scaling; (2) two slow and two fast fields, giving corrections to the Gaussian part; (3) four fast fields, giving only a constant in the effective action and hence negligible.
\begin{itemize}
\item {\it Tree-level.} At this level one simply performs power counting based on Eq.~\ref{new_v}, which leads to
\begin{eqnarray}
v'(4'3'2'1')= s^{-3(z+1)+2(2z+1)}v(432 1)=s^{z-1}v(432 1).
\end{eqnarray}
The first factor comes from the rescaling of the variables, while the second comes from the field operators. If $z=1$, the perturbation is marginal, which is typical for a Luttinger liquid. If $z=2$, it becomes a relevant perturbation even at the classical level.
\item {\it Correction to the Gaussian action.} Here we consider the symmetric, constant potential. Via some simple combinatorics, one finds terms like
\begin{eqnarray}
-4u_0 \bar{\psi}_{<,-}(4) \psi_{<,-}(3)\langle \bar{\psi}_{>,+} (2) \psi_{>,+} (1) \rangle-4u_0 \bar{\psi}_{<,+}(4) \psi_{<,+}(3) \langle \bar{\psi}_{>,-}(2) \psi_{>,-}(1) \rangle.
\end{eqnarray}
Note that the label $1$ collects both momentum and frequency, so the notation is slightly abused relative to the previous calculation. The average here is simply a Gaussian integral, with
\begin{eqnarray}
\langle \bar{\psi}_{\alpha >} (\omega k) \psi_{\beta >}(\omega' k') \rangle=\frac{\delta_{\alpha\beta}}{i\omega-\alpha uk^2} 2\pi \delta(k-k') 2\pi \delta(\omega-\omega')
\end{eqnarray}
Thus, in Eq.~\ref{first_order}, these delta functions leave only a single integral over the fast modes, and conservation of total momentum forces the momenta of the two slow modes to be equal.
\begin{eqnarray}
\langle S_{\text{int}} \rangle_{0>} &=&
-u_0\int_{|k|<\Lambda/s } \frac{dk}{(2\pi)}\int_{-\infty}^{+\infty} \frac{d\omega}{2\pi}\sum_{\alpha=\pm}
\bar{\psi}_{<,\alpha}(k\omega)\psi_{<,\alpha}(k\omega) \int_{\Lambda/s<|k_1|<\Lambda} \frac{dk_1}{2\pi} \int \frac{d\omega_1}{2\pi} \frac{e^{i\omega_1 0^\alpha}}{i\omega_1+\alpha k_1^2} \nonumber
\end{eqnarray}
The integral can be done explicitly (a numerical check of the shell factor is sketched just after this list), e.g.,
\begin{eqnarray}
\int_{\Lambda/s<|k_1|<\Lambda} \frac{dk_1}{2\pi} \int \frac{d\omega_1}{2\pi} \frac{e^{i\omega_1 0^+}}{i\omega_1+ k_1^2} =\int_{\Lambda>|k_1|>\Lambda/s} \frac{dk_1}{2\pi}= \frac{\Lambda(1-s^{-1})}{\pi}
\end{eqnarray}
Thus the correction to the Gaussian action reads
\begin{eqnarray}
\langle S_{\text{int}} \rangle_{0>} &=&
-u_0\int_{|k|<\Lambda/s } \frac{dk}{(2\pi)}\int_{-\infty}^{+\infty} \frac{d\omega}{2\pi}\sum_{\alpha=\pm}
\bar{\psi}_{<,\alpha}(k\omega)\psi_{<,\alpha}(k\omega) \frac{\Lambda(1-s^{-1})}{\pi}
\end{eqnarray}
This correction is simply a constant (momentum- and frequency-independent) shift of the quadratic action. One can introduce a corresponding counter-term in the bare Lagrangian such that no RG flow is induced in the Gaussian part of the theory.
\end{itemize}
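As mentioned in the last item, the only non-trivial piece of the first-order correction is the fast-mode shell integral, which can be checked numerically. The snippet below uses hypothetical values of $\Lambda$ and $s$ and assumes that the frequency integral has already been carried out.
\begin{verbatim}
# Check of the momentum-shell factor in the first-order correction:
# int over Lambda/s < |k_1| < Lambda of dk_1/(2 pi) = Lambda (1 - 1/s) / pi.
# Lambda and s are hypothetical values.
import numpy as np
from scipy.integrate import quad

Lam, s = 2.0, 1.5
half_shell, _ = quad(lambda k: 1.0 / (2 * np.pi), Lam / s, Lam)
print(2 * half_shell, Lam * (1 - 1 / s) / np.pi)   # factor 2 for the +/- k shells
\end{verbatim}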
\subsection{Second-order in interaction}
Second-order perturbation involves the term below,
\begin{eqnarray}
u_{i_4 i_3 i_2 i_1}(4321) (\bar\psi_{i_4,<}+\bar\psi_{i_4,>} ) (\bar\psi_{i_3,<}+\bar\psi_{i_3,>} ) ( \psi_{i_2,<}+ \psi_{i_2,>} ) ( \psi_{i_1,<}+ \psi_{i_1,>} ) \nonumber\\
\times u_{i'_4 i'_3 i'_2 i'_1}(4'3'2'1') (\bar\psi_{i'_4,<}+\bar\psi_{i'_4,>} ) (\bar\psi_{i'_3,<}+\bar\psi_{i'_3,>} ) ( \psi_{i'_2,<}+ \psi_{i'_2,>} ) ( \psi_{i'_1,<}+ \psi_{i'_1,>} )
\end{eqnarray}
There are $2^8$ terms. Since the average is taken over a Gaussian distribution, only terms containing an even number of fast fields need to be kept.
\begin{itemize}
\item $8 \psi_>/0 \psi_< $. This term only gives a constant contribution.
\item $6 \psi_>/2 \psi_<$. This gives a renormalization of the non-interacting part, but at higher order in $u_0$.
\item $4 \psi_>/4 \psi_< $. This term renormalizes the quartic interaction.
\item $2\psi_>/6\psi_<$, $0\psi_>/8\psi_<$. These terms are indeed generated in the renormalization process.
\end{itemize}
{\it Corrections to the quartic term.}
To list all terms generating quartic corrections, one may organize them by how the remaining slow operators are drawn from the two interaction vertices: (1) the two creation (come-in) operators come from one vertex and the two annihilation (come-out) operators from the other; (2) each vertex supplies one creation and one annihilation operator. By this principle, there are only two types.
To symmetrize the local potential in momentum space, the second type is further split into two. To see how this happens, let us consider the short-hand representation below (which is used to organize the expansion),
\begin{eqnarray}
\label{23}
u(4321) \bar\psi(4) \bar\psi(3) \psi(2)\psi(1) \times u(8765) \bar\psi(8) \bar\psi(7) \psi(6)\psi(5).
\end{eqnarray}
Each field operator here contains both $>$ and $<$. Now consider one piece of contribution in Eq.~\ref{23}
\begin{eqnarray}
u(8765) u(4321)\,\bar\psi_>(4)\, \bar\psi_<(3)\, \psi_>(2)\,\psi_<(1) \, \bar\psi_>(8) \, \bar\psi_<(7) \, \psi_>(6)\, \psi_<(5).
\end{eqnarray}
Note that the average will be done on the $>$ components. Thus a simple re-arrangement leads to
\begin{eqnarray}
-u(8765) u(4321) \bar\psi_<(3) \, \bar\psi_<(7)\, \psi_<(1) \psi_<(5)\,
\langle
\bar\psi_>(4) \, \psi_>(2)\, \bar\psi_>(8)\, \psi_>(6) \,
\rangle
\end{eqnarray}
There is only one way to pair field operators in a connected way,
\begin{eqnarray}
u(8765) u(4321) \bar\psi_<(3) \, \bar\psi_<(7)\, \psi_<(1) \psi_<(5)\,
G(4)\delta(4,6) G(2)\delta(2,8)
\end{eqnarray}
Then, taking the integrals and the delta functions (momentum/energy conservation) into account, one finds that $S_{\text{int}}^2$ contains a contribution of the form
\begin{eqnarray}
\label{ttt}
\delta(4+3-2-1) \bar\psi_<(4) \, \bar\psi_<(3)\, \psi_<(2) \psi_<(1)\, \int d5d6
G(5) G(6) \delta (3+5-1-6) u(6452)u(5361)\nonumber
\end{eqnarray}
In fact, exchanging the roles of $1$ and $2$ leads to another form of the same expression,
\begin{eqnarray}
-\delta(4+3-2-1) \bar\psi_<(4) \, \bar\psi_<(3)\, \psi_<(2) \psi_<(1)\, \int d5d6
G(5) G(6) \delta (3+5-2-6) u(6451)u(5362)\nonumber
\end{eqnarray}
This exchange may seem trivial, but it is important for obtaining a symmetric vertex correction. Since there are in total $2^4$ terms of the second type contributing in the same way as Eq.~\ref{ttt}, one may split them into the so-called $ZS$ and $ZS'$ diagrams. To be consistent with the reference, we perform a change of variables and reach
\begin{eqnarray}
ZS= \bar\psi_<(4) \, \bar\psi_<(3)\, \psi_<(2) \psi_<(1)\, \int d5d6
G(5) G(6) \delta (3+5-1-6) u(6351)u(4526)\\
ZS'=-\bar\psi_<(4) \, \bar\psi_<(3)\, \psi_<(2) \psi_<(1)\, \int d5d6
G(5) G(6) \delta (3+5-2-6) u(6451)u(3526)
\end{eqnarray}
Note that the unit coefficient comes from $8/(2\times 2!\times 2!)=1$. The first type of contribution is the so-called $BCS$ diagram. It contributes as
\begin{eqnarray}
BCS=-\frac{1}{2}\bar\psi_<(4) \, \bar\psi_<(3)\, \psi_<(2) \psi_<(1)\, \int d5d6
G(5) G(6) \delta (5+6-4-3) u(4365)u(6521) \nonumber
\end{eqnarray}
The coefficient $1/2$ comes from $4/8$. We have now represented the one-loop contributions symbolically; one can see that this is equivalent to the usual Feynman diagrams for four-point correlations. Next we consider the loop correction to $u(+-+-)$, i.e., $4=+,3=-,2=+,1=-$.
\begin{itemize}
\item {\it ZS diagram.} For this specific potential, the sub-indices must satisfy
\begin{eqnarray}
63=51=54=26=\{+,-\}.
\end{eqnarray}
It is easy to see that this is impossible; thus the contribution of the ZS diagram to this particular coupling vanishes.
\item {\it ZS' diagram.} For this specific potential, the sub-indices must satisfy
\begin{eqnarray}
64=51=35=26=\{+,-\}.
\end{eqnarray}
It is easy to conclude that $6=-,5=+$. The integral involved reads
\begin{eqnarray}
ZS'= -\int_{\Lambda/s}^{\Lambda} \frac{dk}{2\pi} \int_{-\infty}^{+\infty} \frac{d\omega}{2\pi} G_+(i\omega,k) G_-(i\omega-i\nu,k-q) u(-++-)u(-++-)
\end{eqnarray} where $q/\nu$ is the net incoming momentum/frequency. If we assume they are zero, then
\begin{eqnarray}
ZS'=-2u^2\int_{\Lambda/s}^{\Lambda} \frac{dk}{2\pi} \int_{-\infty}^{+\infty}\frac{e^{i0^+\omega}d\omega}{2\pi} \frac{ 1}{i\omega- k^2} \frac{1}{i\omega+ k^2}
\end{eqnarray}
Use the formula
\begin{eqnarray}
\frac{ 1}{i\omega- k^2} \frac{1}{i\omega+ k^2} = \frac{1}{2k^2} \left(\frac{1}{i\omega-k^2}-\frac{1}{i\omega+k^2} \right)
\end{eqnarray}
The first term does not contribute to the contour integral. Thus,
\begin{eqnarray}
ZS'=2u^2\int_{\Lambda/s}^{\Lambda} \frac{dk}{2\pi} \int_{-\infty}^{+\infty}\frac{e^{i0^+\omega}d\omega}{2\pi} \frac{1}{2k^2} \frac{1}{i\omega+k^2} = u^2\int_{\Lambda/s}^{\Lambda} \frac{dk}{2\pi} \frac{1}{k^2}=\frac{\Lambda^{-1}u^2}{2\pi} (s-1) \nonumber
\end{eqnarray}
The factor of two comes from the two momentum shells at $\pm k$ (this shell integral is also checked numerically just after this list). The result of this calculation does not depend on the regulator $e^{i0^+\omega}$. One would thus observe a weaker CDW instability; namely,
\begin{eqnarray}
\beta_{zs'}(u)=\Lambda \frac{du_{zs'}}{d\Lambda}=\frac{\Lambda^{-1}u^2}{2\pi}
\end{eqnarray}
\item {\it BCS diagram.} Now consider the BCS contribution. Here the sub-indices must satisfy $\{5,6\}=\{+,-\}$, which leaves two options. They contribute
\begin{eqnarray}
BCS&=&-\frac{u^2}{2}\int_{\Lambda/s}^{\Lambda} \frac{dk}{2\pi} \int_{-\infty}^{+\infty} \frac{d\omega}{2\pi} G_+(i\omega,k) G_-(i\nu-i\omega,q-k) \nonumber \\
&-&\frac{u^2}{2}\int_{\Lambda/s}^{\Lambda} \frac{dk}{2\pi} \int_{-\infty}^{+\infty} \frac{d\omega}{2\pi} G_-(i\omega,k) G_+(i\nu-i\omega,q-k)
\end{eqnarray}
Again setting the external frequency and momentum to zero,
\begin{eqnarray}
BCS&=&\frac{u^2}{2}\int_{\Lambda/s}^{\Lambda} \frac{dk}{2\pi} \int_{-\infty}^{+\infty}\frac{d\omega}{2\pi} \left(
\frac{1}{i\omega- k^2} \frac{1}{i\omega- k^2}+\frac{1}{i\omega+ k^2} \frac{1}{i\omega+ k^2}
\right)
\end{eqnarray}
Note that the residue of $1/z^2$ is zero. Thus the BCS contribution is zero.
\end{itemize}
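The ZS$'$ shell integral quoted above can be verified numerically; the snippet below uses hypothetical values of $\Lambda$, $s$, and $u$ and checks only the final momentum integral, after the frequency integral has been performed.
\begin{verbatim}
# Check of the ZS' momentum-shell integral:
# u^2 * int_{Lambda/s}^{Lambda} dk / (2 pi k^2) = u^2 (s - 1) / (2 pi Lambda).
# Lambda, s and u are hypothetical values.
import numpy as np
from scipy.integrate import quad

Lam, s, u = 2.0, 1.2, 0.7
val, _ = quad(lambda k: u**2 / (2 * np.pi * k**2), Lam / s, Lam)
print(val, u**2 * (s - 1) / (2 * np.pi * Lam))   # the two numbers should agree
\end{verbatim}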
Both the tree-level and one-loop terms contribute to the $\beta$-function,
\begin{eqnarray} \frac{du}{dt}=u+\frac{\Lambda^{-1}u^2}{2\pi},
\end{eqnarray}
which is Eq.~9 in the main text.
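To illustrate the flow implied by this $\beta$-function, one can integrate it numerically from a small initial coupling; the initial value chosen below is hypothetical and only meant to display the runaway towards strong coupling.
\begin{verbatim}
# Integrate du/dt = u + u^2/(2 pi Lambda) from a small (hypothetical) initial value.
import numpy as np
from scipy.integrate import solve_ivp

Lam = 1.0
flow = lambda t, u: u + u**2 / (2 * np.pi * Lam)
sol = solve_ivp(flow, (0.0, 3.0), [0.1])
print(sol.y[0])   # u(t) grows monotonically: the coupling is relevant
\end{verbatim}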
\end{document} |
{
"arxiv_id": "2302.13245",
"language": "en",
"timestamp": "2023-02-28T02:14:05",
"url": "https://arxiv.org/abs/2302.13245",
"yymm": "2302"
} | \section{Introduction}
\paragraph{}
The Efficient Market Hypothesis (EMH) states that a market is said to be ‘Efficient’ if the price of a security always ‘fully reflects’ all available information about that security \cite{fama1970efficient}. EMH also implies that security always trades at its fair value making it impossible to buy undervalued securities or sell overvalued securities. However, there have been multiple studies \cite{schwert2003anomalies,shleifer1997survey} that have challenged EMH by empirically proving the existence of arbitrage, indicating market inefficiency, and such inefficiencies or inadequacies can be exploited to make consistent profits. Although pricing models such as the Capital Asset Pricing Model imply EMH \cite{sharpe1964capital}, alternative theories allowing the existence of market anomalies, such as the adaptive market hypothesis, have been introduced and well documented \cite{lo2004adaptive,lo2005reconciling}. Statistical arbitrage is one such existing market anomaly that plays a key role and is often used by hedge funds and portfolio managers on Wall Street to design profitable strategies \cite{choi2014physical}. Even though their origins are not well explained, statistical arbitrage opportunities such as momentum, mean reversion, and pairs trading strategies have been proven to be profitable in multiple markets \cite{avellaneda2008statistical}.
\paragraph{}
Momentum, a form of statistical arbitrage, is the tendency for asset prices to continue moving in the same direction along which the asset has moved in the past. The central idea of momentum is that stock prices and individuals alike overreact or underreact to information \cite{de1985does}. If that is the case, then profitable trading strategies can be designed based on the stock’s past returns \cite{jegadeesh2011momentum}. For instance, Jegadeesh and Titman \cite{jegadeesh1993returns} proved that portfolios based on buying past winners and selling past losers, yielded abnormal returns. This strategy involves creating a long-short portfolio, wherein the assets are either bought or sold based on their “momentum” during the lookback period. Momentum for a particular asset is measured using a relative strength index, and this index acts as a ranking system to select and compile assets into winner or loser baskets. Long-short portfolios are then constructed by taking a long position on the winner basket and taking a short position on the loser basket. The price momentum effect has been heavily researched in the equities market, but its presence can also be seen in other markets; currency exchange and crypto-currency market are examples where momentum portfolios resulted in excessive market returns \cite{asness2013value,liu2020dynamical}.
\paragraph{}
Price Momentum effect as an arbitrage opportunity has been the center of attention for both fund managers \cite{grinblatt1995momentum,menkhoff2005use} and academics alike. This interest is mainly due to the fact that despite there being no consensus to explain the source of momentum profits, it has remained, as we shall see below, a profitable strategy across global markets. In the US market, Jegadeesh and Titman \cite{jegadeesh1993returns}, showed that over medium-horizons, a period of 6 months to 12 months, firms show trend continuation, i.e past winners continue to outperform past losers. Rouwenhorst \cite{rouwenhorst1998international} also documented similar momentum behavior in European markets, by following Jegadeesh and Titman’s approach to portfolio construction and analyzing its performance in 12 European countries. Kan and Kirikos \cite{kan1996now} found the presence of a short-run momentum effect in the Canadian market, while Chui, Wei, and Titman \cite{chui2000momentum} reported momentum profits in most of the Asian markets except Japan. By contrast, De Bondt and Thaler \cite{de1985does} introduced a contrarian momentum effect, that suggested a reversal of performance in returns for individual firms. The study showed that over long horizons, a period of three to five years, past losers tend to outperform past winners.
\paragraph{}
In the Indian context, Sehgal and Balakrishnan \cite{sehgal2002contrarian} obtained results that, for the years 1989 to 1999, supported a weak reversal in long-horizon returns provided a year is skipped between portfolio formation and holding period. The same study further proved a strong continuation pattern in short-horizon returns, also resulting in significantly higher returns than the long-horizon contrarian strategy. Sehgal \cite{sehgal2006rational} show that the momentum returns which are not explained by the CAPM model are explained by the Fama-French three-factor model. They further suggest that rational sources of momentum profits exist. Ansari and Khan \cite{ansari2012momentum} also reported evidence of momentum which, for the years 1995 to 2006, was in conformity with the results observed by Sehgal and Balakrishnan \cite{sehgal2002contrarian}. Ansari and Khan \cite{ansari2012momentum} found that momentum portfolios based on a 3-month lookback and a 3-month holding period resulted in the highest average monthly returns. Garg and Varshney \cite{garg2015momentum} examined the existence of the momentum effect, for the years 2000 to 2013, in four sectors of the Indian economy. The study considered large-cap stocks of the Automotive, Banking, Pharmaceutical, and IT sectors and reported the strongest momentum effect in the Pharmaceutical sector, followed by Automotive and Banking with a similar presence, and the weakest presence in the IT sector. The study further found that portfolios based on long-horizon returns yielded the most profits in all four sectors of the Indian market. Additionally, Mohapatra and Misra \cite{mohapatra2020momentum} examined the effect of momentum for the years 2005 to 2015 and reported abnormal returns for medium horizon portfolios, which were in accordance with the findings reported by Jegadeesh and Titman \cite{jegadeesh1993returns} for the US market. They however found a stark difference: in the Indian equity market, momentum reversal was not observed for long-run holding periods of over two to five years.
\paragraph{}
Over the years, multiple research papers have been introduced attempting to develop alternative momentum strategies and testing them across the Global markets. Moskowitz et al. \cite{moskowitz2012time} introduced the concept of time series momentum as an alternative to the traditional cross-section momentum that was introduced by Jegadeesh and Titman \cite{jegadeesh1993returns}. While the traditional cross-section momentum focuses on the relative returns of the assets, time series momentum focuses on the assets' absolute past return. An autoregressive model was used to predict future excess returns of an asset scaled by volatility based on its lagged past returns scaled by volatility. This predicted future excess return was then used to construct the time series portfolio, and the study found that past 12-month excess returns of an asset return positive profits across 58 asset contracts. George and Hwang \cite{george200452} report momentum profits while using a 52-week high price as a ranking criterion. The study finds that the 52-week high strategy has higher predictive power for future returns than traditional past returns. They further suggest that the nearness of a stock’s price to its 52-week high is information that is easily available to investors, and it is common practice to use it as an “anchor” while assessing an asset's value. Rachev et al. \cite{rachev2007momentum} proposed reward-risk ratios as a criterion to select stocks for the momentum portfolio. In addition to the traditional Sharpe Ratio, the study also used alternative reward-risk ratios such as STARR ratio, and R-ratio to better capture the distributional behavior of price data. These alternative reward-risk portfolios provided better risk-adjusted performance than traditional momentum and traditional Sharpe ratio portfolios. The study further stated that investors who consider cumulative return criterion face heavy-tail distributions and hence higher tail risk when compared to investors who follow alternative reward-risk ratios as a selection criterion.
\paragraph{}
Antonacci \cite{antonacci2012momentum} introduced the concept of dual momentum combining both the relative and absolute momentum of an asset to build a momentum portfolio. The study reported higher risk-adjusted annualized returns than traditional cross-section momentum in the US market. The study used a two-stage selection process, where the assets are selected based on their relative strength momentum and if the selected asset further shows an absolute positive momentum with respect to Treasury bills, only when both conditions are met will the asset selected for the dual momentum portfolio. If the asset fails to show absolute momentum, then the Treasury bill return will be used as a proxy investment, as it can act as a safer alternative and ensures the portfolio is diversified as well. Blitz, Huij, and Martens \cite{blitz2011residual} used residual momentum estimated using Fama and French three-factor model to develop an alternative momentum strategy. The study extended the work of Grundy and Martin \cite{grundy2001understanding} which showed that traditional momentum has substantial exposures to the Fama and French factors. The study showed that using residual momentum significantly reduced dynamic factor exposures of the momentum strategy, resulting in a reduction in the volatility of the strategy. It was further found in this study that residual momentum has similar returns when compared to traditional momentum but at half the volatility, i.e., with roughly double the Sharpe ratio.
\paragraph{}
Choi \cite{choi2014physical} measured the strength of an asset using ‘physical’ momentum as an analogy for price momentum, wherein the behavior of the instrument is assumed to be similar to a one-dimensional particle. The study defined mass and velocity for an asset and used these definitions to calculate the physical momentum of that asset. Choi \cite{choi2014physical} then created portfolios based on physical momentum and established the presence of abnormal profits in the US equity market. In our study, we shall use physical momentum as defined by Choi \cite{choi2014physical} and study its existence and the abnormal profits it entails in the Indian equity market scenario.
\section{Introduction to Physical Momentum}
\subsection{One Dimensional Space}
\paragraph{}
Market information and investor behavior are the main driving forces behind price changes for an instrument. The instrument price of any asset class is a time-dependent variable whose value is always positive. However, to establish a price space for an instrument that is analogous to physical space, the instrument price needs to be extended to the entire real line. One method of extending the price space is to apply a log transformation to the instrument price.
\begin{equation}
x(t)={\log}S(t)
\end{equation}
where $x(t)$ is the log price and $S(t)$ is the instrument price. Logarithmic price is a common transformation used by a majority of technical analysts in the field of asset management. The advantage of log price is not limited to extending the price space; it also captures the price change in terms of percentage rather than in terms of absolute dollars. The distance between prices on the log scale decreases as the price increases. In other words, a {\$}1 increase in price is less significant if the underlying price is much higher, as it corresponds to a smaller percentage change. This ensures that higher percentage moves have more significance in the model when compared to lower percentage moves. Log price further ensures that equal percentage changes are represented by equal vertical distances on the scale.
\subsection{Velocity}
\paragraph{}
Now that a mathematical representation of instrument price in one-dimensional space has been established, the velocity of the instrument price can be calculated. Choi suggested that the log return $R(t)$ can be a representation of log price velocity.
\begin{equation*}
\begin{split}
R(t) &={\frac{{\log}S(t)-{\log}S(t-\Delta t)}{\Delta t}} \\
&={\frac{x(t)-x(t-\Delta t)}{\Delta t}} \\
&={\frac{\Delta x(t)}{\Delta t}} \\
&={\frac{dx(t)}{dt}}=v(t)
\end{split}
\end{equation*}
This relationship between log return and velocity is valid under the assumption that $\Delta t\to 0$. In our case the assumption is reasonable, as the length of our data is large enough that individual time steps can be treated as infinitesimally small.
\paragraph{}
The cumulative return $r(t)$ can be expressed as
\begin{equation*}
\begin{split}
r(t) &={\frac{S(t)-S(t-\Delta t)}{S(t-\Delta t)}} \\
&={\frac{S(t)}{S(t-\Delta t)}-1} \\
&=\exp(R(t))-1 \\
&=\exp(v(t))-1
\end{split}
\end{equation*}
So, the above equation can be re-written as,
\begin{equation}
v(t)={\log}(r(t)+1)
\end{equation}
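The relation above is straightforward to verify on a price series. The snippet below is a minimal illustration on synthetic prices (the numbers are hypothetical) with $\Delta t=1$.
\begin{verbatim}
# Minimal illustration of v(t) = log S(t) - log S(t-1) and v = log(1 + r),
# using a synthetic (hypothetical) price series and Delta t = 1.
import numpy as np

S = np.array([100.0, 102.0, 101.0, 105.0])   # hypothetical prices
v = np.diff(np.log(S))                        # log-return "velocity"
r = S[1:] / S[:-1] - 1                        # simple return
print(np.allclose(v, np.log(1 + r)))          # True
\end{verbatim}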
\subsection{Mass}
\paragraph{}
The efficient market hypothesis, as discussed previously, implies that all market information about an instrument is available to investors and that any change in the price of said instrument is fully explained by any new information available to them. Instrument prices are heavily influenced by investor behavior, which in turn is dictated by the market information available to investors. In practice, information availability among investors is inhomogeneous, and its effective exchange decides whether the observed price change is meaningful. Every instrument is uniquely affected by its respective investor behavior, raising the need for a metric to capture and normalize the correlation between price and behavior across all assets.
\paragraph{}
In physics, mass is a physical property that is unique to a particle, and given constant force, the mass of a particle decides the particle’s acceleration and velocity. In finance, market information and investor behavior are analogous to physical force, and since log return is the velocity of an instrument price, mass for an instrument can also be established. Choi introduces the concept of financial mass, as an analogy to physical mass, which acts as a filter for market information. When compared to traditional momentum, physical momentum applies mass to amplify the change in log price, thereby additionally incorporating instrument-specific market information into the ranking criterion of the momentum strategy.
\paragraph{}
Physical mass and velocity are inversely proportional at constant momentum; similarly, the candidate for financial mass should have an inverse relationship with log return. The liquidity of an instrument is a market feature that measures how quickly the instrument can be bought or sold without a significant change in its price. By this definition, it can be inferred that instruments with high liquidity have a greater number of transactions when compared to illiquid instruments. Additionally, liquidity also influences market efficiency, where a highly liquid market is more efficient, resulting in the price change being more meaningful. Datar et al. \cite{datar1998liquidity} showed an inverse relationship between liquidity and future returns, which suggests that liquidity is a possible candidate for financial mass.
\paragraph{}
Liquid markets exhibit tightness, immediacy, breadth, depth, and resiliency \cite{sarr2002measuring}. Volume-based metrics primarily measure the breadth and depth of market liquidity, while bid-ask spreads measure tightness \cite{datar1998liquidity}. Choi considers transaction value, volatility, and volume as measures of liquidity and as financial masses for the physical momentum portfolio. Volume traded is a direct measure of liquidity for an instrument. However, raw volume must be normalized for the ranking criteria in the momentum strategy to be uniform and homogeneous across all assets. This normalization is to account for the fact that assets differ in the amount of total outstanding shares, and hence some assets inherently have higher daily raw volume when compared to other assets. Turnover rate, expressed as $\upsilon$, which is raw volume divided by total outstanding shares, is used as a candidate for financial mass by Choi and in our paper. Additionally, the Inverse Turnover rate, expressed as $1/{\upsilon}$, is used as a candidate for financial mass in our paper.
\paragraph{}
Volatility is the measure of spread for asset returns. Highly volatile stocks have greater variations in stock price compared to low-volatile stocks. Stocks with high volatility grant massive positive returns at the risk of incurring equally massive negative returns. In either case, absolute log returns are positively correlated with volatility. Volatility is an important metric in the field of finance. Since investors are assumed to be risk-averse, changes in the volatility of an instrument can be understood as a direct consequence of efficient information exchange between investors. Since mass is inversely related to log returns, the inverse of volatility, expressed as $1/{\sigma}$, will be used as a financial mass in our paper.
\subsection{Momentum}
With the analogies of mass, velocity, and a one-dimensional space for the instrument price now defined, the momentum of the instrument can be calculated. Choi applies three different measures of linear momentum, represented as $p^1_{t,k}$, $p^2_{t,k}$, and $p^3_{t,k}$, to measure the performance of stocks in the US capital markets and rank them to form the physical momentum portfolio.
\begin{equation}
p_{t,k}^1\left(m,v\right)=\sum_{i=0}^{k-1}{m_{t-i}v_{t-i}}
\end{equation}
\begin{equation}
p_{t,k}^2\left(m,v\right)=\frac{\sum_{i=0}^{k-1}{m_{t-i}v_{t-i}}}{\sum_{i=0}^{k-1}m_{t-i}}
\end{equation}
\begin{equation}
p_{t,k}^3\left(m,v\right)=\frac{{\bar{v}}_{t,k}}{\sigma_{t,k}}
\end{equation}
where $t$ is the lookback period over which the physical momentum index for the portfolio is calculated, and $k$ is the holding period over which the physical momentum portfolio is held. The mass candidate for the momentum measures $p_{t,k}^1$ and $p_{t,k}^2$ is the turnover rate $\upsilon$, while the mass candidate for the momentum measure $p_{t,k}^3$ is the inverse volatility $1/\sigma$.
\paragraph{}
$p_{t,k}^3$ is a redefined version of $p_{t,k}^1$: for the former, the mass and velocity are calculated over the whole lookback period, whereas for the latter they are calculated at each individual lookback step. With the above three definitions of momentum, momentum portfolios will be created based on varying lookback and holding periods.
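For concreteness, the sketch below computes the three momentum measures for a single stock from its per-step log returns (velocities) and masses over the ranking window; the input numbers are synthetic and purely illustrative.
\begin{verbatim}
# Sketch of the three physical momentum measures for one stock over the ranking
# window (synthetic velocities and turnover; all values are hypothetical).
import numpy as np

def p1(mass, v):
    return np.sum(mass * v)                   # mass-weighted sum of velocities

def p2(mass, v):
    return np.sum(mass * v) / np.sum(mass)    # normalized by the total mass

def p3(v):
    return np.mean(v) / np.std(v, ddof=1)     # mean velocity over its volatility

v = np.array([0.010, -0.020, 0.015, 0.005])   # per-step log returns
turnover = np.array([0.8, 1.1, 0.9, 1.2])     # volume / shares outstanding
print(p1(turnover, v), p2(turnover, v), p3(v))
\end{verbatim}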
\section{Methodology}
\subsection{Portfolio Construction}
\paragraph{}
In our study, we examine the profitability of traditional and contrarian physical momentum strategies for high-frequency, short, medium, and long-horizon time periods. Table~\ref{tab:horizon} maps the different time horizons employed in the study with their respective definitions.
\begin{table}[H]
\caption{Time steps and their respective horizon category definitions as employed in this study.}
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{Time Steps} & \textbf{Lookback Period} & \textbf{Horizon Category} \\
\hline
\textbf{Day} & 1 to 7 Days & High Frequency \\
\hline
\textbf{Week} & 2 to 8 Weeks & Short \\
\hline
\textbf{Month} & 3 to 12 Months & Medium \\
\hline
\textbf{Year} & $\geq 1$ Year & Long \\
\hline
\end{tabular}
\label{tab:horizon}
\end{table}
Our momentum portfolios are built using Jegadeesh and Titman's $J$-month/$K$-month method, where $J$ is the lookback period in months and $K$ is the holding duration in months. Our method broadens the scope of this strategy by extending it over days, weeks, and years. The portfolio is built at time $t=0$, and the physical momentum, as defined in the preceding section, is determined for all of the stocks in our universe from time $t=-J$ to time $t=-1$. The stocks are then ranked and divided into equal groups based on their momentum values. For example, if the universe contains $500$ stocks, $50$ groups with ten stocks each are constructed. These groups are named using Jegadeesh and Titman’s nomenclature, where the top-ranked group is the winner group and is named $R50$, while the bottom-ranked group is the loser group and is named $R1$. In the traditional strategy, when the portfolio is constructed at $t=0$, the winner group is bought, and the loser group is shorted in equal cash-size positions to create a dollar-neutral portfolio. In the contrarian strategy, the dollar-neutral portfolio at time $t=0$ is constructed by short-selling the winners and buying the losers. In either of these strategies, the portfolio is held until $t=K$ and is liquidated at the end of the holding period by closing the winner and loser positions.
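A minimal sketch of this ranking-and-grouping step is given below; the universe, the momentum scores, and the group count are hypothetical, and the snippet only illustrates how the winner ($R50$) and loser ($R1$) baskets would be formed.
\begin{verbatim}
# Sketch of the J/K ranking step: sort by momentum, split into equal groups,
# long the top group and short the bottom group (all inputs are hypothetical).
import numpy as np

def build_baskets(scores, n_groups=50):
    tickers = sorted(scores, key=scores.get)      # ascending momentum
    size = len(tickers) // n_groups
    return tickers[:size], tickers[-size:]        # (losers R1, winners R50)

rng = np.random.default_rng(0)
universe = {f"STOCK{i:03d}": rng.normal() for i in range(500)}  # toy scores
losers, winners = build_baskets(universe)
# traditional: long each winner, short each loser in equal cash-size positions
\end{verbatim}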
\subsection{High Frequency Portfolios}
\paragraph{}
High-frequency portfolios are constructed based on daily time horizons. The lookback period J and the holding period K vary from 1 to 7 days. For each mass candidate in the $p_{t,k}^1$ and $p_{t,k}^2$ momentum criteria, a total of 49 portfolios are created. For $p_{t,k}^3$ momentum, however, this is not the case, because the lookback period J can only take values from 2 to 7 days: the mass of $p_{t,k}^3$ momentum is defined through the historical standard deviation, and a one-step lookback has no standard deviation measure. Based on this limitation, 237 physical momentum portfolios with a time horizon of days are constructed. These high-frequency portfolios are built every day at market open and liquidated at market close on the last day of the holding period, assuming both days are trading days.
\subsection{Short Horizon Portfolios}
\paragraph{}
Short-horizon portfolios are constructed based on weekly time horizons. The lookback period J varies from 2 to 8 weeks, while the holding period K, varies from 1 to 8 weeks. For each mass candidate in the $p_{t,k}^1$, $\;p_{t,k}^2$, and $\;p_{t,k}^3$ momentum definitions, 56 portfolios can be constructed. This generates a total of 280 portfolios with weeks as their time horizon. These short-horizon portfolios are constructed on Monday or the first valid trading day of every week and liquidated on the last valid trading day of their respective holding period.
\subsection{Medium Horizon Portfolios}
\paragraph{}
Medium horizon portfolios are constructed based on monthly time horizons. The lookback period J varies from 3 to 12 months, while the holding period K, varies from 1 to 12 months. For each mass candidate in the $p_{t,k}^1$, $\;p_{t,k}^2$, and $\;p_{t,k}^3$ momentum definitions, a total of 120 portfolios can be constructed. This generates a total of 600 portfolios with months as their time horizon. The first valid trading day of each month is used to build these medium horizon portfolios, which are then liquidated at the conclusion of their respective holding periods.
\subsection{Long Horizon Portfolios}
\paragraph{}
Long-horizon portfolios are constructed based on yearly time horizons. The lookback period J varies from 2 to 5 years, while the holding period K, varies from 1 to 3 years. Since the length of our time period is limited to eight years, combinations of J/K whose sum exceeds eight years will be excluded. Based on this limitation, for each mass candidate in the $p_{t,k}^1$, $\;p_{t,k}^2$, and $\;p_{t,k}^3$ momentum definitions, 12 portfolios can be constructed. This generates a total of 60 portfolios with years as their time horizon. These long-horizon portfolios are constructed on the first valid trading day of every year and liquidated at the end of their respective holding period.
\subsection{Overlapping Portfolios}
\paragraph{}
Overlapping portfolios are a major aspect of the Jegadeesh and Titman \cite{jegadeesh2011momentum} study of momentum. For multi-period holding strategies, i.e., strategies with $K>1$, there exist portfolios that overlap over a period of time. This is because portfolios created in the previous $K-1$ steps have not yet been liquidated when a new portfolio is created. As a consequence, at any given time $t$ between $t=0$ and $t=K$, the number of portfolios held simultaneously equals the holding period $K$; for example, for a $J=2$ month and $K=3$ month strategy, three portfolios are held at any given time.
According to Jegadeesh and Titman \cite{jegadeesh2011momentum}, there is no significant difference in returns between overlapping and non-overlapping portfolios. In addition to this, overlapping portfolios provide the added benefit of diversification. Since our dataset is limited to eight years, constructing non-overlapping portfolios would yield fewer return samples when compared to overlapping portfolios.
\subsection{Portfolio Performance Measurement}
\paragraph{}
To calculate the performance of a zero-cost portfolio, both the long and short portfolios are assumed to be bought, and their respective returns are calculated. The return of the short portfolio is then subtracted from the return of the long portfolio to calculate the zero-cost portfolio return. The winners are bought in a traditional momentum strategy, while the losers are shorted. Hence, the return of a traditional momentum portfolio, represented as $R_p$ is calculated as,
\begin{equation*}
R_p=R_w - R_l
\end{equation*}
where $R_w$ and $R_l$ are the expected returns of the winner and loser groups, respectively. For the contrarian momentum strategy, the winners are shorted, and the losers are bought. Hence, the return of a contrarian momentum portfolio, represented as $R^*_p$ is,
\begin{equation*}
R^*_p=R_l-R_w
\end{equation*}
In our study, the portfolios constructed are paper traded and are not implemented in the real market, hence commissions are not considered while calculating momentum profits.
\subsection{Data Tabulation and Benchmark}
\paragraph{}
A total of 1177 traditional and contrarian momentum portfolios are constructed based on the above-defined time horizons. The results section is divided into four subsections, to discuss the findings for all the above-shown time horizons. For every momentum definition in each of the time horizons, the best-performing portfolio’s statistics and risk measures are tabulated and benchmarked against Nifty 50. Nifty 50 is a well-diversified index consisting of 50 companies and is the most common benchmark used by asset managers in India. Additionally, the familiar CAPM equation will be used to estimate and provide support for the extranormal returns from each of the best-performing physical momentum portfolios.
\section{Results}
\subsection{High Frequency Returns}
\paragraph{}
Table~\ref{tab:daily_returns} contains all high-frequency monthly mean returns and standard deviations of the best-performing portfolios from every mass and momentum combination. All the high-frequency physical momentum portfolios have a higher mean return than the benchmark Nifty index, with $p{\ast}^2(1/\upsilon,R)$ portfolio having the highest monthly mean return of 6.04\%. Additionally, the $p{\ast}^2(1/\upsilon,R)$ portfolio has lower volatility (4.78\%) than the Nifty benchmark (5.03\%).
\begin{table}[H]
\caption{Returns and standard deviation measures of high-frequency physical momentum portfolios in NSE 500.}
\centering
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Portfolio & Strategy & J-K & Basket & Mean & Std. Dev. \\
\hline
\multirow{3}{*}{$p{\ast}^1(\upsilon,R)$} & \multirow{3}{*}{Contrarian} & \multirow{3}{*}{1-2} & Winner (W) & -4.64415 & 7.178647 \\ \cline{4-6}
& & & Loser (L) & -1.65496 & 7.387066 \\ \cline{4-6}
& & & L - W & 2.989188 & 2.61962 \\
\hline
\multirow{3}{*}{$p^1(1/\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{1-2} & Winner (W) & -0.90117 & 6.522656 \\ \cline{4-6}
& & & Loser (L) & -2.34479 & 5.579123 \\ \cline{4-6}
& & & W - L & 1.443627 & 2.420286 \\
\hline
\multirow{3}{*}{$p{\ast}^2(\upsilon,R)$} & \multirow{3}{*}{Contrarian} & \multirow{3}{*}{2-1} & Winner (W) & -9.46681 & 7.500539 \\ \cline{4-6}
& & & Loser (L) & -4.93755 & 7.915264 \\ \cline{4-6}
& & & L - W & 4.529257 & 4.696118 \\
\hline
\multirow{3}{*}{$p{\ast}^2(1/\upsilon,R)$} & \multirow{3}{*}{Contrarian} & \multirow{3}{*}{3-1} & Winner (W) & -8.98657 & 7.605155 \\ \cline{4-6}
& & & Loser (L) & -2.94504 & 7.076357 \\ \cline{4-6}
& & & L - W & 6.041535 & 4.780503 \\
\hline
\multirow{3}{*}{$p{\ast}^3(1/\sigma,R)$} & \multirow{3}{*}{Contrarian} & \multirow{3}{*}{3-1} & Winner (W) & -4.51109 & 5.747774 \\ \cline{4-6}
& & & Loser (L) & -1.65428 & 6.08766 \\ \cline{4-6}
& & & L - W & 2.856803 & 3.646641 \\
\hline
\end{tabular}
\label{tab:daily_returns}
\end{table}
\paragraph{}
All of the best-performing portfolios, with the exception of $p^1\left(1/\upsilon,R\right)$, are contrarian, establishing a reversal in high-frequency returns. $p^1(1/\upsilon,R)$ momentum is the only non-contrarian portfolio whose monthly mean return beat the benchmark; however, it is also the portfolio with the lowest monthly mean return and standard deviation. The winner and loser groups of all five physical momentum portfolios have negative mean returns, regardless of the strategy. This suggests that the loser group in all momentum criteria shows stronger momentum continuation, while the winner groups show a strong reversal. This additionally proves that the profits gained by dollar-neutral portfolios based on the traditional (contrarian) approach are purely attributable to the winner (loser) group relatively outperforming the loser (winner) group.
\paragraph{}
It can also be seen that dollar-neutral portfolios exhibit lower volatilities when compared to the individual winner and loser groups that constitute them. This reduction in volatility can be attributed to the diversification effect introduced by the assets of the winner and loser groups. P2 portfolios outperformed both P1 and P3 portfolios in terms of mean returns. This is because the winner groups of P2 contrarian portfolios show stronger reversals. However, P2 portfolios exhibit twice the volatility when compared to P1 and P3 portfolios making them riskier. The P3 portfolio has the least return among contrarian portfolios and relatively higher volatility, making it unsuitable for investment.
\begin{table}[H]
\caption{Risk measures of high-frequency physical momentum portfolios in NSE 500.}
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
Portfolio & Strategy & J-K & Basket & Fin. Wealth & Sharpe Ratio & VaR$_{95\%}$ & MDD\\
\hline
$p{\ast}^1(\upsilon,R)$ & Contrarian & 1-2 & (L - W) & 4.117029 & 0.550245 & 3.028288 & -12.5527 \\
\hline
$p^1(1/\upsilon,R)$ & Traditional & 1-2 & (W - L) & 1.906175 & 0.295709 & 2.57145 & -14.8726 \\
\hline
$p{\ast}^2(\upsilon,R)$ & Contrarian & 2-1 & (L - W) & 8.290232 & 0.461488 & 4.423333 & -22.1972 \\
\hline
$p{\ast}^2(1/\upsilon,R)$ & Contrarian & 3-1 & (L - W) & 17.08302 & 0.617544 & 3.513963 & -13.6907 \\
\hline
$p{\ast}^3(1/\sigma,R)$ & Contrarian & 3-1 & (L - W) & 3.689321 & 0.379053 & 4.471247 & -23.4431 \\
\hline
\end{tabular}
\label{tab:daily_risks}
\end{table}
\paragraph{}
Table~\ref{tab:daily_risks} contains the risk measures of all the high-frequency dollar-neutral portfolios. All contrarian strategies have outperformed the benchmark Nifty in every risk metric. The momentum portfolios’ 95\% VaR values range from 2.5\% to 4.7\% and are at least 25\% lower than the benchmark Nifty’s 95\% VaR value. Similarly, the maximum drawdowns range from -12.5\% to -23.4\% and are reduced by a minimum of 20\% when compared to the benchmark Nifty’s -29.3\% drawdown. The outperformance of every contrarian portfolio in terms of final wealth can be explained as a result of adopting lower risk. The exception is $p^1\left(1/\upsilon,R\right)$ portfolio which failed to outperform the benchmark in terms of final wealth even with lower risk measures.
\paragraph{}
Figure~\ref{fig:daily} shows the historical daily portfolio values of all high-frequency momentum portfolios along with the Nifty benchmark. Similar to our findings in monthly mean returns, P2 momentum portfolios outperform P1 and P3 portfolios in terms of final wealth. ${p\ast}^2(1/\upsilon,R)$ is the best-performing portfolio yielding a 16-fold increment over the initial investment. From the figure, it can be seen that prior to the Covid-19 market crash in March 2020, the P3 portfolio underperformed when compared to the Nifty benchmark. However, the P3 portfolio managed to outperform the Nifty benchmark by taking advantage of the post-Covid-19 bull run observed in the Indian market \cite{dhall2020covid}. With the exception of $p^1\left(1/\upsilon,R\right)$ portfolio, all the high-frequency momentum portfolios managed to leverage this bull run.
\begin{figure}[ht]
\centering
{\includegraphics{daily.eps}}
\caption{Cumulative returns for $p{\ast}^1\left(\upsilon,R\right)$ (Blue), $p^1\left(1/\upsilon,R\right)$ (Yellow), $p{\ast}^2\left(\upsilon,R\right)$ (Green), $p{\ast}^2\left(1/\upsilon,R\right)$ (Red), $p{\ast}^3\left(1/\sigma,R\right)$ (Purple) high-frequency physical momentum portfolios vs Nifty 50 (Brown) in Indian Market.}
\label{fig:daily}
\end{figure}
\subsection{Short Horizon Returns}
\paragraph{}
Table~\ref{tab:weekly_returns} contains all short horizon monthly mean returns and standard deviations of the best-performing portfolios from every mass and momentum combination. All the short horizon physical momentum portfolios have a higher mean return than the benchmark Nifty index, with $p^2(1/\upsilon,R)$ portfolio having the highest monthly mean return of 2.54\%. Additionally, the $p^2(1/\upsilon,R)$ portfolio has lower volatility (4.42\%) than the Nifty benchmark (5.03\%).
\begin{table}[]
\caption{Returns and standard deviation measures of short horizon physical momentum portfolios in NSE 500.}
\centering
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Portfolio & Strategy & J-K & Basket & Mean & Std. Dev. \\
\hline
\multirow{3}{*}{$p^1(\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{7-1} & Winner (W) & 0.174126 & 8.661214 \\ \cline{4-6}
& & & Loser (L) & -1.99152 & 9.896505 \\ \cline{4-6}
& & & W - L & 2.16565 & 4.473935 \\
\hline
\multirow{3}{*}{$p^1(1/\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{7-1} & Winner (W) & 0.804623 & 6.30491 \\ \cline{4-6}
& & & Loser (L) & -1.62198 & 6.974194 \\ \cline{4-6}
& & & W - L & 2.426601 & 2.895703 \\
\hline
\multirow{3}{*}{$p^2(\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{7-1} & Winner (W) & -0.72846 & 8.514958 \\ \cline{4-6}
& & & Loser (L) & -2.29441 & 9.877659 \\ \cline{4-6}
& & & W - L & 1.565944 & 4.77213 \\
\hline
\multirow{3}{*}{$p^2(1/\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{8-8} & Winner (W) & 1.323656 & 8.422598 \\ \cline{4-6}
& & & Loser (L) & -1.22108 & 12.4391 \\ \cline{4-6}
& & & W - L & 2.544735 & 4.427298 \\
\hline
\multirow{3}{*}{$p^3(1/\sigma,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{8-8} & Winner (W) & 1.727272 & 6.182747 \\ \cline{4-6}
& & & Loser (L) & 0.390887 & 8.261543 \\ \cline{4-6}
& & & W - L & 1.336385 & 2.838392 \\
\hline
\end{tabular}
\label{tab:weekly_returns}
\end{table}
\paragraph{}
All of the above short-horizon portfolios employ the traditional physical momentum strategy, proving the existence of continuation in short-horizon returns. Additionally, dollar-neutral portfolios show evidence of diversification as they exhibit lower volatilities when compared to the winner and loser groups that constitute them. The P3 portfolio has the least return among the short-horizon portfolios. This is due to the presence of a weak reversal in returns of the P3 loser group, while the P1 and P2 loser groups showed strong continuation. It is also clear that the P1 and P2 portfolios' gains were primarily attributable to the loser group's significant continuation of returns, whilst their respective winner groups demonstrated weak continuation or reversal.
\begin{table}[]
\caption{Risk measures of short horizon physical momentum portfolios in NSE 500.}
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
Portfolio & Strategy & J-K & Basket & Fin. Wealth & Sharpe Ratio & VaR$_{95\%}$ & MDD\\
\hline
$p^1(\upsilon,R)$ & Traditional & 7-1 & (W - L) & 2.506523 & 0.213385 & 6.107373 & -20.3576 \\
\hline
$p^1(1/\upsilon,R)$ & Traditional & 7-1 & (W - L) & 2.989653 & 0.394423 & 3.486973 & -9.43847 \\
\hline
$p^2(\upsilon,R)$ & Traditional & 7-1 & (W - L) & 1.857258 & 0.136727 & 7.051863 & -25.9175 \\
\hline
$p^2(1/\upsilon,R)$ & Traditional & 8-8 & (W - L) & 2.691209 & 0.236764 & 4.677592 & -18.4447 \\
\hline
$p^3(1/\sigma,R)$ & Traditional & 8-8 & (W - L) & 1.699671 & 0.203426 & 3.423759 & -11.0897 \\
\hline
\end{tabular}
\label{tab:weekly_risks}
\end{table}
\begin{figure}[ht]
\centering
{\includegraphics{weekly.eps}}
\caption{Cumulative returns for $p^1\left(\upsilon,R\right)$ (Blue), $p^1\left(1/\upsilon,R\right)$ (Yellow), $p^2\left(\upsilon,R\right)$ (Green), $p^2\left(1/\upsilon,R\right)$ (Red), $p^3\left(1/\sigma,R\right)$ (Purple) short horizon physical momentum portfolios vs Nifty 50 (Brown) in Indian Market.}
\label{fig:weekly}
\end{figure}
\paragraph{}
Table~\ref{tab:weekly_risks} contains the risk measures of all the short-horizon dollar-neutral momentum portfolios. $p^1(1/\upsilon,R)$ portfolio is the only momentum portfolio that managed to beat the benchmark nifty index in terms of risk measures. Even though $p^2(1/\upsilon,R)$ portfolio’s mean returns (2.54\%) were higher than $p^1(1/\upsilon,R)$ portfolio’s mean returns (2.42\%), the $p^2(1/\upsilon,R)$ portfolio failed to outperform both Nifty and $p^1(1/\upsilon,R)$ in terms of final wealth. This can be attributed to the $p^2(1/\upsilon,R)$ portfolio’s higher volatility, causing an increase in risk measures, primarily a two-fold increase in maximum drawdown when compared to the $p^1(1/\upsilon,R)$ portfolio. The maximum drawdowns range from -11.08\% to -25.91\% and are reduced by a minimum of 11.6\% when compared to the benchmark Nifty’s -29.34\% drawdown.
\paragraph{}
Figure~\ref{fig:weekly} shows the historical daily portfolio values of all short-horizon momentum portfolios along with the Nifty benchmark. Similar to our findings for the monthly mean returns, the P3 momentum portfolio was the worst-performing portfolio in terms of final wealth, whereas $p^1(1/\upsilon,R)$ was the best-performing portfolio, yielding nearly a three-fold increment over the initial investment. From the figure, it can be seen that the P1 and P2 portfolios that employ the Inverse Turnover Rate as their mass, namely $p^1(1/\upsilon,R)$ and $p^2(1/\upsilon,R)$, reacted the most to the Covid-19 market crash compared to the remaining portfolios. With the exception of $p^1(1/\upsilon,R)$, none of the short-horizon momentum portfolios managed to leverage the post-Covid-19 Indian market bull run \cite{dhall2020covid}.
\subsection{Medium horizon Returns}
\paragraph{}
For medium horizon momentum portfolios, we benchmark the results against Nifty for the period 2015-2021. This is because the lookback period for the best-performing medium horizon portfolios ranges between 6 and 12 months, which shifts the investing start date for every medium horizon portfolio; the benchmark statistics and risk values shift accordingly. Since the longest lookback among the best-performing portfolios is 12 months, shifting the start of our test period forward by one year suffices, and so 2015-2021 is selected as the new test period.
\begin{table}[]
\caption{Returns and standard deviation measures of medium horizon physical momentum portfolios in NSE 500.}
\centering
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Portfolio & Strategy & J-K & Basket & Mean & Std. Dev. \\
\hline
\multirow{3}{*}{$p^1(\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{8-1} & Winner (W) & 1.306097 & 8.180783 \\ \cline{4-6}
& & & Loser (L) & -1.83147 & 13.73708 \\ \cline{4-6}
& & & W - L & 3.137563 & 5.075968 \\
\hline
\multirow{3}{*}{$p^1(1/\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{6-2} & Winner (W) & 2.12104 & 7.102213 \\ \cline{4-6}
& & & Loser (L) & -0.20381 & 8.403248 \\ \cline{4-6}
& & & W - L & 2.32485 & 3.097818 \\
\hline
\multirow{3}{*}{$p^2(\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{9-4} & Winner (W) & 1.058269 & 8.030754 \\ \cline{4-6}
& & & Loser (L) & -1.8857 & 12.8104 \\ \cline{4-6}
& & & W - L & 2.943969 & 4.089426 \\
\hline
\multirow{3}{*}{$p^2(1/\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{6-2} & Winner (W) & 2.519064 & 8.551126 \\ \cline{4-6}
& & & Loser (L) & -1.80115 & 14.15218 \\ \cline{4-6}
& & & W - L & 4.320216 & 5.311912 \\
\hline
\multirow{3}{*}{$p^3(1/\sigma,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{12-4} & Winner (W) & 1.653944 & 6.185016 \\ \cline{4-6}
& & & Loser (L) & -0.84587 & 11.28948 \\ \cline{4-6}
& & & W - L & 2.499812 & 4.466255 \\
\hline
\end{tabular}
\label{tab:monthly_returns}
\end{table}
\paragraph{}
Table~\ref{tab:monthly_returns} contains the medium horizon monthly mean returns and standard deviations of the best-performing portfolios from every mass and momentum combination. All the medium horizon physical momentum portfolios have a higher mean return than the benchmark Nifty index, with the $p^2(1/\upsilon,R)$ portfolio having the highest monthly mean return of 4.32\%. However, $p^2(1/\upsilon,R)$ also has the highest volatility at 5.31\% and was the only medium horizon portfolio that failed to beat the benchmark volatility. It can therefore be inferred that the $p^2(1/\upsilon,R)$ portfolio's high mean return, and its subsequent outperformance, is attributable to its high-risk profile, i.e., its high volatility.
\paragraph{}
All of the above medium horizon portfolios employ the traditional physical momentum strategy, demonstrating continuation in medium horizon returns. The volatility of every loser group is higher than that of its respective winner group. Additionally, the dollar-neutral portfolios show evidence of diversification, as they exhibit lower volatilities than the winner and loser groups that constitute them. P2 portfolios perform slightly better than P1 and P3 portfolios, mainly due to the stronger continuation of returns in P2's loser groups compared to the P1 and P3 loser groups. The loser groups of the P1 and P2 portfolios with the Turnover Rate ($\upsilon$) as their mass candidate show similar strength in momentum continuation and outperformed their respective winner groups. The winner groups of the P1 and P2 portfolios with the Inverse Turnover Rate (1/$\upsilon$), and of the P3 portfolio with inverse volatility (1/$\sigma$), significantly outperformed their respective loser groups and contributed the most to their respective dollar-neutral portfolios.
\begin{table}[]
\caption{Risk measures of medium horizon physical momentum portfolios in NSE 500.}
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
Portfolio & Strategy & J-K & Basket & Fin. Wealth ($\times$) & Sharpe Ratio & VaR$_{95\%}$ (\%) & MDD (\%)\\
\hline
$p^1(\upsilon,R)$ & Traditional & 8-1 & (W - L) & 2.674691 & 0.224043 & 9.082746 & -32.4652 \\
\hline
$p^1(1/\upsilon,R)$ & Traditional & 6-2 & (W - L) & 2.640953 & 0.361444 & 3.485223 & -10.7785 \\
\hline
$p^2(\upsilon,R)$ & Traditional & 9-4 & (W - L) & 2.75605 & 0.294026 & 5.568717 & -16.7409 \\
\hline
$p^2(1/\upsilon,R)$ & Traditional & 6-2 & (W - L) & 4.354233 & 0.324809 & 6.799119 & -16.9386 \\
\hline
$p^3(1/\sigma,R)$ & Traditional & 12-4 & (W - L) & 2.36502 & 0.23382 & 4.062821 & -28.0848 \\
\hline
\end{tabular}
\label{tab:monthly_risks}
\end{table}
\paragraph{}
Table~\ref{tab:monthly_risks} contains the risk measures of all the medium horizon dollar-neutral momentum portfolios. All the medium horizon portfolios managed to outperform the benchmark Nifty in terms of final wealth and Sharpe ratio. $p^1(\upsilon,R)$ and $p^2(1/\upsilon,R)$ are the only portfolios that failed to outperform the benchmark in terms of the 95\% VaR measure; furthermore, $p^1(\upsilon,R)$ also failed to outperform in terms of maximum drawdown. For the P2 portfolios, maximum drawdowns were reduced by nearly 45\% with respect to the benchmark Nifty.
\begin{figure}[ht]
\centering
{\includegraphics{monthly.eps}}
\caption{Cumulative returns for $p^1\left(\upsilon,R\right)$ (Blue), $p^1\left(1/\upsilon,R\right)$ (Yellow), $p^2\left(\upsilon,R\right)$ (Green), $p^2\left(1/\upsilon,R\right)$ (Red), and $p^3\left(1/\sigma,R\right)$ (Purple) medium horizon physical momentum portfolios vs. Nifty 50 (Brown) in the Indian market.}
\label{fig:monthly}
\end{figure}
\paragraph{}
Figure~\ref{fig:monthly} shows the historical daily portfolio values of all medium horizon momentum portfolios along with the Nifty benchmark. The P3 momentum portfolio was the worst-performing portfolio in terms of final wealth, returning 2.3 times the initial investment, whereas $p^2(1/\upsilon,R)$ was the best-performing portfolio, yielding over a four-fold return on the initial investment. From the figure, it can be seen that prior to the Covid-19 market crash in March 2020, the $p^1(1/\upsilon,R)$ and $p^3(1/\sigma,R)$ portfolios did outperform the benchmark, but the major outperformance in terms of portfolio value occurred during the post-Covid-19 bull run observed in the Indian market \cite{dhall2020covid}. The $p^1(1/\upsilon,R)$ portfolio was the only portfolio unaffected by the 2020 market crash, while the $p^1(\upsilon,R)$ portfolio was the most affected. Additionally, $p^1(\upsilon,R)$ was the only portfolio that failed to leverage the post-Covid-19 bull run.
\subsection{Long Horizon Returns}
\paragraph{}
For long-horizon momentum portfolios, we benchmark the results against Nifty for the period 2018-2021. This is because the lookback period for the best-performing long-horizon portfolios ranges between one and three years, which shifts the investing start date for every long-horizon portfolio; the benchmark statistics and risk values shift accordingly. Since the longest lookback among the best-performing portfolios is three years, shifting the start of our test period forward by three years suffices, and so 2018-2021 is selected as the new test period.
\begin{table}[]
\caption{Returns and standard deviation measures of long horizon physical momentum portfolios in NSE 500.}
\centering
\begin{tabular}{|l|l|l|l|l|l|}
\hline
Portfolio & Strategy & J-K & Basket & Mean & Std. Dev. \\
\hline
\multirow{3}{*}{$p^1(\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{1-1} & Winner (W) & 0.991022 & 9.229006 \\ \cline{4-6}
& & & Loser (L) & -2.01568 & 16.88496 \\ \cline{4-6}
& & & W - L & 3.006704 & 4.197911 \\
\hline
\multirow{3}{*}{$p^1(1/\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{2-1} & Winner (W) & 2.170307 & 6.976874 \\ \cline{4-6}
& & & Loser (L) & -0.92625 & 10.42304 \\ \cline{4-6}
& & & W - L & 3.096553 & 4.640399 \\
\hline
\multirow{3}{*}{$p^2(\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{1-1} & Winner (W) & 1.150526 & 10.50427 \\ \cline{4-6}
& & & Loser (L) & -2.06852 & 16.00266 \\ \cline{4-6}
& & & W - L & 3.21905 & 4.769432 \\
\hline
\multirow{3}{*}{$p^2(1/\upsilon,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{3-1} & Winner (W) & 0.922383 & 9.581754 \\ \cline{4-6}
& & & Loser (L) & -1.52129 & 14.81716 \\ \cline{4-6}
& & & W - L & 2.443672 & 6.01926 \\
\hline
\multirow{3}{*}{$p^3(1/\sigma,R)$} & \multirow{3}{*}{Traditional} & \multirow{3}{*}{3-1} & Winner (W) & 0.65425 & 6.644175 \\ \cline{4-6}
& & & Loser (L) & -0.61694 & 8.667792 \\ \cline{4-6}
& & & W - L & 1.27119 & 3.425697 \\
\hline
\end{tabular}
\label{tab:yearly_returns}
\end{table}
\paragraph{}
Table~\ref{tab:yearly_returns} contains the long-horizon monthly mean returns and standard deviations of the best-performing portfolios from every mass and momentum combination. All the long horizon physical momentum portfolios have a higher mean return than the benchmark Nifty index, with the $p^2(\upsilon,R)$ portfolio having the highest monthly mean return of 3.21\%. Additionally, the $p^2(\upsilon,R)$ portfolio has lower volatility (4.76\%) than the Nifty benchmark (6.10\%).
\begin{table}[]
\caption{Risk measures of long horizon physical momentum portfolios in NSE 500.}
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
Portfolio & Strategy & J-K & Basket & Fin. Wealth ($\times$) & Sharpe Ratio & VaR$_{95\%}$ (\%) & MDD (\%)\\
\hline
$p^1(\upsilon,R)$ & Traditional & 1-1 & (W - L) & 2.016235 & 0.335661 & 4.357561 & -16.5675 \\
\hline
$p^1(1/\upsilon,R)$ & Traditional & 2-1 & (W - L) & 2.171599 & 0.343956 & 4.397591 & -16.5916 \\
\hline
$p^2(\upsilon,R)$ & Traditional & 1-1 & (W - L) & 2.138261 & 0.319444 & 4.725888 & -16.2056 \\
\hline
$p^2(1/\upsilon,R)$ & Traditional & 3-1 & (W - L) & 1.381071 & 0.109031 & 7.716793 & -31.127 \\
\hline
$p^3(1/\sigma,R)$ & Traditional & 3-1 & (W - L) & 1.259924 & 0.135235 & 5.122885 & -21.3893 \\
\hline
\end{tabular}
\label{tab:yearly_risks}
\end{table}
\paragraph{}
All of the best-performing portfolios are traditional, establishing a continuation in long-horizon returns. The $p^3(1/\sigma,R)$ portfolio is the only long horizon portfolio whose monthly mean return failed to outperform the benchmark; however, it also had the lowest standard deviation. P1 and P2 portfolios had similar performance in terms of mean returns, but P1 portfolios performed better than P2 portfolios in terms of standard deviation. Additionally, the P1 and P2 portfolios' gains were primarily attributable to the significant continuation of returns in their loser groups, whilst their respective winner groups demonstrated weak continuation. The weak performance of the P3 portfolio can be attributed to the weak continuation of physical momentum in its winner and loser groups. It can also be seen that the dollar-neutral portfolios exhibit lower volatilities than the individual winner and loser groups that constitute them; this reduction in volatility can be attributed to the diversification effect introduced by combining the assets of the winner and loser groups.
\paragraph{}
Table~\ref{tab:yearly_risks} contains the risk measures of all the long-horizon dollar-neutral portfolios. $p^3(1/\sigma,R)$ and $p^2(1/\upsilon,R)$ are the only portfolios that failed to outperform Nifty on every risk measure. The momentum portfolios' 95\% VaR values range from 4.35\% to 7.71\%, and their maximum drawdowns range from -16.20\% to -31.12\%. For the P1 portfolios, maximum drawdowns were reduced by nearly 43\% compared to the benchmark Nifty. The underperformance of the $p^3(1/\sigma,R)$ and $p^2(1/\upsilon,R)$ portfolios can be attributed to their higher drawdowns and VaR measures.
\begin{figure}[ht]
\centering
{\includegraphics{yearly.eps}}
\caption{Cumulative returns for $p^1\left(\upsilon,R\right)$ (Blue), $p^1\left(1/\upsilon,R\right)$ (Yellow), $p^2\left(\upsilon,R\right)$ (Green), $p^2\left(1/\upsilon,R\right)$ (Red), and $p^3\left(1/\sigma,R\right)$ (Purple) long horizon physical momentum portfolios vs. Nifty 50 (Brown) in the Indian market.}
\label{fig:yearly}
\end{figure}
\paragraph{}
Figure~\ref{fig:yearly} shows the historical yearly portfolio values of all long-horizon momentum portfolios along with the Nifty benchmark. The P3 momentum portfolio was the worst-performing portfolio in terms of final wealth, returning 1.2 times the initial investment, whereas $p^1(1/\upsilon,R)$ was the best-performing portfolio, yielding a 2.17-fold return on the initial investment. From the figure, it can be seen that prior to the Covid-19 market crash in March 2020, all momentum portfolios outperformed the benchmark Nifty, but the major outperformance of $p^1(\upsilon,R)$, $p^1(1/\upsilon,R)$, and $p^2(\upsilon,R)$ occurred during the post-Covid-19 bull run observed in the Indian market \cite{dhall2020covid}. The $p^2(1/\upsilon,R)$ portfolio was the most affected by the Covid-19 market crash, observing a drawdown of nearly 32\% during this period. Additionally, the $p^2(1/\upsilon,R)$ and $p^3\left(1/\sigma,R\right)$ portfolios were the only portfolios that failed to leverage the post-Covid-19 bull run and hence failed to outperform the benchmark Nifty.
\section{Conclusion}
In this paper, we applied the physical momentum strategy, whose quantitative and mathematical definitions are introduced in Choi (2014), to the Indian market. We created momentum portfolios based on a ranked selection of winner and loser stocks from the NSE 500 index, where the stocks comprising the winner and loser baskets are chosen based on their physical price momentum values. All the physical momentum portfolios outperform the benchmark Nifty 50 in terms of expected returns and risk profiles. The majority of these physical momentum portfolios are traditional, indicating that the winner basket outperforms the loser basket and implying a robust continuation of stock returns in the Indian market. High-frequency (daily) portfolios are the only exception, where the loser baskets outperform. All of the physical momentum portfolios have Sharpe ratio values of less than one, indicating that they are not suitable for investment. Since the assets in our portfolios are equally weighted, the Sharpe ratio could be improved by optimizing the weight of each asset. The application of the Fama-French model to explain the performance of the physical momentum portfolios is beyond the scope of this research and will be addressed in future work.
\bibliographystyle{unsrt}
\section{Preparing an Anonymous Submission}
This document details the formatting requirements for anonymous submissions. The requirements are the same as for camera ready papers but with a few notable differences:
\begin{itemize}
\item Anonymous submissions must not include the author names and affiliations. Write ``Anonymous Submission'' as the ``sole author'' and leave the affiliations empty.
\item The PDF document's metadata should be cleared with a metadata-cleaning tool before submitting it. This is to prevent leaked information from revealing your identity.
\item References must be anonymized whenever the reader can infer that they are to the authors' previous work.
\item AAAI's copyright notice should not be included as a footer in the first page.
\item Only the PDF version is required at this stage. No source versions will be requested, nor any copyright transfer form.
\end{itemize}
You can achieve all of the above by enabling the \texttt{submission} option when loading the \texttt{aaai23} package:
\begin{quote}\begin{scriptsize}\begin{verbatim}
\documentclass[letterpaper]{article}
\usepackage[submission]{aaai23}
\end{verbatim}\end{scriptsize}\end{quote}
The remainder of this document contains the original camera-ready instructions. Any contradiction of the above points should be ignored while preparing anonymous submissions.
\section{Camera-Ready Guidelines}
Congratulations on having a paper selected for inclusion in an AAAI Press proceedings or technical report! This document details the requirements necessary to get your accepted paper published using PDF\LaTeX{}. If you are using Microsoft Word, instructions are provided in a different document. AAAI Press does not support any other formatting software.
The instructions herein are provided as a general guide for experienced \LaTeX{} users. If you do not know how to use \LaTeX{}, please obtain assistance locally. AAAI cannot provide you with support and the accompanying style files are \textbf{not} guaranteed to work. If the results you obtain are not in accordance with the specifications you received, you must correct your source file to achieve the correct result.
These instructions are generic. Consequently, they do not include specific dates, page charges, and so forth. Please consult your specific written conference instructions for details regarding your submission. Please review the entire document for specific instructions that might apply to your particular situation. All authors must comply with the following:
\begin{itemize}
\item You must use the 2023 AAAI Press \LaTeX{} style file and the aaai23.bst bibliography style files, which are located in the 2023 AAAI Author Kit (aaai23.sty, aaai23.bst).
\item You must complete, sign, and return by the deadline the AAAI copyright form (unless directed by AAAI Press to use the AAAI Distribution License instead).
\item You must read and format your paper source and PDF according to the formatting instructions for authors.
\item You must submit your electronic files and abstract using our electronic submission form \textbf{on time.}
\item You must pay any required page or formatting charges to AAAI Press so that they are received by the deadline.
\item You must check your paper before submitting it, ensuring that it compiles without error, and complies with the guidelines found in the AAAI Author Kit.
\end{itemize}
\section{Copyright}
All papers submitted for publication by AAAI Press must be accompanied by a valid signed copyright form. They must also contain the AAAI copyright notice at the bottom of the first page of the paper. There are no exceptions to these requirements. If you fail to provide us with a signed copyright form or disable the copyright notice, we will be unable to publish your paper. There are \textbf{no exceptions} to this policy. You will find a PDF version of the AAAI copyright form in the AAAI AuthorKit. Please see the specific instructions for your conference for submission details.
\section{Formatting Requirements in Brief}
We need source and PDF files that can be used in a variety of ways and can be output on a variety of devices. The design and appearance of the paper is strictly governed by the aaai style file (aaai23.sty).
\textbf{You must not make any changes to the aaai style file, nor use any commands, packages, style files, or macros within your own paper that alter that design, including, but not limited to spacing, floats, margins, fonts, font size, and appearance.} AAAI imposes requirements on your source and PDF files that must be followed. Most of these requirements are based on our efforts to standardize conference manuscript properties and layout. All papers submitted to AAAI for publication will be recompiled for standardization purposes. Consequently, every paper submission must comply with the following requirements:
\begin{itemize}
\item Your .tex file must compile in PDF\LaTeX{} --- (you may not include .ps or .eps figure files.)
\item All fonts must be embedded in the PDF file --- including your figures.
\item Modifications to the style file, whether directly or via commands in your document may not ever be made, most especially when made in an effort to avoid extra page charges or make your paper fit in a specific number of pages.
\item No type 3 fonts may be used (even in illustrations).
\item You may not alter the spacing above and below captions, figures, headings, and subheadings.
\item You may not alter the font sizes of text elements, footnotes, heading elements, captions, or title information (for references and mathematics, please see the limited exceptions provided herein).
\item You may not alter the line spacing of text.
\item Your title must follow Title Case capitalization rules (not sentence case).
\item \LaTeX{} documents must use the Times or Nimbus font package (you may not use Computer Modern for the text of your paper).
\item No \LaTeX{} 209 documents may be used or submitted.
\item Your source must not require use of fonts for non-Roman alphabets within the text itself. If your paper includes symbols in other languages (such as, but not limited to, Arabic, Chinese, Hebrew, Japanese, Thai, Russian and other Cyrillic languages), you must restrict their use to bit-mapped figures. Fonts that require non-English language support (CID and Identity-H) must be converted to outlines or 300 dpi bitmap or removed from the document (even if they are in a graphics file embedded in the document).
\item Two-column format in AAAI style is required for all papers.
\item The paper size for final submission must be US letter without exception.
\item The source file must exactly match the PDF.
\item The document margins may not be exceeded (no overfull boxes).
\item The number of pages and the file size must be as specified for your event.
\item No document may be password protected.
\item Neither the PDFs nor the source may contain any embedded links or bookmarks (no hyperref or navigator packages).
\item Your source and PDF must not have any page numbers, footers, or headers (no pagestyle commands).
\item Your PDF must be compatible with Acrobat 5 or higher.
\item Your \LaTeX{} source file (excluding references) must consist of a \textbf{single} file (use of the ``input'' command is not allowed).
\item Your graphics must be sized appropriately outside of \LaTeX{} (do not use the ``clip'' or ``trim'' commands).
\end{itemize}
If you do not follow these requirements, your paper will be returned to you to correct the deficiencies.
\section{What Files to Submit}
You must submit the following items to ensure that your paper is published:
\begin{itemize}
\item A fully-compliant PDF file.
\item Your \LaTeX{} source file submitted as a \textbf{single} .tex file (do not use the ``input'' command to include sections of your paper --- every section must be in the single source file). (The only allowable exception is the .bib file, which should be included separately).
\item The bibliography (.bib) file(s).
\item Your source must compile on our system, which includes only standard \LaTeX{} 2020 TeXLive support files.
\item Only the graphics files used in compiling the paper.
\item The \LaTeX{}-generated files (e.g. .aux, .bbl file, PDF, etc.).
\end{itemize}
Your \LaTeX{} source will be reviewed and recompiled on our system (if it does not compile, your paper will be returned to you). \textbf{Do not submit your source in multiple text files.} Your single \LaTeX{} source file must include all your text, your bibliography (formatted using aaai23.bst), and any custom macros.
Your files should work without any supporting files (other than the program itself) on any computer with a standard \LaTeX{} distribution.
\textbf{Do not send files that are not actually used in the paper.} We don't want you to send us any files not needed for compiling your paper, including, for example, this instructions file, unused graphics files, style files, additional material sent for the purpose of the paper review, and so forth.
\textbf{Obsolete style files.} The commands for some common packages (such as some used for algorithms), may have changed. Please be certain that you are not compiling your paper using old or obsolete style files.
\textbf{Final Archive.} Place your PDF and source files in a single archive which should be compressed using .zip. The final file size may not exceed 10 MB.
Name your source file with the last (family) name of the first author, even if that is not you.
\section{Using \LaTeX{} to Format Your Paper}
The latest version of the AAAI style file is available on AAAI's website. Download this file and place it in the \TeX\ search path. Placing it in the same directory as the paper should also work. You must download the latest version of the complete AAAI Author Kit so that you will have the latest instruction set and style file.
\subsection{Document Preamble}
In the \LaTeX{} source for your paper, you \textbf{must} place the following lines as shown in the example in this subsection. This command set-up is for three authors. Add or subtract author and address lines as necessary, and uncomment the portions that apply to you. In most instances, this is all you need to do to format your paper in the Times font. The helvet package will cause Helvetica to be used for sans serif. These files are part of the PSNFSS2e package, which is freely available from many Internet sites (and is often part of a standard installation).
Leave the setcounter for section number depth commented out and set at 0 unless you want to add section numbers to your paper. If you do add section numbers, you must uncomment this line and change the number to 1 (for section numbers), or 2 (for section and subsection numbers). The style file will not work properly with numbering of subsubsections, so do not use a number higher than 2.
\subsubsection{The Following Must Appear in Your Preamble}
\begin{quote}
\begin{scriptsize}\begin{verbatim}
\documentclass[letterpaper]{article}
\usepackage[submission]{aaai23}
\usepackage{times}
\usepackage{helvet}
\usepackage{courier}
\usepackage[hyphens]{url}
\usepackage{graphicx}
\urlstyle{rm}
\def\UrlFont{\rm}
\usepackage{graphicx}
\usepackage{natbib}
\usepackage{caption}
\frenchspacing
\setlength{\pdfpagewidth}{8.5in}
\setlength{\pdfpageheight}{11in}
\pdfinfo{
/TemplateVersion (2023.1)
}
\end{verbatim}\end{scriptsize}
\end{quote}
\subsection{Preparing Your Paper}
After the preamble above, you should prepare your paper as follows:
\begin{quote}
\begin{scriptsize}\begin{verbatim}
\begin{document}
\maketitle
\begin{abstract}
\end{abstract}\end{verbatim}\end{scriptsize}
\end{quote}
\noindent You should then continue with the body of your paper. Your paper must conclude with the references, which should be inserted as follows:
\begin{quote}
\begin{scriptsize}\begin{verbatim}
\bibliography{Bibliography-File}
\end{document}
\end{verbatim}\end{scriptsize}
\end{quote}
\section*{Appendix}
\subsection{Algorithms}
\subsection{Figures}
\begin{figure}[!tbh]
\centering
\hspace*{-1.5em}
\subfigure[random-no]{\label{subfig:orientation_mcts_walksat_random_no}
\resizebox{0.45\linewidth}{!}{\input{plots/rn_mcts_walksat.tex}}
}
\hspace*{-1.5em}
\subfigure[random-yes]{\label{subfig:orientation_mcts_walksat_random_yes}
\resizebox{0.45\linewidth}{!}{\input{plots/ry_mcts_walksat.tex}}
}
\hspace*{-1.5em}
\subfigure[global-no]{\label{subfig:orientation_mcts_walksat_global_no}
\resizebox{0.45\linewidth}{!}{\input{plots/gn_mcts_walksat.tex}}
}
\hspace*{-1.5em}
\subfigure[global-yes]{\label{subfig:orientation_mcts_walksat_global_yes}
\resizebox{0.45\linewidth}{!}{\input{plots/gy_mcts_walksat.tex}}
}
\caption{WalkSat-based MCTS. The curves are categorized by the flip limit $fl$; the x-axis represents the time cost of the MCTS simulations. In these experiments, the number of MCTS simulations increases in steps of 25 from 50 to 150, as marked on the curves.}
\label{fig:orientation_mcts_walksat_gryn}
\end{figure}
For WalkSat-based MCTS in Fig.~\ref{fig:orientation_mcts_walksat_gryn}, comparing panel (a) with (b) and panel (c) with (d), we see that using the best solution ever found by WalkSat as its final result contributes to the performance. Comparing panel (a) with (c) and panel (b) with (d), we see that initializing WalkSat from the global best solution also improves performance. According to panels (b) and (c), using the best solution ever found by WalkSat is more important than initializing from the global best solution, and the best combination is to implement both components.
Interestingly, for WalkSat-based NMC in Fig.~\ref{fig:orientation_nest_walksat_gryn} we see the same pattern. This is because the performance of both MCTS and NMC depends strongly on the quality of the state estimates, and both components improve those estimates. In particular, returning the best solution ever found by WalkSat guarantees that the final estimate is an upper bound on what WalkSat encountered, since WalkSat may flip to a worse assignment during the flipping steps. In addition, initializing WalkSat from the global best solution speeds up the flipping process and avoids repeated flips. Therefore, in the following experiments we adopt both components.
\begin{figure}[!tbh]
\centering
\hspace*{-1.5em}
\subfigure[random-no]{\label{subfig:orientation_nest_walksat_random_no}
\resizebox{0.45\linewidth}{!}{\input{plots/rn_nest_walksat.tex}}
}
\hspace*{-1.5em}
\subfigure[random-yes]{\label{subfig:orientation_nest_walksat_random_yes}
\resizebox{0.45\linewidth}{!}{\input{plots/ry_nest_walksat.tex}}
}
\hspace*{-1.5em}
\subfigure[global-no]{\label{subfig:orientation_nest_walksat_global_no}
\resizebox{0.45\linewidth}{!}{\input{plots/gn_nest_walksat.tex}}
}
\hspace*{-1.5em}
\subfigure[global-yes]{\label{subfig:orientation_nest_walksat_global_yes}
\resizebox{0.45\linewidth}{!}{\input{plots/gy_nest_walksat.tex}}
}
\caption{WalkSat-based NMC. The curves are categorized by the flip limit $fl$; the x-axis represents the time cost for the different numbers of repetitions. In these experiments, the NMC repetition count increases in steps of 1 from 1 to 5, as marked on the curves.}
\label{fig:orientation_nest_walksat_gryn}
\end{figure}
From Fig.~\ref{fig:orientation_nest_dyn_walksat} and Fig.~\ref{subfig:orientation_nest_walksat_global_yes}, we see that within 1000 seconds the dynamic $fl$ in Fig.~\ref{subfig:orientation_nest_dynwe_walksat} achieves the better performance. Fig.~\ref{subfig:orientation_nest_dynwe_walksat} shows that, for a fixed NMC repetition count, a higher exponent gives better performance, and a higher $w$ also gives better performance; both lead to a larger $fl$, and a larger $fl$ is more likely to produce a better estimate. However, enlarging the exponent consumes the time budget quickly. In Fig.~\ref{subfig:orientation_nest_dynwr_walksat}, we see that setting the exponent to 1 and increasing the number of repetitions achieves the best performance within a small budget~(around 400 seconds). Therefore, in the full experiments we set the exponent to 1 for NMC and determine the repetition count according to the budget.
\begin{figure}[!tbh]
\centering
\hspace*{-1.5em}
\subfigure[one repetition, exponent increases]{\label{subfig:orientation_nest_dynwe_walksat}
\resizebox{0.45\linewidth}{!}{\input{plots/nest_dynwe_walksat}}
}
\hspace*{-1.5em}
\subfigure[exponent=1, repetition increases]{\label{subfig:orientation_nest_dynwr_walksat}
\resizebox{0.45\linewidth}{!}{\input{plots/nest_dynwr_walksat}}
}
\caption{NMC with Dynamic Flip Limits for Walksat.
}
\label{fig:orientation_nest_dyn_walksat}
\end{figure}
\subsection{Tables}
\begin{table*}[tbh!]
\hspace{-10cm}
\scriptsize
\caption{Results of Mastering Max3Sat~(70 variables) on different Instances Using SingleUCT SLS, MultiUCT SLS and NMC SLS respectively, 10 repetitions each. }\label{max3sat70variable_sls}
\resizebox{\textwidth}{!}{\input{tables/fullexperiments_70v}}
\end{table*}
\begin{table*}[tbh!]
\hspace{-10cm}
\scriptsize
\caption{Results of Mastering Max2Sat~(120 variables) on different Instances Using SingleUCT SLS, MultiUCT SLS and NMC SLS respectively, 10 repetitions each. }\label{max2sat120variable_sls}
\resizebox{\textwidth}{!}{\input{tables/fullexperiments_120v}}
\end{table*}
\begin{table*}[tbh!]
\hspace{-10cm}
\scriptsize
\caption{Results of Mastering Max2Sat~(140 variables) on different Instances Using SingleUCT SLS, MultiUCT SLS and NMC SLS respectively, 10 repetitions each. }\label{max2sat140variable_sls}
\resizebox{\textwidth}{!}{\input{tables/fullexperiments_140v}}
\end{table*}
\begin{table*}[tbh!]
\hspace{-10cm}
\scriptsize
\caption{Results of Mastering Max2Sat~(200 variables) on different Instances Using SingleUCT SLS, MultiUCT SLS and NMC SLS respectively, 10 repetitions each. }\label{max2sat200variable_walksat}
\resizebox{\textwidth}{!}{\input{tables/fullexperiments_200v}}
\end{table*}
\begin{table*}[tbh!]
\hspace{-10cm}
\scriptsize
\caption{Results of Mastering Max2Sat~(250 variables) on different Instances Using SingleUCT SLS, MultiUCT SLS and NMC SLS respectively, 10 repetitions each. }\label{max2sat250variable_walksat}
\resizebox{\textwidth}{!}{\input{tables/fullexperiments_250v}}
\end{table*}
\begin{table*}[tbh!]
\hspace{-10cm}
\scriptsize
\caption{Results of Mastering Max2Sat~(300 variables) on different Instances Using SingleUCT SLS, MultiUCT SLS and NMC SLS respectively, 10 repetitions each. }\label{max2sat300variable_walksat}
\resizebox{\textwidth}{!}{\input{tables/fullexperiments_300v}}
\end{table*}
\subsection{Flip or Search Trade-off}\label{subsect:flipvssearch}
This subsection examines the following questions:
\begin{itemize}
\item Does Nested search make better use of time than plain WalkSat?
\item Does NMC-WalkSat benefit from doing more flips?
\item How does a dynamic setting of the flip limit help NMC-WalkSat?
\item In the dynamic setting, how is additional budget best spent: on more repetitions or more flips?
\item For MCTS-WalkSat, how is additional budget best spent: on more simulations or more flips?
\end{itemize}
\begin{figure*}[!tbh]
\centering
\hspace*{-1.5em}
\subfigure[]{\label{subfig:orientation_124}
\resizebox{0.45\linewidth}{!}{\input{plots/124}}
}
\hspace*{-1.5em}
\subfigure[]{\label{subfig:orientation_345}
\resizebox{0.45\linewidth}{!}{\input{plots/345}}
}
\hspace*{-1.5em}
\subfigure[]{\label{subfig:orientation_496}
\resizebox{0.45\linewidth}{!}{\input{plots/496}}
}
\hspace*{-1.5em}
\subfigure[]{\label{subfig:orientation_4678}
\resizebox{0.45\linewidth}{!}{\input{plots/4678}}
}
\hspace*{-1.5em}
\subfigure[]{\label{subfig:orientation_41011}
\resizebox{0.45\linewidth}{!}{\input{plots/41011}}
}
\caption{
}
\label{fig:orientation_tradeoff}
\end{figure*}
\section{Introduction}
The Maximum Satisfiability~(MaxSAT) problem is an extension of the Boolean Satisfiability~(SAT) problem. In MaxSAT, the task is to find a truth assignment for the variables that satisfies the maximum number of clauses~\cite{goffinet2016monte}. Stochastic Local Search~(SLS) algorithms such as WalkSat~\cite{kautz2004walksat} and Novelty~\cite{menai2003efficient} are well studied for solving MaxSAT problems. These methods cannot find a provably optimal solution, but are usually used to search for an approximately optimal solution, especially for larger problem instances. However, SLS algorithms easily get stuck in local optima from which it is hard to escape, so it is important to find an effective way to escape such local optima. As a well-known and successful method for addressing this exploration-exploitation dilemma, Monte Carlo Tree Search~(MCTS) with the UCT formula~\cite{browne2012survey} is a natural algorithm for tackling MaxSAT problems.
MCTS has shown impressive performance in game playing~(including perfect- and imperfect-information games)~\cite{gelly2007combining,cowling2012ensemble,wang2018assessing}, probabilistic single-agent planning~\cite{seify2020single}, and most problems that can be framed as a sequential decision making process, also known as a Markov Decision Process~(MDP)~\cite{brechtel2011probabilistic}. Based on the UCT formula, MCTS addresses the exploration-exploitation dilemma in a theoretically sound way, since UCT builds the search tree based on previous search records~(the visit counts and value estimates of the nodes). Typically, the leaf nodes of the search tree are estimated with a random rollout policy. However, in many applications other rollout policies have been designed to improve the accuracy of the leaf node value estimation.
For the MaxSAT problem, UCTMAXSAT~(written as UCTMAX in the following) employs SLS algorithms to estimate node values~\cite{goffinet2016monte}.
However, UCTMAX only runs MCTS from the root node, building a single search tree until the timeout, which may not fully exploit the advantage of UCT reported for Nested Monte Carlo Tree Search~(NMCTS)~\cite{baier2012nested}. NMCTS runs MCTS at every step until the end of the assignment or the timeout: after each MCTS run it chooses the best assignment value for the current step, moves to the next step, and performs MCTS again. In addition, UCTMAX employs a fixed flip limit for the SLS algorithms. In a UCT-style SLS, however, the number of unassigned variables~(only literals below the search tree frontier are unassigned) decreases as the search tree deepens. Therefore, we design a novel flip-limit computation called \emph{Dynamic SLS}, see Equation~\ref{eq:dynamicflip}, for the Monte Carlo methods used in this paper.
The experimental results show that, for most of the MaxSAT instances~\footnote{The instances are from the \emph{$ms\_random$} benchmark:\\ \url{http://www.maxsat.udl.cat/15/benchmarks/index.html}}, Dynamic SLS is more robust than the fixed flip limit used in UCTMAX, achieving comparable performance on a variety of instances without extra tuning. Moreover, the results show that NMCTS outperforms UCTMAX on most instances by a moderate margin.
Moreover, the Nested Monte Carlo Search~(NMCS) method~\cite{cazenave2009nested} and its variants~\cite{cazenave2012application,cazenave2020generalized} have been successfully applied to many NP-hard combinatorial optimization problems, such as Morpion Solitaire~\cite{demaine2006morpion}, and achieve impressive performance~\cite{cazenave2009nested,wang2020tackling}. However, NMCS has not yet been investigated for MaxSAT problems. Therefore, this paper further studies the effectiveness of NMCS~(also using Dynamic SLS as the state estimator) for MaxSAT.
Overall, the main contribution of this paper can be summarized as follows:
\begin{enumerate}
\item We examine various Monte Carlo Search techniques for the domain of MaxSAT, especially rollout policies and high-level searches.
Through an extensive empirical analysis, we establish that
\begin{enumerate*}
\item Purely random or heuristic-based rollouts are weaker than a Stochastic Local Search policy.
\item An MCTS-based search is weaker than Nested MCTS, especially in larger instances. NMCTS with WalkSat is weaker than NMCS, but is stronger with Novelty.
\end{enumerate*}
\item We introduce Dynamic SLS, a new rollout policy that dynamically computes the flip budget available to a stochastic local search. We demonstrate that Monte Carlo algorithms built on Dynamic SLS achieve performance comparable to previously existing Monte Carlo approaches on standard MaxSAT benchmarks without extra tuning.
\end{enumerate}
The rest of the paper is structured as follows.
Before introducing the preliminaries of this work in Sect.\,\ref{sec:preliminaries}, we present an overview of the most relevant literature in Sect.\,\ref{sec:relatedwork}. We then present the Dynamic SLS based Monte Carlo methods in Sect.\,\ref{sec:slsmctsnmc}. Thereafter, we describe the orientation experiments on a group of MaxSAT instances used to finalize the structure of our proposed methods in Sect.\,\ref{sec:orientation}. The full-length experiments are presented in Sect.\,\ref{sec:full-exp}.
Finally, we conclude our paper and discuss future work.
\section{Related Work}\label{sec:relatedwork}
Many solvers have been created for MaxSAT problems~\cite{heras2008minimaxsat,martins2014open,ansotegui2017wpm3,ignatiev2019rc2}. Generally, these solvers fall into two categories, $complete$ solvers and $incomplete$ solvers. Complete solvers provide a provably optimal solution for the problem. Incomplete solvers start from a random assignment and keep searching for a better solution according to some strategy. Typically, Stochastic Local Search algorithms such as WalkSat~\cite{kautz2004walksat} and Novelty~\cite{menai2003efficient} are well studied for MaxSAT~\cite{pelikan2003hierarchical,kroc2009integrating}. These $incomplete$ solvers suffer from an exploration-exploitation dilemma, and MCTS has shown successful performance in dealing with this dilemma~\cite{browne2012survey}. Tompkins et al.\ therefore implemented an experimentation environment for SAT and MaxSAT called UBCSAT~\cite{tompkins2004ubcsat}, and Goffinet et al.\ proposed the UCTMAX algorithm to enhance the performance of SLS~\cite{goffinet2016monte}. However, UCTMAX only performs the UCT search once from the root, which
may not fully use the power of MCTS compared to running a UCT search at every step until a terminal node or the timeout is reached, an approach known as Nested Monte Carlo Tree Search~\cite{baier2012nested}. In addition to MCTS and NMCTS, NMCS~\cite{cazenave2009nested} and its variations~\cite{cazenave2012application,rosin2011nested,cazenave2020generalized} also perform well, especially for single-agent NP-hard combinatorial problems such as Morpion Solitaire~\cite{demaine2006morpion}, where they achieved the best record, which has not yet been improved even with deep learning techniques~\cite{wang2020tackling,douxdeep}. In this paper we therefore employ NMCTS and NMCS with SLS methods for MaxSAT problems for the first time.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{MaxSAT}
In MaxSAT, as in SAT, the problem is specified by a propositional formula in conjunctive normal form (CNF)~\cite{morgado2013iterative}. But unlike SAT, where the aim is to find a truth assignment that satisfies all clauses, in MaxSAT the aim is to find a truth assignment that satisfies the maximum number of clauses. For a set of Boolean variables $V=\{v_1, v_2, v_3, \dots, v_i\}$, a literal $l_j$ is either a variable $v_j$ or its negation $\neg v_j$, $1\leq j\leq i$. A clause is a disjunction of literals (i.e., $c_i = l_1 \vee l_2 \vee \dots \vee l_j$). A CNF formula $F$ is a conjunction of clauses (i.e., $F = c_1\wedge c_2 \wedge \dots \wedge c_i$). MaxSAT instances written in CNF can easily be found in our tested benchmark.
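To make the objective concrete, the following is a small illustrative sketch (not part of the original implementation) that counts the clauses of a CNF formula satisfied by a truth assignment, with clauses represented as lists of signed integers in the DIMACS style.
\begin{quote}\begin{scriptsize}\begin{verbatim}
def satisfied_clauses(clauses, assignment):
    # Count the clauses satisfied by `assignment`.
    # clauses: list of clauses, each a list of signed ints, e.g. [1, -3, 4];
    # assignment: dict mapping a variable index to True/False.
    count = 0
    for clause in clauses:
        if any((lit > 0) == assignment[abs(lit)] for lit in clause):
            count += 1
    return count

# Example with 3 clauses over variables 1..3:
clauses = [[1, -2], [2, 3], [-1, -3]]
assignment = {1: True, 2: True, 3: False}
print(satisfied_clauses(clauses, assignment))   # -> 3
\end{verbatim}\end{scriptsize}\end{quote}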
\subsection{Heuristics}
In order to test different rollout policies for the Monte Carlo methods, we present three simple heuristics that are commonly used for MaxSAT; a sketch of H1 appears after the list.
\begin{enumerate}
\item H1 assigns values to the variables in order, \emph{from the first variable to the last}, and sets a variable to 0 if its positive literal occurs more often than its negative literal across all clauses.
\item H2, at each step, first assigns the \emph{variable} that occurs most often, and likewise sets a variable to 0 if its positive literal occurs more often than its negative literal across all clauses.
\item H3, at each step, first assigns the variable of the \emph{literal} that occurs most often, and likewise sets a variable to 0 if its positive literal occurs more often than its negative literal across all clauses.
\end{enumerate}
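As an illustration, here is a minimal sketch of H1 as described above (H2 and H3 differ only in the order in which variables are chosen); the signed-integer clause representation follows the earlier snippet, and the tie-breaking behaviour when both literals occur equally often is an assumption.
\begin{quote}\begin{scriptsize}\begin{verbatim}
from collections import Counter

def h1_assignment(clauses, num_vars):
    # Sketch of H1: walk through the variables from first to last and set a
    # variable to 0 (False) when its positive literal occurs more often than
    # its negative literal over all clauses, and to 1 (True) otherwise.
    occ = Counter(lit for clause in clauses for lit in clause)
    assignment = {}
    for v in range(1, num_vars + 1):
        assignment[v] = not (occ[v] > occ[-v])
    return assignment
\end{verbatim}\end{scriptsize}\end{quote}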
\subsection{Stochastic Local Search}
Based on~\cite{goffinet2016monte}, in this paper, we also investigate two well-studied Stochastic Local Search~(SLS) algorithms to deal with MaxSAT problem, namely WalkSat and Novelty.
\subsubsection{WalkSat}\label{subsec:walksat}
\begin{algorithm}[bth!]
\caption{Walksat}
\label{alg:walksat}
\begin{algorithmic}[1]
\Function {WalkSat}{$s$}
\State assignment$\leftarrow$\textsc{InitAssignment}()
\While {fliptimes $< f$}
\If{\textsc{random}() $< \epsilon_1$}
\State $v \leftarrow$ random variable
\Else
\State $v \leftarrow$ best unassigned variable\label{bestvariblefound}
\EndIf
\State assignment$\leftarrow$flip($v$)
\EndWhile
\Return assignment
\EndFunction
\end{algorithmic}
\end{algorithm}
As can be seen in Algorithm~\ref{alg:walksat}, the idea of WalkSat is to initialize each variable either randomly~(basic version) or according to the best solution found so far~(enhanced version). Then an unsatisfied clause is selected, and the variable in that clause with the highest bonus after flipping is flipped, where the bonus is the change in the number of satisfied clauses caused by flipping the variable.
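For concreteness, the sketch below gives a compact Python rendering of the WalkSat rollout just described (the noise parameter plays the role of $\epsilon_1$, and the optional initialisation from the global best solution corresponds to the enhanced version); this is an illustrative sketch, not the exact implementation used in the experiments.
\begin{quote}\begin{scriptsize}\begin{verbatim}
import random

def walksat(clauses, num_vars, flip_limit, eps=0.3, init=None):
    # Sketch of WalkSat for MaxSAT: repeatedly pick an unsatisfied clause and
    # flip either a random variable of that clause (with probability eps) or
    # the variable whose flip satisfies the most clauses (highest bonus).
    assign = dict(init) if init else {v: random.random() < 0.5
                                      for v in range(1, num_vars + 1)}

    def sat_count(a):
        return sum(any((lit > 0) == a[abs(lit)] for lit in c) for c in clauses)

    best_assign, best_score = dict(assign), sat_count(assign)
    for _ in range(flip_limit):
        unsat = [c for c in clauses
                 if not any((lit > 0) == assign[abs(lit)] for lit in c)]
        if not unsat:
            break                                  # every clause is satisfied
        clause = random.choice(unsat)
        if random.random() < eps:                  # random walk step
            v = abs(random.choice(clause))
        else:                                      # greedy step: highest bonus
            def after_flip(var):
                assign[var] = not assign[var]
                score = sat_count(assign)
                assign[var] = not assign[var]
                return score
            v = max((abs(lit) for lit in clause), key=after_flip)
        assign[v] = not assign[v]
        score = sat_count(assign)
        if score > best_score:                     # keep the best solution found
            best_score, best_assign = score, dict(assign)
    return best_assign, best_score
\end{verbatim}\end{scriptsize}\end{quote}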
\subsubsection{Novelty}\label{subsec:novelty}
Novelty is similar to WalkSat. The first step is again to initialize the assignment either randomly~(basic version) or according to the best solution found so far~(enhanced version). Differently, however, the bonus is computed for each variable in all unsatisfied clauses. Then, to avoid flipping in a dead loop, the variable with the highest bonus that was not selected in the most recent flip is chosen to flip. Concretely, after line~\ref{bestvariblefound} in Algorithm~\ref{alg:walksat}, we add \textbf{if $v = v_f$ and \textsc{random}() $< 1-\epsilon_2$ then $v \leftarrow v_s$}, where $v_f$ is the most recently flipped variable and $v_s$ is the second best unassigned variable.
\subsection{Monte Carlo Tree Search}
\begin{algorithm}[bth!]
\caption{Monte Carlo Tree Search}
\label{alg:mcts}
\begin{algorithmic}[1]
\footnotesize
\Function{MCTS}{$s$}
\State Search($s$)
\State $\pi_s\leftarrow$normalize($Q(s,\cdot)$)\label{line:getpolicy}
\State \Return $\pi_s$
\EndFunction
\Function{Search}{$s$}
\If{$s$ is a terminal state}
\State $v\leftarrow v_{end}$\label{mctsvalueend}
\Return $v$
\EndIf
\If{$s$ is not in the Tree}
\State Add $s$ to the Tree, initialize $Q(s, \cdot)$ and $N(s, \cdot)$ to 0
\State Run rollout policy and get the solution score $v_{rollout}$\label{mctsrollout}
\State $v\leftarrow v_{rollout}$\label{mctsvaluerollout}
\Return $v$
\Else
\State Select an action $a$ with highest UCT value\label{line:uct}
\State $s^\prime\leftarrow$getNextState($s$, $a$)
\State $v\leftarrow$Search($s^\prime$)
\State $Q(s,a)\leftarrow\frac{N(s,a)*Q(s,a)+v}{N(s,a)+1}$
\State $N(s,a)\leftarrow N(s,a)+1$ \label{line:update}
\EndIf
\State \Return $v$
\EndFunction
\end{algorithmic}
\end{algorithm}
According to~\cite{wang2020analysis,wang2020warm,wang2021adaptive}, a recursive MCTS pseudo code is given in Algorithm~\ref{alg:mcts}.
For each search, the rollout value is returned (or the game termination score). For each visit of a non-leaf node, the action with the highest UCT value is selected to investigate next~\cite{browne2012survey}.
After each search, the average value $Q(s,a)$ and the visit count $N(s,a)$ of every node on the visited trajectory are updated correspondingly. The UCT formula is as follows:
\begin{equation}
U(s,a) = Q(s,a) + c \sqrt{\frac{\ln N(s,\cdot)}{N(s,a)+1}}
\end{equation}
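As a small illustration of how this formula is used for action selection (the exploration constant $c$ here is only an illustrative default), consider the following sketch:
\begin{quote}\begin{scriptsize}\begin{verbatim}
import math

def uct_select(Q, N, c=1.0):
    # Sketch: pick the action maximising Q(s,a) + c*sqrt(ln N(s,.)/(N(s,a)+1)).
    # Q and N are dicts mapping each legal action to its mean value and visit count.
    total = sum(N.values())
    log_total = math.log(total) if total > 0 else 0.0
    return max(Q, key=lambda a: Q[a] + c * math.sqrt(log_total / (N[a] + 1)))
\end{verbatim}\end{scriptsize}\end{quote}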
Nested Monte Carlo Tree Search calls MCTS at every step of the assignment process~(due to the high computational cost, we only investigate level 1 NMCTS in this paper).
\subsection{Nested Monte Carlo Search}\label{subsec:nmcs}
\begin{algorithm}[bth!]
\caption{Nested Monte Carlo Search}
\label{alg:nmc}
\begin{algorithmic}[1]
\footnotesize
\Function{NMC}{$s$, level}
\State chosenSeq$\leftarrow$[], bestScore$\leftarrow -\infty$, bestSeq$\leftarrow$[]
\While{$s$ is not terminal}
\For {each $m$ in legalMoves($s$)}\label{alg:nmc:linefor}
\State $s^\prime\leftarrow$ PerformMove($s$, $m$)\label{alg:nmc:lineperform}
\If{level = 1}
\State (score, seq) $\leftarrow$ run rollout policy\label{nmcrollout}
\Else
\State (score, seq) $\leftarrow$ NMC($s^\prime$, level-1)\label{changetozmcs}
\EndIf
\EndFor
\State highScore $\leftarrow$ highest score of the moves from $s$
\If {highScore $>$ bestScore}
\State bestScore $\leftarrow$ highScore
\State chosenMove $\leftarrow$ $m$ associated with highScore
\State bestSeq $\leftarrow$ seq associated with highScore
\Else
\State chosenMove $\leftarrow$ first move in bestSeq
\State bestSeq $\leftarrow$ remove first move from bestSeq
\EndIf
\State $s \leftarrow$ perform chosenMove to $s$
\State chosenSeq $\leftarrow$ append chosenMove to chosenSeq
\EndWhile
\State \Return (bestScore, chosenSeq);
\EndFunction
\end{algorithmic}
\end{algorithm}
According to~\cite{cazenave2009nested}, the Nested Monte Carlo Search algorithm employs nested calls with rollouts, together with a record of the best sequence of moves, at different levels. The basic level only performs random moves. Since a nested search may obtain worse results than a previous lower-level search, it is important to record the best sequence found so far and to follow it whenever the searches yield worse results than that sequence. The pseudo code for the basic Nested Monte Carlo Search algorithm is given in Algorithm~\ref{alg:nmc}. In order to estimate leaf nodes from themselves instead of from their children, we further test a variant of NMCS named ZNMCS~(Zero Nested Monte Carlo Search), in which line~\ref{alg:nmc:linefor} of Algorithm~\ref{alg:nmc} is changed to \textbf{for $i=0, i<t, i++$ do}~(with $t=10$ in our experiments), line~\ref{alg:nmc:lineperform} is removed, and line~\ref{changetozmcs} is changed to (score, seq)$\leftarrow$ ZNMCS($s$, level-1).
\section{Dynamic SLS Based Monte Carlo Methods}\label{sec:slsmctsnmc}
This section proposes the Dynamic SLS method for MCTS and NMCS.
Since the number of unassigned variables decreases as the search tree deepens, we propose Dynamic SLS to avoid redundant flips and to enlarge the search tree, thereby improving performance within a fixed time budget.
The flip limit~(written as $f$) is computed according to the following equation:
\begin{equation}\label{eq:dynamicflip}
f=w \times u
\end{equation}
where $w$ is a weight and $u$ is the number of unassigned variables that can still be flipped. For MCTS, the variables above the leaf nodes of the search tree have already been assigned a value, so they cannot be flipped anymore. We also tested raising $u$ to several exponents and found that an exponent of 1 works best.
In this work, we use Dynamic SLS to replace the rollout policy for MCTS~(line~\ref{mctsrollout} in Algorithm~\ref{alg:mcts}) and NMCS~(line~\ref{nmcrollout} in Algorithm~\ref{alg:nmc}; likewise for ZNMCS). In addition, \cite{goffinet2016monte} report that using the squared score is best for UCTMAX, so for MCTS we also replace the value calculations in lines~\ref{mctsvalueend} and~\ref{mctsvaluerollout} of Algorithm~\ref{alg:mcts} with $v=v_{end}^2$ and $v=v_{dsls}^2$, respectively.
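The following minimal sketch summarises these two ingredients; the weight $w=2$ is only illustrative (it is the value later found best in the orientation experiments), and the function names are not taken from the original implementation.
\begin{quote}\begin{scriptsize}\begin{verbatim}
def dynamic_flip_limit(num_unassigned, w=2):
    # Dynamic SLS flip budget: f = w * u, where u is the number of
    # variables that are still unassigned (and can therefore be flipped).
    return w * num_unassigned

def mcts_value(sat_score):
    # Squared score used as the MCTS leaf value, following Goffinet et al.
    return sat_score ** 2
\end{verbatim}\end{scriptsize}\end{quote}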
\section{Orientation Experiments}\label{sec:orientation}
\subsection{Trial with Different Rollout}\label{subsect:trialplayout}
There are several ways to estimate the state value for Monte Carlo methods. One typical way is simply to run random simulations to obtain approximate values. In addition, for MaxSAT there are many well-designed heuristics for assigning truth values, from which a value can be obtained. Several well-studied SLS algorithms can also be applied to estimate the state value. Therefore, in order to determine which option is best for the state estimation function, we combine the different rollout policies with NMCTS and NMCS on our test setting~(50 different instances with 70 variables each). The NMCTS simulation count is set to 100, the time budget for each run is 50 seconds, and each setting is run for 10 repetitions. The results are shown in Table~\ref{tab:rolloutpolicies}. We see that all heuristics outperform the random rollout, H3 is better than H2, and H2 is better than H1. Most importantly, the SLS methods clearly perform best, so we adopt WalkSat and Novelty as the rollout policies for the further experiments. In addition, WalkSat works better for NMCS than for NMCTS, but NMCTS with Novelty is the best overall.
\begin{table}[bht!]
\caption{Results for Max3Sat instances~(70 variables) using different rollout policies for NMCTS and NMCS.
Results are the average number of unsatisfied clauses over the tested group of instances; the same holds for the following results.}
\centering
\begin{tabular}{l*1r*3r}
\toprule
Method & NMCTS & \multicolumn{3}{c}{NMCS}\\
\cmidrule(lr){3-5}
Level & - & playout & level 1 & level 2 \\
\midrule
Random&81.4 &125.2 &80.8 &80.5\\
H1&56.1 &70.0& 54.4& 53.7\\
H2&55.1 & 69.5 &54.6 &53.8\\
H3&53.2& 64.4 &52.2 &52.2 \\
WalkSat&47.9&52.0&\textbf{47.4}&\textbf{47.7}\\
Novelty&\textbf{47.7}&\textbf{51.9}&48.8&49.0\\
\bottomrule
\end{tabular}\label{tab:rolloutpolicies}
\end{table}
\subsection{UCTMAX vs NMCTS}\label{subsect:SingleUCTvsMultiUCT}
\cite{goffinet2016monte}
only investigated UCTMAX, which runs MCTS once from the root until the timeout; it does not perform an action to enter the next state and run UCT again, as in game playing. To this end, the NMCTS~\cite{baier2012nested} approach should be further investigated. We fix the MCTS simulation count~(at 100) so that each step terminates with a search tree; based on this tree, the best action is selected and performed to enter the next state, and another UCT search is run, until the timeout or a terminal state is reached. The results show that NMCTS clearly performs better than UCTMAX. In order to enlarge the differences between settings, we use larger instances~(50 instances with 140 variables each; \cite{goffinet2016monte} also used 140 variables as the test instance size, but tested on only one instance, whereas we test on 50 different instances of this size to reduce noise) for this experiment and the following orientation experiments.
\begin{figure}[bth!]
\centering
\hspace*{-1.5em}
\input{plots/mcts_singlemulti_walksat}
\caption{Comparison of UCTMAX with NMCTS. NMCTS outperforms UCTMAX on 50 instances with 140 variables each. For both UCTMAX and NMCTS, $f$ is set to 2000, which is reported as the best value.}
\label{fig:orientation_singlemulti_walksat}
\end{figure}
\subsection{Current Global Best Solution}\label{subsect:globalbest}
Based on~\cite{goffinet2016monte} and \cite{cazenave2009nested}, we know that it is key to keep the global best solution~(the best of the local solutions from all steps) found so far and to initialize the SLS algorithms with it. It is not yet known whether this is also important in our Nested Monte Carlo methods with SLS. Therefore, we evaluate different combinations to assess its importance.
\begin{table}[bth!]
\caption{Impact of Random variable initialization and of keeping the global best solution on the performance of NMCTS and NMCS.
Fixed number of flips (2000), 50 instances, 140 variables each.}\label{tab:currentbest}
\centering
\begin{tabular}{l*5r}
\toprule
Keep Global & \multicolumn{2}{c}{No} & \multicolumn{3}{c}{Yes} \\
\cmidrule(lr){2-3}\cmidrule(lr){4-6}
Initialization & Rand & Best & Rand &Local& Best \\
\midrule
Time Budget & \multicolumn{5}{c}{100s}\\
\cmidrule(lr){2-6}
NMCTS & 221.2 & 220.8 & 204.8 &205.1& \textbf{204.6} \\
NMCS & 198.8 & 199.1 & 199.3 &198.8& \textbf{198.7} \\
\toprule
Time Budget & \multicolumn{5}{c}{300s}\\
\cmidrule(lr){2-6}
NMCTS & 219.9 & 219.8 & 202.9 &202.9 & \textbf{202.6}\\
NMCS & 195.3 & 195.6 & 195.3 &195.6& \textbf{193.1}\\
\bottomrule
\end{tabular}
\end{table}
The results are shown in Table~\ref{tab:currentbest}. We see that with a small time budget~(100 seconds), keeping the global best records already shows an advantage for NMCTS, and initializing from the global best records is also slightly better than not doing so. For NMCS with 100 seconds, we still find that keeping the global best records and initializing with them is best, but the difference is not very significant; however, we see a clear improvement with a larger time budget~(300 seconds). The reason that the different initializations do not differ much might be that the flip limit is set so high that even a random initialization can reach the level of the global record after flipping. From this experiment, we conclude that keeping the global best records and initializing the SLS~(here, WalkSat) from them are both important for the nested search. NMCS works better than NMCTS with WalkSat on the 140-variable instances.
\subsection{Probabilistic SLS Initialization}\label{subsect:slsinitial}
In order to further investigate the contribution of initializing WalkSat from the global best solution found so far, we adopt $\epsilon$-greedy, the simplest and most commonly used way to balance exploration and exploitation, to initialize the assignment.
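A minimal Python sketch of this initialization is given below; the function name \texttt{epsilon\_greedy\_init} and the Boolean-list representation of an assignment are our own illustrative choices.
\begin{verbatim}
import random

def epsilon_greedy_init(global_best, epsilon=0.1):
    # Each literal copies its value from the global best solution found so
    # far with probability 1 - epsilon, and is set uniformly at random
    # otherwise. epsilon = 0 reproduces initialization purely from the best
    # record; epsilon = 0.1 is the value used in our later experiments.
    return [v if random.random() > epsilon else random.random() < 0.5
            for v in global_best]

print(sum(epsilon_greedy_init([True] * 140)))
\end{verbatim}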
\begin{figure}[bth!]
\centering
\hspace*{-1.5em}
\input{plots/mcts_epsilon}
\caption{Initializing WalkSat with $\epsilon$-greedy for NMCTS on 50 instances with 140 variables each. $\epsilon=0$ means WalkSat is initialized entirely from the global best solution; $\epsilon=0.1$ means each literal is initialized randomly with probability 10\%, and so on. $\epsilon=0.1$ performs best.}
\label{fig:orientation_nest_walksat_epsilon}
\end{figure}
From Fig.~\ref{fig:orientation_nest_walksat_epsilon}, we see that $\epsilon=0.1$ performs best, which further shows that the best initialization is to set the literal assignment from the best solution found so far while injecting a small amount of random initialization. Our following experiments therefore use $\epsilon=0.1$.
\subsection{Fixed Flip Limits vs Dynamic Flip Limits}
Goffinet et al.~\cite{goffinet2016monte} used fixed flip limits, which we found can instead be set dynamically. In this section we therefore test different $w$ values~(from 0.5 to 25; we only present the results for $w\in \{$1, 2, 4$\}$ as they are the strongest) in the dynamic flip-limit equation~(see Equation~\ref{eq:dynamicflip}). We find that for both NMCTS and NMCS, across different budgets, $w=2$ is generally the best~(only the 300-second result for NMCTS is weaker). In addition, we test fixed flip limits of 2000~(reported as the best for UCTMAX tuned on a single instance) and 140~(the same as the average per-step flip limit with $w$=2). The fixed flip limit of 2000 is the worst, and smaller limits improve performance, which shows that for nested Monte Carlo methods it pays to spread the time budget over relatively more steps.
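As a rough illustration only: Equation~\ref{eq:dynamicflip} is defined earlier in the paper and is not reproduced here, so the Python sketch below simply assumes a limit that scales linearly with the number of still-unassigned variables, weighted by $w$; the exact form used in our experiments may differ.
\begin{verbatim}
def dynamic_flip_limit(w, num_unassigned_vars):
    # Assumed (illustrative) dynamic flip budget for one search step:
    # proportional to the number of still-unassigned variables, so deeper
    # steps receive fewer flips; w is the tuned weight (w = 2 here).
    return max(1, int(w * num_unassigned_vars))

print([dynamic_flip_limit(2, n) for n in (140, 70, 10)])
\end{verbatim}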
\begin{figure}[!tbh]
\centering
\hspace{-1.5em}
\subfloat[NMCTS]{\label{subfig:orientation_mcts_fixedvsdyn_walksat}
\resizebox{0.49\columnwidth}{!}{\input{plots/fixedvsdyn_mcts}}
}
\hfill
\subfloat[NMCS]{\label{subfig:orientation_nmcs_fixedvsdyn_walksat}
\resizebox{0.49\columnwidth}{!}{\input{plots/fixedvsdyn_nmc}}
}
\caption{Comparison of fixed SLS with dynamic SLS for NMCTS and NMCS. To keep $w$ consistent across all runs, and considering the overall results, we set the weight $w$ for the Dynamic SLS flip limits to 2.
}
\label{fig:orientation_fixedvsdyn_walksat}
\end{figure}
Intuitively, even if a finely tuned fixed flip limit is found for one type of instance, it cannot simply be taken as the best value for other instances; as the instance size increases, the flip limit should also increase. To test this assumption, we proposed the dynamic SLS scheme and showed that it works well for the 140-variable category. To demonstrate the adaptability of our Dynamic SLS method, after tuning $w$ we further test the best value on larger instances with 180 and 200 variables, and compare the results with the fixed flip limit~(whose best value is 140 for instances with 140 variables). The results are presented in Fig.~\ref{fig:orientation_fixedvsdyn_walksat_180200}. We see that $2u$ achieves better performance for both the 180- and 200-variable categories, showing that our Dynamic SLS adapts better to other instances; no redundant extra tuning cost is needed.
\begin{figure}[bth!]
\centering
\hspace*{-1.5em}
\input{plots/fixedvsdyn_mcts_180200}
\caption{Examples: comparison of $2u$ dynamic flips and 140 fixed flips for instances with 180 and 200 variables, respectively. NMCTS with Dynamic SLS is better than the fixed flip limit on both the 180- and 200-variable types, showing that our Dynamic SLS adapts better to instances with different numbers of variables.}
\label{fig:orientation_fixedvsdyn_walksat_180200}
\end{figure}
\section{Experiments on Benchmark}\label{sec:full-exp}
In this section, we report experimental results for the aforementioned SLS-based Monte Carlo methods on the benchmark instances. The benchmark consists of 383 instances categorized by number of variables; within each category, the instances have different numbers of clauses.
\begin{table*}[tbh!]
\hspace{-10cm}
\scriptsize
\caption{Results on MaxSAT instances using WalkSat-based UCTMAX, NMCTS, ZNMCS and NMCS, respectively, with a 300-second budget per run and 10 repetitions each. }\label{maxsatallinstances_walksat}
\resizebox{\textwidth}{!}{\input{tables/fullexperiments}}
\end{table*}
\begin{table*}[tbh!]
\hspace{-10cm}
\scriptsize
\caption{Results on MaxSAT instances using Novelty-based UCTMAX, NMCTS, ZNMCS and NMCS, respectively, with a 300-second budget per run and 10 repetitions each. }\label{maxsatallinstances_noveltyt}
\resizebox{\textwidth}{!}{\input{tables/fullexperiments_Novelty}}
\end{table*}
From Table~\ref{maxsatallinstances_walksat}, we can see that with WalkSat, nested Monte Carlo methods perform better than UCTMAX. For smaller instances, such as the 70- and 80-variable categories, ZNMCS and NMCS at level 1 perform best, and ZNMCS at level 2 achieves similar scores. Interestingly, for the categories from 120 to 200 variables, the best performance is achieved by ZNMCS at level 2, while for the largest instances NMCTS is the best. These results confirm that higher-level nesting of Monte Carlo methods can lead to worse performance.
From Table~\ref{maxsatallinstances_noveltyt}, we again see that with Novelty, NMCTS performs best for the larger instances. In contrast, for the small instances UCTMAX achieves the best scores; only for the 200-variable type is ZNMCS the best, and there the scores do not vary much. Importantly, for most instances Novelty achieves better scores than WalkSat, much closer to the known optimal solutions, which shows that a stronger SLS method also achieves better performance in combination with nested Monte Carlo search. As a consequence, the improvements of NMCTS are smaller for Novelty than for WalkSat, but the improvements appear to grow with instance size, which we will investigate further in future work.
In addition, the 250- and 300-variable instances differ from the others in that their clauses are much easier to satisfy. In these cases, we find that NMCTS is consistently the best.
Therefore, for both WalkSat and Novelty, we conclude that nested search improves the performance of Monte Carlo methods, especially nested tree search on larger instances combined with the stronger SLS method.
\section{Conclusion and Future Work}
In this paper, we first investigated different rollout policies~(random, heuristic, SLS) for different nested Monte Carlo methods, namely NMCTS and NMCS, applied to the MaxSAT problem. We found that heuristics are better than random rollouts, but SLS is the best rollout policy to combine with Monte Carlo methods in the MaxSAT domain. In addition, we confirmed that, also for nested Monte Carlo methods, the SLS should keep the global best record and initialize its assignment from the current best record found. To further balance exploration and exploitation, we employed $\epsilon$-greedy initialization and found $\epsilon=0.1$ to be a suitable value for randomly re-initializing part of the SLS assignment, improving on the scheme of~\cite{goffinet2016monte}, which initializes fully from the best record. The full benchmark results show that for both WalkSat- and Novelty-based Monte Carlo methods, nested tree search outperforms UCTMAX~(Novelty in particular performs better on larger instances), and NMCS with WalkSat also outperforms UCTMAX and even NMCTS. We therefore conclude that nested search is important for MaxSAT problems, especially nested tree search on larger instances.
In the future, one direction is to combine more powerful SLS algorithms, such as CCLS~\cite{luo2014ccls}, with nested Monte Carlo methods. Furthermore, finding a computationally lighter way to employ higher-level nested search is promising, especially for larger MaxSAT instances.
\bibliographystyle{splncs04}
|
{
"arxiv_id": "2302.13204",
"language": "en",
"timestamp": "2023-02-28T02:12:58",
"url": "https://arxiv.org/abs/2302.13204",
"yymm": "2302"
} | \section{Introduction}
\label{sec:intro}
Central to the axioms of quantum theory is the Hermiticity of the Hamiltonian, as it guarantees a unitary description of time evolution. Unitary time evolution only applies to \textit{isolated} quantum systems. When a small quantum system interacts with the environment, the resulting dynamics for the reduced density matrix of the system is, typically, decoherence inducing. Under mild conditions such as a Markovian bath, this dynamics is described by a completely positive trace preserving (CPTP) map that is generated by the Lindblad equation~\cite{Gorini1976,Lindblad1976}. In recent years, non-Hermitian Hamiltonians have been extensively studied due to their emergence as effective descriptions of classical systems with gain and loss~\cite{Joglekar2013}. In the truly quantum domain, it has been shown that they emerge from the Lindblad equation through post-selection, where trajectories with quantum jumps are eliminated~\cite{Naghiloo2019}. Examples of phenomena modelled by non-Hermitian Hamiltonians range from gain and loss in photonics \cite{ElGanainy2007,Guo2009,Rter2010,ElGanainy2018} and radioactive decay in nuclear systems \cite{Siegert1939,Feshbach1958,Feshbach1962} to renormalization in quantum field theories \cite{quantumRG,LeeModel,kallen1955mathematical,LeeModelPT}.
Of particular interest in the study of non-Hermitian Hamiltonians are those with an antilinear symmetry. A system whose time evolution is governed by a Hamiltonian with an antilinear symmetry exhibits time-reversal symmetry \cite{Wigner1932}. A Hamiltonian with an antilinear symmetry has eigenvalues that are purely real or occur in complex-conjugate pairs, because if $\lambda$ is an eigenvalue of such an operator, then $\lambda^*$ also satisfies the characteristic equation. This feature, the pairing of complex-conjugate eigenvalues, is often used to describe systems with balanced loss and gain.
Additionally, if a Hamiltonian exhibits $\textit{unbroken}$ antilinear symmetry, so that all of its eigenspaces are invariant under the same symmetry, the Hamiltonian's spectrum is real \cite{bender1999pt}. For historical reasons, the linear and complex-conjugation parts of the antilinear symmetry are called \textit{parity} and \textit{time-reversal} symmetries respectively. In our models of $n$-site graphs, the state space is the Hilbert space $\mathbb{C}^n$, and the actions of parity and time-reversal operators are given by
\begin{align}
\mathcal{P}_n e_k &= e_{\overline{k}} \label{P} \\
\mathcal{T} e_{k} &= e_{k}, \label{T}
\end{align}
where $(e_k)_j = \delta^k_{j}$ is the canonical basis for $\mathbb{C}^n$, $\delta$ is the Kronecker delta, and $\overline{k} = n+1-k$. Hamiltonians $H$ which are $\mathcal{PT}$-symmetric in this sense satisfy the constraint $H_{pq}=H^{*}_{\overline{p}\overline{q}}$ and are referred to as \textit{centrohermitian} \cite{Lee1980}.
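As a quick numerical illustration (a minimal sketch, not tied to any particular package), the centrohermitian condition can be checked by verifying $\mathcal{P}_n H^* \mathcal{P}_n = H$ for a small example:
\begin{verbatim}
import numpy as np

n = 4
P = np.fliplr(np.eye(n))   # parity: e_k -> e_{n+1-k}; T is complex conjugation
H = np.array([[0.3 + 0.2j, 1, 0, 0],
              [1, -0.5, 1, 0],
              [0, 1, -0.5, 1],
              [0, 0, 1, 0.3 - 0.2j]])
# Centrohermitian condition H[p, q] = conj(H[n+1-p, n+1-q]), i.e. P H* P = H
assert np.allclose(P @ H.conj() @ P, H)
\end{verbatim}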
Given a $\mathcal{PT}$-symmetric Hamiltonian which depends on a set of parameters, the $\mathcal{PT}$-symmetry is unbroken for a subset of parameter space, the boundary of which consists of exceptional points. \textit{Exceptional points} (EPs) are points in parameter space where the number of distinct eigenvalues (and corresponding eigenvectors)
decreases \cite{Kato1995}. We define the \textit{order} of an EP to be the number of coalescing eigenvectors (irrespective of the algebraic multiplicity of the corresponding eigenvalue) at the EP, and refer to a $k$-th order EP as an EP$k$.
An equivalent condition for the existence of an antilinear symmetry for $H$ is \textit{pseudo-Hermiticity} \cite{mostafazadeh2002pseudo3,Siegl2009,siegl2008quasi}. Pseudo-Hermitian operators are those such that there is an Hermitian \textit{intertwining operator}, $M=M^\dagger$, satisfying
\begin{equation}
H = M^{-1} H^\dag M. \label{Dieudonne}
\end{equation}
In the case where $M$ is positive definite, we refer to it as a \textit{metric operator}, and $H$ is called \textit{quasi-Hermitian} \cite{dieudonne,QuasiHerm92}. A finite-dimensional matrix has real eigenvalues if and only if it is quasi-Hermitian \cite{Drazin1962,BiOrthogonal,mostafazadeh2002pseudo2,mostafazadeh2008metric}. Furthermore,
the metric operator defines an inner product for which a quasi-Hermitian operator is self-adjoint. Thus, quasi-Hermitian operators can be realized as observables in a fundamental extension of quantum theory to self-adjoint but non-Hermitian Hamiltonians~\cite{QuasiHerm92}. On the other hand, if non-Hermitian Hamiltonians are considered an effective description, where loss of unitarity is not prohibited, one uses the Dirac-inner product to obtain observables and predictions, and the intertwining operators take the role of time invariants \cite{bian2019time}.
Given a Hamiltonian $H$ and metric $M$, the metric for any similar Hamiltonian $S^{-1} H S$ can be constructed as~\cite{kretschmer2001interpretation},
\begin{equation}
(H,M)\leftrightarrow (H',M')=(S^{-1}HS,\,S^\dag M S). \label{MetricMapper}
\end{equation}
Notably, choosing $S = M^{-1/2}$ implies $M'=\mathbb{1}$ and therefore $H'^\dagger=H'$, i.e. an equivalent Hermitian Hamiltonian exists for all quasi-Hermitian Hamiltonians with bijective metric operators \cite{williams1969operators,mosta2003equivalence}.
The models in this paper are special cases of transpose-symmetric tridiagonal matrices over $\mathbb{C}^n$ with perturbed corners,
\begin{align}
H_{ij} &= z_i \delta^i_{j} + t_j \delta^i_{j+1} + t_i \delta^j_{i+1} + \delta^i_{\bar{j}}(t_L \delta^i_{1} + t_R \delta^i_{n})
\nonumber\\
&= \begin{pmatrix}
z_1 & t_1 & 0 & \dots &0 & t_L \\
t_1 & z_2 & t_2 & \ddots & \ddots & 0 \\
0 & t_2 & z_3 & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & t_{n-2} & 0 \\
0 & \ddots & \ddots & t_{n-2} & z_{n-1} &t_{n-1} \\
t_R & 0 & \dots & 0 & t_{n-1} & z_n
\end{pmatrix},
\label{TriDiag}
\end{align}
where $z_i, t_L, t_R \in \mathbb{C}$ and $t_i\in\mathbb{R}$.
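For later numerical checks it is convenient to assemble \cref{TriDiag} explicitly; the following Python/NumPy helper (the function name \texttt{build\_H} is our own) is a minimal sketch:
\begin{verbatim}
import numpy as np

def build_H(z, t, t_L=0.0, t_R=0.0):
    # Assemble the transpose-symmetric tridiagonal matrix of Eq. (TriDiag).
    #   z        : length-n array of on-site potentials z_1, ..., z_n
    #   t        : length-(n-1) array of real tunnelling amplitudes
    #   t_L, t_R : complex corner perturbations H[1, n] and H[n, 1]
    n = len(z)
    H = np.diag(np.asarray(z, dtype=complex))
    H += np.diag(np.asarray(t, dtype=complex), 1)
    H += np.diag(np.asarray(t, dtype=complex), -1)
    H[0, n - 1] += t_L
    H[n - 1, 0] += t_R
    return H

print(build_H([0, 1j, -1j, 0], [1.0, 0.7, 1.0], t_L=0.2, t_R=0.2))
\end{verbatim}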
$\mathcal{PT}-$symmetric variants of \cref{TriDiag} have been well explored \cite{Korff2008,jin2009solutions,
InfiniteLattice,Babbey,JoglekarSaxena,PTRing,MyFirstPaper,
ortega2019mathcal,farImpurityMetric,guo2016solutions,
Znojil2009,ZnojilGeneralized,Ruzicka2015,Zhu2014,Klett2017,Jin2017,Lieu2018,Yao2018,Turker2019,Mochizuki2020}. Numerous examples of \cref{TriDiag} have closed form solutions for the spectrum, an incomplete list includes \cite{rutherford1948xxv,elliott1953characteristic,losonczi1992eigenvalues,
yueh2005eigenvalues,da2007characteristic,YUEH2008,Willms2008,Chang2009,Joglekar2010,
daFonseca2015,KILI2016,Chu2019,Alazemi2021}. Due to the well-known similarity transformation between generic tridiagonal matrices and their transpose-symmetric counterparts, displayed in \cite{santra2002non,JoglekarSaxena}, the results of this report readily generalize to tridiagonal matrices which are not transpose symmetric, such as the Hatano-Nelson model \cite{Hatano1996}.
\section{Tight-binding models}
The results of this paper can be categorized into three groups. The first two groups pertain to two special cases of matrices of the form \cref{TriDiag}, and the last group pertains to generic features of exceptional points of bivariate matrix polynomials. These results are now described in order.
\begin{figure}[htp!]
\centering
\includegraphics[width = 80mm]{chain.pdf}
\includegraphics[width = 80mm]{cycle.pdf}
\caption{Graphical representation of the two cases of \cref{TriDiag} studied in this report, for $n = 6$ (open chain, top) and $n=10$ (closed, SSH chain, bottom), respectively. The first case of an open chain with nearest neighbour defects is studied in \cite{Babbey}, and the second case of an SSH chain with non-Hermitian defects on the boundary is studied in \cref{SSH}.} \label{chains}
\end{figure}
The types of matrices studied in the first two groups of results are graphically depicted in \cref{chains}. In the first case, we consider a general Hermitian chain on an even lattice with open boundary conditions and non-Hermitian perturbations on the central two sites. In the second case, we consider a Su-Schrieffer-Heeger (SSH) chain with a pair of non-Hermitian perturbations at the edges of the chain.
The diagonal elements of $H$ are assumed to be real-valued everywhere except at a pair of mirror-symmetric sites, $(m, \overline{m})$ with $m \in \{1, \dots, n \}$. The sites $(m, \overline{m})$ will be referred to as \textit{defects}. More explicitly,
\begin{align}
z_j \in \mathbb{R} \, \, \forall j \notin \{m ,\bar{m} \}.
\end{align} To simplify select equations, we will denote the defect potentials as $z_m=\Delta+i\gamma$ and $z_{\overline{m}}=z_m^*=\Delta-i\gamma$. Here, without loss of generality, we take $\gamma\geq 0$; therefore, the site $m$ is the gain site and its mirror-symmetric site $\overline{m}$ is the loss site.
We will refer to the parameter $\Delta$ as \textit{detuning}. To enforce $\mathcal{PT}$-symmetry, in most of the paper, we make the following assumptions on the model parameters:
\begin{align}
t_k &= t_{n-k} \in \mathbb{R} \setminus \{0\} \nonumber \\
z_k &= z_{\bar{k}}^* \nonumber \\
t_L &= t_R^* \in \mathbb{C}. \label{mostAssumptions}
\end{align}
The spectrum of $H$, $\sigma(H)$, describing an open chain obeys the following symmetries:
\begin{equation}
\sigma(H(z_i)) = -\sigma(H(-z_i)) = \sigma(H(z_i^*))^* ,
\end{equation}
where the first equality arises from the similarity transform $(-1)^{i+j} H_{ij}(z_i) = - H_{ij}(-z_i)$ \cite{kahan1966accurate,Valiente2010,Joglekar2010} and the second equality arises from the $\mathcal{PT}$-symmetry of the Hamiltonian. When $z_i=0$, this symmetry is called chiral symmetry. Physically, it states that eigenvalues of $H$ arise in particle-hole symmetric pairs, and signals the existence of an operator $\Pi_{ij}=(-1)^i\delta_{ij}$ that anticommutes with the Hamiltonian.
\subsection{Nearest Neighbour Defects}
In \cref{openChain}, we consider the case with nearest neighbour defects and open boundary conditions, i.e. $n = 2m$, and $t_L =0= t_R$. In this case, the $\mathcal{PT}$-threshold is equal to the magnitude of the tunnelling amplitude $t_m>0$ between the nearest-neighbour defect sites. Note that $t_m>0$ can be chosen without loss of generality. For $\gamma \leq t_m$, the spectrum is purely real, and the EP occurs when $\gamma=\gamma_\textrm{EP}=t_m$ where the $2m$ dimensional system has exactly $m$ linearly independent eigenvectors. For $\gamma>t_m$, there are no real eigenvalues, i.e. the system has maximally broken $\mathcal{PT}$-symmetry~\cite{MyFirstPaper}.
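This threshold is easily verified numerically. The following NumPy sketch (all parameter values are arbitrary illustrative choices) scans $\gamma$ across $t_m$ and prints the largest imaginary part of the spectrum, which switches from (numerically) zero to finite at $\gamma=t_m$:
\begin{verbatim}
import numpy as np

def spectrum(gamma, m=3, t_m=0.7, Delta=0.4):
    # Open chain with n = 2m sites, PT-symmetric tunnelling, and
    # nearest-neighbour defects z_m = Delta + i*gamma = z_{m+1}^*.
    n = 2 * m
    t = np.ones(n - 1)
    t[m - 1] = t_m                 # tunnelling between the two defect sites
    z = np.zeros(n, dtype=complex)
    z[m - 1], z[m] = Delta + 1j * gamma, Delta - 1j * gamma
    H = np.diag(z) + np.diag(t, 1) + np.diag(t, -1)
    return np.linalg.eigvals(H)

for g in (0.5, 0.69, 0.71, 1.0):   # threshold predicted at gamma = t_m = 0.7
    print(g, np.max(np.abs(spectrum(g).imag)))
\end{verbatim}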
In \cref{homomorphismMetric}, we obtain a one-parameter family of intertwining operators, $M$. A subset of positive-definite metric operators exists when the gain-loss strength satisfies $\gamma < t_m$. The intertwining operator is used to construct a so-called $\mathcal{C}$-symmetry of our transpose-symmetric Hamiltonian \cite{bender2002complex,CorrectCPT}. Using the metric, we compute a similar Hermitian Hamiltonian, $H'$, in \cref{equivHermHam}.
Notably, the similarity-transformed Hermitian Hamiltonian is local. This contrasts with the generic cases of local $\mathcal{PT}$-unbroken Hamiltonians, whose similar Hermitian operators are nonlocal \cite{Korff2008}.
Where the tunnelling is uniform, $t_i = t>0$, the spectrum of $H$ can be computed exactly for some special cases of defect potentials, summarized in \cref{table}. Note that for a uniform chain, the choice of positive $t$ is always possible via a unitary transformation of the Hamiltonian. Since the eigenvalues of tridiagonal matrices always have geometric multiplicity equal to one \cite{elliott1953characteristic}, the cases in \cref{table} where $H$ has fewer than $n$ distinct eigenvalues are exceptional points.
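As an illustration, the entry $z_m=e^{\pm i \pi/3} t$ of \cref{table} can be verified numerically; the following NumPy sketch (with $m=3$ and $t=1$, chosen arbitrarily) compares the eigenvalues of $H$ with the closed-form set:
\begin{verbatim}
import numpy as np

m, t = 3, 1.0
n = 2 * m
z = np.zeros(n, dtype=complex)
z[m - 1] = np.exp(1j * np.pi / 3) * t      # z_m = exp(i pi/3) t
z[m] = np.conj(z[m - 1])                   # z_{m+1} = z_m^*
ts = t * np.ones(n - 1)
H = np.diag(z) + np.diag(ts, 1) + np.diag(ts, -1)

predicted = np.concatenate([
    2 * t * np.cos(np.arange(1, m + 1) * np.pi / (m + 1)),
    2 * t * np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m + 1)),
])
evals = np.linalg.eigvals(H)
assert np.max(np.abs(evals.imag)) < 1e-8   # spectrum is real
assert np.allclose(np.sort(evals.real), np.sort(predicted))
\end{verbatim}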
\begin{table*}[htp!]
\centering
\begingroup
\setlength{\tabcolsep}{8pt}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{|l|l|}
\hline
Constraints & Eigenvalues of $H$ \\
\hhline{|=|=|}
$z_m = \pm i t$
& $\left\{
2t \cos \left(\dfrac{j \pi}{m+1}\right)\,\mid \, j \in \{1, \dots, m\}
\right\}$\\
\hline
$z_m = (-1\pm i) t$
& $\left\{
2 t\cos\left( \dfrac{2j \pi }{2m + 1} \right)\,|\, j \in \{1, \dots, m\} \right\} $\\
\hline
$
z_m = (1\pm i) t$
& $\left\{
2t \cos \left(\dfrac{(2j-1)\pi}{2m+1} \right)\,|\, j \in \{1, \dots, m\} \right\}$ \\
\hline
$z_m = e^{\pm i \pi/3} t$
& $\begin{array}{l} \left\{ 2t \cos \left(\dfrac{j \pi}{m+1} \right)\,|\, j \in \{1, \dots, m\} \right\} \cup\\ \left\{2t \cos \left(\dfrac{(2j-1) \pi }{2m + 1}\right)\,|\, j \in \{1, \dots, m\} \right\} \end{array}$\\
\hline
$
z_m = e^{\pm 2 \pi i/3} t $
& $\begin{array}{l} \left\{2t \cos \left(\dfrac{j \pi}{m+1}\right)\,|\, j \in \{1, \dots, m\} \right\} \cup\\ \left\{2t \cos \left(\dfrac{2j \pi}{2m+1}\right)\,|\, j \in \{1, \dots, m\} \right\} \end{array}$ \\
\hline
\end{tabular}
\centering
\caption{Non-Hermitian cases where the spectrum of $H$ has a closed form solution, for an even, uniform lattice with open boundary conditions and nearest-neighbour defects. The entry in the first row was known to \cite{Joglekar2010}. The eigenvectors can be constructed using the ansatz of \cite{Joglekar2010} or by computing characteristic polynomials \cite{Gantmacher2002}.}
\label{table}
\endgroup
\end{table*}
\subsection{SSH Chain}
\label{subSSH}
Our second set of results pertains to a non-Hermitian perturbation of the SSH model with open boundary conditions and non-Hermitian defects at the edges of the chain ($m = 1$). Mathematically, we assume the tunnelling amplitudes are 2-periodic, given by $t_1,t_2>0$ respectively, and we set
\begin{equation}
z_i = 0 \, \, \forall i \notin \{m ,\bar{m} \} \label{ParamAssumptions}
\end{equation}
Note that the choice of positive $t_1,t_2$ for an open chain is always possible by using a diagonal, unitary transform. The case with zero detuning was studied in \cite{Zhu2014,Klett2017}, and additional non-Hermitian perturbations of the SSH chain can be found in \cite{Ruzicka2015,Jin2017,Lieu2018,Yao2018,Turker2019,Mochizuki2020}. Several special cases of the eigenvalue equation are exactly solvable~\cite{rutherford1948xxv,elliott1953characteristic,losonczi1992eigenvalues,
yueh2005eigenvalues,da2007characteristic,Willms2008,YUEH2008,Chu2010,Joglekar2010,daFonseca2015,modak2021eigenstate}. The characteristic polynomial for even and odd SSH chains is presented in \cref{charPolyTable}, generalizing the results in \cite{da2007characteristic,ortega2019mathcal}.
When $t_2<t_1$, i.e. the weak links are in the interior of the chain, the system is in the topologically trivial phase. When $t_1<t_2$, the weak links are at the edges of the chain, rendering the system topologically nontrivial. In the thermodynamic limit ($n \to \infty$), in the topologically nontrivial phase with zero detuning, \cite{Klett2017} demonstrated that the $\mathcal{PT}$-symmetry breaks at $\gamma = 0$ due to the presence of \textit{edge states}, eigenstates which are peaked at the edges of the chain and decay exponentially as one moves inwards. Thus, the uniform chain with $t_1=t_2$ marks the transition between the topologically trivial and nontrivial phases, and we will refer to it as a critical SSH chain. When we place two defects with detuning in an SSH system, the constraints of proposition~\ref{inclusionTheorem} yield a region with $\sigma(H)\subset\mathbb{R}$, i.e. a $\mathcal{PT}$-unbroken phase. A subset of the $\mathcal{PT}$-unbroken domain includes
\begin{align}
&(\Delta^2 + \gamma^2 \leq t_2^2 \leq t_1^2 ) \vee (\Delta^2 + \gamma^2 =t_2^2 \wedge \gamma^2 < t_1^2 ).
\end{align}
A subset of the $\mathcal{PT}$-broken phase is given by \cref{unbrokenIneq1,unbrokenIneq2}.
Continuing with the case of the critical SSH chain, $t_2=t_1=t$, we expand upon the works of \cite{Korff2008,jin2009solutions,farImpurityMetric}. The set of exceptional points is determined analytically in \cref{EPSurface}. Asymptotic expressions for this set are studied in the large-detuning limit, $\Delta/t\rightarrow \infty$, and we find that the critical defect strength scales as $\gamma_\textrm{EP}/t\propto (\Delta/t)^{-(n-2)}$. In the thermodynamic limit of $n \rightarrow \infty$, the $\mathcal{PT}$-unbroken region is numerically demonstrated to approach the union of the unit disk $|z_1|/t \leq 1$ and the real axis $\gamma = 0$.
For defects inside a uniform chain, instead of at its end points, we find that a subset of the spectrum is independent of the defect strength $z_m$ whenever $m$ shares a nontrivial factor with $n+1$; this occurs because precisely those open-uniform-chain eigenfunctions have a node at the defect location, thereby rendering the defect invisible to their energies.
Exceptional points occur when these eigenvalues are multiple roots of the characteristic polynomial. In general, these exceptional points do not coincide with the $\mathcal{PT}$-symmetry breaking threshold, and the spectrum is generically complex in the vicinity of these exceptional points. Furthermore, as demonstrated in \cite{ortega2019mathcal}, when $\Delta = 0$, there are even more constant eigenvalues.
\section{Open Chain with Nearest Neighbour Impurities} \label{openChain}
In this section, we present analytical results for a non-uniform open chain with $n=2m$ and nearest neighbour defects $z_m=\Delta+i\gamma=z^*_{m+1}$. In particular we show that most of its properties are determined solely by the tunnelling amplitude $t_m>0$ connecting the two defect sites.
\subsection{Intertwining operators and Inner product}
\begin{proposition}
A Hermitian intertwining operator $M$ for the matrix $H$ of \cref{TriDiag} in the $\mathcal{PT}$-symmetric case $t_k=t_{n-k}$ with nearest neighbour defects and open boundary conditions is given by the block matrix
\begin{align}
M(Z_m) &= \begin{pmatrix}
\mathbb{1}_m & \frac{Z^*_m}{t_m} \mathcal{P}_m \\
\frac{Z_m}{t_m} \mathcal{P}_m & \mathbb{1}_m
\end{pmatrix}, \label{homomorphismMetric}
\end{align}
where $\mathbb{1}_m$ is the $m \times m$ identity matrix and $Z_m$ is a constant with arbitrary real part and $\Im Z_m = \gamma$. $M$ is positive-definite when $|Z_m|< t_m$. $M$ is the only intertwining operator for $H$ which is a sum of the identity matrix and an antidiagonal matrix.
\end{proposition}
\begin{proof}
The proof is by induction. For $n=2$, the most general intertwining operator (modulo trivial multiplicative constant) is \cite{wang20132}
\begin{equation}
M = \begin{pmatrix}
1 & Z_m^*/t_m \\
Z_m/t_m & 1
\end{pmatrix}.
\end{equation}
Suppose $M$ has the form of \cref{homomorphismMetric} when $n = n_0$. Since $M$ is the sum of a diagonal and an antidiagonal matrix, the following identity is a re-expression of \cref{Dieudonne} for $n=n_0+1$,
\begin{equation}
\sum^{n_0}_{j=2} M_{i\,j} H_{jk} = \sum^{n_0}_{j=2} H^{\dag}_{ij} M_{j\,k}, \,\,\,\,\, \forall \,\, 1 < i,k < n.
\end{equation}
Thus, for $n = n_0+1$, $M$ is a sum of a diagonal and an antidiagonal matrix and satisfies \cref{Dieudonne} if and only if
\begin{align}
\begin{pmatrix}
M_{1,1} & M_{1,n} \\
M_{n,1} & M_{n,n}
\end{pmatrix} &=
\begin{pmatrix}
M_{2,2} & M_{2,n-1} \\
M_{n-1,2} & M_{n-1,n-1}
\end{pmatrix},
\end{align}
which implies $M$ must have the form of \cref{homomorphismMetric}. Since $M$ is a direct sum of commuting $2 \times 2$ block matrices, $M$ is positive-definite if and only if $|Z_m|<t_m$. That $M$ is the metric of $H$ was initially stated in \cite{Barnett_2021}. Previous literature found the special case of $M$ for a uniform chain \cite{Znojil2009} and the special case with $n = 2$ \cite{mosta2003equivalence,wang20132}.
\end{proof}
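The intertwining relation \cref{Dieudonne} and the positivity condition are straightforward to check numerically; the following NumPy sketch does so for an $n=4$ chain (all parameter values are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np

m, gamma, Delta = 2, 0.5, 0.3
t = np.array([1.0, 0.7, 1.0])                  # PT-symmetric tunnelling, t_m = 0.7
z = np.array([0.2, Delta + 1j*gamma, Delta - 1j*gamma, 0.2])
H = np.diag(z) + np.diag(t, 1) + np.diag(t, -1)  # open chain, Eq. (TriDiag)

Zm = 0.1 + 1j * gamma                          # arbitrary real part, Im Z_m = gamma
P_m = np.fliplr(np.eye(m))
M = np.block([[np.eye(m), (np.conj(Zm) / t[1]) * P_m],
              [(Zm / t[1]) * P_m, np.eye(m)]]) # Eq. (homomorphismMetric)

assert np.allclose(M @ H, H.conj().T @ M)      # intertwining relation, Eq. (Dieudonne)
print(np.linalg.eigvalsh(M))                   # all positive since |Z_m| < t_m
\end{verbatim}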
We remind the reader of the similarity transform between the general tridiagonal model and the transpose symmetric variant, given in for instance \cite{santra2002non,JoglekarSaxena}. Using the mapping \eqref{MetricMapper} with this similarity transform, the metric operator \eqref{homomorphismMetric} is easily generalized to cases where the Hamiltonian is not transpose symmetric \cite{JoglekarSaxena}.
\subsection{Equivalent Hermitian Hamiltonian} \label{Equivalent Hamiltonian Section}
When the intertwining operator is positive-definite, i.e. the non-Hermitian Hamiltonian has a purely real spectrum, we can construct an equivalent Dirac-Hermitian Hamiltonian as follows. In this section, we assume $|Z_m|< t_m$. A Hermitian Hamiltonian, $h = h^\dag$, which is similar to $H$ is defined as
\begin{align}
h:&= \Omega H \Omega^{-1}=M^{1/2}HM^{-1/2},
\end{align}
where $\Omega=\sqrt{M}$ denotes the unique positive square root of $M$. Since the metric defined in \cref{homomorphismMetric} is block diagonal, the non-unitary similarity transform $\Omega$ can explicitly be calculated as
\begin{align}
\Omega &= \begin{pmatrix}
\frac{\alpha}{2} \mathbbm{1}_m & \frac{Z_m^*}{\alpha t_m} \mathcal{P}_m \\
\frac{Z_m}{\alpha t_m} \mathcal{P}_m & \frac{\alpha}{2} \mathbb{1}_m
\end{pmatrix}=\frac{\alpha}{2}\mathbbm{1}_{2m}+\frac{1}{\alpha t_m}\left(\Re Z_m \sigma_x+\Im Z_m\sigma_y\right)\otimes\mathcal{P}_m,
\end{align}
where $\alpha =\sqrt{1+|Z_m|/t_m}+\sqrt{1-|Z_m|/t_m}$. Thus, the equivalent Hermitian Hamiltonian for the non-uniform open chain is given by
\begin{align}
h_{ij} &= t'_i \delta_{i+1,j} + {t'_i}^*\delta_{i,j+1} + \Re z_i \delta_{i,j}\\
t'_i &:= \begin{cases}
\sqrt{|t_i t_{n-i}|} & \, \text{if }i \neq m \\
\frac{\Re Z_m}{Z_m}t_m + i\frac{\Im Z_m}{Z_m} \sqrt{t_m^2-|Z_m|^2} & \,\text{if }i = m
\end{cases}\label{equivHermHam}
\end{align}
Interestingly, this equivalent Hamiltonian remains tridiagonal, and is interpreted as \textit{local} to a one-dimensional chain. This is in stark contrast to most other cases where the non-unitary similarity transform $\Omega$ generates long-range interactions thereby transforming a local, $\mathcal{PT}$-symmetric Hamiltonian $H$ with real spectra into an equivalent, non-local Hermitian Hamiltonian whose range of interaction diverges as one approaches the exceptional point degeneracy~\cite{Korff2008}.
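The construction can be checked numerically with SciPy's matrix square root; the following sketch (the same small chain as in the previous sketch, with $\Re Z_m=0$ chosen for simplicity) verifies that $h$ is Hermitian and inspects its off-tridiagonal part, which \cref{equivHermHam} predicts to vanish:
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

m, gamma, Delta = 2, 0.5, 0.3
t = np.array([1.0, 0.7, 1.0])                  # t_m = t[1] = 0.7
z = np.array([0.2, Delta + 1j*gamma, Delta - 1j*gamma, 0.2])
H = np.diag(z) + np.diag(t, 1) + np.diag(t, -1)

Zm = 1j * gamma                                # Re Z_m is arbitrary; take 0
P_m = np.fliplr(np.eye(m))
M = np.block([[np.eye(m), (np.conj(Zm) / t[1]) * P_m],
              [(Zm / t[1]) * P_m, np.eye(m)]])

Omega = sqrtm(M)                               # positive square root (|Z_m| < t_m)
h = Omega @ H @ np.linalg.inv(Omega)
assert np.allclose(h, h.conj().T)              # h is Dirac-Hermitian
print(np.max(np.abs(np.triu(h, 2))))           # ~0: h stays tridiagonal (Eq. equivHermHam)
\end{verbatim}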
\subsection{\texorpdfstring{$\mathcal{C}$}{C} Symmetry}
Consider a pseudo-Hermitian matrix $H$ with two intertwining operators, $\eta_1$ and $\eta_2$. It is straightforward to show that $\eta_2^{-1} \eta_1$ commutes with $H$~\cite{BiOrthogonal}. Owing to the $\mathcal{PT}$-symmetry and transpose symmetry of $H$ with open boundary conditions, a particular operator which commutes with $H$ is
\begin{align}
\mathcal{C} := \frac{1}{\sqrt{t_m^2 - \gamma^2}} \mathcal{P} M(\i \gamma) = \frac{1}{\sqrt{t_m^2-\gamma^2}}\left( t_m\mathbbm{1}_{2m}+\gamma \sigma_y\otimes\mathcal{P}_m\right),
\label{generalC}
\end{align}
In the domain where $H$ is $\mathcal{PT}$-unbroken and diagonalizable, the symmetry $\mathcal{C}$ is a Hermitian involution $\mathcal{C}^2 = \mathbb{1}$ which commutes with $\mathcal{PT}$. Due to the $\mathcal{C}$ symmetry and non-degeneracy of $H$ \cite{elliott1953characteristic}, the eigenvectors of $H$ are elements of the eigenspaces of $\mathcal{C}$,
\begin{align}
V_{\pm} = \text{span} \left\{t_m e_k +(-\i\gamma\pm \sqrt{t^2_m - \gamma^2} ) e_{\bar{k}} \,|\, k \in \{1, \dots, m \}\right\}.
\end{align}
The coalescence of $V_+$ and $V_-$ as one approaches $\gamma =t_m$ is a signature that this is an exceptional point.
\subsection{Complexity of spectrum}
The central result of this section is that if $\gamma > t_m$, every eigenvalue has a nonzero imaginary part. This generalizes the result of \cite{MyFirstPaper} to the case with finite detuning and site-dependent tunnelling profiles. Suppose a given eigenvalue, $\lambda \in \sigma(H)$, is real, $\lambda \in \mathbbm{R}$. Since the geometric multiplicity of $\lambda$ is one \cite{elliott1953characteristic}, the corresponding eigenstate, $|\psi\rangle=\sum_{k=1}^{2m} \psi_k e_k$, is also an eigenstate of the antilinear operator $\mathcal{PT}$. As a consequence of the eigenvalue equations, without loss of generality, the eigenstate can be taken to be real for all sites on the left half of the lattice, $\psi_k \in \mathbbm{R}\,\forall k \leq m$. By $\mathcal{PT}$ symmetry, there exists a phase $\chi \in [0, 2 \pi)$ such that $\psi_{\bar{k}} e^{i \chi} = \psi_k \,\forall k \leq m$.
With these observations in mind, the eigenvalue equations at the nearest-neighbour defect sites $(m,m+1)$ are equivalent to
\begin{equation}
\begin{pmatrix}
(z_m - \lambda) \psi_m + t_{m-1} \psi_{m-1} & t_m \psi_m \\
t_m \psi_m & (z_{m+1} - \lambda) \psi_m + t_{m-1} \psi_{m-1}
\end{pmatrix}
\begin{pmatrix}
1 \\ e^{i \chi}
\end{pmatrix} =0.
\end{equation}
For this matrix to have a nontrivial kernel, its determinant must vanish. However, if $\gamma > t_m$, the determinant is strictly positive. The contradiction stems from the assumption $\lambda \in \mathbbm{R}$; thus, every eigenvalue has a nonzero imaginary part when $\gamma > t_m$.
\subsection{Degree of \texorpdfstring{$\mathcal{PT}-$}{PT-}Symmetry Breaking}
The reality of the spectrum of $H$ for $\gamma < t_m$ follows from the positive-definite nature of the explicitly constructed intertwining operator \cref{homomorphismMetric} in that domain. When $\gamma = t_m$, the intertwining operator $M$ is no longer positive definite, but is positive \textit{semi}-definite. Consequently, in this section we demonstrate that at $\gamma=t_m$ the spectrum of $H$ is still real, but $H$ is no longer diagonalizable.
\begin{proposition}
\label{prop2}
When $\gamma = |t_m|$, $H$ has exactly $m$ orthogonal eigenvectors corresponding to real eigenvalues with algebraic multiplicity equal to two and geometric multiplicity equal to one.
\end{proposition}
\begin{proof}
First, we prove that $H$ has at most $m$ linearly independent eigenvectors. To achieve this goal, consider the characteristic polynomial of $H$. Denoting $H_i$ as the matrix formed by taking the first $i$ rows and columns of $H$, denoting $p_A$ as the monic characteristic polynomial of a matrix $A$, and applying the linearity property of determinants, we find
\begin{align}
p_H(\lambda) = (\gamma^2-t_m^2) p_{H_{m-1}}^2(\lambda) + \left[\Delta p_{H_{m-1}}(\lambda) + p_{H_m}(\lambda) \right]^2.
\end{align}
When $\gamma=t_m$, $p_H$ is the square of a monic polynomial of degree $m$. Thus, in this case, each eigenvalue of $H$ has an algebraic multiplicity of at least two. Since the geometric multiplicity of every eigenvalue of $H$ equals one \cite{elliott1953characteristic}, there are at most $m$ linearly independent eigenvectors.
One simple proof that $H$ has at least $m$ eigenvectors when $\gamma^2 =t_m^2$ follows from applying theorem 1 of \cite{Drazin1962} to the positive semi-definite intertwiner $M$. We provide an alternative proof here. Consider the action of $H$ on $\ker M$. An orthonormal basis of $\ker M$ is
\begin{align}
\ker M &= \text{span} \{\tilde{e}_j\,|\,j \in \{1, \dots, m\} \} \\
\tilde{e}_j &= \frac{\i \gamma}{\sqrt{2} t_m} e_j + \frac{1}{\sqrt{2}} e_{\bar{j}}.
\end{align}
Then
\begin{align}
H \tilde{e}_j &= \begin{cases}
t_{1} \tilde{e}_2 & \text{ if } j = 1\\
t_{j-1} \tilde{e}_{j-1} + t_{j} \tilde{e}_{j+1} & \text{ if } j \in \{2, \dots, m-1\} \\
t_{m-1} \tilde{e}_{m-1} & \text{ if } j = m
\end{cases}. \label{tildeH}
\end{align}
Thus, $\ker M$ is an invariant subspace of $H$. Define $\tilde{H}:\ker M \to \ker M$ by the condition $\tilde{H}(v) = H(v)$ for all $v \in \ker M$. Equation~\eqref{tildeH} implies that $\tilde{H}$ is Hermitian, so it has $m$ orthogonal eigenvectors whose corresponding eigenvalues are real. These are also eigenvectors of $H$, demonstrating that $H$ has at least $m$ eigenvectors corresponding to real eigenvalues. We emphasize that proposition~\ref{prop2} is valid for arbitrary, $\mathcal{PT}$-symmetric tunnelling profiles and finite detuning.
\end{proof}
\subsection{Exact spectra for detuned uniform chain}
In this section, we outline the process by which the exact spectra presented in \cref{table} are obtained for a uniform ($t_j=t>0$) chain with a pair of detuned defects $z_m=z^*_{\bar{m}}$. Generalizing the works of \cite{rutherford1948xxv,losonczi1992eigenvalues,Joglekar2010}, the eigensystem in this case was computed in \cite{ortega2019mathcal}. For our purposes, we need the (rescaled) characteristic polynomial,
\begin{align}
P_{n,m}(x;z') :&= \text{det}(2x I - H/t) \nonumber\\
&= U_n\left(x\right) - (z'_m + z'_{\bar{m}}) U_{n-m}\left(x\right) U_{m-1}\left(x\right) \nonumber \\
&+ z'_m z'_{\bar{m}} U_{n-2m}\left(x\right) U_{m-1}\left(x\right)^2 = 0 \label{charPolySpecific},
\end{align}
where $x=\lambda/2t$ is the dimensionless eigenvalue of $H$, the scaled defect strength is denoted by $z'_m=z_m/t$, and the Chebyshev polynomial of the second kind is denoted by $U_n(x)=\sin\left[(n+1) \arccos x\right]/\sin (\arccos x)$. The special cases with closed-form spectrum in \cref{table} can be derived by simplifying \cref{charPolySpecific} with the identity
\begin{align}
U_{2m}(x) &= U^2_{m}(x) - U^2_{m-1}(x).
\end{align}
If the polynomials $U_n(x), U_{n-m}(x), $ and $U_{n-2m}(x)$ share a common zero, then corresponding to this zero is an eigenvalue of $H$ which is independent of the complex defect strength $z_m, z_{\bar{m}}$. Similarly, in the zero-detuning case, $z_m = - z_{\bar{m}}$, $H$ has an eigenvalue independent of $z_m$ if $U_n(x)$ and $U_{n-2m}(x)$ share a common zero. It also follows that this eigenvalue is real since $\sigma(H)$ is real in the Hermitian limit. Thus, a subset of the spectrum is
\begin{align}
g := \begin{cases}
\gcd(2 m,n+1) & \text{ if } z_m = -z_{\bar{m}}\\
\gcd(m,n+1) & \text{ otherwise}
\end{cases} \\
\sigma(H) \supseteq \left\{ 2t \cos \left(\pi \frac{k}{g}\right): k \in \{1, \dots g-1\} \right\}.
\end{align}
We also point out that $x=0$ is a solution of the characteristic polynomial, independent of $z_m$, whenever $n$ is odd and $\Delta=0$. It represents the zero-energy state that is decoupled from the defect potentials due to its vanishing weights on the defect sites.
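These defect-independent eigenvalues are easily confirmed numerically. For example, with $n=5$ and $m=2$ one has $g=\gcd(2,6)=2$, so $\lambda=0$ remains an eigenvalue for any defect strength, as the following sketch checks:
\begin{verbatim}
import numpy as np

# n = 5, m = 2: g = gcd(2, 6) = 2, so lambda = 2t*cos(pi/2) = 0 stays an
# eigenvalue no matter how strong (or complex) the defect z_m is.
n, m, t = 5, 2, 1.0
for zval in (0.0, 0.7 + 0.3j, 5.0 - 2.0j):
    z = np.zeros(n, dtype=complex)
    z[m - 1], z[n - m] = zval, np.conj(zval)   # defects at sites m and n+1-m
    ts = t * np.ones(n - 1)
    H = np.diag(z) + np.diag(ts, 1) + np.diag(ts, -1)
    print(zval, np.min(np.abs(np.linalg.eigvals(H))))   # ~0 each time
\end{verbatim}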
\begin{figure}
\centering
\includegraphics[width = 80mm]{plot522.pdf}
\caption{Exceptional points (blue) of $H$ with $t_i = t$, $t_L = t_R = 0$, $(n,m) = (5,2)$ as a function of the defect strength $\gamma/t$ and $\Delta/t$. At small $\gamma/t$, all five eigenvalues are real. The points $z_m=0+i\gamma= i(\sqrt{3} \pm 1)t$ are a crunode $(-)$ and an acnode $(+)$, respectively, with eigenvalues $\sigma(H) = \{0, -(\pm 1)^{1/2}3^{1/4}, (\pm 1)^{1/2} 3^{1/4}\}$. In addition to the zero eigenvalue, the remaining eigenvalues at $z_m = i(\sqrt{3} \pm 1) t$ have algebraic multiplicity 2 and geometric multiplicity 1.}
\label{fig:exactnm}
\end{figure}
Figure~\ref{fig:exactnm} shows the exceptional points for an $n=5$ site chain with defects at sites $m=2$ and $\bar{m}=4$. Although this is a uniform chain, since it has an odd number of sites, the nearest defects are still two sites apart. Therefore, the exact results presented earlier for nearest-neighbour defects do not apply.
\section{Non-Hermitian SSH Model with farthest defects} \label{SSH}
In this section, we enforce the parametric restrictions $t_k=t_{k'}>0$ for $k'=k\mod 2$ and defects on sites $m=1$ and $\bar{m}=n$. Our key results regard the number of real eigenvalues and the set of exceptional points.
\subsection{Eigensystem Solution} \label{esystemSection}
The characteristic polynomial of the tridiagonal matrix corresponding to the Hermitian SSH model with open boundary conditions~\cite[Remark 5]{Elsner1967} is given by
\begin{align}
D_n(t_1,t_2) :&= (t_1 t_2)^{-\floor{n/2}}\text{det} \begin{pmatrix}
\lambda & -t_1 & 0 & \dots \\
-t_1 & \lambda & -t_2 & \ddots \\
0 & -t_2 & \lambda & -t_1 \\
\vdots & \ddots & -t_1 & \ddots \\
\end{pmatrix} \\
&= \begin{cases}
\lambda U_k(Q) & \text{ if } n = 2k + 1 \\
U_k(Q) + \frac{t_2}{t_1} U_{k-1}(Q) & \text{ if } n = 2k.
\end{cases}
\end{align}
where
\begin{align}
Q := \frac{\lambda^2 -( t_1^2 + t_2^2)}{2 t_1 t_2}.
\end{align}
To generalize this result to a non-Hermitian model ($z_k\neq 0$) with corner elements ($t_L,t_R\neq 0$), we use the linearity property of determinants for rows and columns. Computing the characteristic polynomial for $H(z_1,z_n,t_L,t_R)$ can thus be reduced to the problem of finding $D_n(t_1,t_2)$. The result is summarized in \cref{charPolyTable}. The case for an open chain, $t_L =0= t_R$, was known to \cite{da2007characteristic}, and the case $t_1=t_2$ was known to \cite{YUEH2008}.
\begin{table*}[htp!]
\centering
\begingroup
\setlength{\tabcolsep}{8pt}
\renewcommand{\arraystretch}{2}
\begin{tabular}{|l|l|}
\hline
Constraints & $(t_1 t_2)^{-\floor{n/2}} \text{det} (\lambda I - H)$ \\
\hhline{|=|=|}
$n = 2k $ &
$\begin{array}{l}
U_{k}(Q) + \left(\dfrac{z_1 z_n - t_L t_R}{t_2^2} \right)U_{k-2}(Q) \\
+ \left(\dfrac{t_2^2 - \lambda (z_1 + z_n) + z_1 z_n - t_L t_R}{t_1 t_2} \right) U_{k-1}(Q) - \dfrac{t_L + t_R}{t_2}
\end{array} $ \\
\hline
$n = 2k +1$ &
$\begin{array}{l}
\left(\lambda - z_1 - z_n\right) U_k(Q) -( t_L+ t_R) \\
+ \left(\dfrac{\lambda(z_1 z_n - t_L t_R)- z_1 t_1^2 - z_n t_2^2}{t_1 t_2} \right)U_{k-1}(Q)
\end{array}.$ \\
\hline
\end{tabular}
\endgroup
\caption{Characteristic polynomial of $H(z_1,z_n,t_L,t_R)$ for even and odd SSH chains.}
\label{charPolyTable}
\end{table*}
To find the eigenvector corresponding to a root of \cref{charPolyTable}, we express the eigenvalue equation as a linear recurrence relation,
\begin{align}
t_i\psi_{i+1} = \lambda \psi_i - t_{i-1} \psi_{i-1} &\quad& \forall i \in \{2, \dots n-1\}.
\end{align}
Solving this linear recurrence relation is equivalent to computing the $2 \times 2$ matrix power,
\begin{align}
\begin{pmatrix}
\psi_{2k} \\
\psi_{2k - 1}
\end{pmatrix} &= \begin{pmatrix}
\frac{(\lambda^2-t_2^2)}{t_1 t_2} & -\frac{\lambda}{t_2} \\
\frac{\lambda}{t_2} & - \frac{t_1}{t_2}
\end{pmatrix}^{k-1} \begin{pmatrix}
\psi_2 \\
\psi_1
\end{pmatrix}.
\end{align}
Using the following expression for powers of invertible $2\times 2$ matrix $A$~\cite{Ricci1975},
\begin{align}
A^k &=(\det A)^{k/2}\left[-U_{k-2}(y)\mathbbm{1}_2+ U_{k-1}(y)\frac{A}{\sqrt{\det A}}\right],
\end{align}
where $y=\text{tr} A/2\sqrt{\det A}$ is the dimensionless argument, we arrive at
\begin{align}
\begin{pmatrix}
\psi_{2k} \\
\psi_{2k - 1}
\end{pmatrix} &= \begin{pmatrix}
U_{k-1}(Q) + \frac{t_1}{t_2} U_{k-2}(Q) & -\frac{\lambda}{t_2} U_{k-2}(Q) \\
\frac{\lambda}{t_2} U_{k-2}(Q) & -\left(\frac{t_1}{t_2} U_{k-2}(Q) + U_{k-3}(Q) \right)
\end{pmatrix} \begin{pmatrix}
\psi_2 \\
\psi_1
\end{pmatrix}
\end{align}
These results are valid for $2k\leq n$. In addition, by using the equations that relate $\psi_1,\psi_2,\psi_n$ with tunnelling $t_L$ and $\psi_1,\psi_{n-1},\psi_n$ with tunnelling $t_R$, we get
\begin{align}
\left[z_1 - \lambda + \lambda \frac{t_L}{t_2} U_{m-2}(Q)\right]\psi_1= \left[t_L U_{m-1}(Q) + \frac{t_L t_1}{t_2} U_{m-2}(Q)-t_1 \right] \psi_2,
\end{align}
thereby determining the eigenvector modulo normalization. On the other hand, if the identity holds due to vanishing prefactors of $\psi_1$ and $\psi_2$, then the state corresponding to that $\lambda$ is doubly degenerate.
\subsection{Eigenvalue Inclusion Results}
This section is devoted to finding subsets of the complex plane which contain some or all of the eigenvalues of $H$. As a consequence, we will find a subset of the $\mathcal{PT}-$unbroken and $\mathcal{PT}$-broken domains. A subset of the $\mathcal{PT}$-unbroken domain is found by applying the intermediate value theorem to the characteristic polynomial. To simplify results, we define
\begin{align}
\mu_k=| t_1+ t_2 e^{(2i\pi/n)k}|\geq 0
\end{align}
and denote the intervals with endpoints $(-1)^{s_1} (t_1 + (-1)^{s_2} t_2)$ and $(-1)^{s_1} \text{sgn} (t_1 + (-1)^{s_2} t_2) \mu_1$ with $s_1, s_2 \in \{0,1\}$ as $I( (-1)^{s_1}, (-1)^{s_2} )$.
\begin{proposition} \label{inclusionTheorem}
Consider an even chain with $n = 2k$ and assume $t_L = - t_R$. This realization includes an open chain ($t_L=0=t_R$), a closed chain with purely imaginary, Hermitian coupling ($t_L=i|t|=-t_R$), and a closed, non-Hermitian chain ($t_L=-t_R\in\mathbb{R}$). If $t_2^2=z_1 z_n - t_L t_R$, then
\begin{align}
\sigma(H) = \left\{ \pm \mu_j\,|\,1\leq j \leq k-1 \right \}\cup\left\{\frac{z_1 + z_n}{2}\pm \sqrt{t_1^2 - t_2^2 + \left(\frac{z_1 + z_n}{2} \right)^2}\,\right\}. \label{exactSpectrum}
\end{align}
If $t_2^2 \neq z_1 z_n - t_L t_R$, then the intervals $(\mu_{j+1}, \mu_j)$ and $(-\mu_j, -\mu_{j+1})$ each contain one eigenvalue of $H$ for $1\leq j\leq (k-1)$. Constraining other parameters as specified below guarantees the existence of additional real eigenvalues in corresponding intervals,
\begin{align}
1 + k \left(1 \pm_2 \frac{t_2}{t_1} \right)\frac{\left(\Delta \mp_1 t_2\right)^2 + \gamma^2 -t_L t_R}{t_2^2 - \Delta^2 - \gamma^2 + t_L t_R}\geq 0 \, &\Rightarrow \, \sigma(H) \cap I(\pm_1 1,\pm_2 1) \neq \emptyset \label{ineq}.
\end{align}
\end{proposition}
\begin{proof}
We utilize an alternative expression to the characteristic polynomial. Using the identity
\begin{align}
U_{n\pm 1}(x) &= x U_n(x) \pm T_{n + 1}(x),
\end{align}
where $T_n(x)$ is the Chebyshev polynomial of the first kind, $T_n(x) = \cos (n \arccos x)$, we can rewrite the characteristic polynomial, with $n=2k$, as
\begin{align}
(t_1 t_2)^{-k} \text{det} (\lambda I - H) &= T_k(Q) \left(1- \frac{z_1 z_n - t_L t_R}{t_2^2} \right) - \frac{t_L + t_R}{t_2} \nonumber \\
&+ \left(1 + \frac{z_1 z_n - t_L t_R}{t_2^2}\right)Q U_{k-1}(Q) \nonumber \\
&+ \left(\frac{t_2^2 - \lambda (z_1 + z_n) + z_1 z_n - t_L t_R}{t_1 t_2} \right) U_{k-1}(Q)
\label{altPoly}
\end{align}
Equation~\eqref{exactSpectrum} follows from the observation that the first term of \cref{altPoly} vanishes when $z_1 z_n - t_L t_R = t_2^2$ and the second one vanishes when $t_R=-t_L$.
Next, consider the case $z_1 z_n - t_L t_R \neq t_2^2$, and $n\geq 4$. The sign of the characteristic polynomial is different at the endpoints of the intervals $(\mu_j, \mu_{j+1})$ and $(-\mu_{j+1},-\mu_j)$ for all $j \in \{ 1, \dots, k-2 \}$, so there exists a real eigenvalue of $H$ inside each of these intervals. The inequalities of \cref{ineq} follow from considering the sign of the characteristic polynomial at the endpoints of the intervals $I(\pm_1 1, \pm_2 1)$.
\end{proof}
The inequalities of \cref{ineq} when $t_1^2=t_2^2$ were known to \cite[eq. (63-64)]{Willms2008} while special cases of \cref{exactSpectrum} were presented in \cite{rutherford1948xxv,Willms2008,Korff2008,guo2016solutions}. Two of the inequalities \cref{ineq} are satisfied if $t_2^2 \geq z_1z_n-t_Lt_R=\Delta^2 + \gamma^2-t_L t_R$, and all four inequalities are satisfied if $t_1 \geq t_2$. Thus, $\sigma(H) \subset \mathbb{R}$ when $\Delta^2 + \gamma^2 - t_L t_R \leq t_2^2 \leq t_1^2$.
Now we focus on the complex part of the spectrum. A subset of the $\mathcal{PT}$-broken domain is found as an application of the Brauer-Ostrowski ovals theorem \cite{Brauer1947,Ostrowski1937,Varga2004}. Let $C(w_1,w_2; b) = \{w \in \mathbb{C} \,|\, |w - w_1| \cdot |w-w_2| \leq b\}$ denote a Cassini oval. By the Brauer-Ostrowski ovals theorem, all eigenvalues of $H$ are elements of the union of Cassini ovals, specifically
\begin{align}
\sigma(H) \subseteq &C(0,0;(t_1+t_2)^2) \cup C(0,z_n; (t_1+t_2)(t_1+|t_R|)) \cup \nonumber \\
&C(z_1, 0; (t_1+t_2)(t_1+|t_L|)) \cup C(z_1,z_n;(t_1+|t_L|)(t_1+|t_R|)). \label{Cassini}
\end{align}
Since eigenvalues are continuous in the arguments of a continuous matrix function \cite{Kato1995}, if the union of the Cassini ovals in \cref{Cassini} contains disjoint components, then each component contains at least one eigenvalue of $H$. In particular, if both of the inequalities
\begin{align}
\left[|z_1|^2-(t_1+t_2)^2\right] \gamma &> |z_1| (t_1+|t_L|)(t_1+|t_R|), \label{unbrokenIneq1} \\
2(t_1+t_2) &< \left(|z_1| + \sqrt{|z_1|^2 - 4t_1^2 - 4 \min\{|t_L|^2, |t_R|^2\} }\right)\label{unbrokenIneq2}
\end{align}
hold, then there exist disjoint components containing the points $z_1$ and $z_n=z_1^*$, implying the existence of at least two eigenvalues with nonzero imaginary parts.
\subsection{Topological Phases of the even SSH chain with open boundary conditions}
The eigenvectors of tight-binding models are characterized as either \textit{bulk} or \textit{edge} states based on how their inverse participation ratio scales with the chain size $n$~\cite{JoglekarSaxena,Joglekar2011z}. Roughly, the bulk eigenstates are spread over most of the chain irrespective of the chain size, whereas edge states remain exponentially localized within a few sites even with increasing chain size. Observing
\begin{align}
U_n(Q) = \frac{(Q + \sqrt{Q^2 - 1})^{n+1} - (Q - \sqrt{Q^2 - 1})^{n+1}}{2 \sqrt{Q^2 - 1}},
\end{align}
we see that the sequence of Chebyshev polynomials $U_n(Q)$ is oscillatory in $n$ for $|Q|\leq 1$ and scales exponentially with $n$ otherwise. Thus, the existence of a non-trivial topological phase with edge-localized states is equivalent to the existence of eigenvalues which do not satisfy $|Q|\leq 1$. Therefore, for this particular model, in the thermodynamic limit, the $\mathcal{PT}$-broken phase is equivalent to the topologically nontrivial phase, as complex eigenvalues correspond to edge states.
\subsection{Exceptional Points for the Critical SSH Chain}
This section will locate the exceptional points of a uniform chain with defect potentials at the edges of an open lattice, so $t_1 = t = t_2$. Given that the spectrum is exactly solvable in the case $z_1 z_n = t^2$, we will assume $z_1 z_n \neq t^2$ for this section.
The theory of resultants~\cite{Gelfand1994}, applied to the characteristic polynomial $P(\lambda):=\text{det}(\lambda I - H)$, can be used to analytically determine the exceptional points of $H$. The resultant of two monic polynomials $f(x)$ and $g(x)$ with degrees $F$ and $G$ respectively can be defined as
\begin{align}
\text{Res}_x(f, g) &:= \prod^{F}_{i=1} g(x_i),
\end{align}
where $\{x_i\}$ denotes the full set of roots of $f(x)$. The resultant vanishes if and only if its inputs share one or more roots \cite{Gelfand1994}. In particular, the Hamiltonian $H$ has $k$ degenerate eigenvalues if and only if
\begin{align}
\text{Res}_\lambda \left(P(\lambda), \pdv[i]{P(\lambda)}{\lambda}\right) &= 0 \,\,\, \forall i \in \{1, \dots, k\} \label{algCurve1}\\
\text{Res}_\lambda \left(P(\lambda), \pdv[k+1]{P(\lambda)}{\lambda}\right) &\neq 0. \label{algCurve2}
\end{align}
For the generic Hamiltonian, \cref{algCurve1,algCurve2} are not satisfied for all parameters. Thus, in the generic case where only two (but not more) eigenvalues become degenerate, finding the EPs reduces to locating Hamiltonian parameters and an eigenvalue $\lambda_0$ such that
\begin{align}
P(\lambda_0) &= 0=\pdv{P(\lambda)}{\lambda}\Bigr|_{\lambda=\lambda_0}.
\end{align}
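For the smallest chain, $n=2$, this condition can be evaluated symbolically; the following SymPy sketch recovers the threshold $\gamma_\text{EP}=t$, independent of $\Delta$, for $n=2$:
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda')
t, Delta, gamma = sp.symbols('t Delta gamma', real=True)
# Smallest uniform chain with end defects: n = 2, z_1 = Delta + i*gamma = z_2^*.
H = sp.Matrix([[Delta + sp.I * gamma, t],
               [t, Delta - sp.I * gamma]])
P = (lam * sp.eye(2) - H).det()
# An EP requires P and dP/dlambda to share a root, i.e. their resultant vanishes.
res = sp.resultant(sp.expand(P), sp.diff(P, lam), lam)
# The factored result is proportional to gamma**2 - t**2, so the exceptional
# point sits at gamma = t for every value of the detuning Delta.
print(sp.factor(res))
\end{verbatim}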
Computation of the general set of exceptional points reduces to finding the resultants in \cref{algCurve1,algCurve2}, derived from the characteristic polynomial $P_{n,1}$, \cref{charPolySpecific}. The following properties of Chebyshev polynomials are used in subsequent calculations~\cite{olver2010nist,abramowitz1972handbook,Mason1984}
\begin{align}
U_n(x) &= 2x U_{n-1}(x) - U_{n-2}(x), \\
\frac{dU_n}{dx} &=\dfrac{(n+1) U_{n-1}(x)-nxU_n(x)}{1-x^2},\\
U_{n-1}\left(\frac{x+x^{-1}}{2} \right)&= \frac{x^n-x^{-n}}{x-x^{-1}}.
\label{Joukowski}
\end{align}
The resultant of Chebyshev polynomials, calculated in~\cite{Jacobs2011,Louboutin2013}, shows that $\text{Res}\left(U_n, U_m\right)\neq 0$ if $n+1$ and $m+1$ are co-prime and $\text{Res}\left(U_n, U_m\right)= 0$ if $n+1$ and $m+1$ are not co-prime. Due to the denominator $(1-x^2)$ in the derivative of Chebyshev polynomials, \cref{Joukowski}, it is convenient to work with $\text{Res}_x\left[P_{n,1}(x), (1-x^2) dP_{n,1}(x)/dx\right]$ instead. To simplify this resultant, we use the identity $(1-x^2) dP_{n,1}/dx=A P_{n,1}(x) + B U_{n-1}(x) $ where the $H$-dependent prefactors are given by
\begin{align}
A(x)&= n \frac{z'_1 + z'_n - x (1+z'_1 z'_n)}{1-z'_1 z'_n },\label{a}\\
B(x) &= 2 z'_1 z'_n-x(z'_1+z'_n)+ (n+1)(1-z'_1 z'_n)+\frac{n(2 x-z'_1-z'_n)(2x z'_1 z'_n -z'_1-z'_n)}{1-z'_1 z'_n}.\label{b}
\end{align}
We remind the reader that the dimensionless defect strengths $z'_1,z'_n$ in \cref{a,b} are scaled by the uniform tunnelling amplitude $t$. Denoting the two roots of $B(x)$, \cref{b}, as $b_{\pm}$, we obtain the resultant,
\begin{align}
P_{n,1}(1)P_{n,1}(-1)\text{Res}_{x}\left(P_{n,1},\frac{dP_{n,1}}{dx}\right)\propto
P_{n,1}(b_+)P_{n,1}(b_-).
\label{EPSurface}
\end{align}
The resultant of \cref{EPSurface} vanishes if and only if $(z_1, z_n)$ is an exceptional point, with a single eigenvector, as long as the corresponding $H(z_1,z_n)$ is not Hermitian; if $H$ is Hermitian, then a vanishing resultant merely denotes a doubly-degenerate eigenvalue which supports two linearly independent eigenvectors. Note that \cref{EPSurface} does not yield insight for parameters $(z_1,z_n)$ that are tuned such that $P_{n,1}(\pm 1) = 0$ for arbitrary $n$. However, we readily identify $P_{n,1}(\pm 1)=0$ if and only if $(n-1)z'_1z'_n\mp n(z'_1+z'_n)+(n+1)=0$. In this case, exceptional points occur when $\pm 1$ is a double root of $P_n$, which occurs when $z'_1 = {z'_n}^* \in \left\{\frac{2-2n^2 + i\sqrt{3 n^2 - 3}}{(2n-1)(n-1)}, \frac{2n^2-2+ i\sqrt{3 n^2 - 3}}{(2n-1)(n-1)} \right\}$.
This analytical expression also allows us to extract the dependence of the $\mathcal{PT}$-threshold value on the detuning, where $z_1=\Delta+i\gamma=z_n^*$. Using the expansion of the roots $b_\pm$ of the quadratic expression $B(x)=0$, \cref{b}, in the limit $\Delta/t\gg 1$, and applying the method of dominant balance \cite{bender2013advanced} gives
\begin{align}
P_{n,1}(b_+) P_{n,1}(b_-)= \frac{\Delta^2}{n^2 t^2}\left(1-\frac{1}{n}\right)^{n-2}\left[\frac{\gamma^2\Delta^{2(n-2)}}{t^{2n}}-1\right] + O(1,\gamma^4 \Delta^{2n-4}).
\label{rhoInf}
\end{align}
It follows that the EPs determined by the vanishing of \cref{rhoInf} satisfy $\gamma_\text{EP}(\Delta)=t^{n-1}/\Delta^{(n-2)}$ in the limit $\Delta/t\gg 1$. Figure~\ref{PTBreaking} shows the numerically obtained EP contours for this problem in the $(\gamma/t,\Delta/t)$ plane with varying chain sizes $n$. When $n=2$, the $\Delta$ term contributes $\Delta\,\mathbbm{1}_2$ to the Hamiltonian and therefore does not change the threshold $\gamma_\text{EP}(\Delta)=t$.
The zero detuning threshold is \cite{Korff2008,jin2009solutions}
\begin{align}
\gamma_{EP}(0)/t = \begin{cases}
\sqrt{1+1/n} & \text{if } n \text{ is odd} \\
1 & \text{if } n \text{ is even}
\end{cases}.
\end{align}
This exceptional point corresponds to a zero mode with geometric multiplicity one, and algebraic multiplicity which is three in the odd case and two in the even case.
\begin{figure}[htp!]
\centering
\includegraphics[width=\textwidth]{EPContoursOBCnoTail.png}
\caption{Exceptional points (EPs) of a uniform, n-site, tight-binding, open chain, \cref{TriDiag}, with defects $z_1=\Delta+i\gamma=z_n^*$ at its end points. $\mathcal{PT}$-symmetry breaks as one passes from below the EP contour to above. Each contour, an algebraic curve, has $(n-2)$ cusp singularities marked by filled circles. The cusp singularities correspond to third order exceptional points (EP3s) while the rest of the contours are exceptional points of second order (EP2s). Our numerics suggest that the EP contours are one-to-one functions of $\text{arg}(z_1)$. As $n \rightarrow \infty$, the $\mathcal{PT}$-unbroken region approaches the unit disk $|z_1|/t=1$. The curves for $n=3,4,5$ are in \cite{Ruzicka2015}.}
\label{PTBreaking}
\end{figure}
\subsection{Locating EP3s in contours of EP2s}
As a consequence of the Newton-Puiseux theorem, given an $n \times n$ matrix which is a polynomial in one parameter, $\theta$, the eigenvalues, $\lambda_i$ can be expanded as a Puiseux series in $\theta$,
\begin{align}
\lambda_i(\theta)=\lambda_i(\theta_0)+ \sum^\infty_{j = 1} \epsilon_{ij}(\theta_0) (\theta-\theta_0)^{j/k(i)},
\end{align}
where $k(i) \in \mathbb{N}$. To guarantee a real spectrum in a neighbourhood of $\theta_0$, as is the case for a Hermitian matrix, the condition $k(i)=1$ is necessary and sufficient. On the other hand, if $\lambda_i(\theta_0)$ is an EP of order $N$, then $k(i)=N$. The sensitivity of the spectrum to perturbations in $\theta_0$ is quantified by
\begin{align}
\tau(\theta_0) :=\max\{\epsilon_{i1}(\theta_0)|k(i)=\sup(k(1),\cdots,k(n))\}
\end{align}
If $\theta_0$ parametrizes an EP contour which contains EPs with different values of $k$, the corresponding $\tau(\theta_0)$ must diverge as one approaches a point with an increased $k$ value. We now consider perturbations of the eigenvalues of the $\mathcal{PT}$-symmetric case of $H$ for $m = 1$ at the exceptional points, \cref{PTBreaking}. If the tangent to an algebraic curve of EPs is unique and 2-directional, then perturbations along the tangents to the contour have $k = 1$ and result in real eigenvalues. In the orthogonal direction, there exists exactly one pair of eigenvalues which displays a real-to-complex-conjugates transition, so $k=2$. Only at the cusp singularities is $k = 3$ satisfied. To show this, in \cref{tau} we plot the coefficient $\tau(\theta)$ of the square-root term as a function of the angle in the $(\Delta,\gamma)$ plane, i.e., $\theta=\text{arg}(z_1)$, for $\theta\in[0,\pi/2]$. As expected, $\tau(\theta)$ diverges at non-uniformly distributed cusp points.
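The change from square-root to cube-root behaviour can also be seen in a minimal numerical experiment. The Python sketch below uses toy Jordan-block matrices with a known EP2 and EP3 (not the chain Hamiltonian itself) and estimates the Puiseux exponent $1/k$ as the slope of $\log|\delta\lambda|$ versus $\log\theta$:
\begin{verbatim}
import numpy as np

def max_displacement(H0, V, theta):
    # Largest distance from an eigenvalue of H0 + theta*V to the spectrum of H0.
    lam0 = np.linalg.eigvals(H0)
    lam = np.linalg.eigvals(H0 + theta * V)
    return max(np.min(np.abs(l - lam0)) for l in lam)

def puiseux_slope(H0, V, thetas):
    # Fit log|delta lambda| vs log theta; the slope estimates 1/k.
    d = [max_displacement(H0, V, th) for th in thetas]
    return np.polyfit(np.log(thetas), np.log(d), 1)[0]

thetas = np.logspace(-8, -5, 15)

# EP2 toy model: 2x2 Jordan block, splitting ~ theta**(1/2)
H2 = np.array([[0.0, 1.0], [0.0, 0.0]])
V2 = np.array([[0.0, 0.0], [1.0, 0.0]])

# EP3 toy model: 3x3 Jordan block, splitting ~ theta**(1/3)
H3 = np.diag([1.0, 1.0], 1)
V3 = np.zeros((3, 3)); V3[2, 0] = 1.0

print("EP2 slope (expect ~1/2):", puiseux_slope(H2, V2, thetas))
print("EP3 slope (expect ~1/3):", puiseux_slope(H3, V3, thetas))
\end{verbatim}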
\begin{figure}[htp!]
\centering
\includegraphics[width =0.5\textwidth]{evalPertFeb.pdf}
\caption{Coefficient $\tau(\theta)$ of the square-root term in the perturbative expansion of EP2 degenerate eigenvalue as a function of $\theta=\text{arg}(z_1)$. Since $\tau(\theta)=\tau(\pi-\theta)=\tau(-\theta)$, the range is confined to $[0,\pi/2]$. The divergences are signatures of EP3s where the eigenvalue expansion is expressed through cube-roots instead of square-roots.}
\label{tau}
\end{figure}
To explore the generality of our observation, we next consider the SSH Hamiltonian with detuned defects $(z_1,z_n)$ as a function of three dimensionless parameters, $\Delta/t_2$, $\gamma/t_2$, and $t_1/t_2$. In this case, the EPs form a two-dimensional surface, with ridges that correspond to EP3s. At $\Delta=0$, these ridges intersect, giving rise to EP4 cusp singularities.
\begin{figure}[htp!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{surface.png}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width = \textwidth]{ThirdOrderEPs2.png}
\caption{}
\end{subfigure}
\caption{(a) EPs of an open SSH chain with end-defects and $n = 8$. The domain satisfying the inequalities of \cref{ineq} lies in between this EP2 surface and the $\gamma = 0$ plane. Ridges of this surface correspond to EP3s, plotted in \cref{thirdOrderFig}(b). As one passes through this surface along a ray originating at $\gamma = 0$, one moves from the $\mathcal{PT}$-unbroken region to the $\mathcal{PT}$-broken region. The $\mathcal{PT}$-broken region is a subset of the topologically nontrivial phase, marked by the existence of edge states.
(b) Contours of EP3s from \cref{thirdOrderFig}(a) show cusp singularities at $\Delta = 0$ and are fourth order exceptional points (EP4s).}
\label{thirdOrderFig}
\end{figure}
\section{Generic Structure of Exceptional Surfaces}
In this section, we use a perturbative argument to demonstrate that EPs of third order generically occur at cusp singularities of curves of second-order EPs.
Consider a Hamiltonian which is polynomial in $d \geq 2$ complex parameters, $H: \mathbb{C}^d \rightarrow \text{End}(\mathbb{C}^n)$ with a third-order exceptional point, $x_0 \in \mathbb{C}^d$. At this EP3, there exists at least one root, $\lambda_0$, whose algebraic multiplicity increases by 2. Then the characteristic polynomial may be written
\begin{align}
\text{det}(\lambda - H(x)) = \sum_{i=0}^n p_i(x) (\lambda - \lambda_0)^i,
\label{puiseux}
\end{align}
where the polynomials $p_i(x)$ satisfy $p_i(x_0)=0\,\forall i<3$ and $p_3(x_0)\neq 0$. As we perturb $x_0 \rightarrow x_0+\delta x$, the first-order correction $\delta\lambda=\lambda -\lambda_0$ to the eigenvalue $\lambda_0$ is found by substituting the Taylor expansions of $p_i$ in \cref{puiseux}. To simplify future calculations, we will assume $p_0'(x_0) \neq 0$, so the eigenvalue near $\lambda_0$ is approximated by
\begin{align}
p_3(x_0)\delta\lambda^3+ p_2(x) \delta\lambda^2+ p_1(x) \delta\lambda+\left(p_0'(x_0) \cdot \delta x \right) \approx 0.
\label{cubic}
\end{align}
Consider a perturbation along a line in parameter space passing through the exceptional point. Explicitly, let this line be $\{\theta u\,|\, \theta \in \mathbb{R}\}$ for some $u \in \mathbb{C}^d$. Given $\delta x = \theta u$, a Puiseux series expansion for a subset of eigenvalues along this line is
\begin{align}
\delta\lambda(\theta) \approx \begin{cases}
\dfrac{(p_0'(x_0) \cdot u)^{1/3}}{p_3(x_0)} \,\theta^{1/3} &\text{if } p_0'(x_0) \cdot u \neq 0 \\
\dfrac{(p_1'(x_0) \cdot u)^{1/2}}{p_3(x_0)} \,\theta^{1/2} &\text{if } p_0'(x_0) \cdot u = 0 \text{ and } p_1'(x_0) \cdot u \neq 0 \\
\dfrac{(p_2'(x_0) \cdot u)}{p_3(x_0)} \,\theta &\text{if } p_0'(x_0) \cdot u = p_1'(x_0) \cdot u =0 \text{ and } p_2'(x_0) \cdot u \neq 0
\end{cases}.
\end{align}
Notably, the order of the Puiseux series expansion decreases if the line spanned by $u$ is orthogonal to the normal vector of the surface $p_0 = 0$ at $x_0$.
The set of exceptional points near $x = x_0$ is approximated by the discriminant of \cref{cubic},
\begin{align}
p_1^2 p_2^2 - 4 p_0 p_2^3 - 4 p_1^3 p_3 + 18 p_0 p_1 p_2 p_3 - 27 p_0^2 p_3^2 = 0. \label{disc}
\end{align}
The point $\delta x = 0$ is readily interpreted as a \textit{singular point} of the affine algebraic variety of exceptional points approximated by \cref{disc}, since the derivatives of \cref{disc} with respect to each coordinate $\delta x_i$ all vanish. The leading term in \cref{disc} for small $\delta x_i$ is the $p_0^2 p_3^2$ term. Assuming the leading term in the expansion of $p_1^2 p_2^2 - 4 p_0 p_2^3 - 4 p_1^3 p_3 + 18 p_0 p_1 p_2 p_3$ is an odd function of $\theta$ for a perturbation along $\delta x = \theta u$, then the line determined by $p_0(x) \approx p_0'(x_0) \delta x = 0$ is a one-directional tangent to the surface of exceptional points, so we can interpret the point $x = x_0$ as a cusp singularity \cite{hilton1920plane}.
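As a quick symbolic check, the discriminant in \cref{disc} can be reproduced from the cubic \cref{cubic} with a computer algebra system. The following SymPy snippet is an illustration, not part of the derivation:
\begin{verbatim}
import sympy as sp

dlam, p0, p1, p2, p3 = sp.symbols("dlam p0 p1 p2 p3")
cubic = p3 * dlam**3 + p2 * dlam**2 + p1 * dlam + p0

# Discriminant with respect to dlam; matches the displayed expression up to term ordering:
# p1^2 p2^2 - 4 p0 p2^3 - 4 p1^3 p3 + 18 p0 p1 p2 p3 - 27 p0^2 p3^2
print(sp.expand(sp.discriminant(cubic, dlam)))
\end{verbatim}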
\section{Conclusion}
Non-Hermitian, tridiagonal, finite-dimensional matrices with $\mathcal{PT}$-symmetry are particularly amenable to analytical treatment. They also model a vast variety of physically realizable classical and quantum systems with balanced gain and loss. Introducing just one pair of gain-loss defect potentials breaks translational symmetry in such models and makes them non-amenable to traditional, Fourier-space band-structure methods. However, experimentally implementing $O(n)$ balanced gain-loss pairs in an $n$-site chain is exceptionally challenging, if not impossible. Therefore, we have considered models with {\it minimal non-Hermiticity} that leads to $\mathcal{PT}$-symmetry, i.e. one pair of defect potentials at mirror-symmetric sites.
Our results include explicit analytical expressions for various intertwining operators, the construction of an equivalent Hermitian Hamiltonian in the $\mathcal{PT}$-exact phase, and analytical results for the EP contours in a uniform chain with detuned defects at the end points. We have shown that cusp points of contours of EPs correspond to EPs of one higher order. Taken together, these results deepen our understanding of exceptional-point degeneracies in physically realizable models.
\section*{Acknowledgment}
This work was supported, in part, by ONR Grant No. N00014-21-1-2630. Y.N.J. acknowledges the hospitality of Perimeter Institute where this work was finalized.
\printbibliography
\end{document}
|
{
"arxiv_id": "2302.13209",
"language": "en",
"timestamp": "2023-02-28T02:13:08",
"url": "https://arxiv.org/abs/2302.13209",
"yymm": "2302"
} | \section{Introduction}
Speaker Verification (SV) is the task of validating the identity of a speaker using a voice sample from the claimant. The tremendous development in SV technology over the last five decades has enabled such systems to be deployed in various application areas, from voice-based attendance systems to authentication for bank transactions~\cite{bai2021speaker}. However, the performance of these systems suffers when multiple languages and sensors are involved during testing~\cite{khosravani2017plda}. Hence, the scalability of SV systems is limited in such scenarios. The citizens of India use approximately $122$ major and $1599$ other languages in their day-to-day conversation and, importantly, are often polyglot. Therefore, requiring a fixed language and sensor during testing may restrict the reach of SV technologies. With this motivation, the Indian Institute of Technology Guwahati Multi Variability (IITG-MV) data was collected using five different sensors from people from different geographical locations of India, with variations in native language, dialect, and accent~\cite{haris2012multivariability}.
In the literature, there exist a few works on the development of SV in multilingual and domain mismatch scenarios~\cite{khosravani2017plda}. The reported works contribute at the feature, model, and score levels for minimizing the impact of language and domain mismatch~\cite{khosravani2017plda}. Most of the reported works use either an in-house dataset or publicly available data (mostly crawled from the public domain) for performing their studies. The in-house data are limited in the number of speakers, languages, and sensors. Though the publicly available data have a huge number of speakers, languages, and environmental variations, the unavailability of appropriate annotations (mostly done with automatic algorithms) poses a challenge for an in-depth analysis~\cite{khosravani2017plda}. The current challenge was planned with the aim of resolving the above-mentioned issues by inviting the community to work on the development of language- and sensor-invariant speaker representations.
This work considers the conversation recordings of the IITG-MV phase-I dataset. The dataset is divided into four parts, viz. (1) Development, (2) Enrollment, (3) Public, and (4) Private test set. The development set consists of speech utterances from $50$ speakers recorded with all $5$ sensors and in $13$ languages. The enrollment set consists of utterances from the remaining $50$ speakers, spoken in the English language and through a headset microphone. The public test set consists of utterances from the $50$ enrolled speakers in both matched and mismatched sensors and languages. The private test set only consists of cross-lingual and cross-sensor utterances. Along with releasing the dataset, the challenge was offered in the form of two sub-tasks, (1) constrained and (2) unconstrained. The constrained sub-task restricts the participants to use only the provided data. On the other hand, no such restrictions are there in the unconstrained sub-task. The aim of the constrained sub-task here was to encourage the community to develop SV with limited training data. Conversely, the aim of the unconstrained sub-task was to observe the performance of SV technologies developed with a sufficient amount of training data. A baseline system implemented with the X-vector framework for both the constrained and unconstrained sub-tasks was made available to the participants during the challenge (available at \href{https://github.com/jagabandhumishra/I-MSV-Baseline} {\url{https://github.com/jagabandhumishra/I-MSV-Baseline}}). The performance (EER) of the baseline on the public test data for the two sub-tasks was $9.32\%$ and $8.15\%$, respectively.
The rest of the paper is organized as follows: the challenge rules are described in section~\ref{2}. The detailed description of the data preparation is described in section~\ref{3}. Section~\ref{4} reports the procedure of baseline system development and the performance measure used. A brief description of the top five systems along with their performance are described in section~\ref{5}. Finally, the summary and future directions are reported in section~\ref{6}.
\section{Challenge Rules}
\label{2}
As mentioned in the earlier section, the challenge consisted of two sub-tasks, viz. (1) constrained SV and (2) unconstrained SV.
\begin{itemize}
\item \textbf{Constrained SV}: Participants were not allowed to use speech data other than the speech data released as a part of the constrained SV challenge for the development of the SV system.
\item \textbf{Unconstrained SV}: Participants were free to use any publicly available speech data in addition to the audio data released as a part of unconstrained SV.
\end{itemize}
The challenge was organized as a part of the $25^{th}$ edition of the O-COCOSDA-2022 conference along with the Asian-Multilingual Speaker Verification (A-MSV) track. Participants were asked to register. Upon agreeing to the data usage license agreement, the download links for the development, enrollment, and public test sets were provided. Through the license agreement, the participating teams agreed to use the data only for research purposes. Moreover, the teams with the top five systems in each sub-task were required to submit the source code of their systems and a detailed report.
The public test set, released at the time of registration, included ground-truth information. The purpose here was to allow teams to tune their system parameters using the public test data. The participants were asked to upload their score files in a specific format to the challenge portal. The corresponding performance was evaluated by a back-end script and the results were posted to an online leaderboard. There was no constraint on the number of score files uploaded and evaluated on the public test set. Around one month after the public test set release, the private test set was released without ground-truth information. The participating teams were asked to submit their final results on the private test set within $24$ hours of its release. A maximum of three successful attempts was allowed for each team for evaluating their system on the private test set.
\section{Data Preparation}
\label{3}
The IITG-MV speaker recognition dataset was recorded in four phases for various speaker recognition applications, viz. speaker identification, verification, and change detection~\cite{haris2012multivariability}. Among the four phases, the phase-I dataset is considered in this study. The IITG-MV-Phase-I dataset consists of recordings from $100$ speakers in reading and conversation modes. In both modes, each speaker provided speech data in two sessions. The duration of each session is around $5$-$8$ minutes. In addition, each speaker provided data in two languages, viz. English and a favorite language. The favorite language was mostly the speaker's mother tongue/native language and varied from person to person. Furthermore, all the speech utterances were recorded through five different sensors, viz. H01, M01, M02, D01, and T01. The details of the dataset can be found in~\cite{haris2012multivariability}. Only the utterances belonging to the conversation mode are considered here. The total duration of the selected utterances is approximately $100$ hours. The selected utterances are named the I-MSV dataset. Further, the I-MSV dataset is segregated into four parts, viz. development, enrollment, public test, and private test sets.
\subsection{Development set}
This partition consists of recordings from $50$ speakers. The utterances from each speaker are available in two languages, with two sessions, and with five sensors. The approximate duration of the development set is $50$ hours.
\subsection{Enrollment set}
This partition consists of recordings from $50$ speakers that are disjoint from the speakers used in the development set. The utterances belonging to both the sessions with the English language and the Headset (H01) sensor are used here. The first session utterances are completely used in this set. However, the utterances from the second session are segmented into two parts. Half of them are used in enrollment and the rest have been used in the public test set (to observe the performance in matched sensor and language conditions). The approximate duration of speech available for each speaker is $8-10$ minutes.
\subsection{Public test set}
This set consists of utterances from the second session recordings with three sensors and cross-lingual conditions, along with the matched utterances. The second session utterances in the original IITG-MV-Phase-I dataset are segregated into two parts; half of them are reserved for the preparation of the private test set. After that, each utterance is segmented into $10$, $30$, and $60$ second segments, with split points placed in silence regions identified using Voice Activity Detection. The segmented files were made available to the participants as the public test set. The total number of utterances available in this partition is $5907$.
\subsection{Private test set}
This set consists of utterances from the second session recordings with four sensors and cross-lingual conditions. This partition does not contain matched-sensor or matched-language utterances. The selected utterances are segmented into $10$, $30$, and $60$ second segments and made available to the participants as the private test set. The total number of utterances available in this partition is $9521$. The partition contains cross-lingual utterances from $10$ Indian languages.
\begin{table}[!t]
\centering
\caption{Baseline results on I-MSV dataset}
\label{baseline_perf}
\begin{tabular}{
|p{0.2\columnwidth}
|p{0.3\columnwidth}
|p{0.3\columnwidth}
|}
\hline
& \multicolumn{2}{c|}{\textbf{EER} (\%)} \\
\cline{2-3}
\textbf{Model} & \textbf{Overall} & \textbf{Matched Sensor and Language} \\
\hline
I-vector & $13.72$ & $4.61$ \\
\hline
X-vector & $9.32$ & $2.40$ \\
\hline
X-vector (unconstrained) & $8.15$ & $0.82$ \\
\hline
\end{tabular}
\end{table}
\begin{table*}[!t]
\centering
\caption{Summary of top $5$ submissions to the challenge. FE:=\emph{Frontend}, LF:=\emph{Loss Function}, BE:=\emph{Backend}, C-SV:={Constrained-SV}, UC-SV:={Unconstrained-SV}.}
\label{submission_summary}
\begin{tabular}{
|p{0.05\linewidth}
|p{0.2\linewidth}
|p{0.22\linewidth}
|p{0.22\linewidth}
|p{0.08\linewidth}
|p{0.08\linewidth}
|}
\hline
& & & & \multicolumn{2}{c|}{\textbf{EER} (\%)} \\
\cline{5-6}
\textbf{Team} &
\textbf{BE} &
\textbf{LF} &
\textbf{FE} &
\textbf{C-SV} &
\textbf{UC-SV} \\
\hline
team0 & Rawnet3 & Training: triplet margin loss; Fine-tuning: AAM Loss + K-Subcenter loss + Inter-topK loss & Cosine similarity & -- & $0.26$ \\
\hline
team1 & ResNet with SE attention & Softmax + Angular Prototypical Loss & Model scoring (DNN, Random Forest and Gradient Boosting Trees) & -- & $0.36$ \\
\hline
team2 & ECAPA-TDNN + SE-ResNet blocks & Weight Transfer loss + AAM-Softmax loss + L2 loss & Cosine similarity & $2.12$ & $0.63$ \\
\hline
team3 & ECAPA-TDNN SE-ResNet blocks & AAM Loss & Cosine similarity & $2.77$ & $2.70$ \\
\hline
team4 & ECAPA-TDNN + SE-ResNet blocks & Large
Margin Cosine Loss & PLDA & 2.97 & $2.97$ \\
\hline
\end{tabular}
\end{table*}
\section{Performance Measures and Baselines}
\label{4}
This challenge employs the Equal Error Rate (EER) measure to compare the performances of the different submissions with the baseline results. This section briefly describes the method of computing the EER measure and reports the baseline results on the I-MSV dataset. Let $N_{P}$ and $N_{N}$ be the number of positive and negative test samples in the data, respectively. The number of samples out of the $N_{P}$ positive samples that are correctly predicted as positive is termed True Positives (TP). Similarly, the number of samples out of the $N_{N}$ negative samples that are correctly predicted as negative is termed True Negatives (TN). Negative samples incorrectly predicted as positive are termed False Positives (FP), and positive samples incorrectly predicted as negative are termed False Negatives (FN). The prediction of a test sample as positive or negative is based on a pre-determined threshold $\tau$ which may be varied. The total numbers of TP, TN, FP, and FN for the whole test data can be used to compute two measures, viz., the False Acceptance Rate (FAR) and the False Rejection Rate (FRR). The FAR is defined in eq.~\ref{far}.
\begin{equation}
\label{far}
\text{FAR}=\dfrac{FP}{FP+TN}
\end{equation}
\noindent Similarly, the FRR can be defined as in eq.~\ref{frr}.
\begin{equation}
\label{frr}
\text{FRR}=\dfrac{FN}{TP+FN}
\end{equation}
\noindent When $\tau$ is varied, different values of FAR and FRR can be obtained. Among all the different $\tau$ used, a specific threshold $\tau_{equal}$ can be identified which provides equal (or almost equal) values of FAR and FRR. The EER measure is computed as the mean of FAR and FRR at $\tau_{equal}$ (eq.~\ref{eer}).
\begin{equation}
\label{eer}
\text{EER}=\dfrac{1}{2} \left(FAR+FRR\right)
\end{equation}
\noindent where FAR and FRR are evaluated at $\tau_{equal}$, so that $\mid \text{FAR}-\text{FRR}\mid \to 0$.
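For concreteness, a minimal reference implementation of this EER computation (sweeping the threshold over the observed scores; this is a sketch, not the official evaluation script used in the challenge) is:
\begin{verbatim}
import numpy as np

def compute_eer(scores, labels):
    # scores: similarity scores (higher = more target-like); labels: 1 = target, 0 = non-target.
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    n_pos, n_neg = np.sum(labels == 1), np.sum(labels == 0)
    best_gap, eer = np.inf, None
    for tau in np.unique(scores):                       # sweep the threshold
        accept = scores >= tau
        far = np.sum(accept & (labels == 0)) / n_neg    # false acceptance rate
        frr = np.sum(~accept & (labels == 1)) / n_pos   # false rejection rate
        if abs(far - frr) < best_gap:                   # threshold where FAR ~ FRR
            best_gap, eer = abs(far - frr), 0.5 * (far + frr)
    return eer

# Toy usage with synthetic target / non-target score distributions.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1.0, 1.0, 500), rng.normal(-1.0, 1.0, 500)])
labels = np.concatenate([np.ones(500, int), np.zeros(500, int)])
print(f"EER = {100 * compute_eer(scores, labels):.2f}%")
\end{verbatim}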
The challenge organizers provided results on the I-MSV dataset using Kaldi based I-vector and X-vector systems as a baseline for comparison. The baseline performances are reported in Table~\ref{baseline_perf}.
\begin{figure*}[!t]
\centerline{
\includegraphics[width=0.7\linewidth]{Duration_performance}
}
\centerline{
(a)
}
\centerline{
\includegraphics[width=0.7\linewidth]{Language_performance}
}
\centerline{
(b)
}
\centerline{
\includegraphics[width=0.7\linewidth]{Sensor_performance}
}
\centerline{
(c)
}
\caption{Illustrating the effect of (a) different duration, (b) different languages, and (c) different sensors on the performance of submitted systems.}
\label{fig:duration_language_sensor_effect}
\end{figure*}
\section{Systems and Results}
\label{5}
A total of $25$ teams registered for the I-MSV 2022 challenge. Among these, $10$ teams submitted their results for the public test set evaluation. For the private test set evaluation, a total of $6$ teams submitted their results and systems. The best $5$ participating systems are summarised in the following paragraphs. Table~\ref{submission_summary} provides a brief summary of the top $5$ systems.
The submission of \emph{team0} obtained the best EER of $0.26$ on the private test set using unconstrained training data. The best system of \emph{team0} used the Rawnet3 architecture~\cite{jung2022raw} as their front-end system. They initially trained the model with a Triplet Margin loss~\cite{BMVC2016_119}. Subsequently, they fine-tuned their model with a combination of Additive Angular Margin (AAM) K-Subcenter loss~\cite{deng2019arcface} and Inter-TopK loss~\cite{zhao2022multi}. They performed the backend scoring using the cosine-similarity measure and used adaptive score normalization.
The second best EER of $0.36$ using unconstrained data was obtained by \emph{team1}. They used the ResNet-34 architecture proposed in~\cite{heo2020clova} with Attentive Statistics Pooling~\cite{okabe18_interspeech} for their front-end. They trained the model using a combination of vanilla Softmax loss and Angular Prototypical loss~\cite{chung20b_interspeech}. They also proposed a two-layer model scoring system composed of Fully-Connected Feed-Forward layers, Random Forests and Gradient Boosting Trees.
The EER obtained by \emph{team2} in the constrained data scenario was $2.12$. They achieved an EER of $0.63$ using unconstrained training data. They used a combination of ECAPA-TDNN~\cite{desplanques20_interspeech} and ResNet-34~\cite{heo2020clova} with Squeeze-and-Excitation (SE) attention as front-end models to obtain the best results in the constrained data scenario. However, only the ResNet-34-SE network provided the best performance in the unconstrained scenario. For the unconstrained scenario, they fine-tuned the backbone model using a combination of Weight-Transfer loss~\cite{zhang2022npu}, AAM-Softmax loss, and $L_{2}$ loss. The backend scoring was performed using the cosine similarity measure.
\emph{team3} obtained an EER of $2.77$ in the constrained scenario and an EER of $2.70$ in the unconstrained scenario. They used a front-end system similar to that of \emph{team2} and trained it using the AAM loss. They also performed the backend scoring using cosine similarity.
The EER obtained by \emph{team4} in the unconstrained scenario was $2.97$. They also employed a similar front-end architecture as that of \emph{team2} and used the Large Margin Cosine loss for training. They performed the backend scoring using Probabilistic Linear Discriminant Analysis (PLDA)~\cite{jiang12_interspeech}.
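Several of the systems above rely on cosine-similarity backend scoring, which amounts to length-normalizing the embeddings and taking inner products. The hypothetical Python sketch below also includes a plain (non-adaptive) symmetric score normalization against an impostor cohort, as a simplified stand-in for the adaptive normalization used by \emph{team0}; all names and shapes are illustrative.
\begin{verbatim}
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def cosine_scores(enroll, test):
    # enroll: (n_enroll, dim), test: (n_test, dim) embeddings -> (n_enroll, n_test) scores
    return l2norm(enroll) @ l2norm(test).T

def s_norm(raw, enroll, test, cohort):
    # Plain symmetric score normalization against a cohort of impostor embeddings.
    ec = cosine_scores(enroll, cohort)                      # (n_enroll, n_cohort)
    tc = cosine_scores(test, cohort)                        # (n_test, n_cohort)
    z_e = (raw - ec.mean(1, keepdims=True)) / ec.std(1, keepdims=True)
    z_t = (raw - tc.mean(1, keepdims=True).T) / tc.std(1, keepdims=True).T
    return 0.5 * (z_e + z_t)
\end{verbatim}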
\section{Summary and Discussion}
\label{6}
The results obtained by the submitted systems can be summarised along the following broad directions. First, the use of unconstrained training data is hugely beneficial for performing SV in low-resource scenarios like the current challenge. Second, automatic feature learning and end-to-end models can learn highly discriminating features. Third, the choice of loss function for the front-end system has a huge impact on the performance obtained by similar architectures. Fourth, simple backend scoring like cosine similarity might be enough if the learnt speaker embeddings are highly discriminating. Fifth, longer utterances (Fig.~\ref{fig:duration_language_sensor_effect}(a)) are more helpful in identifying the speakers. Sixth, a change in language (Fig.~\ref{fig:duration_language_sensor_effect}(b)) degrades the SV performance. However, it might also be noted that such an observation may be the result of the imbalance in the number of utterances for the different languages in the I-MSV dataset. Seventh, a change in sensor (Fig.~\ref{fig:duration_language_sensor_effect}(c)) has a huge impact on the performance of SV systems. More specifically, SV systems fare poorly when presented with telephone channel recordings. In the future, better SV systems may be developed by taking into consideration the observations made in this challenge.
\section*{Acknowledgments}
The authors would like to acknowledge the Ministry of Electronics and Information Technology (MeitY), Govt. of India, for supporting us through the ``Bhashini: Speech technologies in Indian languages'' project. We are also grateful to K. T. Deepak, Rajib Sharma and team (IIIT Dharwad, Karnataka), S. R. Nirmala, S. S. Chikkamath and team (KLETech, Hubballi, Karnataka), Debadatta Pati, Madhusudan Singh and team (NIT Nagaland, Nagaland), Joyanta Basu, Soma Khan and team (CDAC Kolkata, WB), Akhilesh Kumar Dubey, Govind Menon and team (KLU Vijayawada, AP), Gayadhar Pradhan, Jyoti Prakash Singh and team (NIT Patna, Bihar), and S. R. M. Prasanna, Gayathri A. and team (IIT Dharwad, Karnataka) for their help and cooperation in successfully organizing this challenge.
|
{
"arxiv_id": "2302.13271",
"language": "en",
"timestamp": "2023-03-02T02:00:31",
"url": "https://arxiv.org/abs/2302.13271",
"yymm": "2302"
} | \section{Introduction\label{sec:Introduction}}
Accurate estimation of parameters for a given model is central to modern scientific discovery. It is particularly important in the modeling of biological systems, which can involve both first principles-based and phenomenological models and for which measurement errors can be substantial, often in excess of 20\%. The dominant methodologies for parameter inference are either not capable of handling realistic errors or are computationally costly, relying on forward solvers or Markov chain Monte Carlo methods. In this work, we propose an accurate, robust, and efficient weak form-based approach to parameter inference. We demonstrate that our ``Weak form Estimation of Nonlinear Dynamics'' (WENDy) method offers many advantages, including high accuracy, robustness to substantial noise, and computational efficiency that is often several orders of magnitude better than existing methods.
In the remainder of this section, we provide an overview of modern parameter estimation methods in ODE systems, as well as a discussion of the literature that led to the WENDy idea. Section \ref{sec:WENDy} contains the core weak-form estimation ideas as well as the WENDy algorithm itself. In Section \ref{sec:WfOLS}, we introduce the idea of weak-form parameter estimation, including a simple algorithm to illustrate the idea. In Section \ref{sec:WENDy_IRLS}, we describe the WENDy method in detail.
We describe the Errors-In-Variables (EiV) framework and derive a Taylor expansion of the residual which allows us to formulate the Iteratively Reweighted Least Squares (IRLS) approach to inference. The EiV and IRLS modifications are important as they offer significant improvements over the Ordinary Least Squares approach. In Section \ref{sec:TestFunction}, we present a strategy for computing an orthogonal set of test functions that facilitates a successful weak-form implementation. In Section \ref{sec:Illustrating-Examples} we illustrate the performance of WENDy using five common mathematical models from the biological sciences, and in Section \ref{sec:ConcDisc} we offer some concluding remarks.
\subsection{Background}
A ubiquitous version of the parameter estimation problem in the biological sciences is
\begin{equation}
\widehat{\mathbf{w}}:={\arg \min_{\mathbf{w}\in \mathbb{R}^{J}}} \|u(\mathbf{t};\mathbf{w})-\mathbf{U}\|_{2}^{2},\label{eq:ParEstProblem}
\end{equation}
where the function $u:\mathbb{R}\to\mathbb{R}^d$ is a solution to a differential equation model\footnote{While we restrict ourselves to deterministic differential equations, there is nothing in the WENDy approach that inhibits extension to discrete or stochastic models.}
\begin{equation}
\begin{array}{rl}
\dot{u}&=\sum_{j=1}^{J}w_j f_j(u),\label{eq:UPE_DE}\\
u(t_0)&=u_0\in\mathbb{R}^{d},
\end{array}
\end{equation}
The ODE system in equation~\ref{eq:UPE_DE} is parameterized by $\mathbf{w}\in \mathbb{R}^{J}$, the vector of $J$ true parameters which are to be estimated by $\widehat{\mathbf{w}}$.
The solution to the equation is then compared (in a least squares sense) with data $\mathbf{U}\in\mathbb{R}^{(M+1)\times d}$ that is sampled at $M+1$ timepoints $t:=\{t_i\}_{i=0}^{M}$. We note that in this work, we will restrict the differential equations to those with right sides that are linear combinations of the $f_j$ functions with coefficients $w_j$, as in equation \eqref{eq:UPE_DE}.
Conventionally, the standard approach for parameter estimation methodologies has been forward solver-based nonlinear least squares (FSNLS). In that framework, 1) a candidate parameter vector is proposed, 2) the resulting equation is numerically solved on a computer, 3) the output is compared (via least squares) to data, and 4) then this process is repeated until a convergence criteria is met. This is a mature field and we direct the interested reader to references by Ljung \cite{Ljung1999,Ljung2017WileyEncyclopediaofElectricalandElectronicsEngineering} and, for those interested in a more theoretical perspective, to the monograph by Banks and Kunisch \cite{BanksKunisch1989}.
The FSNLS methodology is very well understood and its use is ubiquitous in the biological, medical, and bioengineering sciences. However, as models get larger and more realism is demanded of them, there remain several important challenges that do not have fully satisfying answers. For example, the accuracy of the solver can have a huge impact on parameter estimates; see \cite{NardiniBortz2019InverseProbl} for an illustration with PDE models and \cite{Bortz2006JCritCare} for an example with ODE and DDE models. There is no widespread convention on detection of this type of error and the conventional strategy would be to simply increase the solution accuracy (usually at significant computational cost) until the estimate stabilizes. Perhaps more importantly, the choice of the initial candidate parameter vector can have a huge impact upon the final estimate, given that nonlinear least squares cost functions frequently have multiple local minima in differential equations applications. There are several algorithms designed to deal with the multi-modality, such as particle swarm optimization \cite{BonyadiMichalewicz2017EvolComput} and simulated annealing \cite{vanLaarhovenAarts1987}; however, all come at the cost of additional forward solves and unclear dependence on the hyperparameters used in the solver and optimization algorithms.
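For concreteness, the following Python sketch illustrates a bare-bones FSNLS loop on a logistic growth model $\dot{u}=w_1 u + w_2 u^2$; the model, noise level, and initial guess are illustrative choices, not taken from the examples in Section \ref{sec:Illustrating-Examples}. Note that every evaluation of the residual triggers a full forward solve.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rhs(t, u, w):
    # logistic growth: u' = w1*u + w2*u^2
    return w[0] * u + w[1] * u**2

def forward_solve(w, t, u0):
    sol = solve_ivp(rhs, (t[0], t[-1]), [u0], t_eval=t, args=(w,), rtol=1e-8)
    return sol.y[0]

# Synthetic data from the true parameters w* = (1, -1), u(0) = 0.1, with additive noise.
t = np.linspace(0, 10, 101)
w_true, u0 = np.array([1.0, -1.0]), 0.1
U = forward_solve(w_true, t, u0) + 0.1 * np.random.default_rng(1).standard_normal(t.size)

# FSNLS: every evaluation of the residual requires a forward solve.
residual = lambda w: forward_solve(w, t, u0) - U
fit = least_squares(residual, x0=np.array([0.5, -0.5]))
print("FSNLS estimate of (w1, w2):", fit.x)
\end{verbatim}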
Given the above, it is reasonable to consider alternatives to fitting via comparing an approximate model solution with the measured data. A natural idea would be to avoid performing forward solves altogether via substituting the data directly into the model equation \eqref{eq:UPE_DE}. The derivative could be approximated via differentiating a projection of the data onto, e.g., orthogonal polynomials, and the parameters could then be estimated by minimizing the norm of the residual of the equation \eqref{eq:UPE_DE} -- i.e., via a gradient matching criteria. Indeed, Richard Bellman proposed exactly this strategy in 1969 \cite{Bellman1969MathematicalBiosciences}. There have been similar ideas in the literature of chemical and aerospace engineering, which can be traced back even further \cite{PerdreauvilleGoodson1966JBasicEng, Greenberg1951NACATN2340}. However, these methods are known to perform poorly in the presence of even modest noise.
To account for the noise in the measurements while estimating the parameters (and in some cases the state trajectories), researchers have proposed a variety of different non-solver-based methods. The most popular modern approaches involve denoising the measured state via Gaussian Processes \cite{YangWongKou2021ProcNatlAcadSciUSA,Martina-PerezSimpsonBaker2021ProcRSocA,WangZhou2021IntJUncertaintyQuantification,WenkAbbatiOsborneEtAl2020AAAI,CalderheadGirolamiLawrence2008AdvNeuralInfProcessSyst} and collocations projecting onto a polynomial or spline basis \cite{Varah1982SIAMJSciandStatComput, RamsayHookerCampbellEtAl2007JRStatSocSerBStatMethodol,LiangWu2008JournaloftheAmericanStatisticalAssociation,PoytonVarziriMcAuleyEtAl2006ComputersChemicalEngineering, Brunel2008ElectronJStat,ZhangNanshanCao2022StatComput}. For example, Yang et al. \cite{YangWongKou2021ProcNatlAcadSciUSA}, restricted a Gaussian Process to the manifold of solutions to an ODE to infer both the parameters and the state using a Hamiltonian Markov chain Monte Carlo method. Ramsey et al. \cite{RamsayHookerCampbellEtAl2007JRStatSocSerBStatMethodol} proposed a collocation-type method in which the solution is projected onto a spline basis. In a two-step procedure, both the basis weights and the unknown parameters are iteratively estimated. The minimization identifies the states and the parameters by penalizing poor faithfulness to the model equation (i.e., gradient matching) and deviations too far from the measured data.
Liang and Wu \cite{LiangWu2008JournaloftheAmericanStatisticalAssociation} proposed a similar strategy based on local polynomial smoothing to first estimate the state and its derivative, compute derivatives of the smoothed solution, and then estimate the parameters. Ding and Wu later improved upon this work in \cite{DingWu2014StatSin} by using local polynomial regression instead of the pseudo-least squares estimator used in \cite{LiangWu2008JournaloftheAmericanStatisticalAssociation}.
There are also a few approaches which focus on transforming the equations with operators that allow efficiently solving for the parameters.
In particular Xu and Khanmohamadi created smoothing and derivative smoothing operators based on Fourier theory \cite{XuKhanmohamadi2008Chaos} and Chebyshev operators \cite{KhanmohamadiXu2009Chaos}. However, they have not proven to be as influential as the integral and weak form methods described in the next subsection.
\subsection{Integral and Weak Form Methods}
Recent efforts by our group and others suggest that there is a considerable advantage in parameter estimation performance to be gained from using an integral-based transform of the model equations. The two main approaches are to 1) use integral forms of the model equation or 2) convolve the equation with a compactly supported test function to obtain the so-called \emph{weak form} of the equation. The analysis of the weak form of a model was pioneered by Lax and Milgram in the 1950's for relaxing smoothness requirements on unique solutions to parabolic systems in Hilbert spaces \cite{LaxMilgram1955ContributionstotheTheoryofPartialDifferentialEquations}. Since then, the weak form has been heavily used in studying solutions to PDEs as well as numerically solving for the solutions (e.g., the Finite Element Method), but not with the goal of directly estimating parameters.
The idea of weak-form based estimation has been repeatedly discovered over the years (see \cite{PreisigRippin1993ComputChemEng} for a good historical overview). Briefly, in 1954, Shinbrot created a proto-weak-form parameter inference method, called the Equations Of Motion (EOM) method \cite{Shinbrot1954NACATN3288}. In it, he proposes to multiply the model equations by so-called method functions, i.e., what we would now call test functions. These test functions were based on $\sin^n(\nu t)$ for different values of $\nu$ and $n$. In 1965, Loeb and Cahen \cite{LoebCahen1965Automatisme, LoebCahen1965IEEETransAutomControl} independently discovered the same method, calling it the Modulating Function (MF) method. They proposed and advocated for the use of polynomial test functions. The issue with these approaches (and indeed all subsequent developments based on these methods) is that the maximum power $n$ is chosen to exactly match the number of derivatives needed to perform integration by parts (IBP). As we have shown, this choice means that these methods are not nearly as effective as they could be. As we initially reported in \cite{MessengerBortz2021MultiscaleModelSimul}, a critical step in obtaining robust and accurate parameter estimation is to use \emph{highly} smooth test functions, e.g., to have $n$ be substantially higher than the minimum needed by the IBP. This insight led to our use of the $C^{\infty}$ bump functions in WENDy (see Section \ref{sec:TestFunction}).
In the statistics literature, there are several examples of using integral or weak-form equations. Dattner et al. \cite{DattnerMillerPetrenkoEtAl2017JRSocInterface} illustrate an integral-based approach and Dattner's 2021 review \cite{Dattner2021WIREsCompStat}
provides a good overview of other efforts to use the integral form for parameter estimation. Concerning the weak form, several researchers have used it as a core part of their estimation methods \cite{BrunelClairondAlche-Buc2014JAmStatAssoc,Sangalli2021InternationalStatisticalReview}. Unlike WENDy, however, either these approaches smooth the data before substitution into the model equation (which can lead to poor performance) or still require forward solves. As with the EOM and MF method above, the test functions in these methods were also chosen with insufficient smoothness to yield the highly robust parameter estimates we obtain with WENDy.
As the field of SINDy-based equation learning \cite{BruntonProctorKutz2016ProcNatlAcadSci} is built upon direct parameter estimation methods, there are also several relevant contributions from this literature. Schaeffer and McCalla \cite{SchaefferMcCalla2017PhysRevE} showed that parameter estimation and learning an integral form of equations can be done in the presence of significant noise. Broadly speaking, however, the consensus has emerged that the weak form is more effective than a straightforward integral representation. In particular, several groups (including ours) independently proposed weak form-based approaches \cite{PantazisTsamardinos2019Bioinformatics,GurevichReinboldGrigoriev2019Chaos,MessengerBortz2021MultiscaleModelSimul,PantazisTsamardinos2019Bioinformatics,WangHuanGarikipati2019ComputMethodsApplMechEng, MessengerBortz2021JComputPhys}. The weak form is now even implemented in the PySINDy code \cite{KaptanogludeSilvaFaselEtAl2022JOSS} which is actively developed by the authors of the original SINDy papers \cite{BruntonProctorKutz2016ProcNatlAcadSci,RudyBruntonProctorEtAl2017SciAdv}. However, we do note that the Weak SINDy in PySINDy is based on an early weak form implementation (proposed in \cite{GurevichReinboldGrigoriev2019Chaos,ReinboldGurevichGrigoriev2020PhysRevE}). A more recent implementation with autotuned hyperparameters can be found at \url{https://github.com/MathBioCU/WSINDy_ODE} for ODEs \cite{MessengerBortz2021MultiscaleModelSimul} and \url{https://github.com/MathBioCU/WSINDy_PDE} for PDEs \cite{MessengerBortz2021JComputPhys}.
While our group wasn't the first to propose a weak form methodology, we have pioneered its use for equation learning in a wide range of model structures and applications including: ODEs \cite{MessengerBortz2021MultiscaleModelSimul}, PDEs \cite{MessengerBortz2021JComputPhys}, interacting particle systems of the first \cite{MessengerBortz2022PhysicaD} and second \cite{MessengerWheelerLiuEtAl2022JRSocInterface} order, and online streaming \cite{MessengerDallAneseBortz2022ProcThirdMathSciMachLearnConf}. We have also studied and advanced the computational method itself. Among other contributions, we were the first to automate (with mathematical justification) test function hyperparameter specification, feature matrix rescaling (to ensure stable computations), and
to filter high frequency noise \cite{MessengerBortz2021JComputPhys}. Lastly we have also studied the theoretical convergence properties for WSINDy in the continuum data limit
\cite{MessengerBortz2022arXiv221116000}. Among the results are a description of a broad class of models for which the asymptotic limit of continuum data can overcome \emph{any} noise level to yield both an accurately learned equation and a correct parameter estimate (see \cite{MessengerBortz2022arXiv221116000}
for more information).
\section{Weak form Estimation of Nonlinear Dynamics (WENDy)\label{sec:WENDy}}
In this work, we assume that the exact form of a differential equation-based mathematical
model is known, but that the precise values of constituent parameters are
to be estimated using existing data. As the model equation is not being learned,
this is different than the WSINDy methodology and, importantly, does not use
sparse regression used in WSINDy. We thus denote the method presented in this paper as the Weak-form Estimation
of Nonlinear Dynamics (WENDy) method.
In Section \ref{sec:WfOLS}, we start with an introduction to the idea of weak-form parameter estimation in a simple OLS setting. In Section \ref{sec:WENDy_IRLS} we describe the WENDy algorithm in detail, along with several strategies for improving the accuracy: in Section \ref{sec:TestFunction} we describe a strategy for optimal test function selection, and in Section \ref{sec:SC} the strategy for improved iteration termination criteria.
\subsection{Weak-form Estimation with Ordinary Least Squares\label{sec:WfOLS}}
We begin by considering a $d$-dimensional matrix form of \eqref{eq:UPE_DE}, i.e., an ordinary differential
equation system model
\begin{equation}
\dot{u}=\Theta(u)W\label{eq:general model}
\end{equation}
with row vector of the $d$ solution states $u(t;W):=[\begin{array}{cccc}
u_{1}(t;W) & u_{2}(t;W) & \cdots & u_{d}(t;W)]\end{array}$, row vector of $J$ features (i.e., right side terms) $\Theta(u):=[\begin{array}{cccc}
f_{1}(u) & f_{2}(u) & \cdots & f_{J}(u)]\end{array}$ where $f_j:\mathbb{R}^d\to\mathbb{R}$, and the matrix of unknown parameters $W\in\mathbb{R}^{J\times d}$. We consider
a $C^{\infty}$ test function $\phi$ compactly supported in the time interval $[0,T]$ (e.g.\ $\phi \in C_{c}^{\infty}([0,T])$),
multiply both sides of (\ref{eq:general model}) by $\phi$, and integrate
over $0$ to $T$. Via integration by parts we obtain
\[
\phi(T)u(T)-\phi(0)u(0) -
\int_{0}^{T}\dot{\phi}u\textsf{d}t
=\int_{0}^{T}\phi\Theta(u)W\textsf{d}t.
\]
As the compact support of $\phi$ implies that $\phi(0)=\phi(T)=0$, this yields
a transform of (\ref{eq:general model}) into
\begin{equation}
-\int_{0}^{T}\dot{\phi}u\textsf{d}t
=\int_{0}^{T}\phi\Theta(u)W\textsf{d}t.\label{eq:WENDyContinuum}
\end{equation}
This weak form of the equation allows us to define a novel methodology for estimating the entries
in $W$.
Observations of states of this system are (in this paper) assumed to occur at a discrete set of
$M+1$ timepoints $\{t_{m}\}_{m=0}^{M}$ with uniform stepsize $\Delta t$.
The test functions are thus centered at a subsequence of $K$
timepoints $\{t_{m_{k}}\}_{k=1}^{K}$. We choose the test function
support to be centered at a timepoint $t_{m_{k}}$ with radius $m_{t}\Delta t$
where $m_{t}$ is an integer (to be chosen later). Bold variables denote
evaluation at or dependence on the chosen timepoints, e.g.,
\begin{equation*}
\begin{array}{ccc}
\mathbf{t}:=\left[\begin{array}{c}
t_0\\
\vdots\\
t_M\end{array}\right],\phantom{\quad} & \mathbf{u}:=\left[\begin{array}{ccc}
u_1(t_0) & \cdots & u_d(t_0) \\
\vdots & \ddots & \vdots \\
u_1(t_M) & \cdots & u_d(t_M)
\end{array}\right],\phantom{\quad} & \Theta(\mathbf{u}):=\left[\begin{array}{ccc}
f_1(u(t_0)) & \cdots & f_J(u(t_0))\\
\vdots & \ddots & \vdots\\
f_1(u(t_M)) & \cdots & f_J(u(t_M))
\end{array}\right].
\end{array}
\end{equation*}
Approximating the integrals in (\ref{eq:WENDyContinuum}) using a
Newton-Cotes quadrature yields
\begin{equation}
-\dot{\Phi}_{k}\mathbf{u}\approx\Phi_{k}\Theta(\mathbf{u})W,\label{eq:ApproxWENDy}
\end{equation}
where
\[
\begin{array}{ccc}
\Phi_k:=\left[\begin{array}{c|c|c}
\phi_k(t_0) & \cdots & \phi_k(t_M)
\end{array}\right]\mathbf{Q},&\phantom{\qquad}& \dot{\Phi}_k:=\left[\begin{array}{c|c|c}
\dot{\phi}_k(t_0) & \cdots & \dot{\phi}_k(t_M)
\end{array}\right]\mathbf{Q}
\end{array}
\]
and $\phi_{k}$ is a test function centered at timepoint $t_{m_{k}}$. To account for proper scaling, in computations we normalize each test function $\phi_k$ to have unit $\ell_2$-norm, i.e., $\sum_{m=0}^M\phi_k^2(t_m) = 1$.
The $\mathbf{Q}$ matrix contains the quadrature weights on the diagonal. In this work we use the composite Trapezoidal
rule\footnote{The composite Trapezoidal rule works best for the uniform spacing and thus the left and right sides of \eqref{eq:ApproxWENDy} are sums weighted by $\dot{\phi}_{k}(\mathbf{t})$ and $\phi_{k}(\mathbf{t})$, respectively.} for which the matrix is
\[
\mathbf{Q}:=\mathsf{diag}(\nicefrac{\Delta t}{2},\Delta t,\ldots,\Delta t,\nicefrac{\Delta t}{2})\in \mathbb{R}^{(M+1)\times(M+1)}.
\]
We defer full consideration of the integration error until Section \ref{sec:TestFunctionminrad} but note that in the case of a non-uniform timegrid, $\mathbf{Q}$ would simply be adapted with the correct stepsize and quadrature weights.
The core idea of the weak-form-based direct parameter estimation is to identify $W$ as a least squares solution
to
\begin{equation}
\min_{W}\left\Vert \textsf{vec}(\mathbf{G}W-\mathbf{B})\right\Vert _{2}^{2}\label{eq:WENDy}
\end{equation}
where ``$\textsf{vec}$'' vectorizes a matrix,
\[
\begin{array}{rl}
\mathbf{G} & :=\Phi\Theta(\mathbf{U})\in\mathbb{R}^{K\times J},\\
\mathbf{B} & :=-\dot{\Phi}\mathbf{U}\in\mathbb{R}^{K\times d},
\end{array}
\]
and where $\mathbf{U}$ represents the data. The integration matrices are
\[
\begin{array}{rl}
\Phi=\left[\begin{array}{c}
\Phi_{1}\\
\vdots\\
\Phi_{K}
\end{array}\right]\in\mathbb{R}^{K\times (M+1)}\quad\textsf{and} & \dot{\Phi}=\left[\begin{array}{c}
\dot{\Phi}_{1}\\
\vdots\\
\dot{\Phi}_{K}
\end{array}\right]\in\mathbb{R}^{K\times (M+1)}.\end{array}
\]
The ordinary least squares (OLS) solution to (\ref{eq:WENDy}) is presented in Algorithm \ref{alg:WENDy-with-naive}. We note that we have written the algorithm this way to promote clarity concerning the weak-form estimation idea. For actual implementation, we create a different $\Theta_i$ for each variable $i=1,\ldots,d$ and use regression for state $i$ to solve for a vector $\widehat{\mathbf{w}}_i$ of parameters (instead of a matrix of parameters $W$, which can contain values known to be zero). To increase computational efficiency, we make sure to remove any redundancies and use sparse computations whenever possible.
\begin{algorithm}
\caption{\label{alg:WENDy-with-naive}Weak-form Parameter Estimation with Ordinary Least Squares}
\SetKwInOut{Input}{input} \SetKwInOut{Output}{output}
\Input{Data $\{\mathbf{U}\}$, Feature Map $\{\Theta\}$, Test Function Matrices $\{\Phi,\dot{\Phi}\}$}
\Output{Parameter Estimate $\{\widehat{W}\}$}
\BlankLine
\BlankLine
\tcp{Solve Ordinary Least Squares Problem}
$\mathbf{G}\leftarrow \Phi\Theta(\mathbf{U})$\\
$\mathbf{B}\leftarrow -\dot{\Phi}\mathbf{U}$\\
$\widehat{W}\leftarrow (\mathbf{G}^T\mathbf{G})^{-1}\mathbf{G}^T\mathbf{B}$
\end{algorithm}
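To make Algorithm \ref{alg:WENDy-with-naive} concrete, the following Python sketch applies weak-form OLS to the same logistic model used in the FSNLS sketch above, with $C^{\infty}$ bump test functions and trapezoidal quadrature. The test function shape, radius, and placement here are ad hoc illustrative choices (principled selection is deferred to Section \ref{sec:TestFunction}), and the snippet is not the released implementation.
\begin{verbatim}
import numpy as np

def bump(t, center, radius, eta=9.0):
    # C-infinity bump test function phi (and its derivative) supported on |t - center| < radius.
    s = (t - center) / radius
    phi, dphi = np.zeros_like(t), np.zeros_like(t)
    inside = np.abs(s) < 1
    phi[inside] = np.exp(-eta / (1 - s[inside]**2))
    dphi[inside] = phi[inside] * (-2 * eta * s[inside]) / (radius * (1 - s[inside]**2)**2)
    return phi, dphi

def weak_form_ols(t, U, features, centers, radius):
    # Algorithm 1 (single state): weak-form OLS estimate of w in u' = Theta(u) w.
    dt = t[1] - t[0]
    q = np.full(t.size, dt); q[[0, -1]] = dt / 2                 # trapezoidal weights
    Theta = np.column_stack([f(U) for f in features])            # (M+1) x J feature matrix
    Phi, dPhi = [], []
    for c in centers:
        phi, dphi = bump(t, c, radius)
        nrm = np.linalg.norm(phi)                                # unit l2-norm test functions
        Phi.append(phi * q / nrm); dPhi.append(dphi * q / nrm)
    G = np.array(Phi) @ Theta                                    # K x J
    b = -np.array(dPhi) @ U                                      # K
    return np.linalg.lstsq(G, b, rcond=None)[0]

# Logistic model u' = w1*u + w2*u^2 with true w = (1, -1), u(0) = 0.1, and additive noise.
t = np.linspace(0, 10, 101)
u_star = 0.1 * np.exp(t) / (1 + 0.1 * (np.exp(t) - 1))
U = u_star + 0.1 * np.random.default_rng(1).standard_normal(t.size)
w_hat = weak_form_ols(t, U, [lambda u: u, lambda u: u**2], centers=t[10:-10:5], radius=1.0)
print("weak-form OLS estimate of (w1, w2):", w_hat)
\end{verbatim}
In contrast to FSNLS, no ODE solves are required: the data enter $\mathbf{G}$ and $\mathbf{b}$ directly.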
The OLS solution has respectable performance in some cases, but in general there is a clear need for improvement upon OLS. In particular, we note that \eqref{eq:WENDy}
is \emph{not} a standard least squares problem. The (likely noisy)
observations of the state $u$ appear on both sides of
(\ref{eq:ApproxWENDy}). In Statistics, this is known as an Errors
in Variables (EiV) problem.\footnote{
The EiV problem with standard additive i.i.d.\,Gaussian measurement errors is known as a Total Least Squares (TLS) problem in applied and computational mathematics. The literature of EiV and TLS is very similar, but TLS problems are a subset of EiV problems. We direct the interested reader to \cite{VanHuffelLemmerling2002} for more information.} While a full and rigorous analysis of the statistical properties
of weak-form estimation is beyond the scope of this article\footnote{See our work in \cite{MessengerBortz2022arXiv221116000} for an investigation of the
asymptotic consistency in the limit of continuum data.}, here we will present several formal derivations aimed at improving the accuracy of parameter estimation. These improvements are critical as the OLS approach is not reliably accurate. Accordingly, we define WENDy (in the next section) as a weak-form parameter estimation method which uses techniques that address the EiV challenges.
\subsection{WENDy: Weak-form estimation using Iterative Reweighting\label{sec:WENDy_IRLS}}
In this subsection, we acknowledge that the
regression problem does not fit within the framework of ordinary least squares (see Figure \ref{HR_res}) and is actually an Errors-In-Variables problem. We now derive a linearization that yields insight into the covariance
structure of the problem. First, we denote the vector
of true (but unknown) parameter values used in all state variable equations as $\mathbf{w}^{\star}$ and let $u^{\star}:=u(t;\mathbf{w}^{\star})$
and $\Theta^{\star}:=\Theta(u^{\star})$. We
also assume that measurements of the system are noisy,
so that at each timepoint $t$ all states are observed with additive
noise
\begin{equation}
U(t)=u^{\star}(t)+\varepsilon(t)\label{eq:additive noise-1}
\end{equation}
where each element of $\varepsilon(t)$ is i.i.d.~$\mathcal{N}(0,\sigma^{2})$.\footnote{Naturally, for real data, there could be different variances for different
states as well as more sophisticated measurement error models. We defer such questions
to future work.} Lastly, we note that there are $d$ variables, $J$ feature terms, and $M+1$ timepoints. In what follows, we present the expansion using Kronecker products
(denoted as $\otimes$).
We begin by considering the sampled data $\mathbf{U}:=\mathbf{u}^\star+\pmb{\varepsilon}\in\mathbb{R}^{(M+1)\times d}$ and the vector of parameters to be identified $\mathbf{w}\in\mathbb{R}^{Jd}$. We use bolded variables to represent evaluation on the timegrid $\mathbf{t}$, and superscript $\star$ to denote quantities based on the true (noise-free) parameters or states. We now consider the residual
\begin{equation}
\mathbf{r}(\mathbf{U},\mathbf{w}):=\mathbf{G}\mathbf{w}-\mathbf{b},\label{eq:WENDyDataResid}
\end{equation}
where we redefine
\begin{align*}
\mathbf{G} & :=[\mathbb{I}_{d}\otimes(\Phi\Theta(\mathbf{U}))],\\
\mathbf{b} & :=-\mathsf{vec}(\dot{\Phi}\mathbf{U}).
\end{align*}
We then note that we can decompose the residual into several components
\begin{align*}
\mathbf{r}(\mathbf{U},\mathbf{w})&=
\mathbf{G} \mathbf{w} - \mathbf{G}^\star\mathbf{w}+\mathbf{G}^\star\mathbf{w} -\mathbf{G}^\star\wbf^\star+\mathbf{G}^\star\wbf^\star- (\mathbf{b}^\star+\mathbf{b}^{\pmb{\varepsilon}})\\
&= \underbrace{(\mathbf{G}-\mathbf{G}^\star)\mathbf{w}}_{\begin{array}{c}\mathbf{e}_\Theta\end{array}}+\underbrace{\mathbf{G}^\star(\mathbf{w}-\wbf^\star)}_{\begin{array}{c}\mathbf{r}_{0}\end{array}}+\underbrace{(\mathbf{G}^\star\wbf^\star-\mathbf{b}^\star)}_{\begin{array}{c}\mathbf{e}_{\text{int}}\end{array}}-\mathbf{b}^{\pmb{\varepsilon}},
\end{align*}
where
\begin{align*}
\mathbf{G}^\star & :=[\mathbb{I}_{d}\otimes(\Phi\Theta(\mathbf{u}^\star))],\\
\mathbf{b} & :=\underbrace{-\mathsf{vec}(\dot{\Phi}\mathbf{u}^\star)}_{\begin{array}{c}\mathbf{b}^\star\end{array}}+\underbrace{-\textsf{vec}(\dot{\Phi}\,\pmb{\varepsilon})}_{\begin{array}{c}\mathbf{b}^{\pmb{\varepsilon}}\end{array}}.
\end{align*}
Here, $\mathbf{r}_0$ is the residual without measurement noise or integration errors, and $\mathbf{e}_{\text{int}}$ is the numerical integration error induced by the quadrature (and will be analyzed in Section \ref{sec:TestFunction}).
Let us further consider the leftover terms $\mathbf{e}_\Theta-\mathbf{b}^{\pmb{\varepsilon}}$ and take a Taylor expansion around the data $\mathbf{U}$
\begin{equation}
\begin{array}{rl}
\mathbf{e}_\Theta-\mathbf{b}^{\pmb{\varepsilon}} & = (\mathbf{G}-\mathbf{G}^\star)\mathbf{w} +\textsf{vec}(\dot{\Phi}\,\pmb{\varepsilon})\\
& = \Big[\mathbb{I}_d\otimes \big(\Phi\left(\Theta(\mathbf{U})-\Theta(\mathbf{U}-\pmb{\varepsilon})\right)\big)\Big]\mathbf{w} + \Big[\mathbb{I}_d\otimes \dot{\Phi}\Big]\textsf{vec}(\pmb{\varepsilon})\\
& = \mathbf{L}_{\mathbf{w}}\mathsf{vec}(\pmb{\varepsilon})+\mathbf{h}(\mathbf{U},\mathbf{w},\pmb{\varepsilon})
\end{array}\label{eqn:eThetabeps}
\end{equation}
where $\mathbf{h}(\mathbf{U},\mathbf{w},\pmb{\varepsilon})$ is a vector-valued function of higher-order terms in the measurement errors $\pmb{\varepsilon}$ (including the Hessian as well as higher-order derivatives). Note that the $\mathbf{h}$ function will generally produce a bias and higher-order dependencies for all systems where $\nabla^2 \Theta \neq \mathbf{0}$, but it vanishes when $\pmb{\varepsilon}=\mathbf{0}$.
The first order matrix in the expansion \eqref{eqn:eThetabeps} is
\[
\mathbf{L}_{\mathbf{w}} :=[\mathsf{mat}(\mathbf{w})^{T}\otimes\Phi]\nabla\Theta\mathbf{P}+[\mathbb{I}_{d}\otimes\dot{\Phi}],
\]
where ``$\mathsf{mat}$'' is the matricization operation and $\mathbf{P}$ is a permutation matrix such that $\mathbf{P}\textsf{vec}(\boldsymbol{\varepsilon})=\textsf{vec}(\boldsymbol{\varepsilon}^{T})$. The matrix $\nabla \Theta$ contains derivatives of the features
\begin{align*}
\nabla \Theta & :=\left[\begin{array}{ccc}
\nabla f_{1}(\mathbf{U}_{0})\\
& \ddots\\
& & \nabla f_{1}(\mathbf{U}_{M})\\
\hline & \vdots\\
\hline \nabla f_{J}(\mathbf{U}_{0})\\
& \ddots\\
& & \nabla f_{J}(\mathbf{U}_{M})
\end{array}\right],
\end{align*}
where
\[
\nabla f_{j}(\mathbf{U}_{m})=\left[\begin{array}{c|c|c}
\frac{\partial}{\partial u_{1}}f_{j}(\mathbf{U}_{m}) & \cdots & \frac{\partial}{\partial u_{d}}f_{j}(\mathbf{U}_{m})\end{array}\right],
\]
and $\mathbf{U}_{m}$ is the row vector of states at $t_{m}$, i.e., the $m$-th row of $\mathbf{U}$.
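The following sketch (again illustrative Python, with a user-supplied gradient \texttt{grad\_theta} returning the $J\times d$ array of feature gradients at a single state) constructs $\mathbf{L}_{\mathbf{w}}$ directly from its definition, including the permutation $\mathbf{P}$ and the block matrix $\nabla\Theta$.
\begin{verbatim}
import numpy as np

def build_L(U, w, grad_theta, Phi, Phi_dot):
    """First-order noise-to-residual matrix L_w (a sketch of the formula above).

    U          : (M+1, d) data matrix
    w          : (J*d,) parameter vector (J weights per equation, stacked)
    grad_theta : callable, grad_theta(u) -> (J, d) array with entries
                 d f_j / d u_i evaluated at the row vector u
    Phi        : (K, M+1) test function matrix
    Phi_dot    : (K, M+1) derivative matrix
    """
    Mp1, d = U.shape
    J = w.size // d
    W = w.reshape(J, d, order="F")   # mat(w): column i holds weights of eqn i

    # nabla Theta: row (j, m) holds grad f_j(U_m) in the columns of timepoint m
    nabla_Theta = np.zeros((J * Mp1, Mp1 * d))
    for m in range(Mp1):
        Gm = grad_theta(U[m])                       # (J, d)
        for j in range(J):
            nabla_Theta[j * Mp1 + m, m * d:(m + 1) * d] = Gm[j]

    # Permutation P with P vec(eps) = vec(eps^T)  (column-major vec)
    P = np.zeros((Mp1 * d, Mp1 * d))
    for i in range(d):
        for m in range(Mp1):
            P[m * d + i, i * Mp1 + m] = 1.0

    return np.kron(W.T, Phi) @ nabla_Theta @ P + np.kron(np.eye(d), Phi_dot)
\end{verbatim}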
As mentioned above, we assume that all elements of $\boldsymbol{\varepsilon}$
are i.i.d.\,Gaussian, i.e., $\mathcal{N}(0,\sigma^2)$ and thus to first order the residual
is characterized by
\begin{equation}
\mathbf{G} \mathbf{w}-\mathbf{b}-(\mathbf{r}_{0}+\mathbf{e}_{\text{int}})\sim\mathcal{N}(\mathbf{0},\sigma ^2\mathbf{L}_{\mathbf{w}}(\mathbf{L}_{\mathbf{w}})^{T})\label{eqn:ResidDistw}.
\end{equation}
In the case where $\mathbf{w}=\wbf^\star$ and the integration error is negligible, \eqref{eqn:ResidDistw} simplifies to
\begin{equation}
\mathbf{G} \wbf^\star-\mathbf{b}\sim\mathcal{N}(\mathbf{0},\sigma ^2\mathbf{L}_{\wbf^\star}(\mathbf{L}_{\wbf^\star})^{T})\label{eqn:ResidDistwstar}.
\end{equation}
We note that in \eqref{eqn:ResidDistwstar} (and in \eqref{eqn:ResidDistw}), the covariance is dependent upon the parameter vector $\mathbf{w}$.
In the statistical inference literature, the Iteratively Reweighted
Least Squares (IRLS) \cite{Jorgensen2012EncyclopediaofEnvironmetrics}
method offers a strategy to account for a parameter-dependent covariance by iterating between solving for $\mathbf{w}$ and updating
the covariance matrix $\mathbf{C}$. In Algorithm \ref{alg:WENDy-IRLS} we present the WENDy method, updating $\mathbf{C}^{(n)}$ (at the $n$-th iteration step) in lines 7-8; the new parameters $\mathbf{w}^{(n+1)}$ are then computed in line 9 by
weighted least squares.
\begin{algorithm}
\caption{\label{alg:WENDy-IRLS}WENDy}
\SetKwInOut{Input}{input} \SetKwInOut{Output}{output}
\Input{Data $\{\mathbf{U}\}$, Feature Map $\{\Theta,\nabla\Theta\}$, Test Function Matrices $\{\Phi,\dot{\Phi}\}$, Stopping Criteria $\{\text{SC}\}$, Covariance Relaxation Parameter $\{\alpha\}$, Variance Filter $\{\mathbf{f}\}$}
\Output{Parameter Estimate $\{\widehat{\mathbf{w}},\widehat{\mathbf{C}},\widehat{\sigma},\mathbf{S},\texttt{stdx}\}$}
\BlankLine
\BlankLine
\tcp{Compute weak-form linear system}
$\mathbf{G} \leftarrow \left[\mathbb{I}_{d}\otimes(\Phi\Theta(\mathbf{U}))\right]$\\
$\mathbf{b} \leftarrow -\textsf{vec}(\dot{\Phi}\mathbf{U})$\\
\BlankLine
\BlankLine
\tcp{Solve Ordinary Least Squares Problem}
$\mathbf{w}^{(0)} \leftarrow (\mathbf{G}^T\mathbf{G})^{-1}\mathbf{G}^T\mathbf{b}$\\
\BlankLine
\BlankLine
\tcp{Solve Iteratively Reweighted Least Squares Problem}
$n \leftarrow 0$\\
\texttt{check} $\leftarrow$ true\\
\While{\texttt{check} is true}{
$\mathbf{L}^{(n)} \leftarrow [\textsf{mat}(\mathbf{w}^{(n)})^{T}\otimes\Phi]\nabla\Theta(\mathbf{U})\mathbf{P}+[\mathbb{I}_{d}\otimes\dot{\Phi}]$\\
$\mathbf{C}^{(n)} = (1-\alpha)\mathbf{L}^{(n)}(\mathbf{L}^{(n)})^T + \alpha \mathbf{I}$\\
$\mathbf{w}^{(n+1)} \leftarrow (\mathbf{G}^T(\mathbf{C}^{(n)})^{-1}\mathbf{G})^{-1}\mathbf{G}^{T}(\mathbf{C}^{(n)})^{-1}\mathbf{b}$\\
\texttt{check} $\leftarrow \text{SC}(\mathbf{w}^{(n+1)},\mathbf{w}^{(n)})$ \\
$n \leftarrow n+1$
}
\BlankLine
\BlankLine
\tcp{Return estimate and standard statistical quantities}
$\widehat{\mathbf{w}} \leftarrow \mathbf{w}^{(n)}$\\
$\widehat{\mathbf{C}} \leftarrow \mathbf{C}^{(n)}$\\
$\widehat{\sigma} \leftarrow (Md)^{-1/2}\nrm{\mathbf{f}*\mathbf{U}}_\text{F}$ \\
$\mathbf{S} \leftarrow \widehat{\sigma}^2 \left((\mathbf{G}^T\mathbf{G})^{-1}\mathbf{G}^T\right)\
\widehat{\mathbf{C}}\ \left(\mathbf{G}(\mathbf{G}^T\mathbf{G})^{-1}\right)$\\
$\texttt{stdx} \leftarrow \sqrt{\texttt{diag}(\mathbf{S})}$
\end{algorithm}
The IRLS step in line 9 requires inverting $\mathbf{C}^{(n)}$, which is done by computing its Cholesky factorization and then applying the inverse to $\mathbf{G}$ and $\mathbf{b}$. Since this inversion may be unstable, we allow for possible regularization of $\mathbf{C}^{(n)}$ in line 8 via a convex combination between the analytical first-order covariance $\mathbf{L}^{(n)}(\mathbf{L}^{(n)})^T$ and the identity via the covariance relaxation parameter $\alpha$. This regularization allows the user to interpolate between the OLS solution ($\alpha=1$) and the unregularized IRLS solution ($\alpha=0$). In this way WENDy extends and encapsulates Algorithm 1. However, in the numerical examples below, we simply set $\alpha=10^{-10}$ throughout, as the aforementioned instability was not an issue. Lastly, any iterative scheme needs stopping criteria, and we defer discussion of ours until Section \ref{sec:SC}.
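To illustrate how these pieces fit together, here is a minimal Python sketch of the iteration in Algorithm \ref{alg:WENDy-IRLS}, composed from the \texttt{build\_linear\_system} and \texttt{build\_L} sketches above. It implements only the fixed-point stopping test; the Shapiro-Wilk check of Section \ref{sec:SC} and the filter-based variance estimate are omitted for brevity.
\begin{verbatim}
import numpy as np

def wendy_irls(U, theta, grad_theta, Phi, Phi_dot,
               alpha=1e-10, tol=1e-6, max_its=100):
    """Sketch of the WENDy IRLS loop (fixed-point stopping test only)."""
    G, b = build_linear_system(U, theta, Phi, Phi_dot)
    w = np.linalg.lstsq(G, b, rcond=None)[0]        # OLS initialization

    for _ in range(max_its):
        L = build_L(U, w, grad_theta, Phi, Phi_dot)
        C = (1 - alpha) * (L @ L.T) + alpha * np.eye(L.shape[0])
        R = np.linalg.cholesky(C)                   # C = R R^T
        Gw = np.linalg.solve(R, G)                  # apply C^{-1/2}
        bw = np.linalg.solve(R, b)
        w_new = np.linalg.lstsq(Gw, bw, rcond=None)[0]
        if np.linalg.norm(w_new - w) / np.linalg.norm(w) < tol:
            w = w_new
            break
        w = w_new
    return w, C
\end{verbatim}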
The outputs of Algorithm \ref{alg:WENDy-IRLS} include the estimated parameters ${\widehat{\wbf}}$ as well as the covariance $\widehat{\mathbf{C}}$ of the response vector $\mathbf{b}$ such that approximately
\[\mathbf{b} \sim {\mathcal{N}}(\mathbf{G}{\widehat{\wbf}},\sigma^2\widehat{\mathbf{C}}).\]
A primary benefit of the WENDy methodology is that the parameter covariance matrix $\mathbf{S}$ can be estimated from $\widehat{\mathbf{C}}$ using
\begin{equation}
\mathbf{S} := \widehat{\sigma}^2 \left((\mathbf{G}^T\mathbf{G})^{-1}\mathbf{G}^T\right)\
\widehat{\mathbf{C}}\ \left(\mathbf{G}(\mathbf{G}^T\mathbf{G})^{-1}\right).
\end{equation}
This yields the variances of individual components of ${\widehat{\wbf}}$ along $\textsf{diag}(\mathbf{S})$ as well as the correlations between elements of ${\widehat{\wbf}}$ in the off-diagonals of $\mathbf{S}$. Here $\widehat{\sigma}^2$ is an estimate of the measurement variance $\sigma^2$, which we compute by convolving each compartment of the data $\mathbf{U}$ with a high-order\footnote{The order of a filter is defined as the number of moments that the filter leaves zero (other than the zero-th moment). For more mathematical details see \cite{MessengerBortz2022arXiv221116000} Appendix F.}
filter $\mathbf{f}$ and taking the Frobenius norm of the resulting convolved data matrix $\mathbf{f} *\mathbf{U}$. Throughout we set $\mathbf{f}$ to be the centered finite difference weights of order 6 over 15 equally-spaced points (computed using \cite{Fornberg1988MathComput}), so that $\mathbf{f}$ has order 5. The filter $\mathbf{f}$ is then normalized to have unit 2-norm. This yields a high-accuracy approximation of $\sigma^2$ for underlying data $\mathbf{u}^\star$ that is locally well-approximated by polynomials up to degree 5.
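A simple way to mimic this variance estimate in code is sketched below; here a normalized sixth-difference filter (the coefficients of $(1-z)^6$) stands in for the order-5 centered finite-difference filter used in our implementation, since both annihilate low-degree polynomial trends and leave (approximately) only filtered noise.
\begin{verbatim}
import numpy as np

def estimate_sigma(U):
    """Sketch of the filter-based noise level estimate sigma_hat."""
    f = np.array([1., -6., 15., -20., 15., -6., 1.])   # (1-z)^6 coefficients
    f /= np.linalg.norm(f)                              # unit 2-norm
    fU = np.column_stack([np.convolve(U[:, i], f, mode="valid")
                          for i in range(U.shape[1])])
    # RMS of the filtered data; for unit-norm f this estimates sigma
    return np.linalg.norm(fU, "fro") / np.sqrt(fU.size)
\end{verbatim}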
\subsection{Choice of Test Functions\label{sec:TestFunction}}
When using WENDy for parameter estimation, a valid question concerns
the choice of test function. This is particularly challenging in the sparse data regime, where integration errors can easily affect parameter estimates. In
\cite{MessengerBortz2021MultiscaleModelSimul} we reported that using higher order polynomials as test functions yielded more accuracy (up to machine precision). Inspired by this result and to render moot the question of what order polynomial is needed, we have developed a 2-step process for offline computation of highly efficient test functions, given a timegrid $\mathbf{t}$.
We first derive an estimator of the integration error that can be computed using the noisy data $\mathbf{U}$ and used to detect a minimal radius $\underline{m}_{t}$ such that $m_t>\underline{m}_{t}$ leads to negligible integration error compared to the errors introduced by random noise. Inspired by wavelet decompositions, we next row-concatenate convolution matrices of test functions at different radii $\mathbf{m}_t:= (2^\ell \underline{m}_{t};\ \ell=\{0,\dots,\bar{\ell}\}).$ An SVD of this tall matrix yields an orthonormal test function matrix $\Phi$, which maximally extracts information across different scales. We note that in the later examples we have $\bar{\ell} = 3$, which in many cases leads to a largest test function support covering half of the time domain.
To begin, we consider a $C^\infty$ bump function
\begin{equation}
\psi(t;a) = C\exp\left(-\frac{\eta}{[1-(t/a)^2]_+}\right),\label{eq:CinftyBump}
\end{equation}
where the constant $C$ enforces that $\nrm{\psi}_2=1$, $\eta$ is a shape parameter, and $[\boldsymbol{\cdot}]_+ := \max(\boldsymbol{\cdot},0)$, so that $\psi(t;a)$ is supported only on $[-a,a]$ where
\begin{equation}\label{raddef}
a = m_t\Delta t.
\end{equation}
With the $\psi$ in \eqref{eq:CinftyBump} we have discovered that the accuracy of the parameter estimates is relatively insensitive to a wide range of $\eta$ values. Therefore, based on empirical investigation we arbitrarily choose $\eta=9$ in all examples and defer more extensive analysis to future work. In the rest of this section, we will describe the computation of $\underline{m}_t$ and how to use $\psi$ to construct $\Phi$ and $\dot{\Phi}$.
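In code, a discretized version of \eqref{eq:CinftyBump} might look like the following sketch, where the normalization constant is imposed numerically on the sampled values (the grid and radius are supplied by the caller).
\begin{verbatim}
import numpy as np

def bump(t, a, eta=9.0):
    """C-infinity bump psi(t; a), supported on [-a, a], discretely normalized."""
    psi = np.zeros_like(t, dtype=float)
    inside = np.abs(t) < a
    psi[inside] = np.exp(-eta / (1.0 - (t[inside] / a) ** 2))
    return psi / np.linalg.norm(psi)
\end{verbatim}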
\subsubsection{Minimum radius selection\label{sec:TestFunctionminrad}}
In \eqref{eqn:eThetabeps}, it is clear that reducing the numerical integration errors $\mathbf{e}_{\text{int}}$ will improve the estimate accuracy. However, there will be an $m_t$ value above which the errors will be dominated by the noise. To establish a lower bound $\underline{m}_t$ on the test function radius $m_t$, we create an estimate for the integration error which works for any of the $d$ variables in a model. To promote clarity, we will let $u$ be any of the $d$ variables for the remainder of this section. However, it is important to note that the final $\widehat{\mathbf{e}}_\text{rms}$ sums over all $d$ variables.
We now consider the $k$-th element of $\mathbf{e}_\text{int}$
\[\mathbf{e}_\text{int}(u^\star,\phi_k,M)_k = (\mathbf{G}^\star\wbf^\star-\mathbf{b}^\star)_k = \sum_{m=0}^{M-1}\left(\phi_k(t_m)\dot{\mathbf{u}}_m^\star + \dot{\phi}_k(t_m)\mathbf{u}_m^\star\right)\Delta t = \frac{T}{M} \sum_{m=0}^{M-1}\frac{d}{dt}(\phi_k(t_m) \mathbf{u}^\star_m),\]
where $\Delta t =T/M$ for a uniform timegrid $\mathbf{t}=(0,\Delta t, 2\Delta t,\ldots,M\Delta t)$ with overall length $T$. We also note that the biggest benefit of this approach is that $\mathbf{e}_\text{int}$ does not explicitly depend upon $\wbf^\star$.
By expanding $\frac{d}{dt}(\phi_k(t)u^\star(t))$ into its Fourier Series\footnote{We define the $n$th Fourier mode of a function $f:[0,T]\to \mathbb{C}$ as ${\mathcal{F}}_{n}[f] := \frac{1}{\sqrt{T}}\int_0^T f(t)e^{-2\pi int/T}dt$.} we then have
\begin{equation}\label{FFTerrorform}
\mathbf{e}_\text{int}(u^\star,\phi_k,M)=\frac{T}{M\sqrt{T}} \sum_{n\in \mathbb{Z}} {\mathcal{F}}_n\left[\frac{d}{dt}(\phi_k(t) u^\star(t))\right] \left( \sum_{m=0}^{M-1}e^{2\pi inm/M}\right) =\frac{2\pi i}{\sqrt{T}}\sum_{n\in \mathbb{Z}}nM {\mathcal{F}}_{nM}[\phi_k u^\star],
\end{equation}
so that the integration error is entirely represented by aliased modes $\{M,2M,\dots\}$ of $\phi_k u^\star$. Assuming $[-a+t_k,a+t_k]\subset [0,T]$ and $T>2a>1$, we have the relation
\[{\mathcal{F}}_n[\phi_k(\boldsymbol{\cdot};a)] = a{\mathcal{F}}_{na}[\phi_k(\boldsymbol{\cdot};1)],\]
hence increasing $a$ corresponds to higher-order Fourier coefficients of $\phi_k(\boldsymbol{\cdot}; 1)$ entering the error formula \eqref{FFTerrorform}, which shows, using \eqref{FFTerrorform}, that increasing $a$ (eventually) lowers the integration error. For small $m_t$, the integration error $\mathbf{e}_\text{int}$ dominates the noise-related errors, while for large $m_t$, the noise-related effects are dominant.
We now derive a surrogate approximation of $\mathbf{e}_\text{int}$ using the noisy data $\mathbf{U}$ to estimate this transition from integration error-dominated to noise error-dominated residuals.
From the noisy data $\mathbf{U}$ on timegrid $\mathbf{t}\in\mathbb{R}^M$, we wish to compute $\mathbf{e}_\text{int}(u^\star,\phi_k,M)$ by substituting $\mathbf{U}$ for $u^\star$ and using the discrete Fourier transform (DFT); however, the highest mode\footnote{We define the $n$th discrete Fourier mode of a function $f$ over a periodic grid $(m\Delta t)_{m=0}^M$ by\newline $\widehat{{\mathcal{F}}}_n[f] := \frac{\Delta t}{\sqrt{M\Delta t}}\sum_{m=0}^{M-1} f(m\Delta t)e^{-2\pi i n m/M}$.} we have access to is $\widehat{{\mathcal{F}}}_{\pm M/2}[\phi \mathbf{U}]$. On the other hand, we \textit{are} able to approximate $\mathbf{e}_\text{int}(u^\star,\phi_k,\lfloor M/s\rfloor)$ from $\mathbf{U}$, that is, the integration error over a {\it coarsened} timegrid $(0,\tilde{\Delta t},2\tilde{\Delta t}, \dots, \lfloor M/s\rfloor \tilde{\Delta t})$, where $\tilde{\Delta t} = T / \lfloor M/s\rfloor$ and $s>2$ is a chosen coarsening factor. By introducing the truncated error formula
\[ \widehat{\mathbf{e}}_\text{int}(u^\star,\phi_k,\lfloor M/s\rfloor,s) := \frac{2\pi i}{\sqrt{T}}\sum_{n=-\flr{s/2}}^{\flr{s/2}}n\lfloor M/s\rfloor {\mathcal{F}}_{n\lfloor M/s\rfloor}[\phi_k u^\star],\]
we have that
\[\widehat{\mathbf{e}}_\text{int}(u^\star,\phi_k,\lfloor M/s\rfloor,s)\approx \mathbf{e}_\text{int}(u^\star,\phi_k,\lfloor M/s\rfloor),\]
and $\widehat{\mathbf{e}}_\text{int}$ can be directly evaluated at $\mathbf{U}$ using the DFT. In particular, with $2<s<4$, we get
\[\widehat{\mathbf{e}}_\text{int}(\mathbf{U},\phi_k,\lfloor M/s\rfloor,s) = \frac{2\pi i \flr{M/s}}{\sqrt{T}}\left(\widehat{{\mathcal{F}}}_{\lfloor M/s\rfloor}[\phi_k \mathbf{U}]-\widehat{{\mathcal{F}}}_{-\lfloor M/s\rfloor}[\phi_k \mathbf{U}]\right) = -\frac{4\pi\flr{M/s}}{\sqrt{T}}\text{Im}\{\widehat{{\mathcal{F}}}_\flr{M/s}[\phi_k \mathbf{U}]\}\]
where $\text{Im}\{z\}$ denotes the imaginary portion of $z\in \mathbb{C}$, so that only a single Fourier mode needs computation. In most practical cases of interest, this leads to (see Figure \ref{interrfig})
\begin{equation}\label{interr_err_ineq}
\mathbf{e}_\text{int}(u^\star,\phi_k,M) \ \leq \ \widehat{\mathbf{e}}_\text{int}(\mathbf{U},\phi_k,\lfloor M/s\rfloor,s) \ \leq \ \mathbf{e}_\text{int}(u^\star,\phi_k,\lfloor M/s\rfloor)
\end{equation}
so that ensuring $\widehat{\mathbf{e}}_\text{int}(\mathbf{U},\phi_k,\lfloor M/s\rfloor,s)$ is below some tolerance $\tau$ also ensures $\mathbf{e}_\text{int}(u^\star,\phi_k,M)<\tau$.
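The single-mode surrogate above is cheap to evaluate; a sketch for one data component is given below (the grid is assumed uniform with $M$ points over $[0,T)$, matching the DFT convention in the footnote).
\begin{verbatim}
import numpy as np

def int_error_estimate(phi_k, u, T, s=3.0):
    """Sketch of e_hat_int(U, phi_k, floor(M/s), s) for one component u."""
    M = len(u)
    dt = T / M
    n = int(np.floor(M / s))
    modes = np.exp(-2j * np.pi * n * np.arange(M) / M)
    F_n = dt / np.sqrt(T) * np.sum(phi_k * u * modes)  # DFT mode n of phi_k*u
    return -4.0 * np.pi * n / np.sqrt(T) * np.imag(F_n)
\end{verbatim}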
Statistically, under our additive noise model we have that $\widehat{\mathbf{e}}_\text{int}(\mathbf{U},\phi_k,\lfloor M/s\rfloor,s)$ is an unbiased estimator of $\widehat{\mathbf{e}}_\text{int}(u^\star,\phi_k,\lfloor M/s\rfloor,s)$, i.e.,
\[\mathbb{E}[\widehat{\mathbf{e}}_\text{int}(\mathbf{U},\phi_k,\lfloor M/s\rfloor,s)]=\mathbb{E}[-(\nicefrac{4\pi\flr{M/s}}{\sqrt{T}})\text{Im}\{\widehat{{\mathcal{F}}}_\flr{M/s}[\phi_k (\mathbf{u}^\star+\pmb{\varepsilon})]\}]=\mathbb{E}[\widehat{\mathbf{e}}_\text{int}(u^\star,\phi_k,\lfloor M/s\rfloor,s)],\]
where $\mathbb{E}$ denotes expectation.
The variance satisfies, for $2<s<4$,
\[\mathbf{Var}[\widehat{\mathbf{e}}_\text{int}(\mathbf{U},\phi_k,\lfloor M/s\rfloor,s)] = \sigma^2\left(\frac{4\pi\flr{M/s}}{M}\right)^2\sum_{j=1}^{M-1}\phi^2_k(j\Delta t)\sin^2(2\pi
\flr{M/s}j/M)\leq \sigma^2\left(\frac{4\pi\flr{M/s}}{M}\right)^2,\]
where $\sigma^2$ is the variance of each element of $\pmb{\varepsilon}$. The upper bound follows from $\nrm{\phi_k}_2 = 1$, and shows that the variance is not sensitive to the radius of the test function $\phi_k$.
We pick a radius $\underline{m}_t$ as a changepoint of $\log(\hat{\mathbf{e}}_\text{rms})$, where $\hat{\mathbf{e}}_\text{rms}$ is the root-mean-squared integration error over test functions placed along the timeseries,
\begin{equation}
\hat{\mathbf{e}}_\text{rms}(m_t):= K^{-1}\sum_{k=1}^K\sum_{i=1}^{d}\widehat{\mathbf{e}}_\text{int}(\mathbf{U}^{(i)},\phi_k(\cdot;m_t),\lfloor M/s\rfloor,s)^2,
\label{eq:IntErr}
\end{equation}
where $\mathbf{U}^{(i)}$ is the $i$th variable in the system.
Figure \ref{interrfig} depicts $\widehat{\mathbf{e}}_\text{rms}$ as a function of support radius $m_t$. As can be seen, since the variance of $\widehat{\mathbf{e}}_\text{int}$ is insensitive to the radius $m_t$, the estimator is approximately flat over the region with negligible integration error, a perfect setting for changepoint detection. Crucially, Figure \ref{interrfig} demonstrates that, in practice, the minimum radius $\underline{m}_t$ lies to the right of the changepoint of the coefficient errors
\[E_2({\widehat{\wbf}}) := \nrm{{\widehat{\wbf}}-\wbf^\star}_2^2/\nrm{\wbf^\star}_2^2,\]
as a function of $m_t$.
\begin{figure}
\begin{tabular}{cc}
\includegraphics[trim={0 0 35 20},clip,width=0.48\textwidth]{interr_fitzhugh} &
\includegraphics[trim={0 0 35 20},clip,width=0.48\textwidth]{interr_fitzhugh_data}
\end{tabular}
\caption{Visualization of the minimum radius selection using single realizations of Fitzhugh-Nagumo data with 512 timepoints at three different noise levels. Dashed lines indicate the minimum radius $\underline{m}_t$. Left: we see that inequality \eqref{interr_err_ineq} holds empirically for small radii $m_t$. Right: coefficient error $E_2$ as a function of $m_t$ is plotted, showing that for each noise level the identified radius $\underline{m}_t$ using $\hat{\mathbf{e}}_\text{rms}$ lies to the right of the dip in $E_2$, as random errors begin to dominate integration errors. In particular, for low levels of noise, $\underline{m}_t$ increases to ensure high accuracy integration.}
\label{interrfig}
\end{figure}
\subsubsection{Orthonormal test functions}
Having computed the minimal radius $\underline{m}_t$, we then construct the test function matrices $(\Phi,\dot{\Phi})$ by orthonormalizing and truncating a concatenation of test function matrices with $\mathbf{m}_t:= \underline{m}_t\times(1,2,4,8)$. Letting $\Psi_{\ell}$ be the convolution matrix for $\psi(\boldsymbol{\cdot}\ ; 2^\ell \underline{m}_t \Delta t)$, we compute the SVD
\[\Psi := \begin{bmatrix} \Psi_0 \\ \Psi_1 \\ \Psi_2 \\ \Psi_3 \end{bmatrix}= \mathbf{Q}\Sigma\mathbf{V}^T.\]
The modes $\mathbf{V}$ then form an orthonormal basis for the set of test functions forming the rows of $\Psi$. Letting $r$ be the rank of $\Psi$, we then truncate the SVD to rank $K$, where $K$ is selected as the changepoint in the cumulative sum of the singular values $(\Sigma_{ii})_{i=1}^r$. We then let
\[\Phi = (\mathbf{V}^{(K)})^T\]
be the test function basis where $\mathbf{V}^{(K)}$ indicates the first $K$ modes of $\mathbf{V}$. Unlike our previous implementations, the derivative matrix $\dot{\Phi}$ must now be computed numerically; however, due to the compact support and smoothness of the reference test functions $\psi(\boldsymbol{\cdot} ; 2^\ell \underline{m}_t \Delta t)$, this can be done very accurately with Fourier differentiation. Hence, we let
\[\dot{\Phi} = {\mathcal{F}}^{-1}\textsf{diag}(i\pmb{k}){\mathcal{F}}\Phi\]
where ${\mathcal{F}}$ is the discrete Fourier transform and $\pmb{k}$ are the requisite wavenumbers.
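The construction of $(\Phi,\dot{\Phi})$ can be sketched as follows; the rank truncation rule here (keep modes capturing 99\% of the singular value mass) is a stand-in for the changepoint rule described above, and test functions are simply centered at every admissible gridpoint.
\begin{verbatim}
import numpy as np

def test_function_basis(m_min, M, dt, eta=9.0, n_scales=4):
    """Sketch: stack bump convolution matrices at radii m_min*(1,2,4,8),
    orthonormalize by SVD, and differentiate spectrally."""
    t_grid = np.arange(M + 1) * dt
    rows = []
    for ell in range(n_scales):
        m = (2 ** ell) * m_min
        a = m * dt
        for c in range(m, M + 1 - m):
            psi = np.zeros(M + 1)
            idx = np.abs(t_grid - t_grid[c]) < a
            psi[idx] = np.exp(-eta / (1 - ((t_grid[idx] - t_grid[c]) / a) ** 2))
            rows.append(psi / np.linalg.norm(psi))
    Psi = np.array(rows)

    _, S, Vt = np.linalg.svd(Psi, full_matrices=False)
    K = int(np.searchsorted(np.cumsum(S) / np.sum(S), 0.99)) + 1
    Phi = Vt[:K]                                     # orthonormal rows

    # Spectral differentiation (test functions vanish at the boundary)
    ik = 2j * np.pi * np.fft.fftfreq(M + 1, d=dt)
    Phi_dot = np.real(np.fft.ifft(ik * np.fft.fft(Phi, axis=1), axis=1))
    return Phi, Phi_dot
\end{verbatim}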
\subsection{Stopping criteria \label{sec:SC}}
Having formed the test function matrices $\{\Phi,\dot{\Phi}\}$, the remaining unspecified process in Algorithm \ref{alg:WENDy-IRLS} is the stopping criteria $\text{SC}$. The iteration can stop in one of three ways: (1) the iterates reach a fixed point, (2) the number of iterates exceeds a specified limit, or (3) the residuals
\[\mathbf{r}^{(n+1)} := (\mathbf{C}^{(n)})^{-1/2}(\mathbf{G}\mathbf{w}^{(n+1)}-\mathbf{b})\]
are no longer approximately normally distributed. (1) and (2) are straightforward limitations of any iterative algorithm while (3) results from the fact that our weighted
least-squares framework is only approximate. In ideal scenarios where the discrepancy terms $\mathbf{e}_\text{int}$ and $\mathbf{h}(\mathbf{u}^\star,\wbf^\star;\pmb{\varepsilon})$ are negligible, equation \eqref{eqn:ResidDistw} implies that
\[(\mathbf{C}^\star)^{-1}(\mathbf{G}\wbf^\star-\mathbf{b})\sim {\mathcal{N}}(\pmb{0},\sigma^2\mathbf{I})\]
where $\mathbf{C}^\star = \mathbf{L}^\star(\mathbf{L}^\star)^T$ is the covariance computed from $\wbf^\star$. Hence we expect $\mathbf{r}^{(n)}$ to agree with a normal distribution more strongly as $n$ increases. If the discrepancy terms are non-negligible, it is possible that the reweighting procedure will not result in an increasingly normal $\mathbf{r}^{(n)}$, and iterates $\mathbf{w}^{(n)}$ may become worse approximations of $\wbf^\star$. A simple way to detect this is with the Shapiro-Wilk (S-W) test for normality \cite{ShapiroWilk1965Biometrika}, which produces an approximate $p$-value under the null hypothesis that the given sample is i.i.d.\ normally distributed. However, the first few iterations are also not expected to yield i.i.d.\ normal residuals (see Figure \ref{HR_res}), so we only check the S-W test after a fixed number of iterations $n_0$. Letting $\text{SW}^{(n)}:=\text{SW}(\mathbf{r}^{(n)})$ denote the $p$-value of the S-W test at iteration $n> n_0$, and setting $\text{SW}^{(n_0)}=1$, we specify the stopping criteria as:
\begin{equation}
\text{SC}(\mathbf{w}^{(n+1)},\mathbf{w}^{(n)}) = \{\|\mathbf{w}^{(n+1)}-\mathbf{w}^{(n)}\|_2/\|\mathbf{w}^{(n)}\|_2>\tau_\text{FP}\}\ \text{and}\ \{n<\texttt{max\_its}\}\ \text{and}\ \{\text{SW}^{(\max\{n,n_0\})}> \tau_\text{SW}\}.
\end{equation}
We set the fixed-point tolerance to $\tau_\text{FP}=10^{-6}$, the S-W tolerance and starting point to $\tau_\text{SW}=10^{-4}$ and $n_0=10$, and $\texttt{max\_its}=100$.
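For concreteness, the stopping criteria can be coded as a single boolean check; the sketch below uses \texttt{scipy.stats.shapiro} for the S-W $p$-value and treats iterations $n\leq n_0$ as passing the normality test, as specified above.
\begin{verbatim}
import numpy as np
from scipy.stats import shapiro

def stopping_criterion(w_new, w_old, r_new, n,
                       tau_fp=1e-6, tau_sw=1e-4, n0=10, max_its=100):
    """Sketch of SC: returns True while the iteration should continue."""
    fp = np.linalg.norm(w_new - w_old) / np.linalg.norm(w_old) > tau_fp
    its = n < max_its
    sw = True if n <= n0 else shapiro(r_new).pvalue > tau_sw
    return fp and its and sw
\end{verbatim}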
\section{Illustrating Examples\label{sec:Illustrating-Examples}}
Here we demonstrate the effectiveness of WENDy applied to five ordinary differential equations canonical to biology and biochemical modeling. As demonstrated in the works mentioned in Section \ref{sec:Introduction}, it is known that the weak or integral formulations are advantageous, with previous works mostly advocating for a two step process involving (1) pre-smoothing the data before (2) solving for parameters using ordinary least squares. The WENDy approach does not involve smoothing the data, and instead leverages the covariance structure introduced by the weak form to iteratively reduce errors in the ordinary least squares (OLS) weak-form estimation. Utilizing the covariance structure in this way not only reduces error, but reveals parameter uncertainties as demonstrated in Section \ref{sec:uncertainty}.
We compare the WENDy solution to the weak-form ordinary least squares solution (described in Section \ref{sec:WENDy} and denoted simply by OLS in this section) to forward solver-based nonlinear least squares (FSNLS).
Comparison to OLS is important due to the growing use of weak formulations in joint equation learning / parameter estimation tasks, but often without smoothing or further variance reduction steps \cite{MessengerBortz2021JComputPhys,FaselKutzBruntonEtAl2021ArXiv211110992CsMath,NicolaouHuoChenEtAl2023arXiv230102673,BertsimasGurnee2023NonlinearDyn}. In most cases WENDy reduces the OLS error by $60\%$-$90\%$ (see the bar plots in Figures \ref{Logistic_Growth_fig}-\ref{biochemM1_fig}). When compared to FSNLS, WENDy provides a more efficient and accurate solution in typical use cases, however in the regime of highly sparse data and large noise, FSNLS provides an improvement in accuracy at a higher computational cost. Furthermore, we demonstrate that FSNLS may be improved by using the WENDy output as an initial guess. We aim to explore further benefits of combining forward solver-based approaches with solver-free weak-form approaches in a future work. Code to generate all examples is available at \url{https://github.com/MathBioCU/WENDy}.
\begin{figure}
\centering
\begin{tabular}{ccc}
\includegraphics[trim={0 0 35 20},clip,width=0.3\textwidth]{LG_res}&
\includegraphics[trim={0 0 35 20},clip,width=0.3\textwidth]{LV_res}&
\includegraphics[trim={0 0 35 20},clip,width=0.3\textwidth]{FN_res}
\end{tabular}
\caption{Histograms of the WENDy (red) and OLS (blue) residuals evaluated at the WENDy output ${\widehat{\wbf}}$ applied to the (left-right) Logistic Growth, Lotka-Volterra, and Fitzhugh-Nagumo data, each with 256 timepoints and $20\%$ noise. Curves are averaged over 100 independent trials with each histogram scaled by its empirical standard deviation. In each case, the WENDy residual agrees well with a standard normal, while the OLS residual exhibits distinctly non-Gaussian features, indicating that OLS is the wrong statistical regression model.}
\label{HR_res}
\end{figure}
\subsection{Numerical methods and performance metrics}
In all cases below, we solve for approximate weights ${\widehat{\wbf}}$ using Algorithm \ref{alg:WENDy-IRLS} over 100 independent trials of additive Gaussian noise with standard deviation $\sigma = \sigma_{NR}\|\textsf{vec}(\mathbf{U}^\star)\|_\text{rms}$ for a range of noise ratios $\sigma_{NR}$. This specification of the variance implies that
\[\sigma_{NR} \approx \frac{\|\textsf{vec}(\mathbf{U}^\star-\mathbf{U})\|_\text{rms}}{\|\textsf{vec}(\mathbf{U})\|_\text{rms}},\]
so that $\sigma_{NR}$ can be interpreted as the relative error between the true and noisy data. Results from all trials are aggregated by computing the mean and median. Computations of Algorithm \ref{alg:WENDy-IRLS} are performed in MATLAB on a laptop with 40GB of RAM and an 8-core AMD Ryzen 7 pro 4750u processor. Computations of FSNLS are also performed in MATLAB but were run on the University of Colorado Boulder's Blanca Condo Cluster in a trivially parallel manner over a homogeneous CPU set each with Intel Xeon Gold 6130 processors and 24GB RAM. Due to the comparable speed of the two processors (1.7 GHz for AMD Ryzen 7, 2.1 GHz for Intel Xeon Gold) and the fact that each task required less than 5 GB working memory (well below the maximum allowable), we believe the walltime comparisons between WENDy and FSNLS below are fair.
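A sketch of the noise model used in these experiments is given below (the RMS norm is taken over all entries of the clean data matrix).
\begin{verbatim}
import numpy as np

def add_noise(U_true, noise_ratio, seed=0):
    """Sketch: additive i.i.d. Gaussian noise with sigma = noise_ratio * rms(U*)."""
    rng = np.random.default_rng(seed)
    sigma = noise_ratio * np.sqrt(np.mean(U_true ** 2))
    return U_true + sigma * rng.standard_normal(U_true.shape), sigma
\end{verbatim}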
As well as $\sigma_{NR}$, we vary the stepsize $\Delta t$ (keeping the final time $T$ fixed for each example), to demonstrate large and small sample behavior. For each example, a high-fidelity solution is obtained on a fine grid (512 timepoints for Logistic Growth, 1024 for all other examples), which is then subsampled by factors of 2 to obtain coarser datasets.
To evaluate the performance of WENDy, we record the relative coefficient error
\begin{equation}
E_2:= \frac{\|{\widehat{\wbf}}-\wbf^\star\|_2}{\|\wbf^\star\|_2}
\end{equation}
as well as the forward simulation error
\begin{equation}
E_\text{FS}:= \frac{\|\textsf{vec}(\mathbf{U}^\star-\widehat{\mathbf{U}})\|_2}{\|\textsf{vec}(\mathbf{U}^\star)\|_2}.
\end{equation}
The data $\widehat{\mathbf{U}}$ is obtained by simulating forward the model using the learned coefficients ${\widehat{\wbf}}$ from the exact initial conditions $u(0)$ using the same $\Delta t$ as the data. The RK45 algorithm is used for all forward simulations (unless otherwise specified) with relative and absolute tolerances of $10^{-12}$. Comparison with OLS solutions is displayed in bar graphs which give the drop in error from the OLS solution to the WENDy solution as a percentage of the error in the OLS solution.
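Both metrics can be computed as in the following sketch, where \texttt{rhs(t, u, w)} is a user-supplied right-hand side and the forward solve mirrors the tolerances stated above.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def error_metrics(w_hat, w_star, rhs, U_true, t_grid):
    """Sketch of the coefficient error E_2 and forward simulation error E_FS."""
    E2 = np.linalg.norm(w_hat - w_star) / np.linalg.norm(w_star)
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), U_true[0], t_eval=t_grid,
                    args=(w_hat,), method="RK45", rtol=1e-12, atol=1e-12)
    E_FS = np.linalg.norm(U_true - sol.y.T) / np.linalg.norm(U_true)
    return E2, E_FS
\end{verbatim}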
\begin{table}
\centering
\begin{tabular}{@{} |p{3cm}|l|p{6cm}|@{} }
\hline
Name & ODE & Parameters \\
\hline
Logistic Growth & \hspace{0.3cm}$\dot{u} = w_1u+w_2u^2$ & $T = 10$, $u(0) = 0.01$, \newline $\|\textsf{vec}(\mathbf{U}^\star)\|_\text{rms} = 0.66$,\newline $\wbf^\star = (1,-1)$ \\
\hline
Lotka-Volterra &
$\begin{dcases} \dot{u}_1 = w_1u_1+w_2u_1u_2 \\ \dot{u}_2 = w_3u_2 + w_4u_1u_2 \end{dcases}$ & $T = 5$, $u(0) = (1,1)$, \newline $\|\textsf{vec}(\mathbf{U}^\star)\|_\text{rms} = 6.8$,\newline $\wbf^\star = (3,-1,-6,1)$ \\
\hline
Fitzhugh-Nagumo &
$\begin{dcases} \dot{u}_1 = w_1u_1+w_2u_1^3 + w_3u_2 \\ \dot{u}_2 = w_4u_1 + w_5(1) + w_6u_2 \end{dcases}$ & $T = 25$, $u(0) = (0,0.1)$, \newline $\|\textsf{vec}(\mathbf{U}^\star)\|_\text{rms} = 0.68$,\newline $\wbf^\star = (3,-3,3,-1/3,17/150,1/15)$ \\
\hline
Hindmarsh-Rose & $\begin{dcases} \dot{u}_1 = w_1u_2+w_2u_1^3 + w_3u_1^2 + w_4 u_3 \\ \dot{u}_2 = w_5(1) + w_6u_1^2 + w_7u_2 \\ \dot{u}_3 = w_8u_1+w_9(1)+w_{10}u_3\end{dcases}$ & \vspace{-0.8cm}$T = 10$, $u(0) = (-1.31,-7.6,-0.2)$, \newline $\|\textsf{vec}(\mathbf{U}^\star)\|_\text{rms} = 2.8$,\newline $\wbf^\star = (10,-10,30,-10,10,-50,-10,$ \newline $ 0.04,0.0319,-0.01)$ \\
\hline
Protein Transduction Benchmark (PTB) & $\begin{dcases} \dot{u}_1 = w_1u_1+w_2u_1u_3 + w_3u_4 \\ \dot{u}_2 = w_4u_1 \\ \dot{u}_3 = w_5u_1u_3+w_6u_4+w_7\frac{u_5}{0.3 + u_5} \\ \dot{u}_4 = w_8 u_1u_3 + w_9u_4 \\ \dot{u}_5 = w_{10}u_4 + w_{11}\frac{u_5}{0.3 + u_5}\end{dcases}$ & \vspace{-1cm}$T = 25$, $u(0) = (1,0,1,0,1)$, \newline $\|\textsf{vec}(\mathbf{U}^\star)\|_\text{rms} = 0.81$,\newline $\wbf^\star = (-0.07,-0.6,0.35,0.07,-0.6,0.05,$ \newline $ 0.17,0.6,-0.35,0.3,-0.017)$ \\
\hline
\end{tabular}
\caption{Specifications of ODE examples. Note that $\|\textsf{vec}(\mathbf{U}^\star)\|_\text{rms}$ is included for reference in order to compute the noise standard deviation using $\sigma = \sigma_{NR}\|\textsf{vec}(\mathbf{U}^\star)\|_\text{rms}$.}
\end{table}
\subsection{Summary of results}
\subsubsection{Logistic Growth}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[trim={-20 0 30 15},clip,width=0.48\textwidth]{wendydata_Logistic_Growth.mat_sol} &
\includegraphics[trim={-20 0 30 20},clip,width=0.48\textwidth]{wendy_Logistic_Growth_bar} \\
\includegraphics[trim={0 0 30 20},clip,width=0.48\textwidth]{wendy_Logistic_Growth_E2}&
\includegraphics[trim={0 0 30 20},clip,width=0.48\textwidth]{wendy_Logistic_Growth_dd}
\end{tabular}
\caption{\textbf{Logistic Growth}: Estimation of parameters in the Logistic Growth model. The top left panel displays an example dataset, and the lower panels display parameter errors $E_2$ and forward simulation errors $E_{FS}$, with solid lines showing mean error and dashed lines showing median error. Top right: median percentage drop in $E_2$ from the OLS solution to the WENDy output (e.g.\ at $30\%$ noise and 512 timepoints WENDy results in an 85\% reduction in error).}
\label{Logistic_Growth_fig}
\end{figure}
The logistic growth model is the simplest nonlinear model for population growth, yet the $u^2$ nonlinearity generates a bias that affects the OLS solution more strongly as noise increases. Figure \ref{Logistic_Growth_fig} (top right) indicates that when $M\geq 256$ WENDy decreases the error by 50\%-85\% from the OLS solution for noise levels of 10\% or higher. WENDy also leads to a robust fit for smaller $M$, providing coefficient errors $E_2$ and forward simulation errors $E_\text{FS}$ that are both less than $6\%$ for data with only 64 points and $10\%$ noise (Figure \ref{Logistic_Growth_fig} (top left) displays an example dataset at this resolution).
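As a self-contained illustration of this test problem (using the specifications listed in the table above), the following sketch generates noisy logistic growth data and the corresponding feature matrix $\Theta(\mathbf{U})=[\mathbf{U},\mathbf{U}^2]$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Logistic growth test problem: du/dt = w1*u + w2*u^2, T = 10, u(0) = 0.01
rng = np.random.default_rng(0)
w_star = np.array([1.0, -1.0])
rhs = lambda t, u, w: w[0] * u + w[1] * u ** 2
t = np.linspace(0.0, 10.0, 256)
U_true = solve_ivp(rhs, (0.0, 10.0), [0.01], t_eval=t, args=(w_star,),
                   rtol=1e-12, atol=1e-12).y.T
sigma = 0.10 * np.sqrt(np.mean(U_true ** 2))          # 10% noise ratio
U_noisy = U_true + sigma * rng.standard_normal(U_true.shape)
Theta = np.hstack([U_noisy, U_noisy ** 2])            # features [u, u^2]
\end{verbatim}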
\subsubsection{Lotka-Volterra}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[trim={-20 0 25 15},clip,width=0.48\textwidth]{wendydata_Lotka_Volterra.mat_sol} &
\includegraphics[trim={-20 0 25 20},clip,width=0.48\textwidth]{wendy_Lotka_Volterra_bar} \\
\includegraphics[trim={0 0 25 20},clip,width=0.48\textwidth]{wendy_Lotka_Volterra_E2}&
\includegraphics[trim={0 0 25 20},clip,width=0.48\textwidth]{wendy_Lotka_Volterra_dd}
\end{tabular}
\caption{\textbf{Lotka-Volterra}: Estimation of parameters in the Lotka-Volterra model (for plot details see the Figure \ref{Logistic_Growth_fig} caption).}
\label{Lotka_Volterra_fig}
\end{figure}
The Lotka-Volterra model is a system of equations designed to capture predator-prey dynamics \cite{Lotka1978TheGoldenAgeofTheoreticalEcology1923-1940}. Each term in the model is unbiased when evaluated at noisy data (under the i.i.d.\ assumption), so that the first-order residual expansion utilized in WENDy is highly accurate. The bottom right plot in Figure \ref{Lotka_Volterra_fig} shows that even with $30\%$ noise and only 64 timepoints, the coefficient error is still less than $10\%$. WENDy reduces the error by $40\%$-$70\%$ on average from the OLS solution (top right panel).
\subsubsection{Fitzhugh-Nagumo}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[trim={-20 0 30 15},clip,width=0.48\textwidth]{wendydata_FitzHugh-Nagumo.mat_sol} &
\includegraphics[trim={-20 0 26 20},clip,width=0.48\textwidth]{wendy_FitzHugh-Nagumo_bar} \\
\includegraphics[trim={0 0 26 20},clip,width=0.48\textwidth]{wendy_FitzHugh-Nagumo_E2}&
\includegraphics[trim={0 0 26 20},clip,width=0.48\textwidth]{wendy_FitzHugh-Nagumo_dd}
\end{tabular}
\caption{\textbf{FitzHugh-Nagumo}: Estimation of parameters in the FitzHugh-Nagumo model (for plot details see the Figure \ref{Logistic_Growth_fig} caption).}
\label{FitzHugh-Nagumo_fig}
\end{figure}
The Fitzhugh-Nagumo equations are a simplified model for an excitable neuron \cite{FitzHugh1961BiophysJ}. The equations contain six fundamental terms with coefficients to be identified. The cubic nonlinearity implies that the first-order covariance expansion in WENDy becomes inaccurate at high levels of noise. Nevertheless, Figure \ref{FitzHugh-Nagumo_fig} (lower plots) shows that WENDy produces on average $6\%$ coefficient errors at $10\%$ noise with only 128 timepoints, and only $7\%$ forward simulation errors (see upper left plot for an example dataset at this resolution). In many cases WENDy reduces the error by over $50\%$ from the OLS solution, with $80\%$ reductions for high noise and $M=1024$ timepoints (top right panel). For sparse data (e.g.\ 64 timepoints), numerical integration errors prevent estimation of parameters with lower than $3\%$ error, as the solution is nearly discontinuous in this case (jumps between datapoints are ${\mathcal{O}}(1)$).
\subsubsection{Hindmarsh-Rose}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[trim={-20 0 30 15},clip,width=0.48\textwidth]{wendydata_Hindmarsh-Rose.mat_sol} &
\includegraphics[trim={-20 0 30 20},clip,width=0.48\textwidth]{wendy_Hindmarsh-Rose_bar} \\
\includegraphics[trim={0 0 30 20},clip,width=0.48\textwidth]{wendy_Hindmarsh-Rose_E2}&
\includegraphics[trim={0 0 30 20},clip,width=0.48\textwidth]{wendy_Hindmarsh-Rose_dd}
\end{tabular}
\caption{\textbf{Hindmarsh-Rose}: Estimation of parameters in the Hindmarsh-Rose model (for plot details see the Figure \ref{Logistic_Growth_fig} caption).}
\label{Hindmarsh-Rose_fig}
\end{figure}
The Hindmarsh-Rose model is used to emulate neuronal bursting and features 10 fundamental parameters which span 4 orders of magnitude \cite{HindmarshRose1984ProcRSocLondB}. Bursting behavior is observed in the first two solution components, while the third component represents slow neuronal adaptation with dynamics that are two orders of magnitude smaller in amplitude. Bursting produces steep gradients which render the dynamics numerically discontinuous at $M=128$ timepoints, while at $M=256$ there is at most one data point between peaks and troughs of bursts (see Figure \ref{Hindmarsh-Rose_fig}, upper left). Furthermore, cubic and quadratic nonlinearities lead to inaccuracies at high levels of noise. Thus, in a multitude of ways (multiple coefficient scales, multiple solution scales, steep gradients, higher-order nonlinearities, etc.) this is a challenging problem, yet an important one as it exhibits a canonical biological phenomenon. Figure \ref{Hindmarsh-Rose_fig} (lower left) shows that WENDy is robust to $2\%$ noise when $M\geq 256$, robust to $5\%$ noise when $M\geq 512$, and robust to $10\%$ noise when $M\geq 1024$. It should be noted that since our noise model applies additive noise of equal variance to each component, relatively small noise renders the slowly-varying third component $u_3$ unidentifiable (in fact, the noise ratio of only $\mathbf{U}^{(3)}$ exceeds $100\%$ when the total noise ratio is $10\%$). In the operable range of $1\%$-$2\%$ noise and $M\geq 256$, WENDy results in $70\%$-$90\%$ reductions in errors from the naive OLS solution, indicating that inclusion of the approximate covariance is highly beneficial under conditions which can be assumed to be experimentally relevant. We note that the forward simulation error here is not indicative of performance, as it will inevitably be large in all cases due to slight misalignment with bursts in the true data.
\subsubsection{Protein Transduction Benchmark (PTB)}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[trim={-20 0 30 15},clip,width=0.48\textwidth]{wendydata_biochemM1.mat_sol} &
\includegraphics[trim={-20 0 30 20},clip,width=0.48\textwidth]{wendy_biochemM1_bar} \\
\includegraphics[trim={0 0 30 20},clip,width=0.48\textwidth]{wendy_biochemM1_E2}&
\includegraphics[trim={0 0 30 20},clip,width=0.48\textwidth]{wendy_biochemM1_dd}
\end{tabular}
\caption{\textbf{Protein Transduction Benchmark (PTB)}: Estimation of parameters in the PTB model (for plot details see the Figure \ref{Logistic_Growth_fig} caption).}
\label{biochemM1_fig}
\end{figure}
The PTB model is a five-compartment protein transduction model identified in \cite{SchoeberlEichler-JonssonGillesEtAl2002NatBiotechnol} as a mechanism in the signaling cascade of epidermal growth factor (EGF). It was used in \cite{VyshemirskyGirolami2008Bioinformatics} to compare between four other models, and has since served as a benchmark for parameter estimation studies in biochemistry \cite{MacdonaldHusmeier2015BioinformaticsandBiomedicalEngineering,NiuRogersFilipponeEtAl2016Proc33rdIntConfMachLearn,KirkThorneStumpf2013CurrOpinBiotechnol}. The nonlinearities are quadratic and sigmoidal, the latter category producing nontrivial transformations of the additive noise. WENDy estimates the 11 parameters with reasonable accuracy when 256 or more timepoints are available (Figure \ref{biochemM1_fig}), which is sufficient to result in forward simulation errors often much less than $10\%$. The benefit of using WENDy over the OLS solution is most apparent for $M\geq 512$, where the coefficient errors are reduced by at least $70\%$, leading to forward simulation errors less than $10\%$, even at $20\%$ noise.
\subsection{Parameter uncertainties using learned covariance}
\label{sec:uncertainty}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[trim={30 0 45 5},clip,width=0.5\textwidth]{wendy_FitzHugh-Nagumo_CI}&
\includegraphics[trim={30 0 45 5},clip,width=0.5\textwidth]{wendy_FitzHugh-Nagumo_CI_2}
\end{tabular}
\caption{\textbf{FitzHugh-Nagumo:} Performance of WENDy for all estimated parameters. The true parameters are plotted in green, the purple lines indicate the average learned parameters over all experiments and the black lines represent the 95\% confidence intervals obtained from averaging the learned parameter covariance matrices $\mathbf{S}$. The $x$-axis indicates noise level and number of timepoints for each interval.}
\label{FHN_CI}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[trim={35 25 55 10},clip,width=0.5\textwidth]{wendy_Hindmarsh-Rose_CI}&
\includegraphics[trim={35 25 55 10},clip,width=0.5\textwidth]{wendy_Hindmarsh-Rose_CI_2}
\end{tabular}
\caption{\textbf{Hindmarsh-Rose:} Performance of WENDy for all estimated parameters. See Figure \ref{FHN_CI} for a description.}
\label{HR_CI}
\end{figure}
We now demonstrate how the WENDy methodology may be used to inform the user about uncertainties in the parameter estimates. Figures \ref{FHN_CI} and \ref{HR_CI} contain visualizations of confidence intervals around each parameter in the FitzHugh-Nagumo and Hindmarsh-Rose models computed from the diagonal elements of the learned parameter covariance matrix $\mathbf{S}$. Each combination of noise level and number of timepoints yields a 95\% confidence interval around the learned parameter\footnote{Scripts are available at \url{https://github.com/MathBioCU/WENDy} to generate similar plots for the other examples.}. As expected, increasing the number of timepoints and decreasing the noise level leads to more certainty in the learned parameters, while lower quality data leads to higher uncertainty. Uncertainty levels can be used to inform experimental protocols and even be propagated into predictions made from learned models. One could also examine the off-diagonal correlations in $\mathbf{S}$, which indicate how information flows between parameters. We aim to explore these directions in a future work.
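Componentwise confidence intervals of the type plotted in Figures \ref{FHN_CI} and \ref{HR_CI} can be formed directly from $\widehat{\mathbf{w}}$ and $\mathbf{S}$, as in the sketch below.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def confidence_intervals(w_hat, S, level=0.95):
    """Sketch: componentwise normal confidence intervals from diag(S)."""
    z = norm.ppf(0.5 + level / 2.0)          # ~1.96 for a 95% interval
    half = z * np.sqrt(np.diag(S))
    return np.column_stack([w_hat - half, w_hat + half])
\end{verbatim}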
\subsection{Comparison to nonlinear least squares}
We now briefly compare WENDy and forward solver-based nonlinear least squares (FSNLS) using walltime and relative coefficient error $E_2$ as criteria. For nonlinear least squares one must specify the initial conditions for the ODE solve (IC), a simulation method (SM), and an initial guess for the parameters ($\mathbf{w}^{(0)}$). Additionally, stopping tolerances for the optimization method must be specified (Levenberg-Marquardt is used throughout). Optimal choices for each of these hyperparameters are an ongoing area of research. We have optimized FSNLS in ways that are unrealistic in practice in order to demonstrate the advantages of WENDy even when FSNLS is performing somewhat optimally in both walltime and accuracy. Our hyperparameter selections are collected in Table \ref{NLShp} and discussed below.
To remove some sources of error from FSNLS, we use the true initial conditions $u(0)$ throughout, noting that these would not be available in practice. For the simulation method, we use state-of-the-art ODE solvers for each problem, namely for the stiff differential equations Fitzhugh-Nagumo and Hindmarsh-Rose we use MATLAB's \texttt{ode15s}, while for Lotka-Volterra and PTB we use \texttt{ode45}. In this way FSNLS is optimized for speed in each problem. We fix the relative and absolute tolerances of the solvers at $10^{-6}$ in order to prevent numerical errors from affecting results without asking for excessive computations. In practice, the ODE tolerance, as well as the solver, must be optimized to depend on the noise in the data, and the relation between simulation errors and parameter errors in FSNLS is an ongoing area of research \cite{NardiniBortz2019InverseProbl}.
Due to the non-convexity of the loss function in FSNLS, choosing a good initial guess $\mathbf{w}^{(0)}$ for the parameters $\wbf^\star$ is crucial. For comparison, we use two strategies. The first strategy (simply labeled FSNLS in Figures \ref{LV_NLS}-\ref{PTB_NLS}) consists of running FSNLS on five initial guesses sampled i.i.d.\ from a uniform distribution
\[\mathbf{w}^{(0)}\sim U(\wbf^\star,\mathbf{I}\pmb{\sigma})\]
and keeping only the best-performing result. Since the sign of coefficients greatly impacts the stability of the ODE, we take the standard deviations to be
\begin{equation}\label{ICstrat1}
\sigma_j = 0.25|\wbf^\star_j|
\end{equation}
so that initial guesses always have the correct sign but with approximately $25\%$ error from the true coefficients. (For cases like Hindmarsh-Rose, this implies that the small coefficients in $\wbf^\star$ are measured to high accuracy relative to the large coefficients.) In practice, one would not have the luxury of selecting the lowest-error result of five independent trials of FSNLS, however it may be possible to combine several results to boost performance.
For the second initial guess strategy we set $\mathbf{w}^{(0)} ={\widehat{\wbf}}$, the output from WENDy (labeled WENDy-FSNLS in Figures \ref{LV_NLS}-\ref{PTB_NLS}). In almost all cases, this results in an increase in accuracy, and in many cases, a decrease in walltime.
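A minimal FSNLS baseline of the kind compared against here can be sketched as follows; the solver and optimizer options are simplified stand-ins for the hyperparameters in Table \ref{NLShp}, and no safeguarding against failed integrations is included.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def fsnls(U_data, t_grid, u0, rhs, w0, rtol=1e-6, atol=1e-6):
    """Sketch of forward solver-based nonlinear least squares (FSNLS)."""
    def resid(w):
        sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), u0, t_eval=t_grid,
                        args=(w,), rtol=rtol, atol=atol)
        return (sol.y.T - U_data).ravel()
    # Levenberg-Marquardt, as in the text; w0 is either a perturbed guess
    # or the WENDy estimate (the WENDy-FSNLS strategy)
    return least_squares(resid, w0, method="lm", max_nfev=2000).x
\end{verbatim}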
\begin{table}
\centering
\begin{tabular}{|c|p{3cm}|p{2.5cm}|c|c|c|c|}
\hline
IC & Simulation method & $\mathbf{w}^{(0),\text{batch}}$ & $\mathbf{w}^{(0),\text{WENDy}}$ & \textsf{max.\ evals} & \textsf{max.\ iter} & \textsf{min.\ step}\\
\hline $u^\star(0)$ & L-V, PTB: \texttt{ode45} \newline FH-N, H-R: \texttt{ode15s}\newline (abs/rel tol=$10^{-6}$) & $\mathbf{w}^{(0)}\sim ~U(\wbf^\star,\pmb{\sigma})$,\newline best out of 5 & $\mathbf{w}^{(0)} = {\widehat{\wbf}}$ & 2000 & 500 & $10^{-8}$ \\
\hline
\end{tabular}
\caption{Hyperparameters for the FSNLS algorithm.}
\label{NLShp}
\end{table}
Figures \ref{LV_NLS}-\ref{PTB_NLS} display comparisons between FSNLS, WENDy-FSNLS, and WENDy for the Lotka-Volterra, FitzHugh-Nagumo, Hindmarsh-Rose, and PTB models. In general, we observe that WENDy provides significant decreases in walltime and modest to considerable increases in accuracy compared to the FSNLS solution. Due to the additive noise structure of the data, this is surprising because FSNLS corresponds to (for normally distributed measurement errors) a maximum likelihood estimation, while WENDy only provides a first order approximation to the statistical model. At lower resolution and higher noise (top right plot in Figures \ref{LV_NLS}-\ref{PTB_NLS}), all three methods are comparable in accuracy, and WENDy decreases the walltime by two orders of magnitude. In several cases, such as Lotka-Volterra (Figure \ref{LV_NLS}), the WENDy-FSNLS solution achieves a lower error than both WENDy and FSNLS, and improves on the speed of FSNLS. For Hindmarsh-Rose, even with high-resolution data and low noise (bottom left plot of Figure \ref{HR_NLS}), FSNLS is unable to provide an accurate solution ($E_2\approx 0.2$), while WENDy and WENDy-FSNLS result in $E_2\approx 0.005$. The clusters of FSNLS runs in Figure \ref{HR_NLS} with walltimes $\approx 10$ seconds correspond to local minima, a particular weakness of FSNLS, while the remaining runs have walltimes on the order of 20 minutes, compared to 10-30 seconds for WENDy. We see a similar trend in $E_2$ for the PTB model (Figure \ref{PTB_NLS}), with $E_2$ rarely dropping below $10\%$; however, in this case FSNLS runs in a more reasonable amount of time. The WENDy solution offers speed and error reductions. For high-resolution data ($M=1024$), WENDy runs in 40-50 seconds on PTB data due to the impact of $M$ and $d$, the number of ODE compartments (here $d=5$), on the computational complexity. It is possible to reduce this using a more sophisticated implementation (in particular, symbolic computations are used to take gradients of generic functions, which could be precomputed).
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Lotka_Volterra_NLS_3_3}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Lotka_Volterra_NLS_4_3}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Lotka_Volterra_NLS_5_3}\\
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Lotka_Volterra_NLS_3_1}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Lotka_Volterra_NLS_4_1}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Lotka_Volterra_NLS_5_1}
\end{tabular}
\caption{Comparison between FSNLS, WENDy-FSNLS, and WENDy for the Lotka-Volterra model. Left to right: noise levels $\{5\%,10\%,20\%\}$. Top: 256 timepoints, bottom: 1024 timepoints. We note that the $M=1024$ with $20\%$ noise figure on the lower right suggests that WENDy results in slightly higher errors than FSNLS. This is inconsistent with all other results in this work and appears to be an outlier. Understanding the source of this discrepancy is a topic for future work.}
\label{LV_NLS}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{FitzHugh-Nagumo_NLS_3_3}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{FitzHugh-Nagumo_NLS_4_3}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{FitzHugh-Nagumo_NLS_5_3}\\
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{FitzHugh-Nagumo_NLS_3_1}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{FitzHugh-Nagumo_NLS_4_1}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{FitzHugh-Nagumo_NLS_5_1}
\end{tabular}
\caption{Comparison between FSNLS, WENDy-FSNLS, and WENDy for the FitzHugh-Nagumo model. Left to right: noise levels $\{5\%,10\%,20\%\}$. Top: 256 timepoints, bottom: 1024 timepoints.}
\label{FHN_NLS}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Hindmarsh-Rose_NLS_1_2}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Hindmarsh-Rose_NLS_2_2}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Hindmarsh-Rose_NLS_3_2}\\
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Hindmarsh-Rose_NLS_1_1}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Hindmarsh-Rose_NLS_2_1}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{Hindmarsh-Rose_NLS_3_1}
\end{tabular}
\caption{Comparison between FSNLS, WENDy-FSNLS, and WENDy for the Hindmarsh-Rose model. Left to right: noise levels $\{1\%,2\%,5\%\}$. Top: 512 timepoints, bottom: 1024 timepoints.}
\label{HR_NLS}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}c@{}}
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{biochemM1_NLS_2_3}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{biochemM1_NLS_3_3}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{biochemM1_NLS_4_3}\\
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{biochemM1_NLS_2_1}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{biochemM1_NLS_3_1}&
\includegraphics[trim={0 0 25 0},clip,width=0.33\textwidth]{biochemM1_NLS_4_1}
\end{tabular}
\caption{Comparison between FSNLS, WENDy-FSNLS, and WENDy for the PTB model. Left to right: noise levels $\{2\%,5\%,10\%\}$. Top: 256 timepoints, bottom: 1024 timepoints.}
\label{PTB_NLS}
\end{figure}
\section{Concluding Remarks\label{sec:ConcDisc}}
In this work, we have proposed the Weak-form Estimation of Nonlinear Dynamics (WENDy) method for directly
estimating model parameters, without relying on forward solvers. The essential feature of the method involves converting the strong form representation of a
model to its weak form and then substituting in the data and solving a regression problem for the parameters. The method is robust to substantial amounts of noise, and in particular to levels frequently seen in biological experiments.
As mentioned above, the idea of data substitution followed by a least squares solve for the parameters has existed since at least the late 1960's \cite{Bellman1969MathematicalBiosciences}. However, FSNLS-based methods have proven highly successful and are ubiquitous in the parameter estimation literature and software. The disadvantage of FSNLS is that fitting using repeated forward solves comes at substantial computational cost and unclear dependence on the initial guess and hyperparameters in both the solver and the optimizer. Several researchers over the years have created direct parameter estimation methods (that do not rely on forward solves), but they have historically included some sort of data smoothing step. The primary issue with this is that projecting the data onto a spline basis (for example) represents the data using a basis which does not solve the original equation\footnote{This is a problem WENDy does not suffer from as there is no pre-smoothing of the data.}. Importantly, that error propagates to the error in the parameter estimates. However, we note that the WENDy framework introduced here is able to encapsulate previous works that incorporate smoothing, namely by including the smoothing operator in the covariance matrix $\widehat{\mathbf{C}}$.
The conversion to the weak form is essentially a weighted integral transform of the equation. As there is no projection onto a non-solution-based function basis, the weak-form approach bypasses the need to estimate the true solution and directly estimates the parameters.
The main message of this work is that weak-form-based direct parameter estimation offers intriguing advantages over FSNLS-based methods. In almost all the examples shown in this work and in particular for larger dimensional systems with high noise, the WENDy method is faster and more accurate by orders of magnitude. In rare cases where an FSNLS-based approach yields higher accuracy, WENDy can be used as an efficient method to identify a good initial guess for parameters.
\begin{acknowledgements}
The authors would like to thank Dr.~Michael Zager (Pfizer) and Dr.~Clay Thompson
(SAS) for offering insight into the state of the art parameter estimation
methods used in industry.
\end{acknowledgements}
\bibliographystyle{spmpsci}
\addcontentsline{toc}{section}{\refname}
\section{Expansion of the WENDy residual}
The WENDy residual can be decomposed as follows:
\begin{align*}
\mathbf{R} :&= \mathbf{G}\mathbf{w}-\mathbf{b}\\
&=\mathbf{G}(\mathbf{w}-\mathbf{w}^\star)+(\mathbf{G}\mathbf{w}^\star-\mathbf{G}^\star\mathbf{w}^\star) + (\mathbf{G}^\star\mathbf{w}^\star-\mathbf{b})\\
&=\underbrace{\boxed{\mathbf{G}(\mathbf{w}-\mathbf{w}^\star)}}_{=:\mathbf{R}_0}+\underbrace{\mathbf{V}(\mathbf{F}(\mathbf{U})-\mathbf{F}(\mathbf{U}^\star))}_{=:E_F} + \underbrace{\mathbf{V}\mathbf{F}(\mathbf{U}^\star) + \bigdot{\mathbf{V}}\mathbf{U}^\star}_{=:E_\text{int}} + \underbrace{\bigdot{\mathbf{V}}\epsilon}_{=:E_0}
\end{align*}
Each term is interpretable and may be useful for informing hyperparameter choices as well as building intuition.
\begin{itemize}
\item[$\mathbf{R}_0$:] Without {\it noise} or {\it integration errors}, we have $\mathbf{R} = \mathbf{R}_0$ and ${\widehat{\wbf}} = \mathbf{w}^\star$ minimizes $\nrm{\mathbf{R}}^2$.
\item[$E_0$:] Without {\it errors in variables} (i.e.\ $\mathbf{G}$ constructed from $\mathbf{U}^\star$) or {\it integration errors}, we have $\mathbf{R} = \mathbf{R}_0 + E_0$, and ${\widehat{\wbf}} = \wbf^\star$ maximizes the log-likelihood
\[L = -\nrm{\mathbf{C}^{-1/2}\mathbf{R}}_2^2, \qquad \mathbf{C} = \bigdot{\mathbf{V}}\bigdot{\mathbf{V}}^T.\]
\item[$E_\text{int}$:] Without {\it noise} the residual is $\mathbf{R} = \mathbf{R}_0 + E_\text{int}$, hence $E_\text{int}$ persists regardless of the noise level and contributes to errors in ${\widehat{\wbf}}$: the noiseless coefficient error is $\nrm{{\widehat{\wbf}}-\wbf^\star}_2 = \nrm{\mathbf{G}^\dagger E_\text{int}}_2$, with $\mathbf{G}$ computed using $\mathbf{U}^\star$.
By definition we have
\[(E_\text{int})_k := \Delta t\sum_{i=1}^M \left(\phi_k(t_i)\mathbf{F}(u(t_i))+\bigdot{\phi}_k(t_i) u(t_i)\right) = \Delta t\sum_{i=1}^M \bigdot{(\phi_k u)}(t_i) \]
$E_\text{int}$ has several exact representations. First, with $P_p = B_p(x-\lfloor x\rfloor)$ being the $p$th periodized Bernoulli polynomial, we have
\[(E_\text{int})_k = \frac{(-1)^p}{p!}\Delta t^p\int_{\supp{\phi_k}} (\phi_k u)^{(p+1)}P_p\left(\frac{t-\mu_k}{\Delta t}\right)\, dt\]
where $\mu_k$ is the midpoint of $\supp{\phi_k}$. This holds for all $p$ such that $\phi_k u$ has $p+1$ weak derivatives. In particular, we have for $p=1$,
\[(E_\text{int})_k = \Delta t\int_{\supp{\phi_k}} \ddot{(\phi_k u)} P_1\left(\frac{t-\mu_k}{\Delta t}\right)\, dt.\]
where $P_1 = x-\lfloor x \rfloor -0.5$ is a sawtooth wave. This can be used to derive the Fourier representation of the error,
\[(E_\text{int})_k = \frac{2\pi i}{\sqrt{M\Delta t}}\sum_{n\in \mathbb{Z}} n {\mathcal{F}}_{[0,M\Delta t]}[\phi_k u](Mn)\]
where ${\mathcal{F}}_I[f](\xi)$ is the $\xi$th Fourier coefficient of $f$ over the interval $I$ of length $|I|$:
\[{\mathcal{F}}_I[f](\xi) := \frac{1}{\sqrt{|I|}}\int_Ie^{-\frac{2\pi i}{|I|}\xi t} f(t)\, dt.\]
Using that $\supp{\phi_k}\subset [0,M\Delta t]$, we can also represent this in terms of the {\it local} Fourier transform:
\[(E_\text{int})_k = \frac{2\pi i}{\sqrt{2m\Delta t}}\sum_{n\in \mathbb{Z}} \left(\frac{2m}{M}\right)n {\mathcal{F}}_{\supp{\phi_k}}[\phi_k u](2mn)\]
where $m$ is the number of points in $\supp{\phi_k}\cap \mathbf{t}$. In either the local or global representation, the error only involves frequencies beyond the Nyquist frequency; however, one may be able to extrapolate to ${\mathcal{F}}_{\supp{\phi_k}}[\phi_k u](2mn)$ using frequencies in $[-m,m]$, or bound the error of coarse approximations using frequency information contained in ${\mathcal{F}}_{[0,M\Delta t]}[\phi_k u]$.
\item[$E_F$:] This is the integrated vector field error. It is the only error term that {\it depends on the model}, since $E_0$ depends only on $\mathbf{V}$ and the noise, and $E_\text{int}$ depends only on $u$, $\phi_k$, and $\Delta t$. In this way, $E_F$ is a ``linking term'', correlating the effects of noise, numerical integration, and model uncertainty. The way WENDy approximates $E_F$ is to use a Taylor expansion around the true data:
\[E_F = -\mathbf{V}\nabla \mathbf{F}(\mathbf{U})\epsilon + \frac{1}{2}\mathbf{V}\epsilon^T\mathbf{H}\mathbf{F}(\mathbf{U})\epsilon + \cdots\]
where the Hessian term for each vector component $\mathbf{F}_i$ is defined (with summation over the repeated indices $j,k$)
\[\frac{1}{2}\epsilon^T\mathbf{H}\mathbf{F}_i(\mathbf{U})\epsilon := \frac{1}{2}\frac{\partial^2\mathbf{F}_i}{\partial x_j\,\partial x_k}(\mathbf{U})\,\epsilon_j\epsilon_k.\]
\end{itemize}
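As a quick numerical illustration of the $E_\text{int}$ term (a standalone sketch, not part of the estimator itself): since $\phi_k$ is compactly supported inside the time domain, the exact integral $\int \bigdot{(\phi_k u)}\,dt$ vanishes, so the trapezoid-style sum $\Delta t\sum_i \bigdot{(\phi_k u)}(t_i)$ directly measures the quadrature error. The Python snippet below uses an illustrative state $u(t)=\sin t$ and a polynomial bump test function; all names and numerical choices are assumptions made only for this illustration.
\begin{verbatim}
import numpy as np

# Illustrative sketch: quadrature error (E_int)_k for u(t) = sin(t) and a
# polynomial bump test function phi(t) = (t-a)^p (b-t)^p on [a, b]
# (assumed choices; any smooth compactly supported phi works).
a, b, p = 1.0, 3.0, 7
M = 200
t = np.linspace(0.0, 2 * np.pi, M)
dt = t[1] - t[0]

u = np.sin(t)                      # state
du = np.cos(t)                     # u' = F(u) for this toy example

phi = np.where((t > a) & (t < b), (t - a)**p * (b - t)**p, 0.0)
dphi = np.where((t > a) & (t < b),
                p * (t - a)**(p - 1) * (b - t)**p
                - p * (t - a)**p * (b - t)**(p - 1), 0.0)

# (E_int)_k = dt * sum_i d/dt(phi*u)(t_i); the exact integral is zero
# because phi vanishes outside (a, b).
E_int = dt * np.sum(phi * du + dphi * u)
print(f"integration error: {E_int:.3e}")
\end{verbatim}
The printed error shrinks rapidly as $M$ grows, consistent with the Bernoulli-polynomial representation above.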
Isolating the linear term in $E_F$, we get that
\[\mathbf{R} = \mathbf{R}_0 + {\mathcal{L}}\epsilon + E_\text{int} + \mathbf{V}{\mathcal{N}}(\epsilon)\]
where the linear noise operator ${\mathcal{L}}$ is defined
\[{\mathcal{L}}\epsilon := (\bigdot{\mathbf{V}}+\mathbf{V}\nabla \mathbf{F})\epsilon\]
and ${\mathcal{N}}(\epsilon)$ is the nonlinear transformation of the noise:
\[{\mathcal{N}}(\epsilon) := \frac{1}{2}\epsilon^T\mathbf{H}\mathbf{F}(\mathbf{U})\epsilon + \cdots,\]
which we will come back to below. We thus have that a leading-order approximation to $\wbf^\star$ is
\begin{equation}\label{WENDyest}
{\widehat{\wbf}}:=\text{argmin}_\mathbf{w} \nrm{\mathbf{C}^{-1/2}\mathbf{R}}_2^2, \qquad \mathbf{C} = {\mathcal{L}}\CalL^T
\end{equation}
which approaches the maximum likelihood estimate of $\wbf^\star$ as $E_\text{int},\,\mathbf{V}{\mathcal{N}}(\epsilon)\ll {\mathcal{L}}\epsilon$.

If $E_\text{int}$ can be neglected, which is possible in most cases of interest, we may attempt to choose the radius $m$ of $\phi_k$ by examining ${\mathcal{N}}(\epsilon)$. To ${\mathcal{O}}(\sigma^2)$, we have
\begin{equation}\label{lobias}
\mathbb{E}\left[{\mathcal{N}}_i(\epsilon)\right]=\frac{\sigma^2}{2}\Delta \mathbf{F}_i(\mathbf{U})
\end{equation}
which constitutes the leading-order bias term (along with $E_\text{int}$). We can also compute the covariance
\[ (\Sigma_{\mathcal{N}})_{ij}:= \text{Cov}\left[{\mathcal{N}}_i(\epsilon),{\mathcal{N}}_j(\epsilon)\right]=\frac{\sigma^4}{4}\text{Tr}\Big(\mathbf{H} \mathbf{F}_i(\mathbf{U}) \mathbf{H} \mathbf{F}_j(\mathbf{U})^T+\mathbf{H} \mathbf{F}_i(\mathbf{U}) \mathbf{H} \mathbf{F}_j(\mathbf{U})\Big),\]
which in particular reveals the variances (in terms of the Frobenius norm)
\[\mathbb{V}[{\mathcal{N}}_i(\epsilon)]=\frac{\sigma^4}{2}\nrm{\mathbf{H} \mathbf{F}_i(\mathbf{U})}_F^2.\]
Since the mean and variance are both proportional to second derivatives of $\mathbf{F}_i$, minimizing the bias term \eqref{lobias} will in many cases also minimize the variance. In this way, choosing $m$ such that $\mathbf{V}|\Delta \mathbf{F}_i(\mathbf{U})|$ is minimized will reduce bias (and variance) of the estimator \eqref{WENDyest}.
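To make the estimator \eqref{WENDyest} concrete, the following standalone sketch computes the weighted least squares solution, assuming the residual takes the linear form $\mathbf{R}(\mathbf{w})=\mathbf{G}\mathbf{w}-\mathbf{b}$ and that the covariance factor ${\mathcal{L}}$ has already been assembled; the variable names and toy data are illustrative assumptions, not a reference implementation.
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def wendy_gls(G, b, L):
    """Weighted least squares w_hat = argmin ||C^{-1/2}(G w - b)||_2^2,
    with C = L @ L.T (illustrative sketch of the estimator above)."""
    C = L @ L.T
    cf = cho_factor(C)
    Ci_G = cho_solve(cf, G)          # C^{-1} G
    Ci_b = cho_solve(cf, b)          # C^{-1} b
    # Normal equations of the weighted problem:
    # (G^T C^{-1} G) w = G^T C^{-1} b
    return np.linalg.solve(G.T @ Ci_G, G.T @ Ci_b)

# Toy usage with random matrices of compatible sizes
# (K weak-form equations, J unknown coefficients):
rng = np.random.default_rng(0)
K, J = 50, 3
G = rng.standard_normal((K, J))
w_true = np.array([1.0, -0.5, 2.0])
L = 0.1 * rng.standard_normal((K, K))
b = G @ w_true + L @ rng.standard_normal(K)   # noise with covariance L L^T
print(wendy_gls(G, b, L))                     # approximately w_true
\end{verbatim}
In the full method $\mathbf{C}$ depends on the current estimate through $\nabla\mathbf{F}$, so this weighted solve would typically be iterated to convergence; the sketch shows a single solve.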
\section{Illustrating Examples\label{sec:Illustrating-Examples}}
Things to include later:
\begin{itemize}
\item $p$-value reduction from OLS to WENDy
\item Confidence intervals based on the covariance of ${\widehat{\wbf}}$
\item Cramer-Rao bounds
\end{itemize}
In what follows, we collect candidate test problems and data sets.
\subsection{Ordinary Differential Equations}
Choose $r$ greater than or equal to the maximum absolute acceleration.
\begin{itemize}
\item \cite{SimpsonBrowningWarneEtAl2022JTheorBiol} Logistic (fit $r$ and
$\kappa$)
\[
\dot{x}=rx(1-\frac{x}{\kappa})
\]
Gompertz (fit $r$, $K$)
\[
\dot{x}=r\log\left(\frac{K}{x(t)}\right)x(t)
\]
\item \cite{Dattner2015Biom} SI model and data from a measles outbreak,
neuronal behavior (zebra fish)
\item \cite{WenkAbbatiOsborneEtAl2019arXiv190206278} \cite{WenkAbbatiOsborneEtAl2020AAAI}
Lotka-Volterra,
\begin{align*}
\dot{x} & =\alpha x-\beta xy\\
\dot{y} & =\delta xy-\gamma y
\end{align*}
\item FitzHugh-Nagumo,
\begin{align*}
\dot{v} & =v-\frac{v^{3}}{3}-w+RI_{\text{ext}}\\
\tau\dot{w} & =v+a-bw
\end{align*}
\begin{itemize}
\item true parameters $(\theta_{0},\theta_{1},\theta_{2})=(\alpha,\beta,\gamma)=(0.2,\,0.2,\,0.3)$
\item initial state $(-1.0,\ 1.0)$
\item tspan $=[0,\,10]$
\item simulation step $\mathrm{d}t_{\text{sim}}=0.01$
\item observation step $\mathrm{d}t_{\text{obs}}=0.5$
\item
\begin{align*}
\dot{y}_{0} & =\theta_{2}(y_{0}-y_{0}^{3}/3+y_{1})\\
\dot{y}_{1} & =-\frac{1}{\theta_{2}}\left(y_{0}-\theta_{0}+\theta_{1}y_{1}\right)
\end{align*}
\begin{align*}
\dot{v} & =\gamma\left(v-\frac{v^{3}}{3}+w\right)\\
\dot{w} & =-\frac{1}{\gamma}\left(v-\alpha+\beta w\right)
\end{align*}
\begin{align*}
\dot{x}_{1} & =\gamma v+\gamma w-\frac{\gamma}{3}v^{3}\\
\dot{x}_{2} & =-\frac{1}{\gamma}v-\frac{\beta}{\gamma}w+\frac{\alpha}{\gamma}
\end{align*}
\end{itemize}
\item Protein Transduction
\begin{align*}
\dot{S} & =-\theta_{1}S-\theta_{2}SR+\theta_{3}R_{S}\\
\dot{dS} & =\theta_{1}S\\
\dot{R} & =-\theta_{2}SR+\theta_{3}R_{S}+\theta_{5}\frac{R_{pp}}{\theta_{6}+R_{pp}}\\
\dot{R}_{S} & =\theta_{2}SR-\theta_{3}R_{S}-\theta_{4}R_{S}\\
\dot{R}_{pp} & =\theta_{4}R_{S}-\theta_{5}\frac{R_{pp}}{\theta_{6}+R_{pp}}
\end{align*}
\begin{itemize}
\item true $\theta=(0.07,0.6,0.05,0.3,0.017,0.3)$
\item IC $x_{0}=[1,0,1,0,0]$
\item timepoints $t=[0,1,2,4,5,7,10,15,20,30,40,50,60,80,100]$
\end{itemize}
\item \cite{WenkGorbachGotovosEtAl2019ProcTwenty-SecondIntConfArtifIntellStat}
LV from \cite{Lotka1978TheGoldenAgeofTheoreticalEcology1923-1940},
PST from \cite{VyshemirskyGirolami2008Bioinformatics}
\item Lotka Volterra possibly with forcing \cite{SchmidtKramerHennig2021AdvNeuralInfProcessSyst}
\item \cite{LinialRavidEytanEtAl2021ProcConfHealthInferenceLearn} cardiac
model from \cite{ZenkerRubinClermont2005PLoSComputBiol}
\begin{itemize}
\item Zenker model: not used here, as it is too involved for now.
\[
\]
\item true
\begin{align*}
P_{0_{LV}} & =2.03\text{mmHG}\\
V_{ED_{0}} & =7.14\text{ml}\\
k_{E_{LV}} & =0.066\text{ml}^{-1}
\end{align*}
\item other pars
\[
\]
\end{itemize}
\item \cite{KerstingKramerSchieggEtAl2020Proc37thIntConfMachLearn} Glucose
uptake in yeast (GUiY) \cite{SchillingsSunnakerStellingEtAl2015PLoSComputBiol}
\item \cite{YaariDattner2019JOSS} biochemistry example from Chapter 2, p.~54 of
\emph{Computational Analysis of Biochemical Systems: A Practical Guide for
Biochemists and Molecular Biologists} (Eberhard O. Voit, 2000)
\item \cite{DattnerMillerPetrenkoEtAl2017JRSocInterface} application of
Dattner's direct integral approach. This work experimentally monitored the
temporal dynamics of the predatory bacterium \emph{Bdellovibrio bacteriovorus}
and its prey, the bacterium \emph{Burkholderia stabilis}, in a structured
habitat consisting of sand under various regimes of wetness.
\begin{itemize}
\item model
\begin{align*}
\dot{P} & =ksC-dP\\
\dot{C} & =a(N-r)P-sC\\
\dot{N} & =-a(N-r)P
\end{align*}
\item true
\begin{align*}
k & =5\\
s & =0.05\\
d & =0.02\\
a & =4\times10^{-9}\\
r & =3\times10^{5}
\end{align*}
\item IC
\begin{align*}
P(0) & =10^{6}\\
C(0) & =0\\
N(0) & =10^{8}
\end{align*}
\item sampling times
\[
\left\{ t_{i}\right\} _{i=1}^{8}=[0,8,16,24,36,48,96,169]
\]
\[
\left\{ t_{i}\right\} _{i=1}^{16}=[0,4,8,12,16,20,24,28,32,36,40,44,48,72,96,168]
\]
\item measurement of only $P$ and $N$ with artificial error variance
\begin{align*}
\sigma_{P} & =0.01\times\overline{P}\\
\sigma_{N} & =0.01\times\overline{N}
\end{align*}
\end{itemize}
\item \cite{DattnerShipVoit2020Complexity} exploits separability in the
fitting and uses the integral form; examples include SIR, LV, and generalized mass action models
\item Mackey-Glass delay differential equation (DDE) for immune system dynamics
\item SIR
\item SEIR \cite{KramerHennig2021AdvNeuralInfProcessSyst}
\item SIRD \cite{SchmidtKramerHennig2021AdvNeuralInfProcessSyst}
\item \cite{YangWongKou2021ProcNatlAcadSciUSA} Oscillations of Hes1 mRNA
(messenger ribonucleic acid) (M) and Hes1 protein (P) levels in cultured
cells, where it is postulated that an Hes1-interacting (H) factor
contributes to a stable oscillation, a manifestation of biological
rhythm. The ODEs of the three-component system $X=(P,M,H)$ are
\begin{align*}
\dot{P} & =-aPH+bM-cP\\
\dot{M} & =-dM+\frac{e}{1+P^{2}}\\
\dot{H} & =-aPH+\frac{f}{1+P^{2}}-gH
\end{align*}
where $\theta=(a,b,c,d,e,f,g)$.
\begin{itemize}
\item If we only observe $P$ and $M$ but not $H$, we can proceed as follows.
Since the equation for $H$ is
\[
\dot{H}=(-aP-g)H+\frac{f}{1+P^{2}}\,,
\]
we could possibly multiply by the test function and integrate by parts
to get
\[
\int\phi\dot{H}dt=-\int\phi(aP+g)Hdt+\int\phi\frac{f}{1+P^{2}}dt
\]
and then integrate by parts with the compactly supported $\phi$ to
obtain
\[
\int\left(\phi(aP+g)-\dot{\phi}\right)H\,dt=\int\phi\frac{f}{1+P^{2}}\,dt
\]
and the discretization over the time domain allows a linear solve
for $H$.
\item This construction applies more generally whenever there are unobserved
compartments; a numerical sketch of the resulting linear solve is given after this list.
\end{itemize}
\end{itemize}
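A minimal sketch of the linear solve for the unobserved compartment discussed above (purely illustrative: the quadrature rule, the choice of basis for $H$, and the assumption that $a$, $g$, and $f$ are available at this step are all placeholder assumptions):
\begin{verbatim}
import numpy as np

def solve_unobserved_H(t, P, a, g, f, phis, dphis, psis):
    """Sketch of the linear solve for the unobserved compartment H from
        int (phi_k*(a*P + g) - dphi_k) * H dt = int phi_k * f/(1 + P^2) dt,
    using a simple Riemann-sum quadrature (assumption) and expanding H in a
    small basis psis (shape (J, M)) so that the system is overdetermined."""
    dt = t[1] - t[0]
    W = phis * (a * P + g) - dphis        # (K, M) weights multiplying H(t_i)
    A = dt * (W @ psis.T)                 # (K, J) coefficients of the H-basis
    rhs = dt * np.sum(phis * f / (1.0 + P**2), axis=1)
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return psis.T @ c                     # H evaluated on the grid t
\end{verbatim}
Here \texttt{phis} and \texttt{dphis} hold the test functions and their time derivatives sampled on the grid \texttt{t}; given current values of $a$, $g$, $f$, the recovered $H$ could then be used in place of the unobserved data.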
\subsection{Partial Differential Equations}
\begin{itemize}
\item Reaction Diffusion
\end{itemize}
\section{Conclusions and Discussion\label{sec:ConcDisc}}
\begin{itemize}
\item We did not consider the case where parameters enter nonlinearly, though that
is certainly possible, e.g., via a Box-Cox-type transformation. For example, the Richards model
\[
\dot{x}=\alpha\left(1-\left(\frac{x}{K}\right)^{\nu}\right)x
\]
in which we try to identify $\nu$.
\end{itemize}
\begin{acknowledgements}
The authors would like to thank Dr.\,Michael Zager (Pfizer) and Dr.\,Clay Thompson
(SAS) for offering insight into the state-of-the-art parameter estimation
methods used in industry.
\end{acknowledgements}
\bibliographystyle{spmpsci}
\addcontentsline{toc}{section}{\refname}
|
{
"arxiv_id": "2302.13220",
"language": "en",
"timestamp": "2023-02-28T02:13:14",
"url": "https://arxiv.org/abs/2302.13220",
"yymm": "2302"
} | \subsubsection*{\bibname}}
\bibliographystyle{apalike}
\newtheorem{theorem}{Theorem}
\newtheorem{definition}{Definition}
\newtheorem{lemma}{Lemma}
\def\perp\!\!\!\perp{\perp\!\!\!\perp}
\begin{document}
\runningtitle{Combining Graphical and Algebraic Approaches for Parameter Identification in Latent Variable SEM}
\twocolumn[
\aistatstitle{Combining Graphical and Algebraic Approaches for \\ Parameter Identification in Latent Variable Structural Equation Models}
\aistatsauthor{ Ankur Ankan \And Inge Wortel \And Kenneth A. Bollen \And Johannes Textor}
\aistatsaddress{ Radboud University \And Radboud University \And University of North Carolina \\ at Chapel Hill \And Radboud University} ]
\begin{abstract}
Measurement error is ubiquitous in many variables -- from blood
pressure recordings in physiology to intelligence measures in
psychology. Structural equation models (SEMs) account for the process
of measurement by explicitly distinguishing between \emph{latent}
variables and their measurement \emph{indicators}. Users often fit
entire SEMs to data, but this can fail if some model parameters are not
identified. The model-implied instrumental variables (MIIVs) approach
is a more flexible alternative that can estimate subsets of model
parameters in identified equations. Numerous methods to identify
individual parameters also exist in the field of graphical models (such
as DAGs), but many of these do not account for measurement effects.
Here, we take the concept of ``latent-to-observed'' (L2O)
transformation from the MIIV approach and develop an equivalent
graphical L2O transformation that allows applying existing graphical
criteria to latent parameters in SEMs. We combine L2O transformation
with graphical instrumental variable criteria to obtain an efficient
algorithm for non-iterative parameter identification in SEMs with
latent variables. We prove that this graphical L2O transformation with
the instrumental set criterion is equivalent to the state-of-the-art
MIIV approach for SEMs, and show that it can lead to novel
identification strategies when combined with other graphical criteria.
\end{abstract}
\section{\MakeUppercase{Introduction}}
Graphical models such as directed acyclic graphs (DAGs) are currently used in
many disciplines for causal inference from observational studies. However, the
variables on the causal pathways modelled are often different from those being
measured. Imperfect measures cover a broad range of sciences, including health
and medicine (e.g., blood pressure, oxygen level), environmental sciences
(e.g., measures of pollution exposure of individuals), and the social (e.g.,
measures of socioeconomic status) and behavioral sciences (e.g., substance
abuse).
Many DAG models do not differentiate between the variables on the causal
pathways and their actual measurements in a dataset \citep{Tennant2019}. While
this omission is defensible when the causal variables can be measured reliably
(e.g., age), it becomes problematic when the link
between a variable and its measurement is more
complex. For example, graphical models employed in fields like Psychology or
Education Research often take the form of \emph{latent variable structural
equation models} (LVSEMs, Figure~\ref{fig:example_sem}; \citet{bollen1989}),
which combine a \emph{latent level} of unobserved variables and their
hypothesized causal links with a \emph{measurement level} of their observed
indicators (e.g., responses to questionnaire items). This structure is so
common that LVSEMs are sometimes simply referred to as SEMs. In contrast,
models that do not differentiate between causal factors and their measurements
are traditionally called \emph{simultaneous equations} or \emph{path
models}\footnote{Path models can be viewed as
LVSEMs with all measurement error set to 0; some work on path models,
importantly by Sewall Wright himself, does incorporate latent variables. }.
Once a model has been specified,
estimation can be performed in different ways. SEM parameters are
often estimated all at once by iteratively minimizing some difference measure
between the observed and the model-implied covariance matrices. However,
this ``global'' approach has some pitfalls. First, all model parameters must be
algebraically identifiable for a unique minimum to exist; if only a single
model parameter is not identifiable, the entire fitting procedure may
not converge \citep{boomsma1985nonconvergence} or provide meaningless results.
Second, local model specification errors can propagate through the entire model
\citep{bollen2007latent}. Alternatively, \citet{bollen1996alternative}
introduced a ``local'', equation-wise approach for SEM parameter identification termed
``model-implied instrumental variables'' (MIIVs), which is non-iterative and
applicable even to models where not all parameters are simultaneously
identifiable. MIIV-based SEM identification is a mature approach with a
well-developed underlying theory as well as implementations in multiple
languages, including R \citep{fisher2019miivsem}.
\begin{figure}
\centering
\includegraphics[page=1]{figures-inge.pdf}
\caption{SEM based on the \emph{Industrialization and
Political Democracy} model \citep{bollen1989} with latent
variables $ l_1 $ (industrialization), and $ l_2 $ (political democracy).
The model contains 3 indicators for $ l_1 $: (1) gross
national product ($ y_1 $), (2) energy consumption
($ y_2 $), and (3) labor force in industry ($ y_3 $), and 4
indicators for $ l_2 $: (1) press freedom rating ($y_4$), (2)
political opposition freedom ($y_5$), (3) election fairness
($y_6$), and (4) legislature effectiveness ($y_7$). $
\lambda_{11} \dots \lambda_{13}, \lambda_{24} \dots
\lambda_{27}, \text{ and } \beta_{11} $ are the path
coefficients. $ \epsilon_1, \dots, \epsilon_7, \text{ and }
\zeta_1 $ represent noise/errors.}
\label{fig:example_sem}
\end{figure}
Of all the model parameters that are identifiable in principle, any given
estimator (such as the MIIV-based approach) can typically only identify
parameters in identified equations and identified parameters in underidentified
equations. Different identification methods are therefore complementary and can
allow more model parameters to be estimated. Having a choice of such
methods can help
users to keep the stages of \emph{specification} and
\emph{estimation} separated.
For example, a researcher who only has access to global
identification methodology might be tempted to impose model restrictions just
to ``get a model identified'' and not because there is a theoretical rationale
for the restrictions imposed.
With more complementary methods to choose from, researchers can
instead base model specification on substantive theory and causal assumptions.
The development of parameter identification methodology has received intense
attention in the graphical modeling field. The most general identification
algorithm is Pearl's do-calculus, which provides a complete solution in
non-parametric models
\citep{DBLP:conf/uai/HuangV06,DBLP:conf/aaai/ShpitserP06}. The back-door and
front-door criteria provide more convenient solutions in special cases
\citep{pearl2009causality}. While there is no practical general algorithm to
decide identifiability for models that are linear in their parameters, there has been a flurry of
work on graphical criteria for this case, such as instrumental sets
\citep{BritoP02}, the half-trek criterion \citep{Foygel2012}, and auxiliary
variables \citep{ChenKB17}. Unfortunately, these methods were all developed for
the acyclic directed mixed graph (ADMG) framework and require at least the
variables connected to the target parameter to be observed -- which is rarely
the case in SEMs. Likewise, many criteria in graphical models are based on
``separating'' certain paths by conditioning on variables, whereas no such
conditioning-based criteria exist for SEMs.
The present paper aims to make identification
methods from the graphical model literature available to the SEM field.
We offer the following contributions:
\begin{itemize}
\item We note that \citet{bollen1996alternative}'s latent-to-observed (L2O)
transformation that transforms a latent variable SEM into a model with
only observed variables can be used more generally in models containing
arbitrary mixtures of latent and observed variables
(Section~\ref{sec:l2o}).
\item We present a graphical equivalent of L2O transformation that allows us to
apply known graphical criteria to SEMs (Section~\ref{sec:graphical_l2o}).
\item We prove that Bollen's MIIV approach \citep{bollen1996alternative,bollen2004automating,Bollen2022} is
equivalent to a graphical L2O transformation followed by the application of
the graphical instrumental set criterion (\citet{BritoP02}; Section~\ref{sec:equiv}).
\item We give examples where the graphical L2O transformation approach
can identify more parameters compared to the MIIV approach
implemented in the R package MIIVsem (\citet{fisher2019miivsem}; Section~\ref{sec:examples}).
\end{itemize}
Thus, by combining the L2O transformation idea from the SEM literature with
identification criteria from the graphical models field, we bridge
these two fields -- hopefully to the benefit of both.
\section{\MakeUppercase{Background}}
\label{sec:background}
In this section, we give a brief background on basic graphical terminology
and define SEMs.
\subsection{Basic Terminology}
We denote variables using lowercase letters ($x_i$), sets and vectors of
variables using uppercase letters ($X$), and matrices using boldface
($\bm{\Lambda}$). We write the cardinality of a set $V$ as $|V|$, and the rank of
a matrix $ \bm{\Lambda} $ as $ \textrm{rk}(\bm{\Lambda}) $. A \emph{mixed
graph} (or simply \emph{graph}) ${\cal G}=(V,A)$ is defined by sets of
variables (nodes) $V=\{x_1,\ldots,x_n\}$ and arrows $A$, where arrows can be
directed ($x_i \to x_j$) or bi-directed $(x_i \leftrightarrow x_j)$. A
variable $x_i$ is called a \emph{parent} of another variable $x_j$ if $x_i \to
x_j \in A$, or a \emph{spouse} of $x_j$ if $x_i \leftrightarrow x_j \in A$. We
denote the set of parents of $ x_i $ in $ {\cal G} $ as $ Pa_{\cal G}(x_i) $.
{\bfseries Paths:} A \emph{path} of length $k$ is a sequence of $k$ variables
such that each variable is connected to its neighbours by an arrow. A
\emph{directed path} from $x_i$ to $x_j$ is a path on which all arrows point
away from the start node $x_i$. For a path $\pi$, let $\pi[x_i \sim x_j]$
denote its subsequence from $x_i$ to $x_j$, in reverse order when $x_i$ occurs
after $x_j$; for example, if $\pi=x_1 \gets x_2 \to x_3$ then $\pi[x_2 \sim
x_3]=x_2 \to x_3$ and $\pi[x_1 \sim x_2]=x_2 \to x_1$.
Importantly, this definition of a path is common in DAG literature
but is different from the SEM literature, where ``path'' typically
refers to a single arrow between two
variables. Hence, a path in a DAG is equivalent to a sequence of paths in path
models. An \emph{acyclic directed mixed graph} (ADMG) is a mixed graph with no
directed path of length $\geq 2$ from a node to itself.
{\bfseries Treks and Trek Sides:} A \emph{trek} (also called \emph{open path})
is a path that does not contain a \emph{collider}, that is, a subsequence $ x_i
\to x_j \gets x_k $. A path that is not
open is a \emph{closed path}. Let $\pi$ be a trek from $x_i$ to $x_j$, then
$\pi$ contains a unique variable $t$ called the \emph{top}, also written as
$\pi^\leftrightarrow$, such that $\pi[t \sim x_i]$ and $\pi[t \sim x_j]$ are
both directed paths (which could both consist of a single node).
Then we call $\pi^\gets := \pi[t \sim x_i]$ the \emph{left side} and $\pi^\to
:= \pi[t \sim x_j]$ the \emph{right side} of $\pi$.\footnote{In the literature, treks are also often represented
as tuples of their left and right sides.}
{\bfseries Trek Intersection:} Consider two treks $\pi_i$ and $\pi_j$, then we
say that $\pi_i$ and $\pi_j$ \emph{intersect} if they contain a common variable
$v$. We say that they \emph{intersect on the same side} (have a same-sided
intersection) if $v$ occurs on $\pi_i^\gets$ and $\pi_j^\gets$ or $\pi_i^\to$
and $\pi_j^\to$; in particular, if $v$ is the top of $\pi_i$ or $\pi_j$, then
the intersection is always same sided. Otherwise, $\pi_i$ and $\pi_j$
\emph{intersect on opposite sides} (have an opposite-sided intersection).
{\bfseries t-separation:} Consider two sets of variables, $L$ and $R$, and a set
$T$ of treks. Then we say that the tuple $(L,R)$ \emph{$t$-separates} (is a
$t$-separator of) $T$ if every trek in $T$ contains either a variable in $L$ on
its left side or a variable in $R$ on the right side. For two sets of variables,
$A$ and $B$, we say that $(L,R)$ $t$-separates $A$ and $B$ if it $t$-separates
all treks between $A$ and $B$. The \emph{size} of a $t$-separator $(L,R)$ is
$|L|+|R|$.
\subsection{Structural Equation Models}
\label{subsec:sem_ram}
We now define structural equation models (SEMs) as they are usually considered
in the DAG literature \citep[e.g.,][]{sullivant2010trek}. This
definition is the same as the Reticular Action Model (RAM) representation
\citep{mcardle1984some} from the SEM literature. A \emph{structural equation
model} (SEM) is a system of equations linear in their parameters such that:
\[
X = \bm{B} X + E
\]
where $ X $ is a vector of variables (both latent and observed), $\bm{B}$ is a
$ |X| \times |X| $ matrix of \emph{path coefficients}, and
$E=\{\epsilon_1,\ldots,\epsilon_{|X|}\}$ is a vector of error
terms with a positive definite covariance matrix $\bm{\Phi}$ (which has typically
many or most of its off-diagonal elements set to 0) and zero
means.\footnote{This can be extended to allow for non-zero means, but our focus
here is on the covariance structure, so we omit that for simplicity.} The
\emph{path diagram} of an SEM $(\bm{B},\bm{\Phi})$ is a mixed graph with nodes
$V=X \cup E $ and arrows $A = \{ \epsilon_i \to x_i \mid i \in 1,\ldots, |X| \}
\cup \{ x_i \to x_j \mid \bm{B}[i,j] \neq 0 \} \cup \{ \epsilon_i
\leftrightarrow \epsilon_j \mid i \neq j, \bm{\Phi}[i,j] \neq 0 \} $. We also
write $\beta_{x_i \to x_j}$ for the path coefficients in $\bm{B}$ and
$\phi_{\epsilon_i}$ for the diagonal entries (variances) in $\bm{\Phi}$.
Each equation in the model corresponds to one node in this graph, where
the node is the dependent variable and its parent(s) are the explanatory
variable(s). Each arrow represents one parameter to be
estimated, i.e., a \emph{path coefficient} (e.g., directed arrow between latents and
observed variables), a \emph{residual covariance} (bi-directed arrow), or a
\emph{residual variance} (directed arrow from error term to latent or
indicator). However, some of these parameters could be fixed; for example,
at least one parameter per latent variable needs to be fixed to set its scale,
and covariances between observed exogenous variables (i.e., observed
variables that have no parents) are typically fixed to their observed values.
In this paper, we focus on estimating the path coefficients. We
only consider \emph{recursive} SEMs in this paper -- i.e., where the path
diagram is an ADMG -- even though the methodology can be generalized.
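Although not needed for the identification arguments below, it can help intuition to see how the pair $(\bm{B},\bm{\Phi})$ induces a covariance matrix: solving $X=\bm{B}X+E$ gives $X=(\bm{I}-\bm{B})^{-1}E$ and hence $\bm{\Sigma}=(\bm{I}-\bm{B})^{-1}\bm{\Phi}(\bm{I}-\bm{B})^{-T}$. The following standalone sketch uses arbitrary illustrative numbers; the convention that row $i$ of $\bm{B}$ holds the coefficients of $x_i$'s equation is our assumption for the code only.
\begin{verbatim}
import numpy as np

def implied_covariance(B, Phi):
    """Implied covariance of X = B X + E with Cov(E) = Phi:
    Sigma = (I - B)^{-1} Phi (I - B)^{-T} (assumes I - B is invertible)."""
    I = np.eye(B.shape[0])
    M = np.linalg.inv(I - B)
    return M @ Phi @ M.T

# Tiny illustrative model: x0 -> x1 -> x2 with coefficients 0.5 and 2.0
# (row i of B holds the coefficients of x_i's equation; illustrative choice).
B = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
Phi = np.diag([1.0, 1.0, 1.0])
print(implied_covariance(B, Phi))
\end{verbatim}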
\citet{sullivant2010trek} established an
important connection between treks and the ranks of submatrices of the
covariance matrix, which we will heavily rely on in our paper.
\begin{theorem}[Trek separation; Theorem~2.8 of \citet{sullivant2010trek}]
\label{thm:treksep}
Given an SEM $\cal G$ with an implied covariance matrix $ \bm{\Sigma} $, and two subsets of variables $A,B \subseteq X$,
\[
\text{rk}(\bm{\Sigma}[A,B]) \leq \text{min}\left\{|L|+|R| \mid (L,R) \text{
t-separates } A \text{ and } B \right\}
\]
where the inequality is tight for generic covariance matrices implied by
$\cal G$.
\end{theorem}
In the special case $A=\{x_1\},B=\{x_2\}$, Theorem~\ref{thm:treksep} implies
that $x_1$ and $x_2$ can only be correlated if they are connected by a trek.
Although the compatible covariance matrices of SEMs can also be characterized
in terms of $d$-separation \citep{chen2014graphical}, we use $t$-separation for
our purpose because it does not require conditioning on variables, and it
identifies more constraints on the covariance matrix implied by SEMs
than d-separation \citep{sullivant2010trek}.
\section{\MakeUppercase{Latent-To-Observed Transformations for SEMs}}
\label{sec:l2o}
A problem with IV-based identification criteria is that they cannot be directly
applied to latent variable parameters. The MIIV approach addresses this issue
by applying the L2O transformation to these model equations, such that they
only consist of observed variables. The L2O transformation in
\citet{bollen1996alternative} is presented on the LISREL representation of SEMs
(see Supplementary Material). In this section, we first briefly introduce
``scaling indicators'', which are required for performing L2O transformations.
We then use it to define the L2O transformation on the RAM notation (defined in
Section~\ref{subsec:sem_ram}) and show that with slight modification to the
transformation, we can also use it to partially identify equations. We will, from
here on, refer to this transformation as the ``algebraic L2O
transformation'' to distinguish it from the purely graphical L2O transformation
that we introduce later in Section~\ref{sec:graphical_l2o}.
\subsection{Scaling Indicators}
The L2O transformation (both algebraic and graphical) uses the fact that any
SEM is only identifiable if the scale of each latent variable is fixed to an
arbitrary value (e.g., 1), introducing new algebraic constraints. These
constraints can be exploited to rearrange the model equations in such a
way that latent variables can be eliminated.
The need for scale setting is well known in the SEM literature and arises from
the following lemma (since we could not find a direct proof in the literature
-- perhaps due to its simplicity -- we give one in the Appendix).
\begin{lemma} (Rescaling of latent variables).
\label{lemma:scaling}
Let $x_i$ be a variable in an SEM $(\bm{B},\bm{\Phi})$. Consider another SEM
$(\bm{B}',\bm{\Phi}')$ where we choose a scaling factor $\alpha \neq 0$
and change the coefficients as follows: For every parent $p$ of $x_i$,
$\beta_{p \to x_i}' = \alpha^{-1} \; \beta_{p \to x_i}$; for every
child $c$ of $x_i$, $\beta_{x_i \to c}' = \alpha \; \beta_{x_i \to c}$;
for every spouse $s$ of $x_i$, $\phi_{x_i \leftrightarrow s}' =
\alpha^{-1} \phi_{x_i \leftrightarrow s}$; and $\phi_{x_i}'=
\alpha^{-2} \phi_{x_i}$. Then for all $j,k \neq i$,
$\bm{\Sigma}[j,k]=\bm{\Sigma}'[j,k]$.
\end{lemma}
If $x_i$ is a latent variable in an SEM, then Lemma~\ref{lemma:scaling} implies
that we will get the same implied covariance matrix among the observed
variables for all possible scaling factors. In other words, we need to set the
scale of $x_i$ to an arbitrary value to identify any parameters in such a
model. Common choices are to either fix the error variance of every latent
variable such that its total variance is $ 1 $, or to choose one indicator per
latent and set its path coefficient to $ 1 $. The latter method is often
preferred because it is simpler to implement. The chosen indicators for each
latent are then called the \emph{scaling indicators}. However, note that
Lemma~\ref{lemma:scaling} tells us that we can convert any fit based on scaling
indicators to a fit based on unit latent variance, so this choice does not
restrict us in any way.
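Lemma~\ref{lemma:scaling} is also easy to verify numerically. The standalone sketch below (arbitrary illustrative numbers) rescales a single latent variable as prescribed by the lemma and confirms that the covariances among the remaining variables are unchanged; it uses the implied covariance $\bm{\Sigma}=(\bm{I}-\bm{B})^{-1}\bm{\Phi}(\bm{I}-\bm{B})^{-T}$ of the representation $X=\bm{B}X+E$.
\begin{verbatim}
import numpy as np

def sigma(B, Phi):
    M = np.linalg.inv(np.eye(B.shape[0]) - B)   # X = B X + E  =>  X = M E
    return M @ Phi @ M.T

# Variables [l, y1, y2]: latent l (index 0) with two indicators (illustrative).
B = np.zeros((3, 3))
B[1, 0] = 1.0            # y1 = l + e1
B[2, 0] = 0.7            # y2 = 0.7 l + e2
Phi = np.diag([1.0, 0.3, 0.3])

alpha = 2.5                                   # arbitrary scaling factor
B2, Phi2 = B.copy(), Phi.copy()
B2[1, 0] *= alpha                             # children of l: multiply by alpha
B2[2, 0] *= alpha
Phi2[0, 0] /= alpha**2                        # error variance of l: divide by alpha^2
# (l has no parents or spouses here, so no further adjustments are needed.)

S1, S2 = sigma(B, Phi), sigma(B2, Phi2)
print(np.allclose(S1[1:, 1:], S2[1:, 1:]))    # True: observed covariances match
\end{verbatim}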
\subsection{Algebraic L2O Transformation for RAM}
The main idea behind algebraic L2O transformation is to replace each of the
latent variables in the model equations by an observed expression involving
the scaling indicator.
As in \citet{bollen1996alternative}, we assume that each of the latent
variables in the model has a unique scaling indicator. We show the
transformation on a single model equation to simplify the notation. Given an
SEM $\cal G$ on variables $ X $, we can write the equation of any variable $
x_i \in X $ as:
\[
x_i = \epsilon_i + \sum_{x_j \in Co(x_i)}\beta_{x_j \rightarrow x_i} x_j
\]
where $ Co(x_i) = Co_l(x_i) \cup Co_o(x_i) $ is the set of covariates in the
equation for $ x_i $. $ Co_l(x_i) $ and $ Co_o(x_i) $ are the latent and
observed covariates, respectively. Since each latent variable $x_j$ has a
unique scaling indicator $x_j^s$, we can write the latent variable as
$ x_j = x_j^s - \epsilon_{x_j^s} $.
Replacing all the latents in the above equation with their scaling indicators,
we obtain:
\[
x_i = \epsilon_i + \sum_{x_j \in Co_l(x_i)} \beta_{x_j \rightarrow x_i} (x_j^s - \epsilon_{x_j^s}) + \sum_{x_k \in Co_o(x_i)} \beta_{x_k \rightarrow x_i} x_k
\]
If $ x_i $ is an observed variable, the transformation is complete as the
equation only contains observed variables. But if $ x_i $ is a latent variable,
we can further replace $ x_i $ as follows:
\[
x_i^s = \epsilon_i + \epsilon_{x_i^s} + \sum_{x_j \in Co_l(x_i)} \beta_{x_j \rightarrow x_i} (x_j^s - \epsilon_{x_j^s}) + \sum_{x_k \in Co_o(x_i)} \beta_{x_k \rightarrow x_i} x_k
\]
As the transformed equation now only consists of observed variables, IV-based
criteria can be applied to check for identifiability of parameters.
\subsection{Algebraic L2O Transformations for Partial Equation Identification}
\label{sec:partial_transform}
In the previous section, we used the L2O transformation to replace all the
latent variables in the equation with their scaling indicators, resulting in an
equation with only observed variables. An IV-based estimator applied to these
equations would try to estimate all the parameters together. However, there are
cases (as shown in Section~\ref{sec:examples}) where not all of the parameters
of an equation are identifiable. If we apply L2O transformation to the whole
equation, none of the parameters can be estimated.
Here, we outline an alternative, ``partial'' L2O transformation that
replaces only some of the latent variables in the equation. Letting $
Co_l^i(x_i) \subset Co_l(x_i) $ denote the set of latent variables whose parameters
we are interested in estimating, we can write the partial L2O transformation
as:
\[
\begin{split}
x_i = \epsilon_i + & \sum_{x_j \in Co_l^i(x_i)} \beta_{x_j \rightarrow x_i} (x_j^s - \epsilon_{x_j^s}) + \\
& \sum_{x_k \in Co_l(x_i) \setminus Co_l^i(x_i)} \beta_{x_k \rightarrow x_i} x_k + \sum_{x_l \in Co_o(x_i)} \beta_{x_l \rightarrow x_i} x_l
\end{split}
\]
Similar to the previous section, we can further apply L2O transformation for $
x_i $ if it is also a latent variable. As the parameters of interest are now
with observed covariates in the transformed equation, IV-based criteria can be
applied to check for their identifiability while treating the variables in $
Co_l(x_i) \setminus Co_l^i(x_i) $ as part of the error term.
\section{\MakeUppercase{Graphical L2O Transformation}}
\label{sec:graphical_l2o}
Having shown the algebraic L2O transformation, we now show that these
transformations can also be done graphically for path diagrams. An
important difference is that the algebraic transformation
is applied to all equations in a model simultaneously by
replacing all latent variables, whereas we
apply the graphical transform only to a single equation at a time (i.e.,
starting from the original graph for every equation). Applying the graphical
transformation to multiple equations simultaneously results in a
non-equivalent model with a different implied covariance matrix.
Given an SEM $\cal G$, the equation for any variable $x_j$ can be written in
terms of its parents in the path diagram as: $ x_j = \sum_{x_k \in
\textrm{Pa}_{\cal G}(x_j)} \beta_{x_k \to x_j} x_k + \epsilon_{x_j} $. Using this equation,
we can write the relationship between any latent variable $ x_j $ and its scaling
indicator $ x_j^s $ as (where $ \beta_{x_j \to x_j^s} $ is fixed to $ 1 $):
\begin{equation}
\label{eq:graphical_transform}
x_j = x_j^s - \epsilon_{x_j^s} - \sum_{x_k \in \textrm{Pa}_{\cal G}(x_j^s) \setminus x_j} \beta_{x_k \to x_j^s} x_k
\end{equation}
We use this graphical L2O transformation as follows. Our goal is to identify a path
coefficient $\beta_{x_i \to x_j}$ in a model $\cal G$. If both $x_i$ and $x_j$
are observed, we leave the equation untransformed and apply graphical
identification criteria \citep{chen2014graphical}. Otherwise,
we apply the graphical L2O transformation to $\cal G$ with respect to $ x_i $,
$ x_j$, or both variables -- ensuring that the resulting model ${\cal G}'$
contains an arrow between two observed variables $x_i'$ and $x_j'$, where the
path coefficient $\beta_{x_i' \to x_j'}$ in ${\cal G}'$ equals $\beta_{x_i \to
x_j}$ in $\cal G$.
We now illustrate this approach on an example for each of the three possible
combinations of latent and observed variables.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{.33\linewidth}
\centering
\includegraphics[page=4]{figures-inge.pdf}
\caption{}
\label{fig:l2o_parent}
\end{subfigure}%
\begin{subfigure}[b]{.33\linewidth}
\centering
\includegraphics[page=5]{figures-inge.pdf}
\caption{}
\label{fig:l2o_child}
\end{subfigure}%
\begin{subfigure}[b]{.33\linewidth}
\centering
\includegraphics[page=6]{figures-inge.pdf}
\caption{}
\label{fig:l2o_both}
\end{subfigure}
\caption{Example L2O transformations for path coefficients
(a) from a latent to an observed variable;
(b) from an observed to a latent variable;
(c) between two latent variables.}
\end{figure}
{\bfseries Latent-to-observed arrow:} Consider the arrow $l_1 \to y_3 $ in
Figure~{\ref{fig:l2o_parent}}, and let $\beta$ be the path coefficient of this
arrow. To perform the L2O transformation, we start with the
model equation involving $\beta$:
\[
y_3 = \beta l_1 + \beta_{y_5 \to y_3} y_5 + \epsilon_3
\]
We then use Equation~\ref{eq:graphical_transform} to write the latent variable,
$ l_1 $, in terms of its scaling indicator $y_2$ as $ l_1 = y_2 - \epsilon_{2} -
\beta_{y_1 \to y_2} y_1 $, and replace it in the above equation to obtain:
\[
y_3 = \beta y_2 - \beta \beta_{y_1 \to y_2} y_1 + \beta_{y_5 \to y_3} y_5 - \beta \epsilon_2 + \epsilon_3
\]
The transformation has changed the equation for $ y_3 $, which now regresses on
the observed variables $ y_2 $, $ y_1 $, and $ y_5 $, as well as the errors $
\epsilon_2 $ and $ \epsilon_3 $. We make the same changes in the graphical
structure by adding the arrows $ y_2 \to y_3 $, $ y_1 \to y_3 $, $ \epsilon_2
\to y_3 $, and removing the arrow $ l_1 \to y_3 $.
{\bfseries Observed-to-latent arrow:} Consider the arrow $y_1 \to l_1$ in
Figure~\ref{fig:l2o_child} with coefficient $\beta$. For L2O transformation in
this case, we apply Equation~\ref{eq:graphical_transform} to replace $ l_1 $ in
the model equation $ l_1 = \beta y_1 + \zeta_1 $ to obtain:
\[
y_4 = \beta y_1 + \beta_{y_3 \to y_4} y_3 + \beta_{y_2 \to y_4} y_2 + \zeta_1 + \epsilon_4
\]
The equivalent transformation to the path diagram consists of adding the arrows $ y_1 \to y_4
$, and $ \zeta_1 \to y_4 $, and removing the arrows: $ l_1 \to y_4 $ and $ y_1 \to l_1 $.
{\bfseries Latent-to-latent arrow:} Consider the arrow $ l_1 \to l_2$ in
Figure~\ref{fig:l2o_both} with coefficient $ \beta $. In this case, we again
apply Equation~\ref{eq:graphical_transform} to replace both $ l_1 $ and $ l_2 $ in the model
equation for $ l_2 = \beta l_1 + \zeta_2 $. This is equivalent to applying two L2O transformations in sequence
and leads to the transformed equation:
\[
y_2 = \beta y_1 - \beta \epsilon_1 + \zeta_2 + \epsilon_2
\]
Equivalently, we now add the arrows $ y_1 \to y_2 $, $
\zeta_2 \to y_2 $, and $ \epsilon_1 \to y_2 $. We also remove the arrows $
l_2 \to y_2 $ and $ l_1 \to l_2 $.
\section{\MakeUppercase{Model-Implied Instrumental Variables Are Equivalent to Instrumental Sets}}
\label{sec:equiv}
After applying the L2O transformations from the previous sections, we can
use either algebraic or graphical criteria to check whether the path
coefficients are identifiable. In this section, we introduce the Instrumental
set criterion \citep{BritoP02} and the MIIV approach from
\citet{bollen1996alternative} that precedes it, and show that they are
equivalent. Importantly, even though we refer to the MIIV
approach as an algebraic criterion to distinguish it from the graphical
criterion, it is not a purely algebraic approach and utilizes the graphical
structure of the model to infer correlations with error terms.
We will first focus on the instrumental set criterion proposed by
\citet{BritoP02}. We state the criterion below in a slightly rephrased
form that is consistent with our notation in Section~\ref{sec:background}:
\begin{definition}[Instrumental Sets \citep{BritoP02}]
\label{defn:graphicis}
Given an ADMG $\cal
G$, a variable $y$, and a subset $X$ of the parents of $y$,
a set of variables
$I$ fulfills the
\emph{instrumental set condition}
if for {some} permutation $ i_1 \ldots i_k $ of
$ I $ and {some} permutation
$ x_1 \ldots x_k $ of $ X $ we have:
\begin{enumerate}
\item There are no treks from $I$ to $y$ in the graph ${\cal
G}_{\overline{X}}$ obtained by removing all arrows
between $X$ and $y$.
\item For each $j$, $1 \leq j \leq k$, there is a trek $\pi_j$ from
$I_j$ to $X_j$ such that for all $i < j$: (1) $I_i$ does not
occur on the trek $\pi_j$; and (2) all intersections between
$\pi_i$ and $\pi_j$ are on the left side of $\pi_i$ and the
right side of $\pi_j$.
\end{enumerate}
\end{definition}
Its reliance on permutation makes the instrumental set criterion fairly
complex; in particular, it is not obvious how an algorithm to find such sets
could be implemented, since enumerating all possible permutations and paths is
clearly not a practical option. Fortunately, we can rewrite this criterion into
a much simpler form that does not rely on permutations and has an obvious
algorithmic solution.
\begin{definition}[Permutation-free Instrumental Sets]
\label{defn:graphicistrek}
Given an ADMG $\cal G$, a variable $y$ and a subset $X$ of the parents
of $y$, a set of variables $I$ fulfills the \emph{permutation-free
instrumental set condition} if: (1) There are no treks from $I$ to $y$
in the graph ${\cal G}_{\overline{X}}$ obtained by removing all arrows
leaving $X$, and (2) All $t$-separators $(L,R)$ of $I$ and $X$ have
size $\geq k$.
\end{definition}
\begin{theorem}
\label{thm:graphicistrek}
The instrumental set criterion is equivalent to the permutation-free
instrumental set criterion.
\end{theorem}
\begin{proof}
This is shown by adapting a closely related existing result \citep{ZanderL16}.
See Supplement for details.
\end{proof}
\begin{definition}[Algebraic Instrumental Sets (\citet{bollen1996alternative}, \citet{bollen2012instrumental})]
\label{defn:algebraicis}
Given a regression equation $y = B \cdot X + \epsilon$, where $X$ possibly
correlates with $\epsilon$, a set of variables
$I$ fulfills the \emph{algebraic instrumental set condition} if: (1) $I \perp\!\!\!\perp \epsilon$,
(2) $\textrm{rk}(\bm{\Sigma}[I,X]) = |X|$, and (3) $\textrm{rk}(\bm{\Sigma}[I]) = |I|$.
\end{definition}
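Conditions (2) and (3) of Definition~\ref{defn:algebraicis} can be checked directly from a model-implied (or estimated) covariance matrix; condition (1) is a property of the model structure rather than of $\bm{\Sigma}$ and is not checked here. The following sketch uses illustrative index sets and an arbitrary positive definite matrix.
\begin{verbatim}
import numpy as np

def rank_conditions_hold(Sigma, I_idx, X_idx):
    """Check rk(Sigma[I, X]) == |X| and rk(Sigma[I, I]) == |I|
    (conditions (2) and (3) of the algebraic instrumental set definition)."""
    S_IX = Sigma[np.ix_(I_idx, X_idx)]
    S_II = Sigma[np.ix_(I_idx, I_idx)]
    return (np.linalg.matrix_rank(S_IX) == len(X_idx)
            and np.linalg.matrix_rank(S_II) == len(I_idx))

# Usage with an arbitrary positive definite Sigma and illustrative index sets:
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
Sigma = A @ A.T
print(rank_conditions_hold(Sigma, I_idx=[0, 1], X_idx=[2, 3]))
\end{verbatim}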
Having rephrased the instrumental set criterion
without relying on permutations, we can now establish a
correspondence to the algebraic condition for instrumental variables -- which also
serves as an alternative correctness proof for Definition~\ref{defn:graphicis}
itself. The proof of the Theorem is included in the Supplementary Material.
\begin{theorem}
\label{thm:algebraictographical}
Given an SEM $(\bm{B},\bm{\Phi})$ with path diagram ${\cal G}=(V,A)$
and a variable $y \in V$, let
$X$ be a subset of the parents of $y$ in $\cal G$. Then
a set of variables $I \subseteq V$ fulfills the algebraic instrumental set
condition with respect to the equation
\[
y = B \cdot X + \epsilon ; \textit{ where } \epsilon = \sum_{p \in \textrm{Pa}_{\cal G}(y) \setminus X} \beta_{p \to y}\, p + \epsilon_y
\]
if and only if $I$ fulfills the instrumental set
condition with respect to $X$ and $y$ in $\cal G$.
\end{theorem}
In the R package MIIVsem \citep{fisher2019miivsem} implementation of MIIV, all
parameters in an equation of an SEM are simultaneously identified by (1)
applying an L2O transformation to all the latent variables in this equation;
(2) identifying the composite error term of the resulting equation; and (3)
applying the algebraic instrumental set criterion based on the model matrices
initialized with arbitrary parameter values and derived total effect and
covariance matrices; see \citet{bollen2004automating} for details.
Theorem~\ref{thm:algebraictographical} implies that the MIIVsem approach is
generally equivalent to first applying the graphical L2O transform followed by
the instrumental set criterion (Definition~\ref{defn:graphicis}) using the set
of all observed parents of the dependent variable in the equation as $X$.
\section{\MakeUppercase{Examples}}
\label{sec:examples}
\begin{figure}[t]
\begin{subfigure}[b]{0.5 \linewidth}
\centering
\includegraphics[page=7]{figures-inge.pdf}
\caption{}
\label{fig:transform_example}
\end{subfigure}%
\begin{subfigure}[b]{0.5 \linewidth}
\centering
\includegraphics[page=8]{figures-inge.pdf}
\caption{}
\label{fig:transform_example_single}
\end{subfigure}
\caption{(a) Example model following the structure of
Figure~\ref{fig:example_sem} with explicit error terms.
(b) L2O transformation for the model in (a) for identifying both
coefficients of the equation for $y_3$ simultaneously. We end up with
the regression equation $y_3 \sim y_2 + y_4$ and can identify both
coefficients using $y_1$ and $y_5$ as instrumental variables.}
\label{fig:examples1}
\end{figure}
Having shown that the algebraic instrumental set criterion is equivalent to the
graphical instrumental set criterion, we now show some examples of
identification using the proposed graphical approach and compare it to the MIIV
approach implemented in MIIVsem\footnote{In some examples, a manual implementation
of the MIIV approach can permit estimation of models that are not covered by the
implementation in MIIVsem.}.
First, we show an example of a full equation identification where we identify
all parameters of an equation altogether. Second, we show an example of partial
L2O transformation (as shown in Section~\ref{sec:partial_transform}) that
allows us to estimate a subset of the parameters of the equation. Third, we
show an example where the instrumental set criterion fails to identify any
parameters, but the conditional instrumental set criterion \citep{BritoP02} can
still identify some parameters. Finally, we show an example where the
parameters are inestimable even though the equation is identified.
\subsection{Identifying Whole Equations}
\label{sec:example_whole}
In this section, we show an example of identifying a whole equation using the
graphical criterion. Let us consider an SEM adapted from \citet{Shen2001}, as
shown in Figure~\ref{fig:examples1}a. We are interested in estimating the
equation $ y_3 \sim l_1 + l_2 $, i.e., parameters $ \lambda_{13} $ and $
\lambda_{23} $. Doing a graphical L2O transformation for both these parameters
together adds the edges $ y_2 \to y_3 $, $ y_4 \to y_3 $, $ \epsilon_2 \to y_3
$, and $ \epsilon_4 \to y_3 $, and removes the edges $ l_1 \to y_3 $, and $ l_2
\to y_3 $, resulting in the model shown in Figure~\ref{fig:examples1}b. Now, for
estimating $ \lambda_{13} $ and $ \lambda_{23} $ we can use the regression
equation $ y_3 \sim y_2 + y_4 $, with $ y_1 $ and $ y_5 $ as the IVs. As $ y_1
$ and $ y_5 $ satisfy Definition~\ref{defn:graphicistrek}, both the parameters
are identified. Both of these parameters are also identifiable using MIIVsem.
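To illustrate the estimation step that follows identification in this example, the sketch below simulates data from a model with the structure of Figure~\ref{fig:examples1}a (all numerical values are arbitrary assumptions) and recovers $\lambda_{13}$ and $\lambda_{23}$ from the transformed regression $y_3 \sim y_2 + y_4$ with $y_1$ and $y_5$ as instruments, using the textbook just-identified IV estimator rather than MIIVsem's implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lam13, lam23 = 0.8, -0.6                     # parameters of interest (assumed)

# Latent level: l1 and l2 correlated through a shared component (illustrative).
common = rng.standard_normal(n)
l1 = common + rng.standard_normal(n)
l2 = 0.5 * common + rng.standard_normal(n)

# Measurement level (y2 and y4 are the scaling indicators, loadings fixed to 1).
y1 = 0.9 * l1 + rng.standard_normal(n)
y2 = 1.0 * l1 + rng.standard_normal(n)
y4 = 1.0 * l2 + rng.standard_normal(n)
y5 = 0.7 * l2 + rng.standard_normal(n)
y3 = lam13 * l1 + lam23 * l2 + rng.standard_normal(n)

# Just-identified IV estimator: beta = (Z^T X)^{-1} Z^T y (variables centered).
X = np.column_stack([y2, y4])
Z = np.column_stack([y1, y5])
X -= X.mean(axis=0); Z -= Z.mean(axis=0); y = y3 - y3.mean()
beta = np.linalg.solve(Z.T @ X, Z.T @ y)
print(beta)                                  # approximately [0.8, -0.6]
\end{verbatim}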
\subsection{Identifying Partial Equations}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.33 \linewidth}
\centering
\includegraphics[page=11]{figures-inge.pdf}
\caption{}
\label{fig:example_orig}
\end{subfigure}%
\begin{subfigure}[b]{0.33 \linewidth}
\centering
\includegraphics[page=12]{figures-inge.pdf}
\caption{}
\label{fig:example_non_corr}
\end{subfigure}%
\begin{subfigure}[b]{0.33 \linewidth}
\centering
\includegraphics[page=13]{figures-inge.pdf}
\caption{}
\label{fig:transform_non_corr}
\end{subfigure}
\caption{(a) Adapted SEM from \citet{Shen2001}; modified by making $ l_1
$ and $ l_2 $ uncorrelated and $ \epsilon_1 $ and $ \epsilon_2
$ correlated. (b) Transformed model for estimating $ y_3 \sim
l_1 + l_2 $. The equation is not identified as $ y_5 $ is the
only IV. (c) With partial L2O transformation, $ \lambda_{23} $
can be estimated using $ y_5 $ as the IV.}
\label{fig:examples3}
\end{figure}
For this section, we consider a slightly modified version of the model in the
previous section. We have added a correlation between $ \epsilon_1 $ and
$\epsilon_2$, and have made the latent variables $l_1$ and $l_2$
uncorrelated, as shown in Figure~{\ref{fig:example_orig}}. The equation $ y_3
\sim l_1 + l_2 $ is not identified in this case, as $ y_5 $ is the only
available IV (Figure~\ref{fig:example_non_corr}). However, using the partial
graphical transformation for $ l_2 $ while treating $ l_1 $ as an error term
(Figure~{\ref{fig:transform_non_corr}}), the parameter $ \lambda_{23} $ can be
identified by using $y_5$ as the IV. As the R package MIIVsem always tries to
identify full equations, it is not able to identify either of the parameters in
this case -- although this would be easily doable when applying the MIIV
approach manually.
\subsection{Identification Based on Conditional IVs}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.5 \linewidth}
\centering
\includegraphics[page=14]{figures-inge.pdf}
\caption{}
\label{fig:example_conditional_iv}
\end{subfigure}%
\begin{subfigure}[b]{0.5 \linewidth}
\centering
\includegraphics[page=15]{figures-inge.pdf}
\caption{}
\label{fig:transform_conditional_iv}
\end{subfigure}
\caption{(a) Modified version of the Figure~\ref{fig:example_orig} model, where $l_1$ and $l_2$
share an observed cause $y_6$.
$\lambda_{13} $ and $ \lambda_{23} $ are still not
simultaneously identified as no IVs are available. (b) Even
with partial transformation, $ \lambda_{23} $ is no longer
identified as $ y_5 $ is not an IV because of the open paths $
y_5 \gets l_2 \gets y_6 \to y_3 $ and $ y_5 \gets l_2 \gets y_6
\to l_1 \to y_3 $. However, using the conditional instrumental
set criterion, we can identify $\lambda_{23}$ by using $y_5$ as
a \emph{conditional IV} for the equation $y_3 \sim y_4$, as
conditioning on $ y_6 $ blocks the open paths.}
\label{fig:examples4}
\end{figure}
So far, we have only considered the instrumental set criterion, but many
other identification criteria have been proposed for DAGs. For example, we can
generalize the instrumental set criteria to hold \emph{conditionally}
on some set of observed variables \citep{BritoP02}. There can be cases when
conditioning on certain variables allows us to use
conditional IVs. This scenario might not occur when we have a standard latent
and measurement level of variables, but might arise in specific cases; for
example, when there are exogenous covariates that can be measured without error
(such as the year in longitudinal studies), or interventional variables in
experimental settings (such as complete factorial designs), which are
uncorrelated, exogenous, and observed by definition.
Figure~{\ref{fig:example_conditional_iv}} shows a hypothetical example in which
the latent variables $ l_1 $ and $ l_2 $ are only correlated through a common
cause $ y_6 $, which could, for instance, represent an experimental
intervention. Similar to the previous example, a full identification for $ y_3
\sim l_1 + l_2 + y_6 $ still does not work. Further, because of the added correlation
between $ l_1 $ and $ l_2 $, partial identification is not possible either. The
added correlation between $l_1$ and $l_2$ opens a path from $y_5$ to $y_3$,
resulting in $y_5$ no longer being an IV for $ y_3 \sim y_4 $. However, the
conditional instrumental set criterion \citep{BritoP02} can be used here to
show that the parameter $ \lambda_{23} $ is identifiable by conditioning on
$y_6$ in both stages of the IV regression. In graphical terms, we say that
conditioning on $y_6$ $d$-separates the path between $l_1$ and
$l_2$ (Figure~\ref{fig:transform_conditional_iv}), which means that we end up
in a similar situation as in Figure~\ref{fig:transform_non_corr}. We can
therefore use $y_5$ as an IV for the equation $y_3 \sim y_4$ once we condition
on $y_6$. As the MIIV approach does not consider conditional IVs, it is not able
to identify either of the parameters.
\subsection{Inestimable Parameters in Identified Equations}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.33 \linewidth}
\centering
\includegraphics[page=16]{figures-inge.pdf}
\caption{}
\label{fig:example_original_model}
\end{subfigure}%
\begin{subfigure}[b]{0.33 \linewidth}
\centering
\includegraphics[page=17]{figures-inge.pdf}
\caption{}
\label{fig:example_incorrect_estimate}
\end{subfigure}%
\begin{subfigure}[b]{0.33 \linewidth}
\centering
\includegraphics[page=18]{figures-inge.pdf}
\caption{}
\label{fig:transform_incorrect_estimate}
\end{subfigure}%
\caption{(a) An example model from \citet{griliches1977estimating}
about the economic effects of schooling. The model has $ 1 $ latent
variable $x_1$ (Ability) with $ 4 $ observed variables $ y_1 $ (IQ), $
y_2 $ (Schooling), $ y_3 $ (knowing how the world works), and $ y_4 $
(Income). (b) A slightly modified version of the model in
Figure~\ref{fig:example_original_model} where we add two new edges $ y_1
\rightarrow y_2 $ and $ y_3 \rightarrow y_4 $. (c) L2O transformed
model for the equation of $ y_4 $. The transformed regression equation
for $ y_4 $ is: $ y_4 \sim y_3 + y_2 $ but because of the
transformation, the coefficient of $ y_3 $ has changed
to $ \lambda_{14} + \lambda_{34} $. Because of this changed
coefficient, even though the equation is identified, it is not possible
to estimate either $ \lambda_{14} $ or $ \lambda_{34} $ individually.
}
\label{fig:examples5}
\end{figure}
In the previous examples, the L2O transformation creates a new edge in the
model between two observed variables that has the same path coefficient that we
are interested in estimating. But if the L2O transformation adds a new edge
where one already exists, the new path coefficient becomes the sum of the
existing coefficient and our coefficient of interest. In such cases, certain
parameters can be inestimable even if the transformed equation is
identified according to the identification criteria.
In Figure~\ref{fig:example_original_model}, we have taken a model about the
economic effects of schooling from \citet{griliches1977estimating}.
All parameters in the equation of $y_4$ are identifiable by using
$y_1$ and $y_2$ as the IVs. However, we get an interesting case if we
add two new edges $ y_1 \rightarrow y_2 $ and $ y_3 \rightarrow y_4 $
(Figure~\ref{fig:example_incorrect_estimate}): The L2O transformation for the
equation of $ y_4 $ adds the edges $ \epsilon_3 \rightarrow y_4 $ and $ y_3
\rightarrow y_4 $, as shown in Figure~\ref{fig:transform_incorrect_estimate}.
But since the original model already has the edge $ y_3 \rightarrow y_4 $, the
new coefficient for this edge becomes $ \lambda_{14} + \lambda_{34} $. The
regression equation for $y_4$ is still: $ y_4 \sim y_3 + y_2 $, and it is
identified according to the instrumental set criterion as $ y_2 $ and $ y_1 $
are the IVs for the equation. But if we estimate the parameters, we will obtain
values for $ \lambda_{24} $ and $ \lambda_{14} + \lambda_{34} $. Therefore, $
\lambda_{24} $ remains identifiable in this more general case, but $
\lambda_{14} $ and $\lambda_{34}$ are individually not identified. The graphical
L2O approach allows us to easily visualize such cases after transformation.
\section{\MakeUppercase{Discussion}}
In this paper, we showed the latent-to-observed (L2O) transformation on the RAM
notation and how to use it for partial equation identification. We then
gave an equivalent graphical L2O transformation which allowed us to apply
graphical identification criteria developed in the DAG literature to latent
variable parameters in SEMs. Combining this graphical L2O transformation with
the graphical criteria for parameter identification, we arrived at a generic
approach for parameter identification in SEMs. Specifically, we showed that the
instrumental set criterion combined with the graphical L2O transformation is
equivalent to the MIIV approach. Therefore, the graphical transformation can be
used as an explicit visualization of the L2O transformation or as an
alternative way to implement the MIIV approach in computer programs. To
illustrate this, we have implemented the MIIV approach in the graphical-based R
package \emph{dagitty} \citep{Textor2017} and the Python package \emph{pgmpy} \citep{ankan2015pgmpy}.
Our equivalence proof allows users to combine results from two largely disconnected lines of work.
By combining the graphical L2O transform with other identification criteria,
we obtain novel identification strategies for LVSEMs, as we have illustrated
using the conditional instrumental set criterion. Other
promising candidates would be auxiliary
variables \citep{ChenKB17} and instrumental cutsets \citep{KumorCB19}.
Conversely, the SEM literature is more developed than the graphical literature
when it comes to non-Gaussian models. For example,
MIIV with two-stages least squares estimation is asymptotically distribution-free
\citep{bollen1996alternative},
and our results imply that normality is not required for applying the
instrumental set criterion.
|
{
"arxiv_id": "2302.13256",
"language": "en",
"timestamp": "2023-02-28T02:14:23",
"url": "https://arxiv.org/abs/2302.13256",
"yymm": "2302"
} |
\section{Conclusions}
\label{sec:concl}
This paper presents a flexible and accurate framework for space-time video super-resolution, which can upsample a given video to an arbitrary frame rate and spatial resolution. We first perform multi-frame information enhancement with a bi-directional RNN structure. After that, a memory-friendly, forward-warping-guided feature alignment module is proposed to synthesize intermediate frames at arbitrary intermediate times. To better preserve texture consistency in the reconstructed intermediate frames, we propose an optical-flow-guided pseudo-label generation strategy that greatly reduces the search space. Furthermore, all motion information is organized in an efficient manner, which largely avoids redundant estimation of motion information. Finally, a straightforward yet efficient cascaded upsampling module is proposed, which enables our model to achieve scale-arbitrary upsampling while significantly reducing memory consumption. By incorporating these techniques into an end-to-end framework, our model can make use of long-range information with high efficiency. Extensive experiments demonstrate that our approach not only offers good flexibility but also achieves significant performance gains over existing methods.
\section{Experiments}
\subsection{ Experimental Setup}
\noindent \textbf{Dataset}
We adopt Vimeo-90K-T\cite{DBLP:journals/corr/abs-1711-09078} to train our model. Vimeo-90K-T contains 91701 video clips, each consisting of seven consecutive frames at a resolution of 448 $\times$ 256.
We follow \cite{2021Zooming,xu2021temporal} and split the Vimeo-90K-T test set into three subsets of fast, medium, and slow motion according to their average motion magnitude. Some all-black sequences are removed from the test set since they lead to infinite PSNR values.
We use various datasets including Vimeo-90K-T\cite{DBLP:journals/corr/abs-1711-09078}, Vid4\cite{5995614}, REDS\cite{Nah_2019_CVPR_Workshops}, Adobe240FPS\cite{2019Video}, and SPMCS\cite{tao2017detail} for evaluation. Specifically, we first conduct fixed space-time super-resolution experiments on Vimeo-90K-T, Vid4, and REDS. To further validate the model's ability to handle large motions and synthesize continuous temporal trajectories, we perform experiments on two challenging datasets: REDS and Adobe240FPS. Finally, we compare the performance of continuous space-time super-resolution on the SPMCS dataset. The detailed experimental setup is described in the corresponding section. \\
\noindent \textbf{Implementation Details}
We use the Adam optimizer \cite{kingma2014adam} with $\beta_{1}$ = 0.9 and $\beta_{2}$ = 0.999. The learning rate is initialized as
2 $\times$ 10$^{-4}$ and is decayed to 1 $\times$ 10$^{-7}$ with a cosine annealing scheduler. When training the model, we downsample the odd-index HR frames with bicubic interpolation to obtain the LR input frames. To allow longer
propagation, we follow the setup of \cite{2020BasicVSR} and perform temporal
augmentation by flipping the original input sequence. For a fair comparison, we adopt different strategies and train two models, which we refer to as model-fix and model-continuous. The model-fix is trained with a fixed downsampling factor of 4. When training model-continuous, we randomly sample scales from $\{2.0, 2.2, \dots, 3.8, 4.0\}$, i.e., with a stride of 0.2.
To optimize the model, we use different loss functions to supervise pre-existing and temporally interpolated frames. For the former, we select the Charbonnier loss function\cite{lai2017deep}, as suggested in \cite{2021Zooming,xu2021temporal}:
\begin{equation}
\begin{aligned}
loss_{exist} = \sqrt{{\Vert I^{exist}-I^{GT} \Vert}^{2}+{\epsilon}^{2}}.
\end{aligned}
\label{f16}
\end{equation}
For the latter, we relax the requirement of a precise correspondence between the synthesized intermediate frames and the ground truth (GT). The optimization goal is therefore not only the $\mathcal{L}_{1}$ distance between the GT and the predicted frames but also a term encouraging the synthesized frame to share the same continuous semantic pattern as the reference frames from both sides. Our total learning goal is formulated as follows:
\begin{equation}
\begin{aligned}
loss_{inter} =\mathcal{L}_{1}(I^{pred},I^{GT})+ \alpha \mathcal{L}_{1}(I^{pred},I^{pseudo}),
\end{aligned}
\label{f17}
\end{equation}
where $I^{pred}$ denotes the predicted intermediate frame, and $I^{GT}$ and $I^{pseudo}$ refer to the ground truth and the pseudo label, respectively. $\alpha$ is a hyperparameter that balances the
importance of the two terms. We follow \cite{zhou2022exploring} and set it to 0.1. The total loss function can then be described as:
\begin{equation}
\begin{aligned}
loss_{total} =loss_{exist}+loss_{inter}
\end{aligned}
\label{f18}
\end{equation}
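For concreteness, a minimal PyTorch-style sketch of the three loss terms is given below; the function names, the smoothing constant \texttt{eps}, and the mean reduction are illustrative assumptions, as only the loss forms above are prescribed.
\begin{verbatim}
import torch

def charbonnier(pred, gt, eps=1e-6):
    # Charbonnier penalty for the pre-existing frames (loss_exist)
    return torch.sqrt((pred - gt) ** 2 + eps ** 2).mean()

def intermediate_loss(pred, gt, pseudo, alpha=0.1):
    # L1 to the ground truth plus a relaxed L1 to the
    # flow-guided pseudo label (loss_inter)
    return torch.abs(pred - gt).mean() + \
           alpha * torch.abs(pred - pseudo).mean()

def total_loss(pred_exist, gt_exist, pred_inter, gt_inter, pseudo_inter):
    # loss_total = loss_exist + loss_inter
    return charbonnier(pred_exist, gt_exist) + \
           intermediate_loss(pred_inter, gt_inter, pseudo_inter)
\end{verbatim}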
We use Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) for evaluation. The inference speed of these methods is measured on the entire Vid4 dataset using one Nvidia 1080Ti GPU.
\subsection{ Comparison with State-of-the-art Methods}
\noindent \textbf{Comparison with fixed spatio-temporal upsampling methods.}
Since the vast majority of current methods can only accomplish space-time upsampling with fixed spatial and temporal interpolation scales, we first compare our model with these methods. Specifically, we compare the results for space 4$\times$ and time 2$\times$ super-resolution with existing state-of-the-art one-stage approaches\cite{2020Space,2021Zooming,xu2021temporal} and two-stage approaches which sequentially apply separate VFI\cite{0Super,2017Video,DAIN} and (V)SR\cite{2018Image,2019Recurrent,2019EDVR} methods. All these methods are tested on three datasets: Vid4\cite{5995614}, Vimeo-90K-T\cite{DBLP:journals/corr/abs-1711-09078}, and REDS\cite{Nah_2019_CVPR_Workshops}, which are commonly used for VSR and VFI. For a fair comparison, we remove the scale-aware convolution in the feature extraction module in stage 1 and fix the spatial up-sampling factor when training our model. From Table~\ref{tab:1}, one can see that the proposed method outperforms all the other
\begin{table*}[t]
\centering
\caption{ Quantitative assessment (PSNR (dB) / SSIM) of multiple-frame interpolation for different methods on the Adobe240FPS test set. The top results are highlighted in {\color{red} red}.
}
\resizebox{0.93\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c|c}
\hline
Time stamp & 0.000 & 0.167 & 0.333 & 0.500 & 0.667 & 0.833 \\ \hline \hline
TMNet\cite{xu2021temporal} &31.50 / 0.8924 & 27.00 / 0.8101 & 25.47 / 0.7691 & 25.44 / 0.7669 & 25.50 / 0.7703 & 26.86 / 0.8062 \\ \hline
VideoINR\cite{chen2022videoinr} &30.75 / 0.8889 & 27.58 / 0.8301 & 26.17 / 0.7963 & 25.96 / 0.7890 & 26.19 / 0.7953 & 27.53 / 0.8283 \\ \hline
EBFW\cite{zhangVCIP}(our previous work) &32.89 / 0.9195 & 27.99 / 0.8494 & 27.61 / 0.8346 & 27.68 / 0.8354 & 27.67 / 0.8367 & 28.05 / 0.8507 \\ \hline
C-STVSR (Ours) &\textcolor{red}{33.15} / \textcolor{red}{0.9236} & \textcolor{red}{28.13} / \textcolor{red}{0.8564} & \textcolor{red}{27.63} / \textcolor{red}{0.8374} & \textcolor{red}{27.74} / \textcolor{red}{0.8389} & \textcolor{red}{27.72} / \textcolor{red}{0.8403} & \textcolor{red}{28.29} / \textcolor{red}{0.8591} \\ \hline
\end{tabular}
\label{tab:2}}
\end{table*}
methods on all quantitative evaluation indicators while having similar runtime and model size. Compared to our previous work\cite{zhangVCIP}, the new model saves about 35\% of the parameters and achieves higher performance, although the multi-scale deformable alignments somewhat affect the inference speed. Notably, it outperforms TMNet\cite{xu2021temporal} by 0.44 dB in PSNR on the Vid4\cite{5995614} dataset. The visual results of different methods are shown in Fig.~\ref{fig:5}. It can be seen that, in accordance with its significant quantitative improvements, our model reconstructs more visually pleasing
results with sharper edges and finer details than its competitors.
\\
\textbf{Comparison for temporal consistency. }
We next compare the restoration results of different methods in terms of temporal consistency.
Unlike the vanilla VSR task, ST-VSR needs to restore two types of frames: pre-existing and temporally interpolated frames. The former have their low-resolution images as references, so their restoration is more reliable. In contrast, the latter have no low-resolution reference, so the reconstruction results are relatively poor. This situation may result in quality fluctuations in the restored video, especially when dealing with large motions. It is, therefore, essential to compare the visual quality of the interpolated intermediate frames. We first analyze the restoration results at $t$=0.5 on REDS\cite{Nah_2019_CVPR_Workshops}. REDS is a very challenging dataset containing various scenarios and large motions. The objective results are given in Table~\ref{tab:1}, and one can see that our method outperforms all other methods in terms of PSNR and SSIM. We also compare the subjective results in Fig.~\ref{fig:6}.
\begin{table*}[t]
\caption{ Quantitative comparison (PSNR/SSIM) of continuous space-time super-resolution. The results are calculated on SPMCS. T $\times$ A, S $\times$ B refers to A temporally interpolated frames and an up-sampling scale of B. The top results are highlighted in {\color{red} red}.}\label{tab:3}
\centering
\resizebox{0.85\linewidth}{!}{\begin{tabular}{c|llllll}
\hline
\multicolumn{1}{l|}{} & T $\times$ 2 S $\times$ 2 & T $\times$ 2 S $\times$ 2.4 & T $\times$ 2 S$\times$ 2.8 & T $\times$ 2 S $\times$ 3.2 & T $\times$ 2 S $\times$ 3.6 & T $\times$ 2 S $\times$ 4.0 \\ \hline \hline
VideoINR\cite{chen2022videoinr} & \multicolumn{1}{c}{30.65 / 0.9024} & \multicolumn{1}{c}{29.86 / 0.9024} & \multicolumn{1}{c}{30.44 / 0.9001} & \multicolumn{1}{c}{29.52 / 0.8772} & \multicolumn{1}{c}{28.63 / 0.8622} & \multicolumn{1}{c}{28.61 / 0.8435} \\ \hline
C-STVSR(Ours) & \multicolumn{1}{c}{\textcolor{red}{35.94} / \textcolor{red}{0.9672}} & \multicolumn{1}{c}{\textcolor{red}{34.33} / \textcolor{red}{0.9504}} &\multicolumn{1}{c}{\textcolor{red}{33.02} / \textcolor{red}{0.9308}} & \multicolumn{1}{c}{\textcolor{red}{31.98} / \textcolor{red}{0.9093}} & \multicolumn{1}{c}{\textcolor{red}{30.20} / \textcolor{red}{0.8831}} & \multicolumn{1}{c}{\textcolor{red}{29.80} / \textcolor{red}{0.8600}} \\ \hline
& T $\times$ 4 S $\times$ 2 & T $\times$ 4 S $\times$ 2.4 & T $\times$ 4 S $\times$ 2.8 & T $\times$ 4 S $\times$ 3.2 & T $\times$ 4 S $\times$ 3.6 & T $\times$ 4 S $\times$ 4.0 \\ \hline
VideoINR\cite{chen2022videoinr} & \multicolumn{1}{c}{30.08 / 0.8907} & \multicolumn{1}{c}{29.49 / 0.8923} & \multicolumn{1}{c}{30.04 / 0.8902} & \multicolumn{1}{c}{29.27 / 0.8699} & \multicolumn{1}{c}{28.44 / 0.8551} & \multicolumn{1}{c}{28.44 / 0.8377} \\ \hline
C-STVSR(Ours) & \multicolumn{1}{c}{\textcolor{red}{34.50} / \textcolor{red}{0.9561}} & \multicolumn{1}{c}{\textcolor{red}{33.16} / \textcolor{red}{0.9393}} & \multicolumn{1}{c}{\textcolor{red}{31.96} / \textcolor{red}{0.9187}} & \multicolumn{1}{c}{\textcolor{red}{31.12} / \textcolor{red}{0.8971}} &\multicolumn{1}{c}{\textcolor{red}{29.51} / \textcolor{red}{0.8713}} & \multicolumn{1}{c}{\textcolor{red}{29.22} / \textcolor{red}{0.8484}} \\ \hline
\end{tabular}}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=15cm]{imgs/vis_reds.pdf}
\caption{ Visual comparison of synthetic intermediate frames under extreme motions on the REDS test set. Our method can synthesize more visually plausible results. } \label{fig:6}
\end{figure*}
It can be seen that there exists extreme motion between two successive frames. Some kernel-based methods\cite{2021Zooming,xu2021temporal,2017Video} suffer from severe distortion.
This may be because the offset estimated by kernel-based methods is limited by the receptive field of the convolution kernel. At the same time, the motion information is learned in an unsupervised manner, which is unreliable when estimating large motions. Some flow-based methods\cite{DAIN,chen2022videoinr} also fail to restore sharp edges. In contrast, our method recovers the best details by benefiting from the coarse-to-fine motion estimation and motion compensation. More discussions and visual results are provided in the Ablation Study section.
Since our method can interpolate arbitrary intermediate frames, we then compare the performance of multi-frame interpolation. To verify its temporal consistency, we compare the proposed method with TMNet\cite{xu2021temporal} and VideoINR\cite{chen2022videoinr}, both of which can also perform modulated temporal interpolation.
Specifically, we follow TMNet and study space 4$\times$ and time 6$\times$ results on the Adobe240FPS\cite{su2017deep} test set. Please note that TMNet and VideoINR require high-frame-rate training data to supervise multiple intermediate moments and achieve arbitrary intermediate interpolation. In contrast, our method only supervises the middle moment $t$=0.5 in the training stage. As shown in Table~\ref{tab:2}, our method surpasses its competitors in objective metrics. In addition, our new model achieves higher performance with fewer parameters than our previous work (EBFW\cite{zhangVCIP}). We also provide a visual comparison in Fig.~\ref{fig:7}. As one can see, the scene moves rapidly from left to right; TMNet and VideoINR appear significantly blurred and fail to synthesize a continuous temporal trajectory. By contrast, our method synthesizes the most reliable and plausible results.
All these results demonstrate that our method has great advantages when approximating large motions and synthesizing continuous temporal trajectories.\\
\textbf{Comparison for continuous space-time super-resolution. }
We finally compare the results of continuous spatial-temporal upsampling.
As some previous work\cite{shi2021learning,chen2022videoinr} has pointed out, a two-stage method composed of successive video frame interpolation and image super-resolution performs significantly worse than a one-stage method. In addition, continuous image super-resolution does not consider temporal information, so such a comparison would not be fair. Here, we only consider end-to-end spatio-temporal super-resolution methods. As far as we know, VideoINR\cite{chen2022videoinr} is the only work that achieves continuous ST-VSR and releases its code, so we choose to compare with it. However, VideoINR is not trained on Vimeo-90K-T\cite{DBLP:journals/corr/abs-1711-09078} but on Adobe240FPS\cite{2019Video}, since this method needs to be trained on a high-frame-rate dataset in order to achieve time-arbitrary modulation. For a fair comparison, we followed its settings and retrained our model on the Adobe240FPS training set. The comparison results are provided in Table~\ref{tab:3}. One can see that our method outperforms VideoINR at different combinations of intermediate moments and scales. The visual results also validate the objective results, as can be seen in Fig.~\ref{fig:arb}. Our model reconstructs more reliable results and finer details at different scales.
\begin{table*}[t]
\caption{ Quantitative comparison of PSNR (dB)$\uparrow$ / average GPU memory usage (MB)$\downarrow$ on the REDS test set when feeding different lengths of LR frames. The best performance is highlighted in \textcolor{red}{red}. Here we perform space 4$\times$ and time 2$\times$ interpolation. The memory usage is computed on an LR size of 180 $\times$ 320 using one Nvidia 1080Ti GPU. N/A (not available) denotes out-of-memory cases. }
\resizebox{1.00\linewidth}{!}{\begin{tabular}{c|c|c|c|c|c|c|c|c}
\hline
\begin{tabular}[c]{@{}c@{}}Input\\ length\end{tabular} & 4 & 5 & 6 & 7 & 10 & 17 & 26 & 51 \\ \hline \hline
TMNet\cite{xu2021temporal} & 26.80 / 7590 & 26.82 / 7830 & 26.81 / 9692 & 26.81 / 10904 & N/A & N/A & N/A & N/A \\ \hline
Zooming
Slow-Mo\cite{2021Zooming}& 26.68 / 6096 & 26.72 / 7417 & 26.73 / 9154 & 26.72 / 10590 & N/A & N/A & N/A & N/A \\ \hline
C-STVSR (Ours) & \textcolor{red}{26.81} / \textcolor{red}{3091} & \textcolor{red}{26.83} / \textcolor{red}{3298} & \textcolor{red}{26.85} / \textcolor{red}{3422} & \textcolor{red}{26.86} / \textcolor{red}{3660} & \textcolor{red}{26.88} / \textcolor{red}{4984} &\textcolor{red}{26.89} / \textcolor{red}{5462} & \textcolor{red}{26.90} / \textcolor{red}{6534} & \textcolor{red}{26.99} / \textcolor{red}{10965} \\ \hline
\end{tabular}}
\label{tab:length}
\end{table*}
\begin{table*}[t]
\centering
\caption{ Ablation study on the temporal modulation module. } \resizebox{0.80\linewidth}{!}{
\begin{tabular}{c|c|c|c|c|c}
\hline
& REDS & Vimeo-Fast & Vimeo-Medium & Vimeo-Slow & Vimeo-Total \\
& PSNR / SSIM & PSNR / SSIM & PSNR / SSIM & PSNR / SSIM & PSNR / SSIM \\ \hline \hline
w/o FWG & 26.65 / 0.7434 & 37.01 / 0.9429 & 35.57 / 0.9384 & 33.63 / 0.9180 & 35.40 / 0.9349 \\ \hline
w/o DA & 26.85 / 0.7483 & 36.67 / 0.9410 & 35.60 / 0.9386 & 33.56 / 0.9169 & 35.35 / 0.9345 \\ \hline
w/o FGL & 26.89 / 0.7494 & 36.80 / 0.9430 & 35.65 / 0.9396 & 33.58 / 0.9175 & 35.40 / 0.9356 \\ \hline
C-STVSR(Ours) & 26.99 / 0.7525 & 37.09 / 0.9447 & 35.81 / 0.9404 & 33.69 / 0.9183 & 35.57 / 0.9365 \\ \hline
\end{tabular}}\label{tab:5}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=16cm]{imgs/adobe_vis.pdf}
\caption{ Visual comparison of temporal consistency. We display the reconstruction results of pre-existing frames and five temporally interpolated frames here. Our method can synthesize continuous and consistent trajectories. } \label{fig:7}
\end{figure*}
\subsection{Algorithmic Analyses}
We next analyze the proposed scheme's performance and efficiency and discuss our approach's advantages in mining long-range spatio-temporal dependencies.\\
\textbf{Comparison for long-range modeling ability. }
Much previous work on VFI\cite{xu2019quadratic,kalluri2021flavr} and VSR\cite{yi2021omniscient,2020BasicVSR} has shown that proper usage of multiple frames can boost restoration performance. However, a model can only process frames of finite length at a time due to GPU memory limitations. From a practical point of view, it is essential to compare the memory consumption and performance of different algorithms with different input lengths. To make a fair comparison, we also choose methods that can accept
multiple frames as input. Specifically, we select Zooming Slow-Mo\cite{2021Zooming} and TMNet\cite{xu2021temporal}, which adopt the ConvLSTM structure to propagate information within multiple frames. Our experiment is conducted on the
REDS\cite{Nah_2019_CVPR_Workshops} test set, which consists of 8 sequences, each containing 100 frames. We downsample the odd-index frames (e.g., the $1^{st}$, $3^{rd}$, $\cdots$) by a scale factor of 4 and feed them into the models to reconstruct the corresponding HR frames.
Quantitative results are provided in Table~\ref{tab:length}. As can be seen, while processing the same number of input frames as its competitors, our approach consumes much less memory. Meanwhile, by increasing the number of input frames, our method can continuously and stably improve the reconstruction performance. We also observed that on some test sequences, the length of the input significantly affects the reconstruction
performance. For example, on the Vid4 dataset, if our model accepts four frames as input at a time, the PSNR is only 26.43; if we directly feed all frames of each sequence into the model, the PSNR rises to 26.87. This result illustrates the importance of long-distance information for improving the performance of ST-VSR. Please note that we conduct this experiment on the REDS dataset because the Vid4 dataset contains four sequences with different resolutions
and lengths, making it difficult to calculate the average memory consumption. These results show that our method can make use of long-distance temporal information more efficiently and effectively. \\
\noindent \textbf{More discussion about performance and efficiency.}
After comparing performance and efficiency, it is necessary to explain why the proposed model is so memory efficient. In general, we fully consider how to save memory when designing the temporal interpolation module and spatial up-sampling module. The specific reasons can be
\begin{table*}[t]
\caption{Ablation study (PSNR (dB) / SSIM) on spatial modulation. T $\times$ A, S $\times$ B refers to A temporally interpolated frames and a space up-sampling scale of B. The best performance is highlighted in \textcolor{red}{red}. }
\label{tab:6}
\centering
\resizebox{0.90\linewidth}{!}{\begin{tabular}{c|llllll}
\hline
\multicolumn{1}{l|}{} & T $\times$ 2 S $\times$ 2 & T $\times$ 2 S $\times$ 2.4 & T $\times$ 2 S$\times$ 2.8 & T $\times$ 2 S $\times$ 3.2 & T $\times$ 2 S $\times$ 3.6 & T $\times$ 2 S $\times$ 4.0 \\ \hline \hline
C-STVSR (-b) & \multicolumn{1}{c}{37.27 / 0.9791} & \multicolumn{1}{c}{28.22 / 0.9018} & \multicolumn{1}{c}{27.21 / 0.8750} & \multicolumn{1}{c}{32.13 / 0.9208} & \multicolumn{1}{c}{26.78 / 0.8477} & \multicolumn{1}{c}{30.30 / 0.8819} \\ \hline
C-STVSR (-ps1) & \multicolumn{1}{c}{37.07 / 0.9790} & \multicolumn{1}{c}{34.77 / 0.9650} & \multicolumn{1}{c}{33.95 / 0.9464} & \multicolumn{1}{c}{32.32 / 0.9209} & \multicolumn{1}{c}{\textcolor{red}{30.55} / 0.9018} & \multicolumn{1}{c}{\textcolor{red}{30.36} / 0.8781} \\ \hline
C-STVSR (-fix) & \multicolumn{1}{c}{33.54 / 0.9518} & \multicolumn{1}{c}{33.35 / 0.9284} & \multicolumn{1}{c}{33.28 / 0.9337} & \multicolumn{1}{c}{32.55 / 0.9203} & \multicolumn{1}{c}{30.43 / 0.8974} & \multicolumn{1}{c}{30.14 / 0.8861} \\ \hline
C-STVSR (Ours) & \multicolumn{1}{c}{\textcolor{red}{37.65} / \textcolor{red}{0.9797}} & \multicolumn{1}{c}{\textcolor{red}{34.82} / \textcolor{red}{0.9660}} &\multicolumn{1}{c}{\textcolor{red}{34.19} / \textcolor{red}{0.9495}} & \multicolumn{1}{c}{\textcolor{red}{32.99} / \textcolor{red}{0.9304}} & \multicolumn{1}{c}{\textcolor{red}{30.55} / \textcolor{red}{0.9082}} & \multicolumn{1}{c}{30.24 / \textcolor{red}{0.8865}} \\ \hline
& T $\times$ 4 S $\times$ 2 & T $\times$ 4 S $\times$ 2.4 & T $\times$ 4 S $\times$ 2.8 & T $\times$ 4 S $\times$ 3.2 & T $\times$ 4 S $\times$ 3.6 & T $\times$ 4 S $\times$ 4.0 \\ \hline
C-STVSR (-b) & \multicolumn{1}{c}{36.00 / 0.9707} & \multicolumn{1}{c}{28.08 / 0.8966} & \multicolumn{1}{c}{27.14 / 0.8701} & \multicolumn{1}{c}{31.48 / 0.9106} & \multicolumn{1}{c}{26.70 / 0.8420} & \multicolumn{1}{c}{\textcolor{red}{29.88} / 0.8718} \\ \hline
C-STVSR (-ps1) & \multicolumn{1}{c}{35.82 / 0.9706} & \multicolumn{1}{c}{33.81 / 0.9557} & \multicolumn{1}{c}{33.02 / 0.9355} & \multicolumn{1}{c}{31.59 / 0.9099} & \multicolumn{1}{c}{30.05 / 0.8904} & \multicolumn{1}{c}{29.85 / 0.8674} \\ \hline
C-STVSR (-fix) & \multicolumn{1}{c}{32.85 / 0.9436} & \multicolumn{1}{c}{32.58 / 0.9191} & \multicolumn{1}{c}{32.42 / 0.9222} & \multicolumn{1}{c}{31.72 / 0.9081} & \multicolumn{1}{c}{29.92 / 0.8853} & \multicolumn{1}{c}{29.72 / 0.8751} \\ \hline
C-STVSR (Ours) & \multicolumn{1}{c}{\textcolor{red}{36.29} / \textcolor{red}{0.9717}} & \multicolumn{1}{c}{\textcolor{red}{33.89} / \textcolor{red}{0.9574}} & \multicolumn{1}{c}{\textcolor{red}{33.27} / \textcolor{red}{0.9393}} & \multicolumn{1}{c}{\textcolor{red}{32.22} / \textcolor{red}{0.9194}} &\multicolumn{1}{c}{\textcolor{red}{ 30.08} / \textcolor{red}{0.8967}} & \multicolumn{1}{c}{29.78 / \textcolor{red}{ 0.8758}} \\ \hline
\end{tabular}}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=18cm]{imgs/arb_shape.pdf}
\caption{ Qualitative comparison with VideoINR for continuous space-time video super-resolution on SPMCS. T=A, S=B refers to intermediate time A and up-sampling scale B. } \label{fig:arb}
\end{figure*} summarized as follows: First, the motion information is estimated and used in a very efficient manner. The optical flow is estimated only once but used three times. Specifically, we extract the optical flow using Spynet\cite{ranjan2017optical}, which is first used to align features in the first stage. Next, the optical flow is used again in the second phase to synthesize the
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{imgs/forward_guidance.pdf}
\caption{Visual comparison with and without forward warping guidance. Some areas are zoomed in for a better view. } \label{fig:8}
\end{figure}
intermediate temporal frames. Moreover, the optical flow is finally used as guidance information to generate pseudo-labels for training supervision. In contrast, other RNN-based methods\cite{2021Zooming,xu2021temporal} estimate the motion information of the pre-existing frames and temporally interpolated frames separately, which ignores the spatio-temporal connection between the VFI and the VSR task. Secondly, unlike other complex and elaborate upsampling module designs, we have designed a very straightforward cascading pixel shuffle module. This kind of depth-to-space operation itself does not introduce any learnable parameters. We opt not to design complex conditional
convolutions\cite{hu2019meta,wang2021learning} or implicit local representation\cite{chen2022videoinr} to accurately estimate the weight of each spatial location. Instead, we aim to design a memory-friendly module that can efficiently exploit long-range information.
\subsection{Ablation Study}
To further investigate the proposed algorithm, we design a set of ablation experiments for different modules.
\subsubsection{Temporal Modulation}
In this section, we demonstrate the effect of each component that contributes to temporal modulation. The quantitative results are shown in Table~\ref{tab:5}.\\
\textbf{Forward warping guidance (FWG)}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{imgs/wo_DCN.pdf}
\caption{Visual comparison with and without deformable alignment. Some areas are zoomed in for a better view.}\label{fig:9}
\end{figure}
We compare two alignment methods: using only deformable convolution to approximate the intermediate frames and using forward warping to
guide the deformable alignment.
As seen in Table~\ref{tab:5}, the absence of FWG results in a performance drop on the Vimeo and REDS datasets. This suggests that forward warping can help synthesize more reliable intermediate frames. We also observe that forward warping guidance may considerably influence subjective visual quality in some cases, especially when approximating large motions. A typical example is shown in Fig.~\ref{fig:8}, where there is extremely large motion between two consecutive frames. The model with FWG shows overwhelming subjective visual improvements. We must emphasize that objective indicators (PSNR or SSIM) sometimes cannot reflect subjective visual perception, especially for the video frame interpolation task. Although our method synthesizes visually pleasing results, there is still a considerable positional bias between the predicted frame and the ground truth, because much real-world motion is non-linear. \\
\textbf{Deformable alignment}
Although forward warping can already synthesize visually appealing results, using only an off-the-shelf optical flow estimator may still not be flexible enough, because optical flow estimation is not completely accurate. Therefore, we further introduce deformable alignment (DA), expecting the model to adaptively learn various complex motions during training. The objective results with and without deformable convolution are provided in Table~\ref{tab:5}. It can be seen that deformable alignment after forward warping further improves the restoration performance. Visual comparisons are illustrated in Fig.~\ref{fig:9}. We find that deformable alignment helps restore sharper details.\\
\textbf{Flow-guided texture consistency loss}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{imgs/wo_psu.pdf}
\caption{ Visual comparison of results with and without the flow-guided texture consistency loss. Pay attention to the areas indicated by red arrows; zoom in for a better view.} \label{fig:11}
\end{figure}
Based on Zhou \emph{et al.}'s work\cite{zhou2022exploring}, we propose a flow-guided texture consistency loss (FGL) to relax the strict restriction to the ground truth of intermediate frames and greatly reduce the patch search space. Through experiments, we observe that this loss also improves reconstruction performance and helps preserve the structures and patterns of interpolated contents. We provide a visual comparison in Fig.~\ref{fig:11}.
A dancer's arm is moving from top to bottom. Note the positions indicated by the red arrows: FGL helps restore a complete semantic pattern.
\subsubsection{Spatial Modulation}
We next focus on the spatial upsampling component, including model design and training strategies. All these models are trained on Vimeo90K-T\cite{DBLP:journals/corr/abs-1711-09078} and tested on the SPMCS\cite{tao2017detail} dataset. The comparison results are shown in Table~\ref{tab:6}.\\
\textbf{Module design}
To verify the validity of the proposed design, we first establish a baseline with a bilinear operation. To be specific, we simply replace stage 3 with a bilinear function followed by a 1$\times$1 convolution to accommodate different scales. We denote this baseline model as C-STVSR (-b). As can be seen in Table~\ref{tab:6}, directly scaling the features with bilinear interpolation results in severe performance fluctuations.
We also observe that such an upsampling approach may harm training stability. This is probably because directly interpolating the generated feature maps ignores the scale information, which is essential for scale-arbitrary super-resolution.
We then study the cascading upsampling module. To verify the validity of the multi-scale depth-to-space module, we train a model that only contains a single pixel shuffle layer (4$\times$ spatial magnification) and denote it as C-STVSR (-ps1). As can be seen in Table~\ref{tab:6}, it is slightly worse in most situations than the multi-scale depth-to-space module, especially for the scales missing from the cascading module (2$\times$ in this case). Since depth-to-space layers bring almost no additional compute cost, we sample the multi-scale features to further enhance the model's adaptability to different scales.\\
\textbf{Training strategy.}
As noted above, we train our model with different scales to force the model to learn scale-aware information. To explore the effectiveness of different training strategies, we also train the model with a fixed scale (space 4$\times$ upscaling) and test it at different scales. When the test scale is close to the training scale, the model's performance is comparable to C-STVSR, but when the test scale varies, there is a severe drop in performance. This suggests that training data at different scales has a huge effect on the network's ability to adapt to different super-resolution magnifications.
\section{Introduction}
\IEEEPARstart{W}{ITH} the rapid development of multimedia technology,
high-resolution (HR) slow-motion video sequences are becoming increasingly popular since they can provide more visually appealing details. However, when we record a video with
a camera or smartphone, it is often stored with limited
spatial resolutions and temporal frame rates. Space-Time video super-resolution (ST-VSR) aims to convert low-frame-rate and low-resolution videos to higher spatial and temporal resolutions, which finds a wide range of applications\cite{su2017deep,flynn2016deepstereo}.
With the exponential increase of multimedia big data, deep neural network approaches have shown great advantages in various video restoration tasks. In fact, space-time video super-resolution can be divided into
two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR). In practice, one may adopt a VFI and a VSR method separately to realize spatio-temporal upscaling for a given video.
However, a video sequence's temporal and spatial information is strongly correlated. Two-stage methods will not only introduce additional computational complexity but inevitably fail to explore the spatio-temporal relationship thoroughly.
Rather than performing VFI and VSR independently, researchers have recently begun to favor modeling the two tasks jointly\cite{2021Zooming,xu2021temporal} to better exploit a more efficient space-time representation.
Although existing ST-VSR work has made significant progress, some problems remain unsolved, which hinder the practical application of space-time super-resolution technology. One of the most critical points is that most present methods are not flexible. Given a frame sequence, most algorithms can only expand it to pre-defined intermediate moments or resolutions. Although some methods have tried to solve this problem to some extent, they still suffer from temporal inconsistency and lack the capability of exploiting long-range temporal information. Specifically, TMNet\cite{xu2021temporal} implements a temporal modulation network to interpolate arbitrary intermediate frames, but its kernel-based motion estimation often results in serious temporal inconsistencies when dealing with large motions. The work most relevant to our approach is USTVSR\cite{shi2021learning} and VideoINR\cite{chen2022videoinr}. Both methods can modulate the input video to an arbitrary resolution and frame rate, but they only consider the LR input within two neighboring video frames and ignore information from a long distance, which severely limits their performance. At the same time, these methods are prone to severe blurring when dealing with extreme motions and fail to synthesize stable and continuous temporal trajectories, bringing about severe degradation in subjective visual quality.
To address these problems, we propose a continuous space-time video super-resolution method. It provides excellent spatiotemporal flexibility and integrates a powerful ability to handle large motions, which allows for synthesizing more visually appealing results.
Our contributions are summarized as follows:
\begin{itemize}
\item A continuous ST-VSR framework is proposed, which incorporates a time-arbitrary interpolation module and a scale-arbitrary upsampling module to achieve arbitrary spatio-temporal super-resolution.
\item For arbitrary temporal interpolation, a lightweight frame interpolation module and a novel flow-guided texture consistency loss are proposed. Benefiting from a straightforward yet effective hole-filling design and the optical flow reorganization trick, the proposed model can better deal with extreme motions while maintaining high efficiency.
\item For arbitrary spatial upsampling, a simple yet effective cascading depth-to-space module is designed. We discard complex scale-aware pixel adaptation modules to save memory, which makes it possible to deal with long input sequences and aggregate long-term temporal information.
\end{itemize}
By tapping into long-term space-time knowledge with a memory-friendly design, our method outperforms existing methods in both objective quality and subjective visual effects.
The remainder of the paper is organized as follows. In Section II, we review some related work. Section III presents the details of the proposed method. Experimental results and analysis are given in Section IV. Finally, the paper is concluded in Section V.
\section{Methods}
\begin{figure*}[htbp]
\centering
\includegraphics[width=15.5cm]{imgs/overview_3p.pdf}
\caption{The framework of the proposed method, which can be divided into three stages. The input features first go through a bidirectional RNN network for feature propagation to achieve multiple-frame feature enhancement. Afterward, we synthesize the latent representation of intermediate states with two adjacent temporal features. Finally, the features obtained in the first two steps are fed into an upsampling module to achieve scale-arbitrary super-resolution. } \label{fig:1}
\end{figure*}
Given a sequence of $N$ LR input frames $I_{LR} \in \mathbb{R}^{N\times H \times W}$, together with the desired temporal upscaling factor $R$ and spatial upscaling factors $(S_{H},S_{W})$, our goal is to learn a model $\mathcal{\phi}$ that restores the corresponding high-resolution and high-frame-rate video sequence:
\begin{equation}
\begin{aligned}
I_{SR} &= \mathcal{\phi}(I_{LR},R,S_{H},S_{W}), \quad I_{SR} \in \mathbb{R}^{R \cdot N\times H'\times W'},\\
&\text{where} \quad H' = H \cdot S_{H}, \quad W' = W \cdot S_{W}.
\end{aligned}
\label{f8}
\end{equation}
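As a sanity check on the expected tensor shapes, the following PyTorch-style snippet instantiates the mapping $\mathcal{\phi}$ as a placeholder; the concrete sizes are illustrative assumptions.
\begin{verbatim}
import torch

N, H, W, R, s_h, s_w = 7, 64, 112, 2, 4.0, 4.0
lr_frames = torch.randn(1, N, 3, H, W)   # N low-resolution RGB frames

def phi(lr, r, s_h, s_w):
    # placeholder for the full model: only the output shape matters here
    b, n, c, h, w = lr.shape
    return torch.zeros(b, r * n, c, int(h * s_h), int(w * s_w))

sr_frames = phi(lr_frames, R, s_h, s_w)
print(sr_frames.shape)   # torch.Size([1, 14, 3, 256, 448])
\end{verbatim}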
For ease of understanding, we first give an overview of the proposed method before diving into the specific method. The details of each module will be discussed in the corresponding section.
\subsection{ Basic Structure}\label{basic}
As shown in Fig~\ref{fig:1}, our method consists of three seamless stages:
Feature propagation, Temporal modulation, and Spatial upsampling.\\
\textbf{Feature propagation}
In the first stage, we follow the bidirectional recurrent structure from \cite{2021Zooming,2020BasicVSR,yi2021omniscient} to propagate information between input frames. Precisely, the model consists of two sub-branches, the forward and backward branches, which propagate information through continuously updated hidden states. A skip-connection structure is also introduced to facilitate the aggregation of features at different depths. After the bidirectional propagation, the information from the entire input sequence is well exploited for subsequent reconstruction. Specifically, we use flow-guided deformable alignment\cite{chan2021basicvsr++} to aggregate information in the bidirectional propagation process and employ a pre-trained Spynet\cite{ranjan2017optical} to generate optical flow across frames.\\
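The two-pass propagation can be sketched as follows in PyTorch style; \texttt{SimpleCell} and the way the backward states are fused are simplified placeholders, and the flow-guided deformable alignment inside each cell is omitted.
\begin{verbatim}
import torch
import torch.nn as nn

class SimpleCell(nn.Module):
    # hypothetical propagation cell: fuses the current feature with the
    # hidden state (alignment of the hidden state is omitted here)
    def __init__(self, channels=64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feat, hidden):
        return self.fuse(torch.cat([feat, hidden], dim=1)) + feat

def bidirectional_propagate(feats, fwd_cell, bwd_cell):
    # feats: list of per-frame features, each [B, C, H, W]
    hidden = torch.zeros_like(feats[0])
    backward_states = [None] * len(feats)
    for i in range(len(feats) - 1, -1, -1):       # backward branch
        hidden = bwd_cell(feats[i], hidden)
        backward_states[i] = hidden
    hidden = torch.zeros_like(feats[0])
    outputs = []
    for i in range(len(feats)):                   # forward branch
        hidden = fwd_cell(feats[i], hidden + backward_states[i])
        outputs.append(hidden)
    return outputs
\end{verbatim}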
\textbf{Temporal modulation}
In the second stage, our goal is to interpolate the desired intermediate features using the two adjacent pre-existing features generated in stage one. We propose a cross-scale forward warping guided alignment module (FWGA) to approximate the intermediate feature. Through experiments, we find that FWGA produces more plausible results when dealing with extreme motions while remaining highly efficient. In addition, we propose a flow-guided texture consistency loss, which further improves the quality of interpolated frames while significantly reducing the computational cost of patch matching. \\
\textbf{Spatial upsampling}
In the last stage, we aim to generate the final SR output for the pre-existing features obtained in stage 1 and the temporally interpolated features from stage 2.
We present a memory-saving cascading depth-to-space module to realize scale-arbitrary upsampling. The proposed module's advantages over and differences from existing single-image super-resolution (SISR) upsamplers are also discussed in this section.
\subsection{ Temporal Modulation }
\begin{figure*}[htbp]
\centering
\includegraphics[width=16cm]{imgs/tim.pdf}
\caption{In (a), we show a series of images including the forward warping results from $I_{0}$ to $I_{t}$ and $I_{1}$ to $I_{t}$.
For better illustration, we colored the holes introduced by forward warping with \textcolor{blue}{blue} and \textcolor{red}{red}, respectively. In (b), we synthesize the intermediate feature in a coarse-to-fine manner, that is: Forward warping is performed first and deformable convolution is further introduced to approximate finer details. } \label{fig:2}
\end{figure*}
After the feature representations are obtained in the first stage, we proceed to interpolate intermediate temporal features in the second stage.
In fact, many VFI methods\cite{2020Softmax,0Super,sim2021xvfi} can achieve arbitrary intermediate frame synthesis using optical flow. Our work differs from previous work in two ways. First, most current VFI methods aim to approximate an intermediate frame with the adjacent two\cite{0Super,sim2021xvfi} or four\cite{xu2019quadratic,kalluri2021flavr} frames, since for vanilla video frame interpolation, information from temporally distant frames may not be very helpful for synthesizing the current frame. However, we find that long-range temporal information is critical for space-time video super-resolution and that aggregating long-term information leads to higher performance. Second, to handle large motion and occlusion, researchers tend to design more and more complex motion compensation\cite{2020Softmax} and post-processing modules\cite{park2021asymmetric}, which are very time- and memory-consuming. Adopting such complex VFI modules for every synthesized intermediate frame in a recurrent structure is impractical. Therefore, a lightweight and efficient frame interpolation module is needed.
The flowchart of the proposed method is illustrated in Fig.~\ref{fig:1}, stage 2, and the details of the alignment unit are given in Fig.~\ref{fig:2}. This module is based on our previous work\cite{zhangVCIP}. Compared to that work, we improve its performance through multi-scale feature alignment and a flow-guided texture consistency loss, which will be detailed in Section~\ref{FGloss}.
Inspired by EDVR\cite{2019EDVR}, the interpolation process is applied in a multi-scale aggregation manner to better address the complex and variable motions. We first use stride convolution to obtain three-level features and perform a coarse-to-fine alignment strategy. Specifically, given two neighboring features $F_{0,1}$ that contain information from preceding and subsequent states, our goal is to interpolate the intermediate features corresponding to the anchor moment $t \in (0,1)$. This process can be described as follows:
\begin{equation}
\begin{aligned}
\widetilde{F}^{l}_{0 \rightarrow t} &= t \cdot \mathcal{FW}(F^{l}_{0},\mathcal{V}_{0 \rightarrow 1}) ,\\
\widetilde{F}^{l}_{1 \rightarrow t} &= (1-t) \cdot \mathcal{FW}(F^{l}_{1},\mathcal{V}_{1 \rightarrow 0}) ,
\end{aligned}
\label{f9}
\end{equation}
where $F^{l}_{\{0,1\}}$ denotes the two adjacent features at level $l$ and $\mathcal{FW}$ denotes forward warping. In general, forward warping suffers from hole issues and pixel conflict problems when multiple source pixels move to the same target pixel.
We adopt softmax splatting\cite{2020Softmax} to resolve the pixel conflict problem. Regarding the hole issue, different from \cite{2020Softmax,2018Context}, we use a very light module to refill the missing context. Following the assumption of \cite{0Super}, if a pixel is visible at moment $t$, it is most likely visible in at least one of the two successive images. We also observe that, for either of the two warped frames, the majority of holes are complementary to their counterparts (that is, in Fig.~\ref{fig:2} (a), most \textcolor{red}{red} areas and \textcolor{blue}{blue} areas are non-overlapping). So we use two masks to reveal these regions and complement them with pixels at the corresponding positions. This process can be formulated as follows:
\begin{equation}
\begin{aligned}
&F^{l}_{0 \rightarrow t} = \widetilde{F}^{l}_{0 \rightarrow t} \otimes \mathcal{M}^{l}_{0}+\widetilde{F}^{l}_{1 \rightarrow t} \otimes \overline{\mathcal{M}}^{l}_{0}, \\
&F^{l}_{1 \rightarrow t} = \widetilde{F}^{l}_{1 \rightarrow t} \otimes \mathcal{M}^{l}_{1}+\widetilde{F}^{l}_{0 \rightarrow t} \otimes \overline{\mathcal{M}}^{l}_{1}, \\
\text{with} \quad & \mathcal{M}^{l}_{\{0,1\}}(x,y) =
\begin{cases}
0 & \text{if } \widetilde{F}^{l}_{\{0,1\} \rightarrow t}(x,y) = 0, \\
1 & \text{otherwise},
\end{cases}
\end{aligned}
\label{f10}
\end{equation}
Here, we define two binary masks $\mathcal{M}^{l}_{\{0,1\}}$ for the two warped features, where pixels are labeled 0 for holes
and 1 otherwise, and $\overline{\mathcal{M}}$ refers to the element-wise NOT. Please note that, in Fig.~\ref{fig:2}, we show the hole-filling method on color images for better visualization; in the network this process is actually performed in the feature space.
In reality, this simple process can achieve seemingly plausible results in many cases.
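A minimal PyTorch-style sketch of this complementary hole filling is given below; treating exactly-zero locations of the warped features as holes is an assumption made only for illustration.
\begin{verbatim}
import torch

def fill_holes(warped_0t, warped_1t):
    # warped_0t, warped_1t: [B, C, H, W] features forward-warped from
    # frames 0 and 1 to time t; holes are assumed to be exactly zero
    m0 = (warped_0t.abs().sum(dim=1, keepdim=True) > 0).float()
    m1 = (warped_1t.abs().sum(dim=1, keepdim=True) > 0).float()
    f0t = warped_0t * m0 + warped_1t * (1.0 - m0)
    f1t = warped_1t * m1 + warped_0t * (1.0 - m1)
    return f0t, f1t
\end{verbatim}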
However, some misaligned pixels still exist due to inaccurate optical flow estimation and complex non-linear motions in the real world. To relax the overly strict restriction of such optical-flow-based alignment, we further introduce DCN\cite{2017Deformable} to hallucinate the intermediate frame so that the network can further correct misalignment through learning. The whole process of multi-scale feature interpolation is carried out from low resolution to high resolution. The low-resolution results are used to guide the high-resolution alignment, which can be formulated as follows:
\begin{equation}
\begin{aligned}
\Delta{P}_{0 \rightarrow t}^{l} &= f([F^{l}_{0 \rightarrow t},F^{l}_{1 \rightarrow t}], (\Delta{P}^{l+1})^{\uparrow 2}), \\
(F^{aligned}_{0 \rightarrow t})^{l} &= DConv(F^{l}_{0 \rightarrow t},\Delta{P}_{0 \rightarrow t}^{l}),
\end{aligned}
\label{f11}
\end{equation}
where $f$ denotes several convolution layers that estimate the offsets, $\Delta{P}_{0 \rightarrow t}^{l}$ refers to the estimated offset at level $l$, and ${( \dots)}^{\uparrow s}$ represents upscaling by a factor of $s$.
$DConv$ denotes deformable convolution\cite{2017Deformable}.
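One level of this coarse-to-fine refinement can be sketched with \texttt{torchvision}'s deformable convolution as follows; the kernel size, the factor of 2 applied to the upsampled offsets, and the module names are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import deform_conv2d

C, K = 64, 3   # feature channels and assumed deformable kernel size
offset_conv = nn.Conv2d(2 * C + 2 * K * K, 2 * K * K, 3, padding=1)
dcn_weight = torch.randn(C, C, K, K) * 0.01

def refine_level(f0t, f1t, offset_coarse):
    # upsample the coarser offsets (scaled by 2 for the finer grid),
    # predict residual offsets, then deformably align the feature
    up = 2.0 * F.interpolate(offset_coarse, scale_factor=2,
                             mode='bilinear', align_corners=False)
    offset = offset_conv(torch.cat([f0t, f1t, up], dim=1))
    aligned_0t = deform_conv2d(f0t, offset, dcn_weight, padding=1)
    return aligned_0t, offset
\end{verbatim}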
Finally, features from both sides are blended by a learnable weight $\mathcal{W}$ to get the intermediate feature $F_{t}$ at moment $t$:
\begin{equation}
\begin{aligned}
\mathcal{W} &= Sigmoid(Conv(F^{0}_{0 \rightarrow t},F^{0}_{1 \rightarrow t})),\\
F_{t} &= \mathcal{W}\otimes F^{0}_{0 \rightarrow t}+(1-\mathcal{W})\otimes F^{0}_{1 \rightarrow t}.
\end{aligned}
\label{f12}
\end{equation}
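A small sketch of this learnable blending, assuming a $3\times3$ convolution produces the weight map:
\begin{verbatim}
import torch
import torch.nn as nn

class BlendFeatures(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, 1, 3, padding=1)

    def forward(self, f0t, f1t):
        # per-pixel weight in (0, 1) blending the two aligned features
        w = torch.sigmoid(self.conv(torch.cat([f0t, f1t], dim=1)))
        return w * f0t + (1.0 - w) * f1t
\end{verbatim}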
We note that this multi-scale forward warping is similar in form to SoftSplat\cite{2020Softmax}. However, our method differs in three aspects. First, SoftSplat employs a nested gridnet\cite{2018Context} to fill the holes caused by forward warping. Such a dense connection module has acceptable complexity if only two adjacent frames are used to interpolate the middle frame, but our model may accept many frames as input while also interpolating multiple frames between every two adjacent frames, in which case gridnet would be too time-consuming and memory-intensive. To this end, we directly use the two warped features to complement each other, which only involves several convolutions for feature blending. Second, SoftSplat utilizes PWC-Net\cite{2017PWC} as the optical flow estimator, which is also employed by many VFI methods, whereas we use the very light flow estimator Spynet\cite{ranjan2017optical}, which is seldom chosen by flow-based VFI methods. As a result, the accuracy of our optical flow estimation is relatively poor, even though accurate optical flow is crucial for the VFI task. We still choose Spynet because 1) the optical flows are already calculated in stage 1 and do not need to be re-estimated; 2) we do not want an overly complicated optical flow module that would affect the model's efficiency; and 3) deformable convolution is further introduced to compensate for possibly inaccurate optical flow estimates and to approximate finer details.
Till now, the temporal modulation process has been completed, and the features of pre-existing frames (in stage 1) and temporally interpolated frames(in stage 2) are ready to be upsampled in the next stage.
\subsection{ Scale-arbitrary Upsampling }\label{upsample}
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{imgs/arb_upsampling.pdf}
\caption{ Muti-scale feature blending module. We first perform a depth-to-space transformation to obtain features that expand to integral multiples of their original size. Afterward, we feed the relative position offsets of the pixels into a multilayer perceptron, allowing the model to adapt to different expanding scales. The learned offset and features are then combined and interpolated to the target space size via a bilinear interpolation function to get the output features.
} \label{fig:3}
\end{figure}
Scale-arbitrary upsampling for single-image super-resolution (SISR) has been explored in much previous work\cite{hu2019meta,wang2021learning,chen2021learning}. In contrast, there is much less exploration of scale-arbitrary video super-resolution. A straightforward idea is that video super-resolution does not need to be studied explicitly: just taking any off-the-shelf magnification-arbitrary upsampling module should fulfill this goal. But this is not the case, because most of these methods require estimating a unique sampling weight for each position. Such estimation is so memory-intensive that it is almost impossible to process video with multiple input frames on an ordinary graphics card (e.g., an NVIDIA 1080Ti GPU). In Meta-SR\cite{hu2019meta}, to achieve scale-arbitrary upsampling, Hu \emph{et al.} propose a meta-upscale module that takes a sequence of coordinate-related and scale-related vectors as input to predict the convolutional weights for every pixel. Based on Meta-SR, Wang \emph{et al.}\cite{wang2021learning} further propose a plug-in module to generate dynamic scale-aware filters, which enables the network to adapt to an arbitrary scale factor. Although these methods achieve satisfactory results at different scales, their memory consumption is still unacceptable for our task; the memory footprint of spatially-varying filtering\cite{wang2021learning} can be very high (for a 720P HR image, a kernel size of $k=3$ costs about 31.6 GB of memory, and even $k=1$ still costs about 3.5 GB). Based on the above discussion, our motivation is to design a memory-saving and efficient upsampling module.
Several candidate operations exist to realize upsampling, including 1) bilinear upsampling, 2) strided deconvolution, and 3) the pixel shuffle operation. The bilinear re-scaling method
is more flexible because it can be applied to arbitrary scale
ratios. However, applying bilinear upsampling to obtain a super-resolved RGB image may lead to severe performance degradation. Among the learning-based methods, we choose to adopt PixelShuffle\cite{2016Real} as our basic module since it only involves the rearrangement of elements of the input features and does not introduce additional learnable parameters; almost all recent video super-resolution methods have adopted this approach. However, this depth-to-space operation does not accommodate arbitrary sizes. To this end, we propose a cascaded depth-to-space module, which allows the network to learn LR-to-HR mappings at different scales in a memory-efficient manner while maintaining scale flexibility. As shown in Fig.~\ref{fig:3}, in stage 1, we first employ multiple scale-aware residual blocks to extract features. Every block contains a scale-aware convolution\cite{wang2021learning} followed by several residual convolutions. In this way, features of different depths are integrated with scale information. In stage 3, we first perform the sub-pixel shuffling operation at three different scales:
\begin{equation}
\begin{aligned}
Feat_{ps}^{l} &= \mathcal{PS}(Feat_{in},{2}^{l}), \quad l \in \{1,2,3\},
\end{aligned}
\label{f13}
\end{equation}
where $\mathcal{PS}$ denotes a periodic shuffling operator\cite{2016Real} that rearranges the elements of an $H \times W \times C \cdot r^{2}$
tensor into a tensor of shape $r \cdot H \times r \cdot W \times C$. Note that we transform the feature's channel dimension $C$ at each scale before the $\mathcal{PS}$ operation to ensure that the results obtained at different scales have the same number of channels (here, we set $C=32$).
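A PyTorch-style sketch of this multi-scale depth-to-space step is shown below; realizing the channel transformation with per-scale $1\times1$ convolutions is an assumption.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadedDepthToSpace(nn.Module):
    # shuffle the input features to 2x, 4x and 8x spatial size,
    # each with the same number of output channels (here 32)
    def __init__(self, in_channels=64, out_channels=32):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels * (2 ** l) ** 2, 1)
            for l in (1, 2, 3)
        ])

    def forward(self, feat_in):
        outs = []
        for l, head in zip((1, 2, 3), self.heads):
            outs.append(F.pixel_shuffle(head(feat_in), 2 ** l))
        return outs   # list of [B, 32, 2^l*H, 2^l*W] tensors
\end{verbatim}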
\begin{figure*}[htbp]
\centering
\includegraphics[width=16cm]{imgs/flow_loss.pdf}
\caption{(a) Schematic for optical flow guided patch warping, which aims to generate the pseudo label for texture consistency loss. (b) Comparison of patch searching space. Zhou's method\cite{zhou2022exploring} needs to search candidate patches from two reference frames one by one. In contrast, our method can directly find desired patches via patch warping and dramatically reduces the search space.} \label{fig:4}
\end{figure*}
To implicitly enable the network to accommodate sampling at different scales, we follow \cite{wang2021learning} and compute the relative offset positions of the input features, outputting a relative distance vector $[Dis_{x},Dis_{y}]$:
\begin{equation}
\begin{aligned}
LR(\sigma) &= \frac{\sigma+0.5}{R_{\sigma}}-0.5,\\
Dis_{\sigma} &= LR(\sigma)- \left\lfloor\frac{\sigma+0.5}{R_{\sigma}}\right\rfloor, \quad \sigma \in \{x,y\}.
\end{aligned}
\label{f15}
\end{equation}
In this process, each pixel at location $(x,y)$ in HR space is mapped to the LR space to estimate its coordinates ($LR(x)$, $LR(y)$) and relative distances ($Dis_{x}$, $Dis_{y}$), where $\lfloor \dots \rfloor$ denotes the floor operator. The relative distance vectors ($Dis_{x}$, $Dis_{y}$) and scale factors ($S_{x}$, $S_{y}$) are fed into a small MLP to get the offset maps $\delta_{x}$, $\delta_{y}$. Next, we add the learned offsets to $Feat_{ps}$ to get the scale-adaptive features $Feat_{adapt}$. Then, a bilinear interpolation function is adopted to modulate the features to the target scale:
\begin{equation}
\begin{aligned}
Feat^{l} &= bilinear( Feat_{adapt}^{l}, \frac{S_{H}}{{2}^{l}} ,\frac{S_{W}}{{2}^{l}}), \quad l \in \{1,2,3\},
\end{aligned}
\label{f14}
\end{equation}
where $S_{H}$ and $S_{W}$ denote the height and width scaling factors. Finally, we utilize several convolution layers to blend these features and obtain the super-resolved color images.
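The relative-distance computation and the bilinear modulation to the target scale can be sketched as follows; the tensor layout and function names are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def relative_distances(out_h, out_w, s_h, s_w):
    # Dis_x, Dis_y for every HR pixel mapped back into LR space
    ys = torch.arange(out_h).float()
    xs = torch.arange(out_w).float()
    dis_y = (ys + 0.5) / s_h - 0.5 - torch.floor((ys + 0.5) / s_h)
    dis_x = (xs + 0.5) / s_w - 0.5 - torch.floor((xs + 0.5) / s_w)
    return torch.stack([dis_x.view(1, -1).expand(out_h, -1),
                        dis_y.view(-1, 1).expand(-1, out_w)], dim=0)

def resize_to_target(feat_adapt, s_h, s_w, level):
    # modulate the level-l shuffled features to the target scale
    return F.interpolate(feat_adapt,
                         scale_factor=(s_h / 2 ** level, s_w / 2 ** level),
                         mode='bilinear', align_corners=False)
\end{verbatim}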
It is worth emphasizing that this cascading depth-to-space module is conceptually very straightforward: we do not intend to design complex and elaborate conditional spatial resampling modules for each pixel, nor do we want to argue about which upsampling module is best. We simply want a memory-efficient and practical module for scale-arbitrary upsampling that has a small memory footprint and can handle multiple input frames, because we find that long-range temporal information is essential for ST-VSR; please refer to the experiment section for more information.
\subsection{ Flow-guided Context-consistency Loss } \label{FGloss}
Since we do not apply complex post-processing or context-refilling modules for temporal feature interpolation, it is still challenging to achieve satisfactory results when dealing with extreme motions and complex textures. A natural question is whether it is possible to further improve the quality of intermediate frames without increasing the inference computation burden. To address this problem, Zhou \emph{et al.}\cite{zhou2022exploring} propose a texture consistency loss that generates a pseudo intermediate frame to allow diversity of the interpolated content. Their core idea is that existing deep VFI approaches rely strongly on the ground truth of intermediate frames, which ignores the non-unique nature of motion in the natural world. So, they generate a patch-level pseudo-label for the intermediate frame, which consists of the patches of the reference frames from both sides that are most similar to the interpolated intermediate frame. It is conceptually similar to the frame inter-prediction technique in video coding (advanced motion vector prediction in HEVC\cite{HEVC}).
But directly borrowing the texture consistency loss is, in fact, infeasible in our task. As mentioned before, since our model accepts multiple frames as input and interpolates multiple intermediate features between two adjacent frames, such a large number of patch searches would result in a huge computational burden. To solve this problem, we propose an optical flow-guided texture consistency loss. In reality, optical flow estimation is itself a search process for similar regions between two frames. Meanwhile, the optical flow between two adjacent frames has already been generated in stage 1, so we use optical flow as prior knowledge to search for the most similar patches more efficiently. Specifically, given the estimated motion fields $\mathcal{V}_{0 \rightarrow 1}$ and $\mathcal{V}_{1 \rightarrow 0}$ for two frames, we utilize the complementary flow reversal (CFR) layer of XVFI\cite{sim2021xvfi} to approximate the intermediate motion flows $\mathcal{V}_{t \rightarrow 0}$ and $\mathcal{V}_{t \rightarrow 1}$:
\begin{equation}
\begin{aligned}
\mathcal{V}_{t \rightarrow 0} , \mathcal{V}_{t \rightarrow 1} = CFR(\mathcal{V}_{0 \rightarrow 1},\mathcal{V}_{1 \rightarrow 0})\\
\end{aligned}
\label{f5}
\end{equation}
After obtaining the intermediate optical flow $\mathcal{V}_{t \rightarrow 0}$ and $\mathcal{V}_{t \rightarrow 1}$, the next step is to use them to warp the two adjacent frames' most similar patches to the intermediate frame. Note that the warping here is not pixel-wise alignment in the usual sense but patch-level alignment. The proposed optical flow-guided patch warping is illustrated in Fig.~\ref{fig:4}. Specifically, we first perform a pooling operation on the optical flow to get the average movement within a patch:
\begin{equation}
\begin{aligned}
\mathcal{V}^{avg}_{t \rightarrow 0} &= AvgPool(\mathcal{V}_{t \rightarrow 0},patch\_size = p), \\
\mathcal{V}^{avg}_{t \rightarrow 1} &= AvgPool(\mathcal{V}_{t \rightarrow 1},patch\_size = p),
\end{aligned}
\label{f6}
\end{equation}
where $AvgPool$ denotes the average pooling operation and the patch size is set by default to 4 here.
Then, the reference frames from both sides are directly warped to the intermediate frame.
\begin{equation}
\begin{aligned}
I_{0 \rightarrow t}^{warped} &= \mathcal{BW}(I_{0},\mathcal{V}^{avg}_{t \rightarrow 0}), \\
I_{1 \rightarrow t}^{warped} &= \mathcal{BW}(I_{1},\mathcal{V}^{avg}_{t \rightarrow 1})
\end{aligned},
\label{f7}
\end{equation}
where $\mathcal{BW}$ denotes backward warping. It is worth noting that, since we perform patch-level rather than pixel-level warping, the optical flow does not need to be particularly accurate.
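A sketch of the patch-level backward warping is given below; broadcasting the patch-averaged flow back to pixel resolution with nearest-neighbor upsampling before \texttt{grid\_sample} is an implementation assumption.
\begin{verbatim}
import torch
import torch.nn.functional as F

def patch_backward_warp(ref, flow_t_to_ref, patch_size=4):
    # ref:           [B, 3, H, W] reference frame (I_0 or I_1)
    # flow_t_to_ref: [B, 2, H, W] flow from time t to the reference
    b, _, h, w = ref.shape
    flow_avg = F.avg_pool2d(flow_t_to_ref, kernel_size=patch_size)
    flow_avg = F.interpolate(flow_avg, size=(h, w), mode='nearest')
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w),
                            indexing='ij')
    grid = torch.stack([xs, ys], dim=0).float().unsqueeze(0).to(ref)
    coords = grid + flow_avg
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    norm_grid = torch.stack([gx, gy], dim=-1)    # [B, H, W, 2]
    return F.grid_sample(ref, norm_grid, mode='bilinear',
                         align_corners=True)
\end{verbatim}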
Finally, the two warped frames are compared patch by patch with the intermediate frame, and for each position the most similar patch is selected as the pseudo label. Assuming there are $N$ candidate patches to be matched, the matching process can be formulated as:
\begin{equation}
\begin{aligned}
i^{*}_{k} &= \underset{i \in \{0,1\}}{\arg \min} \quad
\mathcal{L}_{2}(P_{k}^{i},P_{k}^{pred}), \quad k = 1,\dots,N, \\
\quad I_{t}^{pseudo} &= \bigcup\limits_{k = 1}^{ N } P_{k}^{i^{*}_{k}}
\end{aligned}
\label{f8}
\end{equation}
\begin{table*}[t]
\caption{Quantitative comparisons of PSNR (dB), SSIM, and speed (FPS) on Vid4, REDS, and Vimeo-90K-T. The inference time is measured on the Vid4 dataset with one Nvidia 1080Ti GPU. The best two results are highlighted in {\color{red}red} and {\color{blue}blue}.}
\label{tab:1}
\centering
\resizebox{0.98\linewidth}{!}{
\begin{tabular}{c|cc|cc|cc|cc|cc|cc|c|c}
\hline
VFI+(V)SR/ST-VSR methods & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Vid4\\ PSNR \quad SSIM\end{tabular}} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}REDS\\ PSNR \quad SSIM\end{tabular}} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Vimeo-Fast\\ PSNR \quad SSIM\end{tabular}} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Vimeo-Medium\\ PSNR \quad SSIM\end{tabular}} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Vimeo-Slow\\ PSNR \quad SSIM\end{tabular}} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Vimeo-Total\\ PSNR \quad SSIM\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Speed\\ FPS\end{tabular} & \begin{tabular}[c]{@{}c@{}}Parameters\\ millions\end{tabular} \\ \hline \hline
SuperSloMo\cite{0Super} +Bicubic & 22.84 & 0.5772 & 25.23 & 0.6761 & 31.88 & 0.8793 & 29.94 & 0.8477 & 28.73 & 0.8102 & 29.99 & 0.8449 & - & 19.8 \\
SuperSloMo\cite{0Super} +RCAN\cite{2018Image} & 23.78 & 0.6397 & 26.37 & 0.7209 & 34.52 & 0.9076 & 32.50 & 0.8844 & 30.69 & 0.8624 & 32.44 & 0.8835 & 1.91 & 19.8+16.0 \\
SuperSloMo\cite{0Super}+RBPN\cite{2019Recurrent} & 23.76 & 0.6362 & 26.48 & 0.7281 & 34.73 & 0.9108 & 32.79 & 0.8930 & 30.48 & 0.8354 & 32.62 & 0.8839 & 1.55 & 19.8+12.7 \\
SuperSloMo\cite{0Super} +EDVR\cite{2019EDVR} & 24.40 & 0.6706 & 26.26 & 0.7222 & 35.05 & 0.9136 & 33.85 & 0.8967 & 30.99 & 0.8673 & 33.45 & 0.8933 & 4.94 & 19.8+20.7 \\ \hline
Sepconv\cite{2017Video}+Bicubic & 23.51 & 0.6273 & 25.17 & 0.6760 & 32.27 & 0.8890 & 30.61 & 0.8633 & 29.04 & 0.8290 & 30.55 & 0.8602 & - & 21.7 \\
Sepconv\cite{2017Video} +RCAN\cite{2018Image} & 24.92 & 0.7236 & 26.21 & 0.7177 & 34.97 & 0.9195 & 33.59 & 0.9125 & 32.13 & 0.8967 & 33.50 & 0.9103 & 1.86 & 21.7+16.0 \\
Sepconv\cite{2017Video} +RBPN\cite{2019Recurrent} & 26.08 & 0.7751 & 26.32 & 0.7254 & 35.07 & 0.9238 & 34.09 & 0.9229 & 32.77 & 0.9090 & 33.97 & 0.9202 & 1.51 & 21.7+12.7 \\
Sepconv\cite{2017Video} +EDVR\cite{2019EDVR} & 25.93 & 0.7792 & 26.14 & 0.7205 & 35.23 & 0.9252 & 34.22 & 0.9240 & 32.96 & 0.9112 & 34.12 & 0.9215 & 4.96 & 21.7+20.7 \\ \hline
DAIN\cite{DAIN} +Bicubic & 23.55 & 0.6268 & 25.22 & 0.6783 & 32.41 & 0.8910 & 30.67 & 0.8636 & 29.06 & 0.8289 & 30.61 & 0.8607 & - & 24 \\
DAIN\cite{DAIN} +RCAN\cite{2018Image} & 25.03 & 0.7261 & 26.33 & 0.7233 & 35.27 & 0.9242 & 33.82 & 0.9146 & 32.26 & 0.8974 & 33.73 & 0.9126 & 1.84 & 24.0+16.0 \\
DAIN\cite{DAIN} +RBPN\cite{2019Recurrent} & 25.96 & 0.7784 & 26.57 & 0.7344 & 35.55 & 0.9300 & 34.45 & 0.9262 & 32.92 & 0.9097 & 34.31 & 0.9234 & 1.43 & 24.0+12.7 \\
DAIN\cite{DAIN} +EDVR\cite{2019EDVR} & 26.12 & 0.7836 & 26.39 & 0.7291 & 35.81 & 0.9323 & 34.66 & 0.9281 & 33.11 & 0.9119 & 34.52 & 0.9254 & 4 & 24.0+20.7 \\ \hline
STARnet\cite{2020Space} & 26.06 & 0.8046 & 26.39 & 0.7444 & 36.19 & 0.9368 & 34.86 & 0.9356 & 33.10 & 0.9164 & 34.71 & 0.9318 & 10.54 & 111.61 \\
Zooming Slow-Mo\cite{2021Zooming} & 26.31 & 0.7976 & 26.72 & 0.7453 & 36.81 & 0.9415 & 35.41 & 0.9361 & 33.36 & 0.9138 & 35.21 & 0.9323 & \textcolor{blue}{12.4} & \textcolor{red}{11.1} \\
TMnet\cite{xu2021temporal} & 26.43 & 0.8016 & 26.81 & 0.7476 & \textcolor{blue}{37.04} & 0.9435 & 35.60 & 0.9380 & 33.51 & 0.9159 & 35.39 & 0.9343 & 11.6 & \textcolor{blue}{12.26} \\
EBFW\cite{zhangVCIP} (Our previous work) & \textcolor{blue}{26.74} & \textcolor{blue}{0.8175} & \textcolor{blue}{26.90} & \textcolor{blue}{0.7498} & 37.01 & \textcolor{blue}{0.9445} & \textcolor{blue}{35.76} & \textcolor{blue}{0.9400} & \textcolor{blue}{33.63} & \textcolor{blue}{0.9180} & \textcolor{blue}{35.52} & \textcolor{blue}{0.9362} & \textcolor{red}{17.14 } & 20.13 \\ \hline
C-STVSR-fix (Ours) & \textcolor{red}{26.87} & \textcolor{red}{0.8213} & \textcolor{red}{26.99} & \textcolor{red}{0.7525} & \textcolor{red}{37.09} & \textcolor{red}{0.9447} & \textcolor{red}{35.82} & \textcolor{red}{0.9405} & \textcolor{red}{33.69} & \textcolor{red}{0.9184} & \textcolor{red}{35.57} &\textcolor{red}{0.9365} & 8.03 & 13.19 \\ \hline
\end{tabular}}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=15cm]{imgs/vis_vid4.pdf}
\caption{Visual results on Vid4 of different methods for space 4$\times$ and time 2$\times$ super-resolution. Some areas are zoomed in to facilitate comparison.} \label{fig:5}
\end{figure*}
where $P_{k}^{i}$ is the $k$-th patch from the $i$-th reference frame after the census transform\cite{1997Non}, and the final pseudo-label $I_{t}^{pseudo}$ is the concatenation of the selected patches from the two reference frames. For each patch in the pseudo-label, the number of candidates to be searched is reduced from $2N$ to 2. It is worth emphasizing that the optical flow has already been estimated in Stage 1 and does not need to be estimated repeatedly, so the pseudo-label generation process is very efficient.
In addition, the operations involved in the patch warping process, such as image partition and optical flow pooling, are easy to implement with modern deep learning frameworks, and the whole process is performed in parallel.
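For completeness, a minimal sketch of the patch matching step is given below, assuming the two patch-warped references and the interpolated intermediate frame are available (e.g., from the sketch above). The census transform is omitted here for brevity and plain $\mathcal{L}_2$ distances are used instead, so this is an illustration rather than the exact training-time procedure.
\begin{verbatim}
# A minimal NumPy sketch of the per-patch selection used to build the
# pseudo-label; census transform omitted (our simplification).
import numpy as np

def build_pseudo_label(warped0, warped1, pred, p=4):
    """For every p x p patch, keep the warped reference patch closer to `pred`."""
    H, W, C = pred.shape
    pseudo = np.empty_like(pred)
    for i in range(H // p):
        for j in range(W // p):
            sl = (slice(i * p, (i + 1) * p), slice(j * p, (j + 1) * p))
            d0 = np.sum((warped0[sl] - pred[sl]) ** 2)  # L2 to reference 0
            d1 = np.sum((warped1[sl] - pred[sl]) ** 2)  # L2 to reference 1
            pseudo[sl] = warped0[sl] if d0 <= d1 else warped1[sl]
    return pseudo  # target of the texture consistency loss
\end{verbatim}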
\section{Related Work}
\label{sec:intro}
Our work is mainly related to three video enhancement tasks, video super-resolution, video frame interpolation, and space-time video super-resolution.
\smallskip
\subsection{Video Frame Interpolation}
Video Frame Interpolation (VFI) targets synthesizing in-between frames from their adjacent reference frames. Recent learning-based VFI approaches can be roughly divided into two categories: optical-flow-based methods\cite{0Super,DAIN,2020Softmax} and kernel-based methods\cite{2017Video,2020Video}. Among flow-based methods, Bao \emph{et al.} \cite{DAIN} develop a depth-aware flow projection layer to synthesize intermediate frames. SoftSplat\cite{2020Softmax} first calculates the bidirectional flows between input frames and then forward-warps the corresponding feature maps via softmax splatting. Among kernel-based methods, Niklaus \emph{et al.} propose several algorithms\cite{2017Video,niklaus2021revisiting} that use local convolution to hallucinate pixels for the target frame.
Choi \emph{et al.}\cite{choi2021affine} propose an affine transformation-based deep frame prediction framework and integrate it into the HEVC\cite{HEVC} video coding pipeline to replace the traditional bi-directional prediction methods. Most recently, Zhou \emph{et al.}\cite{zhou2022exploring} propose a kernel-based cross-scale alignment module and a plug-in texture consistency loss that is capable of improving the performance of existing VFI frameworks.
\subsection{Video Super-Resolution}
Video super-resolution (VSR) aims to reconstruct a series of
high-resolution frames from their low-resolution
counterparts. Unlike single image super-resolution (SISR), the key to video super-resolution is mining the information across multiple frames. Recent methods can be divided into two categories: temporal sliding-window based methods\cite{caballero2017real,2019EDVR,wen2022video}
and iterative methods\cite{2018Deep,2020VideoRSDN,sajjadi2018frame,2019Recurrent}.
Among sliding-window based methods, Wang \emph{et al.}\cite{wang2020deep} propose an end-to-end VSR network that super-resolves
both optical flows and images so as to recover finer details.
EDVR \cite{2019EDVR}, as a representative method, uses DCNs\cite{2017Deformable} in a
multi-scale pyramid and utilizes multiple attention layers to perform alignment and integrate the features. Wen \emph{et al.} propose an end-to-end network that dynamically generates
spatially adaptive filters for alignment, constituted
from the local spatio-temporal channels of each pixel, to avoid explicit motion compensation.
Among iterative methods, Tao \emph{et al.}\cite{tao2017detail} propose a sub-pixel motion compensation layer in a CNN framework and utilize a ConvLSTM \cite{2015Convolutional} module to capture long-range temporal information.
Recently, BasicVSR series\cite{2020BasicVSR,chan2021basicvsr++} and some other work\cite{yi2021omniscient} stress that it is essential to utilize both neighboring LR frames and long-distance LR frames (previous and subsequent) to reconstruct HR frames.
\subsection{Space-Time Video Super-Resolution}
Space-Time Video Super-Resolution (ST-VSR) aims to transform a low spatial resolution video with
a low frame rate into a video with higher spatial and temporal resolutions. Recently, some deep learning-based works\cite{2020Space,2021Zooming,xu2021temporal} have made great progress in both speed and effectiveness. Zooming Slow-Mo \cite{2021Zooming} develops a unified framework with deformable ConvLSTM to align and aggregate temporal information before performing feature fusion for ST-VSR. Based on Zooming Slow-Mo, Xu \emph{et al.} \cite{xu2021temporal} propose a temporal modulation network for controllable feature interpolation, which can interpolate arbitrary intermediate frames. Wang \emph{et al.}\cite{wang2022bi} propose a bidirectional recurrent space-time upsampling network to utilize auxiliary information at various time steps for one-stage ST-VSR. Most recently, Chen \emph{et al.} \cite{chen2022videoinr} construct an implicit neural network to model the continuous representation of a given video; the learned implicit representation can be decoded to HR results of arbitrary spatial and temporal resolutions. Despite the remarkable progress of the aforementioned work, these efforts only consider information within a short temporal range, and their ability to handle large movements and learn from long-term sequences is severely limited.
|
{
"arxiv_id": "2302.13237",
"language": "en",
"timestamp": "2023-02-28T02:13:46",
"url": "https://arxiv.org/abs/2302.13237",
"yymm": "2302"
} | \section{Introduction}
Task mapping in modern high performance parallel computers can be modeled as a graph embedding problem.
Let $G(V,E)$ be a simple and connected graph with vertex set $V(G)$ and edge set $E(G)$.
Graph embedding\cite{BCHRS1998,AS2015,ALDS2021} is an
ordered pair $\langle f,P_f\rangle$ of injective mappings between the guest graph $G$ and the host graph $H$ such that
\begin{itemize}
\item[(i)] $f:V(G)\rightarrow V(H)$, and
\item[(ii)] $P_f: E(G)\rightarrow$
$\{P_f(u,v):$ $P_f(u,v)$ is a path in $H$ between $f(u)$ and $f(v)$ for $\{u,v\}\in E(G)\}$.
\end{itemize}
It is known that the topology mapping problem is NP-complete\cite{HS2011}.
Since Harper\cite{H1964} in 1964 and Bernstein\cite{Bernstein1967} in 1967, a series of embedding problems have been studied\cite{E1991,OS2000,DWNS04,FJ2007,LSAD2021}.
The quality of an embedding can be measured by certain cost criteria.
One of these criteria is the wirelength.
Let $WL(G,H;f)$ denote the wirelength of $G$ into $H$ under the embedding $f$.
Taking over all embeddings $f$, the minimum wirelength of $G$ into $H$ is defined as
$$WL(G,H)=\min\limits_{f} WL(G,H;f).$$
Hypercube is one of the most popular, versatile and efficient topological structures of interconnection networks\cite{Xu2001}.
More and more studies related to hypercubes have been performed\cite{Chen1988,PM2009,PM2011,RARM2012}.
Manuel et al.\cite{PM2011} computed the minimum wirelength of embedding the hypercube into a simple cylinder. In that paper, the wirelengths for the hypercube into a general cylinder and into a torus were given as conjectures.
Though Rajan et al.\cite{RRPR2014} and Arockiaraj et al.\cite{AS2015} studied those embedding problems, the two conjectures are still open.
We recently gave rigorous proofs for the embeddings of hypercubes into cycles\cite{LT2021} and into cylinders (the first conjecture)\cite{Tang2022}, successively.
Using those techniques and processes,
we settle the last conjecture, for the torus.
In this paper, we also generalize the results to other Cartesian products of paths and/or cycles.
It is seen that the grid, cylinder and torus are Cartesian products of graphs. In the past, the vertices of those graphs were labeled by a series of natural numbers\cite{PM2009,PM2011,AS2015,Tang2022},
but this is not convenient for some higher-dimensional graphs.
To describe a certain embedding efficiently,
we use tuples to label the vertices in this paper.
By the tool of the edge isoperimetric problem (EIP)\cite{H2004}, we estimate and explain the minimal wirelength for the hypercube into the
torus and other Cartesian products of graphs.
\noindent\textbf{Notation.}
For $n\ge 1$,
we define $Q_n$ to be the hypercube with vertex-set $\{0,1\}^n$,
where two $0-1$ vectors are adjacent if they differ in exactly one coordinate \cite{RR2022}.
\noindent\textbf{Notation.}
An $r_1\times r_2$ grid with $r_1$ rows and $r_2$ columns is represented by $P_{r_1}\times P_{r_2}$, where the rows are labeled $1,2,\ldots, r_1$ and
the columns are labeled $1,2,\ldots, r_2$ \cite{PM2009}.
The torus $C_{r_1}\times C_{r_2}$ is a $P_{r_1}\times P_{r_2}$ with a wraparound edge in each column and a wraparound edge in each row.
\textbf{Main Results}
\begin{thm}\label{ccthm}
For any $n_1,\ n_2\ge2$ with $n_1+ n_2=n$,
the minimum wirelength of hypercubes into the torus is
\begin{equation*}\label{cpwl}
WL(Q_n,C_{2^{n_1}}\times C_{2^{n_2}})=
2^{n_2}(3\cdot 2^{2n_1-3}-2^{n_1-1})+
2^{n_1}(3\cdot 2^{2n_2-3}-2^{n_2-1}).
\end{equation*}
Moreover, Gray code embedding is an optimal embedding.
\end{thm}
\noindent\textbf{Notation.}
Cartesian product of paths and/or cycles is denoted by
$\mathscr{G}=\mathscr{G}_1\times \mathscr{G}_2\times \cdots \times \mathscr{G}_k,$
where $\mathscr{G}_i \in \{P_{2^{n_i}}, C_{2^{n_i}}\}, 1\le i \le k$.
\begin{thm}\label{carthm}
For any $k>0$ and $ n_i\ge2$ with $\sum_{i=1}^{k}n_i=n$,
the minimum wirelength of hypercubes into the Cartesian product $\mathscr{G}$ is
\begin{equation*}
WL(Q_n,\mathscr{G})=\sum_{i=1}^{k}\mathscr{L}_i,
\end{equation*}
where \begin{equation*}
\mathscr{L}_i=\left\{
\begin{array}{rcl}
&2^{n-n_i}(3\cdot 2^{2n_i-3}-2^{n_i-1}),&\mbox{if}\ \ \mathscr{G}_i=C_{2^{n_i}},\\
&2^{n-n_i}(2^{2n_i-1}-2^{n_i-1}),&\mbox{if}\ \ \mathscr{G}_i=P_{2^{n_i}}.
\end{array}
\right.
\end{equation*}
Moreover, Gray code embedding is an optimal embedding.
\end{thm}
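As a quick numerical illustration of the two theorems, the following short Python sketch (ours, for illustration only) evaluates the closed-form wirelength of Theorem \ref{carthm} for a given list of path and/or cycle factors.
\begin{verbatim}
# Evaluate the closed-form minimum wirelength for factors ('C', n_i) or
# ('P', n_i), each of size 2^{n_i} with n_i >= 2; illustration only.
def min_wirelength(factors):
    n = sum(ni for _, ni in factors)
    total = 0
    for kind, ni in factors:
        if kind == 'C':   # cycle factor C_{2^{n_i}}
            total += 2 ** (n - ni) * (3 * 2 ** (2 * ni - 3) - 2 ** (ni - 1))
        else:             # path factor P_{2^{n_i}}
            total += 2 ** (n - ni) * (2 ** (2 * ni - 1) - 2 ** (ni - 1))
    return total

# Example: the torus C_4 x C_4, i.e. n_1 = n_2 = 2 and n = 4.
print(min_wirelength([('C', 2), ('C', 2)]))   # 32
\end{verbatim}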
The paper is organized as follows.
In Section \ref{Preliminaries}, some definitions and elementary properties are introduced.
In Section \ref{sec:torus}, we show that the Gray code embedding is an optimal strategy for the hypercube into the torus.
Section \ref{sec:cartesian} is devoted to Cartesian products of paths and/or cycles.
\section{Preliminaries} \label{Preliminaries}
The EIP has been used as a powerful tool in the computation of the minimum wirelength of graph embeddings\cite{H2004}.
The EIP is to determine a subset $S$ of cardinality $k$ of a graph $G$ such that the edge cut separating this subset from its complement has minimal size. Mathematically, Harper denotes
$$\Theta(S)=\{ \{u,v\}\in E(G)\ : u\in S, v\notin S \}.$$
For any $S\subset V(Q_n)$, use $\Theta(n,S)$ in place of $\Theta(S)$ and let $|\Theta(n,S)|$ be $\theta(n,S)$.
\begin{lem}\label{swap}
Take a subcube $Q_{n_1}$ of $Q_n$, and $S_1\subset V(Q_{n_1}), S_2\subset V(Q_{n-n_1})$, then
\begin{equation*}
\theta(n,S_1\times S_2)=\theta(n,S_2 \times S_1).
\end{equation*}
\end{lem}
\begin{proof}
By the definition of hypercube $Q_n$,
there is an edge connected in $S_1\times S_2$ if and only if
there is an edge connected in $S_2\times S_1$.
\end{proof}
The following lemma provides an efficient technique to find the exact wirelength.
\begin{lem}\cite{Tang2022}\label{wl}
Let $f$ be an embedding of $Q_n$ into $H$.
Let $(L_i)_{i=1}^{m}$ be a partition of $E(H)$.
For each $1\le i \le m$, $(L_i)_{i=1}^{m}$ satisfies:
\begin{itemize}
\item[\bf{\normalfont (A1)}]$L_i$ is an edge cut of $H$ such that $L_i$ disconnects $H$ into two components and one of the induced vertex sets is denoted by $l_i$;
\item[\bf{\normalfont (A2)}]$|P_f(u,v)\cap L_i|$ is one if $\{u,v\}\in \Theta(n,f^{-1}(l_i))$ and zero otherwise for any $\{u,v\}\in E(Q_n)$.
\end{itemize}
Then $$WL(Q_n,H;f)=\sum_{i=1}^{m}\theta(n,f^{-1}(l_i)).$$
\end{lem}
\noindent\textbf{Notation.}
$N_n=\{1,2,\cdots,n\}$, and $F_i^{n}=\{i,i+1,\cdots,i+2^{n-1}-1\}, \quad 1\le i \le 2^{n-1}$.
\noindent\textbf{Notation.}
Let $(i,j)$ denote a vertex in row $i$ and column $j$ of the cylinder $C_{2^{n_1}}\times P_{2^{n_2}}$, where $1\le i \le 2^{n_1}$ and $1\le j \le 2^{n_2}$.
Then $V(C_{2^{n_1}}\times P_{2^{n_2}})=N_{2^{n_1}}\times N_{2^{n_2}}.$
It is seen that the vertex sets
$F_i^{n_1}\times N_{2^{n_2}}=\{(x_1,x_2):x_1\in F_i^{n_1},\ x_2\in N_{2^{n_2}} \}$ and
$N_{2^{n_1}}\times N_{j}=\{(x_1,x_2): x_1\in N_{2^{n_1}},\ x_2\in N_{j} \}$ are equivalent to
$A_i$ and $B_j$ defined in \cite{Tang2022}, respectively.
See Fig.\ref{label_1} and Fig.\ref{label_2} for examples.
\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{A.jpg}
\caption{$A_2$ and $F_2^{3}\times N_{2^{2}}$ in cylinder $C_{2^{3}}\times P_{2^{2}}$ }
\label{label_1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{B.jpg}
\caption{$B_3$ and $N_{2^3}\times N_{3}$ in cylinder $C_{2^{3}}\times P_{2^{2}}$}
\label{label_2}
\end{figure}
Now we generalize the Gray code map $\xi_n:\{0,1\}^n\rightarrow \{1,2,\cdots,2^n\}$ defined in \cite{LT2021,Tang2022}.
We define the $k$-order Gray code map $\xi_{n_1\ldots n_k}$ corresponding to the $k$ components.
\begin{defn}
$k$-order Gray code map $\xi_{n_1\ldots n_k}$ is given by $\xi_{n_1\ldots n_k}:\{0,1\}^n\rightarrow N_{2^{n_1}}\times\cdots \times N_{2^{n_k}}$,\ i.e.,
$$\xi_{n_1\ldots n_k}(v)=\xi_{n_1\ldots n_k}(v_1\ldots v_k)=(\xi_{n_1}(v_1),\ldots,\xi_{n_k}(v_k)),$$
where $n_1+\ldots+n_k=n$, and $v=v_1\ldots v_k\in \{0,1\}^n, v_i\in \{0,1\}^{n_i}, 1\le i \le k$.
\end{defn}
For example, $\xi_{32}(11011)=(\xi_{3}(110),\xi_{2}(11))=(5,3)$.
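For readers who wish to experiment with this labeling, the following small Python sketch (our own illustration, assuming the standard reflected Gray code for $\xi_n$) reproduces the example above.
\begin{verbatim}
# Gray code map xi_n and its k-order version xi_{n_1...n_k}; illustration only,
# assuming xi_n is the standard reflected Gray code labeling by 1,...,2^n.
def gray_rank(bits):
    """1-based position of a Gray code word, e.g. gray_rank('110') == 5."""
    b = 0
    for g in bits:                  # Gray-to-binary, then rank = value + 1
        b = (b << 1) | ((b & 1) ^ int(g))
    return b + 1

def gray_rank_k(bits, sizes):
    """Split `bits` according to `sizes` and apply gray_rank to each block."""
    out, pos = [], 0
    for n_i in sizes:
        out.append(gray_rank(bits[pos:pos + n_i]))
        pos += n_i
    return tuple(out)

print(gray_rank_k('11011', [3, 2]))   # (5, 3), matching xi_{32}(11011)
\end{verbatim}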
According to the rule of Gray code map, we have that
\begin{equation*}\label{change}
\xi_{n_1n_2}^{-1}(F_i^{n_1}\times N_{2^{n_2}})=\xi_n^{-1}(A_i),\ \ \xi_{n_1n_2}^{-1}(N_{2^{n_1}}\times N_{j})=\xi_n^{-1}(B_j).
\end{equation*}
Together with (12) and (13) in \cite{Tang2022}, we have that
\begin{subequations}\label{cpwl1+2}
\begin{eqnarray}
&&\sum_{i=1}^{2^{n_1-1}} \theta(n,\xi_{n_1n_2}^{-1}(F_i^{n_1}\times N_{2^{n_2}}))=
2^{n-n_1}(3\cdot 2^{2n_1-3}-2^{n_1-1}).\\
&&\sum_{j=1}^{2^{n_2}-1}\theta(n,\xi_{n_1n_2}^{-1}(N_{2^{n_1}}\times N_{j}))
=2^{n-n_2}(2^{2n_2-1}-2^{n_2-1}).
\end{eqnarray}
\end{subequations}
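These identities can also be confirmed by brute force for small parameters. The following sketch (an illustration only, not part of the proofs) verifies (\ref{cpwl1+2}a) for $n_1=n_2=2$, again assuming the reflected Gray code labeling.
\begin{verbatim}
# Brute-force check of the first identity above for n1 = n2 = 2.
from itertools import product

def gray_rank(bits):
    b = 0
    for g in bits:
        b = (b << 1) | ((b & 1) ^ int(g))
    return b + 1

def theta(n, S):
    """Number of edges of Q_n with exactly one endpoint in S."""
    return sum(1 for v in S for k in range(n)
               if v[:k] + str(1 - int(v[k])) + v[k + 1:] not in S)

n1, n2, n = 2, 2, 4
lhs = 0
for i in range(1, 2 ** (n1 - 1) + 1):
    F = set(range(i, i + 2 ** (n1 - 1)))             # F_i^{n1}
    S = {''.join(v) for v in product('01', repeat=n)
         if gray_rank(''.join(v)[:n1]) in F}         # xi^{-1}(F_i x N_{2^{n2}})
    lhs += theta(n, S)
rhs = 2 ** (n - n1) * (3 * 2 ** (2 * n1 - 3) - 2 ** (n1 - 1))
print(lhs, rhs)   # both equal 16
\end{verbatim}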
Let $f: \{0,1\}^n\rightarrow N_{2^{n_1}}\times N_{2^{n_2}}$ be an
embedding of $Q_n$ into $C_{2^{n_1}}\times P_{2^{n_2}}$.
Theorems 5.2 and 5.1 in \cite{Tang2022} are rewritten as
\begin{subequations}\label{cp1+2}
\begin{eqnarray}
&&\sum_{i=1}^{2^{n_1-1}} \theta(n,f^{-1}(F_i^{n_1}\times N_{2^{n_2}}))\ge
\sum_{i=1}^{2^{n_1-1}} \theta(n,\xi_{n_1n_2}^{-1}(F_i^{n_1}\times N_{2^{n_2}})).\\
&&\sum_{j=1}^{2^{n_2}-1}\theta(n,f^{-1}(N_{2^{n_1}}\times N_{j}))\ge
\sum_{j=1}^{2^{n_2}-1} \theta(n,\xi_{n_1n_2}^{-1}(N_{2^{n_1}}\times N_{j})).
\end{eqnarray}
\end{subequations}
Cylinder $C_{2^{n_1}}\times P_{2^{n_2}}$ can also be observed as $P_{2^{n_2}}\times C_{2^{n_1}}$.
Let $f: \{0,1\}^n\rightarrow N_{2^{n_2}}\times N_{2^{n_1}}$ be an
embedding of $Q_n$ into $P_{2^{n_2}}\times C_{2^{n_1}}$, then \eqref{cp1+2}
is rewritten as
\begin{subequations}\label{change:cp1+2}
\begin{eqnarray}
&&\sum_{i=1}^{2^{n_1-1}} \theta(n,f^{-1}(N_{2^{n_2}}\times F_i^{n_1}))\ge
\sum_{i=1}^{2^{n_1-1}} \theta(n,\xi_{n_2n_1}^{-1}(N_{2^{n_2}}\times F_i^{n_1})).\\
&&\sum_{j=1}^{2^{n_2}-1}\theta(n,f^{-1}(N_j\times N_{2^{n_1}}))\ge
\sum_{j=1}^{2^{n_2}-1} \theta(n,\xi_{n_2n_1}^{-1}(N_j\times N_{2^{n_1}})).
\end{eqnarray}
\end{subequations}
\begin{rem}
It is seen that
$\xi_{n_1n_2}^{-1}(F_i^{n_1}\times N_{2^{n_2}}) =\xi_{n_1}^{-1}(F_i^{n_1})\times V(Q_{{n_2}})$. Then, by Lemma \ref{swap}, we get that $\theta(n,\xi_{n_1n_2}^{-1}(F_i^{n_1}\times N_{2^{n_2}}))=\theta(n,\xi_{n_2n_1}^{-1}(N_{2^{n_2}}\times F_i^{n_1}))$.
\end{rem}
\section{Hypercubes into torus}\label{sec:torus}
In this section, we prove Theorem \ref{ccthm} in the following procedures.
\noindent$\bullet$ \textbf{Labeling.}\ \
Let a set of $2$-tuples denote the vertex set of the torus $C_{2^{n_1}}\times C_{2^{n_2}}$, that is
$$V(C_{2^{n_1}}\times C_{2^{n_2}})=
\{x=(x_1,x_2): 1\le x_i\le 2^{n_i},\ i=1,2\}=N_{2^{n_1}}\times N_{2^{n_2}}.$$
The edge set
$E(C_{2^{n_1}}\times C_{2^{n_2}})$ is the union $\mathscr{E}_1\cup \mathscr{E}_2$,
where
$$\begin{array}{rcl}
\mathscr{E}_1&=&\{\{(x_1,x_2),(x_1',x_2)\}:\{x_1,x_1'\}\in E(C_{2^{n_1}}), x_2\in N_{2^{n_2}}\},\\
\mathscr{E}_2&=&\{\{(x_1,x_2),(x_1,x_2')\}: x_1\in N_{2^{n_1}}, \{x_2,x_2'\}\in E(C_{2^{n_2}})\}.
\end{array}$$
\noindent$\bullet$ \textbf{Partition.}\ \ Construct a partition of the edge set of torus.
\textbf{Step 1.}\
For each $i=1,2$, $j=1,\ldots,2^{n_i-1}$,
let $\mathscr{X}_{ij}$ be an edge cut of the cycle $C_{2^{n_i}}$ such that $\mathscr{X}_{ij}$ disconnects $C_{2^{n_i}}$ into two components where the induced vertex set
is $F_j^{n_i}$.
\textbf{Step 2.}\
For $i=1,2$, denote
\begin{equation*}
\mathscr{P}_{ij}=\bigcup_{\{x_i,x_i'\} \in \mathscr{X}_{ij}}\{\{x,x'\}\in \mathscr{E}_i\},
\end{equation*}
then $\{\mathscr{P}_{ij}:1\le i \le 2, 1\le j \le 2^{n_i-1}\}$ is the partition of $E(C_{2^{n_1}}\times C_{2^{n_2}})$.
\noindent$\bullet$ \textbf{Computation.}\ \
Notice that for each $i,j$,
$\mathscr{P}_{ij}$ is an edge cut of the torus $C_{2^{n_1}}\times C_{2^{n_2}}$.
$\mathscr{P}_{1j}$ disconnects the torus into two components where the induced vertex set is $F_j^{n_1}\times N_{2^{n_2}}$, and
$\mathscr{P}_{2j}$ induces vertex set $N_{2^{n_1}}\times F_j^{n_2}$.
See Fig.\ref{label_3} for an example.
\begin{figure}[htbp]
\centering
\includegraphics[width=5in]{torus.jpg}
\caption{(a) Edge cut $\mathscr{P}_{12}$ disconnects $C_{2^3}\times C_{2^3}$ into two components, where the induced vertex set is $F_2^{3}\times N_{2^{3}}$.
(b) Edge cut $\mathscr{P}_{23}$ disconnects $C_{2^3}\times C_{2^3}$ into two components, where the induced vertex set is $N_{2^{3}}\times F_3^{3}$.
}
\label{label_3}
\end{figure}
Let $f: \{0,1\}^n\rightarrow N_{2^{n_1}}\times N_{2^{n_2}}$ be an
embedding of $Q_n$ into $C_{2^{n_1}}\times C_{2^{n_2}}$.
Under the partition $\{\mathscr{P}_{ij}:1\le i \le 2, 1\le j \le 2^{n_i-1}\}$ and Lemma \ref{wl},
the wirelength is written as a summation related to function $\theta$, i.e.,
\begin{equation}\label{ccsum}
WL(Q_n,C_{2^{n_1}}\times C_{2^{n_2}};f)=
\sum_{j=1}^{2^{n_1-1}}\theta(n,f^{-1}(F_j^{n_1}\times N_{2^{n_2}}))+
\sum_{j=1}^{2^{n_2-1}}\theta(n,f^{-1}(N_{2^{n_1}}\times F_j^{n_2})).
\end{equation}
According to Lemma \ref{swap} and (\ref{cpwl1+2}a), we have that
\begin{equation}\label{cc1}
\sum_{j=1}^{2^{n_2-1}} \theta(n,\xi_{n_1n_2}^{-1}(N_{2^{n_1}}\times F_j^{n_2}))=
2^{n-n_2}(3\cdot 2^{2n_2-3}-2^{n_2-1}).
\end{equation}
According to Lemma \ref{swap} and (\ref{change:cp1+2}a), we have that
\begin{equation}\label{cc2}
\sum_{j=1}^{2^{n_2-1}} \theta(n,f^{-1}(N_{2^{n_1}}\times F_j^{n_2}))\ge
\sum_{j=1}^{2^{n_2-1}} \theta(n,\xi_{n_1n_2}^{-1}(N_{2^{n_1}}\times F_j^{n_2})).
\end{equation}
Combining the above three formulas with (\ref{cpwl1+2}a) and (\ref{cp1+2}a),
Theorem \ref{ccthm} holds.
\section{Hypercubes into Cartesian products of paths and/or cycles}
\label{sec:cartesian}
In this section, we prove Theorem \ref{carthm} in three parts.
The first part follows the analogous process as Section \ref{sec:torus}.
Then we obtain the wirelength under Gray code embedding.
In the end, we conclude that Gray code embedding is an optimal strategy.
\subsection{Computation of the embedding wirelength}\label{sub1}\
\noindent$\bullet$ \textbf{Labeling.}\ \ Let $$V(\mathscr{G})=\{x=(x_1,\ldots,x_k):x_i\in N_{2^{n_i}}, 1\le i\le k \}
=N_{2^{n_1}}\times \cdots \times N_{2^{n_k}}$$
be the vertex set of Cartesian product $\mathscr{G}$ of $k$ paths and/or cycles.
The edge set $E(\mathscr{G})$ of the Cartesian product $\mathscr{G}$ is composed of all edge sets $\mathscr{E}_i$ corresponding to the $k$ paths and/or cycles, denoted by
$E(\mathscr{G})=\bigcup_{i=1}^{k}\mathscr{E}_i$.
\noindent$\bullet$ \textbf{Partition.}\ \ Construct a partition of the edge set of Cartesian product $\mathscr{G}$.
\textbf{Step 1.}\
For each $i=1,\ldots,k$, $j=1,\ldots,2^{n_i-1}$, $\mathscr{X}_{ij}$ is described earlier in Section \ref{sec:torus}.
For each $i=1,\ldots,k$, $j=1,\ldots,2^{n_i}-1$,
let $\mathscr{Y}_{ij}$ be an edge cut of the path $P_{2^{n_i}}$ such that $\mathscr{Y}_{ij}$ disconnects $P_{2^{n_i}}$ into two components where the induced vertex set is $N_j$.
\textbf{Notation.}\ For $1\le i\le k$, let $q_i$ be $2^{n_i-1}$ if $\mathscr{G}_i=C_{2^{n_i}}$ and $2^{n_i}-1$ if $\mathscr{G}_i=P_{2^{n_i}}$.
For $j=1,\ldots,q_i$, denote
\begin{equation*}\label{huaF}
\mathscr{F}_{ij}=\left\{
\begin{array}{cl}
\mathscr{X}_{ij},&\mbox{if}\quad \mathscr{G}_i=C_{2^{n_i}},\\
\mathscr{Y}_{ij},&\mbox{if}\quad \mathscr{G}_i=P_{2^{n_i}}.
\end{array}
\right.
\end{equation*}
\textbf{Step 2.}\
For $i=1,\ldots,k$, $j=1,\ldots,q_i$, denote
\begin{equation*}
\mathscr{P}_{ij}=\bigcup_{\{x_i,x_i'\} \in \mathscr{F}_{ij}}\{\{x,x'\}\in \mathscr{E}_i\},
\end{equation*}
then $\{\mathscr{P}_{ij}:1\le i \le k, 1\le j \le q_i\}$ is a partition of $E(\mathscr{G})$.
\noindent$\bullet$ \textbf{Computation.}\ \
Notice that for each $i,j$,
$\mathscr{P}_{ij}$ is an edge cut of Cartesian product $\mathscr{G}$.
Define a vertex set $\mathscr{A}_{ij}$ to be $F_j^{n_i}$ if $\mathscr{G}_i=C_{2^{n_i}}$ and $N_j$ if $\mathscr{G}_i=P_{2^{n_i}}$.
\textbf{Notation.}\
\begin{equation}\label{Bij}
\begin{array}{cl}
&\mathscr{B}_{1j}=\mathscr{A}_{1j}\times N_{2^{n_2}}\times \cdots \times N_{2^{n_k}},\ \
\mathscr{B}_{kj}=N_{2^{n_1}}\times \cdots \times N_{2^{n_{k-1}}}\times \mathscr{A}_{kj},\\
&\mathscr{B}_{ij}=N_{2^{n_1}}\times \cdots \times \mathscr{A}_{ij}\times \cdots \times N_{2^{n_k}}, \ \ 1<i< k.
\end{array}
\end{equation}
It is seen that $\mathscr{P}_{ij}$ disconnects $\mathscr{G}$ into two components, where the induced vertex set is $\mathscr{B}_{ij}$.
Let $f: \{0,1\}^n\rightarrow N_{2^{n_1}}\times \cdots \times N_{2^{n_k}}$ be an embedding of $Q_n$ into $\mathscr{G}$.
Under the partition $\{\mathscr{P}_{ij}:1\le i \le k, 1\le j \le q_i\}$ and Lemma \ref{wl},
the wirelength is written as a summation related to function $\theta$, i.e.,
\begin{equation}\label{wlG}
WL(Q_n,\mathscr{G};f)=\sum_{i=1}^{k}\sum_{j=1}^{q_i}\theta(n,f^{-1}(\mathscr{B}_{ij})).
\end{equation}
\subsection{The wirelength under Gray code embedding}\label{sub2}\ \
We deal with the wirelength under Gray code embedding in two cases:
one is that $\mathscr{G}_i$ is cycle $C_{2^{n_i}}$,
and the other is that $\mathscr{G}_i$ is path $P_{2^{n_i}}$.
In the following, set $1\le i \le k, 1\le j \le q_i$.
\begin{lem}\label{BWL1}\
If $\mathscr{G}_i$ is cycle $C_{2^{n_i}}$, then we have that
\begin{equation*}
\sum_{j=1}^{q_i}\theta(n,\xi_{n_1\ldots n_k}^{-1}(\mathscr{B}_{ij}))=
2^{n-n_i}(3\cdot 2^{2n_i-3}-2^{n_i-1}).
\end{equation*}
\end{lem}
\begin{proof} By the Notation \eqref{Bij}, we have that
\begin{equation*}
\begin{array}{rcl}
\xi_{n_1\ldots n_k}^{-1}(\mathscr{B}_{ij})&=&
\xi_{n_1\ldots n_k}^{-1}(N_{2^{n_1}}\times \cdots \times F_j^{n_i}\times \ldots \times N_{2^{n_k}})\\
&=&V(Q_{n_1})\times \ldots\times\xi_{n_i}^{-1}(F_j^{n_i})
\times\ldots\times V(Q_{n_k})\\
&=&V(Q_{n_1+\ldots+n_{i-1}})\times
\xi_{n_i}^{-1}(F_j^{n_i})\times V(Q_{n_{i+1}+\ldots+n_k}).
\end{array}
\end{equation*}
Moreover, by Lemma \ref{swap}, we have that
\begin{equation*}
\begin{array}{rcl}
&&\theta(n,V(Q_{n_1+\ldots+n_{i-1}})\times
\xi_{n_i}^{-1}(F_j^{n_i})\times V(Q_{n_{i+1}+\ldots+n_k}))\\
&=&\theta(n,\xi_{n_i}^{-1}(F_j^{n_i})\times
V(Q_{n_{i+1}+\ldots+n_k})\times V(Q_{n_1+\ldots+n_{i-1}}))\\
&=&\theta(n,\xi_{n_i}^{-1}(F_j^{n_i})\times V(Q_{n-n_i})).
\end{array}
\end{equation*}
Therefore, Lemma \ref{BWL1} follows from (\ref{cpwl1+2}a).
\end{proof}
Similarly, we obtain the following lemma.
\begin{lem}\label{BWL2}
If $\mathscr{G}_i$ is path $P_{2^{n_i}}$, then we have that
\begin{equation*}
\sum_{j=1}^{q_i}\theta(n,\xi_{n_1\ldots n_k}^{-1}(\mathscr{B}_{ij}))=
2^{n-n_i}(2^{2n_i-1}-2^{n_i-1}).
\end{equation*}
\end{lem}
Combining \eqref{wlG}, Lemma \ref{BWL1} and Lemma \ref{BWL2}, we get the wirelength under Gray code embedding of hypercube into Cartesian product of paths and/or cycles. That is
\begin{equation*}\label{wlgray}
WL(Q_n,\mathscr{G};\xi_{n_1\ldots n_k})=\sum_{i=1}^{k}\mathscr{L}_i,
\end{equation*}
where $\mathscr{L}_i$ is defined in Theorem \ref{carthm}.
\subsection{Minimum wirelength}\label{sub3}\
We show that
the wirelength under the Gray code embedding is a lower bound for the wirelength of the hypercube into a Cartesian product of paths and/or cycles.
According to \eqref{wlG}, it suffices to prove the following lemma.
\begin{lem}\label{finalieq}
Let $f: \{0,1\}^n\rightarrow N_{2^{n_1}}\times \cdots \times N_{2^{n_k}}$ be an embedding of $Q_n$ into $\mathscr{G}$, then
\begin{equation*}
\sum_{i=1}^{k}\sum_{j=1}^{q_i}\theta(n,f^{-1}(\mathscr{B}_{ij}))\ge
\sum_{i=1}^{k}\sum_{j=1}^{q_i}\theta(n,\xi_{n_1\cdots n_k}^{-1}(\mathscr{B}_{ij})).
\end{equation*}
\end{lem}
\begin{proof}
To prove this lemma, we only consider that $i=1$, since a
similar argument works for the other $2\le i\le k$.
\noindent$\bullet$ \textbf{Case 1.}\ \
$\mathscr{G}_1=C_{2^{n_1}}$.
For $1\le j \le q_1=2^{n_1-1}$,
$f^{-1}(\mathscr{B}_{1j})=f^{-1}(F_j^{n_1}\times N_{2^{n_2}}\times \cdots \times N_{2^{n_k}})$.
Define a bijective map $f_1$ from $N_{2^{n_1}}\times N_{2^{n_2}}\times \cdots \times N_{2^{n_k}}$ to $N_{2^{n_1}}\times N_{2^{n-n_1}}$, where
\begin{equation*}\label{f1}
f_1(x_1,x_2,\cdots,x_k)=(x_1, x_k+\sum_{i=2}^{k-1}(x_i-1)2^{\sum_{j=i+1}^{k}n_j}).
\end{equation*}
It is clear that $f_1(F_j^{n_1}\times N_{2^{n_2}}\times \cdots \times N_{2^{n_k}})=F_j^{n_1}\times N_{2^{n-n_1}}$.
Moreover, we have that
\begin{equation*}
f^{-1}(\mathscr{B}_{1j})=f^{-1}\circ f_1^{-1}(F_j^{n_1}\times N_{2^{n-n_1}})=
(f_1\circ f)^{-1}(F_j^{n_1}\times N_{2^{n-n_1}}).
\end{equation*}
Notice that $f_1\circ f$ is an arbitrary map from $\{0,1\}^n$ to $N_{2^{n_1}}\times N_{2^{n-n_1}}$; then, by (\ref{cp1+2}a), we have that
$\sum_{j=1}^{q_1} \theta(n,f^{-1}(\mathscr{B}_{1j}))\ge
\sum_{j=1}^{q_1} \theta(n,\xi_{n_1}^{-1}(F_j^{n_1})\times V(Q_{n-n_1}))$. Therefore, we conclude that
\begin{equation}\label{B1p}
\sum_{j=1}^{q_1} \theta(n,f^{-1}(\mathscr{B}_{1j}))\ge
\sum_{j=1}^{q_1} \theta(n,\xi_{n_1\cdots n_k}^{-1}(\mathscr{B}_{1j})).
\end{equation}
\noindent$\bullet$ \textbf{Case 2.}\ \
$\mathscr{G}_1=P_{2^{n_1}}$.
By a similar analysis, we also get \eqref{B1p}.
Combining \textbf{Case 1} and \textbf{Case 2}, the case for $i=1$ is proved. Thus the lemma holds.
\end{proof}
\noindent \textbf{Proof of Theorem \ref{carthm}.} Theorem \ref{carthm} follows from Subsection \ref{sub1} to \ref{sub3}.
\noindent\textbf{Acknowledgements}
The author is grateful to Prof. Qinghui Liu for his thorough review and suggestions.
This work is supported by the National Natural Science Foundation of China, No.11871098.
|
{
"arxiv_id": "2302.13216",
"language": "en",
"timestamp": "2023-02-28T02:13:13",
"url": "https://arxiv.org/abs/2302.13216",
"yymm": "2302"
} | \section{Introduction}
In this paper we develop the Koszul duality theory for differential algebras with nonzero weight by determining the minimal model and the Koszul homotopy dual cooperad of the corresponding operad as well as the $L_\infty$-structure on the deformation complex, thereby solving a question raised by Loday~\mcite{Lod}.
\vspb
\subsection{Differential algebras with weight}\
For a fixed scalar $\lambda$, a \name{differential operator of weight $\lambda$}~\cite{GK} (called $\lambda$-derivation in~\cite{Lod}) is a linear operator $d=d_\lambda$ on an associative algebra $A$ satisfying the operator identity
\vspa
\begin{equation} \mlabel{eq:diffwt}
d(uv)=d(u)v+ud(v)+\lambda d(u)d(v), \quad u, v\in A.
\vspa
\end{equation}
When $\lambda=0$, this is the Leibniz rule of the derivation in analysis. When $\lambda\neq 0$, this is the operator identity satisfied by the difference quotient $d_\lambda(f)(x):=(f(x+\lambda)-f(x))/\lambda$ appearing in the definition of the derivative. The special cases $\lambda=\pm 1$ give the forward and backward difference operators in numerical analysis and other areas. An associative algebra with a differential operator of weight $\lambda$ is called a \name{differential algebra of weight $\lambda$}.
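As a quick sanity check (our own illustration, not part of the theory developed below), the identity \meqref{eq:diffwt} can be verified numerically for the difference quotient $d_\lambda$ acting on real-valued functions.
\begin{verbatim}
# Numerical check that d(uv) = d(u)v + u d(v) + lambda d(u) d(v) holds for
# the difference quotient d(f)(x) = (f(x + lam) - f(x)) / lam.
lam = 0.3

def d(f):
    return lambda x: (f(x + lam) - f(x)) / lam

u = lambda x: x ** 2 + 1.0
v = lambda x: 3.0 * x - 2.0
uv = lambda x: u(x) * v(x)

x = 1.7
lhs = d(uv)(x)
rhs = d(u)(x) * v(x) + u(x) * d(v)(x) + lam * d(u)(x) * d(v)(x)
print(abs(lhs - rhs) < 1e-9)   # True (up to floating point)
\end{verbatim}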
Traditionally, a differential algebra refers to a differential commutative algebra or field of weight zero, which originated nearly a century ago in the pioneering work of Ritt~\mcite{Rit1} on the algebraic study of differential equations.
Through the works of Kaplansky, Kolchin, Magid, Singer and many others~\mcite{Kap,Kol,Mag,PS}, the subject has evolved into a well-established area in mathematics. Its theoretical directions include differential Galois theory, differential algebraic geometry and differential algebraic groups. Its importance is also reflected by its connections with broad areas from arithmetic geometry and logic to computer science and mathematical physics~\mcite{CM,FLS,MS,Wu2}.
A commutative differential algebra naturally gives rise to a Novikov algebra by the well-known work of Gelfand and Dorfman~\mcite{GD}. It also gives rise to a Novikov-Poisson algebra and a transposed Poisson algebra~\mcite{BBGW,KSO,LB,Xu}.
More generally, differential algebras without the commutativity condition have attracted the interest of many recent studies. Derivations on path algebras and universal enveloping algebras of differential Lie algebras were studied in~\mcite{GL,Poi,Poi1}.
From the categorical viewpoint, differential categories were introduced and then studied in~\mcite{BCLS,CL,Lem}.
In~\mcite{ATW}, the notion of a differential algebra was generalized to non(anti)commutative superspace by deformation in the study of instantons in string theory.
\vspb
\subsection{Cohomology and deformations of differential algebras}\
Beginning with the pioneering works of Gerstenhaber for associative algebras and of Nijenhuis and Richardson for Lie algebras~\mcite{Ge1,Ge2,NR}, deformation, cohomology and homotopy theory have been fundamental in understanding algebraic structures.
A general philosophy, as evolved from ideas of Gerstenhaber, Nijenhuis, Richardson, Deligne, Schlessinger, Stasheff, Goldman, Millson etc, is that the deformation theory of any given
mathematical object can be described by
a certain differential graded (dg) Lie algebra, or more generally an $L_\infty$-algebra, associated to the
mathematical object (whose underlying complex is called the deformation complex). This philosophy has been made into a theorem in characteristic zero by Lurie \mcite{Lur} and Pridham \mcite{Pri10}, expressed in terms of infinity categories. It is an important problem to explicitly construct the dg Lie algebra or $L_\infty$-algebra governing the deformation theory of the given mathematical object. For algebraic structures (operads) with binary operations, especially the binary quadratic operads, there is a general theory in which the deformations and homotopy of an algebraic structure are controlled by a dg Lie algebra naturally coming from the operadic Koszul duality for the given algebraic structure; see~\cite{GJ94,GK94} for the original literature and~\mcite{LV,MSS} for general treatises.
Studies of cohomology and homotopy for algebra structures with linear operators have become very active recently. The structures include differential operators and Rota-Baxter operators on associative and Lie algebras~\mcite{GLSZ,PSTZ,TBGS1,TFS}. Due to the complexity of the algebraic structures, the $L_\infty$-algebras in these studies are obtained by direct constructions without using the language of operads.
The first operadic study of algebras with linear operators is the 2010 work of Loday~\mcite{Lod} on differential algebras of weight zero. Since this operad is quadratic, it can be treated by applying the above-mentioned general theory of Koszul duality~\mcite{GJ94,GK94}. In particular, Loday obtained the corresponding minimal model and cohomology theory. For further studies, see~\mcite{DL16,KS06,LT13}.
For the case of nonzero weight, Loday made the following observation, where the operad $\lambda$-AsDer is just the operad of differential algebras of weight $\lambda$.
\begin{quote}
If the parameter $\lambda$ is different from 0, then the operad $\lambda$-AsDer is not a quadratic operad since the term $d(a)d(b)$ needs three generating operations to
be defined. So one needs new techniques to extend Koszul duality to this case.
\end{quote}
The goal of this paper is to address this problem observed by Loday.
\vspb
\subsection{Our approach and layout of the paper}\
One of the most important consequences of the general Koszul duality theory for a quadratic operad is that the theory directly gives the minimal model \mcite{GK94, GJ94}, that is, the cobar construction of the Koszul dual cooperad. Then the deformation complex as well as its $L_\infty$-algebra structure (in fact a dg Lie algebra in the Koszul case) could be deduced from the minimal model \mcite{KS00}.
In the absence of the quadratic property for differential algebras with nonzero weight, the existing general Koszul duality theory does not apply, as noted by Loday above. The purpose of this work is to provide such a theory for differential algebras with nonzero weight, in the hope of shedding light on a general theory beyond quadratic operads.
To get started, we use a previous work~\mcite{GLSZ}, in which a cohomology theory of differential algebras with nonzero weight was constructed from a specifically constructed cochain complex, and was shown to control the formal deformations as well as abelian extensions of the differential algebras. We first determine the $L_\infty$-algebra structure on this deformation complex, in an ad hoc way, such that differential algebras with weight are realized as Maurer-Cartan elements for the $L_\infty$-algebra. This suggests that the $L_\infty$-algebra should come from the minimal model of the operad $\Dif$ of differential algebras with weight. Thus instead of verifying the $L_\infty$ property directly, we put this $L_\infty$-algebra in a suitable operadic context that is comparable to the weight zero case obtained in Loday's work~\mcite{Lod}.
We introduce homotopy differential algebras with nonzero weight and show that their operad is indeed the minimal model of the operad $\Dif$. As a byproduct, we obtain the Koszul dual homotopy cooperad of $\Dif$.
\smallskip
The outline of the paper is as follows.
In \S\mref{sec:difinf}, after recalling in \S\mref{sec:cohomologyda} the needed notions and background on the cohomology theory developed in \mcite{GLSZ}, we propose an $L_\infty$ structure on the deformation complex for differential algebras with nonzero weight (Theorem~\mref{th:linfdiff}) in \S\mref{ss:difinf}. In \S\mref{ss:mce}, we show that this $L_\infty$-algebra structure is the right one that realizes differential algebras with weight as the Maurer-Cartan elements (Proposition~\mref{pp:difopcochain}), while holding off the verification of the $L_\infty$ property to the end of paper, until the operadic tools are developed.
In \S\mref{sec:dualhomocoop}, once the required basics on homotopy (co)operads are collected in \S\mref{ss:homocood}, we construct in \S\mref{ss:dual} a homotopy cooperad $\Dif^\ac$, called the Koszul dual homotopy cooperad of the operad $\Dif$ of differential algebras with weight because its cobar construction is shown to be the minimal model of $\Dif$ (Theorem~\mref{thm:difmodel}).
In \S\mref{sec:homomodel}, we first introduce the dg operad $\Difinfty$ of homotopy differential algebras with weight in \S\mref{ss:operad}. We establish in \S\mref{sec:model} that the dg operad $\Difinfty$ is the minimal model of $\Dif$ (Theorem~\mref{thm:difmodel}). We give the notion of homotopy differential algebras and their explicit description (Definition~\mref{de:homodifalg} and Proposition~\ref{de:homodifalg2}) in \S\mref{ss:homo}.
In \S\mref{ss:modelinf}, we determine in \S\mref{ss: Linifty} the $L_\infty$-algebra coming from the Koszul dual homotopy cooperad and show that it coincides with the $L_\infty$-algebra of differential algebras with weight proposed in \S\mref{ss:difinf}. We also give equivalent descriptions of homotopy differential algebras (Propositions~\mref{de:homodifalg3} and \ref{de:homodifalg4}) in \S\mref{ss: another definition}.
\smallskip
\noindent
{\bf Notations.} Throughout this paper, let $\bfk$ be a field of characteristic $0$. Unless otherwise stated, vector spaces, linear maps and tensor products are taken over $\bfk$.
We will use the homological grading. For two graded spaces $V$ and $W$, let $V\otimes W$ (resp. $\Hom(V, W)$) denote the graded tensor space (resp. the space of graded linear maps).
\vspc
\section{The $L_\infty$-algebra controlling deformations of differential algebras} \mlabel{sec:difinf}
In this section, we first recall the cohomology theory of differential algebras with weight, as developed in \mcite{GLSZ} with some modification in sign convention. We then introduce an $L_\infty$-algebra such that the cochain complex of differential algebras can be realized as the shift of the underlying complex of the $L_\infty$-algebra obtained by the twisting procedure. As we will see in \S\mref{ss:modelinf}, this is exactly the $L_\infty$-algebra deduced from the minimal model of the operad governing differential algebras with weight.
\vspb
\subsection{Differential algebras and their cohomology}\mlabel{sec:cohomologyda}\
Let $A=(A, \mu_A, d_A)$ be a differential algebra of weight $\lambda$, with a multiplication $\mu_A$ and a differential operator $d_A$ of weight $\lambda$, as defined in Eq.~\meqref{eq:diffwt}.
A bimodule $M $ over the associative algebra $(A, \mu_A)$ is called a \name{differential bimodule over the differential algebra} $(A, \mu_A, d_A)$ if $M$ is further endowed with a linear transformation
$d_M: M\to M$ such that for all $a,b\in A$ and $x\in M,$
\begin{eqnarray*}
d_M (ax)&=&d_A(a)x+ad_M (x)+\lambda d_A(a)d_M (x),\\
d_M (xa)&=&x d_A(a)+d_M (x)a+\lambda d_M (x)d_A(a).
\end{eqnarray*}
The regular bimodule $A$ is obviously a differential bimodule over itself, called the \name{regular differential bimodule} of the differential algebra $A$.
Given a differential bimodule $M =(M, d_M )$ over the differential algebra $A=(A, \mu_A, d_A)$, a new differential bimodule structure on $M$, with the same derivation $d_M$, is given by
\begin{equation}\mlabel{eq:newdifbim}
a\vdash v=(a+\lambda d_A(a))x,\quad x\dashv a=x(a+\lambda d_A(a)),\quad a \in A, x\in M.
\end{equation}
For distinction, we let ${}_\vdash M _{\dashv}$ denote this new differential bimodule structure over $A=(A, \mu_A, d_A)$; for details, see \cite[Lemma 3.1]{GLSZ}.
Recall that the Hochschild cochain complex $\C^{\bullet}_\Alg(A, M )$ of an associative algebra $A$ with coefficients in a bimodule $M$ is the cochain complex
$$(\C^{\bullet}_\Alg(A, M ):=\bigoplus_{n=0}^\infty \C^n_\Alg(A,M ),\partial_{\Alg}^{\bullet}),$$ where for $n\geqslant 0$, $\C^n_\Alg(A,M )=\Hom(A^{\otimes n}, M )$ (in particular, $\C^0_\Alg(A,M )=M $) and the coboundary operator $$\partial_{\Alg}^n: \C^n_\Alg(A, M )\longrightarrow \C^{n+1}_\Alg(A, M ), n\geqslant 0,$$
\[
\partial_{\Alg}^n(f)(a_{1, n+1}):=(-1)^{n+1} a_1 f(a_{2, n+1})+\sum_{i=1}^n(-1)^{n+1-i}f(a_{1, i-1}\ot a_ia_{i+1}\ot a_{i+2, n+1})
+f(a_{1, n}) a_{n+1}
\]
for $f\in \C^n_\Alg(A, M ),~a_1,\dots, a_{n+1}\in A$, where for $1\leqslant i\leqslant j\leqslant n+1$, we write
$a_{i, j}:=a_i\ot \cdots \ot a_j$ (by convention, $a_{i, j}=\emptyset$ for $i>j$).
The corresponding Hochschild cohomology is denoted by $\rmH^{\bullet}_\Alg(A, M )$.
We write $\rmH^{\bullet}_\Alg(A):=\rmH^{\bullet}_\Alg(A, M)$ when $M $ is taken to be the regular bimodule $A$.
Let $M =(M, d_M )$ be a differential bimodule over a differential algebra $A=(A, \mu_A, d_A)$. The Hochschild cochain complex $C^{\bullet}_\Alg(A, {}_\vdash M _{\dashv})$ of the associative algebra $A=(A, \mu_A)$ with coefficients in the new bimodule ${}_\vdash M _{\dashv}$ is called the \name{cochain complex of the differential operator $d_A$} with coefficients in the differential bimodule $M $, denoted by $(C^{\bullet}_\DO(A, M ), \partial_{\DO}^{\bullet})$. More precisely, for $g\in C^n_\DO(A, M )\coloneqq \Hom_\bfk(A^{\ot n}, M )$ and $a_1,\dots,a_{n+1}\in A$,
we have
\[\begin{split}
\partial_{\DO}^n g(a_{1, n+1})\coloneqq &(-1)^{n+1} a_1 \vdash g(a_{2, n+1})+ \sum_{i=1}^n(-1)^{n+1-i}g(a_{1, i-1}\ot a_ia_{i+1} a_{i+2, n+1})
+ g(a_{1, n})\dashv a_{n+1}\\
=&(-1)^{n+1}(a_1+\lambda d_A(a_1))g(a_{2, n+1})+\sum_{i=1}^n(-1)^{n+1-i}g(a_{1, i-1}\ot a_ia_{i+1} a_{i+2, n+1})\\
&+ g(a_{1, n}) (a_{n+1}+\lambda d_A(a_{n+1})).
\end{split}\]
We write $(C^{\bullet}_\DO(A), \partial_{\DO}^{\bullet})\coloneqq (C^{\bullet}_\DO(A, M ), \partial_{\DO}^{\bullet})$ by taking the differential bimodule $M $ to be the regular differential bimodule $A$.
The cohomology of the cochain complex $(C^{\bullet}_{\DO}(A, M ), \partial_{\DO}^{\bullet})$, denoted by $\rmH^{\bullet}_\DO(A, M )$, is called the \name{cohomology of the differential operator $d_A$} with coefficients in the differential bimodule $M$. When the differential bimodule $M $ is the regular differential bimodule $A$, write $\rmH^{\bullet}_\DO(A):=\rmH^{\bullet}_\DO(A, M)$.
\smallskip
As in \cite[Proposition 3.3]{GLSZ}, define a chain map $\Phi^{\bullet}: C^{\bullet}_\Alg(A,M ) \to C^{\bullet}_\DO (A, M )$ as follows:
for $f\in C^n_\Alg(A,M )$ with $ n\geq 1$, define
\[\begin{split}
\Phi^n(f)(a_{1, n})\coloneqq &\sum_{k=1}^n\lambda^{k-1}\sum_{1\leq i_1<\cdots<i_k\leq n}f(a_{1, i_1-1}\ot d_A(a_{i_1})\ot a_{i_1+1, i_2-1}\ot d_A(a_{i_2})\ot \cdots \ot d_A(a_{i_k}) \ot a_{i_k+1, n})\\
&-d_M (f(a_{1, n})),
\end{split}\]
and $$ \Phi^0(x)\coloneqq - d_M (x), \quad \ x\in C^0_\Alg(A,M )=M .$$
Let $(C_{\DA}^{\bullet}(A,M ), \partial_{\DA}^{\bullet})$ be the negative shift of the mapping cone of $\Phi^{\bullet}$. More precisely,
$$
C_{\DA}^n(A,M )\coloneqq
\begin{cases}
C^n_\Alg(A,M )\oplus C^{n-1}_\DO(A, M ),&n\geq1,\\
C^0_\Alg(A,M )=M ,&n=0,
\end{cases}
$$
and the differentials $\partial_{\DA}^n: C_{\DA}^n(A,M )\rar C_{\DA}^{n+1}(A,M )$ are defined by
$$
\partial_{\DA}^n(f,g) := ( \partial^n_{\Alg} (f), -\partial_\DO^{n-1} (g)- \Phi^n(f)), \quad f\in C^n_\Alg(A, M ),\,g\in C^{n-1}_\DO(A, M ), n\geq 1,$$
$$\partial_{\DA}^0 (x) := ( \partial^0_{\Alg} (x), -\Phi^0( x)), \quad
x\in C^0_\Alg(A,M )=M. $$
The complex $(C_{\DA}^{\bullet}(A,M ), \partial_{\DA}^{\bullet})$ is called the \name{cochain complex of the differential algebra $(A,d_A)$} with coefficients in the differential bimodule $M$. When the differential bimodule $M $ is the regular differential bimodule $A$, we write $(C^{\bullet}_\DA(A), \partial_{\DA}^{\bullet}):=(C^{\bullet}_\DA(A, M ), \partial_{\DA}^{\bullet})$.
The cohomology of the cochain complex $(C_{\DA}^{\bullet}(A, M ),\partial_{\DA}^{\bullet})$, denoted by $\rmH_{\DA}^{\bullet}(A, M )$, is called the \name{cohomology of the differential algebra} $A=(A, \mu_A, d_A)$ with coefficients in the differential bimodule $M =(M, d_M )$.
It is shown in~\mcite{GLSZ} that this cohomology theory controls formal deformations as well as abelian extensions of differential algebras.
We will further clarify this connection theoretically by using the $L_\infty$-algebra structure on the deformation complex in \S\mref{ss:difinf}; see Propositions~\mref{pp:infcohcomp} and \mref{pp:difopcochain}.
\vspb
\subsection{ $L_\infty$-algebra on the deformation complex from differential algebras}\
\mlabel{ss:difinf}
We first recall some basics on $L_\infty$-algebras; for more details, see \mcite{Get09,LS93,LM,Sta92}.
Let $V=\oplus_{n\in \mathbb{Z}} V_n$ be a graded vector space. The graded symmetric algebra $S(V)$ of $V$ is defined to be the quotient of the tensor algebra $T(V)$ by the two-sided ideal $I$ generated by
$x\ot y -(-1)^{|x||y|}y\ot x$ for all homogeneous elements $x, y\in V$. For $x_1\ot\cdots\ot x_n\in V^{\ot n}\subseteq T(V)$, write $ x_1\odot x_2\odot\dots\odot x_n$ for its image in $S(V)$.
For homogeneous elements $x_1,\dots,x_n \in V$ and $\sigma\in S_n$, the Koszul sign $\epsilon(\sigma; x_1,\dots, x_n)$ is defined by the equality
$$x_1\odot x_2\odot\dots\odot x_n=\epsilon(\sigma; x_1,\dots,x_n)x_{\sigma(1)}\odot x_{\sigma(2)}\odot\dots\odot x_{\sigma(n)}\in S(V).$$
We also define
$$ \chi(\sigma; x_1,\dots,x_n)= \sgn(\sigma)\ \epsilon(\sigma; x_1,\dots,x_n),$$
where $\sgn(\sigma)$ is the sign of the permutation $\sigma$.
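Concretely, the Koszul sign can be computed by moving the factors past one another one adjacent transposition at a time. The following small sketch (our own illustration) computes $\epsilon(\sigma; x_1,\dots,x_n)$ and $\chi(\sigma; x_1,\dots,x_n)$ from the degrees $|x_i|$.
\begin{verbatim}
# Compute the Koszul sign epsilon and chi = sgn(sigma) * epsilon from degrees.
def koszul_signs(sigma, degrees):
    """sigma: permutation as a 1-based list [sigma(1), ..., sigma(n)]."""
    order = list(range(1, len(degrees) + 1))   # start from (x_1, ..., x_n)
    eps, sgn = 1, 1
    # bubble the factors into the order (x_{sigma(1)}, ..., x_{sigma(n)})
    for target_pos, idx in enumerate(sigma):
        cur = order.index(idx)
        while cur > target_pos:
            a, b = order[cur - 1], order[cur]
            eps *= (-1) ** (degrees[a - 1] * degrees[b - 1])  # graded swap
            sgn *= -1                                         # plain transposition
            order[cur - 1], order[cur] = b, a
            cur -= 1
    return eps, sgn * eps   # (epsilon, chi)

# Swapping two odd-degree elements: epsilon = -1 and chi = +1.
print(koszul_signs([2, 1], [1, 1]))   # (-1, 1)
\end{verbatim}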
\begin{defn}\mlabel{Def:L-infty}
An \name{$L_\infty$-algebra} is a graded space $L=\bigoplus\limits_{i\in\mathbb{Z}}L_i$ endowed with a family of linear operators $l_n:L^{\ot n}\rightarrow L, n\geqslant 1$ with degree $|l_n|=n-2$ subject to the following conditions. For $n\geq 1$ and homogeneous elements $x_1,\dots,x_n\in L$,
\begin{enumerate}
\item (generalized anti-symmetry) $$l_n(x_{\sigma(1)}\ot \cdots \ot x_{\sigma(n)})=\chi(\sigma; x_1,\dots, x_n)\ l_n(x_1 \ot \cdots \ot x_n), \ \sigma\in S_n;$$
\item (generalized Jacobi identity)
$$\sum\limits_{i=1}^n\sum\limits_{\sigma\in \Sh(i,n-i)}\chi(\sigma; x_1,\dots,x_n)(-1)^{i(n-i)}l_{n-i+1}(l_i(x_{\sigma(1)}\ot \cdots \ot x_{\sigma(i)})\ot x_{\sigma(i+1)}\ot \cdots \ot x_{\sigma(n)})=0,$$
where $\Sh(i,n-i)$ is the set of $(i,n-i)$
shuffles, that is, those $\sigma\in S_n$ such that
$$\sigma(1)<\cdots<\sigma(i)\ \ \mathrm{and}\ \ \sigma(i+1)<\cdots<\sigma(n).$$
\end{enumerate}
\end{defn}
In particular, if $l_n=0$ for all $n\geqslant 3$, then $(L,l_1,l_2)$ is called a \name{differential graded (=dg) Lie algebra}; if $l_n=0$ except $n=2$, then $(L, l_2)$ is called a \name{graded Lie algebra}.
Let $V=\sum\limits_{i\in \mathbb{Z}}V_i$ be a graded space. Let $sV$ denote the suspension of $V$ and $s^{-1}V$ the desuspension.
Denote
$$ \mathfrak{C}_{\Alg}(V):=\Hom(T(sV), sV).$$ Given homogeneous elements $f\in \Hom((sV)^{\ot m},V)$ with $m\geqslant 1$ and $ g \in \Hom((sV)^{\ot n},V)$ with $n\geqslant 0$, for each $1\leqslant i\leqslant m$, write
$$sf\circ_i sg:=sf\circ (\Id^{\otimes (i-1)}\ot sg \ot \Id^{\otimes (m-i)}),$$
and $sf\{sg\}:=\sum_{i=1}^m sf\circ_i sg$; when $m=0$, $sf\{sg\}$ is defined to be $0$. The Gerstenhaber bracket~\mcite{Ge1} of $sf$ and $sg$
is defined by
\begin{eqnarray}\mlabel{eq:gers} [sf,sg]_G:=sf\{sg\}-(-1)^{(|f|+1)(|g|+1)}sg\{sf\}.\end{eqnarray}
A celebrated result of Gerstenhaber \mcite{Ge1} states that
the graded space $\mathfrak{C}_{\Alg}(V)$ endowed with the Gerstenhaber bracket is a graded Lie algebra.
The following facts are quoted for later applications.
\begin{enumerate}
\item
Let $V$ be an ungraded space considered as a graded space concentrated in degree 0. Then there is a bijection between the set of Maurer-Cartan elements in the graded Lie algebra $\mathfrak{C}_{\Alg}(V)$ and the set of associative algebra structures on the space $V$, where the correspondence is induced by
\begin{eqnarray}\mlabel{eq:iso1} \Hom((sV)^{\ot n}, sV) \simeq \Hom(V^{\ot n}, V), f\mapsto \tilde{f}:= s^{-1}\circ f \circ s^{\ot n}, \quad f\in \Hom((sV)^{\ot n}, sV).
\end{eqnarray}
\item Let $(A, \mu)$ be an associative algebra and $\alpha$ be the corresponding Maurer-Cartan element in the graded Lie algebra $\mathfrak{C}_{\Alg}(A)$. Then the underlying complex of the twisted dg Lie algebra $(\mathfrak{C}_{\Alg}(A), l_1^\alpha, l_2^\alpha)$ is exactly $s\C^{\bullet}_\Alg(A)$, the shift of the Hochschild cochain complex of the associative algebra $A$.
\end{enumerate}
Let $V$ be a graded space. Write
\[ \mathfrak{C}_{\DO}(V):=\Hom(T(sV),V) \] and define the graded space
\begin{equation}\mlabel{eq:linfdiff}
\mathfrak{C}_{\DA}(V):=\mathfrak{C}_{\Alg}(V)\oplus \mathfrak{C}_{\DO}(V)=\Hom(T(sV),sV)\oplus \Hom(T(sV),V).
\end{equation}
Now we introduce an $L_\infty$-algebra structure on the graded space $\mathfrak{C}_{\DA} (V)$ by the following process:
\begin{enumerate}
\item For $sf, sg \in \mathfrak{C}_{\Alg}(V)$, define
\[l_2(sf\ot sg)=[sf,sg]_G\in \mathfrak{C}_{\Alg}(V),\]
where $[-,-]_G$ is the Gerstenhaber Lie bracket in Eq.~\meqref{eq:gers}.
\item Let $sf \in \mathfrak{C}_{\Alg}(V)$ and $g\in \mathfrak{C}_{\DO}(V)$. Define
$$l_2(sf\ot g)=(-1)^{|f|+1}s^{-1}[sf,sg]_G\in \mathfrak{C}_{\DO}(V).$$
\item Let $sf\in \Hom((sV)^{\ot n}, sV)\subseteq\mathfrak{C}_{\Alg}(V)$ and $g_1,\dots,g_m\in \mathfrak{C}_{\DO}(V)$ with $2\leqslant m\leqslant n$. Define
\[l_{m+1}(sf\ot g_1\ot \cdots \ot g_m)=\lambda^{m-1}\sum_{\sigma\in S_m}(-1)^\xi s^{-1}\big(sf\{sg_{\sigma(1)},\dots,sg_{\sigma(m)}\}\big)\in \mathfrak{C}_{\DO}(V),\]
where $(-1)^\xi=\chi(\sigma;g_1,\dots,g_m)(-1)^{m(|f|+1)+\sum\limits_{k=1}^{m-1}\sum\limits_{j=1}^k|g_{\sigma(j)}|}$.
\item Let $sf\in \mathfrak{C}_{\Alg}(V)$ and $g_1,\dots,g_m\in \mathfrak{C}_{\DO}(V)$ with $ m \geqslant 1$. For $1\leqslant k\leqslant m$, define
\[l_{m+1}(g_1\ot \cdots\ot g_k \ot sf\ot g_{k+1}\ot \cdots \ot g_m )\in\mathfrak{C}_{\DO}(V)\]
to be
\[l_{m+1}(g_1\ot \cdots\ot g_k \ot sf\ot g_{k+1}\ot \cdots \ot g_m ):=(-1)^{(|f|+1)(\sum\limits_{j=1}^k|g_j|)+k}l_{n+1}(sf\ot g_1\ot \cdots \ot g_m),\]
where the right hand side has been defined in (ii) and (iii).
\item All other components of the operators $\{l_n\}_{n\geqslant 1}$ vanish.
\end{enumerate}
Then we can state one of our main results in this paper. \begin{thm}\mlabel{th:linfdiff}
Given a graded space $V$ and $\lambda\in \bfk$, the graded space $\mathfrak{C}_{\DA}(V)$ endowed with the operations $\{l_n\}_{n\geqslant 1}$ defined above is an $L_\infty$-algebra.
\end{thm}
The theorem can be proved by a direct but lengthy verification. Instead, we will present an operadic proof in the spirit of~\mcite{KS00,VdL02,VdL03} once we obtain the minimal model. We first sketch the proof below, with details supplied later. We then give an application in the next subsection, realizing differential algebras with weight as the Maurer-Cartan elements in this $L_\infty$-algebra.
\begin{proof} (A sketch)
The minimal model of the differential algebra operad $\Dif$ is obtained in Theorem~\mref{thm:difmodel}. It is shown in \S\mref{ss:modelinf} that $\mathfrak{C}_{\DA}(V)$ is exactly the $L_\infty$-structure resulting from this minimal model.
\end{proof}
\vspb
\subsection{Realising differential algebra structures as Maurer-Cartan elements}\
\mlabel{ss:mce}
We give an application of Theorem~\mref{th:linfdiff}. \begin{defn}\mlabel{de:linfmc}
An element $\alpha$ of degree $-1$ in an $L_\infty$-algebra $(L,\{l_n\}_{n\geqslant1})$ is called a \name{Maurer-Cartan element} if it satisfies the \name{Maurer-Cartan equation}:
\begin{eqnarray}\mlabel{eq:mce}\sum_{n=1}^\infty\frac{1}{n!}(-1)^{\frac{n(n-1)}{2}} l_n(\alpha^{\ot n})=0\end{eqnarray}
(in particular, this infinite sum needs to be well defined).
\end{defn}
For a dg Lie algebra $(L,l_1,l_2)$, Eq.~\meqref{eq:mce} reduces to
\begin{eqnarray}\mlabel{eq:dglamc} l_1(\alpha)-\frac{1}{2}l_2(\alpha\ot \alpha)=0.\end{eqnarray}
\begin{prop}[twisting procedure\cite{Get09}]\mlabel{pp:deflinf}
Let $\alpha$ be a Maurer-Cartan element in an $L_\infty$-algebra $(L,\{l_n\}_{n\geqslant1})$. A new $L_\infty$-structure $\{l_n^\alpha\}_{n\geqslant 1}$ on the graded space $L$ is defined by
\begin{equation}\mlabel{eq:twlinf}
\begin{split}
l_n^{\alpha}:\ &L^{\ot n}\rightarrow L, \\
l^\alpha_n(x_1\ot \cdots\ot x_n):=&\sum_{i=0}^\infty\frac{1}{i!}(-1)^{in+\frac{i(i-1)}{2}}l_{n+i}(\alpha^{\ot i}\ot x_1\ot \cdots\ot x_n),\ x_1, \dots, x_n\in L,
\end{split}
\end{equation}
whenever these infinite sums exist. The new $L_\infty$-algebra $(L, \{l_n^\alpha\}_{n\geqslant 1})$ is called the \name{twisted $L_\infty$-algebra} $($by the Maurer-Cartan element $\alpha$$)$.
\end{prop}
For a dg Lie algebra $(L,l_1,l_2)$ and a Maurer-Cartan element $\alpha\in L_{-1}$, Eq.~\meqref{eq:twlinf} reduces to
\begin{eqnarray}\mlabel{Eq: twist dgla} l_1^\alpha(x) = l_1(x)-l_2(\alpha\ot x), \ x\in L, \ \mathrm{and}\ l_2^\alpha = l_2.
\end{eqnarray}
We fix the isomorphism
\begin{eqnarray}\mlabel{eq:iso2}
\Hom((sV)^{\ot n}, V) \simeq \Hom(V^{\ot n}, V), g\mapsto \check{g}:= g \circ s^{\ot n}, \, g\in \Hom((sV)^{\ot n}, V).
\end{eqnarray}
The following proposition follows from general facts about $L_\infty$-structures obtained from minimal models; see the discussion at the end of \S\mref{ss:homocood}.
\begin{prop}\mlabel{pp:damc} Let $V$ be an ungraded space considered as a graded space concentrated in degree $0$. Then a differential algebra structure of weight $\lambda$ on $V $ is equivalent to a Maurer-Cartan element in the $L_\infty$-algebra $\mathfrak{C}_{\DA}(V)$ in Theorem~\mref{th:linfdiff}.
\end{prop}
The following result justifies the cohomology theory introduced in \mcite{GLSZ} and recalled in \S\mref{sec:cohomologyda}.
\begin{prop}\mlabel{pp:infcohcomp}
Let $(A,\mu, d)$ be a differential algebra of weight $\lambda$. Twist the $L_\infty$-algebra $\mathfrak{C}_{\DA}(A)$ by the Maurer-Cartan element corresponding to the differential algebra structure $(A,\mu,d)$. Then the underlying complex of the twisted $L_\infty$-algebra is exactly $s\C^\bullet_{\DA}(A)$, the suspension of the cochain complex of the differential algebra $(A, \mu, d)$.
\end{prop}
\begin{proof}
By Proposition~\mref{pp:damc}, the differential algebra structure $(A,\mu, d)$ is equivalent to a Maurer-Cartan element $\alpha=(m,\tau)$ in the $L_\infty$-algebra $\mathfrak{C}_{\DA}(A)$ with
$$m=-s\circ\mu\circ (s^{-1})^{\ot 2}: (sA)^{\ot 2}\rightarrow sA \ \mathrm{and}\ \tau=d\circ s^{-1}: sA\rightarrow A.$$
By Proposition~\mref{pp:deflinf}, the Maurer-Cartan element induces a new $L_\infty$-algebra structure $\{l_n^\alpha\}_{n\geqslant 1}$ on the graded space $\mathfrak{C}_{\DA}(A)$.
Given $(sf, g)\in\Hom((sA)^{\ot n}, sA)\oplus \Hom((sA)^{\ot (n-1)}, A) \subset \mathfrak{C}_{\Alg}(A)\oplus \mathfrak{C}_{\DO}(A)=\mathfrak{C}_{\DA}(A)$, note that by the isomorphisms \meqref{eq:iso1} and \meqref{eq:iso2},
we get $(\widetilde{sf}, \check{g})\in \C_{\DA}^{n}(A)$.
By definition, for $sf\in\Hom((sA)^{\ot n}, sA)\subset \mathfrak{C}_{\Alg}(A)$, we have
{\small$$l_1^\alpha(sf)=\sum_{i=0}^\infty\frac{1}{i!}(-1)^{i+\frac{i(i-1)}{2}}l_{i+1}(\alpha^{\ot i}\ot sf)
=\Big(-l_2(m\ot sf),\ \sum_{i=1}^n\frac{1}{i!}(-1)^{ \frac{i(i+1)}{2}}l_{i+1}(\tau^{\ot i}\ot sf)\Big).
$$}
The definition of $\{l_n\}_{n\geqslant 1}$ on $\mathfrak{C}_{\DA}(A)$ gives $-l_2(m\ot sf)=-[m,sf]_G$.
On the other hand, we have {\small\begin{align*}
& \sum_{i=1}^n\frac{1}{i!}(-1)^{ \frac{i(i+1)}{2}}l_{i+1}(\tau^{\ot i}\ot sf)\\
=& -l_{2}(\tau \ot sf)+\sum_{i=2}^{n}\frac{1}{i!}(-1)^{\frac{i(i+1)}{2}}l_{i+1}(\tau^{\ot i}\ot sf)\\
\stackrel{\rm (iv)}{=}&-(-1)^{|f|} l_{2}(sf\ot \tau)+\sum_{i=2}^{n}\frac{1}{i!}(-1)^{\frac{i(i+1)}{2}} (-1)^{i|f| }l_{i+1}(sf\ot \tau^{\ot i} )\\
\stackrel{\rm (ii)(iii) }{=} & s^{-1}[sf, s\tau]_G+\sum_{i=2}^n \frac{1}{i!}(-1)^{\frac{i(i+1)}{2}}(-1)^{i|f| } i! \lambda^{i-1} (-1)^{i(|f|+1)+\frac{i(i-1)}{2}}
s^{-1}(sf\{\underbrace{s\tau,\dots,s\tau}_{i}\})\\
= &s^{-1}[sf, s\tau]_G+ \sum_{i=2}^n \lambda^{i-1}
s^{-1}(sf\{\underbrace{s\tau,\dots,s\tau}_{i}\}).
\end{align*}}
For $g\in \Hom((sA)^{\ot (n-1)},A)\subset \mathfrak{C}_{\DO}(A)$, we have
{\small\begin{align*}
l_1^{\alpha}(g)=&\sum_{i=0}^\infty\frac{1}{i!}(-1)^{\frac{i(i+1)}{2}}l_{i+1}(\alpha^{\ot i}\ot g)\\
=&-l_2(m\ot g)-\frac{1}{2!}\Big(l_3(m\ot \tau\ot g)+l_3(\tau\ot m\ot g)\Big)\\
=& s^{-1}[m, sg]+\lambda s^{-1} m\circ (s\tau \ot sg)+\lambda s^{-1} m\circ (sg \ot s\tau). \end{align*}}
We obtain
{\small$$\begin{array}{rcl} l_1^\alpha (sf, g)
&=&\Big(-[m,sf]_G, s^{-1}[sf, s\tau]_G+ \sum_{i=2}^n \lambda^{i-1}
s^{-1}(sf\{\underbrace{s\tau,\dots,s\tau}_{i}\})\\
& & \quad +s^{-1}[m, sg]+\lambda s^{-1} m\circ (s\tau \ot sg)+\lambda s^{-1} m\circ (sg \ot s\tau)\Big),\end{array}$$}
which is easily seen to be in
correspondence with $- \partial_{\DA}^n(\widetilde{sf}, \check{g})$ via the fixed isomorphisms \meqref{eq:iso1} and \meqref{eq:iso2}. This completes the proof.
\end{proof}
Although $\mathfrak{C}_{\DA}(A)$ is only an $L_\infty$-algebra, once the associative algebra structure $\mu$ on $A$ is fixed, the graded subspace $\mathfrak{C}_{\DO}(A)$ carries a genuine differential graded Lie algebra structure, as the following result shows. Moreover, this result justifies the cohomology theory of differential operators.
\begin{prop}\mlabel{pp:difopcochain}
Let $(A,\mu)$ be an associative algebra. Then the graded space $\mathfrak{C}_{\DO}(A)$ can be endowed with a dg Lie algebra structure, of which the Maurer-Cartan elements are in bijection with the differential operators of weight $\lambda$ on $(A,\mu)$.
Given a differential operator $d$ of weight $\lambda$ on the associative algebra $(A, \mu)$, the dg Lie algebra $\mathfrak{C}_{\DO}(A)$ twisted by the Maurer-Cartan element corresponding to $d$ has its underlying complex being the cochain complex $\C^{\bullet}_\DO(A)$ of the differential operator $d$.
\end{prop}
\begin{proof} Regard $A$ as a graded space concentrated in degree 0. Define $m=- s\circ \mu\circ (s^{-1}\ot s^{-1}): (sA)^{\ot 2}\rightarrow sA$. Then it is easy to see that $\beta=(m,0)$ is naturally a Maurer-Cartan element in the $L_\infty$-algebra $\mathfrak{C}_{\DA}(A)$. We apply the twisting procedure by $\beta$ on $\mathfrak{C}_{\DA}(A)$.
By the construction of $\{l_n\}_{n\geqslant 1}$ on $\mathfrak{C}_{\DA}(A)$, the graded subspace $\mathfrak{C}_{\DO}(A)$ is closed under the action of the operators $\{l_n^\beta\}_{n\geqslant 1}$.
Since the arity of $m$ is $2$, the restriction of $l_n^\beta$ to $\mathfrak{C}_{\DO}(A)$ is $0$ for $n\geqslant 3$. Thus $(\mathfrak{C}_{\DO}(A),\{l_n^\beta\}_{n=1,2})$ forms a dg Lie algebra, which can be made explicit as follows. For $g\in \Hom((sA)^{\ot n},A)$ and $h\in \Hom((sA)^{\ot k},A)$, we have
{\small\begin{align*}
l_1^\beta(g)=&-l_2(m\ot g)= s^{-1} [m, sg]_G,\\
l_2^\beta(g\ot h)=&l_3(m\ot g\ot h)\\
=& \lambda (-1)^{|g|} s^{-1} m\circ (sg\ot sh)+\lambda (-1)^{|g||h|+1+|h|} s^{-1} m\circ (sh\ot sg)\\
=& \lambda (-1)^{n} s^{-1} m\circ (sg\ot sh)+\lambda (-1)^{nk+k+1} s^{-1} m\circ (sh\ot sg).
\end{align*}}
Since $A$ is concentrated in degree 0, we have $\mathfrak{C}_{\DO}(A)_{-1}=\Hom(sA,A)$. Take an element $\tau\in \Hom(sA,A)$. Then $\tau$ satisfies the Maurer-Cartan equation:
$$l_1^\beta(\tau)-\frac{1}{2}l_2^{\beta}(\tau\ot \tau)=0$$
if and only if
$$ s^{-1}[m, s\tau]_G+\lambda\ s^{-1} m\circ (s\tau\ot s\tau)=0.$$
Define $d=\tau\circ s:A\rightarrow A$. Under the isomorphisms \meqref{eq:iso1} and \meqref{eq:iso2}, the above equation is exactly the statement that $d$ is a differential operator of weight $\lambda$ on the associative algebra $(A,\mu)$.
Now let $d$ be a differential operator of weight $\lambda$ on the associative algebra $(A,\mu)$ and let $\tau=d\circ s^{-1}$ be the corresponding Maurer-Cartan element in the dg Lie algebra $(\mathfrak{C}_{\DO}(A),\{l_n^\beta\}_{n=1,2})$. For $g\in \Hom((sA)^{\ot n},A)$, a direct computation shows that
$\partial_\DO(\check{g})$ corresponds to $(l_1^\beta)^\tau(g)$ via the isomorphism \meqref{eq:iso2}. This proves the last statement.
\end{proof}
\begin{remark} In the course of proving the above proposition, the Lie bracket $(l_2^\beta)^\tau$ on $\C^{\bullet}_\DO(A)$ can also be given explicitly:
for $f\in \C^n_\DO(A)$ and $g\in \C^k_\DO(A)$,
$$[f, g](a_{1, n+k})= (-1)^n \lambda f(a_{1, n})g(a_{n+1, n+k})+ (-1)^{nk+k+1} \lambda g(a_{1, k})f(a_{k+1, n+k}),\quad a_1, \dots, a_{n+k}\in A.$$
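For instance, for $f, g\in \C^1_\DO(A)$, i.e., linear maps from $A$ to $A$, this specializes to $[f,g](a_1,a_2)=-\lambda\big(f(a_1)g(a_2)+g(a_1)f(a_2)\big)$.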
Note that when $\lambda=0$, the cochain complex $\C^{\bullet}_\DO(A)$ has the trivial Lie bracket.
\end{remark}
\vspc
\section{The Koszul dual homotopy cooperad}\mlabel{sec:dualhomocoop}
In this section, we will construct a homotopy cooperad $ \Dif^\ac$, which will be called the Koszul dual homotopy cooperad of the operad $\Dif$ of differential algebras with weight $\lambda$. In fact, we will show that the cobar construction of $ \Dif^\ac$ is the minimal model of $\Dif$ (Theorem~\mref{thm:difmodel}) which justifies the name.
\vspb
\subsection{Homotopy (co)operads and $L_\infty$-algebras}\mlabel{ss:homocood}\
We first collect the needed background on nonsymmetric homotopy (co)operads, which is scattered across several references \mcite{DP16, Mar96, MV09a, MV09b}. We also explain how to obtain $L_\infty$-structures from homotopy operads and, in particular, from convolution homotopy operads.
Recall that a graded collection $\calo=\{\calo(n)\}_{n\geqslant 1}$ is a family of graded spaces indexed by the positive integers, i.e., each $\calo(n)$ is itself a graded space. Elements of $\calo(n)$ are said to have arity $n$. The suspension of $\calo$, denoted by $s\calo$, is defined to be the graded collection $\{s\calo(n)\}_{n\geqslant 1}$. In the same way, one has the desuspension $s^{-1}\calo$ of the graded collection $\calo$.
We need some preliminaries about trees. As we only consider planar reduced trees (with roots at the bottom), we will simply call them trees.
For a tree $T$, denote the weight (=number of vertices) and arity (=number of leaves) of $T$ by $\omega(T)$ and $\alpha(T)$ respectively. Write $\mathfrak{T}$ to be the set of all trees with weight $\geqslant 1$ and, for $n\geqslant 1$, denote by $\mathfrak{T}^{(n)}$ the set of trees of weight $n$.
Since trees are planar, each vertex in a tree has a total order on its inputs going clockwise.
By the existence of the root, there is a naturally induced total order on the set of all vertices of a given tree $T\in \mathfrak{T}$, given by counting the vertices starting from the root clockwise along the tree. We call this order the \name{planar order}.
Let $T'$ be a divisor of $T$. Define $T/T'$ to be the tree obtained from $T$ by replacing $T'$ with a corolla (a tree with only one vertex) of arity $\alpha(T')$. There is a natural permutation $\sigma=\sigma(T, T')\in S_{\omega(T)}$ associated to the pair $(T, T')$, defined as follows. Let the ordered set $\{v_1<\dots<v_n\}$ be the sequence of all vertices of $T$ in the planar order and let $\omega(T')=j$. Let $v'$ be the vertex in $T/T'$ corresponding to the divisor $T'$ in $T$ and let $i$ be the serial number of $v'$ in $T/T'$ in the planar order, that is, there are $i-1$ vertices before $v'$. Then define $\sigma=\sigma(T, T')\in S_{n}$ to be the unique permutation which does not permute the vertices $v_1, \dots, v_{i-1}$, and such that the ordered set $\{v_{\sigma(i)}<\dots<v_{\sigma(i+j-1)}\}$ is exactly the planar ordered set of all vertices of $T'$ and the ordered set $\{v_{1}<\dots<v_{i-1}<v'<v_{\sigma(i+j)}<\dots< v_{\sigma(n)}\}$ is exactly the planar ordered set of all vertices in the tree $T/T'$.
Let $\calp=\{\calp(n)\}_{n\geqslant 1}$ be a graded collection. Let $T\in \mathfrak{T}$ and $\{v_1<\dots<v_n\}$ be the set of all vertices of $T$ in the planar order. Define $\calp^{\otimes T}$ to be $\calp(\alpha(v_1))\ot \cdots \ot \calp(\alpha(v_n))$; morally an element in $\calp^{\ot T}$ is a linear combination of the tree $T$ with each vertex $v_i$ decorated by an element of $\calp(\alpha(v_i))$.
\begin{defn} \mlabel{de:homood}
A \name{homotopy operad structure} on a graded collection $\calp=\{\calp(n)\}_{n\geqslant 1}$ consists of a family of operations $$\{m_T: \calp^{\ot T}\rightarrow \calp(\alpha(T))\}_{T\in \mathfrak{T}}$$ with $|m_T|=\omega(T)-2$ such that the equation
\[\sum\limits_{T'\subset{T}}(-1)^{i-1+jk}\sgn(\sigma(T,T'))\ m_{T/T'}\circ(\id^{\ot {i-1}}\ot m_{T'}\ot \id^{\ot k})\circ r_{\sigma(T,T')}=0\]
holds for all $T\in \mathfrak{T}$, where $T'$ runs through the set of all subtrees of $T$, $i$ is the serial number of the vertex $v'$ in $T/T'$, $j=\omega(T')$, $k=\omega(T)-i-j$, and
$r_{\sigma(T,T')}$ denotes the right action by $\sigma=\sigma(T,T')$, that is,
$$r_\sigma(x_1\ot \cdots\ot x_n)=\varepsilon(\sigma; x_1,\dots, x_{n})x_{\sigma(1)}\ot \cdots \ot x_{\sigma(n)}.$$
\end{defn}
Given two homotopy operads, a \name{strict morphism} between them is a morphism of graded collections compatible with all operations $m_T, T\in \mathfrak T$.
Let $\cali$ be the graded collection with $\cali(1)=\bfk$ and $\cali(n)=0$ for $n\ne1$. The collection $\cali$ can be endowed with a homotopy operad structure in a natural way: when $T$ is the tree with two vertices and one unique leaf, $m_T: \cali(1)\ot \cali(1)\to \cali(1)$ is given by the identity; for all other trees $T$, $m_T$ vanishes.
A homotopy operad $\calp$ is called \name{strictly unital} if there exists a strict morphism of homotopy operads $\eta: \cali\rightarrow \calp$ such that, for $n\geqslant 1$, the compositions $$\calp(n)\cong \calp(n)\ot \cali(1)\xrightarrow{\id\ot \eta}\calp(n)\ot \calp(1)\xrightarrow{m_{T_{1, i}}}\calp(n)$$ and
$$\calp(n)\cong \cali(1)\ot \calp(n)\xrightarrow{\eta\ot \id}\calp(1)\ot \calp(n)\xrightarrow{m_{T_2}} \calp(n)$$ are identity maps on $\calp(n)$, where $T_{1,i}$ with $1\leqslant i\leqslant n$ is the tree of weight $2$, arity $n$ with its second vertex having arity $1$ and connecting to the first vertex on its $i$-th leaf, and $T_2$ is the tree of weight $2$, arity $n$ whose first vertex has arity $1$.
A homotopy operad $\calp$ is called \name{augmented} if there exists a strict morphism of homotopy operads $\varepsilon: \calp\rightarrow \cali$ such that $\varepsilon \circ \eta=\id_{\cali}$.
Given a dg operad $\calp$, for each tree $T$, one can define the composition $m_\calp^T:\calp^{\ot T}\to \calp(\alpha(T))$ in $\calp$ along $T$ as follows: for $\omega(T)=1$, set $m_\calp^T:=\Id$; for $\omega(T)=2$, set $m_\calp^T:=m_T$;
for $\omega(T)\geqslant 3$, write $T$ as the grafting of a subtree $T'$, whose vertex set is that of $T$ except the last one, with the corolla whose unique vertex is just the last vertex of $T$ in the planar order, then set
$m_\calp^T:=m_{T/T'}\circ (m_\calp^{T'}\ot \Id) $, where $m_\calp^{T'}$ is obtained by induction.
Dualizing the definition of homotopy operads, one has the notion of homotopy cooperads.
\begin{defn}Let $\calc=\{\calc(n)\}_{n\geqslant 1}$ be a graded collection. A \name{homotopy cooperad structure} on $\calc$ consists of a family of operations $\{\Delta_T: \calc(\alpha(T))\rightarrow \calc^{\ot T }\}_{T\in\mathfrak{T}}$ with $|\Delta_T|=\omega(T)-2$ such that for each $c\in \calc$, $\Delta_T(c)=0$ for all but finitely many $T\in\frakt$, and the family of operations $\{\Delta_T\}_{T\in\frakt}$ satisfies the identity
$$\sum_{T'\subset T}\sgn(\sigma(T,T')^{-1})(-1)^{i-1+jk} r_{\sigma(T,T')^{-1}}\circ (\id^{\ot i-1}\ot \Delta_{T'}\ot \id^{\ot k})\circ \Delta_{T/T'}=0$$
for all $T\in\frakt$, where $T', i, j, k$ have the same meaning as for homotopy operads.
\end{defn}
The graded collection $\cali$ has a natural homotopy cooperad structure. It is obtained by taking $\Delta_T: \cali(1)\to \cali(1)\ot \cali(1)$ to be the identity when $T$ is the tree with two vertices and one unique leaf, and $\Delta_T=0$ for other $T$.
Dualizing the notions of being strictly unital and being augmented gives the notions of being \name{strictly counital} and \name{coaugmented}.
For a coaugmented homotopy cooperad $\calc$, the graded collection $\overline{\calc}=\ker(\varepsilon)$ endowed with the operations $\{\overline{\Delta}_T\}_{T\in\frakt}$ is naturally a homotopy cooperad, where $\overline{\Delta}_T$ is the restriction of the operation $\Delta_T$ to $\overline{\calc}$.
A homotopy cooperad $\cale=\{\cale(n)\}_{n\geqslant 1}$ such that $\Delta_T$ vanishes for all $T$ with $\omega(T)\geqslant 3$ is exactly a noncounital dg cooperad in the sense of Markl \mcite{Mar08}.
For a (noncounital) dg cooperad $\cale$, one can define the cocomposition $\Delta^{T}_\cale:\cale(\alpha(T))\to \cale^{\ot T}$ along a tree $T$ in duality of the composition $m^T_\calp$ along $T$ for a dg operad $\calp$.
\begin{prop-def} Let $\calc$ be a homotopy cooperad and $\cale$ be a dg cooperad. Then the graded collection $\calc\ot \cale$ with $(\calc\ot\cale)(n):=\calc(n)\ot \cale(n), n\geqslant 1$, has a natural structure of homotopy cooperad as follows:
\begin{enumerate}
\item For a tree $T\in \frakt$ of weight $1$ and arity $n$, define $$\Delta_T^H(c\ot e):=\Delta_T^\calc(c)\ot e+(-1)^{|c|}c\ot d_{\cale}e$$ for homogeneous elements $c\in \calc(n), e\in \cale(n)$;
\item For a tree $T$ of weight $n\geqslant 2$, define
$$\Delta_T^H(c\ot e):=(-1)^{\sum\limits_{k=1}^{n-1}\sum_{j=k+1}^n|e_k||c_j|}(c_1\ot e_1)\ot \cdots \ot (c_n\ot e_n)\in (\calc\ot \cale )^{\ot T},$$
with $c_1\ot \cdots\ot c_n=\Delta_T^\calc(c)\in \calc^{\ot T}$ and $e_1\ot \cdots \ot e_n=\Delta^T_\cale(e)\in \cale^{\ot T}$, where $\Delta^T_\cale$ is the cocomposition in $\cale$ along $T$.
\end{enumerate}
The new homotopy cooperad is called the \name{Hadamard product} of $\calc$ and $\cale$, denoted by $\calc\ot_{\rmH}\cale$.
\end{prop-def}
Define $\cals:=\mathrm{End}_{\bfk s}^c$ to be the graded cooperad whose underlying graded collection is given by $\cals(n)=\Hom((\bfk s)^{\ot n},\bfk s),n\geqslant 1$. Denote by $\delta_n$ the map in $\cals(n)$ sending $s^{\ot n}$ to $s$. The cooperad structure is given by
$$\Delta_T(\delta_n):=(-1)^{(i-1)(j-1)}\delta_{n-j+1}\ot \delta_j\in \cals^{\ot T} $$
for a tree $T$ of weight $2$ whose second vertex is connected with the $i$-th leaf of its first vertex. We also define $\cals^{-1}$ to be the graded cooperad whose underlying graded collection is $\cals^{-1}(n)=\Hom((\bfk s^{-1})^{\ot n},\bfk s^{-1})$ for all $n\geqslant 1$ and the cooperad structure is given by
$$\Delta_T(\varepsilon_n):=(-1)^{(j-1)(n-i+1-j)}\varepsilon_{n-j+1}\ot \varepsilon_j\in (\cals^{-1})^{\ot T},$$
where $\varepsilon_n\in \cals^{-1}(n)$ is the map which takes $(s^{-1})^{\ot n}$ to $s^{-1}$ and $T$ is the same as before.
It is easy to see that $\cals\ot_{\mathrm{H}}\cals^{-1}\cong \cals^{-1}\ot_{\mathrm{H}}\cals=:\mathbf{As}^\vee$. Notice that for a homotopy cooperad $\calc$, we have $ \calc\ot_\rmH \mathbf{As}^\vee\cong\calc\cong \mathbf{As}^\vee\ot_\rmH \calc.$
Let $\calc$ be a homotopy cooperad. Define the \name{operadic suspension} (resp. \name{desuspension}) of $\calc$ to be the homotopy cooperad $\calc\ot _{\mathrm{H}} \cals$ (resp. $\calc\ot_{\mathrm{H}}\cals^{-1}$), denoted by $\mathscr{S}\calc$ (resp. $\mathscr{S}^{-1}\calc$). For coaugmented homotopy cooperads, we have the following cobar construction.
\begin{defn}\mlabel{Def: cobar construction}
Let $\calc=\{\calc(n)\}_{n\geqslant 1}$ be a coaugmented homotopy cooperad. The \name{cobar construction} of $\calc$, denoted by $\Omega\calc$, is the free graded operad generated by the graded collection $s^{-1}\overline{\calc}$, endowed with the differential $\partial$ which is lifted from
$$\partial: s^{-1}\overline{\calc}\to \Omega\calc, \ \partial(s^{-1}f):=-\sum_{T\in \mathfrak{T} } (s^{-1})^{\otimes \omega(T)} \circ \Delta_T(f) \tforall f\in \overline{\calc}(n).$$
\end{defn}
This provides an alternative description of homotopy cooperads. In fact, endowing a graded collection $\overline{\calc}=\{\overline{\calc}(n)\}_{n\geqslant 1}$ with a homotopy cooperad structure amounts to endowing the free graded operad generated by $s^{-1}\overline{\calc}$ (also called the cobar construction of $\calc=\overline{\calc}\oplus \cali$) with a differential making it a dg operad.
A natural $L_\infty$-algebra is associated with a homotopy operad $\calp=\{\calp(n)\}_{n\geqslant 1}$.
Denote $\calp^{\prod}:= \prod\limits_{n=1}^\infty\calp(n)$. For each $n\geqslant 1$, define operations $m_n=\sum\limits_{T\in \mathfrak{T}^{(n)}}m_T: (\calp^{\prod})^{\ot n}\to \calp^{\prod} $
and $l_n$ to be the anti-symmetrization of $m_n$, that is, $$l_n(x_1\ot \cdots \ot x_n):=\sum\limits_{\sigma\in S_n}\chi(\sigma; x_1,\dots, x_n)m_n(x_{\sigma(1)}\ot \cdots \ot x_{\sigma(n)}).$$
\begin{prop}\mlabel{pp:linfhomood}
Let $\calp$ be a homotopy operad. Then $(\calp^{\prod}, \{l_n\}_{n\geqslant 1})$ is an $L_\infty$-algebra.
\end{prop}
If a homotopy operad $\calp$ satisfies $m_T=0$ for all $T\in \frakt$ with $\omega(T)\geqslant 2$, then $\calp$ is just a nonunital dg operad in the sense of Markl \mcite{Mar08}. In this case, $\calp^{\prod}$ is just a dg Lie algebra.
We shall need a fact about dg operads.
\begin{defn}Let $\calp$ be a (nonunital) dg operad.
For $f\in \calp(m)$ and $g_1\in\calp(l_1),\dots,g_n\in \calp(l_n) $ with $1\leqslant n\leqslant m$, define
\[f\{g_1,\dots,g_n\}:=\sum_{i_j\geqslant l_{j-1}+i_{j-1},n\geqslant j\geqslant 2, i_1\geqslant 1}\Big(\big((f\circ_{i_1} g_1)\circ_{i_2}g_2\big)\dots\Big)\circ_{i_n}g_n.\]
It is called the \name{brace operation} on $\calp^{\prod}$. For $f\in \calp(m)$ and $g\in \calp(n)$, define
\[[f,g]_{G}:=f\{g\}-(-1)^{|f||g|}g\{f\}\in \calp(m+n-1),\]
called the \name{Gerstenhaber bracket} of $f$ and $g$.
\end{defn}
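To illustrate the brace operation in low arity (a routine unwinding of the above definition), take $f\in \calp(2)$ and a homogeneous element $g\in \calp(l)$. Then the defining sum reduces to
$$f\{g\}=f\circ_1 g+f\circ_2 g,\qquad g\{f\}=\sum_{i=1}^{l}g\circ_i f,$$
so the Gerstenhaber bracket $[f,g]_G=f\{g\}-(-1)^{|f||g|}g\{f\}$ measures the graded failure of the partial compositions of $f$ and $g$ to commute.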
The operation $l_2$ in the dg Lie algebra $\calp^{\prod}$ is exactly the Gerstenhaber bracket defined above.
The brace operation in a dg operad $\calp$ satisfies the following pre-Jacobi identity:
\begin{prop}\mcite{Ge1, GV95, Get93}
For homogeneous elements $f, g_1,\dots, g_m, h_1,\dots,h_n$ in $\calp^{\prod}$, we have
\begin{eqnarray}
\mlabel{Eq: pre-jacobi} &&\Big(f \{g_1,\dots,g_m\}\Big)\{h_1,\dots,h_n\}=\\
\notag &&\quad \sum\limits_{0\leqslant i_1\leqslant j_1\leqslant i_2\leqslant j_2\leqslant \dots \leqslant i_m\leqslant j_m\leqslant n}(-1)^{\sum\limits_{k=1}^m(|g_k|)(\sum\limits_{j=1}^{i_k}(|h_j|))}
f\{h_{1, i_1}, g_1\{h_{i_1+1, j_1}\},\dots, g_m\{h_{i_m+1, j_m} \}, h_{j_m+1, n}\}.
\end{eqnarray}
\end{prop}
In particular, we have
\begin{eqnarray}
\mlabel{Eq: pre-jacobi1}
(f \{g\})\{h\}=f\{g\{h\}\}+f\{g, h\}+(-1)^{|g||h|}f\{h, g\}.
\end{eqnarray}
Next we introduce the notion of convolution homotopy operads.
\begin{prop-def}\mlabel{Prop: convolution homotopy operad}
Let $\calc$ be a homotopy cooperad and $\calp$ be a dg operad. Then the graded collection $\mathbf{Hom}(\calc,\calp)=\{\Hom(\calc(n), \calp(n))\}_{n\geqslant 1}$ has a natural homotopy operad structure as follows.
\begin{enumerate}
\item For $T\in \mathfrak{T}$ with $\omega(T)=1$, $$m_T(f):=d_{\calp}\circ f-(-1)^{|f|}f\circ \Delta_T^\calc, \ f\in\mathbf{Hom}(\calc,\calp)(n).$$
\item For $T\in \mathfrak{T}$ with $\omega(T)\geqslant 2$, $$m_T(f_1\ot \cdots \ot f_r):=(-1)^{\frac{r(r-1)}{2}+1+r\sum_{i=1}^r |f_i|}m_{\calp}^T\circ(f_1\ot \cdots\ot f_r)\circ \Delta_T^\calc,$$ where $m_\calp^T$ is the composition in $\calp$ along $T$.
\end{enumerate}
This homotopy operad is called the \textbf{convolution homotopy operad}.
\end{prop-def}
The following result explains the Maurer-Cartan elements of the $L_\infty$-algebra associated to a convolution homotopy operad.
\begin{prop}\mlabel{Prop: Linfinity give MC}
Let $\calc$ be a coaugmented homotopy cooperad and $\calp$ be a unital dg operad. Then there is a natural bijection:
\[\Hom_{udgOp}(\Omega\calc, \calp)\cong \calm\calc\Big(\mathbf{Hom}(\overline{\calc},\calp)^{\prod}\Big),\]
between the set of morphisms of unital dg operads from $\Omega\calc$ to $\calp$ and the set of Maurer-Cartan elements in the $L_\infty$-algebra $\mathbf{Hom}(\overline{\calc},\calp)^{\prod}$.
\end{prop}
Finally, we recall the notions of minimal models and of Koszul dual homotopy cooperads of operads, and explain how they are related to deformation complexes and to $L_\infty$-algebra structures on deformation complexes.
For a collection $M=\{M(n)\}_{n\geqslant 1} $ of (graded) vector spaces, denote by $ \mathcal{F}(M)$ the free graded operad generated by $M$. Recall that a dg operad is called {\bf quasi-free} if its underlying graded operad is free.
\begin{defn}\mcite{DCV13} \mlabel{de:model} A \name{minimal model} for an operad $\mathcal{P}$ is a quasi-free dg operad $(\mathcal{F}(M),\partial)$ together with a surjective quasi-isomorphism of operads $(\mathcal{F}(M), \partial)\overset{\sim}{\twoheadrightarrow}\mathcal{P}$, where the dg operad $(\mathcal{F}(M), \partial)$ satisfies the following conditions.
\begin{enumerate}
\item The differential $\partial$ is decomposable, i.e. $\partial$ takes $M$ to $\mathcal{F}(M)^{\geqslant 2}$, the subspace of $\mathcal{F}(M)$ consisting of elements with weight $\geqslant 2$; \label{it:min1}
\item The generating collection $M$ admits a decomposition $M=\bigoplus\limits_{i\geqslant 1}M_{(i)}$ such that $\partial(M_{(k+1)})\subset \mathcal{F}\Big(\bigoplus\limits_{i=1}^kM_{(i)}\Big)$ for all $k\geqslant 1$. \label{it:min2}
\end{enumerate}
\end{defn}
By~\mcite{DCV13}, if an operad $\mathcal{P}$ admits a minimal model, then it is unique up to isomorphism.
For an operad $\calp$, assume that its minimal model $\calp_\infty$ exists. Since $\calp_\infty$ is quasi-free,
there exists a coaugmented homotopy cooperad $\calc$ such that $\Omega\calc\cong \calp_\infty$. This $\calc$ is called the \textbf{Koszul dual homotopy cooperad} of $\calp$, denoted by $\calp^\ac$.
Let $V$ be a complex. Denote by $\End_V$ the dg endomorphism operad. Then the underlying complex of the convolution homotopy operad $\mathbf{Hom}(\overline{\calp^\ac}, \End_V)$ is called the \textbf{deformation complex} of $\calp$ on the complex $V$.
An element of $\Hom_{udgOp}( \calp_\infty, \End_V)$ is exactly a homotopy $\calp$-structure in $V$. So
Proposition~\ref{Prop: Linfinity give MC} gives a bijection between the set of homotopy $\calp$-structures on $V$ and that of Maurer-Cartan elements in the $L_\infty$-algebra on the deformation complex.
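In the case at hand, applying this machinery to $\calp=\Dif$ is precisely how the $L_\infty$-algebra $\mathfrak{C}_{\DA}(V)$ of Theorem~\mref{th:linfdiff} arises from the minimal model of $\Dif$; this will be made explicit in \S\mref{ss:modelinf}.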
\vspb
\subsection{ Koszul dual homotopy cooperad of $ \Dif$}\
\mlabel{ss:dual}
Now we introduce the operad of differential algebras with weight.
\begin{defn}\mlabel{de:difod}
The \name{operad for differential algebras of weight $\lambda$} is defined to be the quotient of the free graded operad $\mathcal{F}(\mathcal{M})$ generated by a graded collection $\mathcal{M}$ by an operadic ideal $I$, where the collection $\mathcal{M}$ is given by $\mathcal{M}(1)=\bfk d, \mathcal{M}(2)=\bfk \mu $ and $\mathcal{M}(n)=0$ for $n\neq 1,2$, and where $I$ is generated by
\begin{equation}\mlabel{eq:difod}
\mu\circ_1\mu-\mu\circ_2\mu \ \ \mathrm{and}\ \
d\circ_1\mu-\big(\mu\circ_1d+\mu\circ_2 d+\lambda\ (\mu\circ_1 d)\circ_2 d\big).
\end{equation}
Denote this operad by $\Dif$.
\end{defn}
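Spelling out the relations \meqref{eq:difod} on a vector space $A$: an algebra over $\Dif$ is an associative multiplication $(a,b)\mapsto ab$ together with a linear operator $d$ satisfying the weight-$\lambda$ Leibniz rule
$$d(ab)=d(a)b+a\,d(b)+\lambda\, d(a)d(b),\qquad a,b\in A,$$
that is, exactly a differential algebra of weight $\lambda$.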
The homotopy cooperad $\mathscr{S} (\Dif^\ac)$ is defined on the graded collection with arity-$n$ component
$$\mathscr{S} (\Dif^\ac)(n):=\bfk \widetilde{m_n} \oplus \bfk \widetilde{d_n}$$
with $|\widetilde{m_n}|=0$, $|\widetilde{d_n}|=1$ for $n\geq 1$.
The coaugmented homotopy cooperad structure on the graded collection $\mathscr{S} (\Dif^\ac)$ is defined as follows.
First consider the following two types of trees:
\begin{itemize}
\item[(i)]
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.6,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=3pt, inner sep=1pt]
\node(r) at (0,-0.5)[minimum size=0pt,circle]{};
\node(v0) at (0,0)[fill=black, circle,label=right:{\tiny $ n-j+1$}]{};
\node(v1-1) at (-1.5,1){};
\node(v1-2) at(0,1)[fill=black,circle,label=right:{\tiny $\tiny j$}]{};
\node(v1-3) at(1.5,1){};
\node(v2-1)at (-1,2){};
\node(v2-2) at(1,2){};
\draw(v0)--(v1-1);
\draw(v0)--(v1-3);
\draw(v1-2)--(v2-1);
\draw(v1-2)--(v2-2);
\draw[dotted](-0.4,1.5)--(0.4,1.5);
\draw[dotted](-0.5,0.5)--(-0.1,0.5);
\draw[dotted](0.1,0.5)--(0.5,0.5);
\path[-,font=\scriptsize]
(v0) edge node[descr]{{\tiny$i$}} (v1-2);
\end{tikzpicture}
\end{eqnarray*}
where $n\geq 1, 1\leq j\leq n, 1\leq i \leq n-j+1$.
\item[(ii)]
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.6,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=3pt, inner sep=1pt]
\node(v1-2) at(0,1.2)[circle,fill=black,label=right:{\tiny $p$}]{};
\node(v2-1) at(-1.9,2.6){};
\node(v2-2) at (-0.9, 2.8)[circle,fill=black,label=right:{\tiny $l_1$}]{};
\node(v2-3) at (0,2.9){};
\node(v2-4) at(0.9,2.8)[circle,fill=black,label=right:{\tiny $l_q$}]{};
\node(v2-5) at(1.9,2.6){};
\node(v3-1) at (-1.5,3.5){};
\node(v3-2) at (-0.3,3.5){};
\node(v3-3) at (0.3,3.5){};
\node(v3-4) at(1.5,3.5){};
\draw(v1-2)--(v2-1);
\draw(v1-2)--(v2-3);
\draw(v1-2)--(v2-5);
\path[-,font=\scriptsize]
(v1-2) edge node[descr]{{\tiny$k_1$}} (v2-2)
edge node[descr]{{\tiny$k_{q}$}} (v2-4);
\draw(v2-2)--(v3-1);
\draw(v2-2)--(v3-2);
\draw(v2-4)--(v3-3);
\draw(v2-4)--(v3-4);
\draw[dotted](-0.5,2.4)--(-0.1,2.4);
\draw[dotted](0.1,2.4)--(0.5,2.4);
\draw[dotted](-1.4,2.4)--(-0.8,2.4);
\draw[dotted](1.4,2.4)--(0.8,2.4);
\draw[dotted](-1.1,3.2)--(-0.6,3.2);
\draw[dotted](1.1,3.2)--(0.6,3.2);
\end{tikzpicture}
\end{eqnarray*}
where $ 2\leq p\leq n , 2\leq q \leq p, 1\leq k_1<\dots<k_{q}\leq p, l_1, \dots, l_q\geq 1, l_1 + \cdots + l_q+p-q=n$.
\end{itemize}
Next, we define a family of operations $\{\Delta_T: \mathscr{S} (\Dif^\ac)\rightarrow \mathscr{S}( \Dif^\ac)^{\ot T}\}_{T\in \frakt}$ as follows:
\begin{itemize}
\item[(i)] For the element $\widetilde{m_n}\in\mathscr{S} (\Dif^\ac)(n)$ and a tree $T$ of type $\mathrm{(i)}$, define
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.8,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=5pt, inner sep=1pt]
\node(r) at (0,-0.5)[minimum size=0pt,rectangle]{};
\node(v-2) at(-2,0.5)[minimum size=0pt, label=left:{$\Delta_T(\widetilde{m_n})=$}]{};
\node(v0) at (0,0)[draw,rectangle]{{\small $\widetilde{m_{n-j+1}}$}};
\node(v1-1) at (-1.5,1){};
\node(v1-2) at(0,1)[draw,rectangle]{\small$\widetilde{m_j}$};
\node(v1-3) at(1.5,1){};
\node(v2-1)at (-1,2){};
\node(v2-2) at(1,2){};
\draw(v0)--(v1-1);
\draw(v0)--(v1-3);
\draw(v1-2)--(v2-1);
\draw(v1-2)--(v2-2);
\draw[dotted](-0.4,1.5)--(0.4,1.5);
\draw[dotted](-0.5,0.5)--(-0.1,0.5);
\draw[dotted](0.1,0.5)--(0.5,0.5);
\path[-,font=\scriptsize]
(v0) edge node[descr]{{\tiny$i$}} (v1-2);
\end{tikzpicture}
\end{eqnarray*}
\item[(ii)] For the element $\widetilde{d_n}\in\mathscr{S} (\Dif^\ac)(n)$ and a tree $T$ of type $\mathrm{(i)}$, define
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.8,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=5pt, inner sep=1pt]
\node(r) at (0,-0.5)[minimum size=0pt,rectangle]{};
\node(va) at(-2,0.5)[minimum size=0pt, label=left:{$\Delta_T(\widetilde{d_n})=$}]{};
\node(vb) at(-1.5,0.5)[minimum size=0pt ]{};
\node(vc) at (0,0)[draw, rectangle]{\small $\widetilde{d_{n-j+1}}$};
\node(v1) at(0,1)[draw,rectangle]{\small $\widetilde{m_j}$};
\node(v0) at(-1.5,1){};
\node(v2) at(1.5,1){};
\node(v2-1)at (-1,2){};
\node(v2-2) at(1,2){};
\node(vd) at(1.5,0.5)[minimum size=0, label=right:$+$]{};
\node(ve) at (3.5,0)[draw, rectangle]{\small $\widetilde{m_{n-j+1}}$};
\node(ve1) at (3.5,1)[draw,rectangle]{\small $\widetilde{d_j}$};
\node(ve2-1) at(2.5,2){};
\node(ve0) at (2,1){};
\node(ve2) at (5,1){};
\node(ve2-2) at(4.5,2){};
\draw(v1)--(v2-1);
\draw(v1)--(v2-2);
\draw[dotted](-0.4,1.5)--(0.4,1.5);
\draw(vc)--(v0);
\draw(vc)--(v2);
\draw(ve)--(ve0);
\draw(ve)--(ve2);
\draw(ve1)--(ve2-1);
\draw(ve1)--(ve2-2);
\draw[dotted](3.1,1.5)--(3.9,1.5);
\draw[dotted](-0.5,0.5)--(0,0.5);
\draw[dotted](0,0.5)--(0.5,0.5);
\draw[dotted](3,0.5)--(3.5,0.5);
\draw[dotted](3.5,0.5)--(4,0.5);
\path[-,font=\scriptsize]
(vc) edge node[descr]{{\tiny$i$}} (v1);
\path[-,font=\scriptsize]
(ve) edge node[descr]{{\tiny$i$}} (ve1);
\end{tikzpicture}
\end{eqnarray*}
\item[(iii)] For the element $\widetilde{d_n}\in\mathscr{S} (\Dif^\ac)(n)$ and a tree $T$ of type $\mathrm{(ii)}$, define
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.8,descr/.style={fill=white}]
\tikzstyle{every node}=[minimum size=4pt, inner sep=1pt]
\node(v-2) at (-5,2)[minimum size=0pt, label=left:{$\Delta_T(\widetilde{d_n})=$}]{};
\node(v-1) at(-5,2)[minimum size=0pt,label=right:{$(-1)^\frac{q(q-1)}{2}\lambda^{q-1}$}]{};
\node(v1-1) at (-2,1){};
\node(v1-2) at(0,1.2)[rectangle,draw]{\small $\widetilde{m_p}$};
\node(v1-3) at(2,1){};
\node(v2-1) at(-1.9,2.6){};
\node(v2-2) at (-0.9, 2.8)[rectangle,draw]{\small$\widetilde{d_{l_1}}$};
\node(v2-3) at (0,2.9){};
\node(v2-4) at(0.9,2.8)[rectangle,draw]{\small $\widetilde{d_{l_q}}$};
\node(v2-5) at(1.9,2.6){};
\node(v3-1) at (-1.5,3.5){};
\node(v3-2) at (-0.3,3.5){};
\node(v3-3) at (0.3,3.5){};
\node(v3-4) at(1.5,3.5){};
\draw(v1-2)--(v2-1);
\draw(v1-2)--(v2-3);
\draw(v1-2)--(v2-5);
\path[-,font=\scriptsize]
(v1-2) edge node[descr]{{\tiny$k_1$}} (v2-2)
edge node[descr]{{\tiny$k_{q}$}} (v2-4);
\draw(v2-2)--(v3-1);
\draw(v2-2)--(v3-2);
\draw(v2-4)--(v3-3);
\draw(v2-4)--(v3-4);
\draw[dotted](-0.5,2.4)--(-0.1,2.4);
\draw[dotted](0.1,2.4)--(0.5,2.4);
\draw[dotted](-1.4,2.4)--(-0.8,2.4);
\draw[dotted](1.4,2.4)--(0.8,2.4);
\draw[dotted](-1.1,3.2)--(-0.6,3.2);
\draw[dotted](1.1,3.2)--(0.6,3.2);
\end{tikzpicture}
\end{eqnarray*}
\item[(iv)] $\Delta_T$ vanishes elsewhere.
\end{itemize}
Note that $\Delta_T(\widetilde{m_1})=\widetilde{m_1}\otimes \widetilde{m_1}\in \mathscr{S} (\Dif^\ac)^{\otimes T}$ when $T$ is the tree of type $(i)$ with $i=j=n=1$ and $\Delta_T(\widetilde{m_1})=0$ for every other tree $T$. So $\bfk \widetilde{m_1}\cong \cali$ as homotopy cooperads, and there are a natural projection $\varepsilon:\mathscr{S} (\Dif^\ac)\twoheadrightarrow \bfk \widetilde{m_1}\cong \cali$ and a natural injection $\eta:\cali\cong \bfk \widetilde{m_1}\hookrightarrow \mathscr{S} (\Dif^\ac).$
\begin{prop}\mlabel{pp:dualcoop}
The graded collection $\mathscr{S} (\Dif^\ac)$ endowed with operations $\{\Delta_T\}_{T\in \frakt}$ forms a coaugmented homotopy cooperad with strict counit $\varepsilon:\mathscr{S} (\Dif^\ac)\twoheadrightarrow \bfk \widetilde{m_1}\cong \cali$ and the coaugmentation $\eta:\cali\cong \bfk \widetilde{m_1}\hookrightarrow \mathscr{S} (\Dif^\ac).$
\end{prop}
\begin{proof}
First of all, it is easy to check that $|\Delta_T|=\omega(T)-2$ for all $T\in \frakt$.
Next, we prove that $\{\Delta_T\}_{T\in \frakt}$ endows $\mathscr{S} (\Dif^\ac)$ with a homotopy cooperad structure. It suffices to prove that the induced derivation $\partial$ on the cobar construction $\Omega \mathscr{S} (\Dif^\ac)$, namely the free operad generated by $s^{-1}\overline{\mathscr{S} (\Dif^\ac)}$, is a differential: $\partial^2=0$. To simplify notation, we denote the generators $s^{-1}\widetilde{m_n}, n\geqslant 2$, and $s^{-1}\widetilde{d_n}, n\geqslant 1$, by $\mu_n$ and $\nu_n$ respectively, and note that $|\mu_n|=-1, n\geqslant 2$, and $|\nu_n|=0, n\geqslant 1$.
By the definition of $\Omega \mathscr{S} (\Dif^\ac)$ (Definition~\mref{Def: cobar construction}), one computes the action of $\partial$ on the generators $\mu_n=s^{-1}\widetilde{m_n}$ and $\nu_n=s^{-1}\widetilde{d_n}$ as
{\small$$
\partial (\mu_n) = -\sum_{j=2}^{n-1}\mu_{n-j+1}\{\mu_{j} \},\ \partial (\nu_n) = \sum_{j=2}^{n}\nu_{n-j+1}\{\mu_{j} \} -\sum_{\substack{2\leq p\leq n,\\ 1\leq q\leq p}}\sum_{\substack{ l_1+\cdots+l_q+p-q=n, \\ l_1,\cdots,l_q\geq 1}} \lambda^{q-1} \mu_{p}\{\nu_{l_1} ,\cdots, \nu_{l_q}\}.
$$}
As $\partial ^2$ is also a derivation in the free operad $ \Omega \mathscr{S} (\Dif^\ac) $, to prove $\partial ^2=0$ is equivalent to proving that $\partial ^2=0 $ holds on the generators $\mu_n,n\geq 2$ and $\nu_n,n\geq 1$.
For $\partial^2(\mu_n)$, we have
{\small
$$\begin{aligned}
\partial ^2(\mu_n) = &\partial (-\sum_{j=2}^{n-1}\mu_{n-j+1}\{\mu_{j}\}) \\
= & - \sum_{j=2}^{n-1} \partial (\mu_{n-j+1}) \{\mu_{j}\} +
\sum_{j=2}^{n-1} \mu_{n-j+1} \{\partial (\mu_{j})\} \\
= &\sum_{\substack{i+j+k-2=n,\\ 2\leq i,j,k\leq n-2}} (\mu_{i}\{ \mu_{j}\}) \{ \mu_{k}\} -\sum_{\substack{i+j+k-2=n,\\ 2\leq i,j,k\leq n-2}} \mu_{i}\{ \mu_{j} \{ \mu_{k}\}\} \\
\stackrel{\meqref{Eq: pre-jacobi1}}{=} & \sum_{\substack{i+j+k-2=n,\\ 2\leq i,j,k\leq n-2}} \mu_{i}\{ \mu_{j} \{ \mu_{k}\}\} + \sum_{\substack{i+j+k-2=n,\\ 2\leq i,j,k\leq n-2}} \mu_{i}\{ \mu_{j} , \mu_{k}\} - \sum_{\substack{i+j+k-2=n,\\ 2\leq i,j,k\leq n-2}} \mu_{i}\{\mu_{k} , \mu_{j}\} -\sum_{\substack{i+j+k-2=n,\\ 2\leq i,j,k\leq n-2}} \mu_{i}\{ \mu_{j} \{ \mu_{k}\}\}\\
=& 0.
\end{aligned}$$
}
The computation of $\partial ^2(\nu_n)$ is more involved.
{\small
$$\begin{aligned}
\partial ^2(\nu_n) = & \partial ( \sum_{j=2}^{n}\nu_{n-j+1}\{\mu_{j} \} -\sum_{\substack{2\leq p\leq n,\\ 1\leq q\leq p}}\sum_{\substack{ l_1+\cdots+l_q+p-q=n ,\\ l_1,\cdots,l_q\geq 1}} \lambda^{q-1} \mu_{p}\{\nu_{l_1} ,\cdots, \nu_{l_q}\})\\
= & \sum_{j=2}^{n}\partial (\nu_{n-j+1})\{\mu_{j} \} + \sum_{j=2}^{n}\nu_{n-j+1}\{\partial (\mu_{j}) \} - \sum_{\substack{2\leq p\leq n, \\ 1\leq q\leq p}}\sum_{\substack{ l_1+\cdots+l_q+p-q=n ,\\ l_1,\cdots,l_q\geq 1}} \lambda^{q-1} \partial (\mu_{p})\{\nu_{l_1} ,\cdots, \nu_{l_q}\}\\
& + \sum_{\substack{2\leq p\leq n, \\ 1\leq q\leq p}}\sum_{\substack{ l_1+\cdots+l_q+p-q=n, \\ l_1,\cdots,l_q\geq 1}} \sum_{i=1}^{q}\lambda^{q-1} \mu_{p}\{\nu_{l_1} ,\cdots, \partial (\nu_{l_i}),\cdots, \nu_{l_q}\}\\
= & \underbrace{ \sum_{\substack{i+j+k-2=n,\\ 1\leq i\leq n-2,2\leq j,k\leq n-1}} (\nu_i\{ \mu_j\})\{\mu_k\} }_{(a)} - \underbrace{\sum_{j=2}^{n}\sum_{\substack{2\leq p\leq n-j+1,\\ 1\leq q\leq p}}\sum_{\substack{ l_1+\cdots+l_q+p-q=n-j+1, \\ l_1,\cdots,l_q\geq 1}} \lambda^{q-1} (\mu_p \{ \nu_{l_1} ,\cdots, \nu_{l_q} \})\{\mu_j\}}_{(b)}\\
& - \underbrace{ \sum_{\substack{i+j+k-2=n,\\ 1\leq i\leq n-2,2\leq j,k\leq n-1}}\nu_i\{ \mu_j\{\mu_k\}\}}_{(c)} + \underbrace{\sum_{\substack{2\leq p\leq n, \\ 1\leq q\leq p}}\sum_{\substack{ l_1+\cdots+l_q+p-q=n, \\ l_1,\cdots,l_q\geq 1}} \sum_{j=2}^{p-1}\lambda^{q-1} (\mu_{p-j+1} \{\mu_j\})\{\nu_{l_1} ,\cdots, \nu_{l_q}\}}_{(d)}\\
& + \underbrace{\sum_{\substack{2\leq p\leq n, \\ 1\leq q\leq p}}\sum_{\substack{ l_1+\cdots+l_q+p-q=n, \\ l_1,\cdots,l_q\geq 1}}\sum_{i=1}^{q}\sum_{j=2}^{l_i} \lambda^{q-1} \mu_{p}\{\nu_{l_1} ,\cdots, \nu_{l_i-j+1}\{\mu_j\},\cdots, \nu_{l_q}\}}_{(e)}\\
& - \underbrace{\sum_{\substack{2\leq p \leq n,\\ 1\leq q \leq p}}\sum_{\substack{ l_1+\cdots+l_q+p-q=n, \\ l_1,\cdots,l_q\geq 1}} \sum_{i=1}^{q} \sum_{\substack{2\leq \alpha \leq l_i,\\ 1\leq \beta \leq \alpha}} \sum_{\substack{r_1+\cdots+r_{\beta}+\alpha-\beta=l_i,\\ r_1,\cdots,r_\beta\geq 1}}\lambda^{q+\beta-2} \mu_{p}\{\nu_{l_1} ,\cdots, \mu_{\alpha}\{ \nu_{r_1},\cdots , \nu_{r_{\beta}}\} ,\cdots, \nu_{l_q}\}}_{(f)}
\end{aligned}$$
}
By the pre-Jacobi identity \meqref{Eq: pre-jacobi}, we have
$
(a) = (c) \ \mathrm{and}\ (b)+(f) = (d)+(e).$
Thus we have $\partial ^{2}(\nu_n)=0$.
Finally, note that the maps $\varepsilon$ and $\eta$ are strict morphisms of homotopy cooperads. This completes the proof.
\end{proof}
\begin{defn}\mlabel{de:Koszul dual homotopy cooperad}
The Hadamard product $\mathscr{S} (\Dif^\ac)\ot_{\mathrm{H}} \cals^{-1}$ is called the \name{Koszul dual homotopy cooperad} of $\Dif$, and is denoted by $\Dif^\ac$.
\end{defn}
To be precise, let $sm_n=\widetilde{m_n}\otimes \varepsilon_n, n\geqslant 1$ and $sd_n=\widetilde{d_n}\otimes \varepsilon_n, n\geqslant 1$. Then the underlying graded collection of $ \Dif^\ac$ is $ \Dif^\ac(n)=\bfk sm_n\oplus \bfk sd_n$ with $|sm_n|=n-1$ and $|sd_n|=n$.
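Indeed, the map $\varepsilon_n\in \cals^{-1}(n)$ has degree $n-1$ (its source $(\bfk s^{-1})^{\ot n}$ is concentrated in degree $-n$ and its target in degree $-1$), so $|sm_n|=|\widetilde{m_n}|+|\varepsilon_n|=n-1$ and $|sd_n|=|\widetilde{d_n}|+|\varepsilon_n|=n$.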
The family of operations $\{\Delta_T\}_{T\in \frakt}$ defining its homotopy cooperad structure is given by the following formulas.
\begin{itemize}
\item[(i)] For the element $sm_n\in \Dif^\ac(n)$ and $T$ of type $\mathrm{(i)}$, define
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.7,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=5pt, inner sep=1pt]
\node(r) at (0,-0.5)[minimum size=0pt,rectangle]{};
\node(v-2) at(-2,0.5)[minimum size=0pt, label=left: {$\Delta_T(sm_n)= (-1)^{(j-1)(n-i-j+1)}$}]{};
\node(v0) at (0,0)[draw,rectangle]{{\small $sm_{n-j+1}$}};
\node(v1-1) at (-1.5,1){};
\node(v1-2) at(0,1)[draw,rectangle]{\small$sm_j$};
\node(v1-3) at(1.5,1){};
\node(v2-1)at (-1,2){};
\node(v2-2) at(1,2){};
\draw(v0)--(v1-1);
\draw(v0)--(v1-3);
\draw(v1-2)--(v2-1);
\draw(v1-2)--(v2-2);
\draw[dotted](-0.4,1.5)--(0.4,1.5);
\draw[dotted](-0.5,0.5)--(-0.1,0.5);
\draw[dotted](0.1,0.5)--(0.5,0.5);
\path[-,font=\scriptsize]
(v0) edge node[descr]{{\tiny$i$}} (v1-2);
\end{tikzpicture}
\end{eqnarray*}
\item[(ii)] For the element $sd_n\in \Dif^\ac(n)$ and $T$ of type $\mathrm{(i)}$, define
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.7,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=5pt, inner sep=1pt]
\node(r) at (0,-0.5)[minimum size=0pt,rectangle]{};
\node(va) at(-2,0.5)[minimum size=0pt, label=left:{$\Delta_T(sd_n)=(-1)^{(j-1)(n-j-i+1)}$}]{};
\node(vb) at(-1.5,0.5)[minimum size=0pt ]{};
\node(vc) at (0,0)[draw, rectangle]{\small $sd_{n-j+1}$};
\node(v1) at(0,1)[draw,rectangle]{\small $sm_j$};
\node(v0) at(-1.5,1){};
\node(v2) at(1.5,1){};
\node(v2-1)at (-1,2){};
\node(v2-2) at(1,2){};
\node(vd) at(-0.5,-1.5)[minimum size=0, label=left:{$+(-1)^{(j-1)(n-i-j+1)+n-j}$}]{};
\node(ve) at (2,-2)[draw, rectangle]{\small $sm_{n-j+1}$};
\node(ve1) at (2,-1)[draw,rectangle]{\small $sd_j$};
\node(ve2-1) at(1,0){};
\node(ve0) at (0.5,-1){};
\node(ve2) at (3.5,-1){};
\node(ve2-2) at(3,0){};
\draw(v1)--(v2-1);
\draw(v1)--(v2-2);
\draw[dotted](-0.4,1.5)--(0.4,1.5);
\draw(vc)--(v0);
\draw(vc)--(v2);
\draw(ve)--(ve0);
\draw(ve)--(ve2);
\draw(ve1)--(ve2-1);
\draw(ve1)--(ve2-2);
\draw[dotted](1.6,-0.5)--(2.4,-0.5);
\draw[dotted](-0.5,0.5)--(0,0.5);
\draw[dotted](0,0.5)--(0.5,0.5);
\draw[dotted](1.5,-1.5)--(2,-1.5);
\draw[dotted](2,-1.5)--(2.5,-1.5);
\path[-,font=\scriptsize]
(vc) edge node[descr]{{\tiny$i$}} (v1);
\path[-,font=\scriptsize]
(ve) edge node[descr]{{\tiny$i$}} (ve1);
\end{tikzpicture}
\end{eqnarray*}
\item[(iii)] For the element $sd_n\in \Dif^\ac(n)$ and a tree $T$ of type $\mathrm{(ii)}$, define
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.7,descr/.style={fill=white}]
\tikzstyle{every node}=[minimum size=4pt, inner sep=1pt]
\node(v-2) at (-5,2)[minimum size=0pt, label=left:{$\Delta_T(sd_n)=$}]{};
\node(v-1) at(-5,2)[minimum size=0pt,label=right:{$(-1)^\alpha \lambda^{q-1}$}]{};
\node(v1-1) at (-2,1){};
\node(v1-2) at(0,1.2)[rectangle,draw]{\small $sm_p$};
\node(v1-3) at(2,1){};
\node(v2-1) at(-1.9,2.6){};
\node(v2-2) at (-0.9, 2.8)[rectangle,draw]{\small$sd_{l_1}$};
\node(v2-3) at (0,2.9){};
\node(v2-4) at(0.9,2.8)[rectangle,draw]{\small $sd_{l_q}$};
\node(v2-5) at(1.9,2.6){};
\node(v3-1) at (-1.5,3.5){};
\node(v3-2) at (-0.3,3.5){};
\node(v3-3) at (0.3,3.5){};
\node(v3-4) at(1.5,3.5){};
\draw(v1-2)--(v2-1);
\draw(v1-2)--(v2-3);
\draw(v1-2)--(v2-5);
\path[-,font=\scriptsize]
(v1-2) edge node[descr]{{\tiny$k_1$}} (v2-2)
edge node[descr]{{\tiny$k_{q}$}} (v2-4);
\draw(v2-2)--(v3-1);
\draw(v2-2)--(v3-2);
\draw(v2-4)--(v3-3);
\draw(v2-4)--(v3-4);
\draw[dotted](-0.5,2.4)--(-0.1,2.4);
\draw[dotted](0.1,2.4)--(0.5,2.4);
\draw[dotted](-1.4,2.4)--(-0.8,2.4);
\draw[dotted](1.4,2.4)--(0.8,2.4);
\draw[dotted](-1.1,3.2)--(-0.6,3.2);
\draw[dotted](1.1,3.2)--(0.6,3.2);
\end{tikzpicture}
\end{eqnarray*}
where $$\alpha=\sum_{s=1}^q(l_s-1)(p-k_s)+q(p-1) + \sum_{t=1}^{q-1}(q-t)l_t;$$
\item[(iv)]All other components of $\Delta_T, T\in \frakt$ vanish.
\end{itemize}
\section{Operad of homotopy differential algebras with weight and minimal model}
\mlabel{sec:homomodel}
We now introduce the notion of homotopy differential algebras with weight and their governing dg operad, and show that this dg operad is the minimal model of the operad of differential algebras with weight.
\vspb
\subsection{Operad of homotopy differential algebras with weight}\
\mlabel{ss:operad}
\begin{defn}
The \name{operad $\Difinfty$ of homotopy differential algebras of weight $\lambda$} is defined to be the cobar construction $\Omega(\Dif^\ac)$ of $\Dif^\ac$.
\end{defn}
A direct inspection gives the following description of $\Difinfty$.
\begin{prop}\mlabel{def: expanded def of homotopy differential algebras}
Let $\mathcal{O}=(\mathcal{O}(1),\dots,\mathcal{O}(n),\dots)$ be the graded collection where $\mathcal{O}(1)=\bfk d_1$ with $ |d_1|=0$ and for $n\geqslant 2$, $\mathcal{O}(n)=\bfk d_n\oplus \bfk m_n$ with $ |d_n|=n-1, |m_n|=n-2$.
The operad $\Difinfty$ of homotopy differential algebras of weight $\lambda$ is the differential graded operad $(\mathcal{F}(\mathcal{O}),\partial)$, where the underlying free graded operad is generated by the graded collection $\mathcal{O}$ and the action of the differential $\partial$ on generators is given by the following equations.
For each $n\geqslant 2$,
\begin{equation}\mlabel{eq:diffgen1} \partial ({m_n}) =\sum_{j=2}^{n-1}\sum_{i=1}^{n-j+1}(-1)^{i+j(n-i)}m_{n-j+1}\circ_i m_j\end{equation}
and for each $n\geqslant 1$,
{\small\begin{align}\mlabel{eq:diffgen2} \partial (d_n)&=-\sum_{j=2}^{n}\sum_{i=1}^{n-j+1}(-1)^{i+j(n-i)}d_{n-j+1}\circ_i m_j\\
\notag &-\hspace{-.7cm}\sum\limits_{\substack{1\leqslant k_1<\dots<k_{q}\leqslant p\\l_1+\dots+l_q+p-q=n\\l_1, \dots, l_q\geqslant 1, 2\leqslant p\leqslant n,1\leqslant q\leqslant p}} \hspace{-.5cm}(-1)^{\xi}\lambda^{q-1}\Big(\cdots\big(\big((m_p\circ_{k_1}d_{l_1})\circ_{k_2+l_1-1}d_{l_2}\big)\circ_{k_3+l_1+l_2-2}d_{l_3}\big)\cdots\Big)\circ_{k_{q}+l_1+\dots+l_{q-1}-q+1} d_{l_q}
\end{align}}
where $\xi:=\sum_{s=1}^q(l_s-1)(p-k_s).$
\end{prop}
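As a consistency check, the lowest nontrivial instances of Eqs.~\meqref{eq:diffgen1} and \meqref{eq:diffgen2} read
$$\partial(m_3)=-m_2\circ_1 m_2+m_2\circ_2 m_2,\qquad
\partial(d_2)=d_1\circ_1 m_2-m_2\circ_1 d_1-m_2\circ_2 d_1-\lambda\,(m_2\circ_1 d_1)\circ_2 d_1,$$
so $m_2$ is associative up to the homotopy $m_3$, and $d_1$ satisfies the weight-$\lambda$ Leibniz rule up to the homotopy $d_2$; compare the defining relations \meqref{eq:difod} of $\Dif$.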
\begin{remark}\mlabel{re:diffod}
\begin{enumerate}
\item When $\lambda=0$, the operad $\Difinfty$ is precisely the operad $\mathrm{AsDer}_\infty$ constructed by Loday in \mcite{Lod} by using Koszul duality for operads.
\item \label{it:diffod3} It is easy to see that
$\partial(m_2)=0=\partial(d_1)$. So the degree zero part of $\Difinfty$ is just the free non-graded operad generated by $m_2$ and $d_1$.
This fact will be used in Lemma~\mref{Lem: H0 of Difinfty is Dif}.
\end{enumerate}
\end{remark}
Since free operads are constructed from planar rooted trees, we use planar rooted trees to depict elements of the dg operad $\Difinfty$ for later applications.
First, we use the following two kinds of corollas to represent generators of $\Difinfty$:
\begin{figure}[h]
\begin{tikzpicture}[grow'=up,scale=0.5]
\tikzstyle{every node}=[thick,minimum size=6pt, inner sep=1pt]
\node(r)[fill=black,circle,label=below:$\ \ \ \ \ \ \ \ \ \ \ \ \ \ m_n(n\geqslant 2)$]{}
child{node(1){1}}
child{node(i) {i}}
child{node(n){n}};
\draw [dotted,line width=0.5pt] (1)--(i);
\draw [dotted, line width=0.5pt] (i)--(n);
\end{tikzpicture}
\hspace{8mm}
\begin{tikzpicture}[grow'=up,scale=0.5]
\tikzstyle{every node}=[thick,minimum size=6pt, inner sep=1pt]
\node(r)[draw,circle,label=below:$\ \ \ \ \ \ \ \ \ \ \ \ \ \ d_n(n\geqslant 1)$]{}
child{node(1){1}}
child{node(i){i}}
child{node(n){n}};
\draw [dotted,line width=0.5pt] (1)--(i);
\draw [dotted, line width=0.5pt] (i)--(n);
\end{tikzpicture}
\end{figure}
For example, the element $(((m_3\circ_{1}d_4)\circ_3m_4)\circ_9m_2)\circ_{10}d_2$ can be represented by the following tree
{\tiny\begin{figure}[h]
\begin{tikzpicture}[grow'=up,scale=0.4]
\tikzstyle{every node}=[level distance=30mm,sibling distance=10em,thick,minimum size=1.5pt, inner sep=2pt]
\node[fill=black,draw,circle,label=right:$\small m_3$]{}
child{
node[draw, circle, label=left:$d_4$](1){}
child{node(1-1){}}
child{node(1-2){}}
child{
node[fill=black, draw, circle,label=left:$m_4$](1-3){}
child{node(1-3-1){}}
child{node(1-3-2){}}
child{node(1-3-3){}}
child{node(1-3-4){}}
}
child{node(1-4){}}
}
child{node(2){}}
child{
node[fill=black,draw,circle,label=right:$m_2$](3){}
child{node(3-1){}}
child{
node[circle, draw, label=right:$d_2$](3-2){}
child{node(3-2-1){}}
child{node(3-2-2){}}}
};
\end{tikzpicture}
\end{figure}
}
Eqs. \meqref{eq:diffgen1} and \meqref{eq:diffgen2} can be represented by
\begin{eqnarray*}\hspace{-1.5cm}
\begin{tikzpicture}[scale=0.4]
\tikzstyle{every node}=[thick,minimum size=6pt, inner sep=1pt]
\node(a) at (-4,0.5){\begin{large}$\partial$\end{large}};
\node[circle, fill=black, label=right:$m_n (n\geqslant 2)$] (b0) at (-2,-0.5) {};
\node (b1) at (-3.5,1.5) [minimum size=0pt,label=above:$1$]{};
\node (b2) at (-2,1.5) [minimum size=0pt,label=above:$i$]{};
\node (b3) at (-0.5,1.5) [minimum size=0pt,label=above:$n$]{};
\draw (b0)--(b1);
\draw (b0)--(b2);
\draw (b0)--(b3);
\draw [dotted,line width=1pt] (-3,1)--(-2.2,1);
\draw [dotted,line width=1pt] (-1.8,1)--(-1,1);
\end{tikzpicture}
&
\hspace{1mm}
\begin{tikzpicture}[scale=0.4]
\node(0){{$= \sum\limits_{j=2}^{n-1}\sum_{i=1}^{n-j+1}(-1)^{i+j(n-i)}$}};
\end{tikzpicture}
&
\begin{tikzpicture}[scale=0.5]
\tikzstyle{every node}=[thick,minimum size=6pt, inner sep=1pt]
\node(e0) at (0,-1.5)[circle, fill=black,label=right:$m_{n-j+1}$]{};
\node(e1) at(-1.5,0){{\tiny$1$}};
\node(e2-0) at (0,-0.5){{\tiny$i$}};
\node(e3) at (1.5,0){{\tiny{$n-j+1$}}};
\node(e2-1) at (0,0.5) [circle,fill=black,label=right: $m_j$]{};
\node(e2-1-1) at (-1,1.5){{\tiny$1$}};
\node(e2-1-2) at (1, 1.5){{\tiny $j$}};
\draw [dotted,line width=1pt] (-0.7,-0.5)--(-0.2,-0.5);
\draw [dotted,line width=1pt] (0.3,-0.5)--(0.8,-0.5);
\draw [dotted,line width=1pt] (-0.4,1)--(0.4,1);
\draw (e0)--(e1);
\draw (e0)--(e3);
\draw (e0)--(e2-0);
\draw (e2-0)--(e2-1);
\draw (e2-1)--(e2-1-1);
\draw (e2-1)--(e2-1-2);
\end{tikzpicture}
\end{eqnarray*}
\begin{eqnarray*}
\hspace{1.2cm}
\begin{tikzpicture}[scale=0.4]
\tikzstyle{every node}=[thick,minimum size=6pt, inner sep=1pt]
\node(a) at (-4,0.5){\begin{large}$\partial$\end{large}};
\node[circle, draw, label=right:$d_n(n\geqslant 1)$] (b0) at (-2,-0.5) {};
\node (b1) at (-3.5,1.5) [minimum size=0pt,label=above:$1$]{};
\node (b2) at (-2,1.5) [minimum size=0pt,label=above:$i$]{};
\node (b3) at (-0.5,1.5) [minimum size=0pt,label=above:$n$]{};
\draw (b0)--(b1);
\draw (b0)--(b2);
\draw (b0)--(b3);
\draw [dotted,line width=1pt] (-3,1)--(-2.2,1);
\draw [dotted,line width=1pt] (-1.8,1)--(-1,1);
\end{tikzpicture}
&\hspace{-1cm}
\begin{tikzpicture}[scale=0.4]
\node(0){{${\Large=}\ \ \ \ \ -\sum\limits_{j=2}^{n}\sum\limits_{i=1}^{n-j+1}(-1)^{i+j(n-i)}$}};
\end{tikzpicture}
& \hspace{-1.4cm}
\begin{tikzpicture}[scale=0.6]
\tikzstyle{every node}=[thick,minimum size=6pt, inner sep=1pt]
\node(e0) at (0,-1.5)[circle, draw,label=right:$d_{n-j+1}$]{};
\node(e1) at(-1.5,-0.3){};
\node(e2-0) at (0,-0.7){{\tiny$i$}};
\node(e3) at (1.5,-0.3){};
\node(e2-1) at (0,0) [draw,circle,fill=black,label=right: $m_j$]{};
\node(e2-1-1) at (-1,1){};
\node(e2-1-2) at (1, 1){};
\draw [dotted,line width=1pt] (-0.7,-0.5)--(-0.2,-0.5);
\draw [dotted,line width=1pt] (0.3,-0.5)--(0.8,-0.5);
\draw [dotted,line width=1pt] (-0.2,0.5)--(0.2,0.5);
\draw (e0)--(e1);
\draw (e0)--(e3);
\draw (e0)--(e2-0);
\draw (e2-0)--(e2-1);
\draw (e2-1)--(e2-1-1);
\draw (e2-1)--(e2-1-2);
\end{tikzpicture} \\
&\hspace{1.5cm} \begin{tikzpicture}[scale=0.4]
\node(0){{$-\sum\limits_{\substack{1\leqslant k_1<\dots<k_{q}\leqslant p\\l_1+\dots+l_q+p-q=n\\l_1, \dots, l_q\geqslant 1, 2\leqslant p\leqslant n,1\leqslant q\leqslant p}} (-1)^{\xi}\lambda^{q-1}$}};
\end{tikzpicture}&
\hspace{-3mm}
\begin{tikzpicture}[scale=0.7,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=6pt, inner sep=1pt]
\node(e0) at(0,0)[circle,draw, fill=black,label=below:$m_p$]{};
\node(e1-1) at(-2.7,1.5){};
\node(e1-2) at (-1.5,1.6)[circle, draw, label=right:$d_{l_1}$]{};
\node(e1-3) at (-0.7,1.5){};
\node(e1-4) at(0,1.6)[circle, draw, label=right:$d_{l_i}$]{};
\node(e1-5) at(0.7,1.5){};
\node(e1-6) at(1.5,1.6)[circle, draw, label=right:$d_{l_q}$]{};
\node(e1-7) at(2.7,1.5){};
\node(e2-1) at (-2,2.5){};
\node(e2-2) at(-1,2.5){};
\node(e2-3) at(-0.5,2.5){};
\node(e2-4) at (0.5,2.5){};
\node(e2-5) at (1,2.5){};
\node(e2-6) at (2,2.5){};
\draw (e0)--(e1-1);
\draw (e0)--(e1-3);
\draw (e0)--(e1-5);
\draw(e0)--(e1-7);
\draw(e1-2)--(e2-1);
\draw(e1-2)--(e2-2);
\draw(e1-4)--(e2-3);
\draw(e1-4)--(e2-4);
\draw(e1-6)--(e2-5);
\draw(e1-6)--(e2-6);
\draw[dotted,line width=1pt](-1.4,0.9)--(1.4,0.9);
\draw[dotted, line width=1pt](-1.7,2.2)--(-1.3,2.2);
\draw[dotted, line width=1pt](-0.2,2.2)--(0.2,2.2);
\draw[dotted, line width=1pt](1.7,2.2)--(1.3,2.2);
\path[-,font=\scriptsize]
(e0) edge node[descr]{{\tiny$k_1$}} (e1-2)
edge node[descr]{{\tiny$k_i$}} (e1-4)
edge node[descr]{{\tiny$k_q$}}(e1-6);
\end{tikzpicture}
\end{eqnarray*}
\vspb
\subsection{The minimal model of the operad of differential algebras}
\mlabel{sec:model}\
The following result is the main result of this paper.
\begin{thm} \mlabel{thm:difmodel}
The dg operad $\Difinfty$ is the minimal model of the operad $\Dif$.
\end{thm}
The proof of Theorem~\mref{thm:difmodel} will be carried out in the rest of this subsection.
\begin{lem} \mlabel{Lem: H0 of Difinfty is Dif}
The natural surjection
\begin{equation} \label{eq:proj}
p:\Difinfty\rightarrow \Dif
\end{equation}
induces an isomorphism $\rmH_0(\Difinfty)\cong \Dif$.
\end{lem}
\begin{proof} By Remark~\mref{re:diffod} \eqref{it:diffod3},
the degree zero part of $\Difinfty$ is the free (non-graded) operad generated by $m_2$ and $d_1$. By definition, one has
$$\partial(m_3)=-m_2\circ_1 m_2+m_2\circ_2 m_2,$$
$$\partial(d_2)= d_1\circ_1 m_2-(m_2\circ_1d_1+m_2\circ_2 d_1+\lambda (m_2\circ_1 d_1)\circ_2 d_1). $$
So the surjection $p:\Difinfty\rightarrow \Dif$ sending $m_2$ to $\mu $ and $d_1$ to $d$ induces $\rmH_0(\Difinfty)\cong \Dif$.
\end{proof}
Obviously, the differential $\partial$ of $\Difinfty$ satisfies Definition~\mref{de:model}~\eqref{it:min1} and \eqref{it:min2}, so in order to complete the proof of Theorem~\mref{thm:difmodel}, we only need to show that $p:\Difinfty\rightarrow \Dif$ is a quasi-isomorphism. For this purpose, we are going to build a homotopy map, i.e., a degree $1$ map $H:\Difinfty\rightarrow \Difinfty$ such that in each positive degree
$$\partial\circ H+H\circ \partial =\Id.$$
We need the following notion of graded path-lexicographic ordering on $\Difinfty$.
Each tree monomial gives rise to a path sequence~\cite[Chapter 3]{BD16}. More precisely,
to a tree monomial $T$ with $n$ leaves (written as $\mbox{arity}(T)=n$), we can associate a sequence $(x_1, \dots, x_n)$ where $x_i$ is the word formed by generators of $\Difinfty$ corresponding to the vertices along the unique path from the root of $T$ to its $i$-th leaf.
We define an order on the graded tree monomials as follows.
For graded tree monomials $T$ and $T'$, define $T>T'$ if
\begin{enumerate}
\item either $\mbox{arity}(T)>\mbox{arity}(T')$;
\item or $\mbox{arity}(T)=\mbox{arity}(T')$, and $\deg(T)>\deg(T')$, where $\deg(T)$ is the sum of the degrees of all generators of $\Difinfty$ appearing in the tree monomial $T$;
\item or $\mbox{arity}(T)=\mbox{arity}(T')(=n)$, $\deg(T)=\deg(T')$, and the path sequences $(x_1,\dots,x_n)$ and $(x_1',\dots,x_n')$ associated to $T$ and $T'$ satisfy $(x_1,\dots,x_n)>(x_1',\dots,x_n')$ with respect to the length-lexicographic order of words induced by \[m_2<d_1<m_3<d_2<\dots<m_n<d_{n-1}<m_{n+1}<d_n<\dots.\]
\end{enumerate}
It is readily seen that $>$ is a well order.
Given a linear combination of tree monomials, its \textbf{leading monomial} is the largest tree monomial appearing with nonzero coefficient in this linear combination, and the coefficient of the leading monomial is called the \textbf{leading coefficient}.
Let $S$ be a generator of degree $\geqslant 1$ in $ \Difinfty$. Denote the leading monomial of $\partial S$ by $\widehat{S}$ and the coefficient of $\widehat{S}$ in $\partial S$ by $c_S$. It is easily seen that the coefficient $c_S$ is always $\pm 1$.
A tree monomial of the form $\widehat{S}$ with the degree of $S$ being positive is called \textbf{typical}.
More precisely, typical tree monomials are of the following forms:
\begin{equation*}
\begin{tikzpicture}[scale=0.5]
\tikzstyle{every node}=[thick,minimum size=4pt, inner sep=1pt]
\node(e0) at (0,0)[circle, draw, fill=black, label=right:$m_n\quad (n\geqslant 2)$]{};
\node(e1-1) at (-1.5,1.5)[circle, draw,fill=black, label=right:$m_2$]{};
\node(e1-2) at(1.5,1.5){};
\node(e2-1) at(-2.5,2.5){};
\node(e2-2) at(-0.5,2.5){};
\draw(e0)--(e1-1);
\draw(e0)--(e1-2);
\draw(e1-1)--(e2-1);
\draw(e1-1)--(e2-2);
\draw[dotted,line width=1pt](-0.7,0.75)--(0.7,0.75);
\end{tikzpicture}
\hspace{8mm}
\begin{tikzpicture}[scale=0.5]
\tikzstyle{every node}=[thick,minimum size=4pt, inner sep=1pt]
\node(e0) at (0,0)[circle, draw, label=right:$d_n\quad (n\geqslant 1)$]{};
\node(e1-1) at (-1.5,1.5)[circle, draw,fill=black, label=right:$m_2$]{};
\node(e1-2) at(1.5,1.5){};
\node(e2-1) at(-2.5,2.5){};
\node(e2-2) at(-0.5,2.5){};
\draw(e0)--(e1-1);
\draw(e0)--(e1-2);
\draw(e1-1)--(e2-1);
\draw(e1-1)--(e2-2);
\draw[dotted,line width=1pt](-0.7,0.75)--(0.7,0.75);
\end{tikzpicture}
\end{equation*}
where the first one is the leading monomial of $\partial m_{n+1}, n\geqslant 2$, the second being that of $\partial d_{n+1}, n\geqslant 1$.
\begin{defn}
We call a tree monomial $T$ \textbf{effective} if it satisfies the following conditions:
\begin{enumerate}
\item There exists a typical divisor $\widehat{S}$ in $T$;
\item On the path from the root $v$ of $\widehat{S}$ to the leftmost leaf $l$ of $T$ above $v$, there are no other typical divisors and there are no vertices of positive degree except possibly $v$ itself;
\item For a leaf $l'$ of $T$ which is located on the left of $l$, there are no vertices of positive degree and no typical divisors on the path from the root of $T$ to $l'$.
\end{enumerate}
The divisor $\widehat{S}$ is unique if it exists; it is called the {\bf effective divisor} of $T$, and $l$ is called the {\bf effective leaf} of $T$.
\end{defn}
Morally, the effective divisor of a tree monomial $T$ is the left-upper-most typical divisor of $T$.
It follows by the definition that for an effective divisor $T'$ in $T$ with the effective leaf $l$, no vertex in $T'$ belongs to the path from the root of $T$ to any leaf $l'$ located on the left of $l$.
\begin{exam}
Consider the following three tree monomials with the same underlying tree:
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.4]
\tikzstyle{every node}=[thick,minimum size=4pt, inner sep=1pt]
\node(r) at(0,-0.5)[minimum size=0pt, label=below:$(T_1)$]{};
\node(0) at(0,0)[circle,draw, fill=black]{};
\node(1-1)at(-1,1)[circle,draw]{};
\node(1-2)at(1,1)[circle,draw,fill=black]{};
\node(2-1) at(-2,2){};
\node(2-2) at(0,2) [circle,draw,fill=black]{};
\node(2-3) at(1,2){};
\node(2-4) at(2,2){};
\node(3-1) at (-1,3)[circle, draw]{};
\node(3-2) at(1,3)[circle,draw,fill=black]{};
\node(4-1) at (-2,4)[circle, draw, fill=black]{};
\node(4-2) at (0,4)[circle,draw,fill=black]{};
\node(4-3) at(2,4){};
\node(5-1) at(-3,5)[circle,draw]{};
\node(5-2) at(-1.3,4.7){};
\node(5-3) at(-0.7,4.7){};
\node(5-4)at (0.7,4.7){};
\node(6-1) at (-3.7,5.7){};
\draw(0)--(1-1);
\draw(0)--(1-2);
\draw(1-1)--(2-1);
\draw(1-2)--(2-2);
\draw(1-2)--(2-3);
\draw(1-2)--(2-4);
\draw(2-2)--(3-1);
\draw(2-2)--(3-2);
\draw(3-1)--(4-1);
\draw(3-2)--(4-2);
\draw(3-2)--(4-3);
\draw(4-1)--(5-1);
\draw(4-1)--(5-2);
\draw(4-2)--(5-3);
\draw(4-2)--(5-4);
\draw(5-1)--(6-1);
\draw[dashed,red](-2.5,3.8) to [in=150, out=120] (-1.9,4.4) ;
\draw[dashed,red](-2.5,3.8)--(-1.2,2.5);
\draw[dashed, red](-0.6,3.1)--(-1.9,4.4);
\draw[dashed,red] (-0.6,3.1)to [in=-30, out=-60] (-1.2,2.5);
\end{tikzpicture}
\begin{tikzpicture}[scale=0.4]
\tikzstyle{every node}=[thick,minimum size=4pt, inner sep=1pt]
\node(r) at(0,-0.5)[minimum size=0pt, label=below:$(T_2)$]{};
\node(0) at(0,0)[circle,draw, label=right:${\color{red}\times}$]{};
\node(1-1)at(-1,1)[circle,draw]{};
\node(1-2)at(1,1)[circle,draw,fill=black]{};
\node(2-1) at(-2,2){};
\node(2-2) at(0,2) [circle,draw,fill=black]{};
\node(2-3) at(1,2){};
\node(2-4) at(2,2){};
\node(3-1) at (-1,3)[circle, draw]{};
\node(3-2) at(1,3)[circle,draw,fill=black]{};
\node(4-1) at (-2,4)[circle, draw, fill=black]{};
\node(4-2) at (0,4)[circle,draw,fill=black]{};
\node(4-3) at(2,4){};
\node(5-1) at(-3,5)[circle,draw]{};
\node(5-2) at(-1.3,4.7){};
\node(5-3) at(-0.7,4.7){};
\node(5-4)at (0.7,4.7){};
\node(6-1) at (-3.7,5.7){};
\draw(0)--(1-1);
\draw(0)--(1-2);
\draw(1-1)--(2-1);
\draw(1-2)--(2-2);
\draw(1-2)--(2-3);
\draw(1-2)--(2-4);
\draw(2-2)--(3-1);
\draw(2-2)--(3-2);
\draw(3-1)--(4-1);
\draw(3-2)--(4-2);
\draw(3-2)--(4-3);
\draw(4-1)--(5-1);
\draw(4-1)--(5-2);
\draw(4-2)--(5-3);
\draw(4-2)--(5-4);
\draw(5-1)--(6-1);
\draw[dashed,red](-1.5,2.8) to [in=150, out=120] (-0.9,3.4) ;
\draw[dashed,red](-1.5,2.8)--(-0.2,1.5);
\draw[dashed, red](0.4,2.1)--(-0.9,3.4);
\draw[dashed,red] (0.4,2.1)to [in=-30, out=-60] (-0.2,1.5);
\end{tikzpicture}
\begin{tikzpicture}[scale=0.4]
\tikzstyle{every node}=[thick,minimum size=4pt, inner sep=1pt]
\node(r) at(0,-0.5)[minimum size=0pt, label=below:$(T_3)$]{};
\node(0) at(0,0)[circle,draw, fill=black]{};
\node(1-1)at(-1,1)[circle,draw]{};
\node(1-2)at(1,1)[circle,draw,fill=black]{};
\node(2-1) at(-2,2){};
\node(2-2) at(0,2) [circle,draw,fill=black]{};
\node(2-3) at(1,2){};
\node(2-4) at(2,2){};
\node(3-1) at (-1,3)[circle, draw]{};
\node(3-2) at(1,3)[circle,draw,fill=black]{};
\node(4-1) at (-2,4)[circle, draw, label=right:$\color{red}\times$]{};
\node(4-2) at (0,4)[circle,draw,fill=black]{};
\node(4-3) at(2,4){};
\node(5-1) at(-3,5)[circle,draw]{};
\node(5-2) at(-1.3,4.7){};
\node(5-3) at(-0.7,4.7){};
\node(5-4)at (0.7,4.7){};
\node(6-1) at (-3.7,5.7){};
\draw(0)--(1-1);
\draw(0)--(1-2);
\draw(1-1)--(2-1);
\draw(1-2)--(2-2);
\draw(1-2)--(2-3);
\draw(1-2)--(2-4);
\draw(2-2)--(3-1);
\draw(2-2)--(3-2);
\draw(3-1)--(4-1);
\draw(3-2)--(4-2);
\draw(3-2)--(4-3);
\draw(4-1)--(5-1);
\draw(4-1)--(5-2);
\draw(4-2)--(5-3);
\draw(4-2)--(5-4);
\draw(5-1)--(6-1);
\draw[dashed,red](-0.5,1.8) to [in=150, out=120] (0.1,2.4) ;
\draw[dashed,red](-0.5,1.8)--(0.8,0.5);
\draw[dashed, red](1.4,1.1)--(0.1,2.4);
\draw[dashed,red] (1.4,1.1)to [in=-30, out=-60] (0.8,0.5);
\end{tikzpicture}
\end{eqnarray*}
\begin{enumerate}
\item The tree monomial $T_1$ is effective and the divisor in the red/dashed circle is its effective divisor. Note that there is a vertex of positive degree on the path from the root of the tree to the root of the effective divisor.
\item The tree monomial $T_2$ is not effective, because there is a vertex of degree $1$ (the root itself) on the path from the root of the entire tree to the first leaf.
\item For the tree monomial $T_3$, although there is a typical divisor $m_3\circ_1 m_2$ on the path from the root to the second leaf, there is a vertex $d_2$ of positive degree on the path from the root of this divisor to the second leaf of $T_3$. Thus $T_3$ is not effective.
\end{enumerate}
\end{exam}
\begin{defn}
Let $T$ be an effective tree monomial in $\Difinfty$ and $T'$ be its effective divisor. Assume that $T'=\widehat{S}$, where $S$ is a generator of positive degree. Then define $$\overline{H}(T):=(-1)^\omega \frac{1}{c_S}m_{T', S}(T),$$ where $m_{T',S}(T)$ is the tree monomial obtained from $T$ by replacing the effective divisor $T'$ by $S$, $\omega$ is the sum of degrees of all the vertices on the path from the root of $T'$ to the root of $T$ (except the root vertex of $T'$) and on the left of this path.
\end{defn}
Now, we construct the homotopy map $H$ via the following inductive procedure:
\begin{enumerate}
\item For a non-effective tree monomial $T$, define $H(T)=0$;
\item For an effective tree monomial $T$, define $H(T)= \overline{H}(T)+H(\overline{T})$, where $\overline{T}$ is obtained from $T$ by replacing $\widehat{S}$ in $T$ by $\widehat{S}-\frac{1}{c_S}\partial S$. Then each tree monomial appearing in $\overline{T}$ is strictly smaller than $T$, and we define $H(\overline{T})$ by induction on the leading terms.
\end{enumerate}
Let us explain the definition of $H$ in more detail. Denote $T$ also by $T_1$ and take $I_1:=\{1\}$ for convenience. Then by the definition above, $H(T)=\overline{H}(T_1)+H(\overline{T_1})$. Since $H$ vanishes on non-effective tree monomials, we have $H(\overline{T_1})=H(\sum_{i_2\in I_2} T_{i_2})$, where $\{T_{i_2}\}_{i_2\in I_2}$ is the set of effective tree monomials, taken with their nonzero coefficients, appearing in the expansion of $\overline{T_1}$. Then by the definition of $H$, we have $H\Big(\sum_{i_2\in I_2} T_{i_2}\Big)=\overline{H}\Big(\sum_{i_2\in I_2} T_{i_2}\Big)+H\Big(\sum_{i_2\in I_2}\overline{T_{i_2}}\Big),$
giving
$$H(T)=\overline{H}\Big(\sum_{i_1\in I_1} T_{i_1}\Big)+\overline{H}\Big(\sum_{i_2\in I_2} T_{i_2}\Big)+H\Big(\sum_{i_2\in I_2}\overline{T_{i_2}}\Big).$$ An induction on the leading terms shows that $H(T)$ is the following series:
\begin{eqnarray}
\mlabel{eq:defhomo}
H(T)=\sum_{k=1}^\infty
\overline{H}\Big(\sum_{i_k\in I_k} T_{i_k}\Big),
\end{eqnarray}
where $\{T_{i_k}\}_{i_k\in I_k}, k\geq 1,$ is the set of effective tree monomials appearing in the expansion of $\sum_{i_{k-1}\in I_{k-1}} \overline{T_{i_{k-1}}}$ with nonzero coefficients.
\begin{lem}\mlabel{Lem: homotopy well defined} For an effective tree monomial $T$, the expansion of $H(T)$ in Eq. \meqref{eq:defhomo} is always a finite sum, that is, there exists an integer $n$ such that none of the tree monomials in $\sum_{i_n\in I_n}\overline{T_{i_n}}$ is effective.
\end{lem}
\begin{proof} It is easy to see that $\max\{T_{i_k}|i_k\in I_k\}>\max\{T_{i_{k+1}}|i_{k+1}\in I_{k+1}\}$ for all $k\geq 1$ (by convention, $i_1\in I_1=\{1\}$). So Eq.~\meqref{eq:defhomo} is a finite sum, as $>$ is a well order.
\end{proof}
\begin{lem} \mlabel{Lem: Induction}Let $T$ be an effective tree monomial in $\Difinfty$. Then
$$\partial \overline{H}(T)+H\partial(T-\overline{T})=T-\overline{T}.$$
\end{lem}
\begin{proof}
We can write $T$ via partial compositions as follows:
$$(\cdots(((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \widehat{S})\circ_{j_1}Y_1)\circ_{j_2}Y_2)\circ_{j_3} \cdots)\circ_{j_q}Y_q,$$
where $\widehat{S}$ is the effective divisor of $T$ and $X_1,\cdots, X_p$ are generators of $\Difinfty$ corresponding to the vertices that live on the path from the root of $T$ to the root of $\widehat{S}$ (except the root of $\widehat{S}$) and on the left of this path.
By definition,
{\small\begin{align*}
\partial \overline{H}(T)
=&\frac{1}{c_S}(-1)^{\sum\limits_{j=1}^p|X_j|}\partial \Big((\cdots(( ((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} S)\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q\Big)\\
=&\frac{1}{c_S} \sum\limits_{k=1}^p(-1)^{\sum\limits_{j=1}^p|X_j|+\sum\limits_{j=1}^{k-1}|X_j|}\\
& (\cdots ((((\cdots ((\cdots (X_1\circ_{i_1}X_2)\circ_{i_2}\cdots)\circ_{i_{k-1}}\partial X_k )\circ_{i_k}\cdots)\circ_{i_{p-1}}X_p)\circ_{i_p} S)\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q \\
&+\frac{1}{c_S}(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \partial S)\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q\\
&+ \frac{1}{c_S}\sum\limits_{k=1}^q (-1)^{|S|+\sum\limits_{j=1}^{k-1}|Y_j|}\\
&(\cdots((\cdots ((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} S)\circ_{j_1}Y_1)\circ_{j_2} \cdots \cdots)\circ_{j_k}\partial Y_{k})\circ_{j_{k+1}}\cdots )\circ_{j_q}Y_q
\end{align*}}
and
{\small
\begin{align*}
&H\partial (T-\overline{T})\\
=&\frac{1}{c_S}H\partial \Big((\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \partial S)\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q\Big)\\
=&\frac{1}{c_S} \sum\limits_{k=1}^p(-1)^{\sum\limits_{j=1}^p|X_j|+\sum\limits_{j=1}^{k-1}|X_j|}\\
& H\big((\cdots ((((\cdots ((\cdots (X_1\circ_{i_1}X_2)\circ_{i_2}\cdots)\circ_{i_{k-1}}\partial X_k )\circ_{i_k}\cdots)\circ_{i_{p-1}}X_p)\circ_{i_p} \partial S)\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q \big)\\
&+\frac{1}{c_S}\sum\limits_{k=1}^q(-1)^{\sum\limits_{j=1}^p|X_j|+|S|-1+\sum\limits_{j=1}^{k-1}|Y_j|}\\
&H\Big((\cdots((\cdots ((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \partial S)\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_k}\partial Y_k)\circ_{j_{k+1}}\cdots)\circ_{j_q}Y_q\Big)\\
=& \sum\limits_{k=1}^p(-1)^{\sum\limits_{j=1}^p|X_j|+\sum\limits_{j=1}^{k-1}|X_j|}\\
& H\big((\cdots ((((\cdots ((\cdots (X_1\circ_{i_1}X_2)\circ_{i_2}\cdots)\circ_{i_{k-1}}\partial X_k )\circ_{i_k}\cdots)\circ_{i_{p-1}}X_p)\circ_{i_p} \widehat{S})\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q \big)\\
&+ \sum\limits_{k=1}^p(-1)^{\sum\limits_{j=1}^p|X_j|+\sum\limits_{j=1}^{k-1}|X_j|}\\
& H\big((\cdots ((((\cdots ((\cdots (X_1\circ_{i_1}X_2)\circ_{i_2}\cdots)\circ_{i_{k-1}}\partial X_k )\circ_{i_k}\cdots)\circ_{i_{p-1}}X_p)\circ_{i_p} (\frac{1}{c_S} \partial S-\widehat{S}))\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q \big)\\
&+ \sum\limits_{k=1}^q(-1)^{\sum\limits_{j=1}^p|X_j|+|S|-1+\sum\limits_{j=1}^{k-1}|Y_j|}\\
&H\Big((\cdots((\cdots ((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \widehat{S})\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_k}\partial Y_k)\circ_{j_{k+1}}\cdots)\circ_{j_q}Y_q\Big)\\
&+\sum\limits_{k=1}^q(-1)^{\sum\limits_{j=1}^p|X_j|+|S|-1+\sum\limits_{j=1}^{k-1}|Y_j|}\\
&H\Big((\cdots((\cdots ((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p}(\frac{1}{c_S} \partial S-\widehat{S}))\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_k}\partial Y_k)\circ_{j_{k+1}}\cdots)\circ_{j_q}Y_q\Big).
\end{align*}
}
By the definition of effective divisors in an effective tree monomial, it can be easily seen that each tree monomial in the expansion of
$$T_k':=(\cdots ((((\cdots ((\cdots (X_1\circ_{i_1}X_2)\circ_{i_2}\cdots)\circ_{i_{k-1}}\partial X_k )\circ_{i_k}\cdots)\circ_{i_{p-1}}X_p)\circ_{i_p} \widehat{S})\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q$$
and $$T_k'':=(\cdots((\cdots ((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \widehat{S})\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_k}\partial Y_k)\circ_{j_{k+1}}\cdots)\circ_{j_q}Y_q$$
is still an effective tree monomial with $\widehat{S}$ as the effective divisor.
We thus have
{\small\begin{align*}
&H\partial (T-\overline{T})\\
=& \frac{1}{c_S} \sum\limits_{k=1}^p(-1)^{\sum\limits_{j=1}^p|X_j|+\sum\limits_{j=1}^{k-1}|X_j|} (H(T_k')- H(\overline{T_k'}))+ \sum\limits_{k=1}^q(-1)^{\sum\limits_{j=1}^p|X_j|+|S|-1+\sum\limits_{j=1}^{k-1}|Y_j|} (H(T_k'')- H(\overline{T_k''}))\\
=&\frac{1}{c_S} \sum\limits_{k=1}^p(-1)^{\sum\limits_{j=1}^p|X_j|+\sum\limits_{j=1}^{k-1}|X_j|}\overline{H}(T_k') + \sum\limits_{k=1}^q(-1)^{\sum\limits_{j=1}^p|X_j|+|S|-1+\sum\limits_{j=1}^{k-1}|Y_j|} \overline{H}(T_k'') \\
=&\frac{1}{c_S} \sum\limits_{k=1}^p(-1)^{\sum\limits_{j=1}^p|X_j|+\sum\limits_{j=1}^{k-1}|X_j|}\\
& (\cdots ((((\cdots ((\cdots (X_1\circ_{i_1}X_2)\circ_{i_2}\cdots)\circ_{i_{k-1}}\partial X_k )\circ_{i_k}\cdots)\circ_{i_{p-1}}X_p)\circ_{i_p} S)\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q \\
&+\frac{1}{c_S} \sum\limits_{k=1}^q(-1)^{|S|-1+\sum\limits_{j=1}^{k-1}|Y_j|}\\
&(\cdots((\cdots ((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} S)\circ_{j_1}Y_1)\circ_{j_2} \cdots \cdots)\circ_{j_k}\partial Y_{k})\circ_{j_{k+1}}\cdots )\circ_{j_q}Y_q.
\end{align*}}
Therefore, adding the expansions of $\partial \overline{H}(T)$ and $H\partial(T-\overline{T})$, we obtain
$$\hspace{4.5cm} \partial \overline{H}(T)+H\partial(T-\overline{T})=T-\overline{T}.
\hspace{4.5cm} \qedhere$$
\end{proof}
\begin{prop}\mlabel{Prop: homotopy}
The degree $1$ map $H$ defined in Eq.~\meqref{eq:defhomo} satisfies $\partial H+H\partial=\mathrm{id}$ on $\Difinfty$ in each positive degree.
\end{prop}
\begin{proof}
We first prove that for a non-effective tree monomial $T$, the equation $\partial H(T)+H\partial(T)=T$ holds. By the definition of $H$, since $T$ is not effective, $H(T)=0$. Thus we just need to check $H\partial(T)=T$. Since $T$ has positive degree, there exists at least one vertex of positive degree. We pick such a vertex $S$ that satisfies the following additional conditions:
\begin{enumerate}
\item On the path from $S$ to the leftmost leaf $l$ of $T$ above $S$, there are no other vertices of positive degree;
\item For a leaf $l'$ of $T$ located on the left of $l$, the vertices on the path from the root of $T$ to $l'$ are all of degree 0.
\end{enumerate}
Such a vertex always exists.
Then the element in $\Difinfty$ corresponding to $T$ can be written as
$$(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} S)\circ_{j_1}Y_1)\circ_{j_2}Y_2\cdots)\circ_{j_q}Y_q ,$$
where $X_1,\cdots,X_p$ correspond to the vertices along the path from the root of $T$ to $S$ and the vertices on the left of this path.
Thus, all $X_i, i=1, \dots, p$ are of degree zero.
Then by definition,
{\small\begin{align*}
H\partial T=&H\ \Big( (-1)^{|X_1|+\cdots+|X_p|}(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \partial S)\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q\Big)\\
&+\sum\limits_{k=1}^q(-1)^{\sum\limits_{t=1}^p|X_t|+|S|+\sum\limits_{t=1}^{k-1}|Y_t|} \\
&\quad H\ \Big((\cdots((\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} S)\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_k}\partial Y_{k})\circ_{j_{k+1}}\cdots\circ_{j_q}Y_q\Big).
\end{align*}}
By the assumption, the divisor consisting of the path from $S$ to $l$ must be of one of the following forms:
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.4,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=4pt, inner sep=1pt]
\node(r) at(0,-0.5)[minimum size=0pt, label=below:$(A)$]{};
\node(0) at(0,0)[circle, fill=black, label=right:$m_n\quad (n\geqslant 3)$]{};
\node(1-1) at(-2,2)[circle, draw]{};
\node(1-2) at(0,2){};
\node(1-3) at (2,2){};
\node(2-1) at(-3,3)[circle,draw]{};
\node(3-1) at(-4,4){};
\draw(0)--(1-1);
\draw(0)--(1-2);
\draw(0)--(1-3);
\draw[dotted, line width=1pt](-0.8,1)--(0.8,1);
\draw[dotted, line width=1pt](1-1)--(2-1);
\draw(2-1)--(3-1);
\path[-,font=\scriptsize]
(-1.8,1.2) edge [bend left=80] node[descr]{{\tiny$\sharp\geqslant 0$}} (-3.8,3.2);
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[scale=0.4,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=4pt, inner sep=1pt]
\node(r) at(0,-0.5)[minimum size=0pt, label=below:$(B)$]{};
\node(0) at(0,0)[circle, draw, label=right:$d_n\quad (n\geqslant 2)$]{};
\node(1-1) at(-2,2)[circle, draw]{};
\node(1-2) at(0,2){};
\node(1-3) at (2,2){};
\node(2-1) at(-3,3)[circle,draw]{};
\node(3-1) at(-4,4){};
\draw(0)--(1-1);
\draw(0)--(1-2);
\draw(0)--(1-3);
\draw[dotted, line width=1pt](-0.8,1)--(0.8,1);
\draw[dotted, line width=1pt](1-1)--(2-1);
\draw(2-1)--(3-1);
\path[-,font=\scriptsize]
(-1.8,1.2) edge [bend left=80] node[descr]{{\tiny$\sharp\geqslant 0$}} (-3.8,3.2);
\end{tikzpicture}
\end{eqnarray*}
By the assumption that $T$ is not effective and the additional properties of the position of $S$ stated above, one can see that the effective tree monomials in $\partial T$ will only appear in the expansion of
$$(-1)^{|X_1|+\cdots+|X_p|}(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \partial S)\circ_{j_1}Y_1)\circ_{j_2}Y_2\cdots)\circ_{j_q}Y_q.$$
Consider the tree monomial $(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \widehat{ S})\circ_{j_1}Y_1)\circ_{j_2}Y_2\cdots)\circ_{j_q}Y_q$ in $\partial T$. The path connecting the root of $\widehat{S}$ and $l$ must be one of the following forms:
\begin{eqnarray*}
\begin{tikzpicture}[scale=0.4,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=4pt, inner sep=1pt]
\node(r) at(0,-0.5)[minimum size=0pt,label=below:$(A)$]{};
\node(1) at (0,0)[circle,draw,fill=black,label=right:$\ m_{n-1}\quad (n\geqslant 3)$]{};
\node(2-1) at(-1,1)[circle,draw,fill=black]{};
\node(2-2) at (1,1) {};
\node(3-2) at(0,2){};
\node(3-1) at (-2,2)[circle,draw]{};
\node(4-1) at (-3,3)[circle,draw]{};
\node(5-1) at (-4,4){};
\draw(1)--(2-1);
\draw(1)--(2-2);
\draw(2-1)--(3-1);
\draw(2-1)--(3-2);
\draw [dotted,line width=1pt](-0.4,0.5)--(0.4,0.5);
\draw(2-1)--(3-1);
\draw[dotted,line width=1pt] (3-1)--(4-1);
\draw(4-1)--(5-1);
\path[-,font=\scriptsize]
(-1.8,1.2) edge [bend left=80] node[descr]{{\tiny$\sharp\geqslant 0$}} (-3.8,3.2);
\end{tikzpicture}
\hspace{10mm}
\begin{tikzpicture}[scale=0.4,descr/.style={fill=white}]
\tikzstyle{every node}=[thick,minimum size=4pt, inner sep=1pt]
\node(r) at(0,-0.5)[minimum size=0pt,label=below:$(B)$]{};
\node(1) at (0,0)[circle,draw,label=right:$\ d_{n-1}\quad (n\geqslant 2)$]{};
\node(2-1) at(-1,1)[circle,draw,fill=black]{};
\node(2-2) at (1,1) {};
\node(3-1) at(-2,2)[circle,draw]{};
\node(3-2) at(0,2){};
\node(4-1) at(-3,3)[circle,draw]{};
\node(5-1) at (-4,4){};
\draw(1)--(2-1);
\draw(1)--(2-2);
\draw(2-1)--(3-1);
\draw(2-1)--(3-2);
\draw[dotted,line width=1pt](3-1)--(4-1);
\draw(4-1)--(5-1);
\draw [dotted,line width=1pt](-0.4,0.5)--(0.4,0.5);
\path[-,font=\scriptsize]
(-1.8,1.2) edge [bend left=80] node[descr]{{\tiny$\sharp\geqslant 0$}} (-3.8,3.2);
\end{tikzpicture}
\end{eqnarray*}
By the assumption that $T$ is not effective and the choice of $S$, there exists no effective divisor on the left of the path from the root of $T$ to $l$. So the tree monomial $$(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \widehat{ S})\circ_{j_1}Y_1)\circ_{j_2}Y_2\cdots)\circ_{j_q}Y_q$$
is effective and its effective divisor is simply $\widehat{S}$ itself.
Then we have
{\small
\begin{eqnarray*}&&H\partial T\\
&=&H\Big((-1)^{|X_1|+\cdots+|X_p|}(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \partial S)\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q\Big)\\
&=&c_SH\Big((-1)^{|X_1|+\cdots+|X_p|}(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \widehat{ S})\circ_{j_1}Y_1)\circ_{j_2}Y_2\cdots)\circ_{j_q}Y_q\Big)\\
&&+H\Big((-1)^{|X_1|+\cdots+|X_p|}(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} (\partial S-c_S\widehat{S}))\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q\Big)\\
&=& c_S\overline{H}\Big((-1)^{|X_1|+\cdots+|X_p|}(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \widehat{ S})\circ_{j_1}Y_1)\circ_{j_2} \cdots)\circ_{j_q}Y_q\Big)\\
&&+c_SH\Big((-1)^{|X_1|+\cdots+|X_p|}(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} (\widehat{S}-\frac{1}{c_S}\partial S))\circ_{j_1}Y_1)\circ_{j_2}Y_2\cdots)\circ_{j_q}Y_q\Big)\\
&&+H\Big((-1)^{|X_1|+\cdots+|X_p|}(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} (\partial S-c_S\widehat{S}))\circ_{j_1}Y_1)\circ_{j_2}Y_2\cdots)\circ_{j_q}Y_q\Big)\\
&=& c_S\overline{H}\Big((-1)^{|X_1|+\cdots+|X_p|}(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} \widehat{ S})\circ_{j_1}Y_1)\circ_{j_2}Y_2\cdots)\circ_{j_q}Y_q\Big)\\
&=&(\cdots((((\cdots(X_1\circ_{i_1}X_2)\circ_{i_2}\cdots )\circ_{i_{p-1}}X_p)\circ_{i_p} S)\circ_{j_1}Y_1)\circ_{j_2}Y_2\cdots)\circ_{j_q}Y_q\\
&=& T .
\end{eqnarray*}
}
Let $T$ be an effective tree monomial.
By Lemma~\mref{Lem: Induction}, we have $\partial \overline{H}(T)+H\partial(T-\overline{T})=T-\overline{T}$. Moreover, since each tree monomial of $\overline{T}$ is strictly smaller than $T$, by induction, we obtain $H\partial(\overline{T})+\partial H(\overline{T})=\overline{T}$.
By $H(T)=\overline{H}(T)+H(\overline{T})$, we get
$$\hspace{1cm} \partial H(T)+H\partial (T)=\partial \overline{H}(T)+\partial H(\overline{T})+H\partial(T-\overline{T})+H\partial(\overline{T})=T-\overline{T}+\overline{T}=T.
\hspace{1cm} \qedhere $$
\end{proof}
Combining Lemma~\mref{Lem: H0 of Difinfty is Dif} and Proposition~\mref{Prop: homotopy}, we obtain a surjective quasi-isomorphism $p: \Difinfty\to\Dif$.
Notice that conditions (i) and (ii) of Definition~\mref{de:model} hold by the construction of the dg operad $\Difinfty$. This completes the proof of Theorem~\mref{thm:difmodel}.
\vspb
\subsection{Homotopy differential algebras with weight}\
\mlabel{ss:homo}
\begin{defn}\mlabel{de:homodifalg} Let $V=(V, m_1)$ be a complex. A \name{homotopy differential algebra structure of weight $\lambda$} on $V$ is a homomorphism $\Difinfty\to \mathrm{End}_V$ of unital dg operads. \end{defn}
Definition~\mref{de:homodifalg} translates to the following explicit characterization of homotopy differential algebras.
\begin{prop}\mlabel{de:homodifalg2} A homotopy differential algebra structure of weight $\lambda$ on a graded space $V$ is equivalently given by two families of operators $\{ m_n: V^{\ot n}\to V\}_{n\geqslant 1}$ and $\{d_n: V^{\ot n}\to V\}_{n\geqslant 1}$ with $|m_n|=n-2$ and $|d_n|=n-1$, fulfilling the following identities: for each $n\geqslant 1$,
\begin{eqnarray}\mlabel{eq:homodifalg2}
\sum_{i+j+k=n\atop i, k\geqslant 0, j\geqslant 1}(-1)^{i+jk}m_{i+1+k}\circ(\Id^{\ot i}\ot m_j\ot \Id^{\ot k})=0,
\end{eqnarray}
\vspc
\begin{eqnarray}\mlabel{eq:homodifop2}
&&\sum_{i+j+k=n\atop i, k\geqslant 0, j\geqslant 1}(-1)^{i+jk}d_{i+1+k}\circ (\Id^{\ot i}\ot m_j\ot \Id^{\ot k})\\
\notag &&\quad \quad\quad\quad\quad =\sum_{\substack{l_1+\dots+l_q+j_1+\dots+j_{q+1}=n \\ j_1+\dots+j_{q+1}+q=p\\ j_1, \dots, j_{q+1}\geqslant 0, l_1, \dots, l_{q}\geqslant 1 \\n\geqslant p\geqslant q\geqslant 1}}(-1)^{\eta}\lambda^{q-1}m_p\circ(\Id^{\ot j_1}\ot d_{l_1}\ot \Id^{\ot j_2}\ot \cdots\ot d_{l_q}\ot \Id^{\ot j_{q+1}}),
\end{eqnarray}
where
{\small\vspc
$$\eta:=\frac{n(n-1)}{2}+\frac{p(p-1)}{2}+\sum\limits_{k=1}^q\frac{l_k(l_k-1)}{2}+\sum\limits_{k=1}^q\big(l_k-1\big)\big(\sum\limits_{r=1}^kj_r+\sum\limits_{r=1}^{k-1}l_r\big)
=\sum_{k=1}^q(l_k-1)(q-k+\sum_{r=k+1}^{q+1}j_r).
$$}
\end{prop}
Notice that Eq.~\meqref{eq:homodifalg2} is exactly the Stasheff identity defining $A_\infty$-algebras \mcite{Sta63}.
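For instance, for $n=2$, Eq.~\meqref{eq:homodifalg2} reads
$$m_1\circ m_2=m_2\circ(m_1\ot \Id)+m_2\circ(\Id\ot m_1),$$
i.e. $m_1$ is a derivation with respect to $m_2$, while for $n=3$ it expresses the associativity of $m_2$ up to the homotopy $m_3$.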
As is well known, the homology $\rmH_*(V, m_1)$ endowed with the associative product induced by $m_2$ is a graded algebra.
Expanding Eq.~\meqref{eq:homodifop2} for small $n$'s, one obtains
\begin{enumerate}
\item when $n=1$, $|d_1|=0$ and $d_1\circ m_1=m_1\circ d_1,$
that is, $d_1: (V, m_1)\to (V, m_1)$ is a chain map;
\item when $n=2$, $|d_2|=1$ and
\begin{eqnarray*} d_1\circ m_2-\Big(m_2\circ(d_1\ot \Id)+m_2\circ(\Id\ot d_1)+\lambda m_2\circ(d_1\ot d_1)\Big) \\
= d_2\circ(\Id\ot m_1 +m_1\ot \Id)+
m_1\circ d_2, \end{eqnarray*}
which shows that $d_1$ is, up to a homotopy given by $d_2$, a differential operator of weight $\lambda$ with respect to the ``multiplication'' $m_2$.
\end{enumerate}
As a consequence, the homology $\rmH_*(V, m_1)$, endowed with the associative product induced by $m_2$ and the differential operator induced by $d_1$, is a differential algebra of weight $\lambda$.
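Explicitly, writing $\cdot$ for the product induced by $m_2$ and $\bar{d}$ for the operator induced by $d_1$ on $\rmH_*(V, m_1)$, the $n=2$ identity above descends to homology (where the right-hand side vanishes) and becomes the weighted Leibniz rule
$$\bar{d}(\alpha\cdot\beta)=\bar{d}(\alpha)\cdot\beta+\alpha\cdot\bar{d}(\beta)+\lambda\, \bar{d}(\alpha)\cdot\bar{d}(\beta), \quad \alpha, \beta\in \rmH_*(V, m_1),$$
which is the defining identity of a differential operator of weight $\lambda$; no Koszul signs appear since $|d_1|=0$.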
This indicates that the notion of homotopy differential algebras is a ``higher homotopy" version of that of differential algebras with weight.
\vspc
\section{From the minimal model to the $L_\infty$-algebra controlling deformations}
\mlabel{ss:modelinf}
To finish the paper, we use the homotopy cooperad $\Dif^{\ac}$ to deduce the $L_\infty$-structure controlling the deformations of differential algebras with weight, as promised in Theorem~\mref{th:linfdiff}. As a consequence, homotopy differential algebras can be described as the Maurer-Cartan elements in (a reduced version of) this $L_\infty$-algebra.
\vspb
\subsection{Proof of Theorem~\mref{th:linfdiff}} \mlabel{ss: Linifty}\
By Proposition~\mref{pp:linfhomood} and Proposition-Definition~\mref{Prop: convolution homotopy operad}, for a graded space $V$, the $L_\infty$-algebra from the deformation complex of differential algebras is $$\mathfrak{C}_{\Dif}(V):=\mathbf{Hom}( \Dif^{\ac}, \End_V)^{\prod}.$$
Now, we determine the $L_{\infty}$-algebra $\frakC_{\Dif}(V)$ explicitly and thus give the promised operadic proof of Theorem~\mref{th:linfdiff}.
The sign rules in $\Dif^{\ac}$ are involved, so we first perform some simplifying transformations. Notice that there is a natural isomorphism of operads $\mathbf{Hom}(\cals,\End_{sV}) \cong \End_V$. Then we have the following isomorphisms of homotopy operads:
$$\begin{array}{rcl}
\mathbf{Hom}( \Dif^{\ac}, \End_V)& \cong & \mathbf{Hom}( \Dif^{\ac}, \mathbf{Hom}(\cals,\End_{sV})) \cong \mathbf{Hom}( \Dif^{\ac}\otimes_{\mathrm{H}}\cals, \End_{sV})\\
& = & \mathbf{Hom}(\mathscr{S} (\Dif^{\ac}), \End_{sV}).
\end{array}$$
Recall that for $n\geq 1$, $\mathscr{S} (\Dif^{\ac})(n)=\bfk \widetilde{m_n}\oplus \bfk \widetilde{d_n}$ with $|\widetilde{m_n}|=0$ and $|\widetilde{d_n}|=1$. By definition, we have
$$\begin{array}{rcl}
\mathbf{Hom}(\mathscr{S} (\Dif^{\ac}), \End_{sV})(n) & = & \Hom(\bfk \widetilde{m_n}\oplus \bfk \widetilde{d_n}, \Hom((sV)^{\otimes n}, sV))\\
& \cong & \Hom(\bfk \widetilde{m_n}, \Hom((sV)^{\otimes n}, sV))\oplus \Hom( \bfk \widetilde{d_n}, \Hom((sV)^{\otimes n}, sV)).
\end{array}$$
For a homogeneous element $f\in \Hom((sV)^{\otimes n}, sV)$, define
$$\widehat{f}:\bfk \widetilde{m_n} \to \Hom((sV)^{\otimes n}, sV), \quad \widetilde{m_n} \mapsto f, $$
and for a homogeneous element $g\in \Hom((sV)^{\otimes n}, V)$, define
$$\overline{g}: \bfk \widetilde{d_n} \to \Hom((sV)^{\otimes n}, sV), \quad \widetilde{d_n} \mapsto (-1)^{|g|} sg.$$
The resulting bijections, sending $f$ and $g$ to $\widehat{f}$ and $\overline{g}$ respectively, allow us to identify $\frakC_{\DA}(V)$ with $\mathbf{Hom}(\mathscr{S} (\Dif^{\ac}), \End_{sV})$.
Now we compute the $L_{\infty}$-structure $\{ \ell_n \}_{n\geq 1}$ of $\frakC_{\Dif}(V)$.
Recall that for homogeneous elements $x_i\in \frakC_{\Dif}(V),1\leq i\leq n$,
$$\ell_n(x_1,\cdots,x_n)=\sum_{\sigma \in S_n} \chi(\sigma; x_1,\dots,x_n) m_n(x_{\sigma(1)}\otimes \cdots \otimes x_{\sigma(n)})$$
with
\vspc
$$m_n(x_1\otimes \cdots \otimes x_n)=\sum_{T\in \frakt, w(T)=n} m_T(x_1\otimes \cdots \otimes x_n),$$
where $\{m_T\}_{T\in \frakt}$ is given by the homotopy operad structure of $\mathbf{Hom}(\mathscr{S} (\Dif^{\ac}), \End_{sV})$.
One computes the maps $m_n$ as follows.
\begin{itemize}
\item[(i)] For homogeneous elements $f,g\in\End_{sV}$,
$m_2(\widehat{f}\otimes \widehat{g})=\widehat{f\{ g \}};$
\item[(ii)] For homogeneous elements $f,g\in\End_{sV}$,
$m_2(\overline{f}\otimes \widehat{g})=(-1)^{|\widehat{g}|} \overline{f\{ g \}};$
\item[(iii)] For homogeneous elements $f_0,f_1,\dots,f_n\in \End_{sV},n\geq 1$,
$$ m_{n+1}(\widehat{f_0}\otimes \overline{f_1}\otimes \cdots \otimes \overline{f_n})= (-1)^{ (n+1)|\widehat{f_0}| + \sum_{k=1}^{n}(n-k)|\overline{f_k}|} \lambda^{n-1} \overline{f_0\{f_1,\dots, f_n\}};$$
\item[(iv)] All other components of operators $\{m_n \}_{n\geq 1}$ vanish.
\end{itemize}
Furthermore, the maps $\{\ell_n\}_{n\geqslant 1}$ are given as follows.
\begin{itemize}
\item[(i)] For homogeneous elements $f,g\in\End_{sV}$,
$$\begin{array}{rcl}
\ell_2(\widehat{f}\otimes \widehat{g})& = & m_2(\widehat{f}\otimes \widehat{g})-(-1)^{|\widehat{f}|| \widehat{g}|} m_2(\widehat{g}\otimes \widehat{f})\\
& = & \widehat{ f\{g\}} -(-1)^{|f||g|}\widehat{ g\{ f \}} \\
& =&\widehat{[f,g]_G};
\end{array}$$
\item[(ii)] For homogeneous elements $f,g\in\End_{sV}$,
$$\begin{array}{rcl}
\ell_2(\widehat{f}\otimes \overline{g}) & = & m_2(\widehat{f}\otimes\overline{g})-(-1)^{|\widehat{f}|| \overline{g}|} m_2(\overline{g}\otimes \widehat{f})\\
& = & \overline{f\{g\}} -(-1)^{|f||g|}\overline{g\{f\}}
\\
& =& \overline{[f,g]_G};
\end{array}$$
\item[(iii)] For homogeneous elements $f_0,f_1,\dots,f_n\in \End_{sV}$ with $n\geq 1$,
$$\begin{aligned}
\ell_{n+1}(\widehat{f_0}\otimes \overline{f_1}\otimes \cdots \otimes \overline{f_n})
= & \sum_{\sigma\in S_n} \chi(\sigma;\overline{f_1},\dots, \overline{f_n})m_{n+1}(\widehat{f_0} \otimes \overline{f_{\sigma(1)}}\otimes \cdots \otimes \overline{f_{\sigma(n)}})\\
= & \sum_{\sigma\in S_n}(-1)^{(n+1)|\widehat{f_0}|+\sum_{k=1}^{n-1}(n-k)|\overline{f_{\sigma(k)}}|} \lambda^{n-1} \overline{f_0\{ f_{\sigma(1)},\dots ,f_{\sigma(n)} \}};
\end{aligned}$$
\item[(iv)] For homogeneous elements $f_0,f_1,\dots,f_n\in \End_{sV}$ with $n\geq 1$ and $1\leq i\leq n$,
$$\ell_{n+1}(\overline{f_1}\otimes \cdots \otimes \overline{f_{i}} \otimes \widehat{f_0} \otimes \overline{f_{i+1}} \otimes \cdots \otimes \overline{f_n})=(-1)^{ i+|\widehat{f_0}|(\sum_{k=1}^{i} |\overline{f_k}|)} \ell_{n+1} (\widehat{f_0}\otimes \overline{f_1}\otimes \cdots \otimes \overline{f_n});$$
\item[(v)] All other components of the operators $\{\ell_n \}_{n\geq 1}$ vanish.
\end{itemize}
Via the bijections $f\mapsto \widehat{f}$ and $g\mapsto \overline{g}$, it is readily verified that the above $L_{\infty}$-structure on $\mathbf{Hom}(\mathscr{S} (\Dif^{\ac}), \End_{sV})$ is exactly the one on $\frakC_{\DA}(V)$ defined in Eq.~\meqref{eq:linfdiff}, thereby proving Theorem~\mref{th:linfdiff}.
\vspb
\subsection{Another description of homotopy differential algebras}\
\mlabel{ss: another definition}
We end the paper with another characterization of the key notion of homotopy differential algebras, which generalizes a result of Kajiura and Stasheff~\mcite{KS06}.
By the last paragraph of \S\mref{ss:homocood}, there exists another definition of homotopy differential algebras in terms of the Maurer-Cartan elements in the $L_\infty$-algebra on the deformation complex. Let us make it precise.
Let $V$ be a graded space. Denote
$$\overline{T}(V)\coloneqq \oplus_{n\geq 1} V^{\ot n},\ \overline{\frakC_\Alg}(V):=\Hom(\overline{T}(sV),sV),\ \overline{\frakC_\DO}(V):= \Hom(\overline{T}(sV),V). $$
We consider the subspace
$$\overline{{\frakC}_{\DA}}(V):=\overline{\frakC_\Alg}(V)\oplus\overline{\frakC_\DO}(V)$$
of $\frakC_\DA(V)$ in Eq.~\meqref{eq:linfdiff}.
It is easy to see that $\overline{{\frakC}_{\DA}}(V)$ is an $L_\infty$-subalgebra of $\frakC_{\DA}(V)$.
By the remark at the end of \S\mref{ss:homocood}, Definition~\mref{de:homodifalg} can be rephrased as follows.
\begin{prop}\mlabel{de:homodifalg3}
Let $V$ be a graded space. A \name{homotopy differential algebra structure} of weight $\lambda$ on $V$ is defined to be a Maurer-Cartan element in the $L_\infty$-algebra $\overline{{\frakC}_{\DA}}(V)$.
\end{prop}
Notice that if we take the whole space $\frakC_{\DA}(V)$ instead of the reduced part $\overline{{\frakC}_{\DA}}(V)$, we obtain the notion of a curved homotopy differential algebra structure.
Expanding the Maurer-Cartan equation in the $L_\infty$-algebra $\overline{{\frakC}_{\DA}}(V)$, Proposition~\mref{de:homodifalg3} can be made explicit as follows.
\begin{prop} \mlabel{de:homodifalg4}
A homotopy differential algebra structure of weight $\lambda$ on a graded space $V$ is equivalent to two families of linear maps
$$b_n: (sV)^{\ot n}\rightarrow sV, \quad R_n:(sV)^{\ot n}\rightarrow V, \quad n\geq 1,$$
both of degree $-1$ and subject to the following equations:
$$\sum_{i+j-1=n\atop i, j\geqslant 1} b_{i}\{b_j\}=0 \ \mathrm{and}\
\sum_{u+j-1=n\atop u, j\geqslant 1} sR_{u}\{b_j\}
=\sum_{l_1+\dots+l_q+p-q=n \atop {l_1, \dots, l_q\geqslant 1\atop
1\leqslant q\leqslant p\leqslant n} }\lambda^{q-1} b_p\{sR_{l_1},\dots,sR_{l_q}\}.
$$
\end{prop}
\begin{remark}
\begin{enumerate}
\item
The equivalence between Definition~\mref{de:homodifalg2} and Proposition~\mref{de:homodifalg4} is given by
\[m_n:=s^{-1}\circ b_n\circ s^{\ot n} :V^{\ot n}\rightarrow V,\quad d_n:=R_n\circ s^{\ot n}:V^{\ot n}\rightarrow V,\ n\geqslant 1.\]
\item Thanks to Proposition~\mref{de:homodifalg4}, Definition~\mref{de:homodifalg2} generalizes the notion of homotopy derivations on $A_\infty$-algebras introduced by Kajiura and Stasheff \mcite{KS06} from weight zero to arbitrary weight $\lambda$.
\end{enumerate}
\end{remark}
\noindent
{\bf Acknowledgments. } This work is supported by NSFC (12071137, 11971460) and by STCSM (22DZ2229014).
\noindent
{\bf Declaration of interests. } The authors have no conflicts of interest to disclose.
\noindent
{\bf Data availability. } No new data were created or analyzed in this study.
\vspc
|
{
"arxiv_id": "2302.13154",
"language": "en",
"timestamp": "2023-02-28T02:11:23",
"url": "https://arxiv.org/abs/2302.13154",
"yymm": "2302"
} | \section{Introduction}
Over recent years, the world has seen tremendous growth in the wind energy sector, with the global wind energy production capacity in 2021 standing at nearly 4.5 times the capacity in 2010 \citep{IEA}. However, aligning with ambitious net-zero targets still requires a colossal effort to exploit the full potential of wind energy resources through improved design of wind farm layouts as well as individual turbines. The performance of a single wind turbine can be affected by several factors such as interaction with the atmospheric boundary layer, inflow turbulence, the presence of complex topography, atmospheric stability and so on. In addition, inside a grid of turbines, the wake produced by one turbine can contribute to significant power losses and fatigue damage to subsequent wind turbines downstream \citep{vermeer2003wind, sanderse2011review, stevens2017flow, porte2020wind}. Therefore, a better understanding of the spatial development of a turbine wake, as well as its dynamic properties, is necessary. {With ever increasing turbine diameter, particularly for offshore turbines, the turbine spacing is no longer a free parameter that can be decided solely by optimising for total power output; rather, land/area related constraints also become a key factor in designing wind farm layouts \citep{lignarolo2016experimental, gaumond2012benchmarking, howland2019wind}. In that regard, studying the near wake of the turbine, where strong coherence is present, becomes particularly important. }
The near wake (coarsely defined as the region within 3 rotor diameters (D) downstream of the rotor plane) of a wind turbine is multiscale in nature as the flow is forced simultaneously at multiple length scales, for example by the tower, nacelle, blade tip/root vortices etc., thereby introducing coherence into the overall wake at multiple time/length scales \citep{porte2020wind, crespo1999survey, abraham2019effect}. Out of these, the most pronounced structures in the near wake are the tip vortices, the dynamics of which has been studied extensively through numerous experimental \citep{sherry2013interaction, okulov2014regular, lignarolo2014experimental, lignarolo2015tip} and numerical studies \citep{ivanell2010stability, lu2011large, sarmast2014mutual, hodgkin2022numerical}. The stability analysis of \citet{widnall1972stability} showed the presence of three different instability modes of the tip vortices, a short-wave mode, a long-wave mode and the mutual inductance mode, out of which the mutual inductance mode was found to dominate the breakup process of the helicoidal vortex system. The intensity of the mode depends on the pitch of the helical vortex system, hence on the tip speed ratio ($\lambda = \omega R/U_{\infty}$, where $R$ is the turbine radius, $\omega$ is the rotational speed, and $U_{\infty}$ is the free stream velocity) \citep{sherry2013interaction, okulov2014regular}. Depending on the blade configuration, distinct root vortices can also form near the root region of the blade \citep{sherry2013interaction}. However, they are much less persistent in comparison to the tip vortices, whose dynamics and breakdown is particularly important to initiate the recovery of the turbine wake. \citet{medici2005experimental} noted that the tip vortex system in the near field acts as a shield, restricting the exchange of mass and momentum with the outer, background fluid. In other words, breakdown of the tip vortices is a necessary process to re-energise the wake in the far field, reducing the velocity deficit, which is beneficial for the subsequent turbines in the grid.
Interestingly, \citet{okulov2014regular} found a single dominant frequency in the far wake (which they defined as streamwise distances $x > 2.5D$) of a wind turbine which was nearly independent of operating conditions. The corresponding Strouhal number based on rotor diameter and freestream velocity was 0.23. A similar Strouhal number in the range (0.15 - 0.4) has been noted in several other works \citep{chamorro2013interaction, foti2018wake, medici2008measurements, heisel2018spectral}. This Strouhal number was associated with wake meandering, which is responsible for large transverse displacements of the wake centreline (which can be roughly defined as the location of maximum velocity deficit) in the far wake of the turbine. Although the dominance of wake meandering in the far wake has been known for many years, the scientific community still holds varied opinions about the genesis of the meandering motion in the far wake. For instance, wake meandering has been seen as a passive advection of the turbine wake due to large scale atmospheric structures \citep{larsen2008wake, espana2011spatial}. In contrast, \citet{okulov2014regular} observed wake meandering when the free stream turbulence level was negligible. The authors proposed that wake meandering could be related to the instability of the shed vortices and connected it to the slow precession motion of the helicoidal vortex system. On a similar note, the importance of the nacelle in the generation of wake meandering was reported through linear stability analysis \citep{iungo2013linear} and numerical simulations \citep{kang2014onset, foti2016wake, foti2018wake}. The large eddy simulations of \citet{foti2018wake} showed that wake meandering is related to the slow precession motion of the helical hub vortex formed behind the nacelle. The authors suggested that the hub vortex grows radially and interacts with the outer wake, which can potentially augment wake meandering. Even after a significant amount of research, the exact cause of wake meandering still remains elusive, as is perhaps indicated by the considerable scatter of Strouhal numbers associated with wake meandering observed in different studies \citep{medici2008measurements}. The distance from the rotor plane at which a wake meandering frequency has been observed has also varied between studies. For instance, \citet{chamorro2013interaction} found wake meandering only after 3 rotor diameters, whereas \citet{okulov2014regular} reported the presence of a wake meandering frequency as close as $1.5D$ from the rotor (see fig. 6 of \citet{okulov2014regular}), where they found small variation in the frequency for different operating conditions. \citet{medici2008measurements} reported a similar observation at 1 diameter downstream of the rotor and concluded that the wake meandering frequency varied with both the tip speed ratio and the thrust coefficient of the rotor. In the present study, we attempt to address some of these discrepancies through extensive laboratory experiments on a model wind turbine.
We perform a series of particle image velocimetry (PIV) experiments to study the near wake of a wind turbine at a range of tip speed ratios, while focusing primarily on $\lambda=4.5$ and $\lambda=6$. The wind turbine model had a nacelle and tower to imitate a real wind turbine as closely as possible within laboratory scale constraints. We report four main results: (a) the spatial region over which different frequencies are dominant in the near field varies drastically with $\lambda$. We introduce a length scale termed the `convective pitch', which depends on $\lambda$ (and hence the spatial unfolding of the tip vortices), and show that it could be a better length scale than the turbine diameter (D) to demarcate the near wake. (b) the free stream turbulence intensity for the experiments was negligible ($<1\%$); however, we still observed wake meandering. The Strouhal number of wake meandering decreased with tip speed ratio ($\lambda$). (c) the wake meandering frequency is found to be present even very close to the nacelle, upholding the notion that the nacelle is important to `seed' wake meandering. (d) the tower acts as an important source of asymmetry, resulting in a downward bending of the mean wake centreline, and increased turbulence and mixing in the lower plane.
\section{Experimental method}\label{Section_model}
A series of particle image velocimetry (PIV) experiments were performed on a small scale wind turbine model in the hydrodynamics flume in the Department of Aeronautics at Imperial College London. At the operating water depth, the flume has a cross section of 60$\times$60 cm$^2$. A schematic of the model wind turbine, which was designed to mimic an actual wind turbine as closely as possible while satisfying several experimental constraints, is shown in figure \ref{fig:sch}. The diameter of the model was restricted to 20 cm to keep the blockage low ($8.7\%$ based on the rotor swept area relative to the flume cross section, which is comparable to blockages encountered in previous experimental studies \citep{sherry2013interaction, miller2019horizontal}). Note that the actual blockage is smaller, as the rotor can be considered a porous body. The freestream velocity ($U_{\infty}$) was kept constant at 0.2 m/s, which led to a global Reynolds number (based on turbine diameter) of 40,000, several orders of magnitude smaller than the Reynolds number actual wind turbines operate at. The turbine was specifically designed to operate at low Reynolds number, as discussed in the next paragraph. The nacelle was represented as a cylindrical body of diameter 3.3 cm and length 4.9 cm. A hollow cylindrical tower of outer diameter 2.1 cm was attached to the nacelle. The tower was attached at the top to a mounting frame and the wind turbine model was hung in an inverted fashion in the flume. A stepper motor RS 829-3512 was used along with a drive and signal generator to rotate the turbine at a prescribed RPM. The motor, along with the speed controlling electronics, was located outside the flume, and the torque from the motor was transmitted to the turbine shaft via a belt and pulley mechanism that was housed inside the hollow tower, which restricted any further reduction in the tower's diameter.
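As a quick sanity check, the quoted blockage and diameter-based Reynolds number can be recovered from the dimensions given above; the short Python snippet below does so, assuming a nominal kinematic viscosity of $10^{-6}$ m$^2$/s for water (a value not stated explicitly here).
\begin{verbatim}
# Sanity check of the quoted blockage ratio and Reynolds number.
# The kinematic viscosity of water is an assumed nominal value.
import math

D = 0.20          # rotor diameter (m)
U_inf = 0.2       # freestream velocity (m/s)
flume_w = 0.60    # flume width at operating depth (m)
flume_h = 0.60    # flume depth (m)
nu = 1.0e-6       # kinematic viscosity of water (m^2/s), assumed

rotor_area = math.pi * (D / 2) ** 2
blockage = rotor_area / (flume_w * flume_h)
Re_D = U_inf * D / nu

print(f"blockage = {blockage:.1%}")   # ~8.7 %
print(f"Re_D     = {Re_D:.0f}")       # ~40,000
\end{verbatim}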
Standard wind turbines operate at a high chord based Reynolds number $\sim 10^6$ (see fig. 1 of \citet{lissaman1983low}), and at such $Re$ standard airfoils operate at high maximum lift to drag ratios of O(100) (see table 1 of \citet{sunada1997airfoil}), which is impossible to achieve in a small scale model (for which the chord based $Re$ is barely $\sim 9000$ here) unless a pressurised facility is used to tailor the density of the incoming flow \citep{miller2019horizontal}. Accordingly, there is inherently a Reynolds number mismatch of the order of 100. At $Re$ $\sim 10^4$, thin flat plate airfoils perform better than standard thicker smooth airfoils \citep{mcmasters1980low, winslow2018basic, sunada1997airfoil, hancock2014wind}. Thus, a flat plate airfoil with thickness ratio $5\%$ and camber ratio $5\%$, which gives the best performance at low Reynolds number ($Re \sim 4 \times 10^3$) \citep{sunada1997airfoil}, was initially chosen for the blade. However, the blades were found to be incapable of sustaining the structural loads under the present experimental conditions and hence the thickness ratio was increased to $10\%$ for more structural rigidity. The operating angle of attack was selected to be $5^\circ$. The chord and twist distributions of the blades were similar to those used by \citet{hancock2014wind}. Near the root section, the blade was linearly interpolated to a circular section which was fixed to the hub. Experiments were conducted at tip speed ratios in the range $4.5\leq \lambda\leq 6$.
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 0 0 ,width= 1.1\textwidth]{figures/schematic_final.png} }
\caption{A schematic of the wind turbine model and the fields of view used in the different experiments. The streamwise distance $x$ is measured from the rotor plane and the transverse distances $z$ and $y$ are measured from the nacelle centerline. Experiment 1 (A-C) focused on the plane aligned with the tower's axis and the streamwise direction, \textit{i.e.} the $xy$ plane (subfigure \ref{fig:sch}(a)), while experiments 2-4 focused on $xz$ planes at different $y$ offsets (subfigures \ref{fig:sch}(b-c)).}
\label{fig:sch}
\end{figure}
\begin{table}
\begin{center}
\def~{\hphantom{0}}
\begin{tabular}{lcccccccc}
Exp & $U_{\infty}(m/s)$ & $\lambda$ & FOV & Plane & $\delta x$ & $f_{aq}(Hz)$ & $\delta t(s)$ & $T(s)$ \\[3pt]
1A & 0.2 & 4.5, 6 & \hspace{0.2 cm}\makecell{$0.25D<x<3.45D$, \\ $0<y<0.85D$} \hspace{0.2 cm} & $z=0$ & $0.0088D$ & 100 & 0.01 & 54.75\\
1B & 0.2 & 4.5, 6 & \hspace{0.2 cm}\makecell{$0.25D<x<3.45D$, \\ $-0.85D<y<0$}\hspace{0.2 cm} & $z=0$ & $0.0088D$ & 100 & 0.01 & 54.75\\
1C & 0.2 & 4.5, 6 & \hspace{0.2 cm}\makecell{$0.25D<x<3.45D$, \\ $-0.42D<y<0.43D$}\hspace{0.2 cm} & {$z=0$} & $0.0088D$ & 100 & 0.01 & 54.55\\
2A & 0.2 & 4.5, 6 & \hspace{0.2 cm}\makecell{$0.29D<x<1.95D$, \\ $-0.73D<y<0.69D$}\hspace{0.2 cm} & {$y=0$} & $0.0092D$ & 100 & 0.01 & 54.55\\
2B & 0.2 & 4.5, 6 & \hspace{0.2 cm}\makecell{$0.29D<x<1.95D$, \\ $-0.73D<y<0.69D$}\hspace{0.2 cm} & {$y=-0.35D$} & $0.0092D$ & 100 & 0.01 & 54.55\\
3A & 0.2 & 4.5-6 & \makecell{$x\in(2D\pm 0.062D)$, \\ $-0.6D<y<0.6D$} & $y=0$ & $0.0082D$ & 10 & 0.01 & 900\\
3B & 0.2 & 4.5-6 & \makecell{$x\in(3D\pm 0.062D)$, \\ $-0.6D<y<0.6D$} & $y=0$ & $0.0082D$ & 10 & 0.01 & 900\\
3C & 0.2 & 4.5-6 & \makecell{$x\in(5D\pm 0.062D)$, \\ $-0.6D<y<0.6D$} & $y=0$ & $0.0082D$ & 10 & 0.01 & 900\\
4 & 0.2 & 4.5-6 & \makecell{$x\in(-1.2D\pm 0.062D)$, \\ $-0.7D<y<0.7D$} & $y=0$ & $0.0082D$ & 10 & 0.01 & 120\\
\end{tabular}
\caption{Parameters associated with the different experiments. Here $\delta x$ represents the spatial resolution of the experiments. $f_{aq}$, $\delta t$, and $T$ are the acquisition frequency, the time between successive laser pulses and the total time of data acquisition, respectively.}
\label{tab:kd}
\end{center}
\end{table}
Four different experimental campaigns, named campaigns 1 - 4, were conducted to capture different regions and properties of the flow. The details of the experiments can be found in table \ref{tab:kd} and fig. \ref{fig:sch}. In campaign 1, three Phantom v641 cameras were used to obtain a stitched field of view spanning $0.25 \leq x/D \leq 3.45$ in the streamwise direction in the $xy$ plane (see fig. \ref{fig:sch}), the distance being measured from the rotor plane. Here, in the plane of symmetry ($z=0$), three different experiments were conducted at each tip speed ratio considered. The first experiment in campaign 1 (henceforth referred to as $1A$) focused on the region $0\leq y/D \leq 0.85$, measured from the symmetry line. Similarly experiment $1B$ focused on the region $-0.85 \leq y/D \leq 0$ which contains the wake region of the tower. The 3rd experiment ($1C$) covered the central region, $-0.42 \leq y/D \leq 0.43$. In experimental campaign 2, two Phantom v641 cameras were used simultaneously to capture the near wake ($0.29 \leq x/D \leq 1.95$) in the $xz$ plane. Experiment $2A$ focused on the symmetry plane ($y=0$), while experiment $2B$ focused on an offset plane ($y=-0.35D$) such that some influence from the tower-wake is captured. All the parameters associated with different experiments are tabulated in table \ref{tab:kd}. For experiments 1 and 2, each camera captured images (of dimension $2560 px\times 1600 px$) in cinematographic mode at an acquisition frequency ($f_{aq}$) of $100$ $Hz$ which was found to be adequate to resolve all scales of dynamic importance for the present study. Data was obtained for a total time ($T$) of 54.75$s$ and 54.55$s$ for experiments 1 and 2 respectively.
The objective of campaign 3 was to obtain the wake meandering frequency accurately. Only one camera was used for each experiment, and the field of view was shrunk to a thin strip of dimension $2560 px \times 256 px$ (see fig. \ref{fig:sch}(c)), which facilitated obtaining long time series of data ($900 s$, which is close to 180 wake meandering cycles) within the memory constraints of the camera. The tip speed ratio was slowly varied from 4.5 to 6 in increments of 0.1. The acquisition frequency was reduced to $10Hz$ and the time between successive laser pulses ($\delta t$) was kept fixed at $0.01s$ for all experiments. The fields of view of the experiments were centred at $x=2D$, $3D$ and $5D$, as shown in fig. \ref{fig:sch}. Finally, experiment 4 used a similar strip field of view just upstream of the rotor (see details in table \ref{tab:kd}), which was used to obtain the approach velocity at different tip speed ratios. The images acquired were processed in PIVlab \citep{thielicke2014pivlab}. The adaptive cross correlation algorithm in PIVlab used a multi-pass, fast Fourier transform to determine the average particle displacement. An initial interrogation area (IA) of $64\times64$ pixels was reduced in $3$ passes with a final IA size of $16\times16$ pixels with $50$\% overlap in the $x$ and $y$ directions. The spatial resolutions ($\delta x$) of the experiments were similar, around $0.0082D-0.0092D$ (see table \ref{tab:kd}). The smallest scales of dynamic importance in the near field, \textit{i.e.} the tip vortex cores, were found to span $\approx 6 \delta x$. Since the interrogation windows had a $50\%$ overlap, \NB{meaning that adjacent vectors in the velocity fields were spaced $\delta x/2$ apart}, the tip vortex cores spanned $\approx$ 12 PIV vectors. Accordingly, all scales of dynamic importance are believed to be resolved adequately.
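For reference, the resolution and record-length figures quoted above are consistent with the following back-of-the-envelope arithmetic; the snippet is illustrative only, and the meandering-cycle count uses the approximate value of 180 cycles in 900 s quoted above.
\begin{verbatim}
# Illustrative arithmetic for the PIV resolution and the campaign-3 record length.
D = 0.20                  # rotor diameter (m)
U_inf = 0.2               # freestream velocity (m/s)
dx = 0.0092 * D           # coarsest spatial resolution delta-x (m)

vector_spacing = dx / 2   # 50% window overlap halves the vector spacing
core_size = 6 * dx        # tip-vortex cores span about 6 delta-x
print(core_size / vector_spacing)       # ~12 vectors across a vortex core

f_meander = 180 / 900.0                 # ~0.2 Hz from ~180 cycles in 900 s
St_D = f_meander * D / U_inf
print(f"St_D ~ {St_D:.2f}")             # ~0.2, within the 0.15-0.4 range above
\end{verbatim}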
\section{Results and discussion}
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 50 0 ,width= \textwidth]{figures/vorticity2.png} }
\caption{Instantaneous vorticity fields of a wind turbine wake for (a) $\lambda=4.5$ and (b) $\lambda=6$ in the $xy$ plane. Fields of view from experiment $1A$ and experiment $1B$ (at different time instants) are stitched together for a visual representation of the entire wake.}
\label{fig:inst}
\end{figure}
\subsection{Instantaneous vorticity field}
Fig. \ref{fig:inst}(a-b) show instantaneous vorticity fields at $\lambda = 4.5$ and $\lambda=6$ in the $xy(z=0)$ plane. Note that the fields of view from experiments $1A$ and $1B$ are stitched together for visual representation, but they were not acquired concurrently. The flow fields in fig. \ref{fig:inst} are inherently complex and contain several length/time scales (also see supplementary videos 1 and 2). In the top plane (experiment $1A$), the array of tip vortices can be seen, which acts as a boundary between the wind turbine wake and the free stream. For $\lambda=4.5$, the vortices shed from the 3 blades start interacting at a streamwise distance of $x/D \approx 2.25$ and initiate the merging process. Until $x/D \approx 2.25$, the wake boundary remains nearly horizontal; in other words, in the presence of the tip vortices, the wake does not expand in the very near field. This supports the observation of \citet{medici2005experimental, lignarolo2015tip}, who noted that the tip vortices in the near field act as a shield to prevent mixing with the outer fluid. Beyond $x/D \approx 2.25$, the interaction and merging of the tip vortices aid wake expansion and wake recovery. For $\lambda=6$, the tip vortices are stronger and are more closely spaced (due to the higher rotational speed), which results in an earlier interaction and merging (see supplementary video 2). In the lower plane (experiment $1B$), the vorticity field looks drastically different from the top plane. The presence of the tower causes an early breakdown of the tip vortices. Note that the vorticity levels in the lower plane are significantly enhanced compared to the upper plane, and the tower acts as an important source for this asymmetry in the flow.
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 50 0 0 ,width= \textwidth]{figures/vorticity_top.png} }
\caption{Instantaneous vorticity field of a wind turbine wake in $xz(y=0)$ plane for (a) $\lambda=4.5$ and (b) $\lambda=6$. Subfigures (c) and (d) show the vorticity field at an offset plane ($y=-0.35D$) for $\lambda=$ 4.5 and 6 respectively.}
\label{fig:inst2}
\end{figure}
Fig. \ref{fig:inst2} shows the instantaneous vorticity field in the $xz$ plane at different $y$ offsets at $\lambda=4.5$ and 6 (Exp 2). At the symmetry plane (experiment $2A$), the flow field looks similar to that obtained from experiment $1A$ (see supplementary videos 3 and 4), and the wakes are mostly symmetric about the nacelle centerline ($z=0$) (see fig. \ref{fig:inst2}(a, b)). Fig. \ref{fig:inst2}(c, d) show the vorticity field at an offset (experiment $2B$) from the nacelle. The central region of the wake shows an oscillatory behaviour (see supplementary videos 5 and 6) which was not pronounced at the symmetry plane in experiment $2A$. It is evident that this oscillation results from the vortex shedding of the tower, which interacts with the tip vortices. This interaction causes an earlier breakdown of the tip vortices, promoting the turbulence and mixing in the lower plane seen in fig. \ref{fig:inst}. It is worthwhile to note that the flow field shown in fig. \ref{fig:inst} or \ref{fig:inst2} is similar to that observed in an actual turbine (see fig. 7 of \citet{abraham2019effect}). \citet{abraham2019effect} utilised natural snowfall to visualise the wake of a utility scale turbine in the symmetry plane ($xy$ plane according to the present nomenclature) and showed that the flow structures in the lower plane were significantly more chaotic and distorted due to the presence of the tower. Hence, we believe the present experiments replicate the wake of an actual turbine well in spite of the inherent Reynolds number difference.
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 100 200 50 ,width= 1.2\textwidth]{figures/mean_deficit_local.png} }
\caption{Mean velocity deficit at (a) $x/D = 0.5$, (b) $x/D = 1.0$, (c) $x/D = 1.5$, (d) $x/D = 2.0$, (e) $x/D = 2.5$, and (f) $x/D = 3.0$ for $\lambda = 4.5$ and $6$.}
\label{fig:def}
\end{figure}
\subsection{Mean velocity field}
Fig. \ref{fig:def} shows the mean velocity deficit ($\Delta U/U_{\infty}$, where $\Delta U = U_{\infty} - U$) profiles at different streamwise distances for the two different tip speed ratios. The profiles are only shown in the $xy$ plane (experiment 1). Close to the turbine, in the upper plane, three distinct inflection points are observed (fig. \ref{fig:def}(a)). We know that the mean velocity/velocity deficit profiles in the wake of a single scale bluff body contain only one inflection point on each side of the wake. The existence of multiple inflection points in the wake of the wind turbine essentially shows the multiscale nature of a wind turbine wake. \citet{baj2017interscale} observed mean velocity profiles of a similar nature in the near wake of a multiscale array of prisms. Note that the qualitative nature of the wake deficit profiles is similar for both tip speed ratios considered. In the upper plane, the inflection point near the centreline occurs at $y/D \approx 0.1$, which is close to the surface of the nacelle. Hence it corresponds to the wake of the nacelle, where the velocity deficit is maximum. The second inflection point occurs at $y/D \approx 0.3$, which is close to the root region of the blade and corresponds to the blade wake. The third inflection point occurs at $y/D \approx 0.55$ and corresponds to the tip vortices. The first inflection point is the least persistent (until $x/D \approx 1$), indicating a small spatial extent of the nacelle wake. The second inflection point persists until $\approx$ 1.5 diameters, while the third inflection point persists far beyond that, which can be expected as the tip vortices are more spatially persistent than the vortices shed from the root region of the blade, as can also be seen from fig. \ref{fig:inst}. Important observations can be made if we compare the mean velocity deficit profiles with standard wake models used in industry. Standard models like those of Jensen \citep{jensen1983note} or Frandsen \citep{frandsen2006analytical} assume that the velocity deficit has a symmetric top hat shape, which is not true even at $x/D \approx 3$. The model proposed by \citet{bastankhah2014new} assumes a Gaussian distribution of the velocity deficit profile, which is by far a better assumption to make but still has limitations. As can be seen from figure \ref{fig:def}, even at $x/D = 3$, the wake deficit profile is not exactly Gaussian. Sharp gradients exist near the wake edge as the tip vortices are still coherent. Additionally, the location where the velocity deficit is maximum drifts from the geometric centreline ($y=0$) towards $y<0$. This asymmetry is caused by the presence of the tower, which is not accounted for in the aforementioned models. Indeed, the tower is the most important source of top/bottom asymmetry in the wake, which leads to a higher velocity deficit in the lower plane that persists far downstream.
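To illustrate the difference between the two families of wake models discussed above, the snippet below compares a generic top-hat deficit with a generic Gaussian deficit carrying the same integrated deficit; the amplitudes and widths are arbitrary illustrative values, and this is not the calibrated model of \citet{jensen1983note}, \citet{frandsen2006analytical} or \citet{bastankhah2014new}.
\begin{verbatim}
# Generic top-hat versus Gaussian velocity-deficit shapes (illustrative only).
import numpy as np

y = np.linspace(-1.0, 1.0, 401)   # transverse coordinate y/D
dy = y[1] - y[0]

w = 0.5                            # top-hat half-width in y/D (assumed)
amp_hat = 0.3                      # top-hat deficit amplitude (assumed)
hat = np.where(np.abs(y) <= w, amp_hat, 0.0)

sigma = 0.25                       # Gaussian width in y/D (assumed)
# Match the integrated deficit of the top-hat profile:
amp_gauss = amp_hat * 2 * w / (np.sqrt(2 * np.pi) * sigma)
gauss = amp_gauss * np.exp(-y**2 / (2 * sigma**2))

print((hat * dy).sum(), (gauss * dy).sum())  # nearly equal integrated deficits
print(hat.max(), gauss.max())                # but different peak deficits
\end{verbatim}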
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 0 0 ,width= 1.05\textwidth]{figures/spectra_tip_final.png} }
\caption{Transverse velocity spectra obtained at $x/L_c=1$ for (a) $\lambda=4.5$ and (e) $\lambda=6$. Subfigure (b) and (f) show the same at $x/L_c=1$, $y/D=0.55$ for the two tip speed ratios. Similarly, subfigure (c) and (g) show the spectra at $x/L_c=4$ for $\lambda = 4.5$ and $\lambda = 6$ respectively. The corresponding spectra at $x/L_c=4$, $y/D=0.55$ are shown in (d) and (h). Strouhal numbers $St_C$ and $St_D$ are defined based on $L_c$ and $D$ as the length scale and free stream velocity as the velocity scale.}
\label{fig:fft_turb_1}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 0 0 ,width= 0.9\textwidth]{figures/spectra_top.png} }
\caption{Transverse velocity spectra obtained at (a) $x/D=0.5$, $z/D=0$ and (b) $x/D=1.5$, $z/D=0$ at the plane $y=0$ (experiment $2A$). The same is shown at an offset plane (experiment $2B$) at (c) $x/D=0.5$, $z/D=0$, and (d) $x/D=1.5$, $z/D=0$. All the spectra are shown for $\lambda = 6$ only.}
\label{fig:fft_turb_2}
\end{figure}
\subsection{Important frequencies}
The temporal fluctuation in the wake of a wind turbine is important as it determines the nature of the fluctuating loads induced on downstream turbines exposed to the wake of the upstream machine. Due to the inherent multiscale nature of the wake, multiple frequencies can be expected to characterise the wake dynamics. Let us first identify the important frequencies towards the outer edge of the wake for different tip speed ratios. For that, we introduce a length scale, which we term the \emph{convective pitch}, $L_c$, and define as $L_c=\pi D/\lambda$. This length scale can physically be interpreted as the distance travelled by a fluid element in the time taken for the rotor to complete one full rotation. For the present configuration $L_c = 0.70D$ and $0.52D$ for $\lambda=$ 4.5 and 6 respectively. We evaluate fast Fourier transforms at selected points (shown by \textbf{\textcolor{blue}{+}} in fig. \ref{fig:inst}) based on the fluctuating transverse velocity component and show them in fig. \ref{fig:fft_turb_1}. The Strouhal number, $St_C$, is calculated based on the freestream velocity ($0.2$~m/s) and $L_c$. Figs. \ref{fig:fft_turb_1}(a) and \ref{fig:fft_turb_1}(e) show the transverse velocity spectra at $x/L_c = 1$ for $\lambda = 4.5$ and $\lambda=6$ respectively. Apart from a low frequency region near the nacelle ($y\approx 0$), a number of distinct frequencies are observed which match well with integer multiples of $St_C$. These frequencies and their relative strengths can be more clearly observed from fig. \ref{fig:fft_turb_1}(b) and \ref{fig:fft_turb_1}(f) (for $\lambda = 4.5$ and $\lambda=6$ respectively), which show the frequency spectra at $x/L_c=1$ and $y/D=0.55$, \textit{i.e.} near the outer wake. Note that the spectra look qualitatively similar for both tip speed ratios. Here, the dominant frequency is $St_C \approx 3$, which corresponds to the blade passing frequency (henceforth denoted as $3f_r$, where $f_r$ is the rotor frequency). This similarity raises the possibility of demarcating the very near field of the turbine wake (where the influence of the blade passing frequency is significant) based on the convective pitch, $L_c$, which we discuss further in section \ref{freq_map}. Although the dominant frequency is the same for both $\lambda$ at this location, there are subtle differences in the spectra. In particular, for $\lambda=6$, the rotor frequency $f_r$ ($St_C \approx 1$) as well as other harmonics of the blade/rotor frequencies are more pronounced compared to $\lambda=4.5$.
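For concreteness, the short Python sketch below illustrates the normalisation used throughout this section; it is illustrative only (the probe signal, sampling interval, rotor diameter and tip speed ratio are placeholders rather than values from the present experiments), and simply converts an FFT of the fluctuating transverse velocity into the $St_C$ and $St_D$ axes of fig.~\ref{fig:fft_turb_1} using the convective pitch $L_c=\pi D/\lambda$.
\begin{verbatim}
# Illustrative sketch (not the experimental processing chain): build the
# St_C and St_D axes for a transverse velocity spectrum.  D, lam, U_inf,
# dt and the probe signal v are assumed/placeholder inputs.
import numpy as np

def strouhal_spectrum(v, dt, D, lam, U_inf):
    L_c = np.pi * D / lam                      # convective pitch
    v_f = v - v.mean()                         # fluctuating transverse velocity
    amp = np.abs(np.fft.rfft(v_f)) / len(v_f)  # amplitude spectrum
    f = np.fft.rfftfreq(len(v_f), d=dt)        # frequency axis in Hz
    return f * L_c / U_inf, f * D / U_inf, amp # St_C, St_D, amplitude
\end{verbatim}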
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 0 0 ,width= 0.95\textwidth]{figures/f_spatial_lam_4pt5_edited.png} }
\caption{ Zones of dominant frequencies for $\lambda = 4.5$.}
\label{fig:spat_4pt5}
\end{figure}
The difference between the two tip speed ratios becomes more significant in the far field, as can be seen from figs. \ref{fig:fft_turb_1}(c) and \ref{fig:fft_turb_1}(g), which show the transverse velocity spectra at $x/L_c = 4$. The tip vortices are much stronger for $\lambda=6$ here. The corresponding spectra at $x/L_c=4$ and $y/D=0.55$ are shown in fig. \ref{fig:fft_turb_1}(d) and \ref{fig:fft_turb_1}(h). It can be seen that the blade passing frequency ($3f_r$) is no longer dominant. For $\lambda=4.5$, the dominant frequency is $2f_r$, while $f_r$ is comparable to $2f_r$. For $\lambda=6$ however, $f_r$ is by far the dominant frequency at this location, which implies that the merging process for $\lambda=6$ is fundamentally different from that for $\lambda=4.5$. Different modes of tip vortex merging have been reported in experiments \citep{sherry2013interaction, felli2011mechanisms} and theory \citep{widnall1972stability}. The merging process is known to be primarily driven by mutual inductance of the helical tip vortices, and it has been shown to depend on vortex strength, vortex core size and the pitch of the vortices \citep{widnall1972stability}. \citet{felli2011mechanisms, sherry2013interaction} argued that the merging of the tip vortices is a two-step process, where two vortex filaments get entangled first and thereafter merge with the third filament further downstream, leading to a single vortical structure. The dominance of $2f_r$ in the far field for $\lambda=4.5$ is believed to be the result of such a two-step merging process. However, this is not very evident from supplementary video 1, as the vortex cores of the tip vortices for $\lambda=4.5$ are weaker compared to $\lambda=6$ and they quickly get diffused. For $\lambda=6$, the vortex cores are much stronger and their separation is shorter, as a result of which there is a stronger and earlier interaction. From supplementary video 2 ($\lambda=6$) it can be noted that two tip vortices start revolving around the third tip vortex and eventually merge into a single vortical structure, hinting at a single-step merging. Our data lead us to believe that the merging process may or may not be multistage depending on the tip speed ratio; other geometric factors, such as the specific design of the turbine, may also influence how the tip vortices ultimately merge.
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 0 0 ,width= 0.9\textwidth]{figures/f_spatial_lam_6_edited.png} }
\caption{ Zones of dominant frequencies for $\lambda = 6$.}
\label{fig:spat_6}
\end{figure}
Let us now look at the frequencies near the central region of the wake. For that we perform fast Fourier transforms at selected points from experiment 2, which are shown by the \textbf{\textcolor{blue}{+}} sign in fig. \ref{fig:inst2}. The results are shown in fig. \ref{fig:fft_turb_2} and are only for $\lambda=6$, as the corresponding results for $\lambda=4.5$ were similar. Hence, unlike fig. \ref{fig:fft_turb_1}, the Strouhal number is calculated based only on the diameter $D$. Figs. \ref{fig:fft_turb_2}(a) and \ref{fig:fft_turb_2}(b) are obtained from experiment $2A$ ($y=0$ plane) at $x/D=0.5,z/D=0$ and $x/D=1.5,z/D=0$ respectively. Near the nacelle (fig. \ref{fig:fft_turb_2}(a)) the dominant frequency is at $St_D=0.42$. Note that the nacelle diameter in this study is roughly one-sixth of the rotor diameter. Hence, if we use the nacelle diameter to scale the frequency instead, the Strouhal number comes to around $0.069$. \citet{abraham2019effect} reported a similar nacelle shedding frequency $St \approx 0.06$ for a utility-scale turbine. Accordingly, we believe this frequency is related to vortex shedding from the nacelle and henceforth denote it as $f_n$. Interestingly, away from the nacelle, at $x/D=1.5,z/D=0$ (fig. \ref{fig:fft_turb_2}(b)), an even lower dominant frequency is observed at $St_D \approx 0.23$. This Strouhal number correlates well with the wake meandering frequencies reported in several previous studies \citep{okulov2014regular, chamorro2009wind} and is henceforth denoted as $f_{wm}$. The origin of $f_{wm}$ is discussed in more detail in section 4. Figures \ref{fig:fft_turb_2}(c)-(d) are obtained from experiment $2B$, \textit{i.e.} at an offset plane ($y=-0.35D$), away from the nacelle and downstream of the tower. In the near wake ($x/D=0.5,z/D=0$, fig. \ref{fig:fft_turb_2}(c)), a dominant frequency is found at $St_D \sim 0.8$. We argue that this frequency corresponds to the vortex shedding from the tower and denote it as $f_T$. Upon non-dimensionalisation based on the tower diameter and $U_{\infty}$, the Strouhal number is around 0.084, which is significantly lower than the expected value of $St\approx 0.2$ for vortex shedding from a cylinder. Note that the Strouhal number was calculated based on the freestream velocity; however, the flow ahead of the tower is strongly sheared due to the presence of the rotor. The incoming velocity just ahead of the tower was not measured in the experiments, but can be expected to be significantly lower than the freestream velocity, hence resulting in a local reduction in shedding frequency. Further downstream (fig. \ref{fig:fft_turb_2}(d)), $f_T$ is no longer prominent. The wake meandering frequency is still observed, but is weaker compared to the central plane ($y=0$), indicating a stronger influence of wake meandering in the central region of the wake.
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 20 0 0 0 ,width= 0.75\textwidth]{figures/f_zones_final2.png} }
\caption{ Zones of dominant frequencies for $\lambda = 4.5$ at the plane (a) $y=0$ and (b) $y=-0.35D$. Subfigures (c) and (d) show the same for $\lambda=6$.}
\label{fig:spat_top}
\end{figure}
\subsubsection{Frequency maps}
\label{freq_map}
Having observed the presence of multiple frequencies in the near field, let us identify the zones in which a particular frequency is dominant. We define the dominant frequency to be that at which the transverse velocity spectrum attains its maximum value at a particular location. For instance, in fig. \ref{fig:fft_turb_1}(b) and \ref{fig:fft_turb_1}(f), \textit{i.e.} in the near field and in the tip region, the dominant frequency is $3f_r$, while relatively far from the rotor, near the central region (see fig. \ref{fig:fft_turb_2}(b)), it is the wake meandering frequency ($f_{wm}$), which is dominant. We obtain the dominant frequencies at all the spatial locations and create a frequency map demarcating zones where a particular frequency is dominant. The dominant frequency map for $\lambda =4.5$ in the $xy$ plane is shown in fig. \ref{fig:spat_4pt5}. On the top plane, large zones with distinct boundaries are observed where a particular frequency is dominant. Near the rotor in the tip region, $3f_r$ is dominant until around $x/D \approx 2.25$, beyond which $2f_r$ and $f_r$ become important. Around $x/D \approx 2.25$, the merging process of tip vortices initiates as was seen from fig. \ref{fig:inst}. Distinct root vortices (having a frequency of $3f_r$) have been reported in some studies \citep{sherry2013interaction} in the vicinity of the root region of the blade. We do observe a small region in the vicinity of the root region where $3f_r$ is dominant (see fig. \ref{fig:spat_4pt5}), however, root vortices were not pronounced for the present study, which can be seen from fig. \ref{fig:inst} (see also supplementary videos 1 and 2). We believe this has to do with the specific design of the blade in the root region that does not produce strong root vortices. Interestingly, near the root region of the blade, until around $x/D \approx 1.5$, there is a large region where $f_r$ is dominant. Note that, the porosity of the rotor disk is effectively low near the root region of the blade. As a result, there is a strong contribution from viscosity in driving the flow at the rotor frequency. Very close to the nacelle, there is a small region of nacelle shedding frequency ($f_n$). Further downstream, large regions are seen where the tower frequency ($f_T$) is dominant. The observation of $f_T$ in the $xy$ plane is particularly interesting as it indicates the inherent three-dimensionality of the vortex shedding from the tower. Several factors can lead to a three-dimensional vortex shedding from the tower such as the strongly sheared inflow condition \citep{silvestrini2004direct} caused by the rotor and finite-span effects of the tower near the nacelle \citep{williamson1996vortex}. It has been shown that the presence of shear or end effects can lead to oblique vortex shedding which can promote three-dimensionality in the wake \citep{silvestrini2004direct, williamson1996vortex}. Such oblique three-dimensional vortex shedding from the tower could be clearly observed in supplementary videos 1 and 2 (shown by a blue arrow). This obliqueness in the tower vortex shedding is also manifested in the form of oblique outward bursts from the centreline of the turbine wake. Such bursts can be repeatedly observed in supplementary videos 1 and 2 for $\lambda=4.5$ and $\lambda=6$ (shown by white arrows).
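The frequency maps of figs.~\ref{fig:spat_4pt5} and \ref{fig:spat_6} follow directly from this definition of dominance. A minimal sketch of the procedure is given below; it is illustrative only, assuming a PIV time series of transverse velocity on a spatial grid with placeholder variable names and sampling rate.
\begin{verbatim}
# Minimal sketch of the dominant-frequency map (illustrative only).
# v has shape (nt, ny, nx): transverse velocity sampled at rate fs.
import numpy as np

def dominant_frequency_map(v, fs):
    nt = v.shape[0]
    spec = np.abs(np.fft.rfft(v - v.mean(axis=0), axis=0))
    f = np.fft.rfftfreq(nt, d=1.0 / fs)
    idx = spec[1:].argmax(axis=0) + 1    # skip the mean (f = 0) bin
    return f[idx]                        # dominant frequency at each point

# Zones are then drawn by matching the map against the known frequencies
# (f_r, 2f_r, 3f_r, f_n, f_T, f_wm) within a small tolerance.
\end{verbatim}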
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 0 0 ,width= 1.2\textwidth]{figures/f_evol_lam_4pt5_6.png} }
\caption{ The evolution of the `strength' of different frequencies with streamwise distance for (a) $\lambda=4.5$ and (b) $\lambda=6$. The strength is calculated as the magnitude of the largest peak in the spectrum of transverse velocity at a particular streamwise location. }
\label{fig:evo_down}
\end{figure}
In the lower plane ($y<0$), the tip vortices break down earlier due to their interaction with the tower's vortex shedding. Patches of high frequency regions can be seen near the tip region in the lower plane, which are the remnants of the blade passing or rotor frequencies. Apart from that, a number of other frequencies are also observed, making the spectrum more broadband and turbulent-like. The wake meandering frequency ($f_{wm}$) is found to be dominant only near the central region in the near wake. The entire wake looks like a shell of high frequency fluid surrounding the central region dominated by low frequency dynamics.
The scenario remains qualitatively similar for $\lambda = 6$, as can be seen in fig. \ref{fig:spat_6}; that is, the high frequency shell surrounds a low frequency zone. However, the extents of the dominant zones clearly look different. In the upper plane, the zones where $3f_r$ and $2f_r$ are dominant are quite small, and $f_r$ is dominant in a large portion of the upper plane. This is because at a higher $\lambda$, the tip vortices start to interact much earlier. Also, in the top plane the wake is much wider for $\lambda = 6$. In the lower plane, traces of the tip vortices are still observed, but again there are several new frequencies present and the spectrum is more broadband. In the central region, wake meandering is again dominant and the extent of the region where it is dominant grows with downstream distance. In fact, $f_{wm}$ remains dominant over a larger region at $\lambda=6$ compared to $\lambda=4.5$, which shows a possible dependence of wake meandering on the tip speed ratio. This dependence is further discussed in section \ref{wm}.
Next we investigate the frequency zones obtained from experiment 2. Figs. \ref{fig:spat_top}(a) and \ref{fig:spat_top}(c) correspond to $\lambda=4.5$ and $\lambda=6$ respectively at the central plane ($y=0$) and are similar to the upper plane in fig. \ref{fig:spat_4pt5} and fig. \ref{fig:spat_6}. This offers reassurance that the results of the experiments are reproducible. The wake meandering frequency remains dominant over a larger region for $\lambda=6$ compared to $\lambda=4.5$. The difference between the two tip speed ratios is more pronounced at the offset plane (experiment 2B). For $\lambda=4.5$, the tower frequency ($f_T$) dominates the central part of the wake (see fig. \ref{fig:spat_top}(b)). Contrastingly, for $\lambda=6$, $f_T$ has a sole dominance only before $x/D \approx 1$, beyond which the strengths of $f_T$ and $f_{wm}$ become comparable. This evidence indicates that the strength of wake meandering depends on the tip speed ratio. Note that for $\lambda=6$, the wake appears to be slightly deflected towards the side $z>0$ which could be caused by a slight mis-alignment of the rotor plane with the freestream direction in the experiments. The angle of deflection was measured to be $< 3^\circ$ and it does not alter the conclusions.
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 0 0 ,width= 1.2\textwidth]{figures/f_evol_top2.png} }
\caption{ Strength of different frequencies in the $xz$ ($y=0$) plane for (a) $\lambda = 4.5$ and (c) $\lambda = 6$ respectively. Subfigures (b) and (d) show the same at an offset plane $y = -0.35D$. }
\label{fig:evo_top}
\end{figure}
Let us take a closer look at the streamwise evolution of the `strength' of different frequencies. The `strength' of a frequency can be measured locally by taking the maximum amplitude of the transverse velocity spectra at that frequency at a particular streamwise distance, which we denote as $|a|_{v}$. In fig. \ref{fig:evo_down}(a), $|a|_{v}$ is shown as a function of streamwise distance in the upper plane (experiment $1A$) for $\lambda = 4.5$. In the near field, the blade passing frequency ($3f_r$) dominates over other frequencies and corresponds to the passage of tip vortices. At $x/D \approx 2.25$, $2f_r$ surpasses $3f_r$ and at $x/D \approx 3.2$, $f_r$ surpasses $2f_r$, which is indicative of the two-step merging process. The higher harmonics are comparatively damped and less important beyond $x/D \approx 1.5$. The wake meandering frequency is observed even very close to the nacelle and its local strength does not change appreciably throughout the domain of investigation.
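A sketch of how $|a|_{v}$ can be extracted is shown below; it is illustrative only, with placeholder variable names, and assumes the transverse velocity has been sampled along a line of streamwise stations (a maximum over transverse positions at each station can be taken in the same way).
\begin{verbatim}
# Illustrative sketch: local strength |a|_v of a frequency f0 (e.g. 3*f_r)
# versus streamwise distance.  v has shape (nt, nx); fs is the sampling rate.
import numpy as np

def strength_vs_x(v, fs, f0, rel_band=0.05):
    nt = v.shape[0]
    spec = np.abs(np.fft.rfft(v - v.mean(axis=0), axis=0)) / nt
    f = np.fft.rfftfreq(nt, d=1.0 / fs)
    sel = np.abs(f - f0) <= rel_band * f0   # narrow band around f0
    return spec[sel].max(axis=0)            # |a|_v at each streamwise station
\end{verbatim}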
Fig. \ref{fig:evo_down}(b), shows $|a|_{v}$ for $\lambda = 6$. Similarly to $\lambda = 4.5$, the dominant frequency in the near field is $3f_r$. However, owing to the higher tip speed ratio, the tip vortices interact much earlier. As a result, $f_r$ surpasses $3f_r$ much closer to the turbine, at $x/D \approx 1$ and remains dominant even beyond 3 rotor diameters. Although $2f_r$ is present, it never becomes dominant, which again indicates that the merging process is fundamentally different for $\lambda = 6$. The strength of wake meandering shows a trend similar to $\lambda = 4.5$ and remains roughly constant throughout the domain of investigation. However, the strength of wake meandering is markedly increased for $\lambda = 6$. An interesting observation can be made if we measure the location where the strength of the wake meandering frequency ($f_{wm}$) surpasses the blade passing frequency $3f_r$. The precise location where it happens depends on $\lambda$. For $\lambda=4.5$ this happens at $x/D \approx 2.2$, while for $\lambda=6$ it occurs at $x/D \approx 1.7$. Interestingly, these distances are close in terms of convective pitch $L_c$ and correspond to $3.15 L_c$ and $3.24 L_c$ respectively. We argue that a distance of roughly $3L_c$ from the rotor plane can be considered as a robust definition of the near wake (where the effect of the individual tip vortices can be felt or beyond which wake meandering can be important) of a turbine independent of the tip speed ratio. The wake meandering frequency, although not dominant, is present in the near wake ($x/L_c<3$), and in fact it exists close to the nacelle in the central region which hints at the fact that the genesis of wake meandering could be related to the shedding of the nacelle or the nacelle/turbine assembly considered together as a porous bluff body. The porosity of the bluff body changes with tip speed ratio, which in turn changes the nature of the vortex shedding from the bluff body. This is explored in more detail in section \ref{wm}.
In the lower plane (experiment $1B$), the frequency spectrum is more broadband and turbulent, as can be seen from fig. \ref{fig:spat_4pt5} and fig. \ref{fig:spat_6}. \NB{As a result, a similar analysis for the lower plane revealed a large number of frequencies of comparable strengths, making it difficult to draw firm conclusions; hence it is not discussed any further}. The same analysis is also performed for experiment 2 and the strengths of the different frequencies are shown in fig. \ref{fig:evo_top}. The results from experiment $2A$ (\textit{i.e.} at the plane $y=0$) are shown in figs. \ref{fig:evo_top}(a) and \ref{fig:evo_top}(c) for $\lambda=4.5$ and $\lambda=6$ respectively, and they closely resemble fig. \ref{fig:evo_down} despite being different experiments. Note that, for $\lambda=6$, $f_{wm}$ surpasses $3f_r$ at $x/D\approx 1.7$, as in fig. \ref{fig:evo_down}(b). The nacelle frequency $f_n$ is important only in the very near field and the tower frequency is rather weak. Fig. \ref{fig:evo_top}(b) and \ref{fig:evo_top}(d) show the strength of the frequencies at the offset plane (experiment $2B$). The tower frequency now becomes important and can even be the dominant frequency in the near field. The streamwise evolution of $f_T$ is similar in fig. \ref{fig:evo_top}(b) and (d), showing minimal dependence on $\lambda$. Similarly to fig. \ref{fig:evo_down}, the strength of wake meandering for $\lambda=6$ is higher than for $\lambda=4.5$ in both the planes considered, which firmly establishes that the strength of wake meandering is a function of the tip speed ratio.
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 20 0 ,width= 0.8\textwidth]{figures/wake_meandering.png} }
\caption{Transverse velocity components of the phase-averaged wake meandering modes for (a) $\lambda = 4.5$ and (b) $\lambda = 6$. }
\label{fig:wm}
\end{figure}
\section{Wake meandering}
\label{wm}
Wake meandering has been described as the large scale motion that dominates the `far wake' dynamics of a wind turbine. However, there exist varied opinions about the exact cause of wake meandering \citep{larsen2008wake, espana2011spatial, okulov2014regular, foti2018wake}.
\NB{Wake meandering has been described as large \emph{intermittent} displacements of the whole wake due to large scale structures in the incoming flow \citep{espana2011spatial}, while \citet{okulov2014regular} reported a well-defined frequency of wake meandering when free stream turbulence was negligible. The wake meandering observed in the present study is of the latter type, which is induced by the wake-generating body itself, as opposed to inflow turbulence.} Figs. \ref{fig:spat_4pt5} and \ref{fig:spat_6} reveal that the wake meandering frequency is dominant in the central region, within $-0.4D <y<0.4D$. Accordingly, experiment $1C$ focused on this region (see fig. \ref{fig:sch}(a)) to capture the centreline dynamics and the nature of wake meandering. To understand the nature of wake meandering, we utilise phase averaging (see for instance \cite{reynolds1972mechanics,cantwell1983experimental}) based on the frequency of wake meandering observed for the two tip speed ratios. 48 phase bins were used to obtain the phase-averaged flow fields. Thereafter, the second Fourier mode of the phase-averaged flow field was extracted and the time-varying coefficients were obtained by projecting the phase-averaged modes onto the flow data. More details about this method can be found in \citet{baj2017interscale, biswas2022energy}. \NB{The limited total acquisition time ($\approx$ 10-15 cycles of wake meandering) in experiment $1C$, however, yielded poorly converged modes. Nevertheless, to have a qualitative idea,} the transverse velocity components of the phase-averaged modes are shown in figure \ref{fig:wm}(a-b) for $\lambda=4.5$ and $\lambda=6$ respectively. Note that the modes are qualitatively similar and they resemble vortex shedding from a bluff body of characteristic diameter $D$, which initiates from the vicinity of the nacelle. This indicates that wake meandering is possibly a global shedding mode of the `porous' turbine seeded from the nacelle.
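A compact sketch of the phase-averaging step is given below for completeness. It is schematic and illustrative only (the snapshot array, sampling rate and the index of the extracted Fourier mode are placeholders), following the general procedure of \citet{baj2017interscale} rather than reproducing their implementation.
\begin{verbatim}
# Schematic phase-averaging sketch (illustrative only).  u has shape
# (nt, ny, nx), sampled at rate fs; f_wm is the wake meandering frequency.
import numpy as np

def phase_average(u, fs, f_wm, n_bins=48, mode_index=2):
    nt = u.shape[0]
    phase = (2 * np.pi * f_wm * np.arange(nt) / fs) % (2 * np.pi)
    bins = (phase / (2 * np.pi) * n_bins).astype(int)
    pa = np.stack([u[bins == b].mean(axis=0) for b in range(n_bins)])
    # Fourier decomposition of the phase-averaged field over the phase angle;
    # which index corresponds to the "second mode" follows the cited convention.
    mode = np.fft.fft(pa, axis=0)[mode_index] / n_bins
    return pa, mode
\end{verbatim}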
Parallels have been drawn between a rotating turbine and a non-rotating porous disk \citep{lignarolo2016experimental, neunaber2021comparison, vinnes2022far}. It has been shown in several works that the wake characteristics of a porous disk can be similar to those of a turbine, at least in the far field, loosely taken as $x>3D$ \citep{aubrun2013wind, neunaber2021comparison}. The seminal work of \citet{castro1971wake} showed that the Strouhal number of vortex shedding from a porous disk decreases if the porosity is reduced, and it asymptotes to 0.2 for a solid disk. Note that in a wind turbine, the effective porosity of the rotor disk does change if $\lambda$ is changed. If $\lambda$ is increased, a greater area is swept by the blades in the time taken for a parcel of fluid to convect across the rotor disk, thus increasing the effective blockage, or reducing the porosity. Hence, if wake meandering in a wind turbine wake is related to a global vortex shedding mode, the frequency of wake meandering ought to reduce if the porosity is reduced or $\lambda$ is increased. To test this, we perform a series of experiments (termed experiments 3 and 4 in fig. \ref{fig:sch} and table \ref{tab:kd}) which focused on thin strip-like fields of view at different locations in the flow.
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 0 0 ,width= 1\textwidth]{figures/porosity_final.png} }
\caption{(a) Time-averaged velocity profile (solid black line) in front of the rotor disk for $\lambda=4.5$. The red solid line represents the average of the profile within $-0.5<x/D<0.5$ which is denoted as $U_{ind}$. (b) Variation of $U_{ind}$ with $\lambda$. (c) Variation of porosity ($\beta$) with $\lambda$. $\beta$ is defined as the ratio of the unswept open area to the total area of the rotor.}
\label{fig:poro}
\end{figure}
Let us try to estimate the porosity of our model wind turbine as a function of tip speed ratio. First of all, we need a time scale to obtain the area swept by the blades of the turbine at a particular $\lambda$. We can estimate the time scale as the time taken by the flow to pass through the rotor disk. For that we approximate the thickness of the rotor disk as the projected area of the turbine blades (at a plane parallel to $xy$ plane in fig. \ref{fig:sch}) at the root section. Note that the incoming velocity just ahead of the rotor disk is smaller than the free stream velocity due to the induction effect. Hence we obtain the incoming velocity from experiment 4, which considered a thin strip-like field of view just upstream of the rotor (see table \ref{tab:kd} for further details). The black solid curve in fig. \ref{fig:poro}(a) shows the time averaged velocity profile just ahead of the rotor for $\lambda=4.5$. An average of the velocity profile is calculated for $y\in (-0.5D,0.5D)$ which we term as the induction velocity or $U_{ind}$ (shown as red solid line in fig. \ref{fig:poro}(a)). The variation of $U_{ind}$ with $\lambda$ is shown in fig. \ref{fig:poro}(b). With $U_{ind}$ as the velocity scale and the approximate rotor disk's thickness as the length scale we estimate the time taken by the flow to pass through the rotor disk at different $\lambda$. Based on the time scale we obtain the porosity, $\beta$ ($\beta$ is defined as the ratio of the unswept open area to the total area of the rotor disk) of the disk as a function of $\lambda$ and show it in fig. \ref{fig:poro}(c). Note that porosity reduces with $\lambda$ which is consistent with our expectation.
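One possible concrete reading of this estimate is sketched below. It is illustrative only: the blade angular-width function, root chord, hub radius and rotor radius are placeholders for the actual model geometry, and the swept-area bookkeeping is a simplification of the procedure described above rather than the calculation used for fig.~\ref{fig:poro}(c).
\begin{verbatim}
# Rough, illustrative porosity estimate (placeholder geometry, not the
# authors' code).  theta_blade(r) is the blade's angular width at radius r,
# w_root the projected root chord, R the rotor radius, B the blade count.
import numpy as np

def porosity(lam, U_inf, U_ind, R, w_root, theta_blade, B=3, r_hub=0.05):
    omega = lam * U_inf / R                  # rotor angular velocity
    tau = w_root / U_ind                     # time to convect through the disk
    r = np.linspace(r_hub * R, R, 200)
    # fraction of each annulus swept by the blades during tau (capped at 1)
    covered = np.minimum(1.0, B * (theta_blade(r) + omega * tau) / (2 * np.pi))
    area = np.pi * (R**2 - (r_hub * R)**2)
    return np.trapz((1.0 - covered) * 2 * np.pi * r, r) / area  # beta
\end{verbatim}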
For a precise measurement of the wake meandering frequency, a long time series of data is required, as wake meandering involves rather slow dynamics. A strip FOV was therefore used, which allowed us to obtain a long time series ($\approx 15$ minutes, or $\approx 180$ wake meandering cycles) of data within the storage constraints of the cameras used. The FOVs were centred at three different locations, $x/D = 2$, 3 and 5. Experiments were conducted at 16 tip speed ratios from 4.5 to 6 in steps of 0.1. The Strouhal numbers of wake meandering, $St_{wm}$ (defined based on $D$ and $U_{\infty}$), obtained at the different locations are shown in fig. \ref{fig:fwm} with `+' signs. A decreasing trend of the wake meandering Strouhal number with $\lambda$ is observed at all three locations. The Strouhal numbers reported by \citet{castro1971wake} for a porous disk at the porosities calculated in fig. \ref{fig:poro}(c) are also shown in fig. \ref{fig:fwm} (red line). The yellow shaded region shows the error margin given by \citet{castro1971wake}. The agreement is reasonable considering the approximations that were made to calculate the porosity of the wind turbine at different $\lambda$. This result establishes that wake meandering in a wind turbine wake is related to the global vortex shedding mode of a porous bluff body, the frequency of which depends on $\lambda$ (\textit{i.e.} porosity). Note that a similar decreasing trend of the wake meandering frequency with $\lambda$ was reported by \citet{medici2008measurements} at $x/D=1$. \citet{okulov2014regular} reported that the wake meandering frequency was a function of operating condition for $1.5<x/D<2.5$, beyond which it was invariant of operating condition. Our results, however, show that a similar dependence of the wake meandering frequency on $\lambda$ persists at least up to $5D$ downstream of the rotor plane.
\begin{figure}
\centerline{
\includegraphics[clip = true, trim = 0 0 0 0 ,width= 1.1\textwidth]{figures/st_xbyD_2_3_5_Castro.png} }
\caption{Variation of the wake meandering Strouhal number with $\lambda$ at (a) $x/D=2$, (b) $x/D=3$, and (c) $x/D=5$. `\textbf{\textcolor{red}{---}}' shows the Strouhal number reported by \citet{castro1971wake} for a porous disk at the corresponding porosities ($\beta$), with the error margin shown by the yellow shaded region. }
\label{fig:fwm}
\end{figure}
\section{Conclusion}
We conducted particle image velocimetry (PIV) experiments on the near wake of a lab-scale wind turbine model at varying tip speed ratios ($\lambda$). The wind turbine model included a nacelle and a tower to imitate an actual turbine and to examine the influence of this geometry on the near wake. The freestream turbulence level was low and the wake properties obtained were solely due to the wake-generating body. The near field was found to be dominated by the array of coherent tip vortices, which appeared to inhibit mixing with the outer non-turbulent fluid in the immediate near wake, before the tip vortices merged. The merging and breakdown process of the tip vortices was found to be strongly dependent on $\lambda$. To be precise, for $\lambda=4.5$, a two-step merging process was observed, as reported in previous studies \citep{felli2011mechanisms, sherry2013interaction}. For $\lambda=6$ however, there was an earlier and stronger interaction between the tip vortices and the vortices merged directly in a one-step process. We defined a length scale termed the convective pitch ($L_c = \pi D/\lambda$, $D$ being the diameter) that varies with $\lambda$ and is related to the pitch of a helical vortex filament. We proposed that a distance of $3L_c$ from the rotor disk could be used as a robust definition of the immediate near wake (where the tip vortices are not merged) of the turbine irrespective of the tip speed ratio.
Apart from the tip vortices, distinct frequencies associated with the shedding from the tower and the nacelle were identified in the near field. The tower frequency was observed over a broad region and could even be the dominant frequency in the near field for the present experimental conditions. Below the nacelle ($y<0$), the interaction of the tip vortices with the tower resulted in an earlier breakdown of the tip vortices and increased levels of turbulence and mixing. Indeed, the tower acted as the major source of asymmetry in the wake, as is also evident from a deflection of the wake centreline towards the tower side ($y<0$). Interestingly, the tower's vortex shedding frequency was observed above the nacelle shear layer as well. Outward bursts of fluid were observed from around the nacelle centreline, which are believed to be linked to the oblique nature of vortex shedding from the tower \citep{williamson1996vortex, silvestrini2004direct}.
\NB{The nacelle frequency was important only very close to the nacelle and was not particularly energetic. However, the nacelle was found to be important in `seeding' wake meandering, indicated by the presence of the wake meandering frequency from relatively close to the nacelle ($x/D<0.5$). A plausible role of the nacelle in aiding wake meandering, combined with the fact that free stream turbulence levels were negligible in the present experiments, supports the interpretation of wake meandering as a global instability of the turbine wake, the characteristics of which should vary with operating condition. In order to justify this, }separate experiments were performed to calculate $f_{wm}$ at three different streamwise locations, $x/D=2,3$ and $5$, for $4.5\leq \lambda \leq 6$. The Strouhal number of wake meandering was found to decrease in a similar fashion at all three streamwise stations probed when $\lambda$ was increased (or the effective porosity was decreased). Interestingly, a similar decreasing trend of the Strouhal number with porosity was observed for vortex shedding behind a porous disk \citep{castro1971wake}. This similarity bolsters the notion that wake meandering is a global instability of the wake-generating body, \textit{i.e.} a `porous' turbine with characteristic length scale $D$, and is in contrast to the observations of previous works, which found the wake meandering frequency to be invariant with operating condition in the far wake \citep{okulov2014regular}. \\
\textbf{Declaration of Interests:} The authors report no conflict of interest.
\section{Introduction}
Symmetric games, where all players face identical incentives, play a central role in game-theoretic analysis.
Many of the classic examples used to teach game-theoretic concepts \cite{essentials_of_game_theory, multiagent_systems} such as prisoner's dilemma and rock-paper-scissors are symmetric, and the original paper proposing the concept of a Nash equilibrium \cite{Nash} addressed symmetry and proved that symmetric games must have symmetric equilibria.
Symmetric games are quite common in the recent multi-agent systems literature \cite{wang2021spoofing, boix2020multiplayer}.
The role of symmetry becomes especially prominent in games with a large number of players, because it is often only with the help of player symmetry that incentives can even be tractably described, and multiple distinct sub-fields of computational game theory \cite{EGTA, MFG} rely heavily on foundational assumptions about player symmetry to scale up game-theoretic analysis.
Despite this importance, data structures for representing and solving large symmetric games have received little attention in the research literature.
Most libraries for representing and solving games, particularly Gambit \cite{Gambit}, but also GameTracer \cite{GameTracer} and QuantEcon \cite{QuantEcon}, include few tools for efficiently solving large symmetric games.
One notable exception is the work of \citet{action-graph_games} on action-graph games, which incorporate player symmetry, and which have been partially implemented in Gambit.
However, action-graph games focus primarily on theoretical compactness in the action space, and only consider expected utility calculations enough to ensure they take polynomial time.
The data structures that have been implemented are not specifically optimized for equilibrium computation under player symmetry, meaning that solving large symmetric games remains inefficient.
In the sub-fields that scale up symmetric game analysis, approximations play a central role.
Simulation-based games \cite{EGTA} frequently employ player reduction methods \cite{DPR} that use a reduced game with a very small number of players to replace a game with dozens or hundreds of players, in the hope that analysis of the reduced game will approximately hold in the large game.
Mean-field games \cite{MFG} give up on representing discrete players and replace them with an average effect that aggregates over a continuum.
Other approaches aim to identify underlying representational compactness \cite{twins_reduction, structure_learning} or to avoid explicitly constructing game models in favor of sampling-based techniques \cite{ADIDAS, DPLearn}.
These approximations are sometimes unavoidable, when the representation is too large or the payoff computations are too slow for a game to even be enumerated, but analysts often end up resorting to approximation when faced with any non-trivial number of players.
The result is a substantial gap between the very small games that can be represented and solved exactly and the very large games where approximation is necessary.
We bridge this gap by designing efficient data structures that make it practical to exactly solve much larger instances.
In this work, we present a detailed exploration of data structure improvements for symmetric normal-form games, with specific focus on the task of computing symmetric mixed-strategy Nash equilibria.
We argue, following \citet{notes_on_symmetric_equilibria}, that this is by far the most compelling solution concept for symmetric games, because symmetric equilibria are guaranteed to exist, and because equilibria that reflect the symmetries of the game offer greater intuitive explanatory power.
While computing symmetric Nash equilibria can be hard in the worst case \cite{Daskalakis_nash_complexity, Conitzer_nash_complexity}, incomplete local-search algorithms such as replicator dynamics and gradient descent are often highly successful in practice.
To facilitate algorithms for identifying symmetric equilibria, we aim for data structures that optimize the calculation of \emph{deviation payoffs} (and their derivatives).
For a symmetric mixed strategy employed by all players, the deviation payoff vector gives, for each action, the expected utility a single agent would receive if they deviated unilaterally to that action.
For a large fraction of the algorithms used to compute Nash equilibria, the ability to evaluate deviation payoffs for symmetric mixed strategies is both necessary and sufficient; for most other algorithms, deviation payoffs plus their derivatives suffice.
We show that focusing the design of symmetric game data structures toward deviation payoffs leads to a number of optimizations that jointly yield a dramatic improvement in the practical efficiency of solving large symmetric games.
\subsection{Contributions}
We describe seven distinct upgrades to classic data structures for representing symmetric normal-form games.
Two of these (sections \ref{sec:pre-reps} and \ref{sec:opp-config}) constitute asymptotic improvements:
changing from $P$-player profiles to $(P-1)$-opponent configurations reduces the number of stored payoff entries (with $A$ actions) from $A\binom{P+A-1}{P}$ to $A \binom{P+A-2}{P-1}$,
and pre-computing probability weights to avoid repeated multinomial calculations accelerates equilibrium searches by a factor of $A$.
Four of the upgrades focus on vectorization, resulting in better constants and enabling SIMD acceleration.
And we find---as has been well-established in the settings of neural networks and scientific computation---that such improvements can qualitatively change the scope of the computational tools.
The remaining upgrade (section~\ref{sec:log-probs}) follows because the overall expansion-of-scope enables us to analyze games so large that the probability calculations can overflow 64-bit integers, necessitating a switch to a log-space representation of payoffs and probabilities.
Our main result is a roughly ten-thousand-fold speedup in the running time of equilibrium computation algorithms for symmetric games with many players.
This makes it possible to run practical-but-incomplete search methods like replicator dynamics on any symmetric game that can fit in memory, and can also facilitate other slower equilibrium-search techniques.
Our results effectively close the gap between small and large symmetric games, relegating approximation techniques to only those games too large to be represented explicitly.
Two open-source libraries implement most of the data structures and algorithms we discuss.
The \texttt{gameanalysis.jl}\footnote{\url{https://github.com/Davidson-Game-Theory-Research/gameanalysis.jl}} repository provides simple Julia implementations of the best CPU and GPU versions of our symmetric game data structure, and also includes all of the code for experiments in this paper.
The \texttt{gameanalysis.py}\footnote{\url{https://github.com/egtaonline/gameanalysis}} module implements in Python the data structure variants for role-symmetric games that we discuss in Section~\ref{sec:role-symmetry} and provides a richer command-line interface with numerous tools for solving role-symmetric games.
\section{Background}
\subsection{Terminology}
In a symmetric game, all players have the same action set and identical incentives.
We therefore refer to the number of players $P$, but rarely distinguish individual players.
We call the number of actions $A$, and will often index the actions by $a \in \{ 1, \ldots, A\}$.
A \emph{profile} specifies an action for each player, and in a symmetric game, we can represent a profile by an integer-vector $\vec{s}$ specifying a non-negative number of players selecting each action.
We denote the entries in a vector with a subscript, so $\vec{s}_a$ is the player-count for action $a$.
We will also distinguish a profile from an \emph{opponent configuration} $\vec{c}$, which differs only in that $\sum_a \vec{s}_a = P$, while $\sum_a \vec{c}_a = P-1$.
We refer to the configuration resulting from removing action $a$ from profile $\vec{s}$ as $(\vec{s} \mid a)$, since it will often appear in probability calculations and other contexts where it is \emph{given} that one player selects action $a$.
In terms of the integer-vector representation, $(\vec{s} \mid a)$ subtracts 1 from dimension $a$ of $\vec{s}$.
A symmetric game's payoffs can be expressed as the value achieved by a player choosing action $a$ when opponents play configuration $\vec{c}$, which we denote by $v_a \left( \vec{c} \right)$ normally, or by $v_a \left( \vec{s} \mid a \right)$, when working in terms of profile $\vec{s}$.
A mixed strategy specifies a probability distribution over actions, and a mixed strategy used by all players is called a symmetric mixed-strategy profile.
We denote this with the variable $\vec{\sigma}$, and will often abbreviate the term by referring to a \emph{mixture}.
When computing probabilities, we will frequently refer to the number of asymmetric arrangements corresponding to a symmetric configuration, which we call \emph{repetitions}.
We denote this quantity by $\reps{\vec{c}}$ or $\reps{\vec{s} \mid a}$, and calculate it with the following multinomial:
\begin{equation}
\label{eq:dev_reps}
\reps{\vec{c}} = \binom{P-1}{\vec{c}_1, \vec{c}_2, \ldots, \vec{c}_A} = \frac{(P-1)!}{\vec{c}_1! \vec{c}_2! \ldots \vec{c}_A!}
\end{equation}
\subsection{Deviation Payoffs}
When analyzing a symmetric game, we are most often interested in computing symmetric mixed-strategy Nash equilibria.
For many algorithms that compute such equilibria a necessary and sufficient condition is the ability to compute deviation payoffs, and for most other algorithms, deviation payoffs plus deviation derivatives suffice.
We begin by formally defining these terms, and then describe their application to our preferred equilibrium computation algorithms in the next subsection.
Given a symmetric mixed-strategy profile $\vec{\sigma}$, we define the \emph{deviation payoff} $\vec{u}_a(\vec{\sigma})$ for action $a$ as the expected utility one player would receive if they played $a$ while all opponents randomized according to $\vec{\sigma}$.
This expectation is often expressed as a sum over all profiles in which $a$ is played of the probability that profile occurs times the payoff to $a$ in that profile:
\begin{equation}
\label{eq:prof_dev_pays}
\vec{u}_a(\vec{\sigma}) = \sum_{\vec{s} : \, \vec{s}_a > 0} \Pr_{\vec{\sigma}} \left( \vec{s} \mid a \right) v_a \left( \vec{s} \mid a \right)
\end{equation}
\noindent
but can be stated much more cleanly using configurations:
\begin{align}
\label{eq:config_dev_pays}
\vec{u}_a(\vec{\sigma}) &= \sum_{\vec{c}} v_a \left( \vec{c} \right) \Pr_{\vec{\sigma}} \left(\vec{c} \right) \\
&= \sum_{\vec{c}} v_a \left( \vec{c} \right) \reps{ \vec{c} } \prod_{a'} \left( \vec{\sigma}_{a'} \right)^{\vec{c}_{a'}}
\label{eq:config_reps_dev_pays}
\end{align}
\noindent
The deviation payoff vector $\vec{u}(\vec{\sigma})$ collects these values for all actions $a \in \{ 1, \ldots, A\}$.
We call the partial derivatives of deviation payoffs with respect to mixture probabilities \emph{deviation derivatives}.
Specifically, $\frac{\partial \vec{u}_a(\vec{\sigma})}{\partial \vec{\sigma}_s}$ gives the change in the deviation payoff for action $a$ as the probability of action $s$ is varied.
Again this can be expressed in terms of profiles, but is more straightforward in terms of configurations:
\footnote{\label{note:div_zero}Note that efficient computation of $\left( \vec{\sigma}_{s}\right)^{\vec{c}_s - 1}$ can result in numerical errors for mixtures where $\vec{\sigma}_s = 0$.
This sort of error can be avoided here and elsewhere with no real loss of precision by lower-bounding mixture probabilities at machine-epsilon.}
\begin{equation}
\label{eq:dev_deriv}
\devderiv{\vec{\sigma}}{a}{\vec{\sigma}_s} = \sum_{\vec{c}} v_a(\vec{c}) \reps{\vec{c}} \left( \vec{c}_s \right) \left( \vec{\sigma}_{s}\right)^{\vec{c}_s - 1} \prod_{a' \ne s} \left( \vec{\sigma}_{a'} \right)^{\vec{c}_{a'}}
\end{equation}
We can define most other standard game-theoretic quantities for symmetric games in terms of deviation payoffs.
The expected utility experienced by all players when following a symmetric mixed strategy is given by the dot product $u(\vec{\sigma}) = \vec{u}(\vec{\sigma}) \cdot \vec{\sigma}$.
The regret of a mixture is $\textrm{reg}(\vec{\sigma}) = \max_a \left( \vec{u}_a(\vec{\sigma}) - u(\vec{\sigma}) \right)$.
A symmetric Nash equilibrium is a mixture with $\textrm{reg}(\vec{\sigma}) = 0$, while an approximate Nash equilibrium has $\textrm{reg}(\vec{\sigma}) \le \varepsilon$, for suitable $\varepsilon$.
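As a concrete illustration of these definitions, the following brute-force Python sketch enumerates opponent configurations to evaluate Equation~\eqref{eq:config_reps_dev_pays}, the mixture's expected utility, and its regret. It is intended only for small games, and the payoff container used here is a placeholder rather than one of the data structures developed below.
\begin{verbatim}
# Brute-force sketch of deviation payoffs and regret (illustrative only).
# payoff[a][c] gives v_a(c) for action a and opponent configuration c,
# where c is a length-A tuple of opponent counts; sigma is a mixture.
from itertools import combinations_with_replacement
from math import factorial
import numpy as np

def reps(c):
    r = factorial(sum(c))
    for k in c:
        r //= factorial(k)
    return r

def deviation_payoffs(payoff, sigma, P, A):
    u = np.zeros(A)
    sigma = np.asarray(sigma, float)
    for combo in combinations_with_replacement(range(A), P - 1):
        c = tuple(combo.count(a) for a in range(A))   # opponent configuration
        prob = reps(c) * np.prod(sigma ** np.array(c))
        for a in range(A):
            u[a] += payoff[a][c] * prob
    return u

def regret(payoff, sigma, P, A):
    u = deviation_payoffs(payoff, sigma, P, A)
    return float((u - u @ np.asarray(sigma)).max())
\end{verbatim}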
\subsection{Equilibrium Computation}
\begin{figure}[t]
\center
\includegraphics[width=\columnwidth]{figures/nash_traces.png}
\caption{Execution traces for both replicator dynamics (red) and gradient descent (yellow) on a 100-player, 3-action game.
The underlying heatmap shows mixture regret.
Black points are starting mixtures.
Large points indicate $\varepsilon$-equilibria.
The 3 orange equilibria were found by both algorithms.
RD found 2 unique equilibria, while GD found 6.}
\label{fig:nash_traces}
\end{figure}
Computing deviation payoffs and/or their derivatives is the key step for a number of algorithms that identify Nash equilibria in symmetric games.
We describe two of the most practical algorithms here: replicator dynamics, which depends on deviation payoffs, and gradient descent on sum-of-gains, which depends on deviation derivatives.
Our data structures can also efficiently support a number of other Nash algorithms, including fictitious play \cite{fictitious_play}, Scarf's simplicial subdivision \cite{Scarf}, and the global Newton method of \citet{global_newton}, as well as some algorithms for correlated equilibria \cite{compact_correlated}.
The Nash Algorithms Appendix\footnote{See Appendix C: Nash Algorithms. \url{https://arxiv.org/abs/2302.13232}}
in the supplement presents further details on how some of these algorithms depend on deviation payoffs.
\paragraph{Replicator dynamics} \cite{replicator_dynamics} is often presented as a rule governing evolutionary dynamics, but can also be viewed as an algorithm for computing symmetric Nash equilibria.
Expressed in terms of deviation payoffs, replicator dynamics starts from some initial mixture $\vec{\sigma}^0$ at $t=0$, and performs iterative updates of the form:
\begin{align*}
\vec{w}_a &\gets \vec{\sigma}_a^{t} \ \vec{u}_a(\vec{\sigma}^{t}) \\
\vec{\sigma}_a^{t+1} &\gets \frac{\vec{w}_a}{\sum_{a^\prime}\vec{w}_{a^\prime}}
\end{align*}
\noindent
This update assumes that all payoffs are non-negative; a positive affine transformation can be applied to any game to ensure this assumption holds (and to adjust the effective step-size).
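A minimal sketch of this update rule is shown below; it assumes non-negative payoffs and a deviation-payoff routine such as the illustrative one sketched earlier.
\begin{verbatim}
# Replicator dynamics as an equilibrium search (illustrative sketch).
# dev_pays(sigma) returns the length-A deviation payoff vector u(sigma).
import numpy as np

def replicator_dynamics(dev_pays, A, iters=10000, tol=1e-10, sigma0=None):
    sigma = np.full(A, 1.0 / A) if sigma0 is None else np.asarray(sigma0, float)
    for _ in range(iters):
        w = sigma * dev_pays(sigma)   # weight each action by its deviation payoff
        new = w / w.sum()             # renormalize onto the simplex
        if np.abs(new - sigma).max() < tol:
            return new
        sigma = new
    return sigma
\end{verbatim}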
\paragraph{Gradient descent} is a classic local-search algorithm for minimizing differentiable functions.
We can easily define a function whose minima correspond to Nash equilibria based on the following sum of deviation gains:
\begin{equation*}
g(\vec{\sigma}) = \sum_a \max \left( 0 , \vec{u}_a(\vec{\sigma}) - \vec{u}(\vec{\sigma}) \cdot \vec{\sigma} \right)
\end{equation*}
\noindent
Then we can iteratively take steps in the direction of $-\nabla g(\vec{\sigma})$.
The elements of this gradient vector are given by:
\begin{equation*}
\nabla_s(g) = \sum_{a} \left( \devderiv{\vec{\sigma}}{a}{s} - \vec{u}_s(\vec{\sigma}) - \sum_{a'} \vec{\sigma}_{a'} \devderiv{\vec{\sigma}}{a'}{s} \right) 1_{g_a}
\end{equation*}
\noindent
where $1_{g_a}$ is an indicator variable for ${\vec{u}_a(\vec{\sigma}) > \vec{u}(\vec{\sigma}) \cdot \vec{\sigma}}$.
So deviation payoffs and deviation derivatives suffice to compute the gain gradient.
When performing gradient descent, the mixture resulting from $\vec{\sigma} - \nabla g(\vec{\sigma})$ may not lie on the probability simplex, so it is necessary to project each step back onto the simplex, which we do using the method from \citet{simplex_projection}.
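The sketch below illustrates one projected-gradient step under these equations. The \texttt{dev\_pays} and \texttt{dev\_derivs} functions are assumed, and the projection shown is the standard sort-based Euclidean simplex projection rather than a transcription of the cited method.
\begin{verbatim}
# Illustrative sketch of projected gradient descent on the sum of gains.
# dev_pays(sigma) -> length-A vector u; dev_derivs(sigma) -> A x A Jacobian J
# with J[a, s] = d u_a / d sigma_s.
import numpy as np

def project_to_simplex(y):
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(y) + 1) > 0)[0][-1]
    return np.maximum(y + (1.0 - css[rho]) / (rho + 1.0), 0.0)

def gain_descent(dev_pays, dev_derivs, A, step=1e-3, iters=1000):
    sigma = np.full(A, 1.0 / A)
    for _ in range(iters):
        u = dev_pays(sigma)
        J = dev_derivs(sigma)
        gains = u > u @ sigma                       # indicator 1_{g_a}
        grad = np.array([(gains * (J[:, s] - u[s] - sigma @ J[:, s])).sum()
                         for s in range(A)])
        sigma = project_to_simplex(sigma - step * grad)
    return sigma
\end{verbatim}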
Neither replicator dynamics nor gradient descent is guaranteed to identify a Nash equilibrium.
However, since these algorithms are many orders of magnitude faster than complete algorithms like simplicial subdivision or global Newton, it is practical to re-run them from many initial mixtures.
We find that both algorithms tend to identify multiple equilibria in large symmetric games and that the sets of equilibria they find are often only partially overlapping.
On the other hand, we find that fictitious play and other best-response-based updates are ineffective on large symmetric games.
Therefore in practice, we recommend repeatedly running both replicator dynamics and gradient descent, filtering by regret to isolate $\varepsilon$-Nash mixtures, and merging the resulting equilibrium sets.
An example trace from running both algorithms on a 100-player, 3-action Gaussian mixture game (represented using our data structures) appears in Figure~\ref{fig:nash_traces}.
\section{Data Structure Improvements}
The classic payoff-matrix representation of a normal-form game has a dimension for each player, and a size along each dimension equal to the number of actions available to that player.
In a symmetric game, it suffices to store just one player's payoffs, so if there are $P$ players and $A$ actions, a symmetric game can be represented by a symmetric tensor of size $A^P$.
Because symmetric tensors also arise in other settings, it is worth considering whether generic techniques for symmetric tensors would suffice for efficiently representing symmetric games.
In particular, \citet{symmetric_tensors} proposes a block-representation of symmetric tensors and a cache-conscious algorithm for multiplying with them.
Their representation has the same level of asymptotic compression as the data structures presented here.
However, in Figure~\ref{fig:nfg_memory}, we compare the memory footprint of the \citeauthor{symmetric_tensors} symmetric-tensor representation (as implemented by the \texttt{SymmetricTensors.jl} library) against the largest of our symmetric-game data structures (pre-computed repetitions), showing that while the block-symmetric representation is better than storing the full payoff matrix, it is significantly larger than our data structure variants.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/NFG_memory_comparison.png}
\caption{The normal-form (dotted lines) and symmetric-tensor (dashed lines) representations use significantly more memory than the largest of our proposed data structures.}
\label{fig:nfg_memory}
\end{figure}
\subsection{Payoff Dictionary}
We therefore turn to purpose-built data structures for storing symmetric games.
The classic approach is to store a mapping from profiles to payoffs.
In Gambit \cite{Gambit} the implementation of symmetric action-graph games stores this mapping in a trie, while others have used a database \cite{EGTAOnline}.
Such a mapping is also implicit in many theoretical descriptions, for example when relating the size of a compact representation to the number of profiles \cite{action-graph_games, resource-graph_games}.
In our experiments, our baseline data structure stores this mapping using a hash table that maps a vector of integers representing the profile $\vec{s}$ to a vector of floats that stores the payoff $v_a(\vec{s})$ for each played action with $\vec{s}_a > 0$.
With $P$ players and $A$ actions, this table will store $\binom{P+A-1}{P}$ profiles.
Calculating a deviation payoff $u_a(\vec{\sigma})$ using this mapping requires iterating through all profiles to compute the sum from Equation~\eqref{eq:prof_dev_pays}, where the probabilities are given by this expression:
\begin{equation}
\label{eq:prof_prob}
\Pr_{\vec{\sigma}} \left( \vec{s} \mid a \right) = \reps{\vec{s} \mid a} \left( \vec{\sigma}_a \right)^{\vec{s}_a - 1} \prod_{a' \ne a} \left( \vec{\sigma}_{a'} \right)^{\vec{s}_{a'}}
\end{equation}
\noindent
$\reps{\vec{s} \mid a}$, calculated according to the multinomial from Equation~\eqref{eq:dev_reps}, gives the number of asymmetric orderings of the symmetric configuration $\left( \vec{s} \mid a \right)$, while the remaining terms give the probability of the opponents jointly playing one such asymmetric ordering.
The Worked Examples Appendix shows the full mapping and a detailed walk-through of the deviation payoff calculation on an example 3-player, 3-strategy symmetric game for this version of the data structure as well as for all subsequent variants.
We strongly recommend stepping through these examples to help resolve any confusion that arises from the data structure and algorithm descriptions that follow.\footnote{\label{note:worked_examples}See Appendix A: Worked Examples. \url{https://arxiv.org/abs/2302.13232}}
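For reference, a minimal sketch of this baseline computation is given below; it is a plain-Python illustration of Equations \eqref{eq:prof_dev_pays} and \eqref{eq:prof_prob} over a profile-to-payoff dictionary, not the trie- or database-backed implementations cited above.
\begin{verbatim}
# Baseline deviation payoffs from a profile-to-payoff mapping (sketch).
# payoffs maps a profile tuple s (player counts per action) to a dict
# {a: v_a(s | a)} containing each action played in s.
from math import factorial
import numpy as np

def reps_given(s, a):
    c = list(s)
    c[a] -= 1                              # opponent configuration (s | a)
    r = factorial(sum(c))
    for k in c:
        r //= factorial(k)
    return r

def deviation_payoffs(payoffs, sigma, A):
    u = np.zeros(A)
    for s, pay in payoffs.items():
        for a, v in pay.items():
            prob = reps_given(s, a) * sigma[a] ** (s[a] - 1)
            for ap in range(A):
                if ap != a:
                    prob *= sigma[ap] ** s[ap]
            u[a] += prob * v
    return u
\end{verbatim}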
\subsection{Array Vectorization}
The first idea for improving this deviation payoff calculation is to store the profiles $\vec{s}$ and the payoffs $v$ in a pair of two-dimensional arrays with parallel structure, denoted by a shared index $i$ over all profiles.
In both arrays, each row corresponds to an action and each column corresponds to a profile, resulting in arrays of size $A \times \binom{P+A-1}{P}$.
Extracting column $i$ of the profiles-array gives a profile vector $\vec{s}_i$ with the count for each action.
The corresponding column of the payoffs-array stores the payoff $v_a \left( \vec{s}_i \mid a \right)$ for each action $a$ where that profile has a non-zero count.
Of note, the array representation here (and in all subsequent data structure variants) hinders direct look-up of payoffs for a pure-strategy profile using the mapping.
However, this trade-off is clearly worthwhile for several reasons:
\begin{enumerate}
\item Calculating mixed-strategy deviation payoffs is a much tighter bottleneck than looking up pure-strategy payoffs.
\item Symmetric mixed-strategy equilibria in symmetric games are more often relevant than asymmetric pure-strategy equilibria (and are guaranteed to exist).
\item The profile-to-index mapping can be computed by a ranking algorithm for combinations-with-replacement \cite{Combinatorics}, or can be stored alongside the arrays for fast lookup with only linear additional memory.
\end{enumerate}
Using this array representation, we can vectorize each of the steps in the calculation of the deviation payoff $\vec{u}_a(\vec{\sigma})$.
We describe each of the following operations in terms of profile $i$, and broadcast those operations across array columns.
First, we can compute a mask $m$ identifying the profiles where action $a$ is played:
\begin{equation}
\label{eq:mi}
m_{i} \gets \vec{s}_{i\,a} \ne 0
\end{equation}
Then for each such profile, we can remove a player choosing $a$ to get the opponent-configuration $\vec{c}_i = (\vec{s}_{i} | a)$ by subtracting an indicator vector $\vec e_{a}$ for action $a$:
\begin{equation}
\label{eq:ci}
\vec{c}_i \gets \vec{s}_i - \vec e_{a}
\end{equation}
For each profile, the probability $p_i = \Pr \left( \vec{c}_i \right)$ of the opponent configuration is calculated by evaluating the multinomial for repetitions and applying a broadcast-and-reduce to raise the mixture probabilities to the appropriate exponents and take their product, resulting in an array of configuration probabilities:
\begin{equation}
\label{eq:pi}
p_{i} \gets \reps{\vec{c}_{i}} \prod_{a^\prime} \left( \vec{\sigma}_{a^\prime} \right)^{\vec{c}_{i \, a^\prime}}
\end{equation}
Finally, the deviation payoff for action $a$ can be computed by extracting row $a$ of the payoffs array, multiplying element-wise by the configuration probabilities, masking out the invalid configurations, and summing over the remaining profiles:\footnotemark[5]
\begin{equation*}
\vec{u}_{a}(\vec{\sigma}) \gets \sum_i m_{i}\ v_{a}\left(\vec{c}_i \right)\ p_{i}
\end{equation*}
Since these steps can be performed using only arithmetic primitives and indexed reduction, the computation is vectorizable in any numeric library.
Further vectorization over actions is in principle possible, but we defer this to later sections.
The degree of improvement from array vectorization will vary with the language used.
More importantly, it sets the stage for our subsequent innovations.
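As a purely illustrative sketch (assuming hypothetical arrays \texttt{S} and \texttt{V}, both of size $A \times \binom{P+A-1}{P}$ with rows indexed by action and columns by profile, and a length-$A$ mixture vector \texttt{sigma}), the masked, vectorized calculation might look as follows in Julia:
\begin{verbatim}
# Sketch of the array-vectorized deviation payoff for one action `a`.
multinom(c) = Float64(div(factorial(big(sum(c))), prod(factorial.(big.(c)))))

function deviation_payoff(S, V, sigma, a)
    A, N = size(S)
    m = S[a, :] .> 0                    # mask: profiles where a is played
    C = S .- ((1:A) .== a)              # opponent configurations (s | a)
    p = [m[i] ? multinom(C[:, i]) * prod(sigma .^ C[:, i]) : 0.0 for i in 1:N]
    return sum(V[a, :] .* p)            # masked, probability-weighted sum
end
\end{verbatim}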
\subsection{Pre-computed Repetitions}
\label{sec:pre-reps}
The next improvement comes from recognizing that each time we compute a deviation-payoff vector $\vec{u}(\vec{\sigma})$, we make use of the $\reps{\vec{s} \mid a}$ value for every profile-action pair.
Since equilibrium search involves many, many deviation-payoff calculations, we can save significant work by pre-computing these repetitions and storing them in a third parallel array.
Then the probability calculation in Equation~\eqref{eq:pi} can extract row $a$ of this repetitions array and element-wise multiply by the array of products.
Our code supplement and the Worked Examples Appendix each include a version of this data structure that computes $\vec{u}(\vec{\sigma})$ in a single pass instead of computing $\vec{u}_a(\vec{\sigma})$ separately for each action, but we defer the detailed explanation of this vectorization to the following variant where it is dramatically simplified.
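For illustration, and reusing the hypothetical \texttt{S} array from the sketch above, the repetitions array might be pre-computed once as follows; inside the deviation-payoff routine, the per-profile \texttt{multinom} call is then replaced by a lookup into row $a$ of \texttt{R}.
\begin{verbatim}
# Sketch: one-time pre-computation of a repetitions array R (A x N), with
# R[a, i] = reps(S[:, i] | a) where action a is played and 0 otherwise.
multinom(c) = Float64(div(factorial(big(sum(c))), prod(factorial.(big.(c)))))
R = [S[a, i] > 0 ? multinom(S[:, i] .- ((1:size(S, 1)) .== a)) : 0.0
     for a in 1:size(S, 1), i in 1:size(S, 2)]
\end{verbatim}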
\subsection{Opponent Configurations}
\label{sec:opp-config}
A closer inspection of these parallel array structures reveals two redundancies. First, the payoffs array contains a number of meaningless entries: whenever $\vec{s}_a = 0$, the corresponding $v_a$ is masked out of the calculations.\footnotemark[5]
Second, the repetitions array contains a number of redundant entries: whenever two profiles differ by a single player's action,
that is, $(\vec{s} \mid a) = (\vec{s}\,' \mid a')$, we end up storing identical values for $\reps{\vec{s} \mid a}$ and $\reps{\vec{s}\,' \mid a'}$.
Both of these issues can be avoided if we switch to storing opponent-configurations, with a shared index $j$ over all configurations.
This gives us two parallel arrays of size $A \times \binom{P+A-2}{P-1}$ for configurations and payoffs, and a $1 \times \binom{P+A-2}{P-1}$ repeats-array.
The configurations-array now stores each configuration $\vec{c}_j$ over $P-1$ opponents.
Each corresponding column of the repeats-array stores $\reps{\vec{c}_j}$, and the same column of the payoffs-array stores, for each action $a$, the payoff $v_a(\vec{c}_j)$ for a player choosing $a$ when their opponents play according to $\vec{c}_j$.
This also simplifies the deviation-payoffs calculation.
We compute the probability $p_j = \Pr(\vec{c}_j)$ of a configuration in the same way as before, except that we can skip the steps from Equations \ref{eq:mi} and \ref{eq:ci} and instead simply access configuration $\vec{c}_j$ from storage:
\begin{equation*}
p_j \gets \reps{\vec{c}_j} \prod_{a^\prime} \left( \vec{\sigma}_{a^\prime} \right) ^ {\vec{c}_{j\,a^\prime}}
\end{equation*}
The key difference in this equation is that we compute the probability of a configuration $j$, instead of the probability of a configuration derived from a masked profile $i$.
This change removes the need to mask payoffs and makes the configuration probabilities identical for all deviations, so we do not have to perform a separate probability calculation for each action.
This single array of configuration-probabilities in turn simplifies vectorization over deviation actions, easily giving us the whole deviation-payoff vector at once.
To get deviation payoffs, we multiply the configuration probabilities $p_j$ by the payoffs-array (broadcasting over actions), and sum over the configurations dimension.
\begin{equation}
\label{eq:dev_conf}
\vec{u}(\vec{\sigma}) \gets \sum_j p_j \vec{v}_j
\end{equation}
Here $\vec{v}_j$ refers to a column of the payoffs-array, and summing these configuration-payoff vectors gives us a vector of deviation payoffs.
The code for this version is the easiest to understand, so despite our recommended further improvements, we include it in the Julia Code Appendix.\footnote{\label{note:julia_code}Implementation shown in Appendix B: Julia Code. \url{https://arxiv.org/abs/2302.13232}}
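Independently of the appendix listing, a minimal sketch of this configuration-based variant (with hypothetical arrays \texttt{C} and \texttt{V} of size $A \times \binom{P+A-2}{P-1}$, a repeats vector \texttt{R} of length $\binom{P+A-2}{P-1}$, and a length-$A$ mixture \texttt{sigma}) is:
\begin{verbatim}
# Sketch: whole deviation-payoff vector from the configuration arrays.
function deviation_payoffs(C, V, R, sigma)
    p = R .* vec(prod(sigma .^ C, dims = 1))   # Pr of each configuration
    return V * p                               # u(sigma), one entry per action
end
\end{verbatim}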
\subsection{Pre-Weighting by Repetitions}
The next improvement comes from the realization that by associativity we can re-group Equation~\eqref{eq:dev_conf} as follows.
\begin{equation}
\vec{u}_a(\vec{\sigma}) \gets \sum_{j} \Bigl( v_a \left( \vec{c}_j \right) \reps{ \vec{c}_j } \Bigr) \prod_{a'} \left( \vec{\sigma}_{a'} \right)^{\vec{c}_{j\,a'}}
\end{equation}
This means that even though repetitions are logically part of the probability calculation, we can simplify our computations by storing them with the payoff values.
Specifically, we can combine the repeats-array into the payoffs-array by multiplying each payoff value by that configuration's repetitions, so that the entry in row $a$, column $j$ of the payoffs array stores the value:
\begin{equation*}
\reps{ \vec{c}_j } v_a \left( \vec{c}_j \right)
\end{equation*}
This operation can be performed once in advance, saving space and speeding up all subsequent deviation payoff calculations.\footnotemark[5]
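As a small illustration (reusing the hypothetical \texttt{V}, \texttt{R}, \texttt{C}, and \texttt{sigma} names from the sketch above), the pre-weighting is a single up-front broadcast, after which the repeats vector is no longer needed at query time:
\begin{verbatim}
# Sketch: fold repetitions into the payoffs once.
W = V .* R'        # W[a, j] = reps(c_j) * v_a(c_j)
# deviation payoffs then reduce to  W * vec(prod(sigma .^ C, dims = 1))
\end{verbatim}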
\subsection{Log Transformation}
\label{sec:log-probs}
The combined effect of all the improvements so far is sufficient to allow reasonably efficient deviation payoff calculations for games with over 100 players, as long as the number of actions is kept small. However, this scale poses a new problem: $\binom{32}{6,6,6,7,7} > 2^{63}$, meaning that the repetitions overflow a 64-bit integer for some profiles with $P=33$ and $A=5$.
We can solve this problem by working with log-probabilities, which incidentally yields a slight computational speed-up: the log transform turns exponentiation into multiplication and multiplication into addition, operations that consume fewer processor cycles.
Specifically, we can store the natural (or other base) log of the repetition-weighted payoffs $\lambda$, and to calculate repetitions we can use a log-gamma function to avoid overflows.
\begin{equation*}
\lambda_{ja} \gets \log \left( \reps{\vec{c}_j} \right) + \log \left( v_a(\vec{c}_j) \right)
\end{equation*}
This will not work if payoffs are negative, but since any positive affine transformation of utilities has no effect on incentives, we can transform any game into one with non-negative payoffs.
In fact, we find it useful under all of our game representations to transform the payoffs into a standardized range, to simplify hyperparameter tuning for various equilibrium-computation algorithms.
As before, these calculations can be vectorized, with the exponential applied element-wise to an array of contribution values for each configuration.
\begin{table}[b]
\small
\caption{For a given number of actions $A$, the largest number of players $P$ before \emph{repetitions} overflows a 64-bit integer.}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|}
\cline{2-10}
$A$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline
$\max P$ & 67 & 44 & 36 & 32 & 29 & 27 & 26 & 25 & 25 \\ \cline{2-10}
\multicolumn{9}{c}{} \\ \cline{2-10}
$A$ & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & $\ge \text{19}$ \\ \hline
$\max P$ & 24 & 23 & 23 & 23 & 23 & 22 & 22 & 22 & 21 \\ \cline{2-10}
\end{tabular}
\center
\label{tab:int_overflow}
\end{table}
We can now compute deviation payoffs using this representation by first computing each configuration's contribution $\gamma_j$ to the deviation payoff in log-space,
\begin{equation}
\label{eq:log_cont}
\gamma_j \gets \exp \left( \lambda_j + \sum_{a^\prime} \vec{c}_{ja^\prime} \log \vec{\sigma}_{a^\prime} \right)
\end{equation}
and then summing over all configurations:\footnotemark[5]
\begin{equation*}
\vec{u}_a(\vec{\sigma}) \gets \sum_j \gamma_j
\end{equation*}
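A minimal sketch of this log-space variant (array names are illustrative; payoffs are assumed non-negative, mixture probabilities are assumed clamped away from exact zeros so that $\log \vec{\sigma}$ is finite, and a plain sum of logs stands in for a log-gamma call to keep the sketch dependency-free) is:
\begin{verbatim}
# Sketch of the log-transformed representation and deviation payoffs.
# C: configurations (A x J), V: non-negative payoffs (A x J), sigma: length A.
logmultinom(c) = sum(log.(1:sum(c))) - sum(k -> sum(log.(1:k)), c)
L = log.(V) .+ [logmultinom(c) for c in eachcol(C)]'     # lambda, A x J

function deviation_payoffs(C, L, sigma)
    G = exp.(L .+ (C' * log.(sigma))')    # per-configuration contributions
    return vec(sum(G, dims = 2))          # sum over configurations
end
\end{verbatim}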
\subsection{GPU Acceleration}
Now that our deviation-payoff calculations are done as a sequence of simple mathematical operations broadcast across large arrays of floating-point data, an obvious way to accelerate them is to move the computation to a graphics processor.
Most modern programming languages have libraries that make GPU translation manageable for operations as simple as ours, and specifically in Julia the translation is trivial, requiring only that we choose an array data type that moves the data to the GPU; the code for CPU and GPU versions of our final data structure is otherwise identical.\footnotemark[6]
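For example (assuming the \texttt{CUDA.jl} package, a working GPU, and the \texttt{deviation\_payoffs} sketch above), moving the computation to the GPU only changes how the arrays are constructed:
\begin{verbatim}
# Sketch: same code, GPU arrays (32-bit floats).
using CUDA
C32, L32 = CuArray(Float32.(C)), CuArray(Float32.(L))
u = deviation_payoffs(C32, L32, CuArray(Float32.(sigma)))
\end{verbatim}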
\subsection{Batch Processing}
The final improvement we implemented requires no change to the preceding data structure, but instead uses it more efficiently.
If memory allows, we can take even greater advantage of SIMD operations on the GPU by computing deviation payoffs for a batch containing multiple mixtures.\footnotemark[6]
This is useful because many of the algorithms we use to compute Nash equilibria are forms of local search where it can help to initialize them from many starting points.
For example, in both replicator dynamics and gain-gradient descent, we generally restart the algorithm as many as 100 times from different initial mixtures, so batching the deviation-payoff calculations lets us run many such restarts in parallel.
This adds an extra dimension to the intermediate arrays constructed by the deviation payoff computation, giving them dimension actions~$\times$ configurations $\times$ mixtures.
This is mainly helpful with small games, since large games can occupy all GPU execution units with the deviation-payoff calculation for a single mixture.
If batch processing is used on large games, the number of mixtures must be carefully chosen to not exceed available GPU memory.
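A sketch of the batched calculation (for a hypothetical $A \times B$ matrix of mixtures \texttt{sigmas}, reusing the log-space arrays \texttt{C} and \texttt{L} from the earlier sketch) makes the extra mixtures dimension explicit:
\begin{verbatim}
# Sketch: batched deviation payoffs, one column of the result per mixture.
function deviation_payoffs_batch(C, L, sigmas)
    A, J = size(L); B = size(sigmas, 2)
    S = C' * log.(sigmas)                        # J x B
    G = exp.(reshape(L, A, J, 1) .+ reshape(S, 1, J, B))
    return dropdims(sum(G, dims = 2), dims = 2)  # A x B
end
\end{verbatim}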
\subsection{Deviation Derivatives}
For any of the data structure variants we describe, it is possible to calculate deviation derivatives, but here we describe the approach only for the final log-transformed variants.
Computing the Jacobian with respect to the mixture probability $\vec{\sigma}_s$ requires only a small addition to the deviation payoff contributions computed in Equation~\eqref{eq:log_cont}: we multiply by $\frac{\vec{c}_{js}}{\vec{\sigma}_{s}}$ along a new dimension $s$ before summing across configurations.\footnotemark[5]
\begin{equation*}
\devderiv{\vec{\sigma}}{a}{\vec{\sigma}_s} \gets \sum_j \frac{\vec{c}_{j s}}{\vec{\sigma}_{s}} \gamma_j
\end{equation*}
This calculation can be fully vectorized using similar operations but adds a dimension since we have a partial derivative for each \emph{pair} of actions.\footnotemark[6]
This extra dimension is the main reason gradient descent runs slower than replicator dynamics in our experiments.
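As an illustration, if \texttt{G} holds the $A \times J$ array of contributions $\gamma$ from the log-space sketch for a single mixture, the full Jacobian can be formed with one rescaled matrix product (again assuming \texttt{sigma} is clamped away from zero):
\begin{verbatim}
# Sketch: Jacobian[a, s] = d u_a / d sigma_s  for one mixture.
jacobian(G, C, sigma) = G * (C ./ sigma)'
\end{verbatim}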
\section{Validation and Experiments}
\subsection{Size Limits}
\label{sec:sizes}
\begin{figure}[t]
\center
\includegraphics[width=\columnwidth]{figures/log_mem_limit_32.png}
\caption{GPU memory required to store the final version of our data structure. Batch-computation of deviation payoffs increases memory footprint by a linear factor.}
\label{fig:mem_limit}
\end{figure}
The first constraint on the size of the data structures comes from integer overflows when calculating repetitions.
Table~\ref{tab:int_overflow} shows the upper limit in terms of $P$ and $A$ beyond which the repetitions value for at least one profile will overflow a signed 64-bit integer.
For example, if we want to represent a 5-action game with more than 32 players, we must use the log-transform to ensure that we calculate probabilities correctly.
Note that using unsigned integers usually makes no difference and never allows more than one extra player.
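As a quick, independent check of Table~\ref{tab:int_overflow} (illustrative code, not part of our library), the largest repetitions value for given $P$ and $A$ occurs at the most balanced opponent configuration, so the limits can be recomputed directly:
\begin{verbatim}
# Sketch: largest P (per A) whose repetitions fit in a signed 64-bit integer.
function max_reps(P, A)
    n = P - 1
    parts = [div(n, A) + (i <= mod(n, A) ? 1 : 0) for i in 1:A]
    return div(factorial(big(n)), prod(factorial.(big.(parts))))
end
max_players(A) = findlast(P -> max_reps(P, A) <= typemax(Int64), 2:1000) + 1
\end{verbatim}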
This raises the question of whether we sacrifice precision when using the log-transform.
To test this, we generated 10 random bipartite action-graph games for every combination of $2 \le P \le 512$ and $2 \le A \le 20$ for which the size of our data structures would be below 1GiB.
We then represented these games using the CPU (64-bit) and GPU (32-bit) versions of our data structure, calculated deviation payoffs for 1000 mixtures per game, and recorded the largest error in the calculated deviation payoffs.
These errors were tiny, except in games with hundreds of players in the 32-bit representation, and even then the scale of the errors was at least 7 orders of magnitude smaller than the payoff range of the game.
Full results are shown in the supplement.\footnote{\label{note:supplementary_figures}See Appendix D: supplementary figures. \url{https://arxiv.org/abs/2302.13232}}
With these results, the question of what games we can represent comes down to what can fit in (main or GPU) memory.
Figure~\ref{fig:mem_limit} shows the size of the GPU data structure as $P$ increases for various values of $A$.
A similar figure for 64-bit arrays appears in the supplement.\footnotemark[4]
Note that calculating deviation payoffs requires storing intermediate arrays of similar size to the data structure, so game size should be restricted to below half of available memory (or smaller if we want to calculate deviation payoffs in batches).
Note that these sizes mean that it is entirely reasonable to store a 100-player, 6-action game.
\subsection{Timing Comparisons}
\label{sec:timing}
\begin{figure}
\center
\includegraphics[width=\columnwidth]{figures/deviation_payoff_timing_4A.png}
\caption{Time required to compute deviation payoffs for 1024 mixtures in 4-action games using each data structure variant. Lines stop when either an integer overflow is encountered or when more than 1GB of memory is required. Similar plots for $A=6$ and $A=8$ appear in Appendix D.}
\label{fig:dev_pay_timing}
\end{figure}
Figures \ref{fig:dev_pay_timing} and \ref{fig:eq_timing} show our main results: our data structures produce significant speedup in computing deviation payoffs and therefore running Nash-finding algorithms.
Timing comparisons were run using an AMD Threadripper 2990WX CPU and NVIDIA RTX 2080-Ti GPUs.
Figure~\ref{fig:dev_pay_timing} shows the time to compute deviation payoffs using each of the data structures described in the previous section.
The first several lines end at $P=32$, since this is near the int-overflow limit for $A=4$.
The batch-processing line ends at $P=128$, because operating on a batch of 64 mixtures requires much more memory.
Note that our proposed data structure can compute deviation payoffs in a game with 384 players faster than the baseline data structure can handle $P=12$.
Many more deviation-payoff timing comparisons including 6- and 8-action games appear in the supplement.\footnotemark[4]
Figure~\ref{fig:eq_timing} shows a similar result for Nash-finding, but focusing only on our best data structure, and using batch sizes adapted to available memory.
First, note that replicator dynamics using our data structure outperforms the previous implementation by four orders of magnitude.
Gradient descent is consistently an order of magnitude slower than replicator dynamics, which made it impractical with older data structures, where it was generally not implemented.
With our data structure, that slowdown is often acceptable: gradient descent and replicator dynamics frequently identify distinct equilibria (as illustrated in Figure~\ref{fig:nash_traces}), so in cases where replicator dynamics fails or more equilibria are desired it is entirely reasonable to run both algorithms.
For example, with $A=4$ and $P=512$, we can perform 100 runs of replicator dynamics or gradient descent in a reasonable time-frame.
Timing results shown in the supplement for $A=6$ and $A=8$ are broadly similar.\footnotemark[7]
\begin{figure}[t]
\center
\includegraphics[width=\columnwidth]{figures/nash_timing_4A.png}
\caption{Time to compute Nash equilibria in 4-action games with 100 starting mixtures and 1000 iterations. Replicator dynamics is considerably faster than gradient descent (and could get away with fewer iterations), but it is often worthwhile to run both. Our data structure lets us represent and solve much larger games than the previous state-of-the-art. Similar plots for $A=6$ and $A=8$ appear in Appendix D.}
\label{fig:eq_timing}
\end{figure}
\section{Extensions}
\subsection{Many Strategies}
\label{sec:many-strategies}
The size of our data structures, and therefore the time to compute deviation payoffs, scales similarly with respect to the number of players $P$ and the number of actions $A$.
The configurations and payoffs arrays both have size $A \times \binom{P+A-2}{P-1}$, which is dominated by the binomial that grows exponentially in the smaller of $P$ and $A$.
As a result, it is generally possible to represent games with a large number of players but a small number of actions or a large number of actions but a small number of players, with the caveat that actions contribute an extra linear factor.
Unfortunately, equilibrium computation does not scale equally well in players and actions.
Identifying symmetric mixed-strategy Nash equilibria means searching for specific points in the mixed-strategy simplex, whose dimension grows with the number of actions.
The various complete algorithms are generally hopelessly slow with a large number of actions, and the local search methods we prefer are much less likely to identify an equilibrium on any given run.
This means that solving symmetric games with many players is far more feasible than solving ones with many actions.
To handle games with many actions, we can take inspiration from \citet{simple_search}, and search for equilibria with small support sets.
This requires solving a subgame defined by restricting the game to a subset of the actions, and then checking whether those subgame solutions have beneficial deviations in the full action set.
To facilitate this, we can use a variant of our data structure where we include additional rows in the payoffs-array representing actions outside the support set, but make no change to the configurations-array.
Then for a mixture in the support set, we can compute deviation payoffs for all actions against that mixture by expanding the step where we broadcast the multiplication of configuration probabilities and payoffs.
If we determine that the current subgame set does not contain a Nash equilibrium, we can update the support set replacing weaker strategies with stronger ones, as measured by deviation gain against the subgame's candidate solutions (or other metrics).
This will iteratively search the space of supports and eventually identify a small-support equilibrium if any exist.
This approach of constructing complete subgames plus payoff information for unilateral deviations is already widely used in empirical game settings \cite{EGTA} where determining payoffs for a single configuration is expensive.
\subsection{Role-Symmetric Games}
\label{sec:role-symmetry}
Outside of fully-symmetric games, many large normal-form games exhibit role-symmetry, where players can be grouped into $R$ roles---like buyers and sellers or attackers and defenders---and within each role, all players are indistinguishable.
To solve role symmetric games, analysts typically search for role-symmetric Nash equilibria, where all players in a given role play the same mixed strategy.
Role symmetric mixed-strategy profiles can be represented by a vector that concatenates the mixed strategy played by each role, and the deviation payoff vector of the same dimension gives the expected utility of a unilateral deviator selecting each action.
These deviation payoffs and their derivatives remain central to computing role-symmetric equilibria, and almost all of the techniques we propose for improving representations of symmetric games also apply under role symmetry.
Unfortunately, representing opponent configurations gets trickier with more than one role, because the set of opponents is different for players belonging to different roles.
For example, in a game with 10 buyers and 10 sellers, when computing deviation payoffs for buyer-actions, opponent configurations include 9 buyers and 10 sellers, whereas for seller actions, there are 10 buyer-opponents and 9 seller-opponents.
We could resolve this by building separate opponent-configuration tables for each role $r$, where the arrays for role $r$ are based on configurations with $P_r - 1$ opponents in the same role and $P_{r'}$ opponents in other roles.
This gives us $R$ pairs of configuration- and payoff-arrays, where the arrays for role $r$ have $\binom{P_{r} + A_{r} - 2}{P_{r}-1} \left(\prod_{r' \ne r} \binom{P_{r'} + A_{r'} - 1}{P_{r'}} \right)$ columns; the configuration-array for role $r$ has $\left(\sum_{r'=1}^{R} A_{r'} \right)$ rows and the payoff-array has $A_r$ rows.
We could also dispense with the opponent-configuration approach and instead store full $P$-player profiles.
This results in one profile-array and one payoff-array that each have size $ \left(\sum_{r=1}^{R} A_r \right) \times \left(\prod_{r=1}^{R} \binom{P_r + A_r - 1}{P_r} \right) $, but requires us to return to masking-based dev\-iation-payoff computations.
Thus for multi-role games, we have a choice between slightly smaller storage requirements for a profile-based data structure or slightly simpler computations for a config\-uration-based representation;
the \texttt{gameanalysis.py} library\footnotemark[2] employs the former option.
Under either of these approaches, all of our other optimizations can still apply, and as long as the number of roles is small both options provide for reasonably efficient deviation-payoff calculations.
\subsection{Action-Graph Games}
Action-graph games \cite{action-graph_games} represent certain games compactly by storing, for each node, a mapping from neighborhood configurations to payoffs.
Each action (for any player) corresponds to a node in a graph, and an action's payoff depends only on the number of players choosing each of the adjacent actions.
This means that action-graph games are role-symmetric, with any group of players who share an action set belonging to the same role; when all players have the same action set, action-graph games are symmetric.
In the symmetric case, we can extend our data structures to efficiently compute deviation payoffs in action-graph games with a pair of arrays for each action storing that action's neighborhood configurations and repetition-weighted-payoffs.
The key augmentation is to include, as part of the representation of an opponent-configuration, an extra ``action'' capturing the number of players choosing any action outside the neighborhood.
Then when computing the deviation payoff for an action, we can sum the probabilities for each non-adjacent action to get the out-of-neighborhood probability.
This means that all of our data structure improvements can be applied to symmetric AGG-$\emptyset$ games.
For role-symmetric AGG-$\emptyset$ games, our role-symmetric data structure variants could be applied after splitting up action nodes shared by more than one role.
For action-graph games with contribution-independent function nodes, many of our data-structure improvements can be applied, but the representation of these games tends to be sufficiently compact that the potential gains may be small.
The \texttt{gameanalysis.jl} library\footnotemark[1] implements symmetric action-graph games with a bipartite graph between actions and contribution-independent functions as one of the tools for generating interesting random symmetric game instances.
This implementation makes use of vectorization, opponent configurations, pre-computed repetitions, and the log transformation.
\subsection{Multi-Threading}
The next extension we would like to explore would focus on improving the CPU version of our data structure to capture some of the parallelism achievable on the GPU.
At present, our deviation-payoff calculations are single-threaded. While we might not expect to outperform SIMD array operations, there should still be room for significant speedup on multi-core systems by parallelizing a batch of deviation-payoff computations across threads, especially for games in the multi-gigabyte size range that can stress GPU memory but still fit in system RAM.
\section{Conclusion}
Our validation experiments show that the data structures we propose are capable of storing (Section~\ref{sec:sizes}) and efficiently computing (Section~\ref{sec:timing}) deviation payoffs in symmetric games with very large numbers of players.
By using incomplete-but-effective search algorithms we are consistently able to identify symmetric Nash equilibria.
In combination with the iterative exploration approach described in Section~\ref{sec:many-strategies} we can also find small-support equilibria in games with many actions.
These results dramatically outclass existing solvers, enabling analysis of a much wider range of symmetric games and closing the gap between micro- and macro-scale symmetric games.
\balance
\bibliographystyle{ACM-Reference-Format}
In our experiments, our baseline data structure stores this mapping using a hash table that maps a vector of integers representing the profile $\vec{s}$ to a vector of floats that stores the payoff $v_a(\vec{s})$ for each played action with $\vec{s}_a > 0$.
With $P$ players and $A$ actions, this table will store $\binom{P+A-1}{P}$ profiles.
Calculating a deviation payoff $u_a(\vec{\sigma})$ using this mapping requires iterating through all profiles to compute the sum from Equation~\eqref{eq:prof_dev_pays}, where the probabilities are given by this expression:
\begin{equation}
\label{eq:prof_prob}
\Pr_{\vec{\sigma}} \left( \vec{s} \mid a \right) = \reps{\vec{s} \mid a} \left( \vec{\sigma}_a \right)^{\vec{s}_a - 1} \prod_{a' \ne a} \left( \vec{\sigma}_{a'} \right)^{\vec{s}_{a'}}
\end{equation}
\noindent
$\reps{\vec{s} \mid a}$, calculated according to the multinomial from Equation~\eqref{eq:dev_reps}, gives the number of asymmetric orderings of the symmetric configuration $\left( \vec{s} \mid a \right)$, while the remaining terms gives the probability of the opponents jointly playing one such asymmetric ordering.
The Worked Examples Appendix shows the full mapping and a detailed walk-through of the deviation payoff calculation on an example 3-player, 3-strategy symmetric game for this version of the data structure as well as for all subsequent variants.
We strongly recommend stepping through these examples to help resolve any confusion that arises from the data structure and algorithm descriptions that follow.\footnote{\label{note:worked_examples}See Appendix A: Worked Examples. \url{https://arxiv.org/abs/2302.13232}}
\subsection{Array Vectorization}
The first idea for improving this deviation payoff calculation is to store the profiles $\vec{s}$ and the payoffs $v$ in a pair of two-dimensional arrays with parallel structure, denoted by a shared index $i$ over all profiles.
In both arrays, each row corresponds to an action and each column corresponds to a profile, resulting in arrays of size $A \times \binom{P+A-1}{P}$.
Extracting column $i$ of the profiles-array gives a profile vector $\vec{s}_i$ with the count for each action.
The corresponding column of the payoffs-array stores the payoff $v_a \left( \vec{s}_i \mid a \right)$ for each action $a$ where that profile has a non-zero count.
Of note, the array representation here (and in all subsequent data structure variants) hinders direct look-up of payoffs for a pure-strategy profile using the mapping.
However, this trade-off is clearly worthwhile for several reasons:
\begin{enumerate}
\item Calculating mixed-strategy deviation payoffs is a much tighter bottleneck than looking up pure-strategy payoffs.
\item Symmetric mixed-strategy equilibria in symmetric games are more often relevant than asymmetric pure-strategy equilibria (and are guaranteed to exist).
\item The profile-to-index mapping can be computed by a ranking algorithm for combinations-with-replacement \cite{Combinatorics}, or can be stored alongside the arrays for fast lookup with only linear additional memory.
\end{enumerate}
Using this array representation, we can vectorize each of the steps in the calculation of the deviation payoff $\vec{u}_a(\vec{\sigma})$.
We describe each of the following operations in terms of profile $i$, and broadcast those operations across array columns.
First, we can compute a mask $m$ identifying the profiles where action $a$ is played:
\begin{equation}
\label{eq:mi}
m_{i} \gets \vec{s}_{i\,a} \ne 0
\end{equation}
Then for each such profile, we can remove a player choosing $a$ to get the opponent-configuration $\vec{c}_i = (\vec{s}_{i} | a)$ by subtracting an indicator vector $\vec e_{a}$ for action $a$:
\begin{equation}
\label{eq:ci}
\vec{c}_i \gets \vec{s}_i - \vec e_{a}
\end{equation}
For each profile, the probability $p_i = \Pr \left( \vec{c}_i \right)$ of the opponent configuration is calculated by a multinomial to compute repetitions along with a broadcast-and-reduce to apply exponents and take the product, resulting in an array of configuration probabilities:
\begin{equation}
\label{eq:pi}
p_{i} \gets \reps{\vec{c}_{i}} \prod_{a^\prime} \left( \vec{\sigma}_{a^\prime} \right)^{\vec{c}_{i \, a^\prime}}
\end{equation}
Finally the deviation payoff for action $a$ can be computed by extracting row $a$ of the payoffs array, multiplying element-wise by the configuration probabilities, masking out the invalid configurations, and summing over the remaining profiles:\footnotemark[5]
\begin{equation*}
\vec{u}_{a}(\vec{\sigma}) \gets \sum_i m_{i}\ v_{a}\left(\vec{c}_i \right)\ p_{i}
\end{equation*}
Since these steps can be performed using only arithmetic primitives and indexed reduction, the computation is vectorizable in any numeric library.
Further vectorization over actions is in principle possible, but we defer this to later sections.
The degree of improvement from array vectorization will vary with the language used.
More importantly, it sets the stage for our subsequent innovations.
\subsection{Pre-computed Repetitions}
\label{sec:pre-reps}
The next improvement comes from recognizing that each time we compute a deviation-payoff vector $\vec{u}(\vec{\sigma})$, we make use of the $\reps{\vec{s} \mid a}$ value for every profile-action pair.
Since equilibrium search involves many, many deviation-payoff calculations, we can save significant work by pre-computing these repetitions and storing them in a third parallel array.
Then the probability calculation in Equation~\eqref{eq:pi} can extract row $a$ of this repetitions array and element-wise multiply by the array of products.
Our code supplement and the Worked Examples Appendix each include a version of this data structure that computes $\vec{u}(\vec{\sigma})$ in a single pass instead of computing $\vec{u}_a(\vec{\sigma})$ separately for each action, but we defer the detailed explanation of this vectorization to the following variant where it is dramatically simplified.
\subsection{Opponent Configurations}
\label{sec:opp-config}
A closer inspection of these parallel array structures reveals some redundancies: first the payoffs array contains a number of meaningless entries, since whenever $\vec{s}_a = 0$, the corresponding $v_a$ is masked out of the calculations.\footnotemark[5]
Second, the repetitions array contains a number of redundant entries: whenever two profiles differ by a single player's action,
that is $(\vec{s} \mid a) = (\vec{s}\,' \mid a')$ we will end up storing identical entries for $\reps{\vec{s} \mid a}$ and $\reps{\vec{s}\,' \mid a'}$.
Both of these issues can be avoided if we switch to storing opponent-configurations, with a shared index $j$ over all configurations.
This gives us two parallel arrays of size $A \times \binom{P+A-2}{P-1}$ for configurations and payoffs, and a $1 \times \binom{P+A-2}{P-1}$ repeats-array.
The configurations-array now stores each configuration $\vec{c}_j$ over $P-1$ opponents.
Each corresponding column of the repeats-array stores $\reps{\vec{c}_j}$, and the same column of the payoffs-array stores, for each action $a$, the payoff $v_a(\vec{c}_j)$ for a player choosing $a$ when their opponents play according to $\vec{c}_j$.
This also simplifies the deviation-payoffs calculation.
We compute probability $p_j = \Pr(\vec{c}_j)$ of a configuration in same way as before, except we can skip the steps from Equations \ref{eq:mi} and \ref{eq:ci}, instead simply accessing configuration $c_j$ from storage:
\begin{equation*}
p_j \gets \reps{\vec{c}_j} \prod_{a^\prime} \left( \vec{\sigma}_{a^\prime} \right) ^ {\vec{c}_{j\,a^\prime}}
\end{equation*}
The key difference in this equation is that we compute the probability of a configuration $j$, instead of the probability of a configuration derived from a masked profile $i$.
This change removes the need to mask payoffs and means that the configuration probabilities are identical for all deviations, meaning we do not have to perform a separate probability calculation for each action.
This single array of configuration-probabilities in turn simplifies vectorization over deviation actions, easily giving us the whole deviation-payoff vector at once.
To get deviation payoffs, we multiply the configuration probabilities $p_j$ by the payoffs-array (broadcasting over actions), and sum over the configurations dimension.
\begin{equation}
\label{eq:dev_conf}
\vec{u}(\vec{\sigma}) \gets \sum_j p_j \vec{v}_j
\end{equation}
Here $\vec{v}_j$ refers to a column of the payoffs-array, and summing these configuration-payoff vectors gives us a vector of deviation payoffs.
The code for this version is the easiest to understand, so despite our recommended further improvements, we include it in the Julia Code Appendix.\footnote{\label{note:julia_code}Implementation shown in Appendix B: Julia Code. \url{https://arxiv.org/abs/2302.13232}}
\subsection{Pre-Weighting by Repetitions}
The next improvement comes from the realization that by associativity we can re-group Equation~\eqref{eq:dev_conf} as follows.
\begin{equation}
\vec{u}_a(\vec{\sigma}) \gets \sum_{j} \Bigl( v_a \left( \vec{c}_j \right) \reps{ \vec{c}_j } \Bigr) \prod_{a'} \left( \vec{\sigma}_{a'} \right)^{\vec{c}_{j\,a'}}
\end{equation}
Which means that even though repetitions are logically part of the probability calculation, we can simplify our computations by storing them with the payoff values.
Specifically, we can combine the repeats-array into the payoffs-array by multiplying each payoff value by that configuration's repetitions, so that the entry in row $a$, column $j$ of the payoffs array stores the value:
\begin{equation*}
\reps{ \vec{c}_j } v_a \left( \vec{c}_j \right)
\end{equation*}
This operation can be performed once in advance, saving space and speeding up all subsequent deviation payoff calculations.\footnotemark[5]
\subsection{Log Transformation}
\label{sec:log-probs}
The combined effect of all the improvements so far is sufficient to allow reasonably efficient deviation payoff calculations for games with over 100 players, as long as the number of actions is kept small, but this poses a new problem: $\binom{32}{6,6,6,7,7} > 2^{63}$, meaning that the repetitions overflow a 64-bit integer for some profiles with $P=33$ and $A=5$.
We can solve this problem by working with log-probabilities, and this incidentally produces a slight computation speed-up by making use of arithmetic operations that consume fewer processor cycles when we transform exponents into multiplication and multiplication into addition.
Specifically, we can store the natural (or other base) log of the repetition-weighted-payoffs $\lambda$, and to calculate repetitions, we can use a log-gamma function to avoid overflows.
\begin{equation*}
\lambda_{ja} \gets \log \left( \reps{\vec{c}_j} \right) + \log \left( v_a(\vec{c}_j) \right)
\end{equation*}
This will not work if payoffs are negative, but since any positive affine transformation of utilities has no effect on incentives, we can transform any game into one with non-negative payoffs.
In fact, we find it useful under all of our game representations to transform the payoffs into a standardized range, to simplify hyperparameter tuning for various equilibrium-computation algorithms.
And of course these calculations can be vectorized as before with the exponential applied element-wise to an array with contribution-values for each configuration.
\begin{table}[b]
\small
\caption{For a given number of actions $A$, the largest number of players $P$ before \emph{repetitions} overflows a 64-bit integer.}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|}
\cline{2-10}
$A$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline
$\max P$ & 67 & 44 & 36 & 32 & 29 & 27 & 26 & 25 & 25 \\ \cline{2-10}
\multicolumn{9}{c}{} \\ \cline{2-10}
$A$ & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & $\ge \text{19}$ \\ \hline
$\max P$ & 24 & 23 & 23 & 23 & 23 & 22 & 22 & 22 & 21 \\ \cline{2-10}
\end{tabular}
\center
\label{tab:int_overflow}
\end{table}
We can now compute deviation payoffs using this representation by first computing each configuration's contribution $\gamma_j$ to the deviation payoff in log-space.
\begin{equation}
\label{eq:log_cont}
\gamma_j \gets \exp \left( \lambda_j + \sum_{a^\prime} \vec{c}_{ja^\prime} \log \vec{\sigma}_{a^\prime} \right)
\end{equation}
and then summing over all configurations.\footnotemark[5]
\begin{equation*}
\vec{u}_a(\vec{\sigma}) \gets \sum_j \gamma_j
\end{equation*}
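Putting the pieces together, a minimal NumPy sketch of the log-transformed computation is shown below (the array names are our own assumptions, and the paper's actual implementation is in Julia).
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def build_log_table(payoff_table, config_table):
    """lambda[a, j] = log(reps(c_j)) + log(v_a(c_j)); assumes non-negative payoffs."""
    n_opp = config_table.sum(axis=0)
    log_reps = gammaln(n_opp + 1) - gammaln(config_table + 1).sum(axis=0)
    return log_reps[None, :] + np.log(payoff_table)

def deviation_payoffs(log_table, config_table, mixture):
    """u_a(sigma) = sum_j exp(lambda[a, j] + sum_{a'} c[a', j] * log sigma_{a'})."""
    # assumes strictly positive mixture probabilities (handle log(0) separately otherwise)
    log_probs = config_table.T @ np.log(mixture)          # shape (C,)
    gamma = np.exp(log_table + log_probs[None, :])        # contributions, shape (A, C)
    return gamma.sum(axis=1)                              # shape (A,)
\end{verbatim}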
\subsection{GPU Acceleration}
Now that our deviation-payoff calculations are done as a sequence of simple mathematical operations broadcast across large arrays of floating-point data, an obvious way to accelerate them is to move the computation to a graphics processor.
Most modern programming languages have libraries that make GPU translation manageable for operations as simple as ours, and specifically in Julia the translation is trivial, requiring only that we choose an array data type that moves the data to the GPU; the code for CPU and GPU versions of our final data structure is otherwise identical.\footnotemark[6]
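The paper's GPU port is written in Julia; purely as an assumption about how one might mirror it in Python, the same broadcast arithmetic can be dispatched to the GPU by swapping NumPy for CuPy and using 32-bit arrays, as in this rough sketch.
\begin{verbatim}
import numpy as np
try:
    import cupy as cp      # GPU array library; fall back to NumPy if unavailable
    xp = cp
except ImportError:
    xp = np

def deviation_payoffs(log_table, config_table, mixture):
    """Same arithmetic on CPU or GPU; only the array module (and 32-bit dtype) changes."""
    log_table = xp.asarray(log_table, dtype=xp.float32)
    config_table = xp.asarray(config_table, dtype=xp.float32)
    mixture = xp.asarray(mixture, dtype=xp.float32)
    log_probs = config_table.T @ xp.log(mixture)
    return xp.exp(log_table + log_probs[None, :]).sum(axis=1)
\end{verbatim}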
\subsection{Batch Processing}
The final improvement we implemented requires no change to the preceding data structure, but instead uses it more efficiently.
If memory allows, we can take even greater advantage of SIMD operations on the GPU by computing deviation payoffs for a batch containing multiple mixtures.\footnotemark[6]
This is useful because many of the algorithms we use to compute Nash equilibria are forms of local search where it can help to initialize them from many starting points.
For example, in both replicator dynamics and gain-gradient descent, we generally restart the algorithm as many as 100 times from different initial mixtures, so by parallelizing deviation-payoff calculations, we can parallelize multiple restarts of the algorithm.
This adds an extra dimension to the intermediate arrays constructed by the deviation payoff computation, giving them dimension actions~$\times$ configurations $\times$ mixtures.
This is mainly helpful with small games, since large games can occupy all GPU execution units with the deviation-payoff calculation for a single mixture.
If batch processing is used on large games, the number of mixtures must be carefully chosen to not exceed available GPU memory.
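A sketch of the batched variant (again with assumed array names): deviation payoffs for a whole batch of mixtures are produced by one broadcast, and the intermediate array has the actions $\times$ configurations $\times$ mixtures layout described above.
\begin{verbatim}
import numpy as np

def deviation_payoffs_batch(log_table, config_table, mixtures):
    """Batched deviation payoffs: mixtures has shape (A, B); result has shape (A, B)."""
    log_probs = config_table.T @ np.log(mixtures)                    # (C, B)
    gamma = np.exp(log_table[:, :, None] + log_probs[None, :, :])    # (A, C, B) intermediate
    return gamma.sum(axis=1)                                         # (A, B)
\end{verbatim}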
\subsection{Deviation Derivatives}
For any of the data structure variants we describe, it is possible to calculate deviation derivatives, but here we describe the approach only for the final log-transformed variants.
Computing the Jacobian with respect to the mixture probability $\vec{\sigma}_s$ only requires a small addition to the deviation payoff contributions computed in equation~\eqref{eq:log_cont}; we multiply by $\frac{\vec{c}_{js}}{\vec{\sigma}_{s}}$
over a new dimension $s$, before summing across configurations.\footnotemark[5]
\begin{equation*}
\devderiv{\vec{\sigma}}{a}{\vec{\sigma}_s} \gets \sum_j \frac{\vec{c}_{j s}}{\vec{\sigma}_{s}} \gamma_j
\end{equation*}
This calculation can be fully vectorized using similar operations but adds a dimension since we have a partial derivative for each \emph{pair} of actions.\footnotemark[6]
This extra dimension is the main reason gradient descent runs slower than replicator dynamics in our experiments.
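A sketch of the Jacobian computation under the same assumed layout: starting from the per-configuration contributions $\gamma$, each partial derivative with respect to $\vec{\sigma}_s$ scales $\gamma$ by $\vec{c}_{js}/\vec{\sigma}_s$ before summing over configurations.
\begin{verbatim}
import numpy as np

def deviation_jacobian(log_table, config_table, mixture):
    """J[a, s] = d u_a / d sigma_s for a single mixture; returns an (A, A) array."""
    log_probs = config_table.T @ np.log(mixture)           # (C,)
    gamma = np.exp(log_table + log_probs[None, :])         # per-configuration contributions, (A, C)
    scale = config_table / mixture[:, None]                # c[s, j] / sigma_s, shape (A, C)
    return gamma @ scale.T                                 # J[a, s] = sum_j gamma[a, j] * scale[s, j]
\end{verbatim}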
\section{Validation and Experiments}
\subsection{Size Limits}
\label{sec:sizes}
\begin{figure}[t]
\center
\includegraphics[width=\columnwidth]{figures/log_mem_limit_32.png}
\caption{GPU memory required to store the final version of our data structure. Batch-computation of deviation payoffs increases memory footprint by a linear factor.}
\label{fig:mem_limit}
\end{figure}
The first constraint on the size of the data structures comes from integer overflows when calculating repetitions.
Table~\ref{tab:int_overflow} shows the upper limit in terms of $P$ and $A$ beyond which at least one profile will overflow a signed 64-bit integer.
For example, if we want to represent a 5-action game with more than 32 players, we must use the log-transform to ensure that we calculate probabilities correctly.
Note that using unsigned integers usually makes no difference and never allows more than one extra player.
This raises the question of whether we sacrifice precision when using the log-transform.
To test this, we generated 10 random bipartite action-graph games for every combination of $2 \le P \le 512$ and $2 \le A \le 20$ for which the size of our data structures would be below 1GiB.
We then represented these games using the CPU (64-bit) and GPU (32-bit) versions of our data structure, and calculated deviation payoffs for 1000 mixtures per game and recorded the largest error in the calculated deviation payoffs.
These errors were tiny, except in games with hundreds of players in the 32-bit representation, and even then the scale of the errors was at least 7 orders of magnitude smaller than the payoff range of the game.
Full results are shown in the supplement.\footnote{\label{note:supplementary_figures}See Appendix D: supplementary figures. \url{https://arxiv.org/abs/2302.13232}}
With these results, the question of what games we can represent comes down to what can fit in (main or GPU) memory.
Figure~\ref{fig:mem_limit} shows the size of the GPU data structure as $P$ increases for various values of $A$.
A similar figure for 64-bit arrays appears in the supplement.\footnotemark[4]
Note that calculating deviation payoffs requires storing intermediate arrays of similar size to the data structure, so game size should be restricted to below half of available memory (or smaller if we want to calculate deviation payoffs in batches).
Note that these sizes mean that it is entirely reasonable to store a 100-player, 6-action game.
\subsection{Timing Comparisons}
\label{sec:timing}
\begin{figure}
\center
\includegraphics[width=\columnwidth]{figures/deviation_payoff_timing_4A.png}
\caption{Time required to compute deviation payoffs for 1024 mixtures in 4-action games using each data structure variant. Lines stop when either an integer overflow is encountered or more than 1GB of memory is required. Similar plots for $A=6$ and $A=8$ appear in Appendix D.}
\label{fig:dev_pay_timing}
\end{figure}
Figures \ref{fig:dev_pay_timing} and \ref{fig:eq_timing} show our main results: our data structures produce significant speedup in computing deviation payoffs and therefore running Nash-finding algorithms.
Timing comparisons were run using an AMD Threadripper 2990WX CPU and NVIDIA RTX 2080-Ti GPUs.
Figure~\ref{fig:dev_pay_timing} shows the time to compute deviation payoffs using each of the data structures described in the previous section.
The first several lines end at $P=32$, since this is near the int-overflow limit for $A=4$.
The batch-processing line ends at $P=128$, because operating on a batch of 64 mixtures requires much more memory.
Note that our proposed data structure can compute deviation payoffs in a game with 384 players faster than the baseline data structure can handle $P=12$.
Many more deviation-payoff timing comparisons including 6- and 8-action games appear in the supplement.\footnotemark[4]
Figure~\ref{fig:eq_timing} shows a similar result for Nash-finding, but focusing only on our best data structure, and using batch sizes adapted to available memory.
First, note that replicator dynamics using our data structure outperforms the previous implementation by four orders of magnitude.
Gradient descent is consistently an order of magnitude slower than replicator dynamics, which made it impractical, and therefore generally not implemented, with older data structures.
But with our data structure that slowdown can often be acceptable because gradient descent and replicator dynamics frequently identify some distinct equilibria (as illustrated in Figure~\ref{fig:nash_traces}), and so in cases where replicator dynamics fails or more equilibria are desired it is entirely reasonable to run both algorithms.
For example, with $A=4$ and $P=512$, we can perform 100 runs of replicator dynamics or gradient descent in a reasonable time-frame.
Timing results shown in the supplement for $A=6$ and $A=8$ are broadly similar.\footnotemark[7]
\begin{figure}[t]
\center
\includegraphics[width=\columnwidth]{figures/nash_timing_4A.png}
\caption{Time to compute Nash equilibria in 4-action games with 100 starting mixtures and 1000 iterations. Replicator dynamics is considerably faster than gradient descent (and could get away with fewer iterations), but it is often worthwhile to run both. Our data structure lets us represent and solve much larger games than the previous state-of-the-art. Similar plots for $A=6$ and $A=8$ appear in Appendix D.}
\label{fig:eq_timing}
\end{figure}
\section{Extensions}
\subsection{Many Strategies}
\label{sec:many-strategies}
The size of our data structures and therefore the time to compute deviation payoffs scales reasonably similarly with respect to the number of players $P$ and the number of actions $A$.
The configurations and payoffs arrays both have size $A \times \binom{P+A-2}{P-1}$, which is dominated by the binomial that grows exponentially in the smaller of $P$ and $A$.
As a result, it is generally possible to represent games with a large number of players but a small number of actions or a large number of actions but a small number of players, with the caveat that actions contribute an extra linear factor.
Unfortunately, equilibrium computation does not scale equally well in players and actions.
Identifying symmetric mixed-strategy Nash equilibria means searching for specific points in the mixed-strategy simplex, whose dimension grows with the number of actions.
The various complete algorithms are generally hopelessly slow with a large number of actions, and the local search methods we prefer are much less likely to identify an equilibrium on any given run.
This means that solving symmetric games with many players is far more feasible than solving ones with many actions.
To handle games with many actions, we can take inspiration from \citet{simple_search}, and search for equilibria with small support sets.
This requires solving a subgame defined by restricting the game to a subset of the actions, and then checking whether those subgame solutions have beneficial deviations in the full action set.
To facilitate this, we can use a variant of our data structure where we include additional rows in the payoffs-array representing actions outside the support set, but make no change to the configurations-array.
Then for a mixture in the support set, we can compute deviation payoffs for all actions against that mixture by expanding the step where we broadcast the multiplication of configuration probabilities and payoffs.
If we determine that the current subgame set does not contain a Nash equilibrium, we can update the support set replacing weaker strategies with stronger ones, as measured by deviation gain against the subgame's candidate solutions (or other metrics).
This will iteratively search the space of supports and eventually identify a small-support equilibrium if any exist.
This approach of constructing complete subgames plus payoff information for unilateral deviations is already widely used in empirical game settings \cite{EGTA} where determining payoffs for a single configuration is expensive.
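A high-level sketch of this support-expansion loop appears below; the helper names \emph{solve\_subgame} and \emph{deviation\_payoffs\_all} are placeholders for routines built on the data structures above, not part of any existing library.
\begin{verbatim}
def find_small_support_equilibrium(game, initial_support, max_iters=50, tol=1e-3):
    """Iteratively solve restricted games until a candidate has no beneficial deviation.

    `game`, `solve_subgame`, and `deviation_payoffs_all` are abstract placeholders;
    `candidate` is assumed to map supported actions to probabilities.
    """
    support = set(initial_support)
    for _ in range(max_iters):
        candidate = solve_subgame(game, support)            # equilibrium of the restricted game
        dev = deviation_payoffs_all(game, candidate)        # payoffs for ALL actions vs. candidate
        expected = sum(candidate[a] * dev[a] for a in support)
        gains = {a: dev[a] - expected for a in game.actions if a not in support}
        if not gains or max(gains.values()) <= tol:
            return candidate                                 # no beneficial deviation: done
        best = max(gains, key=gains.get)
        weakest = min(support, key=lambda a: candidate.get(a, 0.0))
        support.add(best)
        support.discard(weakest)                             # swap in the profitable deviation
    return None
\end{verbatim}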
\subsection{Role-Symmetric Games}
\label{sec:role-symmetry}
Outside of fully-symmetric games, many large normal-form games exhibit role-symmetry, where players can be grouped into $R$ roles---like buyers and sellers or attackers and defenders---and within each role, all players are indistinguishable.
To solve role symmetric games, analysts typically search for role-symmetric Nash equilibria, where all players in a given role play the same mixed strategy.
Role symmetric mixed-strategy profiles can be represented by a vector that concatenates the mixed strategy played by each role, and the deviation payoff vector of the same dimension gives the expected utility of a unilateral deviator selecting each action.
These deviation payoffs and their derivatives remain central to computing role-symmetric equilibria, and almost all of the techniques we propose for improving representations of symmetric games also apply under role symmetry.
Unfortunately, representing opponent configurations gets trickier with more than one role, because the set of opponents is different for players belonging to different roles.
For example, in a game with 10 buyers and 10 sellers, when computing deviation payoffs for buyer-actions, opponent configurations include 9 buyers and 10 sellers, whereas for seller actions, there are 10 buyer-opponents and 9 seller-opponents.
We could resolve this by building separate opponent-configuration tables for each role $r$, where the arrays for role $r$ are based on configurations with $P_r - 1$ opponents in the same role and $P_{r'}$ opponents in other roles.
This gives us $R$ pairs of configuration- and payoff-arrays, where the arrays for role $r$ have $\binom{P_{r} + A_{r} - 2}{P_{r}-1} \left(\prod_{r' \ne r} \binom{P_{r'} + A_{r'} - 1}{P_{r'}} \right) $ columns, with $\left(\sum_{r=1}^{R} A_r \right)$ rows for the configurations-array and $A_r$ rows for the payoffs-array.
We could also dispense with the opponent-configuration approach and instead store full $P$-player profiles.
This results in one profile-array and one payoff-array that each have size $ \left(\sum_{r=1}^{R} A_r \right) \times \left(\prod_{r=1}^{R} \binom{P_r + A_r - 1}{P_r} \right) $, but requires us to return to masking-based dev\-iation-payoff computations.
Thus for multi-role games, we have a choice between slightly smaller storage requirements for a profile-based data structure or slightly simpler computations for a config\-uration-based representation;
the \texttt{gameanalysis.py} library\footnotemark[2] employs the former option.
Under either of these approaches, all of our other optimizations can still apply, and as long as the number of roles is small both options provide for reasonably efficient deviation-payoff calculations.
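The trade-off between the two layouts can be made concrete by computing their column counts; the small helper below (names and usage are our own, for illustration only) evaluates both expressions quoted above.
\begin{verbatim}
from math import comb, prod

def config_based_columns(players, actions):
    """Total columns across the R per-role tables (P_r - 1 same-role opponents)."""
    R = len(players)
    total = 0
    for r in range(R):
        cols = comb(players[r] + actions[r] - 2, players[r] - 1)
        cols *= prod(comb(players[q] + actions[q] - 1, players[q])
                     for q in range(R) if q != r)
        total += cols
    return total

def profile_based_columns(players, actions):
    """Columns of the single full-profile table."""
    return prod(comb(p + a - 1, p) for p, a in zip(players, actions))

# e.g. a game with 10 buyers and 10 sellers, 3 actions per role:
# config_based_columns([10, 10], [3, 3]), profile_based_columns([10, 10], [3, 3])
\end{verbatim}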
\subsection{Action-Graph Games}
Action-graph games \cite{action-graph_games} represent certain games compactly by storing, for each node, a mapping from neighborhood configurations to payoffs.
Each action (for any player) corresponds to a node in a graph, and an action's payoff depends only on the number of players choosing each of the adjacent actions.
This means that action-graph games are role-symmetric, with any group of players who share an action set belonging to the same role; when all players have the same action set, action-graph games are symmetric.
In the symmetric case, we can extend our data structures to efficiently compute deviation payoffs in action-graph games with a pair of arrays for each action storing that action's neighborhood configurations and repetition-weighted-payoffs.
The key augmentation is to include, as part of the representation of an opponent-configuration, an extra ``action'' capturing the number of players choosing any action outside the neighborhood.
Then when computing the deviation payoff for an action, we can sum the probabilities for each non-adjacent action to get the out-of-neighborhood probability.
This means that all of our data structure improvements can be applied to symmetric AGG-$\emptyset$ games.
For role-symmetric AGG-$\emptyset$ games, our role-symmetric data structure variants could be applied after splitting up action nodes shared by more than one role.
For action-graph games with contribution-independent function nodes, many of our data-structure improvements can be applied, but the representation of these games tends to be sufficiently compact that the potential gains may be small.
The \texttt{gameanalysis.jl} library\footnotemark[1] implements symmetric action-graph games with a bipartite graph between actions and contribution-independent functions as one of the tools for generating interesting random symmetric game instances.
This implementation makes use of vectorization, opponent configurations, pre-computed repetitions, and the log transformation.
\subsection{Multi-Threading}
The next extension we would like to explore would focus on improving the CPU version of our data structure to capture some of the parallelism achievable on the GPU.
At present, our deviation-payoff calculations are single-threaded. While we might not expect to outperform SIMD array operations, there should still be room for significant speedup on multi-core systems by parallelizing a batch of deviation-payoff computations across threads, especially for games in the multi-gigabyte size range that can stress GPU memory but still fit in system RAM.
\section{Conclusion}
Our validation experiments show that the data structures we propose are capable of storing (Section~\ref{sec:sizes}) and efficiently computing (Section~\ref{sec:timing}) deviation payoffs in symmetric games with very large numbers of players.
By using incomplete-but-effective search algorithms we are consistently able to identify symmetric Nash equilibria.
In combination with the iterative exploration approach described in Section~\ref{sec:many-strategies} we can also find small-support equilibria in games with many actions.
These results dramatically out-class existing solvers, enabling analysis of a much wider range of symmetric games, and closing the gap between micro- and macro-scale symmetric games.
\balance
\bibliographystyle{ACM-Reference-Format}
|
{
"arxiv_id": "2302.13265",
"language": "en",
"timestamp": "2023-02-28T02:14:38",
"url": "https://arxiv.org/abs/2302.13265",
"yymm": "2302"
} | \section{Introduction} \label{section-1}
Historically, the phenomenon of electron-positron pair production from the non-perturbative vacuum in the presence of an electric field of strength $eE$ (where $e$ is the electric charge) was first introduced by Fritz Sauter~\cite{Sauter:1931zz} and by Heisenberg and Euler~\cite{Heisenberg:1936nmg}. The complete theoretical framework, however, was provided by Julian Schwinger within quantum electrodynamics (QED)~\cite{Schwinger:1951nm}, and the phenomenon is therefore named after him as the Schwinger effect or Schwinger mechanism. It has since been widely studied in quantum chromodynamics (QCD); see, for example, Refs.~\cite{Yildiz:1979vv, Cox:1985bu, Suganuma:1990nn,Suganuma:1991ha,Tanji:2008ku}. QCD, the theory of the strong color force among quarks and gluons, possesses two special properties: asymptotic freedom (quarks interact weakly at short distances or high energy scales) and quark confinement (quarks interact strongly at large distances or low energy scales).
The Schwinger effect in QCD is thus the production of quark-antiquark pairs in the presence of a strong electric field $eE$. The pair production can be quantified by the Schwinger pair production rate $\Gamma$, defined as the probability per unit time and per unit volume that a quark-antiquark pair is created by the constant electric field $eE$. \\Another important property of low-energy QCD is
dynamical chiral symmetry breaking, which is responsible for the dynamical mass generation of the quarks. A strong electric field tends to restore the dynamical chiral symmetry and, as a result, the dynamically generated quark mass is suppressed as $eE$ increases. This can be understood by noting that, as the intensity of the electric field $eE$ increases, the quark and antiquark of a pair are pushed closer together and thus probe the asymptotic-freedom regime, where the interaction strength is reduced. Such a phenomenon is sometimes referred to as the chiral electric inhibition effect~\cite{Klevansky:1988yw, Suganuma:1990nn, Klimenko:1992ch, Klevansky:1992qe, Babansky:1997zh, Cao:2015dya, Tavares:2018poq}, the chiral electric rotation effect~\cite{Wang:2017pje}, or {inverse electric catalysis} (IEC)~\cite{Ruggieri:2016lrn, Tavares:2019mvq, Ahmad:2020ifp}. The dynamical chiral symmetry breaking--restoration and confinement--deconfinement phase transition is of second order when the bare quark mass $m=0$ (i.e., in the chiral limit) and a cross-over when $m\neq0$. It has been argued in Refs.~\cite{Cao:2015dya, Tavares:2018poq, Tavares:2019mvq} that the pair production rate $\Gamma$ increases quickly near a pseudo-critical electric field $eE_c$, where the chiral symmetry is restored. \\
The study of the effect of the electric field on the chiral phase transition plays a significant role in heavy-ion collision experiments. In such experiments, electric and magnetic fields of the same order of magnitude ($\sim 10^{18}$ to $10^{20}$ Gauss)~\cite{Bzdak:2011yy, Deng:2012pc, Bloczynski:2012en, Bloczynski:2013mca} are produced in event-by-event Au $+$ Au collisions at RHIC-BNL and in non-central Pb $+$ Pb collisions at ALICE-LHC. In addition, in experiments with asymmetric
Cu $+$ Au collisions, a strong electric field is believed to be created in the overlapping region~\cite{Hirono:2012rt, Voronyuk:2014rna, Deng:2014uja}. This happens because the two nuclei carry different numbers of electric charges, which produces a charge dipole in the early stage of the collision. Other phenomena, such as the chiral electric separation effect~\cite{Huang:2013iia, Jiang:2014ura} and the particle polarization effect~\cite{Karpenko:2016jyx, Xia:2018tes, Wei:2018zfb}, may emerge due to the generation of vector and/or axial currents in the presence of strong electromagnetic fields. \\ It is illustrative to estimate the number of charged quark-antiquark pairs created in the QGP produced in a heavy-ion collision, because in the QGP phase the dynamical chiral symmetry is restored and deconfinement occurs.
According to advanced numerical simulations, the electric field created by fluctuations in Au $+$ Au collisions at center-of-mass energy $\sqrt{s}=200$ GeV is of the order $eE \sim m^{2}_{\pi}$, while in Pb $+$ Pb collisions at $\sqrt{s}=2.76$ TeV it is of the order $eE\sim20 m^{2}_{\pi}$~\cite{Deng:2012pc}. Assuming a space-time volume of the QGP of the order $\sim (5 {\rm fm})^4$, the total pair creation numbers are $N_{\rm RHIC}=3.5$ and $N_{\rm LHC}=1400$ \cite{Cao:2015xja}, which gives a clear indication of the importance of Schwinger pair production of quarks and antiquarks in heavy-ion collisions.\\
It is well understood that QCD exhibits confinement and dynamical chiral symmetry breaking for a small number of light quark flavors $N_f$. For larger $N_f$, however, lattice QCD simulations~\cite{LSD:2014nmn,Hayakawa:2010yn,Cheng:2013eu,Hasenfratz:2016dou,LatticeStrongDynamics:2018hun},
as well as continuum methods of QCD~\cite{bashir2013qcd,Appelquist:1999hr,Hopfer:2014zna,Doff:2016jzk,Binosi:2016xxu,Ahmad:2020jzn,Ahmad:2020ifp,Ahmad:2022hbu}, predict that there is a critical value
$N^{c}_{f}\approx8$
above which the chiral symmetry is restored and quarks become
unconfined. It has been discussed in detail in Ref.~\cite{Ahmad:2020ifp} that the critical number of flavors $N^{c}_{f}$ decreases with increasing temperature $T$ and increases with increasing magnetic field $eB$. The QCD phase diagram at finite temperature $T$ and density $\mu$ also shrinks as the number of light quark flavors increases; see, for instance, Ref.~\cite{Ahmad:2020jzn}. Besides a higher number of light quark flavors, QCD with a larger number of colors $N_c$ in the
fundamental $SU(N_c)$ representation also plays a significant role. It has been demonstrated in Refs.~\cite{Ahmad:2020ifp, Ahmad:2020jzn} that the
chiral symmetry is dynamically broken above a critical value $N^{c}_{c}\approx2.2$; as a result, the dynamically generated mass increases
near and above $N^{c}_{c}$. Increasing the number of colors also raises the critical temperature $T_c$ and the critical chemical potential $\mu_c$ of the chiral phase transition in the QCD phase diagram~\cite{Ahmad:2020ifp}. Both $N_c$ and the magnetic field $eB$ strengthen the generation of the dynamical masses of the quarks~\cite{Ahmad:2020jzn}.\\
It is therefore of interest to study dynamical chiral symmetry breaking and the Schwinger effect in a pure electric field background for a higher number of light quark flavors $N_f$ and a larger number of colors $N_c$, which, as far as we know, has not yet been studied. Such a study is relevant not only on theoretical grounds but also for heavy-ion collision experiments, where a large number of light-flavor quark-antiquark pairs are produced.
Our main objective in this work is to study the quark-antiquark pair production rate in the presence of a pure electric field $eE$ for a higher number of light quark flavors $N_f$ and colors $N_c$. For this purpose, we use the Schwinger-Dyson equation in the rainbow-ladder truncation, in the Landau gauge, with the symmetry-preserving flavor-dependent confining vector-vector contact interaction model of quarks \cite{Ahmad:2020ifp},
and the Schwinger proper-time regularization scheme \cite{Schwinger:1951nm}. The pseudo-critical electric field strength $eE_c$, the critical number of flavors $N^{c}_{f}$, and the critical number of colors $N^{c}_{c}$ for chiral symmetry breaking-restoration can be obtained from the peak of the corresponding gradient of the dynamical quark mass, whereas confinement-deconfinement can be read off from the peak of the corresponding gradient of the confining length scale~\cite{Ahmad:2016iez, Ahmad:2020ifp, Ahmad:2020jzn}. It
should be noted that chiral symmetry restoration and
deconfinement occur simultaneously in this model~\cite{Marquez:2015bca, Ahmad:2016iez, Ahmad:2020ifp, Ahmad:2020jzn}. \\
This manuscript is organized as follows: In Sec.~2, we present the general formalism for the flavor-dependent contact interaction model and the QCD gap equation. In Sec.~3, we discuss the gap equation and the Schwinger pair production rate in the presence of the electric field $eE$ for a large number of flavors $N_f$ and colors $N_c$. In Sec.~4, we present the numerical solution of the gap equation and the Schwinger pair production rate in the presence of $eE$ for a higher number of $N_f$ and $N_c$. In the last Sec.~5, we present the summary and future perspectives of our work.
\section{General formalism and flavor-dependent Contact Interaction model} \label{section-2}
The Schwinger-Dyson equation (SDE) for the dressed-quark propagator $S$ is given by:
\begin{eqnarray}
S^{-1}(p)&=S^{-1}_{0}(p) + \Sigma(p)\,,\label{CI1}
\end{eqnarray}
where $S_{0}(p)=(\slashed{p}+ m + i\epsilon)^{-1}$ is the bare quark propagator and $S(p)=(\slashed{p}+ M + i\epsilon)^{-1}$ is the dressed quark propagator. The $\Sigma(p)$ is the self-energy, given by
\begin{eqnarray}
\Sigma(p)=\int \frac{d^4k}{(2\pi)^4} g^{2}
D_{\mu\nu}(q)\frac{\lambda^a}{2}\gamma_\mu S(k)
\frac{\lambda^a}{2}\Gamma_\nu(p,k)\,,\label{CI2}
\end{eqnarray}
where $\Gamma_\nu (p,k)$ is the dressed quark-gluon vertex and $g^{2}$ is the QCD coupling constant. The
$D_{\mu\nu}(q)= D(q)(\delta_{\mu\nu} -\frac{q_{\mu} q_{\nu}}{q^2})$ is the gluon propagator in the Landau gauge, where $\delta_{\mu \nu}$ is the metric tensor in Euclidean space, $D(q)$ is the gluon scalar function, and $q=k-p$ is the gluon four-momentum.
The $m$ is the current quark mass, which may be set equal to zero (i.e., $m=0$) in the chiral limit. The $\lambda^a$'s are the
Gell-Mann matrices, which in the ${\rm SU(N_c)}$ representation satisfy the following identity:
\begin{eqnarray}
\sum^{8}_{a=1}\frac{{\lambda}^a}{2}\frac{{\lambda}^a}{2}=\frac{1}{2}\left(N_c - \frac{1}{N_c} \right)I, \label{CI3}
\end{eqnarray}
Here $I$ is the unit matrix. In this work, we use the symmetry-preserving flavor-dependent confining contact interaction model~\cite{Ahmad:2020jzn, Ahmad:2022hbu} for the gluon propagator (in Landau gauge) in the infrared region, where the gluons dynamically acquire a mass $m_{g}$~\cite{Langfeld:1996rn, Cornwall:1981zr,Aguilar:2015bud,GutierrezGuerrero:2010md, Kohyama:2016obc}:
\begin{eqnarray}
g^2 D_{\mu \nu}(k) &=& \frac{4 \pi
\alpha_{\rm ir}}{m_G^2}\sqrt{1 - \frac{(N_{f}-2)}{\mathcal{N}_{f}^{c}}} \delta_{\mu \nu} =\delta_{\mu \nu} \alpha_{\rm eff} \sqrt{1 - \frac{(N_{f}-2)}{\mathcal{N}_{f}^{c}}}\,,\label{CI4}
\end{eqnarray}
where $\alpha_{\rm ir}=0.93\pi $ is the infrared-enhanced interaction strength parameter and $m_g=800$ MeV is the gluon mass scale~\cite{Boucaud:2011ug}. The quantity $\mathcal{N}_{f}^{c}=N^c_{f}+\eta$ is a guess value for the critical number of
flavors. In Refs.~\cite{Ahmad:2020jzn, Ahmad:2022hbu}, the value of $\eta$ was varied in the range $1.8-2.3$ to obtain the desired critical number $N^{c}_{f}\approx8$ above which the dynamical chiral symmetry is restored and deconfinement occurs. It has been
argued in Ref.~\cite{Ahmad:2020jzn} that the parameter $\eta$ appears
because of the factor $(N_f-2)$ in Eq.~(\ref{CI4}).\\
With the particular choice of the flavor-dependent model Eq.~(\ref{CI4}), the dynamical quark mass function is merely a constant and
the dressed quark propagator takes
a very simple form~\cite{Ahmad:2015cgh}:
\begin{eqnarray}
S^{-1}(p)=i\gamma\cdot p+M_f\;.\label{CI5}
\end{eqnarray}
This is because the wave function renormalization trivially
tends to unity in this case, and the quark mass function $M$
becomes momentum independent:
\begin{equation}
M_f=m_f+\frac{\alpha_{\rm eff} \alpha^{N_c}_{N_f}}{2}\int^\Lambda \frac{d^4k}{(2\pi)^4} {\rm Tr}[S_f(k)]\;.\label{CI6}
\end{equation}
Here $M_f$ is the dynamical mass and $\alpha^{N_c}_{N_f}=\sqrt{1 - \frac{(N_{f}-2)}{\mathcal{N}_{f}^{c}}}\left(N_c -\frac{1}{N_c}\right)$. After simplifying Eq.~(\ref{CI6}), we have
\begin{eqnarray}
M_f=m_f+2\alpha_{\rm eff} \alpha^{N_c}_{N_f}\int \frac{d^4k}{(2\pi)^4} \frac{M_f}{k^2+M_f^2}\;.\label{CI7}
\end{eqnarray}
The quark-antiquark condensate, which serves as an order parameter for the dynamical chiral symmetry breaking in this truncation, can be written as
\begin{eqnarray}
-\langle \bar{q} q\rangle= \frac{M_f-m_f}{2\alpha_{\rm eff}}\;.\label{CI8}
\end{eqnarray}
Using $d^4 k= (1/2) k^2 dk^2 \sin ^2 \theta d\theta \sin \phi d \phi d\psi $, performing the trivial angular integrations, and using the variable $s=k^2$ in Eq.~(\ref{CI7}), we have
\begin{eqnarray}
M_f=m_{f}+\frac{\alpha_{\rm eff}\alpha^{N_c}_{N_f}}{8\pi^2}\int^{\infty }_{0}
ds\frac{s\,M_f}{s+M_f^2} \,. \label{CI9}
\end{eqnarray}
The integral in Eq.~(\ref{CI9}) is divergent and must be regularized. Here we use the Schwinger proper-time regularization procedure~\cite{Schwinger:1951nm}. In this procedure, the integrand's denominator is exponentiated and, in addition to the conventional ultraviolet cut-off normally used in NJL-model studies, an infrared cut-off is introduced.
Accordingly, confinement is implemented through this infrared cut-off~\cite{Ebert:1996vx}. The advantages of adopting this regularization procedure are that the quadratic and logarithmic divergences are removed and that the axial-vector Ward-Takahashi identity~\cite{Ward:1950xp, Takahashi:1957xn} is satisfied.
With this prescription, the integrand's denominator in Eq.~(\ref{CI9}) becomes:
\begin{eqnarray}
\frac{1}{s+M^{2}_{f}}&=&\int^{\infty }_{0} d\tau {\rm e}^{-\tau(s+M^{2}_{f})}
\rightarrow
\int^{\tau_{ir}^2}_{\tau_{uv}^2} d\tau
{\rm e}^{-\tau(s+M^{2}_{f})} \nonumber\\ &=&
\frac{ {\rm e}^{-\tau_{uv}^2(s+M^{2}_{f})}-{\rm e}^{-\tau_{ir}^2(s+M^{2}_{f})}}{s+M^{2}_{f}}\;. \label{CI10}
\end{eqnarray}
Here, $\tau_{uv}=\Lambda^{-1}_{uv}$ is an ultraviolet regulator, which plays a dynamical role and sets the scale for all dimensional quantities.
The $\tau_{ir}=\Lambda^{-1}_{ir}$ is the infrared regulator, whose nonzero value implements confinement by ensuring the absence of quark production thresholds~\cite{Roberts:2007jh}. Hence $\tau_{ir}$ is referred to as the confinement scale~\cite{Ahmad:2016iez}. From Eq.~(\ref{CI10}), it is clear that the original pole at $s=-M^2$ is canceled by the numerator.
Thus the propagator is free of both real and complex poles, which is consistent with the definition of confinement, i.e., ``an excitation
described by a pole-less propagator would never reach its
mass-shell''~\cite{Ebert:1996vx}. \\
After integration over `s', the gap equation Eq.~(\ref{CI9}) is reduced to:
\begin{eqnarray}
M_f&=& m_{f} + \frac{M_f^3 \alpha_{\rm eff}\alpha^{N_c}_{N_f}}{8\pi^{2}}
\Gamma(-1,\tau^{2}_{uv} M_{f}^2,\tau^{2}_{ir} M_{f}^2)\,,\label{CI11}
\end{eqnarray}
\noindent where
\begin{equation}
\Gamma (a, y_1,y_2)=\Gamma (a,y_1)-\Gamma(a,y_2)\,\label{CI12}
\end{equation}
with $\Gamma(a,y) = \int_{y}^{\infty} t^{a-1} {\rm
e}^{-t} dt$ the incomplete Gamma function. Eq.~(\ref{CI11}) is the vacuum gap equation, regularized in the Schwinger proper-time regularization scheme with two regulators.
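As an illustration, the vacuum gap equation Eq.~(\ref{CI11}) can be solved numerically by fixed-point iteration. The short Python sketch below is our own illustrative script (not part of the original work); it uses mpmath's generalized incomplete gamma function, which accepts the negative first argument, together with the parameter values quoted in the numerical-results section.
\begin{verbatim}
import mpmath as mp

# Model parameters (GeV units), as quoted in the numerical-results section.
alpha_ir, m_G = 0.93 * mp.pi, 0.8
tau_ir, tau_uv = 1 / 0.24, 1 / 0.905
m_f, Nf, Nc = 0.007, 2, 3
calN_f = 8 + 2.0          # N_f^c + eta (eta assumed ~2; irrelevant for N_f = 2)

alpha_eff = 4 * mp.pi * alpha_ir / m_G**2
alpha_NcNf = mp.sqrt(1 - (Nf - 2) / calN_f) * (Nc - 1 / Nc)

def gap_rhs(M):
    # Gamma(-1, x1, x2): incomplete gamma integrated from x1 to x2
    g = mp.gammainc(-1, tau_uv**2 * M**2, tau_ir**2 * M**2)
    return m_f + M**3 * alpha_eff * alpha_NcNf / (8 * mp.pi**2) * g

M = mp.mpf('0.4')          # starting guess for the dynamical mass
for _ in range(200):
    M = gap_rhs(M)
print(M)                   # should converge to approximately 0.367 GeV for N_c=3, N_f=2
\end{verbatim}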
The confinement in this model can be triggered from the confining length scale~\cite{Ahmad:2016iez, Ahmad:2020ifp, Ahmad:2020jzn}:
\begin{eqnarray}
\tilde{\tau}_{ir}=\tau_{ir}\frac{M}{M_f},\label{CI13}
\end{eqnarray}
where $M=M(3,2)$ is the dynamical mass for fixed $N_c=3$, $N_f=2$, and ${M}_f=M(N_c,N_f)$ is the generalized color- and flavor-dependent dynamical mass. Since $\tau_{ir}$ is introduced in the
model to mimic confinement by ensuring the absence of quark production thresholds, in the presence of varying $N_f$ and $N_c$ it
is required to vary slightly with both $N_f$ and $N_c$. Thus the entanglement between dynamical chiral symmetry breaking and confinement is expressed through an explicit $N_f$- and $N_c$-dependent regulator, i.e., $\tilde{\tau}_{ir}=\tau_{ir}(N_c,N_f)$, in the infrared.
For chiral quarks (i.e., $m_f=0$), the confining scale $\tilde{\tau}_{ir}$ diverges at the chiral symmetry restoration region.
Next, we discuss the general formalism of the gap equation and the Schwinger pair production rate in the presence of an electric field $eE$ and for higher $N_f$ and $N_c$.\\
\section{Gap equation and Schwinger pair production rate in the presence of electric field}
In this section, we discuss the gap equation in the presence of a uniform and homogeneous pure electric field $eE$. In the QCD Lagrangian, the interaction with the external field $A^{ext}_\mu$ is embedded in the covariant derivative,
\begin{eqnarray}
D_\mu=\partial_\mu -iQ_f A_\mu^{\rm ext}, \label{em1}
\end{eqnarray}
where $Q_{f}=(q_u=+2/3 ,q_d=-1/3)e$ denotes the electric charge of the $u$- and $d$-quark, respectively. We choose the gauge vector potential $A^{ext}_\mu=-\delta_{\mu0}x_{3} E$, so that the resulting electric field points along the $z$-axis. The gap equation in the presence of the pure electric field retains the form of Eq.~(\ref{CI6}), where $S_f(k)$ is now dressed
with the electric field $eE$, that is, $ {S_f}(k) \rightarrow\tilde{S_f}(k)$~\cite{Schwinger:1951nm, Klevansky:1992qe, Cao:2015dya, Wang:2017pje, Cao:2021rwx}, and is given as
\begin{eqnarray}
&&\hspace{-22mm}\tilde{S_f}(k)=\int^{\infty}_{0} d\tau {\rm e}^{-\tau \bigg( M_{f}^{2}+(k_{4}^{2}+k_{3}^{2}) \frac{{\rm tan}(|Q_{f}E| \tau)}{|Q_{f}E|\tau}+k_{1}^{2}+k_{2}^{2}\bigg)}
\nonumber\\&&\hspace{-20mm}\times \bigg[ M_f+i\gamma^{4}\bigg(k_4 +{\rm tan}(|Q_{f}E|\tau)\,k_3\bigg) -\gamma^{3}\bigg(k_3 +{\rm tan}(|Q_{f}E|\tau)\,k_4\bigg)-\bigg(\gamma^{2}k_2-\gamma^{1}k_1 \bigg)
\nonumber\\&&\hspace{-20mm}\times
\bigg(1+{\rm tan}(|Q_{f}E|\tau)\gamma^4 \gamma^3 \bigg) \bigg]\,.\label{em2}
\end{eqnarray}
Here the $\gamma$'s are the Dirac gamma matrices. Taking the trace ``Tr'' of the propagator in Eq.~(\ref{em2}), inserting it in Eq.~(\ref{CI6}), and carrying out the integration over $k$,
the gap equation in the electric field $eE$ with the flavor-dependent contact interaction model of quarks~\cite{Ahmad:2020jzn} is given by
\begin{eqnarray}
M_f &=& m_{f}+ \frac{\alpha_{\rm eff} \alpha^{N_c}_{N_f}}{8\pi^2}\sum_{f} \int^{\infty}_{0} \frac{d\tau}{\tau^2} M_f {\rm e}^{-\tau M_{f}^{2}}
\bigg[\frac{|Q_{f}E|\tau}{{\rm tan}(|Q_{f}E|\tau)} \bigg]\,.\label{em3}
\end{eqnarray}
Next, we separate the vacuum and the electric-field-dependent parts using the vacuum subtraction scheme~\cite{Klevansky:1992qe, Cao:2015dya}:
\begin{eqnarray}
M_f&=& m_f+\frac{\alpha_{\rm eff} \alpha^{N_c}_{N_f}}{8\pi^2}\sum_{f} \int^{\infty}_{0} \frac{d\tau}{\tau^2} M_f {\rm e}^{-\tau M_{f}^{2}}\nonumber\\&&\hspace{-0.5mm}+ \frac{\alpha_{\rm eff} \alpha^{N_c}_{N_f}}{8\pi^{2}}\sum_{f} \int^{\infty}_{0} \frac{d\tau}{\tau^2} M_f {\rm e}^{-\tau M_{f}^{2}}
\bigg[\frac{|Q_{f}E|\tau}{{\rm tan}(|Q_{f}E|\tau)}-1\bigg]\,.\label{em4}
\end{eqnarray}
The vacuum integral can be regularized with the two regulators as in Eq.~(\ref{CI11}), and thus the Eq.~(\ref{em4}) can be reduced to:
\begin{eqnarray}
M_f&=& m_f+\frac{M_{f}^{3} \alpha_{\rm eff} \alpha^{N_c}_{N_f}}{8\pi^{2}}
\Gamma(-1,\tau^{2}_{uv} M_{f}^2,\tau^{2}_{ir} M_{f}^2)\nonumber\\&&\hspace{-0.5mm}+ \frac{\alpha_{\rm eff} \alpha^{N_c}_{N_f}}{8\pi^{2}}\sum_{f} \int^{\infty}_{0} \frac{d\tau}{\tau^2} M_f {\rm e}^{-\tau M_{f}^{2}}
\bigg[\frac{|Q_{f}E|\tau}{{\rm tan}(|Q_{f}E|\tau)}-1\bigg]\,.\label{em5}
\end{eqnarray}
The Eq.~(\ref{em5}) can also be written as
\begin{eqnarray}
\frac{M_f-m_f}{\alpha_{\rm eff}}&=&\frac{M_{f}^{3} \alpha^{N_c}_{N_f}}{8\pi^{2}}
\Gamma(-1,\tau^{2}_{uv} M_{f}^2,\tau^{2}_{ir} M_{f}^2)\nonumber\\&&\hspace{-0.5mm}+ \frac{ \alpha^{N_c}_{N_f}}{8\pi^{2}}\sum_{f} \int^{\infty}_{0} \frac{d\tau}{\tau^2} M_f {\rm e}^{-\tau M_{f}^{2}}
\bigg[\frac{|Q_{f}E|\tau}{{\rm tan}(|Q_{f}E|\tau)}-1\bigg]\,.\label{em6a}
\end{eqnarray}
Although we have already regularized the integral in Eq.~(\ref{em5}) using the vacuum subtraction scheme, we still have poles associated with the tangent term in the denominator of our gap
equation when $|Q_fE|\tau=n\pi$ with $n=1,2,3,\ldots$. Taking the principal value (real part) of the integral~\cite{Klevansky:1992qe}
in Eq.~(\ref{em6a}), we have
\begin{eqnarray}
\int^{\infty}_{0} \frac{d\tau} {\tau^2}{\rm e}^{-\tau M_{f}^{2}}
\bigg(\frac{|Q_{f}E|\tau}{{\rm tan}(|Q_{f}E|\tau)}-1\bigg)&=& |Q_{f}E| Re J(i M^{2}_{f}/2|Q_{f}E|)\,,\label{em6b}
\end{eqnarray}
with
\begin{eqnarray}
J(\zeta) = 2i \bigg[(\zeta -\frac{1}{2}){\rm log}\zeta-\zeta-{\rm log}\Gamma(\zeta)+\frac{1}{2}{\rm log}2\pi \bigg]\,.\label{em6c}
\end{eqnarray}
The effective potential $\Omega$, can be obtained by integrating the Eq.~(\ref{em6a}) over dynamical mass $M_f$:
\begin{eqnarray}
\Omega &=&\frac{(M_f-m_f)^2}{2\alpha_{\rm eff}}-\frac{\alpha^{N_c}_{N_f}{M_f}^{4}}{32\pi^{2}}
\bigg[\frac{{\rm e}^{-\tau M_{f}^{2}}}{\tau^{4}_{uv}}-\frac{{\rm e}^{-\tau M_{f}^{2}}}{\tau^{4}_{ir}}-M_{f}^4\Gamma(-1,\tau^{2}_{uv} M_{f}^2,\tau^{2}_{ir} M_{f}^2)\bigg]\nonumber\\&&\hspace{-0.5mm}-\frac{ \alpha^{N_c}_{N_f}}{16\pi^{2}}\sum_{f} \int^{\infty}_{0} \frac{d\tau}{\tau^3} M_f {\rm e}^{-\tau M_{f}^{2}}
\bigg[\frac{|Q_{f}E|\tau}{{\rm tan}(|Q_{f}E|\tau)}-1\bigg]\,.\label{em7a}
\end{eqnarray}
We note that there are infinitely many poles in the effective potential $\Omega$ as well, due to the contribution of the Schwinger pair
production through the tangent (electric-field-related) term. These poles make the effective
potential complex, with its imaginary part giving the
Schwinger pair production rate~\cite{Schwinger:1951nm,Ruggieri:2016lrn, Cao:2015dya, Tavares:2018poq}.
The third term in the Eq.~(\ref{em7a}) can be further simplified by using the following trigonometric relation:
\begin{eqnarray}
\frac{\pi\tau}{{\rm tan}(\pi\tau)}-1 =\sum_{n=1}^{\infty} \frac{2 \tau^2}{\tau^2-n^2}\,.\label{em7b}
\end{eqnarray}
Using Eq.~(\ref{em7b}) in Eq.~(\ref{em7a}), we have
\begin{eqnarray}
\Omega &=&\frac{(M_f-m_f)^2}{2\alpha_{\rm eff}}-\frac{\alpha^{N_c}_{N_f}{M_f}^{4}}{32\pi^{2}}
\bigg[\frac{{\rm e}^{-\tau M_{f}^{2}}}{\tau^{4}_{uv}}-\frac{{\rm e}^{-\tau M_{f}^{2}}}{\tau^{4}_{ir}}-M_{f}^4\Gamma(-1,\tau^{2}_{uv} M_{f}^2,\tau^{2}_{ir} M_{f}^2)\bigg]\nonumber\\&&\hspace{-0.5mm}-\frac{ \alpha^{N_c}_{N_f}}{16\pi^{2}}\sum_{f} \int^{\infty}_{0} \frac{d\tau}{\tau^3} M_f {\rm e}^{-\tau M_{f}^{2}}
\sum_{n=1}^{\infty} \frac{2\tau^2}{\tau^2-\frac{\pi^{2}n^2}{(|Q_{f}E|)^2}}\,.\label{em8}
\end{eqnarray}
The Schwinger pair production rate $\Gamma$ corresponds to the imaginary
part of the effective potential in Eq.~(\ref{em8}) and is thus given by
\begin{eqnarray}
\Gamma=-2{\rm Im}\Omega=\sum_{f}\sum^{\infty}_{n=1}\frac{\rm \alpha^{N_c}_{N_f}} {4\pi}\frac{(|Q_f E|)^{2}{\rm e}^{-n\pi M^{2}_f/|Q_f E|}}{(n\pi)^2}\,,\label{em9}
\end{eqnarray}
which, apart from the overall factor $\alpha^{N_c}_{N_f}$, does not depend explicitly on the number of colors $N_c$ or the number of flavors $N_f$. However, since the dynamical quark mass $M_f$ depends
on both $N_f$ and $N_c$, as is obvious from the gap equation Eq.~(\ref{em4}), they also
affect the quark-antiquark pair production rate $\Gamma$ implicitly through $M_f$. The numerical solution of the gap equation and the Schwinger pair production rate are discussed in the next section.
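As a simple illustration of Eq.~(\ref{em9}), the sketch below evaluates $\Gamma$ for a given electric field and dynamical mass, treating $M_f$ as an externally supplied input (in the full calculation it must be obtained from the electric-field-dependent gap equation); all variable names are our own.
\begin{verbatim}
import numpy as np

def pair_production_rate(eE, M, alpha_NcNf, charges=(2/3, -1/3), n_terms=200):
    """Schwinger rate per unit volume: sum over flavors f and n >= 1 (Eq. em9)."""
    rate = 0.0
    for q in charges:
        QE = abs(q) * eE
        if QE == 0.0:
            continue
        n = np.arange(1, n_terms + 1)
        rate += alpha_NcNf / (4 * np.pi) * np.sum(
            QE**2 * np.exp(-n * np.pi * M**2 / QE) / (n * np.pi) ** 2)
    return rate

# Illustrative values only, e.g. eE = 0.5 GeV^2, M = 0.1 GeV, and
# alpha_NcNf = 8/3 (its value for N_c = 3, N_f = 2):
# pair_production_rate(0.5, 0.1, alpha_NcNf=8/3)
\end{verbatim}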
\section{Numerical Results}
In this section, we present the numerical results of the gap equation at zero electric field and in the presence of an electric field for higher $N_c$ and $N_f$. We use the following set of flavor-dependent contact interaction model parameters~\cite{Ahmad:2020jzn}: $\tau_{{ir}} = (0.24~\mathrm{GeV})^{-1}$, $\tau_{uv} = (0.905~\mathrm{GeV})^{-1}$,
and bare quark mass $m_u=m_d=0.007$~GeV. For fixed $N_c=3$ and $N_f=2$, we obtain the dynamical mass $M_{u,d}=0.367$~GeV and the
condensate $-\langle\bar{q}q\rangle^{1/3}_{u,d}=
0.243$~GeV. It should be noted that these parameters were fitted to reproduce light-meson properties~\cite{GutierrezGuerrero:2010md}. \\ Next, we solve the gap equation Eq.~(\ref{CI11}) for various $N_f$ at fixed $N_c=3$, as shown in Fig.~\ref{fig1}. The dynamical mass decreases monotonically as we increase $N_f$, as depicted in Fig.~\ref{fig1}(a). The inset of Fig.~\ref{fig1}(a) shows the flavor-gradient of the dynamical mass,~$\partial_{N_f}M_f$, whose peak is at $N^{c}_{f}\approx8$, the critical number of flavors near and above which the dynamical chiral symmetry is restored. In Fig.~\ref{fig1}(b), we show the dynamical mass as a function of the number of colors $N_c$ for fixed $N_f=2$. This plot shows that the dynamical chiral symmetry is broken near and above a critical value $N^{c}_c\approx 2.2$, obtained from the peak of the color-gradient of the mass function,~$\partial_{N_c}M_f$. These findings are consistent with the results obtained in~\cite{Ahmad:2020jzn, Ahmad:2022hbu}.
\begin{figure}
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig1.pdf}
\caption{}\label{fig:a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig2.pdf}
\caption{}\label{fig:b}
\end{subfigure}
\caption{(\subref{fig:a}) Behavior of the dynamical quark mass $M$ and its flavor-gradient $\partial_{N_f}M$ as a function of the number of flavors $N_f$ for fixed $N_c=3$. The dynamical mass $M$ decreases as we increase $N_f$, and at a critical value $N^{c}_f=8$ the dynamical chiral symmetry is restored. \\ (\subref{fig:b}) Behavior of the dynamical quark mass $M$ and its color-gradient $\partial_{N_c}M$ as a function of the number of colors $N_c$ for fixed $N_f=2$. From the peak of $\partial_{N_c}M$, it is clear that the dynamical chiral symmetry is broken above $N^{c}_c=2.2$.}
\label{fig1}
\end{figure}
The inverse of the confining length scale $\tilde{\tau}^{-1}_{ir}$ and its flavor-gradient, for various $N_f$ and fixed $N_c=3$, are plotted in Fig.~\ref{fig2}(a). The confinement-deconfinement transition can be located from the flavor-gradient~$\partial_{N_f}\tilde{\tau}^{-1}_{ir}$, whose peak is at $N^{c}_{f}\approx8$, the approximate critical number of flavors above which the quarks become unconfined. In Fig.~\ref{fig2}(b), we show the inverse confining length scale $\tilde{\tau}^{-1}_{ir}$ as a function of the number of colors $N_c$ for fixed $N_f=2$. This plot shows that confinement sets in near and above a critical value $N^{c}_c\approx 2.2$, obtained from the peak of the color-gradient~$\partial_{N_c}\tilde{\tau}^{-1}_{ir}$.
\begin{figure}
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig3.pdf}
\caption{}\label{fig:a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig4.pdf}
\caption{}\label{fig:b}
\end{subfigure}
\caption{(\subref{fig:a}) Behavior of the inverse of confining length scale $\tilde{\tau}^{-1}_{ir}$ and its flavor-gradient $\partial_{N_f}\tilde{\tau}^{-1}_{ir}$ as a function of number of flavors $N_f$ for fixed $N_c=3$. (\subref{fig:b}) The behavior $\tilde{\tau}^{-1}_{ir}$ and its color-gradient ~$\partial_{N_c}\tilde{\tau}^{-1}_{ir}$ as a function of the number of colors $N_c$ and for fixed $N_f=2$.}
\label{fig2}
\end{figure}
Next, we solve the gap equation Eq.~(\ref{em4}) in the presence of the electric field $eE$ and plot the dynamical mass as a function of $eE$ for various numbers of flavors $N_f$ at fixed $N_c=3$, as depicted in Fig.~\ref{fig3}(a). The dynamical mass decreases as we increase the electric field $eE$, as expected. Upon increasing $N_f$, the plateau of the dynamical quark mass as a function of $eE$ is suppressed. There is no dynamical mass generation above $N^{c}_f=8$, because both the electric field $eE$ and $N_f$ restore the dynamical chiral symmetry and the quarks become unconfined. \\ In Fig.~\ref{fig3}(b), we plot the dynamical quark mass as a function of the electric field strength $eE$ for various $N_c$ at fixed $N_f=2$. Increasing $N_c$ enhances the plateau of the dynamical mass as a function of $eE$. For $N_c\geq4$, the mass plots show discontinuities in the dynamical symmetry restoration region, where the smooth cross-over phase transition changes to first order. This may be due to the strong competition between $eE$ and $N_c$ (i.e., the strong electric field tends to restore the dynamical chiral symmetry whereas larger $N_c$ enhances the dynamical chiral symmetry breaking).
\begin{figure}
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig5.pdf}
\caption{}\label{fig:a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig6.pdf}
\caption{}\label{fig:b}
\end{subfigure}
\caption{(\subref{fig:a}) The dynamical quark mass as a function of the electric field strength $eE$ for various $N_f$ at fixed $N_c=3$. The plateau of the dynamical mass is suppressed as we increase $N_f$. (\subref{fig:b}) The dynamical quark mass as a function of the electric field strength $eE$ for various $N_c$ at fixed $N_f=2$. Increasing $N_c$ enhances the plateau of the dynamical mass as a function of $eE$. For $N_c\geq4$, the cross-over phase transition changes to first order.
}
\label{fig3}
\end{figure}
The Schwinger pair production rate ``$\Gamma$'' of Eq.~(\ref{em9}), as a function of the electric field strength $eE$ for various $N_f$ at fixed $N_c=3$, is shown in Fig.~\ref{fig4}(a). This figure clearly demonstrates that
beyond some critical value $eE_{c}$, near and above which the dynamical symmetry is restored and deconfinement occurs, the pair production rate ``$\Gamma$'' grows
faster due to the weakening of the chiral condensate. In this situation, the QCD vacuum becomes more unstable
and quark-antiquark pairs become more likely to
be produced. Upon increasing the number of flavors $N_f$, the pair production rate grows faster and shifts towards lower values of the electric field $eE$. This is because both $eE$ and $N_f$ restore the dynamical chiral symmetry. The enhancement of the pair production rate is slow and unsteady for small $N_f$ but becomes steady and faster for higher values of $N_f$. In Fig.~\ref{fig4}(b), we plot the quark-antiquark pair production rate ``$\Gamma$'' as a function of $eE$ for various $N_c$, this time fixing the number of flavors $N_f=2$. Upon increasing the number of colors $N_c$, the production rate grows more slowly and higher values of $eE$ are needed for a stable and fast production rate. This is because $N_c$ enhances the dynamical chiral symmetry breaking. For $N_c\geq4$, a discontinuity occurs in the production rate near and above the chiral symmetry restoration and deconfinement region. This may be due to the strong competition between $N_c$ and $eE$: the strong electric field $eE$ tends to restore the dynamical chiral symmetry whereas $N_c$ tends to break it.
\begin{figure}
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig7.pdf}
\caption{}\label{fig:a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig8.pdf}
\caption{}\label{fig:b}
\end{subfigure}
\caption{(\subref{fig:a}) The Schwinger pair production rate $\Gamma$ as a function of the electric field strength $eE$ for various $N_f$ at fixed $N_c=3$. Upon increasing $N_f$, the pair production rate grows faster even at small values of $eE$. (\subref{fig:b}) The production rate $\Gamma$ as a function of $eE$ for various $N_c$ at fixed $N_f=2$. Upon increasing the number of colors $N_c$, the production rate grows more slowly and higher values of $eE$ are needed for a quick and stable pair production rate.}
\label{fig4}
\end{figure}
We then obtained the pseudo-critical electric field $eE_c$ for the chiral symmetry breaking-restoration from the electric-field gradient~$\partial_{eE}M_f$ as a function of $eE$ for various $N_f$ at fixed $N_c=3$, as depicted in Fig.~\ref{fig5}(a). The peaks of~$\partial_{eE}M_f$ shift towards lower values of the electric field $eE$ as $N_f$ increases.
In Fig.~\ref{fig5}(b), we plot the electric-field gradient~$\partial_{eE}M_f$ as a function of $eE$ for various $N_c$ at fixed $N_f=2$. Upon increasing $N_c$, the peaks shift towards higher values of $eE$.
\begin{figure}
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig9.pdf}
\caption{}\label{fig:a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig10.pdf}
\caption{}\label{fig:b}
\end{subfigure}
\caption{(\subref{fig:a}) The electric-field gradient of the dynamical mass, $\partial_{eE}M$, as a function of $eE$ for various $N_f$ at fixed $N_c=3$. The peaks shift towards lower values of the electric field $eE$ upon increasing $N_f$. (\subref{fig:b}) The electric-field gradient $\partial_{eE}M$ as a function of $eE$ for various $N_c$ at fixed $N_f=2$. The peak of $\partial_{eE}M$ shifts towards higher values of $eE$ as $N_c$ increases. For $N_c\geq4$, the peaks in $\partial_{eE}M$ become divergent and discontinuous.}
\label{fig5}
\end{figure}
We then obtained the pseudo-critical electric field $eE_c$ for chiral symmetry restoration/deconfinement at different $N_f$ and drew the phase diagram in the $N_{f}-eE_c$ plane shown in Fig.~\ref{fig6}(a). The pseudo-critical electric field $eE_c$ decreases as we increase $N_f$, and the chiral phase transition is a smooth cross-over. In Fig.~\ref{fig6}(b), we sketch the phase diagram in the $N_{c}-eE_c$ plane; the critical $eE_c$ grows upon increasing the number of colors $N_c$. The transition is a smooth cross-over until the critical endpoint $(N_{c,p}=4,\ eE_{c,p}=0.54~\mathrm{GeV}^2)$, where it suddenly changes to first order. The phase diagram also shows where the quark-antiquark pair production grows quickly, beyond the pseudo-critical electric field $eE_c$, and how this varies with the numbers of flavors and colors. Our finding for $N_c=3$ and $N_f=2$ agrees well with the growth of the quark-antiquark production rate studied through other effective models of low-energy QCD~\cite{Cao:2015xja, Tavares:2018poq}. \begin{figure}
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig11.pdf}
\caption{}\label{fig:a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{7cm}
\centering\includegraphics[width=8cm]{Fig12.pdf}
\caption{}\label{fig:b}
\end{subfigure}
\caption{(\subref{fig:a}) The phase diagram in the $N_f-eE_c$ plane for the dynamical chiral symmetry breaking/restoration or confinement-deconfinement transition. The pseudo-critical $eE_c$ decreases upon increasing the number of light quark flavors $N_f$. (\subref{fig:b}) The phase diagram in the $N_c-eE_c$ plane; here the critical $eE_c$ increases upon increasing the number of colors $N_c$. The transition is a smooth cross-over until the critical endpoint $(N^{c}_{c,p}, eE_{c,p})$, where it suddenly changes to first order.}
\label{fig6}
\end{figure}
In the next section, we present the summary and future perspectives of this work.
\section{Summary and Perspectives}
In this work, we have studied the impact of a pure electric field on the color-flavor chiral phase transitions and investigated the Schwinger quark-antiquark pair production rate $\Gamma$ for a higher number of colors $N_c$ and flavors $N_f$. For this purpose, we
implemented the Schwinger-Dyson formulation of QCD,
with a gap equation kernel comprising a symmetry-preserving
vector-vector flavor-dependent contact interaction model of quarks
in a rainbow-ladder truncation. Subsequently, we adopted
the well-known Schwinger proper-time regularization
procedure. The outcome of this study is presented as
follows:\\
First, we reproduced the results of dynamical chiral symmetry restoration for large $N_f$ at fixed $N_c=3$, where we evaluated the critical number of flavors $N^{c}_{f}\approx 8$. Further, we explored dynamical symmetry breaking for a higher number of colors $N_c$ at fixed $N_f=2$ and found the critical number of colors $N^{c}_c\approx2.2$.
Second, we considered the influence of the pure electric field $eE$, exploring dynamical chiral symmetry restoration and deconfinement for various numbers of flavors $N_f$ and colors $N_c$. The plateau of the mass as a function of the electric field is suppressed upon increasing the number of flavors, whereas upon increasing the number of colors, the dynamical mass as a function of $eE$ is enhanced; for $N_c\geq4$, we found a discontinuity in the chiral symmetry restoration region.
Next, we obtained the Schwinger pair production (quark-antiquark) rate $\Gamma$ as a function of the pure electric field $eE$ for various $N_f$ and $N_c$. For fixed $N_c=3$ and varying $N_f$, we found that the quark-antiquark production rate $\Gamma$ grows quickly
once we cross a pseudo-critical electric field $eE_c$. The pair production rate
tends to set in at lower values of $eE$ for larger values of $N_f$; hence, the pseudo-critical electric field $eE_c$ is reduced in magnitude upon increasing the number of flavors $N_f$. This is because both $N_f$ and $eE$ restore the dynamical chiral symmetry. For fixed $N_f=2$ and increasing $N_c$, the Schwinger pair production
tends to set in at larger values of $eE$ for higher values of $N_c$. It is interesting to note that for $N_c\geq4$, a discontinuity occurs in the production rate in the region where the chiral symmetry is restored. This may be due to the strong competition between $eE$ and $N_c$. Thus, the pseudo-critical $eE_c$ increases as the number of colors $N_c$ increases. The transition is a smooth cross-over until the critical endpoint $(N_{c,p}=4,\ eE_{c,p}=0.54~\mathrm{GeV}^2)$, where it suddenly changes to first order. Our finding for $N_c=3$ and $N_f=2$ agrees well with the behaviour of the pair production rate studied through other effective models of low-energy QCD.
Qualitatively and quantitatively, the predictions of the
presented flavor-dependent contact interaction model for fixed $N_c=3$ and $N_f=2$ agree well with results obtained from
other effective QCD models. Soon, we plan to extend this
work to study the Schwinger pair production rate in hot and dense matter QCD in the presence of background fields. We are also interested in extending this work
to study the properties of light hadrons in the
background fields etc.
\section{Acknowledgments}
We acknowledge A. Bashir and A. Raya for their guidance and valuable suggestions during the completion of this work. We also thank our colleagues at the Institute of Physics, Gomal University (Pakistan).
\section*{References}
\bibliographystyle{iopart-num}
|
{
"arxiv_id": "2302.13183",
"language": "en",
"timestamp": "2023-02-28T02:12:01",
"url": "https://arxiv.org/abs/2302.13183",
"yymm": "2302"
} | \section{Introduction}
Deep generative models, such as generative adversarial networks (GANs) \citep{goodfellow2014generative,arjovsky2017wasserstein} and variational autoencoder \citep{kingma2013auto,mohamed2014stochastic}, utilize neural networks to generate new samples which follow the same distribution as the training data.
They have been successful in many applications including producing photorealistic images, improving astronomical images, and modding video games \citep{reed2016generative, ledig2017photo, schawinski2017generative, brock2018large, volz2018evolving,radford2015unsupervised,salimans2016improved}.
To estimate a data distribution $Q$, generative models solve the following optimization problem
\begin{align}\label{eq:population}
\textstyle \min_{g_\theta \in \mathcal{G}}~ {\tt discrepancy}((g_{\theta})_{\sharp}\rho, Q),
\end{align}
where $\rho$ is an easy-to-sample distribution, $\mathcal{G}$ is a class of generating functions, ${\tt discrepancy}$ is some distance function between distributions, and $(g_\theta)_\sharp \rho$ denotes the pushforward measure of $\rho$ under $g_\theta$. In particular, when we obtain a sample $z$ from $\rho$, we let $g_\theta(z)$ be the generated sample, whose distribution follows $(g_\theta)_\sharp \rho$.
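To make the pushforward notation concrete, the following minimal sketch (our illustration, not taken from any cited work; the map $g$ is hand-crafted rather than a trained generator) samples $z\sim\rho$ with $\rho$ uniform on $(0,1)^2$ and applies a fixed map $g$, so that the resulting points follow the pushforward measure $g_\sharp\rho$, here concentrated near a one-dimensional manifold in $\mathbb{R}^2$.
\begin{verbatim}
import numpy as np

# Hand-crafted map g: (0,1)^2 -> R^2 whose pushforward of the uniform
# distribution concentrates near the unit circle (a 1-d manifold in R^2).
def g(z):
    angle = 2.0 * np.pi * z[:, 0]           # first coordinate picks the angle
    radius = 1.0 + 0.05 * (z[:, 1] - 0.5)   # second coordinate perturbs the radius
    return np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=1)

rng = np.random.default_rng(0)
z = rng.uniform(size=(1000, 2))   # samples from the easy-to-sample distribution rho
x = g(z)                          # samples distributed according to g_# rho
print(x.shape)                    # (1000, 2); the points scatter around the unit circle
\end{verbatim}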
{ There are many choices of the {\tt discrepancy} function in literature among which Wasserstein distance attracts much attention. The so-called Wasserstein generative models \citep{arjovsky2017wasserstein} consider the Wasserstein-1 distance defined as
\begin{align}\label{eq:wasserstein-1}
\textstyle W_1(\mu,\nu) = \sup\limits_{f \in {\rm Lip}_1(\mathbb{R}^D)} \mathbb{E}_{X \sim \mu} [f(X)] - \mathbb{E}_{Y \sim \nu} [f(Y)],
\end{align}
where $\mu, \nu$ are two distributions and ${\rm Lip}_1(\mathbb{R}^D)$ consists of $1$-Lipschitz functions on $\mathbb{R}^D$. The formulation in \eqref{eq:wasserstein-1} is known as the Kantorovich-Rubinstein dual form of Wasserstein-1 distance and can be viewed as an integral probability metric \citep{IPMMuller}.
}
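For intuition, in one dimension the Wasserstein-1 distance between empirical distributions reduces to the $L^1$ distance between quantile functions and can be evaluated in closed form; the snippet below (an illustration on our part, using SciPy's built-in routine) compares two Gaussian samples, where the distance is approximately the mean shift. In the generative-model setting, the supremum in \eqref{eq:wasserstein-1} is instead typically approximated by a Lipschitz-constrained critic network.
\begin{verbatim}
import numpy as np
from scipy.stats import wasserstein_distance

# W_1 between two 1-d empirical distributions; for two Gaussians with equal
# variance the distance equals the shift between their means.
rng = np.random.default_rng(1)
mu_samples = rng.normal(loc=0.0, scale=1.0, size=5000)
nu_samples = rng.normal(loc=0.5, scale=1.0, size=5000)
print(wasserstein_distance(mu_samples, nu_samples))  # approximately 0.5
\end{verbatim}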
{In deep generative models, the function class $\mathcal{G}$ is often parameterized by a deep neural network class $\mathcal{G}_{\rm NN}$. Functions in $\mathcal{G}_{\rm NN}$ can be written in the following compositional form
\begin{align}\label{nnform}
\textstyle g_{\theta}(x) = W_{L}\cdot \sigma(W_{L-1}\ldots \sigma(W_{1}x + b_1) + \ldots + b_{L-1}) +b_L,
\end{align}
where the $W_i$'s and $b_i$'s are weight matrices and intercepts/biases of corresponding dimensions, respectively, and $\sigma$ is ReLU activation applied entry-wise: $\sigma(a) = \max(a, 0)$. Here $\theta=\{W_i,b_i\}_{i=1}^L$ denotes the set of parameters.
}
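The compositional form \eqref{nnform} can be written out directly; the following short sketch (ours, for illustration, with randomly drawn weights rather than trained parameters) evaluates such a ReLU network mapping $\mathbb{R}^{d+1}$ to $\mathbb{R}^{D}$.
\begin{verbatim}
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def g_theta(x, weights, biases):
    """Evaluate W_L sigma(... sigma(W_1 x + b_1) ...) + b_L with ReLU sigma."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)               # hidden layers: affine map followed by ReLU
    return weights[-1] @ h + biases[-1]   # output layer: affine map only

rng = np.random.default_rng(0)
dims = [3, 16, 16, 5]   # N_0 = d + 1 = 3, two hidden layers of width 16, N_L = D = 5
weights = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
biases = [rng.standard_normal(dims[i + 1]) for i in range(len(dims) - 1)]
print(g_theta(rng.uniform(size=3), weights, biases).shape)   # (5,)
\end{verbatim}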
{Solving \eqref{eq:population} is prohibitive in practice, as we only have access to a finite collection of samples, $X_1, \dots, X_n \overset{\text{iid}}{\small \sim} Q$. Replacing $Q$ by its empirical counterpart $Q_n = \frac{1}{n}\sum_{i=1}^n\delta_{X_i}$, we end up with
\begin{align}\label{emprisk}
\textstyle \hat{g}_n = \operatornamewithlimits{\text{argmin}}\limits_{g_{\theta} \in \mathcal{G}_{\textup{NN}}}W_1((g_{\theta})_{\sharp}\rho, Q_n).
\end{align}}
Note that \eqref{emprisk} is also known as training deep generative models under the Wasserstein loss in the existing deep learning literature \citep{frogner2015learning,genevay2018learning}. It has exhibited remarkable ability in learning complex distributions in high dimensions, even though existing theories cannot fully explain such empirical successes. In the literature, statistical theories of deep generative models have been studied in \citet{arora2017generalization, zhang2017discrimination, jiang2018computation,bai2018approximability, liang2017well, liang2018well,uppal2019nonparametric,CLZZ,lu2020universal,Block2021ANE,luise2020generalization,schreuder2021statistical}. Due to the well-known curse of dimensionality, the sample complexity in \citet{liang2017well,uppal2019nonparametric,CLZZ,lu2020universal} grows exponentially with respect to the underlying data dimension.
For example, the CIFAR-10 dataset consists of $32 \times 32$ RGB images. Roughly speaking, to learn this data distribution with accuracy $\epsilon$, the sample size is required to be $\epsilon^{-D}$ where $D = 32\times 32 \times 3 = 3072$ is the data dimension. Setting $\epsilon = 0.1$ requires $10^{3072}$ samples. However, GANs have been successful with $60,000$ training samples \citep{goodfellow2014generative}.
{A common belief to explain the aforementioned gap between theory and practice is that practical data sets exhibit low-dimensional intrinsic structures.}
For example, many
image patches are generated from the same pattern by some transformations, such as
rotation, translation, and skeleton. Such a generating mechanism induces a small number of intrinsic
parameters. It is plausible to model these data as samples near a low dimensional
manifold \citep{tenenbaum2000global,roweis2000nonlinear,peyre2009manifold,coifman2005geometric}.
To justify that deep generative models can adapt to low-dimensional structures in data sets, this paper focuses (from a theoretical perspective) on the following fundamental questions of both distribution approximation and estimation:
\begin{description}
\item[Q1:] {\it Can deep generative models approximate a distribution on a low-dimensional manifold by representing it as the pushforward measure of a low-dimensional easy-to-sample distribution?}
\item[Q2:] {\it If the representation in \textbf{Q1} can be learned by deep generative models, what is the statistical rate of convergence in terms of the sample size $n$?}
\end{description}
This paper provides positive answers to these questions.
We consider data distributions supported on a $d$-dimensional compact Riemannian manifold $\mathcal{M}$ isometrically embedded in ${\mathbb{R}}^{D}$.
The easy-to-sample distribution $\rho$ is uniform on $(0,1)^{d+1}$. To answer {\bf Q1}, our Theorem \ref{thm:approx} proves that deep generative models are capable of approximating a transportation map which maps the low-dimensional uniform distribution $\rho$ to a large class of data distributions on $\mathcal{M}$. To answer {\bf Q2}, our Theorem \ref{thm:stat} shows that the Wasserstein-1 loss in distribution learning
converges to zero \textit{at a fast rate depending on the intrinsic dimension $d$} instead of the data dimension $D$.
In particular we prove that
\begin{align*}
\mathbb{E} W_1((\hat{g}_n)_\sharp \rho, Q) \leq C n^{-\frac{1}{d+\delta}}
\end{align*} for all $\delta > 0$ where $C$ is a constant independent of $n$ and $D$.
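As a back-of-the-envelope reading of this bound (our arithmetic, ignoring the constant $C$), reaching accuracy $\epsilon$ requires roughly
\begin{align*}
n^{-\frac{1}{d+\delta}} \leq \epsilon \quad\Longleftrightarrow\quad n \geq \epsilon^{-(d+\delta)},
\end{align*}
so for $\epsilon = 0.1$ and an intrinsic dimension of, say, $d = 10$ one needs on the order of $10^{10+\delta}$ samples, which, while still large, is astronomically smaller than the $\epsilon^{-D} = 10^{3072}$ suggested by a rate depending on the ambient dimension of CIFAR-10-sized images.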
Our proof proceeds by constructing an oracle transportation map $g^{*}$ such that $g^{*}_\sharp \rho = Q$. This construction crucially relies on a cover of the manifold by geodesic balls, such that the data distribution $Q$ is decomposed as the sum of local distributions supported on these geodesic balls. Each local distribution is then transported onto lower dimensional sets in $\mathbb{R}^d$ from which we can apply optimal transport theory. We then argue that the oracle $g^*$ can be efficiently approximated by deep neural networks.
We make minimal assumptions on the network, only requiring that $g_\theta$ belongs to a neural network class (labelled $\mathcal{G}_{\textup{NN}}$) with size depending on some accuracy $\epsilon$. Further, we make minimal assumptions on the data distribution $Q$, only requiring that it admits a density that is upper and lower bounded. Standard technical assumptions are made on the manifold $\mathcal{M}$.
\section{Preliminaries}
We establish some notation and preliminaries on Riemannian geometry and optimal transport theory before presenting our proof.
\textbf{Notation.} For $x \in \mathbb{R}^d$, $\|x\|$ is the Euclidean norm, unless otherwise specified. $B_X(0,r)$ is the open ball of radius $r$ in the metric space $X$. If unspecified, we denote $B(0,r) = B_{\mathbb{R}^d}(0,r)$. For a function $f: \mathbb{R}^d \to \mathbb{R}^d$ and $A \subseteq \mathbb{R}^d$, $f^{-1}[A]$ denotes the pre-image of $A$ under $f$. $\partial$ denotes the differential operator. For $0 < \alpha \leq 1$, we denote by $C^{\alpha}$ the class of H\"older continuous functions with H\"older index $\alpha$. $\| \cdot \|_\infty$ denotes the $\infty$ norm of a function, vector, or matrix (considered as a vector). For any positive integer $N \in \mathbb{N}$, we denote by $[N]$ the set $\{1, 2, \dots, N \}$.
\subsection{Riemannian Geometry}
Let $(\mathcal{M}, g)$ be a $d$-dimensional compact Riemannian manifold isometrically embedded in $\mathbb{R}^D$. Roughly speaking a manifold is a set which is locally Euclidean i.e. there exists a function $\phi$ continuously mapping a small patch on $\mathcal{M}$ into Euclidean space. This can be formalized with \textit{open sets} and \textit{charts}. At each point $x \in \mathcal{M}$ we have a \textit{tangent space} $T_x\mathcal{M}$ which, for a manifold embedded in $\mathbb{R}^D$, is the $d$-dimensional plane tangent to the manifold at $x$. We say $\mathcal{M}$ is Riemannian because it is equipped with a smooth metric $g_x: T_x\mathcal{M} \times T_x\mathcal{M} \to \mathbb{R}$ (where $x$ is a basepoint) which can be thought of as a local inner product. We can define the Riemannian distance $d_\mathcal{M} : \mathcal{M} \times \mathcal{M} \rightarrow \mathbb{R}$ on $\mathcal{M}$ as
\begin{align*}
d_{\mathcal{M}}(x,y) = \inf\{L(\gamma) | \gamma \textup{ is a } C^1(\mathcal{M}) \textup{ curve such that } \gamma(0) = x, \gamma(1) = y\},
\end{align*} i.e. the length of the shortest path or \textit{geodesic} connecting $x$ and $y$.
An \textit{isometric embedding} of the $d$-dimensional $\mathcal{M}$ in $\mathbb{R}^D$ is an embedding that preserves the Riemannian metric of $\mathcal{M}$, including the Riemannian distance. For more rigorous statements, see the classic reference \citet{flaherty2013riemannian}.
We next define the exponential map at a point $x \in \mathcal{M}$ going from the tangent space to the manifold.
\begin{definition}[Exponential map]
Let $x \in \mathcal{M}$. For all tangent vectors $v \in T_x \mathcal{M}$, there is a unique geodesic $\gamma$ that starts at $x$ with initial tangent vector $v$, i.e. $\gamma(0) = x$ and $\gamma'(0) = v$. The exponential map centered at $x$ is given by $\exp_x(v) = \gamma(1)$, for all $v \in T_x \mathcal{M}$.
\end{definition}
The exponential map takes a vector $v$ on the tangent space $T_x \mathcal{M}$ as input. The output, $\exp_x(v)$, is the point on the manifold obtained by travelling along a (unit speed) geodesic curve that starts at $x$ and has initial direction $v$ (see Figure \ref{fig:exponential_map} for an example).
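As a standard concrete example (not specific to this paper): on the unit sphere $S^d \subset \mathbb{R}^{d+1}$, for $x \in S^d$ and a nonzero tangent vector $v \in T_xS^d$ (so $\langle x, v \rangle = 0$),
\begin{align*}
\exp_x(v) = \cos(\|v\|)\, x + \sin(\|v\|)\, \frac{v}{\|v\|}, \qquad d_{S^d}(x,y) = \arccos(\langle x, y \rangle),
\end{align*}
and the exponential map is a diffeomorphism on the open ball of radius $\pi$, beyond which geodesics from $x$ pass the antipodal point and cease to be minimizing.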
It is well known that for all $x \in \mathcal{M}$, there exists a radius $\delta$ such that the exponential map restricted to $B_{T_x\mathcal{M}}(0, \delta)$ is a diffeomorphism onto its image, i.e. it is a smooth map with smooth inverse. As the sufficiently small $\delta$-ball in the tangent space may vary for each $x \in \mathcal{M}$, we define the injectivity radius of $\mathcal{M}$ as the minimum $\delta$ over all $x \in \mathcal{M}$.
\begin{definition}[Injectivity radius]
For all $x \in \mathcal{M}$, we define the injectivity radius at a point $\text{inj}_{\mathcal{M}}(x) = \sup\{\delta > 0| \exp_{x}: B_{T_x \mathcal{M}}(0, \delta) \subseteq T_x \mathcal{M} \to \mathcal{M} \textup{ is a diffeomorphism}\}$. Then the injectivity radius of $\mathcal{M}$ is defined as
\begin{align*}
\text{inj}(\mathcal{M}) = \inf\{\text{inj}_{\mathcal{M}}(x) | x \in \mathcal{M}\}.
\end{align*}
\end{definition} For any $x \in \mathcal{M}$, the exponential map restricted to a ball of radius $\text{inj}(\mathcal{M})$ in $T_x\mathcal{M}$ is a well-defined diffeomorphism onto its image. Within the injectivity radius, the exponential map is thus a diffeomorphism between a ball in the tangent space and a patch of $\mathcal{M}$, with $\exp_x^{-1}$ denoting its inverse. Controlling a quantity called the reach allows us to lower bound the manifold's injectivity radius.
\begin{definition}[Reach \citep{Federer}]
The reach $\tau$ of a manifold $\mathcal{M}$ is defined as the quantity
\begin{align*}
\tau = \inf\{r > 0 : \exists x\neq y \in \mathcal{M}, v \in \mathbb{R}^D \textup{ such that } r =\|x- v\| = \|y- v\| = \inf_{z \in \mathcal{M}} \| z - v \| \}.
\end{align*}
\end{definition}
Intuitively, if the distance of a point $x$ to $\mathcal{M}$ is smaller than the reach, then there is a unique point in $\mathcal{M}$ that is closest to $x$. However, if the distance between $x$ and $\mathcal{M}$ is larger than the reach, then the closest point to $x$ in $\mathcal{M}$ need no longer be unique. For example, the reach of a sphere is its radius. A manifold with large and small reach is illustrated in Figure \ref{fig:reach}. The reach gives us control over the injectivity radius $\text{inj}({\mathcal{M}})$; in particular, we know $\text{inj}({\mathcal{M}}) \geq \pi\tau$ (see \cite{AL} for proof).
\begin{figure}
\begin{minipage}[c]{0.5\textwidth}
\centering
\includegraphics[width=0.63\textwidth]{figures/exp.pdf}
\caption{Exponential map on $\mathcal{M}$.}
\label{fig:exponential_map}
\end{minipage}
\hfill
\begin{minipage}[c]{0.52\textwidth}
\centering
\includegraphics[width=0.6\textwidth]{figures/reach_large.pdf}
\\
\vspace{0.2cm}
\includegraphics[width=0.6\textwidth]{figures/reach_small.pdf}
\caption{Manifolds with large and small reach.}
\label{fig:reach}
\end{minipage}
\end{figure}
\subsection{Optimal Transport Theory}
Let $\mu, \nu$ be absolutely continuous measures on sets $X, Y \subseteq \mathbb{R}^d$. We say a function $f: X \to Y$ \textit{transports} $\mu$ onto $\nu$ if $f_\sharp \mu = \nu$. In words, for all measurable sets $A$ we have
\begin{align*}
\nu(A) = f_\sharp \mu(A) = \mu \left(f^{-1}(A) \right),
\end{align*}
where $f^{-1}(A)$ is the pre-image of $A$ under $f$.
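As a simple worked example (ours, for illustration): let $\mu$ be the uniform distribution on $(0,1)$ and $f(x) = x^2$. For $A = (0,a) \subseteq (0,1)$,
\begin{align*}
f_\sharp \mu\big((0,a)\big) = \mu\big(f^{-1}((0,a))\big) = \mu\big((0,\sqrt{a})\big) = \sqrt{a},
\end{align*}
so $f_\sharp \mu$ has density $\frac{1}{2\sqrt{y}}$ on $(0,1)$. In one dimension, composing with an inverse cumulative distribution function $F_\nu^{-1}$ in the same way transports the uniform distribution onto any prescribed target $\nu$.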
Optimal transport studies the maps taking a source measure $\mu$ on $X$ to a target measure $\nu$ on $Y$ which, among all such transport maps, minimize a cost $c:X \times Y \to \mathbb{R}_{\geq 0}$. However, the classical results are largely restricted to transport between measures on Euclidean spaces of the same dimension. In this paper, we will make use of the main theorem in \cite{caffarelli1992regularity}, in the form presented in \cite{villani2008optimal}.
\begin{proposition}
\label{proposition:OT}
Let $c(x,y) = \|x-y\|^2$ in $\mathbb{R}^d \times \mathbb{R}^d$ and let $\Omega_1, \Omega_2$ be nonempty, connected, bounded, open subsets of $\mathbb{R}^d$. Let $f_1, f_2$ be probability densities on $\Omega_1$ and $\Omega_2$ respectively, with $f_1, f_2$ bounded from above and below. Assume further that $\Omega_2$ is convex. Then there exists a unique optimal transport map $T : \Omega_1 \rightarrow \Omega_2$ for the associated probability measures $\mu(dx) = f_1(x) \, dx$ and $\nu(dy) = f_2(y) \, dy$, and the cost $c$. Furthermore, we have that $T \in C^{\alpha}(\Omega_1)$ for some $\alpha \in (0, 1)$.
\end{proposition}
This proposition allows us to produce H\"older transport maps, which can then be approximated by neural networks whose size depends on the desired accuracy.
To connect optimal transport and Riemannian manifolds, we first define the \textit{volume measure} on a manifold $\mathcal{M}$ and establish integration on $\mathcal{M}$.
\begin{definition}[Volume measure]
Let $\mathcal{M}$ be a compact $d$-dimensional Riemannian manifold. We define
the volume measure $\mu_{\mathcal{M}}$ on $\mathcal{M}$ as the restriction of the $d$-dimensional Hausdorff measure $\mathcal{H}^d$.
\end{definition}
A definition for the restriction of the Hausdorff measure can be found in \cite{Federer}.
We say that the distribution $Q$ has density $q$ if the Radon-Nikodym derivative of $Q$ with respect to $\mu_{\mathcal{M}}$ is $q$.
According to \cite{EG}, for any continuous function $f : \mathcal{M} \to \mathbb{R}$ supported within the image of the ball $B_{T_x \mathcal{M}}(0,\epsilon)$ under the exponential map for $\epsilon <\text{inj}(\mathcal{M})$, we have
\begin{align}\label{localdensity}
\int f \, d Q = \int (fq) \, d \mu_{\mathcal{M}} = \int_{B_{T_x\mathcal{M}}(0,\epsilon)} (fq) \circ \exp_x(v)\sqrt{\det g_{ij}^x(v)} \, dv.
\end{align} Here $g_{ij}^x(v) = \langle \partial \exp_x(v)[e_i], \partial \exp_x(v)[e_j]\rangle$ with $(e_1,...,e_d)$ an orthonormal basis of $T_x\mathcal{M}$.
\section{Main Results}
We will present our main results in this section, including an approximation theory for a large class of distributions on a Riemannian manifold (Theorem \ref{thm:approx}), and a statistical estimation theory of deep generative networks for distribution learning (Theorem \ref{thm:stat}).
We make some regularity assumptions on a manifold $\mathcal{M}$ and assume the target data distribution $Q$ is supported on $\mathcal{M}$. The easy-to-sample distribution $\rho$ is taken to be uniform on $ (0,1)^{d+1}$.
\begin{assumption}\label{assumptionmanifold} $\mathcal{M}$ is a $d$-dimensional compact Riemannian manifold isometrically embedded in the ambient space $\mathbb{R}^D$. By compactness, $\mathcal{M}$ is bounded: there exists $M > 0$ such that $\|x\|_\infty \leq M$ for all $x \in \mathcal{M}$. Further, suppose $\mathcal{M}$ has positive reach $\tau > 0$.
\end{assumption}
\begin{assumption}\label{assumptiondistribution} $Q$ is supported on $\mathcal{M}$ and has a density $q$ with respect to the volume measure on $\mathcal{M}$. Further, we assume that $q$ is bounded, i.e., there exist constants $c, C > 0$ such that $c \leq q \leq C$.
\end{assumption}
To justify the representation power of feedforward ReLU networks for learning the target distribution $Q$, we explicitly construct a neural network generator class, such that a neural network function in this generator class can pushfoward $\rho$ to a good approximation of $Q$.
Consider the following generator class $\mathcal{G}_{\textup{NN}}$
\begin{align*}
\mathcal{G}_{\textup{NN}}(L,p,\kappa) = \{&g=[g_1,...,g_{D}]: \mathbb{R}^{d + 1} \to \mathbb{R}^{D} | g_j \textup{ in form } (\ref{nnform}) \text{ with at most } L \textup{ layers} \\
& \textup{ and max width } p, \text{ while } \norm{W_i}_{\infty} \leq \kappa, \norm{b_i}_{\infty} \leq \kappa \textup{ for all } i \in [L], j \in [D] \},
\end{align*}
where $\|\cdot\|_{\infty}$ is the maximum magnitude in a matrix or vector. The width of a neural network is the largest dimension (i.e. number of rows/columns) among the $W_i$'s and the $b_i$'s.
\begin{theorem}[Approximation Power of Deep Generative Models] \label{thm:approx}
Suppose $\mathcal{M}$ and $Q$ satisfy Assumptions \ref{assumptionmanifold} and \ref{assumptiondistribution} respectively.
The easy-to-sample distribution $\rho$ is taken to be uniform on $ (0,1)^{d+1}$. Then there exists a constant $0 < \alpha < 1$
(independent of $D$) such that for any $0 < \epsilon < 1$, there exists a $g_\theta \in \mathcal{G}_{\textup{NN}}(L, p, \kappa)$ with parameters
\begin{align*}
L = &O\left(\log\left(\frac{1}{\epsilon}\right)\right),\hspace{0.1in} p = O\left(D\epsilon^{-\frac{d}{\alpha}}\right), \hspace{0.1in} \kappa = M
\end{align*}
that satisfies
\begin{align*}
W_1((g_{\theta})_\sharp \rho, Q) < \epsilon.
\end{align*}
\end{theorem}
Theorem \ref{thm:approx} demonstrates the representation power of deep neural networks for distributions $Q$ on $\mathcal{M}$, which answers Question {\bf Q1}. For a given accuracy $\epsilon$, there exists a neural network $g_{\theta}$ which pushes the uniform distribution on $(0,1)^{d+1}$ forward to a good approximation of $Q$ with accuracy $\epsilon$. The network size is exponential in the intrinsic dimension $d$. A proof sketch of Theorem \ref{thm:approx} is given in Section \ref{subsection:appxproof}.
We next present a statistical estimation theory to answer Question {\bf Q2}.
\begin{theorem}[Statistical Guarantees of Deep Wasserstein Learning] \label{thm:stat}
Suppose $\mathcal{M}$ and $Q$ satisfy Assumption \ref{assumptionmanifold} and \ref{assumptiondistribution} respectively.
The easy-to-sample distribution $\rho$ is taken to be uniform on $ (0,1)^{d+1}$.
Let $n$ be the number of samples of $X_i \sim Q$. Choose any $\delta > 0$. Set $\epsilon = n^{-\frac{1}{d+\delta}}$ in Theorem \ref{thm:approx} so that the network class $\mathcal{G}_{\textup{NN}}(L,p,\kappa)$ has parameters
\begin{align*}
L = &O\left(\log\left(n^{\frac{1}{d+\delta}}\right)\right),\hspace{0.1in} p = O\left(Dn^{\frac{d}{\alpha(d+\delta)}}\right), \hspace{0.1in} \kappa = M.
\end{align*}
Then the empirical risk minimizer $\hat{g}_n$ given by \eqref{emprisk} has rate
\begin{align*}
\mathbb{E} W_1((\hat{g}_n)_\sharp \rho, Q) \leq C n^{-\frac{1}{d+\delta}},
\end{align*} where $C$ is a constant independent of $n$ and $D$.
\end{theorem}
A proof sketch of Theorem \ref{thm:stat} is presented in Section \ref{subsection:statproof}. Additionally, this result can be easily extended to the noisy case. Suppose we are given $n$ noisy i.i.d. samples $\hat{X}_1,...,\hat{X}_n$ of the form $\hat{X}_i = X_i + \xi_i$, for $X_i \sim Q$ and $\xi_i$ distributed according to some noise distribution. The optimization in \eqref{emprisk} is performed with the noisy empirical distribution $\hat{Q}_n = \frac{1}{n}\sum_{i=1}^n \delta_{\hat{X}_i}$. Then the minimizer $\hat{g}_n$ satisfies
\begin{align*}
\mathbb{E} W_1((\hat{g}_n)_\sharp \rho, Q) \leq C n^{-\frac{1}{d+\delta}} + 2\sqrt{V_\xi},
\end{align*} where $V_\xi = \mathbb{E} \|\xi\|_2^2$ is the variance of the noise distribution. The proof in the noisy case is given in Section \ref{SubsecNoisy}.
\textbf{Comparison to Related Works}. To justify the practical power of generative networks, low-dimensional data structures are considered in \cite{luise2020generalization,schreuder2021statistical,Block2021ANE,chae2021likelihood}. These works consider the generative models in \eqref{eq:population}.
They assume that the high-dimensional data are parametrized by low-dimensional latent parameters. Such assumptions correspond to the manifold model where the manifold is globally homeomorphic to Euclidean space, i.e. the manifold has a single chart.
In \cite{luise2020generalization}, the generative models are assumed to be continuously differentiable up to order $s$. By jointly training the generator and the latent distribution, they proved that the Sinkhorn divergence between the generated distribution and the data distribution converges at a rate depending on the intrinsic data dimension. \cite{chae2021likelihood} and \cite{schreuder2021statistical} assume the special case where the manifold has a single chart. More recently, \cite{Block2021ANE} proposed to estimate the intrinsic dimension of data using the H\"older IPM between some empirical distributions of data.
This theory is based on the statistical convergence of the empirical distribution to the data distribution. As an application to GANs, \cite[Theorem 23]{Block2021ANE} gives the statistical error, while the approximation error is not studied. In these works, the single-chart assumption is restrictive, since a general manifold may require multiple charts.
Recently, \cite{yang2022capacity,huang2022error} showed that GANs can approximate any data distribution (in any dimension) by transforming an absolutely
continuous 1D distribution. The analysis in \cite{yang2022capacity,huang2022error} can be applied to the general manifold model. Their approach requires the GAN to memorize the empirical data distribution using ReLU networks. Thus it is not clear how the designed generator is capable of generating new samples that are different from the training data. In contrast, we explicitly construct an oracle transport map which transforms the low-dimensional easy-to-sample distribution to the data distribution. Our work provides insights about how distributions on a manifold can be approximated by a neural network pushforward of a low-dimensional easy-to-sample distribution without exactly memorizing the data. In comparison, the single-chart assumption in earlier works assumes that an oracle transport naturally exists. Our work is novel in the construction of the oracle transport for a general manifold with multiple charts, and the approximation theory by deep neural networks.
\section{Proof of Main Results}
\subsection{Proof of Approximation Theory in Theorem \ref{thm:approx}}
\label{subsection:appxproof}
To prove Theorem \ref{thm:approx}, we explicitly construct an oracle transport $g^*$ pushing $\rho$ onto $Q$, i.e. $g^*_\sharp \rho = Q$. Further this oracle will be piecewise $\alpha$-H\"older continuous for some $\alpha \in (0,1)$.
\begin{lemma}\label{lemma:oracleappx}
Suppose $\mathcal{M}$ and $Q$ satisfy Assumption \ref{assumptionmanifold} and \ref{assumptiondistribution} respectively.
The easy-to-sample distribution $\rho$ is taken to be uniform on $(0,1)^{d+1}$.
Then there exists a function $g^*: (0,1)^{d+1} \to \mathcal{M}$ such that $Q = g^*_\sharp \rho$ where \begin{equation}g^*(x) =\textstyle \sum_{j=1}^J \mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1) g^*_j(x_{2:d+1})
\label{eqgstar}
\end{equation}
for some $\alpha$-H\"older ($0<\alpha<1$) continuous functions $g_1^*, \dots, g_J^*$ and some constants $0 = \pi_0 < \pi_1 < \dots < \pi_J = 1$.
\end{lemma}
\label{proof:oracleappx}
\begin{proof}
We construct a transport map $g^*: (0,1)^{d+1} \to \mathcal{M}$ that can be approximated by neural networks. First, we decompose the manifold into overlapping geodesic balls. Next, we pull the local distributions on these balls back to the tangent space, which produces $d$-dimensional tangent distributions. Then, we apply optimal transport theory to these tangent distributions to produce maps from the source distribution on $(0,1)^{d}$ to the appropriate local (geodesic-ball) distributions on the manifold. Finally, we glue these local maps together using indicator functions and a uniform random sample from $(0,1)$. We proceed with the first step of decomposing the manifold.
\textbf{Step 1: Overlapping ball decomposition.} Recall that $\mathcal{M}$ is a compact manifold with reach $\tau >0$. Then the injectivity radius of $\mathcal{M}$ is greater than or equal to $\pi \tau$ \citep{aamari2019estimating}. Set $r = \frac{\pi \tau}{2}$. For each $c \in \mathcal{M}$, define an open set $U_c = \exp_c(B_{T_c \mathcal{M}}(0, r)) \subseteq \mathcal{M}$. Since the collection $\{ U_c : c \in \mathcal{M} \}$ forms an open cover of $\mathcal{M}$ (in $\mathbb{R}^D$), by the compactness of $\mathcal{M}$ we can extract a finite subcover, which we denote as $\{ U_{c_j} \}_{j=1}^J$. For convenience, we will write $U_j = U_{c_j}$.
\textbf{Step 2: Defining local lower-dimensional distributions.} On each $U_j$, we define a local distribution $Q_j$ with density $q_j$ via $$q_j(x) = \frac{q(x)}{Q(U_j)}\mathds{1}_{U_j}(x).$$ Set $K(x) = \sum_{j=1}^J \mathds{1}_{U_j}(x)$ as the number of balls $U_j$ containing $x$. Note $1 \leq K(x) \leq J$ for all $x \in \mathcal{M}$. Now define the distribution $\overline{Q}_j$ with density $\overline{q}_j$ given by
\[ \overline{q}_j(x) = \frac{\frac{1}{K(x)} q_j(x) \mathds{1}_{U_j}(x)}{\int_{U_j}\frac{1}{K(x)} q_j(x)d\mathcal{H}} . \]
Write $K_j = \int_{U_j}\frac{1}{K(x)}q_j(x)d\mathcal{H}$ for the normalizing constant. Define $\tilde{q}_j(v) = (\overline{q}_j \circ \exp_{c_j})(v)\sqrt{\det g_{kl}^{c_j}(v)}$, where $g_{kl}^{c_j}$ is the Riemannian metric at $c_j$. The factor $\sqrt{\det g_{kl}^{c_j}(v)}$ can be thought of as the Jacobian of the exponential map, denoted by $|J_{\exp_{c_j}}(v)|$ in the following step. Then $\tilde{q}_j$ is a density on $\tilde{U}_j = \exp^{-1}_{c_j}(U_j)$, which is a ball of radius $\frac{\pi \tau}{2}$, since
\begin{align*}
1 = \int_{U_j}\overline{q}_j(x)d\mathcal{H} = \int_{\tilde{U}_j}\sqrt{\det g_{kl}^{c_j}(v)} \overline{q}_j(\exp_{c_j}(v))dv = \int_{\tilde{U}_j} \tilde{q}_j(v)dv
\end{align*}
Let $\tilde{Q}_j$ be the distribution in $\mathbb{R}^d$ with density $\tilde{q}_j$. By construction, we can write \begin{equation}\overline{Q}_j = (\exp_{c_j})_\sharp \tilde{Q}_j.
\label{ProofLemma1EqComp1}
\end{equation}
\textbf{Step 3: Constructing the local transport.} We have that $\exp_{c_j}^{-1}$ is bi-Lipschitz on $U_j$ and hence its Jacobian is upper bounded. Since $|J_{\exp_{c_j}}(v)| = \frac{1}{|J_{\exp_{c_j}^{-1}}(x)|}$, we know that $|J_{\exp_{c_j}}|$ is lower bounded. Since $q_j$ is bounded away from $0$, this means $\tilde{q}_j$ is also lower bounded. Now the distribution $\tilde{Q}_j$ supported on $\tilde{U}_j = B(0,\frac{\tau \pi}{2})$ fulfills the requirements of our optimal transport result: (1) its density $\tilde{q}_j$ is lower and upper bounded; (2) the support $B(0,\frac{\tau \pi}{2})$ is convex.
Taking our cost to be $c(x,y) = \frac{1}{2}\|x-y\|^2$ (i.e. squared Euclidean distance), via Proposition \ref{proposition:OT} we can find an optimal transport map $T_j$ such that
\begin{equation}
(T_j)_{\sharp}\rho_d = \tilde{Q}_j
\label{ProofLemma1EqComp3}
\end{equation} where $\rho_d$ is uniformly distributed on $(0,1)^d$. Furthermore, $T_j \in C^{\alpha_j}$ for some $\alpha_j \in (0,1)$. Then we can construct a local transport onto $U_j$ via
\begin{equation}g_j^* = \exp_{c_j} \circ T_j
\label{eq:gjstar}
\end{equation}
which pushes $\rho_d$ forward to $\overline{Q}_j$. Since $g_j^*$ is the composition of a Lipschitz map with an $\alpha_j$-H\"older continuous map, it is itself $\alpha_j$-H\"older continuous.
\begin{figure}[h!]
\centering
\includegraphics[height=1.25in]{figures/simple_proof.jpg}
\caption{Local transport $g_j^*$ in \eqref{eq:gjstar} mapping $\rho_d$ on $(0,1)^d$ to a local distribution $\overline{Q}_j$ supported on $U_j$.}
\label{fig:simple_flowchart}
\end{figure}
\textbf{Step 4: Assembling the global transport.} It remains to patch together the local distributions $\overline{Q}_j$ to form $Q$. Define $\eta_j = K_jQ(U_j)$. Notice
\begin{align*}
\sum_{j=1}^J \eta_j \overline{q}_j(x) &= \sum_{j=1}^J K_jQ(U_j) \frac{\frac{1}{K(x)} q_j(x) \mathds{1}_{U_j}(x)}{K_j} = \sum_{j=1}^J Q(U_j) \frac{\frac{1}{K(x)} q(x) \mathds{1}_{U_j}(x)}{Q(U_j)} \\
&= \sum_{j=1}^J \frac{1}{K(x)} q(x) \mathds{1}_{U_j}(x) = q(x)\frac{1}{K(x)}\sum_{j=1}^J \mathds{1}_{U_j}(x) = q(x).
\end{align*} Hence it must be that $\sum_{j=1}^J \eta_j = 1$. Set $\alpha = \min_{j \in [J]} \alpha_j$. We can now define the oracle $g^*$. Let $x \in (0,1)^{d+1}$. Write
\begin{align}
g^*(x) =\textstyle \sum_{j=1}^J \mathds{1}_{(\pi_{j-1}, \pi_j) }(x_1)g_j^*(x_{2:d+1}),
\label{eq:gstar}
\end{align} where $x_1$ is the first coordinate, $x_{2:d+1}$ are the remaining coordinates, and $\pi_j = \sum_{i=1}^{j}\eta_i$. Let $Z \sim \rho$. Then $g^*(Z) \sim Q$. We see this as follows. For measurable $A \subseteq \mathcal{M}$ we can compute
\begin{align*}
\mathbb{P}(g^*(Z) \in A) &= \sum_{j=1}^J\mathbb{P}(\pi_{j-1} < Z_1 < \pi_j) \mathbb{P}(g^*_j(Z_{2:d+1}) \in A \cap U_j) = \sum_{j=1}^J \eta_j \overline{Q}_j(A \cap U_j) \\
&= \sum_{j=1}^J \eta_j \int_{A}\overline{q}_j(x) d\mathcal{H} = \int_A \sum_{j=1}^J \eta_j \overline{q}_j(x)d\mathcal{H} = \int_A q(x)d\mathcal{H} = Q(A)
\end{align*} which completes the proof.
\end{proof}
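The gluing mechanism in \eqref{eq:gstar} is easy to mimic computationally: the first uniform coordinate selects a chart index $j$ according to the weights $\eta_j$, and the remaining coordinates are fed to the local map. The sketch below is schematic (ours); the local maps are hypothetical placeholders standing in for $\exp_{c_j} \circ T_j$, which are not available in closed form.
\begin{verbatim}
import numpy as np

def glued_transport(x, local_maps, eta):
    """Evaluate sum_j 1_{(pi_{j-1}, pi_j)}(x_1) g_j(x_{2:d+1}) for one sample x."""
    pi = np.concatenate([[0.0], np.cumsum(eta)])      # thresholds 0 = pi_0 < ... < pi_J = 1
    j = np.searchsorted(pi, x[0], side="right") - 1   # chart index selected by x_1
    return local_maps[j](x[1:])                       # local transport applied to x_{2:d+1}

# Toy usage with d = 1 and J = 2: placeholder local maps onto two arcs of the unit circle.
local_maps = [
    lambda t: np.array([np.cos(np.pi * t[0]), np.sin(np.pi * t[0])]),
    lambda t: np.array([np.cos(np.pi * (1.0 + t[0])), np.sin(np.pi * (1.0 + t[0]))]),
]
eta = np.array([0.5, 0.5])
rng = np.random.default_rng(0)
samples = np.array([glued_transport(rng.uniform(size=2), local_maps, eta)
                    for _ in range(1000)])
print(samples.shape)   # (1000, 2); points on the unit circle, roughly half from each arc
\end{verbatim}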
We have found an oracle $g^*$ which is piecewise H\"older continuous such that $g^*_\sharp \rho = Q$. We can design a neural network $g_\theta$ to approximate this oracle $g^*$. Now in order to minimize $W_1((g_\theta)_\sharp \rho, Q) = W_1((g_\theta)_\sharp \rho, g^*_\sharp \rho)$, we show it suffices to have $g_\theta$ approximate $g^*$ in $L^1(\rho)$.
\begin{lemma}
\label{lemma:w1}
Let $\mu$ be an absolutely continuous probability distribution on a set $Z \subseteq \mathbb{R}^d$, and let $f, g: Z \rightarrow \mathbb{R}^m$ be transport maps. Then $$W_1(f_\sharp \mu, g_\sharp \mu) \leq C\|f-g\|_{L^1(\mu)}$$ for some $C > 0$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:w1}]
The vector-valued functions $f$ and $g$ output $m$-dimensional vectors. Note that $\|f-g\|_{L^1(\mu)} = \sum_{i=1}^m \| f_i - g_i \|_{L^1(\mu)}$, where $f_i$ and $g_i$ denote the $i$th component functions of $f$ and $g$, respectively. Then we can compute
\begin{align*}
W_1(f_\sharp \mu, g_\sharp \mu)
&= \sup_{\phi \in \text{Lip}_1(\mathbb{R}^m)} \left| \int \phi(y) \, d(f_\sharp \mu) - \int \phi(y) \, d(g_\sharp \mu) \right| \\
&= \sup_{\phi \in \text{Lip}_1(\mathbb{R}^m)} \left| \int \phi(f(x)) - \phi(g(x)) \, d\mu \right| \\
&\leq \sup_{\phi \in \text{Lip}_1(\mathbb{R}^m)} \int \left|\phi(f(x)) - \phi(g(x)) \right| \, d\mu \\
&\leq \int_Z \left\|f(x) - g(x) \right\|_2 \, d\mu \\
&\leq \int_Z C \|f(x)-g(x)\|_1 \, d\mu \\
&= C \|f - g\|_{L^1(\mu)},
\end{align*}
since $\phi$ is Lipschitz with constant $1$ and all norms are equivalent in finite dimensions. In particular, $C=1$ here.
\end{proof}
We now prove Theorem \ref{thm:approx}.
\begin{proof}[Proof of Theorem \ref{thm:approx}]
By Lemma \ref{lemma:oracleappx}, there exists a transformation $g^*(x) = \sum_{j=1}^J \mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1) g^*_j(x_{2:d+1})$ such that $g^* _\sharp \rho = Q$.
By Lemma \ref{lemma:w1}, it suffices to approximate $g^*$ with a neural network $g_\theta \in \mathcal{G}_{\textup{NN}}(L, p, \kappa)$ in $L^1$ norm, with a given accuracy $\epsilon > 0$. Let $(g^*)^{(i)}$ denote the $i$th component of the vector valued function $g^*$. Then it suffices to approximate
\[ (g^*)^{(i)}(x) = \sum_{j=1}^J \mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1) (g^*_j)^{(i)}(x_{2:d+1}) \]
for each $1 \leq i \leq D$, where $(g^*_j)^{(i)}$ denotes the $i$th component of the function $g^*_j$.
We construct the approximation of $(g^*)^{(i)}$ by the function
\begin{equation}
\label{eq:gtheta}
(g_\theta)^{(i)}(x) = \sum_{j=1}^J \tilde{\times}^{\delta_2} \left(\tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}(x_1), (g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) \right),
\end{equation}
where $\tilde{\times}^{\delta_2}$ is a ReLU network approximation to the multiplication operation with $\delta_2$ accuracy, $\tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}$ is a ReLU network approximation to the indicator function with $\delta_1$ accuracy, and $(g_{j, \theta }^{\delta_3})^{(i)}$ is a ReLU network approximation to $(g_j^*)^{(i)}$ with $\delta_3$ accuracy. We construct these using the approximation theory outlined in Appendix \ref{sec:appendixapprox}.
First, we obtain $\tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}$ via an application of Lemma \ref{lemma:indicator}. Next, we obtain $\tilde{\times}^{\delta_2}$ from an application of Lemma \ref{lemma:times}. Finally, we discuss $g_{j, \theta}^{\delta_3}$. Let $j \in [J]$. To approximate the H\"older function $g_j^*$, we use the following Lemma \ref{lemma:main} that is proved in Appendix \ref{sec:appendixapprox}. Similar approximation results can be found in \cite{shen2022optimal} and \cite{ohn2019smooth} as well. In Lemma \ref{lemma:main}, our approximation error is in $L^1$ norm and all weight parameters are upper bounded by a constant. In comparison, the error in \cite{ohn2019smooth} is in $L^\infty$ norm and the weight parameter increases as $\epsilon$ decreases.
\begin{lemma}
\label{lemma:main}
Fix $M \geq 2$. Suppose $f \in C^\alpha([0,1]^d)$, $\alpha \in (0, 1]$, with $\|f\|_{L^\infty} < M$. Let $0 < \epsilon < 1$. Then there exists a function $\Phi$ implementable by a ReLU network such that $$\|f - \Phi\|_{L^1} < \epsilon.$$ The ReLU network has depth at most $c_1 \log\left(\frac{1}{\epsilon} \right)$, width at most $c_2 \epsilon^{-\frac{d}{\alpha}}$, and weights bounded by $M$ (where $c_1$ and $c_2$ are constants independent of $\epsilon$).
\end{lemma}
We can apply Lemma \ref{lemma:main} to $(g_j^*)^{(i)}$ for all $1 \leq j \leq J$ and $1 \leq i \leq D$, since they are all elements of $C^{\alpha}((0,1)^d)$ and elements of $C^{\alpha}((0,1)^d)$ can be extended to $C^{\alpha}([0,1]^d)$.
Thus there exists a neural network $(g_{j, \theta}^{\delta_3})^{(i)} \in \mathcal{G}_{\textup{NN}}(L, p, \kappa)$ with parameters given as above such that $$\|(g_j^*)^{(i)} - (g_{j, \theta}^{\delta_3})^{(i)}\|_{L^1} < \delta_3.$$
The goal is now to show the $L^1$ distance between $g_\theta$ (as defined in \eqref{eq:gtheta}) and $g^*$ is small. We compute
\begin{align*}
& \quad \|g^*-g_\theta\|_{L^1} \\
&= \sum_{i=1}^D \|(g^*)^{(i)} - (g_\theta)^{(i)}\|_{L^1} \\
&= \sum_{i=1}^D \int_{(0,1)^{d+1}} \left|(g^*)^{(i)}(x) - (g_{\theta})^{(i)}(x)\right| \, dx \\
&\leq \sum_{i=1}^D \sum_{j=1}^J \int_{(0,1)^{d+1}}\left| \tilde{\times}^{\delta_2} \left(\tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}(x_1), (g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) \right) - \mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1) (g^*_j)^{(i)}(x_{2:d+1})\right| \, dx\\
&\leq \sum_{i=1}^D \sum_{j=1}^J \int_{(0,1)^{d+1}}\left| \tilde{\times}^{\delta_2} \left(\tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}(x_1), (g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) \right) - \tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}(x_1)(g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) \right| \, dx\\
&\quad+ \sum_{i=1}^D \sum_{j=1}^J \int_{(0,1)^{d+1}}\left|\tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}(x_1)(g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) - \mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1)(g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) \right| \, dx \\
&\quad+ \sum_{i=1}^D \sum_{j=1}^J \int_{(0,1)^{d+1}}\left|\mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1)(g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) - \mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1)(g_j^*)^{(i)}(x_{2:d+1}) \right| \, dx \\
&= \sum_{i=1}^D \sum_{j=1}^J \left(\text{(I) + (II) + (III)}\right)
\end{align*}
Each of the three terms are easily handled as follows.
\begin{enumerate}
\item[(I)] By construction of $\tilde{\times}^{\delta_2}$ in Lemma \ref{lemma:times}, we have that
\begin{align*}
\text{(I)}
&= \int_{(0,1)^{d+1}} \left| \tilde{\times}^{\delta_2} \left(\tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}(x_1), (g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) \right) - \tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}(x_1)(g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) \right| \, dx \\
&\leq \int_{(0,1)^{d+1}} \delta_2 \, dx = \delta_2.
\end{align*}
\item[(II)] By construction of $\tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}$ in Lemma \ref{lemma:indicator}, we have that
\begin{align*}
\text{(II)}
&= \int_{(0,1)^{d+1}} \left|\tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}(x_1)(g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) - \mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1)(g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) \right| \, dx \\
&\leq \left\|(g_{j, \theta}^{\delta_3})^{(i)} \right\|_{\infty} \int_0^1 \left|\tilde{\mathds{1}}^{\delta_1}_{(\pi_{j-1}, \pi_j)}(x_1) - \mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1)\right| \, dx \\
&\leq M \left\| \tilde{\mathds{1}}^{\delta_1}_{(a, b)} - \mathds{1}_{(a, b)}\right\|_{L^1} \\
&= M \delta_1.
\end{align*}
\item[(III)] By construction of $(g_{j, \theta}^{\delta_3})^{(i)}$ from Lemma \ref{lemma:main}, we have that
\begin{align*}
\text{(III)}
&= \int_{(0,1)^{d+1}} \left|\mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1)(g_{j, \theta}^{\delta_3})^{(i)}(x_{2:d+1}) - \mathds{1}_{(\pi_{j-1}, \pi_j)}(x_1)(g_{j}^*)^{(i)}(x_{2:d+1}) \right| \, dx \\
&= \|\mathds{1}_{(\pi_{j-1}, \pi_j)}\|_\infty \int_{(0,1)^{d}} \left|(g_{j, \theta}^{\delta_3})^{(i)}(x) - (g_{j}^*)^{(i)}(x) \right| \, dx \\
&= \left\| (g_{j, \theta}^{\delta_3})^{(i)} - (g_{j}^*)^{(i)} \right\|_{L^1} \\
&\leq \delta_3.
\end{align*}
\end{enumerate}
As a result, we have that
\[ \|g^* - g_{\theta}\|_{L^1} \leq \sum_{i=1}^D \sum_{j=1}^J \text{(I)} + \text{(II)} + \text{(III)} \leq \sum_{i=1}^D \sum_{j=1}^J \delta_2 + M\delta_1 + \delta_3 = DJ(M\delta_1 + \delta_2 + \delta_3). \]
By selecting $\delta_1 < \frac{\epsilon}{3DJM}$, $\delta_2 < \frac{\epsilon}{3DJ}$, and $\delta_3 < \frac{\epsilon}{3DJ}$, we obtain that $\|g^* - g_{\theta}\|_{L^1} < \epsilon$.
To complete the proof, we note that $g_\theta$ can be exactly represented by a neural network in $\mathcal{G}_{\textup{NN}}(L, p, \kappa)$ with parameters
\begin{align*}
L = &O\left(\log\left(\frac{1}{\epsilon}\right)\right),\hspace{0.1in} p = O\left(D\epsilon^{-\frac{d}{\alpha}}\right), \hspace{0.1in} \kappa = M.
\end{align*}
\end{proof}
\subsection{Proof of Statistical Estimation Theory in Theorem \ref{thm:stat}}
\label{subsection:statproof}
The proof of Theorem \ref{thm:stat} is facilitated by the common bias-variance inequality, presented here as a lemma.
\begin{lemma}
\label{lemma:biasvariance}
Under the same assumptions of Theorem \ref{thm:stat}, we have
\begin{align}
\label{eq:biasvariance}
\mathbb{E} W_1((\hat{g}_n)_{\sharp}\rho, Q) \leq \inf_{g_{\theta} \in \mathcal{G}_{\textup{NN}} }W_1((g_{\theta})_{\sharp}\rho, Q) + 2 \mathbb{E} W_1(Q_n,Q)
\end{align} where $Q_n$ is the clean empirical distribution.
\end{lemma}
\begin{proof} Recalling the definition of $\hat{g}_n$ as the empirical risk minimizer, we compute
\begin{align*}
\mathbb{E} W_1((\hat{g}_n)_{\sharp}\rho, Q)
&\leq \mathbb{E} W_1((\hat{g}_n)_{\sharp}\rho, Q_n) + \mathbb{E} W_1(Q_n, Q) \\
&= \mathbb{E} \inf_{g_\theta \in \mathcal{G}_{\textup{NN}}} W_1((g_\theta)_{\sharp}\rho, Q_n) + \mathbb{E} W_1(Q_n, Q)\\
&\leq \mathbb{E} \inf_{g_\theta \in \mathcal{G}_{\textup{NN}}} W_1((g_\theta)_{\sharp}\rho, Q) + 2\mathbb{E} W_1(Q_n, Q) \\
\end{align*}
since $W_1((\hat{g}_n)_\sharp \rho, Q_n) = \inf_{g_\theta \in \mathcal{G}_{\rm NN}} W_1((g_\theta)_\sharp \rho, Q_n)$ from \eqref{emprisk}.
\end{proof}
In the right hand side of \eqref{eq:biasvariance}, the first term is the approximation error and the second term is the statistical error. This naturally decomposes the problem into two parts: one controlling the approximation error and the other controlling the statistical error. The bias term can be controlled via Theorem \ref{thm:approx}. To control convergence of the empirical distribution $Q_n$ to $Q$ we leverage the existing theory \citep{Weed2019SharpAA} to obtain the following lemma.
\begin{lemma}
\label{lemma:statconverge}
Under the same assumptions of Theorem \ref{thm:stat}, for all $\delta > 0$, $\exists C_\delta > 0$ such that
\begin{equation}
\label{eq:statrate}
\mathbb{E} \left[W_1(Q, Q_n) \right] \leq C_\delta n^{-\frac{1}{d + \delta}}.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:statconverge}]
Let $\delta > 0$. Consider the manifold $\mathcal{M}$ with the geodesic distance as a metric space. When \cite[Theorem 1]{Weed2019SharpAA} is applied to $\mathcal{M}$ with the geodesic distance, we have that $$\mathbb{E} \left[W^{\mathcal{M}}_1(Q, Q_n) \right] \leq C_\delta n^{-\frac{1}{d + \delta}}$$ for some constant $C_\delta$ independent of $n$. Here, $W^\mathcal{M}_1$ is the $1$-Wasserstein distance on $\mathcal{M}$ with the geodesic distance. It suffices to show that $$W_1^{\mathbb{R}^D}(Q, Q_n) = W_1(Q, Q_n) \leq W_1^{\mathcal{M}}(Q, Q_n).$$ Let $\text{Lip}_1(\mathbb{R}^D)$ and $\text{Lip}_1(\mathcal{M})$ denote the set of $1$-Lipschitz functions defined on $\mathcal{M}$ with respect to the Euclidean distance on $\mathbb{R}^D$ and geodesic distance on $\mathcal{M}$ respectively. But note that $\text{Lip}_1(\mathbb{R}^D) \subseteq \text{Lip}_1(\mathcal{M})$ because for any $f \in \text{Lip}_1(\mathbb{R}^D)$ we have
\begin{align*}
\frac{|f(x)-f(y)|}{d_{\mathcal{M}}(x,y)} \leq \frac{|f(x)-f(y)|}{\|x-y\|_{\mathbb{R}^D}} \leq 1
\end{align*} as $\|x-y\|_{\mathbb{R}^D} \leq d_{\mathcal{M}}(x,y)$ under an isometric embedding, and hence $f \in \text{Lip}_1(\mathcal{M})$. Thus
$$
\mathbb{E} \left[W_1(Q, Q_n)\right] \leq \mathbb{E} \left[W_1^{\mathcal{M}}(Q, Q_n)\right] \leq C_\delta n^{-\frac{1}{d + \delta}}.
$$
\end{proof}
Finally, we prove our statistical estimation result in Theorem \ref{thm:stat}.
\begin{proof}[Proof of Theorem \ref{thm:stat}]
Choose $\delta > 0$. Recall from Lemma \ref{lemma:biasvariance} we have
\begin{align*}
\mathbb{E} W_1((\hat{g}_n)_{\sharp}\rho, Q) \leq \inf_{g_\theta \in \mathcal{G}_{\textup{NN}}} W_1((g_\theta)_{\sharp}\rho, Q) + 2\mathbb{E} W_1(Q_n, Q)
\end{align*} The first term is the approximation error, which can be made smaller than any prescribed accuracy $\epsilon$: Theorem \ref{thm:approx} shows the existence of a neural network function $g_\theta$ with $O\left(\log\left(\frac{1}{\epsilon}\right)\right)$ layers and $O(D\epsilon^{-d/\alpha}\log(\frac{1}{\epsilon}))$ neurons such that $W_1((g_\theta)_\sharp \rho, Q) \leq \epsilon$ for any $\epsilon > 0$. We choose $\epsilon = n^{-\frac{1}{d+\delta}}$ to optimally balance the approximation error and the statistical error. The second term is the statistical error, for which we recall from Lemma \ref{lemma:statconverge} that $\mathbb{E} \left[ W_1(Q_n,Q) \right] \leq C_\delta n^{-\frac{1}{d+\delta}}$ for some constant $C_\delta$.
Thus we have
\begin{align*}
\mathbb{E} W_1((\hat{g}_n)_\sharp \rho, Q) \leq n^{-\frac{1}{d+\delta}} + 2C_\delta n^{-\frac{1}{d+\delta}} = C n^{-\frac{1}{d+\delta}}
\end{align*}
by setting $C = 1 + 2C_\delta$. This concludes the proof.
\end{proof}
\subsection{Controlling the noisy samples}
\label{SubsecNoisy}
In the noisy setting, we are given $n$ noisy i.i.d. samples $\hat{X}_1,...,\hat{X}_n$ of the form $\hat{X}_i = X_i + \xi_i$, for $X_i \sim Q$ and $\xi_i$ distributed according to some noise distribution. The optimization in \eqref{emprisk} is performed with the noisy empirical distribution $\hat{Q}_n = \frac{1}{n}\sum_{i=1}^n \delta_{\hat{X}_i}$.
\begin{lemma}
\label{lemma:noisybiasvariance}
Under the same assumptions of Theorem \ref{thm:stat} and in the noisy setting, we have
\begin{align}
\label{eq:biasvariancenoise}
\mathbb{E} W_1((\hat{g}_n)_{\sharp}\rho, Q) \leq \inf_{g_{\theta} \in \mathcal{G}_{\textup{NN}} }W_1((g_{\theta})_{\sharp}\rho, Q) + 2 \mathbb{E} W_1(Q_n,Q) + 2\mathbb{E} W_1(\hat{Q}_n, Q_n)
\end{align} where $\hat{Q}_n$ is the noisy empirical distribution and $Q_n$ is the clean empirical distribution.
\end{lemma}
\begin{proof} Recalling the definition of $\hat{g}_n$ as the empirical risk minimizer, we compute
\begin{align*}
\mathbb{E} W_1((\hat{g}_n)_{\sharp}\rho, Q)
&\leq \mathbb{E} W_1((\hat{g}_n)_{\sharp}\rho, \hat{Q}_n) + \mathbb{E} W_1(\hat{Q}_n, Q) \\
&\leq \mathbb{E} \inf_{g_\theta \in \mathcal{G}_{\textup{NN}}} W_1((g_\theta)_{\sharp}\rho, \hat{Q}_n) + \mathbb{E} W_1(Q_n, Q) + \mathbb{E} W_1(\hat{Q}_n, Q_n)\\
&\leq \inf_{g_\theta \in \mathcal{G}_{\textup{NN}}} W_1((g_\theta)_{\sharp}\rho, Q) + 2\mathbb{E} W_1(Q_n, Q) + 2\mathbb{E} W_1(\hat{Q}_n, Q_n)
\end{align*}
since $W_1((\hat{g}_n)_\sharp \rho, \hat{Q}_n) = \inf_{g_\theta \in \mathcal{G}_{\rm NN}} W_1((g_\theta)_\sharp \rho, \hat{Q}_n)$ from \eqref{emprisk}.
\end{proof}
\begin{lemma}\label{lemma:noiseconverge}
Write $ W_1(\hat{Q}_n, Q_n) = W_1^{\mathbb{R}^D}(\hat{Q}_n, Q_n) $. In the noisy setting, we express $\hat{X}_i = X_i + \xi_i$ where $X_i$ is drawn from $Q$ and then noised with $\xi_i$ drawn from some noise distribution. Then
\begin{align*}
\mathbb{E}[W_1(Q_n, \hat{Q}_n)] \leq \sqrt{V_\xi}
\end{align*} where $V_\xi = \mathbb{E}\|\xi\|_2^2$ which is the variance of the noise.
\end{lemma}
\begin{proof}
Let $\hat{X}_i, X_i$ be samples defining $\hat{Q}_n, Q_n$ respectively. We have $\hat{X}_i = X_i + \xi_i$ where $\xi$ is the noise term. Compute
\begin{align*}
\mathbb{E} W_1(\hat{Q}_n, Q_n) &= \mathbb{E}\sup_{f \in \text{Lip}_1(\mathbb{R}^D)}\hat{Q}_n(f)-Q_n(f) = \mathbb{E}\sup_{f \in \text{Lip}_1(\mathbb{R}^D)}\frac{1}{n}\sum_{i=1}^n f(\hat{X}_i) - f(X_i) \\
&\leq \mathbb{E}\sup_{f \in \text{Lip}_1(\mathbb{R}^D)}\frac{1}{n}\sum_{i=1}^n |f(\hat{X}_i) - f(X_i)| = \mathbb{E}\sup_{f \in \text{Lip}_1(\mathbb{R}^D)}\frac{1}{n}\sum_{i=1}^n |f(X_i + \xi_i) - f(X_i)| \\
&\leq \mathbb{E}\sup_{f \in \text{Lip}_1(\mathbb{R}^D)}\frac{1}{n}\sum_{i=1}^n \|\xi_i\|_2 = \mathbb{E} \|\xi\|_2 \leq \sqrt{V_\xi}
\end{align*} where the final inequality follows from Jensen's inequality.
\end{proof}
We conclude in the noisy setting that
\begin{align*}
\mathbb{E} W_1((\hat{g}_n)_\sharp \rho, Q) \leq \epsilon_{\rm appx} + 2 C_\delta n^{-\frac{1}{d+\delta}} + 2\sqrt{V_\xi} \leq Cn^{-\frac{1}{d+\delta}} + 2 \sqrt{V_\xi}
\end{align*} after balancing the approximation error $\epsilon_{\rm appx}$ appropriately.
\section{Conclusion}
We have established approximation and statistical estimation theories of deep generative models for estimating distributions on a low-dimensional manifold.
The statistical convergence rate in this paper depends on the intrinsic dimension of data.
In light of the manifold hypothesis, which suggests many natural datasets lie on low dimensional manifolds, our theory rigorously explains why deep generative models defy existing theoretical sample complexity estimates and the curse of dimensionality. In fact, deep generative models are able to learn low-dimensional geometric structures of data, and allow for highly efficient sample complexity independent of the ambient dimension. Meanwhile the size of the required network scales
exponentially with the intrinsic dimension.
Our theory imposes very little assumption on the target density $Q$, requiring only that it admit a density $q$ with respect to the volume measure and that $q$ is upper and lower bounded. In particular we make no smoothness assumptions on $q$. This is practical, as we do not expect existing natural datasets to exhibit high degrees of smoothness.
In this work we assume access to computation of the $W_1$ distance. However during GAN training a discriminator is trained for this purpose. It would be of interest for future work to investigate the low-dimensional role of such discriminator networks which approximate the $W_1$ distance in practice.
Additionally, we provide an alternative approach to construct the oracle transport by decomposing the manifold into Voronoi cells and transporting the easy-to-sample distribution onto each disjoint cell directly in Appendix \ref{appsecvoronoi}.
\bibliographystyle{plainnat}
\section{Introduction}
Neural network based PDE solvers have recently experienced an enormous growth in popularity and attention within the scientific community following the works of~\cite{weinan2017deep, han2018solving, sirignano2018dgm, weinan2018deep, raissi2019physics, li2021fourier}.
In this article we focus on methods, which parametrize the solution of the PDE by a neural network and use a formulation of the PDE in terms of a minimization problem to construct a loss function used to train the network.
The works following this ansatz can be divided into two approaches: (a) minimization of the PDE residual in strong form, known as \emph{physics-informed neural networks} or the \emph{deep Galerkin method}, see for example~\cite{dissanayake1994neural, lagaris1998artificial, sirignano2018dgm, raissi2019physics}; (b) if one exists, leveraging the variational formulation to obtain a loss function, known as the \emph{deep Ritz method}~\cite{weinan2018deep}; see also~\cite{beck2020overview, weinan2021algorithms} for in-depth reviews of these methods.
One central reason for the rapid development of these methods is their mesh-free nature, which allows easy incorporation of data, and their promise to be effective in high-dimensional and parametric problems that render mesh-based approaches infeasible. Nevertheless, in practice, when these approaches are tackled directly with well-established optimizers like GD, SGD, Adam or BFGS, they often fail to produce accurate solutions even for problems of small size. This phenomenon is increasingly well documented in the literature, where it is attributed to insufficient optimization; a variety of optimization procedures have been suggested, yet accuracy better than on the order of $10^{-3}$ relative $L^2$ error can rarely be achieved \cite{hao2021efficient, wang2021understanding, wang2022and, krishnapriyan2021characterizing, davi2022pso, zeng2022competitive}. The only exceptions are ansatzes that are conceptually different from direct gradient-based optimization, more precisely greedy algorithms and a reformulation as a min-max game~\cite{hao2021efficient, zeng2022competitive}.
\paragraph{Contributions}
We provide a simple, yet effective optimization method that achieves high accuracy for a range of PDEs when combined with the PINN ansatz.
Although we evaluate the approach on PDE related tasks, it can be applied to a wide variety of training problems.
Our main contributions can be summarized as follows:
\begin{itemize}
\item We introduce the notion of \emph{energy natural gradients}.
This natural gradient is defined via the Hessian of the training objective in function space.
\item We show that an energy natural gradient update in parameter space corresponds to a Newton update in function space. In particular, for quadratic energies the function space update approximately moves in the direction of the residual $u^\ast-u_\theta$.
\item We demonstrate that the energy natural gradient, combined with a simple line search, achieves an accuracy that is several orders of magnitude higher than that of standard optimizers like GD, Adam, or a natural gradient defined via Sobolev inner products.
These examples include PINN formulations of stationary and evolutionary PDEs as well as the deep Ritz formulation of a nonlinear ODE.
\end{itemize}
\paragraph{Related Works}
Here, we
focus on improving the training process and thereby the accuracy of PINNs.
It has been observed that the gradient contributions from the PDE residual, the boundary terms and the initial conditions often have imbalanced magnitudes.
To address this, different weighting strategies for the individual components of the loss have been developed~\cite{wang2021understanding,van2022optimally,wang2022and}.
Albeit improving PINN training, none of the mentioned works reports relative $L^2$ errors below $10^{-4}$.
The choice of the collocation points in the discretization of PINN losses has been investigated in a variety of works~\cite{lu2021deepxde, nabian2021efficient, daw2022rethinking,zapf2022investigating, wang2022respecting, wu2023comprehensive}.
Common to all these studies is the observation that collocation points should be concentrated in regions of high PDE residual; we refer to~\cite{daw2022rethinking, wu2023comprehensive} for extensive comparisons of the different sampling strategies proposed in the literature. Further, for time-dependent problems, curriculum learning is reported to mitigate training pathologies associated with solving evolution problems over a long time horizon~\cite{wang2022respecting, krishnapriyan2021characterizing}. Again, while all aforementioned works considerably improve PINN training, in none of these contributions were errors below $10^{-4}$ achieved.
Different optimization strategies, which are conceptionally different to a direct gradient based optimization of the objective, have been proposed in the context of PINNs.
For instance, greedy algorithms were used to incrementally build a shallow neural network neuron by neuron, which led to high accuracy, up to relative errors of $10^{-8}$, for a wide range of PDEs~\cite{hao2021efficient}. However, the proposed greedy algorithms are only computationally tractable for shallow neural networks.
Another ansatz is to reformulate the quadratic PINN loss as a saddle-point problem involving a network for the approximation of the solution and a discriminator network that penalizes a non-zero residual. The resulting saddle-point formulation can be solved with competitive gradient descent~\cite{zeng2022competitive}, and the authors report highly accurate PINN solutions, up to $10^{-8}$ relative $L^2$ error, for a number of example problems.
This approach however comes at the price of training two neural networks and exchanging a minimization problem for a saddle-point problem.
Finally, particle swarm optimization methods have been proposed in the context of PINNs, where they improve over the accuracy of standard optimizers but fail to achieve accuracy better than $10^{-3}$ despite their computational burden~\cite{davi2022pso}.
Natural gradient methods are an established optimization algorithm and we give an overview in Section~\ref{sec:natgrad} and discuss here only works related to the numerical solution of PDEs.
In fact, without explicitly referring to the natural gradient literature and terminology, natural gradients are used in the PDE-constrained optimization community in the context of finite elements. For example, in certain situations the mass or stiffness matrices can be interpreted as Gramians, showing that this ansatz is indeed a natural gradient method. For explicit examples we refer to~\cite{schwedes2016iteration, schwedes2017mesh}.
In the context of neural network based approaches, a variety of natural gradients induced by Sobolev, Fisher-Rao and Wasserstein geometries have been proposed and tested for PINNs~\cite{nurbekyan2022efficient}.
This work focuses on the efficient implementation of these methods and does not consider energy based natural gradients, which we find to be necessary in order to achieve high accuracy.
\paragraph{Notation}
We denote the space of functions on \(\Omega\subseteq\mathbb R^d\) that are integrable in $p$-th power by \(L^p(\Omega)\) and endow it with its canonical norm.
We denote the \emph{Sobolev space}
of functions with weak derivatives up to order \(k\) in $L^p(\Omega)$ by \(W^{k,p}(\Omega)\), which is a Banach space with the norm
\[ \lVert u\rVert_{W^{k,p}(\Omega)}^p \coloneqq \sum_{l=0}^k\lVert D^{l} u \rVert_{L^p(\Omega)}^p. \]
In the following we mostly work with the case $p=2$ and write $H^k(\Omega)$ instead of $W^{k,2}(\Omega)$.
Consider natural numbers \(d, m, L, N_0, \dots, N_L\) and let $\theta = \left((A_1, b_1), \dots, (A_L, b_L)\right)$ be a tuple of matrix-vector pairs, where \(A_l\in\mathbb R^{N_{l}\times N_{l-1}}, b_l\in\mathbb R^{N_l}\) and \(N_0 = d, N_L = m\). Every matrix-vector pair \((A_l, b_l)\) induces an affine linear map \(T_l\colon \mathbb R^{N_{l-1}} \to\mathbb R^{N_l}\). The \emph{neural network function with parameters} \(\theta\) and with respect to some \emph{activation function} \(\rho\colon\mathbb R\to\mathbb R\) is the function
\[u_\theta\colon\mathbb R^d\to\mathbb R^m, \quad x\mapsto T_L(\rho(T_{L-1}(\rho(\cdots \rho(T_1(x)))))).\]
The \emph{number of parameters} of such a network is given by \(\sum_{l=0}^{L-1}(N_{l}+1)N_{l+1}\) and the \emph{number of neurons} by \(\sum_{l=1}^{L}N_{l}\).
We call a network \emph{shallow} if it has depth \(2\) and \emph{deep} otherwise.
In the remainder, we restrict ourselves to the case \(m=1\) since we only consider real valued functions.
Further, in our experiments we choose $\tanh$ as activation function in order to ensure the required smoothness of the network functions $u_\theta$ and of the parametrization $\theta\mapsto u_\theta$.
For $A\in\mathbb R^{n\times m}$ we denote any pseudo inverse of $A$ by $A^+$.
\section{Preliminaries}
Various neural network based approaches for the approximate solution of PDEs have been suggested~\cite{beck2020overview, weinan2021algorithms, kovachki2021neural}.
Most of these cast the solution of the PDE as the minimizer of a typically convex energy over some function space and use this energy to optimize the network's parameters. We present two prominent approaches and introduce the unified setup in which we treat both of them later.
\paragraph{Physics-Informed Neural Networks}
Consider a general partial differential equation of the form
\begin{align}\label{eq:Poisson}
\begin{split}
\mathcal L u
& = f \quad \text{in } \Omega \\
\mathcal B u & = g \quad \text{on } \partial\Omega,
\end{split}
\end{align}
where $\Omega\subseteq\mathbb R^d$ is an open set, $\mathcal L$ is a -- possibly non-linear -- partial differential operator and $\mathcal B$ is a boundary value operator. We assume that the solution $u$ is sought in a Hilbert space $X$ and that the right-hand side $f$ and the boundary values $g$ are square integrable functions on $\Omega$ and $\partial\Omega$, respectively. In this situation, we can reformulate \eqref{eq:Poisson} as a minimization problem with objective function
\begin{equation}
E(u) = \int_\Omega (\mathcal L u - f)^2
\mathrm{d}x + \tau \int_{\partial\Omega} (\mathcal B u-g)^2\mathrm ds,
\end{equation}
for a penalization parameter $\tau > 0$. A function $u\in X$ solves \eqref{eq:Poisson} if and only if $E(u)=0$. In order to obtain an approximate solution, one can parametrize the function $u_\theta$ by a neural network and optimize the network parameters $\theta\in\mathbb R^p$ by minimizing the loss function
\begin{equation}
L(\theta) \coloneqq \int_\Omega (\mathcal Lu_\theta - f)^2\mathrm dx + \tau \int_{\partial\Omega} (\mathcal Bu_\theta - g)^2\mathrm ds.
\end{equation}
This general approach to formulate equations as minimization problems is known as \emph{residual minimization} and in the context of neural networks for PDEs can be traced back to~\cite{dissanayake1994neural, lagaris1998artificial}.
More recently, this ansatz was popularized under the names \emph{deep Galerkin method} and \emph{physics-informed neural networks}, where the loss can also be augmented to incorporate a regression term stemming from real-world measurements of the solution~\cite{sirignano2018dgm,raissi2019physics}.
In practice, the integrals in the objective function have to be discretized in a suitable way.
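For concreteness, the following minimal JAX sketch illustrates one possible discretization of such a loss for $\mathcal L=-\Delta$ and $\mathcal B$ the trace operator; the network \texttt{u\_theta}, the data functions and the collocation points are placeholders, and the snippet is meant as an illustration rather than a description of our implementation.
\begin{verbatim}
import jax
import jax.numpy as jnp

def make_pinn_loss(u_theta, f, g, tau):
    # u_theta(params, x): scalar network output at a point x in R^d
    # f, g: right-hand side and boundary data as functions of x
    def residual(params, x):
        # L u_theta - f with L = -Laplace, via the trace of the Hessian in x
        hess = jax.hessian(lambda y: u_theta(params, y))(x)
        return -jnp.trace(hess) - f(x)

    def loss(params, x_interior, x_boundary):
        # uniform-grid / Monte Carlo discretization of the residual energy
        r = jax.vmap(lambda x: residual(params, x))(x_interior)
        b = jax.vmap(lambda x: u_theta(params, x) - g(x))(x_boundary)
        return jnp.mean(r ** 2) + tau * jnp.mean(b ** 2)

    return loss
\end{verbatim}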
\paragraph{The Deep Ritz Method}
When working with weak formulations of PDEs it is standard to consider the variational formulation, i.e., to consider an energy functional such that the Euler-Lagrange equations are the weak formulation of the PDE. This idea was already exploited by~\cite{ritz1909neue} to compute the coefficients of polynomial approximations to solutions of PDEs and popularized in the context of neural networks in~\cite{weinan2018deep} who coined the name \emph{deep Ritz method} for this approach.
Abstractly, this approach is similar to the residual formulation. Given a variational energy $E\colon X \to \mathbb R$ on a Hilbert space $X$,
one parametrizes the ansatz by a neural network $u_\theta$ and arrives at the loss function $L(\theta)\coloneqq E(u_\theta)$.
Note that this approach is different from PINNs: for example, for the Poisson equation $-\Delta u = f$, the residual energy is given by $u\mapsto\lVert \Delta u + f \rVert_{L^2(\Omega)}^2$, whereas the corresponding variational energy is given by $u\mapsto\frac12\lVert \nabla u\rVert_{L^2(\Omega)}^2-\int_\Omega fu\,\mathrm dx$. In particular, the energies require different smoothness of the functions and are hence defined on different Sobolev spaces.
Incorporating essential boundary values in the Deep Ritz Method differs from the PINN approach.
Whereas in PINNs for any $\tau>0$ the unique minimizer of the energy is the solution of the PDE, in the deep Ritz method the minimizer of the penalized energy solves a Robin boundary value problem, which can be interpreted as a perturbed problem. In order to achieve a good approximation of the original problem the penalty parameters need to be large, which leads to ill-conditioned problems~\cite{muller2022error, courte2023robin}.
\paragraph{General Setup}
Both physics-informed neural networks and the deep Ritz method fall into the general framework of minimizing an energy $E\colon X\to\mathbb R$,
or more precisely the associated objective function $L(\theta) \coloneqq E(u_\theta)$, over the parameter space of a neural network. Here, we assume that $X$ is a Hilbert space of functions, that the functions $u_\theta$ computed by the neural network with parameters $\theta$ lie in $X$, and that $E$ admits a unique minimizer $u^\star\in X$.
Further, we assume that the parametrization $P\colon\mathbb R^p\to X, \theta\mapsto u_\theta$ is differentiable and denote its range by $\mathcal F_\Theta=\{u_\theta:\theta\in\mathbb R^p\}$. We denote the generalized tangent space on this parametric model by
\begin{equation}
T_\theta \mathcal F_\Theta \coloneqq
\operatorname{span} \left\{\partial_{\theta_i} u_\theta : i=1, \dots, p \right\}.
\end{equation}
\paragraph{Accuracy of NN Based PDE Solvers}
Despite the considerable improvements of the PINN training process discussed in the section on related work, gradient-based optimization of the original PINN formulation has so far not been able to break a certain optimization barrier, even in simple situations. Typically achieved errors are of the order $10^{-3}$ measured in the $L^2$ norm. This phenomenon is attributed to the stiffness of the PINN formulation, as experimentally verified in~\cite{wang2021understanding}. Furthermore, the squared residual formulation of the PDE squares the condition number -- a fact that is well known for classical discretization approaches~\cite{zeng2022competitive}. As discretizing PDEs leads to ill-conditioned linear systems, this deteriorates the convergence of iterative solvers such as standard gradient descent. Natural gradient descent, on the other hand, circumvents this \emph{pathology of the discretization} by guaranteeing an update direction that follows the function space gradient, where the PDE problem often has a simpler structure. We refer to Theorem~\ref{thm:main_thm} and Appendix~\ref{sec:proofs} for a rigorous explanation.
\section{Energy Natural Gradients
}\label{sec:natgrad}
The concept of \emph{natural gradients} was popularized by Amari in the context of parameter estimation in supervised learning and blind source separation~\cite{amari1998natural}.
The idea is to modify the update direction in a gradient-based optimization scheme to emulate the gradient in a suitable representation space of the parameters.
While this ansatz was already formulated for general metrics, it is usually associated with the use of the Fisher metric on the representation space; however, products of Fisher metrics as well as Wasserstein and Sobolev geometries have also been used successfully~\cite{kakade2001natural, li2018natural, nurbekyan2022efficient}.
After the initial applications in supervised learning and blind source separation, it was successfully adopted in reinforcement learning~\cite{kakade2001natural, peters2003reinforcement, bagnell2003covariant,morimura2008new}, inverse problems~\cite{nurbekyan2022efficient}, neural network training~\cite{schraudolph2002fast,pascanu2014revisiting, martens2020new} and generative models~\cite{shen2020sinkhorn, lin2021wasserstein}.
One subtlety of natural gradients is the definition of a geometry on the function space. This can either be done axiomatically or through the Hessian of a potential function~\cite{amari2010information, amari2016information, wang2022hessian, Mueller2022Convergence}.
We follow the idea of working with the natural gradient induced by the Hessian of a convex function; contrary to existing works, we encounter infinite-dimensional settings in our applications.
Here, we consider the setting of the minimization of a convex energy $E\colon X\to\mathbb R$ defined on a Hilbert space $X$, which covers both physics informed neural networks and the deep Ritz method.
We define the \emph{Hilbert} and \emph{energy Gram matrices} by
\begin{align}\label{eq:gramHilbert}
G_H(\theta)_{ij} \coloneqq \langle\partial_{\theta_i} u_\theta, \partial_{\theta_j} u_\theta\rangle_X
\end{align}
and
\begin{equation}\label{eq:gramEnergy}
G_E(\theta)_{ij} \coloneqq D^2E(u_\theta)(\partial_{\theta_i} u_\theta, \partial_{\theta_j} u_\theta).
\end{equation}
The update direction $\nabla^H L(\theta) = G_H(\theta)^+\nabla L(\theta)$ is often called the \emph{Hilbert natural gradient} (H-NG) or, in the special case that $X$ is a Sobolev space, the \emph{Sobolev natural gradient}. It is well known in the literature\footnote{For regular and singular Gram matrices and finite dimensional spaces see~\cite{amari2016information, van2022invariance}; an argument for infinite-dimensional spaces can be found in the appendix.} on natural gradients that
\begin{equation}
DP_\theta\nabla^HL(\theta) = \Pi_{T_\theta \mathcal F_\Theta}( \nabla E(u_\theta)).
\end{equation}
In words, following the natural gradient amounts to moving along the projection of the Hilbert space gradient onto the model's tangent space in function space.
The observation that identifying the function space gradient via the Hessian leads to a Newton update motivates the concept of energy natural gradients that we now introduce.
\begin{definition}[Energy Natural Gradient]
Consider the problem
$\min_{\theta\in\mathbb R^p} L(\theta)$,
where $L(\theta)\coloneqq E(u_\theta)$, and denote the Euclidean gradient by $\nabla L(\theta)$.
Then we call
\begin{equation}
\nabla^E L(\theta) \coloneqq G_E^+(\theta)\nabla L(\theta),
\end{equation}
the
\emph{energy natural gradient (E-NG)}\footnote{Note that this is different from the \emph{energetic natural gradients} proposed in~\cite{thomas2016energetic}, which defines natural gradients based on the energy distance rather than the Fisher metric.}.
\end{definition}
For a linear PDE operator $\mathcal L$, the residual
yields a quadratic energy and the energy Gram matrix takes the form
\begin{align}
\begin{split}
G_E(\theta)_{ij}
&=
\int_\Omega \mathcal L (\partial_{\theta_i}u_\theta) \mathcal L (\partial_{\theta_j}u_\theta) \mathrm dx
\\
&+
\tau \int_{\partial\Omega} \mathcal B (\partial_{\theta_i}u_\theta) \mathcal B (\partial_{\theta_j}u_\theta) \mathrm ds.
\end{split}
\end{align}
On the other hand, the deep Ritz method for a quadratic energy $E(u) = \frac12 a(u,u) - f(u)$, where $a$ is a symmetric and coercive bilinear form and $f\in X^*$ yields
\begin{equation}
G_E(\theta)_{ij} = a(\partial_{\theta_i} u_\theta, \partial_{\theta_j}u_\theta).
\end{equation}
For the energy natural gradient we have the following result relating energy natural gradients to Newton updates.
\begin{restatable}[Energy Natural Gradient in Function Space]{theorem}{ENGFunctionSpace}\label{thm:main_thm}
If we assume that $D^2E$ is coercive everywhere, then we have\footnote{here, we interpret the bilinear form $D^2E(u_\theta)\colon H\times H\to\mathbb R$ as an operator $D^2E(u_\theta)\colon H\to H$; further $\Pi_{T_\theta \mathcal F_\Theta}^{D^2E(u_\theta)}$ denotes the projection with respect to the inner product defined by $D^2E(u_\theta)$}
\begin{equation}\label{eq:pushforwardENGNewton}
DP_\theta\nabla^EL(\theta) = \Pi_{T_\theta \mathcal F_\Theta}^{D^2E(u_\theta)}(D^2E(u_\theta)^{-1} \nabla E(u_\theta)).
\end{equation}
Assume now that $E$ is a quadratic function with bounded and positive definite
second derivative $D^2E = a$ that admits a minimizer $u^*\in X$. Then it holds that
\begin{equation}\label{eq:pushforwardENG}
DP_\theta\nabla^EL(\theta) = \Pi_{T_\theta \mathcal F_\Theta}^{a}( u_\theta - u^*).
\end{equation}
\end{restatable}
\begin{proof}[Proof idea, full proof in the appendix]
In the case that $D^2E$ is coercive, it induces a Riemannian metric on the Hilbert space $X$.
Since the gradient with respect to this metric is given by $D^2E(u)^{-1}\nabla E(u)$, the identity~\eqref{eq:pushforwardENGNewton} follows analogously to the finite-dimensional case or the case of Hilbert space NGs.
In the case that the energy $E$ is quadratic and $D^2E=a$ is bounded and non-degenerate, the gradient with respect to the inner product $a$ is not classically defined.
However, one can check that $a(u-u^\ast, v) = DE(u)v$, i.e., that the residuum $u-u^*$
can be interpreted as a gradient with respect to the inner product $a$, which yields~\eqref{eq:pushforwardENG}.
\end{proof}
In particular, we see from~\eqref{eq:pushforwardENGNewton} and~\eqref{eq:pushforwardENG} that using the energy NG in parameter space is closely related to a Newton update in function space, where for quadratic energies the Newton direction is given by the residuum $u_\theta-u^\star$.
\paragraph{Complexity of H-NG and E-NG}
The computation of the H-NG and the E-NG is -- apart from the assembly of the Gram matrices $G_H$ and $G_E$ -- equally expensive.
Fortunately, the two Gram matrices are often equally costly to assemble.
For quadratic problems the Hessian is typically not harder to evaluate than the Hilbert inner product~\eqref{eq:gramHilbert}, and even for non-quadratic cases closed-form expressions of~\eqref{eq:gramEnergy} in terms of inner products are often available, see Section~\ref{subsec:nonlinearDR}. Note that H-NG emulates GD and E-NG emulates a Newton method in $X$.
In practice, the computation of the natural gradient is expensive since it requires the solution of a system of linear equations, which has complexity $O(p^3)$, where $p$ is the parameter dimension.
Compare this to the cost of $O(p)$ for the computation of the gradient.
In our experiments, we find that E-NGD achieves significantly higher accuracy compared to GD and Adam even when the latter are allowed more computation time.
\section{Experiments}
We test the energy natural gradient approach on three problems: a PINN formulation of a two-dimensional Poisson equation, a PINN formulation of a one-dimensional heat equation and a deep Ritz formulation of a one-dimensional, nonlinear elliptic equation.
\paragraph{Description of the Method}
For all our numerical experiments, we realize an energy natural gradient step with a line search as described in Algorithm~\ref{alg:E-NGD}.
We choose the interval $[0,1]$ for the line search determining the learning rate since a learning rate of $1$ would correspond to an approximate Newton step in function space.
However, since the parametrization of the model is nonlinear, it is beneficial to conduct a line search rather than simply using the Newton step size.
In our experiments, we use a grid search over a logarithmically spaced grid on $[0,1]$ to determine the learning rate $\eta^*$.
\begin{algorithm}
\caption{Energy Natural Gradient with Line Search}\label{alg:E-NGD}
\begin{algorithmic}
\State {\bfseries Input:} initial parameters $\theta_0\in\mathbb R^p$, number of steps
$N_{max}$
\For{$k=1, \dots, N_{max}$}
\State Compute $\nabla L(\theta_{k-1})\in\mathbb R^p$
\State $G_E(\theta_{k-1})_{ij} \gets D^2E(u_{\theta_{k-1}})(\partial_{\theta_i}u_{\theta_{k-1}}, \partial_{\theta_j}u_{\theta_{k-1}})$ for $i,j =1, \dots, p$
\State $\nabla^E L(\theta_{k-1}) \gets G^+_E(\theta_{k-1})\nabla L(\theta_{k-1})$
\State $\eta^* \gets \arg\min_{\eta\in[0,1]} \,L( \theta_{k-1} - \eta \nabla^E L(\theta_{k-1}) )$
\State $\theta_k \gets \theta_{k-1} - \eta^* \nabla^E L(\theta_{k-1})$
\EndFor
\end{algorithmic}
\end{algorithm}
The assembly of the Gram matrix $G_E$ can be done efficiently in parallel, avoiding a potentially costly loop over index pairs $(i,j)$.
Instead of computing the pseudo inverse of the Gram matrix $G_E(\theta)$
we solve the least square problem
\begin{equation}\label{eq:lsqNG}
\nabla^E L(\theta)\in\arg\min_{\psi\in\mathbb R^p}\| G_E(\theta)\psi - \nabla L(\theta) \|^2_2.
\end{equation}
Although naive, this approach can easily be parallelized and performs fast and efficiently in our experiments.
For the numerical evaluation of the integrals appearing in the loss function as well as in the entries of the Gram matrix, we use a quadrature based on evaluations of the functions on a uniform grid. We initialize the network's weights and biases according to a zero-mean Gaussian with standard deviation $0.1$.
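For illustration, one energy natural gradient step along these lines could be sketched in JAX as follows; this assumes a flat parameter vector and a quadratic energy whose weighted residual vector \texttt{residual\_vec} satisfies $L(\theta)=\sum_i r_i(\theta)^2$, so that the Gram matrix is obtained from a single Jacobian. The names are illustrative and the snippet is a schematic sketch rather than our exact implementation.
\begin{verbatim}
import jax
import jax.numpy as jnp

def engd_step(theta, loss, residual_vec, n_grid=30):
    # theta: flat parameter vector; loss(theta): scalar objective;
    # residual_vec(theta): weighted residuals at all quadrature points,
    # such that loss(theta) == jnp.sum(residual_vec(theta) ** 2)
    grad = jax.grad(loss)(theta)
    J = jax.jacfwd(residual_vec)(theta)   # shape (n_points, p)
    G = J.T @ J                           # energy Gram matrix (up to constants)
    # least-squares solve instead of an explicit pseudo-inverse
    nat_grad = jnp.linalg.lstsq(G, grad)[0]
    # line search on a logarithmically spaced grid in (0, 1]
    etas = jnp.logspace(-6.0, 0.0, n_grid)
    values = jax.vmap(lambda eta: loss(theta - eta * nat_grad))(etas)
    return theta - etas[jnp.argmin(values)] * nat_grad
\end{verbatim}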
\paragraph{Baselines}
We compare the efficiency of
energy NGs to the following optimizers. First, we consider vanilla gradient descent (denoted as GD in our experiments) with a line search on a logarithmic grid. Then, we test the performance of Adam with an exponentially decreasing learning rate schedule to prevent oscillations: we start with an initial learning rate of $10^{-3}$ that after $1.5\cdot 10^4$ steps starts to decrease by a factor of $10^{-1}$ every $10^4$ steps until a minimum learning rate of $10^{-7}$ is reached or the maximal number of iterations is completed (one possible realization of this schedule is sketched below).
Finally, we test the Hilbert natural gradient descent with line search (denoted by H-NGD).
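The Adam learning rate schedule described above could, for instance, be realized with the \texttt{optax} library as follows; this reflects one possible reading of the schedule and is not necessarily identical to the implementation used for our experiments.
\begin{verbatim}
import optax

# start at 1e-3; after 1.5e4 steps decay by a factor of 10
# every 1e4 steps, floored at 1e-7
schedule = optax.exponential_decay(
    init_value=1e-3,
    transition_steps=10_000,
    decay_rate=0.1,
    transition_begin=15_000,
    staircase=True,
    end_value=1e-7,
)
optimizer = optax.adam(learning_rate=schedule)
\end{verbatim}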
\paragraph{Computation Details}
For our implementation we rely on the library JAX~\cite{jax2018github}, where all required derivatives are computed using JAX's automatic differentiation module. The JAX implementation of the least-squares solve
relies on a singular value decomposition. All experiments were run on a single NVIDIA RTX 3080 Laptop GPU in double precision. The code to reproduce the experiments can be found in the supplements.
\subsection{Poisson Equation}
We consider the two dimensional Poisson equation
\begin{equation*}
-\Delta u (x,y) = f(x,y) = 2\pi^2\sin(\pi x) \sin(\pi y)
\end{equation*}
on the unit square $[0,1]^2$ with zero boundary values. The solution is given by
\begin{equation*}
u^*(x,y) = \sin(\pi x) \sin(\pi y)
\end{equation*}
and the PINN loss of the problem is
\begin{align}\label{eq:poisson_loss}
\begin{split}
L (\theta) & = \frac{1}{N_\Omega}\sum_{i=1}^{N_\Omega}(\Delta u_\theta(x_i,y_i) + f(x_i,y_i))^2 \\ & \qquad\qquad\quad + \frac{1}{N_{\partial\Omega}}\sum_{i=1}^{N_{\partial\Omega}}u_\theta(x^b_i,y^b_i)^2,
\end{split}
\end{align}
where $\{(x_i,y_i)\}_{i=1,\dots,N_\Omega}$ denote the interior collocation points and $\{(x^b_i,y^b_i)\}_{i=1,\dots,N_{\partial\Omega}}$ denote the collocation points on $\partial\Omega$. In this case the energy inner product on $H^2(\Omega)$ is given by
\begin{equation}\label{eq:poisson_energy_product}
a(u,v) = \int_\Omega \Delta u \Delta v \mathrm dx + \int_{\partial\Omega}u v \mathrm ds.
\end{equation}
Note that this inner product is not coercive\footnote{the inner product is coercive with respect to the $H^{1/2}(\Omega)$ norm, see~\cite{muller2022notes}} on $H^2(\Omega)$ and different from the $H^2(\Omega)$ inner product.
\begin{figure}[]
\centering
\includegraphics[width=\linewidth]{figures/errors_poisson_eq.png}
\vspace{-0.5cm}
\caption{Median relative $L^2$ errors for the Poisson equation example over 10
initializations for the four optimizers: energy NG descent, Hilbert NG descent, vanilla gradient descent and Adam; the shaded area denotes the region between the first and third quartile; note that GD and Adam are run for $400$ times more iterations.
}\label{fig:poisson}
\end{figure}
The integrals in~\eqref{eq:poisson_energy_product} are computed using the same collocation points as in the definition of the PINN loss function $L$ in~\eqref{eq:poisson_loss}. To approximate the solution $u^*$ we use a shallow neural network with the hyperbolic tangent as activation function and a width of 64, resulting in 257 trainable weights. We choose 900 equidistantly spaced collocation points in the interior of $\Omega$ and 120 collocation points on the boundary. The energy natural gradient descent and the Hilbert natural gradient descent are applied for $500$ iterations each, whereas we train for $2\cdot 10^{5}$ iterations of GD and Adam.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& Median & Minimum & Maximum \\
\hline
GD
& $8.2 \cdot 10^{-3}$ & $2.6 \cdot 10^{-3}$ & $1.5 \cdot 10^{-2}$ \\
Adam & $1.1 \cdot 10^{-3}$ & $6.9 \cdot 10^{-4}$ & $\mathbf{1.3 \cdot 10^{-3}}$ \\
H-NGD
& $1.2$ & $4.0$ & $2.1$ \\
E-NGD
& $\mathbf{2.3\cdot 10^{-7}}$ & $\mathbf{1.2\cdot 10^{-7}}$ & $9\cdot10^{-1}$ \\
\hline
\end{tabular}
\caption{Median, minimum and maximum of the relative $L^2$ errors for the Poisson equation example achieved by different optimizers over $10$
initializations. Here, energy and Hilbert NG descent are run for $500$ and the other methods for $2\cdot10^5$ iterations.}\label{table:poisson}
\end{center}
\end{table}
As reported in Figure~\ref{fig:poisson} and Table~\ref{table:poisson}, we observe that the energy NG updates require relatively few iterations to produce a highly accurate approximate solution of the Poisson equation. Note that the Hilbert NG descent did not converge at all, stressing the importance of employing the geometric information of the Hessian of the function space objective, as is done in energy NG descent.
Note also that, while E-NG is highly efficient for most initializations, we observed that a failure to train can occasionally occur, compare Table~\ref{table:poisson}. We also noted that pre-training, for instance with GD or Adam, was able to circumvent this issue.
The standard optimizers we consider, i.e., Adam and vanilla gradient descent
reliably decrease the relative errors, but
fail to achieve an accuracy higher than $10^{-3}$ even though we allow for a much higher number of iterations.
With our current implementation and the network sizes we consider, one natural gradient update is roughly 15 times as costly as a standard gradient step.
Training a PINN model with optimizers such as Adam easily requires $100$ times the number of iterations that we found necessary for natural gradient training -- without being able to produce highly accurate solutions -- rendering the proposed approach both faster and more accurate.
To illustrate the difference between the energy natural gradient $\nabla^E L(\theta)$, the standard gradient $\nabla L(\theta)$ and the residuum $u_\theta - u^*$, we plot the effective update directions $DP(\theta) \nabla L(\theta)$ and $DP(\theta) \nabla^{E} L(\theta)$ in function space at initialization, see Figure~\ref{fig:poisson_pushs}.
Clearly, the energy natural gradient update direction matches the residuum much better than the vanilla parameter gradient.
\begin{figure}[]
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (r1) at (0,0)
{\includegraphics[width=\linewidth]{figures/poisson_push.png}};
\node[inner sep=0pt] (r1) at (-4.8, 2.4)
{
$u_\theta-u^\ast$
};
\node[inner sep=0pt] (r1) at (-0.4,2.4)
{
Energy NG
};
\node[inner sep=0pt] (r1) at (3.9,2.4)
{
Vanilla gradient
};
\end{tikzpicture}
\caption{Shown are the residuum $u_\theta - u^*$ and the push forwards of the energy NG and vanilla gradient for the Poisson problem; all functions are normalized to lie in $[-1,1]$ to allow for a visual comparison.}\label{fig:poisson_pushs}
\end{figure}
\renewcommand{\arraystretch}{1.1}
\subsection{Heat Equation}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/errors_heat_eq.png}
\vspace{-.5cm}
\caption{The plot shows the median of the relative $L^2$ errors for the heat equation example throughout the training process for the four optimizers: energy natural gradient descent, Hilbert natural gradient descent, vanilla gradient descent and Adam. The shaded area displays the region between the first and third quartile of 10 runs for different initializations of the network's parameters. Note that GD and Adam are run for 100 times more iterations.
}\label{fig:heat}
\end{figure}
Let us consider the one-dimensional heat equation
\begin{align*}
\partial_t u(t,x) &= \frac{1}{4}\partial_x^2u(t,x) \quad \text{for }(t,x)\in[0,1]^2
\\
u(0,x) &= \sin(\pi x) \qquad\;\, \text{for }x\in[0,1]
\\
u(t,x) &= 0 \qquad\qquad\quad\text{for }(t,x)\in[0,1]\times\{0,1\}.
\end{align*}
The solution is given by
\begin{equation*}
u^*(t,x) = \exp\left(-\frac{\pi^2 t}{4}\right)\sin(\pi x)
\end{equation*}
and the PINN loss is
\begin{align*}
L(\theta) &= \frac{1}{N_{\Omega_T}} \sum_{i=1}^{N_{\Omega_T}} \left( \partial_t u_\theta(t_i,x_i) - \frac14\partial_x^2 u_\theta(t_i, x_i) \right)^2
\\
&\quad+ \frac{1}{N_{\text{in}}}\sum_{i=1}^{N_{\text{in}}}\left(u_\theta(0,x_i^{\text{in}}) - \sin(\pi x_i^{\text{in}}) \right)^2
\\&\quad +
\frac{1}{N_{\partial\Omega}}\sum_{i=1}^{N_{\partial\Omega}}u_\theta(t^b_i,x^b_i)^2,
\end{align*}
where $\{ (t_i,x_i) \}_{i=1,\dots, N_{\Omega_T}}$ denote collocation points in the interior of the space-time cylinder, $\{ (t_i^b,x_i^b) \}_{i=1,\dots,N_{\partial\Omega}}$ denote collocation points on the spatial boundary and $\{ (x_i^{\text{in}}) \}_{i=1,\dots,N_{\text{in}}}$ denote collocation points for the initial condition. The energy inner product is defined on the space
\begin{equation*}
a\colon\left(H^1(I,L^2(\Omega)) \cap L^2(I,H^2(\Omega))\right)^2 \to \mathbb R
\end{equation*}
and given by
\begin{align*}
a(u,v) &= \int_0^1\int_{\Omega}\left(\partial_t u - \frac14 \partial_x^2 u\right)\left(\partial_t v - \frac14 \partial_x^2 v\right)\,\mathrm dx\mathrm dt
\\
&\quad +
\int_\Omega u(0,x)v(0,x)\, \mathrm dx + \int_{I\times\partial\Omega}uv \,\mathrm ds \mathrm dt.
\end{align*}
In our implementation, the inner product is discretized by the same quadrature points as in the definition of the loss function.
The network architecture and the training process are identical to the previous example of the Poisson problem, and we run the two NG methods for $2\cdot 10^3$ iterations.
The energy natural gradient approach shows almost the same efficiency as in the previous example.
We refer to Figure~\ref{fig:heat} for a visualization of the training process, Table~\ref{table:heat} for the relative $L^2$ errors after training and Figure \ref{fig:heat_pushs} for a visual comparison of the different gradients.
Note again the saturation of the conventional optimizers above $10^{-3}$ relative $L^2$ error.
Similar to the Poisson equation, the Hilbert NG descent is not an efficient optimization algorithm for the problem at hand, which again stresses the importance of the Hessian information.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (r1) at (0,0)
{\includegraphics[width=\linewidth]{figures/heat_push.png}};
\node[inner sep=0pt] (r1) at (-4.8, 2.4)
{
$u_\theta-u^\ast$
};
\node[inner sep=0pt] (r1) at (-0.4, 2.4)
{
Energy NG
};
\node[inner sep=0pt] (r1) at (3.9, 2.4)
{
Vanilla gradient
};
\end{tikzpicture}
\caption{The first image shows $u_\theta - u^*$, the second image is the computed natural gradient and the last image is the pushforward of the standard parameter gradient. All functions are pointwise normalized to $[-1,1]$ to allow a visual comparison.}\label{fig:heat_pushs}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& Median & Minimum & Maximum \\
\hline
GD
& $1.6\cdot10^{-2}$ & $5.0\cdot10^{-3}$ & $4.2\cdot10^{-2}$ \\
Adam & $1.0\cdot10^{-3}$ & $6.4\cdot10^{-4}$ & $\mathbf{1.4\cdot10^{-3}}$ \\
H-NGD
& $4\cdot10^{-1}$ & $3\cdot10^{-1}$ & $5\cdot10^{-1}$ \\
E-NGD
& $\mathbf{5.7\cdot 10^{-6}}$ & $\mathbf{1.4\cdot 10^{-6}}$ & $6\cdot10^{-1}$ \\
\hline
\end{tabular}
\caption{Median, minimum and maximum of the relative $L^2$ errors for the heat equation achieved by different optimizers over $10$ different initializations. Here, H-NGD and E-NGD are run for $2\cdot 10^3$ and the other methods for $2\cdot10^5$ iterations.}\label{table:heat}
\end{center}
\end{table}
\subsection{A Nonlinear Example with the Deep Ritz Method}\label{subsec:nonlinearDR}
We test the energy natural gradient method for a nonlinear problem utilizing the deep Ritz formulation. Consider the one-dimensional variational problem of finding the minimizer of the energy
\begin{equation}
E(u) \coloneqq
\frac12\int_\Omega |u'|^2\mathrm dx + \frac14 \int_\Omega u^4\mathrm dx - \int_\Omega fu\,\mathrm dx
\end{equation}
with $\Omega = [-1,1]$, $f(x) = \pi^2\cos(\pi x)+\cos^3(\pi x)$.
The associated Euler-Lagrange equations yield the nonlinear PDE
\begin{align}
\begin{split}\label{eq:nonlinear}
-u'' + u^3 &= f \quad \text{in }\Omega
\\
\partial_n u &=0 \quad \text{on }\partial\Omega
\end{split}
\end{align}
and hence the minimizer is given by $u^\ast(x) = \cos(\pi x)$.
Since the energy is not quadratic, the energy inner product depends on $u\in H^1(\Omega)$ and is given by
\begin{equation*}
D^2E(u)(v,w) = \int_\Omega v'w'\,\mathrm dx + 3\int_\Omega u^2vw\,\mathrm dx.
\end{equation*}
To discretize the energy and the inner product we use trapezoidal integration with $2\cdot10^4$ equispaced quadrature points.
We use a shallow neural network of width 32 with the hyperbolic tangent as activation function.
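A schematic JAX sketch of the corresponding Gram matrix assembly with trapezoidal weights is given below; here \texttt{u} denotes the scalar network as a function of a flat parameter vector and a point, and all names are illustrative rather than taken from our implementation.
\begin{verbatim}
import jax
import jax.numpy as jnp

def deep_ritz_gram(u, params, xs):
    # discretization of D^2E(u)(v, w) = int v'w' dx + 3 int u^2 v w dx
    # on the tangent directions v = du/dtheta_i, with trapezoidal weights
    dx = xs[1] - xs[0]
    w = jnp.full(xs.shape, dx).at[0].set(dx / 2).at[-1].set(dx / 2)
    u_x = lambda p, x: jax.grad(u, argnums=1)(p, x)   # spatial derivative
    U = jax.vmap(lambda x: u(params, x))(xs)          # u_theta on the grid
    J = jax.jacfwd(lambda p: jax.vmap(lambda x: u(p, x))(xs))(params)
    Jx = jax.jacfwd(lambda p: jax.vmap(lambda x: u_x(p, x))(xs))(params)
    return Jx.T @ (w[:, None] * Jx) + 3.0 * J.T @ ((w * U**2)[:, None] * J)
\end{verbatim}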
\begin{figure}[]
\centering
\includegraphics[width=\linewidth]{figures/errors_nonlinear.png}
\vspace{-.5cm}
\caption{The plot shows the median of the relative $L^2$ errors for the nonlinear example throughout the training process for the four optimizers: energy natural gradient descent, Hilbert natural gradient descent, vanilla gradient descent and Adam. The shaded area displays the region between the first and third quartile of 10 runs for different initializations of the network's parameters. Note that GD and Adam are run for $400$ times more iterations.
}\label{fig:nonlinear}
\end{figure}
Once more, we observe that the energy NG updates efficiently lead to a very accurate approximation of the solution, see Figure~\ref{fig:nonlinear} for a visualization of the training process and Table~\ref{table:nonlinear} for the obtained relative $L^2$ errors.
In this example, the Hilbert NG descent is similarly efficient.
Note that the energy inner product and the Hilbert space inner product are very similar in this case.
Again, Adam and standard gradient descent saturate early with much higher errors than the natural gradient methods.
We observe that obtaining high accuracy with the Deep Ritz method requires a highly accurate integration, which is why we used a comparatively fine grid and trapezoidal integration.
\section{Conclusion}
We propose to train physics informed neural networks with energy natural gradients, which correspond to the well-known concept of natural gradients combined with the geometric information of the Hessian in function space.
We show that the energy natural gradient update direction
corresponds to the Newton direction
in function space, modulo an orthogonal projection onto the tangent space of the model.
We demonstrate that this optimization can achieve highly accurate PINN solutions, well beyond the accuracy that can be obtained with standard optimizers.
\section*{Acknowledgment}
JM acknowledges support by the Evangelisches Studienwerk e.V. (Villigst), the International Max Planck Research School for Mathematics in the Sciences (IMPRS MiS) and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant number 757983). MZ acknowledges support from the Research Council of Norway (grant number 303362).
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& Median & Minimum & Maximum \\
\hline
GD
& $2.2\cdot10^{-4}$ & $1.2\cdot10^{-4}$ & $2.6\cdot10^{-4}$ \\
Adam
& $5.3\cdot10^{-5}$ & $2.4\cdot10^{-5}$ & $\mathbf{1.1\cdot10^{-4}}$ \\
H-NGD
& $\mathbf{1.0\cdot10^{-8}}$ & $6.3\cdot10^{-9}$ & $1.0$ \\
E-NGD
& $1.3\cdot10^{-8}$ & $\mathbf{6.0\cdot10^{-9}}$ & $1.0$ \\
\hline
\end{tabular}
\caption{Median, minimum and maximum of the relative $L^2$ errors for the nonlinear problem achieved by different optimizers over $10$ different initializations. Here, H-NGD and E-NGD are run for $500$ and the other methods for $2\cdot10^5$ iterations.}\label{table:nonlinear}
\end{center}
\end{table}
\bibliographystyle{alpha}
|
{
"arxiv_id": "2302.13199",
"language": "en",
"timestamp": "2023-02-28T02:12:39",
"url": "https://arxiv.org/abs/2302.13199",
"yymm": "2302"
} |
\section{Introduction}
The wide availability of data acquisition devices has produced large trajectory datasets.
These datasets compile movement data from a wide range of domains.
For this reason, the construction of analysis and visualization techniques, strategies, and tools to support the exploration of this data type has been a well-studied problem.
Moving entities are often represented as points (dimensionless objects) when studying trajectories. However, in applications such as climate science and video surveillance, the moving entities have extents that are important for analyzing these datasets.
The spatial extent also leads to interaction between objects when there is a spatial intersection. This relationship can contain valuable information about the objects.
We call these types of trajectories, which represent the movement of objects with a spatial extent, \emph{moving regions}.
\looseness=-1
An essential problem in trajectory data visualization is the construction of visual overviews to summarize the movement of a collection of objects in a static plot.
The most straightforward solutions are aggregation, small multiples, or animation-based visualizations.
Aggregation often breaks up trajectories into pieces to form collections, which causes the loss of the overall movement.
On the other hand, while strategies based on small multiples can give some temporal context, they are limited to a small number of timesteps that can be shown.
In addition, animations containing many moving objects pose a high cognitive load to the users~\cite{AIGNER:2011:VISUALIZATION,Harrower:2007:Cognitive}.
Another possible solution is to use a three-dimensional representation, such as the space-time cube metaphor, which uses the two dimensions to represent space and the perpendicular third dimension to represent time.
Nevertheless, this suffers from the usual flaws due to the use of 3D, such as occlusion and perspective distortion.
Recently proposed techniques such as \emph{MotionRugs}~\cite{Buchmuller:2018:MVCTST} (and its variations) attempt to overcome these problems by creating an overview using a 2D metaphor in which time is represented on one axis and space on the other. However, they do not consider the extent of the objects, which leads to additional problems such as the failure to preserve intersections present in the original space.
\emph{Storyline} visualizations were initially used to display the narrative of movies in a 2D plot, focusing on the representation of meetings between actors along the movie duration. Most recent works have applied it to more general datasets and different notions of interaction.
While powerful, these other summaries did not consider the combined representation of trajectories, spatial extent, and interactions and are unsuitable for depicting overviews of \textit{moving regions} datasets.
In this paper, we propose MoReVis (\textbf{Mo}ving \textbf{Re}gions \textbf{Vis}ualization). This visualization technique addresses the abovementioned limitations and provides an overview of moving regions. MoReVis uses a 1D representation of space similar to MotionRugs to build an overview as a static 2D plot. We formulate the layout strategy as an optimization problem to properly represent the moving regions' extents and their spatial interactions, in this case, intersections. The final layout ensures that the object's areas and interactions on the visual summary are as close as possible to the areas in the original 2D space.
The final plot illustrates each object as a \emph{curved ribbon}, which uses discrete time in the horizontal direction and space on the vertical axis. We also provide rich interactive features to help users understand the underlying data.
We implement our method in a visual interface and present two usage scenarios using datasets from different domains. These examples show how our approach can provide an overview allowing users to grasp patterns and interactions within the moving region dataset quickly.
Finally, we evaluate MoReVis' effectiveness using numerical experiments and a preliminary user study.
Users were able to answer questions about the dataset under evaluation adequately. In addition, the feedback on our proposal's usefulness and effectiveness was positive overall.
In summary, our main contributions are:
\squishlist
\item A novel technique for creating a visual summary of moving regions, preserving areas, spatial distances, and intersections between regions as much as possible.
\item Visual and interactive tools for better understanding the space transformation utilized and the representation of intersections.
\item A quantitative and qualitative evaluation of MoReVis, including a comparison with other spatiotemporal visualization methods and a user study.
\end{list}
Finally, all the data and code used in this paper are publicly available at \href{http://visualdslab.com/papers/MoReVis/}{http://visualdslab.com/papers/MoReVis/}.
\section{Conclusion}
\looseness=-1
We presented MoReVis, a visual summary that provides an overview of moving regions datasets.
This technique is based on a carefully designed optimization problem to build the visualization layout.
MoReVis is applicable in several domains and analysis situations, as shown in our use cases and discussion.
Our main directions for future work are to reduce the computation cost of our algorithm and apply our technique to summarize clustering results and trajectories that model the spatial uncertainty of moving objects.
\section{Related Work}
Our work draws on three streams of prior work: trajectory visualization, storyline visualizations, and application domains of moving regions.
Trajectory visualization is a well-studied problem; for a complete overview of visual motion analysis, we recommend the survey by Andrienko~et~al.~\cite{Andrienko2013Visual}.
The following two subsections consider trajectories represented as moving points: we discuss classical static summaries of trajectory visualizations and storyline visualizations. The last subsection describes some applications where objects with spatial extent are essential.
\myparagraph{Trajectory Visualization:}
A common problem in visualizing trajectory data is providing an overview of a dataset.
The most commonly used method for representing trajectories is a static spatial view (often a geographical map), where polylines are used to show the trajectories followed by the moving entities present in the data~\cite{thudt2013visits, wang2011interactive}.
However, this approach presents a poor representation of the time dimension and can suffer from overplotting. Therefore, variations have been proposed to overcome these problems using aggregation or pattern extraction algorithms that segment the data into consistent motion patterns~\cite{ferreira2013vector,Wang2014Urban}.
On the other hand, the space-time cube~\cite{spacetimecube} uses a different approach to solve these issues by using a 3D-based visual metaphor to represent time as one of the dimensions in 3D space.
This method has been widely used in previous work~\cite{bach2017descriptive,andrienko2013space,chen2015survey}.
However, as it uses a 3D environment, it can present diverse problems: cognitive overload, distortion of distances, and occlusion~\cite{Andrienko2013Visual, ware2013information,evaluating2020filho}.
\noindent More recent methods use static temporal visualizations to avoid the problems present in the space-time cube. These consist of a dense plot with time on the horizontal axis and a discrete set of vertical positions representing the spatial component of the trajectories. As a result, these methods can display the entire period of data and do not suffer from overplotting and occlusions.
These techniques use spatial transformations that preserve the order of positions without considering distances.
\textit{MotionRugs}~\cite{Buchmuller:2018:MVCTST} is the first representative of this category, proposed to provide an overview of collective motion data. As the objective is to identify the general trend in the motion of a population, it does not represent the individual trajectories and only shows relative distances.
A further variation of this technique, called \textit{SpatialRugs}, was presented by Buchm\"uller~et~al.~\cite{spatialrugs}, which uses colors to represent absolute spatial positions.
Subsequently, Franke~et~al.~\cite{1dordering} adapted this idea to present a temporal heatmap to visualize the propagation of natural phenomena.
JamVis~\cite{2022-JamVis} utilized the 2D representation to show urban events formed by groups of spatiotemporal points.
A vital aspect of these visualizations is how to represent 2D spatial coordinates in one dimension. There are different alternatives to accomplish this task: dimensionality reduction techniques~\cite{Ayesha2020}, spatial indexing methods~\cite{lu1993spatial,GUO:2006:SPATIAL}, or even specially designed projection techniques~\cite{stablevisualsummaries} to improve temporal stability in the results.
Unlike our work, these techniques focus on collective movements. Thus, they lack a direct representation of individual objects, which is of utmost importance in our case. In addition, and more importantly, they were not designed to represent moving regions (i.e., objects with a spatial extent).
\myparagraph{Storyline Visualizations:}
This group of techniques communicates the evolution of relationships between different objects over time. In general, these relations are the interaction of two objects at the same spatial position.
Commonly referred to as Storyline visualizations, they were often used to represent movie plots. However, lately, they have been used to describe relationships between more generic temporal objects~\cite{pena2021hyperstorylines}.
In this category of visualizations, entities are represented by curves with a horizontal temporal scale. The vertical proximity between the curves indicates a relationship.
This group of methods has evolved by improving the layout of the curves (reducing line crossings and wiggles)~\cite{van2016block, Tanahashi2012Design, Liu2013Storyflow} or by designing tools that allow the user to control the visualization~\cite{Tang2019istoryline}.
The closest work related to our proposal in this category was proposed by Arendt~and~Pirrung~\cite{Arendt2017They}, who explicitly incorporated spatial information to create the 1D representation of space.
Their user study found that explicitly using spatial information improved performance on overview tasks compared to methods that implicitly represent space through object interactions.
Unlike our proposal, all previous works focus on maintaining the local spatial ordering of objects without preserving the distances\,---\,our work intends to resemble the original distances and intersections between objects as much as possible.
\myparagraph{Moving Regions Applications:}
We now discuss some domains where spatial distances, object areas, and interactions, primarily due to spatial intersections, are vital to interpreting the data.
The first application is video surveillance systems used in traffic management and monitoring public places, which are essential in intelligent cities~\cite{lisecure2021}.
In this type of video, it is common to perform automated object detection~\cite{joshi2012survey}.
However, human interaction still plays an essential role in their analysis, using visualizations as support~\cite{raty2010survey}.
In this context, there are also methods using the space-time cube~\cite{meghdadi2013interactive}, exhibiting the same disadvantages described above.
On the other hand, Lee and Wittenburg~\cite{lee2019space} use an approach similar to \textit{MotionRugs}~\cite{Buchmuller:2018:MVCTST} but with this type of data. Their method uses the vertical axis to represent time and the horizontal axis to map the horizontal axis of the video frames.
A limitation of their work is that the projection from 2D to 1D is very simplistic, creating overlapping plots between objects even when there is no spatial interaction. For example, in videos of cars on a highway seen from the front, the cars are expected to be horizontally aligned, generating many errors.
Fonseca~and~Paiva~\cite{fonseca2021system} use time bars with interactivity tools to indicate the intervals where meetings between observed people and other events occur, enabling fast video analysis.
Another recurrent application is trajectories with uncertainty and movement prediction~\cite{domingo2012microaggregation,bonchi2009privacy}. In particular, hurricane trajectories are one theme where the study is necessary for preparation for future events~\cite{Cox2013Visualizing}.
This type of data presents a trajectory formed by recording the positions of the hurricane at different points in time. In addition, these records may include other measurements, such as pressure or wind speed. The use of visualization tools is common for this kind of data.
For example, Li~et~al.~\cite{Li2011MoveMine} used spatial mining techniques to decompose hurricane tracks and identify critical features.
Wang~et~al.~\cite{wang2011interactive} used a map view with trajectories linked to a parallel coordinate plot, a theme river chart, and scatter plots to represent the temporal aspect of other measured attributes.
With thunderstorm data, Diehl~et~al.~\cite{diehl2021hornero} proposed a tool that used the TITAN algorithm~\cite{dixon1993titan} to obtain regions of the presence of the storms for each timestep and presented a graph abstraction to display the splits and merges along time.
This modeling of thunderstorms as regions can also be applied to hurricanes. Depending on wind speed and pressure, a hurricane can affect surfaces of different sizes; therefore, it is possible to consider the presence of a hurricane as a region.
These applications use the same spatiotemporal visualization techniques presented above with some adaptations depending on the domain. Therefore, they have the same limitations, and there is room for improvements in the representation of moving regions. In the next section, we will discuss in detail the shortcomings of some of these techniques.
\section{Background and Motivation}
\label{sec:motivating-example}
This section presents an example using a synthetic dataset to introduce our method goals and compare them with related techniques.
This data set consists of four circular objects moving through time in orbits of different radii, as shown in Fig.~\ref{fig:reference_visualizations}(A).
The trajectories of the objects have different behaviors: the green object moves within a small radius; the orange and blue ones move closely, overlapping in the second half of the observed period; finally, the pink object moves in the opposite direction.
In addition, the areas of the objects also have different behaviors: the green object has a constant radius; the pink one has a decreasing radius; the other two have an increasing radius as a linear function of time, leading to an almost complete overlapping of the circles at the last timestep.
This dataset simulates applications such as object tracking in videos, with a bounding box in each frame for each moving object in the scene. This region (bounding box) can change position and shape between different timesteps.
A challenge in this context is providing a visual overview that summarizes the spatiotemporal features of the movement in a given dataset.
Such an overview needs to support the identification (both in space and time) of i) trajectories of individual objects, ii) area changes (which can indicate moving closer to/farther from the camera), and iii) intersections between objects (which can indicate encounters or occlusion between objects).
These tasks were inspired by applications such as analyzing object tracking in videos~\cite{meghdadi2013interactive,hoeferlin2013interactive} and how the StoryLines~\cite{Tanahashi2012Design, Liu2013Storyflow} visualization summarizes the similarities/encounters in a given dataset.
We considered using five previously proposed techniques to provide a visual summary of our synthetic data set (see the results in Fig.~\ref{fig:reference_visualizations}).
In each of the results, we discuss five limitations:
L1) need for navigational interactions,
L2) lack of representation of individual trajectories,
L3) lack of representation of the area,
L4) lack of spatial overlap (intersections) representation, and
L5) need to have all objects observed at all timesteps in the time window under consideration.
We consider that techniques can suffer from these limitations in a weak (W) or strong (S) way.
\myparagraph{Space-time cube~\cite{spacetimecube}:}
This technique uses a 3D metaphor in which the horizontal plane represents the spatial positions and the vertical axis the temporal dimension.
We can use the original spatial coordinates to describe the objects' area and intersections directly.
Fig.~\ref{fig:reference_visualizations}(A) illustrates three views of the same spatiotemporal cube.
The first one only shows the spatial information of the objects since it is a view from the top of the cube.
The 3D view has some drawbacks, such as perspective distortion and occlusion, which occur when projecting the cube on a 2D screen~\cite{bach2017descriptive}.
The distortion can hinder the perception of objects' areas (WL3); for example, in Fig.~\ref{fig:reference_visualizations}(A), only the first point of view presents the areas proportional to the actual values.
In each point of view, there are two types of object overlaps: real intersections and visual intersections caused by projecting objects of different depths~\cite{bach2014visualizing} (WL4).
The user would need to verify which intersections are correct from other points of view.
The 3D-based navigation can be demanding to control~\cite{evaluating2020filho, sss-gi2001} (WL1), and for that reason, it does not provide a fast overview of the data.
\myparagraph{MotionRugs~\cite{Buchmuller:2018:MVCTST}:}
This method is a dense representation where each column represents a timestep, and each cell is a different object. Note that two cells in the same row do not necessarily represent the same object.
Since this technique was developed for point data, we use the centroid of each region as its position to construct the visual summary in Fig.~\ref{fig:reference_visualizations}(B).
The objects are positioned vertically in each column according to a spatial ordering, in this case, obtained using a Hilbert Curve.
Unlike the space-time cube, the time dimension is represented linearly, and space can be interpreted as the dependent variable.
We incorporate this representation of time in MoReVis.
The colors in each cell can be based on a feature of the data, such as velocity or area.
In our example, the colors represent the 2D spatial position of the objects following the 2D color map on the right.
In this way, it is possible to identify the position of objects in space, not just their relative position, by referring to the 2D color map.
In addition, this coloring also helps to estimate the distances between objects at each timestep. A more significant color change indicates a more considerable distance.
However, this representation has some limitations.
For instance, it is impossible to understand the movement of individual objects (SL2) since this technique applies the spatial ordering strategy independently at each timestep. Thus, the track of each object across different timesteps is lost.
Furthermore, although it is possible to use colors or glyphs with sizes proportional to the area within the cells, it is unclear how to adapt these metaphors to represent the spatial intersections of the different objects (SL4).
Lastly, this technique requires the same objects to be observed in all timesteps (SL5).
\figMotivatingMoReVis
\figPipeline
\myparagraph{Stable Principal Components Summary~\cite{stablevisualsummaries}:}
This method uses the same visual encoding as \textit{MotionRugs}; however, it adopts a modified PCA projection called \emph{Stable Principal Components}~\cite{stablevisualsummaries}.
This adaptation applies PCA to each timestep; then, it interpolates the results to generate continuous changes in the calculated principal components.
This space transformation method can better represent 1D object trajectories and give a better view of space, as shown in Fig.~\ref{fig:reference_visualizations}(C).
However, it suffers from the same limitations as MotionRugs, \emph{i.e.}\xspace, it is impossible to identify individual trajectories, there is no representation of the spatial intersections, and the objects must be observed in all timesteps (SL2, SL4, SL5).
\myparagraph{MotionLines~\cite{stablevisualsummaries}:}
This method was presented with \emph{Stable Principal Components}.
The idea is to use the distance between the objects in the spatial representation (\emph{i.e.}\xspace, the 1D projection on the y-axis) instead of positioning the objects in each column in their relative order (see Fig.~\ref{fig:reference_visualizations}(D)).
Compared to the previous method, it is possible to identify the movement of individual objects, a representation that we also use in MoReVis.
Nevertheless, MotionLines does not consider the representation of areas and intersections, which are essential in some applications, such as surveillance videos.
A trivial modification to represent the area would be to change the width of the curves proportionally to the area of the respective objects.
However, as discussed in Sec.~\ref{sec:evaluation}, this change can result in many missing and spurious intersections (SL4).
Furthermore, the spatial transformation used does not consider the case where the number of objects is not constant over time (WL5).
The space of each timestep is transformed separately, and when there is one object or none, the 1D space degenerates.
\myparagraph{Visual Analysis with 1D ordering~\cite{1dordering}:}
This method creates a heatmap to summarize the dataset.
As in previous work, each column represents a timestep. Each row corresponds to a cell of a regular grid into which the space is divided\,---\,\emph{i.e.}\xspace, the positions are discretized into a regular grid.
Next, the mapping of the cells to the vertical position is obtained by dimensionality reduction, in this case, with the Hilbert curve.
For each cell, the color represents some measure of the objects' density in the corresponding grid cells.
In Fig.~\ref{fig:reference_visualizations}(E), we use the sum of areas of the objects in that cell.
Although we can infer global motion trends in this example, this summary uses an aggregation strategy. Therefore, it does not support the study of individual movements (SL2).
It is possible to represent the area of the objects, but only in aggregated form (WL3). When a cell contains many objects, it could be interpreted as an intersection, but the objects are not guaranteed to intersect (SL4).
In summary, the above techniques have different limitations when representing moving regions.
Therefore, the produced visualizations are not well suited for summarizing situations where the areas of objects and their spatial interactions (intersections) are relevant.
Our proposed technique, MoReVis, solves these limitations by a non-trivial combination/extension of ideas from many existing visual summaries.
Fig.~\ref{fig:motivating_morevis} shows the result of using MoReVis in the synthetic dataset described above (see a detailed discussion in Sec.~\ref{sec:visual_evaluation}).
We highlight the representation of areas (A1), the representation of the intersections between objects (A2), and visual cues to denote the absence of objects' intersections (A3).
The following section describes the technique in detail.
\section{MoReVis}
\label{sec:method}
This section introduces MoReVis, a spatiotemporal visual summary designed to overcome the limitations of previous methods (Sec.~\ref{sec:motivating-example}).
The algorithm to produce this visual summary consists of five steps (illustrated in Fig.~\ref{fig:method}): \emph{Projection of regions}, \emph{Creation of time slices}, \emph{Area representation}, \emph{Intersection representation}, and \emph{Crossings removal}.
We detail each step in the rest of this section.
\subsection{Projection of regions}
The input of our method is a moving regions dataset, \emph{i.e.}\xspace, a set of objects $O = \{O_1, O_2, \dots, O_n\}$ that move over time.
At a given timestep, each object is associated with a convex region in the 2D plane (Fig.~\ref{fig:method}(A)).
Notice that these regions may change over time.
Furthermore, the objects do not need to be observed in all timesteps, different from related techniques.
Finally, each object can have additional associated time-varying attributes representing either numerical or categorical properties.
\looseness=-1
The first step in our algorithm is obtaining an initial 1D representation of the spatial movement of each object.
This step aims to capture the spatial context of the dataset.
Projecting point data is a prevalent task; in our setting, however, the extent of the regions also carries important information.
For projection methods that only support point data, we used the regions' centroids; otherwise, we used the distance between the regions.
\looseness=-1
The considered projection methods include dimensionality reduction techniques: PCA~\cite{pearson1901liii}, MDS~\cite{kruskal1964multidimensional}, force-directed~layout~\cite{improved:2003:tejada}, t-SNE~\cite{van2008visualizing}, UMAP~\cite{mcinnes2020umap}, and space-filling techniques: Hilbert and Morton curves~\cite{lu1993spatial}.
These projection methods are data-driven, and we used as input the data of all objects to fit them, ignoring the time information.
In recent visual~summaries~\cite{Buchmuller:2018:MVCTST, stablevisualsummaries}, the projection methods, such as Stable Principal Components, were fitted with the data of each timestep separately or using the data of the current and previous timesteps.
Although this procedure presented positive results, we have a different scenario in which the number of objects may vary over time.
Therefore, the spatial representation can be compromised by applying projections in each timestep separately, resulting in a degraded spatial representation when only one object is observed in a given timestep.
For that reason, we fit the projection with all points to obtain a general space representation.
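For concreteness, the following minimal Python sketch illustrates this step; the list-of-observations data layout and all names are ours and not prescribed by the method.
\begin{verbatim}
# Illustrative sketch: fit PCA once on the centroids of all observations
# (ignoring time) and use the 1D scores, normalized to [0, 1].
import numpy as np
from sklearn.decomposition import PCA

def project_centroids(observations):
    # observations: list of dicts with keys 'object', 'timestep', 'centroid'
    centroids = np.array([o['centroid'] for o in observations])
    scores = PCA(n_components=1).fit_transform(centroids).ravel()
    scores = (scores - scores.min()) / (scores.max() - scores.min())
    return {(o['object'], o['timestep']): s
            for o, s in zip(observations, scores)}
\end{verbatim}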
\subsection{Creation of time slices}
The MoReVis visualization comprises columns indicating the (discrete) set of observed timesteps.
We represent an object as a rectangle in each column corresponding to timesteps where the object is observed.
The rectangles are vertically positioned so that their center corresponds to the projection value obtained in the previous step.
In addition, all the rectangles have the same width (60\% of the number of pixels corresponding to a column in the plot) and the same height (which will be adjusted later).
We connect the rectangles corresponding to the same object to form ribbons representing each object's movement.
The color of each ribbon can be used to convey either object attributes (\emph{e.g.}\xspace, uncertainty), movement attributes (\emph{e.g.}\xspace, speed), or identifiers used to distinguish the objects.
In the example depicted in Fig.~\ref{fig:method}(C), colors are used to identify different objects.
\subsection{Area representation}
\label{sec:area-representation}
The remaining steps of the MoReVis technique aim to adjust the initial layout described so far to represent areas of objects and their intersections.
To help describe these steps, we first set up some notation.
The area of a given object $O_i$ at timestep $t$ is denoted by $a_{i, t}$.
$R_{i, t}$ denotes the MoReVis rectangle associated with this object and timestep.
The vertical position of this rectangle's center is denoted by $y'_{i, t}$ (value obtained in the projection step) and its height by $h_{i, t}$.
We assume the positions $y'_{i,t}$ are normalized to the interval $[0, 1]$, so the scale does not depend on the projection method.
This step aims to scale rectangles' height so that their area (in the MoReVis plot) is proportional to their area in the original 2D space.
To do so, we want a scaling factor based on the objects' overall spatial extent.
Furthermore, this scaling factor has to be the same for all timesteps so that the rectangles' heights are comparable through time.
To do so, we first define $A_t = \sum_{i} a_{i, t}$ as the sum of the area of objects in each timestep and $A_M = \max_t \{A_t\}$.
We then set the rectangle's height $h_{i, t} = \dfrac{a_{i, t}}{A_M}$.
The intuition is that the sum of the rectangle heights in any timestep is at most 1, with equality exactly at the timestep where the total area occupied by the objects equals $A_M$ and the objects are disjoint.
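A minimal sketch of this scaling (the dictionary-based data layout is assumed for illustration):
\begin{verbatim}
# Illustrative sketch: heights h_{i,t} = a_{i,t} / A_M, where
# A_M = max_t sum_i a_{i,t}, so heights are comparable across timesteps.
def rectangle_heights(areas):
    # areas[t][i] = area a_{i,t} of object i at timestep t
    A_M = max(sum(per_t.values()) for per_t in areas.values())
    return {t: {i: a / A_M for i, a in per_t.items()}
            for t, per_t in areas.items()}
\end{verbatim}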
\figIntersectionLimitation
\subsection{Intersection representation}
\label{sec:intersection_representation}
This final step aims to represent intersections between the objects, \emph{i.e.}\xspace, make the rectangles in the MoReVis plot intersect (by changing their vertical position) with an intersection area proportional to the actual intersection in the original 2D space.
This problem is challenging since the intersection patterns in 2D space can be complex (Fig.~\ref{fig:intersection_limitation}).
For this reason, we formulate this as an optimization problem that will try to preserve the given spatial configuration as much as possible.
We formulate an optimization problem for each timestep independently.
Thus, for clarity, the description below will omit the subscript $t$ for all the variables. We now set some notation.
Given a pair of objects $(O_{i}, O_{j})$, $w_{i, j}$ denotes the area of the intersection of their regions (already divided by $A_M$), and $I_{i, j}$ denotes the vertical intersection of their associated rectangles $(R_{i}, R_{j})$.
As shown in Fig.~\ref{fig:intersection_limitation}, it is not always possible to represent all intersections correctly.
Complex 2D intersection patterns can force the creation of \emph{spurious intersections} in the 1D representation.
For this reason, we set the goals of our optimization as follows:
\squishlist
\item[(G1)] For every intersection in the 2D space ($w_{i, j} > 0$), we want that the corresponding pair of rectangles also intersect in the 1D space, with the 1D intersection being at least as big as the 2D intersection ($w_{i, j} \leq I_{i, j}$).
\item[(G2)] We also want the 1D intersections to be not much bigger than the 2D intersections.
\item[(G3)] If there is no intersection in the 2D space ($w_{i, j} = 0$), we want to avoid, as much as possible, having \emph{spurious intersections} in 1D (which happens when $I_{i, j} > 0$).
\item[(G4)] We want to keep the rectangles as close as possible to their original positions obtained in the projection step to keep the space representation.
\end{list}
To formulate the optimization problem, notice that the vertical intersection $I_{i, j}$ between two rectangles is a function of their height and vertical position.
With the heights fixed, $I_{i, j} = 0$ if there is no intersection, $I_{i, j} = \min(h_i, h_j)$ if one rectangle contains the other, and $I_{i, j} = \frac{h_i + h_j}{2} - |y_i - y_j|$ otherwise.
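In code, this piecewise expression can be computed directly; the helper below is an illustrative sketch, not part of the optimization itself.
\begin{verbatim}
def vertical_intersection(y_i, h_i, y_j, h_j):
    # I_{i,j} for rectangles centered at y_i, y_j with heights h_i, h_j
    overlap = (h_i + h_j) / 2.0 - abs(y_i - y_j)
    return max(0.0, min(overlap, h_i, h_j))
\end{verbatim}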
We first separate the pairs of objects into two disjoint subsets, $A = \{(i,j)| w_{i,j} > 0 \}$ and $B = \{(i,j)| w_{i,j} = 0 \}$, \emph{i.e.}\xspace, $A$ is the set of pairs of objects that intersect and $B$ is the set of pairs that do not.
For each pair $(i,j) \in A$, we define a constraint in our optimization to achieve (G1) as:
\begin{equation}
w_{i,j} \leq I_{i,j} \Leftrightarrow |y_i - y_j| \leq \frac{h_i + h_j}{2} - w_{i, j}
\end{equation}
Note that because we enforce (G1) as a hard constraint, every intersection in the 2D space is guaranteed to also appear in the 1D plot (no missing intersections).
\figIntersectionAlgorithm
\figInterface
\noindent Similarly, to achieve (G2), for each pair $(i,j) \in A$, we define a real optimization variable $k_{i, j} \geq 1$ and add the following constraint:
\begin{equation}
I_{i, j} \leq k_{i, j}w_{i,j} \Leftrightarrow |y_i - y_j| \geq \dfrac{h_i + h_j}{2} - k_{i,j}w_{i, j}
\end{equation}
Since (G2) states that $I_{i, j}$ should not be much bigger than $w_{i, j}$, we want each $k_{i, j}$ to be as small as possible.
To this end, we define our first loss as: $F_1 = \dfrac{1}{|A|} \sum_{(i, j) \in A} k_{i, j}$.
For the third goal (G3), we want to minimize the number of spurious intersections. Therefore, we want to obtain $I_{i,j} = 0$, for every pair $(i,j) \in B$.
For this to happen, we must have $|y_i - y_j| \geq \dfrac{h_i + h_j}{2}$.
To count the number of spurious intersections for each pair in $B$, we add a binary variable $c_{i, j}$ and a new constraint of the form:
\begin{equation}
|y_i - y_j| \geq (1 - c_{i, j})\dfrac{h_i + h_j}{2}
\end{equation}
When $c_{i, j} = 1$, the constraint is redundant; when $c_{i, j} = 0$, there is no intersection between the rectangles. We therefore define our second loss as $F_2 = \dfrac{1}{|B|}\sum_{(i, j) \in B} c_{i, j}$.
Finally, to fulfill (G4), we want the updated positions $y_i$ to remain close to the 1D space representation $y'_i$ obtained previously (in order to retain the spatial representation), so we add the following quadratic penalty: $F_3 = \sum_{i=1}^n (y'_i - y_i)^2$.
We combine the three losses into a single one by defining two parameters, $\lambda_1 > 0$ and $\lambda_2 > 0$, and the final objective function of our minimization problem is given by: $\lambda_1 F_1 + \lambda_2 F_2 + F_3$.
This formulation results in a mixed-integer quadratic programming problem. This type of problem can be solved with a branch-and-bound approach~\cite{lee2011mixed}.
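For illustration only, the sketch below models a single timestep of this problem with the \texttt{cvxpy} library; the big-$M$ linearization of the nonconvex lower-bound constraints, the variable names, and the default value of $M$ are our assumptions (not the paper's implementation), and solving it requires an installed mixed-integer QP solver.
\begin{verbatim}
import cvxpy as cp

def layout_timestep(y0, h, w, lam1=1.0, lam2=1.0, M=2.0):
    # Illustrative sketch. y0: initial positions (numpy array); h: heights;
    # w[(i, j)]: 2D intersection area divided by A_M for each pair (i < j).
    n = len(y0)
    y = cp.Variable(n)
    A = [p for p, v in w.items() if v > 0]    # intersecting pairs
    B = [p for p, v in w.items() if v == 0]   # disjoint pairs
    k = {p: cp.Variable() for p in A}               # (G2) ratios, k >= 1
    c = {p: cp.Variable(boolean=True) for p in B}   # (G3) spurious flags
    s = {p: cp.Variable(boolean=True) for p in A + B}  # big-M side choice
    cons = []
    for (i, j) in A:
        half = (h[i] + h[j]) / 2.0
        cons += [cp.abs(y[i] - y[j]) <= half - w[(i, j)],        # (G1)
                 k[(i, j)] >= 1,
                 y[i] - y[j] >= half - k[(i, j)] * w[(i, j)] - M * s[(i, j)],
                 y[j] - y[i] >= half - k[(i, j)] * w[(i, j)] - M * (1 - s[(i, j)])]
    for (i, j) in B:
        half = (h[i] + h[j]) / 2.0
        cons += [y[i] - y[j] >= (1 - c[(i, j)]) * half - M * s[(i, j)],
                 y[j] - y[i] >= (1 - c[(i, j)]) * half - M * (1 - s[(i, j)])]
    F1 = sum(k[p] for p in A) / max(len(A), 1)
    F2 = sum(c[p] for p in B) / max(len(B), 1)
    F3 = cp.sum_squares(y - y0)                                  # (G4)
    prob = cp.Problem(cp.Minimize(lam1 * F1 + lam2 * F2 + F3), cons)
    prob.solve()   # requires a mixed-integer QP capable solver
    return y.value
\end{verbatim}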
We notice that we can reduce the size of our optimization problem (in the number of variables and constraints) by
partitioning the set of objects into groups. Each group contains the objects that form a connected region\,---\,\emph{i.e.}\xspace, there is a path of intersections connecting all objects in the group.
The six objects were separated into three groups in Fig.~\ref{fig:intersection_algorithm}(A).
This separation then divides our optimization problem into smaller problems that can be solved more efficiently.
Since two objects from different groups must not intersect, we position the groups only after optimizing each one, ensuring there is no overlap between groups.
To do so, we use a quadratic program.
For each group $g$, we compute its total height $h_g$, \emph{i.e.}\xspace, the size of the interval that contains all rectangles $(y_i - h_i/2, y_i + h_i/2)$ of the group and its mean position $\overline y_{g} = h_g/2 + \min_{i \in g} (y_i - h_i/2)$.
With the groups ordered by mean position, consider the consecutive pair $(g, g')$; we add constraints $y_g + h_g/2 \leq y_{g'} - h_{g'}/2$, so they will not intersect.
We use an objective function similar to $F_3$ to minimize the overall displacement, $\sum_{g} (y_g - \overline y_g)^2$, where the optimization variables are only $y_g$.
We then place the individual groups and use the individual subproblems to place the rectangles internally in each group.
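A small sketch of the grouping step follows; the union-find implementation is an illustrative choice (any connected-components routine would do).
\begin{verbatim}
def intersection_groups(n, w):
    # Illustrative sketch. n objects; w[(i, j)]: 2D intersection area.
    # Objects connected by a path of intersections end up in the same group.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for (i, j), v in w.items():
        if v > 0:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
\end{verbatim}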
\subsection{Crossings removal}
\label{sec:crossings}
Lastly, representing the objects as curves and creating links between rectangles can lead to undesired crossings between curves.
Again, removing all the crossings while preserving the spatial context in a lower dimension is impossible.
Some previous works~\cite{stablevisualsummaries, Buchmuller:2018:MVCTST} also identified this problem and proposed alternative projections that try to generate more stable orderings.
In Sec.~\ref{sec:evaluation}, we evaluate different projection strategies, including those mentioned.
However, since undesired crossings can still occur even with these stable projections, we decided to represent them visually.
A crossing between two links is \textit{spurious} if the objects in the previous and the next timestep present no intersection in the original space.
Similar to the visual cues studied by Bäuerle~et~al.~\cite{bauerle2022where} to represent missing information, we change the encoding of every link involved in a \textit{spurious} crossing to a hatched color or an opacity gradient.
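A sketch of how such spurious crossings can be detected between two consecutive timesteps (the data layout below is assumed for illustration):
\begin{verbatim}
def spurious_crossings(y_prev, y_next, inter_prev, inter_next):
    # Illustrative sketch. y_*: dict object -> vertical position;
    # inter_*: set of frozensets of object pairs intersecting in 2D
    # at the previous / next timestep.
    ids = [o for o in y_prev if o in y_next]
    flagged = []
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            i, j = ids[a], ids[b]
            crossed = (y_prev[i] - y_prev[j]) * (y_next[i] - y_next[j]) < 0
            pair = frozenset((i, j))
            if crossed and pair not in inter_prev and pair not in inter_next:
                flagged.append((i, j))
    return flagged
\end{verbatim}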
\section{Visualization Interface}
\label{sec:interface}
We implemented MoReVis in an interactive interface (shown in Fig.~\ref{fig:interface}) with four coordinated views:
\emph{MoReVis}, \emph{Data View}, the \emph{Intersection View}, and the \emph{Parallel Coordinates}.
Each view will be explained in detail in the following sub-sections.
\looseness=-1
\myparagraph{MoReVis View:}
This is the main view in our system~(Fig.~\ref{fig:interface}(A)) and aims to present the MoReVis plot alongside additional visual clues that help in the data exploration process.
This view also supports zooming and panning and presents a tooltip showing detailed information about the object when the mouse hovers over a curve.
Next to the y-axis, a color bar, inspired by the coloring of SpatialRugs~\cite{spatialrugs}, helps interpret the 1D space and verify the quality of the projection.
A 2D color map is placed over the original space, and each object in each timestep is colored according to the position of its centroid.
However, instead of keeping colors in the curves, we moved all rectangles to the same horizontal position, blending the color of overlapping rectangles.
The 2D color map is displayed in the Data View by hovering over this color bar.
The idea is to inspect the space's most used regions represented by the vertical axis.
Thus, to evaluate the spatial transformation, we look for abrupt changes in color as neighborhoods in the original space have similar colors.
Finally, the bar chart on the top indicates the number of spurious intersections in each timestep (similar to the error plots presented in \cite{stablevisualsummaries}).
For example, in Fig.~\ref{fig:interface}(A), we can see that the maximum number of spurious intersections in a single timestep is 2.
The rectangles that participate in spurious intersections are highlighted by hovering over any of the bars.
\myparagraph{Data View:}
This view presents the original moving regions dataset, so the visual metaphor depends on the application domain.
In our application, we implemented two options.
The first one is used for data representing geographical trajectories. In this case, the view presents a 2D geographical map with polygonal lines and shapes depicting the moving regions as in Fig.~\ref{fig:hurricane-use-case} (Sec.~\ref{sec:use-case-hurdat}).
The second option works for data representing object tracking in videos (as in Fig.~\ref{fig:teaser} and Fig.~\ref{fig:interface}(B)); we show the video frames with the bounding boxes of the tracked objects.
Finally, when the user hovers the mouse over a curve in the MoReVis view, the data for the corresponding timestep is shown on the Data View.
Similarly, when a user clicks on a curve, the trajectory of the centroids of the object is shown on the Data View.
\myparagraph{Intersection View:}
This view is activated when the user creates a brush on the MoReVis plot to present details of the structure of the intersections.
For each timestep in the horizontal extent of the brush, we create a graph where nodes are objects (contained in the brush region), and an edge is made if two rectangles intersect.
We use a vertical layout to display the graphs, similar to recent works~\cite{Valdivia2021Analyzing, Elzen2014Dynamic}. Each row is a node in this display, and a line between two rows is an edge.
Notice that we only show nodes with a non-zero degree.
The width of the edges is proportional to the intersection area, and black edges indicate real intersections. In contrast, the red ones represent spurious intersections (which are not present in the 2D original space).
For example, in Fig.~\ref{fig:interface}(C), it is possible to see a spurious intersection at timestep $940$ between the red and pink objects; on the Data View, it is possible to verify that they are close but do not intersect.
This view was designed to facilitate the inspection of intersection details and to depict possible errors.
\myparagraph{Parallel Coordinates:}
We use a parallel coordinates plot~\cite{inselberg1990parallel} to represent object attributes, such as measures of their overall movement, area, and presence on the video, as shown in Fig.~\ref{fig:interface}(D).
In the usage scenario with hurricanes (Sec.~\ref{sec:use-case-hurdat}), other attributes were also used: their max velocity, wind speed, and pressure.
Brushing the different axes in this plot allows the user to filter the objects shown in the MoReVis view.
\section{Experimental Evaluation}
\label{sec:evaluation}
This section presents a series of experiments to evaluate the MoReVis algorithm in terms of its parameters.
\subsection{Datasets}
\label{sec:datasets}
Our evaluation uses two real datasets from different domains: object tracking in videos and hurricane trajectories. These datasets are described in the following.
\myparagraph{WILDTRACK~\cite{Chavdarova2018Wildtrack}:} consists of an object tracking dataset produced from a video captured in a public open area with an intense movement of people.
The data contain the original video and the bounding boxes of tracked people for each frame in the video.
We only considered a subset of 14 people (moving regions) with a long presence on the video, having a total of 234 timesteps.
We decided to filter the people to generate more interpretable results that our users can evaluate through our visualization.
\myparagraph{HURDAT~\cite{hurdat}:} describes hurricane trajectories tracked since 2004. Each trajectory contains several attributes, such as wind velocity, pressure, and the spatial extent of the storm, with measurements at intervals of 6 hours.
We used each hurricane as a moving region, and each timestep represents two days. The area is the convex hull of all the hurricane's region measurements within this time interval.
In addition, we disregard the year of occurrence for each timestamp to investigate seasonal patterns.
Finally, we selected the hurricanes that started with longitude inside the interval $[-50, -20]$ and latitude inside $[10, 20]$ (west coast of Africa). This resulted in a total of 70 hurricane trajectories along 52 timesteps.
\subsection{Quality Metrics}
\label{sec:metrics}
We now describe the metrics used to evaluate the MoReVis layout.
\myparagraph{Stress Measure:}
This metric is commonly used to assess the quality of dimensionality reduction techniques; it measures the difference between the distances in the original and the projected spaces.
More clearly, let $d_{O_i, O_j, t_p, t_q}$ be the distance between the region of objects $O_i$ and $O_j$ at timesteps $t_p$ and $t_q$, and let $\hat d_{O_i, O_j, t_p, t_q}$ denote the distance between the respective rectangles.
We use the length of the smallest segment that links the regions or rectangles as the distance.
We build the pairwise distance matrices for all $d_{O_i, O_j, t_p, t_q}$ called $D$ and for all $\hat d_{O_i, O_j, t_p, t_q}$ called $\hat D$. Consider that they are divided by the maximum value to remove the scale.
The \textit{stress measure} is $\sqrt{||D - \hat D||_F^2 / ||D||_F^2}$, where $||.||_F$ denotes the Frobenius norm of a matrix.
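A direct sketch of this computation (an illustrative helper, not our implementation):
\begin{verbatim}
import numpy as np

def stress(D, D_hat):
    # D, D_hat: pairwise distance matrices (original space / MoReVis plot)
    D = np.asarray(D, float) / np.max(D)
    D_hat = np.asarray(D_hat, float) / np.max(D_hat)
    return np.sqrt(np.linalg.norm(D - D_hat, 'fro')**2
                   / np.linalg.norm(D, 'fro')**2)
\end{verbatim}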
\myparagraph{Crossing and Jump Distance:}
These metrics were used to evaluate space projection techniques for MotionLines~\cite{stablevisualsummaries}.
The crossings (change of curves order between timesteps) can generate misrepresentations of closeness.
Consider that $S_{i, t}$ is the ranking of the object $O_i$ on timestep $t$ in relation to the vertical positions.
Given a pair of consecutive timesteps, we count the pairs of objects $(O_{i}, O_{j})$ such that $S_{i, t} < S_{j, t}$ and $S_{i, t + 1} > S_{j, t+1}$ (their order changes).
The \textit{crossing metric} is the average number of crossings over all pairs of consecutive timesteps.
Similarly, the jump distance between consecutive timesteps is the change in the objects' rankings, $\sum_{i} |S_{i, t} - S_{i, t+1}|$, where the sum is over every object present in both timesteps.
The \textit{jump distance metric} is the average of this value over all pairs of consecutive timesteps.
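For concreteness, a sketch of both metrics given the per-timestep rankings (the dictionary layout is assumed for illustration):
\begin{verbatim}
def crossing_and_jump(S, timesteps):
    # Illustrative sketch. S[t][o]: vertical rank of object o at timestep t.
    crossings, jumps = [], []
    for t, u in zip(timesteps[:-1], timesteps[1:]):
        common = [o for o in S[t] if o in S[u]]
        c = sum(1 for a in range(len(common))
                  for b in range(a + 1, len(common))
                  if (S[t][common[a]] - S[t][common[b]])
                   * (S[u][common[a]] - S[u][common[b]]) < 0)
        crossings.append(c)
        jumps.append(sum(abs(S[t][o] - S[u][o]) for o in common))
    return (sum(crossings) / len(crossings), sum(jumps) / len(jumps))
\end{verbatim}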
\myparagraph{Intersection Area Ratio Error:}
In our formulation (Sec.~\ref{sec:method}), for every intersection between objects, we allow $I_{i,j}$ to be bigger than $w_{i, j}$.
Therefore, to evaluate the size of this ``distortion'', we define the intersection area ratio error as the mean of the ratios $I_{i,j}/ w_{i,j}$ for all intersecting pairs of objects ($w_{i, j} > 0$).
\looseness=-1
\myparagraph{Spurious Intersection Error:}
Although there is no missing intersection in the MoReVis plot, spurious intersections may occur.
Therefore, we define the \textit{spurious intersection error} as the percentage of intersections represented in a given plot that are spurious.
\figProjectionsComparison
\subsection{Quantitative Evaluation}
\myparagraph{Projection Evaluation:}
Similar to related techniques (Sec.~\ref{sec:motivating-example}), the MoReVis result depends on the method used to transform the original 2D space into 1D.
We evaluate a set of projections with both datasets to identify the most suited projection technique.
We proceed with this evaluation in two steps.
First, for each projection, we identify the best parameter values, following practices discussed in the literature and performing a grid search over all parameter combinations.
The details are available in the accompanying supplementary material.
We highlight that we used the region's centroid as the representative position for techniques that directly use positions (\emph{i.e.}\xspace, PCA, Hilbert Curve, and Morton Curve).
On the other hand, when using techniques based on distances (\emph{i.e.}\xspace, force-directed layout, MDS, t-SNE, and UMAP), we evaluated both the distances between the centroids and the distances between the regions (considering their extent).
With our datasets, the cost of computing distances between regions was negligible, and these distances can be more informative since they account for the regions' extent.
To select the best set of parameters, we chose the ones that minimized the stress measure, crossings, and jump distance with this respective order of priority.
In the second step, we compare the different techniques.
Fig.~\ref{fig:projections_comparison} shows the results, where each line represents a technique, and the vertical axes include the quality metrics (discussed in Sec.~\ref{sec:metrics}) and the overall computing time of the MoReVis technique with each projection technique.
Notice that the choice of the technique has great importance for the time necessary to compute the complete MoReVis result; slower methods can almost double the required time.
The projection technique had a minor influence on the metrics related to the optimization step in MoReVis: intersection area ratio and the number of spurious intersections.
We can also see that the space-filling curves and MDS did not obtain the best result in any of the metrics.
Finally, while t-SNE and UMAP attained good results, they demand the evaluation of a large number of parameters and involve a high computing time.
Overall, the two best techniques are the force-directed layout and PCA, which produce similar 1D representations. One difference is that PCA considers only the centroids, while the force-directed layout considers the regions' extent.
One drawback of the force-directed layout is its iterative process, which makes it slower than PCA.
The good results of PCA can be explained by the data being low-dimensional (2D).
Considering these results, we used PCA in the next steps of evaluations and usage scenarios.
\figOptimizationMetrics
\myparagraph{Optimization Evaluation:}
As shown in Fig.~\ref{fig:projections_comparison}, the spurious intersection error was $5\%$ for WILDTRACK and $10.5\%$ for HURDAT, and the intersection area ratio was $1.2$ for both.
These results show that our optimization obtained good approximations despite not achieving a perfect representation of intersections in 1D.
The size of the problem impacts these metrics: the bigger the number of objects in each group (as described in Sec.~\ref{sec:intersection_representation}), the more complex the intersection structure to be represented.
To investigate this dependency, we ran the MoReVis algorithm on a reduced set of objects randomly sampled from the original dataset for each dataset.
More clearly, we generated ten random samples for different sample sizes, which were given as input to the MoReVis algorithm.
The average error measures are reported in Fig.~\ref{fig:optimization_metrics}(left); the filled region marks the min and max values obtained.
Notice that the algorithm can produce layouts with small values of spurious intersection and intersection area ratio for a small set of objects. However, the errors increase as the number of objects grows and more complex spatial structures arise.
The number of objects also greatly impacts the computation time, as the number of constraints and integer variables grows quadratically with the number of objects.
Fig.~\ref{fig:optimization_metrics} (bottom-left) shows the total computation time divided by the number of timesteps (the WILDTRACK presents more timesteps but fewer objects than HURDAT).
The mean time per timestep for small samples (in both datasets) is at most $0.1$ seconds. Still, in the case of large samples for the HURDAT dataset, the computation can take, on average, close to $0.2$ seconds, which results in a total computation time of around 30 seconds.
We now investigate the dependency of the quality metrics on the parameters $\lambda_1$ and $\lambda_2$.
To compare the different situations, we considered different values for the ratio $\lambda_2/\lambda_1$.
When this value is small, more weight is given to minimizing the intersection area ratio. On the other hand, when the ratio is large, more weight is given to reducing the number of spurious intersections.
The results of this experiment are shown in Fig.~\ref{fig:optimization_metrics} (right).
The expected relation between $\lambda_2/\lambda_1$ and the metrics is observed.
It is also possible to see that HURDAT required a longer computing time when more weight was given to avoiding spurious intersections.
Depending on the application, one can set the parameters to prioritize one of the aspects.
In all the usage scenarios this paper presents, we set both $\lambda_1 = \lambda_2 = 1$.
\figMotionlinesComparison
\subsection{Visual Evaluation}
\label{sec:visual_evaluation}
We now present and discuss the visual results of MoReVis with some of our datasets.
Fig.~\ref{fig:motivating_morevis} shows the technique for the synthetic dataset described in Sec.~\ref{sec:motivating-example}.
We can identify the scale of the areas (A1): the pink object has a larger extent that decreases through time, while the orange and red objects have growing regions.
Furthermore, it is possible to identify that the objects started intersecting close to the middle of the observation period and almost overlapped entirely by the end (A2).
In (A3), we demonstrate the importance of the crossing removal step.
Notice that at this timestep, the pink curve crosses the other two curves that do not intersect in the original space.
The visual cue present in MoReVis indicates this inconsistency, and the final result presents no missing or spurious intersections.
This example illustrates the effectiveness of MoReVis in supporting the tasks discussed in Sec.~\ref{sec:motivating-example}.
The example presented in Fig.~\ref{fig:motionlines_comparison} aims to show the importance of the optimization step in MoReVis.
To do this, we compare MoReVis against an adapted version of MotionLines~\cite{stablevisualsummaries} (using the SPC projection) that represents areas on the WILDTRACK dataset.
Considering the same notation of Sec.~\ref{sec:area-representation}, in the MotionLines adaptation we set the width of the curves to $h_{i, t} = s\dfrac{a_{i, t}}{A_M}$. The parameter $s$ allows us to generate curves at different scales in the plot; we evaluated $s = 0.5$ and $s = 1$.
The curves are plotted in blue, and some rectangles are highlighted by filling in orange, purple, or red if they had a spurious intersection, missing intersection, or both, respectively.
Notice that this trivial adaptation of MotionLines presents some uncontrolled flaws in the representation of intersections.
There are missing intersections when the curves are thin, as shown on the top plot. When the curves are thick, many spurious intersections appear. This is evident at the end of the observed period, where many close objects have spurious intersections for an extended period.
In the bottom plot, the MoReVis result can adjust the position of curves so that there are no missing intersections and just a few spurious intersections.
In summary, when compared with other techniques, MoReVis can represent objects' area and spatial interactions.
These issues were not considered in previous works.
Furthermore, our results show that while it is true that we can adapt previous techniques to represent the area (\emph{e.g.}\xspace, MotionRugs, MotionLines), these solutions are insufficient.
Therefore, non-trivial solutions are needed to achieve this objective, and our proposal points in that direction.
In particular, when considering the more widely used solutions like space-time cubes and animations, MoReVis can provide an overview of all the spatial movement,
area, and intersections simultaneously without requiring extensive navigational interactions or spatial distortions caused by 3D views.
All of this allows for fast identification of spatial events, temporal patterns concerning spatial extents (areas), and direct comparison across different timesteps.
\section{Usage Scenarios}
\label{sec:uses}
\label{sec:use-case}
We present two usage scenarios to demonstrate the applicability of MoReVis in different domains.
The visual interface presented in Sec.~\ref{sec:interface} was used to produce the analysis.
\subsection{Object tracking data}
\label{sec:use-case-wildtrack}
\looseness=-1
First, we explore the WILDTRACK dataset described in Sec.~\ref{sec:datasets}.
It consists of people tracked on a video captured at an entrance of a building.
In the video, many pedestrians pass by, mostly entering and leaving the scene quickly.
We applied filtering to select objects that stayed at least $200$ and less than $1000$ frames on the screen.
The MoReVis result with the PCA projection is depicted in Fig.~\ref{fig:teaser}.
The visualization elucidated several properties of the data, discussed in the following.
At the start, the violet curve on the left (A1) presents a significant change in the vertical position, suggesting a broad movement that passes through a large extent of the scene.
The curve also presents a small width, indicating that the pedestrian has a relatively small bounding box, \emph{i.e.}\xspace, is far from the camera.
Around the timestep $900$, this curve jumps from the top of the plot to the bottom; this was identified as an error in the original data: two persons were classified as the same.
Fig.~\ref{fig:teaser}(A2) shows the trajectory of the original video, forming a zig-zag due to this classification error.
Another example is highlighted as (B1), which consists of three thin curves (light green, orange, and pink) overlapping. With frame (B2), we identify three classified pedestrians walking together with intersecting bounding boxes represented in the plot. As they are distant from the camera, their curves are thin.
The green curve (C1) presents a significant increase in its width. This occurs due to a pedestrian walking in the camera's direction, getting close to it, as shown in (C2).
This pedestrian leaves the screen for a while and later returns (at timestep $1700$) to the viewport.
The curves inside the region (D1) present the most intersections in the observed period. This result relates to a group meeting in the middle of the scene, as depicted in (D2).
One detail is that the yellow curve does not intersect with the main group (blue, red, and brown). This can be verified in the plot and also in the frame view. However, it crosses with the green object between timesteps $1800$ and $1900$.
In this portion of the visualization, there are also some spurious intersections due to the complexity of intersections of the data.
The bar chart on the top shows four spurious intersections that can be detailed with the Intersection View.
\figWildtrack
\figHurricane
\subsection{Hurricane data}
\label{sec:use-case-hurdat}
\looseness=-1
A common analysis with hurricane data is comparing the trajectories with close start positions~\cite{camargo2007cluster,tripathi2016direction,mirzargar2014curve}.
In this use case, we perform such a task using the portion of the HURDAT dataset (Sec.~\ref{sec:datasets}) that corresponds to hurricanes that started with longitude between $-50^{\circ}$ and $-20^{\circ}$
and latitude between $0^{\circ}$ and $20^{\circ}$, a region close to the west coast of Africa.
The MoReVis plot generated is presented in Fig.~\ref{fig:hurricane-use-case}.
To interpret the space transformation made by PCA, we annotate the color bar on the left, indicating the dominant regions in different vertical intervals of the 1D space representation.
The color map is also presented on the map at the left.
We can see general movement trends, with many hurricanes starting close to the west coast of Africa and moving westwards across the Atlantic.
Some longer-lasting hurricanes later turn towards Europe.
The color on the curves shows the air pressure for each hurricane.
With the coloring, it is possible to identify an inverse relationship between the area and the pressure.
Additional interesting general patterns include that hurricanes usually start with small widths and increase through time.
As MoReVis does not suffer from occlusion and tries to minimize spurious intersections, the hurricanes' trajectories are more easily identified.
This example also shows the importance of the crossings removal step (Sec.~\ref{sec:crossings}).
The intersections in this example are situations where hurricanes pass through the same region at the same period of the year.
At the bottom of the plot, we can identify the example marked with A1. These same hurricanes are highlighted in A2 and shown on a map in A3, with a color update to distinguish them.
We have two separate groups that have intersections for some timesteps, and both occur near the east coast of North America.
\looseness=-1
One hurricane that stands out is marked with (B); it has the largest observed area and penetrated the furthest into Europe.
This was Hurricane Karl from 2004, which peaked as a category-four storm on the Saffir-Simpson scale.
Its intensity quickly diminished, causing no fatalities when it reached Norway at the end of its trajectory.
\section{User Study}
We performed a user study to evaluate the ease of using MoReVis in analytical scenarios. In this section, we describe the details of the study and the results.
\subsection{Study Procedure}
We had nine participants in the study (eight undergraduate students and one master's student). Six already had previous experience with spatiotemporal data, and four were familiar with visualization techniques for this type of data. Participation was voluntary and unpaid.
The experiment was conducted asynchronously, \emph{i.e.}\xspace, participants were free to run the study at any time.
First, participants had to watch a 10-minute video with a general explanation of spatiotemporal data, a simple explanation of MoReVis, and a description of the visualization web interface.
Subsequently, the participants had to perform three groups of activities between tasks and questions. The first group was designed to validate if they understood MoReVis. The second group dealt with spatiotemporal aspects of visualization. Finally, the third group was about object intersections.
\myparagraph{Tasks:}
The tasks of group one ($T1$) were: counting the objects in the frame, counting the number of intersections of an object, and identifying the time interval with the most intersections.
The tasks of group two ($T2$) were: identifying the objects with the most movement on the screen, relating regions of the original space and intervals in the 1D space representation, identifying the region of the original space with the largest number of objects, identifying the object with the largest area, and interpreting the event that caused this largest area.
Finally, the tasks of group three ($T3$) were: counting the number of intersections of an object, interpreting these intersections, identifying the object that intersects with another specific object, identifying the time step interval with the largest number of intersections, interpreting the event that caused this largest number of intersections, identifying a pair of objects with spurious intersections in a specific timestep, and identifying all objects with spurious intersections in a specific time step interval.
\looseness=-1
\myparagraph{Questions:}
The questions for tasks $T2$ were:
(Q1) \emph{``Is it easy to interpret distances on the MoReVis view?"},
(Q2) \emph{``Is it easy to interpret the 1D representation of space in the MoReVis view?"},
(Q3) \emph{``Does the color bar on the left assist in interpreting the 1D representation of the space in the MoReVis view?"} and
(Q4) \emph{``Is it easy to identify the area of objects in the MoReVis visualization?"}.
Using the same format, the questions asked after tasks $T3$ were:
(Q5) \emph{``Is it easy to identify intersections between objects?"},
(Q6) \emph{``Is it easy to identify if an intersection is spurious?"},
(Q7) \emph{``Is it easy to identify the region that has spurious intersections?"},
(Q8) \emph{``Does the selection in the MoReVis view help to analyze intersections?"}, and
(Q9) \emph{``Does the representation of intersections in the MoReVis view facilitate the data's spatial interpretation?"}.
We asked users to comment on their answers to Q2, Q5, Q6, and Q9, to indicate which of the three other interface views they found useful, and to provide a final comment.
\subsection{Results}
One of the nine participants responded that they did not understand MoReVis; therefore, we eliminated their responses from the analysis. As for the other responses to the validation stage, all participants answered the first question correctly. For the other two questions, we had one incorrect answer for each (different participants). With that, we kept eight participants in the analysis of the results.
When asked about the object with the most movement on the screen in one of the tasks of $T2$, three participants responded with the objects with the longest observed presence; this may be due to limited training. However, in all other questions, users gave correct or approximately correct answers.
In questions regarding $T3$, users could identify and interpret the intersections correctly, as well as identify and analyze spurious intersections. The summary of the answers to the quantitative questions is available in Fig.~\ref{fig:user-study-results}.
In the following, we comment on some important aspects:
\myparagraph{Space representation (Q1, Q2, Q4):}
Most users found it easy to understand the representation of space, including the area of objects; those who found it more difficult agreed that it could be solved with proper training. One participant commented, \emph{``It is possible to easily visualize distances across the curve path and how objects have changed over time and visualize their region in space and intersection with other objects"}. Another comment: \emph{``The most complex thing for me is understanding the transformation from 2D into 1D. With this knowledge, exploration becomes simple."}
\myparagraph{Representation of intersections (Q5, Q6, Q7, Q9)}:
Seven users agreed that it was easy to identify intersections; one commented, \emph{``Intersections are clear in the visualization"}. Six of the users found it easy to identify spurious intersections ($Q6$). The participant who strongly disagreed with $Q6$ commented, \emph{``If you do not use the selection and the Intersection view, it is hardly possible to differentiate between spurious and non-spurious intersections"}. We agree that the MoReVis view does not have the necessary support to differentiate spurious intersections; however, we offer a rich set of interactive features to address this limitation. Five participants strongly agree that the representation of intersections makes it easier to interpret the data ($Q9$), with two agreeing. One commented, \emph{``It helps to visualize information that would be very chaotic with the data view alone"}.
\figUserStudyResults
\myparagraph{Auxiliary Views (Q2, Q3, Q7, Q8):}
Beyond the MoReVis technique, users also considered the other interface tools necessary. Despite the simple blending applied in the color bar, the response ($Q3$) was positive. In the comments on question $Q2$, users mentioned this graphic as helpful for representing space. One of the comments was, \emph{``By looking at the color bar and examining how the curves in the MoReVis view translate to the original data view, one can understand where each original region was mapped"}. Similarly, the intersection view was rated as necessary as well. Most participants found the selection (to highlight the Intersection View) very useful; one of the comments in $Q7$ was, \emph{``The intersection view helps"}. When asked about the most essential linked views, all users responded that the Intersection View and the Data View were necessary, while only two found the parallel coordinates important. This could have occurred due to the small size of the datasets.
\myparagraph{Scope for improvements:}
Although MoReVis showed promising results in representing spatiotemporal features of objects, there are areas where the technique can be improved.
Three of the eight participants claimed intermediate experience with visualization, and one claimed advanced knowledge. Because of this, they suggested improvements to our proposal, particularly regarding the visualization interface. Some users suggested adding more interactions with the Data View, a play button to show the data as an animation, and a link to the Intersection View, showing the respective intersection and data while the animation plays.
A suggestion was also to permit queries to compare pairs of trajectories, comparing the distance and trends between two highlighted objects.
One user also suggested higher-level functionality incorporating anomaly detection techniques to indicate significant aspects of the data to the user.
These functionalities could be added to MoReVis with little effort, and we consider them possible future work to be pursued in this line of research.
\section{Discussion}
As shown in Sec.~\ref{sec:uses}, MoReVis can provide an effective spatiotemporal visual summary of moving regions datasets through its representation of space and the areas and intersections.
This section discusses relevant points concerning the MoReVis technique, its limitations, and opportunities for future improvements.
\looseness=-1
\myparagraph{Projection of centroids:}
In our current implementation, we assume that the region covered by an object at each timestep is convex.
When they are not convex, the centroid could fall outside the region.
Therefore, the spatial representation might not be accurate for some projection methods (such as PCA).
We highlight that this is not a strong constraint since, in this case, one can use the convex hull of each region or
use distances between the areas instead of the distance between centroids.
All the current experiments and use cases presented in this paper were done using convex moving regions.
We plan to investigate how our algorithm and visualization would behave for non-convex moving regions in the future.
\myparagraph{Intersection representation:}
To develop our optimization model for intersection representation, we chose a set of goals, as it is not always possible to obtain a perfect solution.
We opt to represent every intersection of the original space at the cost of over-representing intersection areas and introducing spurious intersections.
In Sec.~\ref{sec:evaluation}, we verified that these error values were small for the two example datasets.
Also, to deal with this error, our visualization interface added visual hints that indicate the timesteps and regions with the presence of errors.
Furthermore, the Intersection View permits a detailed verification of intersections presented by our approach.
\myparagraph{Scalability:}
Our solution solves a mixed-integer program for each time step, which is computationally expensive.
For example, if we have $n$ objects, the optimization problem can have up to $n^2-n$ integer variables.
In the case of the HURDAT dataset, we used only 70 hurricanes, and it took 5 seconds, but if we use all 298 objects, it will take a few minutes.
Similarly, in the WILDTRACK dataset, we filtered 14 objects\,---\,people who had more permanence in the video scene\,---\,and our solution took 5 seconds; however, if we process the entire dataset (282 objects), it takes 5 minutes.
We are interested in improving this process stage, which is an immediate future work.
Nevertheless, this task can be pre-processed since it is only performed once. We used this strategy in the user study to have an interactive experience.
Scalability is also a concern in the visual representation: MoReVis might become cluttered for large datasets. In this case, interactivity is essential to reduce the number of moving regions shown. We also plan to investigate the use of alternative representations (such as density-based approaches) to overcome this limitation.
\myparagraph{Future applications:}
We plan to apply our method to analyze other types of datasets in the future. One possible application is to analyze clusters of trajectory data. It is easy to see that a group of trajectories can be seen as a moving region. Another application is the visualization of trajectories with uncertainty. These trajectories are often used to model the spatial uncertainty due, for example, to errors in sensor measurements or variations in prediction models. We believe that MoReVis can be effective in such applications.
One useful and simple adaptation would be to consider datasets where objects can split and merge spatially, such as the storm cells analyzed in Hornero~\cite{diehl2021hornero}.
\section*{Proof of Proposition~\ref{prop:monotonicity}}
\begin{proof}
We need to show that
$f_{\text{slam}}^\star(K-1) \geq f_{\text{slam}}^\star(K)$, for any $K \in \{1,2,\ldots,m\}$.
Note that $f_{\text{slam}}^\star(K-1)$ and $f_{\text{slam}}^\star(K)$ are both optimization problems and the
only difference is that the former imposes an extra constraint over $\boldsymbol{y}$ (in particular, that one more landmark has to match one of the existing landmarks). Therefore the feasible set of $f_{\text{slam}}^\star(K-1)$ is contained in the feasible set of $f_{\text{slam}}^\star(K)$, and thus $f_{\text{slam}}^\star(K-1) \geq f_{\text{slam}}^\star(K)$.
The observation that $\beta K$ is linearly increasing in $K$ is trivially true.
\end{proof}
\section*{Proof of Proposition~\ref{prop:clustering}}
\begin{proof}
$K$-means clustering
requires partitioning $m$ points $\vzhat_k$, $k=1,\ldots,m$,
into $K$ clusters such that the sum of squared distances between the points in each cluster and the cluster's centroid is minimized.
More formally, a standard $K$-means clustering problem can be written as the following optimization problem:
\begin{eqnarray}
\label{eq:clustering}
\min_{\boldsymbol{y} \in \Real{d\times K}} \sum_{k=1}^m \min_{j_k} \| \boldsymbol{y}_{j_k} - \vzhat_k \|^2
\end{eqnarray}
The problem essentially looks for $K$ cluster centroids $\boldsymbol{y}_{j_k}$ (each one is a column of the matrix $\boldsymbol{y}$)
while associating each point $\vzhat_k$ to a cluster centroid via $j_k$.
We now show that when $\vxx$ and $K$ are given, then Problem~\ref{prob:inner} can be written in the form~\eqref{eq:clustering}. When $\vxx$ and $K$ are given, Problem~\ref{prob:inner} becomes:
\begin{eqnarray}
\label{eq:clustering2}
\min_{ \boldsymbol{y} \in \Real{d \times K} } &
\!\displaystyle \sum_{k=1}^m \min_{j_k} \| \M{R}_{i_k}^{\mathsf{T}} \boldsymbol{y}_{j_k} \!-\! \boldsymbol{t}_{i_k} \!-\! \vzbar_k \|^2 + \text{const.}
\end{eqnarray}
where we noticed that the first and last term in Problem~\ref{prob:inner} are constant (for given $\vxx$ and $K$) and are hence irrelevant for the optimization.
Now recalling that the $\ell_2$ norm is invariant to rotation (and dropping constant terms), eq.~\eqref{eq:clustering2} becomes:
\begin{eqnarray}
\label{eq:clustering3}
\min_{ \boldsymbol{y} \in \Real{d \times K} } &
\!\displaystyle \sum_{k=1}^m \min_{j_k} \| \boldsymbol{y}_{j_k} \!-\! \M{R}_{i_k}(\boldsymbol{t}_{i_k} \!+\! \vzbar_k) \|^2
\end{eqnarray}
Now recalling that the poses (hence $\M{R}_{i_k}$ and $\boldsymbol{t}_{i_k}$) are known, and
defining vectors $\vzhat_k \doteq \M{R}_{i_k}(\boldsymbol{t}_{i_k} \!+\! \vzbar_k)$ we can
readily see eq.~\eqref{eq:clustering3} becomes identical to
eq.~\eqref{eq:clustering}, proving our claim.
\end{proof}
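For concreteness, a minimal sketch of this reduction in practice (function and variable names are ours): with the poses fixed, one forms the transformed measurements $\vzhat_k$ and calls an off-the-shelf $K$-means solver.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def landmarks_given_poses(R, t, zbar, frame_idx, K):
    # Illustrative sketch. R[i], t[i]: rotation and translation of pose i;
    # zbar[k]: k-th landmark measurement in the body frame; frame_idx[k] = i_k.
    zhat = np.array([R[frame_idx[k]] @ (t[frame_idx[k]] + zbar[k])
                     for k in range(len(zbar))])
    km = KMeans(n_clusters=K, n_init=10).fit(zhat)
    return km.cluster_centers_, km.labels_  # landmark estimates, associations
\end{verbatim}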
\section*{Proof of Proposition~\ref{prop:slam}}
\begin{proof}
The proof trivially follows from the fact that for given $K$ and data associations $j_k$,
Problem~\ref{prob:inner} simplifies to (after dropping the constant term $\beta K$, which is irrelevant for the optimization once $K$ is given):
\begin{eqnarray}
\label{eq:stdslam}
\min_{ \substack{\vxx \in \SE{d}^n \\ \boldsymbol{y} \in \Real{d \times K} } } &
f_{\text{odom}} (\vxx) + \!\displaystyle \sum_{k=1}^m \| \M{R}_{i_k}^{\mathsf{T}} \boldsymbol{y}_{j_k} \!-\! \boldsymbol{t}_{i_k} \!-\! \vzbar_k \|^2 \nonumber
\end{eqnarray}
which matches the standard formulation of landmark-based SLAM, and can be readily solved with both local~\cite{Kuemmerle11icra,gtsam} and global solvers~\cite{Rosen18ijrr-sesync}.
\end{proof}
\section*{Description of the ML Baseline} \label{sec:mldetails}
The incremental maximum likelihood approach includes two steps at each timestamp. The first step, for each measurement, selects a set of admissible landmarks by performing a $\chi^2$-test dependent on the {Mahalanobis distance} between the landmark estimate $\boldsymbol{y}_{j}$ and the measurement $\boldsymbol{z}_{k}$. We use a probability tail of $0.8$ for the test in all datasets except for Garage, where $0.9$ is used.
The second step associates the measurement with the most likely landmark (\emph{i.e.,}\xspace the one achieving the {lowest negative loglikelihood score}), while ensuring that each landmark is only associated to one of the newly acquired measurements. This is done by forming a cost matrix with admissible landmarks and corresponding measurement candidates and solving for the optimal hypothesis with the Hungarian algorithm \cite{kuhn_1955,munkres_1957}.
The measurements that are not associated to a landmark are initialized as new landmarks using the measurement in the world frame as the initial estimate.
At each timestamp, after data associations are established as described above, the ML\xspace baseline
uses batch optimization with the Levenberg-Marquardt optimizer \cite{levenberg_1944,marquardt_1963} to compute the SLAM estimate.
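An illustrative sketch of the two-step association (chi-square gating followed by Hungarian assignment) is given below; the names, array shapes, and the use of the squared Mahalanobis distance as the assignment cost are our assumptions, not the baseline's exact implementation.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2
from scipy.optimize import linear_sum_assignment

def associate(meas, lmk, covs, d=2, tail=0.8):
    # Illustrative sketch. meas: (m, d) measurements in the world frame;
    # lmk: (K, d) landmark estimates; covs: (K, d, d) innovation covariances.
    gate, big = chi2.ppf(tail, df=d), 1e9
    cost = np.full((len(meas), len(lmk)), big)
    for k, z in enumerate(meas):
        for j in range(len(lmk)):
            r = z - lmk[j]
            m2 = r @ np.linalg.solve(covs[j], r)  # squared Mahalanobis dist.
            if m2 <= gate:
                cost[k, j] = m2
    rows, cols = linear_sum_assignment(cost)
    matches = [(k, j) for k, j in zip(rows, cols) if cost[k, j] < big]
    matched = {k for k, _ in matches}
    new = [k for k in range(len(meas)) if k not in matched]
    return matches, new  # unmatched measurements become new landmarks
\end{verbatim}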
\section*{Visualizations of the Datasets} \label{sec:vizdatasets}
We visualize the five datasets in their default setup (Table~\ref{tab:simparams}) in Fig.~\ref{fig:mapviz}.
\input{sections/fig-datasets}
\section*{Sensitivity Analysis of $\beta$} \label{sec:sabeta}
\input{sections/fig-sensitivityBeta.tex}
We vary the value of $\beta$ in K-SLAM\xspace and compare how the estimation accuracy changes with $\beta$ under different noise conditions in Fig.~\ref{fig:sensitivity}. Overall, the performance of K-SLAM\xspace is not sensitive to the choice of $\beta$. In the default noise case, a range of $\beta$ values has almost the same accuracy. Under high odometry noise, K-SLAM\xspace shows overestimation of the number of landmarks as seen in Section~\ref{sec:accuracy}. The overestimation is reduced as $\beta$ increases following our expectation from formulation~\eqref{eq:formulationLmkCount}. On the contrary, under the high landmark noise condition, underestimation of the number of landmarks is present, but it is alleviated as $\beta$ decreases. Even in the challenging high noise conditions, the ATE of K-SLAM\xspace does not vary significantly across different values of $\beta$.
\section*{Ablation} \label{sec:ablation}
We present an ablation study on the Grid (3D) dataset in Fig.~\ref{fig:ablation} to understand the contributions of different components in our algorithm. The following observations are made in Fig.~\ref{fig:ablationa}. (i) We first would like to know whether the provided initial guess actually helps Oracle\xspace. Comparing Alt (Oracle\xspace) to Alt (kmpp init) and K-SLAM\xspace (given init + carry) to K-SLAM\xspace (carry), we confirm that providing the initial guess for landmarks indeed helps improve the accuracy by a large margin (except for Alt (Oracle\xspace) at the highest noise level). (ii) We are also interested in whether clustering (line~\ref{line:kmeans} in Algorithm~\ref{algo:inner}) is better than the one-step association in the Oracle\xspace baseline. Comparing Alt (Oracle\xspace) and K-SLAM\xspace (given init + carry), we observe a small improvement by doing iterative clustering for association. (iii) The largest improvement that helps K-SLAM\xspace beat the Oracle\xspace baseline is line~\ref{line:kmeans++} in Algorithm~\ref{algo:inner} as seen from the performance gap between K-SLAM\xspace (known $K$) and K-SLAM\xspace (carry). The randomness from k-means++ greatly improves the accuracy (with the exception at the highest landmark noise level) by jumping out of local minima. Under high noise, the randomness becomes detrimental because it hampers convergence in the highly nonlinear cost function topology. (iv) We further observe that knowing the true $K$ does not necessarily lead to the best accuracy (see K-SLAM\xspace versus K-SLAM\xspace (known K)). The higher performance of K-SLAM\xspace at lower noise levels is indeed caused by slight overestimation of the number of landmarks since overestimation can reduce the chance of map collapsing which may happen when there are fewer estimated landmarks than actual.
For runtime in Fig.~\ref{fig:ablationb}, the three variants of Algorithm~\ref{algo:inner} (given the ground-truth K) are faster than the Oracle\xspace baseline as they leverage efficient and mature k-means and k-means++ implementation. Therefore, the clustering steps (lines~\ref{line:kmeans++}-\ref{line:kmeans} in Algorithm~\ref{algo:inner}) bring better accuracy not at the expense of speed. The burden of estimating $K$ in an outer iteration scheme significantly increases the runtime for K-SLAM\xspace.
\input{sections/fig-ablation}
\section{Conclusion}
\label{sec:conclusion}
In this paper we investigated a challenging batch SLAM problem where the goal is to simultaneously estimate
the poses of the robot, the locations of unknown landmarks, and the data associations between measurements
and landmarks. We proposed problem formulations and algorithms for the resulting data-association-free SLAM problem. The algorithms were tested on a mix of synthetic and real datasets and perform better or on par with strong baselines that have access to privileged information. While these initial findings are encouraging, our algorithm does show limitations such as degraded performance at high noise and slow runtime. It would be desirable to better understand the behavior of $f_{\text{\normalfont slam}}^\star(K)$ for $K$ estimation and improve the robustness of association and clustering at high noise.
\section{Experiments}
\label{sec:experiments}
This section shows that the proposed K-SLAM\xspace algorithm computes accurate estimates across multiple simulated and real datasets, and it is competitive against two baselines, one of which has access to the true number of landmarks.
We first introduce the experimental setup and datasets
(Section~\ref{sec:datasets}) and the baseline techniques (Section~\ref{sec:baselines}).
We then evaluate the techniques in terms of accuracy (Section~\ref{sec:accuracy}). An ablation study is provided in the supplementary~\cite{supplementary}.
\subsection{Datasets and Setup}\label{sec:datasets}
\myParagraph{Datasets} We evaluate K-SLAM\xspace (Algorithm~\ref{algo:beta}) along with baseline techniques on
(i) two fully synthetic ``grid'' datasets (2D and 3D),
(ii) two semi-real datasets (Intel 2D and Garage 3D) where the trajectories and odometry measurements are real, but the landmarks are synthetic, and
(iii) a real dataset (Indoor).
See~\cite{supplementary} for visualizations of the datasets.
For the two synthetic datasets, we first generate a grid-like ground-truth trajectory and uniformly sample ground-truth landmark positions around the trajectory. We then add Gaussian noise to create odometry and landmark measurements following eq.~\eqref{eq:ldmkmeasurement}. Each landmark is observed by a fixed number of poses, namely those closest to it in terms of ground-truth position.
For the two semi-real datasets (Intel and Garage), we use two real datasets for pose graph optimization~\cite{kummerle2011g, g2odata}, which include both odometry measurements and loop closures.
We obtain a proxy for the ground-truth trajectory by optimizing the original pose graph with loop closures.
Then we remove the loop closures from the dataset, and add landmarks following the same procedure used for the synthetic datasets.
The real dataset (Indoor) is a benchmarking dataset available online~\cite{mathworksdata}.
In this dataset, a robot observes a number of AprilTags~\cite{april} along the trajectory (we only use the ID of the AprilTags
for evaluating different methods).
\myParagraph{Setup} In our synthetic and semi-real experiments, we sweep through
different parameter values that define the map (\emph{e.g.,}\xspace odometry noise level, number
of landmarks, and landmark measurement noise level). Twenty Monte Carlo simulations are performed for each set of parameters where we vary noise realizations and landmark placement. The default map parameter values can be found in Table~\ref{tab:simparams}.
\input{includes/table-params}
\subsection{Compared Techniques and Baselines}\label{sec:baselines}
\myParagraph{Proposed Technique} K-SLAM\xspace uses the Levenberg-Marquardt optimizer in GTSAM~\cite{gtsam}
for landmark-based SLAM. We set $\text{maxIterations}=15$. A rule of thumb is to set $\beta$ to the maximum expected residuals for the measurements of a single landmark. Assuming that a landmark $j$ is observed $n = |{\cal M}_j|$ times and that the measurements are affected by Gaussian noise, $r_j$ (defined in Remark~\ref{remark:setBeta}) at the ground truth is distributed according to a $\chi^2$ distribution, hence
$\prob{r_j \leq \gamma^2} = p$, where $\gamma^2$ is the quantile of the $\chi^2(dn)$ distribution with
$d \times n$ degrees of freedom and lower tail probability equal to $p$.
As a heuristic, we set $\beta = \gamma^2$, where $\gamma$ can be computed for a given choice of tail probability (in our experiments $p = 0.997$) and for an average number of landmark observations $n$. Assuming knowledge of $n$, the values of $\beta$ used are 41.72 (Grid 2D), 55.64 (Grid 3D), 68.94 (Intel), and 94.47 (Garage). We performed a sensitivity analysis on $\beta$ for the Indoor dataset (see Table~\ref{tab:realresults}) and in the supplementary \cite{supplementary}.
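As an illustration, below is a minimal sketch of this heuristic in Python (assuming SciPy is available). The arguments \texttt{d}, \texttt{n}, and \texttt{p} correspond to the quantities above; the specific numbers passed in are illustrative assumptions, and whether the resulting quantile matches the reported $\beta$ values exactly depends on how the residuals are whitened by $\Sigma$ in the implementation.
\begin{verbatim}
from scipy.stats import chi2

def beta_heuristic(d, n, p=0.997):
    """Chi-square quantile used as the per-landmark penalty beta.

    d : dimension of a landmark measurement (2 or 3)
    n : average number of observations per landmark
    p : lower-tail probability (0.997 in our experiments)
    """
    return chi2.ppf(p, df=d * n)

# Illustrative call: a 3D dataset with roughly 12 observations per landmark.
print(beta_heuristic(d=3, n=12))
\end{verbatim}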
\myParagraph{Oracle\xspace Baseline} The Oracle\xspace baseline is an alternation scheme that has access to the true number of landmarks and an initial guess for each landmark.
The baseline alternates two steps. In the first step,
for each measurement $\vzbar_k$ it finds the landmark having a position estimate
$\boldsymbol{y}_j$ that minimizes $\| \M{R}_{i_k}^{\mathsf{T}} \boldsymbol{y}_{j} \!-\! \boldsymbol{t}_{i_k} \!-\!
\vzbar_k \|$. After making associations for all the measurements, it solves the
SLAM problem with the estimated associations. This process repeats until
convergence or reaching a user-specified maximum number of iterations. This method is similar to the technique proposed in \cite{doherty2022discrete}. Note that this strong baseline assumes extra knowledge (\emph{i.e.,}\xspace number of landmarks, initial guess for the landmarks) that is not available to the other techniques and in real scenarios. The initial guess for the landmarks is computed by projecting the first measurement of each landmark (\emph{i.e.,}\xspace measurement from the first pose that observes the landmark) into the world frame (in the same way as line~\ref{line:proj} in Algorithm~\ref{algo:inner}).
\myParagraph{ML\xspace Baseline} The second baseline is the popular Maximum Likelihood data association used in an
incremental formulation of SLAM, where odometry is propagated sequentially and newly acquired measurements are incrementally associated to the most likely landmark estimate; see~\cite{bar1990tracking}.
At each time step, for each measurement, it selects a set of admissible landmarks by performing a $\chi^2$-test between the landmark estimates $\boldsymbol{y}_{j}$ and the measurement $\boldsymbol{z}_{k}$. The measurement is associated to the most likely landmark, while each landmark is only associated to one of the newly acquired measurements. The measurements that are not associated to any landmark are initialized as new landmarks. After data associations are established, batch optimization with the Levenberg-Marquardt optimizer \cite{levenberg_1944,marquardt_1963} is performed to compute the SLAM estimate at each time step. Additional implementation details are in the supplementary \cite{supplementary}.
\myParagraph{Odom\xspace Baseline} This baseline only computes an odometric estimate of the robot
trajectory by concatenating odometry measurements without using landmark measurements. We report the accuracy of this baseline as a worst-case reference.
\vspace{-2mm}
\subsection{Results: Accuracy}\label{sec:accuracy}
\input{sections/fig-accuracy2}
We measure the accuracy of the techniques by computing the Absolute Trajectory Error (ATE) of the trajectory estimated by each technique with respect to an optimized trajectory obtained via standard landmark-based SLAM with ground-truth data associations. Therefore, if the algorithms estimate all data associations correctly, the ATE will be zero. Unlike the normal ATE with respect to the ground-truth trajectory, this metric aims to capture data association correctness. In the following, we evaluate the ATE for varying levels of odometry noise, number of landmarks, and landmark measurement noise. Then we provide summary statistics for the real dataset (Indoor) in Table~\ref{tab:realresults}.
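As a rough sketch, and assuming the two trajectories are already time-aligned and expressed in a common frame (in practice an additional alignment step may be applied), the ATE reduces to a root-mean-square of translational differences:
\begin{verbatim}
import numpy as np

def ate(est_xyz, ref_xyz):
    """RMS translational error between corresponding poses.

    est_xyz, ref_xyz : (n, d) arrays of pose translations, assumed
    time-aligned and expressed in the same frame.
    """
    diff = est_xyz - ref_xyz
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
\end{verbatim}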
\myParagraph{Influence of odometry noise}
Fig.~\ref{fig:odomNoise} shows the ATE for increasing odometry noise in our synthetic datasets.
For low and moderate odometry noise levels, K-SLAM\xspace generally outperforms the other techniques. When the odometry noise is high, ML\xspace starts to outperform Oracle\xspace and K-SLAM\xspace. The reason is that the poor odometry causes the initial projected measurements of a landmark (line~\ref{line:proj} in Algorithm~\ref{algo:inner}) to be too far apart to cluster and associate correctly. As an incremental method, ML\xspace reduces the odometry drift incrementally, alleviating this problem.
Fig.~\ref{fig:nrLmksa} shows the estimated number of landmarks for K-SLAM\xspace and ML\xspace (Oracle\xspace is given the ground-truth number). We observe that ML\xspace tends to largely overestimate the number of landmarks.
In contrast, K-SLAM\xspace provides a fairly accurate estimate of the number of landmarks at low to medium odometry noise; it only overestimates at high odometry noise. We note that underestimating the number of landmarks in SLAM can lead to catastrophic failures due to incorrect associations. Overestimation, however, generally has less severe consequences, causing the method to miss some of the correct associations and thus lose loop-closing information.
\myParagraph{Influence of number of landmarks and landmark noise}
Fig.~\ref{fig:lmkNoiseAndNumber} shows the ATE across the four datasets for increasing
landmark measurement noise (top row) and increasing number of landmarks shown as a percentage of the number of poses (bottom row).
We observe that K-SLAM\xspace still outperforms all other techniques across almost the entire parameter range; it is only worse at high landmark noise, where raw odometry becomes the best among all techniques (with the exception of the Garage dataset). In this case, two factors come into play for K-SLAM\xspace. First, as mentioned earlier, our approach can underestimate the number of landmarks under high noise (see Fig.~\ref{fig:nrLmksb}). Second, high landmark noise makes solving the inner problem (Problem~\ref{prob:inner}) more challenging for K-SLAM\xspace by reducing the inter-cluster distances of the projected measurements. Solving the inner problem sub-optimally further affects the outer estimation of the number of landmarks, causing additional performance loss. The Garage dataset is particularly challenging, as its odometry translation covariance is specified to be large despite the odometry being accurate. The ML\xspace baseline is the most vulnerable to such mismatched covariances due to its reliance on them.
The number of landmarks is a relatively weak factor for data association. There is a slight trend toward less accurate data association as more landmarks are placed in the map, which is expected.
\input{includes/table-real}
\myParagraph{Performance on the real dataset}
Table~\ref{tab:realresults} reports the results on the real dataset.
We observe that, although slow, K-SLAM\xspace still dominates the others. Additionally, the performance is not sensitive to $\beta$: a wide range of values ($\beta\in[20,140]$) gives the highest accuracy.
\section{Experiments}
\label{sec:experiments}
\subsection{Baselines}
Our proposed algorithms are compared with (1) a simpler alternation scheme that has access to the ground-truth number of landmarks and an initial guess for each landmark, and (2) a more traditional incremental formulation of SLAM, where odometry is propagated sequentially and data associations for arriving measurements are made with the maximum likelihood estimator \cite{bar1990tracking}.
The alternation baseline is quite simple. In the first step, given the current estimates of $\vxx$ and $\boldsymbol{y}$, it enumerates the landmark variables $\boldsymbol{y}_j$ and associates measurement $\vzbar_k$ with the landmark that minimizes $\| \M{R}_{i_k}^{\mathsf{T}} \boldsymbol{y}_{j} \!-\! \boldsymbol{t}_{i_k} \!-\! \vzbar_k \|^2$. After making associations for all the measurements, it solves the SLAM problem with the estimated associations. This process repeats until convergence. In contrast to Algorithm~\ref{algo:nameKmeans}, this baseline solves only for the data associations in the first step, not jointly with $\boldsymbol{y}$. Thus, it requires an initial guess for each landmark to start with, and hence also knowledge of the ground-truth number of landmarks.
The incremental maximum likelihood approach follows two stages. The first stage is to perform a crude selection of landmarks by thresholding the Euclidean distance between the estimated landmark position for existing landmark variables $\boldsymbol{y}_j$ and the transformed measurement position in the world frame for some range threshold $r_{\mathrm{thresh}}$, given by
\begin{equation}
\norm{\M{R}_{i_k}^{\mathsf{T}}\boldsymbol{y}_j - \boldsymbol{t}_{i_k} - \vzbar_k}_2 < r_{\mathrm{thresh}}.
\end{equation}
This step forms an admissible set ${\cal A}$ similar to the one in the GNC approach (Section~\ref{sec:solver-gnc}) to reduce computation load. Then, each of the landmarks that are in the admissible set is tested for \emph{individual compatibility} with a $\chi^2$-test dependent on the \emph{Mahalanobis distance} $\varepsilon_{kj}$ between the landmark $\boldsymbol{y}_{j}$ and the measurement $\boldsymbol{z}_{k}$, given by
\begin{equation}
\varepsilon_{kj} = \left(\vzbar_k - h(\vxx_{i_k}, \boldsymbol{y}_{j})\right)^{\mathsf{T}}\boldsymbol{S}^{-1}_{kj}\left(\vzbar_k - h(\vxx_{i_k}, \boldsymbol{y}_{j})\right),
\end{equation}
where $h(\vxx_{i_k}, \boldsymbol{y}_j)$ is the measurement model given in \eqref{eq:formulationLmkCount}, $\boldsymbol{S}_{kj}=\boldsymbol{H}_{kj}\boldsymbol{P}_{kj}\boldsymbol{H}_{kj}^{\mathsf{T}} + \boldsymbol{Q}_k$ is the covariance of the measurement likelihood, $\boldsymbol{H}_{kj}$ is the joint Jacobian of the measurement model with respect to the state $\vxx_{i_k}$ and the landmark $\boldsymbol{y}_j$, $\boldsymbol{P}_{kj}$ is the joint marginal covariance for the state $\vxx_{i_k}$ and landmark $\boldsymbol{y}_j$, and $\boldsymbol{Q}_k$ is the measurement noise covariance. The $\chi^2$-test checks whether the computed Mahalanobis distance $\varepsilon_{kj}$ satisfies
\begin{equation}
\varepsilon_{kj} < \varepsilon_{\mathrm{thresh}}
\end{equation}
where $\varepsilon_{\mathrm{thresh}} = \texttt{chi2inv}(\alpha, \mathrm{dim}(\boldsymbol{z}))$ for some probability threshold $\alpha$.
The second stage of the algorithm enforces the \emph{mutual exclusion principle} of data association. The first pass associates each measurement with at most one landmark, but may leave a landmark with multiple candidate measurements. In the second stage, each landmark is associated with the measurement having the \emph{lowest negative log-likelihood score} $l_{kj}$, given by
\begin{equation}
l_{kj} = \log(\det\left(\boldsymbol{S}_{kj}\right)) + \varepsilon_{kj}
\end{equation}
where common constant terms and factors are removed as they do not affect the solution to the optimization problem. The remaining measurements that are not associated to a landmark are initialized as new landmarks using the measurement in the world frame as the initial estimate.
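A compact sketch of this two-stage association (Python/NumPy with SciPy; the array shapes, the range gate, and the way predicted measurements and innovation covariances are obtained are assumptions made for illustration) is shown below.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def ml_associate(z, pred, S, r_thresh, alpha=0.95):
    """Two-stage maximum-likelihood association for one time step.

    z    : (m, d)       new measurements in the local frame
    pred : (m, K, d)    predicted measurements h(x_{i_k}, y_j)
    S    : (m, K, d, d) innovation covariances S_kj
    Returns {landmark j: measurement k}; unmatched measurements would be
    initialized as new landmarks by the caller.
    """
    m, K, d = pred.shape
    eps_thresh = chi2.ppf(alpha, df=d)
    best = {}                                              # j -> (score, k)
    for k in range(m):
        scores = {}
        for j in range(K):
            innov = z[k] - pred[k, j]
            if np.linalg.norm(innov) >= r_thresh:          # crude range gate
                continue
            eps = innov @ np.linalg.solve(S[k, j], innov)  # Mahalanobis distance
            if eps < eps_thresh:                           # individual compatibility
                scores[j] = np.log(np.linalg.det(S[k, j])) + eps
        if scores:
            j = min(scores, key=scores.get)                # most likely landmark
            if j not in best or scores[j] < best[j][0]:    # mutual exclusion
                best[j] = (scores[j], k)
    return {j: k for j, (_, k) in best.items()}
\end{verbatim}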
\subsection{Datasets}
We evaluate our K-SLAM algorithm (Algorithm~\ref{algo:nameKmeans}) and GNC algorithm (Section~\ref{sec:solver-gnc}) along with the baselines on two fully synthetic ``grid'' datasets (2D and 3D), two semi-synthetic datasets where the trajectories are real but the landmarks are synthetic, and two real datasets \cite{}. Fig.~\ref{fig:mapviz} shows example visualizations of the six datasets.
For the two synthetic datasets, we first generate a grid-like ground-truth trajectory and uniformly sample landmark positions in a slightly enlarged version of the region that the trajectory occupies. Gaussian noise is injected into the ground-truth trajectory to create odometry measurements and into the landmark measurements following eq.~\eqref{eq:ldmkmeasurement}. Each landmark is observed by a fixed number of poses, namely those closest to the landmark in their ground-truth positions. The initial guess for the poses is the odometry. The initial guess for a landmark (only used by the baseline alternation) is obtained by composing the initial guess of the closest pose with the measurement from that pose. Finally, we run Gauss-Newton optimization given the odometry and landmark measurements (with known data associations) to obtain the pseudo-ground-truth trajectory for our testing purposes. This pseudo-ground-truth represents the best any method could do without data association information.
For the two semi-synthetic datasets, we first use Gauss-Newton to optimize the original pose graph with loop closures. Landmarks are uniformly sampled in the space and landmark measurements are created from the optimized trajectory in the same manner as for the synthetic datasets. Loop closures are then discarded and only odometry is retained, so that the methods must rely on landmarks to close the loops. The initial guess is obtained in the same fashion as before. Gauss-Newton is again employed to obtain the pseudo-ground-truth given the odometry and landmark measurements.
In our synthetic and semi-synthetic experiments, we sweep through different parameter values that define the map (\emph{e.g.,}\xspace odometry noise level, number of landmarks, and landmark measurement noise level). Five random simulations are performed for each set of parameters. The default map parameter values can be found in Tab.~\ref{tab:simparams}.
\begin{table}[t]
\smaller
\centering
\noindent
\caption{
Dataset default parameters. Values in bold are varied in our experiments. n/a stands for a dense matrix not representable by a single value.
}
\begin{tabular}{c | c c c}
\hline
Dataset & 2D Grid & 3D Grid & Intel \\
\hline
\# of poses & 20 $\times$ 25 & 6 $\times$ 6 $\times$ 6 & 942 \\
\# of landmarks & \textbf{100} & \textbf{64} & \textbf{141} \\
\# of lm. measurements & $\times$ 12 & $\times$ 12 & $\times$ 15 \\
odometry translation std. & \textbf{0.05} & \textbf{0.05} & 0.045 \\
odometry rotation std. & \textbf{0.005} & \textbf{0.01} & 0.014 \\
lm. measurement std. & \textbf{0.1} & \textbf{0.1} & \textbf{0.1} \\
\hline
Dataset & Garage & Indoor & Corridor \\
\# of poses & 1661 & 571 & 1744 \\
\# of landmarks & \textbf{166} & 14 & 248 \\
\# of lm. measurements & $\times$ 20 & 233 & 983 \\
odometry translation std. & 1 & 0.1 & (0.15, 0.05, 0.1) \\
odometry rotation std. & n/a & 0.1 & (0.003, 0.006, 0.006) \\
lm. measurement std. & \textbf{0.1} & 0.1 & 0.1 \\
\hline
\end{tabular}
\label{tab:simparams}
\end{table}
The real dataset \cite{} was collected for a previous research effort to study the ambiguous data association problem with semantic information. This dataset features a forward-moving robot that observes AprilTags \cite{} along the trajectory. In this dataset, the only way to reduce the odometry drift is to close large loops, which requires some form of data association information (\emph{e.g.,}\xspace semantic information as in \cite{}). The lack of any data association information, as in our case, makes it particularly challenging. We also evaluate on a public dataset where the robot frequently re-observes a small set of landmarks in the environment, which is a different scenario from \cite{}.
\begin{figure}[htb!]
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/maps/vizmap_2dsyn.png}
\caption{Grid (2D)}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/maps/vizmap_3dsyn.png}
\caption{Grid (3D)}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/maps/vizmap_intelos.png}
\caption{Intel (2D)}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/maps/vizmap_garage.png}
\caption{Parking Garage (3D)}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/maps/vizmap_matlab.png}
\caption{Indoor (3D)}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/maps/vizmap_kevin.png}
\caption{Corridor (3D)}
\end{subfigure}%
\caption{Visualization of the datasets. The black trajectories and markers are the initial guess. The blue trajectories and markers are the pseudo-ground-truth obtained by Gauss-Newton.}
\label{fig:mapviz}
\end{figure}
\subsection{Results: Accuracy}
The K-SLAM algorithm, GNC, and the baseline alternation all use the Levenberg-Marquardt optimizer. The Maximum Likelihood approach uses ISAM2 \cite{ISAM2} with the Dogleg optimizer \cite{Dogleg_Powell1970} in the back-end. We present the results of our simulations on the synthetic and semi-synthetic datasets in Fig.~\ref{fig:gridresults} and Fig.~\ref{fig:semisynresults}. Tab.~\ref{tab:realresults} reports the results on the two real datasets. We visualize the resulting trajectories for K-SLAM in Fig.~\ref{fig:realkslam}.
Our K-SLAM algorithm outperforms the others in the majority of the experiments. In many cases, it even surpasses the baseline alternation, which knows the actual number of landmarks and is given an initial guess for the landmarks. One interesting trend, seen in Fig.~\ref{fig:gridresults}, is that K-SLAM tends to underestimate the number of landmarks as the landmark measurement noise goes up. This is expected -- as the landmark measurement noise increases, the tolerance for wrong data associations rises, since the landmark measurements are down-weighted compared to the odometry term $f_{\text{odom}} (\vxx)$. The $\beta K$ term in eq.~\eqref{eq:formulationLmkCount} then drives the number of landmarks ($K$) down, as it faces reduced penalties for wrong data associations. For the same reason, increasing odometry noise has the opposite effect, causing overestimation.
We take advantage of this observation to tackle the most challenging dataset, the Corridor, where all the other algorithms fail to close the loop cleanly (Tab.~\ref{tab:realresults}). In this experiment, a very high landmark measurement noise model is set artificially in K-SLAM. This noise model has two effects: (1) it encourages underestimation of the number of landmarks and aggressive data associations, and (2) it suppresses the negative effects of wrong data associations. On a dataset such as the Corridor, where reducing odometry drift relies critically on closing large loops, the synergy of these two effects allows the algorithm to close those large loops without paying a price for incorrect data associations in the small loops along the trajectory.
The maximum likelihood baseline performs well on the fully synthetic datasets, but not on the semi-synthetic or real datasets. We attribute this behavior to its reliance on covariance information, which ultimately comes from the prescribed noise models, to make data associations. For the fully synthetic data, the noise model provided to the algorithm matches the actual noise characteristics exactly. However, on semi-synthetic and real datasets this perfect match is unlikely, and the maximum likelihood baseline suffers from the mismatch. Additionally, we observe its strong tendency to overestimate the number of landmarks.
GNC performs moderately but stably in our experiments. We consider its strength to lie in the speed-accuracy trade-off (Section~\ref{sec:runtime}).
\begin{figure}
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0133_odom_noise_lv_rerun_refGN.png}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0133_odom_noise_lv_rerun_nLm.png}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0257_ldmk_t_std_rerun_refGN.png}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0257_ldmk_t_std_rerun_nLm.png}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0217_ldmk_percent_rerun_refGN.png}
\caption{ATE (m) on Grid (2D)}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0217_ldmk_percent_rerun_nLm.png}
\caption{Estimated number of landmarks (relative to GT) on Grid (2D)}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220228_2350_odom_noise_lv_refGN.png}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220228_2350_odom_noise_lv_nLm.png}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0020_ldmk_t_std_refGN.png}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0020_ldmk_t_std_nLm.png}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0050_ldmk_percent_refGN.png}
\caption{ATE (m) on Grid (3D)}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0050_ldmk_percent_nLm.png}
\caption{Estimated number of landmarks (relative to GT) on Grid (3D)}
\end{subfigure}%
\caption{Absolute trajectory error (ATE) with respect to the pseudo-ground-truth from running Gauss-Newton and estimated number of landmarks (relative to the ground-truth number) on the two grid datasets.}
\label{fig:gridresults}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/intelgarageresults/20220301_0454_ldmk_t_std_refGN.png}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/intelgarageresults/20220301_0454_ldmk_t_std_nLm.png}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/intelgarageresults/20220301_0730_ldmk_percent_refGN.png}
\caption{ATE (m) on Intel (2D)}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/intelgarageresults/20220301_0730_ldmk_percent_nLm.png}
\caption{Estimated number of landmarks (relative to GT) on Intel (2D)}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0020_ldmk_t_std_refGN.png}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0020_ldmk_t_std_nLm.png}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0050_ldmk_percent_refGN.png}
\caption{ATE (m) on Garage (3D)}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/gridresults/20220301_0050_ldmk_percent_nLm.png}
\caption{Estimated number of landmarks (relative to GT) on Garage (3D)}
\end{subfigure}%
\caption{Absolute trajectory error (ATE) with respect to the pseudo-ground-truth from running Gauss-Newton and estimated number of landmarks (relative to the ground-truth number) on the two semi-synthetic datasets.}
\label{fig:semisynresults}
\end{figure}
\begin{table}[tb]
\small
\centering
\noindent
\caption{
ATE (first two rows) and run-time (sec) (last two rows) results on the two real datasets. Estimates of the number of landmarks are given in parentheses (relative to the ground-truth). To obtain the results on the Corridor dataset, Alternation and K-SLAM employed a higher noise model, with the standard deviation increased from the default 0.1 to 5.5, and GNC's maximum distance threshold was set to 3. For the Indoor dataset, the measurement noise standard deviation was set to 0.1 throughout.
}
\begin{tabular}{c | c c c c c}
\hline
Dataset & Odom & ML & Alt & GNC & K-SLAM \\
\hline
Indoor & 0.415 (+24) & 0.833 & 0.076 & 0.157 & \textbf{0.046} (+1) \\
Corridor & 6.67 (+233) & 7.13 & 4.18 & 5.817 & \textbf{1.068} (-233) \\
\hline
Indoor & n/a & 7.49 & 1.66 & 2.94 & 16.73 \\
Corridor & n/a & 77.05 & 24.83 & 21.93 & 69.2 \\
\hline
\end{tabular}
\label{tab:realresults}
\end{table}
\begin{figure}
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/realresults/matlab_altkmfree_results.png}
\caption{Indoor}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/realresults/kevin3000_altkmfree_results(originalGT).png}
\caption{Corridor}
\end{subfigure}%
\caption{Results of running K-SLAM on the two real datasets. Black denotes the estimated trajectories and landmarks. Blue denotes the pseudo-ground-truth obtained by Gauss-Newton.}
\label{fig:realkslam}
\end{figure}
\subsection{Results: Runtime}
\label{sec:runtime}
We analyze the run-time performance of the methods on the 2D grid dataset, as shown in Fig.~\ref{fig:runtime}, and on the real datasets in Tab.~\ref{tab:realresults}. K-SLAM, although accurate, is the slowest algorithm, while GNC is one of the fastest. In general, the run-time of GNC lies between that of the maximum likelihood method and somewhat below that of the alternation approach in our experiments. The run-time of all the algorithms increases linearly in the log-scale plot (Fig.~\ref{fig:runtime}) as the number of landmarks increases, reflecting the combinatorial nature of the problem.
\begin{figure}
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/runtimegrid/20220301_0217_ldmk_percent_rerun_time.png}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/runtimegrid/20220301_0257_ldmk_t_std_rerun_time.png}
\end{subfigure}%
\caption{Run-time on the 2D grid datasets.}
\label{fig:runtime}
\end{figure}
\section*{Supplementary}
The proofs, additional experimental results, and details about the datasets and implementation of baseline algorithms are provided in \cite{supplementary}.
\section{Introduction}
\label{sec:intro}
This paper studies landmark-based Simultaneous Localization and Mapping (SLAM) and investigates the challenging case
where data associations between measurements and landmarks are not given.
Consider a robot moving in an unknown environment and trying to estimate its own trajectory
using odometry and detections of unknown landmarks.
When data associations are given, this leads to a well-studied and now easy-to-solve setup, \emph{i.e.,}\xspace
standard landmark-based SLAM~\cite{Cadena16tro-SLAMsurvey}.
However, the problem becomes extremely challenging when data associations are unknown, \emph{i.e.,}\xspace
when the robot does not have prior knowledge about data association and does not even know how many unique
landmarks it has observed.
This case arises in practice when the detections do not provide rich information (\emph{e.g.,}\xspace appearance, semantics) to perform data association, or after failures resulting from incorrect data associations (where one needs to recover an estimate by reasoning over potentially wrong associations).
The problem of data association in SLAM has been extensively studied~\cite{bar1990tracking}, see Section~\ref{sec:relatedWork} for
a review.
However, most papers consider an \emph{incremental} setup where data associations are greedily established at each
time step (\emph{e.g.,}\xspace one has to associate newly acquired measurements to the existing set of landmarks). The setup where all associations have to be established at the same time (\emph{i.e.,}\xspace a \emph{batch} optimization setup)
has not received the same amount of attention. The batch setup is relevant for several reasons.
First, the incremental setup makes hard data-association decisions at each step, but data associations that look plausible at a given time may turn out to be wrong after more measurements are collected; on the other hand, a batch setup simultaneously reasons over the entire history of measurements and looks for an optimal association that is consistent across all measurements.
Second, typical incremental data association techniques are based on probabilistic tests that rely on the knowledge of the measurement covariances; however, the covariances are often hand-tuned and may not be reliable, hence calling for an alternative framework that is less sensitive to the covariance tuning.
Finally, incremental data association may fail after an incorrect data association (or may accumulate large drift if correct data associations are discarded); therefore, it is desirable to have a batch approach that revisits all data associations and corrects mistakes, in order to recover after SLAM failures.
\myParagraph{Contribution} This paper investigates batch-SLAM with unknown data associations.
Our first contribution (Section~\ref{sec:problemFormulation}) is to provide a formulation for the
inner problem, which simultaneously optimizes for the robot poses, landmark positions, and data associations assuming the number of landmarks is given. We then present a formulation to estimate the number of landmarks.
Our second contribution (Section~\ref{sec:solvers}) presents algorithms to solve the optimization problems arising in our formulations. Our algorithm for the inner problem is based on an alternation scheme, which alternates the estimation of the robot poses
and landmark positions with the choice of data associations. The algorithm works well in practice, but it is not guaranteed to compute optimal solutions due to the hardness and combinatorial nature of the optimization problems it attempts to solve. Moreover, it creates
useful and novel connections with existing techniques from discrete-continuous optimization (\emph{e.g.,}\xspace k-means clustering), which has the potential to trigger novel research.
Finally, we demonstrate the proposed approaches in extensive simulations and on real datasets (Section~\ref{sec:experiments}), and show that
the proposed techniques outperform typical data association baselines and are even competitive against an ``oracle'' baseline which has access to the number of landmarks and an initial guess for each landmark.
\section{Problem Formulations: Data-Association-Free SLAM\xspace (\probName)}
\label{sec:problemFormulation}
We break the Data-Association-Free SLAM\xspace (\probName) problem into an inner problem and an
outer problem. The inner problem solves for the robot poses, landmark positions
and data associations assuming the number of landmarks is known. The outer
problem estimates the number of landmarks assuming one can solve the inner problem. We first introduce our notations.
Let us denote the robot trajectory as $\vxx = \{\vxx_1, \ldots, \vxx_n\} \in \SE{d}^n$,
where $\vxx_i \in \SE{d}$ is the (unknown) $i$-th pose of the robot and
$\SE{d}$ is the group of poses in dimension $d=2$ or $3$;
in particular, $\vxx_i = (\M{R}_i,\boldsymbol{t}_i)$, and $\M{R}_i$ and $\boldsymbol{t}_i$ are the rotation and translation components of the pose.
We denote the (unknown) position of the $j$-th landmark as $\boldsymbol{y}_j \in \Real{d}$; we then store all landmark position estimates as columns of a matrix $\boldsymbol{y}$, the \emph{landmark matrix}. Finally, we denote the (given) measurement of a landmark position as $\vzbar_k \in \Real{d}$, for $k \in [m] \triangleq \{1,\ldots,m\}$,
and use the standard generative model for the measurements:
\begin{eqnarray}
\label{eq:ldmkmeasurement}
\vzbar_k = \M{R}_{i_k}^{\mathsf{T}} \boldsymbol{y}_{j_k} - \boldsymbol{t}_{i_k} + \boldsymbol{\epsilon}_k
\end{eqnarray}
where a landmark $\boldsymbol{y}_{j_k}$ is observed in the local frame of robot pose $i_k$,
up to zero-mean Gaussian measurement noise $\boldsymbol{\epsilon}_k$.
Since we know the timestamp at which a measurement is taken, we can associate the measurement with the corresponding robot pose index $i_k$ (\emph{e.g.,}\xspace the $k$-th measurement taken from the second robot pose has $i_k=2$). However, we do not know the index $j_k$ of the landmark measured by $\vzbar_k$ (\emph{i.e.,}\xspace we have unknown data associations).
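To make the generative model concrete, a minimal synthetic-data sketch in Python/NumPy is given below; all numbers (pose, landmark position, noise level) are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 2
R_i = np.eye(d)                    # rotation of the observing pose
t_i = np.array([1.0, 0.5])         # translation of the observing pose
y_j = np.array([3.0, 2.0])         # true landmark position (world frame)
sigma = 0.1                        # isotropic measurement noise std

# z_k = R_i^T y_j - t_i + eps_k, as in the measurement model above
z_k = R_i.T @ y_j - t_i + sigma * rng.standard_normal(d)
\end{verbatim}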
With this notation, we can now introduce our problem formulations.
\subsection{\probName Inner Problem}
We provide a problem formulation for the inner problem of \probName to estimate the robot poses and landmark positions as well as the data associations given the number of landmarks.
\begin{problem}[\probName inner problem]\label{prob:inner}
\begin{eqnarray}
\label{eq:inner}
\min_{ \substack{\vxx \in \SE{d}^n \\ \boldsymbol{y} \in \Real{d \times K} } } &
f_{\text{\normalfont odom}} (\vxx) + \!\displaystyle \sum_{k=1}^m \min_{j_k \in [K]} \| \M{R}_{i_k}^{\mathsf{T}} \boldsymbol{y}_{j_k} \!-\! \boldsymbol{t}_{i_k} \!-\! \vzbar_k \|_{\Sigma}^2
\end{eqnarray}
where $K$ is the number of landmarks (\emph{i.e.,}\xspace the number of columns of the matrix $\boldsymbol{y}$).
The first term $f_{\text{\normalfont odom}} (\vxx)$ in~\eqref{eq:inner}
is the standard odometric cost (which is only a function of the robot trajectory)\footnote{
We refer the reader to standard references~\cite{Cadena16tro-SLAMsurvey,Rosen18ijrr-sesync} for the expression of $f_{\text{odom}}$, which is irrelevant for the
rest of the discussion in this paper.}; the second term is similar to the one arising in landmark-based SLAM, but also
includes an optimization over the unknown data association $j_k$.
\end{problem}
\begin{proposition}[Monotonicity]\label{prop:monotonicity}
Let us define the function $f_{\text{\normalfont slam}}(\vxx,\boldsymbol{y},K) =
f_{\text{odom}} (\vxx) + \!\textstyle \sum_{k=1}^m \min_{j_k \in [K]} \|
\M{R}_{i_k}^{\mathsf{T}} \boldsymbol{y}_{j_k} \!-\! \boldsymbol{t}_{i_k} \!-\! \vzbar_k \|_{\Sigma}^2$, which
is the objective function in~\eqref{eq:inner}. After minimizing over $\vxx$ and
$\boldsymbol{y}$, it remains a function of $K$, namely $f_{\text{\normalfont slam}}^\star(K)$.
Then, the function $f_{\text{\normalfont slam}}^\star(K)$ is monotonically
decreasing in $K$ and attains a minimum $f_{\text{\normalfont slam}}^\star(K) = 0$ when $K=m$.
\end{proposition}
\begin{remark}[Hardness] \label{remark:hard}
Problem~\ref{prob:inner} is NP-hard even when the optimal poses are known: in
the next section we show that it is then equivalent to k-means clustering when
the number of landmarks is also known, and k-means clustering is known to be NP-hard (see Proposition~\ref{prop:clustering}).
\end{remark}
\subsection{\probName Outer Problem}
We now present our formulation to the outer problem of estimating the number of landmarks $K$ in Problem~\ref{prob:betaform}.
\begin{problem}[\probName as simplest explanation]\label{prob:betaform} Using the definitions in Proposition \ref{prop:monotonicity}, we write an outer minimization over $K$ which penalizes the allocation of unnecessary landmarks:
\begin{eqnarray}
\label{eq:formulationLmkCount}
\min_{K \in \Natural{}} &
\overbrace{\left( \displaystyle\min_{ \substack{\vxx \in \SE{d}^n, \boldsymbol{y} \in \Real{d \times K} } }
f_{\text{\normalfont slam}} (\vxx,\boldsymbol{y}, K) \right)}^{f_{\text{\normalfont slam}}^\star(K)}
+ \beta K
\end{eqnarray}
\end{problem}
Note that this problem together with Problem~\ref{prob:inner} simultaneously decides the SLAM estimates $\vxx$ and $\boldsymbol{y}$, while also deciding the
number of landmarks (\emph{i.e.,}\xspace the number of columns in $\boldsymbol{y}$) and the data associations (minimization over $j_k$ in~\eqref{eq:inner}).
This formulation follows Occam's razor: we try to explain the measurements with the minimal number of landmarks, and we do so by minimizing a cost that trades off the SLAM error against the cost $\beta$ ($\beta > 0$) of using more landmarks.
The following remarks give some insights on the shape of the cost function in~\eqref{eq:formulationLmkCount}
as well as on how to tune $\beta$.
\begin{remark}[Shape of ``U'']\label{remark:U}
Proposition~\ref{prop:monotonicity} states that the objective function
in~\eqref{eq:formulationLmkCount} is the sum of a monotonically decreasing
function of $K$ (\emph{i.e.,}\xspace $f_{\text{\normalfont slam}}^\star(K)$) and a monotonically
increasing linear function of $K$ (\emph{i.e.,}\xspace $\beta K$). This suggests that for sufficiently large values of $\beta$, the overall cost function $f(K) = f_{\text{\normalfont slam}}^\star(K) + \beta K$ roughly has a ``U'' shape,
which starts at a large value for $K=1$ (this is the case where all measurements
are associated to a single landmark, in which case the cost
$f_{\text{\normalfont slam}}^\star(K)$ is typically large), then reaches a minimum for a suitable value of $K$, and then increases again until reaching a value $f(m) = \beta m$ for $K = m$. This insight will be useful to design our solver in~Section~\ref{sec:solver-km}, whose outer loop searches over potential values of $K$.
\end{remark}
\begin{remark}[Interpretation of $\beta$] \label{remark:setBeta}
The parameter $\beta$ describes the cost of using one more landmark to explain the measurements. Choosing a large $\beta$ leads to underestimating the number of landmarks, while a small $\beta$ leads to overestimating it. More specifically, assume we have a set of measurements ${\cal M}_j$ of landmark $j$. Then, the corresponding residual errors for these measurements will be $r_j \doteq \sum_{k \in {\cal M}_j} \| \M{R}_{i_k}^{\mathsf{T}} \boldsymbol{y}_{j} - \boldsymbol{t}_{i_k} - \vzbar_k \|_{\Sigma}^2$.
Intuitively, when $r_j$ exceeds $\beta$, the optimization may prefer ``breaking'' the measurements across two or more landmarks (\emph{i.e.,}\xspace increasing $K$). We explain how we set $\beta$ in Section~\ref{sec:baselines}.
\end{remark}
\section{Related Work}
\label{sec:relatedWork}
The data association problem of associating measurements
to states of interest has been extensively studied in the target tracking
literature \cite{bar1990tracking} and later in computer vision and robotics. Data associations can be made in the front-end via feature descriptors \cite{mur2017orb} and, more recently, via deep learning \cite{li2021odam}. However, these front-end methods are typically sensor-dependent and require additional information (\emph{e.g.,}\xspace landmark appearance) from the sensor beyond pure positional measurements of the landmarks. We are interested in the case where such information is not available and data associations must be inferred from the positional measurements. Standard robust optimization methods based on M-estimators used in SLAM (\emph{e.g.,}\xspace \cite{Sunderhauf12iros,Agarwal13icra,Yang20ral-GNC}) are not directly applicable to such scenarios in an efficient manner (\emph{i.e.,}\xspace they would require adding robust losses for all possible associations). Therefore, in the following we focus our attention on techniques designed for data association in landmark-based SLAM.
Maximum likelihood data association (sometimes referred to as nearest neighbor)
\cite{bar1990tracking} is a classical approach where one first rejects unlikely associations for which the
Mahalanobis distance between measurements and their predictions is greater than
a statistical threshold. This results in a number of
\emph{individually compatible} association candidates for each measurement,
among which one picks the one with the maximum likelihood.
In its most basic form, this approach processes measurements in a sequential
manner and makes a decision for each measurement independently.
Alternatively, one can solve a linear assignment problem between
a batch of measurements and landmarks in which the cost of each measurement-landmark
assignment is set to negative log-likelihood if they are \emph{individually
compatible}, and to infinity otherwise \cite{bar1990tracking}.
Joint compatibility branch and bound (JCBB) \cite{neira2001data}
is a popular technique for solving the data association problem
\emph{simultaneously} for a batch of measurements. This method leverages
correlations between the estimated positions of landmarks and seeks the largest
set of \emph{jointly compatible} associations. JCBB uses the branch and bound
technique to prune the exponentially large search space of the problem.
The combined constraint data association (CCDA) method \cite{bailey02thesis} forms the \emph{correspondence graph} whose nodes correspond to individually compatible associations and undirected edges correspond to a pair of mutually compatible associations. CCDA
then aims to find the largest set of individually
compatible associations that are also \emph{pairwise} compatible by
equivalently searching for a maximum clique in the correspondence graph.
A similar idea is proposed in \cite{Lusk21icra-clipper} in which the authors propose to form an
edge-weighted version of the correspondence graph and search for
the densest clique.
Another approach is CLEAR \cite{fathian2020clear}, which identifies landmarks by clustering raw measurements using a
spectral graph clustering-type method. This is done by forming an association graph whose
nodes correspond to measurements and whose edges specify potential matches based
on a compatibility criterion. Then, CLEAR uses spectral graph-theoretic methods to
first estimate the number of distinct landmarks $K$, and then to rectify the
association graph to obtain a disjoint union of $K$ cliques, each of which corresponds to a unique landmark.
Unlike our approach, the abovementioned works aim to find an explicit
set of associations \emph{prior} to estimating the robot trajectory
and the map at each time step. An alternative approach is proposed in
\cite{Dellaert00cvpr-correspondenceFreeSfm} where the authors treat associations
as latent variables and propose a scheme based on expectation-maximization (EM)
for estimating poses and 3D points. This is done by maximizing the
expected value of the log-likelihood function where the expectation is taken
over all possible associations. Similar approaches are proposed in \cite{Bowman17icra,michael2022probabilistic} where,
in addition to geometric information, the authors also incorporate semantic
information.
Unlike \cite{Dellaert00cvpr-correspondenceFreeSfm,Bowman17icra,michael2022probabilistic}, in this
work we aim to find the \emph{most likely}
estimate for robot poses, landmark positions, and \emph{associations}, as well as the number of landmarks based on
the collected measurements.
In a separate line of work \cite{mullane2011random}, the map is modeled as a random finite set (a
finite-set-valued random variable), inferring the number of landmarks from the collected measurements and circumventing the need for conventional data
association.
Our formulation for the inner problem of simultaneously estimating the robot poses, landmark positions and data associations has a strong connection to the max-mixture formulation \cite{Olson12rss} in the sense that landmark choices for a measurement can be modeled as different modes in the mixture. A max-mixture-type method for data association for semantic SLAM is proposed in \cite{doherty2020probabilistic}. Our inner problem is a special case of the general formulation recently presented in the discrete-continuous smoothing and mapping \cite{doherty2022discrete}. In Section~\ref{sec:experiments}, we compare our method with the alternating minimization approach proposed in \cite{doherty2022discrete} (see Oracle baseline).
\section{Algorithms for \probName}
\label{sec:solvers}
\subsection{Algorithms for Problem~\ref{prob:inner}}
\label{sec:solver-km}
This section proposes a heuristic algorithm to solve Problem~\ref{prob:inner}.
Before introducing the algorithm,
we provide fundamental insights that connect Problem~\ref{prob:inner} with Euclidean clustering and standard
landmark-based SLAM.
\begin{proposition}[Clustering]\label{prop:clustering}
Given $K$ and for fixed $\vxx$, Problem~\ref{prob:inner} becomes a Euclidean clustering problem, assuming the measurement noise covariances $\Sigma$ are isotropic. In this clustering problem, the landmarks $\boldsymbol{y}$ are the cluster centers and the data associations $j_k$ are equivalent to the cluster assignments.
\end{proposition}
\begin{proposition}[SLAM]\label{prop:slam}
Given $K$ and the data associations $j_k$ for each measurement $\vzbar_k \in \Real{d}$, $k=1,\ldots,m$,
Problem~\ref{prob:inner} reduces to standard landmark-based SLAM.
\end{proposition}
Propositions~\ref{prop:clustering}-\ref{prop:slam} suggest a simple alternation scheme, where we alternate
solving for the data associations $j_k, \forall k=1,\ldots,m$ (given an initial guess for $\vxx$),
with solving $\vxx,\boldsymbol{y}$ (given the data associations). The former can be solved
using standard clustering approaches (\emph{e.g.,}\xspace Lloyd's algorithm for k-means clustering with k-means++ initialization~\cite{Arthur07-kmeanspp})
according to Proposition~\ref{prop:clustering}; the latter can use standard landmark-based SLAM solvers according to Proposition~\ref{prop:slam}. We describe the full algorithm below.
The pseudo-code is given in Algorithm~\ref{algo:inner}.
Algorithm~\ref{algo:inner} performs a fixed number of iterations, where
each iteration alternates between two steps:
\begin{itemize}
\item {\bf Data association update} (lines~\ref{line:proj}-\ref{line:kmeans}) computes the data associations $j_k$ and landmark positions $\boldsymbol{y}$ given $\vxx$; this reduces to clustering per Proposition~\ref{prop:clustering}, which we solve (locally) using k-means and k-means++~\cite{Arthur07-kmeanspp};
\item {\bf Variable update} (line~\ref{line:slam}) computes the robot poses $\vxx$ and landmark positions $\boldsymbol{y}$ given the data associations $j_k$, using the $\boldsymbol{y}$ from the first step as the initial guess; this reduces to standard landmark-based SLAM per Proposition~\ref{prop:slam}, hence it can be readily solved with both local~\cite{Kuemmerle11icra,gtsam} and global solvers~\cite{Rosen18ijrr-sesync}.
\end{itemize}
\begin{algorithm}[h!]
\small
\caption{K-SLAM\xspace inner solver.\label{algo:inner}}
\SetAlgoLined
\KwIn{Odometry and corresponding initial guess $\vxx$; landmark measurements $\vzbar_k$, $k=1,\ldots,m$; number of landmarks $K$; $\text{maxIterations}$}
\KwOut{Estimate of robot poses $\vxx$; landmark positions $\boldsymbol{y}$; SLAM objective value $f_{\text{slam}}$.}
\For{ $\text{\normalfont iter} = 1,2, \ldots, \text{\normalfont maxIterations}$ \label{line:inner}}{
\tcc{\footnotesize Project each measurement to the world frame based on current
estimate of $\vxx$} $\vzhat_k = \M{R}_{i_k}
\vzbar_k \!+\! \boldsymbol{t}_{i_k}$,
\, \text{for $k \in [m]$} \label{line:proj}\;
$\text{initialize } \boldsymbol{y} \text{ by k-means++}(\{\vzhat_1,\ldots,\vzhat_m\}, K)$\label{line:kmeans++}\;
$\{\boldsymbol{y}, j_1,\ldots,j_m\} = \text{k-means}(
\{\vzhat_1,\ldots,\vzhat_m\}, K)$\label{line:kmeans}\;
$\{\vxx,\boldsymbol{y}\} = \text{SLAM}(\vxx,\boldsymbol{y},\{j_1,\ldots,j_m\})$\label{line:slam}\;
}
\Return{$\{\vxx,\boldsymbol{y},f_{\text{\normalfont slam}}(\vxx,\boldsymbol{y},K)\}$.}
\end{algorithm}
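A compact sketch of this alternation (Python, assuming scikit-learn's k-means/k-means++ and a placeholder \texttt{solve\_slam} callable standing in for an off-the-shelf landmark-based SLAM solver such as GTSAM; its interface is hypothetical) could look as follows.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def inner_solver(R, t, z, K, solve_slam, max_iterations=15):
    """Alternate clustering-based association with landmark-based SLAM.

    R, t       : (m, d, d) and (m, d) estimates of the observing poses
    z          : (m, d) local-frame landmark measurements
    K          : assumed number of landmarks
    solve_slam : callable(assoc, y_init) -> (R, t, y, f_slam); placeholder
                 for a standard landmark-based SLAM solver.
    """
    f_slam = np.inf
    for _ in range(max_iterations):
        # Project each measurement into the world frame (line 'proj').
        z_hat = np.einsum('kij,kj->ki', R, z) + t
        # k-means with k-means++ initialization gives associations and centers.
        km = KMeans(n_clusters=K, init='k-means++', n_init=1).fit(z_hat)
        assoc, y = km.labels_, km.cluster_centers_
        # Re-optimize poses and landmarks with the associations fixed.
        R, t, y, f_slam = solve_slam(assoc, y)
    return R, t, y, f_slam
\end{verbatim}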
\subsection{Algorithms for Problem~\ref{prob:betaform}}
\label{sec:solver-beta}
Consistent with Remark~\ref{remark:U}, we also empirically observe that $f(K) = f_{\text{slam}}^\star(K) + \beta K$ has roughly a U shape. Therefore, we simply use a gridding method to solve for $K^\star$. Naive gridding, which probes the objective function at every $K$ ($K = 1,\ldots,m$), requires $m \cdot \text{maxIterations}$ invocations of the landmark-based SLAM solver. For practical problems, where the number of measurements $m$ is in the thousands and, in our experiments, $\text{maxIterations} = 15$, this is too expensive. We thus introduce a multi-resolution gridding technique, whose pseudo-code is given in Algorithm~\ref{algo:beta}.
\begin{algorithm}[h!]
\small
\caption{K-SLAM\xspace solver.\label{algo:beta}}
\KwIn{Odometry and corresponding initial guess $\vxx$; landmark measurements $\vzbar_k$, $k=1,\ldots,m$; $\text{maxIterations}$; parameter $\beta$; number of $K$'s to probe at each level $N_K$.}
\KwOut{Estimate of robot poses $\vxx$; landmark number ($K$) and positions $\boldsymbol{y}$.}
$\{K_1, K_2, ..., K_{N_K}\} = \text{UniformDiv}([1,m], N_K)$\label{line:interval1m}\;
\While{$|K_1 - K_2| > 1$ \label{line:1resol}} {
\For{$K = K_1, K_2, ..., K_{N_K}$\label{line:forK}} {
$\{\vxx\at{K}, \boldsymbol{y}\at{K}, f_{\text{slam}}\at{K}\} = \text{Alg.}~\ref{algo:inner}(\vxx, \{\vzbar_1,\ldots,\vzbar_m\}, K, \text{maxIterations})$\;
$f\at{K} = f_{\text{slam}}\at{K} + \beta K$\; \label{line:eval_f}
\tcc{\footnotesize Save best result}
\If{$f\at{K} \leq f^\star$ \label{line:optCost}}{
$f^\star \leftarrow f\at{K}$\;
$\{\vxx^\star,\boldsymbol{y}^\star\} \leftarrow \{\vxx\at{K},\boldsymbol{y}\at{K}\}$\;
$K_{n^\star} \leftarrow K$\;
}
}
$\{K_1, ..., K_{N_K}\} = \text{UniformDiv}([K_{n^\star-1},K_{n^\star+1}], N_K)$\label{line:shrinkint}\;
}
\Return{$\{\vxx^\star,\boldsymbol{y}^\star,K_{n^\star}\}$.}
\end{algorithm}
In Algorithm~\ref{algo:beta}, we iteratively grid the interval where the optimal $K$ could lie into smaller intervals until we find the exact value of $K$ in a resolution-one grid. Specifically, we start with the interval $[1, m]$ (line~\ref{line:interval1m}) and uniformly evaluate $f^\star_{\text{slam}}(K) + \beta K$ at $N_K$ values ($N_K$ is set to 11), $K_1, K_2, ..., K_{N_K}$, within this interval, including both end points (lines~\ref{line:forK}-\ref{line:eval_f}). We find the optimal $K_{n^\star}$ among these $N_K$ values (line~\ref{line:optCost}), shrink the interval down to $[K_{n^\star-1}, K_{n^\star+1}]$ (line~\ref{line:shrinkint}), and repeat the process until the resolution is one (line~\ref{line:1resol}). This multi-resolution strategy makes approximately $N_K\log_{\frac{N_K}{2}}(m)$ evaluations of $f_{\text{slam}}^\star(K)$, compared to $m$ evaluations for naive gridding. For $m=1000$ and $N_K=11$, this is 45 versus 1000 evaluations.
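A sketch of this multi-resolution search in Python is given below; \texttt{inner\_cost} stands for one evaluation of $f_{\text{slam}}^\star(K)$ (e.g., by running the inner solver), and its interface is an assumption made for illustration.
\begin{verbatim}
import numpy as np

def k_slam_outer(inner_cost, m, beta, n_probe=11):
    """Multi-resolution grid search over the number of landmarks K.

    inner_cost : callable(K) -> f_slam^*(K), e.g. via the inner solver
    m          : number of measurements (upper bound on K)
    beta       : penalty per landmark
    """
    lo, hi = 1, m
    while True:
        Ks = np.unique(np.linspace(lo, hi, n_probe).round().astype(int))
        costs = [inner_cost(int(K)) + beta * K for K in Ks]
        i = int(np.argmin(costs))
        if hi - lo < n_probe:        # grid has reached resolution one
            return int(Ks[i])
        lo = int(Ks[max(i - 1, 0)])
        hi = int(Ks[min(i + 1, len(Ks) - 1)])
\end{verbatim}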
{
  "arxiv_id": "2302.13235",
  "language": "en",
  "timestamp": "2023-02-28T02:13:45",
  "url": "https://arxiv.org/abs/2302.13235",
  "yymm": "2302"
}
\section{Introduction}
This paper is a continuation of \cite{KTTWYY1}. We refer thereto for the background and motivation behind studying quasi-$F$-splittings in birational geometry.
Given an algebraic variety $V$ in positive characteristic,
we say that $V$ is {\em $n$-quasi-$F$-split} if there exists
a $W_n\mathcal{O}_V$-module homomorphism $\alpha : F_* W_n\mathcal{O}_V \to \mathcal{O}_V$ which completes the following commutative diagram:
\begin{equation*} \label{diagram:intro-definition}
\begin{tikzcd}
W_n\mathcal{O}_V \arrow{r}{F} \arrow{d}{R^{n-1}} & F_* W_n \mathcal{O}_V \arrow[dashed]{ld}{\exists\alpha} \\
\mathcal{O}_V.
\end{tikzcd}
\end{equation*}
Further, we say that $V$ is \emph{quasi-$F$-split} if it is $n$-quasi-$F$-split for some $n \in \mathbb{Z}_{>0}$.
In the first part \cite{KTTWYY1}, we have proven the following results.
{\setlength{\leftmargini}{2.7em}
\begin{enumerate}
\item[(I)] Every affine klt surface $S$ is quasi-$F$-split.
\item[(II)] Every three-dimensional $\mathbb{Q}$-factorial affine klt variety $X$ is $2$-quasi-$F$-split
if $p \gg 0$.
\end{enumerate}
}
\noindent The purpose of this article is to give the optimal bound for when $X$ is quasi-$F$-split, as explained by the following results.
\begin{theoremA}[Theorem \ref{t-3dim-klt-QFS}]\label{tA-3klt-QFS}
Let $k$ be
a perfect field of characteristic $p >41$.
Let $V$ be a three-dimensional $\mathbb{Q}$-factorial affine klt variety over $k$.
Then $V$ is quasi-$F$-split.
In particular, $V$ lifts to $W_2(k)$.
\end{theoremA}
\begin{theoremA}[Theorem \ref{t-3dim-klt-nonQFS}]\label{tA-3klt-nonQFS}
Let $k$ be an algebraically closed field of characteristic $41$.
Then there exists a three-dimensional $\mathbb{Q}$-factorial affine klt variety $V$ over $k$ which is not quasi-$F$-split.
\end{theoremA}
Roughly speaking, Theorem \ref{tA-3klt-QFS} and Theorem \ref{tA-3klt-nonQFS}
are reduced, by global-to-local correspondence in birational geometry,
to the following corresponding problems for log del Pezzo pairs with standard coefficients.
For the definition of quasi-$F$-split pairs $(X, \Delta)$, see Subsection \ref{ss-def-QFS}.
\begin{theoremA}[Theorem \ref{t-LDP-QFS}]\label{tA-LDP-QFS}
Let $k$ be a
perfect field of characteristic $p >41$.
Let $(X, \Delta)$ be a log del Pezzo pair over $k$ with standard coefficients, that is,
$(X, \Delta)$ is a two-dimensional projective klt pair such that $-(K_X+\Delta)$ is ample and
all coefficients of $\Delta$ are contained in $\{ 1 - \frac{1}{n} \,|\, n \in \mathbb{Z}_{>0}\}$.
Then $(X, \Delta)$ is quasi-$F$-split.
\end{theoremA}
\begin{theoremA}[Theorem \ref{t-P^2-main}]\label{tA-LDP-nonQFS}
Let $k$ be an algebraically closed field of characteristic $41$.
Then there exists a log del Pezzo pair $(X, \Delta)$ over $k$ with standard coefficients which is not quasi-$F$-split.
\end{theoremA}
We emphasise that Theorem \ref{tA-3klt-QFS} and Theorem \ref{tA-LDP-QFS} are false with the notion of quasi-$F$-split replaced by $F$-split even if $p$ is arbitrarily large, as shown in \cite{CTW15a}.\\
One of the main difficulties of positive characteristic birational geometry is that klt singularities are, in this setting, little understood. In fact, some of their properties, such as Cohen-Macaulayness, break already in dimension three in low characteristics (cf.\ \cite{CT17}). Theorem \ref{tA-3klt-QFS} indicates that these singularities should, however, behave very similarly to their characteristic zero counterparts when, at least, $p>41$.
In what follows we provide another example, besides liftability, supporting this statement, by generalising the result of the first author about the logarithmic extension theorem from $F$-injective three-dimensional terminal singularities to quasi-F-split ones. In particular, this result holds for all three-dimensional terminal singularities when $p > 41$.
\begin{theoremA}[Corollary \ref{c-3dim-1form}]\label{tA-1form}
Let $V$ be a three-dimensional terminal variety over a perfect field
of characteristic $p>41$.
Then $V$ satisfies the logarithmic extension theorem for one-forms.
More explicitly, for any proper birational morphism $f\colon W\to V$ from a normal variety $W$, the natural injective homomorphism
\[
f_{*}\Omega^{[1]}_W(\log\,E)\hookrightarrow \Omega^{[1]}_V
\]
is an isomorphism,
where $E$ denotes the reduced Weil divisor satisfying $\Supp E = \Ex(f)$.
\end{theoremA}
\subsection{Overview of the proofs}
We now give an overview of how to prove Theorems \ref{tA-3klt-QFS}--\ref{tA-LDP-nonQFS}.
Following global-to-local correspondence in birational geometry,
we shall establish the following two implications (I) and (II).
\begin{enumerate}
\item[(I)] Theorem \ref{tA-LDP-QFS} $\Rightarrow$ Theorem \ref{tA-3klt-QFS}.
\item[(II)] Theorem \ref{tA-LDP-nonQFS} $\Rightarrow$ Theorem \ref{tA-3klt-nonQFS}.
\end{enumerate}
The implication (I) has already been proven in \cite{KTTWYY1}. To show the implication (II), we construct an example of a non-quasi-$F$-split three-dimensional klt singularity by taking an orbifold cone over the non-quasi-$F$-split log del Pezzo surface from Theorem \ref{tA-LDP-nonQFS}. The details are contained in Section \ref{s-Qcone-QFS} and Section \ref{s-Qcone-klt}, where we prove the following:
\begin{itemize}
\item a log del Pezzo surface with standard coefficients is quasi-$F$-split if and only if its orbifold cone is so (Corollary \ref{cor:corresponding});
\item an orbifold cone over a Picard-rank-one log del Pezzo surface with standard coefficients is $\mathbb{Q}$-factorial and klt (Theorem \ref{t-klt})\footnote{this result was widely believed to be true by the experts, but we could not find a reference that works beyond the case of characteristic zero}.
\end{itemize}
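Concretely, in the notation of Section \ref{s-Qcone}, for the pair $(X, \Delta)$ of Theorem \ref{tA-LDP-nonQFS} (which lives on $\mathbb{P}^2$ and hence has Picard rank one, see Subsection \ref{sss-LDP-nonQFS}), the threefold of Theorem \ref{tA-3klt-nonQFS} can be taken to be the orbifold cone
\[
V := V_{X, -(K_X+\Delta)} = \Spec \left( \bigoplus_{d \geq 0} H^0(X, \mathcal{O}_X(-d(K_X+\Delta))) \right).
\]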
\begin{comment}For a log del Pezzo pair $(X, \Delta)$ as in Theorem \ref{tA-LDP-nonQFS}
while the three-dimensional example needed to establish implication (II) is
As for the implication (II),
Sections {\color{red} \ref{s-Qcone-QFS}--\ref{s-Qcone-klt}} will be devoted to establishing (II). In what follows, we give an overview of the implication (II)
,
we consider its log anti-canonical ring:
\[
R := \bigoplus_{d \geq 0} H^0(X, \mathcal{O}_X(-d(K_X+\Delta))),
\]
where we have $\mathcal{O}_X(D) = \mathcal{O}_X(\rdown{D})$ for every $\mathbb{Q}$-divisor $D$.
Then $V := \Spec R$ can be considered as a cone.
In order to show Theorem \ref{tA-3klt-nonQFS},
it is enough to prove that
\begin{enumerate}
\item[(i)] $V$ is klt, and
\item[(ii)] $V$ is not quasi-$F$-split.
\end{enumerate}
Concerning (i), we consider the $\mathbb A^1$-fibration
\[
W := \Spec \left(\bigoplus_{d \geq 0} \mathcal{O}_X(-d(K_X+\Delta))\right) \to X.
\]
Then $V$ is obtained by contracting the $0$-section $\Gamma$ of $W$.
A key part is to show that $(W, (1-\epsilon)\Gamma)$ is klt for $0 < \epsilon <1$.
In our applications, $(X, \Delta)$ is a log smooth pair.
In this case, the singularities of
$(W, (1-\epsilon)\Gamma)$ can be computed via toric geometry.
For more details of the proof of (i), see Section \ref{s-Qcone-klt}.
As for (ii), we shall check that
the strategy of \cite{Watanabe91} actually works
in our setting (Section \ref{s-Qcone-QFS}).
\end{comment}
\subsubsection{Sketch of the proof of Theorem \ref{tA-LDP-nonQFS}}\label{sss-LDP-nonQFS}
The example in the statement of Theorem \ref{tA-LDP-nonQFS} is explicitly given by
\[
(X, \Delta) =\left( \P^2, \frac{1}{2}L_1 + \frac{2}{3} L_2 + \frac{6}{7} L_3 + \frac{40}{41}L_4\right),
\]
where $L_1, L_2, L_3, L_4$ are lines such that $L_1 + L_2 + L_3 + L_4$ is simple normal crossing.
It is easy to check that $(X, \Delta)$ is a log del Pezzo pair.
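As a quick sanity check,
\[
\deg \Delta = \frac{1}{2}+\frac{2}{3}+\frac{6}{7}+\frac{40}{41}
= 4 - \left(\frac{1}{2}+\frac{1}{3}+\frac{1}{7}+\frac{1}{41}\right)
= 3 - \frac{1}{1722},
\]
so $-(K_X+\Delta)$ is an ample $\mathbb{Q}$-divisor of degree $\frac{1}{1722} = \frac{1}{41 \cdot 42}$, and $(X, \Delta)$ is klt because $X$ is smooth, $\Supp \Delta$ is simple normal crossing, and all coefficients of $\Delta$ are less than $1$.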
It remains to prove that $(X, \Delta)$ is not quasi-$F$-split.
Recall that $(X, \Delta)$ is quasi-$F$-split if and only if there exists $m \in \mathbb{Z}_{>0}$
such that $\Phi_{X, \Delta, m}: \mathcal{O}_X \to Q_{X, \Delta, m}$ is a split injection (cf.\ Subsection \ref{ss-def-QFS}).
Fix $m \in \mathbb{Z}_{>0}$ and set $D := K_X+\Delta$.
By $\Phi_{X, \Delta, m} \otimes \mathcal{O}_X(K_X) = \Phi_{X, D, m}$, it is enough to show that
\[
H^2(\Phi_{X, D, m}) : H^2(X, \mathcal{O}_X(D)) \to H^2(X, Q_{X, D, m})
\]
is not injective.
Set $\Delta_{\mathrm{red}} := L_1 + L_2 +L_3 + L_4$.
By the following exact sequence (cf.\ Lemma \ref{lem:Serre's map}):
\[
0 \to \mathcal{O}_X(D) \xrightarrow{\Phi_{X, D, m}} Q_{X, D, m} \to B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD) \to 0,
\]
it suffices to show that the connecting map
\[
\delta_m : H^1(X,B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD)) \to H^2(X,\mathcal{O}_X(D))
\]
is nonzero; indeed, the long exact sequence of cohomology associated to the above short exact sequence gives $\ker H^2(\Phi_{X, D, m}) = \mathrm{im}\, \delta_m$.
If $m=1$, then this follows from $H^1(X, Q_{X, D, 1})=H^1(X, F_*\mathcal{O}_X(pD)) \simeq H^1(X, \mathcal{O}_X(\rdown{pD}))= 0$ and $H^2(X, B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD)) \neq 0$,
where the latter one can be checked by the Euler sequence and standard exact sequences on log higher Cartier operators.
For the general case, the strategy is to apply induction on $m$ via the following commutative diagram:
\begin{equation*}
\begin{tikzcd}
H^1(B_{m}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}D)) \arrow[d, "C"']\arrow{r}{{\delta_{m}}} & H^2(\mathcal{O}_X(D)) \arrow[equal]{d} \\
H^1(B_{m-1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m-1}D)) \arrow{r}{{\delta_{m-1}}} & H^2(\mathcal{O}_X(D))
\end{tikzcd}
\end{equation*}
Again by the Euler sequence and log higher Cartier operators,
we can check that $C$ is an isomorphism.
Therefore, $\delta_m$ is nonzero, because so is $\delta_1$.
For more details, see Section \ref{s-P^2-cex}.
\medskip
\subsubsection{Sketch of the proof of Theorem \ref{tA-LDP-QFS}}\label{sss-LDP-QFS}
{First, let us recall \cite[Conjecture 6.20]{KTTWYY1}.
\begin{conjecture} \label{conj:log-liftability-intro}
Let $(X,\Delta)$ be a log del Pezzo pair over a perfect field $k$ of characteristic $p>5$ such that $\Delta$ has standard coefficients. Then there exists a
log resolution $f \colon Y \to X$ of $(X, \Delta)$ such that $(Y, f^{-1}_*\Delta + \Exc(f))$ lifts to $W(k)$.
\end{conjecture}
\noindent In other words, the statement stipulates that $(X,\Delta)$ is \emph{log liftable} to $W(k)$.
As indicated in \cite[Remark 6.21]{KTTWYY1}, Theorem \ref{tA-LDP-QFS} follows from this conjecture by using the method of higher Cartier operator. For the convenience of the reader, we summarise this argument. Write $K_Y+\Delta_Y = f^*(K_X+\Delta)$ and set $E := \Supp(f^{-1}_*\Delta + \Exc(f))$.
By Theorem \ref{t-QFS-criterion}, it is enough to verify that
\begin{enumerate}
\item[(B)] $H^0(Y, \mathcal{O}_Y(K_Y+E+p^l(K_Y+\Delta_Y))) =0$ for every $l \geq 1$, and
\item[(C)] $H^0(Y, \Omega_Y(\log E)(K_Y+\Delta_Y))=0$.
\end{enumerate}
Here, (C) follows by a nef-and-big Akizuki-Nakano vanishing (see \cite[Theorem 2.11]{Kaw3}) contingent upon Conjecture \ref{conj:log-liftability-intro}. Interestingly, (B) is false in general if we do not assume that $p> 42$. The validity of (B) under this assumption follows readily from the ACC statement for the nef threshold for surfaces (this is exactly where the constant $42$ comes into the picture) which in characteristic $0$ is known by Koll\'ar (\cite{Kol94}). We extend this result (alas only for log del Pezzo surfaces) to positive characteristic in Subsection \ref{ss-explicit-ACC} by lifting to characteristic $0$ and applying Koll\'ar's original statement.\\
Therefore, to understand Theorem \ref{tA-LDP-QFS}, we need to address Conjecture \ref{conj:log-liftability-intro}. Note that log liftability of del Pezzo pairs (that is, the case $\Delta=0$) for $p>5$ is already known by Arvidsson--Bernasconi--Lacini \cite{ABL20}. In trying to generalise their result to pairs with standard coefficients, we first show the following local result which we believe is of independent interest (in both characteristic zero and positive characteristic).
\begin{lemma}[{Corollary \ref{cor:Hiromu-resolution}}] \label{lem:min-resolutions-intro}
Let $(X,\Delta)$ be a two-dimensional klt pair with standard coefficients
over an algebraically closed field. Then there exists a projective birational morphism $\varphi \colon V \to X$ such that $(V,\Delta_V)$ is simple normal crossing and $\Delta_V$ is effective, where $\Delta_V$ is the $\mathbb{Q}$-divisor defined by $K_V+\Delta_V = \varphi^*(K_X+\Delta)$.
\end{lemma}
One may ask if a stronger statement is true, namely that
\begin{equation} \label{eq:stronger-minimal-resolution}
(V, \varphi^{-1}_*\Delta + \Exc(\varphi))
\textrm{ is simple normal crossing.}
\end{equation}
By taking minimal resolutions, one can achieve (\ref{eq:stronger-minimal-resolution}) when $\Delta=0$. Unfortunately, in general, one cannot construct a resolution for which $\Delta_V\geq 0$ and (\ref{eq:stronger-minimal-resolution}) holds. A counterexample is as follows:
\begin{itemize}
\item $x \in X$ a smooth point, and
\item $\Delta=\frac{1}{2}C$ with $C$ being an irreducible curve with a cusp\footnote{a singularity looking like $(y^2 =x^{2n+1})$} at $x$.
\end{itemize}
We shall call such a point $x$ a special point of $(X,\Delta)$. On the positive side, we can show that special points are the only possible counterexamples to constructing resolutions satisfying $\Delta_V \geq 0$ and (\ref{eq:stronger-minimal-resolution}) (see Theorem \ref{t-hiromu-resol}).
Building on the above lemma and \cite{ABL20}, we can prove the following weaker version of Conjecture \ref{conj:log-liftability-intro}.
\begin{theorem}[Corollary \ref{cor:Hiromu-resolution},
Theorem \ref{thm:liftability-of-Hiromu-resolution}] \label{thm:liftability-of-log-del-Pezzo-intro}
Let $(X,\Delta)$ be a log del Pezzo pair with standard coefficients over an algebraically closed field $k$ of characteristic $p>5$. Then there exists a projective birational morphism $f \colon Y \to X$ such that $(Y, \Delta_Y)$ is log smooth and lifts over $W(k)$, where $K_Y + \Delta_Y = f^*(K_X+\Delta)$ and $\Delta_Y \geq 0$.
\end{theorem}
Now, let us go back to the proof of Theorem \ref{t-3dim-klt-QFS}. In what follows, we say that a special point $x$ of $(X,\Delta)$ is
{\em of type} $n$ if $n$ is the minimum number of blowups which
resolve $(X,\Delta)$ at $x$. We make the following observations.
\begin{itemize}
\item If there are no special points of $(X,\Delta)$, then we can construct a log resolution as in Conjecture \ref{conj:log-liftability-intro}, and so the sketch of the argument above shows that $(X,\Delta)$ is quasi-$F$-split.
\item If every special point of $(X,\Delta)$ is of type $n$ for $2n+1 < p$, then we can still recover vanishing (C) by combining the above strategy with ideas of Graf (\cite{graf21}) on extensions of logarithmic forms\footnote{We consider a sequence of birational morphisms $f \colon Y \xrightarrow{\theta} V \xrightarrow{\varphi} X$, where $\varphi$ is as in Lemma \ref{lem:min-resolutions-intro} and $\varphi \circ \theta$ is a log resolution of $(X,\Delta)$. Specifically, we choose them so that $\theta$ is the identity over all points $x \in X$ unless $x$ is special. Then, by the inequality $2n+1 < p$, the determinant $(-1)^n(2n+1)$ of a suitable intersection matrix of $\Exc(\theta)$ is not divisible by $p$ (cf.\ Remark \ref{r-cusp-resol}), and so we can apply the argument of Graf to show that $H^0(Y, \Omega^1_Y(\log E)(K_Y+\Delta_Y)) = H^0(Y, \Omega^1_Y(\log \Supp \Delta_Y)(K_Y+\Delta_Y))$, which is then zero by Akizuki-Nakano and our Theorem \ref{thm:liftability-of-log-del-Pezzo-intro}; see Subsection \ref{t-BS-vanishing}.}.
Thus, we again get that $(X,\Delta)$ is quasi-$F$-split.
\end{itemize}
Thus, from now on, in our explanation we may assume that there exists a special point $x$ of $(X,\Delta)$ of type $n$ for $42 \leq p \leq 2n+1$. Since $K_X+\Delta$ is anti-ample, the existence of a curve $\frac{1}{2}C \leq \Delta$ with a singularity of high multiplicity imposes strict restrictions on the geometry of $(X,\Delta)$. Thus, one may hope to classify all such $(X,\Delta)$. Informally speaking, $(X,\Delta)$ should be birational to a Hirzebruch surface $(\mathbb{F}_n, \Delta_{\mathbb{F}_n})$ with large $n$ and simple $\Supp \Delta_{\mathbb{F}_n}$. Granted that, one should be able to prove Conjecture \ref{conj:log-liftability-intro} for such $(X,\Delta)$ by hand, concluding the proof that it is quasi-$F$-split.\\
In what follows, we provide more details (in practice we avoid the use of explicit classifications by a mix of intersection and deformation theories). But before doing so, let us note that a natural first step would be to take the minimal resolution $f \colon Y \to X$ of $X$ and run a $K_Y$-MMP to construct a Hirzebruch surface as above. Unfortunately, if we write $K_Y+\Delta_Y = f^*(K_X+\Delta)$, then $\Delta_Y$ will not have standard coefficients anymore. In fact, its coefficients may be arbitrarily small, and so hard to control.
To circumvent this problem, we use
canonical models over $X$ as suggested to us by J.\ Koll\'ar. Specifically:
\begin{enumerate}
\item we take the canonical model
$f \colon Y \to X$ over $X$. Then the singularities of $Y$ are canonical, and hence Gorenstein.
\item The coefficients of $\Delta_Y$, defined by $K_Y+\Delta_Y = f^*(K_X+\Delta)$,
are at least $\frac{1}{3}$.
\item Then we run a $K_Y$-MMP $g \colon Y \to Z$, so that
\begin{itemize}
\item[(i)] $\rho(Z)=1$ or
\item[(ii)] $Z$ admits a Mori fibre space $\pi \colon Z \to \mathbb{P}^1$.
\end{itemize}
Set $\Delta_Z := g_*\Delta_Y$ and $C_Z := g_*f^{-1}_*C$, where $C_Z$ is still highly singular.
\item Now, canonical del Pezzo surfaces are bounded, and given that $\frac{1}{2}C_Z \leq \Delta_Z$ with singularities of high multiplicity, one can check by intersection theory that the case (i) does not happen, and so $Z$ admits a Mori fibre space $\pi \colon Z \to \mathbb{P}^1$.
\item Since $(K_Z+\Delta_Z) \cdot F <0$ for a generic fibre $F$ of $\pi$ and $K_Z \cdot F = -2$ by adjunction, we get that $0 < \Delta_Z \cdot F < 2$. With a little bit of work, one can then check that $(\Delta_Z)_{\rm red} \cdot F \leq 3$, from which we deduce that $(Z,\Delta_Z)$, and so $(X,\Delta)$, are log liftable (see Lemma \ref{l-vertical-lift}; thanks to this lemma, we do not need to classify $(Z,\Delta_Z)$ explicitly).
\end{enumerate}}
\begin{comment}
{\color{cyan} Probably: log liftable or not might be easier to understand.
The proof for the log liftable case (5.7, ACC).
Obstruction (cusps) and how to overcome (min canonicaln plus BS vanishing). }
Construction of $W$. Section \ref{s-klt-resol}.
If there existed a log resolution $\mu : V \to X$ of $(X, \Delta)$ such that $\Delta_V$ is effective for the $\mathbb{Q}$-divisor defined by $K_V+ \Delta_V = \mu^*(K_X+\Delta)$.
Although we can not hope this in general, we have the following classification result:
Going back to the original situation, we apply Theorem \ref{tA-hiromu-resol} for each non-log smooth point of a given log del Pezzo pair $(X, \Delta)$.
Therefore,
In the latter case, $x$ is called a special
The latter one is called a special point for $(X, \Delta)$.
Our strategy is to prove that special points are enoughly bounded (e.g. $2n+1 <p$)
\end{comment}
\medskip
\noindent {\bf Acknowledgements.}
The authors thank J\'anos Koll\'ar
for valuable conversations related to the content of the paper.
\begin{itemize}
\item Kawakami was supported by JSPS KAKENHI Grant number JP22J00272.
\item Takamatsu was supported by JSPS KAKENHI Grant number JP22J00962.
\item Tanaka was supported by JSPS KAKENHI Grant number JP18K13386.
\item Witaszek was supported by NSF research grant DMS-2101897.
\item Yobuko was supported by JSPS KAKENHI Grant number JP19K14501.
\item Yoshikawa was supported by JSPS KAKENHI Grant number JP20J11886 and RIKEN iTHEMS Program.
\end{itemize}
\section{Appendix: Cone construction for $\mathbb{Q}$-divisors}\label{s-Qcone}
Throughout this section,
we work over an arbitrary field $k$.
All the results and proofs in this section work in any characteristic.
Let $X$ be a projective normal variety and let $D$ be an ample $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor.
We shall summarise some foundational results on the {\emph{orbifold cone}}
\[
V_{X, D} := \Spec \left(\bigoplus_{d \geq 0} H^0(X, \mathcal{O}_X(dD)){t^d}
\right).
\]
In order to compare $X$ and $V_{X, D}$, we also introduce:
\begin{alignat*}{3}
&\mathbb A^1\textrm{-fibration:} \quad &&\pi : W_{X, D} := \Spec_X \Big( \bigoplus_{d \geq 0} \mathcal{O}_X(dD)t^d\Big) \to X, \textrm{ and }\\
&\mathbb G_m\textrm{-fibration:} \quad &&\pi^{\circ} : W^{\circ}_{X, D} := \Spec_X \Big( \bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(dD)t^d\Big) \to X.
\end{alignat*}
Note that if $D$ is a very ample Cartier divisor, then all the results in this section are well known.
In this special case, $\pi : W_{X, D} \to X$ is an $\mathbb A^1$-bundle, {$\pi^{\circ} : W^{\circ}_{X, D} \to X$ is a $\mathbb G_m$-bundle}, and $V_{X, D}$ is the {usual} cone.
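For example, if $X = \mathbb{P}^1$ and $D = P$ is a single point, then $W_{X, D}$ is the total space of the line bundle $\mathcal{O}_{\mathbb{P}^1}(-1)$, that is, the blow-up of $\mathbb{A}^2$ at the origin, and
\[
V_{X, D} = \Spec \left( \bigoplus_{d \geq 0} H^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(d))t^d \right) \simeq \Spec k[x, y] = \mathbb{A}^2
\]
is the usual affine cone over $\mathbb{P}^1$ embedded by $\mathcal{O}_{\mathbb{P}^1}(1)$.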
\subsection{Foundations}
\begin{definition}\label{d-Qcone1}
Let $X$ be a normal variety
and let $D$ be a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor.
For the quasi-coherent finitely generated $\mathcal{O}_X$-algebras
\[
\mathcal A_{X, D} := \bigoplus_{d \geq 0} \mathcal{O}_X(dD)t^d \qquad
{\rm and} \qquad
\mathcal A^{\circ}_{X, D} := \bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(dD)t^d,
\]
we set
\[
W_{X, D} := \Spec_X \mathcal A_{X, D} \qquad
{\rm and} \qquad W^{\circ}_{X, D} := \Spec_X \mathcal A^{\circ}_{X, D}.
\]
Note that
$\mathcal A_{X, D}$ and $\mathcal A^{\circ}_{X, D}$
are sheaves of graded subrings of the constant sheaves
$\bigoplus_{d \geq 0}K(X)t^d$ and $\bigoplus_{d \in \mathbb{Z}}K(X)t^d$, respectively.
We have the induced morphisms:
\[
\pi^{\circ} : W^{\circ}_{X, D} \xrightarrow{j} W_{X, D} \xrightarrow{\pi} X.
\]
Let $\Gamma_{X, D} \subseteq W_{X, D}$ be the {\em $0$-section}, i.e.,
$\Gamma_{X, D}$ is the section of $\pi : W{_{X, D}}\to X$ corresponding to the ideal sheaf
\[
(\mathcal A_{X, D})_+ := \bigoplus_{d > 0}\mathcal{O}_X(dD)t^d
\]
of $\mathcal A_{X, D} = \bigoplus_{d \geq 0}\mathcal{O}_X(dD)t^d$.
We drop the subscript $(-)_{X, D}$ when no confusion arises, for example, $W := W_{X, D}$.
\end{definition}
\begin{lemma}\label{l-VW-normal}
Let $X$ be a normal variety and let $D$ be a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor.
Then the following hold.
\begin{enumerate}
\item $W_{X, D}$ is a normal variety.
\item
The induced morphism $j: W^{\circ}_{X, D} \to W_{X, D}$ is an open immersion.
\item The set-theoretic equality $W_{X, D} \setminus \Gamma_{X, D} = j(W^{\circ}_{X, D})$ holds.
\end{enumerate}
\end{lemma}
\begin{proof}
The assertion (1) follows from \cite[3.1 in page 48]{Dem88}.
The assertions (2) and (3) hold by \cite[Lemme 2.2(i) in page 40]{Dem88}.
\end{proof}
In what follows, we consider $W^{\circ}_{X, D}$ as an open subscheme of $W_{X, D}$.
\begin{definition}\label{d-Qcone2}
Let $X$ be a projective normal variety and let $D$ be an ample $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor.
For the finitely generated $k$-algebra
\[
R(X, D) := \bigoplus_{d \geq 0} H^0(X, \mathcal{O}_X(dD))t^d,
\]
we set
\[
V_{X, D} := \Spec R(X, D).
\]
Let $v_{X, D}$ be the {\em vertex} of $V_{X, D}$, that is,
$v_{X, D}$ is the closed point of $V_{X,D}$ corresponding to
the maximal ideal $\bigoplus_{d > 0} H^0(X, \mathcal{O}_X(dD))t^d$
of $R(X, D) = \bigoplus_{d \geq 0} H^0(X, \mathcal{O}_X(dD))t^d$.
By the ring isomorphism
\[
\Gamma(V_{X, D}, \mathcal{O}_{V_{X, D}}) =
\bigoplus_{d \geq 0} H^0(X, \mathcal{O}_X(dD)t^d) \xrightarrow{\simeq}
H^0(X, \bigoplus_{d \geq 0}\mathcal{O}_X(dD)t^d) = \Gamma(W_{X, D}, \mathcal{O}_{W_{X, D}}),
\]
we obtain a morphism $\mu: W_{X, D} \to V_{X, D}$
with $\mu_*\mathcal{O}_{W_{X, D}}=\mathcal{O}_{V_{X, D}}$ (cf.\ \cite[Ch. II, Exercise 2.4]{hartshorne77}); {here the latter condition can be checked on global sections as $V_{X,D}$ is affine and $\mu_*\mathcal{O}_{W_{X, D}}$ is quasi-coherent}.
To summarise, we have the following {diagram}:
\[
\begin{tikzcd}
W^{\circ}_{X, D} \arrow[rd, "\pi^{\circ}"']& W_{X, D} & V_{X, D} \\
& X.
\arrow["j", hook, from=1-1, to=1-2]
\arrow["\mu", from=1-2, to=1-3]
\arrow["\pi", from=1-2, to=2-2]
\end{tikzcd}
\]
We drop the subscript $(-)_{X, D}$ when no confusion arises, for example, $V:= V_{X, D}$.
\end{definition}
\begin{theorem}\label{t-Qcone-birat}
Let $X$ be a projective normal variety and
let $D$ be an ample $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor.
Then the following hold.
\begin{enumerate}
\item $V_{X, D}$ is an affine normal variety.
\item the induced morphism $\mu : W_{X, D} \to V_{X, D}$
is a projective birational morphism such that the following set-theoretic equalities are valid:
\begin{equation}\label{e1-Qcone-birat}
\Ex(\mu) = \Gamma_{X, D}
\qquad {\rm and} \qquad
\mu(\Ex(\mu)) = \mu(\Gamma_{X, D})=v_{X, D}.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
By $(\mu_{X, D})_*\mathcal{O}_{W_{X, D}} = \mathcal{O}_{V_{X, D}}$,
it follows from Lemma \ref{l-VW-normal} that $V$ is an affine normal variety.
Thus (1) holds.
The assertion (2) follows from \cite[3.4 in page 48]{Dem88}.
\end{proof}
\subsection{Functoriality}
Let $\alpha : Y \to X$ be a dominant morphism of normal varieties.
Let $D$ be a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$ and set $D_Y := \alpha^*D$.
Then we have the following commutative diagram (for the definitions of $W_{X, D}$ and $W_{Y, D_Y}$, see Definition \ref{d-Qcone1}):
\[
\begin{tikzcd}
Y \arrow[d, "\alpha"'] & W_{Y, D_Y} \arrow[l, "\pi_Y"']\arrow[d, "\beta"]\\
X & W_{X, D}. \arrow[l, "\pi"]
\end{tikzcd}
\]
Furthermore, if $\alpha : Y \to X$ is a finite surjective morphism of projective normal varieties and $D$ is an ample $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor, then we get the following commutative diagram (for the definitions of $V_{X, D}$ and $V_{Y, D_Y}$, see Definition \ref{d-Qcone2}):
\[
\begin{tikzcd}
Y \arrow[d, "\alpha"'] & W_{Y, D_Y} \arrow[l, "\pi_Y"']\arrow[d, "\beta"] \arrow[r, "\mu_Y"] & V_{Y, D_Y} \arrow[d, "\gamma"]\\
X & W_{X, D} \arrow[l, "\pi"] \arrow[r, "\mu"'] & V_{X, D}.
\end{tikzcd}
\]
\begin{comment}
\begin{notation}\label{n-Qcone-functor}
We use Notation \ref{n-Qcone}.
Let $\alpha : Y \to X$ be a finite surjective morphism from a projective normal variety $Y$.
\footnote{Open immersion, etale base change}
Set $D_Y := \alpha^*D$ and we then obtain
\[
\mu_Y : W_Y := \Spec_Y \bigoplus_{d \geq 0} \mathcal{O}_X(dD_Y) \to \Spec\,\bigoplus_{d \geq 0} H^0(Y, \mathcal{O}_Y(dD_Y))=:V_Y.
\]
We have the following diagram
\begin{equation*}\label{e1-Qcone-functor}
\begin{CD}
Y @<\pi_Y<< W_Y @>\mu_Y>>V_Y\\
@VV\alpha V @VV\alpha_W V @VV{\alpha_V}V\\
X @<\pi<< W @>\mu >>V.\\
\end{CD}
\end{equation*}
It is easy to see that this diagram is commutative.
\end{notation}
\begin{lemma}
We use Notation \ref{n-Qcone-functor}
The above diagram is commutative.
\end{lemma}
\begin{proof}
The right square:
We have two arrows $W_Y \to V$.
Since $V$ is affine, this corresponds to $\mathcal{O}_V(V) \to \mathcal{O}_{W_Y}(W_Y)$.
These coincide, hence ok.
The left square:
The problem is local on $X$.
We may assume that $X =\Spec\,R$.
Then everything becomes affine, hence can be checked directly.
\end{proof}
\end{comment}
\begin{lemma}\label{l-funct-Gamma}
Let $\alpha : Y \to X$ be a dominant morphism of normal varieties.
Let $D$ be a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$ and set $D_Y := \alpha^*D$.
Then the set-theoretic equality
\[
\beta^{-1}(\Gamma_{X, D}) = \Gamma_{Y, D_Y}
\]
holds for the induced morphism $\beta : W_{Y, D_Y} \to W_{X, D}$.
\end{lemma}
\begin{proof}
Since the problem is local on $X$ and $Y$,
we may assume that $X=\Spec R$, $Y = \Spec R_Y$, and $\mathcal{O}_X(d_0D)|_{\Spec R} \simeq \mathcal{O}_{\Spec R}$ for some $d_0 \in \mathbb{Z}_{>0}$.
There exists $f \in K(X) \setminus \{0\}$ such that
\[
R(d_0dD)= f^dR \qquad {\rm and} \qquad
R_Y(d_0dD_Y)= f^dR_Y
\]
for every $d \in \mathbb{Z}$.
We have
the induced graded
ring homomorphism:
\[
A := \bigoplus_{d \geq 0} R(dD)t^d \to \bigoplus_{d \geq 0} R_Y(dD_Y)t^d=:A_Y.
\]
Since this ring homomorphism is injective,
we consider $A$ as a subring of $A_Y$.
It suffices to show that
\[
(A_Y)_+ = \sqrt{A_+ \cdot A_Y},
\]
where $A_+ := \bigoplus_{d >0} R(dD)t^d$ and
$(A_Y)_+ := \bigoplus_{d >0} R_Y(dD_Y)t^d$.
By $(A_Y)_+ \supseteq A_+$ and $\sqrt{(A_Y)_+} =(A_Y)_+$,
it holds that
\[
(A_Y)_+ =\sqrt{(A_Y)_+} \supseteq
\sqrt{A_+ \cdot A_Y}.
\]
It is enough to prove the opposite inclusion:
$(A_Y)_+ \subseteq \sqrt{A_+ \cdot A_Y}$.
Fix $d \in \mathbb{Z}_{>0}$.
Take a homogeneous element $\psi t^d \in
R_Y(dD_Y)t^d$ with $\psi \in R_Y(dD_Y)$.
We obtain
\[
\psi^{d_0} \in R_Y(d_0d{D_Y}) = f^d R_Y.
\]
Hence we can write $\psi^{d_0} =f^d s$ for some
$s \in R_Y$.
Then it holds that
\[
(\psi t^d)^{d_0} =
\psi^{d_0} t^{d_0d}
= f^ds \cdot t^{d_0d} = (ft^{d_0})^d \cdot s
\in A_+ \cdot A_Y,
\]
which implies $\psi t^{d} \in \sqrt{A_+ \cdot A_Y}$, as required.
\end{proof}
\begin{lemma}\label{l-Qcone-etale}
Let $\alpha : Y \to X$ be a smooth morphism of normal varieties.
Let $D$ be a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$ and set $D_Y := \alpha^*D$.
Then the following diagram
\[
\begin{tikzcd}
Y \arrow[d, "\alpha"'] & W_{Y, D_Y} \arrow[l, "\pi_Y"']\arrow[d, "\beta"]\\
X & W_{X, D}. \arrow[l, "\pi"]
\end{tikzcd}
\]
is cartesian, where $\beta, \pi, \pi_Y$ are the induced morphisms.
\end{lemma}
\begin{proof}
Since both $W_{Y, D_Y}$ and $W_{X, D} \times_X Y$ are affine over $Y$,
the problem is local on $Y$.
In particular, we may assume that $X= \Spec R$ and $Y= \Spec R_Y$.
It suffices to show that
\[
\left( \bigoplus_{d \geq 0} R(dD) \right) \otimes_R R_Y \to \bigoplus_{d \geq 0} R_Y(dD_Y)
\]
is an isomorphism.
\begin{comment}
Then it holds that
\[
W' = \Spec_Y (\mathcal A \otimes_{\mathcal{O}_X} \mathcal{O}_Y).
\]
By
\[
W = \Spec_X \bigoplus_{d \geq 0} \mathcal{O}_X(dD), \qquad
W_Y = \Spec_Y \bigoplus_{d \geq 0} \mathcal{O}_Y(dD_Y),
\]
and we have the induced morphism $W_Y \to W \times_X Y$.
Hence, it suffices to show that
\end{comment}
Fix $d \in \mathbb{Z}_{\geq 0}$.
Set $E := \llcorner d D \lrcorner$ and $E_Y := \llcorner dD_Y \lrcorner$.
We have
\[
E_Y = \llcorner dD_Y \lrcorner = \llcorner \alpha^*(dD) \lrcorner =
\alpha^*(\llcorner dD \lrcorner) = \alpha^* E,
\]
where the third equality holds, because $\alpha$ is smooth.
Then it is enough to prove that
\[
\alpha^*\mathcal{O}_X(E) \to \mathcal{O}_Y(\alpha^*E)
\]
is an isomorphism.
Outside the singular locus, these coincide.
Hence it suffices to show that both sides are reflexive.
It is well known that the right hand side $\mathcal{O}_Y(\alpha^*E)$ is reflexive.
For the smooth locus $X'$ of $X$ and its inverse image $Y' := \alpha^{-1}(X')$,
we have the cartesian diagram:
\[
\begin{tikzcd}
Y' \arrow[r, "i"]\arrow[d, "\alpha'"'] & Y \arrow[d, "\alpha"]\\
X' \arrow[r, "j"] & X.
\end{tikzcd}
\]
By the isomorphism
\[
j_*j^*\mathcal{O}_X(E) \xrightarrow{\simeq} \mathcal{O}_X(E),
\]
we obtain
\[
\alpha^*
\mathcal{O}_X(E)
\xleftarrow{\simeq}
\alpha^*j_*j^*\mathcal{O}_X(E)
\simeq i_*\alpha'^*j^*\mathcal{O}_X(E)
\simeq i_*i^* \alpha^*\mathcal{O}_X(E),
\]
where the isomorphism
$\alpha^*j_*j^*\mathcal{O}_X(E)
\simeq i_*\alpha'^*j^*\mathcal{O}_X(E)$ follows from the flat base change theorem.
Therefore, $\alpha^*\mathcal{O}_X(E)$ is reflexive, as required.
\end{proof}
\subsection{Miscellanies}
\begin{proposition}\label{prop:S_2}
Let $X$ be a normal variety with $\dim X \geq 1$,
{let} $D$ {be} an ample $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor,
and
{let} $L$ {be} a $\mathbb{Q}$-divisor.
Let $\mathcal{F}_L$ be the quasi-coherent $\mathcal{O}_{W^{\circ}_{X,D}}$-submodule of the constant sheaf $K(X)[t,t^{-1}]$ corresponding to the quasi-coherent $\pi^{\circ}_*\mathcal{O}_{W^{\circ}_{X,D}}$-module
\[
\bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(L+dD)t^d
\]
(see \cite[Exercise II.5.17(e)]{hartshorne77}),
that is, $\mathcal{F}_L$ satisfies
\[
\pi^{\circ}_*\mathcal{F}_L \simeq \bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(L+dD)t^d.
\]
{Then the following hold.}
\begin{enumerate}
\item If $X$ is regular or $D$ is Cartier, then $\pi^{\circ} \colon W_{X,D}^{\circ} \to X$ is flat.
\item $\mathcal{F}_{L}$ is coherent and satisfies the condition $S_2$.
\item For every codimension one point $w \in W^{\circ}_{X,D}$, the codimension of $\pi^{\circ}(w)$ is one or zero.
\item Let $D=\sum^r_{i=1} \frac{l_i}{d_i} D_i$ be the irreducible decomposition, where $l_i$ and $d_i$ are coprime integers satisfying $d_i >0$ for each $1 \leq i \leq r$. Set $ D':=\sum^r_{i=1}\frac{d_i-1}{d_i}D_i$.
Then, for every $q \geq 1$, we have
\[
(\mathcal{F}_{D'}^{\otimes q})^{**}=\mathcal{F}_{qD'}
\]
via the natural inclusions into $K(X)[t,t^{-1}]$, where $(-)^{**}$ denotes the reflexive hull.
\end{enumerate}
\end{proposition}
\begin{proof}
By definition,
\[
\pi^{\circ}_*\mathcal{O}_{W^{\circ}_{X,D}} =\bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(dD)t^d.
\]
If $X$ is regular or $D$ is Cartier, then $\pi^{\circ}_*\mathcal{O}_{W^{\circ}_{X,D}}$ is a locally free $\mathcal{O}_X$-module.
Since $\pi^{\circ}$ is affine, $\pi^{\circ}$ is flat.
Next, we prove the assertion (2).
Let $N$ be a positive integer such that $ND$ is Cartier.
Then the natural inclusion
\[
\bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(dND)s^d \to \bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(dD) t^d
\qquad\qquad s \mapsto t^N
\]
induces a finite surjective morphism $f \colon W^{\circ}_{X,D} \to W^{\circ}_{X,ND}$.
We have
\[
(\pi^{\circ}_N)_*f_*\mathcal{F}_{L} \simeq \bigoplus_{0 \leq i \leq N-1} \mathcal{O}_X(L+iD) \otimes_{\mathcal{O}_X} \mathcal{O}_{W^{\circ}_{X,ND}} \simeq \bigoplus_{0 \leq i \leq N-1} \mathcal{O}_{X}(L+iD) \otimes_{\mathcal{O}_X} \mathcal{A}^{\circ}_{X,ND},
\]
where $\pi^{\circ}_N \colon W^{\circ}_{X,ND} \to X$ is the canonical morphism introduced in Definition \ref{d-Qcone1}.
In particular,
\[ (\pi^{\circ}_N)^*\bigoplus_{0 \leq i \leq N-1} \mathcal{O}_{X}(L+iD) \simeq f_*\mathcal{F}_L.
\]
Since $f_*\mathcal{F}_L$ is coherent, so is $\mathcal{F}_L$.
By the assertion (1), $\pi^{\circ}_N$ is flat.
Furthermore, since $\mathcal{O}_X(L+iD)$ satisfies the condition $S_2$, so does $\mathcal{F}_{L}$.
Next, we prove the assertion (3).
Since the above $f$ is finite and surjective, $f(w)$ is also a codimension one point.
Therefore, the assertion follows from the flatness of $\pi^{\circ}_N$.
Indeed, since $\pi^{\circ}_N$ is flat, every fiber of $\pi^{\circ}_N$ has dimension one.
By \cite[III.Proposition 9.5]{hartshorne77}, we have
\[
\dim \mathcal{O}_{X,\pi^{\circ}_N(f(w))} \leq \dim \mathcal{O}_{W^{\circ}_{X,ND},f(w)}=1.
\]
Finally, we prove the assertion (4).
First, we describe the natural inclusions.
We consider the composition of maps
\[
\mathcal{F}^{\otimes q}_{D'} \hookrightarrow K(X)[t,t^{-1}]^{\otimes q} \simeq K(X)[t,t^{-1}],
\]
where the first inclusion is induced by the natural injection and the second isomorphism is induced by taking products.
Therefore, it induces the natural injection
\[
(\mathcal{F}_{D'}^{\otimes q})^{**} \hookrightarrow K(X)[t,t^{-1}].
\]
By the assertion (3) and the $S_2$-condition of $\mathcal{F}_{qD'}$, we may assume that $X$ is a spectrum of a discrete valuation ring.
In particular, $D=\frac{a}{b} E$ is the irreducible decomposition, where $a$ and $b$ are coprime integers satisfying $b > 0$, and $D'=\frac{b-1}{b} E$ (note that if $D$ is an integral Weil divisor, then $D'=0$).
For the assertion, it is enough to show that for every $d \in \mathbb{Z}$, the map
\[
\bigoplus_{d_1+\cdots+d_q=d}( \mathcal{O}_X(D'+d_1D) \otimes \cdots \otimes \mathcal{O}_X(D'+d_qD)) \to \mathcal{O}_X(qD'+dD)
\]
induced by taking the products is surjective.
Therefore, it is enough to show that for every $d \in \mathbb{Z}$, there exists $d_1,\ldots,d_q \in \mathbb{Z}$ such that $d_1+\cdots +d_q=d$ and
\[
\sum_{1 \leq i \leq q}\lfloor \frac{b-1}{b}+d_i\frac{a}{b} \rfloor =\lfloor q \frac{b-1}{b}+d\frac{a}{b} \rfloor.
\]
Since $a$ and $b$ are coprime integers, we can find $x, y \in \mathbb{Z}$ such that
\[
(b-1)+ax \equiv q(b-1)+da \mod b,\ \text{and}\ (b-1)+ay \equiv 0 \mod b.
\]
Then $x+(q-1)y \equiv d \mod b$.
Set $d_1:=d-(q-1)y$ and $d_i:=y$ for $2 \leq i \leq q$. Then $d_1+\cdots +d_q=d$, and
\[
(b-1)+d_1a \equiv (b-1)+ax \equiv q(b-1)+da \mod b.
\]
Therefore, if we set
\begin{align*}
(b-1)+ad_1&= bQ_1+r, \\
(b-1)+ad_i&= bQ_i\ &\text{for $i \geq 2$}, \\
q(b-1)+ad &= bQ+r,
\end{align*}
then $Q_1+\cdots +Q_q=Q$.
Furthermore, we have
\[
\sum_{1 \leq i \leq q}\lfloor \frac{b-1}{b}+d_i\frac{a}{b} \rfloor
= Q_1+\cdots +Q_q =Q=\lfloor q \frac{b-1}{b}+d\frac{a}{b} \rfloor.
\]
\end{proof}
\section{Appendix: Cone singularities for log smooth pairs}\label{s-Qcone-klt}
Throughout this section,
we work over an algebraically closed field $k$ of arbitrary characteristic. However, the main results of this section should extend to perfect fields by base change.
Let $X$ be a smooth projective variety and let $\Delta$ be a simple normal crossing $\mathbb{Q}$-divisor.
Assume that $\rdown{\Delta}=0$, $\Delta$ has standard coefficients, and $D :=-(K_X+\Delta)$ is ample.
The purpose of this section is to prove that the {orbifold} cone $V := V_{X, D}$ is $\mathbb{Q}$-factorial and klt (Theorem \ref{t-klt}).
For the $0$-section $\Gamma := \Gamma_{X, D}$ of $\pi : W:=W_{X, D} \to X$,
we shall prove (I) and (II) below.
\begin{enumerate}
\item[(I)] $W$ is $\mathbb{Q}$-factorial, $(W, \Gamma)$ is plt, and {so} $(W, 0)$ is klt.
\item[(II)] $(K_W + \Gamma)|_{\Gamma} = K_{\Gamma}+\Delta_{\Gamma}$, where $\Delta_{\Gamma} := (\pi|_{\Gamma})^*\Delta$ via
$\pi|_{\Gamma} : \Gamma \hookrightarrow W \to X$.
\end{enumerate}
The main tool is toric geometry.
Both (I) and (II) are \'etale local problems on $X$.
The proofs will then be carried out by introducing a toric structure on $(W, \Gamma)$ in the case when $X=\mathbb A^n$ and $\Supp \Delta$ is a union of coordinate hyperplanes.
Subsection \ref{ss-toric-str} and Subsection \ref{ss-Diff} are devoted to proving (I) and (II), respectively.
In Subsection \ref{ss-notation-toric}, we recall some foundations on toric geometry.
\subsection{Notation and results on toric varieties}\label{ss-notation-toric}
When we work with toric varieties, we use the following notation,
which is extracted from \cite{CLS11}.
In this paper, we only need normal affine toric varieties $U_{\sigma}$ associated to a rational {strongly}
convex polyhedral cone $\sigma$.
\begin{notation}\label{n-toric}
\hphantom{a}\\
\vspace{-1em}
\begin{enumerate}
\item Let $N$ be a finitely generated free $\mathbb{Z}$-module. Let $M$ be its dual,
that is, $M := \Hom_{\mathbb{Z}}(N, \mathbb{Z})$.
\item
Set $N = M = \mathbb{Z}^r$.
We identify $M$ and $\Hom_{\mathbb{Z}}(N, \mathbb{Z})$ via the standard inner product:
\[
N \times M \to \mathbb{Z}, \qquad (x, y) \mapsto \langle x, y\rangle,
\]
that is,
\[
\langle (x_1, ..., x_r), (y_1, ..., y_r)\rangle := x_1y_1 + \cdots +x_ry_r.
\]
\item Set $N_{\mathbb{R}} := N \otimes_{\mathbb{Z}} \mathbb{R}$ and $M_{\mathbb{R}} := M \otimes_{\mathbb{Z}} \mathbb{R}$.
\item For a rational {strongly} convex polyhedral cone $\sigma \subseteq N_{\mathbb{R}}$,
let $U_{\sigma}$ be the associated affine normal toric variety, that is,
$U_{\sigma} = \Spec k[M \cap \sigma^{\vee}]$.
\end{enumerate}
\end{notation}
\begin{definition}\label{d-toric}
\hphantom{a}\\
\vspace{-1em}
\begin{enumerate}
\item We say that $T$ is a {\em torus} if $T =\mathbb G_m^n$,
which is an algebraic group over $k$.
\item
We say that $(T, X)$ is a {\em toric variety} if
$X$ is a variety and $T$ is an open subscheme of $X$ such that
\begin{itemize}
\item $T$ is a torus and
\item the action of $T$ on itself extends to $X$.
\end{itemize}
By abuse of notation, also $X$ is called a toric variety.
In this case, $(T, X)$ is called the {\em toric structure} on $X$.
\item
A prime divisor $D$ on $U_{\sigma}$ is {\em torus-invariant} if $D$ corresponds to a ray of $\sigma$.
\end{enumerate}
\end{definition}
Given a rational strongly convex polyhedral cone $\sigma \subseteq N_{\mathbb{R}}$ with $d:=\dim \sigma = \dim N_{\mathbb{R}}$,
the following hold for the associated affine normal toric variety $U_{\sigma} = \Spec\,k[M \cap \sigma^{\vee}]$.
\begin{enumerate}
\item $U_{\sigma}$ is smooth if and only if we can write $\sigma = \mathbb{R}_{\geq 0} u_1+ \cdots + \mathbb{R}_{\geq 0} u_d$ for some $\mathbb{Z}$-linear basis $u_1, ..., u_d \in N \cap \sigma$ of $N$
\cite[Theorem 1.3.12]{CLS11}.
\item $U_{\sigma}$ is $\mathbb{Q}$-factorial if and only if $\sigma$ is simplicial, that is,
we can write $\sigma = \mathbb{R}_{\geq 0} u_1+ \cdots + \mathbb{R}_{\geq 0} u_d$ for some
$u_1, ..., u_d \in N \cap \sigma$ \cite[Proposition 4.2.7]{CLS11}.
In this case, $u_1, ..., u_d$ is an $\mathbb{R}$-linear basis of $N_{\mathbb{R}}$.
\end{enumerate}
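For instance, for $N = \mathbb{Z}^2$ and the simplicial cone $\sigma = \mathbb{R}_{\geq 0}e_1 + \mathbb{R}_{\geq 0}(e_1 + 2e_2)$, the ray generators do not form a $\mathbb{Z}$-basis of $N$ (their determinant is $2$), and indeed, writing $x := \chi^{(1,0)}$ and $y := \chi^{(0,1)}$,
\[
U_{\sigma} = \Spec k[x,\ y,\ x^2y^{-1}] \simeq \Spec k[u, v, w]/(uw - v^2)
\]
is $\mathbb{Q}$-factorial but not smooth: it has an $A_1$-singularity at the torus-fixed point.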
\begin{comment}
\begin{example}
Set $M := \mathbb{Z}^n$.
For
\[
k[x_1^{\pm}, ..., x_n^{\pm 1}] := k[x_1, x_1^{-1}, ..., x_n, x_n^{-1}]
\]
and a finite set of monomials
\[
\Lambda \subset \{ x_1^{e_1} \cdots x_n^{e_1} \,|\, e_1, ..., e_n \in \mathbb{Z} \} \subset
k[x_1^{\pm}, ..., x_n^{\pm 1}],
\]
Let $k[\Lambda]$ be the $k$-subalgebra of $k[x_1^{\pm}, ..., x_n^{\pm 1}]$
generated by $\Lambda$.
Let $\tau_{\Lambda} \subset M_{\mathbb{R}}$ be the cone generated by
\[
\{ (e_1, ..., e_n) \in M \,|\, x_1^{e_1} \cdots x_n^{e_n} \in \Lambda\}.
\]
Since $k[\Lambda]$ coincides with
Finitely generated by monomial, then normal affine toric variety.
\end{example}
\end{comment}
\begin{lemma}\label{l-anQfac-toric}
Let $W$ be an affine normal $\mathbb{Q}$-factorial toric variety.
Then the following hold.
\begin{enumerate}
\item There exists a finite surjective morphism $\mathbb A^r \times \mathbb G_m^s \to W$ for some $r, s \in \mathbb{Z}_{\geq 0}$.
\item
$W$ is analytically $\mathbb{Q}$-factorial, that is,
$\Spec \widehat{\mathcal{O}}_{W, w}$ is $\mathbb{Q}$-factorial for every (possibly non-closed) point $w \in W$, where $\widehat{\mathcal{O}}_{W, w}$ denotes the completion of the local ring $\mathcal{O}_{W, w}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let us show (1).
We have $W =U_{\sigma}$ for some simplicial cone $\sigma \subseteq N_{\mathbb{R}}$.
Set $d := \dim W$.
Fix a minimal collection of generators $v_1, ..., v_d \in N \cap \sigma$ of $\sigma$
and set $N'$ to be the sublattice of $N$ generated by $v_1, ..., v_d$.
Then the inclusion $N' \hookrightarrow N$ induces
a finite surjective morphism $\mathbb A^r \times \mathbb G_m^s \to W$ for some $r, s \in \mathbb{Z}_{\geq 0}$ (cf.\ \cite[page 127]{CLS11}).
Thus (1) holds.
Let us show (2).
By (1), we have a finite surjective morphism
$V:= \mathbb A^r \times \mathbb G_m^s \to W$.
Let $v_1, ..., v_r \in V$ be the points lying over $w$.
We have a finite injective ring homomorphism:
\[
\widehat{\mathcal{O}}_{W, w} \hookrightarrow \mathcal{O}_{V}(V) \otimes_{\mathcal{O}_W(W)} \widehat{\mathcal{O}}_{W, w}
\simeq \varprojlim_n \mathcal{O}_{V}(V)/\mathfrak{m}_w^n \mathcal{O}_{V}(V) \simeq
\widehat{\mathcal{O}}_{V, v_1} \times \cdots \times \widehat{\mathcal{O}}_{V, v_r},
\]
where $\mathfrak{m}_w$ denotes the maximal ideal of $\mathcal{O}_{W, w}$,
the first isomorphism holds by \cite[Proposition 10.13]{AM69},
and the second one follows from \cite[Theorem 17.7]{Nag62}.
In particular, the composite ring homomorphism
\[
\widehat{\mathcal{O}}_{W, w} \to \widehat{\mathcal{O}}_{V, v_1} \times \cdots \times \widehat{\mathcal{O}}_{V, v_r} \xrightarrow{{\rm pr}_1} \widehat{\mathcal{O}}_{V, v_1}
\]
is a finite injective ring homomorphism to a regular local ring $\widehat{\mathcal{O}}_{V, v_1}$.
Then $\Spec \widehat{\mathcal{O}}_{W, w}$ is $\mathbb{Q}$-factorial by the same argument as in \cite[Proposition 5.16]{KM98}.
Thus (2) holds.
\end{proof}
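To illustrate (1), consider the two-dimensional simplicial cone $\sigma = \mathbb{R}_{\geq 0}e_1 + \mathbb{R}_{\geq 0}(e_1+2e_2) \subseteq N_{\mathbb{R}} = \mathbb{R}^2$. The sublattice $N'$ generated by the ray generators has index two in $N$, and the resulting finite surjective morphism (with $r=2$ and $s=0$) is
\[
\mathbb{A}^2 \to U_{\sigma} \simeq \Spec k[u, v, w]/(uw - v^2), \qquad (a, b) \mapsto (u, v, w) = (b^2, ab, a^2).
\]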
\subsection{Toric structure}\label{ss-toric-str}
\begin{notation}\label{n-W-toric}
Set $X := \mathbb A^n = \Spec k[x_1, ..., x_n]$ and
$H_1 :=V(x_1), ..., H_n := V(x_n)$, which are the coordinate hyperplanes.
We equip $X$ with the toric structure $(T_X, X)$
given by $T_X := X \setminus \bigcup_{i=1}^n H_i = \Spec k[x_1^{\pm 1}, ..., x_n^{\pm 1}]$.
For $a_1, ..., a_n \in \mathbb{Q} \cap [0, 1)$ and $D := a_1 H_1 + \cdots + a_n H_n$,
recall that we have the induced morphism (cf.\ Definition \ref{d-Qcone1}):
\[
\pi : W := W_{X, D} = \Spec A \to X
\quad {\rm where}
\quad A := \bigoplus_{d \in \mathbb{Z}_{\geq 0}} R(dD)t^d \subseteq k[x_1^{\pm 1}, ..., x_n^{\pm 1}, t].
\]
\vspace{-1.2em}
\noindent Set
\begin{itemize}
\item $E_W := (\pi^*H_1)_{\mathrm{red}} + \cdots + (\pi^*H_n)_{\mathrm{red}}$,
\item $T_W := \Spec k[x_1^{\pm 1}, ..., x_n^{\pm 1}, t^{\pm 1}]$, and
\item $\Gamma := \Gamma_{X, D}$, which is the $0$-section of $\pi : W \to X$.
\end{itemize}
We have the commutative diagrams:
\[
\begin{tikzcd}
A= \bigoplus_{d \geq 0} R(dD)t^d \arrow{r} & k[x_1^{\pm 1}, ..., x_n^{\pm 1}, t^{\pm 1}] \\
R= k[x_1, ..., x_n]\arrow{r}\arrow[u] & k[x_1^{\pm 1}, ..., x_n^{\pm 1}]\arrow[u]
\end{tikzcd}
\hspace{20mm}
\begin{tikzcd}
W \arrow[d, "\pi"'] & T_W \arrow[l, "j_W"'] \arrow[d] \\
X & T_X \arrow[l, "j_X"],
\end{tikzcd}
\]
where the right diagram is obtained from the left one by applying $\Spec (-)$.
\end{notation}
\begin{remark}\label{r-An-inductive}
We use Notation \ref{n-W-toric}.
Fix a closed point $x \in X$.
The following arguments (1) and (2) enable us to reduce many problems
to the case when $x$ is the origin.
\begin{enumerate}
\item
Assume $x \not\in H_1, ..., x \not\in H_n$.
Then $W$ and $W_{X, 0} = X \times \mathbb A^1$ are isomorphic over some open neighbourhood of $x \in X$.
More precisely, for the projection $\pi': W_{X, 0} \to X$,
there exists an open neighbourhood $X'$ of $x \in X = \mathbb A^n$
such that
$\pi^{-1}(X')$ and $\pi'^{-1}(X') = X' \times \mathbb A^1$
are isomorphic over $X'$.
In particular, $\pi^{-1}(x) \simeq \mathbb A^1$.
\item
Assume $x \in H_1, ..., x \in H_r, x \not\in H_{r+1}, ..., x \not\in H_n$
for some $1 \leq r <n$.
Then, $W$ and $W' :=W_{X, a_1H_1+ \cdots +a_r H_r}$ are isomorphic over
some open neighbourhood of $x \in X$.
Furthermore,
we have the following cartesian diagram (Lemma \ref{l-Qcone-etale}):
\[
\begin{tikzcd}
W_{X, a_1H_1+ \cdots +a_r H_r} \arrow[d, "\pi"'] \arrow[r] & W_{X', a_1H'_1+ \cdots + a_rH'_r} \arrow[d, "\pi '"]\\
X=\mathbb{A}^n \arrow[r, "\rho"] & X':=\mathbb{A}^r
\end{tikzcd}
\]
for the projection $\rho :\mathbb A^n \to \mathbb A^r, (c_1, ..., c_n) \mapsto (c_1, ..., c_r)$ and
the coordinate hyperplanes $H'_1, ..., H'_r$ of $X' = \mathbb A^r$.
\end{enumerate}
\end{remark}
\begin{proposition}\label{p-toricity1}
We use Notation \ref{n-W-toric}.
Then the following hold.
\begin{enumerate}
\item The induced morphism $j_W : T_W \to W$ is an open immersion.
\item $W$ is an affine normal toric variety with the toric structure $(T_W, W)$.
\item $\pi : (T_W, W) \to (T_X, X)$ is a toric morphism.
\item $W$ is $\mathbb{Q}$-factorial.
\end{enumerate}
\end{proposition}
\begin{proof}
The assertion (1) follows from
\[
A[x_1^{-1}, ..., x_n^{-1}, t^{-1}] =
\left( \bigoplus_{d \in \mathbb{Z}_{\geq 0}} R(dD)t^d\right) [x_1^{-1}, ..., x_n^{-1}, t^{-1}]
= k[x_1^{\pm 1}, ..., x_n^{\pm 1}, t^{\pm 1}].
\]
Let us show (2).
For any $d \in \mathbb{Z}_{\geq 0}$, it holds that
\[
R(dD)t^d = R(\llcorner dD \lrcorner) t^d = R x_1^{-\alpha_{d, 1}} \cdots x_n^{-\alpha_{d, n}} t^d,
\]
where $\alpha_{d, 1}, ..., \alpha_{d, n} \in \mathbb{Z}_{\geq 0}$ are defined as follows:
\begin{equation}\label{e1-toricity1}
\alpha_{d, 1} := \llcorner da_1 \lrcorner,\quad \ldots,\quad \alpha_{d, n} := \llcorner da_n \lrcorner.
\end{equation}
Therefore, we obtain
\[
A = \bigoplus_{d \geq 0} R(dD) t^d
= \sum_{d=0}^{\infty} Rx_1^{-\alpha_{d, 1}} \cdots x_n^{-\alpha_{d, n}} t^d
= k[\Lambda],
\]
where $k[\Lambda]$ denotes the $k$-subalgebra of
$k[x_1^{\pm 1}, ..., x_n^{\pm 1}, t]$ generated by
\[
\Lambda := \{x_1, ..., x_n\} \cup \{ x_1^{-\alpha_{d, 1}} \cdots x_n^{-\alpha_{d, n}} t^d \,|\, d \in \mathbb{Z}_{\geq 0} \}.
\]
Since $k[\Lambda]$ is a finitely generated $k$-algebra,
there exists a finite subset $\Lambda' \subseteq \Lambda$ such that
$\{x_1, ..., x_n \} \subseteq \Lambda'$ and $k[\Lambda] = k[\Lambda']$.
Hence it follows from \cite[Proposition 1.1.14]{CLS11} that $W$ is an affine toric variety.
Thus (2) holds.
The assertion (3) follows from the fact that the induced morphism $T_W \to T_X$ is a homomorphism of algebraic groups.
Let us show (4).
Under the identification $M := \mathbb{Z}^{n+1} = \Hom_{\mathbb{Z}}(\Hom (T_W, \mathbb G_m), \mathbb{Z})$,
let $e_1, ..., e_{n+1} \in M$ be the standard basis, where $e_1, ..., e_n, e_{n+1}$ correspond to
the variables $x_1, ..., x_n, t$.
For each $d \in \mathbb{Z}_{\geq 0}$, set
\[
v_d := -\alpha_{d, 1}e_1 - \cdots - \alpha_{d, n}e_n + d e_{n+1}
= -\sum_{i=1}^n \alpha_{d, i}e_i + d e_{n+1} .
\]
Fix $d_0 \in \mathbb{Z}_{>0}$ such that $d_0 D$ is a Cartier divisor, that is,
$d_0 a_1, ..., d_0 a_n \in \mathbb{Z}$.
Set
\[
\tau := \mathbb{R}_{\geq 0} e_1 + \cdots + \mathbb{R}_{\geq 0} e_n + \sum_{d \geq 0} \mathbb{R}_{\geq 0}v_d,\quad \text{ and }
\quad
\tau' := \mathbb{R}_{\geq 0} e_1 + \cdots + \mathbb{R}_{\geq 0} e_n + \mathbb{R}_{\geq 0} v_{d_0}.
\]
For the time being, let us finish the proof under the assumption that $\tau = \tau'$.
By $k[M \cap \tau'] \subseteq k[\Lambda] \subseteq k[M \cap \tau] = k[M \cap \tau']$,
we obtain $A = k[\Lambda] = k[M \cap \tau']$.
Since $\tau'$ is simplicial, so is $\sigma:=(\tau ')^{\vee}=\tau^{\vee}$.
Hence $W=U_{\sigma}$ is $\mathbb{Q}$-factorial.
It is enough to show that $\tau = \tau'$.
The inclusion $\tau \supseteq \tau'$ is clear.
Fix $d \in \mathbb{Z}_{\geq 0}$.
It suffices to prove that $v_d \in \tau'$, which follows from
\begin{eqnarray*}
v_d &=& -\sum_{i=1}^n \alpha_{d, i}e_i + d e_{n+1} \\
&\overset{({\rm i})}{=}&-\sum_{i=1}^n \llcorner da_i \lrcorner e_i + d e_{n+1}\\
&=&\sum_{i=1}^n (da_i - \rdown{da_i})e_i -\sum_{i=1}^n da_i e_i + d e_{n+1}\\
&=&\sum_{i=1}^n (da_i - \rdown{da_i})e_i
+ \frac{d}{d_0} \left(- \sum_{i=1}^n d_0a_i e_i + d_0 e_{n+1} \right)\\
&\overset{({\rm ii})}{=}&\sum_{i=1}^n (da_i - \rdown{da_i})e_i
+ \frac{d}{d_0} \left(- \sum_{i=1}^n \alpha_{d_0, i} e_i + d_0 e_{n+1} \right)\\
&=&\sum_{i=1}^n (da_i - \rdown{da_i})e_i
+\frac{d}{d_0}v_{d_0}\\
&\in & \mathbb{R}_{\geq 0} e_1 + \cdots + \mathbb{R}_{\geq 0} e_n + \mathbb{R}_{\geq 0} v_{d_0} = \tau',
\end{eqnarray*}
where (i) and (ii) follow from (\ref{e1-toricity1}).
Thus (4) holds.
\end{proof}
\begin{remark}\label{r-toric-str}
We use Notation \ref{n-W-toric}
and the same notation as in the proof of Proposition \ref{p-toricity1}.
In what follows, we summarise the properties of the toric structures on $X$ and $W$.
\begin{enumerate}
\item Set $N := M:= \mathbb{Z}^{n+1}$ and $N' := M' := \mathbb{Z}^n$.
Let $e_1, ..., e_{n+1}$ (resp.\ $e'_1, ..., e'_n$) be the standard basis of $N=M=\mathbb{Z}^{n+1}$ (resp.\ $N'=M'=\mathbb{Z}^n$).
Take the projection:
\[
\varphi : N \to N', \qquad e_1 \mapsto e'_1, \quad \ldots,\quad e_n \mapsto e'_n, \quad e_{n+1} \mapsto 0.
\]
We identify $M$ and $\Hom_{\mathbb{Z}}(N, \mathbb{Z})$
via the standard inner product (cf.\ Notation \ref{n-toric}).
Similarly, we identify $M'$ and $\Hom_{\mathbb{Z}}(N', \mathbb{Z})$.
\item
For $v := v_{d_0}$, we set
\[
\tau := \mathbb{R}_{\geq 0}e_1 + \cdots + \mathbb{R}_{\geq 0} e_n + \mathbb{R}_{\geq 0} v \subseteq M_{\mathbb{R}}
\qquad {\rm and}\qquad
\sigma := \tau^{\vee} \subseteq N_{\mathbb{R}}.
\]
It follows from the proof of Proposition \ref{p-toricity1}(4) that
\[
W = U_{\sigma} = \Spec k[\tau \cap M].
\]
\item
We have
\[
\sigma = \mathbb{R}_{\geq 0} u_1 + \cdots +\mathbb{R}_{\geq 0} u_n + \mathbb{R}_{\geq 0}e_{n+1},
\]
where each $u_i \in N$ is the primitive element such that
\begin{itemize}
\item $\langle u_i, e_i \rangle >0$,
\item $\langle u_i, e_j \rangle =0$ for any $j \in \{1, ..., n\} \setminus \{i\}$, and
\item $\langle u_i, v \rangle =0$.
\end{itemize}
For each $i$, it holds that
\[
u_i = d_i e_i + \ell_i e_{n+1}
\]
for the coprime integers $d_i$ and $\ell_i$ such that
$d_i > \ell_i \geq 0$ and $a_i = \ell_i/d_i$. In particular, we get
\[
\varphi(u_1) = d_1 e'_1, \quad \ldots,\quad \varphi(u_n) = d_n e'_n, \quad \varphi(e_{n+1}) =0.
\]
\item
For $\sigma' := \sum_{i=1}^n \mathbb{R}_{\geq 0} e'_i \subseteq N'_{\mathbb{R}}$,
we have $\varphi_{\mathbb{R}}(\sigma) \subseteq \sigma'$, which induces the toric morphism
\[
\pi : W = U_{\sigma} \to U_{\sigma'} = \mathbb A^n =X.
\]
\end{enumerate}
\end{remark}
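For instance, in the simplest case $n = 1$ and $a_1 = \ell/d$ with coprime integers $d > \ell > 0$, the above description says that $W = W_{\mathbb{A}^1,\, a_1 H_1}$ is the affine normal toric surface associated to the two-dimensional cone
\[
\sigma = \mathbb{R}_{\geq 0}(d\, e_1 + \ell\, e_2) + \mathbb{R}_{\geq 0}\, e_2 \subseteq N_{\mathbb{R}} = \mathbb{R}^2.
\]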
\begin{proposition}\label{p-toricity2}
We use Notation \ref{n-W-toric}.
Then the following hold.
\begin{enumerate}
\item For any point $x \in X$, the fibre $\pi^{-1}(x)$ is one-dimensional and
geometrically irreducible.
\item $\pi : W \to X$ is flat.
\item $\Gamma$ is a torus-invariant prime divisor on $W$.
\item For every $1 \leq i \leq n$, $(\pi^*H_i)_{\mathrm{red}}$ is a torus-invariant prime divisor on $W$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let us show (1).
We may assume that $x \in X$ is a closed point.
If $x \not\in H_1, ..., x\not\in H_n$, then we have $\pi^{-1}(x) \simeq \mathbb A^1$ (Remark \ref{r-An-inductive}(1)).
Hence, the problem is reduced to the case when
$x \in H_1, ..., x \in H_r, x \not\in H_{r+1}, ..., x\not\in H_n$
for some $1 \leq r \leq n$.
By Remark \ref{r-An-inductive}(2), we may assume that $r=n$, that is,
$x$ is the origin.
In what follows, we use the notation introduced in Remark \ref{r-toric-str}.
It follows from
\cite[Theorem 3.2.6 and Lemma 3.3.21]{CLS11} that
\[
\pi^{-1}(0) = \pi^{-1}( O_X({\sigma'})) = O_W(\sigma) \amalg O_W(\widetilde{\sigma}) \subseteq \overline{O_W(\widetilde{\sigma})}
\]
for the orbits $O_X(-)$ and $O_W(-)$,
where $\sigma$ and $\widetilde{\sigma}$ are the faces such that
\begin{itemize}
\item $\dim \sigma =n+1, \dim \widetilde{\sigma} =n$,
\item $\varphi(\sigma ) =\varphi(\widetilde{\sigma}) = \sigma'$, and
\item $\widetilde{\sigma} \prec \sigma$.
\end{itemize}
Since $\pi^{-1}(0)$ is a closed subset of $W$, we have $\pi^{-1}(0) = \overline{O_W(\widetilde{\sigma})}$, which is one-dimensional and irreducible.
Thus (1) holds.
The assertion (2) follows from (1) and the fact that
$X$ is smooth and $W$ is Cohen--Macaulay.
Let us show (3).
By $\Gamma \simeq X$, $\Gamma$ is a prime divisor on $W$.
Hence it suffices to show that $\Gamma$ is torus-invariant.
We have ring homomorphisms:
\[
A = \bigoplus_{d \geq 0} R(dD) t^d \hookrightarrow k[x_1^{\pm 1}, ..., x_n^{\pm 1}, t]
\hookrightarrow k[x_1^{\pm 1}, ..., x_n^{\pm 1}, t^{\pm 1}],
\]
corresponding to $T_W$-equivariant open immersions:
\[
W = \Spec A \hookleftarrow \mathbb G_m^n \times \mathbb A^1\hookleftarrow \mathbb G_m^{n+1} =T_W.
\]
Since $\Gamma|_{\mathbb G_m^n \times \mathbb A^1}$ is $T_W$-invariant,
also $\Gamma$ itself is $T_W$-invariant.
Thus (3) holds.
Let us show (4).
Fix $1 \leq i \leq n$.
Recall that $H_i \subseteq X$ is a torus-invariant prime divisor on $X$.
Therefore, its set-theoretic inverse image $\pi^{-1}(H_i)$ is stable under the $T_W$-action.
Hence, it suffices to show that
the effective Cartier divisor $\pi^*H_i$ is irreducible,
which follows from (1) and (2).
Thus (4) holds.
\end{proof}
\begin{proposition}\label{p-toricity3}
We use Notation \ref{n-W-toric}.
Then the following hold.
\begin{enumerate}
\item $(W, \Gamma +E_W)$ is lc.
\item $(W, \Gamma + cE_W)$ is plt for any $0 \leq c <1$.
In particular, $W$ is klt.
\end{enumerate}
\end{proposition}
\begin{proof}
Let us show (1).
Since $(\pi^*H_1)_{\mathrm{red}}, \cdots, (\pi^*H_n)_{\mathrm{red}}, \Gamma$ are torus-invariant prime divisors,
$\Gamma + E_W = \Gamma + (\pi^*H_1)_{\mathrm{red}} + \cdots + (\pi^*H_n)_{\mathrm{red}}$
is a sum of torus-invariant prime divisors (Proposition \ref{p-toricity2}(3)(4)).
Therefore, $(W, \Gamma +E_W)$ is lc by \cite[Proposition 11.4.24]{CLS11}.
Thus (1) holds.
Let us show (2).
By construction,
$\Gamma|_{W \setminus E_W}$ is a smooth prime divisor on $W \setminus E_W$.
Hence $(W \setminus E_W, (\Gamma + cE_W)|_{W \setminus E_W})$ is plt.
This, together with (1), implies that $(W, \Gamma + cE_W)$ is plt.
Thus (2) holds.
\qedhere
\end{proof}
\begin{theorem}\label{t-toricity}
Let $X$ be a smooth variety and let $D$ be a $\mathbb{Q}$-divisor
such that $\{D\}$ is simple normal crossing.
For $W := W_{X, D}$ and $\Gamma := \Gamma_{X, D}$,
we set $E := \{ D\}_{\mathrm{red}}$ and $E_W := (\pi^*E)_{\mathrm{red}}$, where $\pi : W \to X$ denotes the induced morphism.
Then the following hold.
\begin{enumerate}
\item $W$ is $\mathbb{Q}$-factorial.
\item $(W, \Gamma +E_W)$ is lc.
\item $(W, \Gamma +cE_W)$ is plt for any $0 \leq c <1$.
\end{enumerate}
\end{theorem}
\begin{proof}
Set $n := \dim X$.
The problem is local on $X$.
Fix a closed point $x \in X$, around which we shall work.
Let $ D= a_1 D_1 + \cdots + a_m D_m$ be the irreducible decomposition.
We may assume that the following hold.
\begin{itemize}
\item $D=\{D\}$, that is, $0 \leq a_1 < 1, ..., 0 \leq a_m <1$.
\item $x \in D_1, ..., x \in D_m$. In particular, $m \leq n$.
\end{itemize}
Fix an \'etale morphism $\alpha : X \to Z :=\mathbb A^n$ such that
$D_1 = \alpha^*H_1, ..., D_m =\alpha^*H_m$ hold and $\alpha(x)$ is the origin,
where $H_1, ..., H_n$ denote the coordinate hyperplanes.
For $D_Z := a_1 H_1+ \cdots + a_m H_m$, we have the cartesian diagram
(Lemma \ref{l-Qcone-etale}):
\[
\begin{tikzcd}
X \arrow[d, "\alpha"'] & W_{X, D} = W \arrow[l, "\pi"']\arrow[d, "\beta"]\\
Z=\mathbb A^n & W_{Z, D_Z}.\arrow[l, "\pi_Z"]
\end{tikzcd}
\]
Since $W_{Z, D_Z}$ is a normal $\mathbb{Q}$-factorial toric variety (Proposition \ref{p-toricity1}),
$W_{Z, D_Z}$ is analytically $\mathbb{Q}$-factorial (Lemma \ref{l-anQfac-toric}).
Since $\beta : W \to W_{Z, D_Z}$ is \'etale, also $\Spec \mathcal{O}_{W, w}$ is
$\mathbb{Q}$-factorial for any closed point $w \in W$.
Therefore, $W$ is $\mathbb{Q}$-factorial, that is, (1) holds.
By $\beta^*\Gamma_{Z, D_Z} = \Gamma$ (Lemma \ref{l-funct-Gamma}, Lemma \ref{l-Qcone-etale}),
(2) and (3) follow from Proposition \ref{p-toricity3}.
\end{proof}
\subsection{Different}\label{ss-Diff}
\begin{comment}
\begin{definition}\label{d-W-for-A1}
Set
\[
N := \mathbb{Z}^2, \quad e_1 := (1, 0) \in N,\quad e_2 := (0, 1) \in N,
\quad M:= \Hom_{\mathbb{Z}}(N, \mathbb{Z}).
\]
Fix $a \in \mathbb{Q}$ with $0 < a <1$.
Take the irreducible fraction $a = \ell / d$ with $d > \ell >0$.
For $u := de_1 +\ell e_2$, we set
\[
W_a := U_{\sigma} = \Spec k[M \cap \sigma^{\vee}]
\qquad {\rm for}\qquad
\sigma := \mathbb{R}_{\geq 0} u + \mathbb{R}_{\geq 0} e_2.
\]
Note that $W_a$ is an affine normal toric surface such that
$W_a = W_{\mathbb A^1, aH}$ for the origin $H$ on $\mathbb A^1$.
\end{definition}
\begin{example}\label{e-W-for-A1}
We use the notation as in Definition \ref{d-W-for-A1}.
Set $X := \mathbb A^1$ and $H := H_1$.
For a rational number $a$ with $0 < a <1$, set $D := aH$, $W:=W_{X, D}$, and $\Gamma := \Gamma_{X, D}$.
For the irreducible fraction $a = \ell /d$ with $d > \ell >0$, we set
\[
v := -\ell e_1 + d e_2.
\]
By the proof of Proposition \ref{p-toricity}, we have
\[
W = \Spec A \qquad {\rm for}\quad
\tau := \mathbb{R}_{\geq 0} e_1 + \mathbb{R} _{\geq 0}v \quad {\rm and}\quad
A := k[M \cap \tau].
\]
For $v^{\perp} := (d, \ell) \in N_{\mathbb{R}}$, it holds that
\[
\tau^{\vee} =\mathbb{R}_{\geq 0} (-e_2) + \mathbb{R}_{\geq 0} v^{\perp}.
\]
Applying the $\mathbb{Z}$-linear automorphism $N \to N, e_1 \mapsto e_1, e_2 \mapsto -e_2$,
we have
\[
W =U_{\tau^{\vee}} \simeq U_{\sigma} =U_a
\qquad {\rm for} \qquad
u := d e_1 - \ell e_2 \quad {\rm and} \quad
\sigma := \mathbb{R}_{\geq 0} u + \mathbb{R}_{\geq 0} e_2.
\]
In particular,
\[
(W, \Gamma) =: (U_a, \Gamma_a)
\]
is a plt pair.
For more details on this singularity,
we refer to \cite[\S 10.1 and \S 10.2]{CLS11}.
\end{example}
\end{comment}
\begin{comment}
\begin{example}\label{e-toricity}
Let us give a concrete description for the case when $\Delta = \frac{1}{2} H_x + \frac{2}{3} H_y$ of $\mathbb A^2 =\Spec k[x, y]$.
We have
\[
\mathcal A = k[x,y]
\oplus k[x,y]t
\oplus k[x,y] x^{-1} t^2
\oplus k[x,y] x^{-1}y^{-1} t^3
\]
\[
\oplus k[x,y] x^{-2}y^{-1} t^4
\oplus k[x,y] x^{-2}y^{-1} t^5
\oplus k[x,y] x^{-3}y^{-2} t^6
\oplus \cdots
\subset k[x^{\pm 1}, y^{\pm 1}, t].
\]
In other words,
\[
\mathcal A = k[x, y][t, x^{-1}t^2, x^{-1}y^{-1} t^3,
x^{-2}y^{-1} t^4, x^{-2}y^{-1} t^5, x^{-3}y^{-2} t^6].
\]
Consider the semi-group of $\mathbb{Z}^3$, i.e.
\[
S :=
\langle e_1, e_2, e_3,
-e_1 + 2e_3,
-e_1-e_2+3e_3,
-2e_1 -e_2 +4e_3,
-2e_1 -e_2 +5e_3,
-3e_1 -2 e_2 +6e_3\rangle.
\]
Then $S \subset M$ is an affine semi-group and $\mathcal A = k[S]$ is the semigroup algebra.
Since $\mathcal A$ is an integrally closed domain,
$\Spec \mathcal A$ is an affine normal toric variety.
\end{example}
\end{comment}
\begin{definition}\label{d-Qfac-index}
\hphantom{a}\\
\vspace{-1em}
\begin{enumerate}
\item Let $W$ be a klt surface such that $W$ has a unique singular point $Q$.
Let $\psi : W' \to W$ be the minimal resolution of $W$.
For the $\psi$-exceptional prime divisors $E_1, ..., E_r$ and
the intersection matrix $(-E_i \cdot E_j)$, we set
\[
d_{W, Q} := \det (-E_i \cdot E_j).
\]
Note that
$d_{W, Q}$ does not depend on the choice of the ordering of $E_1, ..., E_r$.
\item
Let $W$ be a klt surface.
For a singular point $Q$ of $W$ and
an open neighbourhood $W'$ of $Q \in W$ such that
$Q$ is the unique singular point of $W'$,
we set
\[
d_Q := d_{W, Q} := d_{W', Q},
\]
where $d_{W', Q}$ is the integer defined in (1).
\end{enumerate}
\end{definition}
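As an illustration of (1) (this example is not needed in the sequel), suppose that $Q \in W$ is an $A_r$-singularity, so that $E_1, ..., E_r$ form a chain of $(-2)$-curves. Then $(-E_i \cdot E_j)$ is the tridiagonal matrix with $2$ on the diagonal and $-1$ at the adjacent entries, and an elementary induction gives $d_{W, Q} = r+1$; for instance,
\[
d_{W, Q} = \det\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} = 3 \qquad \text{for an $A_2$-singularity}.
\]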
\begin{lemma}\label{l-plt-chain}
Let $(W, \Gamma)$ be a two-dimensional plt pair, where $\Gamma$ is a prime divisor.
Assume that $W$ has a unique singular point $Q$ and
that $Q \in \Gamma$.
Then the following hold.
\begin{enumerate}
\item
For the different $\Diff_{\Gamma}$ defined by
$(K_W+\Gamma)|_{\Gamma} = K_{\Gamma} + \Diff_{\Gamma}$,
it holds that
\[
\Diff_{\Gamma} = \frac{d_Q -1}{d_Q}.
\]
\item
$d_Q$ is equal to the $\mathbb{Q}$-factorial index of $W$, that is,
$d_Q$ is the minimum positive integer satisfying the following property $(*)$.
\begin{enumerate}
\item[$(*)$] If $D$ is a Weil divisor on $W$, then $d_Q D$ is Cartier.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
The assertion (1) follows from \cite[Theorem 3.36]{kollar13}.
Let us show (2).
By \cite[Lemma 2.2]{CTW15b},
the $\mathbb{Q}$-factorial index $d'_Q$ is a positive integer with $d_Q \geq d'_Q$.
Let $\psi : W' \to W$ be the minimal resolution of $W$.
Set $\Gamma' := \psi^{-1}_*\Gamma$.
For the irreducible decomposition $\Ex(\psi) = E_1 \cup \cdots \cup E_n$, the extended dual graph
is a chain \cite[Theorem 3.36]{kollar13}:
\[
\Gamma' - E_1 - \cdots - E_n.
\]
There exist $c_1, ..., c_n \in \mathbb{Q}_{\geq 0}$ such that
\[
K_{W'} + \Gamma' + c_1E_1+ \cdots + c_n E_n = \psi^*(K_W+ \Gamma).
\]
By (1), we have $c_1 = \frac{d_Q-1}{d_Q}$.
Since $d'_Q(K_W + \Gamma)$ is Cartier, we obtain
\[
d'_Q \cdot \frac{d_Q-1}{d_Q} = d'_Q c_1 \in \mathbb{Z},
\]
Since $\gcd(d_Q -1, d_Q) = 1$, this implies $d'_Q \in d_Q \mathbb{Z}$.
Combining with $d_Q \geq d'_Q$, we get $d_Q = d'_Q$.
Thus (2) holds.
\end{proof}
\begin{proposition}\label{p-Diff}
Set $X :=\mathbb A^1$ and let $P$ be a closed point.
For coprime integers $d > \ell >0$, we set $D:= \frac{\ell}{d} P$.
Set $(W, \Gamma):=(W_{X, D}, \Gamma_{X, D})$.
Let $Q \in \Gamma$ be the closed point lying over $P$.
Then the following hold.
\begin{enumerate}
\item $(W, \Gamma)$ is a two-dimensional affine plt pair.
\item $Q$ is the unique singular point of $W$.
\item $d_Q = d$.
\item For the different $\Diff_{\Gamma}$ given by
$(K_W+\Gamma)|_{\Gamma} = K_{\Gamma} + \Diff_{\Gamma}$,
it holds that
\[
\Diff_{\Gamma} = \frac{d-1}{d} Q.
\]
\end{enumerate}
\end{proposition}
\begin{proof}
We may assume that $P$ is the origin of $\mathbb A^1$.
By setting $(n, H_1, a_1) :=(1, P, \frac{\ell}{d})$,
we may use Notation \ref{n-W-toric}.
Hence $(W, \Gamma)$ is a plt pair (Proposition \ref{p-toricity3}(2)).
Thus (1) holds.
By Remark \ref{r-toric-str},
we have $W=U_{\sigma}$ for
\[
\sigma := \mathbb{R}_{\geq 0} u + \mathbb{R}_{\geq 0} e_2, \qquad u :=de_1 + \ell e_2.
\]
By $d \geq 2$, we have $\mathbb{Z} u +\mathbb{Z} e_2 = \mathbb{Z}(de_1) + \mathbb{Z} e_2 \neq \mathbb{Z}^2$,
and hence $W$ is a singular affine normal toric surface with a unique singular point $Q'$.
Since $Q'$ is the torus-invariant point,
$Q'$ is set-theoretically equal to the intersection of the torus invariant prime divisors $\Gamma$ and $(\pi^*P)_{\mathrm{red}}$ (Proposition \ref{p-toricity2}(3)(4)).
Therefore, we obtain $Q = Q'$.
Thus (2) holds.
It follows from (1), (2), and Lemma \ref{l-plt-chain}(1) that (3) implies (4).
It suffices to show (3).
We use the notation in Remark \ref{r-toric-str}.
In particular, we equip $\mathbb{Z}^2 =N = M$ with the standard inner product: $\langle e_i, e_j \rangle =\delta_{ij}$.
Note that $(W, \Gamma)$ is a plt pair and $\Gamma$ passes through $Q$.
Therefore, it suffices to show that
the $\mathbb{Q}$-factorial index is equal to $d$ (Lemma \ref{l-plt-chain}(2)).
Take $a_1, a_2 \in \mathbb{Z}$ and set
\[
D := a_1 D_1 + a_2 D_2,
\]
where $D_1$ and $D_2$ are the torus invariant prime divisors on $W$ corresponding to $u$ and $e_2$, respectively.
We define $\beta_1, \beta_2 \in \mathbb{Q}$ such that
the equality
\[
(a_1, a_2) = (\langle m, u \rangle, \langle m, e_2 \rangle)
\quad \text{ holds}\quad\text{for }\quad
m := \beta_1 e_1 + \beta_2 e_2 \in M \otimes_{\mathbb{Z}} \mathbb{Q}.
\]
Then we have that
\[
\begin{pmatrix}
a_1\\
a_2
\end{pmatrix}
=
\begin{pmatrix}
\langle m, u \rangle \\
\langle m, e_2 \rangle
\end{pmatrix}
=
\begin{pmatrix}
\langle \beta_1 e_1 + \beta_2 e_2, de_1 + \ell e_2\rangle \\
\langle \beta_1 e_1 + \beta_2 e_2, e_2\rangle
\end{pmatrix}
=
\begin{pmatrix}
\beta_1 d + \beta_2\ell \\
\beta_2
\end{pmatrix}
=
\begin{pmatrix}
d & \ell\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
\beta_1\\
\beta_2
\end{pmatrix},
\]
which implies
\[
\begin{pmatrix}
\beta_1\\
\beta_2
\end{pmatrix}
=
\begin{pmatrix}
d & \ell\\
0 & 1
\end{pmatrix}^{-1}
\begin{pmatrix}
a_1\\
a_2
\end{pmatrix} =
\frac{1}{d}
\begin{pmatrix}
1 & -\ell\\
0 & d
\end{pmatrix}
\begin{pmatrix}
a_1\\
a_2
\end{pmatrix} =
\begin{pmatrix}
\frac{1}{d} & -\frac{\ell}{d}\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
a_1\\
a_2
\end{pmatrix}
=
\begin{pmatrix}
\frac{a_1 - \ell a_2}{d}\\
a_2
\end{pmatrix}.
\]
By \cite[Theorem 4.2.8]{CLS11}, $D= a_1 D_1 + a_2 D_2$ is Cartier if and only if $\frac{a_1-\ell a_2}{d} \in \mathbb{Z}$ and $a_2 \in \mathbb{Z}$.
Since every Weil divisor on $W$ is linearly equivalent to
a torus invariant divisor \cite[Theorem 4.1.3]{CLS11},
the $\mathbb{Q}$-factorial index of $W$ is equal to $d$.
Thus (3) holds.
\end{proof}
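To illustrate the criterion in the proof (purely as a sanity check), take $d=2$ and $\ell =1$, that is, $D = \frac{1}{2}P$. Then $a_1 D_1 + a_2 D_2$ is Cartier if and only if $a_1 - a_2 \in 2\mathbb{Z}$; in particular, $D_1$ is not Cartier, whereas twice any Weil divisor on $W$ is Cartier. Hence $d_Q = 2$ and $\Diff_{\Gamma} = \frac{1}{2}Q$, in accordance with Proposition \ref{p-Diff}(3)(4).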
\begin{theorem}\label{t-Diff}
Let $X$ be a smooth variety and let $D$ be a $\mathbb{Q}$-divisor such that
$\{ D\} = a_1 D_1+ \cdots + a_m D_m$ is simple normal crossing,
where $a_1, ..., a_m \in \mathbb{Q}_{>0}$.
For each $1 \leq i \leq m$,
let $\ell_i$ and $d_i$ be coprime integers such that
$a_i = \ell_i /d_i$ and $0 < \ell_i <d_i$.
Set $(W, \Gamma) :=(W_{X, D}, \Gamma_{X, D})$.
Then, for the different $\Diff_{\Gamma}$ defined by
\[
(K_W+\Gamma)|_{\Gamma} = K_{\Gamma} + \Diff_{\Gamma},
\]
it holds that
\[
\Diff_{\Gamma} = \sum_{i=1}^m \frac{d_i -1}{d_i}D_{\Gamma, i},
\]
where $D_{\Gamma, i}$ denotes the pullback of $D_i$ under
the composite isomorphism $\Gamma \hookrightarrow W \xrightarrow{\pi} X$.
\end{theorem}
\begin{proof}
In order to compute the coefficient of $D_{\Gamma, 1}$ in $\Diff_{\Gamma}$,
we may replace $X$ by an open subset of $X$ which intersects $D_1$.
Hence, the problem is reduced to the case when $D = \{ D\} = a_1D_1$.
Taking a suitable \'etale coordinate,
we may assume that
$X=\mathbb A^n$ and $D_1$ is a coordinate hyperplane (Lemma \ref{l-Qcone-etale}).
Since the problem is stable under smooth base change,
we may further assume that $X= \mathbb A^1$ and $D_1$ is a closed point (Lemma \ref{l-Qcone-etale}).
Then the assertion follows from Proposition \ref{p-Diff}.
\end{proof}
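For example, if $\{D\} = \frac{1}{2}D_1 + \frac{2}{3}D_2$, so that $(\ell_1, d_1) = (1, 2)$ and $(\ell_2, d_2) = (2, 3)$, then Theorem \ref{t-Diff} gives
\[
\Diff_{\Gamma} = \frac{1}{2}D_{\Gamma, 1} + \frac{2}{3}D_{\Gamma, 2};
\]
we record this special case purely for illustration.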
\begin{theorem}\label{t-klt}
Let $X$ be a smooth projective variety and
let $\Delta$ be a simple normal crossing $\mathbb{Q}$-divisor such that
$\llcorner \Delta \lrcorner=0$ and $\Delta$ has standard coefficients.
Assume that
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item $D:=-(K_X+\Delta)$ is ample, and
\item $\rho(X)=1$.
\end{enumerate}
For $W := W_{X, D}$ and $V:=V_{X, D}$, let
$\mu :W \to V$ be the induced morphism.
Then the following hold.
\begin{enumerate}
\item $\rho(W/V)=1$.
\item $-\Gamma$ is $\mu$-ample.
\item $V$ is $\mathbb{Q}$-factorial.
\item $V$ is klt.
\end{enumerate}
\end{theorem}
\begin{proof}
Note that the following property (iii) holds
by (i), (ii), and \cite[Corollary 6.5]{tanakainseparable}.
\begin{enumerate}
\item[(iii)] Any numerically trivial Cartier divisor on $X$ is torsion.
\end{enumerate}
The assertion (1) follows from (ii) and the fact that
$\mu : W \to V$ is a projective birational morphism
with $\Ex(\mu) = \Gamma$ (Theorem \ref{t-Qcone-birat}).
Fix a curve $C$ satisfying $C \subseteq \Gamma$.
Let us show (2).
Pick an effective Cartier divisor $H_V$ on $V$ passing through the vertex $v \in V$.
It holds that
\[
\mu^* H_V = \mu^{-1}_* H_V + \alpha \Gamma
\]
for some $\alpha \in \mathbb{Z}_{>0}$.
Since $\mu^{-1}_*H_{V}$ intersects $\Gamma$,
it follows from $\rho(\Gamma)=\rho(X)=1$ that $\mu^{-1}_*H_V \cdot C>0$.
Therefore, it holds that
\[
0 = \mu^* H_V \cdot C = \mu^{-1}_* H_V\cdot C+ \alpha \Gamma \cdot C
> \alpha \Gamma \cdot C.
\]
By $\alpha >0$, we get $\Gamma \cdot C<0$.
It follows from (1) that $-\Gamma$ is $\mu$-ample.
Thus (2) holds.
We now prove (3) and (4) for the case when
$k$ is of positive characteristic.
Let us show (3).
Fix a prime divisor $D_V$ on $V$ and set $D_W := \mu_*^{-1}D_V$.
We may assume that $v \in D_V$.
By (1) and (2),
there is $\beta \in \mathbb{Q}$ such that $D_W + \beta \Gamma \equiv_{\mu} 0$.
In particular, $D_W + \beta \Gamma$ is $\mu$-nef and $\mu$-big.
By (1) and $v \in D_V$, $D_W$ is $\mu$-ample.
It holds by (2) that $\beta > 0$.
It follows from \cite[Lemma 2.18(1)]{CT17} that $\mathbb E_{\mu}(D_W + \beta \Gamma) \subseteq \Gamma$.
By (iii) and \cite[Proposition 2.20]{CT17},
$D_W + \beta \Gamma$ is $\mu$-semi-ample.
Hence, its pushforward $\mu_*(D_W + \beta \Gamma) = D_V$ is $\mathbb{Q}$-Cartier. Thus (3) holds.
Let us show (4).
By (3), there is $\gamma \in \mathbb{Q}$ such that
\[
K_W + (1-\gamma) \Gamma =\mu^*K_V.
\]
For the different $\Delta_{\Gamma}$ defined by
$(K_W + \Gamma)|_{\Gamma} = (K_{\Gamma} +\Delta_{\Gamma})$, the following holds:
\[
(K_W + \Gamma) \cdot C = (K_{\Gamma} +\Delta_{\Gamma}) \cdot C = (K_X+ \Delta) \cdot C_X <0,
\]
where $C_X := \pi(C)$ and the second equality holds by Theorem \ref{t-Diff}.
This inequality, together with $\Gamma \cdot C<0$, implies that $\gamma >0$.
Then $(W, (1-\gamma) \Gamma)$ is sub-klt by Theorem \ref{t-toricity}, that is,
$(1-\gamma)\Gamma$ is a (possibly non-effective) $\mathbb{Q}$-divisor and
$a(E, W, (1-\gamma) \Gamma) >-1$ for any prime divisor $E$ over $W$.
Therefore, $(V, 0)$ is klt.
This completes the proof for the case when
$k$ is of positive characteristic.
It remains to show (3) and (4) in the case when
$k$ is of characteristic zero.
By the above argument, it holds that
\[
K_W + \Gamma \equiv_{\mu} \gamma \Gamma
\]
for some $\gamma >0$.
By (2), $(W, B)$ is klt and $-(K_W+B)$ is $\mu$-ample
for $B := (1-\delta )\Gamma$ and $0 < \delta \ll 1$.
Hence $\mu : W \to V$ is a $(K_W+B)$-negative divisorial contraction.
It follows from the same argument as in \cite[Corollary 3.18]{KM98} that $V$ is $\mathbb{Q}$-factorial.
By the same argument as above, (4) holds.
\end{proof}
\section{Preliminaries}
\subsection{Notation}\label{ss-notation}
\begin{enumerate}
\item
Throughout this paper, we work over an algebraically closed field $k$ of characteristic $p>0$, unless otherwise specified. However, we emphasise that all the results in the preliminaries also hold over perfect fields. Moreover, at the end of the paper we deduce from the algebraically-closed-field case that the main results of our article hold over perfect fields, too.
We shall freely use notation and terminologies from \cite{hartshorne77} and \cite{KTTWYY1}.
\item
We say that $X$ is a {\em variety} (over field $k$) if
$X$ is an integral scheme which is separated and of finite type over $k$.
We say that $X$ is a {\em surface}
(resp.\ a {\em curve})
if $X$ is a variety of dimension two (resp.\ one).
\item We say that $(X, \Delta)$ is {\em log smooth} if $X$ is a smooth variety and
$\Delta$ is a simple normal crossing effective $\mathbb{Q}$-divisor.
\item\label{ss-n-log-lift}
Let $X$ be a normal surface and let $D$ be a $\mathbb{Q}$-divisor on $X$.
We say that $(X, D)$ is {\em log liftable} if
there exists a log resolution $f : Y \to X$ of $(X, D)$ such that $(Y, f_*^{-1}D_{\mathrm{red}} +\Ex(f))$ lifts to $W(k)$.
If $(X, D)$ is log liftable, then $(Z, g_*^{-1}D_{\mathrm{red}} +\Ex(g))$ lifts to $W(k)$ for any log resolution $g : Z \to X$ of $(X, D)$ (Proposition \ref{p-log-lift-any}).
\item
Given a normal variety $X$ and a $\mathbb{Q}$-divisor $D$,
recall that the subsheaf $\mathcal{O}_X(D)$ of the constant sheaf $K(X)$ is defined by
\[
\Gamma(U, \mathcal{O}_X(D)) := \{ \varphi \in K(X) \,|\, ({\rm div}\,(\varphi) + D)|_U \geq 0\}.
\]
Clearly, we have $\mathcal{O}_X(D) = \mathcal{O}_X(\rdown{D})$.
Moreover, when $X = \Spec R$, we set $R(D) := \Gamma(X, \mathcal{O}_X(D))$.
\item Given a normal variety $X$ and a $\mathbb{Q}$-divisor $\Delta$,
$\coeff (\Delta)$ denotes its coefficient set. More precisely,
if $\Delta = \sum_{i \in I} \delta_i \Delta_i$ is the irreducible decomposition
such that $\delta_i\neq 0$ for every $i \in I$,
then $\coeff (\Delta) := \{ \delta_i \,|\, i \in I\} \subseteq \mathbb{Q}$.
Similarly, a coefficient of $\Delta$ is always assumed to be nonzero under our notation.
\item Given a normal variety $X$ and a $\mathbb{Q}$-divisor $D$ with the irreducible decomposition $D = \sum_{i \in I} d_iD_i$,
we set $D_{\mathrm{red}} := \sum_{i \in I} D_i$.
{We say that $D$ is {\em simple normal crossing}
if for every closed point $x \in D_{\mathrm{red}}$, $X$ is smooth at $x$ and there exists a regular system of parameters $x_1,\ldots, x_d$ in the maximal ideal $\mathfrak{m}$ of $\mathcal{O}_{X,x}$ and $1 \leq r \leq d$ such that $D_{\mathrm{red}}$ is defined by $x_1\cdots x_r$ in $\mathcal{O}_{X,x}$.}
\item Let $X$ be a normal variety, let $D$ be a reduced divisor on $X$, and let $j\colon U\to X$ be the inclusion morphism from the log smooth locus $U$ of $(X,D)$. We denote the sheaf of $i$-th reflexive logarithmic differential forms $j_{*}\Omega_U^{i}(\log D|_U)$ by $\Omega_X^{[i]}(\log D)$.
\end{enumerate}
\subsection{Log liftability}
\begin{lemma}\label{lemma:lift of bl-down}
Let $Y$ be a smooth projective surface and let $E$ be a $(-1)$-curve on $Y$.
Let $f\colon Y\to X$ be the contraction of $E$.
Suppose that there exists a lift $\mathcal{Y}$ over $W(k)$ of $Y$.
Then there exist a lift $\mathcal{X}$ of $X$ and a lift $\tilde{f}\colon \mathcal{Y}\to \mathcal{X}$ of $f$ over $W(k)$.
\end{lemma}
\begin{proof}
Since $H^1(E, N_{E/Y})=H^1(E, \mathcal{O}_E(-1))=0$, there exists a lift $\mathcal{E}$ of $E$.
Let $\mathcal{L}$ be an ample Cartier divisor on $\mathcal{Y}$ and $L\coloneqq \mathcal{L}|_{Y}$.
Take $\lambda\in\mathbb{Q}_{>0}$ so that $(L+\lambda E)\cdot E=0$.
Then $L+\lambda E-K_Y$ is nef and big over $X$ and the base point free theorem shows that there exists an ample Cartier divisor $A$ on $X$
and $m\in\mathbb{Z}_{>0}$ such that $m(L+\lambda E)=f^{*}A$. Replacing $A$ with its multiple, we can assume that $H^i(X, \mathcal{O}_X(A))=0$ for every $i>0$.
Since $R^if_{*}\mathcal{O}_Y(f^{*}A)=\mathcal{O}_X(A)\otimes R^{i}f_{*}\mathcal{O}_Y=0$ for $i>0$, we have $H^i(Y, \mathcal{O}_Y(m(L+\lambda E)))= H^i(Y, \mathcal{O}_Y(f^{*}A))\cong H^i(X,\mathcal{O}_X(A))=0$.
Then it follows from the Grauert theorem that $H^0(\mathcal{Y}, \mathcal{O}_{\mathcal{Y}}(m(\mathcal{L}+\lambda \mathcal{E})))$ is a free $W(k)$-module and
\[
H^0(\mathcal{Y}, \mathcal{O}_{\mathcal{Y}}(m(\mathcal{L}+\lambda \mathcal{E})))\otimes_{W(k)}k \cong H^0(Y, \mathcal{O}_{Y}(m(L+\lambda E))).
\]
Let $\mathcal{Z}$ be the base locus of $m(\mathcal{L}+\lambda \mathcal{E})$. Since $\mathcal{Y}$ is proper over $\Spec\,W(k)$, the image of $\mathcal{Z}$ in $\Spec\,W(k)$ is a closed subset.
Since $m(L+\lambda E)=f^{*}A$ is base point free (we may assume that $A$ is globally generated by taking the multiple above large enough), $\mathcal{Z}$ does not meet the closed fibre $Y$, and hence $\mathcal{Z}$ is the empty set.
Now, the morphism associated to $m(\mathcal{L}+\lambda \mathcal{E})$ gives a lift of the morphism associated to $m(L+\lambda E)$, which is $f$.
\end{proof}
\begin{proposition}\label{p-log-lift-any}
Let $(X, D)$ be a pair of a normal projective surface and a $\mathbb{Q}$-divisor such that $(X,D)$ is log liftable.
Then for any log resolution $f\colon Y\to X$ of $(X,D)$, the pair $(Y, f^{-1}_{*}D_{\mathrm{red}}+\Exc(f))$ lifts to $W(k)$.
\end{proposition}
\begin{proof}
The assertion follows from Lemma \ref{lemma:lift of bl-down} and the proof of \cite[Lemma 2.9]{Kaw3}.
\end{proof}
\subsection{Singularities in
the minimal model program}
We recall some notation in the theory of singularities in the minimal model program.
For more details, we refer the reader to \cite[Section 2.3]{KM98} and \cite[Section 1]{kollar13}.
We say that $(X, \Delta)$ is a \textit{log pair}
if $X$ is a normal variety over $k$ and $\Delta$ is an effective $\mathbb{Q}$-divisor such that
$K_X + \Delta$ is $\mathbb{Q}$-Cartier.
For a proper birational morphism $f: X' \to X$ from a normal variety $X'$
and a prime divisor $E$ on $X'$, the \textit{discrepancy} of $(X, \Delta)$
at $E$ is defined as
\[
a(E, X, \Delta) := \text{the coefficient of }E\text{ in }K_{X'} - f^* (K_X + \Delta).
\]
We say that $(X, \Delta)$ is \textit{klt} (resp.\ \textit{lc})
if $(X, \Delta)$ is a log pair such that
$a (E, X, \Delta) > -1$ (resp.\ $a (E, X, \Delta) \geq -1$) for any prime divisor $E$ over $X$.
Note that, in contrast to \cite{KM98} and \cite{kollar13},
we always assume $\Delta$ to be effective for a klt (or lc) pair $(X, \Delta)$.
We say that $X$ is \textit{klt} (resp.\ \textit{lc}) if so is $(X, 0)$.
\subsection{Log del Pezzo pairs}
\begin{definition
{We say that}
\begin{enumerate}
\item $(X, \Delta)$ is a {\em log del Pezzo pair} if
$(X, \Delta)$ is a two-dimensional projective klt pair such that $-(K_X+\Delta)$ is ample;
\item $(X, \Delta)$ is a {\em weak log del Pezzo pair} if
$(X, \Delta)$ is a two-dimensional projective klt pair such that $-(K_X+\Delta)$ is nef and big;
\item $X$ is {\em of del Pezzo type} if
$X$ is a projective normal surface and there exists an effective $\mathbb{Q}$-divisor $\Delta$ such that
$(X, \Delta)$ is a log del Pezzo pair.
\end{enumerate}
\end{definition}
\begin{remark}\label{r-log-dP-summary}
{The following properties of del Pezzo pairs will be used later on.}
\begin{enumerate}
\item
Let $(X, \Delta)$ be a weak log del Pezzo pair.
Since $-(K_X+\Delta)$ is big, we can write $-(K_X+\Delta) = A+E$ for some ample $\mathbb{Q}$-divisor $A$ and an effective $\mathbb{Q}$-divisor $E$.
Then $(X, \Delta+ \epsilon E)$ is a log del Pezzo pair for $0 < \epsilon \ll 1$.
Indeed, $(X, \Delta+ \epsilon E)$ is klt and
\[
-(K_X+\Delta +\epsilon E)=-(1-\epsilon)(K_X+\Delta) + \epsilon A
\]
is ample
\item
We have the following implications:
\begin{align*}
(X, \Delta) \text{ is a log del Pezzo pair}
&\Rightarrow
(X, \Delta) \text{ is a weak log del Pezzo pair} \\
&\Rightarrow
X \text{ is of del Pezzo type.}
\end{align*}
Indeed, the first implication is clear and the second one follows from (1).
\item
Let $X$ be a surface of del Pezzo type.
Fix an effective $\mathbb{Q}$-divisor $\Delta$ on $X$ such that $(X, \Delta)$ is klt
and $-(K_X+\Delta)$ is ample.
Then the following hold.
\begin{enumerate}
\item $X$ is $\mathbb{Q}$-factorial \cite[Theorem 5.4]{tanaka12}.
\item
For any $\mathbb{Q}$-divisor $D$, we may run a $D$-MMP \cite[Theorem 1.1]{tanaka12}.
Indeed, we have
\begin{equation}\label{e1-log-dP-summary}
\epsilon D = K_X+\Delta -(K_X+\Delta) +\epsilon D \sim_{\mathbb{Q}} K_X+\Delta'
\end{equation}
for some $0 < \epsilon \ll 1$ and effective $\mathbb{Q}$-divisor $\Delta'$ such that $(X, \Delta')$ is klt.
\item
If $D$ is a nef $\mathbb{Q}$-divisor, then $D$ is semi-ample by (\ref{e1-log-dP-summary})
and \cite[Theorem 1.2]{tanaka12}.
\item
It holds that
\[
\NE(X) = \overline{\NE}(X) = \sum_{i=1}^n R_i
\]
for finitely many extremal rays $R_1, ..., R_n$ of $\NE(X)$ \cite[Theorem 3.13]{tanaka12}.
For each $R_i$, there exists $f_i : X \to Y_i$, called the contraction of $R_i$,
which is a morphism to a projective normal variety $Y_i$
such that $(f_i)_*\mathcal{O}_X = \mathcal{O}_{Y_i}$ and, given a curve $C$ on $X$,
$f_i(C)$ is a point if and only if $[C] \in R_i$ \cite[Theorem 3.21]{tanaka12}.
\end{enumerate}
\end{enumerate}
\end{remark}
\subsection{Quasi-$F$-splitting}\label{ss-def-QFS}
In this subsection, we summarise the definition and some basic properties of a quasi-$F$-splitting.
For details, we refer to \cite{KTTWYY1}, in which we work in a more general setting.
\begin{definition}\label{d-QFS}
Let $X$ be a normal variety and let $\Delta$ be an effective $\mathbb{Q}$-divisor on $X$ satisfying $\rdown{\Delta} =0$.
For $n \in \mathbb{Z}_{>0}$, we say that $(X, \Delta)$ is $n$-{\em quasi-$F$-split} if
there exists
a $W_n\mathcal{O}_X$-module homomorphism $\alpha : F_* W_n\mathcal{O}_X(p\Delta) \to \mathcal{O}_X$ which completes the following commutative diagram:
\begin{equation*} \label{diagram:intro-definition}
\begin{tikzcd}
W_n\mathcal{O}_X(\Delta) \arrow{r}{{F}} \arrow{d}{R^{n-1}} & F_* W_n \mathcal{O}_X(p\Delta) \arrow[dashed]{ld}{\exists\alpha} \\
\mathcal{O}_X.
\end{tikzcd}
\end{equation*}
We say that $(X, \Delta)$ is {\em quasi-$F$-split} if there exists $n \in \mathbb{Z}_{>0}$ such that $(X, \Delta)$ is $n$-quasi-$F$-split.
\end{definition}
\begin{remark}\label{r-QFS}
Let $X$ be a normal variety and let $\Delta$ be an effective $\mathbb{Q}$-divisor on $X$ satisfying $\rdown{\Delta} =0$.
Fix $n \in \mathbb{Z}_{>0}$.
We then define $Q_{X, \Delta, n}$ by the following pushout diagram of $W_n\mathcal{O}_X$-modules:
\[
\begin{tikzcd}
W_n\mathcal{O}_X(\Delta) \arrow{r}{{F}} \arrow{d}{R^{n-1}} & F_* W_n\mathcal{O}_X(p\Delta) \arrow{d} \\
\mathcal{O}_X \arrow{r}{\Phi_{X, \Delta, n}} & Q_{X, \Delta, n}.
\end{tikzcd}
\]
\begin{enumerate}
\item
The $W_n\mathcal{O}_X$-module $Q_{X, \Delta, n}$ is naturally a coherent $\mathcal{O}_X$-module, and hence
$\Phi_{X, \Delta, n} : \mathcal{O}_X \to Q_{X, \Delta, n}$ is an $\mathcal{O}_X$-module homomorphism
\cite[Proposition 3.6(2)]{KTTWYY1}.
In particular, the following {conditions} are equivalent \cite[Proposition 3.7]{KTTWYY1}.
\begin{itemize}
\item $(X, \Delta)$ is $n$-quasi-$F$-split.
\item
$\Phi_{X, \Delta, n} : \mathcal{O}_X \to Q_{X, \Delta, n}$ splits as an $\mathcal{O}_X$-module homomorphism.
\end{itemize}
\item
Assume that $X$ is projective.
Then
$(X, \Delta)$ is $n$-quasi-$F$-split if and only if
\begin{equation} \label{eq:coh-def-of-qfsplit}
H^{\dim X}(\Phi_{X, K_X+\Delta, n}) :
H^{\dim X}(X,
\mathcal{O}_X(K_X+\Delta)) \to H^{\dim X}(X, Q_{X, K_X+\Delta, n})
\end{equation}
is injective \cite[Lemma 3.13]{KTTWYY1}.
\end{enumerate}
\end{remark}
\begin{theorem}\label{t-QFS-criterion}
Let $(X, \Delta)$ be a log del Pezzo pair.
Assume that there exists a log resolution $f: Y \to X$ of $(X, \Delta)$ and a $\mathbb{Q}$-divisor $B_Y$ on $Y$ such that the following conditions hold:
\begin{enumerate}
\item[(A)] $\rdown{B_Y} \leq 0$, $f_*B_Y =\Delta$, and $-(K_Y+B_Y)$ is ample,
\item[(B)] $H^0(Y, \mathcal{O}_Y(K_Y + E + p^{\ell}(K_Y+B_Y))) =0$
for every $\ell \in \mathbb{Z}_{>0}$,
\item[(C)] $H^0(Y, \Omega_Y^1(\log E) \otimes \mathcal{O}_Y(K_Y+B_Y))=0$,
\end{enumerate}
where $E := (B_Y)_{\mathrm{red}}$. Then $(X, \Delta)$ is quasi-$F$-split.
\end{theorem}
\begin{proof}
The assertion holds by \cite[Theorem 5.13]{KTTWYY1}
and the following inclusion:
\[
H^0(Y, B_1\Omega_Y^2(\log E)(p^{\ell}(K_Y+B_Y)))
\subseteq
H^0(Y, \mathcal{O}_Y(K_Y + E + p^{\ell}(K_Y+B_Y))) =0.
\]
\end{proof}
\subsection{Higher log Cartier operator for $\mathbb{Q}$-divisors}
For definitions and fundamental properties of higher log Cartier operators,
we refer to \cite[Subsection 5.2]{KTTWYY1}.
\begin{lemma}\label{l-Cartier-op}
Let $(Y,E)$ be a log smooth pair, where $E$ is a reduced divisor.
Let \vspace{0.05em} $D$ be a $\mathbb{Q}$-divisor satisfying $\Supp\, \{D\}\subseteq E$.
Then we have the following exact \vspace{0.1em} sequences for all $i \in \mathbb{Z}_{\geq 0}$ and $m, r\in\mathbb{Z}_{>0}$ satisfying $r<m$.
\begin{smallertags}
{\small \begin{align}
&\!\!0 \to B_m\Omega^{i}_Y(\log\,E)(p^mD)\to Z_m\Omega^{i}_Y(\log\,E)(p^mD)
\overset{{C^{m}}}{\to}
\Omega^{i}_Y(\log\,E)(D)\to 0,\label{exact1}\\[0.5em]
&\!\!0 \to F_{*}^{m{-}r}\!B_{r}\Omega^{i}_Y(\log E)(p^{m}\!D) \!\to\! B_{m}\Omega^{i}_Y(\log E)(p^m\!D) \overset{{C^{r}}}{\to} B_{m-r}\Omega^{i}_Y(\log E)(p^{m-r}\!D) \!\to\! 0,\label{exact2}\\[0.5em]
&\!\!0 \to F_{*}^{m{-}r}B_{r}\Omega^{i}_Y(\log E)(p^{m}\!D) \!\to\! Z_{m}\Omega^{i}_Y(\log E)(p^m\!D) \overset{{C^{r}}}{\to} Z_{m-r}\Omega^{i}_Y(\log E)(p^{m-r}\!D) \!\to\! 0,\label{exact3}\\[0.5em]
&\!\!0 \to Z_m\Omega^{i}_Y(\log\,E)(p^{m}D) \to F_{*}Z_{m-1}\Omega^{i}_Y(\log\,E)(p^{m}D)
\xrightarrow{\psi}
B_1\Omega^{i+1}_Y(\log\,E)(pD) \to 0\label{exact4},
\end{align}}
\end{smallertags}
\!\!where $\psi := F_{*}d\circ C^{m{-}1}$.
\end{lemma}
\begin{proof}
Here, (\ref{exact1}) is exact by \cite[(5.7.1)]{KTTWYY1}.
The exactness of (\ref{exact2}) and (\ref{exact3}) follow from the constructions of $B_m\Omega^{i}_Y(\log\,E)(p^mD)$ and $Z_m\Omega^{i}_Y(\log\,E)(p^mD)$ (cf.\ \cite[Definition 5.6]{KTTWYY1}).
As for the last one (\ref{exact4}), see \cite[Lemma 5.8]{KTTWYY1}.
\end{proof}
\begin{lemma}\label{lem:Serre's map}
Let $(Y,E)$ be a log smooth pair, where $E$ is a reduced divisor.
Let $D$ be a $\mathbb{Q}$-divisor satisfying $\Supp \{D\}\subseteq E$.
Then there exists the following exact sequence of $W_m\mathcal{O}_Y$-modules
\[
0\to W_m\mathcal{O}_Y(D) \xrightarrow{F} F_{*}W_m\mathcal{O}_Y(pD) \xrightarrow{s} B_m\Omega_Y^{1}(\log\,E)(p^mD)\to 0,
\]
where $s$ is defined by
\[
s(F_*(f_0,\ldots,f_{m-1}))\coloneqq
F_*^m(f_{0}^{p^{m-1}-1}df_{0} + f_1^{p^{m-2}-1}df_1+\ldots+df_{m-1}).
\]
Furthermore, ${\rm Coker}(\Phi_{Y, D, m} : \mathcal{O}_Y(D) \to Q_{Y, D, m}) \simeq B_m\Omega_Y^{1}(\log\,E)(p^mD)$.
\end{lemma}
\begin{proof}
See \cite[Lemma 5.9]{KTTWYY1}.
\end{proof}
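We note, purely as an orientation for the reader, that for $m=1$ the above exact sequence specialises (with $W_1\mathcal{O}_Y = \mathcal{O}_Y$ and $s(F_*f_0) = F_*(f_0^{p^{0}-1}df_0) = F_*(df_0)$) to
\[
0\to \mathcal{O}_Y(D) \xrightarrow{F} F_{*}\mathcal{O}_Y(pD) \xrightarrow{F_{*}d} B_1\Omega_Y^{1}(\log\,E)(pD)\to 0.
\]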
\section{Log resolutions of klt surfaces with effective boundaries}\label{s-klt-resol}
Throughout this section, we work over an algebraically closed field $k$.
The main question of this section is as follows.
\begin{question}\label{q-klt-resol}
Let $(X, \Delta)$ be a two-dimensional klt pair with standard coefficients.
Does there exist a log resolution $\varphi : V \to X$ of $(X, \Delta)$
such that $\Delta_V$ is effective for the $\mathbb{Q}$-divisor $\Delta_V$ defined by $K_V+\Delta_V = \varphi^*(K_X+\Delta)$?
\end{question}
If $\Delta=0$, then it is well known that the answer is affirmative,
since the minimal resolution of $X$ is a log resolution of $(X, \Delta=0)$ \cite[Theorem 4.7]{KM98}.
Unfortunately, the answer is negative in general.
For example, we cannot find such a log resolution when $(X, \Delta) = (\mathbb A^2, \frac{1}{2} C)$, where $C$ is the cuspidal curve $\{y^2 = x^{2n+1}\}$ for some $n \in \mathbb{Z}_{>0}$.
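Let us sketch the computation in the simplest case $n=1$ (the cases $n \geq 2$ are similar). Blowing up the origin $x$, we have $K_Y + \frac{1}{2}C_Y + 0\cdot E_1 = f^*(K_X + \frac{1}{2}C)$, since $\mult_x \frac{1}{2}C = 1$, and the strict transform $C_Y$ is smooth but tangent to $E_1$ at a single point $y$. Any log resolution of $(X, \frac{1}{2}C)$ must extract (the strict transform of) the exceptional curve $E_2$ of the blowup of $Y$ at $y$, because $C_Y$ and $E_1$ can only be separated by blowing up over $y$; the coefficient of $E_2$ in the crepant pullback of $K_X+\frac{1}{2}C$ is $\mult_y (\frac{1}{2}C_Y) - 1 = -\frac{1}{2} < 0$, so $\Delta_V$ cannot be effective.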
The purpose of this section is to prove that
if $(X,\Delta)$ is a counterexample to the above question, then, after localising at some point $x\in X$, the surface $X$ is smooth, $\Delta=\frac{1}{2}C$ for some prime divisor $C$, and $(X,\Delta)$ has the same dual graph as the example $(\mathbb A^2, \frac{1}{2} C)$ above (cf.\ Theorem \ref{t-hiromu-resol}, Figure \ref{figure-V}).
In Subsection \ref{ss-klt-resol-sm},
we treat the case when $X$ is smooth and the answer to Question \ref{q-klt-resol} is affirmative.
In Subsection \ref{ss-klt-resol-sing},
we prove that Question \ref{q-klt-resol} is affirmative
for the case when $X$ is singular.
Based on these subsections, we shall prove the main theorem of this section in Subsection \ref{ss-klt-resol-general}.
\subsection{The smooth case}\label{ss-klt-resol-sm}
In this subsection, we consider Question \ref{q-klt-resol}
for the case when $X$ is smooth.
The main objective is to show that the answer to Question \ref{q-klt-resol} is affirmative in many cases (Lemma \ref{l-klt-resol-r=3}, Lemma \ref{l-klt-resol-r=2}, Lemma \ref{l-klt-resol-r=1}).
We start by introducing the following notation.
\begin{notation}\label{n-klt-resol-sm}
Let $(X, \Delta)$ be a two-dimensional klt pair such that $X$ is smooth.
Assume that
there exists a closed point $x$ of $X$ such that
$\Delta_{\mathrm{red}}|_{X \setminus x}$ is smooth
and $\Delta$ is not simple normal crossing at $x$.
Let $\Delta = \sum_{i=1}^r c_i C_i$ be the irreducible decomposition,
where $c_i \geq 1/2$ holds for any $1 \leq i \leq r$.
Assume that all of $C_1, ..., C_r$ pass through $x$.
\end{notation}
\begin{lemma}\label{l-klt-resol-r=3}
We use Notation \ref{n-klt-resol-sm}.
Assume $r\geq 3$.
Then $r=3$ and there exists a log resolution $\varphi : V \to X$ of $(X, \Delta)$ such that
$\varphi(\Ex(\varphi)) = \{x\}$ and $\Delta_V$ is effective
for the $\mathbb{Q}$-divisor $\Delta_V$ defined by $K_V + \Delta_V = \varphi^*(K_X+\Delta)$.
\end{lemma}
\begin{proof}
Let $f: Y \to X$ be the blowup at $x$. Set $E:=\Ex(f)$. We have
\[
K_Y + \Delta_Y = K_Y + f_*^{-1}\Delta + bE = f^*(K_X+\Delta)\quad \text{for}\quad \Delta_Y := f_*^{-1}\Delta + bE
\]
\[
\text{and}\quad
b := -1 + \mult_x \Delta = -1 + \sum_{i=1}^r c_i \mult_x C_i \geq -1 + \frac{1}{2}\sum_{i=1}^r \mult_x C_i.
\]
If $r \geq 4$, then we obtain $b \geq 1$, which contradicts the assumption that $(X, \Delta)$ is klt.
Hence $r=3$.
By decreasing the coefficients of $\Delta,$ we may assume that all the coefficients of $\Delta$ are $\frac{1}{2}$, that is, $\Delta = \frac{1}{2}(C_1+C_2+C_3)$; indeed, if $\Delta' \leq \Delta$ has the same support and $\Delta'_V$ is effective for a log resolution $\varphi : V \to X$ of $(X, \Delta')$, then $\varphi$ is also a log resolution of $(X, \Delta)$ and $\Delta_V = \Delta'_V + \varphi^*(\Delta - \Delta') \geq 0$.
Then
\[
\mult_x \Delta = \frac{3}{2}\qquad\text{and}\qquad
\mult_x C_1 = \mult_x C_2 = \mult_x C_3 = 1.
\]
Hence each $C_i$ is smooth at $x$. In particular, we obtain
\[
\Delta_Y = \frac{1}{2}(E+C_{1, Y} + C_{2, Y} + C_{3, Y}) \qquad
{\text{for}\qquad C_{i, Y} := f_*^{-1}C_i}.
\]
After permuting $C_1, C_2, C_3$ if necessary, we have the following three cases.
\begin{enumerate}
\item Each of $C_1+C_2, C_2+C_3, C_3+C_1$ is simple normal crossing.
\item None of $C_1+C_2, C_2+C_3, C_3+C_1$ is simple normal crossing.
\item $C_1+C_2$ is not simple normal crossing, but $C_1+C_3$ is simple normal crossing.
\end{enumerate}
For $1 \leq i < j \leq 3$,
$C_i + C_j$ is simple normal crossing at $x$ if and only if $C_{i, Y} \cap C_{j, Y} = \emptyset$.
Assume (1).
Then the blowup $Y \to X$ is a log resolution of $(X, \Delta)$,
because $C_{i, Y} \cap C_{j, Y} = \emptyset$ for $1 \leq i < j \leq 3$.
We are done.
Assume (2).
We have $C_{1, Y} \cap E = C_{2, Y} \cap E = C_{3, Y} \cap E =: y$, which is a point.
Then it holds that
\[
\mult_y \Delta_Y = \frac{1}{2}\mult_y (E+C_{1, Y} + C_{2, Y} + C_{3, Y})=2.
\]
However, this implies that $(Y, \Delta_Y)$ is not klt, which contradicts the assumption that $(X, \Delta)$ is klt. Hence the case (2) does not occur.
Assume (3). In this case, we have
\[
y := C_{1, Y} \cap E = C_{2, Y} \cap E \neq C_{3, Y} \cap E.
\]
Note that $\Delta_Y$ is simple normal crossing around $E \cap C_{3, Y}$.
On the other hand, the three curves $E, C_{1, Y}, C_{2, Y}$ intersect at a single point $y$.
We again take the blowup $g: Z \to Y$ at $y$.
Then the same argument as above can be applied after replacing $(X, \Delta, x)$ by $(Y, \Delta_Y, y)$.
Moreover, we can repeat the same procedure.
Since the intersection multiplicity strictly drops:
$(C_1 \cdot C_2)_x > (C_{1, Y} \cdot C_{2, Y})_y$\footnote{{this inequality is proven as follows: by taking a compactification of $X$ such that $(C_1 \cap C_2)_{\mathrm{red}} = \{x\}$, we may assume that
$X$ is proper and $C_1 \cdot C_2 = (C_1 \cdot C_2)_x$; then
$(C_1 \cdot C_2)_x = C_1 \cdot C_2 = C_{1, Y} \cdot C_{2, Y} +1 >C_{1, Y}\cdot C_{2, Y} \geq (C_{1, Y} \cdot C_{2, Y})_y$}} (for the definition of intersection multiplicities, see \cite[Ch. V, Section 1]{hartshorne77}), this procedure will terminate after finitely many steps.
Alternatively, the termination is guaranteed by the fact that
a log resolution can be obtained by finitely many successive blowups of points.
\end{proof}
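To illustrate case (3) (with an example chosen purely for illustration), let $X = \mathbb A^2$, let $x$ be the origin, and let $\Delta = \frac{1}{2}(C_1+C_2+C_3)$ with $C_1 = \{y = 0\}$, $C_2 = \{y = x^2\}$ and $C_3 = \{x = 0\}$; then $C_1+C_2$ is not simple normal crossing at $x$, while $C_1+C_3$ and $C_2+C_3$ are. After the first blowup, $E$ appears in $\Delta_Y$ with coefficient $\frac{3}{2}-1 = \frac{1}{2}$, the curves $C_{1, Y}$ and $C_{2, Y}$ meet $E$ transversally at a common point $y$, and $C_{3, Y}$ meets $E$ elsewhere. Blowing up $y$ produces a second exceptional curve with coefficient $\frac{3}{2}-1 = \frac{1}{2}$ and separates the strict transforms of $E$, $C_1$ and $C_2$, so that the composite $\varphi : V \to X$ of the two blowups is a log resolution with $\Delta_V \geq 0$.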
\begin{lemma}\label{l-klt-resol-r=2}
We use Notation \ref{n-klt-resol-sm}.
Assume $r=2$.
Then there exists a log resolution $\varphi: V \to X$ of $(X, \Delta)$ such that
$\varphi(\Ex(\varphi)) = \{x\}$ and $\Delta_V$ is effective
for the $\mathbb{Q}$-divisor $\Delta_V$ defined by $K_V + \Delta_V = \varphi^*(K_X+\Delta)$.
\end{lemma}
\begin{proof}
By decreasing the coefficients of $\Delta$, we may assume that all the coefficients of $\Delta$ are equal to $\frac{1}{2}$,
that is, $\Delta = \frac{1}{2}(C_1 +C_2)$.
Since $(X, \Delta)$ is klt, we have $\mult_x \Delta <2$.
As each $C_i$ passes through $x$, we get $\mult_x \Delta \geq 1$.
By $\mult_x \Delta \in \frac{1}{2} \mathbb{Z}$, we have the following two cases.
\begin{enumerate}
\item[(I)] $\mult_x \Delta = 1$.
\item[(II)] $\mult_x \Delta =3/2$.
\end{enumerate}
(I)
Assume $\mult_x \Delta =1$.
Then $\mult_x C_1 = \mult_x C_2 = 1$, that is, both $C_1$ and $C_2$ are smooth at $x$.
Let $f: Y \to X$ be the blowup at $x$.
Then
\[
K_Y + \Delta_Y = f^*(K_X+\Delta) \qquad \text{for}\qquad
\]
\[
\Delta_Y = \frac{1}{2}(C_{1, Y} + C_{2, Y}), \qquad C_{1, Y} := f_*^{-1}C_1,\qquad C_{2, Y} := f_*^{-1}C_2.
\]
For any point $y \in C_{1, Y} \cap C_{2, Y}$,
we have an inequality of intersection multiplicities
$(C_{1, Y} \cdot C_{2, Y})_y \leq (C_1 \cdot C_2)_x -1$
(e.g. if $X$ is projective, then $C_{1, Y} \cdot C_{2, Y} = C_1 \cdot C_2 -1$).
We take blowups until (the proper transforms of) $C_1$ and $C_2$ become disjoint.
Then we obtain
\[
K_{V} + \Delta_V = \varphi^*(K_X+\Delta)
\]
for $\Delta_V = \frac{1}{2}(C_{1, V} + C_{2, V}),
C_{1, V} := \varphi_*^{-1}C_1,
C_{2, V} := \varphi_*^{-1}C_2$, that is,
the coefficient of any $\varphi$-exceptional prime divisor is zero.
Furthermore, it is easy to see that
$\varphi: V \to X$ is a log resolution of $(X, \Delta)$.
(II) Assume $\mult_x \Delta =3/2$. Then
\[
K_Y + \frac{1}{2}(E+ C_{1, Y} + C_{2, Y}) = f^*(K_X+\Delta).
\]
After permuting $C_1$ and $C_2$ if necessary,
we may assume that $\mult_x C_1 =1$ and $\mult_x C_2 =2$.
Hence, $C_1$ is smooth at $x$ and $C_2$ is not smooth at $x$.
We have $C_{1, Y} \cdot E =1$ and $C_{2, Y} \cdot E =2$.
In particular, $C_{1, Y} + E$ is simple normal crossing and $y_1 := C_{1, Y} \cap E$ is a point.
We now treat the case when $C_{2, Y}+E$ is simple normal crossing, that is,
$C_{2, Y} \cap E = \{y_2, y'_2\}$ with $y_2 \neq y'_2$.
If $y_1 \neq y_2$ and $y_1 \neq y'_2$, then $\Delta_Y$ is simple normal crossing.
We are done.
Hence, after permuting $y_2$ and $y'_2$ if necessary, we may assume that $y_1 = y_2$.
In this case, we can apply
{Lemma \ref{l-klt-resol-r=3}.}
This completes the case when $C_{2, Y}+E$ is simple normal crossing.
We may assume that $C_{2, Y}+E$ is not simple normal crossing, that is,
$y_2 :=(C_{2, Y} \cap E)_{\mathrm{red}}$ is one point.
If $y_1 = y_2$, then we can apply
{Lemma \ref{l-klt-resol-r=3}.}
The problem is reduced to the case when $y_1 \neq y_2$.
Then $(Y, \Delta_Y)$ is simple normal crossing at $y_1$.
If $\mult_{y_2}\Delta_Y = 1$, then we are done by (I).
Hence we may assume that $\mult_{y_2}\Delta_Y = 3/2$.
After replacing $(X, \Delta, x)$ by $(Y, \Delta_Y, y_2)$,
we may apply the same argument as above.
In other words, we again take the blowup $g: Z \to Y$ at $y_2$.
Applying this argument repeatedly,
we obtain a log resolution $\varphi : V \to X$ such that $\Delta_V$ is effective
for $K_V + \Delta_V = \varphi^*(K_X+\Delta)$.
\end{proof}
\begin{lemma}\label{l-klt-resol-r=1}
We use Notation \ref{n-klt-resol-sm}.
Assume $\mult_x \Delta \geq \frac{3}{2}$.
Then there exists a log resolution $\varphi : V \to X$ of $(X, \Delta)$ such that
$\varphi(\Ex(\varphi)) = \{x\}$ and $\Delta_V$ is effective
for the $\mathbb{Q}$-divisor $\Delta_V$ defined by $K_V + \Delta_V = \varphi^*(K_X+\Delta)$.
\end{lemma}
\begin{proof}
Let $f: Y \to X$ be the blowup at $x$.
We obtain
\[
K_Y + \Delta_Y =f^*(K_X+\Delta)
\]
\[
\text{for}\qquad E := \Ex(f)\qquad\text{and}\qquad \Delta_Y := f_*^{-1}\Delta + (\mult_x \Delta -1) E.
\]
By $\mult_x \Delta \geq \frac{3}{2}$, any coefficient of $\Delta_Y$ is $\geq \frac{1}{2}$.
Then any non-log smooth point of $(Y, \Delta_Y)$ is contained in at least two irreducible components of $\Delta_Y$: such a point lies on $E$ (because $(X, \Delta)$ is log smooth outside $x$) and, since $E$ is smooth, also on some component of $f_*^{-1}\Delta$.
Therefore, we can find a log resolution $\varphi_Y: V \to Y$ of $(Y, \Delta_Y)$
such that $\Delta_V$ is effective for $K_V + \Delta_V = \varphi_Y^*(K_Y+\Delta_Y)$
by Lemma \ref{l-klt-resol-r=3} and Lemma \ref{l-klt-resol-r=2}.
Then the composition $\varphi : V \xrightarrow{\varphi_Y} Y \xrightarrow{f} X$ is a required log resolution
of $(X, \Delta)$.
\end{proof}
\subsection{The singular case}\label{ss-klt-resol-sing}
The purpose of this subsection is to show that
Question \ref{q-klt-resol} is affirmative for the case when $X$ is singular and $(X, \Delta)$ is log smooth outside the singular points of $X$ (Proposition \ref{p-klt-sing}).
We first treat two typical cases in
Lemma \ref{l-klt-sing1} and
Lemma \ref{l-klt-sing2}.
The general case will be reduced to these cases (cf.\ the proof of Proposition \ref{p-klt-sing}).
\begin{lemma}\label{l-klt-sing1}
Let $(X, \Delta)$ be a two-dimensional klt pair such that all the coefficients of $\Delta$ are
$\geq \frac{1}{2}$.
Let $x$ be a singular point of $X$
such that $(X \setminus \{x\}, \Delta|_{X \setminus \{x\}})$ is log smooth.
For the minimal resolution $f: Y \to X$ of $X$ and
the $\mathbb{Q}$-divisor $\Delta_Y$ defined by $K_Y+\Delta_Y = f^*(K_X+\Delta)$, assume that $E:= \Ex(f)$ is a prime divisor and $ \Delta_Y$ is not simple normal crossing.
Then $E \subseteq \Supp \Delta_Y$ and
the coefficient of $E$ in $\Delta_Y$ is $\geq \frac{1}{2}$.
\end{lemma}
\begin{proof}
If $x \not \in \Supp \Delta$, then $\Delta_Y$ must be simple normal crossing, which contradicts our assumption.
Thus $x \in \Supp \Delta$, and so $\Supp \Delta_Y = E \cup \Supp f_*^{-1}\Delta$. By decreasing the coefficients of $\Delta,$ we may assume that every coefficient of $\Delta$ is equal to $\frac{1}{2}$.
Set $m:=-E^2 \geq 2$.
We have
\[
K_Y + f_*^{-1}\Delta + E =f^*(K_X+\Delta) + bE
\]
for the rational number $b$ satisfying the following (note that $E \simeq \mathbb{P}^1$, as $X$ is klt, so that $(K_Y+E)\cdot E = -2$):
\[
b = \frac{1}{E^2} ( -2 + f_*^{-1}\Delta \cdot E) = \frac{1}{m}
(2 - f_*^{-1}\Delta \cdot E).
\]
Since $2f_*^{-1}\Delta$ is Cartier and
$\Supp \Delta_Y = \Supp (f_*^{-1} \Delta + E)$ is not simple normal crossing,
we obtain $f_*^{-1}\Delta \cdot E \geq 1$.
Hence
\[
b = \frac{1}{m}
(2 - f_*^{-1}\Delta \cdot E) \leq \frac{1}{m}
(2 - 1) \leq \frac{1}{2}.
\]
Therefore, any coefficient of $\Delta_Y = f_*^{-1}\Delta + (1-b)E$ is $\geq \frac{1}{2}$.
\end{proof}
\begin{lemma}\label{l-klt-sing2}
Let $(X, \Delta)$ be a two-dimensional klt pair such that all the coefficients of $\Delta$ are $\geq \frac{1}{2}$.
Let $x$ be a singular point of $X$
such that $(X \setminus \{x\}, \Delta|_{X \setminus \{x\}})$ is log smooth.
For the minimal resolution $f: Y \to X$ of $X$, assume that the following hold.
\begin{enumerate}
\item
$\Ex(f) = E_1 \cup E_2$ is the irreducible decomposition,
where $E_1$ and $E_2$ are distinct prime divisors on $Y$.
\item
$f_*^{-1}\Delta$ contains $E_1 \cap E_2$.
\end{enumerate}
Let $\Delta_Y$ be the $\mathbb{Q}$-divisor defined by $K_Y+ \Delta_Y = f^*(K_X+\Delta)$.
Then $\Ex(f) \subseteq \Supp \Delta_Y$ and
all the coefficients of $\Delta_Y$ are $\geq \frac{1}{2}$.
\end{lemma}
\begin{proof}
Note that $E_1 \cdot E_2 =1$.
In other words, $E_1 +E_2$ is simple normal crossing and $y := E_1 \cap E_2$ is a closed point of $Y$.
Set $m_1 := -E_1^2$ and $m_2 := -E_2^2$. Of course, we have
\begin{equation}\label{e1-klt-sing1}
m_1 \geq 2\qquad \text{and}\qquad m_2 \geq 2.
\end{equation}
By (2),
we obtain $x \in \Supp \Delta$ and $\Supp \Delta_Y = \Exc(f) \cup \Supp f_*^{-1}\Delta$.
By decreasing the coefficients of $\Delta,$ we may assume that
any coefficient of $\Delta$ is equal to $\frac{1}{2}$.
Let $b_1, b_2 \in \mathbb{Q}$ be the rational numbers defined by
\[
K_Y + f_*^{-1}\Delta + E_1+E_2 =f^*(K_X+\Delta) + b_1 E_1+b_2E_2.
\]
Since $(X, \Delta)$ is klt, we have $b_1 >0$ and $b_2 >0$.
We obtain
\[
\Delta_ Y = f_*^{-1}\Delta + (1-b_1)E_1+ (1-b_2)E_2.
\]
It follows from (2) that
\begin{equation}\label{e2-klt-sing1}
f_*^{-1}\Delta \cdot E_1 >0 \qquad \text{and} \qquad f_*^{-1}\Delta \cdot E_2 >0.
\end{equation}
For the intersection matrix
\[
A := (E_i \cdot E_j) =
\begin{pmatrix}
-m_1 & 1\\
1 & -m_2
\end{pmatrix},
\]
its inverse matrix $A^{-1}$ is given as follows:
\[
-A^{-1} = \frac{-1}{m_1m_2-1}
\begin{pmatrix}
-m_2 & -1\\
-1 & -m_1
\end{pmatrix}
=\frac{1}{m_1m_2-1}
\begin{pmatrix}
m_2 & 1\\
1 & m_1
\end{pmatrix}.
\]
Therefore,
\[
\begin{pmatrix}
b_1\\
b_2
\end{pmatrix}
=
-A^{-1}
\begin{pmatrix}
2 - (f_*^{-1}\Delta + E_2) \cdot E_1\\
2 - (f_*^{-1}\Delta + E_1) \cdot E_2
\end{pmatrix} =
\frac{1}{m_1m_2-1}
\begin{pmatrix}
m_2 & 1\\
1 & m_1
\end{pmatrix}
\begin{pmatrix}
1 - f_*^{-1}\Delta \cdot E_1\\
1 - f_*^{-1}\Delta \cdot E_2
\end{pmatrix}.
\]
By (\ref{e2-klt-sing1}) and $f_*^{-1}\Delta \cdot E_i \in \frac{1}{2}\mathbb{Z}$,
it holds that
\begin{equation}\label{e3-klt-sing1}
1 - f_*^{-1}\Delta \cdot E_1 \leq \frac{1}{2} \qquad \text{and} \qquad
1 - f_*^{-1}\Delta \cdot E_2 \leq \frac{1}{2}.
\end{equation}
By (\ref{e1-klt-sing1}) and (\ref{e3-klt-sing1}), the following holds:
\begin{align*}
b_1 &= \frac{1}{m_1m_2-1}( m_2(1 - f_*^{-1}\Delta \cdot E_1) + (1 - f_*^{-1}\Delta \cdot E_2))\\
&\leq \frac{m_2 +1}{2(m_1m_2-1)} \leq \frac{m_2 +1}{2(2m_2-1)}
\leq
\frac{1}{2},
\end{align*}
where the last inequality holds by
$\frac{m_2 +1}{2m_2-1} = \frac{1}{2}+ \frac{3/2}{2m_2-1}\leq
\frac{1}{2}+ \frac{3/2}{2 \cdot 2-1} =1$.
By symmetry, we get $b_2 \leq \frac{1}{2}$.
Hence any coefficient of $\Delta_Y = f_*^{-1}\Delta + (1-b_1)E_1+ (1-b_2)E_2$ is $\geq 1/2$.
\end{proof}
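As a sanity check of the estimate in the proof, assume $m_1 = m_2 = 2$ and $f_*^{-1}\Delta \cdot E_1 = f_*^{-1}\Delta \cdot E_2 = \frac{1}{2}$. Then
\[
b_1 = b_2 = \frac{1}{2\cdot 2 - 1}\Big(2\cdot\frac{1}{2} + \frac{1}{2}\Big) = \frac{1}{2},
\]
so the coefficients of $E_1$ and $E_2$ in $\Delta_Y$ are equal to $\frac{1}{2}$; in particular, the bound $b_i \leq \frac{1}{2}$ obtained above is sharp.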
\begin{proposition}\label{p-klt-sing}
Let $(X, \Delta)$ be a two-dimensional klt pair such that all the coefficients of $\Delta$ are
$\geq \frac{1}{2}$.
Let $x$ be a singular point of $X$
such that $(X \setminus \{x\}, \Delta|_{X \setminus \{x\}})$ is log smooth.
Then there exists a log resolution $\varphi : V \to X$ of $(X, \Delta)$ such that $\varphi(\Ex(\varphi)) = \{x\}$ and $\Delta_V$ is effective
for the $\mathbb{Q}$-divisor $\Delta_V$ defined by $K_V + \Delta_V = \varphi^*(K_X+\Delta)$.
\end{proposition}
\begin{proof}
Let
\[
f : Y \to X
\]
be the minimal resolution of $X$.
Let $\Ex(f) = E_1 \cup \cdots \cup E_n$ be the irreducible decomposition.
We define an effective $\mathbb{Q}$-divisor $\Delta_Y$ on $Y$ by
\[
K_Y + \Delta_Y = f^*(K_X+\Delta).
\]
If $x \not\in \Supp \Delta$, then
it is enough to set $V:=Y$.
In what follows, we assume that $x \in \Supp \Delta$.
In particular,
\begin{equation}\label{e1-klt-sing}
\Supp \Delta_Y = \Ex(f) \cup \Supp f_*^{-1}\Delta =
E_1 \cup \cdots \cup E_n \cup \Supp f_*^{-1}\Delta.
\end{equation}
We emphasise that $\Ex(f)$ is simple normal crossing (but $\Supp \Delta_Y$ need not be simple normal crossing).
\begin{claim}\label{c1-klt-sing}
Fix $1 \leq i \leq n$.
If there exists a closed point $y \in E_i$
at which $\Delta_Y$ is not simple normal crossing,
then the coefficient of $E_i$ in $\Delta_Y$ is $\geq \frac{1}{2}$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c1-klt-sing}]
Since $E_1 \cup \cdots \cup E_n$ is simple normal crossing, there are the following two possible cases:\ (1) and (2).
\begin{enumerate}
\item $y \not\in E_{j}$ for every $j$ satisfying $j \neq i$.
\item There exists $1 \leq i' \leq n$
such that $i' \neq i$, $y \in E_{i} \cap E_{i'}$,
and $y \not\in E_j$ for any $j \neq i, i'$.
\end{enumerate}
Assume (1).
Let $f' : Y \to X'$ be the projective birational morphism
to a normal surface $X'$ such that $\Ex(f')=E_i$.
Then $f'$ factors through $f$:
\[
f : Y \xrightarrow{f'} X' \xrightarrow{g} X.
\]
By $E_i^2 \leq -2$,
$f': Y \to X'$ is the minimal resolution of $X'$.
Set $\Delta' := g_*^{-1} \Delta$.
We have
\[
K_{X'} + \Delta' \leq g^*(K_X+\Delta).
\]
We define a $\mathbb{Q}$-divisor $\Delta'_Y$ by
$K_Y+\Delta'_Y = f'^*(K_{X'}+\Delta')$.
Then $\Delta'_Y$ is effective, since $f' : Y \to X'$ is the minimal resolution of $X'$.
It holds that
\[
K_Y + \Delta'_Y = f'^*(K_{X'}+\Delta')\leq
f'^*(g^*(K_X+\Delta)) = f^*(K_X+\Delta) = K_Y+\Delta_Y.
\]
Hence it is enough to show that the coefficient
of $E_i$ in $\Delta'_Y$ is $\geq \frac{1}{2}$.
For $x' := f'(E_i)$ and a suitable open neighbourhood ${\widetilde X'}$ of $x' \in X'$,
we may apply Lemma \ref{l-klt-sing1} for
$({\widetilde X'}, \Delta'|_{{\widetilde X'}})$ and the minimal resolution
$f'|_{f'^{-1}({ \widetilde X'})} : f'^{-1}({ \widetilde X'}) \to { \widetilde X'}$ of ${\widetilde X'}$,
because $\Delta'_Y$ is not simple normal crossing at $y$.
This completes the proof of Claim \ref{c1-klt-sing} for the case when (1) holds.
Assume (2).
Let $f' : Y \to X'$ be the projective birational morphism
to a normal surface $X'$ such that $\Ex(f')=E_i \cup E_{i'}$.
Then the same argument as that of (1) works by using Lemma \ref{l-klt-sing2} instead of Lemma \ref{l-klt-sing1}.
This completes the proof of Claim \ref{c1-klt-sing}.
\end{proof}
Fix a closed point $y \in Y$ at which $(Y, \Delta_Y)$ is not log smooth.
By (\ref{e1-klt-sing}),
it is enough to find an open neighbourhood $\widetilde Y$ of $y \in Y$ and a log resolution $\psi : \widetilde V \to \widetilde Y$ of $(\widetilde Y, \Delta_Y|_{\widetilde Y})$
such that $\Delta_{\widetilde V}$ is effective for the $\mathbb{Q}$-divisor defined by $K_{\widetilde V} + \Delta_{\widetilde V} =
\psi^*(K_{\widetilde Y}+ (\Delta_Y|_{\widetilde Y}))$.
By Claim \ref{c1-klt-sing},
there exists an open neighbourhood $\widetilde Y$ of $y \in Y$
such that all the coefficients of $\Delta_Y|_{\widetilde Y}$ are $\geq \frac{1}{2}$.
Moreover, $y$ is contained in at least two irreducible components of $\Delta_Y|_{\widetilde Y}$:
one of them is some $f$-exceptional prime divisor $E_i$, and
another one can be found among the irreducible components of $f_*^{-1}\Delta$.
Therefore, we can find a required log resolution $\psi : \widetilde V \to \widetilde Y$ of $(\widetilde Y, \Delta_Y|_{\widetilde Y})$ by Lemma \ref{l-klt-resol-r=3}
and Lemma \ref{l-klt-resol-r=2}.
\qedhere
\end{proof}
\subsection{The general case}\label{ss-klt-resol-general}
We are ready to prove the main theorem of this section.
\begin{theorem}\label{t-hiromu-resol}
Let $(X, \Delta)$ be a two-dimensional klt pair with standard coefficients.
Fix a closed point $x$ of $X$.
Assume that $(X \setminus \{x\}, \Delta|_{X \setminus \{x\}})$
is log smooth,
$(X, \Delta)$ is not log smooth, and
all the irreducible components of $\Delta$ pass through $x$.
Then one of the following holds.
\begin{enumerate}
\item
There exists a log resolution $\varphi : V \to X$ of $(X, \Delta)$ such that
$\varphi(\Ex(\varphi)) = \{x \}$ and
$\Delta_V$ is effective
for the $\mathbb{Q}$-divisor $\Delta_V$ defined by $K_V + \Delta_V = \varphi^*(K_X+\Delta)$.
\item
$X$ is smooth, $\Delta = \frac{1}{2} C$ for a prime divisor $C$,
and there exists a projective birational morphism $\varphi \colon V \to X$ from a smooth surface $V$
such that, for the $\mathbb{Q}$-divisor $\Delta_V$ defined by $K_V+\Delta_V =\varphi^*(K_X+\Delta)$, the dual graph of $\Delta_V$ is given by Figure \ref{figure-V}.\footnote{In Figure \ref{figure-V},
$C_V$ is the strict transform of $C$,
$E_1, \ldots, E_n$ are the $\varphi$-exceptional prime divisors,
the rational number in front of each prime divisor is
its coefficient in $\Delta_V$, and the other numbers are the self-intersection numbers.}
Specifically, there exists a sequence of projective birational morphisms of smooth surfaces
\[
\varphi: V :=X_n \xrightarrow{\varphi_n} X_{n-1} \xrightarrow{\varphi_{n-1}} \cdots \xrightarrow{\varphi_1} X_0 =X
\]
satisfying the following properties for the proper transforms $C_i$ of $C$ on $X_i$.
\begin{enumerate}
\item
For each $0 \leq i \leq n-1$, $C_i$ has a unique singular point $x_i$.
Furthermore, $\mult_{x_i} C_i=2$.
\item
For each $0 \leq i \leq n-1$, $\varphi_{i+1} : X_{i+1} \to X_i$ is the blowup at $x_i$, and $K_{X_{i+1}} + \frac{1}{2}C_{i+1} = \varphi^*_{i+1}(K_{X_{i}} + \frac{1}{2}C_{i})$.
\item
$C_V := C_n$ is a smooth prime divisor and $\Delta_V = \frac{1}{2} C_V$.
\item
The dual graph of $C_V \cup \Ex(\varphi)$ is given by
\[
C_V = E_n - E_{n-1} - \cdots - E_1,
\]
where each $E_i$ denotes the proper transform of $\Ex(\varphi_i)$ on $V$ and
$C_V=E_n$ means that $(C_{V} \cap E_n)_{\mathrm{red}}$ is a single point with $C_V \cdot E_n =2$ (cf.\ Figure \ref{figure-V}).
\end{enumerate}
\end{enumerate}
\end{theorem}
A closed point $x$ satisfying the above property (2) is called
a {\em special point} of $(X, \Delta)$.
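For instance (as an illustration only), suppose that $X$ is smooth, $\Delta = \frac{1}{2}C$, and $C$ has an ordinary cusp at $x$ (analytically given by $y^2 = x^3$) and is smooth elsewhere. Then $\mult_x C = 2$ and the strict transform of $C$ under the blowup at $x$ is smooth and tangent to the exceptional curve, so $x$ is a special point with $n=1$ (of type $3$ in the sense of Remark \ref{r-cusp-resol} below).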
\begin{figure}
\centering
\begin{tikzpicture}
\draw (1.5, 2) node{$\frac{1}{2}C_V$};
\draw (-2.5, -2) node{$0E_n$};
\draw (-0.8, -1.1) node{$0E_{n-1}$};
\draw (1.7, -1.7) node{$0E_{n-2}$};
\draw (4.5, -1.3) node{$0E_{2}$};
\draw (7.5, -1.3) node{$0E_{1}$};
\draw (-2.4, 0) node{$-1$};
\draw (-1.2, -1.8) node{$-2$};
\draw (1.5, -0.9) node{$-2$};
\draw (4.7, -2.0) node{$-2$};
\draw (7.0, -2.0) node{$-2$};
\draw[domain=-2:1, samples=500] plot (\x, {1+sqrt(0.333*\x+0.666)});
\draw[domain=-2:1, samples=500] plot (\x, {1-sqrt(0.333*\x+0.666)});
\draw(-2,2.2)--(-2,-2.2);
\draw(-2.2,-0.95)--(0.2,-2.05);
\draw(-0.2,-2.05)--(2.2,-0.95);
\draw[shift={(3,-1.5)}] node{$\hdots$};
\draw(4,-2)--(6.2,-0.95);
\draw(5.8,-0.95)--(8,-2);
\end{tikzpicture}
\caption{The coefficients of $\Delta_V$ and self-intersection numbers on $V$}\label{figure-V}
\end{figure}
\begin{proof}
If $x$ is a singular point of $X$, then (1) holds by Proposition \ref{p-klt-sing}.
Hence the problem is reduced to the case when $X$ is smooth.
If $\Delta$ is not irreducible, then (1) holds by
Lemma \ref{l-klt-resol-r=3} and
Lemma \ref{l-klt-resol-r=2}.
Therefore, we may assume that $\Delta$ is irreducible.
We can write $\Delta = \frac{m-1}{m} C$ for an integer $m \geq 2$
and a prime divisor $C$.
Since $(X, \Delta = \frac{m-1}{m} C)$ is not log smooth at $x$, $C$ is singular at $x$, that is, $\mult_x C \geq 2$.
If $m \geq 4$, then we have
$\mult_x \Delta = \frac{m-1}{m} \mult_x C \geq \frac{3}{4} \cdot 2 = \frac{3}{2}$.
In this case, (1) holds by Lemma \ref{l-klt-resol-r=1}.
Therefore, we may assume that $m=2$ or $m=3$.
A similar argument shows that we may also assume $\mult_x C = 2$.
We start with the case when $m=2$.
Let $\varphi_1: X_1 \to X$ be the blowup at $x$.
Then we have
\[
K_{X_1} + \frac{1}{2} C_{1}
=\varphi_1^*\left(K_X+ \frac{1}{2}C\right)
\]
for
$C_{1} := (\varphi_1)_*^{-1}C$.
We have $C_{1} \cdot E =2$ for $E := \Ex(\varphi_1)$.
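For the reader's convenience, we recall that this follows from the standard formulas for the blowup of a smooth point: since $\mult_x C = 2$,
\[
K_{X_1} = \varphi_1^*K_X + E \qquad \text{and} \qquad \varphi_1^*C = C_1 + 2E,
\]
so that $\varphi_1^*(K_X + \frac{1}{2}C) = K_{X_1} + \frac{1}{2}C_1$ and $C_1 \cdot E = (\varphi_1^*C - 2E) \cdot E = -2E^2 = 2$.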
There are the following three cases.
\begin{enumerate}
\item[(I)] $C_{1} \cap E$ consists of two points.
\item[(II)] $x_1 :=(C_{1} \cap E)_{\mathrm{red}}$ is one point, and $C_{1}$ is smooth at $x_1$ {and tangent to $E$}.
\item[(III)] $x_1:=(C_{1} \cap E)_{\mathrm{red}}$ is one point and $C_{1}$ is not smooth at $x_1$.
\end{enumerate}
Assume (I). Then $C_{1} + E$ is simple normal crossing.
In this case, (1) holds.
Assume (II).
In this case, we obtain (2).
Assume (III).
In this case, we take the blowup at $x_1$.
Then, after replacing $(X_1, \Delta_{1} := \frac{1}{2}C_{1}, x_1)$ by $(X, \Delta, x)$,
we can apply the same argument as above.
Note that, for the blowup $\varphi_2 : X_2 \to X_1$ at $x_1$,
the strict transforms $C_2$ and $(\varphi_2)_*^{-1}E$ of $C_{1}$ and $E$ are disjoint, because the following holds for $F := \Ex(\varphi_2)$:
\[
C_2 \cdot (\varphi_2)_*^{-1}E
= (\varphi_2^*C_1-2F) \cdot (\varphi_2)_*^{-1}E
= C_1 \cdot E -2 F \cdot (\varphi_2)_*^{-1}E =2-2 \cdot 1=0.
\]
Repeating this argument, the procedure terminates after finitely many steps \cite[Corollary V.3.7]{hartshorne77}.
Then (1) or (2) holds when $m=2$.
Let us treat the remaining case, that is, $m = 3$. We use the same notation as above.
We then get
\[
K_{X_1} + \frac{2}{3} C_{1} + \frac{1}{3}E
=\varphi_1^*\left(K_X+ \frac{2}{3}C\right).
\]
If (I) holds, then we get (1) as above.
If (III) holds, then $\mult_{x_1}(\frac{2}{3} C_{1} + \frac{1}{3}E) \geq \frac{5}{3}$,
so that we again obtain (1) by Lemma \ref{l-klt-resol-r=1}.
Assume (II).
We then get
\[
K_{X_2} + \frac{2}{3} C_2 + \frac{1}{3}(\varphi_2)_*^{-1}E
=\varphi_2^*\left( K_{X_1} + \frac{2}{3} C_{1} + \frac{1}{3}E \right)
=\varphi_2^*\varphi_1^*\left(K_X+ \frac{2}{3}C\right).
\]
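Here no term involving $F := \Ex(\varphi_2)$ appears, because its coefficient vanishes: as $C_1$ and $E$ are smooth at $x_1$, the standard blowup formulas give
\[
\varphi_2^*\left( K_{X_1} + \frac{2}{3} C_{1} + \frac{1}{3}E \right)
= K_{X_2} + \frac{2}{3} C_2 + \frac{1}{3}(\varphi_2)_*^{-1}E + \left(\frac{2}{3} + \frac{1}{3} - 1\right)F,
\]
and $\frac{2}{3} + \frac{1}{3} - 1 = 0$.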
Repeat this procedure; it terminates after finitely many steps.
Hence (1) holds when $m=3$.
\end{proof}
\begin{figure}
\centering
\begin{tikzpicture}
\draw (0.0, 1.9) node{$\frac{1}{2}C_W$};
\draw (2.0, 1.9) node{$-\frac{1}{2}E_{n+1, W}$};
\draw (6.8, 1.4) node{$-1E_{n+2, W}$};
\draw (-2.6, -2) node{$0E_{n, W}$};
\draw (-0.8, -1.1) node{$0E_{n-1, W}$};
\draw (1.8, -1.7) node{$0E_{n-2, W}$};
\draw (4.5, -1.3) node{$0E_{2, W}$};
\draw (7.5, -1.3) node{$0E_{1, W}$};
\draw (3.4, 1.9) node{$-2$};
\draw (6.6, 0.7) node{$-1$};
\draw (-2.4, 0) node{$-3$};
\draw (-1.2, -1.8) node{$-2$};
\draw (1.5, -0.9) node{$-2$};
\draw (4.7, -2.0) node{$-2$};
\draw (7.0, -2.0) node{$-2$};
\draw(-2.2, 1)--(7, 1);
\draw(0.5, 2.2)--(0.5, 0);
\draw(3, 2.2)--(3, 0);
\draw(-2,2.2)--(-2,-2.2);
\draw(-2.2,-0.95)--(0.2,-2.05);
\draw(-0.2,-2.05)--(2.2,-0.95);
\draw[shift={(3,-1.5)}] node{$\hdots$};
\draw(4,-2)--(6.2,-0.95);
\draw(5.8,-0.95)--(8,-2);
\end{tikzpicture}
\caption{The coefficients of $\Delta_W$ and self-intersection numbers on $W$}\label{figure-W}
\end{figure}
\begin{remark} \label{r-cusp-resol}
We here summarise some properties of special points $x$ of $(X, \Delta)$. {This will be applied at the end of the proof of Theorem \ref{t-LDP-QFS}, and so the reader might postpone reading this remark until reaching Theorem \ref{t-LDP-QFS}.}
Let $(X, \Delta)$ be a two-dimensional klt pair with standard coefficients and
let $x$ be a special point of $(X, \Delta)$,
that is, Theorem \ref{t-hiromu-resol}(2) holds around $x$.
Blowing up twice, each time at the point where the pair fails to be log smooth, we obtain $\theta \colon W \to V$, and the composition
\[
\psi : W\xrightarrow{\theta} V \xrightarrow{\varphi} X
\]
is a log resolution of $(X, \Delta)$ (cf.\ Figure \ref{figure-V}, Figure \ref{figure-W}).
Set $C_W := \theta_*^{-1}C_V$ and $E_{i, W} := \theta_*^{-1}E_i$.
Let $E_{n+1, W}$ and $E_{n+2, W}$ be the $\theta$-exceptional prime divisors obtained by the first and second blowups, respectively.
Set $K_W + \Delta_W = \psi^*(K_X+\Delta)$.
By Figure \ref{figure-V} and
$K_V +\frac{1}{2}C_V= K_V +\Delta_V=\varphi^*(K_X+\Delta)$, it is easy to see that
\[
\Delta_W = \frac{1}{2}C_W -\frac{1}{2}E_{n+1, W} - E_{n+2, W}.
\]
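For the reader's convenience, we indicate how these coefficients arise. Write $\theta = \theta_2 \circ \theta_1$, where $\theta_1$ is the blowup at the point where $C_V$ and $E_n$ are tangent, and $\theta_2$ is the blowup at the point where the strict transforms of $C_V$, $E_n$, and $\Ex(\theta_1)$ meet. Writing $V_1$ for the intermediate surface, $C_{V_1}$ for the strict transform of $C_V$, and $E_{n+1}$ for $\Ex(\theta_1)$, the standard blowup formulas give
\[
\theta_1^*\Big(K_V + \tfrac{1}{2}C_V\Big) = K_{V_1} + \tfrac{1}{2}C_{V_1} - \tfrac{1}{2}E_{n+1},
\]
and pulling back once more along $\theta_2$ (whose centre lies on both $C_{V_1}$ and $E_{n+1}$) yields the displayed expression for $\Delta_W$, the coefficient of $E_{n+2, W}$ being $\tfrac{1}{2} - \tfrac{1}{2} - 1 = -1$.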
In Figure \ref{figure-W}, the rational number in front of each prime divisor is
its coefficient in $\Delta_W$, and the other numbers indicate the self-intersection numbers.
Set
\[
\mathbb E_W := E_{1, W} \cup \cdots \cup E_{n, W}.
\]
Given a $\psi$-exceptional prime divisor $E_{i, W}$ with $1 \leq i \leq n+2$,
it follows from Figure \ref{figure-W} that {$E_{i, W}$ has an integer coefficient in $K_W+\Delta_W$} and
$\rdown{K_W+\Delta_W} \cdot E_{i, W} = 0$
if and only if $1 \leq i \leq n$ (indeed, $\rdown{K_W+\Delta_W} \cdot E_{i, W} = -\frac{1}{2}(C_W +E_{n+1, W}) \cdot E_{i, W}$).
The determinant $d(\mathbb E_W)$ of the intersection matrix $(E_{i, W} \cdot E_{j, W})_{1 \leq i, j\leq n}$ of $\mathbb E_W$ can be computed as follows:
\begin{align*}
d(\mathbb E_W) =
\det &\left[\begin{matrix}
-3 & 1 & 0 & 0 & \ldots & \ldots \\
1 & -2 & 1 & 0 & \ldots & \ldots \\
0 & 1 & -2 & 1 & \ldots & \ldots \\
0 & 0 & 1 & -2 & \ldots & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ddots & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ddots
\end{matrix} \right] = -3\det A_{n-1} - \det A_{n-2}, \textrm{ where} \\
A_n := &\left[\begin{matrix}
-2 & 1 & 0 & 0 & \ldots & \ldots \\
1 & -2 & 1 & 0 & \ldots & \ldots \\
0 & 1 & -2 & 1 & \ldots & \ldots \\
0 & 0 & 1 & -2 & \ldots & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ddots & \ldots \\
\ldots & \ldots & \ldots & \ldots & \ldots & \ddots
\end{matrix} \right] \textrm{ is an } n \times n \textrm{ matrix.}
\end{align*}
We see that
\[
\det A_n = -2 \det A_{n-1} - \det A_{n-2},
\]
which by induction yields $\det A_n = (-1)^n(n+1)$ for $n\geq 1$. In particular,
\[
d(\mathbb E_W) =
(-1)^n3n + (-1)^{n-1}(n-1) = (-1)^{n}(2n+1).
\]
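As a sanity check (not needed in what follows), for small $n$ these values can be verified directly:
\[
\det A_1 = -2, \qquad \det A_2 = \det\begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} = 3, \qquad
d(\mathbb E_W)\big|_{n=1} = -3, \qquad d(\mathbb E_W)\big|_{n=2} = \det\begin{pmatrix} -3 & 1 \\ 1 & -2 \end{pmatrix} = 5,
\]
in accordance with $\det A_n = (-1)^n(n+1)$ and $d(\mathbb E_W) = (-1)^n(2n+1)$.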
For its absolute value $a := |d(\mathbb E_W)| = 2n +1$,
the special point $x$ is said to be \emph{of type $a$}.
To summarise, the following hold.
\begin{itemize}
\item $n$ is the minimum number of blowups to resolve the singularity of $C$.
\item $n+2$ is the minimum number of blowups to reach a log resolution of $(X, \Delta = \frac{1}{2}C)$.
\item $d(\mathbb E_W) = \det (E_{i, W} \cdot E_{j, W})_{1 \leq i, j\leq n} = (-1)^n(2n+1)$.
\item $x$ is a special point of type $2n+1$.
\end{itemize}
\end{remark}
We shall later need the following two consequences.
\begin{corollary} \label{cor:Hiromu-resolution}
Let $(X,\Delta)$ be a two-dimensional klt pair with standard coefficients.
Then there exists a projective birational morphism $\varphi \colon V \to X$
such that {$(V, \Delta_V)$ is log smooth and $\Delta_V$ is effective}, where $\Delta_V$ is the $\mathbb{Q}$-divisor defined by $K_V + \Delta_V = \varphi^*(K_X+\Delta)$.
\end{corollary}
\begin{proof}
The assertion immediately follows from Theorem \ref{t-hiromu-resol}.
\end{proof}
\begin{corollary}\label{c-klt-resol-r=1'}
We use Notation \ref{n-klt-resol-sm}.
Assume $\mult_x \Delta \geq \frac{4}{3}$.
Then there exists a log resolution $\varphi : V \to X$ of $(X, \Delta)$ such that $\Delta_V$ is effective
for the $\mathbb{Q}$-divisor $\Delta_V$ defined by $K_V + \Delta_V = \varphi^*(K_X+\Delta)$.
\end{corollary}
\begin{proof}
By Lemma \ref{l-klt-resol-r=3} and Lemma \ref{l-klt-resol-r=2}, we may assume that $\Delta$ is irreducible.
We can write $\Delta = cC$ for some $c \geq 1/2$ and
prime divisor $C$.
By Lemma \ref{l-klt-resol-r=1}, we may assume that $\mult_x C=2$.
It follows from $\mult_x \Delta = c \mult_x C =2c$ and
the assumption $\mult_x \Delta \geq \frac{4}{3}$ that $c \geq 2/3$.
By decreasing the coefficient $c$, we may assume $c = 2/3$.
Then the assertion follows from Theorem \ref{t-hiromu-resol}.
\end{proof}
\section{Log liftability of log del Pezzo surfaces}\label{s-log-lift}
Throughout this section, we work over an algebraically closed field $k$ of characteristic $p>0$.
We start by stating the following conjecture.
\begin{conjecture}\label{c-LDP-loglift}
Assume $p>5$.
Let $(X, \Delta)$ be a log del Pezzo pair with standard coefficients.
Then $(X, \Delta)$ is log liftable.
\end{conjecture}
This conjecture is known to hold when $\Delta =0$ \cite[Theorem 1.2]{ABL20}.
{Assuming} Conjecture \ref{c-LDP-loglift}, we can show that
a log del Pezzo pair $(X, \Delta)$ with standard coefficients is quasi-$F$-split {when $p>41$}
(cf.\ Lemma \ref{l-log-lift-enough}).
However, Conjecture \ref{c-LDP-loglift} is open as far as the authors know.
As replacements of Conjecture \ref{c-LDP-loglift}, we establish the following results:
\begin{itemize}
\item Theorem \ref{t-bdd-or-liftable}, stating that either $(X, \Delta)$ is log liftable or the special points of $(X, \Delta)$ are bounded with respect to the arithmetic genus (see Subsections \ref{ss-bdd-or-lift1} and \ref{ss-bdd-or-lift2}),
\item Theorem \ref{thm:liftability-of-Hiromu-resolution}, assuring the existence of a projective birational morphism $f \colon Y \to X$ such that $(Y,\Delta_Y)$ is log smooth and lifts to $W(k)$,
where $K_Y+\Delta_Y = f^*(K_X+\Delta)$ (see Subsection \ref{ss-weak-log-lift}).
\end{itemize}
\subsection{Bounds for Mori fibre spaces}\label{ss-bdd-or-lift1}
In this subsection, we shall establish two auxiliary results,
Proposition \ref{p-bound-rho1} and Proposition \ref{p-bound-rho2},
which will be used to prove the main theorem of this section: Theorem \ref{t-bdd-or-liftable}.
In this theorem and its proof,
the situation is as follows: for a log del Pezzo pair
$(X, \Delta)$ with standard coefficients,
we take the canonical model $f: Y \to X$ of $X$ (cf.\ Definition \ref{d-cano-model}),
and we run a $K_Y$-MMP, whose end result is denoted by $Z$.
In particular, $Z$ admits a Mori fibre space structure $\pi : Z \to T$.
Proposition \ref{p-bound-rho1} and Proposition \ref{p-bound-rho2} treat the cases when $\dim T=0$ and $\dim T=1$, respectively.
The purpose of this subsection is to give an explicit upper bound for $(K_Z+C_Z) \cdot C_Z$ under suitable assumptions.
\begin{proposition}\label{p-bound-rho1}
Let $Z$ be a canonical del Pezzo surface with $\rho(Z)=1$ and
let $C_Z$ be a prime divisor such that $-(K_Z+\frac{1}{2}C_Z)$ is ample.
Then $(K_Z + C_Z) \cdot C_Z <18$.
\end{proposition}
\begin{proof}
It is easy to show that
\[
{\rm Cl}\,(Z) /\! \equiv \hspace{2mm} \simeq\, \mathbb Z,
\]
where $\equiv$ denotes the numerical equivalence.
Let $A$ be an ample Weil divisor on $Z$ which generates ${\rm Cl}\,(Z) / \equiv$.
We have
\[
-K_Z \equiv rA \qquad \text{and} \qquad C_Z \equiv mA
\]
for some $r \in \mathbb{Z}_{>0}$ and $m \in \mathbb{Z}_{>0}$.
Then $-(K_Z+ \frac{1}{2} C_Z)$
is numerically equivalent to $(r -\frac{m}{2})A$.
Since $-(K_Z+ \frac{1}{2} C_Z)$ is ample, we obtain $m < 2r$.
We then get
\[
(K_Z + C_Z) \cdot C_Z
= (-r +m) \cdot m A^2
< (-r + 2r) \cdot 2r A^2 =2r^2 A^2 =2K_Z^2 \leq 18,
\]
where the last inequality holds by the fact that $Z$ is a projective rational surface with canonical singularities (indeed, we have $K_Z^2 =K_{Z'}^2 \leq 9$ for the minimal resolution $\mu : Z' \to Z$).
\end{proof}
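To illustrate the bound (this example is not needed later), take $Z = \P^2$, so that $A$ is a line, $A^2 = 1$, and $r = 3$. Then the ampleness of $-(K_Z + \frac{1}{2}C_Z)$ forces $m \leq 5$, and hence
\[
(K_Z + C_Z) \cdot C_Z = (m-3)m \leq (5-3) \cdot 5 = 10 < 18.
\]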
\begin{proposition}\label{p-bound-rho2}
Let $(Z, \Delta_Z = \frac{1}{2} C_Z + B_Z)$ be a weak log del Pezzo pair,
where $Z$ is canonical, $\rho(Z)=2$, $C_Z$ is a non-smooth prime divisor, and $B_Z$ is an effective $\mathbb{Q}$-divisor satisfying $C_Z \not\subseteq \Supp\,B_Z$.
Assume that ${\rm (a)}$--${\rm (c)}$ hold.
\begin{enumerate}
\item[(a)] All the coefficients of $B_Z$ are $\geq \frac{1}{3}$.
\item[(b)] There exists a surjective morphism $\pi : Z \to T$ to a smooth projective curve $T$ with $\pi_*\mathcal{O}_Z = \mathcal{O}_T$.
\item[(c)] There does not exist a prime divisor $D$ on $Z$ such that $K_Z \cdot D < 0$ and
$D^2 <0$.
\end{enumerate}
Then one of the following holds.
\begin{enumerate}
\item $(\Delta_Z)_{\mathrm{red}} \cdot F \leq 3$ for a general fibre $F$ of $\pi : Z \to T$.
\item $(K_Z+ C_Z) \cdot C_Z <36$.
\end{enumerate}
\end{proposition}
Assumption (a) is natural when one considers minimal Gorenstein (canonical) models of klt surfaces (see Lemma \ref{l-coeff-1/3}),
whilst Assumption (c) says that there is no projective birational contraction $Z \to Z'$ induced by a $K_Z$-MMP.
\begin{remark}
{Let us briefly explain the motivation for Proposition \ref{p-bound-rho2}. Assume that $Z$ is smooth, and so $Z \simeq \mathrm{Proj}_{\mathbb P^1}(\mathcal{O}_{\mathbb P^1} \oplus \mathcal{O}_{\mathbb P^1}(n))$ is the $n$-th Hirzebruch surface. In what follows we suppose that $n\geq 6$ and show that (1) holds. Since $-(K_Z+\frac{1}{2}C_Z)$ is big, it is easy to see that if $n < 6$, then the arithmetic genus of $C_Z$
must be bounded from above by a constant (see the proof of Proposition \ref{p-bound-rho2} for the exact calculations), and so the validity of (2) in this case should not come as a surprise.
Let $\Gamma \simeq \mathbb P^1$ be the negative section such that $\Gamma^2 = -n$ and let $F$ be a general fibre. As $C_Z$ is singular, $C_Z \neq \Gamma$ and $C_Z \cdot F \geq 2$.
Write $B_Z = a\Gamma + B'_Z$ for $a \in \mathbb{Q}_{\geq 0}$, where $B'_Z \geq 0$ and $\Gamma \not \subseteq \Supp B'_Z$. Then $a \geq \frac{n-2}{n}$ by the following calculation:
\[
0 \geq (K_Z+\Delta_Z) \cdot \Gamma \geq (K_Z+a\Gamma)\cdot \Gamma = (K_Z+\Gamma)\cdot \Gamma + (a-1)\Gamma^2 = -2 - (a-1)n.
\]
Since $n \geq 6$, we get $a \geq \frac{2}{3}$.
To show (1), we first observe that $0 > (K_Z+\Delta_Z) \cdot F = -2 + \Delta_Z \cdot F$, and so $\Delta_Z \cdot F < 2$. In particular,
\begin{equation} \label{eq:hirzebruch-bounds}
2 > \Delta_Z \cdot F = \frac{1}{2}C_Z \cdot F + a\Gamma \cdot F + B'_Z \cdot F \geq 1 + \frac{2}{3} + B'_Z \cdot F,
\end{equation}
that is $B'_Z \cdot F < \frac{1}{3}$. Since the coefficients in $B'_Z$ are at least $\frac{1}{3}$, we get that $B'_Z \cdot F = 0$. Moreover, the calculation in (\ref{eq:hirzebruch-bounds}) also shows that $C_Z \cdot F = 2$. This concludes the proof of (1).}
\end{remark}
\begin{proof}[Proof of Proposition \ref{p-bound-rho2}]
Since $Z$ is of del Pezzo type and $\rho(Z)=2$,
$\NE(Z)$ has exactly two extremal rays and there exist the corresponding contraction morphisms (cf.\ Remark \ref{r-log-dP-summary}).
One of them corresponds to $\pi : Z \to T = \P^1$.
Let $R$ be the other extremal ray.
Fix a curve $\Gamma$ with $[\Gamma] \in R$.
By $\rho(Z)=2$, we have $\Gamma^2 = 0$ or $\Gamma^2<0$
(cf.\ \cite[Proof of the case where $C^2>0$ of Theorem 3.21]{tanaka12}).
Set $\gamma := -\Gamma^2$ and let $F$ be a general fibre of $\pi : Z \to T$.
By adjunction, we have $F \simeq \P^1$ and $K_Z \cdot F = -2$. In particular,
\[
\NE(Z) = \mathbb{R}_{\geq 0}[F] + \mathbb{R}_{\geq 0}[\Gamma].
\]
\setcounter{step}{0}
\begin{step}\label{s1-bound-rho2}
Assume $\Gamma^2=0$. Then $(K_Z+C_Z) \cdot C_Z < 16$.
\end{step}
\begin{proof}[Proof of Step \ref{s1-bound-rho2}]
By $\Gamma^2=0$, the corresponding morphism is a Mori fibre space to a curve:
$\pi' : Z \to T' =\P^1$.
Let $F$ and $F'$ be general fibres of $\pi : Z \to T$ and $\pi' : Z \to T'$, respectively.
In particular, $F$ and $F'$ are effective Cartier divisors with
$F \simeq F' \simeq \P^1$ and $-K_Z \cdot F = -K_Z \cdot F' = 2$.
Set $d := F \cdot F' \in \mathbb{Z}_{>0}$.
By $\NE(Z) = \mathbb{R}_{\geq 0} [F] + \mathbb{R}_{\geq 0} [F']$,
we can write
\begin{equation}\label{e1-bound-rho2}
C_Z \equiv a F + a' F'
\end{equation}
for some $a, a' \in \mathbb{R}_{\geq 0}$.
By Kleiman's criterion, the big $\mathbb{Q}$-divisor $-(K_Z+ \frac{1}{2} C_Z)$ is automatically ample.
It follows from (\ref{e1-bound-rho2}) that
\[
0> \left(K_Z+ \frac{1}{2} C_Z\right) \cdot F = -2 + \frac 1 2 a'd,
\]
and hence $a'd < 4$. By symmetry, we also obtain $ad < 4$.
{Then}
\[
C_Z^2 = 2aa' F \cdot F' = \frac{2(ad)(a'd)}{d} < \frac{32}{d} \leq 32,
\]
which implies
\[
(K_Z+ C_Z) \cdot C_Z =
(K_Z+ \frac{1}{2} C_Z + \frac{1}{2} C_Z ) \cdot C_Z < \frac{1}{2} C_Z^2 < 16.
\]
This completes the proof of Step \ref{s1-bound-rho2}.
\end{proof}
\begin{step}\label{s2-bound-rho2}
If $\Gamma^2 <0$, then
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item $\Gamma \simeq \P^1$,
\item $\Gamma \neq C_Z$, and
\item the coefficient of $\Gamma$ in $\Delta_Z$ is nonzero.
\end{enumerate}
\end{step}
\begin{proof}[Proof of Step \ref{s2-bound-rho2}]
Assume $\Gamma^2<0$.
We have
\[
(K_Z + \Gamma) \cdot \Gamma < (K_Z+\Delta_Z) \cdot \Gamma \leq 0,
\]
which implies $\Gamma \simeq \P^1$ \cite[Theorem 3.19]{tanaka12}.
Thus (i) holds. Since $C_Z$ is singular, (i) implies (ii).
Let us show (iii).
Suppose that the coefficient of $\Gamma$ in $\Delta_Z$ is zero.
We then get $\Delta_Z \cdot \Gamma \geq 0$, which implies
\[
0> (K_Z+ \Delta_Z) \cdot \Gamma \geq K_Z \cdot \Gamma.
\]
This contradicts (c).
This completes the proof of Step \ref{s2-bound-rho2}.
\end{proof}
\begin{step}\label{s3-bound-rho2}
Assume that $\Gamma^2<0$ and $\pi|_{\Gamma}: \Gamma \to T$ is not birational.
Then $(K_Z+C_Z) \cdot C_Z < 4$.
\end{step}
\begin{proof}[Proof of Step \ref{s3-bound-rho2}]
By $\NE(Z) = \mathbb{R}_{\geq 0}[F] + \mathbb{R}_{\geq 0}[\Gamma]$,
$\Gamma$ is the unique curve on $Z$ whose self-intersection number is negative. Hence $-(K_Z+\frac{1}{2}C_Z + c\Gamma)$ is nef and big, where $c$ denotes the coefficient of $\Gamma$ in $\Delta_Z$.
By Step \ref{s2-bound-rho2}(iii) and (a), we get $c \geq \frac{1}{3}$.
Since $F$ is nef and $\Gamma \neq C_Z$ (Step \ref{s2-bound-rho2}(ii)), we have
\begin{equation}\label{e2-bound-rho2}
0> (K_Z+\Delta_Z) \cdot F
\geq \left( K_Z+ \frac{1}{2}C_Z +c\Gamma \right) \cdot F
\geq -2 + \frac{1}{2}\cdot 2 +c\Gamma \cdot F,
\end{equation}
where the last inequality follows from the assumption that $C_Z$ is singular,
and hence $\pi|_{C_Z} : C_Z \to T$ is of degree $\geq 2$.
We then get $c \Gamma \cdot F <1$.
By $c \geq \frac{1}{3}$, we have $\Gamma \cdot F < 3$.
Since $\pi|_{\Gamma}: \Gamma \to T$ is not birational,
it holds that $\Gamma \cdot F \geq 2$, and so $\Gamma \cdot F =2$.
By (\ref{e2-bound-rho2}), we also have $C_Z \cdot F =2$.
It follows from $\NE(Z) = \mathbb{R}_{\geq 0}[F] + \mathbb{R}_{\geq 0}[\Gamma]$ that we can write
$C_Z \equiv a\Gamma + b F$ for some $a, b \in \mathbb{R}_{\geq 0}$.
By taking the intersection number with $F$, we get $a=1$.
In particular,
\begin{equation}\label{e3-bound-rho2}
C_Z \equiv \Gamma + b F\qquad \text{and}\qquad
C_Z \cdot \Gamma = -\gamma + 2b.
\end{equation}
Again by (\ref{e2-bound-rho2}), we obtain
\[
0> (K_Z+\Delta_Z) \cdot F \geq
\left(K_Z+ \frac{1}{2} C_Z + c \Gamma\right) \cdot F =
-2 + 1 + 2c,
\]
which implies $c < \frac{1}{2}$.
As $-(K_Z+\Delta_Z)$ is nef,
the following holds:
\begin{align*}
0 &\geq (K_Z+\Delta_Z) \cdot \Gamma \\
&\geq \left(K_Z+ \frac{1}{2} C_Z + c \Gamma\right) \cdot \Gamma \\
&= \left( (K_Z+\Gamma)+ \frac{1}{2} C_Z + (c-1)\Gamma\right) \cdot \Gamma \\
&\geq -2 + \frac{1}{2} (-\gamma +2b) + \gamma(1-c)\\
&> -2 + \frac{1}{2} (-\gamma +2b) + \gamma\left( 1-\frac{1}{2}\right)\\
&=-2 +b,
\end{align*}
where the third inequality holds by (\ref{e3-bound-rho2}) and $(K_Z+\Gamma) \cdot \Gamma \geq -2$.
We then get $b <2$, which implies
\[
C_Z^2 =(\Gamma +bF)^2 = \Gamma^2 +4b < 0+4 \cdot 2=8.
\]
Then
\[
(K_Z+C_Z) \cdot C_Z = \Big(K_Z+\Delta_Z - \big(\Delta_Z - \frac{1}{2}C_Z\big) + \frac{1}{2}C_Z\Big) \cdot C_Z
\leq \frac{1}{2}C_Z^2 <4.
\]
This completes the proof of Step \ref{s3-bound-rho2}.
\qedhere
\end{proof}
\begin{step}\label{s4-bound-rho2}
Assume that $\Gamma^2<0$ and $\pi|_{\Gamma} : \Gamma \to T$ is birational.
Then (1) or (2) in the statement of Proposition \ref{p-bound-rho2} holds.
\end{step}
\begin{proof}
Since $\pi|_{\Gamma}: \Gamma \to T = \P^1$ is a finite birational morphism from a curve $\Gamma$ to a smooth curve $T$, $\pi|_{\Gamma}$ is an isomorphism, that is, $\Gamma$ is a section of $\pi : Z \to T$.
We can write
\[
\Delta_Z = \frac{1}{2} C_Z + c \Gamma + \Delta'_Z,
\]
where $c$ is the coefficient of $\Gamma$ in $\Delta_Z$ and
$\Delta'_Z$ is a nef effective $\mathbb{Q}$-divisor.
If $c \geq \frac{2}{3}$, then $\Delta'_Z$ is $\pi$-vertical (that is, $\pi(\Supp \Delta'_Z) \subsetneq T$) {and} $C_Z \cdot F =2$.
Then (1) holds.
Hence we may assume that
\[
\frac{1}{3} \leq c < \frac{2}{3}.
\]
By $\NE(Z) = \mathbb{R}_{\geq 0}[\Gamma] + \mathbb{R}_{\geq 0}[F]$, we can write
\[
C_Z = a \Gamma + b F
\]
for some $a, b \in \mathbb{R}_{\geq 0}$.
By $a = C_Z \cdot F$, we have $a \in \mathbb{Z}_{\geq 0}$.
Moreover, we have $a = C_Z \cdot F \geq 2$, because $C_Z$ is a singular prime divisor.
Since $-(K_Z+ \Delta_Z)$ is nef and big, we obtain
\begin{enumerate}
\item[(I)] $-(K_Z+\Delta_Z) \cdot F >0$.
\item[(II)] $-(K_Z+\Delta_Z) \cdot \Gamma \geq 0$.
\end{enumerate}
The inequality (I) implies
\[
0> (K_Z+\Delta_Z) \cdot F \geq
\left(K_Z+ \frac{1}{2} C_Z + c \Gamma\right) \cdot F =
-2 + \frac{1}{2} a + c,
\]
which yields $\frac{1}{2}a + c < 2$ and $a \in \{ 2, 3\}$.
The inequality (II) implies
\begin{align*}
0 &\geq (K_Z+\Delta_Z) \cdot \Gamma \\
&\geq \left(K_Z+ \frac{1}{2} C_Z + c \Gamma\right) \cdot \Gamma \\
&= \left( (K_Z+\Gamma) + \frac{1}{2} C_Z + (c-1) \Gamma\right) \cdot \Gamma \\
&\geq -2 + \frac{1}{2}C_Z \cdot \Gamma + \gamma(1-c)\\
&= -2 + \frac{1}{2} (-a\gamma +b) + \gamma(1-c)\\
&=-2 + \frac{1}{2}b + \gamma -\gamma\left( \frac{1}{2}a +c\right)\\
&> -2 + \frac{1}{2}b + \gamma -2\gamma.
\end{align*}
Hence $b < 2\gamma + 4$.
The above displayed equation, together with $C_Z \cdot \Gamma \geq 0$, also implies the following {(see the fourth line)}:
\[
\gamma \leq \frac{2}{1-c}.
\]
We further have
\[
C_Z^2 = (a\Gamma + bF)^2 = a^2(-\gamma) +2ab <2ab,
\]
which implies
\[
(K_Z+C_Z) \cdot C_Z = \Big(K_Z+\Delta_Z - \big(\Delta_Z - \frac{1}{2}C_Z\big) + \frac{1}{2}C_Z\Big) \cdot C_Z
\leq \frac{1}{2}C_Z^2 < ab.
\]
In what follows, we separately treat the two cases: $a=2$ and $a=3$.
Assume $a=2$.
Then we obtain
\[
\gamma \leq \frac{2}{1-c} < \frac{2}{1- \frac{2}{3}} = 6
\]
and $b <2\gamma + 4 <16$.
Therefore, $(K_Z+C_Z) \cdot C_Z < ab <32$.
Assume $a=3$.
In this case, we have $c < 2 -\frac{1}{2}a =\frac{1}{2}$ by the inequality $\frac{1}{2}a + c < 2$ established above.
Then we obtain
\[
\gamma \leq \frac{2}{1-c} < \frac{2}{1- \frac{1}{2}} = 4
\]
and $b <2\gamma + 4 <12$.
Therefore, $(K_Z+C_Z) \cdot C_Z < ab <36$.
In any case, (2) holds.
This completes the proof of Step \ref{s4-bound-rho2}.
\end{proof}
Step \ref{s1-bound-rho2},
Step \ref{s3-bound-rho2}, and
Step \ref{s4-bound-rho2} complete the proof of
Proposition \ref{p-bound-rho2}.
\end{proof}
\subsection{Log liftability for unbounded log del Pezzo pairs}\label{ss-bdd-or-lift2}
The purpose of this subsection is to prove one of the main results of this section: Theorem \ref{t-bdd-or-liftable}.
We start by recalling the definition of canonical models.
\begin{definition}\label{d-cano-model}
Let $X$ be a normal surface.
We say that $f: Y \to X$ is the {\em canonical model} over $X$
if
\begin{enumerate}
\item $f$ is a projective birational morphism from a canonical surface $Y$, and
\item $K_Y$ is $f$-ample.
\end{enumerate}
In this case, $Y$ is also called the {\em canonical model} over $X$.
\end{definition}
\begin{remark}\label{r-cano-model}
Let $X$ be a normal surface.
\begin{enumerate}
\item It is well known that canonical models over $X$ are Gorenstein and unique up to isomorphism over $X$.
\item The canonical model $Y$ over $X$ is obtained as follows.
We first take the minimal resolution $\mu : X' \to X$ of $X$.
Then the canonical model $Y$ is obtained by contracting all the $(-2)$-curves contained in $\Ex(\mu)$.
In particular, we have the induced morphisms: $\mu : X' \to Y \xrightarrow{f} X$.
\end{enumerate}
\end{remark}
The following lemma highlights the key property of canonical models that we shall use later on.
We emphasise that our proof of Theorem \ref{t-bdd-or-liftable} does not work
if we use the minimal resolution of $X$ instead of the canonical model over $X$.
\begin{lemma}\label{l-coeff-1/3}
Let $X$ be an
lc surface and let $f: Y \to X$ be the canonical model over $X$. Let $B$ be the effective $\mathbb{Q}$-divisor defined by
\[
K_{Y} + B = f^*K_X.
\]
Then $\Ex(f) = \Supp B$ and every coefficient of $B$ is $\geq \frac{1}{3}$.
\end{lemma}
\begin{proof}
Let $\mu : X' \to X$ be the minimal resolution of $X$,
so that we have the induced projective birational morphisms (Remark \ref{r-cano-model}):
\[
\mu: X' \xrightarrow{\mu_Y} Y \xrightarrow{f} X.
\]
We define an effective $\mathbb{Q}$-divisor $B'$ on $X'$ by
\[
K_{X'} + B' =\mu^*K_X.
\]
Fix a $\mu$-exceptional prime divisor $E$ on $X'$ satisfying $E^2 \leq -3$.
We can write
\[
B' = c E + B'',
\]
where $c$ is the coefficient of $E$ in $B'$ and $B''$ is an effective $\mathbb{Q}$-divisor satisfying $E \not\subseteq \Supp B''$.
By $(\mu_Y)_*B' = B$, it suffices to show that $c \geq 1/3$.
This holds by the following computation:
\begin{align*}
0 = \mu^*K_X \cdot E = (K_{X'} + B' ) \cdot E &= (K_{X'} + E + (c-1)E +B'' ) \cdot E \\
&\geq -2 + (1-c) (-E^2) \geq -2 +(1-c)\cdot 3,
\end{align*}
where the last inequality follows from $1 - c \geq 0$,
which is assured by the assumption that $(X, 0)$ is lc.
\end{proof}
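As an illustration of the bound (not needed later), suppose that $x \in X$ is a singular point over which $\Ex(\mu)$ consists of a single curve $E \simeq \P^1$ with $E^2 = -n$ for some $n \geq 3$ (for example, the cone over a rational normal curve of degree $n$). Then the canonical model coincides with the minimal resolution over this point, and the computation above gives
\[
0 = (K_{X'} + cE) \cdot E = (n-2) - cn, \qquad \text{that is,} \qquad c = 1 - \frac{2}{n} \geq \frac{1}{3},
\]
with equality exactly when $n = 3$.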
\begin{lemma}\label{l-vertical-lift}
Assume $p>3$.
Let $Z$ be a projective normal surface and
let $D$ be a reduced Weil divisor.
Let $\pi : Z \to T$ be a morphism to a smooth projective curve $T$ such that $\pi_*\mathcal{O}_Z = \mathcal{O}_T$.
Let $F$ be a general fibre of $\pi: Z \to T$.
Assume that the following hold.
\begin{enumerate}
\item $K_Z \cdot F <0$.
\item $D \cdot F \leq 3$
\end{enumerate}
Then $(Z, D)$ is log liftable.
\end{lemma}
\begin{proof}
Let $f\colon W\to Z$ be a log resolution of $(Z, D)$.
Set $D_W: = f^{-1}_{*}D +\Exc(f)$.
We aim to lift $(W, D_W)$ to $W(k)$.
To obtain such a lift, it suffices to show that
\[
H^2(W, T_W(-\log D_W))=0
\]
and $H^2(W, \mathcal{O}_W)=0$ by \cite[Theorem 2.10]{Kaw3}, where
\[
T_W(-\log D_W)\coloneqq \mathcal Hom_{\mathcal{O}_{W}}(\Omega^{1}_W(\log D_W), \mathcal{O}_W).
\]
The latter vanishing follows from (1) and Serre duality.
Again by Serre duality, we have
\[
H^2(W, T_W(-\log D_W))
\simeq H^0(W, \Omega^1_W(\log D_W)\otimes \mathcal{O}_W(K_W))^*,
\]
where $(-)^*$ denotes the dual vector space.
It is enough, by \cite[Lemma 4.1]{Kaw3}, to show that
\[
H^0(Z, (\Omega^{{[1]}}_Z(\log D)\otimes \mathcal{O}_Z(K_Z))^{**})=0,
\]
where $(-)^{**} := \mathcal Hom_{\mathcal{O}_Z}( \mathcal Hom_{\mathcal{O}_Z}( -, \mathcal{O}_Z), \mathcal{O}_Z)$.
Suppose by contradiction that \[H^0(Z, (\Omega^{{[1]}}_Z(\log D)\otimes \mathcal{O}_Z(K_Z))^{**})\neq 0.
\]
Then there exists an injective $\mathcal{O}_Z$-module homomorphism
$s\colon \mathcal{O}_Z(-K_Z) \to \Omega^{{[1]}}_Z(\log D)$.
{Note that every $\pi$-horizontal prime divisor $C \subseteq \Supp D$ is generically \'etale over $T$, as otherwise the assumption (2) leads to a contradiction: $3 \geq D \cdot F \geq C \cdot F \geq p >3$.}
{Thus}
we can find a non-empty open subset $T'$ of $T$ such that $(Z', D')$ is simple normal crossing over $T'$,
where $Z' := \pi^{-1}(T')$ and $D' := D|_{Z'}$
(indeed, we may assume that $\pi|_{Z'} : Z' \to T'$ is smooth and $\pi|_{D'} : D' \to T'$ is \'etale).
We then obtain the following commutative diagram, in which the horizontal sequence is exact:
\begin{equation*}
\begin{tikzcd}
& & \mathcal{O}_{Z'}(-K_{Z'}) \arrow[dotted]{ld}\arrow{d}{s}\arrow{rd}{t} && \\
0 \arrow{r} & \mathcal{O}_{Z'}((\pi|_{Z'})^{*}K_{T'}) \arrow{r} & \Omega^{1}_{Z'}(\log D') \arrow{r}{\rho} & \Omega^{1}_{Z'/T'}(\log D') \arrow{r} & 0,
\end{tikzcd}
\end{equation*}
where $s$ denotes $s|_{Z'}$ by abuse of notation.
The construction of this exact sequence is as follows.
When $D=0$, this is the usual relative differential sequence \cite[Proposition II.8.11]{hartshorne77}.
When $D\neq 0$, we define $\rho$ by $d(\pi^{*}{\tau})\mapsto 0$ and $dz/z\mapsto dz/z$, where $\tau$ is a coordinate on $T$ and $z$ is a local equation of $D$. Note that $\pi^{*}{\tau}$ and $z$ form a system of local coordinates on $Z'$ since $D'$ is simple normal crossing over $T'$.
Set $t := \rho \circ s$.
Suppose that $t$ is nonzero.
Then, by restricting $t$ to $F$, we have an injective $\mathcal{O}_F$-module homomorphism $t|_{F}\colon \mathcal{O}_F(-K_F) \hookrightarrow
\Omega^1_{F}(\log (D|_F))=\mathcal{O}_F(K_F+D|_{F})$, where the injectivity holds since $F$ is chosen to be general.
{This,} together with (2),
implies that
\[
2=\deg (-K_F)\leq\deg(K_F+(D|_F))\leq 1,
\]
which is a contradiction.
Thus $t$ is zero and the homomorphism $s$ induces an injection $\mathcal{O}_Z(-K_Z)\hookrightarrow \mathcal{O}_Z(\pi^{*}K_T)$.
By taking the restriction to $F$, we get
\[
2=\deg (-K_F)\leq\deg(\pi^*K_T|_F)=0,
\]
which is again a contradiction.
Therefore, we obtain the required vanishing.
\end{proof}
We are ready to prove the main result of this subsection.
\begin{theorem}\label{t-bdd-or-liftable}
Assume $p>3$.
Let $(X, \Delta)$ be a log del Pezzo pair with standard coefficients.
Then one of the following holds.
\begin{enumerate}
\item $(X, \Delta)$ is log liftable.
\item If $C$ is a prime divisor on $X$ such that $C \subseteq \Supp\,\Delta$ and $C$ has a singular point at which $X$ is smooth, then
\[
p_a(C_{X'}) \leq 18
\]
for the minimal resolution $\mu : X' \to X$ of $X$ and the proper transform $C_{X'} := \mu_*^{-1}C$ of $C$ on $X'$.
\end{enumerate}
\end{theorem}
\begin{proof}
If there exists no prime divisor $C$ such that $C \subseteq \Supp\,\Delta$ and that $C$ has a singular point at which $X$ is smooth, then (2) holds.
Hence we fix such a curve $C$.
We can write
\[
\Delta = a C + B,
\]
where $a$ is the coefficient of $C$ in $\Delta$ and
$B$ is an effective $\mathbb{Q}$-divisor satisfying $C\not\subseteq \Supp B$.
Let $f : Y \to X$ be the canonical model over $X$ (cf.\ Definition \ref{d-cano-model}, Remark \ref{r-cano-model}).
We run a $K_Y$-MMP:
\[
g : Y =:Y_0 \xrightarrow{g_0} Y_1 \xrightarrow{g_1} Y_2 \xrightarrow{g_2} \cdots
\xrightarrow{g_{\ell-1}} Y_{\ell} =: Z,
\]
where $Z$ is its end result and each $g_i: Y_i \to Y_{i+1}$ is a {projective} birational morphism such that
$E_i := \Ex(g_i)$ is a prime divisor with $K_{Y_i} \cdot E_i < 0$.
{We may assume that none of the $K_Z$-negative extremal rays induces a projective birational morphism (in other words, every induced contraction is a Mori fibre space).}
We define effective $\mathbb{Q}$-divisors $\Delta_Y$ and $\Delta_Z$ by
\[
K_Y +\Delta_Y = f^*(K_X+\Delta)\qquad\text{and} \qquad \Delta_Z := g_*\Delta_Y.
\]
Set $C_Y := f_*^{-1}C$ and $C_Z := g_*C_Y$.
Then $C_Y$ and $C_Z$ are singular prime divisors
(note that $g(C_Y)$ is not a point,
because the image of $C_Y$ on $Y_i$ is singular,
whilst $E_i \simeq \P^1$ by \cite[Theorem 3.19(1)]{tanaka12}).
Then $(Y, \Delta_Y)$ and $(Z, \Delta_Z)$ are weak del Pezzo pairs.
Since $-K_Y$ is big, the end result $Z$ has a $K_Z$-Mori fibre space structure $\pi: Z \to T$, that is, $\pi$ is a morphism to a normal projective variety $T$,
$-K_Z$ is $\pi$-ample, $\pi_*\mathcal{O}_Z= \mathcal{O}_T$, and $\dim Z > \dim T$.
In particular, $\dim T =0$ or $\dim T=1$.
Let $\mu : X' \to X$ and $\mu_Z : Z' \to Z$ be
the minimal resolutions of $X$ and $Z$, respectively.
Then $\mu : X' \to X$ factors through $f: Y \to X$ (Remark \ref{r-cano-model}): $\mu: X' \xrightarrow{\mu_Y} Y \xrightarrow{f} X$.
Furthermore, the induced resolution $g \circ \mu_Y : X' \to Z$ factors through $\mu_Z: Z' \to Z$,
so that we get the following commutative diagram:
\[
\begin{tikzcd}
& X' \arrow[d, "\mu_Y"] \arrow[ld, "h"'] \arrow[ddr, bend left, "\mu"]\\
Z' \arrow[d, "\mu_Z"'] & Y\arrow[ld, "g"'] \arrow[rd, "f"]\\
Z & & X.
\end{tikzcd}
\]
Recall that $h : X' \to Z'$ is a composition of blowups of points.
Hence we have $p_a(C_{X'}) \leq p_a(C_{Z'})$ \cite[Corollary V.3.7]{hartshorne77},
where $C_{Z'} := h_*C_{X'}$.
Then the problem is reduced to showing $p_a(C_{Z'}) \leq 18$, which is equivalent to
$(K_{Z'} + C_{Z'}) \cdot C_{Z'} \leq 34$, {where $(K_{Z'} + C_{Z'}) \cdot C_{Z'} = 2p_a(C_{Z'}) - 2$}.
Since the equality $K_{Z'} + C_{Z'} + \Gamma = \mu_Z^*(K_Z+C_Z)$
holds for some $\mu_Z$-exceptional effective $\mathbb{Q}$-divisor $\Gamma$,
we obtain
\[
(K_{Z'}+C_{Z'}) \cdot C_{Z'} \leq (K_{Z'} + C_{Z'} +\Gamma) \cdot C_{Z'}
= (K_Z+C_Z) \cdot C_Z.
\]
To summarise, in order to show $p_a(C_{X'}) \leq 18$,
it suffices to prove $(K_Z+C_Z) \cdot C_Z <36$.
Assume
{$\dim T =0$.}
It follows from Proposition \ref{p-bound-rho1} that $(K_Z+C_Z) \cdot C_Z < 18 <36$.
Thus (2) holds.
Assume
{$\dim T =1$.}
{To apply Proposition \ref{p-bound-rho2},
we need to check that the assumptions (a)--(c) of Proposition \ref{p-bound-rho2} hold. The assumption (a) holds by Lemma \ref{l-coeff-1/3}, and the assumption (b) is clear. Now suppose that the assumption (c) does not hold, that is,
there exists a prime divisor $D$ on $Z$ satisfying $K_Z \cdot D <0$ and $D^2 <0$. Then $D$ is a generator of a $K_Z$-negative extremal ray, the contraction of which is birational. This is impossible by our assumption on $Z$, and so (c) holds.}
By Proposition \ref{p-bound-rho2}, we get that
$(K_Z+C_Z) \cdot C_Z <36$ or
$(\Delta_Z)_{\mathrm{red}} \cdot F \leq 3$ for a general fibre $F$ of $\pi : Z \to T$.
In the former case, (2) holds.
Hence we may assume that
$(\Delta_Z)_{\mathrm{red}} \cdot F \leq 3$.
{Then} $(Z, \Delta_Z)$ is log liftable by Lemma \ref{l-vertical-lift}, {and so,}
automatically, $(Y, \Delta_Y)$ is log liftable (cf.\ Subsection \ref{ss-notation}(\ref{ss-n-log-lift})).
Since $\Ex(f) \subseteq \Supp \Delta_Y$ (see Lemma \ref{l-coeff-1/3}),
$(X, \Delta)$ is log liftable. Thus (1) holds.
\end{proof}
\subsection{Weak log liftability for log del Pezzo pairs}\label{ss-weak-log-lift}
The purpose of this subsection is to show the following theorem.
\begin{theorem} \label{thm:liftability-of-Hiromu-resolution}
Assume $p>5$.
Let $(X, \Delta)$ be a log del Pezzo pair.
Suppose that there exist a birational morphism $f \colon Y \to X$
from a smooth projective surface $Y$
and an effective simple normal crossing $\mathbb{Q}$-divisor $B_Y$ on $Y$
such that $f_*B_Y =\Delta$ and $-(K_Y+B_Y)$ is nef and big.
Then $(Y, \Supp B_Y)$ lifts to $W(k)$.
\end{theorem}
\begin{proof}
By Remark \ref{r-log-dP-summary}, we may run a $(-K_Y)$-MMP: $g': Y \to Z'$.
Then $Z'$ is also of del Pezzo type and $-K_{Z'}$ is nef and big, because
$-K_Y = -(K_Y+B_Y) + B_Y$ is big.
As $-K_{Z'}$ is semi-ample (Remark \ref{r-log-dP-summary}), there exists a birational morphism $h : Z' \to Z$
to a projective normal surface $Z$ such that $h_*\mathcal{O}_{Z'} = \mathcal{O}_Z$, $-K_Z$ is ample, and $-K_{Z'} = h^*(-K_Z)$.
We then have the induced morphisms:
\[
g: Y \xrightarrow{g'} Z' \xrightarrow{h} Z.
\]
We can write
\[
K_Y+\Delta_Y = (g')^*K_{Z'} = g^*K_Z,
\]
for some effective $g'$-exceptional $\mathbb{Q}$-divisor $\Delta_Y$.
It holds that
\begin{equation} \label{eq:Delta-in-BY}
\Supp \Delta_Y \subseteq \Supp B_Y,
\end{equation}
because the MMP $g' : Y \to Z'$ is a $B_Y$-MMP by $-K_Y = -(K_Y + B_Y)+B_Y$, and hence
any prime divisor contracted by $g'$ must be an irreducible component of $B_Y$.
Set $E$ to be the maximum reduced divisor such that $\Supp E \subseteq \Ex(g)$ and $\Supp E \subseteq \Supp B_Y$.
We then obtain
\[
\kappa(Y, K_Y+E) \leq \kappa(Z, g_*(K_Y+E)) = \kappa(Z, K_Z) = -\infty.
\]
Hence, as $(Y,E)$ is log smooth,
this pair lifts to $W(k)$ by $p>5$ and \cite[Theorem 1.3]{Kaw3}.
We denote this lift by $(\mathcal{Y}, \mathcal{E})$.
\begin{claim}\label{claim:liftability-of-Hiromu-resolution}
Every prime divisor $C \subseteq \Supp B_Y$ lifts to $\mathcal{Y}$.
\end{claim}
\begin{proof}[Proof of Claim \ref{claim:liftability-of-Hiromu-resolution}]
Fix a prime divisor $C$ with $C \subseteq \Supp B_Y$.
If $C \subseteq E$, then there is nothing to prove.
So we may assume that $C \not \subseteq \Supp E$, that is, $C$ is not $g$-exceptional.
In particular, $C \not\subseteq \Supp \Delta_Y$.
First, since $H^2(Y, \mathcal{O}_Y)=0$, it follows from \cite[Corollary 8.5.6 (a)]{fga2005}
that the line bundle $L=\mathcal{O}_Y(C)$ lifts to a line bundle $\mathcal{L}$ on $\mathcal Y$.
Second, it holds that $H^i(Y, L) =0$ for $i>0$ by Kawamata--Viehweg vanishing (which holds for every boundary by \cite[Theorem 1.1]{ABL20}), because we can write
\begin{align*}
C &= K_Y + C + \Delta_Y - (K_Y + \Delta_Y) \\
&= \underbrace{K_Y + (1-\delta)C + (\Delta_Y+\epsilon A)}_{\mathrm{klt}}
+
\underbrace{(g^*(-K_Z) - \epsilon A) + \delta C}_{\mathrm{ample}},
\end{align*}
where $A$ is a $g$-anti-ample $g$-exceptional effective $\mathbb{Q}$-divisor and $0 < \delta \ll \epsilon \ll 1$. Here, $(Y, (1-\delta)C + (\Delta_Y+\epsilon A))$ is klt, because
\begin{itemize}
\item $(Y, \Supp (C + \Delta_Y))$ is simple normal crossing as $\Supp (C+\Delta_Y) \subseteq \Supp B_Y$ (cf.\ (\ref{eq:Delta-in-BY})), therefore
\item $(Y, C + \Delta_Y)$ is plt given that $\rdown{\Delta_Y}=0$, and so
\item $(Y, C + \Delta_Y + \epsilon A)$ is plt, because $C \not \subseteq \Supp A$ and a small perturbation of a plt pair by any effective $\mathbb{Q}$-divisor not containing $C$ is plt.
\end{itemize}
By upper semicontinuity (\cite[Chapter III, Theorem 12.8]{hartshorne77}), we have $H^i(\mathcal{Y}, \mathcal{L})=0$ for $i>0$, and so by Grauert's theorem (\cite[Chapter III, Corollary 12.9]{hartshorne77}), the restriction map
\[
H^0(\mathcal{Y}, \mathcal{L}) \to H^0(Y, \mathcal L|_Y) = H^0(Y, L)
\]
is surjective.
Then there exists an effective Cartier divisor $\mathcal C$ on $\mathcal Y$ such that
$\mathcal C|_Y =C$ and $\mathcal{O}_{\mathcal Y}(\mathcal C) \simeq \mathcal L$.
Then $\mathcal C$ is flat over $W(k)$, because
the fibre $Y$ over the closed point $pW(k) \in \Spec W(k)$ is irreducible
and $\mathcal C$ does not contain $Y$.
This completes the proof of Claim \ref{claim:liftability-of-Hiromu-resolution}.
\end{proof}
Apply Claim \ref{claim:liftability-of-Hiromu-resolution} to every prime divisor $C \subseteq \Supp B_Y$.
By the fact that log-smoothness deforms (see \cite[Remark 2.7]{Kaw3}),
we get that $(Y, \Supp B_Y)$ lifts to $W(k)$.
This completes the proof of Theorem \ref{thm:liftability-of-Hiromu-resolution}.
\end{proof}
\section{Log del Pezzo pairs in characteristic $p>41$}\label{s-LDP-QFS42}
Throughout this section, we work over an algebraically closed field of characteristic $p>0$.
The purpose of this section is to show that $(X, \Delta)$ is quasi-$F$-split
if $p>41$ and $(X, \Delta)$ is a log del Pezzo pair with standard coefficients (Theorem \ref{t-LDP-QFS}).
\subsection{An explicit ACC for log Calabi-Yau surfaces}\label{ss-explicit-ACC}
In this subsection, we establish the following vanishing theorem:
\[
H^0(X, \mathcal{O}_X(K_X + \Delta_{\mathrm{red}} + \rdown{p^{\ell}(K_X+\Delta)})) =0
\]
for a log del Pezzo pair $(X, \Delta)$ with standard coefficients in characteristic $p>41$ (Theorem \ref{t-42-vanishing}).
To this end, we prove an explicit version of ACC for two-dimensional log Calabi--Yau pairs (Theorem \ref{t-ACC2}).
We start by treating the following essential case.
\begin{theorem} \label{t-ACC1}
Assume $p>5$.
Let $(X,\Delta = aC + B)$ be a log del Pezzo pair with standard coefficients,
where $a \geq 0$, $C$ is a prime divisor, and $B$ is an effective $\mathbb{Q}$-divisor
with $C \not\subseteq \Supp B$.
Suppose that $(X, tC+B)$ is lc and $K_X + tC + B \equiv 0$ for a real number $t$ with
$0 \leq t < 1$. Then $t \leq \frac{41}{42}$.
\end{theorem}
{In characteristic zero, this theorem is a special case of \cite[Definition 1.1, (5.1.2), and Theorem 5.3]{Kol94}. In what follows, we deduce it in positive characteristic by constructing an appropriate lift.}
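For orientation, we note that the bound in Theorem \ref{t-ACC1} is attained already on $\mathbb{P}^2$ (for four lines $L_1, \ldots, L_4$ in general position; compare Notation \ref{n-P^2-cex}(v)): since
\[
\frac{1}{2}+\frac{2}{3}+\frac{6}{7}+\frac{41}{42} = 3,
\qquad\text{we have}\qquad
K_{\mathbb{P}^2}+\frac{1}{2} L_1+\frac{2}{3} L_2+\frac{6}{7} L_3+\frac{41}{42}L_4 \equiv 0,
\]
while $(\mathbb{P}^2, \frac{1}{2} L_1+\frac{2}{3} L_2+\frac{6}{7} L_3)$ is a log del Pezzo pair with standard coefficients.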
\begin{proof}
We have $t \in \mathbb{Q}$, because the equality $(K_X + tC + B) \cdot H =0$ holds for an ample Cartier divisor $H$.
It is clear that $a < t < 1$.
Moreover, we may assume that $t>\frac{5}{6}$, as otherwise there is nothing to show.
Note that $-(K_X+sC+B)$ is ample for any $a \leq s <t$,
because $K_X + tC + B \equiv 0$ and $-(K_X+aC+B)$ is ample.
\begin{claim}\label{c-ACC1}
There exists a resolution $f : Y \to X$ of $X$ such that:
\begin{enumerate}
\item[(i)]
$\Supp f^*C \subseteq \Supp (tC_Y + B_Y)$ and
$B_Y$ is effective and simple normal crossing, where
$C_Y := f^{-1}_*C$ and $B_Y$ is the $\mathbb{Q}$-divisor defined by
\[
K_Y + tC_Y + B_Y = f^*(K_X + tC +B).
\]
\item[(ii)] $f: Y \to X$ is a log resolution of $(X, tC+B)$ over some open subset $X'$ containing $C$.
\end{enumerate}
\end{claim}
\begin{proof}[Proof of Claim \ref{c-ACC1}]
Since the problem is local on $X$, we fix a closed point $x$ of $X$ around which we shall work.
It suffices to find an open neighbourhood $\widetilde X$ of $x \in X$
and a resolution $\widetilde f: \widetilde{Y} \to \widetilde{X}$ of $\widetilde X$ which satisfy the corresponding properties (i) and (ii).
If $x \not\in C$, then Corollary \ref{cor:Hiromu-resolution} enables us to find
such a resolution $\widetilde f: \widetilde{Y} \to \widetilde{X}$
for $\widetilde X := X \setminus C$, because $(t C +B)|_{\widetilde X} =B|_{\widetilde X}$ has standard coefficients.
Hence we may assume that $x \in C$ and that $(X, tC+B)$ is not log smooth at $x$.
If $x$ is a singular point of $X$, then we may apply Proposition \ref{p-klt-sing} to $(X,(t-\epsilon_1)C+B)$ and $0 < {\epsilon_1} \ll 1$ (here we take this $\epsilon_1$ to ensure that $\Supp f^*C \subseteq \Supp (tC_Y + B_Y)$).
Assume that $X$ is smooth at $x$.
Then either $C$ is singular at $x$, or $x \in C \cap \Supp B$, since $(X, tC+B)$ is not log smooth at $x$; recalling that $t > \frac{5}{6}$ and that every nonzero coefficient of $B$ is at least $\frac{1}{2}$, for $0 < \epsilon_2 \ll 1$ we obtain
\[
\mult_x ((t-\epsilon_2)C + B) \geq
\mult_x \left( \frac{5}{6}C + B\right)
\geq
\min\left\{ 2 \cdot \frac{5}{6}, \hspace{2mm}\frac{5}{6} + \frac{1}{2} \right\} = \frac{4}{3}.
\]
Therefore, we can apply Corollary \ref{c-klt-resol-r=1'} to $(X,(t-\epsilon_2)C+B)$.
This completes the proof of Claim \ref{c-ACC1}.
\end{proof}
Pick $\epsilon \in \mathbb{Q}$ such that $0 < \epsilon \ll 1$.
We have
\[
K_Y+tC_Y + B_Y - \epsilon f^*C = f^*(K_X+(t-\epsilon)C + B).
\]
Note that $tC_Y + B_Y - \epsilon f^*C$ is effective and simple normal crossing by Claim \ref{c-ACC1}.
Since $(X, (t-\epsilon)C+B)$ is a log del Pezzo pair, we may apply Theorem \ref{thm:liftability-of-Hiromu-resolution}
to $f : Y \to X$ and $tC_Y + B_Y - \epsilon f^*C$,
so that $(Y, \Supp (tC_Y + B_Y - \epsilon f^*C))$ lifts to $W(k)$. As $\Supp (f^*C) \subseteq \Supp (tC_Y + B_Y)$, we immediately get that $(Y,tC_Y +B_Y)$ admits a lift
$(\mathcal Y, t \mathcal C_{\mathcal Y} + \mathcal B_{\mathcal Y})$ to $W(k)$.
For $\Gamma := tC_Y + B_Y - \epsilon f^*C$,
$B^{\text{st}}_{Y} := f_*^{-1}B$, and $B^{\text{neg}}_Y := B_Y - B^{\text{st}}_{Y}$, the following holds on $Y$.
\begin{enumerate}
\item $(Y, \Gamma)$ is a log smooth weak del Pezzo pair.
\item $K_Y +t C_Y +B_Y \equiv 0$.
\item
$B_Y = B^{\text{st}}_{Y} + B^{\text{neg}}_Y$,
where $B^{\text{st}}_{Y}$ has standard coefficients and
$B^{\text{neg}}_Y$ is an effective $\mathbb{Q}$-divisor
which is negative definite, that is, the intersection matrix of $\Supp B^{\text{neg}}_Y$ is negative definite (this holds because $B^{\text{neg}}_Y$ is $f$-exceptional).
\end{enumerate}
Via the lift $(\mathcal Y, t \mathcal C_{\mathcal Y} + \mathcal B_{\mathcal Y})$,
the geometric generic fibre
$Y_{\overline K}$ of $\mathcal Y \to \Spec W(k)$ satisfies the following corresponding properties (1)'--(3)',
where $\overline K$ denotes the algebraic closure of ${\rm Frac}\,W(k)$ and
$\Gamma_{\overline K}, C_{Y_{\overline K}}, B_{Y_{\overline K}}$ are the $\mathbb{Q}$-divisors corresponding to
$\Gamma, C_Y, B_Y$, respectively.
\begin{enumerate}
\item[(1)'] $(Y_{\overline K}, \Gamma_{\overline K})$ is a log smooth weak del Pezzo pair.
\item[(2)'] $K_{Y_{\overline K}} + tC_{Y_{\overline K}} + B_{Y_{\overline K}} \equiv 0$.
\item[(3)']
$B_{Y_{\overline K}} = B^{\text{st}}_{Y_{\overline K}} + B^{\text{neg}}_{Y_{\overline K}}$,
where $B^{\text{st}}_{Y_{\overline K}}$ has standard coefficients and
$B^{\text{neg}}_{Y_{\overline K}}$ is an effective $\mathbb{Q}$-divisor
which is negative definite.
\end{enumerate}
Since $Y_{\overline K}$ is of del Pezzo type by (1)' and Remark \ref{r-log-dP-summary},
we can run a $B_{Y_{\overline K}}^{\text{neg}}$-MMP (Remark \ref{r-log-dP-summary}):
\[
\psi : Y_{\overline K} \to V,
\]
where $V$ denotes the end result.
Since $B^{\text{neg}}_{Y_{\overline K}}$ is negative definite,
its push-forward $\psi_*B^{\text{neg}}_{Y_{\overline K}}$ is either zero or negative definite.
{Indeed, a $\mathbb{Q}$-divisor $D$ on a projective $\mathbb{Q}$-factorial surface is negative definite if and only if $D'^2 <0$ for every nonzero $\mathbb{Q}$-divisor $D'$ satisfying $\Supp D' \subseteq \Supp D$.}
As $\psi_*B^{\text{neg}}_{Y_{\overline K}}$ is nef,
we obtain $\psi_*B^{\text{neg}}_{Y_{\overline K}} =0$.
By (3)', the $\mathbb{Q}$-divisor $B_V := \psi_*B_{Y_{\overline K}} = \psi_*B^{\text{st}}_{Y_{\overline K}}$ has standard coefficients. It follows from (2)' that
$K_V + tC_V +B_V \equiv 0$ for $C_V := \psi_*C_{Y_{\overline K}}$.
Note that $C_V$ is still a prime divisor, because
the above $B^{\text{neg}}_{Y_{\overline K}}$-MMP only contracts the curves contained in $\Supp B^{\text{neg}}_{Y_{\overline K}}$.
It follows from \cite[Definition 1.1, (5.1.2), and Theorem 5.3]{Kol94} that $t \leq \frac{41}{42}$.
\qedhere
\end{proof}
\begin{lemma}\label{l-curve-ACC}
Let $(C, \Delta)$ be a one-dimensional projective lc pair such that $K_C+\Delta \equiv 0$ and
\[
\coeff(\Delta) \subseteq \left\{ \frac{n-1}{n} \, \middle|\, n \in \mathbb{Z}_{>0} \right\} \cup \left[\frac{5}{6}, 1\right].
\]
Then
\[
\coeff(\Delta) \subseteq
\left\{ \frac{n-1}{n} \, \middle|\, n \in \mathbb{Z}_{>0}, 1 \leq n \leq 6\right\} \cup \{1\}.
\]
\end{lemma}
\begin{proof}
Let $\Delta = \sum_{i=1}^r a_iP_i$ be the irreducible decomposition.
We may assume that $\Delta \neq 0$.
It follows from $K_C+ \Delta \equiv 0$ that $C =\P^1$ and $\sum_{i=1}^r a_i =2$.
By contradiction, we suppose that $\frac{5}{6} < a_1 <1$.
Since every nonzero coefficient of $\Delta$ is at least $\frac{1}{2}$, each $a_i \leq 1$ (the pair being lc), and $a_1 < 1$, the equality $\sum_{i=1}^r a_i = 2$ forces $r=3$, so that $a_1 + a_2 +a_3 =2$.
There is no such solution, as we now check.
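Indeed (we spell out the elementary case analysis for convenience): from $a_1+a_2+a_3=2$ and $\frac{5}{6}<a_1<1$ we get
\[
a_2+a_3 = 2-a_1 \in \left(1, \tfrac{7}{6}\right).
\]
If both $a_2, a_3 \geq \frac{5}{6}$, then $a_2+a_3 \geq \frac{5}{3} > \frac{7}{6}$, which is impossible; hence, after relabelling, $a_3 < \frac{5}{6}$, so that $a_3 \in \{\frac{1}{2}, \frac{2}{3}, \frac{3}{4}, \frac{4}{5}\}$ and
\[
a_2 = 2-a_1-a_3 \in \left(1-a_3,\ \tfrac{7}{6}-a_3\right).
\]
For $a_3=\frac{1}{2}$ this interval is $(\frac{1}{2},\frac{2}{3})$, and for $a_3 \in \{\frac{2}{3}, \frac{3}{4}, \frac{4}{5}\}$ it is contained in $(\frac{1}{5},\frac{1}{2})$; in either case it contains no element of $\{ \frac{n-1}{n} \mid n \in \mathbb{Z}_{>0}\} \cup [\frac{5}{6},1]$, a contradiction.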
\end{proof}
\begin{theorem} \label{t-ACC2}
Assume $p>5$.
Let $(X,\Delta)$ be a two-dimensional projective lc pair
such that $K_X + \Delta \equiv 0$ and
\[
\coeff(\Delta) \subseteq \left\{ \frac{n-1}{n} \, \middle|\, n \in \mathbb{Z}_{>0} \right\} \cup \left[\frac{41}{42}, 1\right].
\]
Then
\[
\coeff(\Delta) \subseteq \left\{ \frac{n-1}{n} \, \middle|\, n \in \mathbb{Z}_{>0}, 1 \leq n \leq 42\right\} \cup \{1\}.
\]
\end{theorem}
\begin{proof}
Replacing $(X, \Delta)$ by $(Y, \Delta_Y)$ for a dlt blowup $f: Y \to X$ of $(X, \Delta)$ (see, for example, \cite[Theorem 4.7]{tanaka16_excellent}),
we may assume that $(X, \Delta)$ is dlt.
Take the irreducible decomposition of $\Delta$ and define effective $\mathbb{Q}$-divisors $D, E, F$ as follows:
\[
\Delta =D+E+F = \sum_{i=1}^r d_i D_i + \sum_{j=1}^s e_j E_j + \sum_{k=1}^t F_k,
\]
\[
D:= \sum_{i=1}^r d_i D_i, \quad E:=\sum_{j=1}^s e_j E_j,\quad F:=\sum_{k=1}^t F_k,
\]
where $D_i, E_j, F_k$ are prime divisors,
\[
\frac{1}{2} \leq d_i \leq \frac{41}{42}, \qquad \text{and}\qquad
\frac{41}{42} < e_j <1
\]
for all $i, j, k$.
It suffices to show $E =0$.
Suppose $E \neq 0$.
Let us derive a contradiction.
We run a $(K_X+D+F)$-MMP $g: X \to Z$,
which is a $(-E)$-MMP by
$-E \equiv K_X+D+F$.
Then the end result $Z$ has a Mori fibre space structure $\pi: Z \to T$ such that
$g_*E$ is $\pi$-ample.
In particular, $g_*E \neq 0$.
Furthermore, $(Z, g_*D)$ is klt, because
$(X, D+(1-\epsilon)F)$ is klt and this MMP can be considered as a $(K_X+D+(1-\epsilon)F)$-MMP for some $0 < \epsilon \ll 1$.
Replacing $(X, \Delta)$ by $(Z, g_*\Delta)$, we may assume,
in addition to the original assumptions, that (1)--(3) hold.
\begin{enumerate}
\item $(X, D)$ is klt.
\item $X$ has a $(K_X+D+F)$-Mori fibre space structure $\pi: X \to T$.
\item $E$ is $\pi$-ample.
\end{enumerate}
Assume $\dim T =1$.
Since $p>5$, any $\pi$-horizontal irreducible component of $\Delta$
is generically \'etale over $T$.
Hence we obtain $\coeff(\Delta|_W) \subseteq \coeff(\Delta)$
for a general fibre $W$ of $\pi: X \to T$.
By (3), $E|_W \neq 0$.
This contradicts Lemma \ref{l-curve-ACC}.
\medskip
Assume $\dim T =0$. {In particular, $\rho(X)=1$.}
Since the bound for ACC for log canonical thresholds is $\frac{5}{6}$,
$(X, D + \sum_{j=1}^s E_j +\sum_{k=1}^t F_k)$ is lc.
For a sufficiently large integer $\ell \gg 42$, the pair
\[
\left(X, D+ \frac{\ell -1}{\ell}\sum_{j=1}^s E_j + \frac{\ell -1 }{\ell}\sum_{k=1}^t F_k\right)
\]
is klt and $K_X +D + \frac{\ell -1}{\ell}\sum_{j=1}^s E_j + \frac{\ell -1 }{\ell}\sum_{k=1}^t F_k$ is ample.
On the other hand,
\[
\left(X, D+ \frac{41}{42}\sum_{j=1}^s E_j + \frac{\ell -1 }{\ell}\sum_{k=1}^t F_k\right)
\]
is klt and $-\left(K_X +D +\frac{41}{42}\sum_{j=1}^s E_j + \frac{\ell -1 }{\ell}\sum_{k=1}^t F_k\right)$ is ample.
Enlarge the coefficient of $E_1$ from $\frac{41}{42}$ to $\frac{\ell-1}{\ell}$.
Then
\[
K_X + D+ \frac{\ell -1 }{\ell} E_1 + \frac{41}{42}\sum_{j=2}^s E_j + \frac{\ell -1 }{\ell}\sum_{k=1}^t F_k
\]
is either nef or anti-ample, since $\rho(X)=1$.
If this $\mathbb{Q}$-divisor is nef, then there exists $u \in \mathbb{Q}$
such that $\frac{41}{42} < u \leq \frac{\ell -1 }{\ell}$ and
$K_X + D+ u E_1 + \frac{41}{42}\sum_{j=2}^s E_j + \frac{\ell -1}{\ell}\sum_{k=1}^t F_k \equiv 0$.
However, this contradicts Theorem \ref{t-ACC1}.
Hence $-(K_X + D+ \frac{\ell -1 }{\ell} E_1 + \frac{41}{42}\sum_{j=2}^s E_j + \frac{\ell -1 }{\ell}\sum_{k=1}^t F_k)$ is ample.
By enlarging the coefficient of $E_2$ from $\frac{41}{42}$ to $\frac{\ell-1}{\ell}$, the same argument deduces that
\[
K_X + D+ \frac{\ell -1 }{\ell} (E_1+E_2) + \frac{41}{42}\sum_{j=3}^s E_j + \frac{\ell -1 }{\ell}\sum_{k=1}^t F_k
\]
is anti-ample. By repeating this procedure finitely many times,
we get that
\[
K_X + D+ \frac{\ell -1 }{\ell}\sum_{j=1}^s E_j + \frac{\ell -1 }{\ell}\sum_{k=1}^t F_k
\]
is anti-ample.
However, this contradicts the fact, established at the beginning of this case, that this divisor is ample.
Hence $E=0$, as required.
\end{proof}
\begin{theorem} \label{t-42-vanishing}
Assume $p>41$.
Let $(X,\Delta)$ be a log del Pezzo pair with standard coefficients. Then
\[
H^0(X, \mathcal{O}_X(K_X + \Delta_{\mathrm{red}} + \rdown{p^{\ell}(K_X+\Delta)})) =0
\]
for
every positive integer $\ell$.
\end{theorem}
It is essential to assume that $p>41$. Without this assumption, the theorem is false (cf.\ Lemma \ref{l-P^2-B_1}(2)).
\begin{proof}
Suppose $H^0(X, \mathcal{O}_X(K_X + \Delta_{\mathrm{red}} + \rdown{p^{\ell}(K_X+\Delta)})) \neq 0$
for some $\ell \in \mathbb{Z}_{>0}$.
Let us derive a contradiction.
We have
\[
K_X + \Delta_{\mathrm{red}}+ \llcorner p^{\ell}(K_X+\Delta) \lrcorner \sim N
\]
for some effective Weil divisor $N$.
By running a $K_X$-MMP and replacing divisors by their push-forwards,
we may assume that there is a $K_X$-Mori fibre space structure $\pi : X \to T$.
Take the irreducible decomposition of $\Delta$ and define $D$ and $E$ as follows:
\[
\Delta = D+E,
\qquad
D := \sum_{i=1}^r \frac{d_i -1}{d_i} D_i,\qquad \text{and}\qquad
E := \sum_{j=1}^s \frac{e_j-1}{e_j} E_j,
\]
where $D_i$ and $E_j$ are prime divisors,
\[
2 \leq d_i \leq 42, \quad \text{and}\quad
e_j \geq 43 \quad \text{for all}\quad i, j.
\]
The following holds:
\begin{align*}
\Delta_{\mathrm{red}} + \rdown{p^{\ell} \Delta}
&= \left( \sum_{i=1}^r D_i + \sum_{j=1}^s E_j\right) +
\rdown{ p^{\ell}\left(\sum_{i=1}^r \frac{d_i -1}{d_i} D_i + \sum_{j=1}^s \frac{e_j-1}{e_j} E_j\right) }\\
&\leq \left( \sum_{i=1}^r D_i + \sum_{j=1}^s E_j\right) +
p^{\ell}\left(\sum_{i=1}^r \frac{d_i -1}{d_i} D_i + \sum_{j=1}^s \frac{e_j-1}{e_j} E_j\right) - \sum_{i=1}^r \frac{1}{d_i} D_i\\
&= (p^{\ell}+1)\left( D+ \frac{p^{\ell} E + E_{\mathrm{red}}}{p^{\ell}+1}\right).
\end{align*}
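Here the middle inequality uses that $p^{\ell}\,\frac{d_i -1}{d_i}$ is never an integer (as $2 \leq d_i \leq 42 < p$), so that its round-down is at most $p^{\ell}\,\frac{d_i -1}{d_i} - \frac{1}{d_i}$, while for the $E_j$-part we simply drop the round-down. The last equality can be checked coefficientwise:
\[
1 + p^{\ell}\,\frac{d_i -1}{d_i} - \frac{1}{d_i} = (p^{\ell}+1)\,\frac{d_i -1}{d_i}
\qquad \text{and} \qquad
1 + p^{\ell}\,\frac{e_j-1}{e_j} = (p^{\ell}+1)\cdot \frac{p^{\ell}\,\frac{e_j-1}{e_j} + 1}{p^{\ell}+1}.
\]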
Therefore, we obtain
\[
0 \leq N \sim
K_X + \Delta_{\mathrm{red}} + \rdown{p^{\ell}(K_X+\Delta)}
\leq (p^{\ell}+1)\left( K_X+ D+ \frac{p^{\ell} E + E_{\mathrm{red}}}{p^{\ell}+1}\right).
\]
In particular, we obtain $E \neq 0$.
Since $-(K_X+D+E)$ is $\pi$-ample and
\[
K_X+ D+ \frac{p^{\ell} E + E_{\mathrm{red}}}{p^{\ell} +1}
\]
is $\pi$-nef,
we can find a $\mathbb{Q}$-divisor $E'$ such that
\[
E \leq E' \leq \frac{p^{\ell} E + E_{\mathrm{red}}}{p^{\ell}+1}\qquad \text{and}\qquad
K_X+D+E' \equiv_{\pi} 0.
\]
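One way to make the choice of $E'$ explicit is the following interpolation (the divisors $E'_s$ and the curve $\xi$ below are auxiliary notation used only for this verification). For $0 \leq s \leq 1$ set
\[
E'_s := E + s\cdot\frac{E_{\mathrm{red}}-E}{p^{\ell}+1},
\qquad\text{so that}\qquad
E'_0 = E, \quad E'_1 = \frac{p^{\ell} E + E_{\mathrm{red}}}{p^{\ell}+1}.
\]
For a curve $\xi$ contracted by $\pi$, the function $s \mapsto (K_X+D+E'_s)\cdot \xi$ is affine, negative at $s=0$ (as $-(K_X+D+E)$ is $\pi$-ample) and non-negative at $s=1$ (by $\pi$-nefness); choosing $s$ with $(K_X+D+E'_s)\cdot\xi = 0$ and using that $\pi$ is a Mori fibre space (so relative numerical triviality can be tested on a single contracted curve) gives the desired $E' := E'_s$.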
If $\dim T=0$, then we get $E'=0$ by Theorem \ref{t-ACC2}.
However, this implies $E=0$, which is a contradiction.
Assume $\dim T=1$.
Pick a general fibre $W \simeq \P^1$ of $\pi : X \to T$.
By $p\geq 5$, every $\pi$-horizontal prime divisor contained in $\Delta$ is generically \'etale over $T$.
Hence it holds that $\coeff (\Delta|_W) \subseteq \coeff(\Delta)$ and every coefficient of $E|_W$ is $\geq \frac{42}{43}$.
We then get $E'=0$ by Lemma \ref{l-curve-ACC}, which is a contradiction.
\end{proof}
\subsection{Log del Pezzo pairs are quasi-F-split in characteristic $p>41$}\label{ss-LDP>41}
\begin{theorem} \label{t-BS-vanishing}
Let $(X, \Delta)$ be a log del Pezzo pair.
Let $\varphi \colon V \to X$ be a birational morphism from a smooth projective surface $V$ and let $(V, \Delta_V)$ be a klt pair such that $\Delta_V$ is simple normal crossing, $-(K_V+\Delta_V)$ is nef, $\varphi_*\Delta_V = \Delta$, and $(V, \Supp \Delta_V)$ lifts to $W_2(k)$.
Let $\theta \colon W \to V$ be a birational morphism from a smooth projective surface $W$ such that the composition
\[
\psi := \varphi \circ \theta \colon W \xrightarrow{\theta} V \xrightarrow{\varphi} X
\]
is a log resolution of $(X, \Delta)$. Write $K_W + \Delta_W = \theta^*(K_V+\Delta_V)$ and
let $\mathbb{E}_W$ be the union of all the $\psi$-exceptional prime divisors $E$
such that $\lfloor K_W + \Delta_W \rfloor \cdot E = 0$ and
that the coefficient of $E$ in $\Delta_W$ is an integer.
Assume that $p$ does not divide the determinant of the intersection matrix of $\mathbb{E}_W$.
Then
\[
H^0(W, \Omega^1_W(\log D) \otimes \mathcal{O}_W( \lfloor K_W + \Delta_W \rfloor))=0
\]
for the reduced divisor $D$ on $W$ satisfying $\Supp D =
\mathrm{Exc}(\psi) \cup
\Supp \psi^{-1}_* \Delta$.
\end{theorem}
\begin{proof}
Recall that
$\Omega^1_W(\log D)( \lfloor K_W + \Delta_W \rfloor) = \Omega^1_W(\log D) \otimes \mathcal{O}_W( \lfloor K_W + \Delta_W \rfloor)$ under our notation.
\begin{claim}\label{c-BS-vanishing}
Let $S$ be the sum of all the $\psi$-exceptional prime divisors whose coefficients in $\Delta_W$ are integers. Then
\[
H^0(W, \Omega^1_W(\log (D - S))( \lfloor K_W + \Delta_W \rfloor)) =
H^0(W, \Omega^1_W(\log D)( \lfloor K_W + \Delta_W \rfloor)).
\]
\end{claim}
\vspace{0.3em}
The assertion follows immediately from Claim \ref{c-BS-vanishing}. Indeed, since $\theta_{*}S$ contains all the $\varphi$-exceptional prime divisors whose coefficients in $\Delta_V$ are equal to zero, we have $\theta_{*}(D-S)\subseteq \Supp \Delta_V$.
By $\theta_{*}\Delta_W=\Delta_V$, we have
\[
H^0(W, \Omega^1_W(\log (D - S))( \lfloor K_W + \Delta_W \rfloor)) \subseteq H^0(V, \Omega^1_V(\log \Supp \Delta_V)( \lfloor K_V + \Delta_V \rfloor))=0,
\]
where the last equality follows from
a nef-and-big Akizuki-Nakano vanishing \cite[Theorem 2.11]{Kaw3}.
\begin{proof}[Proof of Claim \ref{c-BS-vanishing}]
Let $E$ be a prime divisor contained in $\Supp S$.
Since $K_W+\Delta_W=\theta^{*}(K_V+\Delta_V)$ is anti-nef and
the coefficient of $E$ in $\Delta_W$ is an integer,
we have
\[
\rdown{K_W + \Delta_W} \cdot E\leq (K_W+\Delta_W)\cdot E\leq 0.
\]
First, we assume that $\rdown{K_W + \Delta_W} \cdot E<0$. Then, $H^0(E, \mathcal{O}_E(\rdown{K_W+\Delta_W}))=0$, and so by tensoring the following short exact sequence
\[
0 \to \Omega^1_W(\log (D - E)) \to \Omega^1_W(\log D) \to \mathcal{O}_E \to 0,
\]
with $\mathcal{O}_W(\rdown{K_W+\Delta_W})$,
we see that
\[
H^0(W, \Omega^1_W(\log (D - E))( \lfloor K_W + \Delta_W \rfloor)) =
H^0(W, \Omega^1_W(\log D)( \lfloor K_W + \Delta_W \rfloor)).
\]
Repeating this procedure, we can assume that every prime divisor $E$ in $S$ satisfies $\rdown{K_W + \Delta_W} \cdot E=0$, that is, that $S={\mathbb E_W}$.
Using essentially the same argument as in \cite[Subsection 8.C]{graf21}, we show that
\[
H^0(W, \Omega^1_W(\log (D - S))( \lfloor K_W + \Delta_W \rfloor)) =
H^0(W, \Omega^1_W(\log D)( \lfloor K_W + \Delta_W \rfloor)).
\]
By looking at the short exact sequence
\[
0 \to \Omega^1_W(\log (D - S)) \to \Omega^1_W(\log D) \to \bigoplus_{E \subseteq S} \mathcal{O}_E \to 0,
\]
tensored with $\mathcal{O}_W(\lfloor K_W + \Delta_W \rfloor)$, it suffices to show that the homomorphism
\begin{equation} \label{e1-BS-vanishing}
\overbrace{\bigoplus_{E \subseteq S} H^0(E, \mathcal{O}_E(\lfloor K_W + \Delta_W \rfloor))}^{=\ \bigoplus_{E \subseteq S} H^0(E, \mathcal{O}_E)} \to
H^1(W, \Omega^1_W(\log (D -S))(\lfloor K_W + \Delta_W \rfloor))
\end{equation}
is injective.
We note that $D-S$ is disjoint from $S$.
Indeed, if there existed a prime divisor $E \subseteq S$ intersecting $D-S$, then
$\lfloor K_W + \Delta_W \rfloor \cdot E <0$, because
the prime divisors in $D-S$ have non-integral coefficients in $\Delta_W$; this is impossible, since after the reduction above every prime divisor $E \subseteq S$ satisfies $\lfloor K_W + \Delta_W \rfloor \cdot E = 0$.
Now, since $D-S$ is disjoint from $S$, the map (\ref{e1-BS-vanishing}) factors through
\begin{equation} \label{e2-BS-vanishing}
\bigoplus_{E \subseteq S} H^0(E, \mathcal{O}_E) \to \bigoplus_{E \subseteq S} H^1(E, \Omega^1_E)
\end{equation}
via restriction $\Omega^1_W( \log (D - S))(\lfloor K_W + \Delta_W \rfloor) \to \bigoplus_{E \subseteq S} \Omega^1_E$.
Map (\ref{e2-BS-vanishing}) is given by the intersection matrix of $S$ (as explained in \cite[Subsection 8.C]{graf21}, this follows from \cite[Lemma 3.5, Fact 3.7, and Lemma 3.8]{gk14} which are essentially characteristic-free, just replace $\mathbb{C}$ by the base field $k$), and so it is an isomorphism
by the assumption that the determinant of the intersection matrix of $S={\mathbb{E}_W}$ is not divisible by $p$. In particular,
map (\ref{e1-BS-vanishing}) is injective.
\end{proof}
\end{proof}
\begin{lemma}\label{l-log-lift-enough}
Assume $p>41$.
Let $(X,\Delta)$ be a log del Pezzo pair with standard coefficients.
Suppose that $(X,\Delta)$ is log liftable.
Then $(X, \Delta)$ is quasi-$F$-split.
\end{lemma}
\begin{proof}
Since $(X, \Delta)$ is log liftable, there exists a log resolution $f\colon Y\to X$ of $(X, \Delta)$ such that $(Y,f_{*}^{-1}\Delta+\Exc(f))$ lifts to $W(k)$.
We define a $\mathbb{Q}$-divisor $\Delta_Y$ on $Y$ by
\[
K_Y+\Delta_Y=f^{*}(K_X+\Delta).
\]
Since $X$ is $\mathbb{Q}$-factorial,
we can find an effective $f$-exceptional $\mathbb{Q}$-divisor $F$ such that $-F$ is $f$-ample.
For $0 < \epsilon \ll 1$, we set $B_Y := \Delta_Y + \epsilon F$. To prove that $(X, \Delta)$ is quasi-$F$-split,
it suffices to show the following properties (A)--(C)
(Theorem \ref{t-QFS-criterion}).
\begin{enumerate}
\item[(A)] $\rdown{B_Y} \leq 0$, $f_*B_Y =\Delta$, and $-(K_Y+B_Y)$ is ample.
\item[(B)] $H^0(Y, \mathcal{O}_Y(K_Y + (B_Y)_{\mathrm{red}} + p^{\ell}(K_Y+B_Y))) =0$
for any $\ell \in \mathbb{Z}_{>0}$.
\item[(C)] $H^0(Y, \Omega_Y^1(\log\, (B_Y)_{\mathrm{red}}) \otimes \mathcal{O}_Y(K_Y+B_Y))=0$.
\end{enumerate}
The property (A) follows from the definition of $B_Y$.
Since $(Y, (B_Y)_{\mathrm{red}})$ lifts to $W(k)$ and $-(K_Y+B_Y)$ is an ample $\mathbb{Q}$-divisor satisfying $\Supp(\{K_Y+B_Y\})\subseteq \Supp (B_Y)_{\mathrm{red}}$, (C) follows from {\cite[Theorem 2.11]{Kaw3}}.
It suffices to show (B).
Since $f_{*}B_Y=\Delta$ and $f_{*}(B_Y)_{\mathrm{red}}= \Delta_{\mathrm{red}}$, we have
\[
H^0(Y, \mathcal{O}_Y(K_Y+(B_Y)_{\mathrm{red}}+p^{\ell}(K_Y+B_Y)))\subseteq
H^0(X, \mathcal{O}_X(K_X+\Delta_{\mathrm{red}}+p^{\ell}(K_X+\Delta))).
\]
Then the latter cohomology vanishes by Theorem \ref{t-42-vanishing}.
\end{proof}
We are ready to prove the main theorem of this section.
\begin{theorem} \label{t-LDP-QFS}
Assume $p>41$.
Let $(X,\Delta)$ be a log del Pezzo pair with standard coefficients.
Then $(X,\Delta)$ is quasi-$F$-split.
\end{theorem}
\begin{proof}
Recall that a point $x \in X$ is called a \emph{special point} of $(X,\Delta)$ if
Theorem \ref{t-hiromu-resol}(2) holds around $x$
(in particular, $X$ is smooth at $x$ and we have $\Delta = \frac{1}{2} C$ around $x$ for a prime divisor $C$).
We consider two cases separately.\\
\noindent \textbf{Case 1.}
For every special point $x$ of $(X,\Delta)$,
$x$ is of type $a_x$ with $a_x < p$ (see Remark \ref{r-cusp-resol}). \\
Let $\varphi \colon V \to X$ be a resolution as in Corollary \ref{cor:Hiromu-resolution}; explicitly, $\varphi$ is a log resolution over every point $x \in X$ except the special points of $(X, \Delta)$.
We construct a birational morphism $\theta \colon W \to V$ from a smooth projective surface as follows.
Over a non-special point of $(X, \Delta)$, $\theta$ is an isomorphism.
Over a special point of $(X, \Delta)$, we perform two further blowups, so that the composition
\[
\psi : W \xrightarrow{\theta} V \xrightarrow{\varphi} X
\]
is a log resolution of $(X, \Delta)$ (cf.\ Remark \ref{r-cusp-resol}).
We define $\mathbb{Q}$-divisors $\Delta_V$ and $\Delta_W$ by
\[
K_V+ \Delta_V=\varphi^*(K_X+\Delta)\qquad\text{and}\qquad
K_W+\Delta_W = \theta^*(K_V+\Delta_V) (=\psi^*(K_X+\Delta)).
\]
Pick an effective $\psi$-exceptional $\mathbb{Q}$-divisor $F$ on $W$ such that $-F$ is $\psi$-ample.
Set $B_W := \Delta_W + \epsilon F$ for $0 < \epsilon \ll 1$.
It is enough to show that the following conditions (A)--(C) hold (Theorem \ref{t-QFS-criterion}).
\begin{enumerate}
\item[(A)] $\rdown{B_W} \leq 0$, $\psi_*B_W =\Delta$, and $-(K_W+B_W)$ is ample.
\item[(B)] $H^0(W, \mathcal{O}_W(K_W + (B_W)_{\mathrm{red}} + p^{\ell}(K_W+B_W))) =0$
for any $\ell \in \mathbb{Z}_{>0}$.
\item[(C)] $H^0(W, \Omega_W^1(\log (B_W)_{\mathrm{red}}) \otimes \mathcal{O}_W(K_W+B_W))=0$.
\end{enumerate}
By construction, (A) holds.
We have that
\[
H^0(W, \mathcal{O}_W(K_W + (B_W)_{\mathrm{red}} + \rdown{p^{\ell}(K_W+B_W)})) \subseteq
H^0(X, \mathcal{O}_X(K_X + \Delta_{\mathrm{red}} + \rdown{p^{\ell}(K_X+\Delta)}))=0
\]
for every $\ell \geq 1$ by Theorem \ref{t-42-vanishing}.
Thus (B) holds.
Let us show (C).
By $\rdown{K_W+B_W} = \rdown{K_W+\Delta_W}$, we have
\[
H^0(W, \Omega^1_W(\log (B_W)_{\mathrm{red}})(\rdown{K_W+B_W}))\subseteq
H^0(W, \Omega^1_W(\log D)(\rdown{K_W+\Delta_W})),
\]
where $D$ is the reduced divisor with $\Supp D= \Supp \psi^{-1}_*\Delta \cup \Exc(\psi)$.
Note that $(V, \Supp \Delta_V)$ lifts to $W_2(k)$ by Theorem \ref{thm:liftability-of-Hiromu-resolution}.
By Remark \ref{r-cusp-resol}, we may apply Theorem \ref{t-BS-vanishing}, so that
$H^0(W, \Omega^1_W(\log D)(\rdown{K_W+\Delta_W})) = 0$.
Hence (C) holds.
Thus $(X,\Delta)$ is quasi-$F$-split.\\
\noindent \textbf{Case 2.} There exists a special point $x$ of $(X,\Delta)$ of type $a_x$ with $a_x \geq p$. \\
Around $x$, we can write $\Delta = \frac{1}{2}C$ for a prime divisor $C$ on $X$.
We have $a_x \geq p \geq 43$.
Let $\mu : X' \to X$ be the minimal resolution of $X$ and
set $C_{X'} : =\mu_*^{-1}C$.
The singularity $x$ of $C$ is resolved by $n_x$ blowups, where $a_x = 2n_x +1$ (cf.\ Remark \ref{r-cusp-resol}); moreover, until the singularity $x \in C$ is resolved, the point lying over $x$ on each successive strict transform of $C$ has multiplicity two.
Since $X$ is smooth at $x$, the curves $C$ and $C_{X'}$ are isomorphic around $x$; as each blowup at a double point drops the arithmetic genus by one, we obtain $p_a(C_{X'}) \geq n_x$ \cite[Corollary V.3.7]{hartshorne77}.
Then
\[
p_a(C_{X'}) \geq n_x = \frac{a_x-1}{2} \geq \frac{43-1}{2} = 21 >18.
\]
By Theorem \ref{t-bdd-or-liftable}, $(X, \Delta)$ is log liftable.
Then Lemma \ref{l-log-lift-enough} implies that $(X, \Delta)$ is quasi-$F$-split.
\end{proof}
\section{Log del Pezzo pairs which are not quasi-$F$-split}\label{s-P^2-cex}
In this section, we prove that there exists a log del Pezzo pair $(X, \Delta)$ in characteristic $41$ such that $\Delta$ has standard coefficients and that $(X, \Delta)$ is not quasi-$F$-split (Theorem \ref{t-P^2-main}). Since our construction can be applied in some other characteristics as well,
we start by providing a list of pairs $(X, \Delta)$ for which our argument works
(Notation \ref{n-P^2-cex}).
As explained in (\ref{sss-LDP-nonQFS}), the key part is to show that
$h^1(X, B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}(K_X+\Delta))) =1$ (Proposition \ref{p-P^2-BZ-key}(5)).
\begin{table}[h]
\caption{The following is a list of the dimensions of the cohomology groups that we will compute. Here we put $n, m \in \mathbb{Z}_{\geq 0}$ and $\Omega^{1}_{X,\log} := \Omega^{1}_{X}(\log \Delta_{\mathrm{red}})$. For the first column see Lemma \ref{l-P^2-B_1} and Lemma \ref{l-P^2-H0Omega}. For the second column see Lemma \ref{l-P^2-H0Omega}, Lemma \ref{l-P^2-vanish}, and Proposition \ref{p-P^2-BZ-key}. For the last column see Lemma \ref{l-P^2-H0Omega}(3) and Proposition \ref{p-P^2-BZ-key}.}
\centering
{\renewcommand{\arraystretch}{1.35}%
\begin{tabular}{|l|c|c|c|}
\hline
& $h^0$ & $h^1$ & $h^{2}$ \\
\hline
\multirow{2}{*}{$\Omega^{1}_{X}(-n)$} & \multirow{2}{*}{$0$} & 0 (if $n>0$) & \multirow{2}{*}{see (\ref{l-P^2-H0Omega}(3))} \\
&&1 (if $n=0$) &
\\
\hline
$\Omega^{1}_{X} (p^{n} (K _{X}+ \Delta))$ & 0 &&
\\
\hline
$ \Omega^{1}_{X, \log}(p^{n}(K_{X}+ \Delta))$ & 0 & $0$ (if $n=1$) & 0 (if $n=1$) \\
\hline
\multirow{2}{*}{$Z_{m} \Omega^{1}_{X, \log}(p^{n}(K_{X}+ \Delta))$} & \multirow{2}{*}{0} & \hspace{0.4em} $0$ (if $n=m+1 \geq 2$) & \\
&& \hspace{0.4em} 1 (if $n=m \geq 1$) \hspace{1.35em} &\\
\hline
\multirow{3}{*}{$B_{m} \Omega^{1}_{X,\log}(p^{n}(K_{X}+ \Delta))$} &
\multirow{3}{*}{$0$}
&
\hspace{0.2em} $0$ (if $n=m+1 \geq 2$) \hspace{0.7em}
&
\\
&& \hspace{0.2em} $0$ (if $m=1$ and $n \geq 2$) &
\\
&& \hspace{0.2em} $1$ (if $n=m \geq 1$) \hspace{2.35em} & \\
\hline
$B_{1}\Omega^{2}_{X,\log}(p(K_{X}+ \Delta))$ & 1 &&\\
\hline
\end{tabular}}
\end{table}
\begin{notation}\label{n-P^2-cex}
Let $k$ be an algebraically closed field of characteristic $p>0$.
Set $X := \mathbb P^2_k$ and
let $L_1, L_2, L_3, L_4$ be distinct lines such that $L_1 + L_2 + L_3 + L_4$ is simple normal crossing.
Assume that $(p, \Delta)$ satisfies one of the following.
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item $p=11$ and $\Delta = \frac{2}{3}L_1 + \frac{2}{3} L_2 + \frac{3}{4} L_3 + \frac{10}{11}L_4$.
\item $p=17$ and $\Delta = \frac{1}{2}L_1 + \frac{2}{3} L_2 + \frac{8}{9} L_3 + \frac{16}{17}L_4$.
\item $p=19$ and $\Delta = \frac{1}{2}L_1 + \frac{3}{4} L_2 + \frac{4}{5} L_3 + \frac{18}{19}L_4$.
\item $p=23$ and $\Delta = \frac{1}{2}L_1 + \frac{2}{3} L_2 + \frac{7}{8} L_3 + \frac{22}{23}L_4$.
\item $p=41$ and $\Delta = \frac{1}{2}L_1 + \frac{2}{3} L_2 + \frac{6}{7} L_3 + \frac{40}{41}L_4$.
\end{enumerate}
{In what follows,} for $n \in \mathbb{Z}$, $\mathcal{O}_X(n)$ denotes the invertible sheaf on $X$ with $\deg \mathcal{O}_X(n)=n$.
Recall that $\Delta_{\mathrm{red}} = L_1+L_2+L_3+L_4$.
\end{notation}
\begin{proposition}\label{p-P^2-LDP}
We use Notation \ref{n-P^2-cex}.
Then the following hold.
\begin{enumerate}
\item $-(K_X+\Delta)$ is ample.
\item $\mathcal{O}_X( p(K_X+\Delta)) \simeq \mathcal{O}_X(-1)$.
\end{enumerate}
\end{proposition}
\begin{proof}
{We start with} case (v). We have
\[
\deg \Delta =\frac{1}{2} + \frac{2}{3} + \frac{6}{7} + \frac{40}{41} = \frac{85}{42} + \frac{40}{41},
\]
which implies $\deg (K_X+\Delta) = \frac{-1}{41 \cdot 42}$.
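Explicitly, since $\frac{85}{42} = 2+\frac{1}{42}$ and $\frac{40}{41} = 1-\frac{1}{41}$, we get
\[
\deg(K_X+\Delta) = -3 + \frac{85}{42}+\frac{40}{41} = \frac{1}{42}-\frac{1}{41} = \frac{-1}{41\cdot 42}.
\]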
Thus (1) holds and $\mathcal{O}_X( p(K_X+\Delta)) \simeq \mathcal{O}_X(n)$ for some $n \in \mathbb{Z}_{<0}$.
Let us show (2). We have
\begin{align*}
\Delta
&=\frac{1}{2}L_1 + \frac{2}{3} L_2 + \frac{6}{7} L_3 + \frac{40}{41}L_4\\
&= \frac{21}{42}L_1 + \frac{28}{42} L_2 + \frac{36}{42} L_3 + \frac{40}{41}L_4\\
&>\frac{20}{41}L_1 + \frac{27}{41} L_2 + \frac{35}{41} L_3 + \frac{40}{41}L_4,
\end{align*}
{where we used that $\frac{a}{b} > \frac{a-1}{b-1}$ for $b>a>1$.} Then
\[
n = \deg (\rdown{p(K_X+\Delta)}) \geq -3 \cdot 41 + (20 + 27 +35 +40) = (-123 + 122) =-1.
\]
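Alternatively, the round-down can be computed directly: $\rdown{41\cdot\tfrac{1}{2}}=20$, $\rdown{41\cdot\tfrac{2}{3}}=27$, $\rdown{41\cdot\tfrac{6}{7}}=35$, and $41\cdot\tfrac{40}{41}=40$, so that
\[
\rdown{41\Delta} = 20L_1+27L_2+35L_3+40L_4
\qquad\text{and}\qquad
n = \deg\bigl(41 K_X + \rdown{41\Delta}\bigr) = -123+122 = -1.
\]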
Thus (2) holds.
The same computation works for the other cases (i)--(iv).
We only summarise the corresponding equalities and inequalities in the list below, and indicate after the list how they yield (1) and (2).
\begin{enumerate}
\item[(i)]
$\deg \Delta = \frac{2}{3}+ \frac{2}{3} + \frac{3}{4} + \frac{10}{11} = \frac{25}{12} + \frac{10}{11},\quad
\Delta > \frac{7}{11}L_1 + \frac{7}{11} L_2 + \frac{8}{11} L_3 + \frac{10}{11}L_4$.
\item[(ii)]
$\deg \Delta = \frac{1}{2}+ \frac{2}{3} + \frac{8}{9} + \frac{16}{17} = \frac{37}{18} + \frac{16}{17},\quad
\Delta > \frac{8}{17}L_1 + \frac{11}{17} L_2 + \frac{15}{17} L_3 + \frac{16}{17}L_4$.
\item[(iii)]
$\deg \Delta =\frac{1}{2} + \frac{3}{4} + \frac{4}{5} + \frac{18}{19} = \frac{41}{20} + \frac{18}{19},\quad
\Delta > \frac{9}{19}L_1 + \frac{14}{19} L_2 + \frac{15}{19} L_3 + \frac{18}{19}L_4$.
\item[(iv)]
$\deg \Delta =\frac{1}{2} + \frac{2}{3} + \frac{7}{8} + \frac{22}{23} = \frac{49}{24} + \frac{22}{23},\quad
\Delta > \frac{11}{23}L_1 + \frac{15}{23} L_2 + \frac{20}{23} L_3 + \frac{22}{23}L_4$.
\end{enumerate}
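In each of the cases (i)--(iv), the numbers above fit the same pattern as in case (v): the displayed equality reads $\deg \Delta = 3 + \frac{1}{p+1} - \frac{1}{p}$, and the numerators of the fractions appearing in the displayed inequality sum to $3p-1$, so that
\[
\deg (K_X+\Delta) = \frac{-1}{p(p+1)} < 0
\qquad\text{and}\qquad
\deg (\rdown{p(K_X+\Delta)}) \geq -3p + (3p-1) = -1,
\]
which gives (1) and (2) exactly as in case (v).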
\end{proof}
\begin{lemma}\label{l-P^2-B_1}
We use Notation \ref{n-P^2-cex}. Then the following hold.
\begin{enumerate}
\item $(X, \{p^r \Delta \})$ is globally $F$-regular for all $r \in \mathbb{Z}_{>0}$.
\item $h^0(X, B_1\Omega_X^2(\log \Delta_{\mathrm{red}})(p(K_X+\Delta)))=1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let us show (1).
Fix $r \in \mathbb{Z}_{>0}$.
By Notation \ref{n-P^2-cex}, there exists $a \in \mathbb{Q}$ such that
$0 < a <1$ and we have $\{p^r \Delta \} \leq a(L_1 + L_2 + L_3)$.
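For instance, in case (v) we may take $a = \frac{6}{7}$: the coefficient of $L_4$ in $p^r\Delta$ is $p^{r-1}(p-1) \in \mathbb{Z}$, so $L_4$ does not appear in $\{p^r\Delta\}$, while the denominators $2, 3, 7$ of the coefficients of $L_1, L_2, L_3$ are prime to $p$, so that
\[
\{p^r\Delta\} \leq \frac{1}{2} L_1 + \frac{2}{3} L_2 + \frac{6}{7} L_3 \leq \frac{6}{7}\,(L_1+L_2+L_3).
\]
The other cases are analogous.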
We may assume that $L_1, L_2, L_3$ are torus invariant divisors on
a projective toric surface $X=\mathbb P^2_k$,
and hence $(X, a(L_1+ L_2 + L_3))$ is globally $F$-regular.
Therefore, also $(X, \{p^r \Delta \})$ is globally $F$-regular.
Thus (1) holds.
Let us show (2).
By (\ref{exact1}), we have the following exact sequence:
\begin{multline*}
0 \to B_1\Omega_X^2(\log \Delta_{\mathrm{red}})(p(K_X+\Delta)) \to F_*\Omega_X^2(\log \Delta_{\mathrm{red}})(p(K_X+\Delta)) \\
\overset{C}{\longrightarrow}
\Omega_X^2(\log \Delta_{\mathrm{red}})(K_X+\Delta) \to 0.
\end{multline*}
Hence, the equality $h^0(X, B_1\Omega_X^2(\log \Delta_{\mathrm{red}})(p(K_X+\Delta)))=1$ holds by
\begin{align*}
H^0(X, \Omega_X^2(\log \Delta_{\mathrm{red}})(K_X+\Delta))
&=H^0(X, \Omega_X^2(\log \Delta_{\mathrm{red}})(K_X))\\
&=
H^0(X, {\mathcal{O}_X}(2K_X+ \Delta_{\mathrm{red}}))=H^0(X, \mathcal{O}_X(-6+4))=0, \textrm{ and }\\
h^0(X, \Omega_X^2(\log \Delta_{\mathrm{red}})(p(K_X+\Delta)))
&=h^0(X, {\mathcal{O}_X}(K_X+\Delta_{\mathrm{red}} + \llcorner p(K_X+\Delta)\lrcorner)) \\
&= h^0(X, \mathcal{O}_X(-3+4-1))=1,
\end{align*}
where $\mathcal{O}_X(\rdown{p(K_X+\Delta)}) \simeq \mathcal{O}_X(-1)$ follows from Proposition \ref{p-P^2-LDP}.
Thus (2) holds.
\end{proof}
\begin{lemma}\label{l-P^2-H0Omega}
We use Notation \ref{n-P^2-cex}. Then the following hold.
\begin{enumerate}
\setlength{\itemsep}{6pt}
\item $H^0(X ,\Omega_X^1 \otimes \mathcal{O}_X(n))=0$ for all $n \leq 0$.
\item $\!\begin{aligned}[t]
h^1(X ,\Omega_X^1 \otimes \mathcal{O}_X(n))=
\begin{cases}
0 \quad (n<0)\\
1 \quad (n=0).
\end{cases}
\end{aligned}$
\item
$h^2(X,\Omega_X^1 \otimes \mathcal{O}_X(n))= 3 h^0(X, \mathcal{O}_X(-2-n)) - h^0(X, \mathcal{O}_X(-3-n))$ for all $n \in \mathbb{Z}$.
\item $\!\begin{aligned}[t]
&H^0(X, \Omega^1_X(p^n(K_X+\Delta))) = 0, \textrm{ and } \\ &H^0(X, \Omega^1_X(\log \Delta_{\mathrm{red}})(p^n(K_X+\Delta))) =0, \textrm{ for all } n \geq 0.
\end{aligned}
$
\item
$\!\begin{aligned}[t]
&H^0(X, Z_m\Omega^1_X(\log \Delta_{\mathrm{red}})(p^n(K_X+\Delta))) = 0, \textrm{ and } \\
&H^0(X, B_m\Omega^1_X(\log \Delta_{\mathrm{red}})(p^n(K_X+\Delta))) =0, \textrm{ for all } m\geq 0 \textrm{ and } n \geq 0.
\end{aligned}$
\end{enumerate}
\end{lemma}
\begin{proof}
The assertions (1)--(3) follow from the dual of the Euler sequence \cite[Ch. II, Theorem 8.13]{hartshorne77}:
\[
0 \to \Omega_X^1 \to \mathcal{O}_X(-1)^{\oplus 3}
\to \mathcal{O}_X \to 0.
\]
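For convenience, here is the cohomology chase. Twisting by $\mathcal{O}_X(n)$ and using that $H^1(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(m))=0$ for every $m \in \mathbb{Z}$, the long exact sequence splits into
\[
0 \to H^0(X, \Omega_X^1 \otimes \mathcal{O}_X(n)) \to H^0(X, \mathcal{O}_X(n-1))^{\oplus 3}
\to H^0(X, \mathcal{O}_X(n)) \to H^1(X, \Omega_X^1 \otimes \mathcal{O}_X(n)) \to 0
\]
and
\[
0 \to H^2(X, \Omega_X^1 \otimes \mathcal{O}_X(n)) \to H^2(X, \mathcal{O}_X(n-1))^{\oplus 3}
\to H^2(X, \mathcal{O}_X(n)) \to 0.
\]
The first sequence gives (1) and, since $h^0(X, \mathcal{O}_X(n-1))=0$ for $n \leq 0$, also $h^1(X, \Omega_X^1 \otimes \mathcal{O}_X(n)) = h^0(X, \mathcal{O}_X(n))$ for $n \leq 0$, which is (2); the second sequence gives (3) via Serre duality $h^2(X, \mathcal{O}_X(m)) = h^0(X, \mathcal{O}_X(-3-m))$.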
Since (5) follows from (4),
it is enough to show (4).
By $\deg\,\llcorner p^n(K_X+\Delta)\lrcorner <0$ (Proposition \ref{p-P^2-LDP}(1)),
it follows from (1) that $H^0(X, \Omega^1_X(p^n(K_X+\Delta))) =0$.
We have an exact sequence:
\[
0 \to \Omega_X^1 \to \Omega_X^1(\log \Delta_{\mathrm{red}}) \to \bigoplus_{i=1}^4 \mathcal{O}_{L_i} \to 0.
\]
Again by $\deg\,\llcorner p^n(K_X+\Delta)\lrcorner <0$, we have that
\[
H^0(L_i, \mathcal{O}_X(p^n(K_X+\Delta))|_{L_i}) =0
\]
for every $1 \leq i\leq 4$.
Therefore, we get $H^0(X, \Omega^1_X(\log \Delta_{\mathrm{red}})(p^n(K_X+\Delta))) =0$.
Thus (4) holds.
\end{proof}
\begin{lemma}\label{l-P^2-vanish}
We use Notation \ref{n-P^2-cex}.
Then the following hold.
\begin{enumerate}
\item $H^1(X, B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{s}(K_X+\Delta)))=0$ for all $s\geq 2$, and
\item $H^1(X, B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}(K_X+\Delta)))=0$ for all $m \geq 1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Set $D :=K_X+\Delta$. Let us show (1).
We have the following exact sequence (Lemma \ref{lem:Serre's map}):
\[
0 \to \mathcal{O}_X (p^{s-1}D ) \xrightarrow{F} F_* \mathcal{O}_X(p^{s}D) \to B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{s}D) \to 0.
\]
By $H^1(X, F_* \mathcal{O}_X(p^{s}D))=0$, it suffices to show that
the $\mathcal{O}_X$-module homomorphism
\[
F: \mathcal{O}_X (p^{s-1}D ) \to F_* \mathcal{O}_X(p^{s}D)
\]
splits. This is equivalent to the splitting of
\[
F: \mathcal{O}_X \to F_* \mathcal{O}_X( p \{ p^{s-1} \Delta\}),
\]
{given that $\rdown{p^sD}-p\rdown{p^{s-1}D} = \rdown{p(p^{s-1}D-\rdown{p^{s-1}D})} = \rdown{p\{p^{s-1}D\}} = \rdown{p\{p^{s-1}\Delta\}}$.}
This holds because $(X, \{ p^{s-1} \Delta\})$ is globally $F$-regular
by Lemma \ref{l-P^2-B_1}(1) {(cf.\ \cite[Lemma 2.18]{KTTWYY1})}.
Thus (1) holds.
Let us show (2).
We have the following exact sequence (\ref{exact2}):
\begin{multline*}
0 \to F_*^{m-1}(B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D)) \to B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D) \\
\overset{C}{\longrightarrow} B_{m-1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}D) \to 0.
\end{multline*}
By (1), it holds that $H^1(X, F_*^{m-1}(B_1{\Omega^1_X}(\log \Delta_{\mathrm{red}})(p^{m+1}D)))=0$ for any $m \geq 1$,
which induces an injection:
\[
H^1(X, B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D)) \hookrightarrow H^1(X, B_{m-1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}D)).
\]
Therefore, we obtain a sequence of injections
\begin{multline*}
H^1(X, B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D)) \hookrightarrow H^1({X,} B_{m-1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}D))
\hookrightarrow
\\
\ \cdots\ \hookrightarrow
H^1(X, B_2\Omega_X^1(\log \Delta_{\mathrm{red}})(p^3D)) \hookrightarrow
H^1(X, B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(p^2D))=0,
\end{multline*}
where the last equality holds by (1). Thus (2) holds.
\end{proof}
\begin{proposition}\label{p-P^2-BZ-key}
We use Notation \ref{n-P^2-cex}.
Then the following hold.
\begin{enumerate}
\setlength{\itemsep}{3pt}
\item $H^j(X, \Omega_X^1(\log \Delta_{\mathrm{red}})(p(K_X+\Delta)))=0$, for all $j \in \mathbb{Z}$.
\item $\! \begin{aligned}[t]
H^j&(X, B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}(K_X+\Delta))) \\
&\simeq
H^j(X, Z_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}(K_X+\Delta))), \textrm{ for all } j \in \mathbb{Z} \textrm{ and } m \in \mathbb{Z}_{>0}.
\end{aligned}$
\item
$H^1(X, Z_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}(K_X+\Delta)))=0$,
for all $m \in \mathbb{Z}_{>0}$.
\item
$h^1(X, Z_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}(K_X+\Delta))) =1$,
for all $m \in \mathbb{Z}_{>0}$.
\item
$h^1(X, B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}(K_X+\Delta))) =1$
for all $m \in \mathbb{Z}_{>0}$.
\end{enumerate}
\end{proposition}
\begin{proof}
Set $D :=K_X+\Delta$. Let us show (1).
Recall that $\mathcal{O}_X(p(K_X+\Delta)) \simeq \mathcal{O}_X(-1)$
(Proposition \ref{p-P^2-LDP}(2)).
Therefore, we obtain
$H^j(X, \Omega_X^1 \otimes \mathcal{O}_X(p(K_X+\Delta))) =0$
for any $j \in \mathbb{Z}$ (Lemma \ref{l-P^2-H0Omega}(1)--(3)).
By the exact sequence
\[
0 \to \Omega_X^1 \to \Omega_X^1(\log \Delta_{\mathrm{red}})
\to \bigoplus_{i=1}^4 \mathcal{O}_{L_i} \to 0,
\]
we obtain
\begin{align*}
H^j(X, \Omega_X^1(\log \Delta_{\mathrm{red}})(p(K_X+\Delta)))
&\simeq
\bigoplus_{i=1}^4
H^j(L_i, \mathcal{O}_X(p(K_X+\Delta))|_{L_i})\\
&\simeq
\bigoplus_{i=1}^4
H^j(\mathbb P^1, \mathcal{O}_{\mathbb P^1}(-1))=0
\end{align*}
for any $j \in \mathbb{Z}$. Thus (1) holds.
The assertion (2) holds by (1) and (\ref{exact1}).
Moreover, it follows from Lemma \ref{l-P^2-vanish}(2) that (2) implies (3).
Let us show (4).
By (\ref{exact4}), we have the following exact sequence:
\begin{multline*}
H^0( X, Z_{m-1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD))
\to
H^0(X, B_1\Omega_X^2(\log \Delta_{\mathrm{red}})(pD)) \\
\longrightarrow
H^1(X, Z_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD)) \to
H^1(X, Z_{m-1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD)).
\end{multline*}
It holds that
\begin{align*}
H^0(X, Z_{m-1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD))&=0 \textrm{ (Lemma \ref{l-P^2-H0Omega}(5)), and } \\
h^0(X, B_1\Omega_X^2(\log \Delta_{\mathrm{red}})(pD)) &=1 \textrm{ (Lemma \ref{l-P^2-B_1}(2)).}
\end{align*}
By {(1) and} (3), we have $H^1(X,
Z_{m-1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD))=0$,
and therefore we also get $h^1(X, Z_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD))=1$.
Thus (4) holds.
Let us show (5).
We have the following commutative diagram in which each horizontal sequence is exact (\ref{exact2}):
{\small\[
\begin{tikzcd}[column sep=10pt, nodes={inner sep=3pt}]
0 \arrow{r} & F_*^{m-1}B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD) \arrow{d}{=} \arrow{r} &
B_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD) \arrow{d}{\alpha_m} \arrow{r}{C} & B_{m-1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m-1}D) \arrow{d}{\alpha_{m-1}} \arrow{r} & 0\\
0 \arrow{r} & F_*^{m-1}B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD) \arrow{r} &
Z_m\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD) \arrow{r}{C} & Z_{m-1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m-1}D) \arrow{r} & 0,
\end{tikzcd}
\]}
\!\!where the vertical arrows are the natural inclusions.
We then obtain the following commutative diagram in which each horizontal sequence is exact:
\[
\begin{tikzcd}[column sep=10pt, nodes={inner sep=4pt}]
H^0(B_{m-1}\widetilde{\Omega}_X^1) \arrow{r} \arrow{d}{H^0(\alpha_{m-1})} & H^1(F_*^{m-1}B_1\overline{\Omega}_X^1) \arrow{d}{=} \arrow{r} & H^1(B_m\widetilde{\Omega}_X^1) \arrow{r} \arrow{d}{H^1(\alpha_{m})} & H^1(B_{m-1}\widetilde{\Omega}_X^1) \arrow{r} \arrow{d}{H^1(\alpha_{m-1})} & H^1(F_*^{m-1}B_1\overline{\Omega}_X^1) \arrow{d}{=} \\
H^0(Z_{m-1}\widetilde{\Omega}_X^1) \arrow{r} & H^1(F_*^{m-1}B_1\overline{\Omega}_X^1) \arrow{r} & H^1({Z_m\widetilde{\Omega}_X^1}) \arrow{r} & H^1(Z_{m-1}\widetilde{\Omega}_X^1) \arrow{r} & H^1(F_*^{m-1}B_1\overline{\Omega}_X^1)
\end{tikzcd}
\]
where
\[
B_1\overline{\Omega}^1_X := B_1\Omega^1_X(\log \Delta_{\mathrm{red}})(p^mD),
\]
\[
B_{\ell}\widetilde{\Omega}^1_X := B_{\ell}\Omega^1_X(\log \Delta_{\mathrm{red}})(p^{{\ell}}D),
\quad\text{and} \quad
Z_{\ell}\widetilde{\Omega}^1_X := Z_{\ell}\Omega^1_X(\log \Delta_{\mathrm{red}})(p^{{\ell}}D).
\]
Since $H^0(X, B_{m-1}\widetilde{\Omega}_X^1) = H^0(X, Z_{m-1}\widetilde{\Omega}_X^1) =0$ (Lemma \ref{l-P^2-H0Omega}(5)), $H^0(\alpha_{m-1})$ is an isomorphism.
By the 5-lemma and induction on $m$, it suffices to show that
\[
H^1(\alpha_1) :
H^1(X, B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD)) \to
H^1(X, Z_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD))
\]
is an isomorphism.
To this end, it is enough to prove the following.
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item $h^1(X, Z_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD))=1$.
\item $h^1(X, B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD))=1$.
\item $H^1(\alpha_1)$ is injective.
\end{enumerate}
By (4), (i) holds.
Let us show (ii).
We have the following exact sequence (Lemma \ref{lem:Serre's map}):
\[
0 \to \mathcal{O}_X(D) \to F_*\mathcal{O}_X(pD) \to
B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD) \to 0,
\]
which induces the following exact sequence:
\[
H^1(X, \mathcal{O}_X(pD)) \to
H^1(X, B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD))
\to H^2(X, \mathcal{O}_X(D)) \to H^2(X, \mathcal{O}_X(pD)).
\]
{We have $h^2(X, \mathcal{O}_X(D))=h^2(X, \mathcal{O}_X(K_X+\Delta)) = h^2(X, \mathcal{O}_X(K_X)) = h^0(X, \mathcal{O}_X)=1$.
This, together with $h^1(X, \mathcal{O}_X(pD))=0$ and $h^2(X, \mathcal{O}_X(pD))=0$ (Proposition \ref{p-P^2-LDP}(2)), implies that $h^1(X, B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD))=1$.
}
Thus (ii) holds.
Let us show (iii).
We have the following exact sequence (\ref{exact1}):
\[
0 \to
B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD)
\xrightarrow{\alpha_1}
Z_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD)
\xrightarrow{C}
\Omega_X^1(\log \Delta_{\mathrm{red}})(D) \to 0.
\]
Hence (iii) follows from $H^0(X, \Omega_X^1(\log \Delta_{\mathrm{red}})(D))=0$ (Lemma \ref{l-P^2-H0Omega}(4)).
Thus (5) holds.
\end{proof}
\begin{lemma}\label{l-Bn-Cartier}
Let $X$ be a smooth variety,
$E$ a reduced simple normal crossing divisor,
and
$D$ a $\mathbb{Q}$-divisor satisfying $\Supp\,\{D\} \subseteq E$.
Let $\zeta : B_{m+1}\Omega^1_X(\log E)(p^{m+1}D) \to B_{m}\Omega^1_X(\log E)(p^mD)$ be the homomorphism
that completes the following commutative diagram in which each horizontal sequence is exact (cf.\ Lemma \ref{lem:Serre's map}):
\[
\begin{tikzcd}
0 \arrow{r} & W_{m+1}\mathcal{O}_X(D) \arrow{r}{F} \arrow{d}{R} & F_*W_{m+1}\mathcal{O}_X(pD) \arrow{r}{s} \arrow{d}{R} & B_{m+1}\Omega_X^1(\log E)(p^{m+1}D) \arrow{d}{\zeta} \arrow{r} & 0 \\
0 \arrow{r} & W_{m}\mathcal{O}_X(D) \arrow{r}{F} & F_*W_{m}\mathcal{O}_X(pD) \arrow{r}{s} & B_{m}\Omega_X^1(\log E)(p^mD) \arrow{r} & 0
\end{tikzcd}
\]
Then the {equality} $\zeta =C$ holds for the Cartier operator
$C: B_{m+1}\Omega^1_X(\log E)(p^{m+1}D) \to B_{m}\Omega^1_X(\log E)(p^mD)$.
\end{lemma}
\begin{proof}
For open subsets $V \subset U \subset X$, the restriction map
\[
\Gamma(U, B_{m}\Omega^1_X(\log E)(p^mD)) \to
\Gamma(V, B_{m}\Omega^1_X(\log E)(p^mD))
\]
is injective \cite[Lemma 5.10]{KTTWYY1}.
Therefore, it is enough to show the equality $\zeta =C$ after replacing $X$ by an open subset $X \setminus (\Supp\,E \cup \Supp\,D)$.
Hence we may assume that $D=E=0$.
In this case, the required equality $\zeta =C$
follows from \cite[Ch.\ I, (3.11.6) in Proposition 3.11]{illusie_de_rham_witt}.
Note that, for $n \in \{m, m+1\}$, the above horizontal arrow $s: F_*W_n\mathcal{O}_X \to B_n\Omega_X^1$
coincides with the homomorphism $F^{n-1}d: W_n\Omega^0_X \to B_n\Omega_X^1$ appearing in
\cite{illusie_de_rham_witt}, as pointed out in
\cite[Ch.\ I, Remarques 3.12(a)]{illusie_de_rham_witt} (the only minor difference being that \cite{illusie_de_rham_witt} omits $F_*$).
\end{proof}
\begin{theorem}\label{t-P^2-main}
We use Notation \ref{n-P^2-cex}.
Then $(X=\mathbb P^2, \Delta)$ is a log del Pezzo pair with standard coefficients which is not quasi-$F$-split.
\end{theorem}
\begin{proof}
By Notation \ref{n-P^2-cex} and Proposition \ref{p-P^2-LDP},
$(X=\mathbb P^2, \Delta)$ is a log del Pezzo pair with standard coefficients.
It suffices to show that $(X, \Delta)$ is not quasi-$F$-split.
Set $D :=K_X+\Delta$.
It follows from Lemma \ref{l-Bn-Cartier} that
we have the following commutative diagram in which each horizontal sequence is exact:
\[
\begin{tikzcd}
0 \arrow{r} & \mathcal{O}_X(D) \arrow{r}{\Phi} \arrow{d}{=} & Q_{X, D, m+1} \arrow{r} \arrow{d} & B_{m+1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D) \arrow{r} \arrow{d}{C} & 0 \\
0 \arrow{r} & \mathcal{O}_X(D) \arrow{r}{\Phi} & Q_{X, D, m} \arrow{r} & B_{m}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}D) \arrow{r} & 0.
\end{tikzcd}
\]
We then obtain a commutative diagram
\[
\begin{tikzcd}
H^1(X, B_{m+1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D)) \arrow{r}{\delta_{m+1}} \arrow{d}{C} & H^2(X, \mathcal{O}_X(D)) \arrow{d}{=} \\
H^1(X, B_{m}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD)) \arrow{r}{\delta_m} & H^2(X, \mathcal{O}_X(D)).
\end{tikzcd}
\]
By (\ref{eq:coh-def-of-qfsplit}), to show that $(X,\Delta)$ is not quasi-$F$-split,
it suffices to prove that $\delta_m$ is nonzero for every $m>0$. Hence it is enough to prove (1) and (2) below.
\begin{enumerate}
\item $\delta_1 : H^1(X, B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD))
\to H^2(X, \mathcal{O}_X(D))$ is nonzero.
\item $C: H^1(X, B_{m+1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D))
\to H^1(X, B_{m}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^mD))$ is an isomorphism for every $m \geq 1$.
\end{enumerate}
Let us show (1).
By $Q_{X, D, 1} = F_*\mathcal{O}_X(pD)$, we have the following exact sequence:
\[
0=H^1(X, Q_{X, D, 1}) \to H^1(X, B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD))
\xrightarrow{\delta_1} H^2(X, \mathcal{O}_X(D)).
\]
Hence $\delta_1$ is injective.
By $H^1(X, B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(pD)) \neq 0$
(Proposition \ref{p-P^2-BZ-key}(5)),
$\delta_1$ is nonzero.
Thus (1) holds.
Let us show (2).
We have an exact sequence (\ref{exact2}):
\begin{multline*}
0 \to F_*^mB_1\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D) \to
B_{m+1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D) \\
\overset{C}{\longrightarrow}
B_{m}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}D) \to 0.
\end{multline*}
By $H^1(X, B_1\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D))=0$ (Lemma \ref{l-P^2-vanish}(1)), we get an injection:
\[
C: H^1(X, B_{m+1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D))
\hookrightarrow H^1(X, B_{m}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}D)).
\]
It follows from $h^1(X, B_{m+1}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m+1}D))
=h^1(X, B_{m}\Omega_X^1(\log \Delta_{\mathrm{red}})(p^{m}D)) =1$ (Proposition \ref{p-P^2-BZ-key}(5)) that this map $C$ is an isomorphism.
Thus (2) holds.
\end{proof}
\begin{remark}\label{r-small-counter}
\begin{enumerate}
\item
Take $p \in \{2, 3, 5\}$ and let $k$ be an algebraically closed field of characteristic $p$.
Then there exists a log del Pezzo pair $(X, 0)$ over $k$ which is not quasi-$F$-split.
Indeed, $X$ is quasi-$F$-split if and only if $X$ is log liftable by \cite[Theorem 6.3]{KTTWYY1}, and there exists a klt del Pezzo surface over $k$ which is not log liftable by \cite[Example 6.1]{Kaw3}.
For the construction of the examples, we refer to \cite[Theorem 4.2(6)]{CT19-2} ($p=2$), \cite[Theorem 1.1]{Ber} ($p=3$), and \cite[Proposition 5.2]{ABL20} ($p=5$).
\item
If $p \in \{2, 3, 5, 11, 17, 19, 23, 41\}$ and $k$ is an algebraically closed field of characteristic $p$,
then there exists, by (1) and Theorem \ref{t-P^2-main}, a log del Pezzo pair $(X, \Delta)$ with standard coefficients which is not quasi-$F$-split.
On the other hand, the authors do not know whether such examples exist in the other characteristics $p\leq 41$, that is, $p \in \{ 7, 13, 29, 31, 37\}$.
\end{enumerate}
\end{remark}
\section{Cone correspondence for quasi-F-splitting}\label{s-Qcone-QFS}
Let $(X, \Delta)$ be a projective log pair such that $-(K_X+\Delta)$ is ample and
$\Delta$ has standard coefficients with $\rdown{\Delta}=0$.
The purpose of this section is to prove that $(X, \Delta)$ is quasi-$F$-split if and only if its cone $R_{\mathfrak{m}}$ is quasi-$F$-split (Theorem \ref{cor:corresponding}),
where
\[
R := \bigoplus_{d \geq 0} H^0(X, \mathcal{O}_X(-d(K_X+\Delta)))
\quad\text{and}
\quad
\mathfrak{m} := \bigoplus_{d > 0} H^0(X, \mathcal{O}_X(-d(K_X+\Delta))) \subseteq R.
\]
The strategy is to generalise the analogous result for $F$-splittings, which was established in \cite{Watanabe91}.
Most of this section is devoted to introducing
functorial graded structures on several key modules that feature in the theory of quasi-$F$-splittings.
\subsection{Notation on graded modules}
\begin{enumerate}
\item
Let $Z$ be a submonoid of $\mathbb{Q}$, that is,
$Z$ is a subset of $\mathbb{Q}$ such that $0 \in Z$ and $Z$ is closed under addition (e.g.\ $\mathbb{Z}_{\geq 0}$ or $\mathbb{Z}$).
For a $Z$-graded ring $R = \bigoplus_{d \in Z} R_d$,
we set $R_{d'} :=0$ for any $d' \in \mathbb{Q} \setminus Z$ and
we consider $R$ as a $\mathbb{Q}$-graded ring via $R = \bigoplus_{d \in \mathbb{Q}} R_d$.
Similarly, $Z$-graded $R$-modules can be considered as $\mathbb{Q}$-graded $R$-modules.
\item
Given $\mathbb{Q}$-graded rings $R = \bigoplus_{d \in \mathbb{Q}} R_d$ and $S = \bigoplus_{d \in \mathbb{Q}} S_d$,
we say that $\varphi : R \to S$ is a {\em graded ring homomorphism}
if $\varphi$ is a ring homomorphism such that $\varphi(R_d) \subseteq S_d$
for every $d \in \mathbb{Q}$.
\item
Given a $\mathbb{Q}$-graded ring $R = \bigoplus_{d \in \mathbb{Q}} R_d$ and
$\mathbb{Q}$-graded $R$-modules $M = \bigoplus_{d \in \mathbb{Q}} M_d$ and $N = \bigoplus_{d \in \mathbb{Q}} N_d$,
we say that $\psi : M \to N$ is a {\em graded $R$-module homomorphism}
if $\psi$ is an $R$-module homomorphism such that
$\psi(M_d) \subseteq N_d$ for any $d \in \mathbb{Q}$.
Note that we require $\psi$ to preserve degrees, although degree shifts are sometimes allowed in the literature.
\end{enumerate}
\subsection{Witt rings of graded rings}
\begin{proposition}\label{p-Witt-of-graded}
Let $R = \bigoplus_{d \in \mathbb{Q}} R_d$ be a $\mathbb{Q}$-graded $\mathbb{F}_p$-algebra.
Then
\begin{enumerate}
\item
$W_n(R)$ has the $\mathbb{Q}$-graded ring structure
\[
W_n(R) = \bigoplus_{e \in \mathbb{Q}} W_n(R)_e
\]
such that, for every $e \in \mathbb{Q}$:
\begin{align*}
W_n(R)_e = \{ (a_0, &a_1, \cdots, a_{n-1}) \in W_n(R) \,|\\
&\, a_0 \in R_e, a_1 \in R_{pe}, a_2\in R_{p^2e}, \cdots , a_{n-1} \in R_{p^{n-1}e}\}.
\end{align*}
In particular, $W_n(R)_0 = W_n(R_0)$ holds.
\end{enumerate}
Note that, by (1),
$W_n(R)_e$ is a $W_n(R_0)$-module for every $e \in \mathbb{Q}$.
For all $r \in \mathbb{Z}_{\geq 0}$ and $e \in \mathbb{Q}$, we set
\[
(F^r_*W_n(R))_e := F_*^r(W_n(R)_{p^re}),
\]
which is a $W_n(R_0)$-module. Then the following hold.
\begin{enumerate}
\setcounter{enumi}{1}
\item
For every $r \in \mathbb{Z}_{\geq 0}$,
$F_*^rW_n(R)$ has the $\mathbb{Q}$-graded $W_n(R)$-module structure
\[
F_*^rW_n(R) = \bigoplus_{e \in \mathbb{Q}} (F^r_*W_n(R))_e.
\]
\item
The Frobenius ring homomorphism of $W_n(R)$
\[
F : W_n(R) = \bigoplus_{e \in \mathbb{Q}} W_n(R)_e \to F_*W_n(R) = \bigoplus_{e \in \mathbb{Q}} (F_*W_n(R))_{e}
\]
is a $\mathbb{Q}$-graded ring homomorphism.
\item
The Verschiebung homomorphism of $W_n(R)$
\[
V : F_*W_n(R) = \bigoplus_{e \in \mathbb{Q}} (F_*W_n(R))_{e} \to
W_{n+1}(R) = \bigoplus_{e \in \mathbb{Q}} W_{n+1}(R)_{e}
\]
is a $\mathbb{Q}$-graded $W_n(R)$-module homomorphism.
\item
For any $n, m \in \mathbb{Z}_{> 0}$,
the ring homomorphism
\[
R : W_{n+m}(R) = \bigoplus_{e \in \mathbb{Q}} W_{n+m}(R)_e
\to W_n(R) = \bigoplus_{e \in \mathbb{Q}} W_n(R)_e
\]
is a $\mathbb{Q}$-graded ring homomorphism.
\end{enumerate}
\end{proposition}
\begin{proof}
We omit a proof, as the same argument as in Proposition \ref{p-Witt-of-graded-can-div} works by setting $q=0$.
\qedhere
\end{proof}
\begin{remark}\label{r-Witt-of-graded}
In what follows, we are mainly interested in
$\mathbb{Z}_{\geq 0}$-graded $\mathbb{F}_p$-algebras $R = \bigoplus_{d \in \mathbb{Z}_{\geq 0}} R_d$.
Then $R$ can be considered as a $\mathbb{Q}$-graded $\mathbb{F}_p$-algebra $R= \bigoplus_{d \in \mathbb{Q}} R_d$, where we set $R_{d'} :=0$ for all $d' \in \mathbb{Q} \setminus \mathbb{Z}_{\geq 0}$.
In this case, $W_n(R)$ can be considered as a $p^{-(n-1)}\mathbb{Z}_{\geq 0}$-graded ring and
$F_*W_n(R)$ is a $p^{-n}\mathbb{Z}_{\geq 0}$-graded $W_n(R)$-module.
\end{remark}
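For instance (purely as an illustration, which is not used in what follows), let $R = k[x]$ be the polynomial ring graded by $\deg x = 1$.
By Proposition \ref{p-Witt-of-graded},
\[
W_2(R)_e = \{ (a_0, a_1) \in W_2(R) \mid a_0 \in R_e, \ a_1 \in R_{pe}\},
\]
so that, for example, $(x, x^p) \in W_2(R)_1$, whereas $V\underline{x} = (0, x)$ lies in $W_2(R)_{1/p}$; this is why $W_2(R)$ is only $p^{-1}\mathbb{Z}_{\geq 0}$-graded and not $\mathbb{Z}_{\geq 0}$-graded.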
\begin{nothing}[Graded structure on $R(qK_R)$]\label{graded-can-div}
Let $k$ be a field.
Let $R = \bigoplus_{d \in \mathbb{Z}} R_d$ be a $\mathbb{Z}$-graded
normal integral domain such that $R_d=0$ for every $d < 0$, $R_0=k$, and $R$ is a finitely generated $k$-algebra.
Fix a canonical divisor $K_R$ on $R$.
We equip $R(K_R)$ with the $\mathbb{Z}$-graded $R$-module structure as in \cite[Definition (2.1.2)]{Got-Wat78}.
We then define the $\mathbb{Z}$-graded $R$-module structure on {$R(qK_R)$} by taking the reflexive hull of $R(K_R)^{\otimes q}$ for every $q \in \mathbb{Z}_{\geq 0}$
(recall that given $\mathbb{Z}$-graded $R$-modules $M = \bigoplus_{d \in \mathbb{Z}} M_d$ and $N = \bigoplus_{d \in \mathbb{Z}} N_d$,
the tensor product is a $\mathbb{Z}$-graded $R$-module given by
$(M \otimes_R N)_d =\bigoplus_{d = d_1 +d_2}\mathrm{Im}( M_{d_1} \otimes_{R_0} N_{d_2} \to M \otimes_R N)$).
\end{nothing}
\begin{proposition}\label{graded-can-div2}
We use the notation as in (\ref{graded-can-div}).
Then the natural map
\[
\beta \colon R(q_1K_R) \otimes_R R(q_2K_R) \to R((q_1+q_2)K_R),
\]
induced by the product $\mathrm{Frac}\,R \otimes_R \mathrm{Frac}\,R \to \mathrm{Frac}\,R, f \otimes g \mapsto fg$,
is a $\mathbb{Z}$-graded $R$-module homomorphism for all $q_1, q_2 \in \mathbb{Z}_{>0}$.
\end{proposition}
\begin{proof}
We have the following commutative diagram of $R$-module homomorphisms:
\[
\begin{tikzcd}
R(K_R)^{\otimes q_1} \otimes_R R(K_R)^{\otimes q_2}
\arrow[r, "\alpha", "\text{graded}"'] \arrow[d, "\zeta", "\text{graded}"']
\arrow[rr, "\gamma := \beta \circ \alpha", bend left=16, "\text{graded}"']
&R(q_1K_R) \otimes_R R(q_2K_R) \arrow[r, "\beta"] \arrow[d, "\xi", "\text{graded}"']
& R((q_1 + q_2)K_R)\arrow[d, "\eta", "\text{graded, bij.}"']
\\
(R(K_R)^{\otimes q_1} \otimes_R R(K_R)^{\otimes q_2})^{**}
\arrow[r, "\alpha^{**}", "\text{graded, bij.}"'] \arrow[rr, "\gamma^{**}", bend right=16, "\text{graded, bij.}"']
& (R(q_1K_R) \otimes_R R(q_2K_R))^{**} \arrow[r, "\beta^{**}", "\text{bij.}"']
& R((q_1 + q_2)K_R))^{**},
\end{tikzcd}
\]
where the lower horizontal arrows and all the vertical arrows are obtained by taking double duals.
It is clear that $\eta, \alpha^{**}, \beta^{**}, \gamma^{**}$ are bijective.
All the vertical arrows are graded homomorphisms.
Since $\alpha$ and $\gamma$ are graded homomorphisms by definition,
so are $\alpha^{**}$ and $\gamma^{**}$.
Then $\beta^{**}$ is a graded homomorphism.
The composition $\beta^{**} \circ \xi = \eta \circ \beta$ is a graded homomorphism,
and hence $\beta = \eta^{-1} \circ ( \eta \circ \beta)$ is a graded homomorphism.
\end{proof}
\begin{proposition}\label{p-Witt-of-graded-can-div}
Let $k$ be an $F$-finite field of characteristic $p>0$.
Let $R = \bigoplus_{d \in \mathbb{Q}} R_d$ be a $\mathbb{Q}$-graded normal integral domain such that $R_d=0$ for $d \in \mathbb{Q} \setminus \mathbb{Z}_{\geq 0}$, $R_0=k$, and $R$ is a finitely generated $k$-algebra.
Fix $q \in \mathbb{Z}_{>0}$ and a canonical divisor $K_R$ on $R$.
We define the graded structure on $R(qK_R)$ for $q \in \mathbb{Z}_{>0}$
as in (\ref{graded-can-div}).
Then the following hold.
\begin{enumerate}
\item
$W_n(R)(qK_R)$ has the $\mathbb{Q}$-graded $W_n(R)$-module structure
\[
W_n(R)(qK_R) = \bigoplus_{e \in \mathbb{Q}} W_n(R)(qK_R)_e
\]
such that, for every $e \in \mathbb{Q}$:
\begin{align*}
\quad W_n(R)(qK_R)_e = \{ (a_0, a_1, &\cdots, a_{n-1}) \in W_n(R)(qK_R) \,|\,
\\
&a_i \in R(p^iqK_R)_{p^ie}\text{ for every }0 \leq i \leq n-1 \}.
\end{align*}
\end{enumerate}
For all $r \in \mathbb{Z}_{\geq 0}$ and $e \in \mathbb{Q}$, we set
\[
(F^r_*W_n(R)(qK_R))_e := F_*^r(W_n(R)(qK_R)_{p^re}).
\]
Then the following hold.
\begin{enumerate}
\setcounter{enumi}{1}
\item
For every $r \in \mathbb{Z}_{\geq 0}$,
$F_*^rW_n(R)(qK_R)$ has the $\mathbb{Q}$-graded $W_n(R)$-module structure
\[
F_*^rW_n(R)(qK_R) = \bigoplus_{e \in \mathbb{Q}} (F^r_*W_n(R)(qK_R))_e.
\]
\item
The induced map
\[
F : W_n(R)(qK_R) \to F_*W_n(R)(pqK_R)
\]
by the Frobenius homomorphism
is a $\mathbb{Q}$-graded $W_n(R)$-module homomorphism.
\item
The induced map
\[
V : F_*W_n(R)(pqK_R) \to
W_{n+1}(R)(qK_R)
\]
by the Verschiebung homomorphism
is a $\mathbb{Q}$-graded $W_n(R)$-module homomorphism.
\item
For any $n, m \in \mathbb{Z}_{> 0}$,
the restriction map
\[
R: W_{n+m}(R)(qK_R)
\to W_n(R)(qK_R)
\]
is a $\mathbb{Q}$-graded $W_n(R)$-module homomorphism.
\end{enumerate}
\end{proposition}
\begin{proof}
Let us show (1).
It suffices to show the following assertions (i)--(iii).
\begin{enumerate}
\renewcommand{\labelenumi}{(\roman{enumi})}
\item $W_n(R)(qK_R)_e$ is a subgroup of $W_n(R)(qK_R)$.
\item $W_n(R)(qK_R)_{e} \cdot W_n(R)_{e'} \subseteq W_n(R)(qK_R)_{e+e'}$ holds for all $e, e' \in \mathbb{Q}$.
\item $W_n(R)(qK_R) = \bigoplus_{e \in \mathbb{Q}} W_n(R)(qK_R)_e$.
\end{enumerate}
Let us show (i).
We take $\varphi=(\varphi_0,\ldots,\varphi_{n-1}) \in W_n(R)(qK_R)_e$ and $\psi=(\psi_0,\ldots,\psi_{n-1}) \in W_n(R)(qK_R)_e$.
By \cite[Lemma 2.3]{tanaka22}, we have
\[
\varphi+\psi=(S_0(\varphi_0,\psi_0),S_1(\varphi_0,\psi_0,\varphi_1,\psi_1),\ldots)
\]
for some polynomials
\[
S_m(x_0,y_0,\ldots,x_m,y_m) \in \mathbb{Z}[x_0,y_0,\ldots,x_m,y_m].
\]
By \cite[Lemma 2.4]{tanaka22}, if we pick a monomial $x_0^{a_0}y_0^{b_0} \cdots x_m^{a_m}y_m^{b_m}$ appearing in the monomial decomposition of $S_m$, then we have
\[
\sum_{i=0}^m p^i(a_i+b_i)=p^m.
\]
By $\varphi_i,\psi_i \in R(p^iqK_R)_{p^ie}$ and Proposition \ref{graded-can-div2},
it holds that
\[
\varphi_0^{a_0}\psi_0^{b_0} \cdots \varphi_m^{a_m}\psi_m^{b_m} \in R(p^mqK_R)_{p^me}.
\]
Therefore, we have $\varphi+\psi \in W_n(R)(qK_R)_{e}$.
Let us show (ii).
Fix $e, e' \in \mathbb{Q}$.
Take two elements $V^m\underline{b} \in W_n(R)(qK_R)_e$ and $V^{m'}\underline{b'} \in W_n(R)_{e'}$,
where $0 \leq m \leq n-1, 0 \leq m' \leq n-1, b \in R(p^mqK_R)_{p^me}, b' \in R_{p^{m'}e'}$.
We then obtain
\[
(V^m\underline{b}) \cdot (V^{m'}\underline{b'})
= V^{m+m'}( ( F^{m'}\underline{b})\cdot ( F^{m}\underline{b'}))
= V^{m+m'}(\underline{ b^{p^{m'}} \cdot b'^{p^m} }).
\]
We have $b^{p^{m'}} \in R(p^{m+m'}qK_R)_{p^{m+m'}e}$ and $b'^{p^m} \in R_{p^{m+m'}e'}$, which implies
$b^{p^{m'}} \cdot b'^{p^m} \in R(p^{m+m'}qK_R)_{p^{m+m'}(e+e')}$ (Proposition \ref{graded-can-div2}).
Therefore, we get
$V^{m+m'}(\underline{ b^{p^{m'}} \cdot b'^{p^m} }) \in W_n(R)(qK_R)_{e+e'}$.
Thus (ii) holds.
Let us show (iii).
It is easy to check that the equality $W_n(R)(qK_R) = \sum_{e \in \mathbb{Q}} W_n(R)(qK_R)_e$ holds.
It suffices to show that this sum $\sum_{e \in \mathbb{Q}} W_n(R)(qK_R)_e$ is a direct sum.
Assume that
\[
\zeta_1 + \cdots + \zeta_r =0
\]
holds for $\zeta_1 \in W_n(R)(qK_R)_{e_1}, ..., \zeta_r \in W_n(R)(qK_R)_{e_r}$
with $e_1 < \cdots <e_r$.
It is enough to prove that $\zeta_1 = \cdots = \zeta_r=0$.
We have
\[
\zeta_i =(\zeta_{i, 0}, \zeta_{i, 1}, ..., \zeta_{i,n-1}) \in W_n(R)(qK_R)
\]
for some $\zeta_{i, j} \in R(p^jqK_R)$.
By
\[
(0, 0, ...) = 0 = \sum_{i=1}^r \zeta_i =
\left( \sum_{i=1}^r \zeta_{i, 0}, ...\right),
\]
we get $\sum_{i=1}^r \zeta_{i, 0}=0$, which implies
$\zeta_{1, 0} = \cdots =\zeta_{r, 0}=0$.
It follows from
\[
(0, 0, ...) = 0 = \sum_{i=1}^r \zeta_i =
\left(0, \sum_{i=1}^r \zeta_{i, 1}, ...\right)
\]
that $\sum_{i=1}^r \zeta_{i, 1}=0$, which implies
$\zeta_{1, 1} = \cdots = \zeta_{r, 1}=0$.
Repeating this procedure, we obtain $\zeta_1 = \cdots = \zeta_r =0$.
Thus (iii) holds.
This completes the proof of (1).
We now show the following $(*)$.
\begin{enumerate}
\item[$(*)$] For $r \in \mathbb{Z}_{\geq 0}$ and the Frobenius action
$F^r : W_n(R)(qK_R) \to W_n(R)(p^rqK_R)$, it holds that $F^r ( W_n(R)(qK_R)_e) \subseteq W_n(R)(p^rqK_R)_{p^re}$.
\end{enumerate}
To this end, we may assume that $r=1$.
Fix $e \in \mathbb{Q}$ and $\alpha \in W_n(R)(qK_R)_e$.
We have
\[
\alpha= (a_0, a_1, ..., a_{n-1}) \qquad \text{for} \quad \text{some} \quad a_i \in R(p^iqK_R)_{p^i e}.
\]
It holds that
\[
F(\alpha) = F(a_0, a_1, ..., a_{n-1}) = (a_0^p, a_1^p, ..., a_{n-1}^p).
\]
Then $a_i \in R(p^iqK_R)_{p^i e}$ implies $a_i^p \in R(p^{i+1}qK_R)_{p^{i+1} e}$ by Proposition \ref{graded-can-div2}, that is, $F(\alpha) \in W_n(R)(pqK_R)_{pe}$.
Thus $(*)$ holds.
Let us show (2).
It follows from (1) that we have the direct sum decomposition
$F_*^rW_n(R)(qK_R) = \bigoplus_{e \in \mathbb{Q}} (F_*^rW_n(R)(qK_R))_e$
as an additive group.
Hence it suffices to show that $W_n(R)_e \cdot (F_*^rW_n(R)(qK_R))_{e'}
\subseteq (F_*^rW_n(R)(qK_R))_{e+e'}$.
Take $\zeta \in W_n(R)_e$ and $F^r_* \xi \in (F_*^rW_n(R)(qK_R))_{e'}$,
where $F^r_* \xi$ denotes the same element as $\xi \in W_n(R)(qp^rK_R)_{p^re'}$
via the set-theoretic equality $(F_*^rW_n(R)(qK_R))_{e'} = W_n(R)(qp^rK_R)_{p^re'}$.
We have $F^r(\zeta) \in W_n(R)_{p^re}$.
Hence we obtain
\begin{align*}
\zeta \cdot (F^r_* \xi) = F^r_*( F^r(\zeta) \cdot \xi)
&\in F_*^r( W_n(R)_{p^re} \cdot W_n(R)(qp^rK_R)_{p^re'})\\
&\subseteq F_*^r(W_n(R)(qp^rK_R)_{p^r(e+e')}) =(F^r_* W_n(R)(qK_R))_{e+e'}.
\end{align*}
Thus (2) holds.
The assertions (3)--(5) follow from $(*)$.
\end{proof}
\subsection{Cones and quasi-F-splittings}
We start by making a general comment about graded $W_n(R)$-modules.
\begin{nothing}[Graded structures on cohomologies]\label{n-graded-str}
Let $R = \bigoplus_{d \in \mathbb{Z}_{\geq 0}}R_d$ be an $F$-finite noetherian
$\mathbb{Z}_{\geq 0}$-graded $\mathbb{F}_p$-algebra such that $R_0$ is a field.
Set $\mathfrak{m} := \bigoplus_{d \in \mathbb{Z}_{> 0}}R_d$ and
$U :=\Spec\,R \setminus \{\mathfrak{m}\}$.
Fix $n \in \mathbb{Z}_{>0}$ and $\nu \in \mathbb{Z}_{\geq 0}$.
Let $M$ be a
$p^{-\nu}\mathbb{Z}$-graded $W_n(R)$-module.
Fix $i \in \mathbb{Z}_{\geq 0}$.
In what follows, we introduce
\begin{enumerate}
\item a $p^{-\nu}\mathbb{Z}$-graded $W_n(R)$-module structure on $H^i(U, \widetilde{M}|_U)$, and
\item a $p^{-\nu}\mathbb{Z}$-graded $W_n(R)$-module structure on $H^{i+1}_{\mathfrak{m}}(M)$.
\end{enumerate}
Moreover, we show that
\begin{enumerate}
\item[(3)] the connecting homomorphism $H^i(U, \widetilde{M}|_U) \to H^{i+1}_{\mathfrak{m}}(M)$
is a graded $W_n(R)$-module homomorphism.
\end{enumerate}
For every nonzero homogeneous element $ f \in W_n(R)_{>0}$, we have
\[
\Gamma(D(f), \widetilde{M}|_{U}) = \Gamma(D(f), \widetilde{M}) = M_f,
\]
which is is a $p^{-\nu}\mathbb{Z}$-graded $W_n(R)_f$-module.
Fix nonzero homogeneous elements $f_1,\ldots,f_r \in W_n(R)$ such that $(f_1, ..., f_r) = W_n(R)_{>0}$,
that is, $D(f_1)\cup \cdots \cup D(f_r)=U$.
Then we have the \v{C}ech complex of $p^{-\nu}\mathbb{Z}$-graded $R$-modules
\[
C(f_1,\ldots,f_r):=\left( \bigoplus_{1 \leq i \leq r} M_{f_i} \to \bigoplus_{1 \leq i < j \leq r} M_{f_if_j} \to \cdots \to M_{f_1\cdots f_r}\right).
\]
Since the $i$-th cohomology of this complex is $H^i(U,\widetilde{M}|_U)$
\cite[Theorem III.4.5]{hartshorne77},
the complex $C(f_1,\ldots,f_r)$ gives a $p^{-\nu}\mathbb{Z}$-graded $W_n(R)$-module
structure on $H^i(U,\widetilde{M}_U)$.
Note that this does not depend on the choice of $f_1,\ldots,f_r \in W_n(R)$.
Indeed, if we pick another nonzero homogeneous element $f_{r+1} \in W_n(R)_{>0}$,
then $C(f_1,\ldots,f_r)\to C(f_1,\ldots,f_{r+1})$ is a homomorphism of complexes of $p^{-\nu}\mathbb{Z}$-graded $W_n(R)$-modules.
Similarly, we equip $H^{i+1}_{\mathfrak{m}}(M)$ with a $p^{-\nu}\mathbb{Z}$-graded $W_n(R)$-module structure
by using the fact that $H^{i+1}_{\mathfrak{m}}(M)$ is the $i$-th cohomology of $M \to C(f_1, ..., f_r)$.
Then the connecting homomorphism $H^i(U,\widetilde{M}|_U) \to H^{i+1}_\mathfrak{m}(M)$
is a graded $W_n(R)$-module homomorphism, because it is induced by the following commutative diagram:
\[
\begin{tikzcd}
0 \arrow{r}\arrow{d} & C(f_1,\ldots,f_r) \arrow{d}\\
M \arrow{r} & C(f_1,\ldots,f_r).
\end{tikzcd}
\]
\end{nothing}
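As a simple illustration of these graded structures (with $n = 1$, $\nu = 0$, and $M = R$), let $R = k[x, y]$ with the standard grading and take $f_1 = x$, $f_2 = y$, so that $U = \mathbb{A}^2_k \setminus \{0\}$.
The complex $C(x, y)$ is $R_x \oplus R_y \to R_{xy}$, and its first cohomology yields the familiar graded description
\[
H^1(U, \mathcal{O}_U) \simeq H^2_{\mathfrak{m}}(R) \simeq \bigoplus_{a, b \geq 1} k \cdot x^{-a} y^{-b},
\]
a $\mathbb{Z}$-graded $R$-module concentrated in degrees $\leq -2$.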
\begin{notation}\label{n-cone-QFS}
Let $k$ be an $F$-finite field
of characteristic $p>0$.
Let $X$ be a projective normal variety over $k$
with $\dim X \geq 1$ and $H^0(X, \mathcal{O}_X)=k$.
Let $D$ be an ample $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor.
Set
\[
R := R(X, D) := \bigoplus_{d \in \mathbb{Z}_{\geq 0}} H^0(X, \mathcal{O}_X(dD))t^d \subseteq {K(X)}[t],
\]
which is a $\mathbb{Z}_{\geq 0}$-graded subring of the standard $\mathbb{Z}_{\geq 0}$-graded polynomial ring ${K(X)}[t]$.
Note that $R$ is a finitely generated $\mathbb{Z}_{\geq 0}$-graded $k$-algebra
and $\Spec R$ is an affine normal variety (Theorem \ref{t-Qcone-birat}).
Let $D = \sum_{i=1}^r \frac{\ell_i}{d_i}D_i$ be the irreducible decomposition,
where $\ell_i$ and $d_i$ are coprime integers satisfying $d_i>0$ for each $1 \leq i \leq r$.
Set $D':=\sum_{i=1}^r \frac{d_i-1}{d_i}D_i$.
For the graded maximal ideal $\mathfrak{m} := \bigoplus_{d >0} H^0(X, \mathcal{O}_X(dD))t^d \subseteq R$,
we set $U := \Spec R \setminus \{\mathfrak{m}\}$. We have $\mu|_{W^{\circ}_{X, D}} : W^{\circ}_{X, D} \xrightarrow{\simeq} U$ (Theorem \ref{t-Qcone-birat}),
and hence we have the induced affine morphism:
\[
\rho : U \xrightarrow{(\mu|_{W^{\circ}_{X, D}})^{-1}, \simeq} W^{\circ}_{X, D} \xrightarrow{\pi^{\circ}} X.
\]
For $n \in \mathbb{Z}_{>0}$, let
\[
W_n(R) = \bigoplus_{e \in p^{-(n-1)}\mathbb{Z}_{\geq 0}} W_n(R)_e
\]
be the $p^{-(n-1)}\mathbb{Z}_{\geq 0}$-graded ring structure induced by
Proposition \ref{p-Witt-of-graded}.
Set $W_n(R)_{>0} := \bigoplus_{e \in p^{-(n-1)}\mathbb{Z}_{>0}} W_n(R)_{{e}}$,
which is a graded primary ideal such that $\sqrt{W_n(R)_{>0}}$ is a maximal ideal.
\end{notation}
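As a guiding example (standard, and not needed in the sequel): if $X = \mathbb{P}^n_k$ and $D$ is a hyperplane, then $R = R(X, D) \simeq k[x_0, \ldots, x_n]$ with the standard grading, $D' = 0$, $\Spec R \simeq \mathbb{A}^{n+1}_k$, the ideal $\mathfrak{m}$ corresponds to the origin, and $\rho : U \simeq \mathbb{A}^{n+1}_k \setminus \{0\} \to X$ is the usual projection.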
\begin{proposition}\label{prop: affine morphism}
We use Notation \ref{n-cone-QFS}.
Then the equality
\[
\rho_*\mathcal{O}_U =\bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(d{D})t^d
\]
induces the following isomorphism of graded $R$-modules
\[
H^i(U,\mathcal{O}_U) \simeq \bigoplus_{d \in \mathbb{Z}} H^i(X,\mathcal{O}_X(d{D}))t^d,
\]
where the graded structure of $H^i(U,\mathcal{O}_{U})$ is defined in Notation \ref{n-graded-str}.
\end{proposition}
\begin{proof}
Note that $\rho_*\mathcal{O}_U =\bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(d{D})t^d$
is the equality inside the constant sheaf ${K(X)[t,t^{-1}]}$.
By Theorem \ref{t-Qcone-birat}, we have
\[
W_{X,D}^{\circ} \simeq W_{X,D} \backslash \Gamma_{X,D} \simeq V_{X,D} \backslash \{v_{X,D}\} =U.
\]
By the definition of $W^{\circ}_{X,D}$, the equality
\[
\rho_*\mathcal{O}_U =\bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(d{D})t^d
\]
holds inside the constant sheaf ${K(X)[t,t^{-1}]}$.
In what follows, for $d \in \mathbb{Z}_{>0}$ and a nonzero element $f \in H^0(X, \mathcal{O}_X(dD))$, set
\[
\overline{f} := ft^{d} \in H^0(X, \mathcal{O}_X(dD))t^{d} \subseteq
\bigoplus_{d=0}^{\infty} H^0(X, \mathcal{O}_X(dD))t^d =R(X, D).
\]
Since we have $\mathrm{Proj}\, R(X,D) \simeq X$ by \cite[Proposition 3.3 (a)]{Demazure}, we can define an open subset $D_{+}(\overline{f})$ as in \cite[Proposition II.2.5]{hartshorne77}.
Let $f_1,\ldots , f_r \in R(X,D)$ be homogeneous elements of positive degree such that $\{D(f_i) \mid 1 \leq i \leq r\}$ is an open covering of $U$; then $\{D_+(\overline{f_i}) \mid 1 \leq i \leq r\}$ is an open covering of $X$.
By $\rho_*\mathcal{O}_U=\oplus \mathcal{O}_X(dD)t^d$ and $\rho^{-1}(D_+(\overline{f_i}))=D(f_i)$,
we have the isomorphism of complexes of $\mathbb{Z}$-graded $R$-modules:
\[
\begin{tikzcd}
\bigoplus_i R_{f_i} \arrow[r] \arrow[d, "\simeq"] & \bigoplus_{i<j} R_{f_if_j} \arrow[r] \arrow[d, "\simeq"] & \cdots \\
\bigoplus_{d \in \mathbb{Z}} \bigoplus_i H^0(D_+(\overline{f_i}), \mathcal{O}_X(dD)) \arrow[r] & \bigoplus_{d \in \mathbb{Z}} \bigoplus_{i<j} H^0(D_+(\overline{f_if_j}), \mathcal{O}_X(dD)) \arrow[r] & \cdots.
\end{tikzcd}
\]
Thus, taking cohomologies, we have the following isomorphism of graded $R$-modules:
\[
H^i(U,\mathcal{O}_U) \simeq \bigoplus_{d \in \mathbb{Z}} H^i(X, \mathcal{O}_X(dD))t^d.
\]
This completes the proof of Proposition \ref{prop: affine morphism}.
\end{proof}
\begin{nothing}[Canonical divisors]
We use Notation \ref{n-cone-QFS}.
Fix a canonical divisor $K_X$, that is, a Weil divisor $K_X$ such that $\mathcal{O}_X(K_X) \simeq \omega_X$.
By \cite[3.2]{Watanabe91} (for the convenience of the reader, we also provide a proof below),
there exists a canonical divisor
$K_R (=K_{\Spec R})$ on $\Spec R$ such that
the following equality holds as a $\mathbb{Z}$-graded $R$-module for every $q \in \mathbb{Z}_{>0}$:
\[
R(qK_R)
= \bigoplus_{d \in \mathbb{Z}} H^0(X, \mathcal{O}_X(q(K_X+D')+dD))t^d
\subseteq K(X)[t, t^{-1}],
\]
where the graded structure on $R(qK_R)$ is defined as in (\ref{graded-can-div}).
We note that $H^i(U, \mathcal{O}_U(qK_U))$ is a $\mathbb{Z}$-graded
$R$-module for any $i \geq 0$ (\ref{n-graded-str}).
\end{nothing}
\begin{proposition}
We use Notation \ref{n-cone-QFS}.
Fix a canonical divisor $K_X$, that is, a Weil divisor $K_X$ such that $\mathcal{O}_X(K_X) \simeq \omega_X$.
Then there exists a canonical divisor
$K_R (=K_{\Spec R})$ on $\Spec R$ such that
the following equality holds as a $\mathbb{Z}$-graded $R$-module for every $q \in \mathbb{Z}_{>0}$:
\begin{equation}\label{eq: canonical module}
R(qK_R)
= \bigoplus_{d \in \mathbb{Z}} H^0(X, \mathcal{O}_X(q(K_X+D')+dD))t^d
\subseteq K(X)[t, t^{-1}],
\end{equation}
where the $\mathbb{Z}$-graded $R$-module structure on $R(qK_R)$ is defined as in (\ref{graded-can-div}).
\end{proposition}
\begin{proof}
By \cite[Theorem 2.8]{wat81}, we have an isomorphism
\[
\omega_R \simeq \bigoplus_{d \in \mathbb{Z}} H^0(X, \underbrace{\mathcal{O}_X(K_X+D'+dD)}_{=\mathcal{O}_X(K_X+dD)})t^d.
\]
Therefore, there exists a canonical divisor $K_R$ on $\Spec R$ such that
\[
R(K_R)
= \bigoplus_{d \in \mathbb{Z}} H^0(X, \mathcal{O}_X(K_X+D'+dD))t^d
\subseteq K(X)[t, t^{-1}].
\]
Thus, we obtain the case of $q=1$.
Next, we prove the equality (\ref{eq: canonical module}) for arbitrary $q \in \mathbb{Z}_{>0}$.
We denote the right-hand side of (\ref{eq: canonical module}) by $M_q$, which is a finitely generated $\mathbb{Z}$-graded $R$-module.
\begin{claim}\label{claim:S_2}
We have $H^i_\mathfrak{m}(M_q)=0$ for $i=0,1$.
\end{claim}
\begin{proof}[Proof of Claim \ref{claim:S_2}]
By a similar argument to the proof of Proposition \ref{prop: affine morphism},
\[
(\widetilde{M}_q)|_{U} \simeq \bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(q(K_X+D')+dD)t^d
\]
as $\mathcal{O}_U$-modules.
We consider the exact sequence
\[
0 \to H^0_\mathfrak{m}(M_q) \to M_q \to H^0(U,(\widetilde{M}_q)|_{U}) \to H^1_{\mathfrak{m}}(M_q) \to 0.
\]
Since
\[
H^0(U,(\widetilde{M}_q)|_{U}) \simeq \bigoplus_{d \in \mathbb{Z}} H^0(X,\mathcal{O}_X(q(K_X+D')+dD))t^d=M_q,
\]
we get $H^i_\mathfrak{m}(M_q)=0$ for $i=0,1$.
\end{proof}
By Claim \ref{claim:S_2}, it is enough to show that
\[
\widetilde{R(qK_R)}|_{U} = \bigoplus_{d \in \mathbb{Z}} \mathcal{O}_X(q(K_X+D')+dD)t^d.
\]
This follows from Proposition \ref{prop:S_2} and the case of $q=1$.
\end{proof}
\begin{proposition}\label{prop: canonical divisor}\textup{(cf.\ \cite{Watanabe91})}
We use Notation \ref{n-cone-QFS}.
Fix $q \in \mathbb{Z}_{>0}$.
Then the following hold.
\begin{enumerate}
\item
For $m \in \mathbb{Z}_{>0}$ and a nonzero element $f \in H^0(X, \mathcal{O}_X(mD))$,
we have the following isomorphism of $\mathbb{Z}$-graded $R$-modules:
\[
R(qK_R)_{ft^m} \xrightarrow{\simeq} \bigoplus_{d \in \mathbb{Z}} H^0(D_+(\overline{f}), \mathcal{O}_X(q(K_X+D')+dD))t^d.
\]
\item
The equality
\[
\rho_*\mathcal{O}_U(qK_U)=\bigoplus_{d \in \mathbb{Z}}\mathcal{O}_X(q(K_X+D')+dD) t^d \subseteq K(X)[t, t^{-1}]
\]
holds.
Furthermore, it induces the isomorphism of $\mathbb{Z}$-graded $R$-modules
\[
H^i(U,\mathcal{O}_U(qK_U)) \simeq \bigoplus_{d \in \mathbb{Z}} H^i(X,\mathcal{O}_X(q(K_X+D')+dD))t^d.
\]
\end{enumerate}
\end{proposition}
\begin{proof}
We have the homomorphism of graded $R$-modules
\[
R(qK_R)_{ft^m} \to \bigoplus_{d \in \mathbb{Z}} H^0(D_+(\overline{f}), \mathcal{O}_X(q(K_X+D')+dD))t^d.
\]
By a similar argument to the proof of Proposition \ref{prop: affine morphism}, this is an isomorphism.
Then the assertions (1) and (2) follow from a similar argument to that of Proposition \ref{prop: affine morphism}.
\end{proof}
\begin{remark}[Graded structure on $Q_{R, K_R, n}$]\label{rem:coh of Q}
We use Notation \ref{n-cone-QFS}.
\begin{enumerate}
\item
We equip $W_n(R)(qK_R)$ with the $p^{-(n-1)}\mathbb{Z}$-graded $W_n(R)$-module structure as in Proposition \ref{p-Witt-of-graded-can-div}.
\item
We define a $p^{-(n+r-1)}\mathbb{Z}$-graded $W_n(R)$-module structure on $F_*^rW_{n}(R)(qK_R)$ by
\[
(F^r_*W_n(R)(qK_R))_e:=W_n(R)(qK_R)_{p^re}.
\]
Then Proposition \ref{p-Witt-of-graded-can-div} implies that the following maps are graded $W_n(R)$-module homomorphisms:
\begin{align*}
&V \colon F_*W_{n-1}(R)(pqK_R) \to W_n(R)(qK_R), \text{ and}\\
&F \colon W_n(R)(qK_R) \to F_*W_n(R)(pqK_R).
\end{align*}
\item
We introduce the $p^{-n}\mathbb{Z}$-graded $W_n(R)$-module structure on $Q_{R,K_R,n}$ via the
following exact sequence (that is, the $e$-th graded piece is the quotient of the $e$-th graded pieces of the terms on the left):
\[
0 \to F_*W_{n-1}(R)(pK_R) \xrightarrow{FV} F_*W_n(R)(pK_R) \to Q_{R,K_R,n} \to 0.
\]
\item
It follows from (1) and (\ref{n-graded-str}) that $H^i(U,W_n\mathcal{O}_U(qK_U))$ and
$H^{i+1}_\mathfrak{m}(W_n(R)(qK_R))$ have $p^{-(n-1)}\mathbb{Z}$-graded $W_n(R)$-module structures
such that the connecting map
\[
H^i(U,W_n\mathcal{O}_U(qK_U)) \to H^{i+1}_\mathfrak{m}(W_n(R)(qK_R))
\]
is a graded $W_n(R)$-module homomorphism.
Similarly, by (3) and (\ref{n-graded-str}), the cohomologies $H^i(U, Q_{U, K_U, n})$ and
$H^{i+1}_\mathfrak{m}(Q_{R, K_R, n})$ have $p^{-n}\mathbb{Z}$-graded $W_n(R)$-module structures
such that the connecting map
\[
H^i(U, Q_{U, K_U, n}) \to H^{i+1}_\mathfrak{m}(Q_{R, K_R, n})
\]
is a graded $W_n(R)$-module homomorphism.
\end{enumerate}
\end{remark}
\begin{proposition}\label{prop:coh of Q}
We use Notation \ref{n-cone-QFS}.
Fix $d, n \in \mathbb{Z}_{>0}$.
Then
the following commutative diagram
consists of graded $W_n(R)$-module homomorphisms of $p^{-n}\mathbb{Z}$-graded $W_n(R)$-modules
\[
\begin{tikzcd}
H^d(U,\omega_U) \arrow[r, "\Phi_{U, K_U, n}"] \arrow[d, "\simeq"] & H^d(Q_{U,K_U,n}) \arrow[d, "\simeq"] \\
H^{d+1}_\mathfrak{m}(\omega_R) \arrow[r, "\Phi_{R, K_R, n}"] & H^{d+1}_\mathfrak{m}(Q_{R,K_R,n}),
\end{tikzcd}
\]
where all the maps are the natural ones and each vertical map is an isomorphism.
\end{proposition}
\begin{proof}
The commutativity of the diagram follows from the functoriality of the connecting map.
It is clear that the vertical maps are bijective.
It suffices to show that each map is a graded $W_n(R)$-module homomorphism.
As for the vertical arrows, we may apply Remark \ref{rem:coh of Q}(4).
Recall that the horizontal maps are given by the right vertical maps in the following diagrams, in which each horizontal sequence is exact:
\[
\begin{tikzcd}
0 \arrow[r] & F_*W_{n-1}\mathcal{O}_U(pK_U) \arrow[r, "V"] \arrow[d, equal] & W_n\mathcal{O}_U(K_U) \arrow[r] \ar[d, "F"] & \omega_U \arrow[r]
\arrow[d, "\Phi_{U, K_U, n}"] & 0 \\
0 \arrow[r] & F_*W_{n-1}\mathcal{O}_U(pK_U) \arrow[r, "FV"] & F_*W_n\mathcal{O}_U(pK_U) \arrow[r] & Q_{U,K_U,n} \arrow[r] & 0
\end{tikzcd}
\]
and
\[
\begin{tikzcd}
0 \arrow[r] & F_*W_{n-1}(R)(pK_R) \arrow[r, "V"] \arrow[d, equal] & W_n(R)(K_R) \arrow[r] \arrow[d, "F"] & \omega_R \arrow[r] \arrow[d, "\Phi_{R, K_R, n}"] & 0 \\
0 \arrow[r] & F_*W_{n-1}(R)(pK_R) \arrow[r, "FV"] & F_*W_n(R)(pK_R) \arrow[r] & Q_{R,K_R,n} \arrow[r] & 0.
\end{tikzcd}
\]
Hence we are done by Remark \ref{rem:coh of Q}(2).
\end{proof}
\begin{proposition}\label{prop:commutativity}
We use Notation \ref{n-cone-QFS}.
Set $d := \dim X$.
Then we have the commutative diagram
\[
\begin{tikzcd}[column sep=1.5in]
H^d(U,\omega_U)_0 \arrow[r, "{H^d(U, \Phi_{U, K_U, n})_0}"] & H^d(U, Q_{U,K_U,n})_0 \\
H^d(X,\mathcal{O}_X(K_X+D')) \arrow[r, "{H^d(X, \Phi_{X, K_X+D', n})}"] \arrow[u, "\simeq"] & H^d(X,Q_{X,K_X+D',n}).
\arrow[u, "\simeq"]
\end{tikzcd}
\]
\end{proposition}
\begin{proof}
We take nonzero homogeneous elements
$\overline{f}_1 := f_1t^{d_1},\ldots , \overline{f}_r := f_rt^{d_r}$ of $R$
such that $D(\overline{f}_1)\cup \cdots \cup D(\overline{f}_r)=U$.
By Proposition \ref{prop: canonical divisor}(1), we obtain
\begin{align*}
R(qK_R)_{\overline{f}_i}&=
\bigoplus_{m \in \mathbb{Z}} H^0(D_+(\overline{f}_i), \mathcal{O}_X(q(K_X+D')+mD))t^m\qquad\text{and}\\
\left(R(qK_R)_{\overline{f}_i}\right)_0&=
H^0(D_+(\overline{f}_i), \mathcal{O}_X(q(K_X+D')))
\end{align*}
for any $1 \leq i \leq r$.
Then the latter equality implies
\begin{equation}\label{e1:commutativity}
\left(W_n(R)(qK_R)_{[\overline{f}_i]}\right)_0
= H^0(D_+(\overline{f}_i), W_n\mathcal{O}_X(q(K_X+D')))
\end{equation}
as $W_n(R)$-submodules of $W_n(K(R))$ by Proposition \ref{prop: canonical divisor}.
Indeed, the left hand side is
\[
\left(W_n(R)(qK_R)_{[\overline{f}_i]}\right)_0=
\prod_{m=0}^{n-1} (R(p^mqK_R)_{\overline{f}_i})_0
\]
and the right hand side is
\[
H^0(D_+(\overline{f}_i), W_n\mathcal{O}_X(q(K_X+D')))
=\prod_{m=0}^{n-1} H^0(D_+(\overline{f}_i), \mathcal{O}_X(p^mq(K_X+D'))).
\]
Therefore, (\ref{e1:commutativity}) induces the isomorphism of graded $W_n(R)$-modules
\[
\theta: H^d(X,W_n\mathcal{O}_X(q(K_X+D')))\xrightarrow{\simeq} H^d(U,W_n\mathcal{O}_U(qK_U) )_0,
\]
which commutes with $F$ and $V$.
Therefore, we obtain the following commutative diagram of graded $W_n(R)$-modules
{\small
\[
\begin{tikzcd}[row sep=1.5em, column sep =0.8em]
H^d(F_*W_{n-1}\mathcal{O}_U(pK_U))_0 & & & H^d(W_n\mathcal{O}_U(K_U))_0 & & \\
& H^d(F_*W_{n-1}\mathcal{O}_U(pK_U))_0 & & & H^d(F_*W_n\mathcal{O}_U(pK_U))_0 & & \\
H^d(F_*W_{n-1}\mathcal{O}_X(pE)) & & & H^d(W_n\mathcal{O}_X(E)) & & \\
& H^d(F_*W_{n-1}\mathcal{O}_X(pE)) & & & H^d(F_*W_n\mathcal{O}_X(pE)) &
\arrow[from=1-1,to=2-2, equal]
\arrow[from=1-1,to=1-4, "V" pos=.6]
\arrow[from=1-4,to=2-5, "F"]
\arrow[from=3-1,to=4-2, equal]
\arrow[from=3-1,to=3-4, "V" pos=.6]
\arrow[from=4-2,to=4-5, "FV" pos=.4]
\arrow[from=3-4,to=4-5, "F"]
\arrow[from=3-1,to=1-1, "\theta" pos=.3, "\simeq"' pos=.3]
\arrow[from=3-4,to=1-4, "\theta" pos=.3, "\simeq"' pos=.3]
\arrow[from=4-5,to=2-5, "\theta" pos=.3, "\simeq"' pos=.3]
\arrow[from=4-2,to=2-2, crossing over, "\theta" pos=.3, "\simeq"' pos=.3]%
\arrow[from=2-2,to=2-5, crossing over, "FV" pos=.4]
\end{tikzcd}
\]}
\noindent where $E:=K_X+D'$.
Taking the cokernels of horizontal maps, we obtain the required diagram.
\end{proof}
\begin{theorem}\label{thm:height equality}
We use Notation \ref{n-cone-QFS}.
Then, for every $n \in \mathbb{Z}_{>0}$,
$R_{\mathfrak{m}}$ is $n$-quasi-$F$-split if and only if $(X, D')$ is $n$-quasi-$F$-split.
In particular, $R_{\mathfrak{m}}$ is quasi-$F$-split if and only if $(X, D')$ is quasi-$F$-split.
\end{theorem}
\begin{proof}
Set $d := \dim X$. By Proposition \ref{prop:coh of Q} and Proposition \ref{prop:commutativity}, we obtain the following commutative diagram
\[
\begin{tikzcd}[column sep=3cm]
H^d(X,\mathcal{O}_X(K_X+D')) \arrow[r, "H^d(\Phi_{X, K_X+D', n})"] \arrow[d, "\simeq"] & H^d(Q_{X,K_X+D',n}) \arrow[d, "\simeq"] \\
H^{d+1}_\mathfrak{m}(\omega_R)_0 \arrow[r, "H^{d+1}_{\mathfrak{m}}(\Phi_{R, K_R, n})_0"] \arrow[d, hookrightarrow] & H^{d+1}_\mathfrak{m}(Q_{R,K_R,n})_0 \arrow[d, hookrightarrow] \\
H^{d+1}_\mathfrak{m}(\omega_R) \arrow[r, "H^{d+1}_{\mathfrak{m}}(\Phi_{R, K_R, n})"] & H^{d+1}_\mathfrak{m}(Q_{R,K_R,n}).
\end{tikzcd}
\]
By \cite[Lemma 3.13]{KTTWYY1} (cf.\ Remark \ref{r-QFS}),
the upper map
$H^d(\Phi_{X, K_X+D', n})$
is injective if and only if
$(X, D')$ is $n$-quasi-$F$-split.
Again by \cite[Lemma 3.13]{KTTWYY1},
the bottom map $H^{d+1}_{\mathfrak{m}}(\Phi_{R, K_R, n})$
is injective if and only if $R_{\mathfrak{m}}$ is $n$-quasi-$F$-split.
By diagram chase,
if the bottom map $H^{d+1}_{\mathfrak{m}}(\Phi_{R, K_R, n})$ is injective, then the upper map
$H^d(\Phi_{X, K_X+D', n})$ is injective.
Conversely, assume that the upper map
$H^d(\Phi_{X, K_X+D', n})$ is injective.
Since the degree $0$ part
$H^{d+1}_\mathfrak{m}(\omega_R)_0$
of $H^{d+1}_\mathfrak{m}(\omega_R)$ is the socle of $H^{d+1}_\mathfrak{m}(\omega_R)$ (cf.\ \cite[(3.2)]{Watanabe91}), the bottom map
$H^{d+1}_{\mathfrak{m}}(\Phi_{R, K_R, n})$
is also injective.
\end{proof}
\begin{corollary}\label{cor:corresponding}
Let $k$ be an $F$-finite field of characteristic $p>0$.
Let $X$ be a projective normal variety over $k$ with $H^0(X, \mathcal{O}_X)=k$.
Let $\Delta$ be an effective $\mathbb{Q}$-divisor on $X$ with standard coefficients
such that $\rdown{\Delta}=0$.
Assume that $-(K_X+\Delta)$ is ample.
Set
\[
R :=\bigoplus_{d \geq 0} H^0(X,\mathcal{O}_X(-d(K_X+\Delta)))t^d \subseteq K(R)[t]
\]
and
$\mathfrak{m} := \bigoplus_{d > 0} H^0(X,\mathcal{O}_X(-d(K_X+\Delta)))t^d \subseteq R$.
Then
$R_{\mathfrak{m}}$ is quasi-$F$-split if and only if $(X, \Delta)$ is quasi-$F$-split.
\end{corollary}
\begin{proof}
Set $d:=\dim X$ and $D := -(K_X+\Delta)$.
We use Notation \ref{n-cone-QFS}.
For the irreducible decomposition $\Delta = \sum_{i=1}^r \frac{d_i -1}{d_i}D_i$,
we obtain
\[
\{ D \} = \{ -(K_X+\Delta)\} = \sum_{i=1}^r \frac{1}{d_i} D_i,
\]
which implies $D' = \sum_{i=1}^r \frac{d_i -1}{d_i}D_i =\Delta$
(for the definition of $D'$, see Notation \ref{n-cone-QFS}).
It follows from Theorem \ref{thm:height equality}
that $R_{\mathfrak{m}}$ is quasi-$F$-split if and only if $(X, \Delta)$ is quasi-$F$-split.
\end{proof}
\section{Klt threefolds and quasi-F-splitting}\label{s-klt3}
We start by generalising Theorem \ref{t-LDP-QFS} to the case when the base field is perfect.
\begin{theorem} \label{t-LDP-QFS2}
Let $k$ be a perfect field of characteristic $p>41$.
Let $(X,\Delta)$ be a log del Pezzo pair with standard coefficients.
Then $(X,\Delta)$ is quasi-$F$-split.
\end{theorem}
\begin{proof}
If $k$ is algebraically closed, then the assertion follows from Theorem \ref{t-LDP-QFS}.
The general case is reduced to this case by taking the base change to the algebraic closure \cite[Corollary 3.20]{KTTWYY1}.
\end{proof}
We are now ready to prove the main theorems of this paper:
Theorem \ref{t-3dim-klt-QFS} and Theorem \ref{t-3dim-klt-nonQFS}.
\begin{theorem} \label{t-3dim-klt-QFS}
Let $k$ be a perfect field of characteristic $p >41$.
Let $(X, \Delta)$ be a three-dimensional $\mathbb{Q}$-factorial affine klt pair
over $k$, where $\Delta$ has standard coefficients.
Then $(X, \Delta)$ is quasi-$F$-split.
In particular, $X$ lifts to $W_2(k)$.
\end{theorem}
\begin{proof}
The same argument as in \cite[Theorem 6.19]{KTTWYY1} works
after replacing \cite[Theorem 6.18]{KTTWYY1} by Theorem \ref{t-LDP-QFS}.
\end{proof}
\begin{theorem} \label{t-3dim-klt-nonQFS}
Assume $p \in \{11, 17, 19, 23, 41\}$.
Let $k$ be an algebraically closed field of characteristic $p$.
Then there exists a three-dimensional affine $\mathbb{Q}$-factorial klt variety $V$ which is not quasi-$F$-split.
\end{theorem}
\begin{proof}
Let $(X:=\mathbb{P}^2_k, \Delta)$ be as in Notation \ref{n-P^2-cex}.
Set $D:=-(K_X+\Delta)$ and $V := V_{X, D}$.
By Theorem \ref{t-klt},
$V$ is a three-dimensional affine $\mathbb{Q}$-factorial klt variety.
It follows from Theorem \ref{t-P^2-main} that $(X, \Delta)$ is not quasi-$F$-split.
Then Corollary \ref{cor:corresponding} implies that $V$ is not quasi-$F$-split.
\end{proof}
\section{Application to extension theorem for differential forms}
In this section, we apply
Theorem \ref{t-3dim-klt-QFS}
to prove the logarithmic extension theorem for one-forms on three-dimensional terminal singularities (Corollary \ref{c-3dim-1form}).
\begin{definition}[Logarithmic extension theorem]\label{def:log ext thm}
Let $X$ be a normal variety over a perfect field and let $D$ be a reduced divisor on $X$.
We say that $(X,D)$ satisfies \textit{the logarithmic extension theorem for $i$-forms} if, for every proper birational morphism $f\colon Y\to X$ from a normal variety $Y$, the natural
restriction injection
\[
f_{*}\Omega^{[i]}_Y(\log\,(E+f_{*}^{-1}D))\hookrightarrow \Omega^{[i]}_X(\log\,D)
\]
is an isomorphism, where $E$ is the largest reduced $f$-exceptional
divisor.
\end{definition}
\begin{remark}
Let $i \in \mathbb{Z}_{\geq 0}$.
\begin{enumerate}
\item If $(X,D)$ is a log canonical pair over $\mathbb{C}$, then $(X,D)$ satisfies the logarithmic extension theorem for $i$-forms (\cite[Theorem 1.5]{GKKP}).
\item If $(X,D)$ is a two-dimensional log canonical pair over a perfect field of characteristic $p>5$, then $(X,D)$ satisfies the logarithmic extension theorem for $i$-forms (\cite[Theorem 1.2]{graf21}).
\end{enumerate}
\end{remark}
\begin{definition}\label{def:reflexive Carter operators}
Let $X$ be a normal variety over a perfect field
of characteristic $p>0$ and let $D$ be a reduced divisor on $X$.
Let $j\colon U\to X$ be the inclusion from the log smooth locus $U$ of $(X,D)$.
For $D_U :=D|_U$, set
\begin{align*}
&B_n\Omega_X^{[i]}(\log\,D)\coloneqq j_{*}B_n\Omega^{[i]}_U(\log\,D_U)\quad \text{and}\\
&Z_n\Omega_X^{[i]}(\log\,D)\coloneqq j_{*}Z_{n}\Omega_U^{i}(\log\,D_U).
\end{align*}
We define \textit{the $i$-th reflexive Cartier operator} by
\begin{align*}C^{[i]}_{X,D}:=j_{*}C^{i}_{U,D_U}\colon Z_1\Omega_X^{[i]}(\log\,D)\to \Omega^{[i]}_X(\log\,D).
\end{align*}
\end{definition}
We have the following criterion for the logarithmic extension theorem in positive characteristic.
\begin{theorem}[\textup{\cite[Theorem A]{Kaw4}}]\label{thm:criterion for log ext thm}
Let $X$ be a normal variety over a perfect field of characteristic $p>0$ and let $D$ be a reduced divisor on $X$.
Fix $i \in \mathbb{Z}_{\geq 0}$.
If the $i$-th reflexive Cartier operator
\[
C^{[i]}_{X,D}\colon Z_1\Omega_X^{[i]}(\log\,D)\to \Omega_X^{[i]}(\log\,D)
\]
is surjective, then $(X,D)$ satisfies the logarithmic extension theorem for $i$-forms.
\end{theorem}
\begin{theorem}\label{thm:log ext thm for qFs}
Let $X$ be a normal variety
over a perfect field of characteristic $p>0$.
Let $Z$ be the singular locus of $X$ and $j\colon U:=X\setminus Z\hookrightarrow X$ the inclusion.
Suppose that $X$ is quasi-$F$-split, $X$ satisfies $S_3$, and $\codim_X(Z)\geq 3$.
Then the first reflexive Cartier operator
\[
C^{[1]}_{X}\colon Z_{1}\Omega_X^{[1]}\to \Omega^{[1]}_X
\]
is surjective.
In particular, $X$ satisfies the logarithmic extension theorem for one-forms.
\end{theorem}
\begin{proof}
By
Theorem \ref{thm:criterion for log ext thm}, it suffices to prove the surjectivity of $C_X^{[1]}$.
To this end, we may assume that $X$ is affine.
Since $F_{*}\mathcal{O}_X$ satisfies $S_3$ and $\codim_X(Z)\geq 3$, it follows from \cite[Proposition 1.2.10 (a) and (e)]{BH93} that $H_Z^{2}(F_{*}\mathcal{O}_X)=0$.
We define $\mathcal{O}_X$-modules $\mathcal{B}_m$ by $\mathrm{Coker}(F\colon W_m\mathcal{O}_X\to F_{*}W_m\mathcal{O}_X)$.
Then $B_m\Omega_X^{[1]}$ and $\mathcal{B}_m$ coincide with each other on the smooth locus of $X$ by Lemma \ref{lem:Serre's map}.
Since $X$ is quasi-$F$-split, we can take $n\in\mathbb{Z}_{>0}$ such that
\[
0 \to \mathcal{O}_X\to Q_{X,n}\to \mathcal{B}_n\to 0
\]
splits.
Then we have the following commutative diagram
\[
\begin{tikzcd}
H^2_{Z}(Q_{X,n})\arrow[r, twoheadrightarrow]\arrow[d] & H^2_{Z}(\mathcal{B}_n)\arrow[d]\\
\mathllap{0\,=\,\, } H^2_{Z}(F_{*}\mathcal{O}_X)\arrow[r]& H^2_{Z}(\mathcal{B}_1),
\end{tikzcd}
\]
where the right vertical arrow is induced by the $n$-times iterated Cartier operator $(C_U^{1})^{n}$ via Lemma \ref{l-Bn-Cartier} and Remark \ref{r-QFS}.
The top horizontal arrow is surjective since $Q_{X,n}\to \mathcal{B}_n$ is a splitting surjection.
Thus, the above diagram shows that the right vertical map is zero.
Since $X$ is affine, we have
\[
H^0(X, R^1j_{*}B_n\Omega_U^{1})\cong H^1(U, B_n\Omega_U^{1})\cong H^2_{Z}(\mathcal{B}_n),
\]
and thus $R^1j_{*}(C_U^{1})^{n}\colon R^1j_{*}B_n\Omega_U^{1} \to R^1j_{*}B_1\Omega_U^{1}$ is zero.
Consider the following commutative diagram
\[
\begin{tikzcd}[column sep=3cm]
Z_{n}\Omega_X^{[1]}\arrow[r, "(C_{X}^{[1]})^{n}"]\arrow[d, "(C_{X}^{[1]})^{n-1}"'] & \Omega_X^{[1]}\ar[r]\arrow[d, equal] & R^1j_{*}B_{n}\Omega_U^{1}\arrow[d, "R^1j_{*}(C_U^{1})^{n-1}=\,0"]\\
Z_{1}\Omega_X^{[1]}\arrow[r, "C_X^{[1]}"] & \Omega_X^{[1]}\arrow[r] & R^1j_{*}B_{1}\Omega_U^{1},
\end{tikzcd}
\]
where the horizontal exact sequences are obtained from (\ref{exact1}).
Now, by the above diagram, we obtain the surjectivity of $C_{X}^{[1]}$, as desired.
\end{proof}
\begin{corollary}\label{c-3dim-1form}
Let $X$ be a three-dimensional variety over a perfect field of characteristic $p>41$.
Assume that one of the following holds.
\begin{enumerate}
\item $X$ is terminal.
\item
The singular locus of $X$ is zero-dimensional and $X$ is $\mathbb{Q}$-factorial and klt.
\end{enumerate}
Then $X$ satisfies the logarithmic extension theorem for one-forms.
\end{corollary}
\begin{proof}
We first treat the case when (2) holds.
In this case, $X$ is Cohen--Macaulay by \cite[Corollary 1.3]{ABL20}.
Then the assertion follows from
Theorem \ref{t-3dim-klt-QFS}
and Theorem \ref{thm:log ext thm for qFs}.
This completes the proof for the case when (2) holds.
Assume (1).
By taking a $\mathbb{Q}$-factorialisation of $X$ (see, e.g., \cite[Theorem 2.14]{GNT16}), we may assume that $X$ is $\mathbb{Q}$-factorial.
Since $X$ is terminal, it follows from \cite[Corollary 2.13]{kollar13} that $X$ has only isolated singularities.
Then (2) holds.
\end{proof} |
{
"arxiv_id": "2302.13206",
"language": "en",
"timestamp": "2023-02-28T02:13:06",
"url": "https://arxiv.org/abs/2302.13206",
"yymm": "2302"
} | \section{Introduction}
Classifiers such as neural networks
often achieve their strong performance
through a completely supervised learning (CSL) approach,
which requires a fully classified (labeled) dataset.
However, missing labels often occur in practice
due to difficulty in determining
the true label for an observation (the feature vector).
For example, in fields such as medicine, defense, and other
scientific fields, images can often only be
correctly classified by a limited number of people
who are experts in the field.
Hence, a training sample might not be completely
classified, with images that are difficult
to classify left without their class labels.
Moreover, in medical diagnosis, there might be
scans that can be diagnosed confidently only
after an invasive procedure
that might be regarded as unethical to perform at the time.
Much attention is given
in both the application oriented and the theoretical machine
learning communities to semi-supervised learning (SSL) approaches
that effectively form a classifier from
training data consisting of a limited number
of classified (labeled) observations but a much larger number
of unclassified (unlabeled) observations.
Thus, there is now a wide literature on SSL techniques, for example, \cite{chapelle2006semi}, \cite{berthelot2019remixmatch},
\cite{pan2006semi}, and \cite{zhou2003learning},
which is too numerous to review here.
One of the most intuitive approaches to SSL
is self-training, an iterative
method for learning with alternating steps
between generating pseudo-labels for the unclassified features
and then training a classifier using both
the labeled and pseudo-labeled data.
This approach goes back at least to the mid-seventies in the
statistical literature \citep{mclachlan1975iterative} as cited in \cite{sohn2020fixmatch}, and is discussed in more detail
in the brief review of \cite{ahfock2022semi} that focused
on statistical-based SSL techniques.
As noted in \cite{ahfock2022semi}, with the latter approach
the parameters are estimated iteratively
by treating the labels of the unclassified
features as unknown parameters to be estimated along with
the parameters of the allocation rule.
That is, it uses the so-called classification
maximum likelihood (CML) approach as considered
by \cite{hartley1968classification}, among others;
see Section 1.12 of \cite{mclachlan1988mixture}.
The CML approach gives an inconsistent estimate of
$\boldsymbol{\theta}$ except in special cases like
$\pi_1=\pi_2$.
The CML approach can be viewed as
applying the subsequent expectation-maximization (EM) algorithm of \cite{dempster1977maximum}
with the following modification \citep{mclachlan19829}.
Namely, the E-step is executed using outright (hard)
rather than fractional (soft) assignment of each unclassified
feature to a component of the mixture as with
the standard application of the EM algorithm.
Many open research questions remain
regarding SSL.
A fundamental issue is identifying the mechanisms
responsible for the success of SSL.
For generative classifiers that model the joint distribution
of the features and the labels \citep{mclachlan1992discriminant},
it is possible to compute the Fisher information
in unclassified observations and the benefits
of using unlabeled observations can be studied
on a sound theoretical basis.
Recently, \citet{ahfock2020apparent}
provided a basis on how to increase the
accuracy of the Bayes' (optimal) classifier
estimated parametrically from a
partially classified sample as in SSL.
For the commonly
used linear discriminant function
estimated from a partially classified sample
from two multivariate (homoscedastic) Gaussian classes,
they showed that under certain conditions
the increase in accuracy
can be of sufficient magnitude for this SSL-based
classifier to have smaller error rate than that
if it were formed from a completely classified sample.
This apparent paradox can be explained by the fact that
by introducing the concept of missingness
for the unobserved (missing) labels,
the information in the missing labels
has the potential to compensate
for the loss of information
from not knowing the missing labels. The compensation can occur to
the extent that the rule
can have a lower error rate than it would
if the sample were completely classified, in
situations where the missingness mechanism of the
class labels is non-ignorable in the pioneering framework
of \cite{rubin1976inference} for missingness in incomplete-data analysis.
Hence, it is a novel result that has
significant implications for constructing
parametric SSL classifiers in practice.
This model-based SSL approach in its general form
can be adopted for an arbitrary
number of classes with distributions belonging to any parametric
family in conjunction with a non-ignorable missingness mechanism.
In the \proglang{R} package presented in this paper,
the latter mechanism is
specified by multinomial logistic regression
in terms of the entropy of the feature vectors.
This package
applies to an arbitrary number of classes
having multivariate Gaussian distributions
with covariance matrices not necessarily the same. We call the new package \pkg{gmmsslm} (\underline{G}aussian \underline{m}ixture \underline{m}odel based \underline{s}emi-\underline{s}upervised \underline{l}earning with a \underline{m}issing data mechanism).
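To fix ideas, the following \proglang{R} snippet gives a schematic, purely illustrative version of such a missingness mechanism: the probability that a class label is missing is modelled as a logistic function of the Shannon entropy of the posterior probabilities of class membership.
The function names and the coefficients \texttt{xi0} and \texttt{xi1} below are our own illustrative choices and are not part of the \pkg{gmmsslm} interface.
\begin{verbatim}
## Illustrative sketch (not the gmmsslm interface): probability that a
## label is missing, modelled as a logistic function of the entropy of
## the posterior class probabilities tau (length g, summing to one).
shannon_entropy <- function(tau) {
  tau <- tau[tau > 0]                  # avoid log(0)
  -sum(tau * log(tau))
}
prob_label_missing <- function(tau, xi0, xi1) {
  plogis(xi0 + xi1 * shannon_entropy(tau))  # xi0, xi1: assumed coefficients
}
## A feature vector whose class is uncertain (high entropy) is given a
## higher chance of having its label missing than a clear-cut one.
prob_label_missing(c(0.5, 0.5),   xi0 = -1, xi1 = 2)
prob_label_missing(c(0.99, 0.01), xi0 = -1, xi1 = 2)
\end{verbatim}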
\section{Background}
We let $\boldsymbol{y}$ be a $p$-dimensional vector of features
on an entity to be assigned to one of $g$ predefined classes
$C_1,\ldots,C_g$.
The random variable $\boldsymbol{Y}$ corresponding to the
realization $\boldsymbol{y}$ is assumed to have density
$f_i(\boldsymbol{y};\boldsymbol{\omega}_i)$ up to a vector $\boldsymbol{\omega}_i$
of unknown parameters in class $C_i$ $(i=1,\ldots,g)$.
The Bayes' rule of allocation $R(\boldsymbol{y};\boldsymbol{\theta})$ assigns an entity
with feature vector $\boldsymbol{y}$ to class $C_k$
(that is, $R(\boldsymbol{y};\boldsymbol{\theta})=k)$ if
$k=\arg\max_i\, \tau_i(\boldsymbol{y};\boldsymbol{\theta}),$
where
\begin{equation*}
\tau_i(\boldsymbol{y};\boldsymbol{\theta})= {\pi_i f_i(\boldsymbol{y};\boldsymbol{\omega}_i)}/
{\sum_{h=1}^g \pi_h f_h(\boldsymbol{y};\boldsymbol{\omega}_h)}
\end{equation*}
is the posterior probability that the entity belongs
to class $C_i$ given $\boldsymbol{Y}=\boldsymbol{y}$. Here, $\pi_i$ is the
prior probability that the entity belongs to
$C_i$ and
$\boldsymbol{\theta}=(\pi_1,\ldots,\pi_{g-1},
\boldsymbol{\theta}_1^T,\ldots,\boldsymbol{\theta}_g^T)^T$
is the vector of unknown parameters.
The superscript $T$
denotes vector transpose.
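To make the allocation rule concrete, the following minimal \proglang{R} sketch evaluates the posterior probabilities $\tau_i(\boldsymbol{y};\boldsymbol{\theta})$ and the resulting Bayes' allocation for multivariate Gaussian class densities. It is not part of \pkg{gmmsslm}; the mixing proportions, class means, common covariance matrix, and feature vector below are illustrative assumptions, and the Gaussian densities are evaluated with the \pkg{mvtnorm} package.
\begin{CodeChunk}
\begin{CodeInput}
R> ## Illustrative sketch of the Bayes' rule of allocation (not package code)
R> library(mvtnorm)
R> pi <- c(0.5, 0.3, 0.2)                 # mixing proportions (assumed)
R> mu <- list(c(0, 0), c(2, 0), c(0, 2))  # class means (assumed)
R> Sigma <- diag(2)                       # common covariance matrix (assumed)
R> y <- c(1, 1)                           # feature vector to be classified
R> dens <- sapply(seq_along(pi), function(i)
+    dmvnorm(y, mean = mu[[i]], sigma = Sigma))
R> tau <- pi * dens / sum(pi * dens)      # posterior probabilities tau_i
R> which.max(tau)                         # Bayes' allocation R(y; theta)
\end{CodeInput}
\end{CodeChunk}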
In order to estimate $\boldsymbol{\theta}$, it is customary in practice to have
available a training sample of size $n$. We let
$\boldsymbol{x}_{\rm CC}=(\boldsymbol{x}_1^T,\ldots,\boldsymbol{x}_n^T)^T$
contain $n$ independent realizations of $\boldsymbol{X}=(\boldsymbol{Y}^T, Z)^T$
as the completely classified training data, where $Z$ denotes
the class membership of $\boldsymbol{Y}$, being equal to $i$ if $\boldsymbol{Y}$
belongs to class $C_i$,
and zero otherwise, and
where $\boldsymbol{x}_j=(\boldsymbol{y}_j^T, z_j)^T$.
For a partially classified training sample
$\boldsymbol{x}_{\rm PC}$ in SSL, we introduce
the missing-label indicator $m_j$ which equals 1 if $z_j$
is missing and 0 if it is available $(j=1,\ldots,n)$.
Thus, $\boldsymbol{x}_{\rm PC}$ consists of those observations $\boldsymbol{x}_j$
in $\boldsymbol{x}_{\rm CC}$ if $m_j=0$, but only the feature vector
$\boldsymbol{y}_j$ in $\boldsymbol{x}_{\rm CC}$
if $m_j=1$ (that is, the label $z_j$ is missing).
The construction of the parametric version
of the optimal (Bayes') classifier from partially classified data
can be undertaken by maximum likelihood (ML)
estimation of $\boldsymbol{\theta}$ implemented
via the EM algorithm of \cite{dempster1977maximum}; see also \cite{mclachlan2007algorithm}.
We let
\begin{eqnarray}
\log L_{\rm C}(\boldsymbol{\theta})&=&
\sum_{j=1}^n(1-m_j)\sum_{i=1}^g z_{ij}
\log\{\pi_i f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i)\}
\label{eq:3},\\
\log L_{\rm UC}(\boldsymbol{\theta})&=&
\sum_{j=1}^n m_j \log \sum_{i=1}^g \pi_i
f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i),\label{eq:5}\\
\log L_{\rm PC}^{(\rm ig)}(\boldsymbol{\theta})&=&\log L_{\rm C}(\boldsymbol{\theta})
+\log L_{\rm UC}(\boldsymbol{\theta}),
\label{eq:4}
\end{eqnarray}
where in \eqref{eq:3}, $z_{ij}=1$ if $z_j=i$ and is zero otherwise.
In situations where one proceeds by ignoring
the ``missingness'' of the class labels,
$L_{\rm C}(\boldsymbol{\theta})$ and $L_{\rm UC}(\boldsymbol{\theta})$
denote the likelihood function formed from the classified data
and the unclassified data, respectively, and
$L_{\rm PC}^{(\rm ig)}(\boldsymbol{\theta})$ is the likelihood function
formed from the partially classified sample $\boldsymbol{x}_{\rm PC}$,
ignoring the missing-data mechanism for the labels.
The log of the likelihood $L_{\rm CC}(\boldsymbol{\theta})$ for the
completely classified sample $\boldsymbol{x}_{\rm CC}$ is given by \eqref{eq:3}
with all $m_j=0$.
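As a sketch of how these terms can be evaluated, the following \proglang{R} function computes $\log L_{\rm PC}^{(\rm ig)}(\boldsymbol{\theta})$ as the sum of \eqref{eq:3} and \eqref{eq:5} for Gaussian class densities. It is illustrative only and not the implementation in \pkg{gmmsslm}; \code{y} is an $n\times p$ matrix of features, \code{z} holds the class labels (arbitrary where missing), \code{m} the missing-label indicators, and \code{mu} and \code{Sigma} are assumed lists of class means and covariance matrices.
\begin{CodeChunk}
\begin{CodeInput}
R> ## Sketch of the log likelihood ignoring the missingness mechanism
R> loglik_pc_ig <- function(y, z, m, pi, mu, Sigma) {
+    g <- length(pi)
+    dens <- sapply(1:g, function(i)
+      mvtnorm::dmvnorm(y, mean = mu[[i]], sigma = Sigma[[i]]))
+    cl <- which(m == 0)
+    logLC <- sum(log(pi[z[cl]] * dens[cbind(cl, z[cl])]))     # log L_C
+    logLUC <- sum(log(rowSums(sweep(dens[m == 1, , drop = FALSE],
+                                    2, pi, "*"))))            # log L_UC
+    logLC + logLUC                                            # log L_PC^(ig)
+  }
\end{CodeInput}
\end{CodeChunk}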
Situations in the present context where it is appropriate to
ignore the missing-data mechanism in carrying out likelihood inference
are where the missing labels are missing at random
in the framework for missing data
put forward by \cite{rubin1976inference}
for missingness in incomplete-data analysis.
This case will occur in the present context
if the missingness of the labels does not depend on
the features nor the labels (missing completely at random)
or if the missingness depends only on the features
and not also the class labels (missing at random),
as in the truncation example of \cite{mclachlan1989mixture}
where the mechanism for the missing labels was also ignorable.
The paper of \cite{mealli2015clarifying} is an excellent reference
for describing the terminology of missing (always) at random (MAR) and
the more restrictive version of missing (always) completely at random (MCAR).
We let $\hat{\btheta}_{\rm CC}$ and $\hat{\btheta}_{\rm PC}^{(\rm ig)}$
be the estimate of $\boldsymbol{\theta}$ formed by consideration
of $L_{\rm CC}(\boldsymbol{\theta})$ and $L_{\rm PC}^{(\rm ig)}(\boldsymbol{\theta})$,
respectively.
Also, we let
$R(\boldsymbol{y};\hat{\btheta}_{\rm CC})$ and $R(\boldsymbol{y};\hat{\btheta}_{\rm PC}^{(\rm ig)})$
denote the estimated Bayes' rule obtained by plugging in
$\hat{\btheta}_{\rm CC}$ and $\hat{\btheta}_{\rm PC}^{(\rm ig)}$,
respectively, for $\boldsymbol{\theta}$ in $R(\boldsymbol{y};\boldsymbol{\theta})$.
The overall conditional error rate of the rule
$R(\boldsymbol{y};\hat{\btheta}_{\rm CC})$ is defined by
\begin{equation}
{\rm err}(\hat{\btheta}_{\rm CC};\boldsymbol{\theta})
= 1-\sum_{i=1}^g \pi_i\, {\rm pr}\{ R(\boldsymbol{Y}; \hat{\btheta}_{CC}) = i
\mid \hat{\btheta}_{CC}, Z=i\}.
\label{eq:5a}
\end{equation}
The corresponding conditional error rate
${\rm err}(\hat{\btheta}_{PC}^{(\rm ig)}; \boldsymbol{\theta})$
of the rule $R(\boldsymbol{y};\hat{\btheta}_{\rm PC}^{(\rm ig)})$
is defined by replacing $\hat{\btheta}_{CC}$ with
$\hat{\btheta}_{PC}^{(\rm ig)}$ in \eqref{eq:5a}.
The optimal error rate ${\rm err}(\boldsymbol{\theta})$
is defined by replacing $\hat{\btheta}_{\rm CC}$
by $\boldsymbol{\theta}$ in \eqref{eq:5a}.
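The conditional error rate \eqref{eq:5a} can be approximated by simulation once parameter values are given. The following \proglang{R} sketch does this for an assumed two-class Gaussian model with the true parameters plugged in, which approximates ${\rm err}(\boldsymbol{\theta})$; replacing them by $\hat{\btheta}_{\rm CC}$ or $\hat{\btheta}_{\rm PC}^{(\rm ig)}$ in the allocation step gives a Monte Carlo approximation of the corresponding conditional error rate. All numerical values are illustrative assumptions.
\begin{CodeChunk}
\begin{CodeInput}
R> ## Monte Carlo sketch of the overall conditional error rate (assumed values)
R> set.seed(1)
R> pi <- c(0.6, 0.4); mu <- list(c(0, 0), c(2, 0)); Sigma <- diag(2)
R> allocate <- function(Y, pi, mu, Sigma) {   # plug-in Bayes' allocation
+    dens <- sapply(1:2, function(i)
+      mvtnorm::dmvnorm(Y, mean = mu[[i]], sigma = Sigma))
+    max.col(sweep(dens, 2, pi, "*"))
+  }
R> err <- 0
R> for (i in 1:2) {
+    Yi <- mvtnorm::rmvnorm(10000, mean = mu[[i]], sigma = Sigma)
+    err <- err + pi[i] * mean(allocate(Yi, pi, mu, Sigma) != i)
+  }
R> err
\end{CodeInput}
\end{CodeChunk}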
The asymptotic relative efficiency (ARE) of the estimated Bayes' rule
$R(\boldsymbol{y};\hat{\btheta}_{\rm PC}^{(\rm ig)})$ compared to the rule
$R(\boldsymbol{y};\hat{\btheta}_{\rm CC})$
based on the completely classified sample $\boldsymbol{x}_{\rm CC}$
is defined by
the ratio of their expected excess error rates with respect to
the optimal error rate ${\rm err}(\boldsymbol{\theta})$,
\begin{equation}
{\rm ARE}\{R(\boldsymbol{y};\hat{\btheta}_{\rm PC}^{(\rm ig)})\}
=\frac{E\{{\rm err}(\hat{\btheta}_{\rm CC};\boldsymbol{\theta})\} -{\rm err}(\boldsymbol{\theta})}
{E\{{\rm err}(\hat{\btheta}_{\rm PC}^{(\rm ig)};\boldsymbol{\theta})\}-{\rm err}(\boldsymbol{\theta})},
\label{eq:6}
\end{equation}
where the expectation in the numerator and denominator of
the right-hand side of \eqref{eq:6}
is taken over the distribution of the estimators of $\boldsymbol{\theta}$
and is expanded up to terms of the first order.
Considerable simplification is possible under the two-class
(homoscedastic) Gaussian model,
\begin{equation}
\boldsymbol{Y} \mid Z=i\, \sim \,N(\boldsymbol{\mu}_i, \boldsymbol{\Sigma})\quad \text{in }C_i
\text{ with prob. }\pi_i\quad (i=1,2),
\label{eq:7}
\end{equation}
which has a convenient canonical form of $\boldsymbol{\Sigma}$ being the
$p\times p$ identity matrix,
$\boldsymbol{\mu}_1=(\Delta,0,\ldots,0)^T,$ and $\boldsymbol{\mu}_2=\boldsymbol{0}$, where
$\Delta^2=(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)^T\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)$
is the squared Mahalanobis distance between the two classes.
The Bayes' rule
reduces to depending on just the $(p+1)$-dimensional vector of
discriminant function coefficients $\boldsymbol{\beta}=(\beta_0,\boldsymbol{\beta}_1^T)^T$,
since $R(\boldsymbol{y};\boldsymbol{\theta})$ is 1 or 2, according as
$d(\boldsymbol{y};\boldsymbol{\beta})= \beta_0 + \boldsymbol{\beta}_1^T\boldsymbol{y}$
is greater or less than zero, where
$$\beta_0 = -{\textstyle\frac{1}{2}}(\boldsymbol{\mu}_1+\boldsymbol{\mu}_2)^T\boldsymbol{\Sigma}^{-1}
(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2),\quad
\boldsymbol{\beta}_1 = \boldsymbol{\Sigma}^{-1} (\boldsymbol{\mu}_1-\boldsymbol{\mu}_2).$$
We can reparameterize the two-class Gaussian model
\eqref{eq:7} by taking
$\boldsymbol{\theta}=(\boldsymbol{\theta}_1^T,\boldsymbol{\beta}^T)^T,$
where $\boldsymbol{\theta}_1$ contains the elements of $\boldsymbol{\mu}=\pi_1\boldsymbol{\mu}_1+\pi_2\boldsymbol{\mu}_2$
and the distinct elements of
$\boldsymbol{\Lambda}=\boldsymbol{\Sigma}+\pi_1\pi_2(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)
(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)^T.$
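For reference, the discriminant function coefficients are straightforward to compute; a minimal \proglang{R} sketch, with assumed values of $\boldsymbol{\mu}_1$, $\boldsymbol{\mu}_2$, and $\boldsymbol{\Sigma}$, is given below.
\begin{CodeChunk}
\begin{CodeInput}
R> ## Sketch: coefficients of the linear discriminant function (assumed values)
R> mu1 <- c(1.5, 0); mu2 <- c(0, 0); Sigma <- diag(2)
R> beta1 <- solve(Sigma, mu1 - mu2)
R> beta0 <- -0.5 * sum((mu1 + mu2) * beta1)
R> d <- function(y) beta0 + sum(beta1 * y)   # allocate to C_1 if d(y) > 0
R> d(c(1, 0))
\end{CodeInput}
\end{CodeChunk}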
Under the assumption that
the class labels are MCAR
(that is, the missingness of the labels does not depend on the data),
\cite{ganesalingam1978efficiency} derived the
ARE of $R(\boldsymbol{y};\hat{\bbeta}_{\rm PC}^{(\rm ig)})$
compared to $R(\boldsymbol{y};\hat{\bbeta}_{\rm CC})$
in the univariate case $(p = 1)$ of \eqref{eq:7}
and a completely unclassified sample $(\overline{m} = 1),$
where $\overline{m}=\sum_{j=1}^n m_j/n.$
Not surprisingly,
they showed that the
ARE of $R(\boldsymbol{y}; \hat{\bbeta}_{\rm PC}^{(\rm ig)})$
for a totally unclassified
sample is low, particularly for classes weakly separated
(for example, only 0.005
for $\Delta=1$ with $\pi_1=0.5$).
\cite{o1978normal} extended their result to
multivariate features and for arbitrary values of $\overline{m}$.
His results showed that this ARE is not sensitive to the
value of $p$ and does not vary with $p$ for equal class
prior probabilities.
In other work on the ARE of $R(\boldsymbol{y};\hat{\bbeta}_{\rm PC}^{(\rm ig)})$
compared to $R(\boldsymbol{y};\hat{\bbeta}_{\rm CC})$,
\cite{mclachian1995asymptotic}
evaluated it where
due to truncation the unclassified
univariate features had (ignorable) MAR labels.
\section{Fisher information matrix}
In the case of a partially classified training sample $\boldsymbol{x}_{\rm PC}$,
\cite{o1978normal} derived the information matrix
$$\boldsymbol{I}_{\rm PC}(\boldsymbol{\beta})
=E[\{\partial\log L_{\rm PC}(\boldsymbol{\theta})/\partial\boldsymbol{\theta}\}
\{\partial \log L_{\rm PC}(\boldsymbol{\theta})/\partial\boldsymbol{\theta}^T\}]$$
about the vector $\boldsymbol{\beta}$ of discriminant function coefficients
in the context of the two-class homoscedastic
Gaussian model \eqref{eq:7},
using the result of \cite{efron1975efficiency} for the information matrix
for $\boldsymbol{\beta}$ in applying logistic regression.
Assuming that the missingness of the labels
does not depend on the data,
\cite{o1978normal} showed that $\boldsymbol{I}_{\rm PC}(\boldsymbol{\beta})$
can be decomposed as
\begin{equation}
\boldsymbol{I}_{\rm PC}(\boldsymbol{\beta})=\boldsymbol{I}_{\rm CC}(\boldsymbol{\beta})
-\overline{m}\,\boldsymbol{I}_{\rm CC}^{(\rm lr)}(\boldsymbol{\beta}),
\label{eq:9}
\end{equation}
where $\boldsymbol{I}_{\rm CC}(\boldsymbol{\beta})$ is the information about $\boldsymbol{\beta}$
in a completely classified sample $\boldsymbol{x}_{\rm CC}$ and
$\boldsymbol{I}_{\rm CC}^{\rm(lr)}(\boldsymbol{\beta})$
is the information about $\boldsymbol{\beta}$ under
the logistic regression model for the distribution
of the class labels given the features in $\boldsymbol{x}_{\rm CC}$.
It can be seen from \eqref{eq:9} that the
loss of information due to the sample being partially
classified is equal to $\overline{m}\,\boldsymbol{I}_{\rm CC}^{(\rm lr)}(\boldsymbol{\beta})$.
\section{Modeling missingness for unobserved class labels}
\citet{ahfock2020apparent} proposed to treat the labels of
unclassified features as missing data
and to introduce a framework for their missingness.
To this end,
they introduced the random variable
$M_j$ corresponding to the realized value $m_j$ for the
missing-label indicator for the feature vector $\boldsymbol{y}_j$.
The missing-data mechanism of \cite{rubin1976inference} is specified
in the present context by the conditional distribution
\begin{equation}
{\rm pr}\{M_j=m_j\mid \boldsymbol{y}_j,z_j;\boldsymbol{\xi}\} \quad (j=1,\ldots,n),
\label{eq:10}
\end{equation}
where $\boldsymbol{\xi}$ is a vector of parameters.
\citet{ahfock2020apparent} noted from an examination
of several partially classified data sets
in the literature that unclassified features tend not to
occur at random in the feature space,
but rather tend to be concentrated
in regions of relatively high entropy.
They consequently proposed
to model the probability \eqref{eq:10} to depend on
the entropy $e_j(\boldsymbol{\theta})$ of the feature $\boldsymbol{y}_j$,
more precisely, the log entropy $\log e_j(\boldsymbol{\theta})$,
where $e_j(\boldsymbol{\theta})$ is defined by
\begin{equation*}
e_j(\boldsymbol{\theta})=-\sum_{i=1}^g \tau_i(\boldsymbol{y}_j;\boldsymbol{\theta})\log \tau_i(\boldsymbol{y}_j;\boldsymbol{\theta}).
\end{equation*}
Accordingly,
\cite{ahfock2020apparent}
specified the probability \eqref{eq:10} as
\begin{equation}
{\rm pr}\{M_j=1\mid \boldsymbol{y}_j,z_j\}= {\rm pr} \{M_j=1\mid \boldsymbol{y}_j\}
=q(\boldsymbol{y}_j;\boldsymbol{\theta},\boldsymbol{\xi}),
\label{eq:11}
\end{equation}
where $\boldsymbol{\Psi}=(\boldsymbol{\theta}^T, \boldsymbol{\xi}^T)^T$ and
the parameter $\boldsymbol{\xi}=(\xi_0, \xi_1)^T$ is distinct from $\boldsymbol{\theta}$.
The function $q(\boldsymbol{y}_j;\boldsymbol{\Psi})$ was taken to be
the logistic function,
\begin{equation}
q(\boldsymbol{y}_j;\boldsymbol{\Psi})= \frac{\exp\{\xi_0+\xi_1 \log e_j(\boldsymbol{\theta})\}}
{1+\exp\{\xi_0+\xi_1 \log e_j(\boldsymbol{\theta})\}}.
\label{eq:12}
\end{equation}
The expected proportion $\gamma(\boldsymbol{\Psi})$ of unclassified features
in a partially classified sample $\boldsymbol{x}_{\rm PC}$ is given by
\begin{equation*}
\gamma(\boldsymbol{\Psi})=\sum_{j=1}^n E(M_j)/n
=E[{\rm pr}\{M_j=1\mid \boldsymbol{Y}_j\}]
=E\{q(\boldsymbol{Y};\boldsymbol{\Psi})\}.
\end{equation*}
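A minimal \proglang{R} sketch of the entropy $e_j(\boldsymbol{\theta})$ and the missingness probability \eqref{eq:12} is given below; the matrix \code{tau} of posterior probabilities and the values of $\xi_0$ and $\xi_1$ are illustrative assumptions, and the sample mean of the fitted probabilities serves only as a crude estimate of $\gamma(\boldsymbol{\Psi})$.
\begin{CodeChunk}
\begin{CodeInput}
R> ## Sketch of the entropy-based missingness model (illustrative values)
R> entropy <- function(tau) -rowSums(ifelse(tau > 0, tau * log(tau), 0))
R> q_miss <- function(tau, xi0, xi1) plogis(xi0 + xi1 * log(entropy(tau)))
R> tau <- matrix(c(0.9, 0.1,
+                  0.5, 0.5), ncol = 2, byrow = TRUE)
R> q_miss(tau, xi0 = -0.5, xi1 = 1)   # higher entropy, higher pr(missing)
R> mean(q_miss(tau, -0.5, 1))         # crude estimate of gamma(Psi)
\end{CodeInput}
\end{CodeChunk}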
Under \eqref{eq:11}, the missingness is non-ignorable
as this probability depends also on $\boldsymbol{\theta}$, and hence,
the parameters of the Bayes' classifier.
In this case of non-ignorable MAR labels,
the log of the full likelihood function for $\boldsymbol{\Psi}$
is given by
\begin{equation*}
\log L_{\rm PC}^{(\rm full)}(\boldsymbol{\Psi})=
\log L_{\rm PC}^{(\rm ig)}(\boldsymbol{\theta}) +
\log L_{\rm PC}^{(\rm miss)}(\boldsymbol{\Psi}),
\end{equation*}
where
\begin{equation*}
\log L_{\rm PC}^{(\rm miss)}(\boldsymbol{\Psi})=
\sum_{j=1}^n [(1-m_j)\log \{1- q(\boldsymbol{y}_j;\boldsymbol{\Psi})\}
+ m_j \log q(\boldsymbol{y}_j;\boldsymbol{\Psi})]
\end{equation*}
is the log likelihood function for $\boldsymbol{\Psi}$
formed on the basis of the missing-label indicators
$m_j$ $(j=1,\ldots,n)$.
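For completeness, the missing-data component of the full log likelihood is simply a Bernoulli log likelihood in the probabilities $q(\boldsymbol{y}_j;\boldsymbol{\Psi})$; a minimal \proglang{R} sketch, with assumed values of \code{m} and \code{q}, is the following.
\begin{CodeChunk}
\begin{CodeInput}
R> ## Sketch of log L_PC^(miss) for given indicators m and probabilities q
R> loglik_miss <- function(m, q) sum((1 - m) * log(1 - q) + m * log(q))
R> m <- c(0, 1, 1, 0)
R> q <- c(0.2, 0.7, 0.6, 0.1)
R> loglik_miss(m, q)
\end{CodeInput}
\end{CodeChunk}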
We let $\hat{\bPsi}_{\rm PC}^{(\rm full)}$
be the estimate of $\boldsymbol{\Psi}$ formed by consideration
of the full likelihood $L_{\rm PC}^{(\rm full)}(\boldsymbol{\Psi})$ and
$R(\boldsymbol{y};\hat{\bPsi}_{\rm PC}^{(\rm full)})$ be the estimated Bayes' rule
obtained by plugging in $\hat{\bPsi}_{\rm PC}^{(\rm full)}$ for
$\boldsymbol{\Psi}$ in the Bayes' rule $R(\boldsymbol{y};\boldsymbol{\Psi})$.
That is, $R(\boldsymbol{y};\hat{\bPsi}_{\rm PC}^{(\rm full)})$
is the rule referred to as the SSL-full rule
and $R(\boldsymbol{y};\hat{\bPsi}_{\rm CC})$ is the
completely supervised learning (CSL) rule.
Under the model \eqref{eq:12} for non-ignorable MAR labels
in the case of the two-class Gaussian model \eqref{eq:7},
\citet{ahfock2020apparent} derived the following theorem that
provides the motivation
for the development of a program to implement this SSL approach for possibly multiple classes with multivariate Gaussian distributions.
\noindent
{\bf \small{Theorem}} {\it The Fisher information about $\boldsymbol{\beta}$
in the partially classified sample $\boldsymbol{x}_{\rm PC}$
via the full likelihood function $L_{\rm PC}^{(\rm full)}(\boldsymbol{\Psi})$
can be decomposed as
\begin{equation}
\boldsymbol{I}_{\rm PC}^{(\rm full)}(\boldsymbol{\beta})= \boldsymbol{I}_{\rm CC}(\boldsymbol{\beta})
-\gamma(\boldsymbol{\Psi})\boldsymbol{I}_{\rm CC}^{(\rm clr)}(\boldsymbol{\beta}) +
\boldsymbol{I}_{\rm PC}^{(\rm miss)}(\boldsymbol{\beta}),
\label{eq:18}
\end{equation}
where
$\boldsymbol{I}_{\rm CC}(\boldsymbol{\beta})$
is the information about $\boldsymbol{\beta}$
in the completely classified sample $\boldsymbol{x}_{\rm CC},
\boldsymbol{I}_{\rm CC}^{(\rm clr)}(\boldsymbol{\beta})$ is the conditional information
about $\boldsymbol{\beta}$ under the logistic regression model fitted
to the class labels in $\boldsymbol{x}_{\rm CC}$, and
$\boldsymbol{I}_{\rm PC}^{(\rm miss)}(\boldsymbol{\beta})$ is the
information about
$\boldsymbol{\beta}$ in the missing-label indicators under the
assumed logistic model for their distribution
given their associated features
in the partially classified sample $\boldsymbol{x}_{\rm PC}$.}
On contrasting \eqref{eq:18} with \eqref{eq:9} which ignores the missingness
mechanism,
it can be seen that this expression for the Fisher
information about the vector of discriminant function coefficients
contains the additional
term $\boldsymbol{I}_{\rm PC}^{(\rm miss)}(\boldsymbol{\beta})$,
arising from the additional information
about $\boldsymbol{\beta}$ in the missing-label indicators $m_j$.
This term has the potential to compensate for the loss of information
in not knowing the true labels of those unclassified features
in the partially classified sample.
The compensation depends on the extent
to which the probability of a missing label
for a feature depends on its entropy.
It follows that if
\begin{equation}
\boldsymbol{I}_{\rm PC}^{(\rm miss)}(\boldsymbol{\beta}) > \gamma(\boldsymbol{\Psi})\,
\boldsymbol{I}_{\rm CC}^{(\rm clr)}(\boldsymbol{\beta}),
\label{eq:19}
\end{equation}
then there is an increase in the information about $\boldsymbol{\beta}$
in the partially classified sample over
the information $\boldsymbol{I}_{\rm CC}(\boldsymbol{\beta})$ about $\boldsymbol{\beta}$
in the completely classified sample.
The inequality in \eqref{eq:19} is used
in the sense that the left-hand side of the equation,
minus the right, is positive definite.
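Whether \eqref{eq:19} holds for particular information matrices can be checked numerically by verifying that the difference is positive definite; a small \proglang{R} sketch, in which the $2\times 2$ matrices and the value of $\gamma(\boldsymbol{\Psi})$ are assumed purely for illustration, is given below.
\begin{CodeChunk}
\begin{CodeInput}
R> ## Sketch: numerical check of the condition (positive definiteness)
R> I_miss <- matrix(c(2.0, 0.3, 0.3, 1.5), 2, 2)  # assumed I_PC^(miss)
R> I_clr <- matrix(c(1.0, 0.1, 0.1, 0.8), 2, 2)   # assumed I_CC^(clr)
R> gam <- 0.6                                      # assumed gamma(Psi)
R> all(eigen(I_miss - gam * I_clr, symmetric = TRUE)$values > 0)
\end{CodeInput}
\end{CodeChunk}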
By deriving the ARE of the Bayes' rule using the full ML estimate of $\boldsymbol{\beta}$,
\citet{ahfock2020apparent} showed
that the asymptotic expected excess error rate
using the partially classified sample $\boldsymbol{x}_{\rm PC}$
can be much lower than the corresponding excess rate
using the completely classified sample $\boldsymbol{x}_{\rm CC}$.
The contribution to the Fisher information
from the missingness mechanism can
be relatively very high if $|\xi_{1}|$ is large,
as the location of the unclassified features
in the feature space
provides information
about regions of high uncertainty,
and hence where the entropy is high.
\section{Maximum likelihood estimation via the ECM algorithm}
In this section, we outline an application of the expectation conditional maximization (ECM) algorithm of \citet{meng1993maximum} to compute the ML estimate of $\boldsymbol{\Psi}$ on the basis of the full log likelihood $\log L_{{\text{PC}}}^{(\text{full})}(\boldsymbol{\Psi})$.
The E-step here is given as follows.
\textbf{E step:} The missing labels $z_{ij}$ for those observations $\boldsymbol{y}_j$ with $m_j=1$ are declared to be the ``missing'' data and are replaced by their conditional expectation given $\boldsymbol{y}_j$ and $m_j=1$:
\begin{equation*}
z_{ij}^{(k)}=\tau_i(\boldsymbol{y}_j;\boldsymbol{\theta}^{(k)})=\frac{\pi_i^{(k)} \phi(\boldsymbol{y}_j;\boldsymbol{\mu}_i^{(k)},\boldsymbol{\Sigma}_i^{(k)})}{\sum_{h=1}^g\pi_h^{(k)}\phi(\boldsymbol{y}_j;\boldsymbol{\mu}_h^{(k)},\boldsymbol{\Sigma}_h^{(k)})},
\end{equation*}
where $\boldsymbol{\theta}^{(k)}$ denotes the estimate of $\boldsymbol{\theta}$ after the $k$th iteration.
More generally, on the $(k+1)$th iteration the E-step computes the conditional expectation of the complete-data log likelihood $\log L_{{\text{PC}}}^{(\text{full})}(\boldsymbol{\Psi})$ given
the observed data, using the current estimate $\boldsymbol{\Psi}^{(k)}$ of $\boldsymbol{\Psi}$, to give
\begin{equation*}
\begin{split}
Q(\boldsymbol{\theta},\boldsymbol{\xi};\boldsymbol{\theta}^{(k)},\boldsymbol{\xi}^{(k)})&=\sum_{j=1}^n\big[(1-m_j)\sum_{i=1}^gz_{ij}^{(k)}\{\log\pi_i+\log f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i)\}\\&\quad
+m_j\sum_{i=1}^gz_{ij}^{(k)}\{\log\pi_i+\log f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i)\}\big]\\&
\quad+\sum_{j=1}^n\big[(1-m_j)\log\{1-q(\boldsymbol{y}_j;\boldsymbol{\theta},\boldsymbol{\xi})\}+m_j\log q(\boldsymbol{y}_j;\boldsymbol{\theta},\boldsymbol{\xi})\big].
\end{split}
\end{equation*}
\textbf{CM steps:}
We calculate the updated value $\boldsymbol{\Psi}^{(k+1)}$ of $\boldsymbol{\Psi}$ using two conditional maximization (CM) steps.
\newline
\newline
CM-1 step: We fix $\boldsymbol{\xi}$ at its current value $\boldsymbol{\xi}^{(k)}$ and update $\boldsymbol{\theta}$ to $\boldsymbol{\theta}^{(k+1)}$ given by
\begin{equation*}
\boldsymbol{\theta}^{(k+1)}=\underset{\boldsymbol{\theta}}{\arg\max}\,Q(\boldsymbol{\theta},\boldsymbol{\xi}^{(k)};\boldsymbol{\theta}^{(k)},\boldsymbol{\xi}^{(k)})
\end{equation*}
where
\begin{equation*}
\begin{split}
Q(\boldsymbol{\theta},\boldsymbol{\xi}^{(k)};\boldsymbol{\theta}^{(k)},\boldsymbol{\xi}^{(k)})&=\sum_{j=1}^n\big[(1-m_j)\sum_{i=1}^gz_{ij}^{(k)}\{\log\pi_i+\log f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i)\}\\&\quad
+m_j\sum_{i=1}^gz_{ij}^{(k)}\{\log\pi_i+\log f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i)\}\big]\\&
\quad+\sum_{j=1}^n\big[(1-m_j)\log\{1-q(\boldsymbol{y}_j;\boldsymbol{\theta},\boldsymbol{\xi}^{(k)})\}+m_j\log q(\boldsymbol{y}_j;\boldsymbol{\theta},\boldsymbol{\xi}^{(k)})\big].
\end{split}
\end{equation*}
CM-2 step: We now fix $\boldsymbol{\theta}$ at its updated value $\boldsymbol{\theta}^{(k+1)}$ and update $\boldsymbol{\xi}$ to $\boldsymbol{\xi}^{(k+1)}$ as
\begin{equation*}
\boldsymbol{\xi}^{(k+1)}=\underset{\boldsymbol{\xi}}{\arg\max}\,Q(\boldsymbol{\theta}^{(k+1)}, \boldsymbol{\xi}; \boldsymbol{\theta}^{(k)}, \boldsymbol{\xi}^{(k)}),
\end{equation*}
which reduces to
\begin{equation*}
\boldsymbol{\xi}^{(k+1)}=\underset{\boldsymbol{\xi}}{\arg\max}\,\log L_{{\text{PC}}}^{(\text{miss})}(\boldsymbol{\theta}^{(k+1)},\boldsymbol{\xi}),
\end{equation*}
on retaining only terms that depend on $\boldsymbol{\xi}$.
As $L_{\rm PC}^{(\rm miss)}(\boldsymbol{\theta}^{(k+1)}, \boldsymbol{\xi})$ belongs to the regular exponential family, we use the function \code{glm()} from the base package \pkg{stats}.
The estimate $\hat{\bPsi}_{\rm PC}^{(\rm full)}$ is given by the limiting value of $\boldsymbol{\Psi}^{(k)}$ as $k$ tends to infinity. We take the ECM process as having converged when
\begin{equation*}
\log L_{{\text{PC}}}^{(\text{full})}(\boldsymbol{\Psi}^{(k+1)})-\log L_{{\text{PC}}}^{(\text{full})}(\boldsymbol{\Psi}^{(k)})
\end{equation*}
falls below some suitably small prespecified tolerance.
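To illustrate the CM-2 step, note that with $\boldsymbol{\theta}$ held fixed it amounts to a logistic regression of the missing-label indicators on the log entropies, as in \eqref{eq:12}. The package performs this step internally; the sketch below, in which the vectors \code{en} (entropies at the current value of $\boldsymbol{\theta}$) and \code{m} are assumed, only shows the idea.
\begin{CodeChunk}
\begin{CodeInput}
R> ## Sketch of the CM-2 step as a logistic regression (assumed en and m)
R> en <- c(0.3, 0.7, 0.2, 0.9, 0.5)   # entropies e_j at the current theta
R> m <- c(0, 1, 1, 0, 1)              # missing-label indicators
R> xi_update <- coef(glm(m ~ log(en), family = binomial))
R> xi_update                          # updated (xi0, xi1)
\end{CodeInput}
\end{CodeChunk}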
\section{Application of gmmsslm}
We consider the estimation of the parameters in the Bayes' classifier using the package \pkg{gmmsslm}. It is implemented via the primary function \code{gmmsslm()}.
The input arguments are described in Table~\ref{tab:emmixssl}.
\begin{table}[ht]
\centering
\begin{tabular}{cp{12.5cm}}
\toprule
\proglang{R} arguments & Description \\
\midrule
\code{dat} & An $n\times p$ matrix where each row represents an individual observation \\
\code{zm} & An $n$-dimensional vector of class labels, with missing labels denoted by \code{NA} \\
\code{pi} & A $g$-dimensional vector for the initial values of the mixing proportions \\
\code{mu} & A $p\times g$ matrix for the initial values of the location parameters \\
\code{sigma} & A $p\times p$ covariance matrix, or a $p\times p\times g$ array containing the $g$ class covariance matrices. If \code{sigma} is a single $p\times p$ matrix, the model is fitted with a common covariance matrix; otherwise the class covariance matrices are allowed to differ \\
\code{xi} & A 2-dimensional vector containing the initial values of the coefficients in the logistic function of the Shannon entropy \\
\code{type} & Type of model fit: `ign' fits the model to a partially classified sample on the basis of the likelihood that ignores the missing-label mechanism; `full' fits the model to a partially classified sample on the basis of the full likelihood, taking the missing-label mechanism into account; and `com' fits the model to a completely classified sample \\
\code{iter.max} & Maximum number of iterations allowed. Defaults to 500 \\
\code{eval.max} & Maximum number of evaluations of the objective function allowed. Defaults to 500 \\
\code{rel.tol} & Relative tolerance. Defaults to 1e-15 \\
\code{sing.tol} & Singular convergence tolerance. Defaults to 1e-15 \\
\bottomrule
\end{tabular}
\caption{Arguments of the \code{gmmsslm()} function.}
\label{tab:emmixssl}
\end{table}
The default choices of \code{iter.max}, \code{eval.max}, \code{rel.tol}, and \code{sing.tol} that control the behavior of the algorithm often work well. This function implements the ECM algorithm described in the preceding section.
As an example, we apply \code{gmmsslm()} to a dataset generated from a mixture of $g=4$ trivariate Gaussian distributions in equal proportions using the function \code{rmix()}, with the missing-label indicators generated by \code{rlabel()}, in order to obtain the estimated parameters based on the partially classified sample. The parameter settings were specified as
$n=300$, $\boldsymbol{\pi}=(0.25,0.25,0.25,0.25)^T$,
$\boldsymbol{\mu}_1=(0.2,0.3,0.4)^T$, $\boldsymbol{\mu}_2=(0.2,0.7,0.6)^T$, $\boldsymbol{\mu}_3=(0.1,0.7,1.6)^T$, $\boldsymbol{\mu}_4=(0.2,1.7,0.6)^T$, $\boldsymbol{\Sigma}_i=i\boldsymbol{I}$ for $i=1,2,3,4$, and $\boldsymbol{\xi}=(-0.5,1)^T$.
\begin{CodeChunk}
\begin{CodeInput}
R> set.seed(8)
R> n<-300
R> g<-4
R> p<-3
R> pi<-rep(1/g,g)
R> sigma<-array(0,dim=c(3,3,4))
R> for(i in 1:g) sigma[,,i]<-diag(i,p)
R> mu<-matrix(c(0.2,0.3,0.4,0.2,0.7,0.6,0.1,0.7,1.6,0.2,1.7,0.6),p,g)
R> dat<-rmix(n=n,pi=pi,mu=mu,sigma=sigma)
R> xi<-c(-0.5,1)
R> m<-rlabel(dat=dat$Y,pi=pi,mu=mu,sigma=sigma,xi=xi)
R> m
\end{CodeInput}
\begin{CodeOutput}
[1] 1 0 0 1 1 0 0 1 1 1 1 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 1
[37] 1 0 1 0 1 1 0 1 0 1 1 1 0 0 0 0 0 0 1 0 1 1 0 1 1 0 0 0 0 0 1 1 0 0 0 1
[73] 0 1 0 0 0 1 0 1 1 1 0 0 0 1 0 1 1 0 1 0 1 0 0 1 0 0 0 1 1 0 1 1 0 0 0 1
[109] 0 0 0 0 0 0 1 0 1 1 0 0 1 0 1 1 1 0 0 0 0 1 1 0 1 1 0 1 1 0 1 1 1 1 1 0
[145] 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0
[181] 1 1 1 0 0 1 0 0 0 0 0 1 0 1 0 0 1 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 1 0 0 0
[217] 1 1 1 1 1 0 1 0 1 1 0 0 0 1 0 1 1 1 0 1 1 0 1 1 0 1 0 0 1 0 0 1 1 1 0 0
[253] 1 1 0 0 1 1 1 1 0 0 0 0 1 1 0 1 0 1 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 1 1 1
[289] 0 1 1 0 0 0 1 1 1 1 1 0
\end{CodeOutput}
\end{CodeChunk}
The elements of the output \code{m} indicate a missing label if equal to one and an available label if equal to zero. Using the vector of class labels \code{clust} returned by \code{rmix()}, we set the labels of the unclassified observations to \code{NA}, as in the following example.
\begin{CodeChunk}
\begin{CodeInput}
R> zm<-dat$clust
R> zm[m==1]<-NA
R> zm
\end{CodeInput}
\begin{CodeOutput}
[1] NA 3 4 NA NA 4 4 NA NA NA NA 1 2 NA 2 4 2 1 1 4 2 4 4 1
[25] NA 4 NA 3 2 4 3 2 NA NA 2 NA NA 1 NA 2 NA NA 1 NA 3 NA NA NA
[49] 3 1 4 3 4 2 NA 3 NA NA 2 NA NA 1 4 3 2 1 NA NA 3 1 4 NA
[73] 4 NA 2 4 3 NA 2 NA NA NA 1 3 1 NA 4 NA NA 1 NA 1 NA 1 4 NA
[97] 1 1 1 NA NA 4 NA NA 3 3 4 NA 3 3 2 3 4 1 NA 3 NA NA 2 2
[121] NA 4 NA NA NA 4 3 3 3 NA NA 2 NA NA 4 NA NA 3 NA NA NA NA NA 3
[145] 1 2 4 3 NA NA NA NA NA NA 3 1 1 4 2 4 3 NA 4 NA 3 3 3 1
[169] 2 NA NA 4 2 1 3 1 NA 2 3 2 NA NA NA 1 3 NA 1 4 1 4 3 NA
[193] 1 NA 4 4 NA NA NA 3 NA NA NA 1 3 4 4 NA 1 NA 3 2 NA 4 4 1
[217] NA NA NA NA NA 2 NA 1 NA NA 2 4 4 NA 1 NA NA NA 4 NA NA 3 NA NA
[241] 4 NA 3 4 NA 3 1 NA NA NA 3 4 NA NA 1 2 NA NA NA NA 2 1 4 4
[265] NA NA 3 NA 2 NA 4 2 2 NA 4 NA NA 2 1 1 2 2 3 4 2 NA NA NA
[289] 1 NA NA 4 2 2 NA NA NA NA NA 3
\end{CodeOutput}
\end{CodeChunk}
Here, we fit the model to the above partially classified sample by considering the full likelihood and accounting for the missing-label mechanism by setting \code{zm=zm} and \code{type=`full'}.
For the initial values of the parameters $\boldsymbol{\pi}$, $\boldsymbol{\mu}$, and $\boldsymbol{\Sigma}$, the function \code{initialvalue()} is available. Here, the argument \code{ncov} specifies the form of the class covariance matrices: \code{ncov=1} for a common covariance matrix and \code{ncov=2} (the default) for unequal covariance matrices.
Initial values of $\xi_0$ and $\xi_1$ are obtainable by the function \code{glm()}.
\begin{CodeChunk}
\begin{CodeInput}
R> inits<-initialvalue(dat=dat$Y,zm=zm,g=g, ncov=2)
R> en<-get_entropy(dat=dat$Y,n=n,p=p,g=g,pi=inits$pi,mu=inits$mu,sigma=inits$sigma)
R> glmfit<-glm(m~en,family='binomial')
R> xi_inits<-coef(glmfit)
R> xi_inits
\end{CodeInput}
\begin{CodeOutput}
(Intercept) en
-0.5930693 0.3417874
\end{CodeOutput}
\end{CodeChunk}
\begin{CodeChunk}
\begin{CodeInput}
R> fit_ful<-gmmsslm(dat=dat$Y,zm=zm,pi=inits$pi,mu=inits$mu,
+ sigma=inits$sigma,xi=xi_inits,type='full')
R> fit_ful
\end{CodeInput}
\begin{CodeOutput}
$objective
[1] 2099.796
$ncov
[1] 2
$convergence
[1] 1
$iteration
[1] 108
$parhat
$parhat$pi
[1] 0.2448038 0.2247527 0.2521564 0.2782871
$parhat$mu
[,1] [,2] [,3] [,4]
[1,] 0.2491248 0.1430182 0.3248906 0.2862982
[2,] 0.1632115 0.6807152 0.8309257 1.6827811
[3,] 0.3066612 0.1033765 1.7899401 0.4550641
$parhat$sigma
, , 1
[,1] [,2] [,3]
[1,] 0.91534868 -0.03619485 0.02974812
[2,] -0.03619485 0.68636971 -0.26421012
[3,] 0.02974812 -0.26421012 0.97870318
, , 2
[,1] [,2] [,3]
[1,] 1.7435648 -0.5144975 -0.1351344
[2,] -0.5144975 2.3481520 -0.3188222
[3,] -0.1351344 -0.3188222 2.3780968
, , 3
[,1] [,2] [,3]
[1,] 2.4470608 -0.2274167 -0.5799091
[2,] -0.2274167 3.2809171 -0.2257279
[3,] -0.5799091 -0.2257279 3.3297267
, , 4
[,1] [,2] [,3]
[1,] 5.1069469 0.3922628 -0.1481315
[2,] 0.3922628 2.9109848 -0.1947038
[3,] -0.1481315 -0.1947038 4.3250141
$parhat$xi
(Intercept) en
-0.2490987 0.6779641
\end{CodeOutput}
\end{CodeChunk}
To fit the model to the partially classified sample without considering the missing-label mechanism, we set \code{zm=zm} and \code{type=`ign'}.
\begin{CodeChunk}
\begin{CodeInput}
R> fit_ign<-gmmsslm(dat=dat$Y,zm=zm,pi=inits$pi,mu=inits$mu,
+ sigma=inits$sigma,type='ign')
R> fit_ign
\end{CodeInput}
\begin{CodeOutput}
$objective
[1] 1894.371
$ncov
[1] 2
$convergence
[1] 0
$iteration
[1] 45
$parhat
$parhat$pi
[1] 0.2493635 0.2235416 0.2417861 0.2853088
$parhat$mu
[,1] [,2] [,3] [,4]
[1,] 0.2455327 0.16532951 0.2504642 0.3364182
[2,] 0.1611216 0.63215520 0.7556749 1.7754949
[3,] 0.3379078 0.08854911 1.8037221 0.4770904
$parhat$sigma
, , 1
[,1] [,2] [,3]
[1,] 0.91204204 -0.04085983 0.0261662
[2,] -0.04085983 0.71231030 -0.2920060
[3,] 0.02616620 -0.29200604 1.0110159
, , 2
[,1] [,2] [,3]
[1,] 1.7192169 -0.5524251 -0.1063493
[2,] -0.5524251 2.3080496 -0.3681465
[3,] -0.1063493 -0.3681465 2.4762132
, , 3
[,1] [,2] [,3]
[1,] 2.5732396 -0.39247106 -0.51646104
[2,] -0.3924711 2.83464544 -0.05839604
[3,] -0.5164610 -0.05839604 3.47527992
, , 4
[,1] [,2] [,3]
[1,] 4.9801691 0.4923271 -0.1489078
[2,] 0.4923271 3.1325763 -0.2079410
[3,] -0.1489078 -0.2079410 4.1376085
\end{CodeOutput}
\end{CodeChunk}
Similarly, to obtain the estimated parameters based on the completely classified sample, we apply \code{gmmsslm()} only to the above generated dataset \code{dat$Y} and \code{dat$clust} from \code{rmix()}. There is no longer a missing label because the sample is completely classified. We set \code{zm=dat$clust} and \code{type=`com'}.
\begin{CodeChunk}
\begin{CodeInput}
R> fit_com<-gmmsslm(dat=dat$Y,zm=dat$clust,pi=inits$pi,mu=inits$mu,
+ sigma=inits$sigma,type='com')
R> fit_com
\end{CodeInput}
\begin{CodeOutput}
$objective
[1] 2044.533
$ncov
[1] 2
$convergence
[1] 1
$iteration
[1] 39
$parhat
$parhat$pi
[1] 0.2466667 0.2466667 0.2433333 0.2633333
$parhat$mu
[,1] [,2] [,3] [,4]
[1,] 0.1985072 0.1890481 0.2206379 0.4004096
[2,] 0.2683220 0.5339800 0.7292851 1.8812899
[3,] 0.3685162 0.1467160 1.8445736 0.3810854
$parhat$sigma
, , 1
[,1] [,2] [,3]
[1,] 0.97793568 -0.09208251 -0.07225683
[2,] -0.09208251 0.84448793 -0.22104616
[3,] -0.07225683 -0.22104616 0.93807780
, , 2
[,1] [,2] [,3]
[1,] 1.80391990 -0.3886867 0.07865229
[2,] -0.38868669 2.2083642 -0.30586168
[3,] 0.07865229 -0.3058617 2.27606673
, , 3
[,1] [,2] [,3]
[1,] 2.8947952 -0.46421008 -0.36287163
[2,] -0.4642101 2.68606795 0.01285382
[3,] -0.3628716 0.01285382 3.49889359
, , 4
[,1] [,2] [,3]
[1,] 4.7856302 0.4454033 -0.3151916
[2,] 0.4454033 3.2271520 -0.2439202
[3,] -0.3151916 -0.2439202 4.4013177
\end{CodeOutput}
\end{CodeChunk}
The function returns a \code{gmmsslm} object containing the optimal objective value, a convergence indicator, the number of iterations, and the final parameter estimates. The parameter estimates are contained in \code{fit_ful$parhat}, \code{fit_ign$parhat}, and \code{fit_com$parhat}. Finally, we can input the estimated parameters and the unclassified data into \code{bayesclassifier()} to obtain the predicted class labels.
\begin{CodeChunk}
\begin{CodeInput}
R> dat_ul<-dat$Y[which(m==1),]
R> n_ul<-length(which(m==1))
R> clust_ul<-dat$clust[which(m==1)]
R> parhat_ful<-fit_ful$parhat
R> label_ful<-bayesclassifier(dat=dat_ul,n=n_ul,p=p,g=g,pi=parhat_ful$pi,
+ mu=parhat_ful$mu,sigma=parhat_ful$sigma)
R> label_ful
\end{CodeInput}
\begin{CodeOutput}
[1] 1 3 1 3 3 3 4 4 2 4 1 1 1 1 4 4 3 4 1 3 1 1 4 2 2 1 1 1 4 3 1 1 1 1 4 1
[37] 3 3 1 2 4 1 3 2 1 1 4 4 4 1 1 1 4 4 4 1 2 1 3 3 1 3 2 2 3 1 3 3 1 4 3 4
[73] 1 3 1 1 1 3 1 1 1 1 1 1 1 1 3 3 1 1 1 1 3 4 4 2 4 4 1 4 4 2 4 4 3 2 1 1
[109] 1 3 1 1 1 2 2 2 4 2 4 1 2 2 1 2 2 2 1 1 4 1 2 3 2
\end{CodeOutput}
\begin{CodeInput}
R> parhat_ign<-fit_ign$parhat
R> label_ign<-bayesclassifier(dat=dat_ul,n=n_ul,p=p,g=g,pi=parhat_ign$pi,
+ mu=parhat_ign$mu,sigma=parhat_ign$sigma)
R> label_ign
\end{CodeInput}
\begin{CodeOutput}
[1] 1 3 1 3 3 3 4 4 2 4 1 1 1 1 4 4 3 4 1 3 1 1 4 2 2 1 1 1 4 3 1 1 1 1 4 1
[37] 3 3 1 2 4 1 3 2 1 1 4 4 4 1 1 1 4 4 4 1 2 1 3 4 1 3 2 2 3 1 3 3 1 4 3 4
[73] 1 3 1 1 1 4 1 1 1 1 1 1 1 1 3 3 1 1 1 1 3 4 4 2 4 4 1 4 4 2 3 4 4 4 1 1
[109] 1 3 1 1 1 2 2 2 4 2 4 1 2 2 1 2 2 2 1 1 4 1 2 3 4
\end{CodeOutput}
\begin{CodeInput}
R> parhat_cc<-fit_com$parhat
R> label_cc<-bayesclassifier(dat=dat_ul,n=n_ul,p=p,g=g,pi=parhat_cc$pi,
+ mu=parhat_cc$mu,sigma=parhat_cc$sigma)
R> label_cc
\end{CodeInput}
\begin{CodeOutput}
[1] 1 3 1 3 3 3 4 4 2 4 3 1 1 1 4 4 3 4 1 3 1 1 4 2 2 1 1 1 4 3 1 1 1 1 3 1
[37] 3 3 1 1 4 1 3 2 1 1 4 4 4 1 1 1 4 4 4 1 2 1 3 3 1 3 2 2 3 1 3 3 1 4 3 4
[73] 1 3 1 1 1 4 1 1 1 1 1 1 1 1 3 3 1 2 1 1 3 4 4 2 4 4 1 4 4 2 3 4 4 2 1 1
[109] 2 3 1 1 1 2 2 2 4 1 4 1 2 2 1 2 2 2 1 1 4 1 2 3 2
\end{CodeOutput}
\end{CodeChunk}
We calculate the corresponding error rates, where the classifiers are applied only to the unclassified data in the partially classified data set.
\begin{CodeChunk}
\begin{CodeInput}
R> err_ful<-erate(dat=dat_ul,parlist=fit_ful,clust=clust_ul)
R> err_ign<-erate(dat=dat_ul,parlist=fit_ign,clust=clust_ul)
R> err_com<-erate(dat=dat_ul,parlist=fit_com,clust=clust_ul)
R> c(err_ful,err_ign,err_com)
\end{CodeInput}
\begin{CodeOutput}
[1] 0.4997755 0.5279257 0.4843896
\end{CodeOutput}
\end{CodeChunk}
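As a quick sanity check, the predicted labels can also be compared directly with the true labels of the unclassified observations; this raw misclassification proportion, computed below from the objects already in the session, need not coincide exactly with the output of \code{erate()}.
\begin{CodeChunk}
\begin{CodeInput}
R> ## Raw misclassification proportions on the unclassified observations
R> c(mean(label_ful != clust_ul),
+    mean(label_ign != clust_ul),
+    mean(label_cc != clust_ul))
\end{CodeInput}
\end{CodeChunk}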
\section{Application to a real dataset}
\label{sec:example}
In this section, we illustrate the functionality of \pkg{gmmsslm} on the gastrointestinal lesions data from \citet{mesejo2016computer}. The dataset is composed of 76 colonoscopy videos (recorded with both white light (WL) and narrow band imaging (NBI)), the histology (classification ground truth), and the opinions of the endoscopists (including 4 experts and 3 beginners). Both the WL and NBI recordings were used to assess whether the lesions appeared benign or malignant. Each of the $n=76$ observations consists of 698 features extracted from the colonoscopy videos.
A panel of seven endoscopists viewed the videos to give their opinion as to whether each patient needed resection (malignant) or no-resection (benign).
We formed our partially classified sample as follows. Feature vectors for
which all seven endoscopists agreed were taken to be classified with labels
specified either as 1 (resection) or 2 (no-resection) using the ground-truth labels.
Observations for which there was not total agreement among the endoscopists
were taken as having missing class labels denoted by \code{NA}.
To reduce the dimension of the dataset down from 698 features, we removed constant columns, normalized the dataset, and then used sparse linear discriminant analysis \citep{clemmensen2011sparse} to select a subset of four features useful for class discrimination according to the trinary ground truth labels (adenoma, serrated, and hyperplastic).
Figure~\ref{fig:Figlable1.1}
shows a plot of the data with the class labels of the feature vectors used in the fitting of the classifiers in our example.
\begin{figure}[ht]
\centering
\includegraphics[width=12cm]{gastroplot-7-wl.png}
\caption{Gastrointestinal dataset. The red triangles denote the benign observations, and the blue circles denote the malignant observations. The black squares correspond to the unlabeled observations. Observations are treated as unlabeled if fewer than seven endoscopists assigned the same class label to the feature vector.}
\label{fig:Figlable1.1}
\end{figure}
The black squares denote the unlabeled observations, the red triangles denote the benign observations and the blue circles denote the malignant observations. It can be seen that the unlabeled observations tend to be located in regions of class overlap.
To confirm further the appropriateness of the adopted missing-data mechanism for the data, we fitted a Gaussian mixture model \citep{lee2014finite} to estimate the entropy of each observation.
Figure~\ref{fig:figentropy}(a) compares the box plots of the estimated entropies in the labeled and unlabeled groups. Figure~\ref{fig:figentropy}(b) presents a Nadaraya--Watson kernel estimate of the probability of missing class labels.
\begin{figure}[ht]
\centerline{\includegraphics[scale=.30]{entropy.png}}
\caption{Analysis of the gastrointestinal dataset with respect to the relationship between the entropy and the labeled and unlabeled observations.}
\label{fig:figentropy}
\end{figure}
From Figure~\ref{fig:figentropy}(a), we find that the unlabeled observations typically have higher entropy than the labeled observations. Figure~\ref{fig:figentropy}(b) shows that the estimated probability of a missing class label decreases as the negative log entropy increases. This relation is in accordance with equation \eqref{eq:12}. The higher the entropy of a feature vector, the higher the probability of its class label being unavailable.
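For readers who wish to reproduce a plot of this kind, a Nadaraya--Watson estimate can be obtained with the base function \code{ksmooth()}. The sketch below assumes vectors \code{log_entropy} (the estimated log entropies of the 76 observations) and \code{miss} (their missing-label indicators); the bandwidth is an arbitrary choice, not the value used for Figure~\ref{fig:figentropy}(b).
\begin{CodeChunk}
\begin{CodeInput}
R> ## Sketch of a Nadaraya-Watson estimate of pr(missing label); the vectors
R> ## log_entropy and miss are assumed available, the bandwidth is arbitrary
R> fit_nw <- ksmooth(-log_entropy, miss, kernel = "normal", bandwidth = 1)
R> plot(fit_nw, type = "l", xlab = "Negative log entropy",
+      ylab = "Estimated probability of a missing label")
\end{CodeInput}
\end{CodeChunk}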
Now we use the function \code{gmmsslm()} to compare
the performance of the Bayes' classifier with the unknown parameter vector $\boldsymbol{\theta}$ estimated by $\hat{\boldsymbol{\theta}}_{\text{CC}}$, $\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{ig})}$, and $\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{full})}$, respectively.
We applied the three classifiers $R(\hat{\boldsymbol{\theta}}_{\text{CC}})$, $R(\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{ig})})$, and $R(\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{full})})$
to all the feature vectors in the completely classified data set. Each error rate was estimated using leave-one-out cross-validation.
These estimated error rates are given in Table~\ref{tab:tabel1}.
\begin{table}[ht]
\centering
\begin{tabular}{cccc}
\toprule
Gastrointestinal dataset & $n_c$(classified) & $n_{uc}$(unclassified) & Error rate \\
\midrule
$R(\hat{\boldsymbol{\theta}}_{\text{CC}})$ & 76 & 0 & 0.171\\
$R(\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{ig})})$ & 35 & 41 & 0.211 \\
$R(\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{full})})$ & 35 & 41 & 0.158 \\
\bottomrule
\end{tabular}
\caption{Summary statistics for Gastrointestinal dataset. Here, $n=76$, $p=4$, and $g=2$.}
\label{tab:tabel1}
\end{table}
The classifier based on the estimates of the parameters
using the full likelihood for the partially classified training sample has lower estimated error rate than that of the rule that would be
formed if the sample were completely classified.
\section{Summary}
\label{sec:summary}
The \proglang{R} package \pkg{gmmsslm} implements the semi-supervised learning approach proposed
by \cite{ahfock2020apparent} for estimating the Bayes' classifier from a partially classified training sample
in which some of the feature vectors have missing labels. It uses a generative model approach whereby the
joint distribution of the feature vector and its ground-truth label is adopted. It is assumed that each of
$g$ pre-specified classes to which a feature vector can belong has a multivariate Gaussian distribution.
The conditional probability that a feature vector has a missing label is formulated in a framework in which a
missing-data mechanism is introduced that models this probability to depend on the entropy of the feature vector,
using the logistic model. The parameters in the Bayes' classifier are estimated by maximum likelihood
via an expectation conditional maximization algorithm.
The package applies to classes which have unequal or equal covariance matrices in their multivariate Gaussian distributions.
In the case of $g=2$ classes, the package is specialized to the particular cases of equal or proportional covariance matrices
since there is then much reduction in the form of the Bayes' classifier.
An example is presented with the application of the package to a real-data set.
It illustrates the potential of the semi-supervised approach to improve the accuracy of the estimated Bayes' classifier.
In this example, the estimated error rate of the Bayes' classifier based on the
partially classified training sample is lower than that of the Bayes' classifier formed from the completely classified sample.
\section{Introduction}
Classifiers such as neural networks
often achieve their strong performance
through a completely supervised learning (CSL) approach,
which requires a fully classified (labeled) dataset.
However, missing labels often occur in practice
due to difficulty in determining
the true label for an observation (the feature vector).
For example, in fields such as medicine, defense, and other
scientific fields, images can often only be
correctly classified by a limited number of people
who are experts in the field.
Hence, a training sample might not be completely
classified with images difficult
to classify left without their class labels.
Moreover, in medical diagnosis, there might be
scans that can be diagnosed confidently only
after an invasive procedure
that might be regarded as unethical to perform at the time.
Much attention is given
in both the application oriented and the theoretical machine
learning communities to semi-supervised learning (SSL) approaches
that effectively form a classifier from
training data consisting of a limited number
of classified (labeled) observations but a much larger number
of unclassified (unlabeled) observations.
Thus, there is now a wide literature on SSL techniques, for example, \cite{chapelle2006semi}, \cite{berthelot2019remixmatch},
\cite{pan2006semi}, and \cite{zhou2003learning},
which is too numerous to review here.
One of the most intuitive approaches to SSL
is self-training, an iterative
method for learning with alternating steps
between generating pseudo-labels for the unclassified features
and then training a classifier using both
the labeled and pseudo-labeled data.
This approach goes back at least to the mid-seventies in the
statistical literature \citep{mclachlan1975iterative} as cited in \cite{sohn2020fixmatch}, and is discussed in more detail
in the brief review of \cite{ahfock2022semi} that focused
on statistical-based SSL techniques.
As noted in \cite{ahfock2022semi}, with the latter approach
the parameters are estimated iteratively
by treating the labels of the unclassified
features as unknown parameters to be estimated along with
the parameters of the allocation rule.
That is, it uses the so-called classification
maximum likelihood (CML) approach as considered
by \cite{hartley1968classification}, among others;
see Section 1.12 of \cite{mclachlan1988mixture}.
The CML approach gives an inconsistent estimate of
$\boldsymbol{\theta}$ except in special cases like
$\pi_1=\pi_2$.
The CML approach can be viewed as
applying the subsequent expectation-maximization (EM) algorithm of \cite{dempster1977maximum}
with the following modification \citep{mclachlan19829}.
Namely, the E-step is executed using outright (hard)
rather than fractional (soft) assignment of each unclassified
feature to a component of the mixture as with
the standard application of the EM algorithm.
Many open research questions remain
regarding SSL.
A fundamental issue is identifying the mechanisms
responsible for the success of SSL.
For generative classifiers that model the joint distribution
of the features and the labels \citep{mclachlan1992discriminant},
it is possible to compute the Fisher information
in unclassified observations and the benefits
of using unlabeled observations can be studied
on a sound theoretical basis.
Recently, \citet{ahfock2020apparent}
provided a basis on how to increase the
accuracy of the Bayes' (optimal) classifier
estimated parametrically from a
partially classified sample as in SSL.
For the commonly
used linear discriminant function
estimated from a partially classified sample
from two multivariate (homoscedastic) Gaussian classes,
they showed that under certain conditions
the increase in accuracy
can be of sufficient magnitude for this SSL-based
classifier to have smaller error rate than that
if it were formed from a completely classified sample.
This apparent paradox can be explained by the fact that
by introducing the concept of missingness
for the unobserved (missing) labels,
the information in the missing labels
has the potential to compensate
for the loss of information
from not knowing the missing labels. The compensation can occur to
the extent where the rule
can have lower error rate than that
if the sample were completely classified in
situations where the missingness-mechanism of the
class labels is non-ignorable in the pioneering framework
of \cite{rubin1976inference} for missingness in incomplete-data analysis.
Hence, it is a novel result that has
significant implications for constructing
parametric SSL classifiers in practice.
This model-based SSL approach in its general form
can be adopted for an arbitrary
number of classes with distributions belonging to any parametric
family in conjunction with a non-ignorable missingness mechanism.
In the \proglang{R} package presented in this paper,
the latter mechanism is
specified by multinomial logistic regression
in terms of the entropy of the feature vectors.
This package
applies to an arbitrary number of classes
having multivariate Gaussian distributions
with covariance matrices not necessarily the same. We call the new package \pkg{gmmsslm} (\underline{G}aussian \underline{m}ixture \underline{m}odel based \underline{s}emi-\underline{s}upervised \underline{l}earning with a \underline{m}issing data mechanism).
\section{Background}
We let $\boldsymbol{y}$ be a $p$-dimensional vector of features
on an entity to be assigned to one of $g$ predefined classes
$C_1,\ldots,C_g$.
The random variable $\boldsymbol{Y}$ corresponding to the
realization $\boldsymbol{y}$ is assumed to have density
$f_i(\boldsymbol{y};\boldsymbol{\omega}_i)$ up to a vector $\boldsymbol{\omega}_i$
of unknown parameters in class $C_i$ $(i=1,\ldots,g)$.
The Bayes' rule of allocation $R(\boldsymbol{y};\boldsymbol{\theta})$ assigns an entity
with feature vector $\boldsymbol{y}$ to class $C_k$
(that is, $R(\boldsymbol{y};\boldsymbol{\theta})=k)$ if
$k=\arg\max_i\, \tau_i(\boldsymbol{y};\boldsymbol{\theta}),$
where
\begin{equation*}
\tau_i(\boldsymbol{y};\boldsymbol{\theta})= {\pi_i f_i(\boldsymbol{y};\boldsymbol{\omega}_i)}/
{\sum_{h=1}^g \pi_h f_h(\boldsymbol{y};\boldsymbol{\omega}_h)}
\end{equation*}
is the posterior probability that the entity belongs
to class $C_i$ given $\boldsymbol{Y}=\boldsymbol{y}$. Here, $\pi_i$ is the
prior probability that the entity belongs to
$C_i$ and
$\boldsymbol{\theta}=(\pi_1,\ldots,\pi_{g-1},
\boldsymbol{\theta}_1^T,\ldots,\boldsymbol{\theta}_g^T)^T$
is the vector of unknown parameters.
The superscript $T$
denotes vector transpose.
In order to estimate $\boldsymbol{\theta}$, it is customary in practice to have
available a training sample of size $n$. We let
$\boldsymbol{x}_{\rm CC}=(\boldsymbol{x}_1^T,\ldots,\boldsymbol{x}_n^T)^T$
contain $n$ independent realizations of $\boldsymbol{X}=(\boldsymbol{Y}^T, Z)^T$
as the completely classified training data, where $Z$ denotes
the class membership of $\boldsymbol{Y}$, being equal to $i$ if $\boldsymbol{Y}$
belongs to class $C_i$,
and zero otherwise, and
where $\boldsymbol{x}_j=(\boldsymbol{y}_j^T, z_j)^T$.
For a partially classified training sample
$\boldsymbol{x}_{\rm PC}$ in SSL, we introduce
the missing-label indicator $m_j$ which equals 1 if $z_j$
is missing and 0 if it is available $(j=1,\ldots,n)$.
Thus, $\boldsymbol{x}_{\rm PC}$ consists of those observations $\boldsymbol{x}_j$
in $\boldsymbol{x}_{\rm CC}$ if $m_j=0$, but only the feature vector
$\boldsymbol{y}_j$ in $\boldsymbol{x}_{\rm CC}$
if $m_j=1$ (that is, the label $z_j$ is missing).
The construction of the parametric version
of the optimal (Bayes') classifier from partially classified data
can be undertaken by maximum likelihood (ML)
estimation of $\boldsymbol{\theta}$ implemented
via the EM algorithm of \cite{dempster1977maximum}; see also \cite{mclachlan2007algorithm}.
We let
\begin{eqnarray}
\log L_{\rm C}(\boldsymbol{\theta})&=&
\sum_{j=1}^n(1-m_j)\sum_{i=1}^g z_{ij}
\log\{\pi_i f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i)\}
\label{eq:3},\\
\log L_{\rm UC}(\boldsymbol{\theta})&=&
\sum_{j=1}^n m_j \log \sum_{i=1}^g \pi_i
f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i),\label{eq:5}\\
\log L_{\rm PC}^{(\rm ig)}(\boldsymbol{\theta})&=&\log L_{\rm C}(\boldsymbol{\theta})
+\log L_{\rm UC}(\boldsymbol{\theta}),
\label{eq:4}
\end{eqnarray}
where in \eqref{eq:3}, $z_{ij}=1$ if $z_j=i$ and is zero otherwise.
In situations where one proceeds by ignoring
the ``missingness'' of the class labels,
$L_{\rm C}(\boldsymbol{\theta})$ and $L_{\rm UC}(\boldsymbol{\theta})$
denote the likelihood function formed from the classified data
and the unclassified data, respectively, and
$L_{\rm PC}^{(\rm ig)}(\boldsymbol{\theta})$ is the likelihood function
formed from the partially classified sample $\boldsymbol{x}_{\rm PC}$,
ignoring the missing-data mechanism for the labels.
The log of the likelihood $L_{\rm CC}(\boldsymbol{\theta})$ for the
completely classified sample $\boldsymbol{x}_{\rm CC}$ is given by \eqref{eq:3}
with all $m_j=0$.
Situations in the present context where it is appropriate to
ignore the missing-data mechanism in carrying out likelihood inference
are where the missing labels are missing at random
in the framework for missing data
put forward by \cite{rubin1976inference}
for missingness in incomplete-data analysis.
This case will occur in the present context
if the missingness of the labels does not depend on
the features nor the labels (missing completely at random)
or if the missingness depends only on the features
and not also the class labels (missing at random),
as in the truncation example of \cite{mclachlan1989mixture}
where the mechanism for the missing labels was also ignorable.
The paper of \cite{mealli2015clarifying} is an excellent reference
for describing the terminology of missing (always) at random (MAR) and
the more restrictive version of missing (always) completely at random (MCAR).
We let $\hat{\btheta}_{\rm CC}$ and $\hat{\btheta}_{\rm PC}^{(\rm ig)}$
be the estimate of $\boldsymbol{\theta}$ formed by consideration
of $L_{\rm CC}(\boldsymbol{\theta})$ and $L_{\rm PC}^{(\rm ig)}(\boldsymbol{\theta})$,
respectively.
Also, we let
$R(\boldsymbol{y};\hat{\btheta}_{\rm CC})$ and $R(\boldsymbol{y};\hat{\btheta}_{\rm PC}^{(\rm ig)})$
denote the estimated Bayes' rule obtained by plugging in
$\hat{\btheta}_{\rm CC}$ and $\hat{\btheta}_{\rm PC}^{(\rm ig)}$,
respectively, for $\boldsymbol{\theta}$ in $R(\boldsymbol{y};\boldsymbol{\theta})$.
The overall conditional error rate of the rule
$R(\boldsymbol{y};\hat{\btheta}_{\rm CC})$ is defined by
\begin{equation}
{\rm err}(\hat{\btheta}_{\rm CC};\boldsymbol{\theta})
= 1-\sum_{i=1}^g \pi_i\, {\rm pr}\{ R(\boldsymbol{Y}; \hat{\btheta}_{CC}) = i
\mid \hat{\btheta}_{CC}, Z=i\}.
\label{eq:5a}
\end{equation}
The corresponding conditional error rate
${\rm err}(\hat{\btheta}_{PC}^{(\rm ig)}; \boldsymbol{\theta})$
of the rule $R(\boldsymbol{y};\hat{\btheta}_{\rm PC}^{(\rm ig)})$
is defined by replacing $\hat{\btheta}_{CC}$ with
$\hat{\btheta}_{PC}^{(\rm ig)}$ in \eqref{eq:5a}.
The optimal error rate ${\rm err}(\boldsymbol{\theta})$
is defined by replacing $\hat{\btheta}_{\rm CC}$
by $\boldsymbol{\theta}$ in \eqref{eq:5a}.
The asymptotic relative efficiency (ARE) of the estimated Bayes' rule
$R(\boldsymbol{y};\hat{\btheta}_{\rm PC}^{(\rm ig)})$ compared to the rule
$R(\boldsymbol{y};\hat{\btheta}_{\rm CC})$
based on the completely classified sample $\boldsymbol{x}_{\rm CC}$
is defined by
the ratio of their expected excess error rates with respect to
the optimal error rate ${\rm err}(\boldsymbol{\theta})$,
\begin{equation}
{\rm ARE}\{R(\boldsymbol{y};\hat{\btheta}_{\rm PC}^{(\rm ig)})\}
=\frac{E\{{\rm err}(\hat{\btheta}_{\rm CC};\boldsymbol{\theta})\} -{\rm err}(\boldsymbol{\theta})}
{E\{{\rm err}(\hat{\btheta}_{\rm PC}^{(\rm ig)};\boldsymbol{\theta})\}-{\rm err}(\boldsymbol{\theta})},
\label{eq:6}
\end{equation}
where the expectation in the numerator and denominator of
the right-hand side of \eqref{eq:6}
is taken over the distribution of the estimators of $\boldsymbol{\theta}$
and is expanded up to terms of the first order.
Considerable simplification is possible under the two-class
(homoscedastic) Gaussian model,
\begin{equation}
\boldsymbol{Y} \mid Z=i\, \sim \,N(\boldsymbol{\mu}_i, \boldsymbol{\Sigma})\quad \text{in }C_i
\text{ with prob. }\pi_i\quad (i=1,2),
\label{eq:7}
\end{equation}
which has a convenient canonical form of $\boldsymbol{\Sigma}$ being the
$p\times p$ identity matrix,
$\boldsymbol{\mu}_1=(\Delta,0,\ldots,0)^T,$ and $\boldsymbol{\mu}_2=\boldsymbol{0}$, where
$\Delta^2=(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)^T\boldsymbol{\Sigma}^{-1}(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)$
is the squared Mahalanobis distance between the two classes.
The Bayes' rule
reduces to depending on just the $(p+1)$-dimensional vector of
discriminant function coefficients $\boldsymbol{\beta}=(\beta_0,\boldsymbol{\beta}_1^T)^T$,
since $R(\boldsymbol{y};\boldsymbol{\theta})$ is 1 or 2, according as
$d(\boldsymbol{y};\boldsymbol{\beta})= \beta_0 + \boldsymbol{\beta}_1^T\boldsymbol{y}$
is greater or less than zero, where
$$\beta_0 = -{\textstyle\frac{1}{2}}(\boldsymbol{\mu}_1+\boldsymbol{\mu}_2)^T\boldsymbol{\Sigma}^{-1}
(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2),\quad
\boldsymbol{\beta}_1 = \boldsymbol{\Sigma}^{-1} (\boldsymbol{\mu}_1-\boldsymbol{\mu}_2).$$
We can reparameterize the two-class Gaussian model
\eqref{eq:7} by taking
$\boldsymbol{\theta}=(\boldsymbol{\theta}_1^T,\boldsymbol{\beta}^T)^T,$
where $\boldsymbol{\theta}_1$ contains the elements of $\boldsymbol{\mu}=\pi_1\boldsymbol{\mu}_1+\pi_2\boldsymbol{\mu}_2$
and the distinct elements of
$\boldsymbol{\Lambda}=\boldsymbol{\Sigma}+\pi_1\pi_2(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)
(\boldsymbol{\mu}_1-\boldsymbol{\mu}_2)^T.$
Under the assumption that
the class labels are MCAR
(that is, the missingness of the labels does not depend on the data),
\cite{ganesalingam1978efficiency} derived the
ARE of $R(\boldsymbol{y};\hat{\bbeta}_{\rm PC}^{(\rm ig)})$
compared to $R(\boldsymbol{y};\hat{\bbeta}_{\rm CC})$
in the univariate case $(p = 1)$ of \eqref{eq:7}
and a completely unclassified sample $(\overline{m} = 1),$
where $\overline{m}=\sum_{j=1}^n m_j/n.$
Not surprisingly,
they showed that the
ARE of $R(\boldsymbol{y}; \hat{\bbeta}_{\rm PC}^{(\rm ig)})$
for a totally unclassified
sample is low, particularly for classes weakly separated
(for example, only 0.005
for $\Delta=1$ with $\pi_1=0.5).$
\cite{o1978normal} extended their result to
multivariate features and for arbitrary values of $\overline{m}$.
His results showed that this ARE was not sensitive to the
values of $p$ and does not vary with $p$ for equal class
prior probabilities.
In other work on the ARE of $R(\boldsymbol{y};\hat{\bbeta}_{\rm PC}^{(\rm ig)})$
compared to $R(\boldsymbol{y};\hat{\bbeta}_{\rm CC})$,
\cite{mclachian1995asymptotic}
evaluated it where
due to truncation the unclassified
univariate features had (ignorable) MAR labels.
\section{Fisher information matrix}
In the case of a partially classified training sample $\boldsymbol{x}_{\rm PC}$,
\cite{o1978normal} derived the information matrix
$$\boldsymbol{I}_{\rm PC}(\boldsymbol{\beta})
=E[\{\partial\log L_{\rm PC}(\boldsymbol{\theta})/\partial\boldsymbol{\theta}\}
\{\partial \log L_{\rm PC}(\boldsymbol{\theta})/\partial\boldsymbol{\theta}^T\}]$$
about the vector $\boldsymbol{\beta}$ of discriminant function coefficients
in the context of the two-class homoscedastic
Gaussian model \eqref{eq:7},
using the result of \cite{efron1975efficiency} for the information matrix
for $\boldsymbol{\beta}$ in applying logistic regression.
Assuming that the missingness of the labels
does not depend on the data,
\cite{o1978normal} showed that $\boldsymbol{I}_{\rm PC}(\boldsymbol{\beta})$
can be decomposed as
\begin{equation}
\boldsymbol{I}_{\rm PC}(\boldsymbol{\beta})=\boldsymbol{I}_{\rm CC}(\boldsymbol{\beta})
-\overline{m}\,\boldsymbol{I}_{\rm CC}^{(\rm lr)}(\boldsymbol{\beta}),
\label{eq:9}
\end{equation}
where $\boldsymbol{I}_{\rm CC}(\boldsymbol{\beta})$ is the information about $\boldsymbol{\beta}$
in a completely classified sample $\boldsymbol{x}_{\rm CC}$ and
$\boldsymbol{I}_{\rm CC}^{\rm(lr)}(\boldsymbol{\beta})$
is the information about $\boldsymbol{\beta}$ under
the logistic regression model for the distribution
of the class labels given the features in $\boldsymbol{x}_{\rm CC}$.
It can be seen from \eqref{eq:9} that the
loss of information due to the sample being partially
classified is equal to $\overline{m}\,\boldsymbol{I}_{\rm CC}^{(\rm lr)}(\boldsymbol{\beta})$.
\section{Modeling missingness for unobserved class labels}
\citet{ahfock2020apparent} proposed to treat the labels of
unclassified features as missing data
and introduced a framework for their missingness.
To this end,
they introduced the random variable
$M_j$ corresponding to the realized value $m_j$ for the
missing-label indicator for the feature vector $\boldsymbol{y}_j$.
The missing-data mechanism of \cite{rubin1976inference} is specified
in the present context by the conditional distribution
\begin{equation}
{\rm pr}\{M_j=m_j\mid \boldsymbol{y}_j,z_j;\boldsymbol{\xi}\} \quad (j=1,\ldots,n),
\label{eq:10}
\end{equation}
where $\boldsymbol{\xi}$ is a vector of parameters.
\citet{ahfock2020apparent} noted from an examination
of several partially classified data sets
in the literature that unclassified features tend not to
occur at random in the feature space,
but rather tend to be concentrated
in regions of relatively high entropy.
They consequently proposed
to model the probability \eqref{eq:10} as depending on
the entropy $e_j(\boldsymbol{\theta})$ of the feature $\boldsymbol{y}_j$,
or more precisely, on the log entropy $\log e_j(\boldsymbol{\theta})$,
where $e_j(\boldsymbol{\theta})$ is defined by
\begin{equation*}
e_j(\boldsymbol{\theta})=-\sum_{i=1}^g \tau_i(\boldsymbol{y}_j;\boldsymbol{\theta})\log \tau_i(\boldsymbol{y}_j;\boldsymbol{\theta}).
\end{equation*}
Accordingly,
\cite{ahfock2020apparent}
specified the probability \eqref{eq:10} as
\begin{equation}
{\rm pr}\{M_j=1\mid \boldsymbol{y}_j,z_j\}= {\rm pr} \{M_j=1\mid \boldsymbol{y}_j\}
=q(\boldsymbol{y}_j;\boldsymbol{\theta},\boldsymbol{\xi}),
\label{eq:11}
\end{equation}
where $\boldsymbol{\Psi}=(\boldsymbol{\theta}^T, \boldsymbol{\xi}^T)^T$ and
the parameter $\boldsymbol{\xi}=(\xi_0, \xi_1)^T$ is distinct from $\boldsymbol{\theta}$.
The function $q(\boldsymbol{y}_j;\boldsymbol{\Psi})$ was taken to be
the logistic function,
\begin{equation}
q(\boldsymbol{y}_j;\boldsymbol{\Psi})= \frac{\exp\{\xi_0+\xi_1 \log e_j(\boldsymbol{\theta})\}}
{1+\exp\{\xi_0+\xi_1 \log e_j(\boldsymbol{\theta})\}}.
\label{eq:12}
\end{equation}
The expected proportion $\gamma(\boldsymbol{\Psi})$ of unclassified features
in a partially classified sample $\boldsymbol{x}_{\rm PC}$ is given by
\begin{equation*}
\gamma(\boldsymbol{\Psi})=\sum_{j=1}^n E(M_j)/n
=E[{\rm pr}\{M_j=1\mid \boldsymbol{Y}_j\}]
=E\{q(\boldsymbol{Y};\boldsymbol{\Psi})\}.
\end{equation*}
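To make these quantities concrete, the following minimal \proglang{R} sketch computes the entropies $e_j(\boldsymbol{\theta})$ and the corresponding probabilities $q(\boldsymbol{y}_j;\boldsymbol{\Psi})$ from an $n\times g$ matrix \code{tau} of posterior probabilities $\tau_i(\boldsymbol{y}_j;\boldsymbol{\theta})$; the objects \code{tau}, \code{xi0}, and \code{xi1} are illustrative assumptions and are not objects or functions exported by \pkg{gmmsslm}.
\begin{CodeChunk}
\begin{CodeInput}
R> ## tau: n x g matrix of posterior probabilities (assumed available)
R> ent <- -rowSums(ifelse(tau > 0, tau * log(tau), 0))  # entropies e_j(theta)
R> q <- plogis(xi0 + xi1 * log(ent))    # pr(M_j = 1 | y_j) under the logistic model
\end{CodeInput}
\end{CodeChunk}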
Under \eqref{eq:11}, the missingness is non-ignorable,
as this probability depends also on $\boldsymbol{\theta}$
and hence on the parameters of the Bayes' classifier.
In this case of non-ignorable MAR labels,
the log of the full likelihood function for $\boldsymbol{\Psi}$
is given by
\begin{equation*}
\log L_{\rm PC}^{(\rm full)}(\boldsymbol{\Psi})=
\log L_{\rm PC}^{(\rm ig)}(\boldsymbol{\theta}) +
\log L_{\rm PC}^{(\rm miss)}(\boldsymbol{\Psi}),
\end{equation*}
where
\begin{equation*}
\log L_{\rm PC}^{(\rm miss)}(\boldsymbol{\Psi})=
\sum_{j=1}^n [(1-m_j)\log \{1- q(\boldsymbol{y}_j;\boldsymbol{\Psi})\}
+ m_j \log q(\boldsymbol{y}_j;\boldsymbol{\Psi})]
\end{equation*}
is the log likelihood function for $\boldsymbol{\Psi}$
formed on the basis of the missing-label indicators
$m_j$ $(j=1,\ldots,n)$.
We let $\hat{\bPsi}_{\rm PC}^{(\rm full)}$
be the estimate of $\boldsymbol{\Psi}$ formed by consideration
of the full likelihood $L_{\rm PC}^{(\rm full)}(\boldsymbol{\Psi})$ and
$R(\boldsymbol{y};\hat{\bPsi}_{\rm PC}^{(\rm full)})$ be the estimated Bayes' rule
obtained by plugging in $\hat{\bPsi}_{\rm PC}^{(\rm full)}$ for
$\boldsymbol{\Psi}$ in the Bayes' rule $R(\boldsymbol{y};\boldsymbol{\Psi})$.
That is, $R(\boldsymbol{y};\hat{\bPsi}_{\rm PC}^{(\rm full)})$
is the rule referred to as the SSL-full rule
and $R(\boldsymbol{y};\hat{\bPsi}_{\rm CC})$ is the
completely supervised learning (CSL) rule.
Under the model \eqref{eq:12} for non-ignorable MAR labels
in the case of the two-class Gaussian model \eqref{eq:7},
\citet{ahfock2020apparent} derived the following theorem that
provides the motivation
for the development of a program implementing this SSL approach for possibly more than two classes with multivariate Gaussian distributions.
\noindent
{\bf \small{Theorem}} {\it The Fisher information about $\boldsymbol{\beta}$
in the partially classified sample $\boldsymbol{x}_{\rm PC}$
via the full likelihood function $L_{\rm PC}^{(\rm full)}(\boldsymbol{\Psi})$
can be decomposed as
\begin{equation}
\boldsymbol{I}_{\rm PC}^{(\rm full)}(\boldsymbol{\beta})= \boldsymbol{I}_{\rm CC}(\boldsymbol{\beta})
-\gamma(\boldsymbol{\Psi})\boldsymbol{I}_{\rm CC}^{(\rm clr)}(\boldsymbol{\beta}) +
\boldsymbol{I}_{\rm PC}^{(\rm miss)}(\boldsymbol{\beta}),
\label{eq:18}
\end{equation}
where
$\boldsymbol{I}_{\rm CC}(\boldsymbol{\beta})$
is the information about $\boldsymbol{\beta}$
in the completely classified sample $\boldsymbol{x}_{\rm CC},
\boldsymbol{I}_{\rm CC}^{(\rm clr)}(\boldsymbol{\beta})$ is the conditional information
about $\boldsymbol{\beta}$ under the logistic regression model fitted
to the class labels in $\boldsymbol{x}_{\rm CC}$, and
$\boldsymbol{I}_{\rm PC}^{(\rm miss)}(\boldsymbol{\beta})$ is the
information about
$\boldsymbol{\beta}$ in the missing-label indicators under the
assumed logistic model for their distribution
given their associated features
in the partially classified sample $\boldsymbol{x}_{\rm PC}$.}
On contrasting \eqref{eq:18} with \eqref{eq:9} which ignores the missingness
mechanism,
it can be seen that this expression for the Fisher
information about the vector of discriminant function coefficients
contains the additional
term $\boldsymbol{I}_{\rm PC}^{(\rm miss)}(\boldsymbol{\beta})$,
arising from the additional information
about $\boldsymbol{\beta}$ in the missing-label indicators $m_j$.
This term has the potential to compensate for the loss of information
in not knowing the true labels of those unclassified features
in the partially classified sample.
The compensation depends on the extent
to which the probability of a missing label
for a feature depends on its entropy.
It follows that if
\begin{equation}
\boldsymbol{I}_{\rm PC}^{(\rm miss)}(\boldsymbol{\beta}) > \gamma(\boldsymbol{\Psi})\,
\boldsymbol{I}_{\rm CC}^{(\rm clr)}(\boldsymbol{\beta}),
\label{eq:19}
\end{equation}
then there is an increase in the information about $\boldsymbol{\beta}$
in the partially classified sample over
the information $\boldsymbol{I}_{\rm CC}(\boldsymbol{\beta})$ about $\boldsymbol{\beta}$
in the completely classified sample.
The inequality in \eqref{eq:19} is to be interpreted
in the sense that the left-hand side minus the right-hand side
is a positive definite matrix.
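As a small numerical illustration (with hypothetical matrices, not quantities returned by \pkg{gmmsslm}), this condition can be checked by verifying that the difference of the two sides has only positive eigenvalues:
\begin{CodeChunk}
\begin{CodeInput}
R> ## I_miss and gI_clr: hypothetical matrices for the two sides of the inequality
R> D <- I_miss - gI_clr
R> all(eigen((D + t(D)) / 2, symmetric = TRUE, only.values = TRUE)$values > 0)
\end{CodeInput}
\end{CodeChunk}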
By deriving the ARE of the Bayes' rule using the full ML estimate of $\boldsymbol{\beta}$,
\citet{ahfock2020apparent} showed
that the asymptotic expected excess error rate
using the partially classified sample $\boldsymbol{x}_{\rm PC}$
can be much lower than the corresponding excess rate
using the completely classified sample $\boldsymbol{x}_{\rm CC}$.
The contribution to the Fisher information
from the missingness mechanism can
be relatively high if $|\xi_{1}|$ is large,
as the location of the unclassified features
in the feature space
provides information
about regions of high uncertainty,
and hence where the entropy is high.
\section{Maximum likelihood estimation via the ECM algorithm}
In this section, we outline an application of the expectation conditional maximization (ECM) algorithm of \citet{meng1993maximum} to compute the ML estimate of $\boldsymbol{\Psi}$ on the basis of the full log likelihood $\log L_{{\text{PC}}}^{(\text{full})}(\boldsymbol{\Psi})$.
The E-step here is given as follows.
\textbf{E step:} The missing labels $z_{ij}$ for those observations $\boldsymbol{y}_j$ with $m_j=1$ are declared to be the ``missing'' data and are replaced by their conditional expectation given $\boldsymbol{y}_j$ and $m_j=1$:
\begin{equation*}
z_{ij}^{(k)}=\tau_i(\boldsymbol{y}_j;\boldsymbol{\theta}^{(k)})=\frac{\pi_i^{(k)} \phi(\boldsymbol{y}_j;\boldsymbol{\mu}_i^{(k)},\boldsymbol{\Sigma}_i^{(k)})}{\sum_{h=1}^g\pi_h^{(k)}\phi(\boldsymbol{y}_j;\boldsymbol{\mu}_h^{(k)},\boldsymbol{\Sigma}_h^{(k)})},
\end{equation*}
where $\boldsymbol{\theta}^{(k)}$ denotes the estimate of $\boldsymbol{\theta}$ after the $k$th iteration.
The ``missing'' data are handled by the E-step, which on the $(k+1)$th iteration takes the conditional expectation of the complete-data log likelihood given the observed data, using the current estimate $\boldsymbol{\Psi}^{(k)}$ of $\boldsymbol{\Psi}$, to give
\begin{equation*}
\begin{split}
Q(\boldsymbol{\theta},\boldsymbol{\xi};\boldsymbol{\theta}^{(k)},\boldsymbol{\xi}^{(k)})&=\sum_{j=1}^n\big[(1-m_j)\sum_{i=1}^g z_{ij}\{\log\pi_i+\log f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i)\}\\&\quad
+m_j\sum_{i=1}^gz_{ij}^{(k)}\{\log\pi_i+\log f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i)\}\big]\\&
\quad+\sum_{j=1}^n\big[(1-m_j)\log\{1-q(\boldsymbol{y}_j;\boldsymbol{\theta},\boldsymbol{\xi})\}+m_j\log q(\boldsymbol{y}_j;\boldsymbol{\theta},\boldsymbol{\xi})\big].
\end{split}
\end{equation*}
\textbf{CM steps:}
We calculate the updated value $\boldsymbol{\Psi}^{(k+1)}$ of $\boldsymbol{\Psi}$ using two conditional maximization (CM) steps.
\newline
\newline
CM-1 step: We fix $\boldsymbol{\xi}$ at its current value $\boldsymbol{\xi}^{(k)}$ and update $\boldsymbol{\theta}$ to $\boldsymbol{\theta}^{(k+1)}$ given by
\begin{equation*}
\boldsymbol{\theta}^{(k+1)}=\underset{\boldsymbol{\theta}}{\arg\max}\,Q(\boldsymbol{\theta},\boldsymbol{\xi}^{(k)};\boldsymbol{\theta}^{(k)},\boldsymbol{\xi}^{(k)})
\end{equation*}
where
\begin{equation*}
\begin{split}
Q(\boldsymbol{\theta},\boldsymbol{\xi}^{(k)};\boldsymbol{\theta}^{(k)},\boldsymbol{\xi}^{(k)})&=\sum_{j=1}^n\big[(1-m_j)\sum_{i=1}^g z_{ij}\{\log\pi_i+\log f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i)\}\\&\quad
+m_j\sum_{i=1}^gz_{ij}^{(k)}\{\log\pi_i+\log f_i(\boldsymbol{y}_j;\boldsymbol{\omega}_i)\}\big]\\&
\quad+\sum_{j=1}^n\big[(1-m_j)\log\{1-q(\boldsymbol{y}_j;\boldsymbol{\theta},\boldsymbol{\xi}^{(k)})\}+m_j\log q(\boldsymbol{y}_j;\boldsymbol{\theta},\boldsymbol{\xi}^{(k)})\big].
\end{split}
\end{equation*}
CM-2 step: We now fix $\boldsymbol{\theta}$ at its updated value $\boldsymbol{\theta}^{(k+1)}$ and update $\boldsymbol{\xi}$ to $\boldsymbol{\xi}^{(k+1)}$ as
\begin{equation*}
\boldsymbol{\xi}^{(k+1)}=\underset{\boldsymbol{\xi}}{\arg\max}\,Q(\boldsymbol{\theta}^{(k+1)}, \boldsymbol{\xi}; \boldsymbol{\theta}^{(k)}, \boldsymbol{\xi}^{(k)}),
\end{equation*}
which reduces to
\begin{equation*}
\boldsymbol{\xi}^{(k+1)}=\underset{\boldsymbol{\xi}}{\arg\max}\,\log L_{{\text{PC}}}^{(\text{miss})}(\boldsymbol{\theta}^{(k+1)},\boldsymbol{\xi}),
\end{equation*}
on retaining only terms that depend on $\boldsymbol{\xi}$.
As $L_{\rm PC}^{(\rm miss)}(\boldsymbol{\theta}^{(k+1)}, \boldsymbol{\xi})$ belongs to the regular exponential family, we use the function \code{glm()} from the base \proglang{R} package \pkg{stats} to perform this maximization.
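As a sketch of this CM-2 step (using illustrative object names rather than the internal code of \pkg{gmmsslm}), suppose \code{logent} holds the log entropies $\log e_j(\boldsymbol{\theta}^{(k+1)})$ and \code{m} the missing-label indicators; the update of $\boldsymbol{\xi}$ then amounts to a logistic regression fit:
\begin{CodeChunk}
\begin{CodeInput}
R> ## logent: log e_j(theta^(k+1)); m: missing-label indicators (assumed available)
R> cm2 <- glm(m ~ logent, family = binomial)
R> xi_new <- coef(cm2)   # updated values of (xi_0, xi_1)
\end{CodeInput}
\end{CodeChunk}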
The estimate $\hat{\bPsi}_{\rm PC}^{(\rm full)}$ is given by the limiting value of $\boldsymbol{\Psi}^{(k)}$ as $k$ tends to infinity. We take the ECM process as having converged when
\begin{equation*}
\log L_{{\text{PC}}}^{(\text{full})}(\boldsymbol{\Psi}^{(k+1)})-\log L_{{\text{PC}}}^{(\text{full})}(\boldsymbol{\Psi}^{(k)})
\end{equation*}
is less than some arbitrarily small prespecified tolerance.
\section{Application of gmmsslm}
We consider the estimation of the parameters in the Bayes' classifier using the package \pkg{gmmsslm}. It is implemented via the primary function \code{gmmsslm()}.
The input arguments are described in Table~\ref{tab:emmixssl}.
\begin{table}[ht]
\centering
\begin{tabular}{cp{12.5cm}}
\toprule
\proglang{R} arguments & Description \\
\midrule
\code{dat} & An $n\times p$ matrix where each row represents an individual observation \\
\code{zm} & An $n$-dimensional vector containing the class labels including the missing-label denoted as \code{NA} \\
\code{pi} & A $g$-dimensional vector for the initial values of the mixing proportions \\
\code{mu} & A $p\times g$ matrix for the initial values of the location parameters \\
\code{sigma} & A $p\times p$ covariance matrix if a common covariance matrix is assumed, or a $p\times p\times g$ array containing the $g$ class covariance matrices. The model is fitted with a common covariance matrix if \code{sigma} is a single $p\times p$ matrix; otherwise it is fitted with unequal covariance matrices \\
\code{xi} & A 2-dimensional vector containing the initial values of the coefficients in the logistic function of the Shannon entropy \\
\code{type} & Three types of Gaussian mixture models can be fitted as follows: `ign' indicates fitting the model to a partially classified sample on the basis of the likelihood in which the missing-label mechanism is ignored; `full' indicates fitting the model to a partially classified sample on the basis of the full likelihood, taking into account the missing-label mechanism; and `com' indicates fitting the model to a completely classified sample \\
\code{iter.max} & Maximum number of iterations allowed. Defaults to 500 \\
\code{eval.max} & Maximum number of evaluations of the objective function allowed. Defaults to 500 \\
\code{rel.tol} & Relative tolerance. Defaults to 1e-15 \\
\code{sing.tol} & Singular convergence tolerance. Defaults to 1e-15 \\
\bottomrule
\end{tabular}
\caption{Arguments of the \code{gmmsslm()} function.}
\label{tab:emmixssl}
\end{table}
The default choices of \code{iter.max}, \code{eval.max}, \code{rel.tol}, and \code{sing.tol} that control the behavior of the algorithm often work well. This function implements the ECM algorithm described in the preceding section.
As an example, we apply \code{gmmsslm()} to the dataset generated from a mixture of $g=4$ trivariate Gaussian distributions in equal proportions, using the function \code{rmix()} with the missing-label indicator from \code{rlabel()} to obtain the estimated parameters based on the partially classified sample. The parameter settings were specified as
$n=300$, $\boldsymbol{\pi}=(0.25,0.25,0.25,0.25)^T$,
$\boldsymbol{\mu}_1=(0.2,0.3,0.4)^T$, $\boldsymbol{\mu}_2=(0.2,0.7,0.6)^T$, $\boldsymbol{\mu}_3=(0.1,0.7,1.6)^T$, $\boldsymbol{\mu}_4=(0.2,1.7,0.6)^T$, $\boldsymbol{\Sigma}_i=i\boldsymbol{I}$ for $i=1,2,3,4$, and $\boldsymbol{\xi}=(-0.5,1)^T$.
\begin{CodeChunk}
\begin{CodeInput}
R> set.seed(8)
R> n<-300
R> g<-4
R> p<-3
R> pi<-rep(1/g,g)
R> sigma<-array(0,dim=c(3,3,4))
R> for(i in 1:g) sigma[,,i]<-diag(i,p)
R> mu<-matrix(c(0.2,0.3,0.4,0.2,0.7,0.6,0.1,0.7,1.6,0.2,1.7,0.6),p,g)
R> dat<-rmix(n=n,pi=pi,mu=mu,sigma=sigma)
R> xi<-c(-0.5,1)
R> m<-rlabel(dat=dat$Y,pi=pi,mu=mu,sigma=sigma,xi=xi)
R> m
\end{CodeInput}
\begin{CodeOutput}
[1] 1 0 0 1 1 0 0 1 1 1 1 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 1
[37] 1 0 1 0 1 1 0 1 0 1 1 1 0 0 0 0 0 0 1 0 1 1 0 1 1 0 0 0 0 0 1 1 0 0 0 1
[73] 0 1 0 0 0 1 0 1 1 1 0 0 0 1 0 1 1 0 1 0 1 0 0 1 0 0 0 1 1 0 1 1 0 0 0 1
[109] 0 0 0 0 0 0 1 0 1 1 0 0 1 0 1 1 1 0 0 0 0 1 1 0 1 1 0 1 1 0 1 1 1 1 1 0
[145] 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0
[181] 1 1 1 0 0 1 0 0 0 0 0 1 0 1 0 0 1 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 1 0 0 0
[217] 1 1 1 1 1 0 1 0 1 1 0 0 0 1 0 1 1 1 0 1 1 0 1 1 0 1 0 0 1 0 0 1 1 1 0 0
[253] 1 1 0 0 1 1 1 1 0 0 0 0 1 1 0 1 0 1 0 0 0 1 0 1 1 0 0 0 0 0 0 0 0 1 1 1
[289] 0 1 1 0 0 0 1 1 1 1 1 0
\end{CodeOutput}
\end{CodeChunk}
An element of the output \code{m} equal to one indicates a missing label, while an element equal to zero indicates an available label. In the vector of class labels \code{clust} returned by \code{rmix()}, we denote a missing label by \code{NA}, as in the following example.
\begin{CodeChunk}
\begin{CodeInput}
R> zm<-dat$clust
R> zm[m==1]<-NA
R> zm
\end{CodeInput}
\begin{CodeOutput}
[1] NA 3 4 NA NA 4 4 NA NA NA NA 1 2 NA 2 4 2 1 1 4 2 4 4 1
[25] NA 4 NA 3 2 4 3 2 NA NA 2 NA NA 1 NA 2 NA NA 1 NA 3 NA NA NA
[49] 3 1 4 3 4 2 NA 3 NA NA 2 NA NA 1 4 3 2 1 NA NA 3 1 4 NA
[73] 4 NA 2 4 3 NA 2 NA NA NA 1 3 1 NA 4 NA NA 1 NA 1 NA 1 4 NA
[97] 1 1 1 NA NA 4 NA NA 3 3 4 NA 3 3 2 3 4 1 NA 3 NA NA 2 2
[121] NA 4 NA NA NA 4 3 3 3 NA NA 2 NA NA 4 NA NA 3 NA NA NA NA NA 3
[145] 1 2 4 3 NA NA NA NA NA NA 3 1 1 4 2 4 3 NA 4 NA 3 3 3 1
[169] 2 NA NA 4 2 1 3 1 NA 2 3 2 NA NA NA 1 3 NA 1 4 1 4 3 NA
[193] 1 NA 4 4 NA NA NA 3 NA NA NA 1 3 4 4 NA 1 NA 3 2 NA 4 4 1
[217] NA NA NA NA NA 2 NA 1 NA NA 2 4 4 NA 1 NA NA NA 4 NA NA 3 NA NA
[241] 4 NA 3 4 NA 3 1 NA NA NA 3 4 NA NA 1 2 NA NA NA NA 2 1 4 4
[265] NA NA 3 NA 2 NA 4 2 2 NA 4 NA NA 2 1 1 2 2 3 4 2 NA NA NA
[289] 1 NA NA 4 2 2 NA NA NA NA NA 3
\end{CodeOutput}
\end{CodeChunk}
Here, we fit the model to the above partially classified sample by considering the full likelihood and accounting for the missing-label mechanism by setting \code{zm=zm} and \code{type=`full'}.
For the initial values of the parameters $\boldsymbol{\pi}$, $\boldsymbol{\mu}$, and $\boldsymbol{\Sigma}$, the function \code{initialvalue()} is available. Its argument \code{ncov} specifies the form of the class covariance matrices: \code{ncov=1} for a common covariance matrix and \code{ncov=2} (the default) for unequal covariance matrices.
Initial values of $\xi_0$ and $\xi_1$ are obtainable by the function \code{glm()}.
\begin{CodeChunk}
\begin{CodeInput}
R> inits<-initialvalue(dat=dat$Y,zm=zm,g=g, ncov=2)
R> en<-get_entropy(dat=dat$Y,n=n,p=p,g=g,pi=inits$pi,mu=inits$mu,sigma=inits$sigma)
R> glmfit<-glm(m~en,family='binomial')
R> xi_inits<-coef(glmfit)
R> xi_inits
\end{CodeInput}
\begin{CodeOutput}
(Intercept) en
-0.5930693 0.3417874
\end{CodeOutput}
\end{CodeChunk}
\begin{CodeChunk}
\begin{CodeInput}
R> fit_ful<-gmmsslm(dat=dat$Y,zm=zm,pi=inits$pi,mu=inits$mu,
+ sigma=inits$sigma,xi=xi_inits,type='full')
R> fit_ful
\end{CodeInput}
\begin{CodeOutput}
$objective
[1] 2099.796
$ncov
[1] 2
$convergence
[1] 1
$iteration
[1] 108
$parhat
$parhat$pi
[1] 0.2448038 0.2247527 0.2521564 0.2782871
$parhat$mu
[,1] [,2] [,3] [,4]
[1,] 0.2491248 0.1430182 0.3248906 0.2862982
[2,] 0.1632115 0.6807152 0.8309257 1.6827811
[3,] 0.3066612 0.1033765 1.7899401 0.4550641
$parhat$sigma
, , 1
[,1] [,2] [,3]
[1,] 0.91534868 -0.03619485 0.02974812
[2,] -0.03619485 0.68636971 -0.26421012
[3,] 0.02974812 -0.26421012 0.97870318
, , 2
[,1] [,2] [,3]
[1,] 1.7435648 -0.5144975 -0.1351344
[2,] -0.5144975 2.3481520 -0.3188222
[3,] -0.1351344 -0.3188222 2.3780968
, , 3
[,1] [,2] [,3]
[1,] 2.4470608 -0.2274167 -0.5799091
[2,] -0.2274167 3.2809171 -0.2257279
[3,] -0.5799091 -0.2257279 3.3297267
, , 4
[,1] [,2] [,3]
[1,] 5.1069469 0.3922628 -0.1481315
[2,] 0.3922628 2.9109848 -0.1947038
[3,] -0.1481315 -0.1947038 4.3250141
$parhat$xi
(Intercept) en
-0.2490987 0.6779641
\end{CodeOutput}
\end{CodeChunk}
To fit the model to the partially classified sample without considering the missing-label mechanism, we set \code{zm=zm} and \code{type=`ign'}.
\begin{CodeChunk}
\begin{CodeInput}
R> fit_ign<-gmmsslm(dat=dat$Y,zm=zm,pi=inits$pi,mu=inits$mu,
+ sigma=inits$sigma,type='ign')
R> fit_ign
\end{CodeInput}
\begin{CodeOutput}
$objective
[1] 1894.371
$ncov
[1] 2
$convergence
[1] 0
$iteration
[1] 45
$parhat
$parhat$pi
[1] 0.2493635 0.2235416 0.2417861 0.2853088
$parhat$mu
[,1] [,2] [,3] [,4]
[1,] 0.2455327 0.16532951 0.2504642 0.3364182
[2,] 0.1611216 0.63215520 0.7556749 1.7754949
[3,] 0.3379078 0.08854911 1.8037221 0.4770904
$parhat$sigma
, , 1
[,1] [,2] [,3]
[1,] 0.91204204 -0.04085983 0.0261662
[2,] -0.04085983 0.71231030 -0.2920060
[3,] 0.02616620 -0.29200604 1.0110159
, , 2
[,1] [,2] [,3]
[1,] 1.7192169 -0.5524251 -0.1063493
[2,] -0.5524251 2.3080496 -0.3681465
[3,] -0.1063493 -0.3681465 2.4762132
, , 3
[,1] [,2] [,3]
[1,] 2.5732396 -0.39247106 -0.51646104
[2,] -0.3924711 2.83464544 -0.05839604
[3,] -0.5164610 -0.05839604 3.47527992
, , 4
[,1] [,2] [,3]
[1,] 4.9801691 0.4923271 -0.1489078
[2,] 0.4923271 3.1325763 -0.2079410
[3,] -0.1489078 -0.2079410 4.1376085
\end{CodeOutput}
\end{CodeChunk}
Similarly, to obtain the estimated parameters based on the completely classified sample, we apply \code{gmmsslm()} to the generated dataset \code{dat$Y} with the complete set of class labels \code{dat$clust} from \code{rmix()}. There are no missing labels because the sample is completely classified, so we set \code{zm=dat$clust} and \code{type=`com'}.
\begin{CodeChunk}
\begin{CodeInput}
R> fit_com<-gmmsslm(dat=dat$Y,zm=dat$clust,pi=inits$pi,mu=inits$mu,
+ sigma=inits$sigma,type='com')
R> fit_com
\end{CodeInput}
\begin{CodeOutput}
$objective
[1] 2044.533
$ncov
[1] 2
$convergence
[1] 1
$iteration
[1] 39
$parhat
$parhat$pi
[1] 0.2466667 0.2466667 0.2433333 0.2633333
$parhat$mu
[,1] [,2] [,3] [,4]
[1,] 0.1985072 0.1890481 0.2206379 0.4004096
[2,] 0.2683220 0.5339800 0.7292851 1.8812899
[3,] 0.3685162 0.1467160 1.8445736 0.3810854
$parhat$sigma
, , 1
[,1] [,2] [,3]
[1,] 0.97793568 -0.09208251 -0.07225683
[2,] -0.09208251 0.84448793 -0.22104616
[3,] -0.07225683 -0.22104616 0.93807780
, , 2
[,1] [,2] [,3]
[1,] 1.80391990 -0.3886867 0.07865229
[2,] -0.38868669 2.2083642 -0.30586168
[3,] 0.07865229 -0.3058617 2.27606673
, , 3
[,1] [,2] [,3]
[1,] 2.8947952 -0.46421008 -0.36287163
[2,] -0.4642101 2.68606795 0.01285382
[3,] -0.3628716 0.01285382 3.49889359
, , 4
[,1] [,2] [,3]
[1,] 4.7856302 0.4454033 -0.3151916
[2,] 0.4454033 3.2271520 -0.2439202
[3,] -0.3151916 -0.2439202 4.4013177
\end{CodeOutput}
\end{CodeChunk}
The function returns a \code{gmmsslm} object containing the optimal objective value, convergence indicator, iteration number, and the final iterates of the parameter estimates. The parameter estimates are contained in \code{fit_ful$parhat}, \code{fit_ign$parhat}, and \code{fit_com$parhat}. Finally, we can input the estimated parameters and the unclassified data into \code{bayesclassifier()} to obtain the predicted class labels.
\begin{CodeChunk}
\begin{CodeInput}
R> dat_ul<-dat$Y[which(m==1),]
R> n_ul<-length(which(m==1))
R> clust_ul<-dat$clust[which(m==1)]
R> parhat_ful<-fit_ful$parhat
R> label_ful<-bayesclassifier(dat=dat_ul,n=n_ul,p=p,g=g,pi=parhat_ful$pi,
+ mu=parhat_ful$mu,sigma=parhat_ful$sigma)
R> label_ful
\end{CodeInput}
\begin{CodeOutput}
[1] 1 3 1 3 3 3 4 4 2 4 1 1 1 1 4 4 3 4 1 3 1 1 4 2 2 1 1 1 4 3 1 1 1 1 4 1
[37] 3 3 1 2 4 1 3 2 1 1 4 4 4 1 1 1 4 4 4 1 2 1 3 3 1 3 2 2 3 1 3 3 1 4 3 4
[73] 1 3 1 1 1 3 1 1 1 1 1 1 1 1 3 3 1 1 1 1 3 4 4 2 4 4 1 4 4 2 4 4 3 2 1 1
[109] 1 3 1 1 1 2 2 2 4 2 4 1 2 2 1 2 2 2 1 1 4 1 2 3 2
\end{CodeOutput}
\begin{CodeInput}
R> parhat_ign<-fit_ign$parhat
R> label_ign<-bayesclassifier(dat=dat_ul,n=n_ul,p=p,g=g,pi=parhat_ign$pi,
+ mu=parhat_ign$mu,sigma=parhat_ign$sigma)
R> label_ign
\end{CodeInput}
\begin{CodeOutput}
[1] 1 3 1 3 3 3 4 4 2 4 1 1 1 1 4 4 3 4 1 3 1 1 4 2 2 1 1 1 4 3 1 1 1 1 4 1
[37] 3 3 1 2 4 1 3 2 1 1 4 4 4 1 1 1 4 4 4 1 2 1 3 4 1 3 2 2 3 1 3 3 1 4 3 4
[73] 1 3 1 1 1 4 1 1 1 1 1 1 1 1 3 3 1 1 1 1 3 4 4 2 4 4 1 4 4 2 3 4 4 4 1 1
[109] 1 3 1 1 1 2 2 2 4 2 4 1 2 2 1 2 2 2 1 1 4 1 2 3 4
\end{CodeOutput}
\begin{CodeInput}
R> parhat_cc<-fit_com$parhat
R> label_cc<-bayesclassifier(dat=dat_ul,n=n_ul,p=p,g=g,pi=parhat_cc$pi,
+ mu=parhat_cc$mu,sigma=parhat_cc$sigma)
R> label_cc
\end{CodeInput}
\begin{CodeOutput}
[1] 1 3 1 3 3 3 4 4 2 4 3 1 1 1 4 4 3 4 1 3 1 1 4 2 2 1 1 1 4 3 1 1 1 1 3 1
[37] 3 3 1 1 4 1 3 2 1 1 4 4 4 1 1 1 4 4 4 1 2 1 3 3 1 3 2 2 3 1 3 3 1 4 3 4
[73] 1 3 1 1 1 4 1 1 1 1 1 1 1 1 3 3 1 2 1 1 3 4 4 2 4 4 1 4 4 2 3 4 4 2 1 1
[109] 2 3 1 1 1 2 2 2 4 1 4 1 2 2 1 2 2 2 1 1 4 1 2 3 2
\end{CodeOutput}
\end{CodeChunk}
We calculate the corresponding error rates of the three classifiers when applied only to the unclassified data in the partially classified data set.
\begin{CodeChunk}
\begin{CodeInput}
R> err_ful<-erate(dat=dat_ul,parlist=fit_ful,clust=clust_ul)
R> err_ign<-erate(dat=dat_ul,parlist=fit_ign,clust=clust_ul)
R> err_com<-erate(dat=dat_ul,parlist=fit_com,clust=clust_ul)
R> c(err_ful,err_ign,err_com)
\end{CodeInput}
\begin{CodeOutput}
[1] 0.4997755 0.5279257 0.4843896
\end{CodeOutput}
\end{CodeChunk}
\section{Application to a real dataset}
\label{sec:example}
In this section, we illustrate the functionality of \pkg{gmmsslm} on the gastrointestinal lesions data from \citet{mesejo2016computer}. The dataset is composed of 76 colonoscopy videos (recorded with both white light (WL) and narrow band imaging (NBI)), the histology (classification ground truth), and the opinions of the endoscopists (including 4 experts and 3 beginners). They used both WL and NBI methods to classify whether the lesions appear benign or malignant. Each of the $n=76$ observations consists of 698 features extracted from the colonoscopy videos.
A panel of seven endoscopists viewed the videos to give their opinion as to whether each patient needed resection (malignant) or no-resection (benign).
We formed our partially classified sample as follows. Feature vectors for
which all seven endoscopists agreed were taken to be classified with labels
specified either as 1 (resection) or 2 (no-resection) using the ground-truth labels.
Observations for which there was not total agreement among the endoscopists
were taken as having missing class labels denoted by \code{NA}.
To reduce the dimension of the dataset down from 698 features, we removed constant columns, normalized the dataset, and then used sparse linear discriminant analysis \citep{clemmensen2011sparse} to select a subset of four features useful for class discrimination according to the trinary ground truth labels (adenoma, serrated, and hyperplastic).
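A minimal sketch of this preprocessing is given below; the object \code{raw_features} and the choice of a sparse LDA routine are illustrative assumptions and are not functions of \pkg{gmmsslm}.
\begin{CodeChunk}
\begin{CodeInput}
R> ## raw_features: 76 x 698 matrix of extracted features (assumed available)
R> X <- as.matrix(raw_features)
R> X <- X[, apply(X, 2, sd) > 0]   # remove constant columns
R> X <- scale(X)                   # normalize each feature
R> ## a sparse LDA fit (e.g., via the sparseLDA package) is then used to select
R> ## four features that discriminate the trinary ground-truth labels
\end{CodeInput}
\end{CodeChunk}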
Figure~\ref{fig:Figlable1.1}
shows a plot of the data with the class labels of the feature vectors used in the fitting of the classifiers in our example.
\begin{figure}[ht]
\centering
\includegraphics[width=12cm]{gastroplot-7-wl.png}
\caption{Gastrointestinal dataset. The red triangles denote the benign observations, and the blue circles denote the malignant observations. The black squares correspond to the unlabeled observations. Observations are treated as unlabeled if fewer than seven endoscopists assigned the same class label to the feature vector.}
\label{fig:Figlable1.1}
\end{figure}
The black squares denote the unlabeled observations, the red triangles denote the benign observations and the blue circles denote the malignant observations. It can be seen that the unlabeled observations tend to be located in regions of class overlap.
To confirm further the appropriateness of the adopted missing-data mechanism for the data, we fitted a Gaussian mixture model \citep{lee2014finite} to estimate the entropy of each observation.
Figure~\ref{fig:figentropy}(a) compares the box plots of the estimated entropies in the labeled and unlabeled groups. Figure~\ref{fig:figentropy}(b) presents a Nadaraya--Watson kernel estimate of the probability of missing class labels.
\begin{figure}[ht]
\centerline{\includegraphics[scale=.30]{entropy.png}}
\caption{Analysis of the gastrointestinal dataset with respect to the relationship between the entropy and the labeled and unlabeled observations.}
\label{fig:figentropy}
\end{figure}
From Figure~\ref{fig:figentropy}(a), we find that the unlabeled observations typically have higher entropy than the labeled observations. Figure~\ref{fig:figentropy}(b) shows that the estimated probability of a missing class label decreases as the negative log entropy increases. This relation is in accordance with equation \eqref{eq:12}. The higher the entropy of a feature vector, the higher the probability of its class label being unavailable.
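A kernel estimate of this kind can be produced, for example, with the base \proglang{R} function \code{ksmooth()}; here \code{ent} (the estimated entropies), \code{m} (the missing-label indicators), and the bandwidth are illustrative assumptions.
\begin{CodeChunk}
\begin{CodeInput}
R> ## Nadaraya-Watson estimate of pr(missing label) versus negative log entropy
R> nw <- ksmooth(x = -log(ent), y = m, kernel = "normal", bandwidth = 1)
R> plot(nw, type = "l", xlab = "negative log entropy",
+       ylab = "estimated probability of a missing label")
\end{CodeInput}
\end{CodeChunk}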
Now we use the function \code{gmmsslm()} to compare the performance of the Bayes' classifier with the unknown parameter vector $\boldsymbol{\theta}$ estimated by $\hat{\boldsymbol{\theta}}_{\text{CC}}$, $\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{ig})}$, and $\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{full})}$, respectively.
We applied the three classifiers $R(\hat{\boldsymbol{\theta}}_{\text{CC}})$, $R(\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{ig})})$, and $R(\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{full})})$
to all the feature vectors in the completely classified data set. Each error rate was estimated using leave-one-out cross-validation.
These estimated error rates are given in Table~\ref{tab:tabel1}.
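A schematic of this leave-one-out procedure for the SSL-full rule is sketched below. It is illustrative only: the objects \code{Y} (the feature matrix), \code{zm} (the partially observed labels), \code{truth} (the ground-truth labels), and the initial values are assumed to be available, and in practice the initial values would be recomputed for each fold.
\begin{CodeChunk}
\begin{CodeInput}
R> pred <- integer(nrow(Y))
R> for (j in seq_len(nrow(Y))) {
+    fit <- gmmsslm(dat = Y[-j, ], zm = zm[-j], pi = inits$pi, mu = inits$mu,
+                   sigma = inits$sigma, xi = xi_inits, type = 'full')
+    pred[j] <- bayesclassifier(dat = Y[j, , drop = FALSE], n = 1, p = p, g = g,
+                               pi = fit$parhat$pi, mu = fit$parhat$mu,
+                               sigma = fit$parhat$sigma)
+  }
R> mean(pred != truth)   # leave-one-out estimate of the error rate
\end{CodeInput}
\end{CodeChunk}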
\begin{table}[ht]
\centering
\begin{tabular}{cccc}
\toprule
Gastrointestinal dataset & $n_c$(classified) & $n_{uc}$(unclassified) & Error rate \\
\midrule
$R(\hat{\boldsymbol{\theta}}_{\text{CC}})$ & 76 & 0 & 0.171\\
$R(\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{ig})})$ & 35 & 41 & 0.211 \\
$R(\hat{\boldsymbol{\theta}}_{\text{PC}}^{(\text{full})})$ & 35 & 41 & 0.158 \\
\bottomrule
\end{tabular}
\caption{Summary statistics for Gastrointestinal dataset. Here, $n=76$, $p=4$, and $g=2$.}
\label{tab:tabel1}
\end{table}
The classifier based on the estimates of the parameters
using the full likelihood for the partially classified training sample has a lower estimated error rate than that of the rule that would be
formed if the sample were completely classified.
\section{Summary}
\label{sec:summary}
The \proglang{R} package \pkg{gmmsslm} implements the semi-supervised learning approach proposed
by \cite{ahfock2020apparent} for estimating the Bayes' classifier from a partially classified training sample
in which some of the feature vectors have missing labels. It uses a generative model approach whereby the
joint distribution of the feature vector and its ground-truth label is adopted. It is assumed that each of
$g$ pre-specified classes to which a feature vector can belong has a multivariate Gaussian distribution.
The conditional probability that a feature vector has a missing label is formulated in a framework in which a
missing-data mechanism is introduced that models this probability to depend on the entropy of the feature vector,
using the logistic model. The parameters in the Bayes' classifier are estimated by maximum likelihood
via an expectation conditional maximization algorithm.
The package handles classes with either equal or unequal covariance matrices in their multivariate Gaussian distributions.
In the case of $g=2$ classes, the package can be specialized to the particular cases of equal or proportional covariance matrices,
since the form of the Bayes' classifier then simplifies considerably.
An example of the application of the package to a real data set is presented.
It illustrates the potential of the semi-supervised approach to improve the accuracy of the estimated Bayes' classifier.
In this example, the estimated error rate of the Bayes' classifier based on the
partially classified training sample is lower than that of the Bayes' classifier formed from the completely classified sample.
\section{Introduction}
There has recently been renewed interest in feebly interacting particles with sub-GeV masses, partly due to the null results of the LHC and partly due to new experimental opportunities. These particles are motivated by a wide variety of theoretical models that address open problems in the Standard Model (SM) such as the hierarchy problem, the strong CP problem, neutrino oscillations and the existence of dark matter (DM) (for a recent review see \cite{Lanfranchi:2020crw,Agrawal:2021dbo} and references therein). Feebly interacting particles with masses between an MeV and several GeV are targeted by a variety of fixed-target, beam dump, collider, and accelerator-based neutrino experiments. Among these efforts is the ATOMKI collaboration, which recently found evidence for a feebly interacting boson, the X17, with a mass of 17 MeV produced in three separate nuclear reaction experiments:
\begin{itemize}
\item $p\, + ^{7}$Li$ \, \to \,^{8}$Be$^* \,\to\, ^{8}$Be$ \,+ \rm{X17}(e^{+}e^{-})$\cite{2016Kr_PRL},
\item $p\, + ^{3}$H$ \, \to \,^{4}$He$^* \,\to\, ^{4}$He$ \,+ \rm{X17}(e^{+}e^{-})$\cite{2021Kr02_PRC}, and
\item $p\, + ^{11}$B$ \, \to \,^{12}$C$^* \,\to\, ^{12}$C$ \,+ \rm{X17}(e^{+}e^{-})$\cite{2023Kr03_PRC}.
\end{itemize}
All these positive results are consistent with a production rate of $\approx 6 \times 10^{-6}$ times that of the corresponding nuclear reaction: $p\, + ^{Z}$X$ \,\to\, ^{Z+1}$Y$ \: + \: \gamma$.
However, these results have not been confirmed by an independent experiment. The CERN NA64 experiment, employing high energy bremsstrahlung $e^{-} Z \to e^{-} X$ reactions, found no evidence of anomalous $(e^{+}e^{-})$ production at 17 MeV \cite{2020Banerjee_PRD}. The observed anomaly cannot be explained within the Standard Model without stretching parameters to unrealistic values \cite{2021Aleksejevs_arXiv} and numerous theoretical investigations have proposed new physics models to explain the anomaly \cite{2022Viviani_PRC, 2020Feng_PRD, 2021Zhang_PLB, 2016Feng_PRL, Boehm:2002yz, Boehm:2003hm, Alves}.
There is general agreement among the experimental and theoretical communities on the urgent need for a new independent experiment. We aim to meet this challenge by building a state-of-the-art detector with world-first capabilities and performing the experiments outlined below to test these anomalies and significantly improve on the current experimental sensitivity.
\section{Low Energy Nuclear Reactions as a Probe of New Physics}
There are now numerous projects to search for the X17 using high-energy particle physics experiments. However, no searches employing the same nuclear reactions in which the anomaly was observed have yet been made, other than by the original experimenters. We intend to employ the University of Melbourne 5 MV Pelletron accelerator to initiate the $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: +( e^{+}e^{-}) $ reaction and to build a low mass, high precision Time Projection Chamber (TPC) to provide a far more sensitive search for the X17 and for any other anomalous $(e^{+}e^{-})$ yield. If the X17 exists as an independent fundamental particle, it will be evident as a peak at the corresponding invariant mass of the $e^{+}e^{-}$ final state. Furthermore, the TPC will be employed to explore a larger mass region (5--21 MeV) and to search for new physics with 2 orders of magnitude greater precision than has been achieved to date. The irreducible background is Internal Pair Conversion (IPC), where a virtual photon facilitates a nuclear decay and subsequently converts to an $(e^{+}e^{-})$ pair. This produces a broad background that is peaked at low invariant mass and decreases exponentially towards the kinematic limit. Our instrument is therefore designed to have the best resolution possible in the invariant mass of the $(e^{+}e^{-})$ pair. \Cref{fig:TPC_prod} shows the expected performance of our TPC relative to the original X17 anomaly and the ease with which we will observe the X17 if it is produced at the rate found by the ATOMKI group. Note that the TPC has over an order of magnitude better invariant mass resolution.
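For reference, the invariant mass of a pair with lepton energies $E_{\pm}$ and opening angle $\theta$ obeys the standard relativistic relation (neglecting the electron mass relative to the lepton energies)
\[
m_{e^{+}e^{-}}^{2} \simeq 2\,E_{+}E_{-}\,(1-\cos\theta),
\]
so the low invariant masses characteristic of IPC correspond to small opening angles, while a mass near the kinematic limit, as for the X17, forces the pair to large opening angles.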
\begin{figure}
\vspace*{-14pt} \hspace*{0pt}
\centerline{
\resizebox{0.9\textwidth}{!}{\includegraphics{{Krasznahorkay_Be8.jpg}} \hspace*{36pt} \resizebox{0.9\textwidth}{!}{\includegraphics{ipcX17lin_atomki.pdf}}}}
\caption{Left is figure 5 from \cite{2016Kr_PRL} which shows the invariant mass distribution of the $e^{+}e^{-}$ measured by the ATOMKI group. The dotted curve peaked at 16.6 MeV is the proposed contribution of the X17 convoluted with the experimental resolution. The anomaly differs from expectations by $\approx 7 \sigma$. Right are simulations of the $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: +( e^{+}e^{-}) $ reaction for the TPC after 4 days of running on the Pelletron assuming X17 production at the rate found by the ATOMKI group together with the expected IPC background. The signal due to the X17 is the peak at 17 MeV. The green line is a fit to an ansatz which models the IPC contribution.}
\label{fig:TPC_prod}
\end{figure}
Previous experiments using nuclear transitions that show evidence for the X17 boson
were all carried out by the ATOMKI group \cite{2016Kr_PRL,2021Kr02_PRC,2023Kr03_PRC} who employed the $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: +( e^{+}e^{-}) $, $p + ^{3}$H$ \,\to\, ^{4}$He$\: +( e^{+}e^{-}) $ and $p + ^{11}$B$ \,\to\, ^{12}$C$\: + (e^{+}e^{-})$ reactions. In the case of $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: +( e^{+}e^{-}) $, there are two spin-parity $1^{+}$ states at 17.64 and 18.15 MeV excitation energy in $^{8}$Be, which can be selectively populated using resonances at 0.441 MeV and 1.100 MeV in the proton beam energy. ATOMKI found an $\approx 7\sigma$ excess of events at high separation angles from the $E_{p} = 1.1$ MeV run but not at $E_{p} = 0.441$ MeV~\cite{2016Kr_PRL}. Subsequent measurements of the $p + ^{3}$H$ \,\to\, ^{4}$He$\:+ (e^{+}e^{-}) $ reaction at incident proton energies of $E_{P} =0.510, 0.610,$ and $0.900$ MeV ($E_x$= 20.21, 20.29 and 20.49 MeV) found an $\approx 7 \sigma$ excess at all three energies\cite{2021Kr02_PRC}.
Their setup, based on scintillator and position-sensitive detector arrays around the target,
has high efficiency but has no provision to select $e^{-} \, e^{+}$ pair events over other types of radiation.
As a result, the yield of the X17 decays was only a tiny fraction, far less than 1 in a million,
of the total yield. Furthermore, the invariant mass resolution of their setup fundamentally limits the precision with which they can search for other signals of new physics in these nuclear reactions.
We propose to construct a TPC which provides magnetic selection and accurate particle tracking to overcome the limitations of the previous experiments. The TPC has large acceptance, excellent background rejection, and vastly improved invariant mass resolution which enables significantly more sensitive searches. In addition, the TPC allows us to make accurate measurements of the angular distributions of the $e^{+}e^{-}$ particles, enabling us to determine the spin and parity of the final state and hence that of any hypothetical boson.
Our initial plan is to investigate the $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: +( e^{+}e^{-}) $ reaction with 2 orders of magnitude higher sensitivity than ATOMKI, running at the same energy. Following this we will run at higher energies to search for evidence of new physics which could potentially be unveiled by the unprecedented sensitivity of our detector.
Assuming a target thickness of $2\times10^{19}$ atoms/cm$^2$ and a proton beam current of 2 microamps, the results of a 4-day run on the Pelletron are shown in \cref{fig:TPC_prod} for an X17 production rate of $6\times 10^{-6}$ that of $p + ^{7}$Li$ \,\to\, ^{8}$Be$ + \gamma$, as found by Krasznahorkay et al.~\cite{2016Kr_PRL}. If the X17 exists at the rate observed by the ATOMKI group, we will observe it with {\bf greater than 100 $\sigma$ significance} over a 30-day run (see \cref{fig:TPC_fits}). Otherwise, the 30-day run on the Pelletron will place upper limits on the production of new physics signals in these reactions that are over {\bf two orders of magnitude smaller} than the ATOMKI rate, allowing us to place significantly stronger bounds on new physics models as shown in \cref{fig:TPC_reach}.
\begin{figure}
\centering
\resizebox{0.9\textwidth}{!}{\includegraphics{IPC_X17_30Day_200_2023_log.pdf}}
\caption{Simulations of signal and background for the $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: +( e^{+}e^{-}) $ reaction for the TPC after 30 days of running on the Pelletron, assuming an X17 produced at a rate 200 times smaller than the ATOMKI observation \cite{2016Kr_PRL}.
}
\label{fig:TPC_reach}
\end{figure}
\section{Proposed Time Projection Chamber}
The TPC provides 3-dimensional tracking of charged particles
by reconstructing the ionization path of their passage through a gaseous medium. As charged
particles (in this case the $e^{+} \, e^{-}$ pairs) traverse the medium, they ionize the gas and liberate electrons.
The medium is placed inside parallel electric and magnetic fields. The magnetic field is employed to determine the momentum of the charged tracks.
The conceptual design of the TPC was made using the Event Visualization Environment (EVE) event-display package \cite{EVE} within the ROOT \cite{root} high-energy physics framework. The overall arrangement of the detector is shown in \cref{fig:TPC_overall}.
The TPC is a low-mass device made with low-$Z$ materials.
The outer shell will be constructed from a 1 mm thick Aluminium sheet. Inside the outer shell, we place an electric field cage to provide a uniform electric drift field to guide liberated electrons to the anode plane. The inner wall of the TPC consists of an Aluminum plated Kapton sheet held at ground potential located at a radius of 1.5 cm from the proton beam. The volume inside the electric field cage is isolated from the outer volume of the TPC. This outer volume is filled with pure CO$_2$ gas. The inner volume contains the active gas mixture of the device (90:10 He to CO$_{2}$ by volume). This arrangement minimizes contamination of the sensitive gas region from oxygen, water vapor, and other impurities found in the air. The CO$_2$ is also an excellent electrical insulator that prevents internal sparking. Holding the entire outer shell at ground potential significantly improves electrical safety and serves to minimize stray electromagnetic interference. Just before the anode readout plane, we place a Micromegas stage \cite{micromegas}.
\begin{figure}
\centerline{
\resizebox{0.95\textwidth}{!}{\includegraphics{TPC_overview_1.pdf}}}
\caption{Overall design of the TPC. The internal diameter of the magnet yoke is 40 cm. The TPC is instrumented to a radius of 17 cm. The proton beam is directed down a vacuum tube of 1.0 cm radius to a target located in the center of the detector. Simulated $(e^{+}e^{-})$ pairs originate from nuclear reactions on the target. The right figure is a cut-away view showing the field cage of the TPC.}
\label{fig:TPC_overall}
\end{figure}
The proposed TPC-based detector system consists of a magnetic solenoid with a 40 cm internal
diameter, which provides magnetic fields of up to 0.4 Tesla.
Inside is an arrangement of 8 scintillators which detect the $e^{+} \, e^{-}$ pairs
from the X17 decay X$17\to e^{+}e^{-}$.
The TPC is placed within the scintillator array and covers radii from $r=1.5$ cm to 17 cm.
Proton beams are transported through a beam pipe of 1 cm radius and impinge on
the target placed in a target chamber at the center of the TPC. The construction of the target chamber is optimized to minimize multiple scattering, which limits the invariant mass resolution of the $e^+ e^-$ pair.
\begin{figure}
\centerline{
\resizebox{0.9\textwidth}{!}{\includegraphics{TPC_concept.jpg}}}
\caption{A side-on view of the TPC concept. The details are explained in the text. }
\label{fig:TPC_concept}
\end{figure}
\begin{figure}
\centerline{ \resizebox{0.9\textwidth}{!}{\includegraphics{TPC_Optimized_electricField.pdf}}}
\caption{COMSOL simulation of the radial electric field in the drift region of the TPC.}
\label{fig:TPC_Z_field}
\end{figure}
The conceptual operation of the TPC is shown in \cref{fig:TPC_concept}. A uniform electric field is provided by a high voltage stepped down from $-15$ kV to 0 V over the 35 cm length of the active region. This is achieved via an electric field cage of aluminized rings on the inner and outer walls of the TPC. A uniform magnetic field is provided by an electromagnetic solenoid. The high energy $e^{+}e^{-}$ pairs from nuclear reactions follow helical trajectories with a radius of curvature proportional to their momentum. The TPC is filled with a He/CO$_2$ gas mixture in a ratio of 90:10 at atmospheric pressure. As the $e^{+}e^{-}$ traverse the detector, they ionize the gas and liberate electrons. These drift along the electric field lines at constant velocity until they reach the Micromegas gas amplification region, which amplifies the electron signal by a factor of $\approx 10^4$. The amplified electrons induce pulses on the X-Y readout strips located in a multilayered PCB. A coincidence between two of the scintillators is used to trigger the readout of the TPC and also to provide the time-zero to determine the drift time of the charged particle tracks through the device. Thus the time of arrival, as well as the $(x,y)$ location of the pulse, is recorded. The drift time and the known electron drift velocity give the $z$-coordinate of the electron. The $(x,y,z)$ locations of the liberated electrons provide space-point measurements which are collected together to form the 3-dimensional tracks of the $e^+e^-$ pair.
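As an indicative estimate rather than a design figure, the radius of curvature of a singly charged track of momentum $p$ in a magnetic field $B$ follows the standard relation
\[
r\,[\mathrm{m}] \simeq \frac{p\,[\mathrm{GeV}/c]}{0.3\,B\,[\mathrm{T}]},
\]
so an electron or positron with $p \approx 10$ MeV/$c$ in a 0.25 T field has $r \approx 13$ cm, comparable to the 17 cm instrumented radius of the TPC.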
The electric fields within the TPC were modeled with the commercial multi-physics package COMSOL\cite{comsol} and an optimal solution was determined after a number of configurations were trialed. The upstream cathode is a 1 mm thick PCB disk set to a potential of $-15$ kV. The downstream anode is set to zero volts on the Micromegas mesh positioned 125 microns above a multilayer PCB. The field cage defining the TPC drift field consists of a series of concentric rings, each 4.5 mm wide with a 0.5 mm gap between them. The potential is stepped down from $-15$ kV to 0 V via 1 M$\Omega$ resistors placed between adjacent rings. The first ring is placed 1 mm downstream of the cathode. The drift field thus defined has a typical uniformity of 1 part in $10^3$. The COMSOL simulation is shown in \cref{fig:TPC_Z_field}.
\begin{figure}
\centerline{
\resizebox{0.9\textwidth}{!}{\includegraphics{TPC_micromesage.pdf}}}
\caption{Left shows the concept of the Micromegas electron amplification stage consisting of a 15-micron pitch stainless steel wire mesh held 125 microns above 300-micron wide graphite strips on top of a multilayer PCB. The strips are held at 800 V. Subsequent layers hold X and Y strips of width 0.5 and 0.9 mm located 0.5 and 1 mm beneath the surface of the PCB. The mesh is held 125 microns above the surface with electrically insulating spacers. The image on the right shows the electron drift lines in the Micromegas electric field. This was also simulated by COMSOL. The electrons are directed around the mesh and into the gas amplification region. Further details are explained in the text.}
\label{fig:TPC_micromegas}
\end{figure}
\Cref{fig:TPC_micromegas} shows the layout of the Micromegas amplification and readout region. We employ GARFIELD~\cite{garfield} to simulate the liberation of electrons from the primary $e^{+}e^{-}$ via ionization of the He/CO$_2$ gas, the drift of the electrons through the electric field, the Micromegas gas amplification, and the charge induced on the X-Y strips embedded in the PCB. The calculations predict that electrons are liberated at a rate of $\approx$ 1/mm and drift with a velocity of 1 cm/$\mu$s with the $-15$ kV potential on the cathode. We employ the VMM ASIC~\cite{vmm} as the front-end chip to read out the strips. This provides 1 ns timing resolution as well as a 64 $\mu$s dynamic range to enable the full 35 $\mu$s readout time of the TPC. We employ the hit time on the strips to determine the drift time of the electrons as well as to associate the correct X-Y strips for projective readout of the Micromegas. The +800 V potential difference between the wire mesh and the graphite strips on the front of the PCB provides a gas gain of $\approx 10^4$ and results in induced pulses of $\approx$ 10 pC spread across 2-3 strips in the X and Y planes. As shown in \cref{fig:TPC_XY}, taking the weighted mean of the distribution enables us to determine the location of the liberated electrons with a precision of $\approx 100\,\mu$m.
\begin{figure}
\centerline{
\resizebox{0.8\textwidth}{!}{\includegraphics{TPC_XY_resolution.jpg}}}
\caption{The X-Y resolution of the liberated electrons after projective readout onto the X-Y strips. The system provides $\approx 100\,\mu$m resolution}
\label{fig:TPC_XY}
\end{figure}
\begin{figure}
\centerline{
\resizebox{0.48\textwidth}{!}{\includegraphics{TPC_event_X17_5.jpg}}
\hspace*{6pt}
\resizebox{0.48\textwidth}{!}{\includegraphics{TPC_event_IPC_22.jpg}}}
\caption{Event display of simulated typical X17 (left) and IPC (right) events. The reconstructed electron tracks are shown as blue lines and positrons are red lines. The dots show the location of the space points used to reconstruct the $e^{+}e^{-}$ trajectories. The tracks are projected onto the target and are required to form a common vertex constrained to originate on the target which is shown as a grey disk. In this event display, only the beam pipe and vacuum walls are shown. IPC events originate from off-shell photons and hence the invariant mass of such events is concentrated at low values. This results in a small opening angle between the $e^{+}e^{-}$ pair. In contrast, the X17 has an invariant mass near the kinematic limit leading to large opening angles for the $e^{+}e^{-}$ pairs.}
\label{fig:TPC_events}
\end{figure}
We perform a full GEANT4 Monte-Carlo simulation of the TPC based on the Geometry Definition Markup Language (GDML) \cite{gdml} file generated from the EVE conceptual design. The model propagates the primary $e^{+}e^{-}$ pair through the TPC. It includes energy loss and multiple scattering in the vacuum wall of the beam pipe, the inner walls of the TPC, and the TPC gas-sensitive region itself. The electron ionization rate of 1 electron/mm in the He/CO$_2$ gas mixture results in $\approx$ 130 space points per track. These space points were reconstructed using the GENFIT2 \cite{GENFIT} Kalman-filter-based track-fitting package and the RAVE \cite{rave} vertex-finding package. \Cref{fig:TPC_events} shows event displays of simulated IPC and X17 processes. The invariant mass resolution of the TPC is limited by multiple scattering in the TPC gas and the vacuum chamber walls. The reconstructed invariant mass resolution was optimized by using a 90:10 He/CO$_2$ gas mixture and by employing a very thin vacuum wall consisting of 50 micron thick aluminized Mylar. The shear strength of Mylar is 15 kg/mm$^2$, which enables us to employ it as a very thin vacuum wall in a suitably designed target chamber. We made a prototype target chamber by milling four 2 cm long holes in an 11 mm radius cylinder of 2 mm thickness. This left four 2 mm wide posts to support the structure. Our calculations showed that this should provide a factor 20 safety margin against breaking under atmospheric pressure. The 50 micron thick Mylar foil was glued over the vacuum pipe and posts using Torr Seal glue. We attached a turbo pump to the prototype and pumped it down. We found it to be mechanically stable and able to reach a high vacuum. By placing cuts on the quality of the vertex fit we are able to select events where both tracks pass only through the Mylar foil. Overall, we find that our invariant mass resolution for the X17 has a standard deviation of 0.1 MeV, which provides excellent discrimination against the smoothly varying IPC background. \Cref{fig:chamber_res} shows the prototype target chamber under vacuum and the expected X17 resolution with a magnetic field of 0.25 Tesla. After applying the reconstruction criteria, the overall acceptance of the TPC to the $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: + X17 \to ( e^{+}e^{-})$ reaction is 17\% with a magnetic field strength of 0.25 Tesla.
\begin{figure}
\centerline{
\resizebox{0.475\textwidth}{!}{\includegraphics{TPC_prototype_targetChamber_crop.jpg}}
\hspace*{6pt}
\resizebox{0.48\textwidth}{!}{\includegraphics{X17_full_Jan27_1_1_001_InvariantMass.pdf}}}
\caption{Left shows a photo of the prototype chamber under vacuum. Right shows the invariant mass resolution obtained from the full GEANT4 simulation using the thin mylar target wall, 90:10 He/CO$_2$ gas mixture, full reconstruction of the 130 hit collections per track, and vertex reconstruction.}
\label{fig:chamber_res}
\end{figure}
Signals from the Micromegas are induced onto X-Y strips etched into a double-sided circuit board. We will employ CERN-RD51 \cite{RD51} developed Scalable Readout Systems (SRS) components for our electronics.
Pulses from the strips are fed into the Hybrid cards containing two 64-channel VMM Application Specific Integrated
Circuit (ASIC) \cite{vmm} chips where they are amplified and digitized. Both pulse height and hit time are recorded.
These data are read out via HDMI cables and sent through to the DVMM FPGA. This device enables sophisticated logic decisions while also providing a digital buffer 64 microseconds deep. This enables us to capture the time of arrival of the liberated electrons over the full drift time of the TPC. Data passing the DVMM trigger logic are transferred to the Front End Concentrator (FEC) which encodes the data into a 10 Gigabit ethernet stream for recording on a PC. This will be transferred to the University of Melbourne Spartan Research Computing platform for long-term storage and data analysis.
The TPC enables a long-term program to search for new physics through nuclear reactions of the kind $p+ ^{Z}$X$ \,\to\, ^{Z+1}$Y$ + (e^{+}e^{-})$, as well as detailed studies of nuclear structure through $e^{+}e^{-}$ decays of excited states. The TPC will be constructed under contract by CERN in consultation with our team.
\section{Subdominant Backgrounds}
\begin{figure}
\centerline{
\resizebox{0.48\textwidth}{!}{\includegraphics{TPC_event_gamma_3452.jpg}}
\hspace*{12pt}
\resizebox{0.48\textwidth}{!}{\includegraphics{GammaIPC.jpg}}}
\caption{Left shows the simulation of a typical $p + ^{7}$Li$ \,\to\, ^{8}$Be$ + \gamma $ event detected by the TPC. The $(e^{+}e^{-})$ pair originates from a gamma conversion in the target wall. Such events are rejected by requiring the $e^{+}e^{-}$ to originate at a vertex on the target. The right plot shows the background from IPC and $p + ^{7}$Li$ \,\to\, ^{8}$Be$ + \gamma $ events as a function of the $e^{+}e^{-}$ invariant mass. The black points show the IPC background after applying cuts to ensure the $e^{+}e^{-}$ originates from a common vertex at the target. The blue points show simulations of the background from the $p + ^{7}$Li$ \,\to\, ^{8}$Be$ + \gamma $ reaction without vertex constraints and the green points show the $p + ^{7}$Li$ \,\to\, ^{8}$Be$ + \gamma $ background after applying the same cuts as the IPC data. The background from $p + ^{7}$Li$ \,\to\, ^{8}$Be$ + \gamma $ is less than 0.1\% that of IPC in the X17 mass region after applying vertex constraints. The distributions are those expected after a 10-hour run on the Pelletron.}
\label{fig:gamma}
\end{figure}
As noted earlier, the irreducible background to the X17 and other new physics signals in proton-induced $e^{+}e^{-}$ reactions is the IPC process. The other large potential background is from the $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: + \gamma $ reaction, where the gamma subsequently undergoes external conversion via interaction with matter: $ \gamma \,\to\, e^{+} e^{-}$. The $p + ^{7}$Li$ \,\to\, ^{8}$Be$ + \gamma$ reaction is $10^3$ times larger than IPC. To investigate this we simulated $10^{8}$ $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: + \gamma $ reactions, approximately equivalent to 10 hours of running on the Pelletron. A typical event and the background expected from this process are shown in \Cref{fig:gamma}. After applying cuts to select $(e^{+}e^{-})$ with good vertices originating at the target, this background is reduced to less than 0.1\% that of IPC in the invariant mass region of the X17.
We considered cosmic rays and random beta decays as backgrounds. Primary cosmic ray muons in the $\approx$ 10 MeV/$c$ momentum region of X$17 \to e^{+}e^{-}$ have a kinetic energy of only 0.47 MeV, so they do not have sufficient energy to initiate a trigger, which requires 1 MeV energy loss in each scintillator. Random coincidences require two beta decays, each with $\sim$ 10 MeV kinetic energy, with two reconstructed tracks within 5 mm of each other on the target, with an invariant mass near the 17 MeV peak, and falling within a $\sim$10 ns time window. Consequently, both these backgrounds are expected to be vanishingly small. Finally, although $\sim 9$ MeV $\alpha$ particles are copiously produced via the $p + ^{7}$Li$ \,\to\, \alpha + \alpha $ reaction, they are all stopped in the inner detector walls and do not enter the sensitive region of the TPC.
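For example, the quoted muon kinetic energy follows directly from the muon mass $m_{\mu} \simeq 105.7$ MeV/$c^{2}$:
\[
T_{\mu} = \sqrt{p^{2}c^{2}+m_{\mu}^{2}c^{4}} - m_{\mu}c^{2}
\simeq \sqrt{10^{2}+105.7^{2}}\ \mathrm{MeV} - 105.7\ \mathrm{MeV} \simeq 0.47\ \mathrm{MeV},
\]
well below the $\approx$ 1 MeV energy loss required in each trigger scintillator.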
The background to X17 production is then overwhelmingly due to IPC events. \Cref{fig:TPC_fits} shows RooFit fits to simulations of the X17 signal and IPC background. The IPC background is modeled with a two-component exponential ansatz while the X17 is modeled as a Gaussian core plus a small tail from $(e^{+}e^{-})$ events that propagate through the edges of the thin mylar window.
\begin{figure}
\centerline{
\resizebox{0.48\textwidth}{!}{\includegraphics{IPC_X17_30Day_ATOMKI_2023_log.pdf}}
\hspace*{16pt}
\resizebox{0.48\textwidth}{!}{\includegraphics{ipcXStar_m53_50.pdf}}}
\caption{Left shows simulations of signal and background for the $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: +( e^{+}e^{-}) $ reaction for the TPC after 30 days of running on the Pelletron assuming an X17 produced at a rate expected from the ATOMKI observation \cite{2016Kr_PRL}. Green is the double exponential ansatz used to model the background. Black is the sum of signal and background. We extract $17155 \pm 132$ events from the fit. Right shows simulations of a 30-day run at $E_p=4.5$ MeV, target thickness of $2\times10^{20}$ atoms/cm$^2$ and indicative results if feebly interacting bosons of 13, 15, 17, 18, and 20 MeV were produced at rates 200 times less than the X17.}
\label{fig:TPC_fits}
\end{figure}
\section{Experimental Research Programme}
We estimate the beam time required as follows. The total cross section for $p+ ^{7}$Li$ \,\to\, ^{8}$Be$ + \gamma$ at E$_{p} = 1.0$ MeV is $2.5\times 10^{-5} \: b$ \cite{Zahnow}. We assume that the $p+ ^{7}$Li$ \,\to\, ^{8}$Be$ + (e^{+} e^{-})$ IPC cross section is $3.6\times 10^{-3}$ times smaller. The Pelletron can comfortably supply beam currents of $2\,\mu$A and we can make Li$_{2}$O targets of thickness $2\times 10^{19}$ Li atoms/cm$^2$. This corresponds to a proton beam energy loss of 0.040 MeV. The ATOMKI group found the branching ratio of X17 to the $p+ ^{7}$Li$ \,\to\, ^{8}$Be$ + \gamma$ reaction to be $6\times10^{-6}$ \cite{2016Kr_PRL}. Since the TPC has a 17\% acceptance for $p+ ^{7}$Li$ \,\to\, ^{8}$Be$ + $X$17 \to (e^{+} e^{-})$, we expect 17,000 X17 events in a 30-day run on the Pelletron if the ATOMKI group is correct. This situation is shown as the left plot in \cref{fig:TPC_fits}. Our simulations show that we would find $17155 \pm 132$ events, well over 100 $\sigma$ significance. We will also perform a `bump-hunt' using the $E_{p} = 1.05$ MeV data to search for feebly coupled bosons. If none are found, we will set 90\% confidence upper limits as shown in \cref{fig:TPC_105_UL}. Depending on the results of the first runs, we will either investigate the production of the X17 as a function of energy by running at E$_{p} = 1.5$, $2.0$ and $3.0$ MeV or make a high-statistics run at E$_{p} = 4.5$ MeV to set the best upper limits possible. The cross section for $p+ ^{7}$Li$ \,\to\, ^{8}$Be$ + \gamma$ is similar ($3 \times 10^{-5} \: b$) at $E_{p}= 4.5$ MeV \cite{fisher} and we will employ a target ten times thicker, $2\times 10^{20}$ Li atoms/cm$^2$, corresponding to a proton beam energy loss of 0.11 MeV. This allows a greater reach in energy for feebly-interacting bosons and allows us to set strong limits on generic dark photons in the 8 -- 20 MeV mass range. We will then perform a bump hunt in the invariant mass distribution of the $e^{+}e^{-}$ pair. The plot on the right of \cref{fig:TPC_fits} shows simulations of the $E_{p}= 4.5$ MeV run. Here, we show indicative results of feebly interacting bosons of mass 13, 15, 17, 18, or 20 MeV decaying to $e^{+}e^{-}$ being produced with cross sections 200 times smaller than the ATOMKI X17.
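For transparency, the yield estimate above can be reproduced with the arithmetic sketch below; it uses only the numbers quoted in this section, and the script is illustrative rather than part of the analysis code.
\begin{verbatim}
# Indicative X17 yield for a 30-day Pelletron run, using the numbers quoted above.
E_CHARGE = 1.602e-19               # proton charge, C

sigma_gamma = 2.5e-5 * 1e-24       # p + 7Li -> 8Be + gamma cross section at 1.0 MeV, cm^2
br_x17 = 6e-6                      # X17 branching ratio relative to the gamma channel
acceptance = 0.17                  # TPC acceptance for the e+e- pair

beam_current = 2e-6                # A
target_thickness = 2e19            # Li atoms / cm^2
run_time = 30 * 24 * 3600          # s

protons_per_s = beam_current / E_CHARGE
gamma_rate = protons_per_s * target_thickness * sigma_gamma     # gammas / s
n_gamma = gamma_rate * run_time
n_x17 = n_gamma * br_x17 * acceptance                           # detected X17 events

print(f"gammas produced:     {n_gamma:.2e}")
print(f"expected X17 events: {n_x17:.0f}")    # ~1.7e4, consistent with the text
\end{verbatim}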
The 90\% confidence levels were determined via toy Monte Carlo experiments in which we search for upward fluctuations of the background that mimic a signal. For these we employ RooFit to simulate 1000 experiments of pure background, generated randomly using the PDF fitted to our GEANT4 simulations of the IPC process. We then search for a signal as a function of mass, where the expected signal PDF is also determined by GEANT4 simulations. The 90\% upper limit is the signal yield that exceeds the value found in 90\% of the simulations.
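A minimal sketch of this toy procedure is shown below. The real analysis uses RooFit PDFs fitted to GEANT4 simulations; the background shape, the stand-in ``fit'' and the event counts used here are placeholders chosen purely to illustrate how the 90\% quantile of the background-only yields defines the upper limit.
\begin{verbatim}
# Sketch of the toy-Monte-Carlo 90% upper-limit procedure (illustrative placeholders
# only; the real analysis uses RooFit with GEANT4-derived signal and background PDFs).
import numpy as np

rng = np.random.default_rng(0)
n_toys = 1000
n_bkg = 100_000                        # background-only events per toy (placeholder)
m_test, sigma_m = 17.0, 0.5            # test mass and assumed resolution, MeV

def sample_background(n):
    # placeholder double-exponential invariant-mass spectrum
    m = np.concatenate([rng.exponential(3.0, n // 2),
                        rng.exponential(6.0, n // 2)]) + 4.0
    return m[m < 21.0]

def fitted_signal_yield(masses):
    # crude stand-in for a signal-plus-background fit: excess in a +-2 sigma window,
    # estimated against sidebands of equal total width
    d = np.abs(masses - m_test)
    window = d < 2 * sigma_m
    sideband = (d > 2 * sigma_m) & (d < 4 * sigma_m)
    return window.sum() - sideband.sum()

yields = np.array([fitted_signal_yield(sample_background(n_bkg)) for _ in range(n_toys)])
upper_limit = np.percentile(yields, 90)   # exceeds 90% of background-only outcomes
print(f"90% CL upper limit on the signal yield: {upper_limit:.1f} events")
\end{verbatim}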
\section{Expected Sensitivity}
\begin{figure}
\centering
\includegraphics[height=0.9\textwidth]{Be_105_ATOMIK_UL.pdf}
\caption{The black line shows the expected 90\% upper limits for a feebly interacting boson $X$ from the bump hunt at the TPC in the 30-day dataset for $p + ^{7}$Li$ \,\to\, ^{8}$Be$ +( e^{+}e^{-}) $ at $E_{p} = 1.05\,$MeV as a function of invariant mass of the boson. Also shown in red is the result for the ATOMKI experiment. The TPC will set an upper limit over two orders of magnitude smaller than ATOMKI at 17.0 MeV.
}
\label{fig:TPC_105_UL}
\end{figure}
The proposed experimental research programme will conclusively test the $^8\text{Be}$ anomaly seen in the ATOMKI experiments and, in the absence of the discovery of a new particle, will provide world leading limits on a variety of important low energy nuclear reactions. \Cref{fig:TPC_105_UL} shows the expected 90\% upper limit of the $^8\text{Be}$ search (solid black) in the mass range of $5 - 18 \,\text{MeV}$, along with the ATOMKI result at $E_p = 1.05$ MeV (red). We see that the $^8\text{Be}$ run of the proposed TPC will probe the $^8\text{Be}^* \to {}^8\text{Be}(e^+e^-)$ branching ratio two orders of magnitude better than required to be sensitive to the ATOMKI $^8\text{Be}$ anomaly.
The origin of the ATOMKI Beryllium anomaly has been widely discussed in the literature. While \cite{2021Aleksejevs_arXiv} proposes a SM explanation, resonances beyond the SM have been explored in \cite{2022Viviani_PRC, 2020Feng_PRD, 2021Zhang_PLB, 2016Feng_PRL, Boehm:2002yz, Boehm:2003hm, Alves} (see also \cite{2017Feng_PRD} and references therein).
Following an $E_p = 1.05\,$MeV run, a high statistics run at $E_p = 4.5\,$MeV will provide even stronger limits on the $^8\text{Be}^* \to {}^8\text{Be}(e^+e^-)$ branching ratio, particularly at lower masses, as shown in \cref{fig:TPC_UL}.
\begin{figure}
\centering
\includegraphics[height=0.9\textwidth]{Be_45_NA48_UL.pdf}
\caption{The black line shows the expected 90\% upper limits for a feebly interacting boson $X$ from the bump hunt at the TPC in the 30-day dataset for $p + ^{7}$Li$ \,\to\, ^{8}$Be$\: +( e^{+}e^{-}) $ at $E_{p} = 4.5\,$MeV. Also shown are the 90\% confidence upper limits from the NA48/2 experiment \cite{NA48} for a feebly interacting boson $X$ from $\pi^{0} \to \gamma + X(e^{+} e^{-})$ in the range $5 - 20\,$MeV. The TPC will set upper limits significantly below NA48/2 over the mass range 6 -- 20 MeV.
}
\label{fig:TPC_UL}
\end{figure}
\subsection{Sensitivity to Dark Photons}
A dark photon is a hypothetical massive gauge boson \cite{Holdom:1985ag}, which is `dark' as it is related to a gauge symmetry in a hypothetical dark sector. The dark photon can interact feebly with standard model particles due to kinetic mixing with the photon or through higher dimensional operators, and could act as a mediator between standard model particles and dark matter.\footnote{Higher dimensional operators (that is, operators with mass dimension greater than four) are non-renormalizable and require the introduction of additional fields.} Assuming that the dark photon couples only via kinetic mixing, we can write the Lagrangian as~\cite{Fabbrichesi:2020wbt}
\begin{equation}
\label{eq:L-dark-photon-1}
{\cal L} \supset
- \frac{1}{4}\hat{F}_{\mu\nu}\hat{F}^{\mu\nu}
- \frac{1}{4}\hat{F}^{'}_{\mu\nu}\hat{F}^{'\mu\nu}
- \frac{\epsilon}{2}\hat{F}_{\mu\nu}\hat{F}^{'\mu\nu}
- \frac{1}{2}m^{2}_{X} \hat{A}^{'}_{\mu}\hat{A}^{'\mu}
+ e\hat{A}_{\mu}\sum_{f}Q_{f}\bar{f}\gamma^{\mu}f \,,
\end{equation}
where $\hat{F}_{\mu\nu} = \partial_{\mu}\hat{A}_{\nu} - \partial_{\nu}\hat{A}_{\mu}$ and $\hat{F}^{'}_{\mu\nu}=\partial_{\mu}\hat{A}^{'}_{\nu}-\partial_{\nu}\hat{A}^{'}_{\mu}$ denote the field strength tensors of the photon and the dark photon before mixing, respectively, $\epsilon$ is the kinetic mixing parameter, $m_X$ is the mass parameter for the dark photon, $e$ is the unit of electric charge and $Q_f$ are the electric charges of the SM fermions $f$. We can now perform the non-unitary transformation
%
\begin{equation}
\label{eq:dark-photon-transformation}
\begin{pmatrix}
\hat{A}_{\mu} \\
\hat{A}^{'}_{\mu}
\end{pmatrix}
=
\begin{pmatrix}
1 & -\epsilon \\
0 & 1
\end{pmatrix}
\begin{pmatrix}
A_{\mu} \\
A^{'}_{\mu}
\end{pmatrix}\,,
\end{equation}
to remove the kinetic mixing term. At order $\epsilon$ the Lagrangian, \cref{eq:L-dark-photon-1}, then becomes
%
\begin{equation}
\label{eq:L-dark-photon-2}
{\cal L} \supset
- \frac{1}{4}F_{\mu\nu}F^{\mu\nu}
- \frac{1}{4}F^{'}_{\mu\nu}F^{'\mu\nu}
- \frac{1}{2}m^{2}_{X}A^{'}_{\mu}A^{'\mu}
+ e(A_{\mu} - \epsilon A^{'}_{\mu})\sum_{f}Q_{f}\bar{f}\gamma^{\mu}f \,,
\end{equation}
where $F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu}$ and $F^{'}_{\mu\nu}=\partial_{\mu}A^{'}_{\nu}-\partial_{\nu}A^{'}_{\mu}$.
At the nucleon level the current $e\epsilon\sum_{f}Q_{f}\bar{f}\gamma^{\mu}f$ can be written as $e(\epsilon_{p}\bar{p}\gamma_{\mu}p + \epsilon_{n}\bar{n}\gamma_{\mu}n)$, where $p$ and $n$ denote the proton and neutron, respectively, $\epsilon_p = \epsilon(2Q_u + Q_d) = \epsilon$ and $\epsilon_n = \epsilon(Q_u + 2Q_d) = 0$. The $^8\text{Be}^* \to {}^8\text{Be}\,X$ branching ratio is~\cite{2016Feng_PRL}
\begin{equation}
\label{eq:dark-photon-br}
\frac{\text{BR}(^8\text{Be}^* \to {}^8\text{Be}\,X)}{\text{BR}(^8\text{Be}^* \to {}^8\text{Be}\,\gamma)}
= (\epsilon_{p} + \epsilon_{n})^{2}\frac{|\vec{p}_{X}|^{3}}{|\vec{p}_{\gamma}|^{3}}
= \epsilon^{2}\frac{|\vec{p}_{X}|^{3}}{|\vec{p}_{\gamma}|^{3}}\,,
\end{equation}
where $\vec{p}_{X}$ and $\vec{p}_{\gamma}$ are the 3-momenta of the dark photon and the photon, respectively.
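Neglecting the recoil of the heavy nucleus, $|\vec{p}_{\gamma}| \simeq \Delta E$ and $|\vec{p}_{X}| \simeq \sqrt{\Delta E^2 - m_X^2}$ with $\Delta E = 18.15$\,MeV, so \cref{eq:dark-photon-br} reduces to $\epsilon^{2}(1-m_X^2/\Delta E^2)^{3/2}$. The sketch below evaluates this suppression factor; the recoil-free approximation and the numerical inputs are ours, added for illustration.
\begin{verbatim}
# Kinematic factor of Eq. (dark-photon-br) in the recoil-free approximation
# |p_gamma| ~ Delta_E, |p_X| ~ sqrt(Delta_E^2 - m_X^2). Inputs are illustrative.
def br_ratio(eps, m_x, delta_e=18.15):
    """BR(8Be* -> 8Be X) / BR(8Be* -> 8Be gamma) for a kinetically mixed dark photon."""
    if m_x >= delta_e:
        return 0.0
    return eps**2 * (1.0 - (m_x / delta_e) ** 2) ** 1.5

print(br_ratio(eps=1e-3, m_x=17.0))   # strong phase-space suppression near threshold
\end{verbatim}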
\begin{figure}
\centering
\includegraphics[height=0.65\textwidth]{dark-photon-limit.pdf}
\caption{Constraints on the dark photon mixing parameter $\epsilon^{2}$ as a function of the dark photon mass from the TPC (solid purple line) together with constraints of the experiments NA48/2 \cite{NA48} and the beam dump experiments SLAC E141 and FNAL E774 \cite{E141}. Also shown are the limits derived from the measurement of the electron magnetic moment \cite{electron_g-2}. The result from the $E_{p} = 4.5$\,MeV run of the TPC provides the world's best exclusion for a promptly decaying dark photon in the $5-20$\,MeV range.
}
\label{fig:TPC_eps}
\end{figure}
The upper limit shown in \cref{fig:TPC_UL} then provides limits on $\epsilon^{2}$, which is shown in \cref{fig:TPC_eps} together with the constraints of the experiments NA48/2~\cite{NA48}, the beam-dump experiments SLAC E141 and FNAL E774 \cite{E141} and the limits derived from the electron magnetic moment, $(g-2)_e$~\cite{electron_g-2}.
We see that the proposed TPC experiment will test an important unprobed region of parameter space in the mass range $6 - 20\,$MeV.
\subsection{Sensitivity to Axion-Like Particles}
A pseudoscalar explanation of the ATOMKI anomaly seen in the $^8\text{Be}(18.15)$ transition was first discussed in \cite{Ellwanger:2016wfe}. To investigate the TPC sensitivity to Axion-Like Particles (ALPs), we start with a Lagrangian describing the ALP coupling to nucleons \cite{Alves},
\begin{equation}
\label{eq:LALPnucleons}
{\cal L} = a \bar{N} i \gamma_5 (g_{aNN}^{(0)} + g_{aNN}^{(1)} \tau^3) N\,,
\end{equation}
where $N = (p, n)^T$ denotes the nucleon isospin doublet containing the proton and the neutron. The ALP coupling to the isosinglet and isotriplet currents is given by $g_{aNN}^{(0)}$ and $g_{aNN}^{(1)}$, respectively.
The nuclear decay anomalies can be explained using ALPs if (i) the ALP mass $m_a$ in natural units is close to the invariant mass of the observed resonance $m_{ee}$ ($\approx 17$\,MeV) and (ii) the branching ratio for ALP decay to electron-positron pairs satisfies $\text{BR}(a\to e^+ e^-)\approx 1$, since the $a\rightarrow\gamma\gamma$ branching ratio is highly constrained in this mass region. Taking the Beryllium anomaly observed by ATOMKI as an example, the ratio of the ALP emission rate of $^8\text{Be}(18.15)$ to the corresponding photon emission rate is given by~\cite{Donnelly:1978ty, Barroso:1981bp, Alves}
\begin{align} \label{eq:BeWidth}
\frac{\Gamma(^8\text{Be}^*\to ^8\!\!\text{Be}+a )}{\Gamma(^8\text{Be}^*\to ^8\!\!\text{Be}+\gamma )}\approx\frac{1}{2\pi\alpha}
\left| \frac{g_{aNN}^{(0)}}{\mu^{0}-1/2}\right|^2\left(1-\frac{m_a^2}{\Delta E^2}\right)^{3/2}\,,
\end{align}
where $\alpha$ is the fine-structure constant, $\mu^0\approx 0.88$ is the isoscalar magnetic moment~\cite{Donnelly:1978ty} and $\Delta E=18.15$\,MeV is the energy splitting of the Beryllium transition.
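For reference, the ratio in \cref{eq:BeWidth} can be evaluated directly; the sketch below uses the $\alpha$, $\mu^0$ and $\Delta E$ values quoted above, while the coupling and mass passed in are purely illustrative.
\begin{verbatim}
# Evaluation of Eq. (BeWidth): ALP emission rate of 8Be(18.15) relative to photon
# emission. alpha, mu0 and Delta_E are the values quoted in the text; the coupling
# g_ann0 and mass m_a below are illustrative only.
import math

ALPHA = 1.0 / 137.036    # fine-structure constant
MU0 = 0.88               # isoscalar magnetic moment
DELTA_E = 18.15          # MeV

def alp_to_gamma_ratio(g_ann0, m_a):
    phase_space = (1.0 - (m_a / DELTA_E) ** 2) ** 1.5
    return (1.0 / (2.0 * math.pi * ALPHA)) * abs(g_ann0 / (MU0 - 0.5)) ** 2 * phase_space

print(alp_to_gamma_ratio(g_ann0=1e-4, m_a=17.0))
\end{verbatim}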
Assuming a null observation, the blue-shaded region in \cref{fig:ALPs-1} shows a projected exclusion limit from observations of the $^8\text{Be}(18.15)$ transition at the TPC in the $m_a-|g_{aNN}^{(0)}|^2$ plane, assuming the 90\% upper limit on the $^8\text{Be}$ branching ratio following the 30-day $E_p = 1.05\,\text{MeV}$ run as shown in \cref{fig:TPC_105_UL}. The green region could explain the ATOMKI $^8\text{Be}(18.15)$ measurement and the orange region could explain the ATOMKI $^4\text{He}(21.01)$ observation. The TPC experiment has the power to comfortably test these anomalies.
\begin{figure}
\centering \includegraphics[height=0.45\textwidth]{g-ma.pdf}
\caption{Reach of the proposed TPC experiment in the $m_a-|g_{aNN}^{(0)}|^2$ plane. The green (orange) regions are consistent with the ATOMKI observation in the $^8\text{Be}(18.15)$ ($^4\text{He}(21.01)$) transition, whilst the blue region would be excluded by the TPC if no resonance is observed.
}
\label{fig:ALPs-1}
\end{figure}
The ALP-nucleon couplings can be related to more fundamental ALP-SM couplings given at a high energy, or Ultra-Violet (UV), scale. These couplings are defined in terms of a general effective Lagrangian given here up to operators of mass dimension-5:
\begin{equation}\label{Leff}
\begin{aligned}
{\cal L}_{\rm eff}^{D\le 5}
&= \frac12 \left( \partial_\mu a\right)\!\left( \partial^\mu a\right) - \frac{m_{a,0}^2}{2}\,a^2
+ \frac{\partial^\mu a}{f}\,\sum_F\,\bar\psi_F{\hspace{0.3mm}}\bm{c}_F\,\gamma_\mu{\hspace{0.3mm}}\psi_F
+ c_\phi\,\frac{\partial^\mu a}{f}\,
\big( \phi^\dagger i \hspace{-0.6mm}\overleftrightarrow{D}\hspace{-1mm}_\mu{\hspace{0.3mm}}\phi \big) \\
&\quad\mbox{}+ c_{GG}\,\frac{\alpha_s}{4\pi}\,\frac{a}{f}\,G_{\mu\nu}^a\,\tilde G^{\mu\nu,a}
+ c_{WW}{\hspace{0.3mm}}\frac{\alpha_2}{4\pi}\,\frac{a}{f}\,W_{\mu\nu}^A\,\tilde W^{\mu\nu,A}
+ c_{BB}\,\frac{\alpha_1}{4\pi}\,\frac{a}{f}\,B_{\mu\nu}\,\tilde B^{\mu\nu} \,,
\end{aligned}
\end{equation}
where $f = -2 c_{GG} f_a$ is related to the ALP decay constant $f_a$ and defines the new physics scale $\Lambda = 4 \pi f$ which is the scale at which some new global symmetry is spontaneously broken, with the ALP $a$ emerging as the associated pseudo Nambu-Goldstone boson. $G_{\mu\nu}^a$, $W_{\mu\nu}^A$ and $B_{\mu\nu}$ are the field-strength tensors of $SU(3)_c$, $SU(2)_L$ and $U(1)_Y$, the dual field strength tensors are labelled with a tilde in the usual way, and $\alpha_s=g_s^2/(4\pi)$, $\alpha_2=g^2/(4\pi)$ and $\alpha_1=g^{\prime\,2}/(4\pi)$ denote the corresponding couplings. The sum in the first line extends over the chiral fermion multiplets $F$ of the SM and the quantities $\bm{c}_F$ are $3\times 3$ hermitian matrices in generation space. The field $\phi$ represents the Higgs doublet.
\begin{figure}
\centering
\includegraphics[height=0.45\textwidth]{cu-cGG.pdf}
\caption{Constraints on ALP couplings in the $c_{GG}-c_u$ plane, for an ALP mass of $m_a=17.01$~MeV, with $c_e/f$ or $c_L/f \gtrsim 11.3\,\text{TeV}^{-1}$ and all other Wilson coefficients set to zero. The green (orange) band shows the region consistent with the ATOMKI observation of the $^8\text{Be}(18.15)$ ($^4\text{He}(21.01)$) transition. The blue shaded region would be excluded by a $30$ day Pelletron run at $E_p=1.05\,$MeV. The pink region is excluded by $K^- \to \pi^- a (e^+ e^-)$ decays \cite{Baker:1987gp}.}
\label{fig:ALPs2}
\end{figure}
The ALP-nucleon coupling can be related to the Wilson coefficients of \cref{Leff} defined at the UV scale via renormalization group evolution \cite{Bauer:2020jbp,Chala:2020wvs}. Assuming flavour-universal ALP couplings in the UV the isosinglet ALP-nucleon coupling is given by \cite{Bauer}
\begin{align} \label{eq:gaNNNum}
g_{aNN}^{(0)} = 10^{-4}\, \left[ \frac{1\,\text{TeV}}{f} \right]
\times \Big[ &- 4.2 \,c_{GG} + 9.7\times 10^{-4}\,c_{WW} + 9.7\times 10^{-5}\,c_{BB} \notag\\
&- 2.0\,c_u(\Lambda) - 2.0\,c_d(\Lambda) +4.0\,c_Q(\Lambda) \notag\\
&+ 2.9\times 10^{-4}\,c_e(\Lambda) - 1.6\times10^{-3}\,c_L(\Lambda) \Big]^2\,.
\end{align}
Using this relation, we can now show the reach of the TPC on the UV parameter space and compare to independent experimental measurements constraining the same parameters. In \cref{fig:ALPs2}, we show the dominant constraint on the $c_{GG}-c_u$ plane, for $m_a=17$~MeV, assuming that $c_e/f$ or $c_L/f \gtrsim 11.3\,\text{TeV}^{-1}$ and all the other UV Wilson coefficients are zero. We focus on this particular parameter space since it is the only one with a viable region explaining the ATOMKI helium anomaly~\cite{Bauer}.\footnote{Note that such large ALP couplings to electrons are already severely constrained by beam dump experiments. These bounds may, however, be weakened by the inclusion of ALP-quark and -gluon couplings as required here.} Other coupling combinations and ALP couplings to electrons smaller than $c_{e}/f = 11.2\,\text{TeV}^{-1}$ or $c_{L}/f = 11.3\,\text{TeV}^{-1}$ \cite{Bauer} are excluded predominantly by $K_L \to \pi^0 X$ decays~\cite{CortinaGil:2021nts}, with $X$ decaying invisibly or escaping the NA62 detector. In \cref{fig:ALPs2}, the green and orange bands show the $3\sigma$ regions consistent with the ATOMKI measurements of the $^8\text{Be}(18.15)$ and $^4\text{He}(21.01)$ transitions, respectively. The pink region is excluded by $K^- \to \pi^- a (e^+ e^-)$ decays \cite{Baker:1987gp}. For better readability we omit the weaker constraints from $K_L \to \pi^0 X$~\cite{CortinaGil:2021nts} and $K^- \to \pi^- \gamma \gamma$ decays \cite{E949:2005qiy}.
The blue region would be excluded by a $30$ day TPC run at $E_p = 1.05\,$MeV. Note that the projected TPC limit based on an observation of the Beryllium transition alone is capable of excluding the ALP explanation of \emph{both} the ATOMKI Beryllium and Helium observations and also provides the leading constraint in part of the plane.
\section{Future Research Plans}
\begin{table}
\centering
\begin{tabular}{ |c|c|c| }
\hline
Reaction & Q-value (MeV) & Mass range for search \\ \hline
$p+ ^{7}$Li$\to ^{8}$Be$ \:+ (e^{+} e^{-})$ & 17.25 & 10 - 20 MeV \\
\hline
$p+ ^{3}$H$\to ^{4}$He$ \:+ (e^{+} e^{-})$ & 19.81 & 17 - 22 MeV \\
$p+ ^{11}$B$\to ^{12}$C$ \:+ (e^{+} e^{-})$ & 15.96 & 9 - 19 MeV \\
$p+ ^{27}$Al$\to ^{28}$Si$ \:+ (e^{+} e^{-})$ & 11.57 & 9 - 15 MeV \\
$p+ ^{25}$Mg$\to ^{26}$Al$ \:+ (e^{+} e^{-})$ & 6.31 & 5 - 10 MeV \\
$p+ ^{12}$C$\to ^{13}$N$ \:+ (e^{+} e^{-})$ & 1.94 & 3 - 5.5 MeV \\
\hline
\end{tabular}
\caption{Indicative program for future searches for new weakly coupled bosons. While an initial run would focus on $p\,+\,{}^7\text{Li}$ collisions, potential targets at subsequent runs are also shown.
`Q-value' is the energy difference between the proton plus target nucleus and the final state nucleus.
}
\label{table:program}
\end{table}
The nuclear reaction approach has an inherent advantage over particle physics experiments in that it is possible to tune the end-point of the IPC background via the choice of nuclear reaction. For example, the NA48/2 result is limited at low mass by the $\pi^{0} \to \gamma e^{+} e^{-}$ background. Lower-mass searches are therefore best made with nuclear reactions that populate lower excitation energies in the final state, e.g.\ $p+ ^{12}$C$ \,\to\, ^{13}$N$ + (e^{+} e^{-})$ and $p+ ^{27}$Al$ \,\to\, ^{28}$Si$ + (e^{+} e^{-})$, since reactions with smaller Q-values have a correspondingly lower endpoint for the IPC background.
Higher mass regions can be probed using the $p+^{3}$H$\,\to\, ^{4}$He$+e^{+}e^{-}$ reaction and higher beam energies such as those provided by the ANU 14 UD tandem accelerator.
\begin{table}
\centering
\begin{tabular}{|c|cccc|}
\hline
$J_\ast^{P_\ast}$ & Scalar $X$ & Pseudoscalar $X$ & Vector $X$ & Axial Vector $X$ \\
\hline
$0^-$ & $-$ & $\checkmark$ & $-$ & $\checkmark$\\
$0^+$ & $\checkmark$ & $-$ & $\checkmark$ & $-$\\
$1^-$ & $\checkmark$ & $-$ & $\checkmark$ & $\checkmark$\\
$1^+$ & $-$ & $\checkmark$ & $\checkmark$ & $\checkmark$\\
\hline
\end{tabular}
\caption{Allowed spin-parities of the $X$ particle in different transitions $N_\ast \to N$, for $J^P = 0^+$ ground states (such as $^8$Be and $^4$He).}
\label{tab:allowed-resonances}
\end{table}
An indicative program of reactions covering the 5 -- 22 MeV mass range is shown in \cref{table:program}. The beam energies will be chosen to populate well-known resonances in the final states, particularly for $p+ ^{12}$C$\,\to\, ^{13}$N$ + (e^{+} e^{-})$. In selecting the reactions and excitation energies we will be cognisant of the selection rules for differing types of new-physics bosons and of the spins and parities of the target resonance and final-state ground state, such as those shown in \cref{tab:allowed-resonances}.
Should we find candidate bosons, we can determine their spin and parity via the angular distributions of the $(e^{+}e^{-})$ combined with knowledge of the spin and parity of the initial and final nuclear states.
\section{Conclusions}
We propose to build a state-of-the-art time projection chamber integrated with the University of Melbourne Pelletron to perform a series of experiments to resolve the conjecture of the X17 particle and to search for new physics originating from bosons in the 5 -- 21 MeV mass range that feebly couple to nuclei. The TPC will be constructed and installed on the University of Melbourne Pelletron within one year of funding. We expect to provide a definitive result on the existence of the X17 one year after completion of construction. Assuming that we do not find the X17, we will begin the longer-term search for feebly interacting bosons using proton-induced nuclear reactions with a variety of targets and proton energies.
\section*{Acknowledgements}
The authors would like to thank
Tim Grey (ANU) for the calculation of the energy and angular correlations of the electron-positron
pairs from the normal electromagnetic decay (IPC) of the 18.15 MeV M1 transition in $^{8}$Be.
We thank Nicholas Jackson (Melbourne), who studied the design of the TPC for his M.Sc.~thesis. The COMSOL, GEANT4, GENFIT, RAVE and GARFIELD calculations resulted from his work.
We thank Eraldo Oliveri, Rui de Oliveria, Bertrand Mehl, and Hans Muller, all from CERN, for their advice on the construction and signal readout of the TPC.
We thank Kimika Uehara (Melbourne) who constructed and tested the prototype TPC target chamber as a summer research project in 2022.
This research was partially supported by the Australian Government through the Australian Research Council Centre of Excellence for Dark Matter Particle Physics (CDM, CE200100008).
|
{
"arxiv_id": "2302.13240",
"language": "en",
"timestamp": "2023-02-28T02:13:59",
"url": "https://arxiv.org/abs/2302.13240",
"yymm": "2302"
} | \section{Introduction}
Evidence suggests that the human brain operates as a dual system. One system learns to repeat actions that lead to a reward, analogous to a model-free agent in reinforcement learning; the other learns a model of the environment which is used to plan actions, analogous to a model-based agent. These systems coexist in both cooperation and competition, which allows the human brain to negotiate a balance between cognitively cheap but inaccurate model-free algorithms and relatively precise but expensive model-based algorithms \cite{Gershman2017}.
Despite the benefits causal inference can bring to autonomous learning agents, the degree of integration into artificial intelligence research is limited. This limitation becomes a risk, particularly because data-driven models are often used to infer causal effects. Relying solely on data, which is never bias-free, eventually leads to untrustworthy decisions and sub-optimal interventions \cite{Prosperi2020}.
In this paper, we present \textit{Q-Cogni}, a framework that integrates autonomous causal structure discovery and causal inference into a model-free reinforcement learning method. There are several emergent methods for integrating causality with reinforcement learning, such as reward correction \cite{Buesing2018}, meta-reinforcement learning \cite{dasgupta2019causal}, latent causal-transition models \cite{gasse2021causal}, schema networks \cite{kansky2017schema} and explainable agents \cite{Madumal2020}. However, no existing method embeds causal reasoning, derived from an autonomously discovered causal structure of the environment, in the learning process of a reinforcement learning agent to guide the generation of an optimal policy. Thus, our approach is able to target improvements in policy quality, learning efficiency and interpretability concurrently.
\textit{Q-Cogni} samples data from an environment to discover the causal structure describing the relationship between state transitions, actions and rewards. This causal structure is then used to construct a \textit{Bayesian Network} which is used during a redesigned \textit{Q-Learning} process where the agent interacts with the environment guided by the probability of achieving the goal and receiving rewards in a probabilistic manner. The causal structure integrated with the learning procedure delivers higher sample efficiency as it causally manages the trade-off between exploration and exploitation, is able to derive a broader set of policies as rewards are much less sparse and provides interpretability of the agent's decision making in the form of conditional probabilities related to each state transition for a given set of actions.
We validate our approach on the Vehicle Routing Problem (VRP) \cite{toth2002overview}. We start by comparing optimal learning metrics against the state-of-the-art reinforcement learning algorithms \textit{PPO} \cite{schulman2017proximal} and \textit{DDQN} \cite{van2016deep}, using the \textit{Taxi-v3} environment from \textit{OpenAI Gym} \cite{brockman2016openai}. We also compare the advantages and disadvantages of \textit{Q-Cogni} against the shortest-path search algorithms \textit{Dijkstra's} \cite{dijkstra1959note} and \textit{A*} \cite{Hart1968}, with a particular focus on understanding applicability and scalability. Finally, we run experiments on a real-world-scale problem, using the New York City TLC trip record data, which contains all taxi movements in New York City from 2013 to date \cite{NYC_data}, to validate \textit{Q-Cogni's} capabilities to autonomously route taxis for a given pickup and drop-off.
Our contributions with \textit{Q-Cogni} are three-fold. Firstly, \textit{Q-Cogni} is the first fully integrated, explainable, domain-agnostic and hybrid model-based and model-free reinforcement learning method that introduces autonomous causal structure discovery to derive an efficient model of the environment and uses that causal structure within the learning process. Secondly, we redesigned the \textit{Q-Learning} algorithm to use causal inference in the action selection process and a probabilistic Q-function during training in order to optimise policy learning. Finally, through extensive experiments, we demonstrate \textit{Q-Cogni's} superior capability in achieving better policies, improved learning efficiency and interpretability as well as near-linear scalability to higher dimension problems in a real-world navigation context.
\section{Background}
The focus of this work lies at the unification of causal inference and reinforcement learning. This is an emerging field that aims to overcome challenges in reinforcement learning such as 1) the lack of ability to identify or react to novel circumstances agents have not been programmed for \cite{darwiche2018human,chen2018lifelong}, 2) low levels of interpretability that erodes user's trust and does not promote ethical and unbiased systems \cite{ribeiro2016should,marcus2018deep} and 3) the lack of understanding of cause-and-effect relationships \cite{Pearl2010}.
Our approach builds upon a wealth of previous contributions to these areas, which we briefly cover below.
\vspace{1mm}
\noindent {\bf Causal Structure Discovery.} Revealing causal information by analysing observational data, i.e. ``causal structure discovery", has been a significant area of recent research to overcome the challenges with time, resources and costs by designing and running experiments \cite{Kuang2020}.
Most of the work associated with integrating causal structure discovery and reinforcement learning have been focused on using reinforcement learning to discover cause-and-effect relationships in environments which agents interact with to learn \cite{zhu2019causal,wang2021ordering,huang2020causal,amirinezhad2022active,sauter2022meta}. To our knowledge, a small amount of work has explored the reverse application, such as schema networks \cite{kansky2017schema}, counterfactual learning \cite{lu2020sample} and causal MDPs \cite{lu2022efficient}.
We build upon this work and redesign the way in which structural causal models (SCMs) are used. In the related work they are typically used to augment input data with what-if scenarios a priori, before the agent's learning process. In our approach, the SCM is embedded as part of a redesigned Q-Learning algorithm and only used during the learning process. Our approach also enables learning a broader set of policies since what-if scenarios are estimated for each state-action pair during the learning process. This not only improves policy optimality but also
provides a superior sample efficiency as it allows for “shortcutting” the exploration step during the learning process.
\vspace{1mm}
\noindent {\bf Causal Inference.} Recent work has demonstrated the benefits of integrating causal inference in reinforcement learning.
In Seitzer, Sch\"{o}lkopf, and Martius \shortcite{NEURIPS2021_c1722a79} the authors demonstrate improvement in policy quality by deriving a measure that captures the causal influence of actions on the environment in a robotics control environment and devise a practical method to integrate in the exploration and learning of reinforcement learning agents.
In Yang et al. \shortcite{yang2022training} the authors propose an augmented DQN algorithm which receives interference labels during training as an intervention into the environment and embeds a latent state into its model, creating resilience by learning to handle abnormal events (e.g. frozen screens in Atari games).
In Gasse et al. \shortcite{gasse2021causal} the authors derive a framework to use a structural causal model as a Partially Observable Markov Decision Process (POMDP) in model-based reinforcement learning.
Leveraging upon these concepts, in our approach we expand further the structural causal model and fit it with a \textit{Bayesian Network}. This enables our redesigned Q-Learning procedure to receive rewards as a function of the probability of achieving a goal for a given state-action pair, significantly improving the sample efficiency of the agent, as in each step the agent is able to concurrently derive dense rewards for several state transitions regulated by the causal structure. To our knowledge this is an integration perspective not yet explored.
\vspace{1mm}
\noindent {\bf Model-Free Reinforcement Learning.}
Central to the reinforcement learning paradigm is the learning agent, which is the ``actor'' that learns the optimal sequence of actions for a given task, i.e. the optimal policy. As this policy is not known a priori, the aim is to develop an agent capable of learning it by interacting with the environment \cite{kaelbling1996reinforcement}, an approach known as model-free reinforcement learning.
Model-free reinforcement learning relies on algorithms that sample from experience and estimate a utility function, such as \textit{SARSA}, \textit{Q-Learning} and \textit{Actor-Critic} methods \cite{arulkumaran2017brief}. Recent advances in deep learning have promoted the growth of model-free methods \cite{ccalicsir2019model}. However, whilst model-free reinforcement learning is a promising route to human-level artificial intelligence, it comes with its own limitations. These include: applicability restricted to a narrow set of assumptions (e.g. a Markov Decision Process) that are not necessarily reflective of the dynamics of the real-world environment \cite{StJohn2020}; lower performance when evaluating off-policy decisions (i.e. policies different from those contained in the underlying data used by the agent) \cite{Bannon2020}; and perpetual partial observability, since sensory data provide imperfect information about the environment and hidden variables are often the ones causally related to rewards \cite{Gershman2017}. These limitations can be overcome with explicit models of the environment, i.e. model-based reinforcement learning. However, whilst the model-based approach would enhance sample efficiency, it would also come at the cost of increased computational complexity, as many more samples are required to derive an accurate model of the environment \cite{polydoros2017survey}.
In our work we use causal structure discovery to simultaneously provide model-free reinforcement learning agents with the ability of dealing with imperfect environments (e.g., latent variables) and maintain sample efficiency of a model-based approach. A hybrid approach not extensively explored to our knowledge.
\section{Q-Cogni}
We present \textit{Q-Cogni}, a framework that integrates autonomous causal structure discovery, causal inference and reinforcement learning. Figure \ref{CRLframework} illustrates the modules and interfaces which we further detail below.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, height=300pt]{CRLframework.png}
\caption{Q-Cogni causal reinforcement learning framework with its modules highlighted.}
\label{CRLframework}
\end{figure}
\subsection{Autonomous Causal Structure Discovery}
The first module in \textit{Q-Cogni} is designed to autonomously discover the causal structure contained in an environment. It starts by applying a random walk in the environment while storing the states, actions and rewards. The number of steps a random walk requires to visit every state scales with the harmonic number of the number of states, which approximates the natural logarithm and therefore grows only slowly, making this sampling step efficient. This sampled dataset contains all the information necessary to describe the full state-action space and its associated transitions. A further benefit of our approach is that this step only needs to be performed once regardless of the environment configuration.
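A minimal sketch of this sampling step is shown below, assuming a Gym-style environment (classic Gym API); the column names used to store the transitions are our own choice.
\begin{verbatim}
# Minimal sketch of the random-walk sampling step (classic Gym API; newer Gymnasium
# versions return extra values from reset/step). Column names are our own choice.
import gym
import pandas as pd

env = gym.make("Taxi-v3")
records = []
obs = env.reset()
for _ in range(500_000):
    action = env.action_space.sample()                 # random walk over actions
    next_obs, reward, done, info = env.step(action)
    records.append({"state": obs, "action": action,
                    "reward": reward, "next_state": next_obs})
    obs = env.reset() if done else next_obs

samples = pd.DataFrame(records)   # handed to the structure-discovery step below
\end{verbatim}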
We use the \textit{NOTEARS} algorithm \cite{Zheng2018} in the \textit{Q-Cogni} framework, providing an efficient method to derive the causal structure encoded in the dataset sampled from the environment.
The resulting structure learned is then encoded as a DAG \textit{G} with nodes \textit{v} $\in$ \textit{G}, state variables \textit{x}, actions \textit{a} and edges \textit{e} $\in$ \textit{G} which represent the state transition probabilities. With a maximum likelihood estimation procedure, the discovered structure is then fitted with the dataset sample generated to estimate the conditional probability distributions of the graph and encode it as a \textit{Bayesian Network}.
Whilst this module focuses on autonomous causal structure learning, \textit{Q-Cogni} provides the flexibility to receive human inputs in the form of tabu nodes and edges, i.e. constraints in which a human expert can introduce in the causal structure discovery procedure. This capability allows integration between domain knowledge with a data-driven approach providing a superior model in comparison to using either in isolation.
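A sketch of this discovery-and-fitting step using the \textit{CausalNex} package (the implementation cited later in this paper) is given below; the column names, edge-weight threshold and tabu constraints are illustrative placeholders, and the input table is assumed to hold one discretised column per state variable and action flag.
\begin{verbatim}
# Sketch of causal structure discovery (NOTEARS) and Bayesian Network fitting with
# CausalNex. Column names, threshold and tabu constraints are illustrative placeholders.
from causalnex.structure.notears import from_pandas
from causalnex.network import BayesianNetwork

# `samples` is a numeric/discretised table of state variables, action flags and rewards
sm = from_pandas(
    samples,
    tabu_edges=[("drop_off", "pax_in_taxi")],    # optional human-supplied constraints
    tabu_child_nodes=["taxi_row", "taxi_col"],   # location variables act only as parents
)
sm.remove_edges_below_threshold(0.8)             # keep only strong edges in the DAG
sm = sm.get_largest_subgraph()                   # BayesianNetwork needs a connected DAG

bn = BayesianNetwork(sm)
bn = bn.fit_node_states(samples)
bn = bn.fit_cpds(samples, method="MaximumLikelihoodEstimator")
\end{verbatim}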
\subsection{Causal Inference}
We leverage upon the causal structure model discovered and the \textit{Bayesian Network} of the environment estimated to provide \textit{Q-Cogni's} reinforcement learning module with causal inference capabilities.
The causal inference sub-module uses the causal DAG $G(V, E)$ and receives from the \textit{Q-Cogni} agent a single state $s \in S$ containing each state variable $x \in s$ with values sampled from the environment $M$, the actions list $A$ containing each action $a \in A$ and the first-priority sub-goal $o$ to be solved. The marginals in each node $v \in V \cap s$ are updated with the state variable values $x$ $\forall ~x$ $\in s$. The procedure described in Algorithm \ref{alg:q-cognicausalinference} selects the best action $a^*$ and calculates the associated $P(o={\rm True}|x, A)$ where $a^* \in A$. This is analogous to a probabilistic reward estimation \textit{r} for the given $(s, a^*)$ pair.
\begin{algorithm}[tb]
\begin{flushleft}
\caption{Q-Cogni causal inference routine}
\label{alg:q-cognicausalinference}
\textbf{Procedure}: INFER-MAX-PROB(G, s, A, o) \\
\textbf{Input}: $G(V)$: causal structure DAG as a function of nodes $v \in V$ where $V = \{s, A, O\}$ with a fitted $Bayesian Network$ containing $P(v|$ \textit{parents of} $v)$ $\forall v \in V$, $s$: a single state $s \in S$ containing state variables $x \in s$, $A$: actions list with each action $a \in A$ where $a$ values $\in \{True, False\}$ , $o$: node representing goal to be solved where $o \in O$ \\
\textbf{Output}: $a^*$: action $a \in A$ where $a = True$ $\land A \setminus a = False $ for $\max p$, $p$: $P(o = True|x, A)$ where $a^* \in A$ \\
\begin{algorithmic}[1]
\STATE Let $p = 0$, $a = False$ $\forall a \in A$, $a^* = \emptyset$
\FOR{each $v \in V \cap s$}
\STATE * Update each node $v$ representing a state variable $x \in s$ with its value
\STATE $v \gets x$
\ENDFOR
\FOR{each $a \in A$}
\STATE * Calculate the probability of $o = True$ for each action $a \in A$ when $a = True$
\STATE $a \gets True$
\STATE $a^- \gets False$ $\forall a^- \in A \setminus a$
\IF{$p < P(o = True| V \cap s, A)$}
\STATE $p \gets P(o = True| V \cap s, A)$
\STATE $a^* \gets a$
\ENDIF
\ENDFOR
\STATE \textbf{return} $a^*$, $p$
\end{algorithmic}
\end{flushleft}
\end{algorithm}
This module enables the gains in learning efficiency by the agent as it shortcuts the reinforcement learning exploration procedure through the structural prior knowledge of the conditional probability distributions of $(s,a)$ pairs encoded in the DAG. It also provides explicit interpretability capabilities to the \textit{Q-Cogni} reinforcement learning module by being able to estimate $P(o={\rm True}|x, A)$.
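A sketch of the routine in Algorithm \ref{alg:q-cognicausalinference}, written against \textit{CausalNex's} \texttt{InferenceEngine}, is shown below; the node and action names are placeholders and the goal node is assumed to take Boolean states.
\begin{verbatim}
# Sketch of INFER-MAX-PROB (Algorithm 1) using CausalNex's InferenceEngine.
# Node/action names are placeholders; the goal node is assumed to be Boolean.
from causalnex.inference import InferenceEngine

def infer_max_prob(bn, state, actions, goal):
    """Return the action maximising P(goal=True | state, action) and that probability."""
    ie = InferenceEngine(bn)
    best_action, best_p = None, 0.0
    for action in actions:
        evidence = dict(state)                                  # observed state variables
        evidence.update({a: (a == action) for a in actions})    # one-hot action assignment
        marginals = ie.query(evidence)                          # posteriors given evidence
        p = marginals[goal][True]
        if p > best_p:
            best_action, best_p = action, p
    return best_action, best_p
\end{verbatim}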
\subsection{Modified Q-Learning}
The modified \textit{Q-Learning} module employs a hybrid learning procedure that combines \textit{Q-Cogni's} causal inference module with an $\epsilon$-decay exploration strategy. In addition, one central idea in \textit{Q-Cogni} is to use the inherent and known structure of the reinforcement learning sub-goals for a given task to reduce the problem dimensionality. Whilst this is not a strict limitation of the approach, when available a priori it gives a significant advantage in computational efficiency for the learning procedure, as \textit{Q-Cogni} shrinks the state--action space to the subset that matters for a given sub-goal. \textit{Q-Cogni's} learning procedure can receive a prioritised list $O$ of ordered reinforcement learning sub-goals $o$ and uses that information to choose when to ``explore vs. infer''. If such a goal sequence is not known a priori, the benefits of our approach still hold, albeit with lower sample efficiency, which is nonetheless superior to that of a traditional agent that must balance exploration vs. exploitation.
To achieve that, for the prioritised sub-goal $o$, \textit{Q-Cogni} assesses whether $\max P(o=True|x, A)$ occurs when $a^* \in A$ is a $parent$ node $\in V$ of the sub-goal node $o$. In this case, the agent selects $a^*$ and applies it directly in the environment to obtain the reward $r$, adjusted by $P(o=True|x, A)$ during the value-function update procedure, a step taken to avoid reward sparsity and improve learning performance. The Q-table stores this result, providing a more robust estimation of value without having to perform wide exploration of the environment, in contrast to unadjusted rewards. Otherwise, the \textit{Q-Cogni} agent performs the $\epsilon$-decay exploration procedure. Algorithm \ref{alg:q-cognimodifiedq-learning} describes the modified \textit{Q-Learning} routine in \textit{Q-Cogni}.
\begin{algorithm}[tb]
\begin{flushleft}
\caption{Q-Cogni modified Q-Learning}
\label{alg:q-cognimodifiedq-learning}
\textbf{Procedure}: Q-COGNI-LEARN(G, M, A, O)\\
\textbf{Input}: $G(V)$: causal structure DAG as a function of nodes $v \in V$ and $V = \{s, A, O\}$, $M$: environment containing a list $S$ of states $s \in S$ where $s = \{s_{i} \dots s_{t}\}$, $A$: actions containing list of actions $a \in A$, $O$: sequence of goal nodes $o_j \in O \in V$ and $j \in [1, \dots, n_{goals}]$ in a priority order \\
\textbf{Parameters}: $N$: number of episodes, $\alpha$: learning rate, $\gamma$: discount rate, $\epsilon$: initial threshold for exploration, $\epsilon_{min}$: minimum $\epsilon$, $\delta$: decay rate for $\epsilon$ \\
\textbf{Output}: Q-table with $Q(s,a)$ pairs estimating the optimal policy $\pi$*
\begin{algorithmic}[1]
\STATE Initialise $Q(s,a)$ arbitrarily;
\FOR{each episode $n \in [1, \dots, N]$}
\STATE Initialise state $s$ = $s_i$ from environment $M$;
\STATE Let $j = 1$, $a = \emptyset $
\WHILE{$s$ $\ne$ $s_t$}
\STATE $a^*, p = $ INFER-MAX-PROB(G, s, A, $o_j$)
\IF {$a^* \in \textit{parents of}$ $o_j$}
\STATE $a \gets a^*$
\STATE $j \gets j + 1$
\ELSE
\STATE $\mu \gets u \sim \mathcal{U}(0,1)$
\IF{$\mu < \epsilon $}
\STATE $a \gets $ RANDOM($A$)
\ELSE
\STATE $a \gets \arg\max_{a} Q(s,a)$
\ENDIF
\STATE $\epsilon \gets \max(\epsilon_{min}, \epsilon*\delta)$
\ENDIF
\STATE * Apply action $a$ in environment $M$ with state $s$, observe reward $r$ and next state $s'$
\STATE $Q(s, a) \gets Q(s, a) + \alpha \cdot (r \cdot p + \gamma \cdot \max_{a} Q(s', a) - Q(s, a))$
\STATE $s \gets s'$
\ENDWHILE
\ENDFOR
\STATE \textbf{return} $Q(s, a)$, $\pi$*
\end{algorithmic}
\end{flushleft}
\end{algorithm}
This routine enables optimised learning. Policies are improved by the upfront knowledge acquired with the causal structure module and its representation of the state transition outcomes. Unnecessary exploration of state-action pairs that do not improve the probability of achieving the learning goal is eliminated, thus improving learning efficiency.
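For concreteness, a compact Python rendering of Algorithm \ref{alg:q-cognimodifiedq-learning} is sketched below; \texttt{decode\_state}, \texttt{parents\_of}, \texttt{infer\_max\_prob} and the action encoding are problem-specific placeholders, and the classic Gym step interface is assumed.
\begin{verbatim}
# Compact sketch of the modified Q-Learning loop (Algorithm 2). decode_state, parents_of,
# infer_max_prob and the mapping from action names to environment actions are placeholders.
import random
from collections import defaultdict

def q_cogni_learn(env, bn, actions, goals, episodes=1000, alpha=0.1, gamma=0.99,
                  eps=1.0, eps_min=0.05, decay=0.995):
    Q = defaultdict(float)                       # Q-table over (state, action) pairs
    for _ in range(episodes):
        obs, j, done = env.reset(), 0, False
        while not done:
            state = decode_state(obs)            # raw observation -> named state variables
            a_star, p = infer_max_prob(bn, state, actions, goals[j])
            if a_star in parents_of(bn, goals[j]):
                action = a_star                  # causal "shortcut": act on inferred action
                j = min(j + 1, len(goals) - 1)
            else:
                if random.random() < eps:
                    action = random.choice(actions)                     # explore
                else:
                    action = max(actions, key=lambda a: Q[(obs, a)])    # exploit
                eps = max(eps_min, eps * decay)
            next_obs, r, done, _ = env.step(action)
            target = r * p + gamma * max(Q[(next_obs, a)] for a in actions)
            Q[(obs, action)] += alpha * (target - Q[(obs, action)])
            obs = next_obs
    return Q
\end{verbatim}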
\section{Approach Validation}
To validate our approach, we start with
the Vehicle Routing Problem (VRP).
Here, we formally define the VRP problem and briefly discuss traditional solutions on which we build ours upon.
\vspace{1mm}
\noindent {\bf VRP.}
In our work, inspired by the emergence of self-driving cars, we use a variant of the VRP, where goods need to be picked up from a certain location and dropped off at their destination. The pick-up and drop-off must be done by the same vehicle, which is why the pick-up location and drop-off location must be included in the same route \cite{braekers2016vehicle}.
This variant is known as the VRP with Pickup and Delivery, a NP-hard problem extensively studied by the operations research community given its importance to the logistics industry. The objective is to find the least-cost tour, i.e. the shortest route, to fulfill the pickup and drop-off requirements \cite{ballesteros2016review}.
\vspace{1mm}
\noindent {\bf Shortest-Path Search Methods.}
The shortest-path search problem is one of the most fundamental problems in combinatorial optimisation. As a minimum, to solve most combinatorial optimisation problems either shortest-path search computations are called as part of the solving procedure or concepts from the framework are used \cite{gallo1986shortest}. Similarly, it is natural to solve the VRP with Pickup and Delivery with shortest-path search methods.
Despite successful shortest-path search algorithms such as Dijkstra's and A*, the VRP, as an NP-hard problem, can be very challenging for these methods. Exact algorithms like Dijkstra's can be computationally intractable depending on the scale of the problem \cite{drori2020learning}; approximate algorithms like A* provide only worst-case guarantees and are not scalable \cite{williamson2011design}. Reinforcement learning is an appealing direction for such problems as it provides a generalisable, sample-efficient and heuristic-free method to overcome the characteristic computational intractability of NP-hard problems.
\section{Experimental Results}
We start with the \textit{Taxi-v3} environment from \textit{OpenAI Gym} \cite{brockman2016openai}, a software implementation of an instance of the VRP with Pickup and Delivery. The environment was first introduced by Dietterich \shortcite{dietterich2000hierarchical} to illustrate challenges in hierarchical reinforcement learning.
Figure \ref{QLearningTaxiv3} illustrates the \textit{Taxi-v3} environment with an example of a solution using the \textit{Q-Learning} algorithm. The 5$\times$5 grid has four possible initial locations for the passenger and destination, indicated by R(ed), G(reen), Y(ellow), and B(lue). The objective is to pick up the passenger at one location and drop them off at another. The agent receives a reward of +20 points for a successful drop-off and a reward of -1 for every movement. There are 404 reachable discrete states and six possible actions by the agent (move west, move east, move north, move south, pickup and deliver).
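The single-integer \textit{Taxi-v3} observation decodes into the named state variables (taxi row, taxi column, passenger location, destination) used by the causal model; the snippet below relies on the \texttt{decode} helper provided by Gym's Taxi environment (classic Gym API).
\begin{verbatim}
# The Taxi-v3 observation is one integer; Gym's TaxiEnv provides a decode helper
# that yields the named state variables used by the causal model (classic Gym API).
import gym

env = gym.make("Taxi-v3")
obs = env.reset()
taxi_row, taxi_col, passenger_loc, destination = env.unwrapped.decode(obs)
print(taxi_row, taxi_col, passenger_loc, destination)
\end{verbatim}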
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, trim=4 4 4 4,clip]{Q-Learning_Taxiv3.png}
\caption{\textit{Taxi-v3} environment showing a sequence of actions derived by a \textit{Q-Learning} agent.}
\label{QLearningTaxiv3}
\end{figure}
All experiments were performed on a p4d.24xlarge GPU enabled AWS EC2 instance.
\subsection{Optimal Learning}
We trained the \textit{Q-Cogni}, \textit{Q-Learning}, \textit{DDQN} and \textit{PPO} algorithms for 1,000 episodes in the \textit{Taxi-v3} environment. \textit{DDQN} and \textit{PPO} were implemented using the \textit{RLlib} Python package \cite{liang2018rllib}. Hyperparameters for \textit{DDQN} and \textit{PPO} were tuned using the \textit{BayesOptSearch} module with 100 trials of 1,000 episodes each.
\subsubsection{Results and Discussion.}
Figure \ref{SCMTaxi} illustrates the autonomously discovered structure for the \textit{Taxi-v3} environment using \textit{Q-Cogni}, after 500,000 samples collected with a random walk. We used the implementation of the \textit{NOTEARS} algorithm in the \textit{CausalNex} Python package \cite{Beaumont_CausalNex_2021} to construct the causal structure model and fit the conditional probability distributions through a \textit{Bayesian Network}. The relationships discovered are quite intuitive, demonstrating the high performance of the method. For example, for the node \textit{passenger in taxi} to be $True$, the nodes \textit{taxi on passenger location} and \textit{pickup action} must be $True$.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, height=160pt, trim=4 4 4 4,clip]{SCMTaxi.png}
\caption{Discovered causal structure for the \textit{Taxi-v3} environment. Sub-goal nodes are highlighted in red.}
\label{SCMTaxi}
\end{figure}
The only domain-related inputs given to the algorithm were constraints such as that the sub-goal 1 node \textit{pax in taxi} cannot be a child of the sub-goal 2 node \textit{drop-off}, and that location nodes must act only as parent nodes. In addition, the list of ordered sub-goals was provided to \textit{Q-Cogni's} reinforcement learning module as [\textit{pax in taxi, drop-off}].
Figure \ref{Q-CognivsRL} shows the results achieved over the 1,000 training episodes. We observe that all methods present similar policy performance (total reward per episode towards the end of training). However, \textit{Q-Cogni} achieves superior stability and learning efficiency in comparison to all other methods, as it is able to use the causal structure model and its causal inference capability to accelerate the action selection process when interacting with the environment. In addition, Figure \ref{Interpretability} demonstrates the interpretability capabilities of \textit{Q-Cogni}. At each step, a probability of the best action to be taken is provided allowing for better diagnostics, tuning and most importantly assessment of possible biases built into autonomous agents such as \textit{Q-Cogni}.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth,height=4.5cm, trim=4 4 4 4,clip]{Q-CognivsRL.png}
\caption{Total reward vs. number of episodes for comparative reinforcement learning methods.}
\label{Q-CognivsRL}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, trim=4 4 4 4,clip]{Interpretability.png}
\caption{\textit{Q-Cogni} interpretability. Sequence of decisions in a randomly selected episode post training.}
\label{Interpretability}
\end{figure}
\subsection{Comparison to Shortest-Path Search Methods}
We analyse the characteristics of our approach against shortest-path search methods to highlight the advantages and disadvantages of \textit{Q-Cogni}. We perform experiments in which we expand the \textit{Taxi-v3} environment into larger state sizes, represented by a grid of \textit{n$\times$m} rows and columns. We then compare the time taken to achieve an optimal tour against Dijkstra's algorithm and A$^*$ using a Manhattan distance heuristic.
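For reference, the two baselines are sketched below on an \textit{n$\times$m} grid using \textit{NetworkX}; the grid construction and the Manhattan heuristic shown are our illustration rather than the exact benchmark code.
\begin{verbatim}
# Sketch of the shortest-path baselines on an n x m grid using NetworkX
# (our illustration, not the exact benchmark code).
import networkx as nx

n, m = 64, 64
G = nx.grid_2d_graph(n, m)                       # nodes are (row, col) tuples

def manhattan(u, v):
    return abs(u[0] - v[0]) + abs(u[1] - v[1])   # admissible heuristic on a grid

source, target = (0, 0), (n - 1, m - 1)
dijkstra_route = nx.dijkstra_path(G, source, target)
astar_route = nx.astar_path(G, source, target, heuristic=manhattan)
print(len(dijkstra_route), len(astar_route))     # equal lengths on an unweighted grid
\end{verbatim}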
\subsubsection{Results and Discussion.}
We report our comparison analysis across dimensionality, prior knowledge requirements, transportability between configurations and interpretability.
\textbf{Scalability.} \textit{Q-Cogni} excels at large networks. Figure \ref{Scalability} shows the average time taken to identify the optimal tour for varying grid sizes representing different scales of the \textit{Taxi-v3} environment. We performed experiments for grid sizes from \textit{8$\times$8} to \textit{512$\times$512} and fitted best-fit curves to extrapolate to larger problem dimensions.
We can observe that \textit{Q-Cogni} takes orders of magnitude longer to identify the optimal tour for a low number of nodes. As the number of nodes increases, \textit{Q-Cogni} becomes much more efficient. This is a product of the sample efficiency delivered within the \textit{Q-Cogni} framework, where the causal component enables ``shortcutting'' of exploration requirements, thus reducing the need to proportionally increase the observations required by the agent.
\textbf{A-priori knowledge requirement.} \textit{Q-Cogni} requires no prior knowledge of the map. Shortest-path methods require prior knowledge of the graph structure to be effectively applied. For example, in our taxi problem, both Dijkstra's and A* require the map upfront. \textit{Q-Cogni} requires the causal structure encoded as a graph, but does not require the map itself. This is a significant advantage for real-world application, as a-priori knowledge can be limited in navigation settings.
\textbf{Transferability.} If the configuration within the map changes (e.g. the initial passenger position), \textit{Q-Cogni} does not need to be retrained. The same agent trained on one configuration of a map can be deployed to another configuration seamlessly. On the other hand, if configuration changes take place, we would need to rerun the shortest-path search algorithms. Therefore, \textit{Q-Cogni} has a significant advantage for dynamic settings, a common characteristic of real-world problems.
\textbf{Interpretability.} Shortest-path search methods are limited in the interpretability of the decisions made by the algorithm to derive the optimal tour. The reasons why a particular edge is preferred over another are not explicitly described as part of their output. \textit{Q-Cogni} is able to provide not only a full history of the reasons for which each decision was made but also the causes and associated probabilities of alternative outcomes at each step. This is another significant advantage for the applicability of \textit{Q-Cogni} to real-world problems in which there is an interface between humans and the agent.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, trim=4 4 4 4, clip]{Scalability_2.png}
\caption{Time taken (s) vs. number of nodes for \textit{Taxi-v3} modified environment. Full lines are the experiments performed for each algorithm and dashed lines the extrapolation performed. Log scale.}
\label{Scalability}
\end{figure}
\subsection{Real-World Application: Routing New York City Taxis}
We use the New York City Taxi \& Limousine Commission trip record data, which contains all taxi movements in New York City from 2013 to date \cite{NYC_data}, to validate the applicability of Q-Cogni in a real-world context. Figure \ref{NYCmap} shows all pickup and drop-off locations of yellow cabs on the 15th of October 2022. We see the highest density of taxi trips being in Manhattan, represented on the left hand side of Figure \ref{NYCmap}. However, we choose the neighborhoods between Long Island City and Astoria, represented in the highlighted area of Figure \ref{NYCmap} as they have a more challenging street configuration than Manhattan.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, height=80pt, clip]{NYC_map.png}
\caption{Pickup and drop-off points of yellow cabs on New York City 15th October 2022. Highlighted area is the selected for \textit{Q-Cogni} experiments. Built with kepler-gl (https://kepler.gl/).}
\label{NYCmap}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, height=120pt, clip]{Astoria.png}
\caption{Graph representation of the Astoria region in New York City used to train \textit{Q-Cogni}.}
\label{Astoriamap}
\end{figure}
We used the OSMNX library \cite{boeing2017osmnx} to convert the street map into a graph representation where intersections are nodes and edges are streets. We created a custom OpenAI Gym \cite{brockman2016openai} environment to enable fitting of the \textit{Bayesian Network} and training of \textit{Q-Cogni}. The resulting graph is shown in Figure \ref{Astoriamap}, containing 666 nodes and 1,712 edges, resulting in a state-action space of size 443,556, a problem $10^3$ times larger than our \textit{Taxi-v3} environment.
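The street-map-to-graph conversion itself is a single call to \textit{OSMnx}; the sketch below is indicative, and the place query string is our approximation of the selected area.
\begin{verbatim}
# Sketch of the street-map-to-graph conversion with OSMnx; the place query string is
# our approximation of the selected Long Island City / Astoria area.
import osmnx as ox

G = ox.graph_from_place("Astoria, Queens, New York, USA", network_type="drive")
print(len(G.nodes), len(G.edges))   # intersections (nodes) and street segments (edges)
\end{verbatim}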
We use the causal structure derived in Figure \ref{SCMTaxi} and perform a random walk with 1,000,000 steps in the custom-built environment to fit a Bayesian Network to the causal model. It is important to appreciate here the transferability and domain-agnostic character of our approach: we leverage the structure previously discovered for the same task, but on a completely different map.
We train \textit{Q-Cogni} once for 100,000 episodes and evaluate the trained agent against several trips contained in the original dataset without retraining each time the trip configuration changes in the real-world dataset. This is a significant benefit of the approach when comparing to shortest-path search methods. We also compare \textit{Q-Cogni} results against Q-Learning to observe the effects of the causal model against policy quality and compare against Dijkstra's algorithm to observe the effects of policy efficiency.
First, Figure \ref{LearningNYC} shows the optimal routes generated by Q-Learning and \textit{Q-Cogni} after 100,000 training episodes for a selected trip. We can see that \textit{Q-Cogni} significantly improves policy generation, reaching a near-optimal shortest-path result post training, whereas Q-Learning fails to detect the right pickup point and performs multiple loops to get to the destination. Across the 615 trips evaluated, Q-Learning generated a route with the same travel distance as the \textit{Q-Cogni} route in only 12\% of cases, with the remainder being longer. These results show the benefits of the causal module of \textit{Q-Cogni} towards optimal learning.
In addition, Figure \ref{NYCresults} shows a sample comparison of optimal routes generated with Dijkstra's algorithm and \textit{Q-Cogni}. We observe in the left picture that \textit{Q-Cogni} generates a more efficient route (in red) than Dijkstra's (in blue), measured as the total distance travelled. In the right picture \textit{Q-Cogni} generates a significantly different route which is slightly worse, albeit close in terms of distance. Overall, across the 615 trips evaluated, \textit{Q-Cogni} generated a shorter route than Dijkstra's in 28\% of cases, the same route length in 57\%, and a longer route in 15\%.
These results show the applicability of \textit{Q-Cogni} to a real-world case with demonstrated (i) transferability - the causal network obtained in the Taxi-v3 environment can be reused for the same task; (ii) no prior knowledge required - \textit{Q-Cogni} does not need access to the global map; and (iii) explainability and expandability - the \textit{Bayesian Network} can be expanded to incorporate other causal relations such as traffic and weather. The application of \textit{Q-Cogni} to this real-world dataset demonstrates a promising framework for bringing together causal inference and reinforcement learning to solve relevant and challenging problems.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, clip]{Q_learning_vs_Q_cogni_astoria.png}
\caption{Optimal routes generated by Q-Learning (left) and \textit{Q-Cogni} (right) after 100,000 episodes. Origin and destination highlighted with the white and green circles respectively.}
\label{LearningNYC}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth, clip]{Q_Cogni_NYC_results_2.png}
\caption{\textit{Q-Cogni} (red) generated routes vs. Dijkstra's (blue) for 2 random trips evaluated on the 15th October 2022. Origin and destination highlighted with the white and green circles respectively.}
\label{NYCresults}
\end{figure}
\section{Conclusion}
We have presented \textit{Q-Cogni}, a novel causal reinforcement learning framework that redesigns \textit{Q-Learning} with an autonomous causal structure discovery method and causal inference as a hybrid model-based and model-free approach.
We have implemented a framework that leverages a data-driven causal structure model discovered autonomously (but flexible enough to accommodate domain-knowledge-based inputs) and redesigned the \textit{Q-Learning} algorithm to apply causal inference during the learning process in a reinforcement learning setting.
Our approach exploits the causal structural knowledge contained in a reinforcement learning environment to shortcut the agent's exploration of the state space and derive a more robust policy with less training, as it increases the sample efficiency of the learning process. Together, these techniques are shown to achieve a superior policy, substantially improve learning efficiency, provide superior interpretability, scale efficiently to higher problem dimensions and generalise better to varying problem configurations. While these benefits have been illustrated in the context of one specific application – the VRP problem in the navigation domain – the framework can be applied to any reinforcement learning problem whose environment contains an implicit representation of the causal relationships between state variables and some level of prior knowledge of the environment dynamics.
We believe that the integration of causality and reinforcement learning will continue to be an attractive research area on the path towards human-level intelligence for autonomous learning agents. One promising avenue of research is to broaden the integrated approach to continuous state-action spaces such as control environments, a current focus of our research.
\bibliographystyle{named}
|
{
"arxiv_id": "2302.13188",
"language": "en",
"timestamp": "2023-02-28T02:12:08",
"url": "https://arxiv.org/abs/2302.13188",
"yymm": "2302"
} | \section{Introduction}
Riemann surfaces of complex functions have been visualized by a number of authors and computer systems, and
often make striking visual images. See \cite{Trott} for an impressive collection. The traditional way that textbooks
introduced Riemann surfaces is through an informal discussion of layered sheets, which are cut and glued together to form a continuous (usually self-penetrating) surface \cite{Wegert}.
Such an approach does not lend itself to using software to create the surfaces.
This was discussed in \cite{Corless1998Riem}, prompted by a demonstration program in Matlab.
The methods used there are extended here to strengthen the link between the range and the domain.
Riemann surfaces are used to understand multi-valued functions, which are often the inverses of `proper' single-valued functions. Examples include the inverses of the exponential function (namely logarithm), the trigonometric functions (arc-trigs),
and the integral powers ($n$th-roots).
In each case, there is a function $z=f(w)$ which is single valued, and an inverse $w=\invItal{f}(z)$ which is multivalued.
For the multivalued function, the question is how to separate and access the various elements in the set of multiple values.
Two possibilities present themselves: the separation is made either in the range of the function or in its domain.
\section{Labelling the range}\label{sec:range}
We begin with a common example of a multivalued function: the logarithm.
If $ z=e^w$, then $w =\ln z $.
Since, for $n\in \mathbb{Z}$, $ z=e^{w +2n\pi i} = e^w$, for any complex number $z$, there are an infinite number of values for the logarithm.
The standard treatment defines two logarithm functions.
Following the notation of the DLMF \cite{AS, DLMF}, we write $ \mathop \mathrm{Ln} \nolimits z$ to
represent the \textit{general logarithm function}, which stands for the infinite collection of values, and $\ln z$ for the
\textit{principal value} or \textit{principal branch}. The principal branch is defined by $-\pi< \Im(\ln z) \le \pi$.
The general function notation, however, is frustratingly vague, and leads to statements such as
\begin{equation}\label{eq:genlog}
\mathop \mathrm{Ln} \nolimits z = \ln z + 2k\pi i\ ,
\end{equation}
where the equation has one variable $z$ on the left and two variables $z,k$ on the right.
In order to obtain a more precise definition of the logarithm value, the notation
\begin{equation}\label{eq:definelnk}
\ln_k z=\ln z +2k\pi i \ ,
\end{equation}
was introduced in \cite{unwindW}.
Geometrically, this coincides with the range of logarithm being partitioned into branches, and each branch labelled.
See figure \ref{fig:lnbranch}. We note $\ln_0 z$ also denotes the principal branch.
For each separate branch $\ln_k z$, the domain consists of the whole complex plane, with a line of discontinuity, called a branch cut, along the negative real axis. In thinking of branches, it is sometimes helpful to disregard the connections between the branches and think of each branch as a function separate in its own right. For example, the equation $\ln x=2$ has the solution $x=e^2$, but the equation $\ln_1 x=2$ has no solution because $2$ does not lie in the range of $\ln_1 x$.
\begin{figure}
\centering
\includegraphics[width=5cm]{logbranches.pdf}
\caption{The range of the logarithm partitioned into branches. The branches are labelled with the integer $k$, and the index is added
to the function notation: $\ln_k z$.
Thus, to stipulate a logarithm value with imaginary part between $\pi$ and $3\pi$, one writes
$\ln_1 z$.}\label{fig:lnbranch}
\end{figure}
A second example is provided by the cube-root function. We have $z=w^3$, implying the inverse $w=z^{1/3}$.
In the complex plane, the cube root always has 3 values. To denote the 3 cube roots, we need a notation which gives us space for a label.
Since the notations $z^{1/3}$ and $\sqrt[3]z$ look clumsy with additional labels, we create a cube-root function name:
$ \mathop \mathrm{cbrt} \nolimits_k z$.
The range of $ \mathop \mathrm{cbrt} \nolimits$ is partitioned as shown in figure \ref{fig:cbrtbranch}, where the branches are defined using the complex
phase\footnote{We use the name complex phase, rather than complex argument, because the word argument will be used for function argument.}.
With $ \mathop \mathrm{ph} \nolimits z$ denoting the principal complex phase $-\pi< \mathop \mathrm{ph} \nolimits z \le\pi$, we set the principal branch by the requirement
$-\pi/3< \mathop \mathrm{ph} \nolimits(z^{1/3})\le \pi/3$. If we denote the primitive root of unity by $\omega=e^{2\pi i/3}$, and $\theta= \mathop \mathrm{ph} \nolimits z$,
then the 3 branches are defined by
\begin{equation}\label{eq:cbrtbranchlabel}
\mathop \mathrm{cbrt} \nolimits_k(z) = \mathop \mathrm{cbrt} \nolimits_k(re^{i\theta})
=\begin{cases}
\phantom{\omega}r^{1/3} e^{i\theta/3}\ , & k=0\ ,\\
\omega r^{1/3} e^{i\theta/3}\ , & k=1\ , \\
\overline \omega r^{1/3}e^{i\theta/3} \ , & k=-1\ .
\end{cases}
\end{equation}
\noindent
For each separate branch of $ \mathop \mathrm{cbrt} \nolimits_k(z)$, the
domain is the whole of the complex plane, with a line of discontinuity (the branch cut) along the negative real axis.
We note as an instructive special case the values of $(-8)^{1/3}$. The three cube roots are $-2,1+\sqrt3\,i,1-\sqrt3\, i$.
Some readers might be surprised to learn that the principal-branch value of $(-8)^{1/3}$ is not $-2$.
The principal branch value is $ \mathop \mathrm{cbrt} \nolimits_0(-8) = 1+\sqrt3\, i$ and the real-value root is branch 1: $ \mathop \mathrm{cbrt} \nolimits_1(-8)=-2$.
As seen in figure \ref{fig:cbrtbranch}, the negative real values of the function do not lie in the principal branch.
We can note here that the principal-branch values of both logarithm and cube root are the unique values returned by all major scientific
software, such as Matlab, Maple and Mathematica. This also applies to other multi-valued functions, such as $n$th roots and inverse trigonometric functions.
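The same convention can be checked in any environment that provides a complex exponential and a principal logarithm. As a small illustration (in Python, which is not one of the systems named above), the principal cube root of $-8$ is obtained as $\exp(\ln(-8)/3)$:
\begin{verbatim}
import cmath
# principal cube root of -8 via the principal logarithm
print(cmath.exp(cmath.log(-8) / 3))  # (1.0...+1.732...j), i.e. 1 + sqrt(3) i, not -2
\end{verbatim}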
\begin{figure}
\centering
\includegraphics[scale=0.35]{cubicrange2.pdf}
\caption{The range of the cube root partitioned into branches. As with logarithm, the branches are labelled with an integer $k$. In this case there are only 3 branches, separated by the boundaries $r e^{\pm i\pi/3}$ and the negative real axis.}\label{fig:cbrtbranch}
\end{figure}
\section{Labelling the domain}
The main object of this paper is to show how the labelling ideas of the previous section can be transferred from the range of a complex function
to its domain, and by doing so we obtain a new perspective on plots of Riemann surfaces.
A Riemann surface for a multivalued function is built on its domain. A typical description of the construction process
talks about cutting and joining sheets of the domain.
For example, here is a description of a Riemann surface for the square-root function \cite{brown}.
\begin{quote}
A Riemann surface for $z^{1/2}$ is obtained by replacing the $z$ plane with a surface made up of two sheets $R_0$ and $R_1$, each cut along the positive real axis with $R_1$ placed in front of $R_0$. The lower edge of the slit in $R_0$ is joined to the upper edge of the slit in $R_1$, and the lower edge of the slit in $R_1$ is joined to the upper edge of the slit in $R_0$.
\end{quote}
A Riemann surface is erected over the domain of a multivalued function, that is, given $w=\invItal{f}(z)$, the surface represents values of $z$ rather than values of $w$.
The value of $z$ by itself is not sufficient to determine the value of $\invItal{f}(z)$,
and therefore there must be a further property possessed by a particular value of $z$ that, in conjunction with the complex value of $z$, decides the value of the function. This property is not at present visible.
We shall call this property \textit{charisma}. A variable $z$ with charisma will be denoted $z_{\cal C}$\,, while we decide what it is.
We aim to define charisma as a numerical value, which can then serve as an ordinate on an axis perpendicular to the complex plane.
We set up 3 axes: real and imaginary axes for locating the complex value of $z$ together with an orthogonal axis representing charisma.
The first example will be the cube-root function described above.
\subsection{Charisma for cube root}
We take as a first example the indexed cube root given in \eqref{eq:cbrtbranchlabel}.
We explore four possible choices for the charisma of this function.
\subsubsection{Charisma as branch index}
We have seen the branch label $k$ used to define values in the range of $ \mathop \mathrm{cbrt} \nolimits_k(z)$, so we begin by trying the assignment
\[{\cal{C}} =k\ .\]
With the charisma chosen to be the index $k$, we plot the corresponding surfaces as follows. We generate an array of points in the complex plane.
Then we create a 3-element plot structure consisting of the real and imaginary parts of a complex number, together with the charisma.
This structure is applied to the array of complex values three times, once for each of the three values of the charisma.
The complete array is handed to a three-dimensional plotting command in Maple.
The pictures we get are shown in figure \ref{fig:charlabel}.
The effect of the charisma is simply to lift the flat complex planes so that the three planes are stacked and spaced.
The vertical walls in the plot are Maple's way of showing that the planes are connected discontinuously.
We can understand this plot by imagining a point\footnote{Or more picturesquely an ant. } in the \textit{range} of the function.
The point now circles the origin in the range. Each time the point crosses a branch boundary, the charisma jumps discontinuously with the
branch number. This discontinuous change is reflected in the domain by a jump from one sheet to another.
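The recipe just described is not tied to Maple. As a minimal illustrative sketch, assuming an equivalent NumPy/matplotlib toolchain rather than the Maple commands actually used, the three stacked sheets can be produced by attaching the branch index to each point of the domain:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# grid of points in the domain, in polar form (avoiding the origin)
r = np.linspace(0.05, 1.0, 60)
theta = np.linspace(-np.pi, np.pi, 180)
R, T = np.meshgrid(r, theta)
Z = R * np.exp(1j * T)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for k in (-1, 0, 1):
    C = np.full_like(R, float(k))  # charisma = branch index k: flat, stacked sheets
    ax.plot_surface(Z.real, Z.imag, C, alpha=0.6)
ax.set_xlabel("Re z"); ax.set_ylabel("Im z"); ax.set_zlabel("charisma")
plt.show()
\end{verbatim}
Replacing the constant $C$ by the imaginary part of the corresponding branch value $\omega^{k}z^{1/3}$ yields the continuous, self-intersecting surface discussed below.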
\begin{figure}
\centering
\includegraphics[scale=0.5]{cubicRiemannk.pdf} \quad
\raisebox{0cm}{\includegraphics[scale=0.3]{cbrt.png}}
\caption{On the left we see the domain of $z^{1/3}$ as a Riemann surface; on the right, the range of $z^{1/3}$. The sheets of the Riemann surface are separated by labelling them with the branch index. The branches of $ \mathop \mathrm{cbrt} \nolimits_k(z)$ are coloured in the same way as the sheets of the Riemann surface. The top sheet corresponds to $k=1$ meaning all the values of $z$ on this sheet map to $ \mathop \mathrm{cbrt} \nolimits_1(z)$ as shown by the corresponding colour in the range. Similarly for the other two branches. One can further note that the sheets are connected discontinuously, reflecting the discontinuous change in branch number as one moves around in the range.}
\label{fig:charlabel}
\end{figure}
\subsubsection{A continuous charisma} \label{sec:CharismaPhase}
Our first choice has an unsatisfactory feature. If we follow our point in the range of $z^{1/3}$,
the complex values it samples change continuously. The discontinuity arises solely from the branch label being discontinuous.
It is the same as driving across the boundary between two provinces or states. The road is continuous, the land is continuous, but suddenly everyone is speaking French and selling fireworks\footnote{At least that is what I see when driving from Toronto to Montreal.}.
In the previous section, we see that the simplest choice of charisma misses an important property of the
cube root, namely that between the branches there is a continuous transition. We wish to find a charisma that
captures this behaviour.
When searching for a measure of charisma, it is important\footnote{Too strong a word? Well, at least helpful.} not to fixate on the domain of $z$, even though the domain is where the surface will be located. We are trying to represent the multi-valuedness of the function, and that is defined in the range.
It was remarked that a point circling the origin in the range would see continuous behaviour.
Following the implications of this observation suggests that the phase of the cube root could be a better quantity to use for charisma.
Thus, given $w= \mathop \mathrm{cbrt} \nolimits_k(z)$, we set $\mathcal{C}= \mathop \mathrm{ph} \nolimits(w)$. The phase is computed from the value of $w$, the range of $z^{1/3}$, not from the value of $z$. This choice results in the Riemann surface shown in figure \ref{fig:charismaphase}.
\begin{figure}
\includegraphics[scale=0.3]{cubicRiemannArg.pdf} \quad
\raisebox{0cm}{\includegraphics[scale=0.2]{cbrt.png}}
\caption{The Riemann surface obtained by using the phase of the cube root as charisma. The phase has a minimum value of $-\pi$ and increases monotonically to $-\pi/3$ while the branch $k=-1$ covers its domain, which is the whole of the complex plane. This is the green surface in the domain. The behaviour is repeated for the other two branches. The surfaces meet smoothly where they join. Maple has joined the end of the surface at $\mathcal{C}=\pi$ discontinuously to $\mathcal{C}=-\pi$.}\label{fig:charismaphase}
\end{figure}
This removes the jumps seen in the previous section, but is still unsatisfactory in that
in the range of $z^{1/3}$
the branch $k=1$ joins smoothly to branch $k=-1$,
but the Riemann surface in figure \ref{fig:charismaphase} joins discontinuously.
Therefore, we try now a third choice which will give us smooth joins of all of the branch transitions.
\subsubsection{A continuous and periodic charisma}
In the range of $z^{1/3}$ we see that the behaviour is essentially periodic, in that for $x<0$
\begin{equation}\label{eq:periodiccbrt}
\lim_{y\downarrow 0} \mathop \mathrm{cbrt} \nolimits_1(x+iy)=\lim_{y\uparrow 0} \mathop \mathrm{cbrt} \nolimits_{-1}(x+iy) \ .
\end{equation}
A function that has these properties is $\mathcal{C}=\sin\theta\propto \Im(w)$.
In figure \ref{fig:cbrtsin}, the Riemann surface produced with this choice is shown.
The surface is continuous and smooth everywhere.
It now intersects itself in two places: branch $k=-1$ intersects $k=0$ when $\mathcal{C}=-1/2$
along the negative imaginary axis, and $k=0$ intersects $k=1$ at $\mathcal{C}=1/2$ along the positive imaginary axis.
The surfaces change colour (branch) along the negative real axis at
$\mathcal{C}=-\frac{\sqrt3}{2},0,\frac{\sqrt3}{2}$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{cubicRiemannImag.pdf}
\end{center}
\caption{A periodic Riemann surface for the cube-root function.
The charisma is based on $\sin \theta$ where $\theta= \mathop \mathrm{ph} \nolimits(z^{1/3})$.
This can equivalently be based on the imaginary part of $z^{1/3}$.
Since the Riemann surface is plotted over the domain $z$, while being coloured based on the range $z^{1/3}$,
the colours change along the negative real axis in the domain where $\mathcal{C}=-\sqrt3/2,0,\sqrt3/2$. }
\label{fig:cbrtsin}
\end{figure}
\subsubsection{An alternative continuous and periodic surface}
It is also possible to use the charisma $\mathcal{C}=\cos\theta$. The result is shown in figure \ref{fig:cbrtcos}. Note that the principal branch now is prominent at the top of the surface.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{cubicRiemanncos.pdf}
\end{center}
\caption{The Riemann surface for cube root, using $\cos\theta$ for the charisma. The surface is the same form as the one using the sine function, but the partition between the branches has changed. }\label{fig:cbrtcos}
\end{figure}
\subsection{And lastly, logarithm}
In \cite{Corless1998Riem}, a contrast was drawn between attempts to plot a Riemann surface for the logarithm.
One attempt was shown to be unsatisfactory. By emphasizing the need to think in terms of the range of a function,
we can see immediately why this is.
The charisma was chosen to be the real part of the logarithm,
but a glance at the range of log shows immediately that motion parallel to the real axis
never leaves the branch of log in which the point started.
In order to obtain a surface for logarithm, we must move parallel to the imaginary axis
to cross from one branch to the next. This was shown to be the correct choice.
We could try using the branch index of $\ln_k z$ as we did for cube root,
but again that would result in flat segments separated by jumps.
As shown in \cite{Corless1988Comp}, a simple charisma for logarithm is its imaginary part.
The result is shown in figure \ref{fig:logRiemann}, where the colouring again changes with the branch.
\begin{figure}
\centering
\includegraphics[scale=0.35]{lnbranch.pdf}
\caption{A section of the infinite Riemann surface for logarithm, coloured according to the branch index.}
\label{fig:logRiemann}
\end{figure}
\section{Conclusions}
Many treatments of multivalued functions concentrate on the domain of the function, and focus on branch cuts in the domain.
See, for example, \cite{AS,DLMF,K}. This article hopes to demonstrate that thinking in the range as well as the domain makes understanding easier.
|
{
"arxiv_id": "2302.13186",
"language": "en",
"timestamp": "2023-02-28T02:12:04",
"url": "https://arxiv.org/abs/2302.13186",
"yymm": "2302"
} | \section{}
\begin{abstract}
\noindent We count the number of ways to build paths, stars, cycles, and complete graphs as a sequence of vertices and edges, where each edge follows both of its endpoints. The problem was considered 50 years ago by Stanley but the explicit sequences corresponding to graph families seem to have been little studied. A cost-based variant is introduced and applications are considered.
\end{abstract}
{\it Key Words:} Roller-coaster problem, up-down permutations, linearization of posets, construction number, Hasse diagrams, minimizing edge delay, ontogenies.
\section{Introduction}
The {\bf elements} of a graph $G = (V,E)$ are the set $V \cup E$ of vertices and edges.
A linear order on $V \cup E$ is a {\bf construction
sequence} (or {\bf c-sequence}) for $G$ if each edge appears after both of its endpoints. For instance, for the path $P_3$ with vertices $1,2,3$ and edges $12, 23$, one has construction sequences $(1,2,3,23,12)$ and $(1,2,12,3,23)$, while $(1,3,12,2,23)$ is not a c-sequence. There are a total of 16 c-sequences for $P_3$.
Let ${\cal C}(G)$ be the set of all c-sequences for $G=(V,E)$.
The {\bf construction number} of $G$ is $c(G) := \#{\cal C}(G)$, the number of distinct construction sequences. So $c(P_3)=16$.
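For small graphs the definition can be checked by exhaustive enumeration. The following minimal sketch (edges are given as vertex pairs; the search is over all $(p+q)!$ permutations, so it is feasible only for tiny graphs) reproduces $c(P_3)=16$:
\begin{verbatim}
from itertools import permutations

def construction_number(vertices, edges):
    # count linear orders of V and E in which every edge follows both endpoints
    elements = list(vertices) + list(edges)
    count = 0
    for seq in permutations(elements):
        pos = {el: i for i, el in enumerate(seq)}
        if all(pos[(u, w)] > pos[u] and pos[(u, w)] > pos[w] for (u, w) in edges):
            count += 1
    return count

print(construction_number([1, 2, 3], [(1, 2), (2, 3)]))  # 16
\end{verbatim}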
The problem of finding the construction numbers of graphs occurred to the author in November 2022, and he proposed, as a problem for the {\it American Math Monthly}, to determine $c(P_n)$. However, after some discussion, the problem to determine $c(K_n)$ for the complete graph $K_n$ was substituted. This was much harder to enumerate, and its solution involved the collaboration of Richard Stong, Jim Tilley, and Stan Wagon. The revised problem is to appear \cite{monthly-prob} with co-proposers PCK, RS, and JT; SW is the Problems Editor: {\it For $K_n = (V,E)$, how many ways are there to linearly order $V \cup E$ such that each edge appears after both vertices comprising the edge}?
After seeing how to extend construction sequences to more abstract objects, we found Stanley's work (e.g., \cite{stanley}), which includes two of our results. Stan Wagon pointed out a different approach to graph construction, the {\bf assembly numbers} of Vince and Bon\'a \cite{vb2012}, motivated by the goal of describing the self-assembly of macromolecules performed by virus capsids in the host cell. (See \S 4 below.)
In this paper, we further refine the notion of construction number to account for the {\bf cost} of having two adjacent vertices with no edge that joins them. A construction sequence for a graph $G$ is {\bf economical} if it has minimum total cost for its edges. This sharply reduces the set of feasible sequences and allows a greedy heuristic.
Section 2 has additional definitions and some lemmas.
In Section 3, we find $c(G)$ when $G$ is $K_{1,n}$ (the star with $n$ peripheral points), path $P_n$, cycle $C_n$, and complete graph $K_n$.
Section 4 describes earlier appearances of construction numbers and considers extensions of c-sequences to hypergraphs, simplicial complexes, CW-complexes, posets, and categories. Section 5 defines cost-functions, some types of c-sequences, and relative constructability of graphs, while the last section has open problems and applications.
\section{Basic definitions and lemmas}
Let $G = (V,E)$ be a labeled graph, where $V=[p]:=\{1,\ldots,p\}$. Let $q:=\#E := |E|$, and let $S := V \cup E$ be the set of {\bf elements} of $G$; set $\ell := p+q = \#S$. By a {\bf permutation} on $S$, we mean a sequence $x$ of length $\ell$ taken from $S$ where each element appears exactly once. If $s \in S$, write $x(s)$ for the unique $j \in [\ell]$ s.t. $x_j = s$.
Let $P_n$ be the path with $n$ vertices, $K_{1,n}$ the star with a single degree-$n$ hub having $n$ neighbors, each an endpoint; $C_n$ and $K_n$ are the cycle graph and complete graph with $n$ vertices. See, e.g., Harary \cite{harary} for any undefined graph terminology.
A permutation $x$ on $S$ is a {\bf construction sequence} (c-sequence) for $G$ if each edge follows both of its endpoints; i.e., for $e = uw$, $x(u)<x(e)$ and $x(w)<x(e)$. Let ${\cal C}(G)$ be the set of all construction sequences and let $c(G):= \#{\cal C}(G)$ be the {\bf construction number} of $G$. Clearly, $p!q! \leq c(G) \leq (p+q)!$ for each graph $G$.
The {\bf graph sequence} associated with a c-sequence $x$ is the sequence $G_i$ of graphs, $1 \leq i \leq \ell$, where $G_i$ is the subgraph of $G$ determined by the set $\{x_1,\ldots, x_i\}$ of elements, which is indeed a graph.
Let $b_i$ be the number of connected components of $G_i$. Let $\beta(x) := \max_{i \in [\ell]} b_i$ and let $b(x):=(b_1,\ldots,b_\ell)$.
\begin{lemma} [S. Wagon]
If $G$ is connected and has minimum degree $k$, then the last $k$ entries in any $x \in {\cal C}(G)$ are edges. Moreover,
$x(v) \leq \ell - r$, where $r = \deg(v,G)$.
\end{lemma}
Given two element-wise disjoint finite sequences $s_1$ and $s_2$ of lengths $n$ and $m$, we define a {\bf shuffle} of the two sequences to be a sequence of length $n+m$ which contains both $s_1$ and $s_2$ as subsequences. The number of shuffle sequences of $s_1$ and $s_2$ is ${{n+m}\choose{n}}$, giving the construction number of a disjoint union in terms of its parts.
\begin{lemma}
If $x_1$ and $x_2$ are c-sequences for disjoint graphs $G_1$ and $G_2$, resp., then each shuffle of $x_1$ and $x_2$ is a c-sequence for $G_1 \cup G_2$, and we have
\begin{equation}
c(G_1 \cup G_2) = c(G_1) c(G_2) {{\ell_1+\ell_2}\choose{\ell_1}},
\end{equation}
where $\ell_1$ and $\ell_2$ are the lengths of the sequences $x_1$ and $x_2$, resp.
\label{lm:union}
\end{lemma}
The number of ways to extend a c-sequence for $G {-} v$ to a sequence for $G$ can depend on which c-sequence is chosen.
For example, take $P_2 = (\{1,2\},\{a\})$, where $a=12$. Then ${\cal C}(P_2)=\{x',y'\}$, where $x'=(1,2,a) \equiv 12a$ and $y'=21a$. Consider $P_3 = (\{1,2,3\}, \{a,b\})$, where $b=23$. As $P_2 \subset P_3$, each c-sequence for $P_3$ extends a c-sequence of $P_2$. One finds that
$x'$ has exactly 7 extensions (in short form)
\[ 312ab, 312ba,132ab,132ba,123ab,123ba,12a3b\]
to $x \in {\cal C}(P_3)$, while $y'$ has exactly 9 extensions (in short form)
\[321ab,321ba,32b1a,231ab,231ba,23b1a,213ab,213ba,21a3b. \]
This gives $c(P_3)=16$ as it should; see below.
The previous lemma extends to any finite disjoint union. If $G_1, \ldots, G_n$ have $\ell_1, \ldots, \ell_n$ elements, then for $\ell := \ell_1 + \cdots + \ell_n$,
\begin{equation}
c(G_1 \cup \cdots \cup G_n) = \prod_{i=1}^n c(G_i) {{\ell}\choose{\ell_1 \cdots \ell_n}}.
\label{eq:disj-union}
\end{equation}
Let $G=([p],E)$ and let $v \in [p]$. The {\bf based construction number} of $(G,v)$ is the number of c-sequences for $G$ which start with the {\bf base point} $v$. We write ${\cal C}(G,v)$ for the set of suitably restricted c-sequences and $c(G,v)$ for its cardinality.
Every c-sequence starts with a vertex, so $c(G) = \sum_{v \in VG} c(G,v)$. Further, we have
\begin{lemma}
If $v, w \in VG$ and if $\exists \phi: G \to G$ an isomorphism such that $\phi(v)=w$, then $c(G,v) = c(G,w)$.
\label{lm:iso}
\end{lemma}
\begin{proof}
Let $x \in {\cal C}(G)$. Then $\tilde{x} := (\phi(x_1), \ldots, \phi(x_\ell))$ is also a c-sequence for $G$.
\end{proof}
For $i=1,\ldots, n$, let $G_i$ be pairwise-disjoint graphs with $\ell_i$ elements, $v_i \in VG_i$, and suppose the vertices $v_i$ are identified to a single vertex $v$.
Let $G := \bigvee_{i=1}^n G_i$ be the resulting {\bf wedge-product} graph with {\bf base point} $v$. Then as in (\ref{eq:disj-union}), we have
\begin{equation}
c(G,v) = \prod_{i=1}^n c(G_i,v_i) {{\ell-1}\choose{\ell_1-1 \cdots \ell_n-1}}.
\label{eq:wedge-prod}
\end{equation}
where $\ell := \ell_1 + \cdots + \ell_n - (n-1)$ is the number of elements of $G$ (the identified base point is counted only once).
\section{Construction numbers for some graph families}
In this section, we find $c(G)$ when $G$ is a star, path, cycle, or complete graph. The first result is also due to Stan Wagon.
\begin{theorem}
For $n \geq 0$, $c(K_{1,n}) = 2^n(n!)^2$.
\end{theorem}
\begin{proof}
For $n=0,1$, the result holds. Suppose $n \geq 2$ and let $x = (x_1, \ldots, x_{2n+1})$ be a construction sequence for $K_{1,n}$. There are $n$ edges $e_i = v_0 v_i$, where $v_0$ is the central node, and one of the edges, say, $e_i$, must be the last term in $x$. This leaves $2n$ coordinates in $x' := (x_1, \ldots, x_{2n})$ and one of them is $v_i$. The remaining $(2n-1)$ coordinates are a construction sequence for the $(n-1)$-star $K_{1,n-1}$. Hence, $c(K_{1,n}) = n (2n) c(K_{1,n} - v_i)= 2n^2 2^{n-1}(n-1)!^2 = 2^n (n!)^2$ by induction.
\end{proof}
The numbers 2, 16, 288, 9216, 460800 generated by the above formula count the number of c-sequences for $K_{1,n}$ for $n \in \{1,2,3,4,5\}$.
These numbers are the absolute value of the sequence A055546 in the OEIS (reference \cite{oeis}) and describe the number of ways to seat $n$ men and $n$ women in a roller coaster with $n$ rows, where each row has two seats which must be occupied by a man and a woman.\\
Note that the star $K_{1,n}$ is the wedge-product of $n$ copies of $K_2$. There is a unique based construction sequence for $K_2$. Using (\ref{eq:wedge-prod}),
\begin{equation}
c(K_{1,n}, \star)= {{2n}\choose{2 \cdots 2}}, \;\mbox{where the base-point $\star$ is the hub of the star}.
\end{equation}
Counting c-sequences for cycles and paths is essentially the same problem.
\begin{lemma}
If $n \geq 3$, then $c(C_n) = n \cdot c(P_n)$.
\label{lm:cycle-path}
\end{lemma}
\begin{proof}
If $x \in {\cal C}(C_n)$, then $x_\ell$ is an edge; the remainder is a c-sequence for $P_n$.
\end{proof}
Before determining $c(P_n)$, we give a Catalan-like recursion for these numbers.
\begin{lemma}
$c(P_n) = \sum_{k=1}^{n-1} c(P_k) \,c(P_{n-k}) \,{{2n-2}\choose{2k-1}}.$
\label{lm:cat}
\end{lemma}
\begin{proof}
Any construction sequence $x$ for $P_n$ has last entry an edge $e$, whose removal creates subpaths with $k$ and $n-k$ vertices, resp., for some $k$, $1 \leq k \leq n-1$. Now $x$ contains construction sequences for both subpaths which suffices by Lemma \ref{lm:union}.
\end{proof}
Trivially, $c(P_1) = 1$ and recursion gives the sequence
$ 1, 2, 16, 272, 7936, 353792$ for $n=1,\ldots,6$, in the OEIS as A000182 \cite{oeis}, the sequence of {\bf tangent numbers}, $T_n$, which has a long and interesting history. For instance, its exponential generating function is $\tan(x)$, and it corresponds to the odd terms in the sequence of Euler numbers \cite[A000111]{oeis}; see, e.g., Kobayashi \cite{kobayashi}. Here are two proofs that $c(P_n) = T_n$.\\
{\bf Proof 1.}\\
Let $U(n) \subset S(n)$ be the {\bf up-down} permutations,
where consecutive differences switch sign, and the first is positive. It is well-known \cite{mathw2} that $\#U(2n-1) = T_n$.
\begin{proposition}[D. Ullman]
There is a bijection from ${\cal C}(P_n)$ to $U(2n-1)$.
\end{proposition}
\begin{proof}
A permutation $\pi$ of the consecutively labeled elements of a path is a construction sequence if and only if $\pi^{-1}$ is an up-down sequence. Indeed, $\pi^{-1}(2j)$ is the position in $\pi$ occupied by the $j$-th edge, while $\pi^{-1}(2j-1),\pi^{-1}(2j+1)$ correspond to the positions of the two vertices flanking the $j$-th edge and so are smaller iff $\pi$ is a construction sequence. Hence, $\pi^{-1}$ is an up-down sequence, and conversely.
\end{proof}
For instance, $P_5$ gives the sequence $(1,2,3,4,5,6,7,8,9)$, where odd-numbers correspond to vertices and even numbers to edges. An example c-sequence for $P_5$ is $\pi = (5,9,7,6,3,8,4,1,2)$; each even number (e.g., 4) is preceded by its odd neighbors (3 and 5) - i.e., each edge by its two endpoints.
The inverse sequence $\pi^{-1} = (8,9,5,7,1,4,3,6,2)$ is up-down, as required.\\
{\bf Proof 2.}\\
By \cite[A000182]{oeis}, $T_n = J_{2n-1}$, where for $r \geq 1$, $J_r$ denotes the number of permutations of $\{0,1, \ldots, r+1\}$ which begin with `$1$', end with `$0$', and have consecutive differences which alternate in sign.
Then $J_{2k}=0$ for $k \geq 1$ as the sequences counted by $J$ must begin with an {\it up} and end with a {\it down} and hence have an odd number of terms. These {\bf tremolo} sequences are in one-to-one correspondence with ``Joyce trees'' and were introduced by Street \cite{rs} who showed they satisfy the following recursion.
\begin{proposition} [R. Street]
For $r \geq 3$,
$J_r = \sum_{m=0}^{r-1} {{r-1}\choose{m}} J_m J_{r-1-m}$.
\label{pr:street}
\end{proposition}
Now we show $c(P_n) = J_{2n-1}$. Indeed,
$J_1 = c(P_1)$ and $J_3 = c(P_2)$. Replace $J_{2r-1}$ by $c(P_r)$ and
$J_{2r}$ by zero and re-index;
Street's recursion becomes Lemma \ref{lm:cat}, so $c(P_n)$ and $J_{2n-1}$ both satisfy the same recursion and initial conditions. But $J_{2n-1} = T_n$.\\
By \cite{mathw}, \cite[24.15.4]{dlmf}), we have for $n \geq 1$ with $B_{2n}$ the $2n$-th Bernoulli number,
\begin{equation}
c(P_n) = T_n = (1/n) {{2^{2n}}\choose{2}} |B_{2n}|.
\label{eq:path}
\end{equation}
An asymptotic analysis \cite{random-constr} shows
$c(P_n)$ is exponentially small compared to $c(K_{1,n-1})$.
Using Lemma \ref{lm:cycle-path} and equation (\ref{eq:path}), we have for $n \geq 1$,
\begin{equation}
c(C_n) = {{2^{2n}}\choose{2}} |B_{2n}|
\label{eq:cycle}
\end{equation}
The first two cycles are CW-complexes: $C_1$ has one vertex and a loop, and $C_2$ has two vertices
and two parallel edges; the formula is correct for both (and for $n \geq 3$).
If $\star$ is one of the endpoints of $P_n$, we can calculate $c(P_n,\star)$ for the first few values, getting (with some care for the last term) $1,1,5,61$ for $n=1,2,3,4$. In fact,
\begin{equation}
c(P_n,\star) = S_n,
\label{eq:star-base}
\end{equation}
where $S_n$ is the $n$-th secant number (\cite[A000364]{oeis}), counting the ``zig'' permutations.
\subsection{Complete graphs}
This section is omitted until the Monthly problem \cite{monthly-prob} has appeared and its solutions are collected. The solution is due primarily to S. Wagon, R. Stong, and J. Tilley.
\section{Earlier appearances and extensions}
The concept of construction sequence for graphs was already known in a more general
context \cite[p 10]{stanley} but integer sequences were only briefly considered. Stanley studied the number of linear extensions of a partial order \cite[p 8]{stanley}, using them to define a polynomial \cite[p 130]{enum-comb1}. In \cite[p 7]{two} he showed the number of linear extensions of a partial order determined by a path is an Euler number, implying (\ref{eq:path}) and (\ref{eq:star-base}).
Now take the Hasse diagram of any partial order; define a construction sequence for the diagram to be a total order on the elements such that each element is preceded in the linear order by all elements which precede it in the partial order.
Simplicial and CW-complexes can be partially ordered by ``is a face of'', and the linearization of posets includes graphs and hypergraphs (e.g., the $r$-regular case) as 2-layer Hasse diagrams,
where elements in the top layer have degree $r$ for $r$-regular hypergraphs.
A notion which sounds similar to construction numbers is due to Vince and B\'ona: the number of {\it assembly trees} of a graph \cite{vb2012}. Assembly numbers count ways to build up a graph from subgraphs induced by various subsets of the vertices.
For $n$-stars, \cite{vb2012} gives $n!$, while for paths and cycles, a Catalan-type value is found. Thus, assembly-tree numbers and construction numbers can be quite different.
Construction sequences make sense for hypergraphs, multigraphs, and indeed for any CW-complex. In the latter case, one counts sequences of cells, where each cell must follow the cells to which it is attached. For simplicial complexes, one might start with simplexes, cubes, and hyperoctahedra (the standard Platonic polytopes), and the sporadic instances in 3 and 4 dimensions.
One could also consider construction sequences for topoi and for (co)limits of diagrams in categories, even beyond the finite realm \cite[p 77]{stanley}. Philosophically, emergent concepts follow the substrate from which they arise.
\section{Types of construction sequences}
In this section, we describe {\it economical, easy, {\rm and} greedy} construction sequences.
Let $G=(V,E)$ be a graph, let $x \in {\cal C}(G)$, and let $e \in E$. We define the {\bf cost} of edge $e = uw$ with respect to a construction sequence $x$ to be
$$\nu(e,x) := 2x(e) - x(u) - x(w)$$
where for all $s \in V \cup E$, we have
$x(s)=j$ if and only if $x_j = s$.
Let
$$\nu(x) := \sum_{e \in E} \nu(e,x)$$
be the cost of $x$, and let $\nu(G)$ be the least cost of any of its c-sequences. Thus, edge-cost is the delay between placement of its endpoints and placement of the edge.
The {\bf greedy algorithm} $\mathbb{G}$ takes as input a graph $G =(V,E)$ and a linear order $\lambda$ on $V$, $\lambda := (v_1, \ldots, v_p)$, and outputs a c-sequence $x := x(\lambda) := \mathbb{G}(G,\lambda) \in {\cal C}(G)$. As soon as an edge is {\bf available} (meaning that both its endpoints have appeared), the greedy algorithm selects it. If several edges are available, some method of breaking ties is employed - e.g., using lexicographic order or avoiding cycles as long as possible. When no edges are available, the next vertex on the input list is selected, thereby increasing the number of connected components. Put $\mathbb{G}(G) := \{\mathbb{G}(G,\lambda) : \lambda \;\mbox{linear order on}\; V\}$.
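A minimal sketch of this greedy procedure, assuming edges given as vertex pairs and a lexicographic tie-break, together with the cost function $\nu$, is as follows; it recovers $\nu(P_4)=4\cdot 4-5=11$ from the natural vertex order:
\begin{verbatim}
def greedy_csequence(vertices, edges, order):
    # place vertices in the given order, but take every available edge first
    placed, seq = set(), []
    remaining, edge_set = list(order), [tuple(e) for e in edges]
    while remaining or any(set(e) <= placed and e not in seq for e in edge_set):
        available = [e for e in edge_set if set(e) <= placed and e not in seq]
        if available:
            seq.append(min(available))  # lexicographic tie-break
        else:
            v = remaining.pop(0)
            placed.add(v)
            seq.append(v)
    return seq

def cost(seq, edges):
    pos = {el: i + 1 for i, el in enumerate(seq)}  # 1-based positions x(s)
    return sum(2 * pos[tuple(e)] - pos[e[0]] - pos[e[1]] for e in edges)

path_edges = [(1, 2), (2, 3), (3, 4)]
s = greedy_csequence([1, 2, 3, 4], path_edges, [1, 2, 3, 4])
print(s, cost(s, path_edges))  # cost 11 = 4*4 - 5
\end{verbatim}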
We call a c-sequence for $G$ with minimum cost {\bf economical} (an {\bf ec}-sequence). Let ${\cal C}'(G)$ be the set of ec-sequences for $G$ and let $c'(G)$ be its cardinality.
\begin{conjecture}
If $G$ is any connected graph, then $\mathbb{G}(G)$ contains ${\cal C}'(G)$.
\end{conjecture}
A good question is, for a specific graph, how to choose the vertex ordering $\lambda$ on which to run the greedy algorithm. For the path, the obvious two choices work.
\begin{lemma}
Let $n \geq 2$. Then $\nu(P_n)=4n-5$ and $c'(P_n) = 2$.
\end{lemma}
\begin{proof}
If $P_n$ is the $n$-path with $V = [n]$ and natural linear order, the greedy algorithm gives the c-sequence
$121'32'43'54'\cdots n (n-1)'$, where we write $k'$ for the edge $[k,k{+}1]$.
The first edge costs 3, while each subsequent edge costs 4. The unique nontrivial isomorphism of $P_n$ produces the other member of ${\cal C}'(P_n)$ by Lemma \ref{lm:iso}.
\end{proof}
For $K_{1,n}$, starting from the hub 0, all vertex orderings cause the greedy algorithm to give an $x$ with cost $(n+1)^2 - 1$. In fact, it is better to take $\lfloor n/2 \rfloor$ of the peripheral vertices (in any order), then the hub vertex, then fill in the available edges (all orders produce the same cost), and then continue with each remaining peripheral vertex followed by the unique newly available edge.
For $K_{1,5}$, one gets $x(012345) = (011'22'33'44'55')$ (dropping commas and letting $n'$ denote the edge $[0,n]$) and $\nu(x(012345))= 35 = (5+1)^2 - 1$. But $x(120345) = (1201'2'33'44'55')$ has cost $4+5+5+7+9=30$. Postponing 0 reduces later delay.
We note that for the path, if $x$ is an economical c-sequence, then $\beta(x) \leq 2$; but for the star, this does not hold.
The {\bf easy} sequences are the c-sequences obtained by listing all the vertices first and then all the edges. Once the vertices are listed, {\it cost is independent of the ordering of the edges}. It suffices to check this for the interchange of two adjacent edges. One of the edges moves left and so has its cost reduced by 2 while the edge that moves to the right has its cost increased by 2; hence, the sum remains constant.
\begin{lemma}
For $n \geq 2$, let $x \in {\cal C}(P_n)$ be any easy sequence which begins with $v_1, \ldots, v_n$ in the order they appear along the path. Then $\nu(x) = {{2n-1}\choose{2}}$.
\end{lemma}
\begin{proof}
Let $n \geq 2$ and put $x_0 := 1 \,2 \,3 \cdots n\, [n-1,n] \,[n-2,n-1] \cdots 23 \,12$. Then $x_0$ has cost $\nu(x_0) = 3 + 7 + \cdots + 4(n-1)-1 = {{2n-1}\choose{2}}$, so $\nu(x) = {{2n-1}\choose{2}}$.
\end{proof}
We have examples where the cost of an easy sequence for $P_n$, which begins with some random order of vertices, can be slightly higher or lower than ${{2n-1}\choose{2}}$.
\begin{lemma}
Let $n \geq 2$. Then $\nu(C_n)=6n-4$ and $c'(C_n) = 2n$.
\end{lemma}
\begin{proof}
The elements $x \in {\cal C}'(C_n)$ for $C_n = (v_1, e_{12}, v_2, \ldots, e_{n-1,n}, v_n,e_{n,1})$ begin as in $P_n$ but the last edge costs $2+(2n-1) = 2n+1$, so $\nu(x) = 4n-5+2n+1=6n-4$. Any of the $n$ edges of the cycle could be last, and either the clockwise or the counterclockwise orientation could occur, so $c'(C_n)=2n$.
\end{proof}
A different cost model could be formed by attributing cost to the {\it vertices}, rather than the edges. For $v \in V(G)$, let $E_v$ be the set of edges incident with $v$ and put
$$\kappa(v,x) := \Big( \sum_{e \in E_v} x(e) - x(v)\Big)\Big/\deg(v,G) \;\; \mbox{for}\;\; x \in {\cal C}(G).$$
Then $\kappa(x) := \sum_{v \in V} \kappa(v,x)$ and $\kappa(G) := \min_{x \in {\cal C}(G)} \kappa(x)$ are an alternative measure.
It would also be possible to explore using {\it maximum}, rather than summation, to get the cost of a graph from that of its edges (or vertices), as with $L_1$ vs $L_\infty$ norms.
Rather than follow a discrete model, it is natural to introduce time as a continuous variable.
Suppose $G = (V,E)$ is a graph and let $h: V \cup E \to [0,1]$, where for all $e = uw \in E$, $h(e) > \max(h(u),h(w))$.
One could then define a new edge cost $\tilde{\nu}(e, h)$
\begin{equation}
\tilde{\nu}(e, h) := 2h(e) - h(u) - h(w).
\end{equation}
One might allow an edge to exist just prior to the existence of one or both of its endpoints, if the process of implementing the edge has measurable temporal extent.
Choice of cost function may be influenced by application as we shall discuss. However, merely having a sharply curtailed repertoire of construction sequences may make it easier to find nice c-sequences. Perhaps $c'(G) < c'(H)$ implies $c(G) < c(H)$.
Currently, we don't know how much variation can occur in $c(G)$ among families of graphs with a fixed number of vertices and edges. Let ${\cal G}(p,q)$ be the set of all graphs $G=(V,E)$ with $V=[p]$ and $q = \#E$ and suppose $G \in {\cal F} \subseteq {\cal G}(p,q)$. Define the {\bf constructability} of $G$ in ${\cal F}$ to be $c(G)$ divided by the average over ${\cal F}$,
\begin{equation}
\xi(G, {\cal F}) := \frac{c(G)}{\alpha({\cal F})},\;\;\mbox{where} \;\alpha({\cal F}) := (\# {\cal F})^{-1}\sum_{H \in {\cal F}} c(H).
\end{equation}
The $n$-vertex trees with greatest and lowest constructability are stars and paths \cite{random-constr}. But the value of $\alpha({\cal F})$ for ${\cal F}$ the family of $n$-vertex trees is unknown. We think that diameter and maximum degree are inversely proportional for maximal planar or maximal outerplanar graphs of fixed order $|V|$. Do these invariants affect $c(G)$ analogously with trees?
Are paths and stars available as spanning trees for the extremal examples of constructability?
\section{Discussion}
Aside from their combinatorial relevance for the structure of incidence algebras \cite{stanley} or for enumeration and integer sequences \cite{monthly-prob, random-constr}, construction numbers of graphs might have a deeper theoretical aspect. A natural idea is to think of construction sequences as the outcome of a constrained stochastic process, where a graph {\it evolves} through the addition of new vertices and edges subject to the condition that an edge cannot appear before its endpoints. Any given graph thus could be ``enriched'' by knowledge of its history, either the linear order on its elements or their actual time of appearance. The existence of such histories might enable some new methods of proof - e.g., for the graph reconstruction problem of Ulam and Harary.
Practical applications would include operations research, where directed hyperedges describe complex tasks such as ``build an airbase'' which depend on various supporting infrastructures.
If a link should occur at some moment, one would like the necessary endpoints to happen just-in-time.
Graphs occur in many scientific contexts.
It would be interesting to study the actual construction sequences for the complex biochemical networks found in the organic kingdom. How close are they to being economical?
Brightwell \& Winkler \cite{bw91} showed that counting the linear extensions of a poset is $\#P$-complete and contrast this with randomized polynomial-time algorithms which estimate this number. Their conjecture that $\#P$-completeness holds even for height-2 posets was proved by Dittmer \& Pak \cite{dp2020}, who further included incidence posets of graphs. (Note that their order is the reverse of ours.)
Applications of linear extensions of posets to equidistributed classes of permutations were given by Bj\"orner \& Wachs \cite{bw91}, and Burrow \cite{burrow} has studied using traversals of posets representing taxonomies and concept lattices to construct algorithms for information databases.
Computer calculation of construction numbers to get sample values can aid in finding correct formulas (via the OEIS \cite{oeis}) for inductive proofs, but such computation is difficult due to the large number of permutations.
This might be flipped into an asset by utilizing the theoretical calculations here as a ``teacher'' for neural network or machine learning methods (cf. Talvitie et al. \cite{talvitie}). More ambitiously, a mathematically oriented artificial intelligence could be challenged to discover the formulas above, along with some of the others we would like to have.
There are other ways to build graphs - one could attach stars by adding a vertex and all of its edges to any existing vertex or, for 2-connected graphs, one could attach {\it ears} (i.e., paths attached only by their endpoints). For random graphs, probabilities might depend on history. How are these various approaches related?
|
{
"arxiv_id": "2302.13250",
"language": "en",
"timestamp": "2023-02-28T02:14:10",
"url": "https://arxiv.org/abs/2302.13250",
"yymm": "2302"
} | \section{Introduction}
Throughout this paper, all groups are finite and $G$ always denotes
a finite group; ${\cal L}(G)$ is the lattice of all subgroups of $G$.
Moreover,
$\mathbb{P}$ is the set of all primes and
if
$n$ is an integer, the symbol $\pi (n)$ denotes
the set of all primes dividing $n$; as usual, $\pi (G)=\pi (|G|)$, the set of all
primes dividing the order of $G$;
A subgroup $A$ of $G$ is said to be \emph{quasinormal} (Ore)
or \emph{permutable} (Stonehewer)
in $G$
if $A$ permutes with every subgroup $H$ of $G$, that is, $AH=HA$; \emph{Sylow
permutable} or \emph{$S$-permutable} \cite{prod, ill} if $A$ permutes
with all Sylow subgroups of $G$.
A group $G$ is said to be a \emph{$T$-group} if normality is a transitive
relation on $G$, that is, if $H$ is a normal subgroup of $K$ and $K$ is a
normal subgroup
of $G$, then $H$ is a normal subgroup of $G$. In other words, the group $G$
is a $T$-group if every subnormal subgroup of $G$ is normal in $G$.
The description of $T$-groups was first obtained by Gasch\"{u}tz \cite{gasch}
for the soluble case, and
by Robinson in \cite{217}, for the general case.
The works \cite{gasch, 217} aroused great interest in the
further study of $T$-groups
and of groups in which some condition of generalized normality is
transitive: $PT$-groups, i.e. groups in which quasinormality
is transitive; $PST$-groups, i.e. groups in which Sylow permutability is
transitive; $P\sigma T$-groups, i.e. groups in which $\sigma$-permutability is
transitive (see Section 2 below); groups in which modularity is transitive; etc.
However, there are many unsolved problems in this direction,
and in this article we discuss some of them.
\section{$P\sigma T$-groups}
In what follows, $\sigma =\{\sigma_{i} \mid
i\in I \}$ is some partition of $\mathbb{P}$, that is,
$\Bbb{P
}=\bigcup_{i\in I} \sigma_{i}$
and $\sigma_{i}\cap
\sigma_{j}= \emptyset $ for all $i\ne j$.
A \emph{$ \sigma $-property} of a group $G$ \cite{ProblemI, 1, 2, commun} is any
of its properties which does not depend on the choice of the partition $ \sigma $
of $ \mathbb {P}$. In other words, in the
theory of $ \sigma $-properties of a group, we do not impose any restrictions on the
partition $\sigma$ of $\mathbb{P}$.
Before continuing, recall some basic concepts of the theory of $ \sigma $-properties
of a group (see \cite{ProblemI, 1, 2, commun}).
If
$n$ is an integer,
then $\sigma (n)= \{ \sigma_{i}\mid \sigma_{i} \cap \pi (n)\ne \emptyset\}$
and $\sigma (G)= \sigma (|G|)$.
A group $G$ is said to be: \emph{$\sigma$-primary} if $G$ is a $\sigma _{i}$-group
for some $i$; \emph{$\sigma$-nilpotent} if $G$ is a direct product
of $\sigma$-primary groups; \emph{$\sigma$-soluble} if every chief factor of $G$
is $\sigma$-primary.
A subgroup $A$ of $G$ is said to
be:
(i) \emph{$\sigma$-subnormal}
in $G$ if there is a subgroup chain
$$A=A_{0} \leq A_{1} \leq \cdots \leq
A_{n}=G$$ such that either $A_{i-1} \trianglelefteq A_{i}$ or
$A_{i}/(A_{i-1})_{A_{i}}$ is ${\sigma}$-primary
for all $i=1, \ldots , n$;
(ii) \emph{$\sigma$-seminormal
in $G$} (J.C. Beidleman) if $x\in N_{G}(A)$ for all $x\in G$ such that
$\sigma (|x|)\cap \sigma (A)=\emptyset$;
(iii) \emph{$\sigma$-permutable}
in $G$ if either $A \trianglelefteq G$ or
$G$ is \emph{$\sigma$-full}, that is, $G$ has a Hall $\sigma _{i}$-subgroup
for every $\sigma _{i}\in \sigma (G)$ and $A$ permutes with all such Hall
subgroups of $G$.
{\bf Example 2.1.} (i) In the classical case
$\sigma =\sigma ^{1}=\{\{2\}, \{3\}, \{5\}, \ldots
\}$ (we use here and below the notations in
\cite{alg12, 3}) a subgroup $A$ of $G$ is ${\sigma} ^{1}$-subnormal in $G$
if and only if $A$ is subnormal in $G$; $A$ is ${\sigma} ^{1}$-permutable in $G$
if and only if $A$ is Sylow permutable in $G$. A group $G$ is ${\sigma} ^{1}$-soluble
(${\sigma} ^{1}$-nilpotent) if and only if $G$ is soluble
(respectively, nilpotent).
(ii) In the other classical case $\sigma =\sigma ^{\pi}=\{\pi,
\pi'\}$, $\pi\subseteq \Bbb{P}$, a group $G$ is ${\sigma} ^{\pi}$-soluble
(${\sigma} ^{\pi}$-nilpotent) if and only if $G$ is $\pi$-separable
(respectively, $\pi$-decomposable, that is, $G=O_{\pi}(G)\times O_{\pi'}(G)$).
A subgroup $A$ of $G$ is
${\sigma} ^{\pi}$-subnormal in $G$ if and only
if $G$ has a subgroup chain
$A=A_{0} \leq A_{1} \leq \cdots \leq
A_{n}=G$ such that either $A_{i-1} \trianglelefteq A_{i}$, or
$A_{i}/(A_{i-1})_{A_{i}}$ is a ${\pi}$-group, or
$A_{i}/(A_{i-1})_{A_{i}}$ is a ${\pi}'$-group for all $i=1, \ldots , n$.
In this case we say that $A$ is
\emph{$\pi, \pi'$-subnormal} in $G$.
A subgroup $A$ of $G$ is ${\sigma} ^{\pi}$-permutable in $G$ if and only
if $G$ has a Hall $\pi$-subgroup and a Hall $\pi'$-subgroup and $A$ permutes with all
such Hall subgroups of $G$.
In this case we say that $A$ is
\emph{$\pi, \pi'$-permutable} in $G$.
(iii) In fact, in the theory of $\pi$-soluble groups ($\pi= \{p_{1}, \ldots , p_{n}\}$)
we deal with the partition
$\sigma =\sigma ^{1\pi }=\{\{p_{1}\}, \ldots , \{p_{n}\}, \pi'\}$ of $\Bbb{P}$.
A group $G$ is ${\sigma} ^{1\pi}$-soluble
(${\sigma} ^{1 \pi}$-nilpotent) if and only if $G$ is $\pi$-soluble
(respectively, $\pi$-special \cite{alg12, 3}, that is,
$G=O_{p_{1}}(G) \times \cdots \times O_{p_{n}}(G) \times O_{\pi'}(G)$).
A subgroup $A$ of
$G$ is $\sigma^{1\pi }$-subnormal in $G$ if and only if $G$ has
a subgroup chain
$A=A_{0} \leq A_{1} \leq \cdots \leq
A_{n}=G$ such that either $A_{i-1} \trianglelefteq A_{i}$ or
$A_{i}/(A_{i-1})_{A_{i}}$ is a ${\pi}'$-group for all $i=1, \ldots , n$.
In this case we say that $A$ is \emph{$\pi$-subnormal} in $G$.
In fact, the appearance of the theory of $ \sigma $-properties
of a group was
connected chiefly with attempts to solve the following very difficult problem.
{\bf Question 2.2} (See Question in \cite{1}).
{\sl What is the structure of a $\sigma$-full group $G$ in which
$\sigma$-permutability is transitive on $G$, that is,
if $H$ is a $\sigma$-permutable subgroup of $K$ and $K$ is a $\sigma$-permutable
subgroup
of $G$, then $H$ is a $\sigma$-permutable subgroup of $G$?
}
This problem turned out to be difficult even in the $\sigma$-soluble
case; its
solution required the development of many aspects of the
theory of $\sigma$-properties of a
group. The theory of $\sigma$-soluble $P\sigma T$-groups
was mainly developed in the papers
\cite{1, alg12, adar, 4, 2021, 444, 555, Ad} and the
following theorem (which, in fact, is the main result of the papers \cite{1, alg12})
is the key result in this direction.
{\bf Theorem 2.3 } (See Theorem A in
\cite{alg12}). {\sl If $G$ is a $\sigma$-soluble
$P\sigma
T$-group and $D=G^{\frak{N_{\sigma}}}$,
then
the following conditions hold:}
(i) {\sl $G=D\rtimes M$, where $D$ is an abelian Hall
subgroup of $G$ of odd order, $M$ is $\sigma$-nilpotent and every element of $G$ induces a
power automorphism in $D$; }
(ii) {\sl $O_{\sigma _{i}}(D)$ has
a normal complement in a Hall $\sigma _{i}$-subgroup of $G$ for all $i$.}
{\sl Conversely, if Conditions (i) and (ii) hold for some subgroups $D$ and $M$ of
$G$, then $G$ is a $\sigma$-soluble
$P\sigma T$-group.}
In this theorem, $G^{\frak{N_{\sigma}}}$ is the \emph{$\sigma$-nilpotent
residual} of $G$,
that is, the intersection of all normal subgroups $N$ of $G$ with
$\sigma$-nilpotent quotient $G/N$.
In the case
$\sigma =\sigma ^{1}=\{\{2\}, \{3\}, \{5\}, \ldots
\}$, we get from Theorem 2.3 the following known result.
{\bf Corollary 2.4} (Agrawal \cite[Theorem 2.3]{Agr}). {\sl Let
$D=G^{\frak{N}}$ be
the nilpotent residual of $G$. If $G$ is a soluble $PST$-group,
then $D$ is an
abelian Hall subgroup of $G$ of odd order and every element
of $G$ induces a power automorphism
in $D$. }
In order to consider some further
applications of this Theorem 2.3, we introduce the following concepts.
{\bf Definition 2.5.} Let $\mathfrak{X}$ be a class of groups. Suppose that
with each group $G\in \mathfrak{X}$ we
associate some system of
its subgroups $\tau(G)$. Then we say that $\tau$ is a \emph{subgroup
functor} in the sense of Skiba
\cite{I} or $\tau$ is a \emph{Skiba subgroup functor} on $\mathfrak{X}$ \cite{II}
if the following conditions are met:
(1) $G\in\tau(G)$ for any group $G$;
(2) for any epimorphism $\varphi :A\mapsto B$, where
$A, B \in \mathfrak{X}$, and for any groups $H\in\tau(A)$ and $T\in\tau(B)$,
we have $H^{\varphi}\in \tau(B)$ and $T^{{\varphi}^{-1}}\in \tau(A)$.
In what follows, $\tau$ is some subgroup
functor on $\mathfrak{X}$ in the sense of Skiba.
If $A\in \tau(G)$, then we say that $A$ is a \emph{$\tau$-subgroup} of $G$.
If $\mathfrak{X}$ is the class of all groups, then instead of "subgroup
functor on $\mathfrak{X}$" we will simply say "subgroup
functor".
We say also that a subgroup functor $\tau$ on $\mathfrak{X}$
is \emph{$\sigma$-special}
if for any group $G\in \mathfrak{X}$
the following three conditions hold:
(*) Each $\sigma$-subnormal $\tau$-subgroup of $G$ is
$\sigma$-permutable in $G$, and
(**) $\langle A, B \rangle\in \tau (G)$ for any two $\sigma$-subnormal
subgroups $A, B \in \tau (G)$ of $G$,
(***) If $G=D\rtimes M$ is a $\sigma$-soluble $P\sigma T$-group, where
$D=G^{\frak{N_{\sigma}}}$ and $A$
is a $\sigma$-primary $\sigma$-subnormal subgroup of $G$ such that
$A \in \tau (M)$, then $A \in \tau (G)$.
{\bf Lemma 2.6.} {\sl Suppose that $G=D\rtimes M$ is a
$\sigma$-soluble $P\sigma T$-group, where
$D=G^{\frak{N_{\sigma}}}$. If $A$
is a $\sigma$-primary $\sigma$-subnormal subgroup of $G$ such that
$A \leq M$, then $D\leq C_{G}(A)$. }
{\bf Proof.} Let $A$ be a $\sigma _{i}$-group and $x$ an element
in $D$ of prime power order $p^{n}$. The group $G$ is $\sigma$-soluble,
so $G$ has a Hall $\sigma _{k}$-subgroup $H_{k}$ for all $k$
by Theorem B in \cite{2}. In view of Theorem
2.3, $H_{k}=O_{\sigma _{k}}(D)\times S_{k}$, where $S_{k}$ is a complement to $O_{\sigma _{k}}(D)$ in $H_{k}$.
Since $A$ is
$\sigma$-subnormal in $G$, $A \leq H_{i}$ by Lemma 2.30(7) below. On the other
hand, since
$A \leq M$, $A\cap D=1$. Therefore $A=(A\cap O_{\sigma _{i}}(D)) \times (A\cap S_{i})$
since
$O_{\sigma _{i}}(D)$ is a Hall subgroup of $D$ and of $G$,
so $A\leq S_{i}$ and hence $ O_{\sigma _{i}}(D)\leq C_{G}(A)$. Now, let $k\ne i$.
Then
$A$ is a Hall $\sigma _{i}$-subgroup of $V:=O_{\sigma _{k}}(D)A$ and $A$ is
$\sigma$-subnormal
in $V$ by Lemma 2.30(1) below, so $V=O_{\sigma _{k}}(D)\times A$ by
Lemma 2.30(6) below
and hence
$ O_{\sigma _{k}}(D)\leq C_{G}(A)$.
The lemma is proved.
{\bf Lemma 2.7.} {\sl Let $N\leq A$ be subgroups of a $\sigma$-full
group $G$, where $N$ is normal in $G$. Suppose that $ \{\sigma _{1}, \ldots ,
\sigma _{t}\}=\sigma (G)$
and $H_{i}$ is a Hall $\sigma _{i}$-subgroup of $G$ for
all $i=1, \ldots, t$.}
(1) {\sl If $AH_{i}^{x}=H_{i}^{x}A$ for all $i=1, \ldots, t$ and all $x\in G$, then $A$
is $\sigma$-permutable in $G$.}
(see Proposition 1.1 in \cite{2019}).
(2) {\sl $A/N$ is $\sigma$-permutable in $G/N$ if and only if $A$ is
$\sigma$-permutable in $G$}.
{\bf Proof.} First note that
$H_{i}N/N $ is a Hall $\sigma _{i}$-subgroup of $G/N$ for
every $\sigma _{i}\in \sigma (G/N)$, so $G/N$ is $\sigma$-full.
If $AH_{i}^{x}=H_{i}^{x}A$ for all $i=1, \ldots, t$ and all $x\in G$, then
$$AH_{i}^{x}/N= (A/N)(H_{i}N/N)^{xN}=(H_{i}N/N)^{xN}(A/N)=H_{i}^{x}A/N$$
for all $i=1, \ldots, t$ and all $xN\in G/N$. Therefore if
$A$ is
$\sigma$-permutable in $G$, then $A/N$ is $\sigma$-permutable in $G/N$ by Part (1).
Similarly, if $A/N$ is $\sigma$-permutable in $G/N$, then $A$ is
$\sigma$-permutable in $G$ by Part (1).
The lemma is proved.
{\bf Example 2.8. } Let $\mathfrak{X}$ be the class of all $\sigma$-full
groups.
(1) Let, for any $\sigma$-full group $G$, $ \tau (G)$ be the set of all
$\sigma$-permutable subgroups of $G$. Then, in view of Lemma 2.7(2),
$\tau$ is a subgroup
functor in the sense of Skiba and, clearly Condition (*) holds for any
group $G\in \mathfrak{X}$. Moreover, in view of \cite[A, 1.6(a)]{DH},
Condition (**)
holds for any group $G$. Finally,
Condition (***) holds in any $P\sigma T$-group by Theorem 2.3.
(2) Recall that a subgroup $M$ of $G$ is said to be \emph{ modular} in $G$
if $M$ is a modular element (in the sense of
Kurosh \cite[p. 43]{Schm}) of the lattice ${\cal L}(G)$, that is,
(i) $\langle X,M \cap Z \rangle=\langle X, M \rangle \cap Z$ for all $X \leq G, Z \leq
G$ such that $X \leq Z$, and
(ii) $\langle M, Y \cap Z \rangle=\langle M, Y \rangle \cap Z$ for all $Y \leq G, Z \leq
G$ such that $M \leq Z$.
Let, for any $\sigma$-full group $G$, $ \tau (G)$ be the set of all
modular subgroups of $G$. Then, in view of
\cite[p. 201, Properties (3), (4)]{Schm},
$\tau$ is a subgroup
functor in the sense of Skiba.
Now, we show that the functor $\tau$ is $\sigma$-special. Indeed, if $A$ is a
$\sigma$-subnormal modular subgroup of a $\sigma$-full group $G$, then
$A$ is $\sigma$-permutable in $G$ by Theorem 3.3 (i) below, so Condition (*)
holds for $G$. Next let $A$ and $B$ be $\sigma$-subnormal modular subgroups of $G$.
Then
$\langle A, B \rangle$ is modular in $G$ by \cite[p. 201, Property (5)]{Schm}, so
Condition (**) holds for $G$.
Finally, suppose that $G=D\rtimes M$ is a $\sigma$-soluble $P\sigma T$-group, where
$D=G^{\frak{N_{\sigma}}}$, and let $A$
be a $\sigma$-primary $\sigma$-subnormal subgroup of $G$ such that
$A \in \tau (M)$, that is, $A$ is modular in $M$. We show
that in this case
we have $A \in \tau (G)$, that is, $A$ is modular in $G$.
In view of Lemma 5.1.13 in \cite{Schm}, it is enough to show that if $x$
is an element of $G$ of prime
power order $p^{n}$, then $A$ is modular in $\langle x, A \rangle$.
If $x\in D$, it is clear. Now assume that $x\not \in D$ and so $x\in M^{d}$ for
some $d\in D$ since $M$ is a Hall subgroup of $G$. But $A$ is modular in $M$ and
so $A$ is modular in $M^{d}$ since $A^{d}=A$ by Lemma 2.6. Therefore
$A$ is modular in $\langle x, A \rangle$.
Hence Condition (***) holds for $G$, so $\tau$ is a
$\sigma$-special subgroup functor on $\mathfrak{X}$.
(3) Let, for any group $G$, $ \tau (G)$ be the set of all
normal subgroups of $G$. Then a subgroup functor $\tau$ is
$\sigma$-special (see Part (2)).
{\bf Lemma 2.9 } (See Corollary 2.4 and Lemma 2.5 in \cite{1}). {\sl
The class of all $\sigma$-nilpotent groups
${\mathfrak{N}}_{\sigma}$ is closed under taking
products of normal subgroups, homomorphic images and subgroups.
Moreover, if $E$ is a normal subgroup of $G$ and $E/(E\cap \Phi (G))$
is $\sigma$-nilpotent, then $E$
is $\sigma$-nilpotent. }
{\bf Lemma 2.10 } (See Proposition 2.3 in \cite{1}). {\sl
A group $G$ is $\sigma$-nilpotent if and only if every subgroup of $G$ is
$\sigma$-subnormal in $G$.
}
Now we prove the following result.
{\bf Theorem 2.11. } {\sl Suppose that $G$ is a $\sigma$-soluble group with
$D=G^{\frak{N_{\sigma}}}$ and let
$\tau$ be a $\sigma$-special subgroup functor on the set of all $\sigma$-full groups
$\mathfrak{X}$.
If every $\sigma$-subnormal subgroup of $G$ is a $\tau$-subgroup of $G$, then $G$ is a
$P\sigma
T$-group and the following conditions hold:}
(i) {\sl $G=D\rtimes M$, where $D$ is an abelian Hall
subgroup of $G$ of odd order, $M$ is a $\sigma$-nilpotent group with $U\in \tau (M)$
for all subgroups $U$ of $M$, and every element of $G$ induces a
power automorphism in $D$; }
(ii) {\sl $O_{\sigma _{i}}(D)$ has
a normal complement in a Hall $\sigma _{i}$-subgroup of $G$ for all $i$.}
{\sl Conversely, if Conditions (i) and (ii) hold for some subgroups $D$ and $M$ of
$G$, then every $\sigma$-subnormal subgroup of $G$ belongs to $\tau (G)$.}
{\bf Proof.} First assume, arguing
by contradiction, that Conditions
(i) and (ii) hold for some subgroups $D$ and $M$ of $G$ but $G$ has a
$\sigma$-subnormal
subgroup $U$ such that $U\not \in \tau (G)$. Moreover,
we can assume that $G$ is a counterexample with $|G|+|U|$ minimal. Then
$U_{0} \in \tau (G)$
for every
$\sigma$-subnormal subgroup $U_{0}$ of $G$ such that $U_{0} < U$.
(1) $U\cap D=1$.
Indeed, assume that $V:=U\cap D \ne 1$. Then $G/V=(D/V)\rtimes (MV/V)$, where
$$D/V=G^{\frak{N_{\sigma}}}/V=
(G/V)^{\frak{N_{\sigma}}}$$ is an abelian Hall
subgroup of $G/V$ of odd order, $MV/V\simeq M$ is a $\sigma$-nilpotent
group in which every subgroup
belongs to $\tau (MV/V)$ and every element of $G/V$ induces a
power automorphism in $D/V$. It is clear also that
$O_{\sigma _{i}}(D/V)$ has
a normal complement in a Hall $\sigma _{i}$-subgroup of $G/V$ for all $i$. Therefore
Conditions
(i) and (ii) hold for the subgroups $D/V$ and $MV/V$ of $G/V$. Therefore
$U/V \in \tau (G/V)$
for the
$\sigma$-subnormal subgroup $U/V$ of $G/V$ by the choice of $G$, so $U \in \tau (G)$ by
the definition of the subgroup functor $\tau$. This contradiction shows that we have
(1).
(2) {\sl $U$ is a $\sigma _{i}$-group for some $i$.}
In view of Lemma 2.9 and Claim (1), $U\simeq UD/D$ is $\sigma$-nilpotent.
Then $U$ is the
direct product of some $\sigma$-primary non-identity
groups $U_{1}, \ldots , U_{t}$. If $U_{1}\ne U$, then $U_{1}, \ldots , U_{t}\in
\tau (G)$ by the choice of $U$ and so $U=U_{1}\times \cdots \times U_{t}\in
\tau (G)$ since $\tau$ is $\sigma$-special by hypothesis, a contradiction. Hence
$U=U_{1}$ is a $\sigma _{i}$-group for some $i$.
(3) {\sl $U\leq M$. In particular, $U \in \tau (M)$. }
Let $\pi =\pi (D)$. Then $G$ is $\pi$-separable by Condition (i)
and $U$ is a $\pi'$-group since $D$ is a
normal Hall $\pi$-subgroup of $G$ by hypothesis and $U\cap D=1$ by Claim (1).
Then $U^{d}\leq M$ for some $d\in D$,
where $d^{-1}\in C_{G}(U^{d})$ by Lemma 2.6 and so $U\leq M$.
Therefore $U \in \tau (M)$ by hypothesis
and so $U \in \tau (G)$ since $\tau$ is $\sigma$-special
by hypothesis and $G$ is a $\sigma$-soluble $P\sigma T$-group by
Conditions (i) and (ii)
and Theorem 2.3. This contradiction completes the proof of the sufficiency
of the condition of the theorem.
Now assume that
every $\sigma$-subnormal subgroup of a $\sigma$-soluble group $G$
belongs to the set $\tau (G). $ We show that in this case
Conditions (i) and (ii) hold for $G$. First note that
every $\sigma$-subnormal subgroup of $G$ is $\sigma$-permutable in $G$ since
$\tau$ is $\sigma$-special by hypothesis. Therefore $G$ is a $\sigma$-soluble $P\sigma T$-group
and
so, in view of Theorem 2.3, $G=D\rtimes M$, where $ D=G^{\frak{N_{\sigma}}}$
and Conditions (i) and (ii) in Theorem 2.3 hold for $D$ and $M$. Therefore
we have only to show that $H\in \tau (M) $ for every subgroup $H$ of $M$.
Since
$G/D$ is $\sigma$-nilpotent by Lemma 2.9, every subgroup $H/D\leq G/D$ is
$\sigma$-subnormal in $G/D$ by Lemma 2.10 and then, by Lemma 2.30(3) below,
$H$ is $\sigma$-subnormal
in $G$ and so $H\in \tau (G)$. But then $H/D\in \tau (G/D)$. Therefore for every
subgroup $H/D$ of $G/D$ we have $H/D\in \tau (G/D)$, so
for every
subgroup $V$ of $M$ we have $V\in \tau (M)$ since $M\simeq G/D$. Therefore Conditions
(i) and (ii) hold for $G$.
The theorem is proved.
We say that $G$ is a $Q\sigma T$-group if every
$\sigma $-subnormal subgroup of $G$ is modular in $G$.
It is clear that the lattice of all subgroups ${\cal L}(G)$ of the group $G$ is modular
if and only if every subgroup of $G$ is modular in $G$. Therefore, in view
of Example 2.8(2), we get from Theorem 2.11 the following known result.
{\bf Corollary 2.12} (Hu, Huang and Skiba \cite{Hu12}). {\sl
A group $G$ with
$D=G^{\frak{N_{\sigma}}}$ is a $\sigma$-soluble $ Q\sigma T$-group if and only if
the following conditions hold:}
(i) {\sl $G=D\rtimes L$, where $D$ is an abelian Hall
subgroup of $G$ of odd order, $L$ is $\sigma$-nilpotent and
the lattice
of all subgroups ${\cal L}(L)$ of $L$ is modular,}
(ii) {\sl every element of $G$ induces a
power automorphism in $D$, and }
(iii) {\sl $O_{\sigma _{i}}(D)$ has
a normal complement in a Hall $\sigma _{i}$-subgroup of $G$ for all $i$.}
In view of Example 2.8(3), we get from Theorem 2.11 in the case when
$\sigma =\sigma ^{1}=\{\{2\}, \{3\}, \{5\}, \ldots
\}$ the following classical result.
{\bf Corollary 2.13 } (Gasch\"{u}tz \cite{gasch}). {\sl A group
$G$ is a soluble $T$-group if and
only if the following conditions are satisfied:}
(i) {\sl the nilpotent residual $L$ of $G$ is an abelian Hall subgroup of odd order,}
(ii) {\sl $G$ acts by conjugation on $L$ as a group of power automorphisms, and }
(iii) {\sl $G/L$ is
a Dedekind group}.
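Recall that a \emph{Dedekind group} is a group in which every subgroup is normal; this notion
also occurs in Corollaries 2.16 and 2.17 below.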
Every quasinormal subgroup is clearly modular in the group. Moreover,
the following remarkable fact is well-known.
{\bf Theorem 2.14} (Schmidt \cite[Theorem 5.1.1]{Schm}). {\sl A subgroup $A$ of $G$ is
quasinormal in $G$ if and only if $A$ is modular and subnormal in $G$}.
Recall that an \emph{Iwasawa group} is a group
in which every subgroup is quasinormal.
In view of Example 2.8(2) and Theorem 2.14, we get from Theorem 2.11
in the case when
$\sigma =\sigma ^{1}=\{\{2\}, \{3\}, \{5\}, \ldots
\}$ the following well-known result.
{\bf Corollary 2.15 } (Zacher \cite{zaher}). {\sl A group
$G$ is a soluble $PT$-group if and
only if the following conditions are satisfied:}
(i) {\sl the nilpotent residual $L$ of $G$ is an abelian Hall
subgroup of odd order,}
(ii) {\sl $G$ acts by conjugation on $L$ as a group of
power automorphisms, and }
(iii) {\sl $G/L$ is an Iwasawa group}.
We say, following \cite{555}, that $G$ is a $T_{\sigma}$-group if every $\sigma$-subnormal
subgroup of $G$ is normal.
In view of Example 2.8(3), we get from Theorem 2.11
the following known result.
{\bf Corollary 2.16} (Zhang, Guo and Liu \cite{555}).
{\sl A $\sigma$-soluble group $G$ with
$D= G^{{\mathfrak{N}}_{\sigma}}$ is a $T_{\sigma}$-group if and only if
the following conditions hold:}
(i) {\sl $G=D\rtimes L$, where $D$ is an abelian Hall
subgroup of $G$ of odd order, and $L$ is a Dedekind group,}
(ii) {\sl every element of $G$ induces a
power automorphism in $D$, and }
(iii) {\sl $O_{\sigma _{i}}(D)$ has
a normal complement in a Hall $\sigma _{i}$-subgroup of $G$ for all $i$.}
{\bf Corollary 2.17} (Ballester-Bolinches, Pedraza-Aguilera and P\'{e}rez-Calabuig
\cite{Ad}). {\sl A $\sigma$-soluble group $G$ is a $T_{\sigma}$-group if and only if
$G$ is a $T$-group and the Hall $\sigma _{i}$-subgroups of $G$ are Dedekind
for all $i\in I$.}
{\bf Proof. } First assume that $G$ is a
$\sigma$-soluble $T_{\sigma}$-group. Then $G$ satisfies Conditions (i) and (ii) in
Corollary 2.16, so $G$ is a $T$-group by Corollary 2.13. On the other hand, for every Hall
$\sigma _{i}$-subgroup $H$ of $G$ we have $H=O_{\sigma _{i}}(D)\times S$, where $O_{\sigma _{i}}(D)$
is a Hall subgroup of $D$ and $D$ is a Hall subgroup of $G$, so $H$ is a Dedekind group
since $S\simeq DS/D$ and $D$ are Dedekind.
Finally, suppose that $G$ is a soluble $T$-group and the
Hall $\sigma _{i}$-subgroups of $G$ are Dedekind
for all $i\in I$, and let $D=G^{\mathfrak{N}_{\sigma}}$. Then $G/D$ is the direct product of its
Hall $\sigma _{i}$-subgroups, which are Dedekind as epimorphic images of Hall
$\sigma _{i}$-subgroups of $G$ and have pairwise coprime orders; hence $G/D$ is Dedekind and,
in particular, nilpotent, so
$D=G^{\mathfrak{N}}$. Therefore $G$ is a $\sigma$-soluble $T_{\sigma}$-group.
The corollary is proved.
In the recent papers \cite{preprI, alg2023}, a description of $P\sigma T$-groups $G$
was obtained for the case when every Hall $\sigma _{i}$-subgroup of
$G$ is either supersoluble or a $PST$-group for all $\sigma _{i}\in \sigma (G)$.
Our next goal is to discuss some results of these two papers.
{\bf Definition 2.18.} We say that
$(D, Z(D); U_{1}, \ldots , U_{k})$ is a
\emph{Robinson $\sigma$-complex } (a \emph{Robinson complex} in the case where
$\sigma =\sigma ^{1}=
\{\{2\}, \{3\}, \{5\}, \ldots \}$) of $G$ if $D\ne 1$ is a normal subgroup
of $G$ such that:
(i) $D/Z(D)=U_{1}/Z(D)\times \cdots \times U_{k}/Z(D)$, where $U_{i}/Z(D)$ is a
simple non-$\sigma$-primary
chief factor of $G$, $Z(D)\leq \Phi(D)$, and
(ii) every chief factor of $G$ below $Z(D)$ is cyclic.
{\bf Example 2.19.} (i) Let $G=SL(2, 7)\times A_{7}\times A_{5}\times B$,
where $B=C_{43}\rtimes C_{7}$
is a non-abelian group of order 301 and let
$\sigma =\{\{2, 3, 5\}, \{7, 43\}, \{2, 3, 5, 7, 43\}'\}$.
Then $$(SL(2, 7)\times A_{7}, Z(SL(2, 7)); SL(2, 7), A_{7}Z(SL(2, 7)))$$
is a Robinson $\sigma $-complex of $G$ and
$$(SL(2, 7)\times A_{7}\times A_{5}, Z(SL(2, 7)); SL(2, 7), A_{7}Z(SL(2, 7)),
A_{5}Z(SL(2, 7)))$$ is a Robinson complex of $G$.
(ii) If $(D, Z(D); U_{1}, \ldots , U_{k})$ is a
Robinson $\sigma ^{\pi}$-complex of $G$ (see Example 2.1(ii)), then $U_{i}/Z(D)$ is
neither
a $\pi$-group nor a $\pi'$-group and we say in this case that
$(D, Z(D); U_{1}, \ldots , U_{k})$ is a
Robinson \emph{$\pi, \pi'$-complex} of $G$.
(iii) If $(D, Z(D); U_{1}, \ldots , U_{k})$ is a
Robinson $\sigma ^{1\pi}$-complex of $G$ (see Example 2.1(iii)), then $U_{i}/Z(D)$ is
neither
a $\pi'$-group nor a $p$-group for all $p\in \pi$ and we say in this case that
$(D, Z(D); U_{1}, \ldots , U_{k})$ is a
Robinson \emph{$\pi$-complex} of $G$.
Let $\pi\subseteq \mathbb{P}$. If $\pi=\emptyset$, then we put $O_{\pi}(G)=O_{\emptyset}(G)=1$. We say that \emph{$G$ satisfies
${\bf N}_{\pi}$} if whenever $N$ is a soluble normal
subgroup of $G$, $\pi'$-elements of $G$ induce power automorphisms in
$O_{\pi}(G/N)$. We also say, following \cite[2.1.18]{prod}, that
\emph{$G$ satisfies
${\bf N}_{p}$} if whenever $N$ is a soluble normal
subgroup of $G$, $p'$-elements of $G$ induce power automorphisms in
$O_{p}(G/N)$.
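Note that a power automorphism maps every subgroup onto itself. Thus, if $G$ satisfies
${\bf N}_{\pi}$, $N$ is a soluble normal subgroup of $G$ and $x$ is a $\pi'$-element of $G$,
then $(U/N)^{x}=U/N$ for every subgroup $U/N$ of $O_{\pi}(G/N)$; this observation is used,
for instance, in the proof of Lemma 2.42 below.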
Our next goal here is to prove the following fact.
{\bf Theorem 2.20.} {\sl Suppose that $G$ is a $\sigma$-full group and every
Hall $\sigma _{i}$-subgroup
of $G$ is either supersoluble or a $PST$-group for all $i\in I$.
Then $G$ is a $P\sigma T$-group if
and only if $G$ has a normal subgroup $D$ such that:}
(i) {\sl $G/D$ is a $\sigma$-soluble $P\sigma T$-group, }
(ii) {\sl if $D\ne 1$, $G$ has a Robinson $\sigma$-complex
$(D, Z(D); U_{1}, \ldots , U_{k})$, and }
(iii) {\sl for any set $\{j_{1}, \ldots , j_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $G$ and $G /U_{j_{1}}'\cdots U_{j_{r}}'$ satisfy
${\bf N}_{\sigma _{i}}$ for all $\sigma _{i}\in \sigma (Z(D))$.}
In view of Example 2.1(iii), we get from Theorem 2.20 the following
{\bf Corollary 2.21.} {\sl Suppose that $G$ has a Hall $\pi'$-subgroup, where
$\pi= \{p_{1}, \ldots , p_{n}\}$,
and all such Hall subgroups of $G$
are supersoluble. Then the condition
$\pi$-permutability is a transitive relation on $G$ if and only if
$G$ has a normal subgroup $D $ such that:}
(i) {\sl $G/D$ is $\pi $-soluble and the condition
$\pi$-permutability is a transitive relation on $G/D$, }
(ii) {\sl if $D\ne 1$, $G$ has a Robinson $\pi$-complex
$(D, Z(D); U_{1}, \ldots , U_{k})$, and }
(iii) {\sl for any set $\{j_{1}, \ldots , j_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $G$ and $G /U_{j_{1}}'\cdots U_{j_{r}}'$ satisfy
${\bf N}_{p}$ for all $p\in \pi(Z(D))$ and, also,
${\bf N}_{\pi'}$ for the case $O_{\pi'}(Z(D))\ne 1$. }
In view of Example 2.1(i), we get from Theorem 2.20 the following
{\bf Corollary 2.22} (Robinson \cite{217}). {\sl A group
$G$ is a $PST$-group if
and
only if $G$ has a perfect normal subgroup $D$ such that:}
(i) {\sl $G/D$ is a soluble $PST$-group, }
(ii) {\sl if $D\ne 1$, $G$ has a Robinson complex
$(D, Z(D); U_{1}, \ldots , U_{k})$, and }
(iii) {\sl for any set $\{j_{1}, \ldots , j_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $G$ and $G /U_{j_{1}}'\cdots U_{j_{r}}'$ satisfy
${\bf N}_{p}$ for all $p\in \pi (Z(D))\cap \pi$.}
Theorem 2.20 has also many other consequences.
In particular, in view of Example 2.1(ii), we get from Theorem 2.20 the following
{\bf Corollary 2.23.} {\sl Suppose that $G$ has a Hall $\pi$-subgroup
and Hall $\pi'$-subgroup and all such Hall subgroups of $G$
are supersoluble. Then the condition
$\pi, \pi'$-permutability is a transitive relation on $G$ if and only if
$G$ has a normal subgroup $D $ such that:}
(i) {\sl $G/D$ is $\pi $-separable and the condition
$\pi, \pi'$-permutability is a transitive relation on $G/D$, }
(ii) {\sl if $D\ne 1$, $G$ has a Robinson $\pi, \pi'$-complex
$(D, Z(D); U_{1}, \ldots , U_{k})$, and }
(iii) {\sl for any set $\{j_{1}, \ldots , j_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $G$ and $G /U_{j_{1}}'\cdots U_{j_{r}}'$ satisfy
${\bf N}_{\pi}$ if $O_{\pi}(Z(D))\ne 1$
and ${\bf N}_{\pi'}$ if $O_{\pi'}(Z(D))\ne 1$. }
{\bf Example 2.24.} Let $\alpha: Z(SL(2, 5))\to Z(SL(2, 7))$ be an isomorphism and let
$$D:= SL(2, 5) \Ydown SL(2, 7)=(SL(2, 5)\times SL(2, 7))/V,$$
where $$V=\{(a, (a^{\alpha})^{-1})\mid a\in Z(SL(2, 5))\},$$
is the direct product of the groups $SL(2, 5)$ and $SL(2, 7)$ with joint center
(see \cite[p. 49]{hupp}).
Let $M=(C_{23}\wr C_{11}) \Yup (C_{67}\rtimes C_{11})$ be
the direct product of the groups $C_{23}\wr C_{11}$ and $C_{67}\rtimes C_{11}$
with joint
factor group $C_{11}$ (see \cite[p. 50]{hupp}),
where
$C_{23}\wr C_{11}$ is the regular wreath product of the groups
$C_{23}$ and $ C_{11}$ and $C_{67}\rtimes C_{11}$ is a non-abelian group of
order 737.
Now, let $G=D\times M$ and $\sigma =\{\{5\}, \{7\}, \{2, 11, 23\}, \{3, 67\},
\{2, 3, 5, 7, 11, 23, 67\}'
\}.$ We show that $G$ is a
$P\sigma T$-group.
In view of \cite[I, Satz 9.10]{hupp}, $D=U_{1}U_{2}$ and
$U_{1}\cap U_{2}=Z(D)=\Phi (D)$, where $U_{i}$ is normal in $D$,
$U_{1}/Z(D)$ is a simple group of order 60, and
$U_{2}/Z(D)$ is a simple group of order 168. Hence $(D, Z(D); U_{1}, U_{2})$ is
a Robinson $\sigma$-complex of $G$. In view of \cite[I, Satz 9.11]{hupp}, $M$ has
normal subgroups $R$ ($|R|=23^{11}$) and $L$ ($|L|=67$)
such that $M/R\simeq C_{67}\rtimes C_{11}$ and $M/L\simeq C_{23}\wr C_{11}.$ It
is clear that $M$ is not $\sigma$-nilpotent, so $M^{\mathfrak{N}_{\sigma}}=L$ since
$M/L$ is $\sigma$-primary. Therefore $M\simeq G/D$ is a
$\sigma$-soluble $P\sigma T$-group by Theorem 2.3 and, clearly,
$D=G^{\mathfrak{S}_{\sigma}}=G^{\mathfrak{S}}$.
The group $G$ is $\sigma$-full and all Hall
$\sigma _{i}$-subgroups of $G$ are supersoluble for all $i$. It is also not difficult to
show that $G$ satisfies ${\bf N}_{\pi}$, where $\pi=\{2, 11, 23\}$. Therefore
Conditions
(i), (ii), and (iii) hold for $G$, so $G$ is a
$P\sigma T$-group by Theorem 2.20.
Assume that $G$ is a $PST$-group. Then, in view of Example 2.1(i) and Theorem 2.20,
$M\simeq G/D$ is a soluble $PST$-group, so
$M^{\mathfrak{N}}$ is a Hall subgroup of $M$ and all subgroups of
$M^{\mathfrak{N}}$ are normal in $M$ by Example 2.1(i) and Theorem 2.3.
But $R$ contains subgroups
which are
not normal in $M$. Hence $M^{\mathfrak{N}}=L$, so $M/L\simeq C_{23}\wr C_{11}$
is nilpotent. This contradiction shows that $G$ is not a $PST$-group.
From Theorems 2.3 and 2.20 it follows that every $\sigma$-soluble $P\sigma T$-group is
$\sigma$-supersoluble and every $\sigma$-full $P\sigma T$-group with supersoluble Hall
$\sigma _{i}$-subgroups for all $i$ is a $\sigma$-$SC$-group
in the sense of the following
{\bf Definition 2.25.} We say that $G$ is:
(1) \emph{$\sigma $-supersoluble} \cite{3}
if every chief factor of $G$ below $G^{{\mathfrak{N}}_{\sigma}}$ is
cyclic;
(2) a \emph{$\sigma $-$SC$-group}
if every chief factor of $G$ below $G^{{\mathfrak{N}}_{\sigma}}$ is simple.
{\bf Example 2.26.} (i) $G$ is supersoluble if and only if it is $\sigma
$-supersoluble where $\sigma =\sigma ^{1}$ (see Example 2.1(i)).
(ii) A group $G$ is called an \emph{$SC$-group} \cite{217} if every chief
factor of $G$ is a simple group. Note that $G$ is an
$SC$-group if and only if it is a $\sigma$-$SC$-group
where $\sigma =\sigma ^{1}$.
(iii) Let $G=A_{5}\times B$, where $A_{5}$ is
the alternating group of degree 5 and $B=C_{29}\rtimes C_{7}$ is
a non-abelian group of order 203, and let $\sigma =\{\{2, 3, 5\}, \{7\},
\{29\}, \{2, 3, 5, 7, 29\}'\}$. Then $G^{{\frak{N}}_{\sigma}}=C_{29}$,
so
$G$ is a $\sigma$-supersoluble group but it is neither soluble
nor $\sigma$-nilpotent.
(iv) Let $G=SL(2, 7)\times A_{7}\times A_{5}\times B$, where
$B=C_{43}\rtimes C_{7}$
is a non-abelian group of order 301 and let
$\sigma =\{\{2, 3, 5\}, \{7, 43\}, \{2, 3, 5, 7, 43\}'\}$.
Then $G^{{\frak{N}}_{\sigma}}=SL(2, 7)\times A_{7}$, so
$G$ is a $\sigma $-$SC$-group but it is not a $\sigma$-supersoluble group.
Let $1\in \mathfrak{F}$ be a class of groups. Then $G^{\mathfrak{F}}$ is the
\emph{$ \mathfrak{F}$-residual} of $G$, that is, the intersection of all normal subgroups
$N$ of $G$ with $G/N\in \mathfrak{F}$.
The class of groups $ 1\in\mathfrak{F}$ is
said to be a \emph{formation} if every
homomorphic image of $G/G^{\mathfrak{F}}$ belongs to $ \mathfrak{F}$ for every
group $G$.
The formation
$\mathfrak{F}$ is said to be \emph{(normally) hereditary }
if $H\in \mathfrak{F}$ whenever $ G \in \mathfrak{F}$ and $H$ is a (normal) subgroup of $G$.
{\bf Lemma~2.27} (See \cite[Proposition 2.2.8]{15}). {\sl Let $\frak{F}$ be a non-empty
formation and
$N$, $R$ subgroups of $G$, where $N$ is normal in $G$.}
(1) {\sl
$(G/N)^{\frak{F}}=G^{\frak{F}}N/N.$ }
(2) {\sl If $G=RN$, then $G^{\frak{F}}N=R^{\frak{F}}N$}.
In what follows,
${\mathfrak{U}}_{\sigma}$ is the class
of all $\sigma $-supersoluble groups; ${\mathfrak{U}}_{c\sigma}$
is the class of all $\sigma $-$SC$-groups.
In our proofs, we often use the following
{\bf Proposition~2.28.} {\sl For any partition $\sigma$ of \ $\mathbb{P}$ the
following hold. }
(i) {\sl The class ${\mathfrak{U}}_{c\sigma}$ is a normally hereditary
formation. }
(ii) {\sl The class ${\mathfrak{U}}_{\sigma}$ is a hereditary
formation } \cite{3}.
{\bf Proof.} (i) Let $D=G^{{\mathfrak{N}}_{\sigma}}$.
First note that if
$R$ is a normal subgroup of $G$, then
$(G/R)^{{\mathfrak{N}}_{\sigma}}=DR/R$ by
Lemmas 2.9 and 2.27 and so from the $G$-isomorphism
$DR/R\simeq D/(D\cap R)$ we get that every chief factor of $G/R$ below
$(G/R)^{{\mathfrak{N}}_{\sigma}}$ is simple if and only if every chief factor of $G$
between $D\cap R$ and $D$ is simple.
Therefore if $G\in {\mathfrak{U}}_{c\sigma}$, then $G/R\in {\mathfrak{U}}_{c\sigma}$.
Hence the class
${\mathfrak{U}}_{c\sigma}$ is closed under taking homomorphic images.
Now we show that if $G/R$, $G/N\in {\mathfrak{U}}_{c\sigma}$,
then $G/(R\cap N) \in {\mathfrak{U}}_{c\sigma}$. We can assume without loss
of generality that $R\cap N=1$. Since $G/R \in {\mathfrak{U}}_{c\sigma}$,
every chief factor of $G$
between $D\cap R$ and $D$ is simple. Also, every chief factor of $G$
between $D\cap N$ and $D$ is simple. Now let $H/K$ be any chief factor of $G$ below $D\cap R$.
Then $H\cap D\cap N=1$ and hence from the $G$-isomorphism
$$H(D\cap N)/K(D\cap N)\simeq H/(H\cap K(D\cap N))=H/K(H\cap D\cap N)=H/K$$
we get that $H/K$ is simple since $D\cap N\leq K(D\cap N)\leq H(D\cap N) \leq D$. On the
other hand, every chief factor of $G$
between $D\cap R$ and $D$ is also simple.
Therefore the Jordan-H\"{o}lder
theorem for groups with operators \cite[Ch. A, Theorem 3.2]{DH} implies that every
chief factor of $G$
below $D$ is simple. Hence $G \in
{\mathfrak{U}}_{c\sigma}$, so the class
${\mathfrak{U}}_{c\sigma}$ is closed under taking subdirect products.
Finally, if $H\trianglelefteq G\in {\mathfrak{U}}_{c\sigma}$, then from Lemmas 2.9
and
2.27 and the isomorphism $H/(H\cap D)\simeq HD/D \in {\mathfrak{N}}_{\sigma}$ we get that
$H^{{\mathfrak{N}}_{\sigma}}\leq H\cap D$ and
so every chief factor of $H$
below $H^{{\mathfrak{N}}_{\sigma}}$ is simple since every chief factor of $G$
below $D$ is simple. Hence
$H\in
{\mathfrak{U}}_{c\sigma}$, so the class
${\mathfrak{U}}_{c\sigma}$ is closed under taking normal subgroups.
The proposition~is proved.
{\bf Proposition 2.29.} {\sl Suppose that $G$ is a $P\sigma T$-group.
Then }
(i) {\sl $G/R$ satisfies
${\bf N}_{\sigma _{i}}$ for every normal subgroup $R$ of $G$ and all $i\in I$, and }
(ii) {\sl if all
Hall $\sigma _{i}$-subgroups of $G$ are supersoluble for all $i\in I$,
then $G$ is a $\sigma$-$SC$-group}.
To prove the proposition, we need a few lemmas.
Recall that a subgroup $A$ of $G$ is called \emph{${\sigma}$-subnormal}
in $G$ \cite{1} if there is a subgroup chain $$A=A_{0} \leq A_{1} \leq \cdots \leq
A_{n}=G$$ such that either $A_{i-1} \trianglelefteq A_{i}$ or
$A_{i}/(A_{i-1})_{A_{i}}$ is ${\sigma}$-primary
for all $i=1, \ldots , n$.
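Note that every subnormal subgroup of $G$ is $\sigma$-subnormal in $G$: in the chain above one may
take $A_{i-1} \trianglelefteq A_{i}$ for all $i$. In particular, every normal subgroup of $G$ is
$\sigma$-subnormal in $G$.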
Let $\Pi \subseteq \sigma$ and $\Pi'=\sigma \setminus \Pi$.
We say that: an integer \emph{$n$ is a $\Pi$-number} if
$\sigma (n)\subseteq \Pi$; a subgroup $H$ of $G$ is a
\emph{$\Pi$-subgroup of $G$} if $|H|$ is a $\Pi$-number;
a subgroup $H$ of $G$ is a Hall
\emph{$\Pi$-subgroup} of $G$ if $H$ is a $\Pi$-subgroup of $G$
and $|G:H|$ is a $\Pi'$-number.
We use $O^{{\Pi}}(G) $ to denote the subgroup of $G$ generated by all
its ${\Pi}'$-subgroups.
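Since conjugates of ${\Pi}'$-subgroups are again ${\Pi}'$-subgroups, $O^{{\Pi}}(G)$ is normal
in $G$; in fact, $O^{{\Pi}}(G)$ is the smallest normal subgroup of $G$ whose quotient is a
$\Pi$-group.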
{\bf Lemma~2.30.} {\sl Let $A$, $K$ and $N$ be subgroups of a $\sigma$-full group $G$.
Suppose that $A$
is $\sigma$-subnormal in $G$ and $N$ is normal in $G$. }
(1) {\sl $A\cap K$ is $\sigma$-subnormal in $K$}.
(2) {\sl $AN/N$ is
$\sigma$-subnormal in $G/N$. }
(3) {\sl If $N\leq K$ and $K/N$ is
$\sigma$-subnormal in $G/N$, then $K$ is
$\sigma$-subnormal in $G.$}
(4) {\sl If $H\ne 1 $ is a Hall $\sigma _{i}$-subgroup of $G$ and $A$ is not a
$\sigma _{i}'$-group, then $A\cap H\ne 1$ is
a Hall $\sigma _{i}$-subgroup of $A$. }
(5) {\sl If $A$ is a $\sigma _{i}$-group, then $A\leq O_{\sigma _{i}}(G)$.
}
(6) {\sl If $A$ is a Hall $\sigma _{i}$-subgroup of $G$, then $A$ is normal in $G$.}
(7) {\sl If $|G:A|$ is a $\Pi$-number, then $O^{\Pi}(A)=
O^{\Pi}(G)$.}
(8) {\sl If $O^{\sigma _{i}}(G)=G$ for all $i\in I$,
then $A$ is subnormal in $G$. }
(9) {\sl $A^{{\frak{N}}_{\sigma}}$ is subnormal in $G$. }
{\bf Proof. } Assume that this Lemma~ is false and let $G$ be a counterexample of
minimal order. By hypothesis, there is a subgroup chain $A=A_{0} \leq
A_{1} \leq \cdots \leq A_{r}=G$ such that
either $A_{i-1} \trianglelefteq A_{i}$
or $A_{i}/(A_{i-1})_{A_{i}}$ is $\sigma $-primary for all $i=1, \ldots , r$.
Let $M=A_{r-1}$.
We can assume without loss of generality that $M\ne G$.
(1)--(7) See Lemma~2.6 in \cite{1}.
(8) $A$ is
subnormal in $M$ by the choice of $G$. On the other hand, since $G$ is $\sigma$-perfect,
$G/M_{G}$ is not $\sigma$-primary. Hence $M$ is normal in $G$ and so $A$
is subnormal in $G$.
(9) $A$ is $\sigma$-subnormal in $AM_{G}\leq M$ by Part (1), so the
choice of $G$ implies that $A^{{\frak{N}}_{\sigma}}$ is
subnormal in $AM_{G}$. Hence $G/M_{G}$ is a $\sigma
_{i}$-group for some $i$, so $M_{G}A/M_{G}\simeq A/(A\cap M_{G})$ is a $\sigma
_{i}$-group. Hence $A^{{\frak{N}}_{\sigma}}\leq M_{G}$, so $A^{{\frak{N}}_{\sigma}}$ is subnormal
in $M_{G}$ and hence $A^{{\frak{N}}_{\sigma}}$ is subnormal
in $G$.
The lemma~is proved.
{\bf Lemma~2.31.} {\sl The following statements hold:}
(1) {\sl $G$ is a
$P\sigma T$-group if and only if every $\sigma$-subnormal subgroup of
$G$ is $\sigma$-permutable in $G$. }
(2) {\sl
If $G$ is a
$P\sigma T$-group, then every quotient $G/N$ of $G$ is also a
$P\sigma T$-group. }
{\bf Proof.} (1) First note that if $A$ is a maximal
$\sigma$-subnormal subgroup of $G$, then either $A$ is normal in $G$ or $G/A_{G}$
is a $\sigma _{i}$-group for some $i$. We show that $A$ is $\sigma$-permutable in $G$.
If $A$ is normal in $G$, it is clear. Now assume that $G/A_{G}$
is a $\sigma _{i}$-group.
Let $H$ be a Hall $\sigma _{j}$-subgroup of $G$. If $j\ne i$, then $H\leq A_{G}$ and so
$AH=A=HA$. Finally, if $i=j$, then $A_{G}H=G$, so $AH=G=HA$.
Now assume that $G$ is a $P\sigma T$-group and let $A$
be a $\sigma$-subnormal subgroup of $G$. Then there is a subgroup chain
$A=A_{0} \leq A_{1} \leq \cdots \leq
A_{n}=G$ such that $A_{i-1}$ is a maximal
$\sigma$-subnormal subgroup of $A_{i}$ and so $A_{i-1}$ is $\sigma$-permutable in
$A_{i}$ for all $i=1, \ldots , n$. But then $A$ is $\sigma$-permutable in
$G$. Therefore every $\sigma$-subnormal subgroup of any $P\sigma T$-group
is $\sigma$-permutable.
Finally, from Theorem B in \cite{1} it follows that every $\sigma$-permutable
subgroup
of $G$ is $\sigma$-subnormal in $G$. Hence (1) holds.
(2) Let $A/N$ be any $\sigma$-subnormal subgroup of $G/N$.
Then $A$ is $\sigma$-subnormal in $G$ by Lemma 2.30(3), so $A$
is $\sigma$-permutable in $G$ and so $A/N$ is $\sigma$-permutable in $G/N$
by Lemma 2.7(2). Therefore we have (2) by Part (1).
The lemma is proved.
{\bf Lemma~2.32.} {\sl
Let $A$ and $B$ be subgroups of $G$, where $A$ is
$\sigma$-permutable in $G$. }
(1) {\sl If $A\leq B$ and $B$ is $\sigma$-subnormal in $G$,
then $A$ is $\sigma$-permutable in $B$}.
(2) {\sl Suppose that $B$ is a $ \sigma _{i}$-group. Then $B$ is $\sigma$-permutable in
$G$ if and only if $O^{\sigma _{i}}(G) \leq N_{G}(B)
$} (See Lemma 3.1 in \cite{1}).
{\bf Proof. } (1) Let $\sigma (B) =\{\sigma _{1}, \ldots, \sigma _{n} \}$ and let
$H_{i}$ be a Hall $ \sigma _{i}$-subgroup of $G$ for all $i$. Let $x\in B$ and
$H=H_{i}$. Then we have $AH^{x}=H^{x}A$,
so $$AH^{x}\cap B=A(H^{x}\cap B)=A(H\cap B)^{x}=(H\cap B)^{x}A,$$ where
$H\cap B$ is a Hall $ \sigma _{i}$-subgroup of $B$ by Lemma~2.30(4).
Hence $A$ is
$\sigma$-permutable in $B$ by Lemma 2.7(1).
The lemma~is proved.
{\bf Proof of Proposition 2.29.}
Let $ S= G^{{\mathfrak{S}}_{\sigma}}$ be the
$\sigma$-soluble residual and $ D= G^{{\mathfrak{N}}_{\sigma}}$
the $\sigma$-nilpotent residual of $G$.
(i) In view of Lemma~2.31(2), we can assume without loss of
generality that $R=1$.
Let $L$ be any soluble normal subgroup of $G$ and let $x$ be
a $\sigma _{i}'$-element of $G$. Let $V/L\leq O_{\sigma _{i}}(G/L)$.
Then $V/L$ is $\sigma$-subnormal in $G/L$, so $V/L$ is
$\sigma$-permutable in $G/L$ by Lemma~2.7(1) since $G/L$ is a $P\sigma
T$-group by Lemma~2.31(2). Therefore $xL\in O^{\sigma _{i}}(G/L)\leq N_{G/L}(V/L)$
by Lemma~2.32(2). Hence $G$ satisfies ${\bf N}_{\sigma _{i}}$.
(ii)
Suppose that this is false and let $G$ be a
counterexample of minimal order. If $S=1$, then $G$ is $\sigma$-soluble
and so $G$ is a $\sigma$-$SC$-group by Theorem B. Therefore $S\ne 1$, so $D\ne 1$.
Let $R$ be a minimal normal subgroup of $G$ contained in $D$.
Then $G/R$
is a $P\sigma T$-group by Lemma~2.31(2). Therefore the choice of $G$
implies that $G/R$ is a $\sigma$-$SC$-group. Since
$(G/R)^{{\mathfrak{N}}_{\sigma }}=D/R$ by Lemmas 2.9 and 2.27,
every
chief factor of $G/R$ below $D/R$ is simple. Hence
every
chief factor of $G$ between $R$ and $G^{{\mathfrak{N}}_{\sigma }}$ is
simple.
Therefore, in view of the Jordan-H\"{o}lder
theorem for groups with operators \cite[Ch. A, Theorem 3.2]{DH},
it is enough to show that $R$ is simple.
Suppose
that this is false and let $L$ be
a minimal normal subgroup of $R$. Then $1 < L < R$ and $L$
is $\sigma$-subnormal in $G$, so $L$ is $\sigma$-permutable in $G$
by Lemma~2.31(1) since $G$ is a $P\sigma T$-group.
Moreover, $L_{G}=1$ and so $L$ is
$\sigma$-nilpotent by Theorem A. Therefore $R$ is a
$\sigma _{i}$-group for some $i$ and so $R\leq H$, where $H$ is a
Hall $\sigma _{i}$-subgroup of $G$. Since $H$ is supersoluble by hypothesis, $R$
is abelian and so
there is a maximal subgroup $V$ of $R$ such that $V$ is normal in $H$.
Therefore, in view of Lemma~2.32(2),
we have $G=HO^{\sigma _{i}}(G)\leq N_{G}(V).$ Hence $V=1$ and so $|R|=p$, a
contradiction. Thus $G$ is a $\sigma$-$SC$-group.
The proposition is proved.
{\bf Lemma~2.33.} {\sl Let $H/K$ be a non-abelian chief factor of $G$.
If $H/K$ is simple, then $G/HC_{G}(H/K)$ is soluble.}
{\bf Proof.} Since $C_{G}(H/K)/K=C_{G/K}(H/K)$, we can assume without
loss of generality that $K=1$. Then $G/C_{G}(H)\simeq V\leq
\text{Aut}(H)$ and $H/(H\cap C_{G}(H) )\simeq HC_{G}(H)/C_{G}(H)\simeq \text{Inn}(H)$
since $C_{G}(H)\cap H=1.$ Hence
$$G/HC_{G}(H)\simeq (G/C_{G}(H))/(HC_{G}(H)/C_{G}(H))\simeq W\leq
\text{Aut}(H)/\text{Inn}(H).$$ From the validity of the Schreier
conjecture, it follows that $G/HC_{G}(H/K)$ is soluble.
The lemma~is proved.
{\bf Lemma~2.34.} {\sl If $L$ is a non-abelian minimal subnormal subgroup of $G$, then
$L^{G}$ is a minimal normal subgroup of $G$.}
{\bf Proof.} Since every
two perfect subnormal subgroups are permutable by \cite[II, Theorem 7.9]{26},
for some $x_{1}, \ldots , x_{t}\in G$ we have $L^{G}=L^{x_{1}} \cdots L^{x_{t}}$.
Now let $R$ be a minimal normal subgroup of $G$ contained in $L^{G}$. In view of
\cite[Ch. A, Lemma 14.3]{DH}, $R\leq N_{G}(L^{x_{i}})$, so
for
some $j$ we have $L^{x_{j}}\leq R$ since $L^{G}$ and $R$ are non-abelian groups.
But then $L^{G}\leq R\leq L^{G}$, so $R=L^{G}$.
The lemma~is proved.
{\bf Theorem 2.35.} {\sl Suppose that every $\sigma$-primary chief factor of $G$
is abelian. Then $G$ is a $\sigma$-$SC$-group if and only if
$G/G^{\mathfrak{S}_{\sigma}}$ is $\sigma$-supersoluble and
if $G^{\mathfrak{S}_{\sigma}}\ne 1$, $G$ has a Robinson $\sigma$-complex
$(G^{\mathfrak{S}_{\sigma}}, Z(G^{\mathfrak{S}_{\sigma}}); U_{1}, \ldots , U_{k}).$}
{\bf Proof. } Let $D=G^{\mathfrak{S}_{\sigma}}$. Then
$O^{\sigma _{i}}(D)=D$ for all $i\in I$.
First assume that $G$ is a $\sigma$-$SC$-group. Then
every chief factor of $G$ below $Z(D)$ is cyclic.
Now let $H/K$ be any chief factor of $G$ between $Z(D)$
and $D$. If $H/K$ is abelian, this factor is cyclic, which implies that
$D\leq C_{G}(H/K)$. On the other hand, if $H/K$ is a non-$\sigma$-primary
simple group, then Lemma~2.33 implies that $G/HC_{G}(H/K)$ is soluble.
Hence $$DHC_{G}(H/K)/HC_{G}(H/K)\simeq D/(D\cap HC_{G}(H/K))=D/HC_{D}(H/K)$$
is soluble, so $D=HC_{D}(H/K)$ since $O^{\sigma _{i}}(D)=D$ for all $i$.
Therefore, in both cases,
every element of
$D$ induces an inner automorphism on $H/K$. Therefore $D$ is
quasinilpotent. Hence in view of \cite[X, Theorem 13.6]{31},
$D/Z(D)\simeq U_{1}/Z(D)\times \cdots \times U_{k}/Z(D)$, where $U_{j}/Z(D)$ is a
non-$\sigma$-primary simple factor of $D$ for all $j=1, \ldots, k$.
Finally, note that $Z(D)\leq \Phi(D)$ since $O^{\sigma _{i}}(D)=D$ for all $i$.
Therefore $(D, Z(D); U_{1}, \ldots ,
U_{k})$ is a
Robinson $\sigma$-complex of $G$ by Lemma~2.34 since $G$ is a $\sigma$-$SC$-group.
Now assume that $G/D$ is $\sigma$-supersoluble and, in the case $D\ne 1$,
$G$ has a Robinson $\sigma$-complex $(D, Z(D); U_{1}, \ldots , U_{k}).$
Then, since $G/D$ is $\sigma$-supersoluble, there is a chief series
$1 =G_{0} < G_{1} < \cdots < G_{t-1} < G_{t}=G^{\mathfrak{N}_{\sigma}}$ of $G$
below $G^{\mathfrak{N}_{\sigma}} $ such that $ G_{i}/G_{i-1}$ is simple for all
$i=1, \ldots, t$.
Hence the Jordan-H\"{o}lder
theorem for groups with operators \cite[Ch. A, Theorem 3.2]{DH}
implies that every chief factor of $G$ below $G^{\mathfrak{N}_{\sigma}}$ is simple,
that is,
$G$ is a $\sigma$-$SC$-group.
The theorem is proved.
In the case $\sigma =\sigma ^{1}$ we get from Theorem 2.35 the following
{\bf Corollary 2.36} (See Theorem 1.6.5 in \cite{prod}). {\sl A group $G$ is an
$SC$-group if and only if $G$ satisfies: }
(i) {\sl $G/G^{{\mathfrak{S}} }$ is supersoluble.}
(ii) {\sl If $G^{\mathfrak{S}}\ne 1$, then $G$ has a Robinson complex
$(G^{\mathfrak{S}}, Z(G^{\mathfrak{S}}); U_{1}, \ldots , U_{k}).$}
{\bf Proposition 2.37.} {\sl
If $G$ is a $\sigma$-$SC$-group with $G^{{\mathfrak{S}} _{\sigma}}=
G^{{\mathfrak{U}} _{\sigma}}$, then
$G^{{\mathfrak{S}} _{\sigma}}\leq
C_{G}(G_{\mathfrak{S}}\cap G^{{\mathfrak{N}} _{\sigma}}).$ }
{\bf Proof.} Let $H/K$ be any chief factor of $G$ below
$G_{\mathfrak{S}}\cap G^{{\mathfrak{N}} _{\sigma}}$. Then $H/K$ is simple since
$G$ is a $\sigma$-$SC$-group and soluble since $H\leq G_{\mathfrak{S}}$, so $H/K$ is cyclic and
hence $G_{\mathfrak{S}}\cap G^{{\mathfrak{N}} _{\sigma}}$ is
contained in the supersoluble hypercentre $Z_{\mathfrak{U}}(G)$ of $G$. Then
$G/C_{G}(G_{\mathfrak{S}}\cap G^{{\mathfrak{N}} _{\sigma}})$ is supersoluble by
\cite[IV, Theorem 6.10]{DH} and so $G^{{\mathfrak{S}} _{\sigma}}=G^{{\mathfrak{U}} _{\sigma}}\leq
C_{G}(G_{\mathfrak{S}}\cap G^{{\mathfrak{N}} _{\sigma}})$.
The proposition is proved.
In the case $\sigma =\sigma ^{1}$ we get from Proposition 2.37 the following
{\bf Corollary 2.38} (See Proposition 1.6.4 in \cite{prod}). {\sl If $G$ is
an $SC$-group, then $G^{\mathfrak{S}} \leq C_{G}(G_{\mathfrak{S}})$}.
{\bf Lemma~2.39.} {\sl Suppose that $G$ has a Robinson
$\sigma$-complex $(D, Z(D); U_{1}, \ldots ,
U_{k})$ and let $N$ be a normal subgroup of $G$. }
(1) {\sl If $N=U_{i}'$ and $k\ne 1$, then $Z(D/N) =U_{i}/N =Z(D)N/N$ and
$$(D/N,
Z(D/N); U_{1}N/N, \ldots , U_{i-1}N/N,
U_{i+1}N/N, \ldots ,
U_{k}N/N)$$ is a Robinson $\sigma$-complex of $G/N$. }
(2) {\sl If $N$ is nilpotent, then $Z(DN/N)= Z(D)N/N$ and
$$(DN/N, Z(DN/N); U_{1}N/N, \ldots ,
U_{k}N/N)$$ is a Robinson $\sigma$-complex of $G/N$.}
{\bf Proof. } Let $Z=Z(D)$. Then $U_{i}'Z=U_{i}$
since $U_{i}/Z$ is a non-$\sigma$-primary simple group, so
$U_{i}/U_{i}'=U_{i}'Z/U_{i}'\leq Z(D/U_{i}')$ and $U_{i}/U_{i}'\leq \Phi (D/U_{i}')$.
Moreover,
$DN/N$ is a non-identity normal subgroup of $G/N.$
Indeed, if $D\leq N=U_{i}'\leq D$ for some $i$, then $D=U_{i}'=U_{i}=U_{1}$ and
so $k=1$,
a contradiction. On the other hand,
the case $D\leq N$, where $N$ is nilpotent, is also impossible since $U_{1}/Z$ is a
non-$\sigma$-primary group.
(1) We can assume without loss of generality that $i=1.$
We have $$ D/U_{1}'=(U_{1}/U_{1}')(U_{2}U_{1}'/U_{1}')\cdots
(U_{k}U_{1}'/U_{1}') ,$$
so $$(D/U_{1}')/(U_{1}/U_{1}')=((U_{2}U_{1}'/U_{1}')/(U_{1}/U_{1}'))\cdots
((U_{k}U_{1}'/U_{1}')/(U_{1}/U_{1}'))$$ and hence from $D\ne U_{1}$ and
the $G$-isomorphisms
$$(U_{j}U_{1}'/U_{1}')/(U_{1}/U_{1}')\simeq U_{j}U_{1}'/U_{1}=U_{j}U_{1}/U_{1}\simeq
U_{j}/(U_{1}\cap U_{j})=U_{j}/(ZU_{1}'\cap U_{j})$$$$=U_{j}/Z(U_{1}'\cap U_{j})=U_{j}/Z$$
we get that $(D/U_{1}')/(U_{1}/U_{1}')$ is the direct product of the non-$\sigma$-primary
simple $G/U_{1}'$-invariant subgroups
$(U_{j}U_{1}'/U_{1}')/(U_{1}/U_{1}')$, $j\ne 1$.
Hence $U_{1}/U_{1}'=ZU_{1}'/U_{1}'\leq Z(D/U_{1}') \leq U_{1}/U_{1}'$, so
$U_{1}/U_{1}'=ZU_{1}'/U_{1}'=Z(D/U_{1}')\leq \Phi (D/U_{1}')$.
Therefore $$(D/U_{1}',
Z(D/U_{1}'); U_{2}U_{1}'/U_{1}', \ldots , U_{k}U_{1}'/U_{1}')$$
is a Robinson $\sigma$-complex of $G/N$.
(2) It is clear that $N\cap D\leq Z$, so we have the $G$-isomorphisms
$$ (DN/N)/(ZN/N)\simeq DN/ZN\simeq D/(D\cap NZ)=D/(D\cap N)Z=D/Z.$$ Note also that
$$(U_{i}N/N)/(NZ/N)\simeq U_{i}/(NZ\cap U_{i})=U_{i}/(N\cap U_{i})Z=U_{i}/Z$$
for all $i$
and
$ (DN/N)/(ZN/N)$ is the direct product of the non-$\sigma$-primary simple
$G/N$-invariant groups $(U_{i}N/N)/(NZ/N)$.
Hence $Z(DN/N)=ZN/N\leq \Phi (DN/N)$
and
every chief factor of $G/N$ below $NZ/N\simeq _{G} Z/(N\cap Z) $ is cyclic.
Therefore $$(DN/N, Z(DN/N); U_{1}N/N, \ldots ,
U_{k}N/N)$$ is a Robinson $\sigma$-complex of $G/N$.
The lemma~is proved.
{\bf Lemma~2.40.} {\sl Let $G$ be a non-$\sigma$-soluble $\sigma$-full
$\sigma$-$SC$-group with Robinson $\sigma$-complex
$$(D, Z(D); U_{1}, \ldots ,
U_{k}),$$ where $D=G^{\mathfrak{S}_{\sigma}}=G^{\mathfrak{U}_{\sigma}}$.
Let $U$ be a
non-$\sigma$-permutable $\sigma$-subnormal subgroup of $G$ of minimal
order. Then:}
(1) {\sl if $UU_{j}'/U_{j}'$ is $\sigma$-permutable in $G/U_{j}'$ for
all $j=1, \ldots, k$, then $U$ is $\sigma$-supersoluble, and }
(2) {\sl if $U$ is $\sigma$-supersoluble and $UL/L$ is
$\sigma$-permutable in $G/L$ for
all non-trivial nilpotent normal subgroups $L$ of $G$, then
$U$ is a cyclic $p$-group for some prime $p$. }
{\bf Proof. } Suppose that this Lemma~is false and let $G$ be a
counterexample of minimal order. By hypothesis, for some
$i$ and for some Hall $\sigma _{i}$-subgroup $H$ of $G$ we have $UH\ne
HU$. Moreover,
$O^{\sigma _{s}}(D)=D$ for all $s\in I$ and, in view of Proposition~2.28(ii),
$G/D$ is $\sigma$-supersoluble.
(1) Assume that this is false and suppose that
$U\cap D\leq Z(D)$. Then every chief factor of $U$ below
$U\cap Z(D)=U\cap D$ is cyclic and, also, $UD/D\simeq U/(U\cap D)$ is
$\sigma$-supersoluble by Proposition~2.28(ii).
Hence $U$ is $\sigma$-supersoluble, a contradiction. Therefore
$U\cap D\nleq Z(D)$. Moreover, Lemma~2.30(1)(2)
implies that $(U\cap D)Z(D)/Z(D)$ is
$\sigma$-subnormal in $D/Z(D)$ and so $(U\cap
D)Z(D)/Z(D)$ is a non-trivial
subnormal subgroup of $D/Z(D)$ by Lemma~2.30(8)
since $O^{\sigma _{s}}(D/Z(D))=D/Z(D)$ for all $s$.
Hence for some $j$ we
have $U_{j}/Z(D)\leq (U\cap
D)Z(D)/Z(D),$ so $U_{j}\leq (U\cap
D)Z(D).$ But then $U_{j}'\leq ((U\cap
D)Z(D))'\leq U\cap D.$ By hypothesis, $UU_{j}'/U_{j}'=U/U_{j}'$ is $\sigma$-permutable
in
$G/U_{j}'$ and so
$$UH/U_{j}'=(U
/U_{j}')(HU_{j}'/U_{j}')=(HU_{j}'/U_{j}')(U/U_{j}')=HU/U_{j}'.$$ Hence
$UH=HU$,
a contradiction. Therefore Statement (1) holds.
(2) Let $N=
U^{{\mathfrak{N}}_{\sigma}}$. Then $N$ is subnormal in $G$ by Lemma
2.30(9).
Since $U$ is $\sigma$-supersoluble by
hypothesis, $N < U$. By Lemmas 2.10, 2.27, and 2.30(3), every proper subgroup
$V$ of $U$ with
$N\leq V$ is $\sigma$-subnormal in $G$, so the minimality of $U$ implies
that $VH=HV$. Therefore, if $U$ has at least two distinct maximal subgroups $V$ and $W$
such that $N\leq V\cap W$, then $U=\langle V, W \rangle $ permutes with $H$ by
\cite[Ch. A, Proposition 1.6]{DH},
contrary to our assumption on $(U, H)$. Hence $U/N$
is a cyclic $p$-group for some prime $p$.
Therefore we can assume that $N\ne 1$.
First assume that $p\in \sigma_{i} $. Lemma~2.30(4) implies that $H\cap U$
is a Hall $\sigma _{i}$-subgroup of $U$, so $U=N(H\cap U)=(H\cap U)N$. Hence
$$UH=(H\cap U)NH=H(H\cap U)N=HU,$$ a contradiction. Thus
$p\in \sigma_{j}$ for some $j\ne i$.
Now we show that $U$ is a $P\sigma T$-group. Let $V$ be a proper
$\sigma$-subnormal subgroup of $U$. Then $V$ is
$\sigma$-subnormal in $G$ since $U$ is
$\sigma$-subnormal in $G$. The minimality of $U$ implies that $V$ is
$\sigma$-permutable in $G$, so $V$ is
$\sigma$-permutable in $U$ by Lemma~ 2.32(1). Hence
$U$ is a $\sigma$-soluble $P\sigma T$-group by Lemma~2.31(1), so $N$ is
an abelian Hall subgroup of $U$ and all subgroups of $N$ are normal in $U$
by Theorem B.
Therefore $N$ is a $\sigma_{j}'$-group since $N=
U^{{\mathfrak{N}}_{\sigma}}$, so
$N\leq O=O_{\sigma_{j}'}(F(G))$ by Lemma~2.30(5) (applied in the case
$\sigma =\sigma ^{1}$ to the Sylow subgroups of the nilpotent subnormal subgroup $N$). By hypothesis, $OU/O$ permutes with
$OH/O$. By Lemma~2.30(1)(2), $OU/O$ is $\sigma$-subnormal in
$$(OU/O)(OH/O)=(OH/O)(OU/O)=OHU/O,$$
where
$OU/O\simeq U/(U\cap O)$
is a $\sigma _{j}$-group and $OH/O\simeq H/(H\cap O)$ is a $\sigma _{i}$-group.
Hence $UO/O$ is normal in $OHU/O$ by Lemma~2.30(6). Therefore $H\leq N_{G}(OU).$
Now let $\Pi =\sigma \setminus \{\sigma _{j}\}$.
Then
$$H\leq N_{G}(O^{\Pi}(OU))=N_{G}(O^{\Pi}(U))$$ by Lemma~2.30(7) since
$|OU:U|=|O:O\cap U|$ is a $\Pi$-number and $U$ is $\sigma$-subnormal in $OU$ by Lemma
2.30(1).
For a Sylow $p$-subgroup $P$ of $U$ we have
$P\leq O^{\Pi}(U)$ since $p\in \sigma _{j}$.
Therefore $U=O^{\Pi}(U)N$ and $U/O^{\Pi}(U)\simeq
N/(N\cap O^{\Pi}(U))$ is nilpotent and so $N\leq O^{\Pi}(U)$. But then $O^{\Pi}(U) = U$,
so $H\leq N_{G}(U)$ and hence $HU=UH$, a
contradiction. Therefore Statement (2) holds.
The lemma~is proved.
{\bf Lemma~2.41} (See Lemma~5 in
\cite{knyag}). {\sl
Let $H$, $K$ and $N$ be pairwise permutable
subgroups of $G$ and $H$ a Hall subgroup of $G$. Then $N\cap HK=(N\cap H)(N\cap K).$}
{\bf Lemma~2.42.} {\sl If $G$ satisfies
${\bf N}_{\sigma _{i}}$ and $N$ is a soluble normal subgroup of $G$, then
$G/N$ satisfies
${\bf N}_{\sigma _{i}}$.}
{\bf Proof. } Let $L/N$ be a normal soluble subgroup of $G/N$,
$(U/N)/(L/N)\leq O_{\sigma _{i}}((G/N)/(L/N))$
and let $yN$ be a $\sigma _{i}'$-element in $G/N$. Then for some
$\sigma _{i}'$-element $x\in G$ we have $yN=xN$.
On the other hand, $L$ is a soluble normal
subgroup of $G$ and $U/L\leq O_{\sigma _{i}}(G/L)$,
so $(U/L)^{x}=U^{x}/L=U/L$. Hence
$$((U/N)/(L/N))^{yN}=((U/N)/(L/N))^{xN}=(U/N)/(L/N).$$
Hence $G/N$ satisfies ${\bf N}_{\sigma _{i}}$.
The lemma~is proved.
{\bf Proof of Theorem 2.20.} First assume that $G$ is a $P\sigma T$-group and
let $D=G^{{\mathfrak{S}}_{\sigma}}$ be the
$\sigma$-soluble residual of $G$.
Then $O^{\sigma _{i}}(D)=D$ for all $i\in I$.
Moreover, $G/D$ is a $\sigma$-soluble $P\sigma T$-group by Lemma~2.31(2),
hence
$G/D$ is $\sigma$-supersoluble by Theorem 2.3 and so, in fact,
$D=G^{{\mathfrak{U}}_{\sigma}}$ is the $\sigma$-supersoluble residual of $G$.
From Proposition 2.29(ii) it
follows that $G$ is a $\sigma$-$SC$-group.
Therefore, if $D\ne 1$, then, in view of Theorem 2.35,
$G$ has a Robinson $\sigma$-complex
$(D, Z(D);$ $U_{1}, \ldots , U_{k})$
and, in view of Proposition 2.29(i),
for any set $\{j_{1}, \ldots , j_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $G$ and $G /U_{j_{1}}'\cdots U_{j_{r}}'$ satisfy
${\bf N}_{\sigma _{i}}$ for all $\sigma _{i}\in \sigma (Z(D))$.
Therefore the necessity of the condition of the theorem holds.
Now, assume, arguing
by contradiction, that $G$ is a non-$P\sigma T$-group of minimal order
satisfying Conditions (i), (ii) and (iii).
Then $D\ne 1$
and, by Lemma~2.31(1), $G$ has a $\sigma$-subnormal
subgroup $U$ such that $UH\ne HU$ for some $i\in I$ and some Hall $\sigma
_{i}$-subgroup $H$ of $G$ and, also, every $\sigma$-subnormal
subgroup $U_{0}$ of $G$ with $U_{0} < U$ is $\sigma$-permutable in
$G$. From Conditions (i) and (ii) it follows that
$G$ is a $\sigma$-$SC$-group.
(1) {\sl $U$ is $\sigma$-supersoluble. }
First assume that $k=1$, that is, $D=U_{1}=D'$.
Then $Z(D)\leq \Phi (D)$
and
$(U\cap D)Z(D)/Z(D)$ is a $\sigma$-subnormal subgroup
of a simple non-abelian group $D/Z(D)$ by Lemma~2.30(1)(2). Hence $U\cap D\leq Z(D)$,
so $U\cap D=U\cap Z(D)$. Therefore
every chief factor of $U$ below $U\cap D$ is cyclic. On the other hand, $U/(U\cap D)
\simeq UD/D$ is $\sigma$-supersoluble by Condition (i), Theorem 2.3, and
Proposition~2.28(ii)
and so $U$ is $\sigma$-supersoluble.
Now let $k\ne 1$. We show that the hypothesis holds for $G/U_{j}'$
for all $j=1, \ldots , k$. We can assume without loss of generality that $j=1$.
Let $N=U_{1}'$. Then
$ (G/N)/(D/N)\simeq G/D$ is a $\sigma$-soluble $P\sigma T$-group, so Condition (i)
holds for
$G/N$. From Lemma
2.39(1) it follows that
$$(D/N, Z(D/N); U_{2}N/N, \ldots , U_{k}N/N)$$ is
a Robinson $\sigma$-complex of $G/N$ and
$Z(D/N)=U_{1}/N=Z(D)N/N\simeq Z(D)/(Z(D)\cap N)$.
Moreover, if
$\{j_{1}, \ldots , j_{r}\}\subseteq \{2, \ldots , k\}$, where $2\leq r < k$,
then the quotients
$G/N=G/U_{1}'$ and $$(G/N) /(U_{j_{1}}N/N)'\cdots (U_{j_{r}}N/N)'=
(G/N)/(U_{j_{1}}'\cdots U_{j_{r}}'U_{1}'/N)\simeq G/U_{j_{1}}'\cdots U_{j_{r}}'U_{1}'$$
satisfy
${\bf N}_{\sigma _{i}}$ for all
$\sigma _{i}\in \sigma (Z(D/N))\subseteq \sigma (Z(D))$
by Condition (iii).
Therefore the hypothesis holds for $G/N=G/U_{1}'$, so the
$\sigma$-subnormal subgroup $UU_{1}'/U_{1}'$ of $G/U_{1}'$ is $\sigma$-permutable in
$G/U_{1}'$ by the choice of $G$. Finally,
from Lemma 2.27(1) and Proposition 2.28(ii)
it follows that
$$(G/U_{1}')^{\mathfrak{S}_{\sigma}}=U_{1}'G^{\mathfrak{S}_{\sigma}}/U_{1}'
=DU_{1}'/U_{1}'=
U_{1}'G^{\mathfrak{U}_{\sigma}}/U_{1}'= (G/U_{1}')^{\mathfrak{U}_{\sigma}}.$$
Therefore $U$ is $\sigma$-supersoluble by Lemma~2.40(1).
(2) {\sl $U$ is a cyclic $p$-group for some prime $p\in \sigma _{s}$,
where $s\ne i$.}
First we show that $U$ is a cyclic $p$-group for some prime $p$. Let $N$ be an arbitrary
non-trivial nilpotent normal subgroup of $G$.
In view of Proposition 2.28(ii) and Lemma 2.31(2),
$$(G/N)/(DN/N)\simeq G/DN\simeq (G/D)/(DN/D)$$ is
a $\sigma$-soluble $P\sigma T$-group. Moreover, $Z(DN/N)= Z(D)N/N$ and $(DN/N, Z(D)N/N; U_{1}N/N, \ldots , $ $U_{k}N/N)$ is a Robinson $\sigma$-complex of $G/N$ by Lemma~2.39(2).
In view of Lemma~2.42, $G/N$ satisfies ${\bf N}_{\sigma _{i}}$ for all
$$\sigma _{i}\in \sigma (Z(D)N/N)\subseteq \sigma (Z(D)).$$
Similarly, for any set
$\{j_{1}, \ldots , j_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $$(G/N) /(U_{j_{1}}N/N)'\cdots (U_{j_{r}}N/N)'=
(G/N) /(U_{j_{1}}'\cdots U_{j_{r}}'N/N)$$$$\simeq G/U_{j_{1}}'\cdots U_{j_{r}}'N\simeq
(G/U_{j_{1}}'\cdots U_{j_{r}}')/(U_{j_{1}}'\cdots
U_{j_{r}}'N/U_{j_{1}}'\cdots U_{j_{r}}')
$$ satisfies ${\bf N}_{\sigma _{i}}$ for all
$\sigma _{i}\in \sigma (Z(D)N/N)$ by Lemma~2.42 since $$U_{j_{1}}'\cdots
U_{j_{r}}'N/U_{j_{1}}'\cdots U_{j_{r}}'\simeq N/(N\cap U_{j_{1}}'\cdots
U_{j_{r}}')$$ is nilpotent.
Therefore the hypothesis holds
on $G/N$, so $UN/N$ is $\sigma$-permutable in $G/N$.
Also we have $(G/N)^{\mathfrak{S}_{\sigma}}= (G/N)^{\mathfrak{U}_{\sigma}}.$
Therefore $U$ is a cyclic $p$-group for some prime $p\in \sigma _{s}$ by Lemma
2.40(2).
Finally, Lemma
2.30(4)
implies that in the case $i=s$ we have $U\leq H$, so $UH=H=HU$. Therefore
$s\ne i$.
(3) $\sigma _{s}\not \in \sigma (Z(D))$ (This follows from Condition (iii)
and Lemma~2.32(2) since
$H$ is a $\sigma _{s}'$-group by Claim (2)).
(4) $O_{\sigma _{s}}(G)\cap D=1$.
Assume $O_{\sigma _{s}}(G)\cap D\ne 1$. Since
$D/Z(D)$ is the direct product of non-$\sigma$-primary
simple groups, $O_{\sigma _{s}}(G)\cap D\leq
Z(D)$. But then $\sigma _{s}
\in \sigma (Z(D))$, contrary to Claim (3).
Therefore we have (4).
{\sl Final contradiction.} By Lemma~2.30(2), $UD/D$ is
$\sigma$-subnormal in $G/D$. On the other hand, $HD/D$ is a Hall $\sigma
_{i}$-subgroup of $G/D$. Hence $$(UD/D)(HD/D)=(HD/D)(UD/D)=HUD/D$$ by Condition (i) and
Lemma~2.31(1),
so $HUD$ is a subgroup of $G$.
Therefore, by Claim (4) and Lemma~2.41,
$$UHD\cap HO_{\sigma _{s}}(G)
=UH(D\cap HO_{\sigma _{s}}(G))=UH(D\cap H)(D\cap
O_{\sigma _{s}}(G))=UH(D\cap H)=UH$$ is a subgroup of $G$ and so $HU=UH$,
a contradiction. Therefore the sufficiency of the condition of the theorem holds.
The theorem is proved.
\section{Groups in which every $\sigma$-subnormal subgroup is $\sigma$-quasinormal}
The quasinormal and the Sylow permutable subgroups have many properties that are
useful for applications.
For instance, if $A$ is quasinormal in $G$, then: {\sl $A$ is subnormal in $G$}
(Ore \cite{5}), {\sl $A/A_{G}$ is nilpotent} (Ito and Szep \cite{It}),
{\sl every chief factor
$H/K$ of $G$ between $A_{G}$ and $A^{G}$ is central, that is, $C_{G}(H/K)=G$}
(Maier and Schmid \cite{MaierS}),
and,
in general, the section \emph{$A/A_{G}$ is not necessarily abelian}
(Thomson \cite{Th}).
Note also that the quasinormal subgroups have a
close connection with the modular subgroups.
Every quasinormal subgroup is clearly modular in the group. Moreover,
the following remarkable fact is well-known.
{\bf Theorem 3.1} (Schmidt \cite[Theorem 5.1.1]{Schm}). {\sl A subgroup $A$ of $G$ is
quasinormal in $G$ if and only if $A$ is modular and subnormal in $G$}.
This result made
it possible to find analogues of quasinormal subgroups in the theory of the
$\sigma$-properties of a group.
{\bf Definition 3.2.} We say, following \cite{Hu11}, that a subgroup $A$ of $G$
is \emph{$\sigma$-quasinormal} in $G$ if $A$ is modular and $\sigma$-subnormal in $G$.
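Note that in the case $\sigma =\sigma ^{1}$ the $\sigma$-subnormal subgroups of $G$ are exactly
the subnormal subgroups of $G$, so, by Theorem 3.1, in this case the $\sigma$-quasinormal
subgroups of $G$ are exactly the quasinormal subgroups of $G$.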
The description of the
$PT$-groups
was first obtained by Zacher \cite{zaher},
for the soluble case, and
by Robinson in \cite{217}, for the general case.
In further publications (see Chapter 2 in \cite{prod}), authors have found and described
many other interesting characterizations of
$PT$-groups and generalized $PT$-groups.
The
theory of $\sigma$-quasinormal subgroups was constructed in the paper \cite{Hu11}.
In particular, the following result was proved there, covering in the
case $\sigma =
\sigma ^{1}=\{\{2\}, \{3\}, \{5\}, \ldots \}$ the above-mentioned results in
\cite{5, It, MaierS}.
{\bf Theorem 3.3} (See Theorem C in \cite{Hu11}). {\sl
Let $A$ be a $\sigma$-quasinormal subgroup of $G$.
Then the following statements hold:}
(i) {\sl If $G$
possesses a Hall $\sigma_{i}$-subgroup,
then $A$ permutes with each Hall $\sigma_{i}$-subgroup of $G$. }
(ii) {\sl The quotients
$A^{G}/A_{G}$ and $G/C_{G}(A^{G}/A_{G}) $ are $\sigma$-nilpotent, and }
(iii) {\sl Every chief factor of $G$ between
$A^{G}$ and $A_{G}$ is $\sigma$-central in $G$. }
(iv) {\sl For every $i$ such that $\sigma _{i} \in \sigma
(G/C_{G}(A^{G}/A_{G}))$ we have
$\sigma _{i} \in \sigma (A^{G}/A_{G}).$
}
(v) {\sl $A$ is $\sigma$-seminormal in $G$.}
However, the following problem still remains open.
{\bf Question 3.4.} {\sl What is the structure of the $Q\sigma T$-groups,
that is, groups in which every $\sigma$-subnormal subgroup is $\sigma$-quasinormal?}
Question 3.4 was partially solved in the recent paper \cite{Hu12}.
{\bf Theorem 3.5 } (See Theorem 1.5 in \cite{Hu12}). {\sl
Let $D$ be the $\sigma$-nilpotent residual of $G$, that is, the
intersection of all normal subgroups $N$
of $G$ with $\sigma$-nilpotent quotient $G/N$.
If $G$ is a $\sigma$-soluble $Q\sigma T$-group,
then the following conditions hold:}
(i) {\sl $G=D\rtimes L$, where $D$ is an abelian Hall
subgroup of $G$ of odd order, $L$ is $\sigma$-nilpotent and
the lattice
of all subgroups ${\cal L}(L)$ of $L$ is modular,}
(ii) {\sl every element of $G$ induces a
power automorphism in $D$, and }
(iii) {\sl $O_{\sigma _{i}}(D)$ has
a normal complement in a Hall $\sigma _{i}$-subgroup of $G$ for all $i$.}
{\sl Conversely, if Conditions (i), (ii) and (iii) hold for some
subgroups $D$ and $L$ of
$G$, then $G$ is a $\sigma$-soluble $Q\sigma T$-group.}
In the recent paper \cite{preprI}, Question 3.4
was solved in the general case.
Let $\pi \subseteq \Bbb{P}$. Then we say \emph{$G$ satisfies}
${\bf P}_{\pi}$ if whenever $N$ is a soluble normal
subgroup of $G$ and $G$ has a Hall
$\pi$-subgroup, every subgroup of $O_{\pi}(G/N)$ is modular
in the Hall $\pi$-subgroups of $G/N$.
{\bf Theorem 3.6} (Safonova, Skiba \cite{preprI}).
{\sl Suppose that $G$ is a $\sigma$-full group.
Then $G$ is a $Q\sigma T$-group if
and only if $G$ has a normal subgroup $D$ such that:}
(i) {\sl $G/D$ is a $\sigma$-soluble $Q\sigma T$-group, }
(ii) {\sl if $D\ne 1$, $G$ has a Robinson $\sigma$-complex
$(D, Z(D); U_{1}, \ldots , U_{k})$ and }
(iii) {\sl for any set $\{i_{1}, \ldots , i_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $G$ and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy
${\bf N}_{\sigma _{i}}$ for all $\sigma _{i}\in \sigma (Z(D))$ and
${\bf P}_{\sigma _{i}}$
for all $\sigma _{i}\in \sigma (D)$. }
In the case $\sigma =\sigma ^{1\pi}$
(see Example 2.1(iii)) we get from Theorem 3.6 the following
{\bf Corollary 3.7.} {\sl Suppose that $G$ has a Hall $\pi'$-subgroup.
Then every $\pi$-subnormal subgroup of $G$ is modular in $G$ if
and only if $G$ has a normal subgroup $D$ such that:}
(i) {\sl $G/D$ is a $\pi$-soluble group in which every $\pi$-subnormal
subgroup is modular, }
(ii) {\sl if $D\ne 1$, $G$ has a Robinson $\pi$-complex
$(D, Z(D); U_{1}, \ldots , U_{k})$ and }
(iii) {\sl for any set $\{i_{1}, \ldots , i_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$,
$G$ and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy ${\bf N}_{p}$
for all primes $p$ dividing $|Z(D)|$ and
${\bf N}_{\pi'}$ if $O_{\pi'}(Z(D))\ne 1$, and, also
$G$ and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy ${\bf P}_{p}$ for all
primes $p$ dividing $|D|$ and ${\bf P}_{\pi'}$
if $O_{\pi'}(D)\ne 1$.}
In the case $\pi=\Bbb{P}$, we get from Corollary 3.7 the following classical result.
{\bf Corollary 3.8} (Robinson \cite{217}). {\sl $G$ is a $PT$-group if
and only if $G$ has a normal perfect subgroup $D$ such that:}
(i) {\sl $G/D$ is a soluble $PT$-group, and }
(ii) {\sl if $D\ne 1$, $G$ has a Robinson complex
$(D, Z(D); U_{1}, \ldots , U_{k})$ and }
(iii) {\sl for any set $\{i_{1}, \ldots , i_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $G$ and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy
${\bf N}_{p}$ for all $p\in \pi (Z(D))$ and
${\bf P}_{p}$
for all $p \in \pi (D)$. }
Theorem 3.6 has also many other consequences.
In particular, in view of Example 2.1(ii), we get from Theorem 3.6 the following
{\bf Corollary 3.9.} {\sl Suppose that $G$ has a Hall
$\pi$-subgroup and a Hall $\pi'$-subgroup.
Then every $\pi, \pi'$-subnormal subgroup of $G$ is modular in $G$ if
and only if $G$ has a normal subgroup $D$ such that:}
(i) {\sl $G/D$ is a $\pi$-separable group in which every $\pi, \pi'$-subnormal
subgroup is modular, }
(ii) {\sl if $D\ne 1$, $G$ has a Robinson $\pi, \pi'$-complex
$(D, Z(D); U_{1}, \ldots , U_{k})$ and }
(iii) {\sl for any set $\{i_{1}, \ldots , i_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$,
$G$ and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy ${\bf N}_{\pi}$ if
$O_{\pi}(Z(D))\ne 1$
and
${\bf N}_{\pi'}$ if $O_{\pi'}(Z(D))\ne 1$, and, also,
$G$ and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy ${\bf P}_{\pi}$
if $O_{\pi}(D)\ne 1$ and ${\bf P}_{\pi'}$
if $O_{\pi'}(D)\ne 1$.}
{\bf Example 3.10.} Let $\alpha: Z(SL(2, 5))\to Z(SL(2, 7))$ be an isomorphism and let
$$D:= SL(2, 5) \Ydown SL(2, 7)=(SL(2, 5)\times SL(2, 7))/V,$$
where $$V=\{(a, (a^{\alpha})^{-1})\mid a\in Z(SL(2, 5))\},$$
is the direct product of the groups $SL(2, 5)$ and $SL(2, 7)$ with joint center
(see \cite[p. 49]{hupp}).
Let $M=(C_{23}\wr C_{11}) \Yup
(C_{67}\rtimes C_{11})$ (see \cite[p. 50]{hupp}), where
$C_{23}\wr C_{11}$ is the regular wreath product of the groups
$C_{23}$ and $ C_{11}$ and $C_{67}\rtimes C_{11}$ is a non-abelian group of
order 737. Then $M$ is a soluble $P\sigma T$-group and $M$ is not a $PST$-group by
Theorem B.
Let $G=D\times M$ and $\sigma =\{\{2, 3\}, \{5, 11, 23\},
\{7, 67\},
\{2, 3, 5, 7, 11, 23, 67\}'\}$. Then $G$ is not a $PT$-group by Corollary 3.8.
Moreover, $G$
is $\sigma$-full and
Conditions (i), (ii), and (iii) in Theorem 3.6 hold for $G$. Therefore
every $\sigma$-subnormal subgroup of $G$ is modular in $G$.
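As a quick sanity check on the orders involved in Example 3.10 (an aside that is not part of the original argument; here we read $\Yup$ as the subdirect product amalgamating the common quotient $C_{11}$, as in \cite[p. 50]{hupp}), using $|SL(2, q)|=q(q^{2}-1)$ and $|Z(SL(2, q))|=2$ we get
$$|D|=\frac{|SL(2, 5)|\cdot |SL(2, 7)|}{2}=\frac{120\cdot 336}{2}=20160=2^{6}\cdot 3^{2}\cdot 5\cdot 7,
\qquad |M|=\frac{(23^{11}\cdot 11)\cdot (67\cdot 11)}{11}=23^{11}\cdot 11\cdot 67.$$
In particular, $\pi (D)=\{2, 3, 5, 7\}$ and $\pi (M)=\{11, 23, 67\}$, so $\pi (G)=\{2, 3, 5, 7, 11, 23, 67\}$ is exactly the union of the first three members of $\sigma$.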
To prove this theorem, we need the results from Section 2 and several new lemmas.
First note that from Theorem 3.3(1) we get the following
{\bf Lemma 3.11.} {\sl Suppose that $G$ is a $Q\sigma T$-group. Then }
(1) {\sl every $\sigma$-subnormal subgroup of $G$ is $\sigma$-seminormal in $G$, and }
(2) {\sl in particular, if $G$ is $\sigma$-full, then every
$\sigma$-quasinormal subgroup of $G$ is $\sigma$-permutable in $G$.
}
From Lemma 3.11 we get the following
{\bf Corollary 3.12.} {\sl Every $\sigma$-full $Q\sigma T$-group is a
$P\sigma T$-group. }
{\bf Lemma 3.13.} {\sl If $G$ is a
$Q\sigma T$-group, then every quotient $G/N$ of $G$ is also a
$Q\sigma T$-group. }
{\bf Proof.}
Let $L/N$ be a $\sigma$-subnormal subgroup of $G/N$. Then $L$ is a
$\sigma$-subnormal subgroup of $G$ by Lemma 2.30(3), so $L$
is modular in $G$ and
then $L/N$ is modular, and so $\sigma$-quasinormal,
in $G/N$ by \cite[Page 201, Property (3)]{Schm}. Hence
$G/N$ is a $Q\sigma T$-group.
The lemma is proved.
{\bf Lemma 3.14.} {\sl Suppose that $G$ is a $\sigma$-full $Q\sigma T$-group. Then
$G/R$ satisfies ${\bf P}_{\sigma _{i}}$ and ${\bf N}_{\sigma _{i}}$
for every normal subgroup $R$ of $G$ and all $i\in I$.}
{\bf Proof.}
In view of Lemma 3.13, we can assume without loss of
generality that $R=1$. Let $N$ be a soluble normal subgroup of $G$ and let
$U/N\leq O_{\sigma _{i}}(G/N)$. Then $U/N$ is
$\sigma$-subnormal in $G/N$, so $U$ is $\sigma$-subnormal in $G$ by Lemma 2.30(3).
Therefore
$U$ is modular in $G$
since $G$ is a $Q\sigma T$-group and
so, by \cite[Page 201, Property (3)]{Schm}, $U/N$ is
modular in every Hall $\sigma _{i}$-subgroup $H/N$ of $G/N$ since
$O_{\sigma _{i}}(G/N)\leq H/N$.
Hence $G$ satisfies
${\bf P}_{\sigma _{i}}$. Moreover, $U/N$ is
$\sigma$-permutable in $G/N$ by Lemma 3.11 and so for every $\sigma _{i}'$-element
$x\in G$ we have $xN\in O^{\sigma _{i}}(G/N)\leq N_{G/N}(U/N)$
by Lemma 2.32(2). Hence $G$ satisfies ${\bf N}_{\sigma _{i}}$.
The lemma is proved.
{\bf Lemma 3.15.} {\sl Let $G$ be a non-$\sigma$-soluble $\sigma$-full
$\sigma$-$SC$-group with Robinson $\sigma$-complex
$$(D, Z(D); U_{1}, \ldots ,
U_{k}),$$ where $D=G^{\mathfrak{S}_{\sigma}}=G^{\mathfrak{U}_{\sigma}}$.
Let $U$ be a $\sigma$-subnormal non-modular (non-normal)
subgroup of $G$ of minimal
order. Then:}
(1) {\sl If $UU_{i}'/U_{i}'$ is modular (respectively, normal) in $G/U_{i}'$ for
all $i=1, \ldots, k$, then $U$ is $\sigma$-supersoluble.}
(2) {\sl If $U$ is $\sigma$-supersoluble and $UL/L$ is modular (respectively, normal)
in $G/L$ for
all non-trivial nilpotent normal subgroups $L$ of $G$, then
$U$ is a cyclic $p$-group for some prime $p$. }
{\bf Proof. } Suppose that this lemma is false and let $G$ be a
counterexample of minimal order.
(1) Assume this is false. Suppose that
$U\cap D\leq Z(D)$. Then every chief factor of $U$ below
$U\cap Z(D)=U\cap D$ is cyclic and, also, $UD/D\simeq U/(U\cap D)$ is
$\sigma$-supersoluble by Proposition~2.28.
Hence $U$ is $\sigma$-supersoluble, a contradiction. Therefore
$U\cap D\nleq Z(D)$. Moreover, Lemma~2.30(1)(2)
implies that $(U\cap D)Z(D)/Z(D)$ is
$\sigma$-subnormal in $D/Z(D)$ and so $(U\cap
D)Z(D)/Z(D)$ is a non-trivial
subnormal subgroup of $D/Z(D)$ by Lemma~2.30(8)
since $O^{\sigma _{s}}(D/Z(D))=D/Z(D)$ for all $s$.
Hence for some $j$ we
have $U_{j}/Z(D)\leq (U\cap
D)Z(D)/Z(D),$ so $U_{j}\leq (U\cap
D)Z(D).$ But then $U_{j}'\leq ((U\cap
D)Z(D))'\leq U\cap D.$ By hypothesis, $UU_{j}'/U_{j}'=U/U_{j}'$ is modular
(respectively, normal) in
$G/U_{j}'$ and so $U$ is modular (respectively, normal) in $G$ by \cite[p.~201, Property~(4)]{Schm},
a contradiction. Therefore Statement (1) holds.
(2) Let $N=
U^{{\mathfrak{N}}_{\sigma}}$. Then $N$ is subnormal in $G$ by Lemma 2.30(9).
Since $U$ is $\sigma$-supersoluble by
hypothesis, $N < U$. By Lemmas 2.9, 2.10, and 2.30(3),
every proper subgroup
$V$ of $U$ with
$N\leq V$ is $\sigma$-subnormal in $G$, so the minimality of $U$ implies
that $V$ is modular (respectively, normal) in $G$.
Therefore, if $U$ has at least two distinct maximal subgroups $V$ and $W$
such that $N\leq V\cap W$, then $U=\langle V, W \rangle $
is modular (respectively, normal) in
$G$ by \cite[p. 201, Property (5)]{Schm}, contrary to our assumption on $U$.
Hence $U/N$
is a cyclic $p$-group for some $p\in \sigma _{i}$ and $N\ne 1$ since $U$ is not cyclic.
Now we show that $U$ is a $P\sigma T$-group. Let $V$ be a proper
$\sigma$-subnormal subgroup of $U$. Then $V$ is
$\sigma$-subnormal in $G$ since $U$ is
$\sigma$-subnormal in $G$, so $V$ is modular (respectively, normal)
in $G$ and hence $V$ is
modular (respectively, normal) in $U$.
Therefore $V$ is $\sigma$-permutable in $U$ by Lemma 3.11. Hence
$U$ is a $\sigma$-soluble $P\sigma T$-group, so $N=U^{{\mathfrak{N}_{\sigma}}}$
is a
Hall abelian subgroup of $U$ and every subgroup of $N$ is normal in $U$ by
Theorem 2.3. Therefore, in fact, for a Sylow $p$-subgroup $P$ of $U$
we have $U=N\rtimes P$ and $P$ is a cyclic Hall $\sigma _{i}$-subgroup of $U$.
Clearly,
$N$ is $\sigma$-quasinormal in $G$. Assume that for some minimal normal
subgroup $R$ of $G$ we have $R\leq N_{G}$. Then $R$ is abelian,
$U/R$ is modular (respectively, normal) in $G/R$ by hypothesis,
so $U$ is modular (respectively, normal) in $G$ by
\cite[p. 201, Property (4)]{Schm},
a contradiction. Therefore
$N_{G}=1$, so $P\leq C_{G}(N^{G})$ since for every $k$ such that $\sigma _{k} \in \sigma
(G/C_{G}(N^{G}))$ we have
$\sigma _{k} \in \sigma (N^{G})$ by Theorem 3.3(iv).
Therefore $U=N\times P$ is
$\sigma$-nilpotent and so $N=1$, a contradiction.
Therefore Statement (2) holds.
The lemma is proved.
{\bf Lemma 3.16.} {\sl Let $N$ be a soluble normal subgroup of $G$.}
{\sl If $G$ satisfies ${\bf P}_{\sigma _{i}}$, then $G/N$ satisfies
${\bf P}_{\sigma _{i}}$.}
{\bf Proof. } Let $L/N$ be a normal soluble subgroup of $G/N$ and
$(U/N)/(L/N)\leq O_{\sigma _{i}}((G/N)/(L/N))$. Then
$L$ is a soluble normal
subgroup of $G$ and $U/L\leq O_{\sigma _{i}}(G/L)$.
By hypothesis, $U/L$ is modular in a Hall $\sigma _{i}$-subgroup $H/L$ of $G/L$.
Then $(U/N)/(L/N)$ is modular in the Hall $\sigma _{i}$-subgroup
$(H/N)/(L/N)$ of $(G/N)/(L/N)$.
Hence $G/N$ satisfies ${\bf P}_{\sigma _{i}}$.
The lemma is proved.
{\bf Lemma 3.17.} {\sl If $G$ is a $Q\sigma T$-group,
then $G$ is a $\sigma$-$SC$-group. }
{\bf Proof. } Suppose that this lemma is false and let $G$ be a
counterexample of minimal order. Let $D=G^{\frak{N_{\sigma}}}$.
If $D=1$, then $G$ is $\sigma$-soluble
and so $G$ is a $\sigma$-$SC$-group by Theorem 2.3. Therefore $D\ne 1$.
Let $R$ be a minimal normal subgroup of $G$ contained in $D$. Then $G/R$
is a $Q\sigma T$-group by Lemma 3.13. Therefore the choice of $G$
implies that $G/R$ is a $\sigma$-$SC$-group. Since
$(G/R)^{{\mathfrak{N}}_{\sigma }}=D/R$ by Lemmas 2.9 and 2.27,
every
chief factor of $G/R$ below $D/R$ is simple. Hence
every
chief factor of $G$ between $G^{{\mathfrak{N}}_{\sigma }}$ and $R$ is
simple.
Therefore, in view of the Jordan-H\"{o}lder
theorem for groups with operators \cite[Ch. A, Theorem 3.2]{DH},
it is enough to show that $R$ is simple.
Suppose
that this is false and let $L$ be
a minimal normal subgroup of $R$. Then $1 < L < R$ and $L$
is $\sigma$-subnormal in $G$, so $L$ is modular in $G$ since
$G$ is a $Q\sigma T$-group.
Moreover, $L_{G}=1$ and so every chief factor of $G$ below $L^{G}$
is cyclic by \cite[Theorem 5.2.5]{Schm}. But
$L^{G}=R$ and so $|R|=p$, a
contradiction. Thus $G$ is a $\sigma$-$SC$-group.
The lemma is proved.
{\bf Lemma 3.18.} {\sl If $G$ is a $\sigma$-soluble $Q\sigma T$-group, then
every Hall $\sigma _{i}$-subgroup $H$ of $G$ is an $M$-group for all $i$.}
{\bf Proof.} By Theorem 3.3, $G=D\rtimes L$, where $D$ is an abelian Hall
subgroup of $G$ of odd order, $L$ is $\sigma$-nilpotent and
the lattice
of all subgroups ${\cal L}(L)$ of $L$ is modular, every subgroup of $D$ is normal
in $G$, and $O_{\sigma _{j}}(D)$ has
a normal complement in a Hall $\sigma _{j}$-subgroup of $G$ for all $j$.
It follows that $H=O_{\sigma _{i}}(D) \times S$, where $O_{\sigma _{i}}(D)$ is a Hall
$\sigma _{i}$-subgroup of $D$ and $S\leq L^{x}$ for some $x\in G$. Then
$O_{\sigma _{i}}(D)$ and $S\simeq SD/D \leq G/D\simeq L $ have modular
subgroup lattices, so the lattice of all subgroups ${\cal L}(H)$ of $H$ is modular
by \cite[Theorem 2.4.4]{Schm}.
The lemma is proved.
{\bf Proof of Theorem 3.6.} First assume that $G$ is a $\sigma$-full
$Q\sigma T$-group and let $D=G^{{\mathfrak{S}}_{\sigma}}$ be the
$\sigma$-soluble residual of $G$.
Then $O^{\sigma _{i}}(D)=D$ for all $i\in I$.
Moreover, $G/D$ is a $\sigma$-soluble $Q\sigma T$-group by Lemma 3.13,
so
$G/D$ is $\sigma$-supersoluble by Theorem 3.3 and hence, in fact,
$D=G^{{\mathfrak{U}}_{\sigma}}$ is the $\sigma$-supersoluble residual of $G$.
From Lemma 3.17 it
follows that $G$ is a $\sigma$-$SC$-group.
Therefore, if $D\ne 1$,
$G$ has a Robinson $\sigma$-complex $(D, Z(D); U_{1}, \ldots , U_{k})$
by Theorem 2.37 and, in view of Lemma 3.14,
for any set $\{i_{1}, \ldots , i_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $G$ and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy
${\bf N}_{\sigma _{i}}$ for all $\sigma _{i}\in \sigma (Z(D))$ and
${\bf P}_{\sigma _{i}}$ for all $\sigma _{i}\in \sigma (D)$.
Therefore the necessity of the condition of the theorem holds.
Now, assume, arguing
by contradiction, that $G$ is a non-$Q\sigma T$-group of minimal order
satisfying Conditions (i), (ii), and (iii). Then $D\ne 1$.
We consider a $\sigma$-subnormal
non-modular subgroup $U$ of $G$ of minimal order.
First we show that
$U$ is $\sigma$-supersoluble.
Assume that $k=1$, that is, $D=U_{1}=D'$. Then $Z(D)\leq \Phi (D)$
and
$(U\cap D)Z(D)/Z(D)$ is a $\sigma$-subnormal subgroup by Lemma 2.30(1)(2)
of a non-$\sigma$-primary simple group $D/Z(D)$.
Hence $U\cap D\leq Z(D)$, so $U\cap D=U\cap Z(D)$.
Therefore
every chief factor of $U$ below $U\cap D$ is cyclic. On the other hand, $U/(U\cap D)
\simeq UD/D$ is $\sigma$-supersoluble by Theorem 3.3 and Proposition 2.28(ii), so $U$ is
$\sigma$-supersoluble.
Now let $k\ne 1$. We show that the hypothesis holds for $G/U_{i}'$
for all $i=1, \ldots , k$. We can assume without loss of generality that $i=1$.
Let $N=U_{1}'$.
Then
$ (G/N)/(D/N)\simeq G/D$ is a $\sigma$-soluble $Q\sigma T$-group, so
Condition (i) holds for $G/N$. From Lemma
2.41(1) it follows that
$$(D/N, Z(D/N); U_{2}N/N, \ldots , U_{k}N/N)$$ is
a Robinson $\sigma$-complex of $G/N$ and
$$Z(D/N)=U_{1}/N=Z(D)N/N\simeq Z(D)/(Z(D)\cap N).$$
Moreover, if
$\{i_{1}, \ldots , i_{r}\}\subseteq \{2, \ldots , k\}$, where $2\leq r < k$,
then the quotients
$G/N=G/U_{1}'$ and $$(G/N) /(U_{i_{1}}N/N)'\cdots (U_{i_{r}}N/N)'=
(G/N)/(U_{i_{1}}'\cdots U_{i_{r}}'U_{1}'/N)\simeq G/U_{i_{1}}'\cdots U_{i_{r}}'U_{1}'$$
satisfy
${\bf N}_{\sigma _{i}}$ for all
$\sigma _{i}\in \sigma (Z(D/N))\subseteq \sigma (Z(D))$
and ${\bf P}_{\sigma _{i}}$ for all
$\sigma _{i}\in \sigma (D/N)$ by Condition (iii), so Conditions (ii) and (iii)
hold for $G/N$.
Therefore the hypothesis holds for $G/N=G/U_{1}'$, so the
$\sigma$-subnormal subgroup $UU_{1}'/U_{1}'$ of $G/N$ is modular in
$G/U_{1}'$ by the choice of $G$. Finally,
from Lemma 2.27(2) and Proposition 2.28(2)
it follows that
$$(G/U_{1}')^{\mathfrak{S}_{\sigma}}=U_{1}'G^{\mathfrak{S}_{\sigma}}/U_{1}'
=DU_{1}'/U_{1}'=
U_{1}'G^{\mathfrak{U}_{\sigma}}/U_{1}'= (G/U_{1}')^{\mathfrak{U}_{\sigma}}.$$
Hence $U$ is $\sigma$-supersoluble by
Lemma 3.15(1).
Next we show that $U$ is a $p$-group for some prime $p\in \sigma _{i}$.
Clearly, $(G/N)^{\mathfrak{S}_{\sigma}}= (G/N)^{\mathfrak{U}_{\sigma}}$
and so, since $U$ is $\sigma$-supersoluble,
it is enough to show that the hypothesis holds
for
$G/N$ for every nilpotent normal subgroup $N$ of $G$ by Lemma 3.15(2).
It is clear also that $$(G/N)/(DN/N)\simeq G/DN\simeq (G/D)/(DN/D)$$ is
a $\sigma$-soluble $Q\sigma T$-group, where $|DN/N|\ne 1$.
Moreover, $Z(DN/N)= Z(D)N/N$ and
$$(DN/N, Z(D)N/N; U_{1}N/N, \ldots ,
U_{k}N/N)$$ is a Robinson $\sigma$-complex of $G/N$ by Lemma 2.41(2).
In view of Lemmas 2.42 and 3.16, $G/N$ satisfies ${\bf N}_{\sigma _{i}}$ for all
$$\sigma _{i}\in \sigma (Z(D)N/N)\subseteq \sigma (Z(D))$$ and
${\bf P}_{\sigma _{i}}$ for all
$$\sigma _{i}\in \sigma (DN/N)\subseteq \sigma (D).$$
Similarly, for any set
$\{j_{1}, \ldots , j_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $$(G/N) /(U_{j_{1}}N/N)'\cdots (U_{j_{r}}N/N)'=
(G/N) /(U_{j_{1}}'\cdots U_{j_{r}}'N/N)$$$$\simeq G/U_{j_{1}}'\cdots U_{j_{r}}'N\simeq
(G/U_{j_{1}}'\cdots U_{j_{r}}')/(U_{j_{1}}'\cdots
U_{j_{r}}'N/U_{j_{1}}'\cdots U_{j_{r}}')
$$ satisfies ${\bf N}_{\sigma _{i}}$ for all
$\sigma _{i}\in \sigma (Z(D)N/N)$ since $$U_{j_{1}}'\cdots
U_{j_{r}}'N/U_{j_{1}}'\cdots U_{j_{r}}'\simeq N/(N\cap U_{j_{1}}'\cdots
U_{j_{r}}')$$ is nilpotent.
Therefore the hypothesis holds
for $G/N$, so $U$ is a $p$-group for some prime $p$.
Since $U$ is $\sigma$-subnormal in $G$ by hypothesis,
for some $p\in \sigma _{i}$ we have $U \leq U^{G} \leq O_{\sigma _{i}}(G)$ by
Lemma 2.30(5).
We show that $U$ is modular in $\langle x, U\rangle $ for
all elements $x$ of $G$ of prime power order $q^{a}$.
First assume that $q\in \sigma _{i}'$; we show in this case that
$x$ induces a
power automorphism in $O_{\sigma _{i}}(G)$.
Assume that $O_{\sigma _{i}}(G)\cap D=1$. Since $G/D$ is a $Q\sigma T$-group
by Condition (i), it follows that $\sigma _{i}'$-elements of $G/D$ induce
power automorphisms in $O_{\sigma _{i}}(G/D)$. Then from the $G$-isomorphism
$U^{G}\simeq U^{G}D/D$ it follows that $x$ induces
a power automorphism in $O_{\sigma _{i}}(G)$.
Now assume that
$E:=O_{\sigma _{i}}(G)\cap D\ne 1. $ Then $EZ/Z$ is a $\sigma$-soluble normal
subgroup of $D/Z$, so $E\leq Z$ since $O^{\sigma _{j}}(D/Z)=D/Z$ for all $j$.
Hence $\sigma _{i}\in \sigma (Z)$, so $G$ satisfies
${\bf N}_{\sigma _{i}}$, which implies that $x$
induces
a power automorphism in $O_{\sigma _{i}}(G)$.
Therefore the subgroup $U$ is modular in $\langle x, U\rangle $.
Now let $q\in \sigma _{i}$. If $\sigma _{i}\cap \pi (D)\ne \emptyset$, then
$U$ is modular in $\langle x, U\rangle $ since in this case $G$ satisfies
${\bf P}_{\sigma _{i}}$ by hypothesis. Finally, assume that $\sigma _{i}\cap \pi (D)=
\emptyset$. Then $O_{\sigma _{i}}(G)\cap D=1$.
Let $E$ be a minimal supplement to $D$ in $G$. Then $E\cap D\leq \Phi(E)$, so $E$ is
$\sigma$-soluble. Hence $E$ has a Hall $\sigma _{i}$-subgroup $H$. Then $H$ is
a Hall $\sigma _{i}$-subgroup of $G$ since $|G:E|=|D/(D\cap E)|$ is a
$\sigma _{i}'$-number. Therefore $U\leq O_{\sigma _{i}}(G)\leq H^{y}$, where
$y$ is such that $x\in H^{y}$,
and $H^{y}\simeq DH^{y}/D$ is a Hall $\sigma _{i}$-subgroup of $G/D$, so $H^{y}$ is an
$M$-group by Lemma 3.18. But then $U$ is modular in $\langle x, U\rangle $. Therefore $U$
is modular in $\langle x, U\rangle $ for every element $x\in G$ of prime power order.
It follows that $U$ is modular in $G$ by \cite[Lemma~5.1.13]{Schm}. This contradiction completes the
proof of the fact that the sufficiency of the condition of the theorem holds.
The theorem is proved.
The class of all $Q\sigma T$-groups is much wider than the class of
all $PT$-groups. Indeed, a non-soluble group $G$
of order 60 is a $PT$-group and this group
is not a $Q\sigma T$-group, where $\sigma =\{\{2, 3, 5\}, \{2, 3, 5\}'\}$,
since its subgroups of order 2 are $\sigma$-subnormal but not quasinormal in $G$.
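For completeness, here is the routine identification behind this example (not spelled out in the original text): $60=2^{2}\cdot 3\cdot 5$ and the unique non-soluble group of order $60$ is $A_{5}$; since $A_{5}$ is a $\{2, 3, 5\}$-group, it is $\sigma$-primary for the above $\sigma$, and therefore every one of its subgroups is $\sigma$-subnormal in it.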
Let $\pi \subseteq \Bbb{P}$. Then we say \emph{$G$ satisfies}
${\bf t}_{\pi}$ if whenever $N$ is a soluble normal
subgroup of $G$, every subgroup of $O_{\pi}(G/N)$ is normal in $G/N$.
By making slight changes to the
proof of Theorem 3.6 and using this theorem,
we can prove the following result, which answers Question 3.4(2).
{\bf Theorem 3.19} (Safonova, Skiba \cite{preprI})
{\sl Suppose that $G$ is a $\sigma$-full group.
Then $G$ is a $T_{\sigma}$-group if
and only if $G$ satisfies
${\bf t}_{\sigma _{i}}$ for all $i\in I$
and every non-$\sigma$-primary chief factor of $G$ is simple.}
{\bf Corollary 3.20} (Ballester-Bolinches, Beidleman and Heineken \cite{Ball}).
{\sl A group $G$ is a $T$-group if
and only if $G$ satisfies
${\bf t}_{p}$ for all primes $p$,
and every non-abelian chief factor of $G$ is simple.}
\section{Groups in which $\sigma$-quasinormality is a transitive relation}
Recall that a subgroup $A$ of $G$ is $\sigma$-quasinormal in $G$ if $A$
is $\sigma$-subnormal and modular in $G$.
We say that $G$ is an \emph{$M\sigma T$-group} if
$\sigma$-quasinormality is
a transitive relation on $G$, that is, if $H$ is a $\sigma$-quasinormal subgroup
of $K$ and $K$ is a $\sigma$-quasinormal subgroup
of $G$, then $H$ is a $\sigma$-quasinormal subgroup of $G$.
The following open problem
is one of the most intriguing and difficult problems in the theory of
$\sigma$-properties of a group.
{\bf Question 4.1.}
{\sl What is the structure of $M\sigma T$-groups?}
In this and the following sections,
we discuss two special cases of this problem.
First note that in the case when $\sigma =\{\mathbb {P}\}$, this problem
is the following old open question.
{\bf Question 4.2.} {\sl What is the structure of \emph{$MT$-groups},
that is, groups
$G$ in which modularity is a transitive
relation on $G$, that is, if $H$ is a modular subgroup of $K$ and $K$ is a modular
subgroup of $G$, then $H$ is a modular subgroup of $G$?}
The following special case of Problem 4.1 is also open.
{\bf Question 4.3.} {\sl What is the structure of
a $\sigma$-soluble $M\sigma T$-group?}
The solution of Problem 4.2 in the class of soluble groups was found
by Frigerio in \cite{A. Frigerio}.
{\bf Theorem 4.4} (Frigerio \cite{A. Frigerio}, Zimmermann \cite{mod}).
{\sl A soluble group $G$ is an $MT$-group
if and only if $G$ is a group with modular lattice of all subgroups ${\cal L}(G)$.}
In a later paper \cite{mod}, Zimmermann found a new proof of this result.
In the class of soluble groups,
the solution of Problem 4.1 was given in its most general form
in the recent papers \cite{preprI, matem}.
{\bf Theorem 4.5 }(Zhang, Guo, Safonova, Skiba \cite{preprI, matem}). {\sl If $G$ is a soluble
$M\sigma T$-group
and $D=G^{\frak{N_{\sigma}}}$, then
the following conditions hold:}
(i) {\sl $G=D\rtimes M$, where $D$ is an abelian Hall
subgroup of $G$ of odd order, $M$ is $\sigma$-nilpotent and the lattice
of all subgroups
${\cal L}(M)$ of $M$ is modular, }
(ii) {\sl every element of $G$ induces a
power automorphism in $D$, }
(iii) {\sl $O_{\sigma _{i}}(D)$ has
a normal complement in a Hall $\sigma _{i}$-subgroup of $G$ for all $i$.}
{\sl Conversely, if Conditions (i), (ii) and (iii) hold for
some subgroups $D$ and $M$ of
$G$, then $G$ is a soluble $M\sigma T$-group.}
From Theorems 3.5 and 4.5 we get the following interesting result.
{\bf Theorem 4.6 }(Zhang, Guo, Safonova, Skiba \cite{preprI, matem}). {\sl Let $G$ be a soluble group.
Then $G$ is a $W\sigma T$-group if and only if $G$ is an $M\sigma T$-group}.
In the case $\sigma =\{\mathbb{P}\}$ we get
from this theorem the following known result.
{\bf Corollary 4.7} (Frigerio \cite{A. Frigerio}, Zimmermann \cite{mod}).
{\sl A soluble group $G$ is an $MT$-group
if and only if $G$ is a group with modular lattice of all subgroups ${\cal L}(G)$.}
In the classical case $\sigma =
\sigma ^{1}=\{\{2\}, \{3\}, \{5\}, \ldots \}$ we get from Theorem 4.5
the following well-known result.
{\bf Corollary 4.8} (Zacher \cite{zaher}). {\sl A group
$G$ is a soluble $PT$-group
if and only if the following conditions are satisfied:}
(i) {\sl the nilpotent residual $L$ of $G$ is an abelian Hall subgroup of odd order,}
(ii) {\sl $G$ acts by conjugation on $L$ as a group of power automorphisms, and }
(iii) {\sl every subgroup of $G/L$ is quasinormal in $G/L$. }
Nevertheless, the answer to Question 4.3 is unknown.
The next lemma is a corollary of general properties of modular
subgroups \cite[p. 201]{Schm} and $\sigma$-subnormal subgroups (see Lemma 2.30).
{\bf Lemma 4.9.} {\sl Let $A$, $B$ and
$N$ be subgroups of $G$, where $A$ is $\sigma$-quasinormal and $N$ is
normal in $G$.}
(1) {\sl The subgroup $A\cap B$ is $\sigma$-quasinormal in $B$.}
(2) {\sl The subgroup $AN/N$
is $\sigma$-quasinormal in $G/N$}.
(3) {\sl If $N\leq B$ and $B/N$ is $\sigma$-quasinormal in $G/N$, then
$B$ is
$\sigma$-quasinormal in $G$. }
{\bf Lemma 4.10.} {\sl A subgroup $A$ of $G$ is a maximal $\sigma$-quasinormal subgroup of $G$
if and only if either $G/A=G/A_{G}$ is a
simple $\sigma$-primary group or
$G/A_{G}$ is a $\sigma$-primary non-abelian group of order $pq$ for primes
$p$ and $q$.}
{\bf Proof. } Assume that $A$ is a maximal $\sigma$-quasinormal subgroup of $G$. In view of
Theorem 3.3(2), $G/A_{G}$ is a $\sigma _{i}$-group for some $i$, so every subgroup of $G$
between $A_{G}$ and $G$ is $\sigma$-subnormal in $G$ by Lemma 2.30(5).
On the other hand, $U/A_{G}$ is modular in $G/A_{G}$ if and only if $U$ is modular in $G$
by \cite[Page 201, Property (4)]{Schm}. Therefore, in fact, $A$ is a maximal modular
subgroup of $G$. Hence either $A\trianglelefteq G$ and $G/A=G/A_{G}$ is a
simple $\sigma _{i}$-group or
$G/A_{G}$ is a non-abelian group of order $pq$ for primes $p, q \in \sigma _{i}$
by \cite[Lemma 5.1.2]{Schm}.
Finally, if $G/A_{G}$ is a $\sigma$-primary non-abelian group of order $pq$ for primes
$p$ and $q$, then $A$ is a maximal subgroup of $G$ and $A/A_{G}$
is, clearly, modular in $G/A_{G}$, so $A$ is a maximal modular subgroup of $G$
by \cite[Page 201, Property (4)]{Schm}, so $A$ is
a maximal $\sigma$-quasinormal subgroup of $G$.
Similarly,
if $G/A=G/A_{G}$ is a
simple non-abelian $\sigma$-primary group, then $A$ is a maximal modular
subgroup of $G$ by \cite[Lemma 5.1.2]{Schm}, so $A$ is
a maximal $\sigma$-quasinormal subgroup of $G$.
The lemma is proved.
A subgroup $A$ of $G$ is said to be \emph{$\sigma$-subquasinormal} in $G$
if
there is a subgroup chain $A=A_{0} \leq A_{1} \leq \cdots \leq
A_{n}=G$ such that $A_{i-1}$ is $\sigma$-quasinormal in $ A_{i}$
for all $i=1, \ldots , n$.
It is clear that $G$ is an $M\sigma T$-group if and only if every
$\sigma$-subquasinormal subgroup of $G$ is $\sigma$-quasinormal in $G$.
We use $\frak{A}^{*}$ to denote the class of all abelian groups of squarefree exponent;
$G^{{\frak{A}^{*}}}$ is the intersection of all normal subgroups $N$ of $G$
with $G/N\in {\frak{A}^{*}}$. It is clear that $\frak{A}^{*}$ is a hereditary formation,
so $G/G^{{\frak{A}^{*}}}\in \frak{A}^{*}$.
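For instance (an elementary illustration added here, not taken from the cited sources): if $G=C_{p^{2}}$ is cyclic of order $p^{2}$, then the largest quotient of $G$ lying in $\frak{A}^{*}$ is $C_{p}$, so $G^{{\frak{A}^{*}}}$ is the unique subgroup of order $p$; if $G$ is elementary abelian, then $G\in \frak{A}^{*}$ and $G^{{\frak{A}^{*}}}=1$.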
The following lemma is a corollary of Lemmas 1 and 4 in \cite{mod} and Lemma
2.30.
{\bf Lemma 4.11.} {\sl Let $A$, $B$ and $N$ be subgroups of $G$,
where
$A$ is $\sigma$-subquasinormal in $G$ and $N$ is normal in $G$. }
(1) {\sl $A\cap B$ is $\sigma$-subquasinormal in $B$}.
(2) {\sl $AN/N$ is
$\sigma$-subquasinormal in $G/N$. }
(3) {\sl If $N\leq K$ and $K/N$ is $\sigma$-subquasinormal
in $G/N$, then $K$ is $\sigma$-subquasinormal in $G.$}
(4) {\sl $A^{{\frak{A}^{*}}}$ is subnormal in $G$.}
{\bf Lemma 4.12} (See \cite[Lemma 5.1.13]{Schm}). {\sl A subgroup $M$ of $G$ is modular
in $G$ if and only if $M$ is modular in $\langle x, M\rangle$ for every
element $x\in G$ of prime power order.}
{\bf Proposition 4.13.} {\sl Suppose that $G$ is a soluble $PT$-group and
let $p$ be a prime.
If every submodular $p$-subgroup of $G$ is modular in $G$, then every
$p$-subgroup of $G$ is modular in $G$. In particular, if
every submodular subgroup of a soluble
$PT$-group $G$ is modular in $G$, then $G$ is an $M$-group.}
{\bf Proof.} Assume that this proposition is false and let $G$ be a
counterexample of minimal order. Then, by \cite[Theorem 2.1.11]{prod},
the following conditions are satisfied:
the nilpotent residual $D$ of $G$ is a Hall subgroup,
$G$ acts by conjugation on $D$ as a group power automorphisms, and
every subgroup of $G/D$ is quasinormal in $G/D$. In particular, $G$ is supersoluble.
Let $M$ be a complement to $D$ in $G$ and $U$ a
non-modular $p$-subgroup of $G$ of minimal order. Then $U$ is not submodular in $G$ and
every maximal subgroup of $U$
is modular in $G$, so $U$ is a cyclic group by \cite[p. 201, Property (5)]{Schm}.
Let $V$ be the maximal subgroup of $U$. Then $V\ne 1$ since every subgroup of
prime order of a supersoluble group is submodular by \cite[Lemma 6]{mod}.
We can assume without loss of generality that $U\leq M$ since $M$ is a
Hall subgroup of $G$.
(1) {\sl If $R$ is a normal $p$-subgroup of $G$, then
every $p$-subgroup of $G$ containing $R$ is modular in $G$. In particular,
$U_{G}=1$ and so $U\cap D=1.$}
Let $L/R$ be a submodular $p$-subgroup of $G/R$.
Then $L$
is a submodular $p$-subgroup of $G$ by Lemma 4.9(3), so $L$
is modular in $G$ by hypothesis. Hence $L/R$ is modular in $G/R$ by
\cite[p. 201, Property (4)]{Schm}. Hence the hypothesis holds for $G/R$.
Therefore every $p$-subgroup $S/R$ of $G/R$ is modular in $G/R$ by the choice of $G$, so
$S$ is modular in $G$ by \cite[p. 201, Property (4)]{Schm}.
(2) {\sl If $K$ is a proper submodular subgroup of $G$, then every $p$-subgroup $L$
of $K$ is modular in $G$, so every proper subgroup of $G$ containing $U$ is not
submodular in $G$. }
First note that $K$ is a $PT$-group by \cite[Corollary~2.1.13]{prod} and if $S$ is a
submodular $p$-subgroup of $K$,
then $S$ is submodular in $G$ and so $S$ is modular in $G$.
Hence $S$ is modular in $K$.
Therefore the hypothesis holds for $K$, so every $p$-subgroup $L$
of $K$ is modular in $K$ by the choice of $G$. Hence
$L$ is modular in $G$ by hypothesis.
(3) $DU=G$ (This follows from Claim (2) and the fact that every subgroup of $G$
containing $D$ is subnormal in $G$).
(4) {\sl $V$ is not subnormal in $G$.}
Assume that $V$ is subnormal in $G$. Then $V$ is quasinormal in $G$ by
Theorem A since $V$ is
modular in $G$.
Therefore $1 < V \leq R=O_{p}(Z_{\infty}(G))$ by \cite[Corollary 1.5.6]{prod}
since $V_{G}=1=U_{G}$ by Claim (1). But $R\leq U$ since $U$ is a Sylow $p$-subgroup
of $G$ by Claim (3), hence $R=V=1$ and
so $|U|=p$, a contradiction. Therefore we have (4).
(5) {\sl $G=V^{G}\times K$, where $V^{G}$ is a non-abelian $P$-group of order prime
to $|K|$} (Since $V_{G}=1$, this follows from Claim (4) and
Lemma~2.2).
{\sl Final contradiction.} From Claim (5) it follows that $U\leq V^{G}$, so $U$ is
submodular in $G$ by \cite[Theorem 2.4.4]{Schm}. This final contradiction
completes the proof of the result.
The proposition is proved.
{\bf Proof of Theorem 4.5.} Let $\sigma (G)=\{\sigma _{1}, \ldots, \sigma _{t}\}$.
First suppose that $G$ is a soluble $M\sigma
T$-group. Then $G$ has a Hall $\sigma _{i}$-subgroup $H_{i}$ for all
$i=1, \ldots, t$.
We show that Conditions (i), (ii), and (iii) hold for $G$. Assume that
this is false and let $G$ be a counterexample of minimal order.
(1) {\sl If $R$ is a non-identity normal subgroup of $G$, then
Conditions (i), (ii), and (iii) hold for $G/R$.}
If $H/R$ is a $\sigma$-subquasinormal subgroup of $G/R$, then $H$ is
$\sigma$-subquasinormal
in $G$ by Lemma 4.11(3), so $H$ is $\sigma$-quasinormal in $G$ by hypothesis
and hence $H/R$ is a $\sigma$-quasinormal in $G/R$ by Lemma 4.9(2). Therefore
$G/R$ is an
$M\sigma T$-group, so we have (1) by the choice of $G$.
(2) {\sl If $E$ is a proper $\sigma$-subquasinormal subgroup of $G$, then
$E^{\frak{N_{\sigma}}}\leq D$ and Conditions (i) and (ii) hold for $E$. }
Every $\sigma$-subquasinormal subgroup $H$ of $E$ is $\sigma$-subquasinormal
in $G$, so $H$ is $\sigma$-quasinormal
in $G$ by hypothesis. Therefore the hypothesis holds for $E$,
so Conditions (i) and (ii) hold for $E$ by the choice of $G$.
Moreover, since $G/D\in {\frak{N_{\sigma}}}$
and ${\frak{N_{\sigma}}}$ is a hereditary class by Lemma 2.9,
$E/(E\cap D)\simeq ED/D\in {\frak{N_{\sigma}}}$. Hence $$E^{\frak{N_{\sigma}}}\leq
E\cap D\leq D.$$
(3) {\sl $D$ is nilpotent and $G$ is supersoluble.}
Let $R$ be a minimal normal subgroup of $G$. Then
$RD/R=(G/R)^{{\frak{N_{\sigma}}}}$ is abelian by Lemma 2.27 and Claim (1).
Therefore $R\leq D$, $R$
is the unique minimal normal subgroup of $G$ and $R\nleq \Phi (G)$. Hence
$R=C_{G}(R)=O_{p}(G)=F(G)$ for some
$p\in \sigma _{i}$ by \cite[Ch. A, 13.8(b) and 15.2]{DH}.
Let $V$ be a maximal
subgroup of $R$. Then $V_{G}=1$ and $V$ is $\sigma$-subquasinormal
in $G$, so $V$ is $\sigma$-quasinormal in $G$. Hence $R=V^{G}$ is a group of order $p$
by \cite[Theorem 5.2.3]{Schm}, so $ G/R=G/C_{G}(R)$
is cyclic and hence $G$ is supersoluble. But
then $D=G^{{\frak{N_{\sigma}}}}\leq G'\leq F(G)$ and so $D$ is nilpotent.
(4) {\sl If $H/K$ is $\sigma$-nilpotent, where $K\leq H$ are normal subgroups of $G$,
then $H/K$ is an $M$-group.}
Let $U/K$ be any submodular subgroup of $H/K$. Then $U/K$ is submodular in $G/K$ and so
$U$ is submodular in $G$.
On the other hand,
$U/K$ is $\sigma$-subnormal in $G/K$ by Lemma 2.10,
so $U$ is $\sigma$-subnormal in $G$ by
Lemma 2.30(5). Therefore $U$ is $\sigma$-subquasinormal in $G$
and so, by hypothesis,
$U$ is $\sigma$-quasinormal in $G$. It follows that $U/K$ is
$\sigma$-quasinormal in $G/K$ and so in $H/K$ by Lemma 4.9(1)(2), so every
submodular subgroup of $H/K$
is modular in $H/K$. Therefore $H/K$ is an $M$-group by Proposition 4.13.
(5) {\sl $D$ is a Hall subgroup of $G$. }
Suppose
that this is false and let $P$ be a Sylow $p$-subgroup of $D$ such
that $1 < P < G_{p}$, where $G_{p}\in \text{Syl}_{p}(G)$. We can assume
without loss of generality that $G_{p}\leq H_{1}$.
(a) {\sl $D=P$ is a minimal normal subgroup of $G$. }
Let $R$ be a minimal normal subgroup of $G$ contained in $D$.
Then $R$ is a $q$-group for some prime
$q$. Moreover,
$D/R=(G/R)^{\mathfrak{N}_{\sigma}}$ is a Hall subgroup of $G/R$ by
Claim (1). Suppose that $PR/R \ne 1$. Then
$PR/R \in \text{Syl}_{p}(G/R)$.
If $q\ne p$, then $P \in \text{Syl}_{p}(G)$. This contradicts the fact
that $P < G_{p}$. Hence $q=p$, so $R\leq P$ and therefore $P/R \in
\text{Syl}_{p}(G/R)$ and we again get that
$P \in \text{Syl}_{p}(G)$. This contradiction shows that $PR/R=1$, which implies that
$R=P$ is the unique minimal normal subgroup of $G$ contained in $D$.
Since $D$ is nilpotent,
a $p'$-complement $E$ of $D$ is characteristic in
$D$ and so it is normal in $G$. Hence $E=1$, which implies that $R=D=P$.
(b) {\sl $D\nleq \Phi (G)$. Hence for some maximal subgroup
$M$ of $G$ we have $G=D\rtimes M$ } (This follows from Lemma 2.9 since $G$
is not $\sigma$-nilpotent).
(c) {\sl If $G$ has a minimal normal subgroup $L\ne D$,
then $G_{p}=D\times (L\cap G_{p})$.
Hence $O_{p'}(G)=1$. }
Indeed, $DL/L\simeq D$ is a Hall
subgroup of $G/L$ by Claims (1) and (a). Hence
$G_{p}L/L=RL/L$, so $G_{p}=D\times (L\cap G_{p})$.
Thus $O_{p'}(G)=1$ since $D < G_{p}$ by Claim (a).
(d) {\sl $V=C_{G}(D)\cap M$ is a normal subgroup of $G$ and
$C_{G}(D)=D\times V \leq H_{1}$. }
In view of Claims (a) and
(b), $C_{G}(D)=D\times V$, where $V=C_{G}(D)\cap M$
is a normal subgroup of $G$. Moreover, $V\simeq DV/D$ is $\sigma $-nilpotent by
Lemma 2.9. Let $W$ be a $\sigma
_{1}$-complement of $V$. Then $W$ is characteristic in $V$ and so it is normal
in $G$. Therefore we have (d) by Claim (c).
(e) $G_{p}\ne H_{1}$.
Assume that $G_{p}=H_{1}$. Let $Z$ be a subgroup of order $p$ in $Z(G_{p})\cap D$.
Then $Z$ is $\sigma$-subquasinormal in $G$ by Claim (3), so $Z$ is
$\sigma$-quasinormal in $G$ and hence $ O^{\sigma _{1}}(G)=O^{p}(G)\leq N_{G}(Z)$
by Theorem 3.3(v).
Therefore $G=G_{p}O^{p}(G)\leq N_{G}(Z)$, hence $D=Z < G_{p}$. It follows that
$D < C_{G}(D)$.
Then $V=C_{G}(D)\cap M\ne 1$ is a normal subgroup of $G$ and
$V\leq H_{1}=G_{p}$ by Claim (d). Let $L$ be a minimal
normal subgroup of $G$ contained in $V$. Then $G_{p}=D\times L$ is a normal
elementary abelian subgroup of $G$ of order $p^{2}$ by Claim (c) and
every subgroup of $G_{p}$ is
normal in $G$ by Theorem 3.3(v).
Let $D=\langle a \rangle$, $L=\langle b \rangle$ and $N=\langle ab \rangle$.
Then $N\nleq D$, so in view of the $G$-isomorphisms
$$DN/D\simeq N\simeq NL/L= G_{p}/L=DL/L\simeq D $$ we get that
$G/C_{G}(D)=G/C_{G}(N)$ is a $p$-group since $G/D$ is $\sigma$-nilpotent by Lemma
2.9.
But then Claim (d) implies that $G$ is a $p$-group. This
contradiction shows that we have (e).
{\sl Final contradiction for (5).} In view of Theorem A in \cite{2}, $G$ has a $\sigma
_{1}$-complement $E$ such that $EG_{p}=G_{p}E$.
Let $V=(EG_{p})^{{\frak{N}}_{\sigma}}$. By
Claim (e), $EG_{p}\ne G$. On the other hand, since $ D\leq
EG_{p}$ by Claim (a), $EG_{p}$ is $\sigma$-subquasinormal in $G$ by Claim (4) and
Lemma 4.11(3).
Therefore Claim (2) implies that $V$ is a Hall subgroup of $EG_{p}$
and $V\leq D$,
so for a Sylow
$p$-subgroup $V_{p}$ of $V$ we have $|V_{p}|\leq |P| < |G_{p}|$.
Hence $V=1$.
Therefore $EG_{p}=E\times G_{p}$ is $\sigma$-nilpotent and so
$E\leq C_{G}(D)\leq H_{1}$. Hence $E=1$ and so $ D =1$, a contradiction. Thus,
$D$ is a Hall subgroup of $G$.
(6) {\sl $H_{i}=O_{\sigma _{i}}(D)\times S$
for each $\sigma _{i} \in \sigma (D) $.}
Since $D$ is a nilpotent Hall subgroup of $G$ by Claims (3) and (5),
$D=L\times N$, where $L=O_{\sigma _{i}}(D)$
and $N=O^{\sigma _{i}}(D)$ are Hall subgroups of $G$.
First assume that $N\ne 1$. Then $$O_{\sigma
_{i}}((G/N)^{\mathfrak{N}_{\sigma}})=O_{\sigma
_{i}}(D/N)=LN/N$$ has a normal complement $V/N$ in $H_{i}N/N\simeq H_{i}$
by Claim (1). On the other hand, $N$ has a complement $S$ in $V$ by
the Schur-Zassenhaus theorem. Hence $H_{i}=H_{i} \cap LSN=LS$ and $L\cap
S=1$ since $$(L\cap
S)N/N\leq (LN/N)\cap (V/N)=(LN/N)\cap (SN/N)=1.$$
It is clear that $V/N$ is a Hall subgroup of $H_{i}N/N$, so $V/N $ is
characteristic in $H_{i}N/N$. On the other hand, $H_{i}N/N$ is
normal in $G/N$ by Lemma 2.9 since $D/N\leq H_{i}N/N$.
Hence $V/N $ is normal in $G/N$.
Thus $H_{i}\cap V =H_{i}\cap NS=S(H_{i}\cap N)=S
$ is normal in $H_{i}$, so $H_{i}=O_{\sigma _{i}}(D)\times S$.
Now assume that $D=O_{\sigma _{i}}(D)$. Then $H_{i}$ is normal in
$G$, so $H_{i}$ is an $M$-group by Claim (4).
Therefore every subgroup $U$ of $H_{i}$ is $\sigma$-quasinormal and so
$\sigma$-normal in $G$ by Theorem B(v). Since $D$ is a normal Hall subgroup of $H_{i}$,
it has a
complement $S$ in $H_{i}$ and $D\leq O^{\sigma _{i}}(G)\leq N_{G}(S)$ since $S$ is
$\sigma$-normal in $G$. Hence
$H_{i}=D\times S=O_{\sigma _{i}}(D)\times S$.
(7) {\sl Every subgroup $H$ of $D$ is normal in $G$. Hence every element of
$G$ induces a power automorphism in $D$. }
Since $D$ is nilpotent by Claim (3), it is enough to consider
the case when $H\leq O_{\sigma _{i}}(D)=H_{i}\cap D$ for some $\sigma _{i}\in \sigma (D)$.
Claim (6) implies that $H_{i}=O_{\sigma _{i}}(D)\times S$.
It is clear that $H$ is $\sigma$-subquasinormal in $G$, so $H$ is $\sigma$-quasinormal
in
$G$. Therefore $H$ is $\sigma$-normal in $G$ by Theorem B(v),
so $$G=H_{i}O^{\sigma _{i}}(G)=
(O_{\sigma _{i}}(D)\times S)O^{\sigma _{i}}(G)=SO^{\sigma _{i}}(G)\leq
N_{G}(H).$$
(8) {\sl If $p$ is a prime such that $(p-1, |G|)=1$, then $p$
does not divide $|D|$. Hence the smallest prime in $\pi (G)$ belongs to $\pi (|G:D|)$.
In particular, $|D|$ is odd. }
Assume that this is false.
Then, by Claim (7), $D$ has a maximal subgroup $E$ such that
$|D:E|=p$ and $E$ is normal in $G$. It follows that $C_{G}(D/E)=G$ since $(p-1,
|G|)=1$.
Since
$D$ is a Hall subgroup of $G$ by Claim (5), it has a complement $M$ in $G$.
Hence
$G/E=(D/E)\times (ME/E)$, where
$ME/E\simeq M\simeq G/D$ is $\sigma$-nilpotent. Therefore $G/E$ is
$\sigma$-nilpotent by Lemma 2.9. But then $D\leq E$, a contradiction.
Hence we have (8).
(9) {\sl $D$ is abelian.}
In view of Claim
(7), $D$ is a Dedekind group. Hence $D$ is abelian since $|D|$ is odd by Claim (8).
From Claims (4)--(9) we get that Conditions (i), (ii), and (iii), where $M$ is an $M$-group,
hold for $G$.
Now, we show that if Conditions (i), (ii), and (iii), where $M$ is an $M$-group,
hold for $G$,
then
$G$ is an $M\sigma T$-group. Assume that
this is false and let $G$ be a counterexample of minimal order.
Then $D\ne 1$ and, by Lemma 4.12,
for some $\sigma$-subquasinormal
subgroup $A$ of $G$ and for
some element $x\in G$ of prime power order $p^{a}$ the subgroup $A$ is not
modular in $\langle A, x \rangle $. Moreover, we can assume that every proper
$\sigma$-subquasinormal subgroup of $A$ is $\sigma$-quasinormal in $G$.
(*) {\sl If $N$ is a minimal normal subgroup of $G$, then
$G/N$ is an $M\sigma T$-group. } (Since the hypothesis holds for $G/N$, this follows from
the choice of $G$).
(**) {\sl If $N$ is a minimal normal subgroup of $G$, then $AN$ is
$\sigma$-quasinormal in $G$. In particular, $A_{G}=1$.}
Claim (*) implies that $G/N$ is an $M\sigma T$-group. On the other hand,
by Lemma 4.11(2),
$AN/N$ is a $\sigma$-subquasinormal subgroup of
$G/N$, so $AN/N$ is $\sigma$-quasinormal in $G/N$. Hence we have (**) by
Lemma 4.11(3).
(***) {\sl $A$ is a $\sigma _{i}$-group for some $i$.}
From Condition (i) and Claim (**) it follows that $A\cap D=1$,
so $AD/D\simeq A$ is $\sigma$-nilpotent.
Then for $\sigma _{i}\in \sigma (A)$ we have
$A=O_{\sigma _{i}}(A)\times O_{\sigma _{i}'}(A)$. Assume that
$O_{\sigma _{i}'}(A)\ne 1$. Then $O_{\sigma _{i}}(A)$ and
$ O_{\sigma _{i}'}(A)$
are $\sigma$-quasinormal in $G$, so $A$ is $\sigma$-quasinormal
in $G$ by Lemma 2.30(3) and \cite[Page 201, Property (5)]{Schm}. This
contradiction shows that $A=O_{\sigma _{i}}(A)$ is a $\sigma _{i}$-group.
{\sl Final contradiction for the sufficiency.} Since $A$ is $\sigma$-subnormal in $G$,
from Claim (***) it follows that $A\leq H_{i}^{y}$ for all $y\in G$
by Lemma 2.30(5).
From Conditions (i) and (ii) it follows that $H_{i}=(H_{i}\cap D)\times S$
for some Hall subgroup $S$ of $H_{i}$. Then from
$A\leq H_{i}^{y}=(H_{i}^{y}\cap D)\times S^{y}$ it follows that $A\leq S^{y}$
for all $y\in G$.
Moreover, $S\simeq SD/D$ by Claim (**), and $S$ and $H_{i}\cap D $ are
$M$-groups by Condition (i). Hence $H_{i}$ is an $M$-group by
\cite[Theorem 2.4.4]{Schm}.
Therefore $x\not \in H_{i}^{y}$ for all $y\in G$, so
$p\in \sigma_{j}$ for some $j\ne i$.
Let $U= \langle x\rangle$. First assume that $x\in D$, then
$U\trianglelefteq G$. On the other hand, $A$ is $\sigma$-subnormal in $UA$,
so $UA=U\times A$ by Lemma 2.30(5). Hence $A$ is modular in
$\langle x, A\rangle =UA$, a contradiction. Therefore, $x\not \in D$.
Since $D$ is a Hall subgroup of $G$ and $x\not \in D$, $x\in M^{z}$
for some $z\in G$. It is
also clear that
$ A\leq S^{y} \leq M^{z}$ for some $y\in G$, where $M^{z}$ is an
$M$-group by Condition (i), and then $A$ is
modular in $\langle A, x \rangle $.
This final contradiction
completes the proof of the fact that $G$ is an $M\sigma T$-group.
The theorem is proved.
\section{Groups in which modularity is transitive}
Recall that a group $G$ is said to be an \emph{$M$-group} \cite[p. 54]{Schm}
if the lattice ${\cal L}(G)$, of all subgroups of $G$, is modular.
{\bf Definition 5.1.} We say that $G$ is
an \emph{$M T$-group}
if modularity is a transitive relation on $G$, that is, if $H$ is a modular subgroup
of $K$ and $K$ is a modular subgroup
of $G$, then $H$ is a modular subgroup of $G$.
The next problem goes back to the paper by Frigerio \cite{A. Frigerio}.
{\bf Problem 5.2.} {\sl What is the structure of an $MT$-group $G$, that is, a group
in which modularity is a transitive relation on $G$?}
Frigerio proved the following theorem which gives a complete answer to
this problem for the soluble case.
{\bf Theorem 5.3} (Frigerio \cite{A. Frigerio}). {\sl A soluble group is an $MT$-group
if and only if $G$ is a group with modular lattice of all subgroups ${\cal L}(G)$.}
A new proof of Theorem 5.3 was obtained in the paper \cite{mod}.
{\bf Remark 5.4.} Problem 5.2 is a special case
of the general Problem 4.1, where $\sigma =\{\mathbb{P}\}$, since in this case every
subgroup of every group is $\sigma$-subnormal.
Before continuing, we give a few definitions.
A subgroup $A$ of $G$ is said to be \emph{submodular} in $G$ if
there is a subgroup chain $A=A_{0} \leq A_{1} \leq \cdots \leq
A_{n}=G$ such that $A_{i-1}$ is a modular subgroup of $ A_{i}$
for all $i=1, \ldots , n$. Thus a group $G$ is an $M T$-group if and only if every
submodular
subgroup of $G$ is modular in $G$.
{\bf Remark 5.5.} It is clear that every subnormal subgroup is
submodular. On the other hand,
in view of the above-mentioned result of Ore in \cite{5} and Theorem 3.1,
$G$ is a $PT$-group if and only if
every subnormal subgroup of $G$ is modular. Therefore every $MT$-group is a
$PT$-group.
In view of Remark 5.5, the following well-known
result partially describes the structure of insoluble $MT$-groups.
{\bf Theorem 5.6} (Robinson \cite{217}). {\sl $G$ is a $PT$-group if
and only if $G$ has a normal perfect subgroup $D$ such that:}
(i) {\sl $G/D$ is a soluble $PT$-group, and }
(ii) {\sl if $D\ne 1$, $G$ has a Robinson complex
$(D, Z(D); U_{1}, \ldots , U_{k})$ and }
(iii) {\sl for any set $$\{i_{1}, \ldots , i_{r}\}\subseteq \{1, \ldots , k\},$$ where
$1\leq r < k$, $G$ and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy
${\bf N}_{p}$ for all $p\in \pi (Z(D))$ and
${\bf P}_{p}$
for all $p \in \pi (D)$. }
Now, recall that $G$ is a non-abelian $P$-group (see \cite[p. 49]{Schm}) if
$G=A\rtimes \langle t \rangle$, where $A$ is an elementary abelian
$p$-group and an element $t$ of
prime order $q\ne p$ induces a non-trivial power
automorphism on $A$. In this case we say that $G$ is a \emph{$P$-group
of type $(p, q)$}.
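For instance (a standard illustration, added here): the symmetric group $S_{3}=C_{3}\rtimes \langle t\rangle$ with $t$ of order $2$ is a $P$-group of type $(3, 2)$, since $t$ inverts every element of $C_{3}$ and inversion is a non-trivial power automorphism; more generally, every non-abelian group of order $pq$ with $q$ dividing $p-1$ is a $P$-group of type $(p, q)$.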
{\bf Definition 5.7.} We say that $G$ \emph{satisfies
${\bf M }_{P}$} (\emph{${\bf M }_{p, q}$}, respectively) if
whenever $N$ is a soluble normal
subgroup of $G$ and $P/N$ is a normal non-abelian $P$-subgroup (a normal $P$-group
of type $(p, q)$, respectively)
of $G/N$,
every non-subnormal subgroup of $P/N$ is modular in $G/N$.
The following theorem gives an answer to Problem 5.2.
{\bf Theorem 5.8} (Liu, Guo, Safonova and Skiba \cite{preprI, pure}).
{\sl A group $G$ is an $MT$-group if
and only if $G$ has a perfect normal subgroup $D$ such that:}
(i) {\sl $G/D$ is an $M$-group, }
(ii) {\sl if $D\ne 1$, $G$ has a Robinson complex
$(D, Z(D); U_{1}, \ldots , U_{k})$ and }
(iii) {\sl for any set $$\{i_{1}, \ldots , i_{r}\}\subseteq \{1, \ldots , k\},$$ where
$1\leq r < k$, $G$ and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy
${\bf N}_{p}$ for all $p\in \pi (Z(D))$,
${\bf P}_{p}$
for all $p\in \pi (D)$, and ${\bf M}_{p, q}$ for all pairs
$\{p, q\}$ with $\{p, q\}\cap \pi (D)\ne \emptyset.$}
The following example
shows that, in general, a $PT$-group may not be an $MT$-group.
{\bf Example 5.9.} (i) Let $\alpha: Z(SL(2, 5))\to Z(SL(2, 7))$ be an isomorphism and let
$$D:= SL(2, 5) \Ydown SL(2, 7)=(SL(2, 5)\times SL(2, 7))/V,$$
where $$V=\{(a, (a^{\alpha})^{-1})\mid a\in Z(SL(2, 5))\},$$
is the direct product of the groups $SL(2, 5)$ and $SL(2, 7)$ with joint center
(see \cite[p. 49]{hupp}).
Let $M=(C_{7}\rtimes C_{3}) \Yup (C_{13}\rtimes C_{3})$ be
the direct product of the groups $C_{7}\rtimes C_{3}$ and $C_{13}\rtimes C_{3}$
with joint
factorgroup $C_{3}$ (see \cite[p. 50]{hupp}),
where
$C_{7}\rtimes C_{3}$ is a non-abelian group of
order 21 and $C_{13}\rtimes C_{3}$ is a non-abelian group of
order 39. Finally, let $G=D\times M$.
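As a quick order count (an aside that is not part of the original argument; as above, we read $\Yup$ as the subdirect product amalgamating the common quotient $C_{3}$ and $\Ydown$ as the central product over the common centre of order $2$):
$$|M|=\frac{21\cdot 39}{3}=273=3\cdot 7\cdot 13, \qquad |D|=\frac{120\cdot 336}{2}=20160=2^{6}\cdot 3^{2}\cdot 5\cdot 7,$$
so $\pi (G)=\pi (D)\cup \pi (M)=\{2, 3, 5, 7, 13\}$.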
We show that $G$ satisfies the conditions in Theorem 5.6.
It is clear that $D=G^{\mathfrak{S}} $ is the soluble
residual of $G$ and $M\simeq G/D$ is a soluble $PT$-group.
In view of \cite[I, Satz 9.10]{hupp}, $D=U_{1}U_{2}$ and
$U_{1}\cap U_{2}=Z(D)=\Phi (D)$, where $U_{i}$ is normal in $D$,
$U_{1}/Z(D)$ is a simple group of order 60, and
$U_{2}/Z(D)$ is a simple group of order 168. Hence $(D, Z(D); U_{1}, U_{2})$ is
a Robinson complex of $G$ and
the subgroup $ Z(D)$ has order 2 and $Z(D)\leq Z(G)$.
Therefore Conditions (i) and (ii) hold for $G$. It is not difficult to show
that for every prime
$r$ dividing $|G|$ and for $O_{r}(G/N)$, where $N$ is a normal
soluble subgroup of $G$, we have $|O_{r}(G/N)|\in \{1, r\}$, so Condition (iii)
also holds for $G$.
Therefore $G$ is a $PT$-group by Theorem~5.6.
Now we show that $G$ is not an $MT$-group. First note that $M$ has
a subgroup $T\simeq C_{7}\rtimes C_{3}$ and $|M:T|=13$. Then $T$ is a maximal
subgroup of $M$ and $M/T_{M}\simeq C_{7}\rtimes C_{3}$.
Hence a subgroup $L$ of $T$ of order 3 is modular in $T$ and $T$ is modular in $M$ by
\cite[Lemma 5.1.2]{Schm}, so $L$ is submodular in $G$. Finally,
$L$ is not modular in $M$ by Lemma 5.11 below.
Therefore $G$ is not an $MT$-group by Theorem~5.8.
(ii) The group $D \times (C_{7}\rtimes C_{3})$ is an $MT$-group by Theorem~5.8.
{\bf Lemma 5.10.} {\sl Let $A$, $B$ and $N$ be subgroups of $G$, where
$A$
is submodular and $N$ is normal in $G$. Then:}
(1) {\sl $A\cap B$ is submodular in $B$},
(2) {\sl $AN/N$ is
submodular in $G/N$, }
(3) {\sl if $N\leq K$ and $K/N$ is submodular
in $G/N$, then $K$ is submodular in $G,$}
(4) {\sl $A^{{\frak{A}^{*}}}$ is subnormal in $G$, }
(5) {\sl if $G=U_{1}\times \cdots \times U_{k}$, where $U_{i}$ is a simple
non-abelian group, then $A$ is normal in $G$. }
{\bf Proof.} Statements (1)--(4) are proved in \cite{mod}.
(5) Let $E=U_{i}A$ and $A\ne 1$. Then $A$ is submodular in $E$ by Part (1), so
there is a subgroup chain
$$A=E_{0} < E_{1} < \cdots < E_{t-1} < E_{t}=E$$ such that $E_{i-1}$ is a maximal
modular subgroup of $E_{i}$ for all $i=1, \ldots, t$ and for $M=E_{t-1}$ we have
$M=A(M\cap U_{i})$ and, by \cite[Lemma 5.1.2]{Schm}, either
$M=E_{t-1}$ is a maximal normal subgroup of $E$ or $M$
is a maximal subgroup of $E$ such that
$E/M_{E}$ is a non-abelian group of order $qr$ for primes $q$ and $r$.
In the former case we have $M\cap U_{i}=1$, so $A=M$ is normal in $E$. The second case
is impossible
since $E$ has no quotient of order $qr$. Therefore $U_{i}\leq N_{G}(A)$ for all $i$,
so
$G\leq N_{G}(A)$. Hence we have (5).
The lemma is proved.
{\bf Lemma 5.11} (See Lemma 5.1.9 \cite{Schm}). {\sl Let $M$ be a modular subgroup of $G$
of prime power order. If $M$ is not quasinormal in $G$, then
$$G/M_{G}=M^{G}/M_{G}\times K/M_{G},$$ where $M^{G}/M_{G}$ is a non-abelian
$P$-group of order prime to $|K/M_{G}|$. }
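To see Lemma 5.11 at work in the smallest possible case (our own illustration, not taken from \cite{Schm}): let $G=S_{3}$ and let $M$ be a subgroup of order $2$. The subgroup lattice of $S_{3}$ is modular (it has length $2$), so $M$ is modular in $G$; on the other hand, $M$ does not permute with the other subgroups of order $2$, so $M$ is not quasinormal in $G$. Here $M_{G}=1$ and $M^{G}=S_{3}$, and indeed $G/M_{G}=M^{G}/M_{G}\times K/M_{G}$ with $K=M_{G}=1$ and $M^{G}/M_{G}\simeq S_{3}$ a non-abelian $P$-group of type $(3, 2)$ of order prime to $|K/M_{G}|=1$.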
Recall that a group $G$ is said to be an \emph{$SC$-group}
if every chief factor of $G$ is simple \cite{217}.
{\bf Lemma 5.12.} {\sl Let $G$ be a non-soluble $SC$-group and suppose that $G$ has a
Robinson complex
$$(D, Z(D); U_{1}, \ldots ,
U_{k}),$$ where $D=G^{\mathfrak{S}}=G^{\mathfrak{U}}$.
Let $U$ be a submodular non-modular subgroup of $G$ of minimal
order. Then:}
(1) {\sl If $UU_{i}'/U_{i}'$ is modular in $G/U_{i}'$ for
all $i=1, \ldots, k$, then $U$ is supersoluble.}
(2) {\sl If $U$ is supersoluble and $UL/L$ is modular in $G/L$ for
all non-trivial nilpotent normal subgroups $L$ of $G$, then
$U$ is a cyclic $p$-group for some prime $p$. }
{\bf Proof. } Suppose that this lemma is false and let $G$ be a
counterexample of minimal order.
(1) Assume this is false. Suppose that
$U\cap D\leq Z(D)$. Then every chief factor of $U$ below
$U\cap Z(D)=U\cap D$ is cyclic and, also, $UD/D\simeq U/(U\cap D)$ is
supersoluble.
Hence $U$ is supersoluble, a contradiction. Therefore
$U\cap D\nleq Z(D)$. Moreover, Lemma 5.10(1)(2)
implies that $(U\cap D)Z(D)/Z(D)$ is
submodular in $D/Z(D)$ and so $(U\cap
D)Z(D)/Z(D)$ is a non-trivial
normal subgroup of $D/Z(D)$ by Lemma 5.10(5).
Hence for some $i$ we
have $U_{i}/Z(D)\leq (U\cap
D)Z(D)/Z(D),$ so $U_{i}\leq (U\cap
D)Z(D).$ But then $U_{i}'\leq ((U\cap
D)Z(D))'\leq U\cap D.$ By hypothesis, $UU_{i}'/U_{i}'=U/U_{i}'$ is modular in
$G/U_{i}'$ and so $U$ is modular in $G$ by \cite[p.~201, Property~(4)]{Schm}, a
contradiction.
Therefore Statement (1) holds.
(2) Assume that this is false. Let $N=
U^{{\mathfrak{N}}}$ be the nilpotent residual of $U$.
Then $N < U$ since $U$ is supersoluble, so $N$ is modular in $G$.
It is clear that
every proper subgroup $S$ of $U$ with
$N\leq S$ is submodular in $G$, so the minimality of $U$ implies
that $S$ is modular in $G$. Therefore, if $U$ has at least two distinct
maximal subgroups $S$ and $W$
such that $N\leq S\cap W$, then $U=\langle S, W \rangle $ is modular in
$G$ by \cite[p. 201, Property (5)]{Schm}, contrary to our assumption on $U$.
Hence $U/N$
is a cyclic $p$-group for some prime $p$ and $N\ne 1$ since $U$ is not cyclic.
Now we show that $U$ is a $PT$-group. Let $S$ be a proper
subnormal subgroup of $U$. Then $S$ is
submodular in $G$ since $U$ is
submodular in $G$, so $S$ is modular in $G$ and hence $S$ is
quasinormal in $U$ by Theorem~3.1. Therefore
$U$ is a soluble $PT$-group, so $N=U^{{\mathfrak{N}}}=U'$ is a
Hall abelian subgroup of $U$ and every subgroup of $N$ is normal in $U$ by
\cite[Theorem 2.1.11]{prod}.
Then $N\leq U^{{\frak{A}^{*}}}$ and so $U^{{\frak{A}^{*}}}=NV,$ where $V$ is a maximal
subgroup of a Sylow $p$-subgroup $P\simeq U/N$ of $U$.
Then
$NV$ is modular in $G$ and $NV$ is subnormal in $G$ by Lemma 5.10(4). Therefore
$NV$ is quasinormal in $G$ by Theorem~3.1. Assume that for some minimal normal
subgroup $R$ of $G$ we have $R\leq (NV)_{G}$. Then $U/R$ is modular in $G/R$
by hypothesis, so $U$ is modular in $G$, a contradiction. Therefore
$(NV)_{G}=1$, so $NV$ is nilpotent by \cite[Corollary 1.5.6]{prod} and
then $V$ is normal in $U$.
Some maximal subgroup $W$ of $N$ is normal in $U$ with $|N:W|=q$. Then $S=WP$
is a maximal subgroup of $U$ such that $U/S_{U}$ is a non-abelian
group of order $pq$.
Hence $S$ is modular in $U$ by \cite[Lemma 5.1.2]{Schm}, so
$S$ is modular in $G$. It follows that
$U=NS$ is modular in $G$, a contradiction.
Therefore Statement (2) holds.
The lemma is proved.
{\bf Lemma 5.13.} {\sl If $G$ is an
$MT$-group, then every quotient $G/N$ of $G$ is an
$ MT$-group and satisfies
${\bf M}_{P}$.}
{\bf Proof.}
Let $L/N$ be a submodular subgroup of $G/N$. Then $L$ is
a submodular subgroup of $G$ by Lemma 5.10(3), so $L$
is modular in $G$ by hypothesis
and
then $L/N$ is modular in $G/N$ by \cite[p. 201, Property (3)]{Schm}. Hence
$G/N$ is an $MT$-group.
Now we show that $G/N$ satisfies
${\bf M}_{P}$. Since $G/N$ is an $ MT$-group, we can assume without loss of
generality that $N=1$. Let $R$ be a soluble normal subgroup of $G$, let $P/R$ be a normal non-abelian $P$-subgroup
of $G/R$ and let $L/R\leq P/R$. Then $L/R$ is modular
in $P/R$ by \cite[Lemma 2.4.1]{Schm}, so $L/R$ is submodular in $G/R$ and hence
$L/R$ is modular in $G/R$ since $G/R$ is an $MT$-group. Therefore $G$ satisfies
${\bf M}_{P}$.
The lemma is proved.
We need the following special case of Lemma 2.39.
{\bf Lemma 5.14.} {\sl Suppose that $G$ has a Robinson
complex $(D, Z(D);$ $ U_{1}, \ldots , U_{k})$ and let $N$ be a normal subgroup of $G$. }
(1) {\sl If $N=U_{i}'$ and $k\ne 1$, then $Z(D/N) =U_{i}/N =Z(D)N/N$ and
$$(D/N,
Z(D/N); U_{1}N/N, \ldots , U_{i-1}N/N,
U_{i+1}N/N, \ldots
U_{k}N/N)$$ is a Robinson complex of $G/N$. }
(2) {\sl If $N$ is nilpotent, then $Z(DN/N)= Z(D)N/N$ and
$$(DN/N, Z(DN/N); U_{1}N/N, \ldots ,
U_{k}N/N)$$ is a Robinson complex of $G/N$.}
{\bf Proof of Theorem 5.8.}
First assume that $G$ is an $M T$-group. Then $G$ is a $PT$-group and
every quotient $G/N$ is
an $M T$-group by Lemma 5.13.
Moreover, by Theorem 5.6,
$G$ has a normal perfect subgroup $D$ such that:
$G/D$ is a soluble $PT$-group, and
if $D\ne 1$, $G$ has a Robinson complex
$(D, Z(D); U_{1}, \ldots , U_{k})$ such that
for any set $\{i_{1}, \ldots , i_{r}\}\subseteq \{1, \ldots , k\}$, where
$1\leq r < k$, $G$ and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy
${\bf N}_{p}$ for all $p\in \pi (Z(D))$ and
${\bf P}_{p}$ for all $p \in \pi (D)$. In view of Lemma 5.13, $G$
and $G /U_{i_{1}}'\cdots U_{i_{r}}'$ satisfy ${\bf M}_{p, q}$ for all pairs
$\{p, q\}$ with $\{p, q\}\cap \pi (D)\ne \emptyset.$
In view of \cite[Theorem 2.1.11]{prod}, $G/D$ is a supersoluble $PT$-group
and if $U/D$ is a
submodular subgroup of $G/D$, then $U$ is submodular in $G$ by Lemma 5.10(3), so
$U$ is modular in $G$ by hypothesis and hence $U/D$ is modular in $G/D$ by
\cite[p. 201, Property (4)]{Schm}. Therefore $G/D$ is an $M$-group by Proposition
4.13.
Therefore the necessity
of the condition of the theorem holds.
Now, assume, arguing
by contradiction, that $G$ is a non-$MT$-group of minimal order
satisfying Conditions (i), (ii), and (iii).
Then $D\ne 1$
and $G$ has a submodular subgroup $U$ such that $U$ is not modular
in $G$
but every submodular subgroup $U_{0}$ of $G$ with $U_{0} < U$ is modular in
$G$. Let $Z=Z(D).$ Then $Z= \Phi (U_{i})=\Phi (D)$ since $D/Z$ is perfect.
(1) {\sl $U$ is supersoluble. }
First assume that $k=1$, that is, $D=U_{1}=D'$.
Then $(U\cap D)Z/Z$ is a submodular subgroup
of a simple group $D/Z$ by Lemma 5.10(1)(2). If $(U\cap D)Z/Z\ne 1$, then
$(U\cap D)Z/Z=U_{1}/Z$ by Lemma 5.10(5), so $(U\cap D)Z=U_{1}$, contrary to the fact
that $Z\leq \Phi(U_{1})$.
Hence $U\cap D\leq Z$,
so $U\cap D=U\cap Z$. Therefore
every chief factor of $U$ below $U\cap D$ is cyclic. On the other hand, $U/(U\cap D)
\simeq UD/D$ is supersoluble by Condition (i) since every $M$-group is
supersoluble by \cite[Theorem 2.4.4]{Schm} and so $U$ is supersoluble.
Now let $k\ne 1$. We show that the hypothesis holds for $G/U_{i}'$
for all $i=1, \ldots , k$. We can assume without loss of generality that $i=1$.
Let $N=U_{1}'$. Then
$ (G/N)/(D/N)\simeq G/D$ is an $M$-group and $
(D/N)'=D '/N=D/N$. From Lemma
5.14(1) it follows that
$(D/N, Z(D/N); U_{2}N/N, \ldots , U_{k}N/N)$ is
a Robinson complex of $G/N$ and
$$Z(D/N)=U_{1}/N=ZN/N\simeq Z/(Z\cap N).$$
Moreover, if
$\{i_{1}, \ldots , i_{r}\}\subseteq \{2, \ldots , k\}$, where $2\leq r < k$,
then the quotients
$G/N=G/U_{1}'$ and $$(G/N) /(U_{i_{1}}N/N)'\cdots (U_{i_{r}}N/N)'=
(G/N)/(U_{i_{1}}'\cdots U_{i_{r}}'U_{1}'/N)\simeq G/U_{i_{1}}'\cdots U_{i_{r}}'U_{1}'$$
satisfy
${\bf N}_{p}$ for all
$p\in \pi (Z(D/N))\subseteq \pi (Z)$, ${\bf P}_{p}$ for all
$p\in \pi (D/N)\subseteq \pi (D)$, and ${\bf M}_{p, q}$ for all pairs
$\{p, q\}$ with $\{p, q\}\cap \pi (D)\ne \emptyset$ by Condition (iii).
Therefore the hypothesis holds for $G/N=G/U_{1}'$, so the
submodular subgroup $UU_{1}'/U_{1}'$ of $G/U_{1}'$ is modular in
$G/U_{1}'$ by the choice of $G$. Hence $U$ is supersoluble by
Lemma 5.12(1).
(2) {\sl If $N\ne 1$
is a normal nilpotent subgroup of $G$, then Conditions (i), (ii), and (iii)
hold for $G/N$ } (See the proof of Claim (1) and use the fact that every quotient of an
$M$-group is an $M$-group as well).
(3) {\sl If $ XN/N$ is submodular in $G/N$ for some
normal nilpotent subgroup $N\ne 1$, then $ XN/N$ is modular in $G/N$ and
$XN$ is modular in $G$. In particular, $U_{G}=1$ } (This follows from
Claim (2), Lemma 5.10(3) and the choice of $G$).
(4) {\sl $U$ is a cyclic $p$-group for some prime $p$ and
$U\cap Z_{\infty}(G)$ is the maximal subgroup of $U$.}
Let $N$ be a nilpotent
non-identity normal subgroup of $G$. Then $UN/N$ is submodular in $G/N$
by Lemma 5.10(2), so $UN/N$ is modular in $G/N$ by Claim (3). Hence $U$ is a cyclic
$p$-group for some prime $p$ by Lemma 5.12(2) and Claim (1).
Now let $V$ be a maximal subgroup of $U$. Then $V=U^{{\frak{A}^{*}}}$
is subnormal in $G$ by Lemma 5.10(4), hence $V$ is modular and so
quasinormal in $G$ by Theorem A. Therefore
$V\leq Z_{\infty}(G)$ by \cite[Corollary 1.5.6]{prod} since $V_{G}=1=U_{G}$
by Claim (3). Hence $V=U\cap Z_{\infty}(G)$.
(5) {\sl $G$ has a normal subgroup $C_{q}$ of order $q$ for some $q\in \pi (Z(D))$. }
It is enough to show that $Z\ne 1$. Assume
that this is false. Then $D= U_{1}\times \cdots
\times U_{k}$, where $U_{i}$ is a simple non-abelian group for all $i$.
Let $E=U_{i}U$.
We show that $U_{i}\leq N_{G}(U) $. Note that $U$ is submodular in $E$
by Lemma 5.10(1). Therefore there is a subgroup chain
$$U=E_{0} < E_{1} < \cdots < E_{t-1} < E_{t}=E$$ such that $E_{i-1}$ is a maximal
modular subgroup of $E_{i}$ for all $i=1, \ldots, t$ and for $M=E_{t-1}$ we have
$M=U(M\cap U_{i})$. Moreover, by \cite[Lemma 5.1.2]{Schm}, either
$M=E_{t-1}$ is a maximal normal subgroup of $E$ or $M$
is a maximal subgroup of $E$ such that
$E/M_{E}$ is a non-abelian group of order $qr$ for primes $q$ and $r$.
In the former case we have $M\cap U_{i}=1$, so $U=M$ is normal in $E$. The second case
is impossible
since $E$ has no quotient of order $qr$. Therefore $U_{i}\leq N_{E}(U)$ for all $i$,
so
$D\leq N_{G}(U)$ and hence $U\cap D\leq O_{p}(D)=1$. It follows that $DU=D\times U$, so
$1 < U\leq C_{G}(D)$. But $C_{G}(D)\cap D=1$ since $Z(D)=1$. Therefore
$C_{G}(D)\simeq
C_{G}(D)D/D$ is soluble. Hence for some prime $q$ dividing $|C_{G}(D)|$ we have
$O_{q}(C_{G}(D))\ne 1$. But $O_{q}(C_{G}(D))$ is characteristic in $C_{G}(D)$, so
we have (5).
(6) {\sl $U^{G}$ is soluble}.
The subgroup $C_{q}U/C_{q}$ is submodular in $G/C_{q}$ by Lemma 5.10(2), so
$C_{q}U$ is modular in $G$ by Claim (3).
First assume that $C_{q}U$ is not subnormal in $G$.
Then
$C_{q}U$ is not quasinormal in $G$, so
$(C_{q}U)^{G}/(C_{q}U)_{G}$ is a non-abelian $P$-group by Lemma 5.11.
Hence $(C_{q}U)^{G}$ is soluble.
Now assume that $C_{q}U$ is subnormal in $G$, so
$C_{q}U/C_{q}$ is a subnormal $p$-subgroup of $G/C_{q}$ and hence
$$U^{G}/(U^{G}\cap C_{q})\simeq
C_{q}U^{G}/C_{q}=(C_{q}U/C_{q})^{G/C_{q}}\leq O_{p}(G/C_{q}).$$
Hence $U^{G}$ is soluble.
(7) {\sl $U$ is not subnormal in $G$. }
Assume that $U$ is subnormal in $G$. Then $U$ is quasinormal in $G$ since
$G$ is a $PT$-group by Theorem 5.6, so $U$ is modular in $G$ by Theorem 3.1,
a contradiction. Hence we have (7).
(8) $|U|=p$.
Assume that $|U| > p$. Then $1 < V\leq R:=O_{p}(Z_{\infty}(G))$ by Claim (4) and
$U\nleq R$ by Claim (7). Let $E=RU$. Then $E$ is not subnormal in $G$ by Claims
(4) and (7) and this subgroup is modular in $G$ by Claim (3) and Lemma 5.10(2). Moreover,
$UR/R\simeq U/(U\cap R)=U/V$ has order $p$, so $(RU)_{G}=R$.
Therefore,
in view of Lemma 5.11, $$G/R=E^{G}/E_{G}\times K/E_{G}=U^{G}R/R\times K/R,$$ where
$RU^{G}/R\simeq U^{G}/(U^{G}\cap R)$
is a non-abelian $P$-group of order prime to $|K/R|$. Then $RU^{G}/R$ is a $\pi$-group,
where $\pi =\{p, q\}$ for some prime $q$, so $G$ is $\pi$-soluble and hence
$D$ and $D/Z$ are $\pi$-soluble groups.
Assume that $U^{G}\cap D\ne 1$. Since
$U^{G}\cap D\leq Z=Z(D)\leq \Phi (D)$ by Claim (6),
for some $i$ and for some $r\in \{p, q\}$ the number $r$ divides $|U_{i}/Z|$.
It follows that $U_{i}/Z$ is an abelian group, a contradiction.
Therefore $U^{G}\cap D= 1$, so from $$U^{G}\simeq U^{G}/(U^{G}\cap D)\simeq U^{G}D/D$$
we get that $U^{G}$ is an $M$-group.
Then from Lemma 2.4.1 and Theorem 2.4.4 in \cite{Schm} it follows that
$ U(U^{G}\cap R)/(U^{G}\cap R)$ is a Sylow $p$-subgroup of
$U^{G}/(U^{G}\cap R)$ and so $U(U^{G}\cap R)$ is a cyclic Sylow $p$-subgroup of $U^{G}$.
It follows that either $U(U^{G}\cap R)=U$ or $U(U^{G}\cap R)=U^{G}\cap R$. In the former case
we have $U^{G}\cap R=V$, which is impossible by Claim (3), so
$U(U^{G}\cap R)=U^{G}\cap R$ and hence $U$ is subnormal in $G$, contrary to Claim (7).
Therefore we have (8).
(9) {\sl $U\nleq D$.}
Assume $U\leq D$.
From Claim (7) it follows that $U\nleq Z$ and then, by Claim (8) and
Lemma 5.10(1)(2)(5), for some $i$ we have
$U\simeq UZ/Z=U_{i}/Z$, so $UZ=U_{i}$, a
contradiction. Hence we have~(9).
(10) $O_{p}(D)=1$.
Assume that $G$ has a normal subgroup $Z_{p}\leq Z$ of order $p$.
Then $Z_{p}U$ is not subnormal in $G$ by Claim (7) and, also, $(Z_{p}U)_{G}=Z_{p}$
by Claim (8) and
$(Z_{p}U)^{G}=Z_{p}U^{G}$, so
$$G/Z_{p}=Z_{p}U^{G}/Z_{p}\times K/Z_{p},$$ where $Z_{p}U^{G}/Z_{p}$ is
a non-abelian $P$-group of order $p^{a}q^{b}$ prime to $|K/Z_{p}|$. Hence
$G/Z_{p}$, $D/Z_{p}$, and $D$ are $\{p, q\}$-soluble, where $p$ divides $|D/Z_{p}|$.
Hence $O_{p}(D/Z)\ne 1$. This contradiction completes the proof of the claim.
(11) {\sl $DU=D\times U$. In particular, $NU$ is not subnormal in $G$ for every normal
subgroup $N$ of $G$ contained in $D$.}
In view of Claims (8) and (9), it is enough to show that $U_{i}\leq N_{G}(U)$ for all $i$.
Let $E=U_{i}U$ and let
$$U=E_{0} < E_{1} < \cdots < E_{t-1} < E_{t}=E$$ be a subgroup chain such that $E_{i-1}$
is a maximal
modular subgroup of $E_{i}$ for all $i=1, \ldots, t$ and for $M=E_{t-1}$ we have
$M=U(M\cap U_{i})$ and either
$M=E_{t-1}$ is a maximal normal subgroup of $E$ or $M$
is a maximal subgroup of $E$ such that
$E/M_{E}$ is a non-abelian group of order $qr$ for some primes $q$ and $r$.
First assume that $M$ is normal in $E$.
From $E=U_{i}U=U_{i}M$ it follows that $E/M\simeq U_{i}/(M\cap U_{i})$ is a
simple group, so $M\cap U_{i}=Z$ and hence $U\cap U_{i}=1$ by Claim (7).
Then, by the Frattini argument,
$E=MN_{E}(U)=ZN_{E}(U)$.
But $Z\leq \Phi (E)$ since $Z\leq \Phi (U_{i})$. Therefore $N_{E}(U)=E$,
so $U_{i}\leq N_{G}(U)$.
Finally, assume that $E/M_{E}$ is a non-abelian group of order $qr$. Then
$U_{i}/(U_{i}\cap M_{E})$ is soluble, so $U_{i}=U_{i}\cap M_{E}$
since $U_{i}$ is perfect. Therefore $U_{i}, U\leq M$ and so $M=E $,
a contradiction. Hence we have (11).
(12) {\sl $U^{G}$ is not a non-abelian $P$-group. }
Assume that $U^{G}=Q\rtimes U$ is a non-abelian $P$-group of type $(q, p)$
and let $\pi =
\{q, p\}$. Let $ S=U^{G}\cap D $. In view of Claim (9) and Lemma
2.2.2 in \cite{Schm}, $U\nleq S$ and so $S\leq O_{q}(D)\leq Z(D)$.
Moreover, Claim (11) implies that
$S\ne Q$, so $DU^{G}/D\simeq U^{G}/S$ is a
non-abelian $P$-group of type $(q, p)$ by \cite[Lemma 2.2.2]{Schm}. In view of
Claims (8) and (11), $(DU)_{G}=D$ and hence
$G/D=DU^{G}/D \times K/D,$ where
$DU^{G}/D=O_{\pi}(G/D)$ and $K/D=O_{\pi'}(G/D)$, so $G/D$ is $\pi$-decomposable.
Let $E$ be a minimal supplement to $D$ in $G$. Then $E\cap D\leq \Phi(E)$, so $E$ is
soluble and $\pi$-decomposable, that is, $E=O_{\pi}(E)\times O_{\pi'}(E)$ since
$G/D\simeq E/(E\cap D)$.
First suppose that $\pi \cap \pi (D)= \emptyset$ and
let $G_{r}$ be a Sylow $r$-subgroup of
$E$, where $r\in \pi$. Then $G_{r}$ is a Sylow $r$-subgroup of
$G$, therefore $E$ has a Hall $\pi$-subgroup $H$ since $E$ is soluble
and $U^{G}=Q\rtimes U\leq H^{x}$ for all $x\in G$.
Then, by Condition (i),
for every $r$-element $x$ of $G$, where $r\in \pi$, $U$ is modular in
$\langle x, U \rangle$ since $H\simeq DH/D$ is an $M$-group.
Now let $x\in G_{r}$, where $r\not \in \pi$. Then for some Sylow
$r$-subgroup $D_{r}$ of $D$
and a Sylow $r$-subgroup $E_{r}$ of $E$ and some $y\in G$ we have
$G_{r}=D_{r}E_{r}^{y}$. Hence $x=de$, where $d\in D_{r}$ and $e\in E_{r}^{y}$. Now note
that for any $u\in U$ we have $d^{-1}u^{-1}du\in D\cap U^{G}=1$, so $d\in C_{G}(U)$.
On the other hand, $e$ is a $\pi'$-element of the $\pi$-decomposable group $E^{y}$, so
$e\in C_{G}(U)$. Therefore $x\in C_{G}(U)$ and hence $U$ is normal in
$\langle x, U \rangle$. Therefore $U$ is modular in $G$ by
\cite[Theorem 5.1.13]{Schm},
a contradiction. Finally, if
$\pi \cap \pi (D)\ne \emptyset$, then $U$ is modular in $G$ by Condition (iii).
This contradiction completes the proof of the claim.
{\sl Final contradiction.} From Claims (5), (7), (9), and (11) it follows that
$E=C_{q}U=C_{q}\times U$ is not subnormal in $G$ and, in view of Claim (8),
$E_{G}=C_{q}$.
Hence $G/E_{G}= E^{G}/E_{G}\times K/E_{G},$ where
$$E^{G}/E_{G}=C_{q}U^{G}/C_{q}\simeq
U^{G}/(C_{q}\cap U^{G})$$ is a non-abelian
$P$-group of order prime to $|K/C_{q}|$ by Lemma 5.11. Hence $G$ is a $\pi$-soluble
group, where $\pi= \pi (U^{G}/(C_{q}\cap U^{G}))$. Then $D/C_{q}$ is
$\pi$-soluble. But $C_{q}\leq \Phi (D)$, so $q$ divides $|D/C_{q}|$. Hence
$q$ does not divide $|C_{q}U^{G}/C_{q}|$.
If $C_{q}\cap U^{G}=1$, then $U^{G}\simeq C_{q}U^{G}/C_{q} $ is a non-abelian
$P$-group, contrary to Claim (12), so $C_{q}\leq U^{G}$. Then
$C_{q}$ is a Sylow $q$-subgroup of $U^{G}$.
Hence $U^{G}=C_{q}\rtimes (R\rtimes U)$,
where $R\rtimes U\simeq U^{G}/C_{q}$ is a non-abelian $P$-group.
Let $C=C_{U^{G}}(C_{q})$. Then $U\leq C$ by Claim (11) and so, by Lemma
2.2.2 in \cite{Schm}, $R\rtimes
U=U^{R\rtimes U}\leq C$. Hence $C_{q}\leq Z(U^{G})$.
Therefore
$U^{G}=C_{q}\times (R\rtimes U)$, where $R\rtimes U$ is characteristic in $U^{G}$
and so it is normal in $G$. But then $U^{G}=R\rtimes U\ne C_{q}\rtimes (R\rtimes U)$,
a contradiction.
The theorem is proved.
\section{Introduction}
Most optimization algorithms rely on classical or generalized derivative information of the objective and constraint functions.
However, in many applications, such information is not available.
This is the case, for example, if the objective function does not have an explicit formulation but can only be evaluated through complex simulations or experiments.
Such problems motivate the development of optimization algorithms that use only function values but not derivatives, also known as \gls{dfo} algorithms.
Powell devised five algorithms to tackle unconstrained and constrained problems without using derivatives, namely \gls{cobyla}~\cite{Powell_1994}, \gls{uobyqa}~\cite{Powell_2002}, \gls{newuoa}~\cite{Powell_2006}, \gls{bobyqa}~\cite{Powell_2009}, and \gls{lincoa}.
He did not only propose these algorithms but also implemented them into publicly available solvers, paying great attention to the stability and complexity of their numerical linear algebra computations.
Renowned for their robustness and efficiency, these solvers are used in a wide spectrum of applications, for instance, aeronautical engineering~\cite{Gallard_Etal_2018}, astronomy~\cite{Mamon_Biviano_Boue_2013}, computer vision~\cite{Izadinia_Shan_Seitz_2017}, robotics~\cite{Mombaur_Truong_Laumond_2010}, and statistics~\cite{Bates_Etal_2015}.
However, Powell coded the solvers in Fortran 77, an old-fashioned language that dampens the enthusiasm of many users to exploit these solvers in their projects.
There has been a continued demand from both researchers and practitioners for the availability of Powell's solvers in more user-friendly languages such as Python and MATLAB.
Responding to such a demand, this paper presents a package named \gls{pdfo}, an acronym for ``\glsxtrlong{pdfo}.''
\Gls{pdfo} interfaces Powell's Fortran solvers with other languages, enabling users of such languages to call Powell's solvers without dealing with the Fortran code.
For each supported language, \gls{pdfo} provides a simple function that can invoke one of Powell's solvers according to the user's request~(if any) or according to the type of the problem to solve.
The current release (Version 1.2) of \gls{pdfo} supports Python and MATLAB, with more languages to be covered in the future.
The signature of the Python subroutine is consistent with the \texttt{minimize} function of the SciPy optimization library; the signature of the MATLAB subroutine is consistent with the \texttt{fmincon} function of the MATLAB Optimization Toolbox.
\Gls{pdfo} is cross-platform, available on Linux, macOS, and Windows at
\begin{center}
\url{https://www.pdfo.net}\,.
\end{center}
It has been downloaded more than \num{50000} times as of February 2023, mirror downloads excluded.
Moreover, it is one of the optimization engines in GEMSEO~\cite{Gallard_Etal_2018},\footnote{\url{https://gemseo.readthedocs.io}\,.} an industrial software package for Multidisciplinary Design Optimization (MDO).
\Gls{pdfo} is not the first attempt to facilitate the usage of Powell's solvers in languages other than Fortran.
Various efforts have been made in this direction.
Py-BOBYQA~\cite{Cartis_Etal_2019,Cartis_Roberts_Sheridan-Methven_2022} provides a Python implementation of \gls{bobyqa}; NLopt includes multi-language interfaces for \gls{cobyla}, \gls{newuoa}, and \gls{bobyqa};\footnote{\url{https://github.com/stevengj/nlopt}\,.} minqa wraps \gls{uobyqa}, \gls{newuoa}, and \gls{bobyqa} in R;\footnote{\url{https://CRAN.R-project.org/package=minqa}\,.} SciPy makes \gls{cobyla} available in Python under its optimization library.\footnote{\url{https://docs.scipy.org/doc/scipy/reference/optimize.minimize-cobyla.html}\,.}
However, \gls{pdfo} has several features that distinguish it from others.
\begin{enumerate}
\item \emph{Comprehensiveness.}
To the best of our knowledge, \gls{pdfo} is the only package that provides all of \gls{cobyla}, \gls{uobyqa}, \gls{newuoa}, \gls{bobyqa}, and \gls{lincoa} with a \emph{uniform interface}.
\item \emph{Solver selection.}
\Gls{pdfo} can automatically select a solver for a given problem.
The selection takes into account the performance of the solvers on the CUTEst~\cite{Gould_Orban_Toint_2015} problem set.
\item \emph{Problem preprocessing.}
\Gls{pdfo} preprocesses the inputs to simplify the problem and reformulate it to meet the requirements of Powell's solvers.
\item \emph{Code patching.}
\Gls{pdfo} patches several bugs in the Fortran code.
Such bugs can lead to serious problems such as infinite cycling or memory errors.
\item \emph{Fault tolerance.}
\Gls{pdfo} tolerates failures of function evaluations.
In case of such failures, \gls{pdfo} will not exit but try to progress.
\item \emph{Additional options.}
\Gls{pdfo} includes options for the user to control the solvers in ways that are useful in practice.
For example, the user can request \gls{pdfo} to scale the problem according to bound constraints on the variables before solving.
\end{enumerate}
In addition to the \gls{pdfo} package, this paper also provides an overview of Powell's \gls{dfo} methods.
We will not repeat Powell's description of these methods but summarize them from a uniform viewpoint, aiming at easing the understanding of Powell's methods and paving the way for further development based on them.
The analysis of Powell's \gls{dfo} methods is not within the scope of this paper.
Under some assumptions, adapted versions of Powell's algorithms may be covered by existing theory for trust-region \gls{dfo} methods~\cite[Chapter~10]{Conn_Scheinberg_Vicente_2009b}.
However, it will still be interesting to pursue a tailored theory for Powell's algorithms.
The remaining part of this paper is organized as follows.
Section~\ref{sec:dfo} briefly reviews \gls{dfo} methods in order to provide the context of \gls{pdfo}.
We then present an overview of Powell's \gls{dfo} methods in Section~\ref{sec:powell}, including a sketch of the algorithms and a summary of their main features.
A detailed exposition of~\gls{pdfo} is given in Section~\ref{sec:pdfo}, highlighting its solver selection, problem preprocessing, bug fixes, and handling of function evaluation failures.
Section~\ref{sec:numerical} presents some experiments on \gls{pdfo}, demonstrating its stability under noise, tolerance of function evaluation failures, and potential in hyperparameter optimization.
We conclude the paper with some remarks in Section~\ref{sec:conclude}.
\section{A brief review of \glsfmtshort{dfo} methods}
\label{sec:dfo}
Consider a nonlinear optimization problem
\begin{equation}
\label{eq:nlc}
\min_{x \in \Omega} ~ f(x),
\end{equation}
where~$f : \mathbb{R}^n \to \mathbb{R}$ is the objective function and~$\Omega \subseteq \mathbb{R}^n$ represents the feasible region.
As summarized in~\cite{Conn_Scheinberg_Vicente_2009b}, two strategies have been developed to tackle problem~\eqref{eq:nlc} without using derivatives, which we will introduce in the following.
The first strategy, known as direct search,\footnote{
In some early papers (e.g.,~\cite{Powell_1994,Powell_1998}) Powell and many other authors used ``direct search'' to mean what is known as ``\glsfmtlong{dfo}'' today.
Powell rarely used the term ``derivative-free optimization.''
The only exceptions known to us are his last paper~\cite{Powell_2015} and his distinguished lecture titled ``A parsimonious way of constructing quadratic models from values of the objective function in derivative-free optimization'' at the National Center for Mathematics and Interdisciplinary Sciences, Beijing on November 4, 2011~\cite{Buhmann_Fletcher_Iserles_Toint_2018}.
} samples the objective function~$f$ and chooses iterates by simple comparisons of function values, examples including the Nelder-Mead algorithm~\cite{Nelder_Mead_1965}, the MADS methods~\cite{Audet_Dennis_2006,Digabel_2011}, and BFO~\cite{Porcelli_Toint_2017,Porcelli_Toint_2020,Porcelli_Toint_2022}.
See~\cite{Kolda_Lewis_Torczon_2003},~\cite[Chapters~7 and~8]{Conn_Scheinberg_Vicente_2009b},~\mbox{\cite[Part~3]{Audet_Hare_2017}}, and~\mbox{\cite[\S~2.1]{Larson_Menickelly_Wild_2019}} for more discussions on this paradigm, and we refer to~\cite{Gratton_Etal_2015,Gratton_Etal_2019} for recent developments of randomized methods in this category.
The second strategy approximates the original problem~\eqref{eq:nlc} by relatively simple models and locates the iterates according to these models.
Algorithms applying this strategy are referred to as model-based methods.
They often make use of the models within a trust-region framework~\cite{Conn_Scheinberg_Vicente_2009a} or a line-search framework~\cite{Berahas_Byrd_Nocedal_2019}.
Interpolation and regression are two common ways of establishing the models~\cite{Powell_2001,Conn_Scheinberg_Vicente_2008a,Conn_Scheinberg_Vicente_2008b,Wild_Regis_Shoemaker_2008,Bandeira_Scheinberg_Vicente_2012,Billups_Larson_Graf_2013,Regis_Wild_2017}.
Algorithms using finite-difference of gradients can also be regarded as model-based methods, because such gradients essentially come from linear (for the forward and backward differences) or quadratic (for the central difference) interpolation of the function under consideration over rather special interpolation sets~\cite[\S~1.4.3]{Ragonneau_2022}.
Most model-based \gls{dfo} methods employ linear or quadratic models, examples including
Powell's algorithms~\cite{Powell_1994,Powell_2002,Powell_2006,Powell_2009} in \gls{pdfo}, MNH~\cite{Wild_2008}, DFLS~\cite{Zhang_Conn_Scheinberg_2010}, DFO-TR~\cite{Bandeira_Scheinberg_Vicente_2012}, and DFO-LS~\cite{Cartis_Etal_2019,Hough_Roberts_2022}, but there are also methods exploiting \glspl{rbf}, such as ORBIT~\cite{Wild_Regis_Shoemaker_2008}, CONORBIT~\cite{Regis_Wild_2017}, and BOOSTERS~\cite{Oeuvray_Bierlaire_2009}.
Hybrids between direct search and model-based approaches exist, for instance, Implicit Filtering~\cite[Algorithm~4.7]{Kelley_2011} and MADS with quadratic models~\cite{Conn_Digabel_2013}.
Theories of global convergence and convergence rate have been established for both direct search~\cite{Torczon_1997,Kolda_Lewis_Torczon_2003,Vicente_2013,Gratton_Etal_2015,Dodangeh_Vicente_2016} and model-based methods~\cite{Conn_Scheinberg_Toint_1997a,Conn_Scheinberg_Vicente_2009a,Powell_2012,Garmanjani_Judice_Vicente_2016}.
Since the objective and constraint functions in \gls{dfo} problems are commonly expensive to evaluate, the worst-case complexity in terms of function evaluations is a major theoretical aspect of \gls{dfo} algorithms.
Examples of such complexity analysis can be found in~\cite{Vicente_2013,Gratton_Etal_2015,Dodangeh_Vicente_2016,Garmanjani_Judice_Vicente_2016}.
For more extensive discussions on \gls{dfo} methods and theory, see the monographs~\cite{Conn_Scheinberg_Vicente_2009b,Audet_Hare_2017}, the survey papers~\cite{Rios_Sahinidis_2013,Custodio_Scheinberg_Vicente_2017,Larson_Menickelly_Wild_2019}, the recent thesis~\cite{Ragonneau_2022}, and the references therein.
\section{Powell's derivative-free algorithms}
\label{sec:powell}
Powell published in 1964 his first \gls{dfo} algorithm based on conjugate directions~\cite{Powell_1964}.\footnote{
According to Google Scholar, this is Powell's second published paper and also the second most cited work.
The earliest and meanwhile most cited one is his paper on the DFP method~\cite{Fletcher_Powell_1963} co-authored with Fletcher and published in 1963.
DFP is not a DFO algorithm but the first quasi-Newton method.
The least-change property~\cite{Dennis_Schnabel_1979} of quasi-Newton methods is a major motivation for Powell to investigate the least Frobenius norm updating~\cite{Powell_2004b} of quadratic models in DFO, which is the backbone of \gls{newuoa}, \gls{bobyqa}, and \gls{lincoa}.
}
His code for this algorithm is contained in the
HSL Mathematical Software Library as subroutine \texttt{VA24}.\footnote{\url{https://www.hsl.rl.ac.uk}\,.}
It is not included in \gls{pdfo} because the code is not in the public domain, although open-source implementations are available (see~\cite[footnote~4]{Conn_Scheinberg_Toint_1997b}).
From the 1990s to the final years of his career, Powell developed five model-based \gls{dfo} algorithms to solve~\eqref{eq:nlc}, namely \gls{cobyla}~\cite{Powell_1994} (for nonlinearly constrained problems), \gls{uobyqa}~\cite{Powell_2002} (for unconstrained problems), \gls{newuoa}~\cite{Powell_2006} (for unconstrained problems), \gls{bobyqa}~\cite{Powell_2009} (for bound-constrained problems), and \gls{lincoa} (for linearly constrained problems).
Moreover, Powell implemented these algorithms into Fortran solvers and made the code publicly available.
They are the cornerstones of \gls{pdfo}.
This section provides an overview of these five algorithms, starting with a sketch in Section~\ref{subsec:sketch} and then presenting more details afterward.
\subsection{A sketch of the algorithms}
\label{subsec:sketch}
Powell's model-based \gls{dfo} algorithms are trust-region methods.
At iteration~$k$, the algorithms construct a linear (for \gls{cobyla}) or quadratic (for the other methods) model~$\objm$ for the objective function~$f$ to meet the interpolation condition
\begin{equation}
\label{eq:itpls}
\objm(y) = f(y), \quad y \in \xpt,
\end{equation}
where~$\xpt \subseteq \mathbb{R}^n$ is a finite interpolation set updated along the iterations.
\Gls{cobyla} models the constraints by linear interpolants on~$\xpt$ as well.
Instead of repeating Powell's description of these algorithms, we outline them in the sequel, emphasizing the trust-region subproblem, the interpolation problem, and the management of the interpolation set.
\subsubsection{The trust-region subproblem}
In all five algorithms, iteration~$k$ places the trust-region center~$\iter$ at the ``best'' point where the objective function and constraints have been evaluated so far.
Such a point is selected according to the objective function or a merit function that takes the constraints into account.
After choosing the trust-region center~$\iter$, with the trust-region model~$\objm$ constructed according to~\eqref{eq:itpls}, a trial point~$\iter^{{\text{t}}}$ is then obtained by solving approximately the trust-region subproblem
\begin{equation}
\label{eq:trsp}
\min_{x \in \fsetm} ~ \objm(x) \quad \text{s.t.} \quad ~ \norm{x - \iter} \le \rad,
\end{equation}
where~$\rad$ is the trust-region radius, and~$\norm{\cdot}$ is the~$\ell_2$-norm in~$\mathbb{R}^n$.
In this subproblem, the set~$\fsetm \subseteq \mathbb{R}^n$ is a local approximation of the feasible region~$\Omega$.
\Gls{cobyla} defines~$\fsetm$ by linear interpolants of the constraint functions over the set~$\xpt$, whereas the other four algorithms take~$\fsetm = \Omega$.
\subsubsection{The interpolation problem}
\label{subsec:iptprob}
\paragraph{\textnormal{\textbf{Fully determined interpolation.}}}
The interpolation condition~\eqref{eq:itpls} is essentially a linear system.
Given a base point~$y^{{\text{b}}}\in \mathbb{R}^n$, which may depend on~$k$, a linear model~$\objm$ takes the form of~$\objm(x) = f(y^{{\text{b}}}) + (x - y^{{\text{b}}})^{\mathsf{T}} \nabla \objm(y^{{\text{b}}})$, and hence~\eqref{eq:itpls} is equivalent to
\begin{equation}
\label{eq:litpls}
\objm(y^{{\text{b}}}) + (y -y^{{\text{b}}})^{\mathsf{T}} \nabla \objm(y^{{\text{b}}}) = f(y), ~ y \in \xpt,
\end{equation}
which is a linear system with respect to~$f(y^{\text{b}}) \in \mathbb{R}$ and~$\nabla f(y^{\text{b}}) \in \mathbb{R}^n$, the degrees of freedom being~$n+1$.
\Gls{cobyla} builds linear models by the system~\eqref{eq:litpls}, with~$\xpt$ being an interpolation set of~$n + 1$ points updated along the iterations.
Similarly, if~$\objm$ is a quadratic model, then~\eqref{eq:itpls} is equivalent to
\begin{equation}
\label{eq:qitpls}
\objm(y^{{\text{b}}}) + (y - y^{{\text{b}}})^{\mathsf{T}} \nabla \objm(y^{{\text{b}}}) + \frac{1}{2} (y - y^{{\text{b}}})^{\mathsf{T}} \nabla^2 \objm(y^{{\text{b}}}) (y - y^{{\text{b}}}) = f(y), ~ y \in \xpt,
\end{equation}
a linear system with unknowns~$\objm(y^{\text{b}}) \in \mathbb{R}$, $\nabla \objm(y^{\text{b}}) \!\in\! \mathbb{R}^n$, and~$\nabla^2 \objm(y^{{\text{b}}})\!\in\!\mathbb{R}^{n \times n}$, the degrees of freedom being~$(n + 1)(n + 2) / 2$ due to the symmetry of~$\nabla^2 \objm(y^{\text{b}})$.
\Gls{uobyqa} constructs quadratic models by the system~\eqref{eq:qitpls}.
To decide a quadratic model~$\objm$ completely by this system alone, \gls{uobyqa} requires that~$\xpt$ contains~$(n + 1) (n + 2) / 2$ points, and~$f$ should have been evaluated at all these points before the system can be formed.
Even though most of these points will be reused at the subsequent iterations so that the number of function evaluations needed per iteration is tiny (see Section~\ref{subsec:iptset}), we must perform~$(n + 1)(n + 2) / 2$ function evaluations during the very first iteration.
This is impracticable unless~$n$ is small, which motivates the underdetermined quadratic interpolation.
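To make the linear-system viewpoint concrete, the following minimal NumPy sketch (not Powell's code; the function name is ours) builds and solves the fully determined linear interpolation system~\eqref{eq:litpls}; the quadratic system~\eqref{eq:qitpls} is analogous, with~$(n + 1)(n + 2) / 2$ unknowns.
\begin{verbatim}
import numpy as np

def linear_interpolation_model(points, fvals):
    # Solve the (n + 1) x (n + 1) linear system: the unknowns are the model
    # value c at the base point y^b (taken as the first point) and the
    # gradient g, so that m(x) = c + (x - yb)^T g interpolates fvals.
    yb = points[0]
    A = np.hstack([np.ones((len(points), 1)), points - yb])
    coeffs = np.linalg.solve(A, fvals)
    return coeffs[0], coeffs[1:]

# Example: recover f(x) = x_1 + 2 x_2 from its values at n + 1 = 3 points.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
c, g = linear_interpolation_model(pts, np.array([0.0, 1.0, 2.0]))
\end{verbatim}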
\paragraph{\textnormal{\textbf{Underdetermined quadratic interpolation.}}}
In this case, models are established according to the interpolation condition~\eqref{eq:itpls} with~$\abs{\xpt}$ being less than~$(n + 1)(n + 2) / 2$, the remaining degrees of freedom being taken up by minimizing a certain functional~$\mathcal{F}_k$ to promote the regularity of the quadratic model.
More specifically, this means building~$\objm$ by solving
\begin{equation}
\label{eq:undqitp}
\min_{Q \in \qspace} ~ \mathcal{F}_k(Q) \quad \text{s.t.} \quad Q(y) = f(y), ~ y \in \xpt,
\end{equation}
where~$\qspace$ is the space of polynomials on~$\mathbb{R}^n$ of degree at most~$2$.
\Gls{newuoa}, \gls{bobyqa}, and \gls{lincoa} construct quadratic models in this way, with
\begin{equation}
\label{eq:leastchange}
\mathcal{F}_k(Q) = \norm[\big]{\nabla^2 Q - \nabla^2 \objm[k - 1]}_{\mathsf{F}}^2,
\end{equation}
which is inspired by the least-change property of quasi-Newton updates~\cite{Dennis_Schnabel_1979}, although other functionals are possible (see~\cite{Conn_Toint_1996,Bandeira_Scheinberg_Vicente_2012,Powell_2013,Zhang_2014,Xie_Yuan_2023} for example).
The first model~$\objm[1]$ is obtained by setting~$\objm[0] = 0$.
Powell~\cite{Powell_2013} referred to his approach as the \emph{symmetric Broyden update} of quadratic models (see also~\cite[\S~3.6]{Zhang_2012} and~\cite[\S~2.4.2]{Ragonneau_2022}).
It can be regarded as a derivative-free version of \gls{psb} quasi-Newton update~\cite{Powell_1970b}, which minimizes the functional~$\mathcal{F}_k$ among all quadratic polynomials that fulfill~$Q(\iter) = f(\iter)$, $\nabla Q(\iter) = \nabla f(\iter)$, and~$\nabla Q(\iter[k - 1]) = \nabla f(\iter[k - 1])$ (see~\cite[Theorem~4.2]{Dennis_Schnabel_1979}), with~$\iter$ and~$\iter[k - 1]$ being the current and the previous iterates, respectively.
The interpolation problem~\mbox{\eqref{eq:undqitp}--\eqref{eq:leastchange}} is a convex quadratic programming problem with respect to the coefficients of the quadratic model.
\paragraph{\textnormal{\textbf{Solving the interpolation problem.}}}
Powell's algorithms do not solve the interpolation problems~\eqref{eq:litpls}, \eqref{eq:qitpls}, and~\mbox{\eqref{eq:undqitp}--\eqref{eq:leastchange}} from scratch.
\Gls{cobyla} maintains the inverse of the coefficient matrix for~\eqref{eq:litpls} and updates it along the iterations.
Since each iteration of \gls{cobyla} alters the interpolation set~$\xpt$ by only one point (see Subsection~\ref{subsec:iptset}), the coefficient matrix is modified by a rank-$1$ update, and hence its inverse can be updated according to the Sherman-Morrison-Woodbury formula~\cite{Hager_1989}.
\Gls{uobyqa} does the same for~\eqref{eq:qitpls}, except that~\cite[\S~4]{Powell_2002} describes the update in terms of the Lagrange functions of the interpolation problem~\eqref{eq:qitpls}, the coefficients of a Lagrange function corresponding precisely to a column of the inverse matrix.
For the underdetermined quadratic interpolation~\mbox{\eqref{eq:undqitp}--\eqref{eq:leastchange}}, \gls{newuoa}, \gls{bobyqa}, and \gls{lincoa} maintain and update the inverse of the coefficient matrix for the KKT system of~\mbox{\eqref{eq:undqitp}--\eqref{eq:leastchange}}.
The update is also done by the Sherman-Morrison-Woodbury formula as detailed in~\cite[\S~2]{Powell_2004c}.
In this case, each iteration modifies the coefficient matrix and its inverse by rank-$2$ updates.
In addition, the columns of this inverse matrix readily provide the coefficients of Lagrange functions that make the interpolation problem~\mbox{\eqref{eq:undqitp}--\eqref{eq:leastchange}} easy to solve (see~\cite[\S~3]{Powell_2004b}).
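As an illustration of this updating mechanism, the following NumPy sketch performs a rank-$1$ Sherman-Morrison update of a maintained inverse, as in \gls{cobyla}; it is not Powell's code, and the rank-$2$ case used by the other solvers is analogous. The scalar \texttt{denom} below is the denominator mentioned above and equals the ratio of the determinants of the new and old matrices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.standard_normal((n, n)) + n * np.eye(n)   # current coefficient matrix
Winv = np.linalg.inv(W)                           # its maintained inverse
u = rng.standard_normal(n)
v = rng.standard_normal(n)

# Rank-1 update W_+ = W + u v^T: the inverse is refreshed without a new
# factorization, and denom = det(W_+) / det(W).
denom = 1.0 + v @ Winv @ u
Winv_new = Winv - np.outer(Winv @ u, v @ Winv) / denom

assert np.allclose(Winv_new, np.linalg.inv(W + np.outer(u, v)))
\end{verbatim}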
\paragraph{\textnormal{\textbf{The base point.}}}
The choice of the base point~$y^{{\text{b}}}$ is also worth mentioning.
\Gls{cobyla} sets~$y^{{\text{b}}}$ to the center~$x_k$ of the current trust region.
In contrast, the other four algorithms initiate~$y^{{\text{b}}}$ to the starting point provided by the user and keep it unchanged except for occasionally updating~$y^{{\text{b}}}$ to~$\iter$, without which the distance~$\norm{y^{{\text{b}}}-\iter}$ may become unfavorably large for the numerical solution of the interpolation problem.
See~\cite[\S~5]{Powell_2004b} and~\cite[\S~7]{Powell_2006} for more elaboration.
\subsubsection{The interpolation set}
\label{subsec:iptset}
The strategy to update~$\xpt$ is crucial.
It should reuse points from previous iterations, at which the objective and constraint functions have been evaluated.
Meanwhile, it needs to maintain the geometry of the interpolation set so that it is well poised, or equivalently, the interpolation problem is well conditioned~\cite{Conn_Scheinberg_Vicente_2009b}.
At a normal iteration, Powell's methods compute a point~$\iter^{{\text{t}}} \in \mathbb{R}^n$ by solving the trust-region subproblem~\eqref{eq:trsp}, and Powell's \gls{dfo} methods update the interpolation set as
\begin{equation}
\label{eq:xpt-update-tr}
\xpt[k + 1] = \big(\xpt \cup \set{\iter^{{\text{t}}}}\big) \setminus \set{y_k^{{\text{d}}}},
\end{equation}
where~$y_k^{{\text{d}}} \in \xpt$ is selected after obtaining~$\iter^{{\text{t}}}$, aiming to maintain the well-poisedness of~$\xpt[k + 1]$.
As mentioned, Powell's methods update the inverse of the coefficient matrix for either the interpolation system or the corresponding KKT\xspace system by the Sherman-Morrison-Woodbury formula.
To keep the interpolation problem well-conditioned, $y_k^{{\text{d}}}$ is chosen to enlarge the magnitude of the denominator in this formula, which is also the ratio between the determinants of the old and new coefficient matrices.\footnote{
Suppose that~$W$ is a square matrix and consider~$W_+ = W + UV^\mathsf{T}$, where $U$ and~$V$ are two matrices of the same size and $UV^\mathsf{T}$ has the same size as $W$.
Then $\det(W_+) = \det(W)\det(I+V^\mathsf{T} W^{-1}U)$, and the Sherman-Morrison-Woodbury formula is $(W_+)^{-1} = W^{-1} -W^{-1}U(I+V^\mathsf{T} W^{-1}U)^{-1} V^\mathsf{T} W^{-1}$, assuming that both~$W$ and~$I+V^\mathsf{T} W^{-1}U$ are nonsingular.
The number~$\det(I+V^\mathsf{T} W^{-1}U)$ is the only denominator involved in the numerical computation of the formula.
}
In the fully determined interpolation, this denominator is~$\ell_k^{\text{d}}(\iter^{{\text{t}}})$, where~$\ell_k^{\text{d}}$ is the Lagrange function associated with~$\xpt$ corresponding to~$y_k^{\text{d}}$~(see equations~\mbox{(10)--(13)} and~\S~2 of~\cite{Powell_2001}).
In the underdetermined case, the denominator is lower bounded by~$[\ell_k^{{\text{d}}}(\iter^{{\text{t}}})]^2$~(see equation~(2.12), Lemma~1, and~\S~2 of~\cite{Powell_2004c}, where the denominator is $\sigma$, and~$\ell_k^{{\text{d}}}(\iter^{{\text{t}}})$ is~$\tau$).
However, Powell's methods do not choose the point~$y_k^{\text{d}}$ merely according to this denominator, but also take into account its distance to the trust-region center, giving a higher priority to farther points, as we can see in~\cite[equation~(56)]{Powell_2002} and~\cite[equations~(7.4)--(7.5)]{Powell_2006}, for example.
An alternative update of the interpolation set takes place when the methods detect that~$\objm$ does not represent~$f$ well enough, attempting to improve the geometry of the interpolation set.
In this case, the methods first select a point~$y_k^{{\text{d}}} \in \xpt$ to drop from~$\xpt$, and then set
\begin{equation}
\label{eq:xpt-update-geo}
\xpt[k + 1] = \big(\xpt \setminus \set{y_k^{{\text{d}}}}\big) \cup \set{\iter^{{\text{g}}}},
\end{equation}
where~$\iter^{{\text{g}}} \in \mathbb{R}^n$ is chosen to improve the well-poisedness of~$\xpt[k + 1]$.
In \gls{cobyla}, the choice of~$y_k^{{\text{d}}}$ and~$\iter^{{\text{g}}}$ is guided by the geometrical fact that the interpolation set forms a simplex in~$\mathbb{R}^n$, trying to keep~$\xpt[k + 1]$ away from falling into an~$(n - 1)$-dimensional subspace, as is detailed in~\mbox{\cite[equations~(15)--(17)]{Powell_1994}}.
The other four methods select~$y_k^{{\text{d}}}$ from~$\xpt$ by maximizing its distance to the current trust-region center~$\iter$, and then obtain~$\iter^{{\text{g}}}$ by solving
\begin{equation}
\label{eq:biglag}
\max_{x \in \Omega} ~ \abs{\ell_k^{{\text{d}}}(x)} \quad \text{s.t.} \quad \norm{x - \iter} \le \radalt
\end{equation}
for some~$\radalt \in (0, \rad]$.
The motivation for this problem is again to enlarge the magnitude of the aforementioned denominator in the Sherman-Morrison-Woodbury updating formula: for~\gls{uobyqa}, the denominator is $\ell_k^{{\text{d}}}(x)$, while for~\gls{newuoa}, \gls{bobyqa}, and \gls{lincoa}, the denominator is lower bounded by $[\ell_k^{{\text{d}}}(x)]^2$.
In addition, \gls{newuoa} maximizes this denominator directly if~\eqref{eq:biglag} fails to make its magnitude large enough, which rarely happens~\cite[\S~6]{Powell_2006}.
Given the two possible updates~\eqref{eq:xpt-update-tr} and~\eqref{eq:xpt-update-geo} of the interpolation set, it is clear that the number of interpolation points remains constant.
As mentioned earlier, this number is~$n + 1$ in \gls{cobyla} and~$(n + 1) (n + 2) / 2$ in \gls{uobyqa}.
\Gls{newuoa}, \gls{bobyqa}, and \gls{lincoa} set it to an integer in~$[n+2, (n + 1) (n + 2) / 2]$, with the default value being~$2n + 1$, which is proved optimal in terms of the well-poisedness of the initial interpolation set chosen by Powell for \gls{newuoa}~\cite{Ragonneau_Zhang_2023a}.
\subsection{\glsfmtshort{cobyla}}
\label{subsec:cobyla}
Published in 1994, \gls{cobyla} was the first model-based \gls{dfo} solver by Powell.
The solver is named after ``\glsxtrlong{cobyla}.''
It aims to solve problem~\eqref{eq:nlc} with the feasible region
\begin{equation*}
\Omega \mathrel{\stackrel{\mathsf{def}}{=}} \set{x \in \mathbb{R}^n : \con(x) \ge 0, ~ i = 1, \dots, m},
\end{equation*}
where~$\con : \mathbb{R}^n \to \mathbb{R}$ denotes the~$i$th constraint function for each $i \in \set{1, 2, \dots, m}$.
The same as the objective function, all constraints are assumed to be accessible only through function values.
As mentioned before, iteration~$k$ of \gls{cobyla} models the objective and the constraint functions with linear interpolants on the interpolation set~$\xpt$ of~$n + 1$ points.
Once the linear models~$\conm$ of~$\con$ are built for~$i \in \set{1, \dots, m}$, the trust-region subproblem~\eqref{eq:trsp} is formed with
\begin{equation}
\label{eq:cobylarg}
\fsetm \mathrel{\stackrel{\mathsf{def}}{=}} \set{x \in \mathbb{R}^n : \conm(x) \ge 0, ~ i=1, \dots, m}.
\end{equation}
This subproblem may not be feasible, as the trust region and the region~\eqref{eq:cobylarg} may not intersect.
\Gls{cobyla} handles the trust-region subproblem in two stages.
In the first stage, it solves
\begin{equation*}
\min_{x \in \mathbb{R}^n} \max_{1 \le i \le m} ~ [\conm(x)]_{-} \quad \text{s.t.} \quad \norm{x - \iter} \le \rad,
\end{equation*}
where~$[t]_{-} = \max \set{0, -t}$ for any~$t \in \mathbb{R}$.
In doing so, the method attempts to reduce the~$\ell_{\infty}$-violation of the linearized constraints within the trust region.
If the first stage finds a point in the interior of the trust region, then the second stage uses the resultant freedom in $x$ to minimize the linearized objective function~$\objm$ within the trust region subject to no increase in any greatest violation of the linearized constraints.
\Gls{cobyla} assesses the quality of points and updates the trust-region radius according to an~$\ell_\infty$-merit function and a reduction ratio based on it (see~\cite[equations~(5),~(9), and~(10)]{Powell_1994}).
It never increases the trust-region radius, and reduces the radius if the geometry of~$\xpt$ is acceptable but the trust-region trial point~$\iter^{\text{t}}$ is too close to~$\iter$ or does not render a big enough reduction ratio~\cite[equation~(11)]{Powell_1994}.
\subsection{\glsfmtshort{uobyqa}}
\label{subsec:uobyqa}
In 2002, Powell published \gls{uobyqa}~\cite{Powell_2002}, named after ``\glsxtrlong{uobyqa}.''
It aims at solving the nonlinear optimization problem~\eqref{eq:nlc} in the unconstrained case, i.e., when~$\Omega = \mathbb{R}^n$.
At iteration~$k$, \gls{uobyqa} constructs the model~$\objm$ for the objective function~$f$ by the fully determined quadratic interpolation on the interpolation set~$\xpt$ containing~$(n + 1)(n + 2) / 2$ points.
The trust-region subproblem~\eqref{eq:trsp} is solved with the Mor{\'{e}}-Sorensen algorithm~\cite{More_Sorensen_1983}.
For the geometry-improving subproblem~\eqref{eq:biglag}, Powell developed an inexact algorithm that requires only~$\mathcal{O}(n^2)$ operations.
See~\cite[\S~2]{Powell_2002} for more details.
\Gls{uobyqa} updates the trust-region radius~$\rad$ in a noteworthy way.
The update is typical for trust-region methods, except that a lower bound~$\radlb$ is imposed on~$\rad$.
The value of~$\radlb[k]$ can be regarded as an indicator for the current \emph{resolution} of the algorithm.
Without imposing~$\rad[k] \ge \radlb[k]$, the trust-region radius~$\rad[k]$ may be reduced to a value that is too small for the current resolution, making the interpolation points concentrate too much.
The value of~$\radlb[k]$ is never increased and is decreased when \gls{uobyqa} decides that the work for the current value of~$\radlb[k]$ is finished.
It decides so if~$\rad[k]$ reaches its lower bound~$\radlb[k]$, the current trust-region trial step does not perform well, and the current interpolation set seems adequate for the current resolution.
See~\cite[\S~3]{Powell_2002} for more information on the updates of~$\rad$ and~$\radlb[k]$.
\subsection{\glsfmtshort{newuoa}, \glsfmtshort{bobyqa}, and \glsfmtshort{lincoa}}
\label{subsec:nbloa}
Later on, based on the underdetermined quadratic interpolation introduced in Subsection~\ref{subsec:iptprob},
Powell developed his last three \gls{dfo} solvers, namely \gls{newuoa}~\cite{Powell_2006,Powell_2008}, \gls{bobyqa}~\cite{Powell_2009}, and \gls{lincoa}.
\Gls{bobyqa} and \gls{lincoa} are named respectively after ``\glsxtrlong{bobyqa}'' and ``\glsxtrlong{lincoa}'', but Powell~\cite{Powell_2006,Powell_2008} did not specify the meaning of \gls{newuoa}, which is likely an acronym for ``\glsxtrlong{newuoa}.''
It is worth mentioning that Powell \emph{never} published a paper to introduce \gls{lincoa}, and~\cite{Powell_2015} discusses only how to solve its trust-region subproblem.
\Gls{newuoa}, \gls{bobyqa}, and \gls{lincoa} aim at solving unconstrained, bound-constrained, and linearly constrained problems respectively.
They all set~$\fsetm$ in the trust-region subproblem~\eqref{eq:trsp} to be~$\Omega$, corresponding to the whole space for \gls{newuoa}, a box for \gls{bobyqa}, and a polyhedron for \gls{lincoa}.
To solve the trust-region subproblem~\eqref{eq:trsp}, \gls{newuoa} employs the Steihaug-Toint truncated conjugate gradient (TCG) algorithm~\cite{Steihaug_1983,Toint_1981}; if the boundary of the trust region is reached, then \gls{newuoa} may make further changes to the trust-region step, each one obtained by searching in the two-dimensional space spanned by the current step and the corresponding gradient of the trust-region model~\cite[\S~5]{Powell_2006}.
\Gls{bobyqa} solves~\eqref{eq:trsp} by an active-set variant of the TCG algorithm, and it may also improve the TCG step by two-dimensional searches if it reaches the trust-region boundary~\cite[\S~3]{Powell_2009}.
\Gls{lincoa} uses another active-set variant of TCG to solve the trust-region subproblem~\eqref{eq:trsp} with linear constraints~\cite[\S~3 and \S~5]{Powell_2015}.
An accessible description of the TCG algorithms employed by \gls{bobyqa} and \gls{lincoa} can be found in~\cite[\S~6.2.1 and \S~6.2.2]{Ragonneau_2022}.
\Gls{newuoa}, \gls{bobyqa}, and \gls{lincoa} manage the trust-region radius in a way similar to \gls{uobyqa}, imposing a lower bound~$\radlb$ on~$\rad$ when updating~$\rad$.
\begin{sloppypar}
When solving the geometry-improving subproblem~\eqref{eq:biglag}, \gls{newuoa} first takes $\iter \pm \radalt (y_k^{\text{d}} - \iter)/\norm{y_k^{\text{d}} -\iter}$, with the sign that provides the larger value of~$\ell_k^{\text{d}}$, and then revises it by a procedure similar to the two-dimensional searches that improve the TCG step for~\eqref{eq:trsp}~(see~\cite[\S~6]{Powell_2006}).
\Gls{bobyqa} computes two approximate solutions to~\eqref{eq:biglag} and chooses the better one: the first one solves~\eqref{eq:biglag} with an additional constraint that~$x$ is located on the straight lines through~$\iter$ and another point in~$\xpt$, and the second is obtained by a Cauchy step for~\eqref{eq:biglag} (see~\cite[\S~3]{Powell_2009}).
The geometry-improving step of \gls{lincoa} is more complex, as it is chosen from three approximate solutions to~\eqref{eq:biglag}:
\end{sloppypar}
\begin{enumerate}
\item the point that maximizes~$\abs{\ell_k^{{\text{d}}}}$ within the trust region on the lines through~$\iter$ and another point in~$\xpt$,
\item a point obtained by a gradient step that maximizes~$\abs{\ell_k^{{\text{d}}}}$ within the trust region, and
\item a point obtained by a projected gradient step that maximizes~$\abs{\ell_k^{{\text{d}}}}$ within the trust region, the projection being made onto the null space of the constraints that are considered active at~$\iter$.
\end{enumerate}
Note that the first two cases disregard the linear constraints (i.e.~$x\in\fsetm = \Omega$), while the third case considers only the active constraints.
\Gls{lincoa} first selects the point among the first two alternatives for a larger value of~$\abs{\ell_k^{{\text{d}}}}$; further, this point is replaced with the third alternative if the latter nearly satisfies the linear constraints~$x\in\Omega$ while rendering a value of~$\abs{\ell_k^{{\text{d}}}}$ that is not too small compared with the above one.
\Gls{bobyqa} respects the bound constraints~$x \in \Omega$ when solving the trust-region subproblem~\eqref{eq:trsp} and the geometry-improving subproblem~\eqref{eq:biglag}, even though these problems are solved approximately.
It also chooses the initial interpolation set~$\xpt[1]$ within the bounds.
Therefore, \gls{bobyqa} is a feasible method.
In contrast, \gls{lincoa} may violate the linear constraints when solving the geometry-improving subproblem and when setting up the initial interpolation set.
Consequently, \gls{lincoa} is an infeasible method, which requires~$f$ to be defined even when the linear constraints are not satisfied.
\section{The \glsfmtshort{pdfo} package}
\label{sec:pdfo}
This section details the main features of \gls{pdfo}, in particular the signature of the main function, solver selection, problems preprocessing, bug fixes, and handling failures of function evaluations.
For more features of \gls{pdfo}, we refer to its homepage at \url{https://www.pdfo.net}\,.
Before starting, we emphasize that \gls{pdfo} does not re-implement Powell's solvers but rather enables Python and MATLAB to call Powell's Fortran implementation.
At a low level, it uses F2PY\,\footnote{\url{https://numpy.org/doc/stable/f2py}\,.} to interface Python with Fortran, and MEX to interface MATLAB with Fortran, although users never need such knowledge to employ \gls{pdfo}.
\subsection{Signature of the main function}
The philosophy of \gls{pdfo} is simple: providing a single function named \texttt{pdfo}\xspace to solve \gls{dfo} problems with or without constraints, calling Powell's Fortran solvers in the backend.
It takes for input an optimization problem of the form
\begin{subequations}
\label{eq:pdfo}
\begin{align}
\min_{x \in \mathbb{R}^n} & ~ f(x)\\
\text{s.t.} & ~ l \le x \le u, \label{eq:pdfo-b}\\
& ~ A_{\scriptscriptstyle\text{I}} x \le b_{\scriptscriptstyle\text{I}}, ~ A_{\scriptscriptstyle\text{E}} x = b_{\scriptscriptstyle\text{E}}, \label{eq:pdfo-l}\\
& ~ \con[\scriptscriptstyle\text{I}](x) \le 0, ~ \con[\scriptscriptstyle\text{E}](x) = 0, \label{eq:pdfo-nl}
\end{align}
\end{subequations}
where~$f$ is a real-valued objective function, while~$\con[\scriptscriptstyle\text{E}]$ and~$\con[\scriptscriptstyle\text{I}]$ are vector-valued constraint functions.
The bound constraints are given by~$n$-dimensional vectors~$l$ and~$u$, which may take infinite values.
The linear constraints are formulated by real matrices~$A_{\scriptscriptstyle\text{E}}$ and~$A_{\scriptscriptstyle\text{I}}$ together with real vectors~$b_{\scriptscriptstyle\text{E}}$ and~$b_{\scriptscriptstyle\text{I}}$ of proper sizes.
We allow one or more of the constraints~\eqref{eq:pdfo-b}--\eqref{eq:pdfo-nl} to be absent.
Being a specialization of~\eqref{eq:nlc}, problem~\eqref{eq:pdfo} is broad enough to cover numerous applications of \gls{dfo}.
In the Python version of \gls{pdfo}, the signature of the \texttt{pdfo}\xspace function is compatible with the \texttt{minimize} function available in the \texttt{scipy.optimize} module of SciPy.
It can be invoked in exactly the same way as \texttt{minimize} except that \texttt{pdfo}\xspace does not accept derivative arguments.
The MATLAB version of \gls{pdfo} designs the \texttt{pdfo}\xspace function following the signature of the \texttt{fmincon} function available in the Optimization Toolbox of MATLAB.
In both Python and MATLAB, users can check the detailed syntax of \texttt{pdfo}\xspace by the standard \texttt{help} command.
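For illustration, a minimal Python usage sketch is given below, assuming the \texttt{minimize}-like signature described above; the objective function and the bound values are purely illustrative.
\begin{verbatim}
import numpy as np
from pdfo import pdfo

def chrosen(x):  # chained Rosenbrock function, used here only as an example
    return sum((1.0 - x[:-1])**2 + 4.0 * (x[1:] - x[:-1]**2)**2)

x0 = np.zeros(5)                         # starting point
bounds = [(-1.0, 2.0)] * 5               # bound constraints l <= x <= u
res = pdfo(chrosen, x0, bounds=bounds)   # solver selected automatically
print(res.x, res.fun)
\end{verbatim}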
\subsection{Automatic selection of the solver}
\label{subsec:solver-selection}
When invoking the \texttt{pdfo}\xspace function, the user may specify which solver to call in the backend.
However, if the user does not do so or chooses a solver that is incapable of solving the problem (e.g., \gls{uobyqa} cannot solve constrained problems), then \texttt{pdfo}\xspace selects the solver as follows.
\begin{enumerate}
\item If the problem is unconstrained, then \gls{uobyqa} is selected when~$2 \le n \le 8$, and \gls{newuoa} is selected when~$n = 1$ or~$n > 8$.
\item If the problem is bound-constrained, then \gls{bobyqa} is selected.
\item If the problem is linearly constrained, then \gls{lincoa} is selected.
\item Otherwise, \gls{cobyla} is selected.
\end{enumerate}
The problem type is detected automatically according to the input.
In the unconstrained case, we select \gls{uobyqa} for small problems because it is more efficient, and the number \num{8} is set according to our experiments on the CUTEst~\cite{Gould_Orban_Toint_2015} problems.
Note that Powell's implementation of \gls{uobyqa} cannot handle univariate unconstrained problems, for which \gls{newuoa} is chosen.
In addition to the \texttt{pdfo}\xspace function, \gls{pdfo} provides functions named \texttt{cobyla}\xspace, \texttt{uobyqa}\xspace, \texttt{newuoa}\xspace, \texttt{bobyqa}\xspace, and \texttt{lincoa}\xspace, which invoke the corresponding solvers directly, but it is highly recommended to call the solvers via the \texttt{pdfo}\xspace function.
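The selection rule above can be sketched as follows; this is only an illustration of the logic, not the package's internal code, and the problem type is assumed to be already detected from the input.
\begin{verbatim}
def select_solver(n, problem_type):
    if problem_type == 'unconstrained':
        return 'uobyqa' if 2 <= n <= 8 else 'newuoa'
    if problem_type == 'bound-constrained':
        return 'bobyqa'
    if problem_type == 'linearly-constrained':
        return 'lincoa'
    return 'cobyla'   # nonlinearly constrained problems
\end{verbatim}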
\subsection{Problem preprocessing}
\label{subsec:pdfo-preprocessing}
\Gls{pdfo} preprocesses the input of the user in order to fit the data structure expected by Powell's Fortran code.
For example, \gls{lincoa} needs a feasible starting point to work properly, unless the problem is infeasible.
If the starting point is not feasible, then \gls{lincoa} would modify the right-hand sides of the linear constraints to make it feasible and then solve the modified problem.
Therefore, for linearly constrained problems, \gls{pdfo} attempts to project the user-provided starting point onto the feasible region before passing the problem to the Fortran code, so that a feasible problem will not be modified by \gls{lincoa}.
Another noticeable preprocessing of the constraints made by \gls{pdfo} is the treatment of the linear equality constraints in~\eqref{eq:pdfo-l}.
As long as these constraints are consistent, we eliminate them and reduce~\eqref{eq:pdfo} to an~$(n - \rank A_{\scriptscriptstyle\text{E}})$-dimensional problem.
This is done by a QR factorization of~$A_{\scriptscriptstyle\text{E}}$.
The main motivation for this reduction comes again from \gls{lincoa}, which accepts only linear inequality constraints.
An alternative approach is to write a linear equality constraint as two inequalities, but our approach reduces the dimension of the problem, which is beneficial for the efficiency of \gls{dfo} solvers in general.
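The reduction can be sketched as follows; this is a simplified NumPy illustration rather than \gls{pdfo}'s actual preprocessing code, and it assumes that~$A_{\scriptscriptstyle\text{E}}$ has full row rank.
\begin{verbatim}
import numpy as np

def eliminate_equalities(A_eq, b_eq):
    # QR factorization A_E^T = Q [R; 0]; assumes A_E (m x n) has full row rank.
    m, n = A_eq.shape
    Q, R = np.linalg.qr(A_eq.T, mode='complete')
    x_p = Q[:, :m] @ np.linalg.solve(R[:m, :].T, b_eq)  # particular solution
    N = Q[:, m:]                      # orthonormal basis of the null space
    return x_p, N                     # feasible set: {x_p + N z, z in R^(n-m)}

# The reduced problem then minimizes z -> f(x_p + N @ z) over R^(n-m),
# subject to the remaining (transformed) constraints.
\end{verbatim}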
\subsection{Bug fixes in the Fortran source code}
\label{subsec:bug-corrections}
The current version of \gls{pdfo} patches several bugs in the original Fortran source code, particularly the following ones.
\begin{enumerate}
\item The solvers may encounter infinite loops.
This happens when the exit conditions of a loop can never be met because variables involved in these conditions become NaN due to floating point exceptions.
The user's program will never end if this occurs.
\item The Fortran code may encounter memory errors due to uninitialized indices.
This is because some indices are initialized according to conditions that can never be met due to NaN, similar to the previous case.
The user's program will crash if this occurs.
\end{enumerate}
In our extensive tests based on the CUTEst problems, these bugs are triggered from time to time, though not often.
They are activated only when the problem is rather ill-conditioned or the inputs are rather extreme.
This has been observed, for instance, on the CUTEst problems DANWOODLS, GAUSS1LS, and LAKES with some perturbation and randomization.
Even though these bugs are rarely observed in our tests, it is vital to patch them for two reasons.
First, their consequences are severe once they occur.
Second, application problems are often more irregular and savage than the testing problems we use, and hence the bugs may be triggered more often than we expect.
Nevertheless, \gls{pdfo} allows the users to call Powell's original Fortran code without these patches by setting the option~\texttt{classical} to true, which is highly discouraged.
\subsection{Handling failures of function evaluations}
\label{subsec:barrier}
\Gls{pdfo} tolerates NaN values returned by function evaluations.
Such a value can be used to indicate failures of function evaluations, which are common in applications of \gls{dfo}.
To cope with NaN values, \gls{pdfo} applies a \emph{moderated extreme barrier}.
Suppose that~$f(\tilde{x})$ is evaluated to NaN at a certain~$\tilde{x} \in \mathbb{R}^n$.
\Gls{pdfo} takes the view that~$\tilde{x}$ violates a hidden constraint~\cite{LeDigabel_Wild_2015,Audet_Caporossi_Jacquet_2020}.
Hence it replaces NaN with a large but finite number \texttt{HUGEFUN}\xspace (e.g., $10^{30}$) before passing~$f(\tilde{x})$ to the Fortran solver, so that the solver can continue to progress while penalizing~$\tilde{x}$.
Indeed, since Powell's solvers construct trust-region models by interpolation, all points that are close to~$\tilde{x}$ will be penalized.
Similar things are done when the constraint functions return NaN.
A caveat is that setting~$f(\tilde{x})$ to \texttt{HUGEFUN}\xspace may lead to extreme values or even NaN in the coefficients of the interpolation models, but Powell's solvers turn out to be quite tolerant of such values.
The original extreme barrier approach~\cite[equation~(13.2)]{Conn_Scheinberg_Vicente_2009b} sets \texttt{HUGEFUN}\xspace to~$\infty$, which is inappropriate for methods based on interpolation.
In fact, we also moderate~$f(\tilde{x})$ to~$\texttt{HUGEFUN}\xspace$ if it is actually evaluated to~$\infty$.
Our approach is clearly naive, but it is better than terminating the solver once the function evaluation fails.
In our experiments, this simple approach significantly improves the robustness of \gls{pdfo} with respect to failures of function evaluation, as will be demonstrated in Subsection~\ref{subsec:nan}.
There do exist other more sophisticated approaches~\cite{Audet_Caporossi_Jacquet_2020}, which will be explored in the future.
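In effect, the objective function is wrapped roughly as in the following schematic sketch; this is not the actual Fortran-interface code, and the value of \texttt{HUGEFUN}\xspace follows the example given above.
\begin{verbatim}
import numpy as np

HUGEFUN = 1.0e30          # large but finite penalty value

def moderated(fun):
    def wrapped(x):
        val = fun(x)
        # Replace NaN (failed evaluation) or infinity by HUGEFUN so that the
        # solver can continue while penalizing x and its neighborhood.
        return HUGEFUN if not np.isfinite(val) else float(val)
    return wrapped
\end{verbatim}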
\section{Numerical results}
\label{sec:numerical}
This section presents numerical experiments on \gls{pdfo}.
Since Powell's solvers are widely used as benchmarks in \gls{dfo}, extensive comparisons with standard \gls{dfo} solvers are already available in the literature~\cite{More_Wild_2009,Rios_Sahinidis_2013}.
Instead of repeating such comparisons, which is unnecessary, the purpose of our experiments is the following.
\begin{enumerate}
\item Demonstrate that \gls{pdfo} is capable of adapting to noise without fine-tuning according to the noise level, in contrast to methods based on finite differences.
This is done in Subsection~\ref{subsec:noise} by comparing \gls{pdfo} with finite-difference \gls{cg} and \gls{bfgs} on unconstrained CUTEst problems.
\item Verify the effectiveness of the moderated extreme barrier mentioned in Subsection~\ref{subsec:barrier} for handling failures of function evaluations.
This is done in Subsection~\ref{subsec:nan} by testing \gls{pdfo} with and without the barrier on unconstrained CUTEst problems.
\item Illustrate the potential of \gls{pdfo} in hyperparameter optimization problems from machine learning, echoing the observations made in~\cite{Ghanbari_Scheinberg_2017} about trust-region \gls{dfo} methods for such problems.
This is done in Subsection~\ref{subsec:hypertune} by comparing \gls{pdfo} with two solvers from the Python package \texttt{hyperopt}.\footnote{\url{https://hyperopt.github.io/hyperopt}\,.}
\end{enumerate}
Our experiments are carried out in double precision based on the Python version of \gls{pdfo}~1.2.
The finite-difference \gls{cg} and \gls{bfgs} are provided by SciPy~1.10.0.
The version of \texttt{hyperopt} is~0.2.7.
All these packages are tested with the latest stable version at the time of writing.
We conduct the test on a ThinkStation P620 with an AMD Ryzen Threadripper PRO 3975WX CPU and 64 GB of memory, the operating system being Ubuntu~22.04, and the Python version being~3.10.6.
\subsection{Stability under noise}
\label{subsec:noise}
We first compare \gls{pdfo} with finite-difference \gls{cg} and \gls{bfgs} on unconstrained problems with multiplicative Gaussian noise.
We take the view that multiplicative noise makes more sense if the scale of the objective function changes widely, as is often the case in applications.
SciPy provides both \gls{cg} and \gls{bfgs} under the \texttt{minimize} function in the \texttt{scipy.optimize} module.
Invoked with the default configurations in SciPy without derivatives, \gls{cg} and \gls{bfgs} approximate gradients by forward differences with the default difference parameter~$h = \sqrt{u} \approx 1.5\times 10^{-8}$, where~$u$ is the unit roundoff.
For \gls{pdfo}, we specify \gls{newuoa} as the solver, while setting all the other options to the default.
In particular, the initial trust-region radius is~$1$, the final trust-region radius is~$10^{-6}$, and the number of interpolation points is~$2n + 1$, with~$n$ being the dimension of the problem being solved.
We perform the comparison on \num{158} unconstrained problems with~$n \le 50$ from the CUTEst~\cite{Gould_Orban_Toint_2015} problem set using PyCUTEst~1.4~\cite{Fowkes_Roberts_Burmen_2022}.
For each testing problem, the starting point is set to the one provided by CUTEst, and the maximal number of function evaluations is set to~$500n$.
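These settings correspond to an invocation of roughly the following form; this is only a sketch of the experimental setup, assuming the option names \texttt{rhobeg}, \texttt{rhoend}, \texttt{npt}, and \texttt{maxfev}, with \texttt{fun} and \texttt{x0} obtained from PyCUTEst.
\begin{verbatim}
n = len(x0)
res = pdfo(fun, x0, method='newuoa',
           options={'rhobeg': 1.0, 'rhoend': 1e-6,
                    'npt': 2 * n + 1, 'maxfev': 500 * n})
\end{verbatim}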
Let~$\sigma \ge 0$ be the noise level to test.
For a testing problem with the objective function~$f$, we define
\begin{equation}
\label{eq:noisy-obj}
\tilde{f}_\sigma(\iter[]) = [1 + \sigma R(\iter[])] f(\iter[]),
\end{equation}
with $R(\iter[])\sim \mathrm{N}(0, 1)$ being independent and identically distributed when~$x$ varies.
If~$\sigma = 0$, then~$\tilde{f}_\sigma = f$, corresponding to the noise-free case.
In general,~$\sigma$ equals the standard deviation of the noise.
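A sketch of the noisy oracle~\eqref{eq:noisy-obj} is given below; it is an illustrative wrapper rather than the exact benchmarking code.
\begin{verbatim}
import numpy as np

def noisy(fun, sigma, rng=np.random.default_rng()):
    # Each evaluation is multiplied by an independent factor 1 + sigma * R,
    # with R ~ N(0, 1); sigma = 0 recovers the noise-free objective.
    return lambda x: (1.0 + sigma * rng.standard_normal()) * fun(x)
\end{verbatim}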
Given a noise level~$\sigma \ge 0$ and a convergence tolerance~$\tau \in (0, 1)$, we will plot the performance profiles~\cite{More_Wild_2009} of the solvers on the testing problems.
We run all the solvers on all the problems, every objective function being evaluated by its contaminated version~\eqref{eq:noisy-obj}.
For each solver, the performance profile displays the proportion of problems solved with respect to the normalized cost to solve the problem up to the convergence tolerance~$\tau$.
For each problem, the cost to solve the problem is the number of function evaluations needed to achieve
\begin{equation}
\label{eq:cvt}
f(\iter[0]) - f(\iter) \ge (1 - \tau) [f(\iter[0]) - f_{\ast}],
\end{equation}
and the normalized cost is this number divided by the minimum cost of this problem among all solvers; we define the normalized cost as infinity if the solver fails to achieve~\eqref{eq:cvt} on this problem.
Here,~$\iter[0]$ represents the starting point, and~\cite[\S~2.1]{More_Wild_2009} suggests that the value~$f_{\ast}$ should be the least value of~$f$ obtained by all solvers.
Note that the convergence test~\eqref{eq:cvt} uses the values of~$f$ and not those of~$\tilde{f}_\sigma$.
This means that we assess the solvers according to the true objective function values, even though the solvers can only evaluate~$\tilde{f}_\sigma$, which is contaminated unless~$\sigma = 0$.
To make our results more reliable, when~$\sigma>0$, the final performance profile is obtained by averaging the profiles obtained via the above procedure over ten independent runs.
In addition, the value~$f_{\ast}$ in the convergence test~\eqref{eq:cvt} is set to the least value of~$f$ obtained by all solvers during all these ten runs plus a run with~$\sigma = 0$.
Finally, for better scaling of the profiles, we plot the binary logarithm of the normalized cost on the horizontal axis, instead of the normalized cost itself.
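For readers who wish to reproduce the profiles, the computation can be summarized by the following schematic routine, where \texttt{costs[i, j]} records the number of function evaluations solver~$j$ needs to satisfy~\eqref{eq:cvt} on problem~$i$ (with \texttt{inf} marking a failure); it is only a sketch of the procedure of~\cite{More_Wild_2009}, not our actual implementation.
\begin{verbatim}
import numpy as np

def performance_profile(costs, alphas):
    """Fraction of problems solved by each solver within a given binary
    logarithm of the normalized cost, for each threshold in alphas."""
    best = np.min(costs, axis=1, keepdims=True)   # least cost per problem
    ratio = np.log2(costs / best)                 # log2 of normalized cost
    return np.array([[np.mean(ratio[:, j] <= a) for a in alphas]
                     for j in range(costs.shape[1])])
\end{verbatim}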
Figure~\ref{fig:noise} shows the performance profiles of the solvers for the noise levels~$\sigma = 0$, $\sigma = 10^{-10}$, and~$\sigma = 10^{-8}$.
Two profiles are included for each noise level, with the convergence tolerance being~$\tau = 10^{-2}$ and~$\tau = 10^{-4}$ respectively.
\begin{figure}[htbp]
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth,page=2]{perf-plain-bfgs_cg_pdfo-50.pdf}
\caption{$\sigma = 0$, $\tau = 10^{-2}$}
\end{subfigure}
\hfill
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth,page=4]{perf-plain-bfgs_cg_pdfo-50.pdf}
\caption{$\sigma = 0$, $\tau = 10^{-4}$}
\end{subfigure}
\hfill
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth,page=2]{perf-noisy-bfgs_cg_pdfo-50-10.pdf}
\caption{$\sigma = 10^{-10}$, $\tau = 10^{-2}$}
\end{subfigure}
\hfill
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth,page=4]{perf-noisy-bfgs_cg_pdfo-50-10.pdf}
\caption{$\sigma = 10^{-10}$, $\tau = 10^{-4}$}
\end{subfigure}
\hfill
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth,page=2]{perf-noisy-bfgs_cg_pdfo-50-8.pdf}
\caption{$\sigma = 10^{-8}$, $\tau = 10^{-2}$}
\end{subfigure}
\hfill
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth,page=4]{perf-noisy-bfgs_cg_pdfo-50-8.pdf}
\caption{$\sigma = 10^{-8}$, $\tau = 10^{-4}$}
\end{subfigure}
\caption{Performance profiles of \gls{cg}, \gls{bfgs}, and \gls{pdfo} on unconstrained problems with the objective functions evaluated by~$\tilde{f}_\sigma$ in~\eqref{eq:noisy-obj}}
\label{fig:noise}
\end{figure}
In the noise-free case ($\sigma = 0$), \gls{pdfo} is more efficient than finite-difference \gls{cg} and \gls{bfgs}, although the distinction is less visible when~$\tau$ is smaller, and \gls{bfgs} can solve slightly more problems than \gls{pdfo}.
When there is noise ($\sigma >0$), the advantage of \gls{pdfo} becomes significant.
The performances of \gls{cg} and \gls{bfgs} deteriorate considerably under noise, even if the noise level is not high and the convergence tolerance is not demanding.
We do not include results with larger values of~$\sigma$, as \gls{cg} and \gls{bfgs} will barely solve any problem, while \gls{pdfo} can still tackle a significant proportion of them up to a reasonable precision.
Their unfavorable performance is not surprising, however, given that \gls{cg} and \gls{bfgs} use the fixed finite-difference parameter~$h \approx 10^{-8}$.
It does not contradict the observations in~\cite{Shi_Etal_2022a,Shi_Etal_2022}, where~$h$ is chosen more carefully according to the noise level and the smoothness of the problems.
Our experiment does not include such fine-tuning but adopts the default settings of \gls{cg} and \gls{bfgs} in SciPy, as well as the default configurations of \gls{pdfo}.
To summarize, the performance of finite-difference \gls{cg} and \gls{bfgs} is encouraging when there is no noise, yet much more care is needed when the problems are noisy.
In contrast, \gls{pdfo} adapts to noise automatically in our experiment, demonstrating good stability under noise without requiring knowledge about the noise level.
This is because Powell's methods (\gls{newuoa} in this experiment) gradually adjust the geometry of the interpolation set during the iterations, making progress until the interpolation points are too close to distinguish noise from true objective function values.
This is not specific to Powell's methods but also applies to other algorithms that sample the objective function on a set of points with adaptively controlled geometry, including finite-difference methods with well-chosen difference parameters~\cite{Shi_Etal_2022a}.
\subsection{Robustness with respect to failures of function evaluations}
\label{subsec:nan}
We now test the robustness of the solvers when function evaluations fail from time to time.
We assume that the objective function returns NaN if the evaluation fails, which occurs randomly with a certain probability.
As mentioned in Section~\ref{subsec:barrier}, \gls{pdfo} uses a moderated extreme barrier to handle such failures.
To verify the effectiveness of this approach, we compare \gls{pdfo} with its variant that does not apply the barrier.
To make the experiment more informative, we also include the finite-difference \gls{cg} and \gls{bfgs} tested before, which have no special handling for evaluation failures.
The solvers are set up in the same way as in the previous experiment, and we still employ the \num{158} unconstrained CUTEst problems used previously.
Let~$p \in [0,1]$ be the failure probability of function evaluations.
For a testing problem with the objective function~$f$, we define
\begin{equation}
\label{eq:nan-obj}
\hat{f}_p(x) = \begin{cases}
\, f(x) & \text{if~$U(x) \ge p$}, \\[0.5ex]
\, \text{NaN} & \text{otherwise},
\end{cases}
\end{equation}
where $U(\iter[])$ follows the uniform distribution on~$[0, 1]$, being independent and identically distributed when~$x$ varies.
Note that~$\hat{f}_0 = f$.
In the experiment, the solvers can evaluate~$f$ only via~$\hat{f}_p$.
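The failing oracle~\eqref{eq:nan-obj} can again be realized by a small wrapper, sketched below for illustration. Roughly speaking, the moderated extreme barrier of \gls{pdfo} then substitutes such invalid values with a large but finite value before they enter the interpolation model.
\begin{verbatim}
import numpy as np

def failing_oracle(fun, p, rng=np.random.default_rng()):
    """Evaluate f, but return NaN with probability p, independently at
    every evaluation, following the failure model of the experiment."""
    def fun_p(x):
        return fun(x) if rng.uniform() >= p else np.nan
    return fun_p
\end{verbatim}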
We plot the performance profiles of the solvers in a way that is similar to the previous experiment.
The profiles are also averaged over ten independent runs.
For each problem, the value~$f_{\ast}$ in the convergence test~\eqref{eq:cvt} is set to the least value of~$f$ obtained by all solvers during these ten runs plus a run with~$p = 0$.
Figure~\ref{fig:nan} shows the performance profiles of the solvers with~$p = 0.01$ and~$p=0.05$.
Two profiles are included for each~$p$, with the convergence tolerance being~$\tau = 10^{-2}$ and~$\tau = 10^{-4}$ respectively.
\begin{figure}[htbp]
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth,page=2]{perf-nan-bfgs_cg_pdfo-50-10-0.01.pdf}
\caption{$p = 0.01$, $\tau = 10^{-2}$}
\end{subfigure}
\hfill
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth,page=4]{perf-nan-bfgs_cg_pdfo-50-10-0.01.pdf}
\caption{$p = 0.01$, $\tau = 10^{-4}$}
\end{subfigure}
\hfill
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth,page=2]{perf-nan-bfgs_cg_pdfo-50-10-0.05.pdf}
\caption{$p = 0.05$, $\tau = 10^{-2}$}
\end{subfigure}
\hfill
\begin{subfigure}{.48\textwidth}
\centering
\includegraphics[width=\textwidth,page=4]{perf-nan-bfgs_cg_pdfo-50-10-0.05.pdf}
\caption{$p = 0.05$, $\tau = 10^{-4}$}
\end{subfigure}
\caption{Performance profiles of \gls{cg}, \gls{bfgs}, \gls{pdfo}, and~\gls{pdfo} without
barrier on unconstrained problems
with the objective functions evaluated by~$\hat{f}_p$ in~\eqref{eq:nan-obj}}
\label{fig:nan}
\end{figure}
The contrast is clear.
Compared with finite-difference \gls{cg} and \gls{bfgs}, \gls{pdfo} is more efficient and solves significantly more problems given the same convergence tolerance.
Moreover, comparing \gls{pdfo} and its no-barrier counterpart, we see that the moderated extreme barrier clearly improves the robustness of \gls{pdfo} with respect to failures of function evaluations, even though it is a rather naive approach.
When~$p = 0.05$, the function evaluation fails roughly once every \num{20} times, but \gls{pdfo} can still solve almost~$60\%$ of the problems up to the convergence tolerance~$\tau = 10^{-4}$ in the sense of~\eqref{eq:cvt}, whereas all its competitors solve fewer than $25\%$.
We speculate that the moderated extreme barrier will also benefit other model-based \gls{dfo} methods, including those based on finite differences.
This deserves further investigation.
\subsection{An illustration of hyperparameter optimization with \glsfmtshort{pdfo}}
\label{subsec:hypertune}
We now consider a hyperparameter optimization problem from machine learning and illustrate the potential of \gls{pdfo} for such problems.
We compare \gls{pdfo} with \gls{rs} and \gls{tpe}, two solvers from the Python package \texttt{hyperopt} for hyperparameter optimization.
Our experiment is inspired by~\cite[\S~5.3]{Ghanbari_Scheinberg_2017}, which investigates the application of trust-region \gls{dfo} methods to hyperparameter optimization.
Similar to~\cite[\S~5.3]{Ghanbari_Scheinberg_2017}, we tune the $C$-SVC model detailed in~\cite[\S~2.1]{Chang_Lin_2011} for binary classification.
Given a dataset~$\{(u_i,z_i)\}_{i=1}^{\scriptscriptstyle{N}}$, with~$u_i \in \mathbb{R}^d$ being a vector of features and~$z_i \in \{-1,1\}$ being the corresponding label, the $C$-SVC constructs a classifier by solving
\begin{equation}
\label{eq:svc}
\min_{\alpha} ~ \frac{1}{2} \alpha^{\scriptscriptstyle{\mathsf{T}}} Q \alpha - \alpha^{\scriptscriptstyle{\mathsf{T}}} \mathbbm{1} \quad \text{s.t.} \quad 0 \leq \alpha \leq C, ~ \alpha^{\scriptscriptstyle{\mathsf{T}}} z = 0, ~ \alpha \in\mathbb{R}^{{\scriptscriptstyle{N}}},
\end{equation}
where~$C\in(0,\infty)$ is a hyperparameter, $z\in\mathbb{R}^{{\scriptscriptstyle{N}}}$ is the vector whose~$i$th entry is~$z_i$,~$\mathbbm{1}\in\mathbb{R}^{\scriptscriptstyle{N}}$ is the vector of all ones, and~$Q\in\mathbb{R}^{{\scriptscriptstyle{N}}\times {\scriptscriptstyle{N}}}$ is the matrix whose~$(i,j)$ entry is~$K(u_i,u_j)$ with a kernel function~$K\mathrel{:} \mathbb{R}^{d}\times\mathbb{R}^{d} \to \mathbb{R}$.
We take the Gaussian kernel~$K(u,v) = \exp(-\gamma\norm{u-v}^2)$, where~$\gamma \in (0,\infty)$ is another hyperparameter.
See~\cite[\S~2.1]{Chang_Lin_2011} for more details.
As suggested by~\cite[\S~9]{Chang_Lin_2011}, we tune~$C$ and~$\gamma$ for the performance of the~$C$-SVC.
We model this process as solving the problem
\begin{equation}
\label{eq:hypertune}
\max ~ P(C, \gamma) \quad \text{s.t.} \quad C > 0, ~ \gamma > 0,
\end{equation}
where~$P(C, \gamma)$ measures the performance corresponding to parameters~$(C, \gamma)$.
In our experiment, we define~$P$ based on the AUC score~\cite[\S~3]{Ghanbari_Scheinberg_2017}, which lies in~$[0,1]$ and measures the quality of a classifier on a dataset, the higher the better.
More precisely, $P(C, \gamma)$ is set to a five-fold cross-validation AUC score as follows.
Split the training dataset~$\mathcal{S}$ into five folds, and train the $C$-SVC five times, each time solving~\eqref{eq:svc} on a union of four distinct folds.
After each training, calculate the AUC score of the resulting classifier on the fold not involved in the training, leading to five scores, the average of which is~$P(C, \gamma)$.
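As an illustration, $P$ can be evaluated with scikit-learn along the following lines; the arrays \texttt{U} and \texttt{z} stand for the features and labels of the training set~$\mathcal{S}$, and the sketch is not the exact script used in our experiment.
\begin{verbatim}
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def cv_auc(log10_c, log10_gamma, U, z):
    """Five-fold cross-validated AUC score of the C-SVC with the Gaussian
    kernel at (C, gamma) = (10**log10_c, 10**log10_gamma)."""
    clf = SVC(C=10.0 ** log10_c, gamma=10.0 ** log10_gamma, kernel='rbf')
    return cross_val_score(clf, U, z, cv=5, scoring='roc_auc').mean()
\end{verbatim}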
Our experiment is based on binary classification problems from \mbox{LIBSVM},\footnote{\url{https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets}\,.} where we adopt three datasets detailed in Table~\ref{tab:htdata}.
LIBSVM divides each dataset~$\mathcal{D}$ into two disjoint subsets, namely a training dataset~$\mathcal{S}$ and a testing dataset~$\mathcal{T}$.
Given~$\mathcal{D}$, we evaluate~$P$ based on~$\mathcal{S}$ by the procedure described above, with~\eqref{eq:svc} being handled by the SVC class of the Python package \mbox{scikit-learn}.\footnote{\url{https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html}\,.}
Then we solve~\eqref{eq:hypertune} by~\gls{pdfo}, \gls{rs}, or \gls{tpe} to obtain the tuned parameters~$(\bar{C}, \bar{\gamma})$.
As in~\cite[\S~5.3]{Ghanbari_Scheinberg_2017}, we modify the constraints of~\eqref{eq:hypertune} to~$C \in [10^{-6}, 1]$ and~$\gamma \in [1, 10^{3}]$.
For better scaling of the problem, we perform the maximization with respect to~$(\log_{10}C,\, \log_{10}\gamma)$ instead of~$(C, \gamma)$, the initial guess being chosen randomly from~$[-6, 0] \times [0, 3]$.
The solver of \gls{pdfo} is \gls{bobyqa}, for which we set the maximal number of function evaluations to \num{100}.
For \gls{rs} and \gls{tpe}, we try both \num{100} and \num{300} for the maximal number of function evaluations, and they do not terminate until this number is reached.
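The tuning step can then be sketched as follows, with \texttt{cv\_auc} as above and \texttt{U}, \texttt{z} the training data; we assume the \texttt{bounds} argument of \texttt{pdfo} and the \texttt{fmin} interface of \texttt{hyperopt}, and the sign of the objective is flipped because the solvers minimize while~\eqref{eq:hypertune} is a maximization.
\begin{verbatim}
import numpy as np
from pdfo import pdfo
from hyperopt import fmin, hp, rand, tpe

def objective(v):                       # v = (log10 C, log10 gamma)
    return -cv_auc(v[0], v[1], U, z)    # negate: the solvers minimize

x0 = np.array([np.random.uniform(-6, 0), np.random.uniform(0, 3)])
res = pdfo(objective, x0, method='bobyqa',
           bounds=[(-6, 0), (0, 3)], options={'maxfev': 100})

space = [hp.uniform('log10_c', -6, 0), hp.uniform('log10_gamma', 0, 3)]
best_rs = fmin(objective, space, algo=rand.suggest, max_evals=100)
best_tpe = fmin(objective, space, algo=tpe.suggest, max_evals=100)
\end{verbatim}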
\begin{table}[ht]
\caption{Datasets from LIBSVM}
\label{tab:htdata}
\centering
\begin{tabular}{cS[table-format=2]S[table-format=5]S[table-format=5]}
\toprule
Dataset~$\mathcal{D}$ & {Number of features} & {Size of~$\mathcal{S}$} & {Size of~$\mathcal{T}$}\\
\midrule
splice & 60 & 1000 & 2175\\
svmguide1 & 4 & 3088 & 4000\\
ijcnn1 & 22 & 49990 & 91701\\
\bottomrule
\end{tabular}
\end{table}
To assess the quality of the tuned parameters~$(\bar{C}, \bar{\gamma})$, we solve~\eqref{eq:svc} on~$\mathcal{S}$ with~$(C, \gamma)=(\bar{C}, \bar{\gamma})$, and calculate both the AUC score and accuracy of the resulting classifier on~$\mathcal{T}$, the latter being the fraction of correctly classified data points.
Note that~$\mathcal{T}$ is not involved in the tuning process.
Tables~\ref{tab:splice}--\ref{tab:ijcnn1} present the results for this experiment, where \#$P$ denotes the number of evaluations of the function~$P$ and ``Time'' represents the computing time for obtaining~$(\bar{C}, \bar{\gamma})$.
\begin{table}[!ht]
\caption{Hyperparameter tuning on the dataset ``splice''}
\label{tab:splice}
\centering
\begin{tabular}{cSSS[table-format=3]S}
\toprule
Solver & {AUC Score ($10^{-1}$)} & {Accuracy ($10^{-1}$)} & {\#$P$} & {Time (\si{\second})}\\
\midrule
\gls{rs} & 5.00 & 5.20 & 100 & 8.23\\
\gls{rs} & 5.00 & 5.20 & 300 & 24.73\\
\gls{tpe} & 5.00 & 5.20 & 100 & 8.22\\
\gls{tpe} & 5.00 & 5.20 & 300 & 23.57\\
\gls{pdfo} & 9.27 & 7.37 & 33 & 5.11\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[!ht]
\caption{Hyperparameter tuning on the dataset ``svmguide1''}
\centering
\begin{tabular}{cSSS[table-format=3]S}
\toprule
Solver & {AUC Score ($10^{-1}$)} & {Accuracy ($10^{-1}$)} & {\#$P$} & {Time (\si{\second})}\\
\midrule
\gls{rs} & 9.94 & 9.61 & 100 & 10.22\\
\gls{rs} & 9.95 & 9.68 & 300 & 30.75\\
\gls{tpe} & 9.95 & 9.65 & 100 & 8.65\\
\gls{tpe} & 9.95 & 9.68 & 300 & 22.38\\
\gls{pdfo} & 9.95 & 9.66 & 51 & 2.83\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[!ht]
\caption{Hyperparameter tuning on the dataset ``ijcnn1''}
\label{tab:ijcnn1}
\centering
\begin{tabular}{cSSS[table-format=3]S}
\toprule
Solver & {AUC Score ($10^{-1}$)} & {Accuracy ($10^{-1}$)} & {\#$P$} & {Time (\SI{}[\bf 10^3]{\bf\second})}\\
\midrule
\gls{rs} & 9.97 & 9.82 & 100 & 3.99\\
\gls{rs} & 9.97 & 9.77 & 300 & 11.52\\
\gls{tpe} & 9.98 & 9.79 & 100 & 3.42\\
\gls{tpe} & 9.98 & 9.79 & 300 & 8.81\\
\gls{pdfo} & 9.97 & 9.80 & 44 & 2.48\\
\bottomrule
\end{tabular}
\end{table}
In terms of the AUC score and accuracy, \gls{pdfo} achieves a clearly better result than \gls{rs} and \gls{tpe} on the ``splice'' dataset, and they all attain comparable results on the other datasets.
However, \gls{pdfo} always uses far fewer function evaluations, and hence much less computing time.
The difference in the computing time is particularly visible on the dataset ``ijcnn1'' in Table~\ref{tab:ijcnn1}, as each evaluation of~$P$ takes a long time due to the large size of the dataset.
Note that our intention is not to suggest that \gls{pdfo} outperforms standard solvers for general hyperparameter optimization problems, which is unlikely to be the case.
Indeed, \gls{pdfo} is not applicable unless the hyperparameters are continuous variables.
Our objective is rather to provide an example that shows the possibility of applying Powell's methods to hyperparameter optimization, which has not been well studied so far.
In doing so, we also hope to call for more investigation of \gls{dfo} methods for machine learning problems in general, as is suggested in~\cite{Ghanbari_Scheinberg_2017}.
\section{Concluding remarks}
\label{sec:conclude}
We have presented the \gls{pdfo} package, which aims at simplifying the use of Powell's \gls{dfo} solvers by providing user-friendly interfaces.
More information about the package can be found on its homepage at \mbox{\url{https://www.pdfo.net}}\,, including the detailed syntax of the interfaces, extensive documentation of the options, and several examples illustrating the usage.
In addition, we have provided an overview of Powell's methods behind \gls{pdfo}.
The overview does not intend to repeat Powell's description of the methods, but rather to provide a summary of the main features and structures of the methods, highlighting the intrinsic connections and similarities among them.
We hope that the overview will ease the understanding of Powell's methods, in the same way as the \gls{pdfo} package eases the use of these methods.
Besides Powell's solvers, \gls{pdfo} also provides a unified interface for \gls{dfo} solvers.
Such an interface can facilitate the development and comparison of different \gls{dfo} solvers.
The interface can readily accommodate solvers other than those by Powell, for example, the \gls{cobyqa} (\glsxtrlong{cobyqa}) solver for general nonlinearly
constrained \gls{dfo} problems (see~\cite[Chapters~5--7]{Ragonneau_2022} and~\cite{Ragonneau_Zhang_cobyqa}).
Finally, we stress that \gls{pdfo} does not implement Powell's \gls{dfo} solvers in Python or MATLAB, but only interfaces Powell's implementation with such languages.
The implementation of these solvers in Python, \mbox{MATLAB}, and other languages is a project in progress under the name of \gls{prima} (\glsxtrlong{prima})~\cite{Zhang_prima}.
\paragraph{\textnormal{\textbf{Funding.}}}
This work was funded by the University Grants Committee of Hong Kong under the
projects PF18-24698 (Hong Kong Ph.D. Fellowship Scheme), PolyU 253012/17P, PolyU 153054/20P, and PolyU 153066/21P.
It was also supported by The Hong Kong Polytechnic University under project P0009767.
\paragraph{\textnormal{\textbf{Data availability.}}}
The source code of the \gls{pdfo} package is available at \url{https://www.pdfo.net}\,.
The source code of the numerical experiments is available at \url{https://www.github.com/pdfo/paper/blob/main/experiments}\,.
\bibliographystyle{plain}
\section{Introduction}
Secret key agreement is a fundamental problem in cryptography:
Alice wants to share a random string called {\em key} with Bob, such that a third party Eve who has access to the communication channel between them, has no information about the key.
Information theoretic secret key agreement (SKA) was first proposed by Maurer~\cite{Maurer1993} and Ahlswede~\cite{Ahlswede1993}.
In their model that is referred to as {\em the source model},
Alice, Bob and Eve have private samples of random
variables (RVs) $ X, Y$ and $Z$ with a joint probability distribution $P_{XYZ}$ that is known to all parties.
The goal of Alice and Bob is to obtain a common secret key
by exchanging messages over a public and error
free channel.
An important quality parameter of an SKA protocol is the length $\ell$ of the
established secret key.
In the setting that Alice, Bob and Eve's variables $\mathbf{X},\mathbf{Y}$ and $\mathds{Z}$ are $n$
independent realizations of $X,Y$ and $Z$ distributed according to $P_{XYZ}$,
the {\em secret-key rate } is defined as the maximal rate at which Alice and Bob generate a highly secret key where the rate is (informally) given by $\ell/n$.
We consider {\em one-way secret key agreement} (OW-SKA)
where Alice sends a single message over the public channel to Bob, to arrive at a shared key.
The OW-SKA problem is important in practice because it
avoids interaction between Alice and Bob,
as well as being theoretically interesting because of its relation to circuit
polarization and immunization of public-key encryption in complexity theory and cryptography \cite{Holenstein2006}.
{\em Adversaries.}
SKAs were first studied with the assumption that Eve is passive and only observes the communication channel.
Maurer \cite{Maurer1997authencation} considered
a more powerful adversary who can eavesdrop and tamper with the communication.
Against such adversaries, the protocol is required to establish a secret key when the adversary is passive; when the adversary is active, with probability $1-\delta$
Alice {\em or} Bob must detect the tampering, or a key that is unknown to Eve must be established. It was proved in \cite{Maurer1997authencation} that SKA with security against active adversaries exists only when certain {\em simulatability conditions} are satisfied (see Section \ref{ap:related}).
{\em In the following we use SKA$_p$~ and SKA$_a$~ to denote SKA with security against passive and active adversaries, respectively.}
{\em Constructions.} There are a number of constructions for OW-SKA$_p$~
that (asymptotically) achieve the highest possible secret-key rate for a given distribution $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$
\cite{holenstein2005one,
renes2013efficient,Chou2015a,sharif2020}.
It was proved (Theorem 9, \cite{Maurer1997authencation}) that the secret-key rate of SKA$_a$~
is the same as the secret-key rate of SKA$_p$~
if secure key agreement is possible (simulatability conditions hold).
The construction of protocols with security against active adversaries, however, is less studied.
%
It was also shown \cite{Maurer1997authencation}, through a construction, that it is possible to provide
message authentication when Alice, Bob and Eve have private samples of $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$. In \cite{MaurerW03b} a concrete construction of a MAC was given, when the key $X$ is partially leaked to Eve through $Z$ for a known $P_{\mathbf{X}\mathds{Z}}$.
These MACs can be used to provide protection against active adversaries in SKA$_a$~.
\vspace{-0.5em}
\subsection*{ Our Work}
We propose an efficient OW-SKA$_a$~
for the setting of
$P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$. The construction is based on a previous construction of OW-SKA$_p$~
\cite{sharif2020} that achieves the highest secret-key rate for a given distribution,
and employs two hash functions, $h$ and $h'$ that are used for reconciliation and key extraction, respectively. Security proof of the protocol determines parameters of the hash functions in terms of distribution $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$, number of samples, key length and security and reliability parameters of the system.
We modify the protocol in two steps.\\
1) We modify the inputs to $h$ (and hence the message sent from Alice to Bob). Theorem~\ref{Thm:ikemotsecurity} recalculates the parameters of the two hash functions to achieve security against a passive adversary, and gives the resulting key length. This will also be the key length of our final construction when the protocol succeeds in establishing a key against an active adversary. \\
2) Noting that one of the inputs to $h$ is Alice's private sample $\mathbf{x}$, one can see the hash function as effectively a keyed hash function. This MAC however is different from traditional MAC systems that use a shared secret key. In Section \ref{owskamac} we define information theoretically secure MAC in correlated randomness setting
of $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$, with security against impersonation and substitution attacks, and
design a MAC with provable security for the case when $\mathbf{X}=\mathbf{Y}$ and the shared key is partially leaked through $\mathds{Z}$.
Using this MAC, which is a keyed hash function, for the hash function $h$ in our protocol, gives us a secure SKA$_a$~. In Theorem~\ref{mac2:ctxt} we prove robustness of the protocol against active adversaries.
To our knowledge, the formal definition of a MAC in the correlated randomness setting of $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$ is new, and our construction is a new efficient MAC with proven concrete security for the special case of $\mathbf{X}=\mathbf{Y}$, and so is of independent interest.
Other known constructions of MACs for the general setting of $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$ are due to Maurer \cite{Maurer1997authencation}, which uses the theory of {\em typical sequences} to prove asymptotic security (the proof is not publicly available), and, for the case of $\mathbf{X}=\mathbf{Y}$, due to Maurer et al. \cite{MaurerW03b}. In Section \ref{comparison}, we compare our MAC with the MAC of \cite{MaurerW03b}.
\subsection{Related work}
\label{ap:related}
Key establishment in the information-theoretic setting has been studied in quantum key distribution (QKD) protocols, where the correlated random variables of Alice and Bob are obtained using quantum communication \cite{BB84}, as well as in the
{\em fuzzy extractor setting} \cite{eurocryptDodisRS04}
where $\mathbf{X} $ and $\mathbf{Y} $ are samples of the same random source with a bound on the {\em distance} between the two. Commonly used distance functions are Hamming distance and set difference.
Key establishment
with security against active adversaries was studied in \cite{maurer2003authen1,maurer2003authen2,Renner2004exact,kanukurthi2009key}. The feasibility of information-theoretic SKA$_a$~ was formulated by Maurer through the {\em simulatability} property, defined as follows: $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$ is $\mathbf{X}$-simulatable by Eve if Eve can pass $\mathds{Z}$ through a simulated channel $P_{\hat{\mathbf{X}}|\mathds{Z}}$ whose output $\hat{\mathbf{X}}$ has the same joint distribution with $\mathbf{Y}$ as $\mathbf{X}$ does.
One can similarly define $\mathbf{Y}$-simulatability. SKA cannot be constructed if $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$ is $\mathbf{X}$-simulatable or $\mathbf{Y}$-simulatable.
\vspace{1mm}
\noindent
{\bf Organization.}
Section~\ref{sec:pre} gives preliminaries.
Section~\ref{sec:ikem-inst} presents the construction of a secure OW-SKA$_p$~.
Section \ref{robustness} gives the construction of the MAC and its security analysis.
Section~\ref{robustnessanalysis} uses the MAC to construct a secure SKA$_a$~.
Section~\ref{conclusion} concludes the paper.
\section{Preliminaries}
\label{sec:pre}
\noindent
We use capital letters (e.g., $X$) to
denote
random variables (RVs), and lower-case letters (e.g., $x$) for their instantiations. Sets are denoted by calligraphic letters, (e.g. $\mathcal{X}$), and the size of $\mathcal{X}$
is denoted by $|\mathcal{X}|$.
We denote vectors using boldface letters; for example $\mathbf{X}=(X_1,\cdots,X_n)$ is a vector of $n$ RVs, and its realization is given by $\mathbf{x}=(x_1,\cdots,x_n)$. $U_\ell$ denotes an RV with uniform distribution over $\{0,1\}^\ell, \ell \in \mathbb{N}$. If $X$ is a discrete RV, we denote its probability mass function (p.m.f)
by $\mathrm{P}_X(x)=\mathsf{Pr}(X=x)$. The conditional p.m.f. of an RV $X$ given RV $Y$ is denoted as $\mathrm{P}_{X|Y}(x|y)=\mathsf{Pr}(X=x|Y=y)$. For two RVs $X$ and $Y$ defined over the same domain $\mathcal{L}$, the statistical distance between $X$ and $Y$ is given by ${\rm \Delta}(X,Y)=\frac{1}{2} \sum_{v\in {\cal L}} |\Pr[X=v]-\Pr[Y=v]|$. \emph{Shannon entropy} of an RV $X$ is denoted by $H(X)=-\sum_x\mathsf{P}_X(x)\log(\mathsf{P}_X(x))$.
The \emph{min-entropy} of a random variable $X$ with p.m.f. $\mathrm{P}_X$
is defined as
$H_{\infty}(X)= -\log (\max_{x} (\mathrm{P}_X({x})))$.
The \emph{average conditional min-entropy} \cite{dodis2004fuzzy} of an RV $X$ given RV $Y$ is
$\tilde{H}_{\infty}(X|Y)= -\log (\mathbb{E}_{{y} \leftarrow Y}\max_{x}\mathrm{Pr}({X=x}|{Y=y})).
$
We write $[x]_{i\cdots j}$ to denote the block from the $i$th bit to the $j$th bit of $x$. We apply a universal hash function, with a random seed, to the output of a weakly random entropy source to generate an output that is close to uniformly distributed,
as shown by the Leftover Hash Lemma~\cite{impagliazzo1989pseudo}.
\begin{definition} [Universal hash family]\label{defn:uhf}
A family of hash functions $h:\mathcal{X} \times \mathcal{S} \to \mathcal{Y}$ is called a universal hash family if $\forall x_1,x_2 \in \mathcal{X}$, $x_1 \ne x_2 :$ $\pr[h(x_1,s)=h(x_2,s)] \le \frac{1}{|\mathcal{Y}|}$, where the probability is over the uniform choices of $s$ from $\mathcal{S}$.
\end{definition}
\vspace{-.5em}
\begin{definition}[Strong universal hash family]\label{defn:suhash}
A family of hash functions $h: \mathcal{X} \times \mathcal{S} \rightarrow \mathcal{Y}$ is called a strong universal hash family if $\forall x_1,x_2 \in \mathcal{X}$, $x_1 \ne x_2$, and for any $c,d \in \mathcal{Y} :$ $\pr[h(x_1,s)=c \wedge h(x_2,s)=d]=\frac{1}{|\mathcal{Y}|^2}$, where the probability is over the uniform choices of $s$ from $\mathcal{S}$.
\end{definition}
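As a concrete instance of Definitions~\ref{defn:uhf} and~\ref{defn:suhash}, consider the affine family $h(x,(a,b)) = ax + b \bmod p$ over $\mathcal{X} = \mathcal{Y} = \mathbb{Z}_p$ with seed $s=(a,b)$ uniform in $\mathbb{Z}_p^2$: for $x_1 \ne x_2$ and any $c, d$, the system $ax_1 + b = c$, $ax_2 + b = d$ has exactly one solution $(a,b)$ modulo $p$, so the family is strong universal, and hence also universal. A minimal sketch follows, where the prime modulus is only illustrative.
\begin{verbatim}
import secrets

P = (1 << 61) - 1  # an illustrative Mersenne prime modulus

def sample_seed():
    """Draw a uniform seed s = (a, b) from Z_p x Z_p."""
    return secrets.randbelow(P), secrets.randbelow(P)

def affine_hash(x, seed):
    """Strong universal family over Z_p: h(x, (a, b)) = (a*x + b) mod p."""
    a, b = seed
    return (a * x + b) % P
\end{verbatim}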
We
use a variant of Leftover Hash Lemma, called Generalized Leftover Hash Lemma [\cite{DodisORS08}, Lemma 2.4]
that includes side information about the hash input, to prove security properties of our construction.
\begin{lemma}[Generalized Leftover Hash Lemma~\cite{DodisORS08}]
\label{glhl}
Assume a universal hash family $h: \mathcal{X} \times \mathcal{S} \rightarrow \{0,1\}^{\ell}$. Then for any two random variables $A$ and $B$, defined over $\mathcal{X}$ and $\mathcal{Y}$ respectively, applying $h$ on $A$ can extract a uniform random variable of length $\ell$ satisfying: \\ $\Delta(h(A, S), S, B; U_\ell, S, B)\le \frac{1}{2}\sqrt{2^{-\tilde{H}_{\infty}(A|B)}\cdot 2^
\ell}$, where $S$ is chosen randomly from $\mathcal{S}$.
\end{lemma}
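As an illustration of how Lemma~\ref{glhl} fixes the extractable key length, requiring the distance above to be at most a target $\sigma$ is equivalent to
\begin{equation*}
\frac{1}{2}\sqrt{2^{\ell - \tilde{H}_{\infty}(A|B)}} \le \sigma
\quad\Longleftrightarrow\quad
\ell \le \tilde{H}_{\infty}(A|B) - 2\log\frac{1}{\sigma} + 2,
\end{equation*}
with logarithms in base~2; for example, with $\tilde{H}_{\infty}(A|B) = 128$ and $\sigma = 2^{-32}$, up to $\ell = 66$ nearly uniform bits can be extracted.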
We now recall the definition of almost strong universal hash family~\cite{Stinson94}.
\vspace{-.5em}
\begin{definition}[Almost strong universal hash family~\cite{Stinson94}]\label{defn:suhash1}
A family of hash functions $h: \mathcal{X} \times \mathcal{S} \rightarrow \mathcal{Y}$ is called $\epsilon$-almost strong universal hash family if $\forall x_1,x_2 \in \mathcal{X}$, $x_1 \ne x_2$, and for any $c,d \in \mathcal{Y}$, it holds that : (a) $\pr[h(x_1,s)=c]=\frac{1}{|\mathcal{Y}|}$ and (b) $\pr[h(x_1,s)=c \wedge h(x_2,s)=d] \leq \frac{\epsilon}{|\mathcal{Y}|}$, where the probability is over the uniform choices of $s$ from $\mathcal{S}$.
\end{definition}
The notion of {\em fuzzy min-entropy} was introduced in \cite{Fuller2020fuzzy} to estimate the probability of guessing a value within distance $t$ of a sample value $x$ of a distribution $P_{\mathbf{X}}$. In~\cite{Fuller2020fuzzy}, Fuller et al. used fuzzy min-entropy to compute the length of the extracted key in the presence of passive adversaries. We consider active adversaries who try to guess a point around a secret key, and we use fuzzy min-entropy to compute the probability that an active adversary correctly guesses that point. We define the fuzzy min-entropy of a sample value $\mathbf{x}$ corresponding to a joint distribution $P_{\mathbf{X}\mathbf{Y}}$. The adversary tries to guess $\mathbf{x}_1$ such that the inequality $-\log(P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x}_1 |\mathbf{y} )) \le \nu$ holds, where $\mathbf{y}$ is the secret key of Bob and $\nu$ is some predetermined value. That is, the adversary tries to guess $\mathbf{x}_1$ such that $P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x}_1 |\mathbf{y} ) \ge 2^{-\nu}$. To maximize the chance that the inequality $P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x}_1 |\mathbf{y} ) \ge 2^{-\nu}$ holds, the adversary would choose the point $\mathbf{x}_1$ that maximizes the total probability mass of $\mathbf{Y}$ within the set $\{\mathbf{y} : P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x}_1 |\mathbf{y} ) \ge 2^{-\nu}\}$.
The {\it $\nu$-fuzzy min-entropy}~\cite{Fuller2020fuzzy} of an RV $\mathbf{X}$ with joint distribution $P_{\mathbf{X} \mathbf{Y}}$ is defined as \\
{\small
$H_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X})=-\log\big(\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}\pr[\mathbf{Y}= \mathbf{y}]\big).$
}
For a joint distribution $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$, the {\it $\nu$-conditional fuzzy min-entropy} of $\mathbf{X}$ given $\mathds{Z}$ is defined as
{\scriptsize
\begin{align}\nonumber
\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X}|\mathds{Z})=-\log\Big(\underset{\mathbf{z} \leftarrow \mathds{Z}}{\mathbb{E}}\max_{\mathbf{x}}\sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}\pr[\mathbf{Y}=\mathbf{y}|\mathds{Z}=\mathbf{z}]\Big).
\end{align}
}
The following lemma gives both lower and upper bounds of $H_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X})$.
\begin{lemma}\label{fuzzyent}
Let the joint distribution of two RVs $\mathbf{X}$ and $\mathbf{Y}$ be denoted as $P_{\mathbf{X}\mathbf{Y}}$. Then the following properties hold.
(i) $H_\infty(\mathbf{X}) - \nu \le H_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X}).$
(ii) Let $\max_{\mathbf{y}}P(\mathbf{y})=P(\mathbf{y}_{\max})$ for some point $\mathbf{y}_{\max}$ in the domain of $\mathbf{Y}$, and let there exist a point $\mathbf{x}_{\mathbf{y}_{\max}}$ in the domain of $\mathbf{X}$ such that $-\log(P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}_{\mathbf{y}_{\max}}|\mathbf{y}_{\max})) \le \nu$, then $H_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X}) \le H_\infty(\mathbf{Y})$.
\end{lemma}
\begin{proof}
(i) Note that,
{\small
\begin{align} \nonumber
&-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu \implies \frac{P(\mathbf{x},\mathbf{y})}{P(\mathbf{y})} \ge 2^{-\nu} \\\label{eqn:fuzzyminent}
&\implies P(\mathbf{y}) \le 2^{\nu}P(\mathbf{x},\mathbf{y}).
\end{align}
}
Let $\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}P(\mathbf{x},\mathbf{y})$ occur at a point $\mathbf{x}=\mathbf{x}_{\mathsf{max}}$, then
{\small
\begin{align}\nonumber
&\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y}) \ge 2^{-\nu}}P(\mathbf{x},\mathbf{y}) \\\label{eqn:fuzzyminentropy2}
&= \sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}_{\mathsf{max}}|\mathbf{y}) \ge 2^{-\nu}}P(\mathbf{x}_{\mathsf{max}},\mathbf{y})
\end{align}
}
Now,
{
\small
\begin{align} \nonumber
&H_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X}) \\\nonumber
&=-\log\big(\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}\pr[\mathbf{Y}= \mathbf{y}]\big) \\\nonumber
&=\log\big(\frac{1}{\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}\pr[\mathbf{Y}= \mathbf{y}]}\big) \\\nonumber
&\ge \log\big(\frac{1}{\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}2^{\nu}P(\mathbf{x},\mathbf{y})}\big) \\\nonumber
&\qquad\qquad\text{ (by equation.~\ref{eqn:fuzzyminent})} \\\nonumber
&=\log\big(\frac{1}{2^{\nu}\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}P(\mathbf{x},\mathbf{y})}\big) \\\nonumber
&=\log\big(\frac{1}{\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}P(\mathbf{x},\mathbf{y})}\big) - \nu \\\nonumber
&=\log\big(\frac{1}{\sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}_{\mathsf{max}}|\mathbf{y}) \ge 2^{-\nu}}P(\mathbf{x}_{\mathsf{max}},\mathbf{y})}\big) - \nu \text{ (by equation.~\ref{eqn:fuzzyminentropy2})} \\\nonumber
&\ge \log\big(\frac{1}{P(\mathbf{x}_{\mathsf{max}})}\big) - \nu \\\nonumber
& \ge \log\big(\frac{1}{\mathsf{max}_{\mathbf{x}}P(\mathbf{x})}\big) -\nu \\\label{eqn:fuzzyenteqn1}
&= H_{\infty}(\mathbf{X}) -\nu
\end{align}
}
(ii) Let $\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}\pr[\mathbf{Y}= \mathbf{y}]$ occur at a point $\mathbf{x}=\hat{\mathbf{x}}_{\max}$, then
{\small
\begin{align} \nonumber
&\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y}) \ge 2^{-\nu}}\pr[\mathbf{Y}= \mathbf{y}] \\\nonumber
&=\sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\hat{\mathbf{x}}_{\max}|\mathbf{y}) \ge 2^{-\nu}}\pr[\mathbf{Y}= \mathbf{y}] \\\label{eqn:fuzzent2}
&\ge \sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}_{\mathbf{y}_{\max}}|\mathbf{y}) \ge 2^{-\nu}}\pr[\mathbf{Y}= \mathbf{y}]
\end{align}
}
{\small
\begin{align} \nonumber
&H_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X}) \\\nonumber
&=\log\big(\frac{1}{\mathsf{max}_{\mathbf{x}} \sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}\pr[\mathbf{Y}= \mathbf{y}]}\big) \\\nonumber
& \le \log\big(\frac{1}{\sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}_{\mathbf{y}_{\max}}|\mathbf{y}) \ge 2^{-\nu}}\pr[\mathbf{Y}= \mathbf{y}]}\big) \text{ (by equation~\ref{eqn:fuzzent2})} \\\nonumber
& \le \log\big(\frac{1}{P(\mathbf{y}_{\max})}\big) \\\nonumber
&= \log\big(\frac{1}{\max_{\mathbf{y}}P(\mathbf{y})}\big) \\\label{eqn:fuzzyenty3}
&=H_{\infty}(\mathbf{Y})
\end{align}
}
\end{proof}
As a corollary, we obtain the following lemma.
\begin{lemma}\label{fuzzyentcor}
Let the joint distribution of three RVs $\mathbf{X}$, $\mathbf{Y}$ and $\mathds{Z}$ be denoted as $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$. Then the following properties hold.
(i) $\tilde{H}_\infty(\mathbf{X}|\mathds{Z}) - \nu \le \tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X}|\mathds{Z}).$
(ii) Let $\max_{\mathbf{y}}P(\mathbf{y})=P(\mathbf{y}_{\max})$ for some point $\mathbf{y}_{\max}$ in the domain of $\mathbf{Y}$, and let there exist a point $\mathbf{x}_{\mathbf{y}_{\max}}$ in the domain of $\mathbf{X}$ such that $-\log(P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}_{\mathbf{y}_{\max}}|\mathbf{y}_{\max})) \le \nu$, then $\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X}|\mathds{Z}) \le \tilde{H}_\infty(\mathbf{Y}|\mathds{Z})$.
\end{lemma}
\subsection{One-way secret key agreement (OW-SKA)}\label{owskamac}
\vspace{-.2em}
One natural setting is that the
probabilistic experiment that underlies
$(X,Y,Z)$ is repeated $n$ times independently, and Alice, Bob and Eve privately receive realizations of the RVs $\mathbf{X} = (X_1,\cdots,X_n)$, $\mathbf{Y} = (Y_1,\cdots,Y_n)$ and $\mathds{Z} =(Z_1,\cdots,Z_n)$ respectively, where \\
$P_{\mathbf{X} \mathbf{Y} \mathds{Z} }(\mathbf{x} , \mathbf{y} , \mathbf{z} )=P_{\mathbf{X} \mathbf{Y} \mathds{Z}}(x_1,\cdots,x_n,y_1,\cdots,y_n,z_1,\cdots,z_n)\\=\prod_{i=1}^n P_{XYZ}(x_i,y_i,z_i)$. This setting is considered in
Maurer's satellite scenario where a randomly generated string is received by Alice, Bob and Eve over independent noisy channels ~\cite{Maurer1993,Maurer1997authencation}.
We note that for all $i=1,\cdots,n$, the RVs $(X_i, Y_i, Z_i)$
have the underlying distribution $P_{X Y Z}$ and $P_{X_i Y_i Z_i} (x,y,z)=P_{X Y Z}(x,y,z)$, $\forall (x,y,z) \in {\cal X}\times{\cal Y} \times {\cal Z}$.
\vspace{-.5em}
\begin{definition}[Secure OW-SKA protocol~\cite{holenstein2005one}]\label{owska}
Let $X$ and $Y$ be two RVs over $\mathcal{X}$ and $\mathcal{Y}$ respectively. For shared key length $\ell$, a OW-SKA protocol on $\mathcal{X} \times \mathcal{Y}$ consists of two function families: a (probabilistic) function family \{$\tau_{\mathrm{Alice}}:\mathcal{X}^n \to \{0,1\}^\ell \times \mathcal{C}$\} that outputs $k_A \in \{0,1\}^\ell$ and $c$; and a function family $\{\tau_{\mathrm{Bob}}: \mathcal{Y}^n \times \mathcal{C} \to \{0,1\}^\ell\}$ that outputs $k_B \in \{0,1\}^\ell$. A OW-SKA protocol on $\mathcal{X} \times \mathcal{Y}$ is secure for a probability distribution $P_{XYZ}$ over $\mathcal{X} \times \mathcal{Y} \times \mathcal{Z}$ if for $\ell \in \mathbb{N}$, the OW-SKA protocol establishes an $(\epsilon,\sigma)$-secret key
$k$ satisfying the following properties:
(i) (reliability)\quad $\pr(K_A=K_B=K) \ge 1 - \epsilon$
(ii) (security) \quad $\Delta(K,C,\mathds{Z};U_\ell,C,\mathds{Z}) \le \sigma$,
where $K_A$, $K_B$, $K$, $C$ are RVs corresponding to $k_A$, $k_B$, $k$ and $c$ respectively, and the RV $\mathds{Z}$ is the vector of $n$ instances of the RV $Z$ and is Eve's side information.
The RV $U_\ell$ is uniformly distributed over $\{0,1\}^\ell$.
\end{definition}
\vspace{-1em}
\begin{definition}(Secure information-theoretic one-time MAC in correlated randomness setting) \label{defn:mac}
Let $P_{\mathbf{X} \mathbf{Y} \mathds{Z}} $ be a public distribution.
An $(|\mathcal{S}|,P_{\mathbf{X}\mathbf{Y}\mathds{Z}},|\mathcal{T'}|,\delta_{mac})$-information-theoretic one-time message authentication code is a triple of algorithms $(\mathsf{gen},\mathsf{mac},\mathsf{ver})$ where
$\mathsf{gen}: P_{\mathbf{X} \mathbf{Y} \mathds{Z}} \to (\mathbf{x}\mathbf{y}\mathbf{z})$ samples $ P_{\mathbf{X} \mathbf{Y} \mathds{Z}}$ and privately gives $\mathbf{x}$ and $\mathbf{y}$ to Alice and Bob, and leaks $\mathbf{z}$ to Eve,
$\mathsf{mac}: \mathcal{X}^n \times \mathcal{S} \to \mathcal{T'}$ is the tag generation algorithm that maps an input message from the message set $\mathcal{S}$
to a tag in the tag set $\mathcal{T'}$,
using the private input of Alice, $\mathbf{x} $, and
$\mathsf{ver}: \mathcal{Y}^n \times \mathcal{S} \times \mathcal{T'} \to \{acc, rej\}$ takes a message and tag pair, and outputs either accept (acc) or reject (rej) using Bob's private input $\mathbf{y} $.
The MAC satisfies correctness and unforgeability properties defined as follows.\\
{\em Correctness }: For any choice of $s \in \mathcal{S}$, we have,
\vspace{-.3em}
\begin{equation}
\footnotesize
\pr[ (\mathbf{x}\mathbf{y}\mathbf{z}) \leftarrow \mathsf{gen}(P_{\mathbf{X}\mathbf{Y}\mathds{Z}}),\mathsf{ver}(\mathbf{y} ,s,\mathsf{mac}(\mathbf{x} ,s))=acc \text{ $|$ }\mathds{Z}=\mathbf{z}] =1
\vspace{-.3em}
\end{equation}
{\em $\delta_{ot}$-Unforgeability (one-time unforgeability):}
For any $(\mathbf{x}\mathbf{y}\mathbf{z}) \leftarrow \mathsf{gen}(P_{\mathbf{X}\mathbf{Y}\mathds{Z}})$, we consider protection against two types of attacks, \\
(i) $\delta_{imp}$-impersonation: for any message and tag pair $s' \in \mathcal{S}$ and $t' \in \mathcal{T'}$ chosen by the adversary, the following holds.
\vspace{-.4em}
{\small
\begin{equation}
\pr[ \mathsf{ver}(\mathbf{y},s',t')=acc \text{ $|$ } \mathds{Z}=\mathbf{z}] \le \delta_{imp}, \\
\vspace{-.3em}
\end{equation}
}
(ii) $\delta_{sub}$-substitution: for any observed message and tag pair $(s,t)$, for any adversary choice of $s' \ne s \in \mathcal{S}$ and $t', t \in \mathcal{T'}$, the following holds.
\vspace{-.4em}
{\small
\begin{equation}
\pr[ \mathsf{ver}(\mathbf{y},s',t')=acc\text{ $|$ }
s, \mathsf{mac}(\mathbf{x} ,s)=t, \mathds{Z} =\mathbf{z}] \le \delta_{sub}, \\
\vspace{-.3em}
\end{equation}
}
The MAC is called $\delta_{ot}$-one-time unforgeable with $\delta_{ot}=\mathsf{max}\{\delta_{imp},\delta_{sub}\}$, where the probability is taken over the randomness of $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$.
\end{definition}
\vspace{-.5em}
A special case of the above definition is when $\mathbf{X}=\mathbf{Y}$ and $P_{\mathbf{X}\mathbf{Y}\mathds{Z}} = P_{\mathbf{X}\mathds{Z}}$, where $\mathbf{X}$ is the shared key of Alice and Bob that is partially leaked through $\mathds{Z}$ to Eve.
Maurer's construction in \cite{Maurer1997authencation} is an example of the general case one-time MAC, while the construction in \cite{MaurerW03b} is an example of the latter case $\mathbf{X}=\mathbf{Y}$ with partially leaked $\mathbf{X}$.
In the following we define robustness of a secure SKA$_a$~.
We follow the definition in \cite{Wolf98}.
\vspace{-.5em}
\begin{definition}[Robustness of secure OW-SKA protocol]
\label{robustowska}
A
OW-SKA protocol $(\tau_{\mathrm{Alice}}, \tau_{\mathrm{Bob}})$ is called $(\epsilon,\sigma)$ OW-SKA$_a$~ with robustness $\delta$
if for any strategy of an active attacker with access to $Z$ and communicated messages over the public channel, the probability that either Bob rejects the outcome of the protocol or the secret key agreement protocol is successful with reliability parameter $\epsilon$ and key security parameter $\sigma$, is no less than $(1-\delta)$. An $(\epsilon,\sigma)$ OW-SKA$_a$~ protocol has robustness $\delta$ if for all adversary $\mathcal{D}$, the probability that the following experiment outputs `success' is at most $\delta$: sample $(\mathbf{x},\mathbf{y},\mathbf{z})$ according to the distribution $P_{\mathbf{X} \mathbf{Y} \mathds{Z}}$; let $(k_A,c) \leftarrow \tau_{\mathrm{Alice}}(\mathbf{x})$, and let $\Tilde{c} \leftarrow \mathcal{D}(\mathbf{z}, c)$; output `success' if $\Tilde{c} \ne c$ and $\tau_{\mathrm{Bob}}(\mathbf{y},\tilde{c}) \ne \perp$.
\end{definition}
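To make the experiment concrete, the following is a minimal Python sketch of the robustness game of Definition~\ref{robustowska}; the callables \texttt{sample\_xyz}, \texttt{tau\_alice}, \texttt{tau\_bob} and \texttt{adversary} are hypothetical placeholders for the trusted sampler, the protocol algorithms and an arbitrary attacker strategy, and are not part of the construction.
{\footnotesize
\begin{verbatim}
def robustness_experiment(sample_xyz, tau_alice, tau_bob, adversary):
    # One run of the robustness game; returns True iff the attacker wins.
    x, y, z = sample_xyz()          # (x, y, z) drawn from P_XYZ
    k_a, c = tau_alice(x)           # Alice's key and public message c
    c_tilde = adversary(z, c)       # active attacker may rewrite c
    if c_tilde == c:
        return False                # unmodified message is not a forgery
    # 'success' iff Bob accepts (does not output bottom) a modified message
    return tau_bob(y, c_tilde) is not None
\end{verbatim}
}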
\section{Design and analysis of a OW-SKA$_a$~ protocol}
\label{sec:ikem-inst}
We give the construction of a robust and secure OW-SKA$_a$~ protocol and prove its security and robustness against an active adversary.
\subsection{A robust and secure OW-SKA$_a$~
}
\label{constructionikem}
A secure OW-SKA protocol that satisfies Definition~\ref{owska} provides security against a passive adversary who observes the message $c$ and tries to learn something about the extracted key $k$. However, it says nothing about what happens if an adversary can modify the message $c$ as it is sent to Bob over the public channel. In particular, the definition does not specify the output of $\tau_{\mathrm{Bob}}(\mathbf{y},\tilde{c})$ when $\tilde{c} \ne c$. A robust and secure SKA$_a$~
satisfies Definition~\ref{robustowska} and guarantees that any tampering with $c$ will either be detected by Bob or will not prevent Alice and Bob from establishing a shared secret key.
We build on
the secure SKA$_p$~
protocol~\cite{sharif2020} and modify it to provide security and robustness.
The protocol uses
two hash function families: a strong universal hash family $ h': \mathcal{X}^n \times \mathcal{S'} \rightarrow \{0,1\}^\ell $ and
a universal hash family $ h: \mathcal{X}^n \times (\mathcal{S} \times \mathcal{S'}) \rightarrow \{0,1\}^t $ that are used to extract the key and construct the protocol message $c$, respectively.
$h$ is also an almost strong universal hash family when the probability is taken over the random choice of $\mathbf{x} \in \mathcal{X}^n$. The message and key domains are
the sets $\mathcal{C} = \{0,1\}^t \times \mathcal{S} \times \mathcal{S'}$ and $\mathcal{K} = \{0,1\}^\ell $,
respectively.
\vspace{-.3em}
\begin{construction}[A robust and secure OW-SKA$_a$~]
\label{owska:robust}
The SKA$_a$~ protocol $\mathsf{OWSKA}_{a}=(\mathsf{iK.Gen},\mathsf{iK.Enc},\mathsf{iK.Dec})$ consists of the three algorithms given in Algorithm~\ref{alg:iKGen}, Algorithm~\ref{alg:iK.Enc} and Algorithm~\ref{alg:iK.dec}, respectively. The parameter $\nu$ in Algorithm~\ref{alg:iK.dec} depends on the correlation between the random variables $\mathbf{X} $ and $\mathbf{Y} $: higher correlation between $\mathbf{X} $ and $\mathbf{Y} $ implies a smaller value of $\nu$ and a smaller number of elements in the set $\mathcal{R}$. The relationship among the parameters is given by Theorem~\ref{Thm:ikemotsecurity} and Theorem~\ref{mac2:ctxt}.
\end{construction}
\vspace{-1.5em}
\begin{algorithm}[!ht]
\footnotesize
\SetAlgoLined
\DontPrintSemicolon
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{A public distribution $P_{\mathbf{X} \mathbf{Y} \mathds{Z} }$}
\Output{$(\mathbf{x} ,\mathbf{y} ,\mathbf{z} )$}
\SetKw{KwBy}{by}
\SetKwBlock{Beginn}{beginn}{ende}
1. A trusted sampler samples the given public distribution $P_{\mathbf{X} \mathbf{Y} \mathds{Z} }$ to generate $(\mathbf{x} ,\mathbf{y} ,\mathbf{z} ) \xleftarrow{\$} P_{\mathbf{X} \mathbf{Y} \mathds{Z} }$ and provides them privately to Alice, Bob and Eve respectively. \\
\vspace{-1em}
\parbox{\linewidth}{\caption{\footnotesize $\mathsf{iK.Gen}(P_{\mathbf{X} \mathbf{Y} \mathds{Z} })$}}
\label{alg:iKGen}
\end{algorithm}
\vspace{-2em}
\begin{algorithm}[!ht]
\footnotesize
\SetAlgoLined
\DontPrintSemicolon
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{$\mathbf{x} $}
\Output{extracted key= $k$ and message=$c$}
\SetKw{KwBy}{by}
\SetKwBlock{Beginn}{beginn}{ende}
1. Randomly sample seed $s' \xleftarrow{\$} \mathcal{S'}$ for $h'(\cdot)$ \\
2. Randomly sample seed $s \xleftarrow{\$} \mathcal{S}$ for $h(\cdot)$\\
3. $k$ = $h'(\mathbf{x} ,s')$ \\
4. $c$ = $(h(\mathbf{x} , (s',s)),s',s)$ \\
5. Output = $(k,c)$ \\
\vspace{-1em}
\parbox{\linewidth}{\caption{\footnotesize $\mathsf{iK.Enc}(\mathbf{x} )$}}
\label{alg:iK.Enc}
\end{algorithm}
\vspace{-2em}
\begin{algorithm}[!ht]
\footnotesize
\SetAlgoLined
\DontPrintSemicolon
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{$\mathbf{y} $ and message $c$}
\Output{Either an extracted key $k$ or $\perp$}
\SetKw{KwBy}{by}
\SetKwBlock{Beginn}{beginn}{ende}
1. Parse message $c$ as $(d,s',s)$, where $d$ is a $t$-bit string \\
\vspace{-1.6em}
\begin{flalign}\label{reconset}
\text{2. Consider the set }\mathcal{R} =\{\mathbf{x} :-\log(P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x} |\mathbf{y} )) \le \nu \} &&
\end{flalign}
3. For every $\hat{\mathbf{x} } \in \mathcal{R}$, Bob verifies whether $d=h(\hat{\mathbf{x} }, (s',s))$ \\
4. \eIf{there is a unique $\hat{\mathbf{x} } \in \mathcal{R}$ satisfying $d=h(\hat{\mathbf{x} }, (s',s))$}{
Output $k=h'(\hat{\mathbf{x} },s')$ \\
}
{
Output $\perp$ \\
}
\vspace{-.5em}
\parbox{\linewidth}{\caption{\footnotesize $\mathsf{iK.Dec}(\mathbf{y} ,c)$}}
\label{alg:iK.dec}
\end{algorithm}
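For intuition, the following is a minimal, non-normative Python sketch of the data flow of $\mathsf{iK.Enc}$ and $\mathsf{iK.Dec}$; the stand-in hash functions, the byte-string samples and the explicit candidate list playing the role of $\mathcal{R}$ are illustrative assumptions, not the universal hash families or parameters used in the analysis.
{\footnotesize
\begin{verbatim}
import hashlib, secrets

# Toy stand-ins for h and h' (NOT the (strong) universal families
# analysed in the paper; chosen only to show the data flow).
def h(x_bytes, seeds, t_bits=32):
    return hashlib.sha256(x_bytes + b"|" + seeds).digest()[: t_bits // 8]

def h_prime(x_bytes, s_prime, ell_bits=128):
    return hashlib.sha256(x_bytes + b"#" + s_prime).digest()[: ell_bits // 8]

def iK_Enc(x_bytes):
    s_prime, s = secrets.token_bytes(8), secrets.token_bytes(8)
    k = h_prime(x_bytes, s_prime)                 # extracted key
    c = (h(x_bytes, s_prime + s), s_prime, s)     # c = (h(x,(s',s)), s', s)
    return k, c

def iK_Dec(c, R):
    # R plays the role of the reconciliation set of eq. (reconset),
    # here simply a list of Bob's candidate strings for x.
    d, s_prime, s = c
    matches = [x for x in R if h(x, s_prime + s) == d]
    if len(matches) == 1:                         # unique candidate found
        return h_prime(matches[0], s_prime)
    return None                                   # reject (output bottom)

x = b"alice-sample"
R = [b"alice-sample", b"guess-1", b"guess-2"]     # Bob's candidates for x
k_alice, c = iK_Enc(x)
assert iK_Dec(c, R) == k_alice
\end{verbatim}
}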
\subsection{Relation with~\cite{sharif2020}}
\label{securityfuzzyext}
\vspace{-.3em}
Our Construction~\ref{owska:robust} is inspired by the construction of SKA$_p$~ in~\cite{sharif2020}, which is reproduced for completeness
in Appendix~\ref{app:theo}.
The main difference between the two constructions is that the hash value in the protocol message $c$ of Construction~\ref{owska:robust} takes the seeds of both hash functions as input (in addition to $\mathbf{x}$).
In Algorithm~\ref{alg:iK.Enc} of our construction, Alice executes a single hash computation $h(\mathbf{x} , (s',s))$ and sends the result together with the randomness $(s',s)$ to Bob.
In Algorithm~\ref{alg:iK.dec}
upon receiving the message,
Bob searches the set $\mathcal{R}$ to find a unique $\hat{\mathbf{x} } \in \mathcal{R}$ such that $h(\hat{\mathbf{x} }, (s',s))=h(\mathbf{x} , (s',s))$.
Robust information reconciliation
succeeds if a unique $\hat{\mathbf{x} }$ is found in $\mathcal{R}$, allowing Bob
to extract
the key $k=h'(\hat{\mathbf{x}},s')$.
In the SKA$_p$~ protocol in~\cite{sharif2020}, the protocol message is $h(\mathbf{x} ,s)$ together with the randomness $(s',s)$.
Bob's algorithm is similar but uses a different check function: Bob searches the set $\mathcal{R}$ to obtain a unique $\hat{\mathbf{x} }$ such that $h(\hat{\mathbf{x} } ,s)=h(\mathbf{x} ,s)$, and is successful if such an $\hat{\mathbf{x} }$ is found.
This change to the input of the hash function requires a complete re-evaluation of the protocol parameters, including the parameters of the hash functions and of the set $\mathcal{R}$, as well as a fresh security and robustness analysis of the protocol.
In Theorem~\ref{Thm:ikemotsecurity}, we prove that Construction~\ref{owska:robust} is an SKA$_p$~
and provide the relationship among the parameters and the length of the extracted key.
In Theorem~\ref{mac2:ctxt}, we prove robustness and show that the construction is an SKA$_a$~.
Combining these two theorems, we conclude that our construction is an $(\epsilon,\sigma)$-OW-SKA$_a$~
protocol with robustness $\delta$ if the parameters $\ell$, $t$ and $\nu$ are chosen to satisfy both Theorem~\ref{Thm:ikemotsecurity} and Theorem~\ref{mac2:ctxt}.
The following Lemma~\ref{lemma:minentropy} can be proven using standard properties of independent random variables.
\begin{lemma}\label{lemma:minentropy} For any $(X_1Z_1), \cdots, (X_nZ_n)$ independently and identically distributed RV pairs, each with underlying distribution
$P_{XZ}$,
it holds that $\tilde{H}_\infty({\mathbf{X} }|{\mathds{Z} })=n\tilde{H}_\infty(X|Z)$, where ${\mathbf{X} }=(X_1, \cdots, X_n)$ and ${\mathds{Z} }=(Z_1, \cdots, Z_n).$
\end{lemma}
\begin{proof}
$L_{\bf z}:=\max_{\bf x} P_{{\bf X}|{\bf Z}}({\bf x}|{\bf z})=\max_{\bf x}\prod_{i=1}^n P_{X_i|Z_i}(x_i|z_i)\\=\max_{\bf x}\prod_{i=1}^n P_{X|Z}(x_i|z_i).$ Note that ${\bf x}$ goes over ${\cal X}^n$. Hence, $L_{\bf z}=\prod_{i=1}^n \max_{x_i} P_{X|Z}(x_i|z_i).$ We can define $x_z=\arg\max_x P_{X|Z}(x|z).$ Hence,
$L_{\bf z}=\prod_{i=1}^n P_{X|Z}(x_{z_i}|z_i).$ Therefore,
\vspace{-1.5em}
{\small
\begin{align*}
&\tilde{H}_\infty({\bf X}|{\bf Z})=-\log \sum_{{\bf z}\in {\cal Z}^n} (P_{\bf Z}({\bf z})L_{\bf z})\\
=&-\log \sum_{{\bf z}\in {\cal Z}^n}\prod_{i=1}^n P_{XZ}(x_{z_i}, z_i)\\
=&-\log \sum_{z_1, \cdots, z_n\in {\cal Z}}\prod_{i=1}^n P_{XZ}(x_{z_i}, z_i) \\
=&-\log \prod_{i=1}^n (\sum_{z_i\in {\cal Z}} P_{XZ}(x_{z_i}, z_i))\\
=&-\log \prod_{i=1}^n (\sum_{z\in {\cal Z}} P_{XZ}(x_{z}, z))
=-n\log (\sum_{z\in {\cal Z}} P_{XZ}(x_{z}, z))\\
=&-n\log (\sum_{z\in {\cal Z}} P_Z(z)P_{X|Z}(x_{z}|z))\\
=&-n\log (\sum_{z\in {\cal Z}} P_Z(z)\max_x P_{X|Z}(x|z))
=n\tilde{H}_\infty(X|Z).
\end{align*} }
This completes the proof.
\end{proof}
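As a quick numerical sanity check of Lemma~\ref{lemma:minentropy}, the following illustrative Python snippet computes the average min-entropy directly for a two-fold i.i.d.\ product of an arbitrarily chosen toy distribution $P_{XZ}$ and compares it with twice the single-copy value.
{\footnotesize
\begin{verbatim}
import math
from itertools import product

P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}  # toy P_XZ

def avg_min_entropy(joint, x_vals, z_vals):
    # H~_inf(X|Z) = -log2( sum_z max_x P_XZ(x,z) )
    return -math.log2(sum(max(joint[(x, z)] for x in x_vals)
                          for z in z_vals))

h1 = avg_min_entropy(P, [0, 1], [0, 1])

# Two i.i.d. copies: P2((x1,x2),(z1,z2)) = P(x1,z1) * P(x2,z2)
P2 = {((x1, x2), (z1, z2)): P[(x1, z1)] * P[(x2, z2)]
      for x1, x2, z1, z2 in product([0, 1], repeat=4)}
pairs = list(product([0, 1], repeat=2))
h2 = avg_min_entropy(P2, pairs, pairs)

assert abs(h2 - 2 * h1) < 1e-9   # matches n * H~_inf(X|Z) for n = 2
\end{verbatim}
}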
\subsection{Security against passive adversaries}
\begin{theorem}[Secure OW-SKA protocol]
\label{Thm:ikemotsecurity}
If the parameters $\nu$ and $t$ are chosen such that \\
{\scriptsize $\nu = nH(X |Y ) + \sqrt{n} \log (|\mathcal{X}|+3) \sqrt{\log (\frac{\sqrt{n}}{(\sqrt{n}-1)\epsilon})}$ and \\$t \ge nH(X |Y ) + \sqrt{n} \log (|\mathcal{X}|+3) \sqrt{\log (\frac{\sqrt{n}}{(\sqrt{n}-1)\epsilon})} + \log (\frac{\sqrt{n}}{\epsilon})$}, then the OW-SKA$_a$~ protocol $\mathsf{OWSKA}_{a}$ given in Construction~\ref{owska:robust} establishes a secret key of length $\ell \le n\tilde{H}_{\infty}(X |Z ) + 2\log(\sigma) + 2 - t$ that is $\epsilon$-correct and $\sigma$-indistinguishable from random (i.e., it is an $(\epsilon,\sigma)$-OW-SKA protocol according to Definition~\ref{owska}).
\end{theorem}
\begin{proof}
We need to prove that the construction~\ref{owska:robust} satisfies Definition~\ref{owska} for secure OW-SKA protocol. We first prove the reliability of the protocol and then analyze its security.
{\it Reliability.} We first determine the values of $\nu$ and $t$ that bound the error probability (i.e. reliability) of the protocol by $\epsilon$, and then compute the extracted secret key length $\ell$. In Algorithm~\ref{alg:iK.dec} ($\mathsf{iK.Dec}(\cdot)$), Bob searches the set $\mathcal{R}$ for $\hat{\mathbf{x} }$ and checks whether there is a unique $\hat{\mathbf{x} }$ whose hash value matches the received hash value $d$. The algorithm succeeds if a unique $\hat{\mathbf{x} }$ with this property is found in the set $\mathcal{R}$. Hence, the algorithm fails in the following two events: $(i)$ Alice's sample is not in the reconciliation set, i.e. $\mathbf{x} \notin \mathcal{R}$; $(ii)$ the set $\mathcal{R}$ contains more than one element whose hash value equals the received hash value $d$. Therefore, the probability that Bob fails to recover the correct $\mathbf{x} $ (and hence the correct key) is at most the sum of the probabilities of these two cases. These two cases correspond to the following two events, respectively:
\begin{align} \nonumber
& \mathcal{E}_{1} = \{\mathbf{x} : \mathbf{x} \notin \mathcal{R}\}=\{\mathbf{x} : -\log(P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x} |\mathbf{y} )) > \nu\} \text{ and } \\\nonumber
&\mathcal{E}_2 = \{\mathbf{x} \in \mathcal{R}: \exists \text{ } \hat{\mathbf{x} } \in \mathcal{R}, \hat{\mathbf{x}} \ne \mathbf{x}, \text{ s.t. } h(\mathbf{x} , (s',s)) = h(\hat{\mathbf{x} }, (s',s))\}.
\end{align}
For any $\epsilon > 0$, we choose $\epsilon_1 >0$ and $\epsilon_2 >0 $ satisfying $\epsilon_1 + \epsilon_2 \le \epsilon$. Let $\delta_1$ satisfy the equation \\$\epsilon_1 = 2^{\frac{-n{\delta_1}^2}{2\log^2(|\mathcal{X}|+3)}}$ and $\nu = H(\mathbf{X} |\mathbf{Y} ) + n\delta_1$.
Then,
$\mathsf{Pr}(\mathcal{E}_{1})={\mathsf{Pr}}\left(-\log(P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x} |\mathbf{y} )) > H(\mathbf{X} |\mathbf{Y} ) + n\delta_1\right) \le \epsilon_1$ (from~\cite{Holenstein11}, Theorem 2).
We now proceed to compute an upper bound for $\mathsf{Pr}(\mathcal{E}_{2})$. Since $h(\cdot)$ is a universal hash family,
for any $\mathbf{x} ,\hat{\mathbf{x} } \in \mathcal{R}$, $\mathbf{x} \ne \hat{\mathbf{x} }$, random $s' \in \mathcal{S'}$ and randomly chosen $s \in \mathcal{S}$, we have $\mathsf{Pr}\left(h(\mathbf{x} ,( s',s))=h(\hat{\mathbf{x} },( s',s))\right) \le 2^{-t}$, where the probability is over the uniform choices of $(s',s)$ from $(\mathcal{S}' \times \mathcal{S})$. Consequently, $\mathsf{Pr}(\mathcal{E}_2) \le |\mathcal{R}|\cdot 2^{-t}$. From equation~\ref{reconset} and considering that the sum of probability of elements of the set $\mathcal{R}$ is less than or equal to 1, we obtain \\$\frac{|\mathcal{R}|}{2^\nu} \le \mathsf{Pr}(\mathcal{R}) \le 1 \Rightarrow |\mathcal{R}| \le 2^\nu$. Hence, $\mathsf{Pr}(\mathcal{E}_2) \le |\mathcal{R}|\cdot 2^{-t} \le 2^{\nu-t}.$ If we set $t=\nu -\log(\epsilon_2)$, we obtain $\mathsf{Pr}(\mathcal{E}_2) \le \epsilon_2$. Thus, if $t=H(\mathbf{X} |\mathbf{Y} ) + n\delta_1 -\log(\epsilon_2)$, \\then the probability that Bob fails to reproduce the correct key $\mathbf{x}$ is at most $\mathsf{Pr}(\mathcal{E}_1) + \mathsf{Pr}(\mathcal{E}_2) \le \epsilon_1 + \epsilon_2 = \epsilon$.
Furthermore, since $\mathbf{X} , \mathbf{Y} $ are generated by $n$ independent and identical experiments with $P_{X_i Y_i Z_i}(x_i,y_i,z_i)=P_{X Y Z}(x_i,y_i,z_i)$ for $ 1 \le i \le n$, and $P_{\mathbf{X} \mathbf{Y} \mathds{Z} }(\mathbf{x} ,\mathbf{y} ,\mathbf{z} )=\prod_{i=1}^n P_{XYZ}(x_i,y_i,z_i)$, we have $H(\mathbf{X} |\mathbf{Y} )=nH(X |Y )$.
Now, setting \\$\epsilon_1=(\sqrt{n}-1)\epsilon/\sqrt{n}$ and $\epsilon_2=\epsilon/\sqrt{n}$, we have that if \\$\nu = nH(X |Y ) + \sqrt{n} \log (|\mathcal{X}|+3) \sqrt{\log (\frac{\sqrt{n}}{(\sqrt{n}-1)\epsilon})}$ and \\$t \ge nH(X |Y ) + \sqrt{n} \log (|\mathcal{X}|+3) \sqrt{\log (\frac{\sqrt{n}}{(\sqrt{n}-1)\epsilon})} + \log (\frac{\sqrt{n}}{\epsilon})$, then $\mathsf{Pr}(\mathcal{E}_1) + \mathsf{Pr}(\mathcal{E}_2) \le \epsilon$. Therefore, we conclude that the construction~\ref{owska:robust} is $\epsilon$-correct, and the reliability condition of Definition~\ref{owska} is satisfied.
{\it Security.} We now prove that Construction~\ref{owska:robust} also satisfies the security property of Definition~\ref{owska}. Let the RV $\mathds{Z} $ correspond to $\mathbf{z} $, the attacker's initial information. Let $K$, $C$, $S'$ and $S$ be the RVs corresponding to the extracted key $k$, the ciphertext $c$, $s'$ and $s$ respectively, where $k=h'(\mathbf{x} ,s')$ and $c=\Big(h\big(\mathbf{x} , ( s',s)\big),s',s\Big)$. The hash component of the RV $C$ is distributed over $\{0,1\}^t$. Since $s',s$ are randomly chosen and independent of RV $\mathbf{X} $, from [\cite{DodisORS08}, Lemma 2.2(b)], we obtain $\tilde{H}_\infty(\mathbf{X} |\mathds{Z} ,C) =\tilde{H}_\infty\big(\mathbf{X} |\mathds{Z} ,h\big(\mathbf{X} , ( S',S)\big)\big)\ge \tilde{H}_\infty(\mathbf{X} |\mathds{Z} ) - t$.
Therefore, utilizing this expression, from Lemma~\ref{glhl}, we have
{\scriptsize
\begin{align}\nonumber
&\Delta(K,C,\mathds{Z};U_\ell,C,\mathds{Z}) \\\nonumber
&=\Delta\Big(h'(\mathbf{X} , S'), h\left(\mathbf{X} ,( S', S)\right), S', S, \mathds{Z} ; U_\ell, h\left(\mathbf{X} ,(S', S)\right), S', S, \mathds{Z} \Big) \\\nonumber
&\le \frac{1}{2}\sqrt{2^{-\tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} ,h\left(\mathbf{X} ,( S', S)\right))}\cdot 2^
{\ell}} \le \frac{1}{2}\sqrt{2^{-\tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} )}\cdot 2^
{\ell + t}} \\\nonumber
&=\frac{1}{2}\sqrt{2^{-n\tilde{H}_{\infty}(X |Z ) + \ell + t}} \le \sigma
\end{align}
}
The last equality is obtained by applying Lemma~\ref{lemma:minentropy} that proves $\tilde{H}_\infty({\mathbf{X} }|{\mathds{Z} })=n\tilde{H}_\infty(X|Z)$. The last inequality follows due to $\ell \le n\tilde{H}_{\infty}(X |Z ) + 2\log(\sigma) + 2 - t$.
Consequently, the security property of Definition~\ref{owska} for OW-SKA protocol is satisfied. Therefore, construction~\ref{owska:robust} is an $(\epsilon,\sigma)$-OW-SKA protocol.
\end{proof}
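For concreteness, the following illustrative Python snippet evaluates the parameter constraints of Theorem~\ref{Thm:ikemotsecurity}; the numeric values of $n$, $H(X|Y)$, $\tilde{H}_\infty(X|Z)$, $\epsilon$ and $\sigma$ are assumptions chosen only for illustration, and all logarithms are taken to base 2.
{\footnotesize
\begin{verbatim}
import math

n = 4096                 # number of i.i.d. samples (assumed)
H_xy = 0.10              # per-sample H(X|Y) (assumed)
Hmin_xz = 0.60           # per-sample H~_inf(X|Z) (assumed)
card_X = 2               # |X|
eps, sigma = 2.0**-20, 2.0**-40

rn = math.sqrt(n)
slack = rn * math.log2(card_X + 3) * \
        math.sqrt(math.log2(rn / ((rn - 1) * eps)))

nu = n * H_xy + slack                              # threshold for the set R
t = n * H_xy + slack + math.log2(rn / eps)         # minimum tag length
ell = n * Hmin_xz + 2 * math.log2(sigma) + 2 - t   # maximum key length

print(f"nu = {nu:.1f}, t >= {t:.1f}, ell <= {ell:.1f} bits")
\end{verbatim}
}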
\vspace{-.5em}
\section{Robustness of
construction~\ref{owska:robust}} \label{robustness}
\vspace{-.3em}
We now prove that Construction~\ref{owska:robust} is a robust OW-SKA$_a$~ protocol as defined in Definition~\ref{robustowska}. To prove robustness, we consider a specific construction of the almost strong universal hash family $h: \mathcal{X}^n \times (\mathcal{S} \times \mathcal{S'}) \rightarrow \{0,1\}^t$, described below.
\vspace{-.3em}
\subsection{A one-time secure MAC in the $P_{\mathbf{X}\mathds{Z}}$ setting}
\vspace{-.3em}
For the private sample
$\mathbf{x} $, we split $\mathbf{x}$ into two strings: \\$y'_1=[\mathbf{x} ]_{1\cdots t}$ and $y'_2=[\mathbf{x} ]_{t+1 \cdots n}$, where $t \le n/2$. Observe that $\mathbf{x} =y'_2 \parallel y'_1$.
For a message $m$, we represent it as a pair $(s',s)$, where $s'=s''_2 \parallel s''_1$ with $s''_1, s''_2 \in GF(2^{n})$, and $s=s_2\parallel s_1$ with $s_1 \in GF(2^t)$ and $s_2 \in GF(2^{n - t})$ such that the last element of $s_2$ is non-zero. Note that we can always ensure that the last element of $s_2$ is non-zero by suitably appending a 1 to $m$. The verifier checks that the last element of $s_2$ is non-zero.
We represent $s'$, suitably padded with 1s, as a sequence $(s'_r,\cdots,s'_1)$ of elements of $GF(2^{n - t})$, where $r$ is odd.
Define $h\big(\mathbf{x} ,(s',s)\big)=h\big(\mathbf{x} ,( s',(s_2,s_1))\big)=\big[s_2(y'_2)^{r+2} + {\sum}_{i=1}^r s'_i(y'_2)^i\big]_{1\cdots t} + (y'_1)^{3} + s_1y'_1$.
We use this MAC to prove robustness of our construction in Section~\ref{robustnessanalysis}. Our MAC is inspired by the MAC in~\cite{CramerDFPW08}.
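To make the algebra concrete, the following is a minimal Python sketch of the tag computation over toy field sizes ($n=8$, $t=4$, so both halves lie in $GF(2^4)$ and the truncation $[\cdot]_{1\cdots t}$ is trivial); the irreducible polynomial and all values below are illustrative assumptions, not the parameters used in the analysis.
{\footnotesize
\begin{verbatim}
M, POLY = 4, 0x13        # GF(2^4) with irreducible polynomial x^4 + x + 1

def gf_mul(a, b):
    # carry-less multiplication modulo the irreducible polynomial
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def mac(y2, y1, s_prime, s2, s1):
    # tag = [s2*y2^(r+2) + sum_i s'_i*y2^i]_{1..t} + y1^3 + s1*y1
    # (all additions are XOR; truncation is trivial at these toy sizes)
    r = len(s_prime)                 # s' = (s'_r, ..., s'_1), r odd
    acc = gf_mul(s2, gf_pow(y2, r + 2))
    for i, si in enumerate(reversed(s_prime), start=1):
        acc ^= gf_mul(si, gf_pow(y2, i))
    return acc ^ gf_pow(y1, 3) ^ gf_mul(s1, y1)

y2, y1 = 0b1011, 0b0110                  # toy key x = y2 || y1
s_prime = [0b0011, 0b0101, 0b1001]       # r = 3 elements of GF(2^4)
s2, s1 = 0b0001, 0b1110                  # last element of s2 non-zero
print(f"tag = {mac(y2, y1, s_prime, s2, s1):04b}")
\end{verbatim}
}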
\vspace{-.3em}
\begin{lemma}
\label{lemma:mac2}
Let $h\big(\mathbf{x} , (s',s)\big)=\big[s_2(y'_2)^{r+2} + {\sum}_{i=1}^r s'_i(y'_2)^i\big]_{1 \cdots t} + (y'_1)^{3} + s_1y'_1$ as defined in Section~\ref{robustness}. Let $\mathsf{mac}:=h(\cdot)$. Define $\mathsf{ver}(\mathbf{x} ,(s',s), t')$ such that it outputs {\em acc} if $h(\mathbf{x} , (s',s)) = t'$ and {\em rej} otherwise. Then $(\mathsf{gen},h,\mathsf{ver})$ is an $(|\mathcal{S'} \times \mathcal{S}|,P_{\mathbf{X}\mathds{Z}},|\mathcal{T'}|,\delta_{mac})$-information-theoretic one-time MAC with $\delta_{imp}=3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}$, $\delta_{sub}=3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}$, and $\delta_{mac}=3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}$, where $\mathcal{T'}=\{0,1\}^t$.
\end{lemma}
\begin{proof}
We need to prove that $(\mathsf{gen},h,\mathsf{ver})$ satisfies Definition~\ref{defn:mac} for the case $\mathbf{x} = \mathbf{y}$. Since $\mathbf{x} = \mathbf{y} $, it is easy to see that the correctness property of Definition~\ref{defn:mac} is satisfied. We now focus on the unforgeability property.
We first compute an adversary's success probability in an {\it impersonation attack} and then the success probability in a {\it substitution attack}.
{\it{Impersonation attack.}} In this attack, an adversary tries to generate a correct authenticated message $(t'_f,s'_f,s_f)$ such that
\vspace{-1em}
{\small
\begin{align}\label{eqn:impappn}
t'_f=\big[s_{2f}(y'_2)^{r+2} + {\sum}_{i=1}^r s'_{if}(y'_2)^i\big]_{1 \cdots t} + (y'_1)^{3} + s_{1f}y'_1,
\vspace{-.6em}
\end{align}
}
where $s_f=s_{2f} \parallel s_{1f}$, $s'_f=(s'_{rf}\parallel \cdots \parallel s'_{1f})$, and the last element of $s_{2f}$ is non-zero.
This is a non-zero polynomial in the two variables $y'_2$ and $y'_1$ of degree at most $(r+2)$. The term $s_{2f}(y'_2)^{r+2}+ {\sum}_{i=1}^r s'_{if}(y'_2)^i + 0^{n-2t}\parallel\big((y'_1)^{3} + s_{1f}y'_1\big)$ takes on each element of $GF(2^{n-t})$ at most $3(r+2)2^t$ times as $y'_2$ and $y'_1$ vary. Hence, there are at most $3(r+2)2^t (2^{n-t}/2^t)=3(r+2)2^{n-t}$ values of $(y'_2 \parallel y'_1)$ that satisfy equation~\ref{eqn:impappn}. Let $\mathbf{X}$ and $\mathds{Z} $ denote the RVs corresponding to $\mathbf{x}$ and $\mathbf{z} $ respectively. Note that $\mathbf{x} = (y'_2 \parallel y'_1)$, and each value of $(y'_2 \parallel y'_1)$ (i.e. $\mathbf{x}$) occurs with probability at most $2^{-H_{\infty}(\mathbf{X} |\mathds{Z}=\mathbf{z} )}$. Thus, the probability that an adversary can successfully construct a correctly authenticated message is
\vspace{-1.7em}
{\small
\begin{align}\nonumber
&\mathbb{E}_{\mathbf{z} \leftarrow \mathds{Z}}\Big[\mathsf{Pr}_{\mathbf{X} }\big[\mathsf{ver}(\mathbf{x},(s'_f,s_f),t')=acc \text{ $|$ } \mathds{Z}=\mathbf{z}\big]\Big] \\\nonumber
&=\mathbb{E}_{\mathbf{z} \leftarrow \mathds{Z} }\Big[\mathsf{Pr}_{\mathbf{X} }\big[ t'_f=[s_{2f}(y'_2)^{r+2} + {\sum}_{i=1}^r s'_{if}(y'_2)^i]_{1 \cdots t} \\\nonumber
&\qquad + (y'_1)^{3} + s_{1f}y'_1 \text{ $|$ } \mathds{Z}= \mathbf{z}\big]\Big] \\\nonumber
&\le \mathbb{E}_{\mathbf{z} \leftarrow \mathds{Z}}\big[3(r+2)2^{n-t}2^{-H_{\infty}(\mathbf{X} |\mathds{Z} = \mathbf{z})}\big] \\\nonumber
&= 3(r+2)2^{n-t}\mathbb{E}_{\mathbf{z} \leftarrow \mathds{Z} }\big[2^{-H_{\infty}(\mathbf{X} |\mathds{Z} = \mathbf{z})}\big] \\\nonumber
&= 3(r+2)2^{n-t}\big[2^{-\tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} )}\big]
=3(r+2)2^{-(t+\tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} ) - n)} \\\label{eqn:macimpappn}
&= 3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}.
\end{align}
}
The last equality follows from Lemma~\ref{lemma:minentropy} that proves $\tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} )=n\tilde{H}_{\infty}(X |Z )$. Therefore, the success probability in an {\it impersonation attack} is at most $\delta_{imp}=3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}$.
{\it{Substitution attack.}}
Assume that an adversary is given a correctly authenticated message $c=(t',s',s)$, where
{\small
\begin{align} \label{mac:senond1appn}
t'=\big[s_2(y'_2)^{r+2} + {\sum}_{i=1}^r s'_i(y'_2)^i\big]_{1 \cdots t } + (y'_1)^{3} + s_1 y'_1 .
\end{align}
}
Let $T'$, $C$, $Y'_1$ and $Y'_2$ denote the RVs corresponding to $t'$, $c$, $y'_1$ and $y'_2$ respectively. After observing the message $c=(t',s',s)$, the adversary tries to generate a forged message $(t'_f,s'_f,s_f)$ such that \\$t'_f=\big[s_{2f}(y'_2)^{r+2}+ {\sum}_{i=1}^r s'_{if}(y'_2)^i\big]_{1 \cdots t} +(y'_1)^{3} + s_{1f}y'_1$, $(t'_f,s'_f,s_f) \ne (t',s',s)$, and the last element of $s_{2f}$ is non-zero. Then the expected probability that an adversary can successfully construct a forged message, given any message $c$, is \\ $\mathbb{E}_{(c, \mathbf{z} ) \leftarrow (C, \mathds{Z})}\big[\mathsf{Pr}_{\mathbf{X} }[ t'_f=\big[s_{2f}(y'_2)^{r+2} + {\sum}_{i=1}^r s'_{if}(y'_2)^i\big]_{1 \cdots t} +(y'_1)^{3} + s_{1f}y'_1 \text{ $|$ } C=c, \mathds{Z} = \mathbf{z} ]\big]$.
Now if $(s'_f,s_f)=(s',s)$, then Bob will reject the message unless $t'_f=t'$. Hence, we only need to focus on the case $(s'_f,s_f) \ne (s',s)$.
Since addition and subtraction correspond to the bit-wise exclusive-or in the corresponding field, we have
\begin{align}\nonumber \label{mac:seconddiffappn}
t'-t'_f=&\big[(s_{2} - s_{2f})(y'_2)^{r+2}+ {\sum}_{i=1}^r (s'_{i} - s'_{if})(y'_2)^i\big]_{1 \cdots t} \\
&\quad + (s_1 - s_{1f})y'_1
\end{align}
If $(s_1=s_{1f})$, then the degree of this polynomial in $y'_2$ is at most $(r+2)$. Now the term\\ $\big[(s_{2} - s_{2f})(y'_2)^{r+2} + {\sum}_{i=1}^r (s'_{i} - s'_{if})(y'_2)^i\big]$ takes on each element of the field $GF(2^{n-t})$ at most $(r+2)$ times as $y'_2$ varies. Consequently, there are at most $(r+2)(2^{n-t}/2^t)=(r+2)2^{n-2t}$ values of $y'_2$ that satisfy equation~\ref{mac:seconddiffappn}.
Equation~\ref{mac:senond1appn} implies that, for each value of $y'_2$, there exist at most three values of $y'_1$ that satisfy the equation. Therefore, there are at most $3(r+2)2^{n-2t}$ values of $(y'_2 \parallel y'_1)$ that satisfy both equation~\ref{mac:senond1appn} and equation~\ref{mac:seconddiffappn}.
If $(s_1 \ne s_{1f})$, then expressing $y'_1$ in equation~\ref{mac:seconddiffappn} in terms of $y'_2$ and substituting it in equation~\ref{mac:senond1appn}, we obtain \\$t'=[-(s_1 - s_{1f})^{-3}(s_{2} - s_{2f})^3 (y'_2)^{3(r+2)} ]_{_{1 \cdots t}} + g(y'_2)$ for some polynomial $g(y'_2)$ of degree at most $3r$. Therefore, there are at most $3(r+2)2^{n-2t}$ values of $y'_2$ that satisfy this equation. From equation~\ref{mac:seconddiffappn}, we see that, for each value of $y'_2$, there is a unique $y'_1$ that satisfies the equation. Therefore, in both cases, there are at most $3(r+2)2^{n-2t}$ values of $(y'_2 \parallel y'_1)$ that satisfy both equation~\ref{mac:senond1appn} and equation~\ref{mac:seconddiffappn}.
Note that $\mathbf{x} =y'_2 \parallel y'_1$. Then each value of $(y'_2\parallel y'_1)$ (i.e. $\mathbf{x} $) occurs with probability at most $2^{-H_{\infty}(\mathbf{X} |\mathds{Z} = \mathbf{z} ,T'=t')}$, where $\mathds{Z} $ is the RV corresponding to $\mathbf{z} $, Eve's initial information. Since $|t'|=t$, applying [\cite{DodisORS08}, Lemma 2.2(b)], we obtain $\tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} ,T') \ge \tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} ) - t$.
Therefore, the required expected probability =
\vspace{-.5em}
{\small
\begin{align}\nonumber
&\mathbb{E}_{(c, \mathbf{z} ) \leftarrow (C, \mathds{Z})}\Big[\pr\big[ \mathsf{ver}(\mathbf{x},(s'_f,s_f),t')=acc \\\nonumber
&\qquad\text{ $|$ } (s',s),
\mathsf{mac}(\mathbf{x} ,(s',s))=t, \mathds{Z} =\mathbf{z}\big]\Big] \\\nonumber
&=\mathbb{E}_{(c, \mathbf{z} ) \leftarrow (C, \mathds{Z})}\Big[\mathsf{Pr}_{\mathbf{X} }\big[ t'_f=[s_{2f}(y'_2)^{r+2} + {\sum}_{i=1}^r s'_{if}(y'_2)^i]_{1 \cdots t}\\\nonumber
&\qquad +(y'_1)^{3} + s_{1f}y'_1 \text{ $|$ } C=c, \mathds{Z} = \mathbf{z} \big]\Big] \\\nonumber
&=\mathbb{E}_{(c, \mathbf{z} ) \leftarrow (C, \mathds{Z} )}\Big[\mathsf{Pr}_{(Y'_2\parallel Y'_1)}\big[ t'_f=[s_{2f}(y'_2)^{r+2} + \\\nonumber
&\qquad{\sum}_{i=1}^r s'_{if}(y'_2)^i]_{1 \cdots t} +(y'_1)^{3} + s_{1f}y'_1 \\\nonumber
&\qquad\wedge t'=[s_2(y'_2)^{r+2} + {\sum}_{i=1}^r s'_i(y'_2)^i\big]_{1 \cdots t} +(y'_1)^{3} + s_{1}y'_1 \\\nonumber
&\qquad \text{ $|$ } C=c, \mathds{Z}= \mathbf{z} \big]\Big]& \\\nonumber
&=\mathbb{E}_{(c, \mathbf{z} ) \leftarrow (C, \mathds{Z} )}\Bigg[\mathsf{Pr}_{(Y'_2\parallel Y'_1)}\bigg[ t'-t'_f=\big[(s_{2} - s_{2f})(y'_2)^{r+2}+ \\\nonumber
&\qquad {\sum}_{i=1}^r (s'_{i} - s'_{if})(y'_2)^i\big]_{1 \cdots t}+ (s_1 - s_{1f})y'_1 \\\nonumber
&\qquad\wedge t'=\big[s_2(y'_2)^{r+2} + {\sum}_{i=1}^r s'_i(y'_2)^i\big]_{1 \cdots t} +(y'_1)^{3} + s_{1}y'_1 \\\nonumber
&\qquad \text{ $|$ } C=c, \mathds{Z} = \mathbf{z} \bigg]\Bigg] \\\nonumber
&\le \mathbb{E}_{(c, \mathbf{z} ) \leftarrow (C, \mathds{Z} )}\big[3(r+2)2^{n-2t}2^{-H_{\infty}(\mathbf{X} |\mathds{Z} = \mathbf{z} , T'=t')}\big] \\\nonumber
&= 3(r+2)2^{n-2t}\mathbb{E}_{(c, \mathbf{z} ) \leftarrow (C, \mathds{Z} )}\big[2^{-H_{\infty}(\mathbf{X} |\mathds{Z} = \mathbf{z} ,T'=t')}\big] \\\nonumber
&= 3(r+2)2^{n-2t}\big[2^{-\tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} ,T')}\big]
\le 3(r+2)2^{n-2t}2^{-(\tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} )-t)} \\\label{eqn:macsubappn}
&= 3(r+2)2^{-(t+\tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} )-n)}= 3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}.
\end{align}
}
The last equality is obtained by using Lemma~\ref{lemma:minentropy} that proves $\tilde{H}_{\infty}(\mathbf{X} |\mathds{Z} )= n\tilde{H}_{\infty}(X |Z )$.
Therefore, the success probability in a {\it substitution attack} is at most \\$\delta_{sub}=3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}$.
Consequently, $(\mathsf{gen},h,\mathsf{ver})$ is an $(|\mathcal{S'} \times \mathcal{S}|,P_{\mathbf{X}\mathds{Z}},|\mathcal{T'}|,\delta_{mac})$-information-theoretic one-time MAC with $\delta_{imp}=3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}$, $\delta_{sub}=3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}$, and $\delta_{mac}=\mathsf{max}\{\delta_{imp},\delta_{sub}\}=3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}$.
\end{proof}
\subsubsection{Comparison with other MAC constructions}\label{comparison}
Comparing our results with
\cite{MaurerW03b}, we note that
the MAC construction in [\cite{MaurerW03b}, Theorem 3], for $t=n/2$, has
success probabilities in {\it impersonation} and {\it substitution attacks} of roughly $2^{-(H_2(\mathbf{X} | \mathds{Z} = \mathbf{z})-n/2)/2} \approx 2^{-(n\tilde{H}_\infty(X | Z)-n/2)/2}$ and $3\cdot 2^{-(H_2(\mathbf{X} | \mathds{Z} = \mathbf{z})-n/2)/4} \approx 3 \cdot 2^{-(n\tilde{H}_\infty(X | Z)-n/2)/4}$, respectively, where $H_2(\mathbf{X}):=-\log(\sum_{\mathbf{x} \in \mathcal{X}^n}P_{\mathbf{X} }(\mathbf{x})^2)$ is the R{\'{e}}nyi entropy of $\mathbf{X}$.
In our construction, however, the probabilities are significantly smaller:
Lemma~\ref{lemma:mac2} shows that the success probabilities of the two attacks are the same
and are
at most $3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}$, which for $t=n/2$
is bounded by $3(r+2)2^{-(n\tilde{H}_{\infty}(X |Z )-n/2)}$.
We note that the construction in~\cite{MaurerW03b} focuses on privacy amplification and assumes $\mathbf{x}=\mathbf{y}$, while our construction also uses the MAC for reconciliation. Assuming $\mathbf{x}=\mathbf{y}$
in our protocol,
the extracted key length will be roughly $(2n\tilde{H}_{\infty}(X |Z ) - n)$, and extraction is possible as long as $n\tilde{H}_{\infty}(X |Z ) > n/2$, i.e. $\tilde{H}_{\infty}(X |Z ) > 1/2$. This improves on the results of Maurer et al.\ [\cite{MaurerW03b}, Theorem 5], which require $H_2(\mathbf{X}|\mathds{Z}=\mathbf{z}) > 2n/3$, i.e. roughly $n\tilde{H}_{\infty}(X |Z ) > 2n/3$, and extract a key of length roughly $H_2(\mathbf{X}|\mathds{Z}=\mathbf{z}) - 2n/3$, i.e. approximately $n\tilde{H}_{\infty}(X |Z ) - 2n/3$.
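To give a sense of the gap, the following illustrative Python snippet evaluates the two forgery bounds for an arbitrarily chosen example setting; the values of $n$, $\tilde{H}_\infty(X|Z)$ and $r$ are assumptions for illustration only, and $H_2$ is approximated by $n\tilde{H}_\infty(X|Z)$ as above.
{\footnotesize
\begin{verbatim}
import math

n, r = 1024, 9                  # assumed sample length and (odd) r
hmin = 0.75                     # assumed per-sample H~_inf(X|Z)
t = n // 2

ours = math.log2(3 * (r + 2)) - (t + n * hmin - n)   # Lemma (lemma:mac2)
mw_imp = -(n * hmin - n / 2) / 2                     # MW'03 impersonation
mw_sub = math.log2(3) - (n * hmin - n / 2) / 4       # MW'03 substitution

print(f"ours: 2^{ours:.0f}, "
      f"MW imp: 2^{mw_imp:.0f}, MW sub: 2^{mw_sub:.0f}")
\end{verbatim}
}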
\subsubsection{Robustness analysis}\label{robustnessanalysis}
In order to prove robustness, we consider specific constructions of the strong universal hash family $h': \mathcal{X}^n \times \mathcal{S'} \rightarrow \{0,1\}^\ell$ and the universal hash family \\$h: \mathcal{X}^n \times (\mathcal{S} \times \mathcal{S'}) \rightarrow \{0,1\}^t$ as described below. $h$ is also an almost strong universal hash family when the probability is taken over the randomness of $\mathcal{X}^n$. For the private sample
$\mathbf{x} $, let $y'_1=[\mathbf{x} ]_{1\cdots t}$ and $y'_2=[\mathbf{x} ]_{t+1 \cdots n}$, where $t \le n/2$. Notice that $\mathbf{x} =y'_2 \parallel y'_1$.
For two random elements $s''_1,s''_2 \in_R GF(2^{n})$, let $s'=s''_2 \parallel s''_1$ and $h'(\mathbf{x} ,s')=h'\big(\mathbf{x} ,(s''_2,s''_1)\big)=\big[s''_2 (\mathbf{x} )^2+s''_1 \mathbf{x} \big]_{1 \cdots \ell}$. We represent $s'$, suitably padded with 1s, as a sequence $(s'_r,\cdots,s'_1)$ of elements of $GF(2^{n - t})$ such that $r$ is odd. For two random elements $s_1 \in_R GF(2^{t})$, $s_2 \in_R GF(2^{n - t})$ such that the last element of $s_{2}$ is non-zero, let $s=s_2\parallel s_1$ and $h\big(\mathbf{x} ,(s',s)\big)=h\big(\mathbf{x} ,( s',(s_2,s_1))\big)=\big[s_2(y'_2)^{r+2} + {\sum}_{i=1}^r s'_i(y'_2)^i\big]_{1\cdots t} + (y'_1)^{3}+ s_1 y'_1$. While verifying the tag, the receiver checks that the last element of $s_{2}$ is non-zero.
Notice that $h$ and $h'$ are a universal hash family and a strong universal hash family, respectively.
We now prove Lemma~\ref{lemma:fuzzy}, which will be used to prove the robustness of our protocol. It is analogous to Lemma 3.5 of Fuller et al.~\cite{Fuller2020fuzzy}.
\begin{lemma}\label{lemma:fuzzy}
If $A$ is a random variable over a set of at most $2^b$ possible values, then
$\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} | A) \ge \tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} ) - b$.
\end{lemma}
\begin{proof}
{\footnotesize
\begin{align}\nonumber
&\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} | A) \\\nonumber
&=-\log\Big(\underset{a \leftarrow A}{\mathbb{E}}\max_{\mathbf{x}}\sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}\pr[\mathbf{Y}=\mathbf{y}|A=a]\Big) \\\nonumber
&=-\log\Big(\sum_a\max_{\mathbf{x}}\sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y}) \ge 2^{-\nu}}\pr[\mathbf{Y}=\mathbf{y}|A=a]\pr[A=a]\Big) \\\nonumber
&=-\log\Big(\sum_a\max_{\mathbf{x}}\sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y}) \ge 2^{-\nu}}\pr[\mathbf{Y}=\mathbf{y} \wedge A=a]\Big) \\\nonumber
&\ge -\log\Big(\sum_a\max_{\mathbf{x}}\sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y}) \ge 2^{-\nu}}\pr[\mathbf{Y}=\mathbf{y} ]\Big) \\\nonumber
&\ge -\log\Big(2^b \max_{\mathbf{x}}\sum_{\mathbf{y}:P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y}) \ge 2^{-\nu}}\pr[\mathbf{Y}=\mathbf{y} ]\Big) \\\nonumber
&\ge -\log\Big( \max_{\mathbf{x}}\sum_{\mathbf{y}:-\log (P_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})) \le \nu}\pr[\mathbf{Y}=\mathbf{y} ]\Big) - b \\\nonumber
&\ge \tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} ) - b \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\end{align}
}
\end{proof}
\vspace{-1em}
\begin{theorem}[robustness of secure OW-SKA protocol]\label{mac2:ctxt}
The robustness of the secure OW-SKA protocol of Construction~\ref{owska:robust}, as defined in Definition~\ref{robustowska}, is broken with probability at most \\$3(r+2)\big( 2^{-(t-n+\min\{n\tilde{H}_{\infty}(X |Z ),\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} )\})}\big)$. Hence, if $t \ge n + \log\big(\frac{3(r+2)}{\delta}\big) - \min\{n\tilde{H}_{\infty}(X |Z ),\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} )\}$, then
the OW-SKA$_a$~ protocol $\mathsf{OWSKA}_{a}$ given in construction~\ref{owska:robust} has robustness $\delta$.
\end{theorem}
\begin{proof}
We need to prove that Construction~\ref{owska:robust} satisfies Definition~\ref{robustowska}. Algorithm~\ref{alg:iK.dec} ($\mathsf{iK.Dec}(\cdot)$) successfully outputs an extracted key if there is a unique element $\hat{\mathbf{x} }$ in the set $\mathcal{R}$ such that $h(\hat{\mathbf{x}},(s',s))$ is equal to the received hash value $d=h(\mathbf{x},(s',s))$.
Note that, if the construction is an $(\epsilon,\sigma)$-OW-SKA protocol, then $\mathbf{x}$ is in the set $\mathcal{R}$ with probability at least $(1 - \epsilon)$.
In the robustness experiment as defined in Definition~\ref{robustowska}, an adversary receives an
authenticated message $c$ with $c=(t',s',s)$, where
{\footnotesize
\begin{align}
\nonumber
t'=&h\big(\mathbf{x} , (s',s)\big)\\
=&\big[s_2(y'_2)^{r+2} + {\sum}_{i=1}^r s'_i(y'_2)^i\big]_{1 \cdots t} + (y'_1)^{3}+ s_1 y'_1, \label{eqn:robxappn}
\end{align}
}
and the last element of $s_{2}$ is non-zero.
Define $f(y'_2,(s',s))=s_2(y'_2)^{r+2} + {\sum}_{i=1}^r s'_i(y'_2)^i$. For fixed $s, s'$, we denote the RVs corresponding to $t'$, $c$, $y'_1$, $y'_2$, $\mathbf{x} $ by $T'$, $C$, $Y'_1$, $Y'_2$, $\mathbf{X} $, respectively, and hence the randomness is over ${\mathbf{X}\mathbf{Y}\mathds{Z}}$ only. After observing
the message $c$ corresponding to $\mathbf{x} $, the adversary tries to generate a $c'$ that is valid either for
$\mathbf{x} $ itself or for some
$\mathbf{x}_1 \in \mathcal{R}$ with $\mathbf{x}_1 \ne \mathbf{x} $.
Let $\delta_{\mathbf{x}}$ and $\delta_{\mathbf{x}_1}$ denote the success probabilities of the adversary in generating a $c'$ corresponding to the above two cases, respectively.
Since Bob's algorithm looks for a unique element in $\mathcal{R}$, only one of the above two cases can succeed; hence the probability that an adversary can generate a forged message is at most $\mathsf{max}\{\delta_\mathbf{x},\delta_{\mathbf{x}_1}\}$, and the success probability of the adversary is bounded as:
{\small
\begin{align}\nonumber
&\le \text{Expected probability that an adversary can generate a forged} \\\nonumber
&\quad\quad\text{authenticated message given the message $c=(t',s',s)$} \\\nonumber
&\quad\quad\text{corresponding to $\mathbf{x} $} \\\label{eq:forgetagappn}
&=\mathsf{max}\{\delta_{\mathbf{x}},\delta_{\mathbf{x}_1}\},
\end{align}
}
where the expectation is over $P_{\mathbf{X}\mathbf{Y}\mathds{Z}}$.
We first bound $\delta_\mathbf{x}$. Let the adversary generate a forged message $(t'_f,s'_f,s_f)$ corresponding to $\mathbf{x} $ such that \\$t'_f=\big[s_{2f}(y'_2)^{r+2} + {\sum}_{i=1}^r s'_{if}(y'_2)^i\big]_{1 \cdots t} + (y'_1)^{3}+ s_{1f} y'_1$, $(t'_f,s'_f,s_f) \ne (t',s',s)$, and the last element of $s_{2f}$ is non-zero. Thus,
{\small
\begin{align}\nonumber
\delta_\mathbf{x} &\le \text{Expected probability that adversary can generate a forged} \\\nonumber
&\quad\quad\text{message $(t'_f,s'_f,s_f)$ valid when verified with $\mathbf{x} $, given } \\
\nonumber
&\quad\quad\text{the message $c=(t',s',s)$} \\\nonumber
&=\mathbb{E}_{(c, \mathbf{z}) \leftarrow (C, \mathds{Z} )}\Big[\mathsf{Pr}_{\mathbf{X} }\big[ t'_f=\big[s_{2f}(y'_2)^{r+2} + \\\nonumber
&\quad\quad {\sum}_{i=1}^r s'_{if}(y'_2)^i\big]_{1 \cdots t}+ (y'_1)^{3}+ s_{1f} y'_1 \text{ $|$ } C=c, \mathds{Z} = \mathbf{z} \big]\Big] \\\label{eqn:forgedletax}
&\le 3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)}
\end{align}
}
The expectation is over the joint distribution of $(C,\mathds{Z})$, and the inner probability is over $P_{\mathbf{X}|C=c,\mathds{Z}=\mathbf{z}}$.
The inequality~\ref{eqn:forgedletax} follows from the {\it substitution attack} part of Lemma~\ref{lemma:mac2} (i.e., equation~\ref{eqn:macsubappn}).
We now compute the success probability (i.e., $\delta_{\mathbf{x}_1}$) that the forged value of $c$
$(t'_{f_{\mathbf{x}_1}},s'_{f_{\mathbf{x}_1}},s_{f_{\mathbf{x}_1}})$ corresponds to $\mathbf{x}_1 \neq \mathbf{x} $
such that \\
{\small
\begin{align} \nonumber
t'_{f_{\mathbf{x}_1}}&=h\big(\mathbf{x}_1 , (s'_{f_{\mathbf{x}_1}},s_{f_{\mathbf{x}_1}})\big)=\big[s_{2{f_{\mathbf{x}_1}}}(y'_{2{\mathbf{x}_1}})^{r+2} + \\\label{eqn:robx1}
&{\sum}_{i=1}^r s'_{i{f_{\mathbf{x}_1}}}(y'_{2{\mathbf{x}_1}})^i\big]_{1 \cdots t} + (y'_{1{\mathbf{x}_1}})^{3}+ s_{1{f_{\mathbf{x}_1}}} y'_{1{\mathbf{x}_1}}
\end{align}
}
and $(t'_{f_{\mathbf{x}_1}},s'_{f_{\mathbf{x}_1}},s_{f_{\mathbf{x}_1}}) \ne (t',s',s)$, where $y'_{1{\mathbf{x}_1}}=[\mathbf{x}_1 ]_{1\cdots t}$, $y'_{2{\mathbf{x}_1}}=[\mathbf{x}_1 ]_{t+1 \cdots n}$, $s_{f_{\mathbf{x}_1}}=s_{2{f_{\mathbf{x}_1}}} \parallel s_{1{f_{\mathbf{x}_1}}}$, $s'_{f_{\mathbf{x}_1}}=(s'_{r{f_{\mathbf{x}_1}}}\parallel \cdots \parallel s'_{1{f_{\mathbf{x}_1}}})$, and the last element of $s_{2{f_{\mathbf{x}_1}}}$ is non-zero. Since addition and subtraction correspond to bit-wise exclusive-or, we obtain,
{\small
\begin{align}\nonumber
t'-t'_{f_{\mathbf{x}_1}}&=\Big[\big[s_2(y'_2)^{r+2}+ {\sum}_{i=1}^r s'_i(y'_2)^i\big]_{1 \cdots t} + (y'_1)^{3}+ s_1 y'_1 \Big] -\\\nonumber
&\qquad \Big[\big[s_{2{f_{\mathbf{x}_1}}}(y'_{2{\mathbf{x}_1}})^{r+2} + {\sum}_{i=1}^r s'_{i{f_{\mathbf{x}_1}}}(y'_{2{\mathbf{x}_1}})^i\big]_{1 \cdots t}\\\nonumber\label{eqn:robusttag1}
&\qquad + (y'_{1{\mathbf{x}_1}})^{3}+ s_{1{f_{\mathbf{x}_1}}} y'_{1{\mathbf{x}_1}}\Big] \\\nonumber
&=\big[f(y'_2,s',s) - f(y'_{2{\mathbf{x}_1}},s'_{f_{\mathbf{x}_1}},s_{f_{\mathbf{x}_1}})\big]_{1 \cdots t} + \\
&\qquad(y'_1)^{3} - (y'_{1{\mathbf{x}_1}})^{3} + s_1 y'_1 - s_{1{f_{\mathbf{x}_1}}} y'_{1{\mathbf{x}_1}}
\end{align}
}
Since $\mathbf{x}$ can be written as $\mathbf{x}_1 + e$ for some $e=(e_2 \parallel e_1) \in GF(2^{n})$, we have $(y'_2\parallel y'_1)=((y'_{2{\mathbf{x}_1}} + e_2) \parallel (y'_{1{\mathbf{x}_1}} + e_1))$. Substituting this in equation~\ref{eqn:robusttag1}, we have
{\small
\begin{align}\nonumber
t'-t'_{f_{\mathbf{x}_1}}&=\Big[\big[s_2(y'_{2\mathbf{x}_1}+e_2)^{r+2}+ {\sum}_{i=1}^r s'_i(y'_{2\mathbf{x}_1}+e_2)^i\big]_{1 \cdots t} + \\\nonumber
&\qquad (y'_{1\mathbf{x}_1}+e_1)^{3}+ s_1 (y'_{1\mathbf{x}_1}+e_1) \Big] -\\\nonumber
&\qquad \Big[\big[s_{2{f_{\mathbf{x}_1}}}(y'_{2{\mathbf{x}_1}})^{r+2} + {\sum}_{i=1}^r s'_{i{f_{\mathbf{x}_1}}}(y'_{2{\mathbf{x}_1}})^i\big]_{1 \cdots t}\\\label{eqn:robusttag111}
&\qquad + (y'_{1{\mathbf{x}_1}})^{3}+ s_{1{f_{\mathbf{x}_1}}} y'_{1{\mathbf{x}_1}}\Big].
\end{align}
}
It is a polynomial in two variables $y'_{2{\mathbf{x}_1}}$ and $y'_{1{\mathbf{x}_1}}$ of degree at most $(r+2)$.
There may be two cases: either $e_1 \ne 0$ or $e_1=0$.
{\bf Case 1.} Let $e_1 \ne 0$. Then equation~\ref{eqn:robusttag111} implies that there are at most two values of $y'_{1\mathbf{x}_1}$ for any fixed value of $y'_{2\mathbf{x}_1}$.
From equation~\ref{eqn:robx1}, we obtain that, for each
value of $y'_{1\mathbf{x}_1}$, there exist at most $(r+2)2^{n-2t}$ values of $y'_{2\mathbf{x}_1}$ since $s_{2{f_{\mathbf{x}_1}}} \ne 0$. Therefore, there are at most $2(r+2)2^{n-2t}$ values of $(y'_{2\mathbf{x}_1} \parallel y'_{1\mathbf{x}_1})$ (i.e. $\mathbf{x}_1$) that satisfy both the equation~\ref{eqn:robxappn} and equation~\ref{eqn:robx1} (and hence also equation~\ref{eqn:robusttag111}).
{\bf Case 2.} Let $e_1=0$. There may be two sub-cases: either $e_2 \ne 0$ or $e_2=0$.
{\bf Subcase 2(i).} Let $e_2 \ne 0$. Then equation~\ref{eqn:robusttag111} implies that there are at most $(r+2)2^{n-2t}$ values of $y'_{2\mathbf{x}_1}$ for any fixed value of $y'_{1\mathbf{x}_1}$ (note that $s_2 \ne 0$). From equation~\ref{eqn:robx1}, for each value of $y'_{2\mathbf{x}_1}$, there exist at most three values of $y'_{1\mathbf{x}_1}$. Consequently, there are at most $3(r+2)2^{n-2t}$ values of $(y'_{2\mathbf{x}_1} \parallel y'_{1\mathbf{x}_1})$ (i.e. $\mathbf{x}_1$) that satisfy both equation~\ref{eqn:robxappn} and equation~\ref{eqn:robx1} (and hence also equation~\ref{eqn:robusttag111}).
{\bf Subcase 2(ii).} Let $e_2 =0$. Then $y'_2=y'_{2{\mathbf{x}_1}}$ and $y'_1=y'_{1{\mathbf{x}_1}}$. Proceeding in the same way as in the {\it substitution attack} part of the proof of Lemma~\ref{lemma:mac2}, we obtain that there are at most $3(r+2)2^{n-2t}$ values of $(y'_{2\mathbf{x}_1} \parallel y'_{1\mathbf{x}_1})$ (i.e. $\mathbf{x}_1$) that satisfy both equation~\ref{eqn:robxappn} and equation~\ref{eqn:robx1} (and hence also equation~\ref{eqn:robusttag111}).
Therefore, in any case, there are at most $3(r+2)2^{n-2t}$ values of $(y'_{2{\mathbf{x}_1}} \parallel y'_{1{\mathbf{x}_1}})$ (i.e. $\mathbf{x}_1 $) that satisfy both the equation~\ref{eqn:robxappn} and equation~\ref{eqn:robx1} (and hence also equation~\ref{eqn:robusttag111}).
Let $\mathbf{X}_1$, $Y'_{2{\mathbf{x}_1}}$ and $Y'_{1{\mathbf{x}_1}}$ be the RVs corresponding to $\mathbf{x}_1$, $y'_{2{\mathbf{x}_1}}$ and $y'_{1{\mathbf{x}_1}}$ respectively. We assume that the only way to attack the MAC and create a forged ciphertext is to guess the secret key of the MAC. The adversary tries to guess $\mathbf{x}_1$ such that the inequality $-\log(P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x}_1 |\mathbf{y} )) \le \nu$ holds, where $\mathbf{y}$ is the secret key of Bob. That is, the adversary tries to guess $\mathbf{x}_1$ such that $P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x}_1 |\mathbf{y} ) \ge 2^{-\nu}$. To maximize the chance that the inequality $P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x}_1 |\mathbf{y} ) \ge 2^{-\nu}$ holds, the adversary would choose the point $\mathbf{x}_1$ that maximizes the total probability mass of $\mathbf{Y}$ within the set $\{\mathbf{y} : P_{\mathbf{X} |\mathbf{Y} }(\mathbf{x}_1 |\mathbf{y} ) \ge 2^{-\nu}\}$. In addition, the adversary is given $\mathbf{z}$ and the authenticated message $c$. Therefore, an adversary can guess
$\mathbf{x}_1$ with probability at most $2^{-H_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} | \mathds{Z} = \mathbf{z} , T'=t')}$ (Lemma~\ref{fuzzyentcor} proves that guessing $\mathbf{x}_1$ given $\mathds{Z}$ and $C$ in this way is better than guessing Bob's secret key $\mathbf{y}$ given $\mathds{Z}$ and $C$). Consequently,
each value $\mathbf{x}_1$ in $\mathcal{R}$ occurs with probability at most $2^{-H_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} | \mathds{Z} = \mathbf{z} , T'=t')}$.
Since the size of the support of RV $T'$ is $2^t$, from Lemma~\ref{lemma:fuzzy}, we obtain
$\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} | \mathds{Z}, T') \ge \tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} | \mathds{Z}) - t$.
Thus,
{\small
\begin{align}\nonumber
\delta_{\mathbf{x}_1} &\le \text{Expected probability that an adversary can generate a forged} \\\nonumber
&\quad\quad\text{authenticated message $(t'_{f_{\mathbf{x}_1}},s'_{f_{\mathbf{x}_1}},s_{f_{\mathbf{x}_1}})$ corresponding to } \\\nonumber
&\quad\quad\text{$\mathbf{x}_1 $ given the message $c=(t',s',s)$ corresponding to $\mathbf{x} $} \\\nonumber
&=\mathbb{E}_{(c, \mathbf{z}) \leftarrow (C, \mathds{Z} )}\Big[\mathsf{Pr}_{(Y'_{2{\mathbf{x}_1}}\parallel Y'_{1{\mathbf{x}_1}}) }\big[ t'_{f_{\mathbf{x}_1}}=\big[s_{2{f_{\mathbf{x}_1}}}(y'_{2{\mathbf{x}_1}})^{r+2}+ \\\nonumber
&\quad\quad \sum_{i=1}^r s'_{i{f_{\mathbf{x}_1}}}(y'_{2{\mathbf{x}_1}})^i\big]_{1 \cdots t} + (y'_{1{\mathbf{x}_1}})^{3}+ s_{1{f_{\mathbf{x}_1}}} y'_{1{\mathbf{x}_1}} \\\nonumber
&\quad\quad{\text{ $|$ } C=c, \mathds{Z} = \mathbf{z} \big]\Big]} \\\nonumber
&=\mathbb{E}_{(c, \mathbf{z}) \leftarrow (C, \mathds{Z} )}\Big[\mathsf{Pr}_{(Y'_{2{\mathbf{x}_1}}\parallel Y'_{1{\mathbf{x}_1}}) }\big[ t'_{f_{\mathbf{x}_1}}=\big[f(y'_{2{\mathbf{x}_1}},s'_{f_{\mathbf{x}_1}},s_{f_{\mathbf{x}_1}})\big]_{1 \cdots t} \\\nonumber
&\qquad + (y'_{1{\mathbf{x}_1}})^{3}+ s_{1{f_{\mathbf{x}_1}}} y'_{1{\mathbf{x}_1}} \\\nonumber
&\qquad \wedge t' = \big[f(y'_2,s',s)\big]_{1 \cdots t} + (y'_1)^{3}+ s_1 y'_1 \text{ $|$ } C=c, \mathds{Z} = \mathbf{z} \big]\Big] \\\nonumber
&\le \mathbb{E}_{(c, \mathbf{z} ) \leftarrow (C, \mathds{Z} )}\big[3(r+2)2^{n-2t}2^{-H_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} = \mathbf{z} , T'=t')}\big] \\\nonumber
&= 3(r+2)2^{n-2t}\mathbb{E}_{(c, \mathbf{z} ) \leftarrow (C, \mathds{Z} )}\big[2^{-H_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} = \mathbf{z} ,T'=t')}\big] \\\nonumber
&= 3(r+2)2^{n-2t}\big[2^{-\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} ,T')}\big] \\\nonumber
&\le 3(r+2)2^{n-2t}2^{-(\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} )-t)} \\\label{eqn:robsub2}
&= 3(r+2)2^{-(t+\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} )-n)}.
\end{align}
}
Consequently, from equations~\ref{eq:forgetagappn},~\ref{eqn:forgedletax} and~\ref{eqn:robsub2}, we conclude that, after observing a message, the expected probability that an adversary will be able to forge a message is at most \\$\max\{3(r+2)2^{-(t+n\tilde{H}_{\infty}(X |Z )-n)},3(r+2)2^{-(t+\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} )-n)}\}$
$=3(r+2)\big( 2^{-(t-n+\min\{n\tilde{H}_{\infty}(X |Z ),\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} )\})}\big)$ $ \le \delta$ (if $t \ge n + \log\big(\frac{3(r+2)}{\delta}\big) - \min\{n\tilde{H}_{\infty}(X |Z ),\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} )\}$).
Therefore, if $t \ge n + \log\big(\frac{3(r+2)}{\delta}\big) - \min\{n\tilde{H}_{\infty}(X |Z ),\tilde{H}_{\nu,\infty}^{\mathsf{fuzz}}(\mathbf{X} |\mathds{Z} )\}$, the OW-SKA$_a$~ protocol $\mathsf{OWSKA}_{a}$ given in construction~\ref{owska:robust} has robustness $\delta$.
\end{proof}
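As with Theorem~\ref{Thm:ikemotsecurity}, the bound of Theorem~\ref{mac2:ctxt} is easy to evaluate numerically. The short illustrative Python snippet below, with assumed example values for $n$, $r$, $\delta$ and the two entropy terms, computes the smallest tag length $t$ that guarantees robustness $\delta$.
{\footnotesize
\begin{verbatim}
import math

n, r = 1024, 9                   # assumed parameters (r odd)
delta = 2.0**-40                 # target robustness
n_hmin = 0.75 * n                # assumed n * H~_inf(X|Z)
h_fuzz = 0.60 * n                # assumed H~fuzz_{nu,inf}(X|Z)

# Theorem (mac2:ctxt):
# t >= n + log2(3(r+2)/delta) - min{n*H~_inf(X|Z), H~fuzz}
t_min = n + math.log2(3 * (r + 2) / delta) - min(n_hmin, h_fuzz)
print(f"t >= {math.ceil(t_min)} bits for robustness 2^-40")
\end{verbatim}
}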
\vspace{-.5em}
\section{Concluding remarks}\label{conclusion}
\vspace{-.3em}
We proposed an OW-SKA$_a$~protocol in the source model, proved its security and robustness, and derived the length of the established key.
To our knowledge, there is no explicit construction of OW-SKA$_a$~ to compare our protocol with.
There are numerous OW-SKA$_p$~ constructions (Section~\ref{ap:related}) that can serve as the basis of an OW-SKA$_a$~.
Our construction in Section~\ref{constructionikem} is based on the construction in~\cite{sharif2020}, and uses a new MAC construction (Section~\ref{robustness}).
Interesting directions for future work include improving the efficiency of decoding (which is currently exponential) and proving the capacity-achieving property of the protocol.
\bibliographystyle{abbrv}
|
{
"arxiv_id": "2302.13180",
"language": "en",
"timestamp": "2023-02-28T02:11:59",
"url": "https://arxiv.org/abs/2302.13180",
"yymm": "2302"
} | \section{Introduction}
\IEEEPARstart{D}{ouble-scattering} fading conditions often appear in radio propagation environments like vehicular communications \cite{Bithas2016,Ai2018,Bithas2018}, unmanned aerial vehicle-enabled communications \cite{Bithas2020}, backscattering systems \cite{Devineni2019,Unai2020}, indoor mobile \cite{Vinogradov2015}, or land mobile satellite channels \cite{Nikolaidis2018}. A family of multiple-order scattering fading channels was originally defined by Andersen \cite{Andersen2002} and formalized by Salo \cite{Salo2006}, so that a finite number of increasing-order scattering terms is considered. Even for its simplest formulation that consists of a \ac{LoS} component plus Rayleigh \textit{and} \ac{dR} diffuse components, referred to as the \ac{SOSF} model, its mathematical complexity and potential numerical instability have limited the applicability of this otherwise useful fading model. Only recently, an alternative formulation for the \ac{SOSF} model proposed in \cite{Lopez2018} provided a reasonably simpler approach that fully avoided the original numerical issues suffered by the model, and the \ac{MGF} of the \ac{SOSF} model was derived for the first time.
Several state-of-the-art fading models incorporate the ability to model random amplitude fluctuations of dominant specular waves associated with \ac{LoS} propagation. Relevant examples include popular fading models like Rician shadowed \cite{Abdi2003} and its generalizations \cite{Paris2014,Romero2022}. Recently, a \ac{fdRLoS} fading model was formulated as a combination of a randomly fluctuating \ac{LoS} component plus a \ac{dR} diffuse one, {although the first-order Rayleigh-like component also present in the original \ac{SOSF} model is neglected}. In this work, we define a natural generalization of the \ac{SOSF} model to incorporate random fluctuations on its \ac{LoS} component, for which the moniker \ac{fSOSF} is proposed. The newly proposed model is able to capture the same propagation conditions as the baseline \ac{SOSF} model, and makes it possible to tune the amount of fluctuation suffered by the dominant component through one additional parameter. Interestingly, the addition of a new parameter for this model does not penalize its mathematical tractability, and the resulting expressions for its chief statistics have the same functional form (even simpler in some cases) {as those of the} original \ac{SOSF} model. The applicability of the \ac{fSOSF} model for performance analysis purposes is also exemplified through several illustrative examples.
\textit{Notation}: $\mathbb{E}\{X\}$ and $|X|$ denote the statistical average and the modulus of the complex \ac{RV} $X$ respectively. The \ac{RV} $X$ conditioned to $Y$ will be denoted as $X|Y$. The symbol $\sim$ reads as \emph{statistically distributed as}. The symbol $\stackrel{d}{=}$ reads as \emph{equal in distribution}. A circularly symmetric normal \ac{RV} $X$ with mean $\mu$ and variance $\Omega$ is denoted as $X\sim \mathcal{N}_c(\mu,\Omega)$.
\section{Physical model}
\label{Sec:The system model}
Based on the original formulation of the \ac{SOSF} model introduced by Andersen \cite{Andersen2002} and Salo \cite{Salo2006}, let us consider the following definition for the received signal $S$ as
\begin {equation}
S=\omega_0 \sqrt{\xi} e^{j\phi}+\omega_1 G_1+\omega_2 G_2 G_3,
\label{Eq:Modelo_fSOSF}
\end{equation}
where $\omega_0 e^{j\phi}$ is the dominant specular component classically associated with \ac{LoS} propagation, with $\omega_0$ being a constant value and $\phi$ a \ac{RV} uniformly distributed in $[0,2\pi)$. The \acp{RV} $G_1$, $G_2$ and $G_3$ are distributed as independent zero-mean, unit-variance complex normal variables, i.e., $G_i\sim\mathcal{N}_c(0,1)$ for $i=1,2,3$. {The constant parameters $\omega_0$, $\omega_1$ and $\omega_2$ act as scale weights for the \ac{LoS}, Rayleigh and \ac{dR} components, respectively}. Now, the key novelty of the model in \eqref{Eq:Modelo_fSOSF} lies in its ability to incorporate random fluctuations into the \ac{LoS} similarly to state-of-the-art fading models in the literature \cite{Abdi2003,Paris2014} through $\xi$, which is a Gamma distributed \ac{RV} with unit power and real positive shape parameter $m$, with \ac{PDF}:
\begin{equation}
f_{\xi}(u)=\frac{m^mu^{m-1}}{\Gamma(m)}e^{-m u},
\end{equation}
where $\Gamma(\cdot)$ is the gamma function. The severity of \ac{LoS} fluctuations is captured through the parameter $m$, with the fading severity being inversely proportional to this shape parameter. In the limit case of $m\rightarrow\infty$, $\xi$ degenerates to a deterministic unitary value and the \ac{LoS} fluctuation vanishes, thus collapsing into the original \ac{SOSF} distribution.
Besides $m$, the \ac{fSOSF} model is completely defined by the constants $\omega_0$, $\omega_1$ and $\omega_2$. Typically, an alternative set of parameters is used in the literature for the baseline \ac{SOSF} model, i.e. ($\alpha, \beta$), defined as
\begin {equation}
\alpha=\frac{\omega_2^2}{\omega_0^2+\omega_1^2+\omega_2^2},\;\;\;\;\;\beta=\frac{\omega_0^2}{\omega_0^2+\omega_1^2+\omega_2^2}.
\label{Eq:alpha_beta}
\end{equation}
Assuming a normalized channel (i.e., $\mathbb{E}\{|S|^2\}=1$) so that $\omega_0^2+\omega_1^2+\omega_2^2=1$, the parameters $(\alpha,\beta)$ are constrained to the triangle $\alpha \geq 0$, $\beta \geq 0$ and $\alpha+\beta \leq 1$.
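For readers who wish to experiment with the model, the following Python sketch draws samples of the instantaneous SNR $\gamma=\overline\gamma|S|^2$ directly from \eqref{Eq:Modelo_fSOSF}, under the unit-power normalization above so that $\omega_2=\sqrt{\alpha}$, $\omega_0=\sqrt{\beta}$ and $\omega_1=\sqrt{1-\alpha-\beta}$. The function name and interface are ours, not part of the paper.
\begin{verbatim}
import numpy as np

def sample_fsosf_snr(alpha, beta, m, gamma_bar, size, rng=None):
    """Monte Carlo samples of gamma = gamma_bar*|S|^2 for the fSOSF model."""
    rng = np.random.default_rng() if rng is None else rng
    w0, w1, w2 = np.sqrt(beta), np.sqrt(1.0 - alpha - beta), np.sqrt(alpha)
    xi = rng.gamma(shape=m, scale=1.0 / m, size=size)  # unit-mean Gamma fluctuation
    phi = rng.uniform(0.0, 2.0 * np.pi, size)          # uniform LoS phase
    cn = lambda: (rng.standard_normal(size)
                  + 1j * rng.standard_normal(size)) / np.sqrt(2.0)  # CN(0,1)
    g1, g2, g3 = cn(), cn(), cn()
    s = w0 * np.sqrt(xi) * np.exp(1j * phi) + w1 * g1 + w2 * g2 * g3
    return gamma_bar * np.abs(s) ** 2

# sanity check: E{|S|^2} should be close to 1
print(sample_fsosf_snr(0.1, 0.7, 3, 1.0, 10 ** 6).mean())
\end{verbatim}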
\section{Statistical Characterization}
\label{Sec:3}
Let us define the instantaneous \ac{SNR} $\gamma=\overline\gamma|S|^2$, where $\overline\gamma$ is the average \ac{SNR}. The model in \eqref{Eq:Modelo_fSOSF} reduces to the \ac{SOSF} one \cite{Salo2006} when conditioning to $\xi$. However, it is possible to find an alternative pathway to connect this model with a different underlying model in the literature, so that its mathematical formulation is simplified.
According to \cite{Lopez2018}, the \ac{SOSF} model can be seen as a Rician one when conditioning to $x=|G_3|^2$. Hence, this observation can be leveraged to formulate the \ac{fSOSF} model in terms of an underlying Rician shadowed one, as shown in the sequel. For the \ac{RV} $\gamma$ we can express:
\begin {equation}
\gamma=\overline\gamma|\omega_0 {\color{black}\sqrt{\xi}}e^{j\phi}+\omega_1 G_1+\omega_2 G_2 G_3|^2.
\label{Eq:gamma}
\end{equation}
Since $G_3$ is a complex Gaussian RV, we reformulate $G_3=|G_3|e^{j\Psi}$, where $\Psi$ is uniformly distributed in $[0,2\pi)$. Because $G_2$ is a circularly-symmetric \ac{RV}, $G_2$ and $G_2e^{j\Psi}$ are equivalent in distribution, so that the following equivalence holds for $\gamma$
\begin {equation}
\gamma\stackrel{d}{=}\overline\gamma|\omega_0 {\color{black}\sqrt{\xi}}e^{j\phi}+\omega_1 G_1+\omega_2 G_2| G_3||^2.
\label{Eq:gamma2}
\end{equation}
Conditioning on $x=|G_3|^2$, define the conditioned \ac{RV} $\gamma_x$ as
\begin {equation}
\gamma_x\triangleq\overline\gamma|\omega_0 {\color{black}\sqrt{\xi}}e^{j\phi}+\omega_1 G_1+\omega_2 \sqrt{x} G_2|^2.
\label{Eq:gamma3}
\end{equation}
where the last two terms correspond to the sum of two RVs distributed as $\mathcal{N}_c(0;\omega_1^2)$ and $\mathcal{N}_c(0;\omega_2^2x)$, respectively, which is equivalent to a single RV distributed as $\mathcal{N}_c(0;\omega_1^2+\omega_2^2x)$. With all these considerations, $\gamma_x$ is distributed according to a squared Rician shadowed \ac{RV} \cite{Abdi2003} with parameters $m$ and
\begin{align}
\overline{\gamma}_x&=\overline{\gamma}\left(\omega_0^2+\omega_1^2+x \omega_2^2\right)=\overline{\gamma}(1-\alpha(1-x)), \label{Eq_gamma_x}\\
K_x&=\frac{\omega_0^2}{\omega_1^2+x \omega_2^2}=\frac{\beta}{1-\beta-\alpha(1-x)} \label{Eq_K_x}.
\end{align}
We note that these parameter definitions include as a special case the model in \cite{Lopez2022}, which corresponds to $\omega_1^2=0$. In the following set of lemmas, the main statistics of the \ac{fSOSF} distribution are introduced for the first time in the literature; these include the \ac{PDF}, \ac{CDF}, \ac{GMGF} and the moments.
\begin{lemma}\label{lemma1}
Let $\gamma$ be an \ac{fSOSF}-distributed \ac{RV} with shape parameters $\{\alpha,\beta,m\}$, i.e., $\gamma\sim\mathcal{F}_{\rm SOSF}\left(\alpha,\beta,m;\overline\gamma\right)$. Then, the \ac{PDF} of $\gamma$ is given by
\begin{align}\label{eqpdf1}
f_\gamma(\gamma)=\int_{0}^{\infty}&\tfrac{m^m(1+K_x)}{(m+K_x)^m\overline\gamma_x}e^{-\tfrac{1+K_x}{\overline\gamma_x}\gamma-x}\times\nonumber\\&{}_1F_{1}\left(m;1;\tfrac{K_x(1+K_x)}{K_x+m}\tfrac{\gamma}{\overline\gamma_x}\right)dx,
\end{align}
\begin{align} \nonumber
f_{\gamma}(\gamma)=&\sum_{j=0}^{m-1} \tbinom{m-1}{j} \frac{\gamma^{m-j-1} e^{\tfrac{m(1-\alpha-\beta)+\beta}{m\alpha}}\left( \tfrac{\beta}{m}\right)^{m-j-1} }{(\overline{\gamma})^{m-j}(m-j-1)! \alpha^{2m-j-1}}\times \\
&\sum_{r=0}^{j} \binom{j}{r} \left( \tfrac{-\beta}{m}\right)^{j-r} \alpha^{r} \times \nonumber \\
&\Gamma\left(r-2m+j+2, \tfrac{m(1-\alpha-\beta)+\beta}{m\alpha}, \tfrac{\gamma}{\alpha\overline{\gamma}} \right),
\label{Eq_PDF}
\end{align}
for $m\in\mathbb{R}^+$ and $m\in\mathbb{Z}^+$, respectively, and where $_1F_{1}\left(\cdot;\cdot;\cdot\right)$ and $\Gamma(a,z,b)=\int_{z}^{\infty}t^{a-1} e^{-t}e^{\tfrac{-b}{t}}dt$ are Kummer's hypergeometric function and a generalization of the incomplete gamma function defined in \cite{CHAUDHRY199499}, respectively.
\end{lemma}
\begin{proof}
See Appendix \ref{ap1}.
\end{proof}
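As a sketch of how the integral form \eqref{eqpdf1} can be evaluated for arbitrary real $m$, the routine below performs the integration over $x$ numerically and can be cross-checked against a crude density estimate from the sampling routine sketched at the end of Section~\ref{Sec:The system model}; the tolerance and bin-width choices are ours.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp1f1

def fsosf_pdf(g, alpha, beta, m, gamma_bar):
    """Numerical evaluation of the PDF integral of Lemma 1 (real m)."""
    def integrand(x):
        gx = gamma_bar * (1.0 - alpha * (1.0 - x))    # conditional average SNR
        Kx = beta / (1.0 - beta - alpha * (1.0 - x))  # conditional Rician K factor
        c = (1.0 + Kx) / gx
        return (m ** m * (1.0 + Kx) / ((m + Kx) ** m * gx)
                * np.exp(-c * g - x)
                * hyp1f1(m, 1.0, Kx * c * g / (Kx + m)))
    return quad(integrand, 0.0, np.inf)[0]

# crude Monte Carlo cross-check around g = 1 (reusing sample_fsosf_snr)
s = sample_fsosf_snr(0.1, 0.7, 2.5, 2.0, 10 ** 6)
print(fsosf_pdf(1.0, 0.1, 0.7, 2.5, 2.0),
      np.mean(np.abs(s - 1.0) < 0.05) / 0.1)
\end{verbatim}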
\begin{lemma}\label{lemma2}
Let $\gamma\sim\mathcal{F}_{\rm SOSF}\left(\alpha,\beta,m;\overline\gamma\right)$. Then, the \ac{CDF} of $\gamma$ is given by
\begin{align} \nonumber
\label{CDF_SOSFShadowed}
F_{\gamma}(\gamma)=&1-e^{\tfrac{m(1-\alpha-\beta)+\beta}{m\alpha}} \sum_{j=0}^{m-1}\sum_{r=0}^{m-j-1}\sum_{q=0}^{j}\tbinom{m-1}{j}\tbinom{j}{q} \times \\
&\tfrac{(-1)^{j-q} \alpha^{q-r-m+1}}{r!} \left(\tfrac{\gamma}{\overline \gamma}\right)^r \left( \tfrac{\beta}{m}\right)^{m-q-1}\times\\ \nonumber
&\Gamma\left(q-r-m+2,\tfrac{m(1-\alpha-\beta)+\beta}{m\alpha},\tfrac{\gamma}{\alpha \overline \gamma}\right),
\end{align}
for $m\in\mathbb{Z}^+$.
\end{lemma}
\begin{proof}
See Appendix \ref{ap2}.
\end{proof}
\begin{lemma}\label{lemma3}
Let $\gamma\sim\mathcal{F}_{\rm SOSF}\left(\alpha,\beta,m;\overline\gamma\right)$. Then, for $m\in\mathbb{Z}^+$ the \ac{GMGF} of $\gamma$ is given by
\begin{align}
\label{MGF SOSFS_1}
\mathcal{M}_{\gamma}^{(n)}(s)&=\sum_{q=0}^{n}\tbinom{n}{q}\frac{(-1)^{q+1}(m-n+q)_{n-q}(m)_q}{s^{n+1}\bar\gamma \alpha} \nonumber \times \\
&\sum_{i=0}^{n-q}\sum_{j=0}^{q}\sum_{r=0}^{m-1-n+q}\tbinom{n-q}{i}\tbinom{q}{j}\tbinom{m-1-n+q}{r} \nonumber \times \\
&c^{n-q-i}d^{q-j}a(s)^{m-1-n+q-r} \Gamma(1+r+i+j)\nonumber \times \\
&{\rm U}(m+q,m+q-r-i-j,b(s)),
\end{align}
\begin{figure*}[ht!]
\begin{align}
\label{MGF SOSFS_2}
\mathcal{M}_{\gamma}^{(n)}(s)=&\sum_{q=0}^{n-m}\tbinom{n}{q}\tfrac{(-1)^{q+1}(m-n+q)_{n-q}(m)_q}{s^{n+1}\bar\gamma \alpha}\sum_{i=0}^{n-q}\sum_{j=0}^{q}\tbinom{n-q}{i}\tbinom{q}{j} c^{n-q-i}d^{q-j}\times \nonumber\\
&\left[\sum_{k=1}^{n+1-m-q}A_k(s) {\rm U}(k,k,a(s))+ \sum_{k'=1}^{m+q}B_{k'}(s) {\rm U}(k',k',b(s))\right] + \nonumber\\
\vspace{2mm}&\sum_{q=n+1-m}^{n}\tbinom{n}{q}\tfrac{(-1)^{q+1}(m-n+q)_{n-q}(m)_q}{s^{n+1}\bar\gamma \alpha}\sum_{i=0}^{n-q}\sum_{j=0}^{q}\tbinom{n-q}{i}\tbinom{q}{j}c^{n-q-i}d^{q-j} \times \nonumber\\
&\sum_{r=0}^{m-1-n+q}\tbinom{m-1-n+q}{r} a(s)^{m-1-n+q-r} \Gamma(1+r+i+j){\rm U}(m+q,m+q-r-i-j,b(s)).
\end{align}
\hrulefill
\end{figure*}
for $m \geq n+1$, and by \eqref{MGF SOSFS_2}, shown at the top of the next page, for $m < n+1$, where $A_k(s)$ and $B_k(s)$ are the partial fraction expansion coefficients given by
\begin{align}
\label{A_k}
A_k(s)=&\sum_{l=0}^{\sigma_1-k} \tfrac{\tbinom{\sigma_1-k}{l}}{(\sigma_1-k)!}(i+j-\sigma_1+k+l+1)_{\sigma_1-k-l} \times \nonumber \\
&(\sigma_2)_l (-1)^l(-a)^{i+j-\sigma_1+k+l}(b-a)^{-\sigma_2-l}, \nonumber \\
B_k(s)=&\sum_{l=0}^{\sigma_2-k}\tfrac{\tbinom{\sigma_2-k}{l}}{(\sigma_2-k)!}(i+j-\sigma_2+k+l+1)_{\sigma_2-k-l}\times \nonumber \\
&(\sigma_1)_l (-1)^l(-b)^{i+j-\sigma_2+k+l}(a-b)^{-\sigma_1-l},
\end{align}
with $\sigma_1=n+1-m-q$ and $\sigma_2=m+q$, and where ${\rm U}\left(\cdot,\cdot,\cdot \right)$ is Tricomi's confluent hypergeometric function \cite[(13.1)]{NIST}.
\end{lemma}
\begin{proof}
See Appendix \ref{ap3}.
\end{proof}
\begin{lemma}\label{lemma4}
Let $\gamma\sim\mathcal{F}_{\rm SOSF}\left(\alpha,\beta,m;\overline\gamma\right)$. Then, for $m\in\mathbb{Z}^+$ the $n^{\rm th}$ moment of $\gamma$ is given by
\begin{align}
\label{Moments}
\mathbb{E}[\gamma^n]=&(\bar\gamma \alpha)^n \sum_{q=0}^{n}\tbinom{n}{q}(-1)^{q-n}(m-n+q)_{n-q}(m)_q \nonumber \times \\
&\sum_{i=0}^{n-q}\sum_{j=0}^{q}\tbinom{n-q}{i}\tbinom{q}{j}c^{n-q-i}d^{q-j} (i+j)!
\end{align}
\end{lemma}
\begin{proof}
See Appendix \ref{ap4}.
\end{proof}
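The moment expression in Lemma~\ref{lemma4} is simple to transcribe; the sketch below uses the constants $c$ and $d$ defined later in Appendix~\ref{ap3}, and for $n=1$ it returns $\overline\gamma$, as expected from the unit-power normalization. Variable names are ours.
\begin{verbatim}
from math import comb, factorial
from scipy.special import poch          # Pochhammer symbol (z)_k

def fsosf_moment(n, alpha, beta, m, gamma_bar):
    """Direct transcription of the n-th moment of Lemma 4 (integer m)."""
    c = (1.0 - beta) / alpha - 1.0
    d = (1.0 - beta * (m - 1.0) / m) / alpha - 1.0
    total = 0.0
    for q in range(n + 1):
        outer = (comb(n, q) * (-1.0) ** (q - n)
                 * poch(m - n + q, n - q) * poch(m, q))
        inner = sum(comb(n - q, i) * comb(q, j)
                    * c ** (n - q - i) * d ** (q - j) * factorial(i + j)
                    for i in range(n - q + 1) for j in range(q + 1))
        total += outer * inner
    return (gamma_bar * alpha) ** n * total

print(fsosf_moment(1, 0.1, 0.7, 3, 2.0))   # should print gamma_bar = 2.0
\end{verbatim}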
\newcommand\figureSize{1}
\section{Numerical results}
\label{Sec:4}
\begin{figure}[t]
\centering
\includegraphics[width=\figureSize \columnwidth]{figuras/pdffSOSF.pdf}
\caption{\ac{PDF} of \ac{fSOSF} model for different values of $m$. Parameter values are $\alpha=0.1$, $\beta=0.7$ and $\overline{\gamma}_{\rm dB}=3$dB. Theoretical values (\ref{Eq_PDF}) are represented with lines. Markers correspond to \ac{MC} simulations.}
\label{fig:pdf}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\figureSize \columnwidth]{figuras/pdffSOSF2.pdf}
\caption{\ac{PDF} comparison for different values of $\alpha$ and $\beta$. Solid/dashed lines obtained with (\ref{Eq_PDF}) correspond to ($m=1$, $\overline{\gamma}_{\rm dB}=5$dB) and ($m=20$, $\overline{\gamma}_{\rm dB}=1$dB), respectively. Markers correspond to \ac{MC} simulations.}
\label{fig:pdf2}
\end{figure}
{In Fig. \ref{fig:pdf}, we represent the \ac{PDF} of the \ac{fSOSF} fading model in Lemma \ref{lemma1}, for different values of the \ac{LoS} fluctuation severity parameter $m$. Parameter values are $\alpha=0.1$, $\beta=0.7$ and $\overline{\gamma}_{\rm dB}=3$dB. \ac{MC} simulations are also included as a sanity check. See that, as we increase the fading severity of the \ac{LoS} component (i.e., $\downarrow m$), the probability of occurrence of low SNR values increases, as well as the variance of the distribution.}
{In Fig. \ref{fig:pdf2} we analyze the impact of $\alpha$ and $\beta$ on the \ac{PDF} of the fSOSF model. Two scenarios have been considered: one with low fading severity and low average \ac{SNR} ($m=20$, $\overline{\gamma}_{\rm dB}=1 dB$), and another with higher fading severity and higher average \ac{SNR} ($m=1$, $\overline{\gamma}_{\rm dB}=5 dB$).} { In the case of mild fluctuations of the \ac{LoS} component, the effect of $\beta$ dominates in determining the shape of the distribution, with higher $\beta$ values yielding bell-shaped \ac{PDF}s. Conversely, the value of $\alpha$ becomes more influential for the left tail of the distribution. We see that lower values of $\alpha$ make lower SNR values more likely, which implies an overall larger fading severity.}
{Finally, we analyze the \ac{OP} under the \ac{fSOSF} model, which is defined as the probability that the instantaneous SNR takes a value below a given threshold $\gamma_{\rm th}$. It can be obtained from the \ac{CDF} (\ref{CDF_promediado}) as
\begin{equation}
\label{Eq_OP}
{\rm OP}=F_\gamma(\gamma_{\rm th}).
\end{equation}
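Before discussing the numerical results, note that \eqref{Eq_OP} can also be estimated directly by Monte Carlo, reusing the sampling sketch of Section~\ref{Sec:The system model} (function names are ours):
\begin{verbatim}
import numpy as np

def outage_probability_mc(alpha, beta, m, gamma_bar_db, gamma_th_db,
                          n_samples=10 ** 6):
    """OP = Pr{gamma < gamma_th}, estimated from Monte Carlo samples."""
    gamma_bar = 10.0 ** (gamma_bar_db / 10.0)
    gamma_th = 10.0 ** (gamma_th_db / 10.0)
    g = sample_fsosf_snr(alpha, beta, m, gamma_bar, n_samples)
    return np.mean(g < gamma_th)

print(outage_probability_mc(0.1, 0.7, 4, gamma_bar_db=10.0, gamma_th_db=3.0))
\end{verbatim}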
{Fig. \ref{fig:op} shows the \ac{OP} under the \ac{fSOSF} model, for different values of the parameter $m$. Additional parameters are set to $\alpha=0.1$ and $\beta=0.7$, and two threshold values are considered: $\gamma_{\rm th}=3$dB and $\gamma_{\rm th}=1$dB. We see that a $2$dB change in the threshold SNR is translated into a $\sim5$dB power offset in terms of \ac{OP} performance. We observe that as the severity of fading increases (i.e., $\downarrow m$), it becomes less likely to exceed the threshold value $\gamma_{\rm th}$, i.e., the \ac{OP} increases. In all instances, the diversity order (i.e., the down-slope decay of the OP) is one, and the asymptotic OP in \eqref{Eq_OP_asymp} tightly approximates the exact OP, which is given by}
\begin{align}
\label{Eq_OP_asymp}
{\rm OP} & \left(\alpha,\beta,m;\overline\gamma,\gamma_{\rm th} \right)\approx\frac{\gamma_{\rm th}}{\alpha \overline\gamma } \sum_{j=0}^{m-1} \tbinom {m-1}{j} \left(\tfrac{1-\beta-\alpha}{\alpha}\right)^{m-1-j} \times \nonumber\\
&\Gamma(1+j) {\rm U}(m,m-j,\tfrac{1}{\alpha}-\tfrac{\beta}{\alpha}\left(\tfrac{m-1}{m} \right)-1).
\end{align}
Expression \eqref{Eq_OP_asymp} can be readily derived by integration over the asymptotic \ac{OP} of the underlying Rician shadowed model \cite[eq. (22)]{Lopez2022}.
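The asymptotic expression \eqref{Eq_OP_asymp} only involves Tricomi's function and is straightforward to evaluate; the snippet below is a direct transcription (variable names are ours) that can be compared with the Monte Carlo estimate above at large $\overline\gamma$.
\begin{verbatim}
from math import comb, gamma as gamma_fn
from scipy.special import hyperu        # Tricomi's confluent function U(a,b,z)

def outage_probability_asymptotic(alpha, beta, m, gamma_bar, gamma_th):
    """Transcription of the asymptotic OP (integer m, high average SNR)."""
    u_arg = 1.0 / alpha - (beta / alpha) * (m - 1.0) / m - 1.0
    acc = sum(comb(m - 1, j)
              * ((1.0 - beta - alpha) / alpha) ** (m - 1 - j)
              * gamma_fn(1 + j) * hyperu(m, m - j, u_arg)
              for j in range(m))
    return gamma_th / (alpha * gamma_bar) * acc

print(outage_probability_asymptotic(0.1, 0.7, 4,
                                    gamma_bar=10 ** 3.0, gamma_th=10 ** 0.3))
\end{verbatim}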
\begin{figure}[t]
\centering
\includegraphics[width=\figureSize \columnwidth]{figuras/oPfSOSF_2.pdf}
\caption{\ac{OP} as a function of $\overline\gamma$, for different values of $m$. Parameter values are $\alpha=0.1$ and $\beta=0.7$. Solid/dashed lines correspond to $\gamma_{\rm th}=3$ and $\gamma_{\rm th}=1$ respectively. Theoretical values (\ref{Eq_OP}) are represented with lines. Markers correspond to \ac{MC} simulations.}
\label{fig:op}
\end{figure}
\section{Conclusions}
We presented a generalization of Andersen's \ac{SOSF} model by incorporating random fluctuations on its dominant specular component, yet without incurring additional complexity. We provided closed-form expressions for its probability and cumulative distribution functions, as well as for its generalized Laplace-domain statistics and raw moments. Some insights have been provided on how the parameters ($\alpha$, $\beta$ and $m$) affect propagation, and its application to performance analysis has been exemplified through an outage probability analysis.
\appendices
\section{ Proof of Lemma~\ref{lemma1}}
\label{ap1}
Noting that $x=|G_3|^2$ is exponentially distributed with unitary mean, we can compute the distribution of $\gamma$ by averaging over all possible values of $x$ as:
\begin{equation}
f_{\gamma}(\gamma)=\int_0^\infty f_{\gamma_x}(\gamma;x) e^{-x}dx.
\label{Eq_int_PDF}
\end{equation}
The \ac{PDF} of $\gamma_x$ is that of a squared Rician shadowed \ac{RV}, which for integer $m$ is given by \cite[eq. (5)]{Martinez2017}
\begin{equation}
f_{\gamma_x}(\gamma;x)=\sum_{j=0}^{m-1}B_j \left( \tfrac{m-j}{\omega_B}\right)^{m-j} \tfrac{\gamma^{m-j-1}}{(m-j-1)!} e^{-\frac{\gamma(m-j)}{\omega_B}},
\label{Eq_PDF_RS}
\end{equation}
where
\begin{equation}
B_j=\binom{m-1}{j} \left( \tfrac{m}{K_x+m}\right)^j \left( \tfrac{K_x}{K_x+m}\right)^{m-j-1},
\label{Eq_Bj}
\end{equation}
\begin{equation}
\omega_B=(m-j)\left( \tfrac{K_x}{K_x+m}\right)\left( \tfrac{\overline{\gamma}_x}{1+K_x}\right).
\label{Eq_wB}
\end{equation}
with $\bar{\gamma}_x$ and $K_x$ given in \eqref{Eq_gamma_x} and \eqref{Eq_K_x}, respectively.
\color{black} Substituting (\ref{Eq_PDF_RS}) into (\ref{Eq_int_PDF}), using the change of variables $t=\tfrac{1}{\alpha}\left(1-\beta(\tfrac{m-1}{m})-\alpha(1-x)\right)$ and taking into account that ${(t-\tfrac{\beta}{\alpha m})^j=\sum_{r=0}^{j}\binom{j}{r}t^r (-\tfrac{\beta}{\alpha m})^{j-r}}$, the final expression for the PDF is derived.
\section{ Proof of Lemma~\ref{lemma2}}
\label{ap2}
The \ac{CDF} of the \ac{fSOSF} model can also be obtained by averaging the \ac{CDF} of $\gamma_{x}$, i.e., the Rician shadowed \ac{CDF} over the exponential distribution:
\begin{equation}
\label{CDF_promediado}
F_{\gamma}(\gamma)=\int_0^\infty F_{\gamma_x}(\gamma;x) e^{-x}dx.
\end{equation}
\color {black}
For the case of integer $m$, a closed-form expression for the Rician shadowed CDF is presented in \cite[eq. (10)]{Martinez2017}, i.e.
\begin{equation}
\label{CDF_Rice_Shadowed}
F_{\gamma_x}(\gamma;x)=1-\sum_{j=0}^{m-1}B_je^{\frac{-\gamma (m-j)}{\omega_B}} \sum_{r=0}^{m-j-1}\tfrac{1}{r!}\left( \tfrac{\gamma (m-j)}{\omega_B}\right)^r,
\end{equation}
Substituting \eqref{CDF_Rice_Shadowed} in \eqref{CDF_promediado} and following the same approach used in the previous appendix, we obtain the final expression.
\section{ Proof of Lemma~\ref{lemma3}}
\label{ap3}
Following the same procedure, the generalized \ac{MGF} of the \ac{fSOSF} model denoted as $\mathcal{M}_{\gamma}^{(n)}(s)$ can be obtained by averaging the generalized \ac{MGF} of $\gamma_{x}$, i.e., the Rician shadowed generalized \ac{MGF} over the exponential distribution:
\begin{equation}
\label{MGF_promediado}
\textcolor[rgb]{0,0,0}{\mathcal{M}_{\gamma}^{(n)}(s)=\int_0^\infty \mathcal{M}_{\gamma_x}^{(n)}(s;x) e^{-x}dx.}
\end{equation}
\textcolor[rgb]{0,0,0}{A closed-form expression for $M_{\gamma_x}(s;x)$ for integer $m$ is provided in \cite[eq. (26)]{Martinez2017}}
\begin{equation}
\label{MGF_Rician Shadowed}
M_{\gamma_x}(s;x)=\frac{m^m (1+K_x)}{\overline \gamma_x(K_x+m)^m}\frac{\left(s-\frac{1+K_x}{\overline \gamma_x} \right)^{m-1}}{\left(s-\frac{1+K_x}{\overline \gamma_x}\frac{m}{K_x+m} \right)^{m}}
\end{equation}
Substituting \eqref{Eq_gamma_x} and \eqref{Eq_K_x} into \eqref{MGF_Rician Shadowed} the expression for $M_{\gamma_x}(s;x)$ can be rewritten as
\begin{equation}
\label{MGF_Rician Shadowed__F1_F2}
M_{\gamma_x}(s;x)=F_1(s;x)\cdot F_2(s;x),
\end{equation}
where
\begin{align}
\label{MGF_Rician Shadowed_F1_F2_bis}
F_1(s;x)=&-\left[s\bar{\gamma}\left(1-\beta-\alpha(1-x)\right)-1\right]^{m-1},\\
F_2(s;x)=&\left[s\bar{\gamma}\left(1-\beta\left(\tfrac{m-1}{m} \right)-\alpha(1-x)\right)-1\right]^{-m}.
\end{align}
Next, we compute the $n$-th derivative of \eqref{MGF_Rician Shadowed__F1_F2} which yields the Rician shadowed generalized \ac{MGF}
\begin{equation}
\label{MGF_Rician Shadowed_derivative}
\mathcal{M}_{\gamma_x}^{(n)}(s;x)=\tfrac{\partial^n M_{\gamma_x}(s;x)}{\partial s^n}=\sum_{q=0}^{n}\tbinom{n}{q}F_1^{(n-q)}(s;x)\cdot F_2^{(q)}(s;x),
\end{equation}
where $F_i^{(n)}(s;x)$ denotes the $n$-th derivative of $F_i(s;x)$ with respect to $s$.
\begin{align}
\label{F1_derivative}
F_1^{(n-q)}(s;x)=&-(m-n+q)_{n-q} s^{m-1-n+q} (\bar\gamma\alpha)^{m-1} \times \nonumber \\
&(x+a(s))^{m-1-n+q}(x+c)^{n-q}
\end{align}
\begin{align}
\label{F2_derivative}
F_2^{(q)}(s;x)=&(-1)^q (m)_{q} s^{-m-q} (\bar\gamma\alpha)^{-m} \times \nonumber\\
&(x+b(s))^{-m-q}(x+d)^{q},
\end{align}
where $(z)_j$ denotes the Pochhammer symbol and where
\begin{align}
\label{a}
a(s)=&\tfrac{s\bar\gamma(1-\beta-\alpha)-1}{s\bar\gamma\alpha},\\
b(s)=&\tfrac{s\bar\gamma\left(1-\beta\left( \tfrac{m-1}{m}\right)-\alpha\right)-1}{s\bar\gamma\alpha},\\
c=&\tfrac{1-\beta}{\alpha}-1,\\
d=&\tfrac{1-\beta\left( \tfrac{m-1}{m}\right)}{\alpha}-1.
\end{align}
From \eqref{MGF_Rician Shadowed_derivative}, \eqref{F1_derivative} and \eqref{F2_derivative} notice that $\mathcal{M}_{\gamma_x}^{(n)}(s;x)$ is a rational function of $x$ with two real positive zeros at $x=c$ and $x=d$, one real positive pole at $x=b(s)$ and one real positive zero or pole at $x=a(s)$ depending on whether $m\geq n+1$ or not. Integration of \eqref{MGF_promediado} is feasible with the help of \cite[eq. 13.4.4]{NIST}
\begin{equation}
\label{Integral Tricomi Wolfram}
\int_{0}^{\infty}\frac{x^i}{(x+p)^j}e^{-x}dx=\Gamma(i+1){\rm U}(j,j-i,p),
\end{equation}
where ${\rm U}\left(\cdot,\cdot,\cdot \right)$ is Tricomi's confluent hypergeometric function \cite[(13.1)]{NIST}. We need to expand $\mathcal{M}_{\gamma_x}^{(n)}(s;x)$ in partial fractions of the form $\tfrac{x^i}{(x+p)^j}$. Two cases must be considered:
\begin{itemize}
\item $m \geq n+1$
In this case there is only one pole at $x=b(s)$ and no partial fraction expansion is required. Using the fact that $(x+p)^n=\sum_{i=0}^{n}x^ip^{n-i}$, then \eqref{MGF SOSFS_1} is obtained.
\item $m < n+1$
Now, there are two poles ($x=a(s)$, $x=b(s)$). After performing partial fraction expansion we obtain \eqref{MGF SOSFS_2}.
\end{itemize}
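The integral identity \eqref{Integral Tricomi Wolfram} used above can be verified numerically for arbitrary test values, e.g.:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu

i, j, p = 2, 5, 1.7                      # arbitrary test values
lhs = quad(lambda x: x ** i / (x + p) ** j * np.exp(-x), 0.0, np.inf)[0]
rhs = gamma(i + 1) * hyperu(j, j - i, p)
print(lhs, rhs)                          # the two numbers should agree
\end{verbatim}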
\section{ Proof of Lemma~\ref{lemma4}}
\label{ap4}
Using the definition of $\gamma_x$, we can write
\begin{equation}
\label{Momentos_promediado}
\mathbb{E}[\gamma^n]=\int_0^\infty \mathbb{E}[\gamma_x^n] e^{-x}dx,
\end{equation}
where $\mathbb{E}[\gamma_x^n]=\lim_{s \rightarrow 0^{-}} M_{\gamma_x}^{(n)}(s;x)$, with $M_{\gamma_x}^{(n)}(s;x)$ given in \eqref{MGF_Rician Shadowed_derivative}. Taking the limit and using the integral $\int_{0}^{\infty}x^pe^{-x}dx=p!$, the proof is complete.
\bibliographystyle{ieeetr}
|
{
"arxiv_id": "2302.13244",
"language": "en",
"timestamp": "2023-02-28T02:14:03",
"url": "https://arxiv.org/abs/2302.13244",
"yymm": "2302"
} | \section{Introduction}
M.~O.~Bourgoin \cite{1} introduced twisted knot theory as a generalization of knot theory.
Twisted link diagrams are link diagrams on $\mathbb{R}^2$ possibly with some crossings called virtual crossings and with bars, which are short arcs intersecting the arcs of the diagrams. Twisted links are diagrammatically defined as twisted link diagrams modulo isotopies of $\mathbb{R}^2$ and local moves called {\it extended Reidemeister moves}, which are
Reidemeister moves (R1, R2, R3), virtual Reidemeister moves (V1, V2, V3, V4) and twisted moves (T1, T2, T3) depicted in Figure~\ref{tm}. Twisted links correspond to stable equivalence classes of links in oriented three-manifolds which are orientation I-bundles over closed but not necessarily orientable surfaces.
Twisted links are analogous to virtual links introduced by L.~H.~Kauffman \cite{2}. Virtual link diagrams are link diagrams on $\mathbb{R}^2$ possibly with some virtual crossings. Virtual links are defined as virtual link diagrams modulo isotopies of $\mathbb{R}^2$ and local moves called {\it generalized Reidemeister moves} which are
Reidemeister moves (R1, R2, R3) and virtual Reidemeister moves (V1, V2, V3, V4) depicted in Figure~\ref{tm}. Virtual links correspond to stable equivalence classes of links in oriented three-manifolds which are orientation I-bundles over closed oriented surfaces.
\begin{figure}[ht]
\centering
\includegraphics[width=12cm,height=7.5cm]{GR-moves.pdf}
\caption{Extended Reidemeister moves.}
\label{tm}
\end{figure}
The Alexander theorem states that every link is represented as the closure of a braid, and the Markov theorem states that such a braid is unique modulo certain moves, the so-called Markov moves. In virtual knot theory, analogous theorems are established in \cite{kl, sk}.
In this paper we show theorems for twisted links corresponding to the Alexander theorem and the Markov theorem. We also provide a group presentation and a reduced group presentation
of the twisted virtual braid group.
This article is organized as follows.
In Section~\ref{sect:braid},
we state the definition of the twisted virtual braid group and provide a group presentation of the group.
In Section~\ref{sect:Alexander}, the Alexander theorem for twisted links is shown by introducing a method of braiding a given twisted link diagram, which we call the braiding process.
In Section~\ref{sect:Markov}, we give the statement of the Markov theorem for twisted links and prove it. In Section~\ref{sect:exchange}, virtual exchange moves are discussed.
In Section~\ref{sect:reduced}, we give a reduced presentation of the twisted virtual braid group, and concluding remarks.
\section{The twisted virtual braid group}
\label{sect:braid}
Let $n$ be a positive integer.
\begin{definition}
A {\it twisted virtual braid diagram} on $n$ strands (or of degree $n$)
is a union of $n$ smooth or polygonal curves in $\mathbb{R}^2$, which are called {\it strands}, connecting the points $(i,1)$ with the points $(q_i,0)$ $(i=1, \dots, n)$, where $(q_1, \ldots, q_n)$ is a permutation of the numbers $(1, \ldots, n)$, such that these curves are monotonic with respect to the second coordinate, intersections of the curves are transverse double points each equipped with information of a positive, negative or virtual crossing, and
the strands may have {\it bars}, by which we mean short arcs intersecting the strands transversely.
See Figure~\ref{exa}, where the five crossings are
negative, positive, virtual, positive and positive from the top.
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[width=2cm,height=3cm]{example-ktb.pdf}
\caption{A twisted virtual braid diagram on 3 strands.}
\label{exa}
\end{figure}
Here is an alternative definition.
\begin{definition}
Let $E$ be $[0, n+1] \times [0, 1]$ and let $p_2 : E \to [0, 1]$ be the second factor projection. A {\it twisted virtual braid diagram} of $n$ strands (or of degree $n$) is an immersed 1-manifold $b = a_1 \cup \ldots \cup a_n$ in $E$, where $a_1, \ldots, a_n$ are embedded arcs, called {\it strands},
possibly with {\it bars}, by which we mean short arcs intersecting the strands transversely, satisfying the following conditions (1)--(5):
\begin{itemize}
\item[(1)] $\partial b= \{1, 2, \dots, n\} \times \{0, 1\} \subset E$.
\item[(2)] For each $i \in \{1, \ldots, n\}$, $p_2|_{a_i}: a_i \to [0, 1]$ is a homeomorphism.
\item[(3)] The set of multiple points of the strands consists of transverse double points, which are referred to as {\it crossings} of the diagram.
\item[(4)] Each crossing is equipped with information of a positive crossing, a negative crossing or a virtual crossing.
\item[(5)] Every bar avoids the crossings.
\end{itemize}
Let $X(b)$ denote the set of crossings of $b$ and the points on the strands where bars intersect with.
A twisted virtual braid diagram is said to be {\it good} if it satisfies the following condition.
\begin{itemize}
\item[(6)] The restriction map $p_2 |_{X(b)}: X(b) \to [0, 1]$ is injective.
\end{itemize}
\end{definition}
The twisted virtual braid diagram depicted in Figure~\ref{exa} is good.
\begin{definition}
Two twisted virtual braid diagrams $b$ and $b'$ of degree $n$ are {\it equivalent} if there is a finite sequence of twisted virtual braid diagrams of degree $n$, say $b_0, b_1, \dots, b_m$, with $b=b_0$ and $b'=b_m$ such that for each $j = 1, \dots, m$, $b_j$ is obtained from $b_{j-1}$ by one of the following:
\begin{itemize}
\item An isotopy of $E$ keeping the conditions (1)--(5) of a twisted virtual braid diagram.
\item An extended Reidemeister move.
\end{itemize}
A {\it twisted virtual braid} is an equivalence class of twisted virtual braid diagrams.
\end{definition}
The set of twisted virtual braids forms a group, where the product is defined by concatenation as in the braid group, so that $b b'$ is $b$ placed on top of $b'$ when the braid diagrams are drawn vertically.
The twisted virtual braid group is denoted by $TVB_n$.
Let $\sigma_i$, $\sigma_i^{-1}$, $v_i$ $(i=1, \dots, n-1)$ and $\gamma_i$ $(i=1, \dots, n)$ be
twisted virtual braid diagrams depicted in Figure~\ref{gen}.
Twisted virtual braids represented by them will be also denoted by the same symbols.
The group $TVB_n$ is generated by $\sigma_i$, $v_i$ $(i=1, \dots, n-1)$ and $\gamma_i$ $(i=1, \dots, n)$, which we call {\it standard generators}.
\begin{figure}[h]
\centering
\includegraphics[width=12cm,height=2.5cm]{gpelts.pdf}
\caption{Generators of the group of twisted virtual braids.}
\label{gen}
\end{figure}
Figure~\ref{bmoves} shows classical braid moves, corresponding to R2 and R3. Figure~\ref{vbmoves} shows virtual braid moves, corresponding to V2, V3, and V4.
(There are some other moves corresponding to R3 and V4. However, it is well known that those moves are equivalent to the moves in the figure, cf. \cite{kl}.)
\begin{figure}[h]
\centering
\includegraphics[width=8cm,height=3.5cm]{EBraid-moves.pdf}
\caption{Classical braid moves.}
\label{bmoves}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=9cm,height=3cm]{EVBraid-moves.pdf}
\caption{Virtual braid moves.}
\label{vbmoves}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=9cm,height=4cm]{ktbraidmoves.pdf}
\caption{Twisted braid moves.}
\label{moves}
\end{figure}
We call the two moves depicted in the top row of Figure~\ref{moves} {\it twisted braid moves of type ${\rm I}$},
and the move on the left of the second row a {\it twisted braid move of type ${\rm II}$}.
The move on the right of the bottom is called a {\it twisted braid move of type ${\rm III}$} or
{\it of type ${\rm III(+)}$}. When we replace the positive crossings with negative ones, it is called
a {\it twisted braid move of type ${\rm III}$} or {\it of type ${\rm III(-)}$}.
Braid moves corresponding to extended Reidemeister moves are classical braid moves, virtual braid moves and twisted braid moves.
\begin{theorem}\label{thm:StandardPresentation}
The twisted virtual braid group $TVB_n$ is generated by standard generators,
$\sigma_i$, $v_i$ $(i=1, \dots, n-1)$ and $\gamma_i$ $(i=1, \dots, n)$, and
the following relations are defining relations, where $e$ denotes the identity element:
\begin{align}
\sigma_i \sigma_j & = \sigma_j \sigma_i & \text{ for } & |i-j| > 1; \label{rel-height-ss}\\
\sigma_i \sigma_{i+1} \sigma_i & = \sigma_{i+1} \sigma_i \sigma_{i+1} & \text{ for } & i=1,\ldots, n-2; \label{rel-sss}\\
v_i^2 & = e & \text{ for } & i=1,\ldots, n-1; \label{rel-inverse-v}\\
v_i v_j & = v_j v_i & \text{ for } & |i-j| > 1 ; \label{rel-height-vv}\\
v_i v_{i+1} v_i & = v_{i+1} v_i v_{i+1} & \text{ for } & i=1,\ldots, n-2; \label{rel-vvv}\\
\sigma_i v_j & = v_j \sigma_i & \text{ for } & |i-j| >1 ; \label{rel-height-sv}\\
v_i \sigma_{i+1} v_i & = v_{i+1} \sigma_i v_{i+1} & \text{ for } & i=1,\ldots, n-2; \label{rel-vsv}\\
\gamma_i^2 & = e & \text{ for } & i=1,\ldots, n; \label{rel-inverse-b}\\
\gamma_i \gamma_j & = \gamma_j \gamma_i & \text{ for } & i,j=1,\ldots, n; \label{rel-height-bb} \\
\gamma_j v_i & = v_i \gamma_j & \text{ for } & j\neq i, i+1; \label{rel-height-bv}\\
\sigma_i\gamma_j & = \gamma_j\sigma_i & \text{ for } & j\neq i, i+1; \label{rel-height-sb}\\
\gamma_{i+1} v_i & = v_{i} \gamma_i & \text{ for } & i=1,\ldots, n-1; \label{rel-bv} \\
v_{i} \sigma_i v_{i} & = \gamma_{i+1} \gamma_i \sigma_{i} \gamma_i \gamma_{i+1} & \text{ for } & i=1,\ldots, n-1. \label{rel-twist-III}
\end{align}
\end{theorem}
\begin{remark}
Using $(\ref{rel-inverse-v})$, we see that relations $(\ref{rel-vsv})$ and $(\ref{rel-bv})$ are equivalent to the following $(\ref{relC-vsv})$ and $(\ref{relC-vb})$, respectively:
\begin{align}
\sigma_{i+1} & = v_i v_{i+1} \sigma_i v_{i+1} v_i & \text{ for } & i=1,\ldots, n-2, \label{relC-vsv} \\
\gamma_{i+1} & = v_i \gamma_i v_i & \text{ for } & i=1,\ldots, n-1. \label{relC-vb}
\end{align}
\end{remark}
\begin{remark}
There are two kinds of twisted braid moves of type ${\rm I}$ as shown in Figure~\ref{moves}. The left one corresponds to relations $(\ref{rel-bv})$ and the right one to $(\ref{rel-vb})$:
\begin{align}
\gamma_i v_i & = v_i \gamma_{i+1} & \text{ for } & i=1,\ldots, n-1. \label{rel-vb}
\end{align}
Using $(\ref{rel-inverse-v})$, we see that relations $(\ref{rel-bv})$ are equivalent to $(\ref{rel-vb})$. \end{remark}
\begin{remark}
There are two kinds of twisted braid moves of ${\rm III}$; one is type ${\rm III(+)}$ as shown in Figure~\ref{moves}
and the other is type ${\rm III(-)}$. The former
corresponds to relations $(\ref{rel-twist-III})$
and the latter to $(\ref{rel-twist-III-negative})$:
\begin{align}
v_{i} \sigma_i^{-1} v_{i} & = \gamma_{i+1} \gamma_i \sigma_{i}^{-1} \gamma_i \gamma_{i+1}
& \text{ for } & i=1,\ldots, n-1. \label{rel-twist-III-negative}
\end{align}
Using $(\ref{rel-inverse-v})$ and $(\ref{rel-inverse-b})$, we see that relations $(\ref{rel-twist-III})$ are equivalent to $(\ref{rel-twist-III-negative})$.
\end{remark}
\begin{proof}
Note that each of the elements $v_i$ and $\gamma_i$ is its own inverse in $TVB_n$.
Let $\mathcal{S}$ be the set of standard generators of $TVB_n$ and let ${\mathcal{S}}^*$ be the set of standard generators and their inverse elements of $TVB_n$:
\begin{align*}
{\mathcal{S}} &= \{ \sigma_i, v_i \mid i=1, \dots, n-1 \} \cup \{ \gamma_i \mid i=1, \dots, n\}, \\
{\mathcal{S}}^* &= \{ \sigma_i, \sigma_i^{-1}, v_i \mid i=1, \dots, n-1 \} \cup \{ \gamma_i \mid i=1, \dots, n\}.
\end{align*}
Let $b$ be a twisted virtual braid diagram.
When it is good, it is presented uniquely as a concatenation of elements of ${\mathcal{S}}^*$, which we call a {\it preferred word} of $b$.
When it is not good, one can modify it slightly by an isotopy of $E$ keeping the condition of a twisted virtual braid diagram to become good.
Thus, ${\mathcal{S}}$ generates the group $TVB_n$.
Let $b$ and $b'$ be good twisted virtual braid diagrams.
Suppose that $b'$ is obtained from $b$ by an isotopy of $E$ keeping the condition of a twisted virtual braid diagram.
Then they are related by a finite sequence of moves, each of which changes the heights of a pair of points in $X(b)$.
A single height change of a pair of such points corresponds to one of relations
(\ref{rel-height-ss}), (\ref{rel-height-vv}), (\ref{rel-height-sv}), (\ref{rel-height-bb}), (\ref{rel-height-bv}),
(\ref{rel-height-sb}) and variants of (\ref{rel-height-ss}), (\ref{rel-height-sv}) and (\ref{rel-height-sb})
with $\sigma_i$ replaced by $\sigma_i^{-1}$ and/or $\sigma_j$ replaced by $\sigma_j^{-1}$. Note that the variants are consequences of the original relations up to relations (\ref{rel-inverse-v}) and (\ref{rel-inverse-b}).
Thus, we see that the preferred words of $b$ and $b'$ are congruent modulo
relations (\ref{rel-height-ss}), (\ref{rel-height-vv}), (\ref{rel-height-sv}), (\ref{rel-height-bb}), (\ref{rel-height-bv}),
(\ref{rel-height-sb}) and relations (\ref{rel-inverse-v}) and (\ref{rel-inverse-b}).
Suppose that $b'$ is obtained from $b$ by an extended Reidemeister move.
When the move is R2, the change of preferred words corresponds to
$\sigma_i^{\epsilon} \sigma_i^{-\epsilon} = \sigma_i^{-\epsilon} \sigma_i^{\epsilon}$ $(\epsilon \in \{\pm 1\})$, which is a trivial relation.
When the move is R3, it is well known that the change of preferred words corresponds to
a relation which is a consequence of relations (\ref{rel-sss}).
When the move is V2, the change of preferred words corresponds to relations (\ref{rel-inverse-v}).
When the move is V3, the change of preferred words corresponds to relations (\ref{rel-vvv}).
When the move is V4, we may assume that it is the move shown in Figure~\ref{vbmoves}, which
corresponds to relations (\ref{rel-vsv}).
When the move is T1, the change of preferred words corresponds to relations (\ref{rel-bv}) or (\ref{rel-vb}).
When the move is T3, the change of preferred words corresponds to relations (\ref{rel-twist-III}) or (\ref{rel-twist-III-negative}).
Therefore we see that the preferred words of $b$ and $b'$ are congruent to each other modulo all relations
(\ref{rel-height-ss})--(\ref{rel-twist-III}).
Since all relations (\ref{rel-height-ss})--(\ref{rel-twist-III}) are valid in the group $TVB_n$, these relations are defining relations.
\end{proof}
\begin{remark}
The twisted virtual braid group $TVB_n$ is different from the ring group \cite{BH} and from the extended welded braid group \cite{Damiani2017B}.
Brendle and Hatcher \cite {BH} discussed the space of configurations of $n$ unlinked Euclidean circles, called {\it rings}, whose fundamental group is the {\it ring group} $R_n$.
They showed that the ring group is isomorphic to the {\it motion group} of the trivial link of $n$ components in the sense of Dahm \cite{Dahm}.
The ring group has a finite index subgroup isomorphic to the {\it braid-permutation group}, also called the {\it welded braid group}, introduced by Fenn, Rim{\' a}nyi and Rourke \cite{FRR}.
Damiani~\cite{Damiani2017A} studied the ring group from various points of view.
In particular, she introduced in \cite {Damiani2017B} the notion of the {\it extended welded braid group} defined by using diagrams motivated from the work of Satoh~\cite{Satoh}.
Damiani's extended welded braid group is isomorphic to the ring group.
The twisted virtual braid group $TVB_n$ is different from the ring group and the extended welded braid group for $n>2$, since the latter groups admit the relation $v_1 \sigma_2 \sigma_1 = \sigma_2 \sigma_1 v_2$, which does not hold in the twisted virtual braid group $TVB_n$.
\end{remark}
\section{Braid presentation of twisted links}
\label{sect:Alexander}
The closure of a twisted virtual braid (diagram) is defined in a way similar to that for a classical braid.
\begin{Example}
The closure of the twisted virtual braid diagram $\gamma_1\sigma_1^{-1}\gamma_2$ is shown in Figure~\ref{one}.
\begin{figure}[ht]
\centering
\includegraphics[width=2.5cm,height=3cm]{onefoil-1.pdf}
\caption{The closure of braid $\gamma_1\sigma_1^{-1}\gamma_2$.}
\label{one}
\end{figure}
\end{Example}
In this section we show that every twisted link is represented by the closure of a twisted virtual braid diagram (Theorem~\ref{prop:AlexanderB}).
\subsection{Gauss data}
For a twisted link diagram $K$, we prepare some notation:
\begin{itemize}
\item Let $V_R(K)$ be the set of all real crossings of $K$.
\item Let $S(K)$ be the map from $V_R(K)$ to the set $\{+1,-1 \}$ assigning the signs to real crossings.
\item Let $B(K)$ be the set of all bars in $K$.
\item Let $N(v)$ be a regular neighborhood of $v$, where $v \in V_R(K) \cup B(K)$.
\item For $c \in V_R(K)$, we denote by $c^{(1)}, c^{(2)}, c^{(3)}$, and $c^{(4)}$ the four points of $\partial N(c) \cap K$ as depicted in Figure~\ref{r}.
\begin{figure}[h]
\centering
\includegraphics[width=9cm,height=3cm]{nbd.pdf}
\caption{Boundary points of $N(c) \cap K$.}
\label{r}
\end{figure}
\item For $\gamma \in B(K)$, we denote by $\gamma^{(1)}$ and $\gamma^{(2)}$ the two points of $\partial N(\gamma) \cap K$ as depicted in Figure \ref{b}.
\begin{figure}[h]
\centering
\includegraphics[width=1cm,height=3cm]{2nbd.pdf}
\caption{Boundary points of $N(\gamma) \cap K$.}
\label{b}
\end{figure}
\item Put $W=W(K)=Cl(\mathbb{R}^2 \setminus \cup_{v\in V_R(K) \cup B(K)} N(v))$, where $Cl$ means the closure.
\item Let $V^{\partial}_R(K) = \{c^{(j)} | c \in V_R(K), 1\leq j \leq 4\}$, and $B^{\partial}(K) = \{\gamma^{(j)} | \gamma \in B(K), 1\leq j \leq 2\}.$
\item Let $K |_W$ be the restriction of $K$ to $W$, which is a union of some oriented arcs and loops generically immersed in $W$ such that the double points are virtual crossings of $K$, and the set of boundary points of the arcs is the set $V^{\partial}_R(K) \cup B^{\partial}(K).$
\item Let $\mu(K)$ be the number of components of $K$.
\item Define a subset $G(K)$ of $(V^{\partial}_R(K) \cup B^{\partial}(K))^2$ such that $(a,b) \in G(K)$ if and only if $K |_W$ has an oriented arc starting from $a$ and ending at $b$.
\end{itemize}
The {\it Gauss data} of a twisted link diagram $K$ is the quintuple
$$(V_R(K), S(K), B(K), G(K), \mu(K)).$$
\begin{Example}
Let $K$ be a twisted link diagram depicted in Figure~\ref{gd}. When we name the real crossings $c_1$ and $c_2$ as in the figure, the Gauss data is
$$(\{c_1,c_2\}, \{+1,+1\}, \{\gamma_1\},\{ (c_1^{(4)}, c_2^{(2)}), (c_2^{(3)}, c_1^{(2)}), (c_2^{(4)}, \gamma_1^{(1)}), (\gamma_1^{(2)}, c_1^{(1)}), (c_1^{(3)}, c_2^{(1)}) \},1).$$
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=4.5cm]{exgd.pdf}
\caption{A twisted link diagram with one bar.}
\label{gd}
\end{figure}
\end{Example}
We say that two twisted link diagrams $K$ and $K'$ have {\it the same Gauss data} if
$\mu (K) = \mu (K')$ and there exists a bijection $g:V_R(K) \cup B(K) \to V_R(K') \cup B(K')$ satisfying
the following conditions:
\begin{itemize}
\item $g(V_R(K)) = V_R(K')$, and $g(B(K))= B(K')$.
\item $g$ preserves the signs of real crossings; $S(K)(c) =S(K')(g(c))$ for $c \in V_R(K)$.
\item $(a,b) \in G(K)$ if and only if $(g^{\partial}(a), g^{\partial}(b)) \in G(K')$, where $g^{\partial}: V^{\partial}_R(K) \cup B^{\partial}(K) \to V^{\partial}_R(K') \cup B^{\partial}(K')$ is the bijection induced from $g$, i.e., $g^{\partial}(c^{(j)}) = (g(c))^{(j)}$ for $c \in V_R(K), 1\leq j \leq 4$ and
$g^{\partial}(\gamma^{(j)}) = (g(\gamma))^{(j)}$ for $\gamma \in B(K), 1\leq j \leq 2$.
\end{itemize}
Let $K$ be a twisted link diagram and $W=W(K)=Cl(\mathbb{R}^2 \setminus \cup_{v\in V_R(K) \cup B(K)} N(v))$ as before. Suppose that $K'$ is a twisted link diagram with the same Gauss data as $K$. Then by an isotopy of $\mathbb{R}^2$ we can move $K'$ such that
\begin{itemize}
\item $K$ and $K'$ are identical in $N(v)$ for every $v \in V_R(K) \cup B(K)$,
\item $K'$ has no real crossings and bars in $W$, and
\item there is a bijection between the arcs/loops of $K|_W$ and those of $K'|_W$ with respect to the endpoints of the arcs.
\end{itemize}
In this situation, we say that $K'$ is obtained from $K$ {\it by replacing $K|_W$}.
\begin{lemma}\label{lemma:Same Gauss Data}
Let $K$ and $K'$ be twisted link diagrams, and let $W=W(K)=Cl(\mathbb{R}^2 \setminus \cup_{v\in V_R(K) \cup B(K)} N(v))$.
(1) If $K'$ is obtained from $K$ by replacing $K|_W$, then they are related by a finite sequence of isotopies of $\mathbb{R}^2$ with support $W$ and V1, V2, V3, V4, and T1 moves.
(2) If two twisted link diagrams $K$ and $K'$ have the same Gauss data, then $K$ is equivalent to $K'$.
\end{lemma}
\begin{proof}
(1) Let $N_1, N_2, \ldots, N_m $ be regular neighborhoods of the real crossings and bars of $K$.
Let $a_1, a_2, \ldots, a_n $ and $a'_1, a'_2, \ldots, a'_n $ be the arcs/loops of $K|_W$ and $K'|_W$ respectively.
Using an isotopy of $\mathbb{R}^2$ with support $W$, we may assume that the intersections of $a'_1$ with $a_2, \ldots, a_n$ are transverse double points.
The arc/loop $a_1$ is homotopic to $a'_1$ in $\mathbb{R}^2$ (relative to the boundary when $a_1$ is an arc).
Taking the homotopy generically with respect to the arcs/loops $ a_2, \ldots, a_n $, and the 2-disks $N_1, N_2, \ldots, N_m $, we see that the arc/loop $a_1$ can be changed into $a'_1$ by a finite sequence of moves as shown in Figure~\ref{mv} up to isotopy of $\mathbb{R}^2$ with support $W$.
Considering that all crossings in Figure~\ref{mv} are virtual crossings, we regard these moves as V1, V2, V3, V4, and T1 moves.
In this way, we can change $a_1$ into $a_1'$ without changing other arcs/loops of $K|_W$ and $K'|_W$.
Applying this argument inductively, we can change all arcs/loops of $K|_W$ into the corresponding ones of $K'|_W$.
\begin{figure}[h]
\centering
\includegraphics[width=12cm,height=2cm]{movesvir-1.pdf}
\caption{Moves on immersed curves.}
\label{mv}
\end{figure}
(2) Moving $K$ by an isotopy of $\mathbb{R}^2$, we may assume that $K'$ is obtained from $K$ by replacing $K|_W$.
By (1), we obtain the assertion.
\end{proof}
\subsection{Braiding process}
Let $O$ be the origin of $\mathbb{R}^2$ and
identify $\mathbb{R}^2 \setminus \{O\}$ with $\mathbb{R}_+\times S^1$ by polar coordinates, where $\mathbb{R}_+$ is the set of positive numbers. Let $\pi:\mathbb{R}^2 \setminus \{O\}=\mathbb{R}_+\times S^1 \to S^1$ denote the radial projection.
For a twisted link diagram $K$, we denote by $V_R(K)$ the set of real crossings, by $V_B(K)$ the set of points of $K$ at which bars intersect it, and by $X(K)$ the union of the set of all (real or virtual) crossings and $V_B(K)$.
\begin{definition}
A {\it closed twisted virtual braid diagram} is a twisted link diagram $K$
satisfying the following conditions (1) and (2):
\begin{itemize}
\item[(1)] $K$ is contained in $\mathbb{R}^2 \setminus \{O\}$.
\item[(2)] Let $k: \sqcup S^1 \to \mathbb{R}^2 \setminus \{O\}$ be the underlying immersion of $K$, where
$\sqcup S^1$ is a disjoint union of copies of $S^1$.
Then $\pi \circ k: \sqcup S^1 \to S^1$ is a covering map of $S^1$ of degree $n$ which respects the orientations of $\sqcup S^1$ and $S^1$.
\end{itemize}
A closed twisted virtual braid diagram is {\it good} if it satisfies the following condition.
\begin{itemize}
\item[(3)] Let $N_1, N_2, \ldots, N_m $ be regular neighborhoods of the real crossings and bars of $K$. Then $\pi(N_i) \cap \pi(N_j) = \emptyset$ for $i \neq j$.
\end{itemize}
\end{definition}
\begin{prop}\label{prop:AlexanderA}
Every twisted link diagram $K$ is equivalent, as a twisted link, to a good closed twisted virtual braid diagram $K'$ such that $K$ and $K'$ have the same Gauss data.
\end{prop}
\begin{proof}
Let $K$ be a twisted link diagram and let $N_1, N_2, \ldots, N_m $ be regular neighborhoods of the real crossings and bars of $K$.
Moving $K$ by an isotopy of $\mathbb{R}^2$, we may assume that all $N_i$ are in $\mathbb{R}^2 \setminus \{O\}$, $\pi(N_i) \cap \pi(N_j) =\emptyset$ for $i\neq j$ and the restriction of $K$ to $N_i$ satisfies the condition of a closed twisted virtual braid diagram.
Replace the remainder $K|_{W(K)}$ such that the result is a good closed twisted virtual braid diagram $K'$.
Then $K$ and $K'$ have the same Gauss data, and by Lemma~\ref{lemma:Same Gauss Data} they are equivalent as twisted links.
\end{proof}
The procedure in the proof of Proposition~\ref{prop:AlexanderA} turns a given twisted link diagram $K$ into a good closed twisted virtual braid diagram having the same Gauss data as $K$. This is the {\it braiding process} in our paper.
A point $\theta$ of $S^1$ is called a {\it regular value} for a closed twisted virtual braid diagram $K$ if $X(K)\cap\pi^{-1}(\theta)=\emptyset$.
Cutting $K$ along the half line $\pi^{-1}(\theta)$ for a regular value $\theta$, we obtain a twisted virtual braid diagram whose closure is equivalent to $K$.
Thus, Proposition~\ref{prop:AlexanderA} implies the following.
\begin{theorem}\label{prop:AlexanderB}
Every twisted link is represented by the closure of a twisted virtual braid diagram.
\end{theorem}
\section{The Markov theorem for twisted links}
\label{sect:Markov}
In this section we show a theorem on braid presentation of twisted links which is
analogous to the Markov theorem for classical links.
A {\it twisted Markov move of type $0$} or a {\it TM0-move} is a replacement of a twisted virtual braid diagram $b$ with another $b'$ of the same degree such that $b$ and $b'$ are equivalent as twisted virtual braids, i.e., they represent the same element of the twisted virtual braid group.
A {\it twisted Markov move of type $1$} or a {\it TM1-move} is a replacement of a twisted virtual braid (or its diagram) $b$ with $b_1 b b_1^{-1}$ where $b_1$ is a twisted virtual braid (or its diagram) of the same degree with $b$. We also call this move a {\it conjugation}.
A {\it twisted Markov move of type $1$} or a {\it TM1-move} may equivalently be defined as a replacement of a twisted virtual braid (or its diagram) $b = b_1 b_2$ with $b' = b_2 b_1$, where $b_1$ and $b_2$ are twisted virtual braids (or their diagrams) of the same degree.
See Figure~\ref{tm1}.
\begin{figure}[h]
\centering
\includegraphics[width=8cm,height=3cm]{markov-move1.pdf}
\caption{A twisted Markov move of type 1 or a TM1-move.}
\label{tm1}
\end{figure}
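These two formulations agree: writing $b = b_1 b_2$, we have
\begin{align*}
b_2 b_1 = b_1^{-1} (b_1 b_2) \, b_1 = b_1^{-1} b \, b_1,
\end{align*}
so replacing $b_1 b_2$ with $b_2 b_1$ is a conjugation; conversely, the conjugation $b \mapsto b_1 b b_1^{-1}$ replaces $b = (b_1^{-1})(b_1 b)$ with $(b_1 b)(b_1^{-1})$.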
For a twisted virtual braid (or its diagram) $b$ of degree $n$ and non-negative integers $s$ and $t$, we denote by $\iota_s^t(b)$ the twisted virtual braid (or its diagram) of degree $n + s + t$ obtained from $b$ by adding $s$ trivial strands to the left and $t$ trivial strands to the right.
This defines a monomorphism $\iota_s^t: TVB_n \to TVB_{n+s+t}$.
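In terms of the standard generators, $\iota_s^t$ is simply the index shift by $s$:
\begin{align*}
\iota_s^t(\sigma_i) = \sigma_{i+s}, \qquad \iota_s^t(v_i) = v_{i+s}, \qquad \iota_s^t(\gamma_j) = \gamma_{j+s}
\end{align*}
for $i=1,\dots,n-1$ and $j=1,\dots,n$; the $t$ trivial strands added on the right do not affect the indices.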
A {\it stabilization of positive, negative or virtual type} is a replacement of a twisted virtual braid (or its diagram) $b$ of degree $n$ with $\iota_0^1(b)\sigma_n$, $\iota_0^1(b)\sigma_n^{-1}$ or $\iota_0^1(b)v_n$, respectively.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=2cm]{markov-move2.pdf}
\caption{A twisted Markov move of type 2 or a TM2-move.}
\label{tm2}
\end{figure}
A {\it twisted Markov move of type $2$} or a {\it TM2-move} is a stabilization of positive, negative or virtual type, or its inverse operation.
See Figure~\ref{tm2}.
A {\it right virtual exchange move} is a replacement
$$ \iota_0^1(b_1) \sigma_n^{-1} \iota_0^1(b_2) \sigma_n \longleftrightarrow
\iota_0^1(b_1) v_n \iota_0^1(b_2) v_n, $$
and a {\it left virtual exchange move} is a replacement
$$ \iota_1^0(b_1) \sigma_1^{-1} \iota_1^0(b_2) \sigma_1 \longleftrightarrow
\iota_1^0(b_1) v_1 \iota_1^0(b_2) v_1, $$
where $b_1$ and $b_2$ are twisted virtual braids (or their diagrams).
A {\it twisted Markov move of type $3$} or a {\it TM3-move} is a right/left virtual exchange move or its inverse operation.
See Figure~\ref{tm3}.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=3.5cm]{markov-move3.pdf}
\caption{A twisted Markov move of type 3 or a TM3-move.}
\label{tm3}
\end{figure}
\begin{definition}
Two twisted virtual braids (or their diagrams) are {\it Markov equivalent} if they are related by a finite sequence of twisted Markov moves TM1--TM3 (or TM0--TM3 when we discuss them as diagrams).
\end{definition}
\begin{theorem}\label{theorem:MarkovA}
Two twisted virtual braids (or their diagrams) have equivalent closures as twisted links if and only if they are Markov equivalent.
\end{theorem}
\begin{remark}
In Section~\ref{sect:exchange},
it turns out that if two twisted virtual braids (or their diagrams) are related by a left virtual exchange move, then they are related by a sequence of TM1-moves (or TM0-moves and TM1-moves when we discuss them as diagrams) and a right virtual exchange move. Thus we may remove left virtual exchange moves from the definition of Markov equivalence.
\end{remark}
Let $K$ and $K'$ be closed twisted virtual braid diagrams and let $b$ and $b'$ be twisted virtual braid diagrams obtained from $K$ and $K'$ by cutting along $\pi^{-1}(\theta)$ and $\pi^{-1}(\theta')$ for some regular values $\theta$ and $\theta'$.
We say that $K'$ is obtained from $K$ by a {\it twisted Markov move of type $0$} or a {\it TM0-move} if they are equivalent as closed twisted virtual braids.
Note that $K'$ is obtained from $K$ by a TM0-move if and only if $b$ and $b'$ are related by a finite sequence of TM0-moves and TM1-moves.
We say that $K'$ is obtained from $K$ by a {\it twisted Markov move of type $2$} or a {\it TM2-move} if $b$ and $b'$ are related by a TM2-move and some TM1-moves.
We say that $K'$ is obtained from $K$ by a {\it twisted Markov move of type $3$} or a {\it TM3-move} if $b$ and $b'$ are related by a TM3-move and some TM1-moves.
\begin{definition}
Two closed twisted virtual braid diagrams $K$ and $K'$ are {\it Markov equivalent} if they are related by a finite sequence of TM0-, TM2- and TM3-moves.
\end{definition}
\begin{prop}\label{prop:closure}
Two closed twisted virtual braid diagrams $K$ and $K'$ are Markov equivalent if and only if twisted virtual braid diagrams $b$ and $b'$ are Markov equivalent, where $b$ and $b'$ are obtained from $K$ and $K'$ by cutting along $\pi^{-1}(\theta)$ and $\pi^{-1}(\theta')$ for some regular values $\theta$ and $\theta'$.
\end{prop}
\begin{proof}
For a given closed twisted virtual braid diagram $K$, the braid diagram $b$ is uniquely determined up to TM1-moves, since changing the regular value $\theta$ amounts to a cyclic permutation of the braid word. The assertion then follows from the definitions.
\end{proof}
By Proposition~\ref{prop:closure},
Theorem~\ref{theorem:MarkovA} is equivalent to the following theorem.
\begin{theorem}\label{theorem:MarkovB}
Two closed twisted virtual braid diagrams are equivalent as twisted links if and only if they are Markov equivalent.
\end{theorem}
To prove Theorem~\ref{theorem:MarkovB}, we require the following lemma.
\begin{lemma}\label{lem:unique}
Two closed twisted virtual braid diagrams with the same Gauss data are Markov equivalent.
\end{lemma}
\begin{proof}
Let $K$ and $K'$ be closed twisted virtual braid diagrams with the same Gauss data. Modifying them by isotopies of $\mathbb{R}^2 \setminus \{O\}$, we may assume that they are good.
Let $N_1, N_2, \ldots, N_m$ be regular neighborhoods of the real crossings and bars of $K$, and $N'_1, N'_2, \ldots, N'_m$ be regular neighborhoods of the corresponding real crossings and bars of $K'$.
Case (I). Suppose that $\pi(N_1), \pi(N_2), \ldots, \pi(N_m)$ and $\pi(N'_1), \pi(N'_2), \ldots, \pi(N'_m)$ appear in $S^1$ in the same cyclic order.
Modifying $K$ by an isotopy of $\mathbb{R}^2 \setminus \{O\}$ keeping the condition of a good closed twisted virtual braid,
we may assume that $N_1=N'_1, N_2=N'_2, \ldots, N_m=N'_m$ and the restrictions of $K$ and $K'$ to these disks are identical.
Let $a_1, \dots, a_s$ be the arcs/loops of $K|_W$ and $a'_1, \dots, a'_s$ be the corresponding arcs/loops of $K'|_W$.
Let $\theta \in S^1$ be a regular value for $K$ and $K'$ such that $\pi^{-1}(\theta)$ is disjoint from $N_1 \cup \dots \cup N_m$.
If there exists an arc/loop $a_i$ of $K|_W$ such that $|a_i \cap \pi^{-1}(\theta)|\neq |a'_i \cap \pi^{-1}(\theta)|$, then move a small segment of $a_i$ or $a'_i$ toward the origin $O$ by some V2 moves, which are
TM0-moves, and apply some TM2-moves of virtual type so that $|a_i \cap \pi^{-1}(\theta)|=|a'_i \cap \pi^{-1}(\theta)|$ after the modification.
Thus without loss of generality, we may assume that $|a_i \cap \pi^{-1}(\theta)|=|a'_i \cap \pi^{-1}(\theta)|$ for all $i=1, \dots, s$.
Let $k : \sqcup S^1 \to \mathbb{R}^2 \setminus \{O\}$ and $k' : \sqcup S^1 \to \mathbb{R}^2 \setminus \{O\}$ be the underlying immersions of $K$ and $K'$, respectively, such that they are identical near the preimages of the real crossings and bars.
Let $I_1, \ldots, I_s$ be arcs/loops in $\sqcup S^1$ with $k(I_i)=a_i$ and $k'(I_i)=a'_i$ for $i=1, \dots, s$.
Note that $\pi \circ k|_{I_i}$ and $\pi \circ k'|_{I_i}$ are orientation-preserving immersions into $S^1$ with $\pi \circ k|_{\partial I_i}=\pi \circ k'|_{\partial I_i}$.
Since $a_i$ and $a'_i$ have the same degree, we have a homotopy $k_i^t: I_i \to \mathbb{R}^2 \setminus \{O\}$ $(t \in [0,1])$ of $I_i$ relative to the boundary $\partial I_i$ such that $k^0_i=k|_{I_i}$, $k^1_i=k'|_{I_i}$, and $\pi \circ k^t_i$ is an orientation-preserving immersion for each $t$.
Taking such a homotopy generically with respect to the other arcs/loops of $K|_{W}$ and $K'|_{W}$ and the 2-disks $N_1, N_2, \ldots, N_m$, we see that $a_i$ can be transformed to $a'_i$ by a sequence of TM0-moves.
Applying this procedure inductively, we can change $a_1, \dots, a_s$ to $a'_1, \dots, a'_s$ by a sequence of TM0-moves and TM2-moves.
Thus we see that $K$ is transformed into $K'$ by a finite sequence of TM0 and TM2-moves.
Case (II). Suppose that $\pi(N_1), \pi(N_2), \dots, \pi(N_m)$ and $\pi(N'_1), \pi(N'_2), \dots, \pi(N'_m)$ do not appear in $S^1$ in the same cyclic order.
It is sufficient to show that we can interchange the position of two consecutive $\pi(N_i)$'s.
Suppose that we want to interchange $\pi(N_1)$ and $\pi(N_2)$.
(1) Suppose that $N_2$ is a neighborhood of a real crossing. Figure~\ref{c} shows how to interchange $\pi(N_1)$ and $\pi(N_2)$ by TM0-moves and TM2-moves.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=7cm]{cycle-2.pdf}
\caption{Interchange the positions of $N_1$ and $N_2$.}
\label{c}
\end{figure}
(2) Suppose that $N_2$ is a neighborhood of a bar. Figure~\ref{c2} shows how to interchange $\pi(N_1)$ and $\pi(N_2)$ by TM0-moves and TM2-moves.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=4cm]{cycle-1.pdf}
\caption{Interchange the positions of $N_1$ and $N_2$.}
\label{c2}
\end{figure}
\sloppy{
Applying this argument, we can make $\pi(N_1), \pi(N_2), \ldots, \pi(N_m)$ and $\pi(N'_1), \pi(N'_2), \ldots, \pi(N'_m)$ appear in the same cyclic order on $S^1$ using TM0-moves and TM2-moves.
Then we can reduce the case to Case~(I). }
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:MarkovB}]
If two closed twisted virtual braids (or their diagrams) are Markov equivalent then they are equivalent as twisted links.
Conversely, suppose that $K$ and $K'$ are closed twisted virtual braid diagrams which are equivalent as twisted links.
There is a finite sequence of twisted link diagrams, say, $K=K_0, K_1, \ldots, K_n=K'$ such that $K_{i+1}$ is obtained from $K_{i}$ by one of the extended Reidemeister moves.
For each $i = 1, \dots, n-1$, $K_i$ may not be a closed twisted virtual braid diagram.
Let $\widetilde K_i$ be a closed twisted virtual braid diagram obtained from $K_i$ by the braiding process in the previous section.
We assume $K_0=\widetilde K_0$ and $K_n =\widetilde K_n$.
Then for each $i =0,1,\dots, n$, $\widetilde K_i$ and $K_i$ have the same Gauss data.
It is sufficient to prove that $\widetilde K_i$ and $\widetilde K_{i+1}$ are Markov equivalent.
It is shown in \cite{sk} that $\widetilde K_i$ and $\widetilde K_{i+1}$ are Markov equivalent when $K_{i+1}$ is obtained from $K_{i}$ by one of R1, R2, R3, V1, V2, V3, and V4.
(In \cite{sk} virtual links and closed virtual braid diagrams are discussed.
However the argument in \cite{sk} is valid in our current situation.)
Thus, it is sufficient to consider a case that $K_{i+1}$ is obtained from $K_{i}$ by a twisted move T1, T2 or T3.
(1) Let $K_{i+1}$ be obtained from $K_i$ by a T1 move.
Then $K_{i}$ and $K_{i+1}$ have the same Gauss data, and hence $\widetilde K_{i}$ and $\widetilde K_{i+1}$ have the same Gauss data.
By Lemma~\ref{lem:unique}, $\widetilde K_i$ and $\widetilde K_{i+1}$ are Markov equivalent.
(2) Let $K_{i+1}$ be obtained from $K_i$ by a T2 move. Assume that a pair of bars in $K_i$ is removed by the T2 move to obtain $K_{i+1}$.
Let $N$ be a $2$-disk where the T2 move is applied such that $N \cap K_i$ is an arc, say $\alpha$, with two bars and $N \cap K_{i+1}$ is the arc $\alpha$.
Let $N_1$ and $N_2$ be neighborhoods of the two bars such that $N_1 \cup N_2 \subset N$.
By an isotopy of $\mathbb{R}^2$, deform $K_i$, $\alpha$ and $N$ such that $N \cap K_i$ is $\alpha$ with two bars and $\pi|_\alpha: \alpha \to S^1$ is an orientation-preserving embedding.
Let $\widetilde K_i'$ be a closed twisted virtual braid diagram obtained from the deformed $K_i$ by applying the braiding process in the previous section such that $N$ is pointwise fixed, and let $\widetilde K_{i+1}'$ be a closed twisted virtual braid diagram obtained from $\widetilde K_i'$ by removing the two bars intersecting $\alpha$.
Then $\widetilde K_i'$ and $\widetilde K_{i+1}'$ are related by a TM0-move.
Since $\widetilde K_i$ and $\widetilde K_i'$ have the same Gauss data, they are Markov equivalent.
Since $\widetilde K_{i+1}$ and $\widetilde K_{i+1}'$ have the same Gauss data, they are Markov equivalent.
Thus $\widetilde K_i$ and $\widetilde K_{i+1}$ are Markov equivalent.
The case that a pair of bars are introduced to $K_i$ to obtain $K_{i+1}$ is shown similarly.
(3) Let $K_{i+1}$ be obtained from $K_i$ by a T3 move.
There are 4 possible orientations for a T3 move, say T3a, T3b, T3c, and T3d as in Figure~\ref{ot3m}.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=5cm]{t3moves.pdf}
\caption{Oriented T3 moves.}
\label{ot3m}
\end{figure}
First consider the case where $K_{i+1}$ is obtained from $K_i$ by a move T3a or T3b.
Assume that $K_i$ is as in the left and $K_{i+1}$ is as in the right of Figure~\ref{ot3m}. Let $N$ be a $2$-disk where the move is applied.
Then $N \cap K_i$ is a pair of arcs, say $\alpha_1$ and $\alpha_2$, intersecting transversely at a real crossing and there are four bars.
Let $N_1$ be a neighborhood of the real crossing of $K_i$ and $N_2, \dots, N_5$ be neighborhoods of the four bars of $K_i$ in $N$ such that $N_1 \cup \dots \cup N_5 \subset N$.
By an isotopy of $\mathbb{R}^2$, deform $K_i$, $\alpha_1$, $\alpha_2$, and $N$ such that $\pi|_{\alpha_1}: \alpha_1 \to S^1$ and $\pi|_{\alpha_2}: \alpha_2 \to S^1$ are orientation-preserving embeddings.
Let $\widetilde K_i'$ be a closed twisted virtual braid diagram obtained from the deformed $K_i$ by applying the braiding process in the previous section such that $N$ is pointwise fixed, and let $\widetilde K_{i+1}'$ be a closed twisted virtual braid diagram obtained from $\widetilde K_i'$ by applying a T3a (or T3b) move.
Then $\widetilde K_i'$ and $\widetilde K_{i+1}'$ are related by a TM0-move.
Since $\widetilde K_i$ and $\widetilde K_i'$ have the same Gauss data, they are Markov equivalent.
Since $\widetilde K_{i+1}$ and $\widetilde K_{i+1}'$ have the same Gauss data, they are Markov equivalent. Thus $\widetilde K_i$ and $\widetilde K_{i+1}$ are Markov equivalent.
The case that $K_i$ is as in the right and $K_{i+1}$ is as in the left of the figure is shown similarly.
Now consider the case that $K_{i+1}$ is obtained from $K_i$ by a move T3c or T3d.
Note that a move T3c (or T3d) is a consequence of a move T3b (or T3a) modulo moves V1, V2, V3, and V4.
One can see this by rotating the two diagrams in T3c (or T3d) by 90 degrees clockwise.
Then the left hand side becomes the same diagram with the left hand side of T3b (or T3a).
The right hand side of T3c (or T3d) after the rotation has a real crossing and no bars. One can see that the right hand side of T3b (or T3a) also has a real crossing and no bars.
Considering the Gauss data of the tangle in $N$ and applying the same argument as in the proof of Lemma~\ref{lem:unique}, we see that the right hand side of T3c (or T3d) after the rotation is transformed to the right hand side of T3b (or T3a) by V1, V2, V3, and V4 moves in $N$.
Thus we can reduce the case to T3a (or T3b) and the case of V1, V2, V3, and V4 moves.
\end{proof}
\section{On virtual exchange moves of twisted virtual braids}
\label{sect:exchange}
It turns out that if two twisted virtual braids (or their diagrams) are related by a left virtual exchange move then they are related by a sequence of TM1-moves (or TM0-moves and TM1-moves) and a right virtual exchange move. Thus we may remove left virtual exchange moves from the definition of Markov equivalence.
Let $f_n: TVB_n \to TVB_n$ be an isomorphism determined by
\begin{align*}
\sigma_i & \mapsto \sigma_{n-i}, & \text{for } & i=1, \dots, n-1 \\
v_i & \mapsto v_{n-i}, & \text{for } & i=1, \dots, n-1 \\
\gamma_i & \mapsto \gamma_{n-i+1}, & \text{for } & i=1, \dots, n.
\end{align*}
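For example, for $n = 3$, applying $f_3$ letterwise to a braid word gives
\begin{align*}
f_3(\sigma_1 v_2 \gamma_3) = f_3(\sigma_1)\, f_3(v_2)\, f_3(\gamma_3) = \sigma_2\, v_1\, \gamma_1.
\end{align*}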
For a twisted virtual braid diagram $b$ of degree $n$ which is good,
we also denote by $f_n(b)$ a twisted virtual braid diagram obtained from the diagram $b$ by applying the above correspondence to the preferred word of $b$.
Let $\nabla_n$ be a twisted virtual braid (or its diagram) with
\begin{align*}
\nabla_n = \prod_{i=1}^{n-1} (v_i v_{i-1} \dots v_1) \prod_{j=1}^{n} \gamma_j.
\end{align*}
Let $F_n: TVB_n \to TVB_n$ be an isomorphism determined by
\begin{align*}
b & \mapsto \nabla_n b \nabla_n^{-1} & \text{for } & b \in TVB_n.
\end{align*}
Then $\nabla_n^2 = e$ in $TVB_n$ and $F_n(b) = f_n(b)$ for $b \in TVB_n$.
In particular $b$ and $f_n(b)$ are related by a TM1-move (or TM0-moves and TM1-moves when we discuss them as diagrams).
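As an illustration, for $n=2$ we have $\nabla_2 = v_1 \gamma_1 \gamma_2$ and
\begin{align*}
\nabla_2^{2} & = v_1 \gamma_1 \gamma_2 \, v_1 \, \gamma_1 \gamma_2 = v_1 v_1 \, \gamma_2 \gamma_1 \, \gamma_1 \gamma_2 = e, \\
F_2(\sigma_1) & = v_1 \gamma_1 \gamma_2 \, \sigma_1 \, \gamma_2 \gamma_1 v_1
= v_1 (\gamma_2 \gamma_1 \sigma_1 \gamma_1 \gamma_2) v_1
= v_1 (v_1 \sigma_1 v_1) v_1 = \sigma_1 = f_2(\sigma_1),
\end{align*}
where the first computation uses $\gamma_1 \gamma_2 v_1 = v_1 \gamma_2 \gamma_1$, which follows from $(\ref{rel-bv})$ and $(\ref{rel-vb})$, and the second uses $(\ref{rel-height-bb})$ and $(\ref{rel-twist-III})$.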
\begin{theorem}
If two twisted virtual braids of degree $n$ (or their diagrams) are related by a left virtual exchange move,
then they are related by a sequence of TM1-moves (or TM0-moves and TM1-moves) and a right virtual exchange move.
\end{theorem}
\begin{proof}
Let $b$ and $b'$ be twisted virtual braid diagrams of degree $n$ related by a left virtual exchange move.
Suppose that
$$ b= \iota_1^0(b_1) \sigma_1^{-1} \iota_1^0(b_2) \sigma_1 \quad \mbox{and} \quad
b'= \iota_1^0(b_1) v_1 \iota_1^0(b_2) v_1, $$
where $b_1$ and $b_2$ are good twisted virtual braid diagrams of degree~$n-1$.
Then
$$ f_n(b) = \iota_0^1( f_{n-1}(b_1) ) \sigma_n^{-1} \iota_0^1( f_{n-1}(b_2) ) \sigma_n
\quad \mbox{and} \quad
f_n(b') = \iota_0^1( f_{n-1}(b_1) ) v_n \iota_0^1( f_{n-1}(b_2) ) v_n, $$
and hence $f_n(b)$ and $f_n(b')$ are related by a right virtual exchange move.
Since $b$ is conjugate to $F_n(b) = f_n(b)$ as elements of $TVB_n$, and
$b'$ is conjugate to $F_n(b') = f_n(b')$, we see that $b$ and $b'$ are related by a sequence of TM1-moves
(or TM0-moves and TM1-moves when we discuss them as diagrams) and a right virtual exchange move.
\end{proof}
\section{A reduced presentation of the twisted virtual braid group}
\label{sect:reduced}
L. Kauffman and S. Lambropoulou~\cite{kl} gave a reduced presentation of the virtual braid group. Motivated by their work, we give a reduced presentation of the twisted virtual braid group.
Using the reduced presentation, one can work with the twisted virtual braid group using fewer generators and relations.
In this section, we show that the presentation
of the twisted virtual braid group $TVB_n$
given in Theorem~\ref{thm:StandardPresentation} can be reduced to a presentation with $n+1$ generators and fewer
relations by rewriting $\sigma_i$ $(i=2,\ldots, n-1)$
and $\gamma_i$ $(i=2,\ldots, n)$ in terms of $\sigma_1$, $\gamma_1$ and $v_1, \dots, v_{n-1}$ as follows:
\begin{align}
\sigma_i & =(v_{i-1}\ldots v_1)(v_i \ldots v_2)\sigma_1(v_2 \ldots v_i)(v_1 \ldots v_{i-1}) & \text{ for } & i=2,\ldots, n-1, \label{1st reduction} \\
\gamma_i & =(v_{i-1}\ldots v_1)\gamma_1(v_1 \ldots v_{i-1}) & \text{ for } & i=2,\ldots, n. \label{2nd reduction}
\end{align}
See Figure~\ref{o}. These can be seen geometrically from their diagrams or algebraically from $(\ref{relC-vsv})$ and $(\ref{relC-vb})$.
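For instance, for $n \geq 4$ the case $i=3$ of $(\ref{1st reduction})$ is recovered from $(\ref{relC-vsv})$ and $(\ref{rel-height-vv})$ as follows:
\begin{align*}
\sigma_3 = v_2 v_3 \sigma_2 v_3 v_2
= v_2 v_3 (v_1 v_2 \sigma_1 v_2 v_1) v_3 v_2
= (v_2 v_1)(v_3 v_2) \, \sigma_1 \, (v_2 v_3)(v_1 v_2).
\end{align*}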
\begin{figure}[h]
\centering
\includegraphics[width=6cm,height=10cm]{ktbraids.pdf}
\caption{$\sigma_i \text{ and } \gamma_i$.}
\label{o}
\end{figure}
\begin{theorem}\label{thm:ReducedPresentation}
The twisted virtual braid group $TVB_n$ has a presentation whose generators are
$\sigma_1, \gamma_1, v_1,\dots, v_{n-1}$
and the defining relations are as follows:
\begin{align}
v_i^2 & = e & \text{ for } & i=1,\ldots, n-1; \label{relB-inverse-v}\\
v_iv_j & = v_jv_i & \text{ for } & |i-j| > 1 ; \label{relB-height-vv}\\
v_iv_{i+1}v_i & = v_{i+1}v_iv_{i+1} & \text{ for } & i=1,\ldots, n-2; \label{relB-vvv}\\
\sigma_1(v_2v_3v_1v_2\sigma_1v_2v_1v_3v_2) & = (v_2v_3v_1v_2\sigma_1v_2v_1v_3v_2)\sigma_1, & & \label{relB-height-ss} \\
(v_1\sigma_1v_1)(v_2\sigma_{1}v_2)(v_1\sigma_1v_1) & = (v_2\sigma_1v_2)(v_1\sigma_{1}v_1)(v_2\sigma_1v_2), & & \label{relB-sss} \\
\sigma_1v_j & = v_j\sigma_1 & \text{ for } & j = 3, \ldots, n-1; \label{relB-height-sv}\\
\gamma_1^2 & = e, & & \label{relB-inverse-b} \\
\gamma_1v_j & = v_j\gamma_1 & \text{ for } & j = 2, \ldots, n-1; \label{relB-height-bv}\\
\gamma_1v_1\gamma_1v_1 & =v_1\gamma_1v_1\gamma_1, & & \label{relB-height-bb}\\
\gamma_1v_1v_2\sigma_1v_2v_1 & = v_1v_2\sigma_1v_2v_1\gamma_1, & & \label{relB-height-sb}\\
\gamma_{1}v_1\gamma_1\sigma_{1} \gamma_1v_1\gamma_{1} & = \sigma_1. & & \label{relB-bv}
\end{align}
\end{theorem}
In what follows, we refer to relations $(\ref{rel-inverse-v})$, $(\ref{rel-height-vv})$ and $(\ref{rel-vvv})$ or
equivalently $(\ref{relB-inverse-v})$, $(\ref{relB-height-vv})$ and $(\ref{relB-vvv})$ as the {\it virtual relations}.
\begin{lemma}[cf. \cite{kl}]
Relations $(\ref{rel-vsv})$ follow from relations $(\ref{1st reduction})$ and the virtual relations.
\end{lemma}
This lemma can be seen directly. The following three lemmas are proved in \cite{kl}, so we omit the proofs.
\begin{lemma}[Lemma~1 of \cite{kl}]
Relations $(\ref{rel-height-sv})$ follow from relations $(\ref{1st reduction})$, the virtual relations, and
relations $(\ref{relB-height-sv})$.
\end{lemma}
\begin{lemma}[Lemma~3 of \cite{kl}]
Relations $(\ref{rel-height-ss})$ follow from relations $(\ref{1st reduction})$, the virtual relations, and
relations $(\ref{relB-height-ss})$ and $(\ref{relB-height-sv})$.
\end{lemma}
\begin{lemma}[Lemma~2 of \cite{kl}]
Relations $(\ref{rel-sss})$ follow from relations $(\ref{1st reduction})$, the virtual relations, and
relations $(\ref{relB-sss})$ and $(\ref{relB-height-sv})$.
\end{lemma}
In the following proofs, we underline the expressions which we focus on.
\begin{lemma}
Relations $(\ref{rel-inverse-b})$ follow from relations $(\ref{2nd reduction})$, the virtual relations, and
relation $(\ref{relB-inverse-b})$.
\end{lemma}
\begin{proof}
\begin{align*}
\gamma_i^2
& = (v_{i-1}\ldots v_1)\gamma_1 \underline{(v_1 \ldots v_{i-1})(v_{i-1}\ldots v_1)}\gamma_1(v_1 \ldots v_{i-1})\\
& = (v_{i-1}\ldots v_1)\underline{\gamma_1^2}(v_1 \ldots v_{i-1})\\
& = \underline{(v_{i-1}\ldots v_1)(v_1 \ldots v_{i-1})}\\
& = e.
\end{align*}
\end{proof}
\begin{lemma}
Relations $(\ref{rel-height-bv})$ follow from relations $(\ref{2nd reduction})$, the virtual relations, and relations $(\ref{relB-height-bv})$.
\end{lemma}
\begin{proof}
Since $j \neq i, i + 1$, we consider the following two cases.
Case(i) Suppose $j\leq i-1$. Then $i \geq 2$ and we have
\begin{align*}
v_i\gamma_j & = \underline{v_i}(v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\underline{v_i\gamma_1}(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1\underline{v_i}(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})v_i\\
& = \gamma_jv_i.
\end{align*}
Case(ii) Suppose $j\geq i+2$. Then
\begin{align*}
v_i\gamma_j & = \underline{v_i}(v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i+2}\underline{v_iv_{i+1}v_i}v_{i-1} \ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i+2}v_{i+1}v_i\underline{v_{i+1}}v_{i-1} \ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i+2}v_{i+1}v_iv_{i-1} \ldots v_1)\underline{v_{i+1}\gamma_1}(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1\underline{v_{i+1}}(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{i-1}\underline{v_{i+1}v_iv_{i+1}}v_{i+2} \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{i-1}v_iv_{i+1}\underline{v_{i}}v_{i+2} \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})v_i\\
& = \gamma_jv_i.
\end{align*}
\end{proof}
\begin{lemma}
Relations $(\ref{rel-height-bb})$
follow from relations $(\ref{2nd reduction})$, the virtual relations, and relations $(\ref{relB-height-bv})$
and $(\ref{relB-height-bb})$.
\end{lemma}
\begin{proof} By the previous lemma, we may assume relations $(\ref{rel-height-bv})$.
It is sufficient to consider a case of $j > i$.
\begin{align*}
\gamma_i\gamma_j
& = (v_{i-1}\ldots v_1) \gamma_1 (\underline{v_1 \ldots v_{i-1}}) (\underline{v_{j-1} \ldots v_1}) \gamma_1 (v_1 \ldots v_{j-1})\\
& = (v_{i-1}\ldots v_1) \underline{\gamma_1} (v_{j-1} \ldots v_1) (v_2 \ldots v_i) \underline{\gamma_1} (v_1 \ldots v_{j-1})
~~~~~~~~~~~ \text{ (by (\ref{rel-height-bv}))} \\
& = (v_{i-1}\ldots v_1) (v_{j-1} \ldots v_2) \underline{\gamma_1 v_1 \gamma_1} (v_2 \ldots v_i) (v_1 \ldots v_{j-1})
~~~~~~~~~~~ \text{ (by (\ref{relB-height-bb}))} \\
& = (\underline{v_{i-1}\ldots v_1}) \underline{(v_{j-1} \ldots v_2) v_1} \gamma_1 v_1 \gamma_1 v_1 (v_2 \ldots v_i) (v_1 \ldots v_{j-1}) \\
& = (v_{j-1} \ldots v_2 v_1) (v_i \ldots v_2) \gamma_1 v_1 \gamma_1 v_1 (\underline{v_2 \ldots v_i}) (\underline{v_1 \ldots v_{j-1}}) \\
& = (v_{j-1} \ldots v_2 v_1) (v_i \ldots v_2) \gamma_1 v_1 \gamma_1 \underline{v_1} (\underline{v_1} \ldots v_{j-1}) (v_1 \ldots v_{i-1}) \\
& = (v_{j-1} \ldots v_2 v_1) (v_i \ldots v_2) \underline{\gamma_1} v_1 \underline{\gamma_1} (v_2 \ldots v_{j-1}) (v_1 \ldots v_{i-1})
~~~~~~~~~~~ \text{ (by (\ref{rel-height-bv}))} \\
& = (v_{j-1} \ldots v_2 v_1) \gamma_1 (\underline{v_i \ldots v_2}) \underline{v_1 (v_2 \ldots v_{j-1})} \gamma_1 (v_1 \ldots v_{i-1}) \\
& = (v_{j-1} \ldots v_2 v_1) \gamma_1 v_1 (v_2 \ldots v_{j-1}) (v_{i-1} \ldots v_1) \gamma_1 (v_1 \ldots v_{i-1}) \\
& = \gamma_j\gamma_i.
\end{align*}
\end{proof}
\begin{lemma}
Relations $(\ref{rel-height-sb})$
follow from relations $(\ref{1st reduction})$, $(\ref{2nd reduction})$, the virtual relations, relations
$(\ref{relB-height-sv})$,
$(\ref{relB-height-bv})$ and
$(\ref{relB-height-sb})$.
\end{lemma}
\begin{proof}
By previous lemmas, we may assume relations $(\ref{rel-height-sv})$ and $(\ref{rel-vsv})$ or equivalently $(\ref{relC-vsv})$.
First we show $(\ref{rel-height-sb})$ when $j=1$, i.e.,
$\sigma_i\gamma_1=\gamma_1\sigma_i$ for $i \neq 1$.
We apply induction on $i$, with initial condition $i=2$.
The relation $\sigma_2\gamma_1=\gamma_1\sigma_2$ follows from $(\ref{1st reduction})$ and $(\ref{relB-height-sb})$.
Assuming $\sigma_i\gamma_1=\gamma_1\sigma_i$, we obtain $\sigma_{i+1}\gamma_1=\gamma_1\sigma_{i+1}$ as follows:
\begin{align*}
\sigma_{i+1}\gamma_1 & = v_{i}v_{i+1}\sigma_{i}v_{i+1}\underline{v_{i}\gamma_1}\\
& = v_{i}v_{i+1}\sigma_{i}\underline{v_{i+1}\gamma_1}v_{i}\\
& = v_{i}v_{i+1}\underline{\sigma_{i}\gamma_1}v_{i+1}v_{i}\\
& = v_{i}\underline{v_{i+1}\gamma_1}\sigma_{i}v_{i+1}v_{i}\\
& = \underline{v_{i}\gamma_1}v_{i+1}\sigma_{i}v_{i+1}v_{i}\\
& = \gamma_1 v_{i}v_{i+1}\sigma_{i}v_{i+1}v_{i}\\
& = \gamma_1\sigma_{i+1}.
\end{align*}
Hence,\begin{equation}
\sigma_i\gamma_1=\gamma_1\sigma_i \quad \text{ for } i \neq 1. \label{3rd reduction}
\end{equation}
Now, we show relations $(\ref{rel-height-sb})$: $\sigma_i\gamma_j = \gamma_j\sigma_i$ for $j\neq i, i+1$.
Case(i) Suppose $j\leq i-1$. Then
\begin{align*}
\sigma_i\gamma_j & = \underline{\sigma_i }(v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\underline{\sigma_i \gamma_1}(v_1 \ldots v_{j-1}) ~~~~~~~~~~~ \text{ (by (\ref{3rd reduction}))}\\
& = (v_{j-1}\ldots v_1)\gamma_1\underline{\sigma_i }(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\sigma_i\\
& = \gamma_j\sigma_i.
\end{align*}
Case(ii) Suppose $j\geq i+2$. Then
\begin{align*}
\sigma_i\gamma_j & = \underline{\sigma_i }(v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i+2}) \underline{\sigma_i} (v_{i+1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i+2}) v_{i+1} v_i \sigma_{i+1} \underline{v_i v_{i+1}} (\underline{v_{i+1} v_i} v_{i-1} \ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i})\underline{\sigma_{i+1} }(v_{i-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i})(v_{i-1}\ldots v_1)\underline{\sigma_{i+1}\gamma_1 }(v_1 \ldots v_{j-1})~~~~~~~~~~~~~~ \text{ (by (\ref{3rd reduction}))}\\
& = (v_{j-1}\ldots v_{1}) \gamma_1 \underline{\sigma_{i+1}} (v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{1}) \gamma_1 (v_1 \ldots v_{i-1}) \underline{\sigma_{i+1}} (v_i \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{1}) \gamma_1 (v_1 \ldots v_{i-1}) v_i v_{i+1} \sigma_i \underline{v_{i+1} v_i} ( \underline{v_i v_{i+1}} v_{i+2} \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{1}) \gamma_1 (v_1 \ldots v_{i+1}) \underline{\sigma_{i}} (v_{i+2} \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{1}) \gamma_1 (v_1 \ldots v_{i+1})(v_{i+2} \ldots v_{j-1})\sigma_{i} \\
& = \gamma_j\sigma_i.
\end{align*}
\end{proof}
\begin{lemma}
Relations $(\ref{rel-bv})$
follow from relations $(\ref{2nd reduction})$ and the virtual relations.
\end{lemma}
\begin{proof}
\begin{align*}
\gamma_{i+1} v_i
& = (v_i \ldots v_1) \gamma_1 (v_1 \ldots \underline{v_i}) \underline{v_i} \\
& = v_i ( v_{i-1} \ldots v_1) \gamma_1 (v_1 \ldots v_{i-1}) \\
& = v_i \gamma_i.
\end{align*}
\end{proof}
\begin{lemma}
Relations $(\ref{rel-twist-III})$
follow from relations $(\ref{1st reduction})$, $(\ref{2nd reduction})$, the virtual relations, and relations
$(\ref{relB-height-bv})$ and $(\ref{relB-bv})$.
\end{lemma}
\begin{proof}
\begin{align*}
\gamma_{i+1}\gamma_{i}\sigma_i\gamma_{i}\gamma_{i+1}
& = (v_i\ldots v_1)\gamma_1(v_1\ldots v_i)(v_{i-1} \ldots v_1)\gamma_1 \underline{(v_1 \ldots v_{i-1})(v_{i-1} \ldots v_1)}(v_{i} \ldots v_2)\\
& \sigma_1(v_2 \ldots v_{i}) \underline{(v_1 \ldots v_{i-1})(v_{i-1} \ldots v_1)} \gamma_1(v_1 \ldots v_{i-1})(v_i \ldots v_1)\gamma_1(v_1 \ldots v_i)\\
& = (v_i\ldots v_1)\gamma_1\underline{(v_1\ldots v_{i-1} v_i v_{i-1} \ldots v_1)} \gamma_1(v_{i} \ldots v_2)\sigma_1(v_2 \ldots v_{i})\gamma_1\\
& \underline{(v_1 \ldots v_{i-1} v_i v_{i-1} \ldots v_1)}\gamma_1(v_1 \ldots v_i)\\
& = (v_i\ldots v_1)\underline{\gamma_1}(v_i\ldots v_2 v_1 v_2 \ldots v_i)\underline{\gamma_1}(v_{i} \ldots v_2)\sigma_1(v_2 \ldots v_{i}) \underline{\gamma_1}\\
& (v_i \ldots v_2 v_1 v_2 \ldots v_i) \underline{\gamma_1}(v_1 \ldots v_i)\\
& = (v_i\ldots v_1)(v_i\ldots v_{2})\gamma_1 v_1\underline{(v_{2} \ldots v_i)(v_{i} \ldots v_2)}\gamma_1\sigma_1\gamma_1\underline{(v_2 \ldots v_{i})(v_i \ldots v_{2})}\\
& v_1 \gamma_1 (v_{2} \ldots v_i)(v_1 \ldots v_i)\\
& = (v_i\ldots v_1)(v_i\ldots v_{2})\underline{\gamma_1 v_1\gamma_1\sigma_1\gamma_1v_1\gamma_1}(v_{2} \ldots v_i)(v_1 \ldots v_i)\\
& = v_i\underline{(v_{i-1}\ldots v_1)(v_i\ldots v_{2})\sigma_1(v_{2} \ldots v_i)(v_1 \ldots v_{i-1})}v_i\\
& = v_i\sigma_iv_i.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:ReducedPresentation}]
In the twisted virtual braid group, all relations $(\ref{1st reduction})$--$(\ref{relB-bv})$ are verified to be valid by a geometrical argument using diagrams or by an algebraic argument using the relations $(\ref{rel-height-ss})$--$(\ref{rel-twist-III})$. On the other hand, the relations $(\ref{rel-height-ss})$--$(\ref{rel-twist-III})$ follow from the relations $(\ref{1st reduction})$--$(\ref{relB-bv})$ by the previous lemmas.
\end{proof}
\section*{Concluding remarks}
In this paper we study twisted virtual braids and the twisted virtual braid group, and provide theorems for twisted links corresponding to the Alexander theorem and the Markov theorem. We also provide a group presentation and a reduced group presentation of the twisted virtual braid group.
As a future work, it will be interesting to study the pure twisted virtual braid group and construct invariants for twisted virtual braids and twisted links.
For example, biquandles with structures related to twisted links introduced in \cite{ns} may be discussed by using twisted virtual braids.
\section*{Acknowledgements}
K.~Negi would like to thank the University Grants Commission (UGC), India, for Research Fellowship with NTA Ref.~No.~191620008047. M.~Prabhakar acknowledges the support given by the Science and Engineering Research Board (SERB), Department of Science $\&$ Technology, Government of India, under grant-in-aid Mathematical Research Impact Centric Support (MATRICS) with F.~No.~MTR/2021/00394. This work was partially supported by the FIST program of the Department of Science and Technology, Government of India, Reference No.~SR/FST/MS-I/2018/22(C), and was supported by JSPS KAKENHI Grant Number JP19H01788.
\section{Introduction}
M.~O.~Bourgoin \cite{1} introduced twisted knot theory as a generalization of knot theory.
Twisted link diagrams are link diagrams on $\mathbb{R}^2$ possibly with some crossings called virtual crossings and bars which are short arcs intersecting the arcs of the diagrams. Twisted links are diagrammatically defined as twisted link diagrams modulo isotopies of $\mathbb{R}^2$ and local moves called {\it extended Reidemeister moves} which are
Reidemeister moves (R1, R2, R3), virtual Reidemeister moves (V1, V2, V3, V4) and twisted moves (T1, T2, T3) depicted in Figure~\ref{tm}. Twisted links correspond to stable equivalence classes of links in oriented three-manifolds which are orientation I-bundles over closed but not necessarily orientable surfaces.
Twisted links are analogous to virtual links introduced by L.~H.~Kauffman \cite{2}. Virtual link diagrams are link diagrams on $\mathbb{R}^2$ possibly with some virtual crossings. Virtual links are defined as virtual link diagrams modulo isotopies of $\mathbb{R}^2$ and local moves called {\it generalized Reidemeister moves} which are
Reidemeister moves (R1, R2, R3) and virtual Reidemeister moves (V1, V2, V3, V4) depicted in Figure~\ref{tm}. Virtual links correspond to stable equivalence classes of links in oriented three-manifolds which are orientation I-bundles over closed oriented surfaces.
\begin{figure}[ht]
\centering
\includegraphics[width=12cm,height=7.5cm]{GR-moves.pdf}
\caption{Extended Reidemeister moves.}
\label{tm}
\end{figure}
The Alexander theorem states that every link is represented as the closure of a braid, and the Markov theorem states that such a braid is unique modulo certain moves so called Markov moves. In virtual knot theory, analogous theorems are established in \cite{kl, sk}.
In this paper we show theorems for twisted links corresponding to the Alexander theorem and the Markov theorem. We also provide a group presentation and a reduced group presentation
of the twisted virtual braid group.
This article is organized as follows.
In Section~\ref{sect:braid},
we state the definition of the twisted virtual braid group and provide a group presentation of the group.
In Section~\ref{sect:Alexander}, the Alexander theorem for twisted links is shown by introducing a method of braiding a given twisted link diagram, which we call the braiding process.
In Section~\ref{sect:Markov}, we give the statement of the Markov theorem for twisted links and prove it. In Section~\ref{sect:exchange}, virtual exchange moves are discussed.
In Section~\ref{sect:reduced}, we give a reduced presentation of the twisted virtual braid group, and concluding remarks.
\section{The twisted virtual braid group}
\label{sect:braid}
Let $n$ be a positive integer.
\begin{definition}
A {\it twisted virtual braid diagram} on $n$ strands (or of degree $n$)
is a union of $n$ smooth or polygonal curves, which are called {\it strands}, in $\mathbb{R}^2$ connecting points $(i,1)$ with points $(q_i,0)$ $(i=1, \dots, n)$, where $(q_1, \ldots, q_n)$ is a permutation of the numbers $(1, \ldots, n)$, such that these curves are monotonic with respect to the second coordinate and intersections of the curves are transverse double points equipped with information as a positive/negative/virtual crossing and
strings may have {\it bars} by which we mean short arcs intersecting the strings transversely.
See Figure~\ref{exa}, where the five crossings are
negative, positive, virtual, positive and positive from the top.
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[width=2cm,height=3cm]{example-ktb.pdf}
\caption{A twisted virtual braid diagram on 3 strands.}
\label{exa}
\end{figure}
Here is an alternative definition.
\begin{definition}
Let $E$ be $[0, n+1] \times [0, 1]$ and let $p_2 : E \to [0, 1]$ be the second factor projection. A {\it twisted virtual braid diagram} of $n$ strands (or of degree $n$) is an immersed 1-manifold $b = a_1 \cup \ldots \cup a_n$ in E, where $a_1, \ldots, a_n$ are embedded arcs, called {\it strands},
possibly with {\it bars} by which we mean short arcs intersecting the strands transversely, satisfying the following conditions (1)--(5):
\begin{itemize}
\item[(1)] $\partial b= \{1, 2, \dots, n\} \times \{0, 1\} \subset E$.
\item[(2)] For each $i \in \{1, \ldots, n\}$, $p_2|_{a_i}: a_i \to [0, 1]$ is a homeomorphism.
\item[(3)] The set of multiple points of the strands consists of transverse double points, which are referred as {\it crossings} of the diagram.
\item[(4)] Each crossing is equipped with information of a positive crossing, a negative crossing or a virtual crossing.
\item[(5)] Every bar avoids the crossings.
\end{itemize}
Let $X(b)$ denote the set of crossings of $b$ and the points on the strands where bars intersect with.
A twisted virtual braid diagram is said to be {\it good} if it satisfies the following condition.
\begin{itemize}
\item[(6)] The restriction map $p_2 |_{X(b)}: X(b) \to [0, 1]$ is injective.
\end{itemize}
\end{definition}
The twisted virtual braid diagram depicted in Figure~\ref{exa} is good.
\begin{definition}
Two twisted virtual braid diagrams $b$ and $b'$ of degree $n$ are {\it equivalent} if there is a finite sequence of twisted virtual braid diagrams of degree $n$, say $b_0, b_1, \dots, b_m$, with $b=b_0$ and $b'=b_m$ such that for each $j = 1, \dots, m$, $b_j$ is obtained from $b_{j-1}$ by one of the following:
\begin{itemize}
\item An isotopy of $E$ keeping the conditions (1)--(5) of a twisted virtual braid diagram.
\item An extended Reidemeister move.
\end{itemize}
A {\it twisted virtual braid} is an equivalence class of twisted virtual braid diagrams.
\end{definition}
The set of twisted virtual braids forms a group, where the product is defined by the concatenation similar to the braid group such that $b b'$ is $b$ on $b'$ when we draw the braid diagram vertically.
The twisted virtual braid group is denoted by $TVB_n$.
Let $\sigma_i$, $\sigma_i^{-1}$, $v_i$ $(i=1, \dots, n-1)$ and $\gamma_i$ $(i=1, \dots, n)$ be
twisted virtual braid diagrams depicted in Figure~\ref{gen}.
Twisted virtual braids represented by them will be also denoted by the same symbols.
The group $TVB_n$ is generated by $\sigma_i$, $v_i$ $(i=1, \dots, n-1)$ and $\gamma_i$ $(i=1, \dots, n)$, which we call {\it standard generators}.
\begin{figure}[h]
\centering
\includegraphics[width=12cm,height=2.5cm]{gpelts.pdf}
\caption{Generators of the group of twisted virtual braids.}
\label{gen}
\end{figure}
Figure~\ref{bmoves} shows classical braid moves, corresponding to R2 and R3. Figure~\ref{vbmoves} shows virtual braid moves, corresponding to V2, V3, and V4.
(There are some other moves corresponding to R3 and V4. However, it is well known that those moves are equivalent to the moves in the figure, cf. \cite{kl}.)
\begin{figure}[h]
\centering
\includegraphics[width=8cm,height=3.5cm]{EBraid-moves.pdf}
\caption{Classical braid moves.}
\label{bmoves}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=9cm,height=3cm]{EVBraid-moves.pdf}
\caption{Virtual braid moves.}
\label{vbmoves}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=9cm,height=4cm]{ktbraidmoves.pdf}
\caption{Twisted braid moves.}
\label{moves}
\end{figure}
We call the two moves depicted in the top row of Figure~\ref{moves} {\it twisted braid moves of type ${\rm I}$},
and the move on the left of the second row a {\it twisted braid move of type ${\rm II}$}.
The move on the right of the bottom is called a {\it twisted braid move of type ${\rm III}$} or
{\it of type ${\rm III(+)}$}. When we replace the positive crossings with negative ones, it is called
a {\it twisted braid move of type ${\rm III}$} or {\it of type ${\rm III(-)}$}.
Braid moves corresponding to extended Reidemeister moves are classical braid moves, virtual braid moves and twisted braid moves.
\begin{theorem}\label{thm:StandardPresentation}
The twisted virtual braid group $TVB_n$ is generated by standard generators,
$\sigma_i$, $v_i$ $(i=1, \dots, n-1)$ and $\gamma_i$ $(i=1, \dots, n)$, and
the following relations are defining relations, where $e$ denotes the identity element:
\begin{align}
\sigma_i \sigma_j & = \sigma_j \sigma_i & \text{ for } & |i-j| > 1; \label{rel-height-ss}\\
\sigma_i \sigma_{i+1} \sigma_i & = \sigma_{i+1} \sigma_i \sigma_{i+1} & \text{ for } & i=1,\ldots, n-2; \label{rel-sss}\\
v_i^2 & = e & \text{ for } & i=1,\ldots, n-1; \label{rel-inverse-v}\\
v_i v_j & = v_j v_i & \text{ for } & |i-j| > 1 ; \label{rel-height-vv}\\
v_i v_{i+1} v_i & = v_{i+1} v_i v_{i+1} & \text{ for } & i=1,\ldots, n-2; \label{rel-vvv}\\
\sigma_i v_j & = v_j \sigma_i & \text{ for } & |i-j| >1 ; \label{rel-height-sv}\\
v_i \sigma_{i+1} v_i & = v_{i+1} \sigma_i v_{i+1} & \text{ for } & i=1,\ldots, n-2; \label{rel-vsv}\\
\gamma_i^2 & = e & \text{ for } & i=1,\ldots, n; \label{rel-inverse-b}\\
\gamma_i \gamma_j & = \gamma_j \gamma_i & \text{ for } & i,j=1,\ldots, n; \label{rel-height-bb} \\
\gamma_j v_i & = v_i \gamma_j & \text{ for } & j\neq i, i+1; \label{rel-height-bv}\\
\sigma_i\gamma_j & = \gamma_j\sigma_i & \text{ for } & j\neq i, i+1; \label{rel-height-sb}\\
\gamma_{i+1} v_i & = v_{i} \gamma_i & \text{ for } & i=1,\ldots, n-1; \label{rel-bv} \\
v_{i} \sigma_i v_{i} & = \gamma_{i+1} \gamma_i \sigma_{i} \gamma_i \gamma_{i+1} & \text{ for } & i=1,\ldots, n-1. \label{rel-twist-III}
\end{align}
\end{theorem}
\begin{remark}
Using $(\ref{rel-inverse-v})$, we see that relations $(\ref{rel-vsv})$ and $(\ref{rel-bv})$ are equivalent the following $(\ref{relC-vsv})$ and $(\ref{relC-vb})$, respectively:
\begin{align}
\sigma_{i+1} & = v_i v_{i+1} \sigma_i v_{i+1} v_i & \text{ for } & i=1,\ldots, n-2, \label{relC-vsv} \\
\gamma_{i+1} & = v_i \gamma_i v_i & \text{ for } & i=1,\ldots, n-1. \label{relC-vb}
\end{align}
\end{remark}
\begin{remark}
There are two kinds of twisted braid moves of type ${\rm I}$ as shown in Figure~\ref{moves}. The left one corresponds to relations $(\ref{rel-bv})$ and the right one to $(\ref{rel-vb})$:
\begin{align}
\gamma_i v_i & = v_i \gamma_{i+1} & \text{ for } & i=1,\ldots, n-1. \label{rel-vb}
\end{align}
Using $(\ref{rel-inverse-v})$, we see that relations $(\ref{rel-bv})$ are equivalent to $(\ref{rel-vb})$. \end{remark}
\begin{remark}
There are two kinds of twisted braid moves of ${\rm III}$; one is type ${\rm III(+)}$ as shown in Figure~\ref{moves}
and the other is type ${\rm III(-)}$. The former
corresponds to relations $(\ref{rel-twist-III})$
and the latter to $(\ref{rel-twist-III-negative})$:
\begin{align}
v_{i} \sigma_i^{-1} v_{i} & = \gamma_{i+1} \gamma_i \sigma_{i}^{-1} \gamma_i \gamma_{i+1}
& \text{ for } & i=1,\ldots, n-1. \label{rel-twist-III-negative}
\end{align}
Using $(\ref{rel-inverse-v})$ and $(\ref{rel-inverse-b})$, we see that relations $(\ref{rel-twist-III})$ are equivalent to $(\ref{rel-twist-III-negative})$.
\end{remark}
\begin{proof}
Note that the inverse elements of $v_i$ and $\gamma_i$ in $TVB_n$ are themselves.
Let $\mathcal{S}$ be the set of standard generators of $TVB_n$ and let ${\mathcal{S}}^*$ be the set of standard generators and their inverse elements of $TVB_n$:
\begin{align*}
{\mathcal{S}} &= \{ \sigma_i, v_i \mid i=1, \dots, n-1 \} \cup \{ \gamma_i \mid i=1, \dots, n\}, \\
{\mathcal{S}}^* &= \{ \sigma_i, \sigma_i^{-1}, v_i \mid i=1, \dots, n-1 \} \cup \{ \gamma_i \mid i=1, \dots, n\}.
\end{align*}
Let $b$ be a twisted virtual braid diagram.
When it is good, it is presented uniquely as a concatenation of elements of ${\mathcal{S}}^*$, which we call a {\it preferred word} of $b$.
When it is not good, one can modify it slightly by an isotopy of $E$ keeping the condition of a twisted virtual braid diagram to become good.
Thus, ${\mathcal{S}}$ generates the group $TVB_n$.
Let $b$ and $b'$ are good twisted virtual braid diagrams.
Suppose that $b'$ is obtained from $b$ by an isotopy of $E$ keeping the condition of a twisted virtual braid diagram.
Then they are related by a finite sequence of changing heights of a pair of points in $X(b)$.
A single height change of a pair of such points corresponds to one of relations
(\ref{rel-height-ss}), (\ref{rel-height-vv}), (\ref{rel-height-sv}), (\ref{rel-height-bb}), (\ref{rel-height-bv}),
(\ref{rel-height-sb}) and variants of (\ref{rel-height-ss}), (\ref{rel-height-sv}) and (\ref{rel-height-sb})
with $\sigma_i$ replaced by $\sigma_i^{-1}$ and/or $\sigma_j$ replaced by $\sigma_j^{-1}$. Note that the variants are consequences of the original relations up to relations (\ref{rel-inverse-v}) and (\ref{rel-inverse-b}).
Thus, we see that the preferred words of $b$ and $b'$ are congruent modulo
relations (\ref{rel-height-ss}), (\ref{rel-height-vv}), (\ref{rel-height-sv}), (\ref{rel-height-bb}), (\ref{rel-height-bv}),
(\ref{rel-height-sb}) and relations (\ref{rel-inverse-v}) and (\ref{rel-inverse-b}).
Suppose that $b'$ is obtained from $b$ by an extended Reidemeister move.
When the move is R2, the change of preferred words corresponds to
$\sigma_i^{\epsilon} \sigma_i^{-\epsilon} = \sigma_i^{-\epsilon} \sigma_i^{\epsilon}$ $(\epsilon \in \{\pm 1\})$, which is a trivial relation.
When the move is R3, it is well known that the change of preferred words corresponds to
a relation which is a consequence of relations (\ref{rel-sss}).
When the move is V2, the change of preferred words corresponds to relations (\ref{rel-inverse-v}).
When the move is V3, the change of preferred words corresponds to relations (\ref{rel-vvv}).
When the move is V4, we may assume that it is the move as in Figure~\ref{bmoves}, which
corresponds to relations (\ref{rel-vsv}).
When the move is T1, the change of preferred words corresponds to relations (\ref{rel-bv}) or (\ref{rel-vb}).
When the move is T3, the change of preferred words corresponds to relations (\ref{rel-twist-III}) or (\ref{rel-twist-III-negative}).
Therefore we see that the preferred words of $b$ and $b'$ are congruent each other modulo all relations
(\ref{rel-height-ss})--(\ref{rel-twist-III}).
Since all relations (\ref{rel-height-ss})--(\ref{rel-twist-III}) are valid in the group $TVB_n$, these relations are defining relations.
\end{proof}
\begin{remark}
The twisted virtual braid group $TVB_n$ is different from the ring group (\cite{BH}) and the extended welded braid group (\cite{Damiani2017B}).
Brendle and Hatcher \cite {BH} discussed the space of configurations of $n$ unlinked Euclidean circles, called {\it rings}, whose fundamental group is the {\it ring group} $R_n$.
They showed that the ring group is isomorphic to the {\it motion group} of the trivial link of $n$ components in the sense of Dahm \cite{Dahm}.
The ring group has a finite index subgroup isomorphic to the {\it braid-permutation group}, also called the {\it welded braid group}, introduced by Fenn, Rim{\' a}nyi and Rourke \cite{FRR}.
Damiani~\cite{Damiani2017A} studied the ring group from various points of view.
In particular, she introduced in \cite {Damiani2017B} the notion of the {\it extended welded braid group} defined by using diagrams motivated from the work of Satoh~\cite{Satoh}.
Damiani's extended welded braid group is isomorphic to the ring group.
The twisted virtual braid group $TVB_n$ is different from the ring group and the extended welded braid group for $n>2$, since the latter groups admit the relation $v_1 \sigma_2 \sigma_1 = \sigma_2 \sigma_1 v_2$, which does not hold in the twisted virtual braid group $TVB_n$.
\end{remark}
\section{Braid presentation of twisted links}
\label{sect:Alexander}
The closure of a twisted virtual braid (diagram) is defined in a similar way to that of a classical braid.
\begin{Example}
The closure of a twisted virtual braid diagram is shown in Figure~\ref{one}.
\begin{figure}[ht]
\centering
\includegraphics[width=2.5cm,height=3cm]{onefoil-1.pdf}
\caption{The closure of the braid $\gamma_1\sigma_1^{-1}\gamma_2$.}
\label{one}
\end{figure}
\end{Example}
In this section we show that every twisted link is represented by the closure of a twisted virtual braid diagram (Theorem~\ref{prop:AlexanderB}).
\subsection{Gauss data}
For a twisted link diagram $K$, we prepare some notation:
\begin{itemize}
\item Let $V_R(K)$ be the set of all real crossings of $K$.
\item Let $S(K)$ be the map from $V_R(K)$ to the set $\{+1,-1 \}$ assigning the signs to real crossings.
\item Let $B(K)$ be the set of all bars in $K$.
\item Let $N(v)$ be a regular neighborhood of $v$, where $v \in V_R(K) \cup B(K)$.
\item For $c \in V_R(K)$, we denote by $c^{(1)}, c^{(2)}, c^{(3)}$, and $c^{(4)}$ the four points of $\partial N(c) \cap K$ as depicted in Figure~\ref{r}.
\begin{figure}[h]
\centering
\includegraphics[width=9cm,height=3cm]{nbd.pdf}
\caption{Boundary points of $N(c) \cap K$.}
\label{r}
\end{figure}
\item For $\gamma \in B(K)$, we denote by $\gamma^{(1)}$ and $\gamma^{(2)}$ the two points of $\partial N(\gamma) \cap K$ as depicted in Figure \ref{b}.
\begin{figure}[h]
\centering
\includegraphics[width=1cm,height=3cm]{2nbd.pdf}
\caption{Boundary points of $N(\gamma) \cap K$.}
\label{b}
\end{figure}
\item Put $W=W(K)=Cl(\mathbb{R}^2 \setminus \cup_{v\in V_R(K) \cup B(K)} N(v))$, where $Cl$ means the closure.
\item Let $V^{\partial}_R(K) = \{c^{(j)} | c \in V_R(K), 1\leq j \leq 4\}$, and $B^{\partial}(K) = \{\gamma^{(j)} | \gamma \in B(K), 1\leq j \leq 2\}.$
\item Let $K |_W$ be the restriction of $K$ to $W$, which is a union of some oriented arcs and loops generically immersed in $W$ such that the double points are virtual crossings of $K$, and the set of boundary points of the arcs is the set $V^{\partial}_R(K) \cup B^{\partial}(K).$
\item Let $\mu(K)$ be the number of components of $K$.
\item Define a subset $G(K)$ of $(V^{\partial}_R(K) \cup B^{\partial}(K))^2$ such that $(a,b) \in G(K)$ if and only if $K |_W$ has an oriented arc starting at $a$ and ending at $b$.
\end{itemize}
The {\it Gauss data} of a twisted link diagram $K$ is the quintuple
$$(V_R(K), S(K), B(K), G(K), \mu(K)).$$
\begin{Example}
Let $K$ be a twisted link diagram depicted in Figure~\ref{gd}. When we name the real crossings $c_1$ and $c_2$ as in the figure, the Gauss data is
$$(\{c_1,c_2\}, \{+1,+1\}, \{\gamma_1\},\{ (c_1^{(4)}, c_2^{(2)}), (c_2^{(3)}, c_1^{(2)}), (c_2^{(4)}, \gamma_1^{(1)}), (\gamma_1^{(2)}, c_1^{(1)}), (c_1^{(3)}, c_2^{(1)}) \},1).$$
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=4.5cm]{exgd.pdf}
\caption{A twisted link diagram with one bar.}
\label{gd}
\end{figure}
\end{Example}
We say that two twisted link diagrams $K$ and $K'$ have {\it the same Gauss data} if
$\mu (K) = \mu (K')$ and there exists a bijection $g:V_R(K) \cup B(K) \to V_R(K') \cup B(K')$ satisfying
the following conditions:
\begin{itemize}
\item $g(V_R(K)) = V_R(K')$, and $g(B(K))= B(K')$.
\item $g$ preserves the signs of real crossings; $S(K)(c) =S(K')(g(c))$ for $c \in V_R(K)$.
\item $(a,b) \in G(K)$ if and only if $(g^{\partial}(a), g^{\partial}(b)) \in G(K')$, where $g^{\partial}: V^{\partial}_R(K) \cup B^{\partial}(K) \to V^{\partial}_R(K') \cup B^{\partial}(K')$ is the bijection induced from $g$, i.e., $g^{\partial}(c^{(j)}) = (g(c))^{(j)}$ for $c \in V_R(K), 1\leq j \leq 4$ and
$g^{\partial}(\gamma^{(j)}) = (g(\gamma))^{(j)}$ for $\gamma \in B(K), 1\leq j \leq 2$.
\end{itemize}
Let $K$ be a twisted link diagram and $W=W(K)=Cl(\mathbb{R}^2 \setminus \cup_{v\in V_R(K) \cup B(K)} N(v))$ as before. Suppose that $K'$ is a twisted link diagram with the same Gauss data as $K$. Then by an isotopy of $\mathbb{R}^2$ we can move $K'$ such that
\begin{itemize}
\item $K$ and $K'$ are identical in $N(v)$ for every $v \in V_R(K) \cup B(K)$,
\item $K'$ has no real crossings and bars in $W$, and
\item there is a bijection between the arcs/loops of $K|_W$ and those of $K'|_W$ respecting the endpoints of the arcs.
\end{itemize}
In this situation, we say that $K'$ is obtained from $K$ {\it by replacing $K|_W$}.
\begin{lemma}\label{lemma:Same Gauss Data}
Let $K$ and $K'$ be twisted link diagrams, and let $W=W(K)=Cl(\mathbb{R}^2 \setminus \cup_{v\in V_R(K) \cup B(K)} N(v))$.
(1) If $K'$ is obtained from $K$ by replacing $K|_W$, then they are related by a finite sequence of isotopies of $\mathbb{R}^2$ with support $W$ and V1, V2, V3, V4, and T1 moves.
(2) If two twisted link diagrams $K$ and $K'$ have the same Gauss data, then $K$ is equivalent to $K'$.
\end{lemma}
\begin{proof}
(1) Let $N_1, N_2, \ldots, N_m $ be regular neighborhoods of the real crossings and bars of $K$.
Let $a_1, a_2, \ldots, a_n $ and $a'_1, a'_2, \ldots, a'_n $ be the arcs/loops of $K|_W$ and $K'|_W$ respectively.
Using an isotopy of $\mathbb{R}^2$ with support $W$, we may assume that the intersections of $a'_1$ with $a_2, \ldots, a_n$ are transverse double points.
The arc/loop $a_1$ is homotopic to $a'_1$ in $\mathbb{R}^2$ (relative to the boundary when $a_1$ is an arc).
Taking the homotopy generically with respect to the arcs/loops $ a_2, \ldots, a_n $, and the 2-disks $N_1, N_2, \ldots, N_m $, we see that the arc/loop $a_1$ can be changed into $a'_1$ by a finite sequence of moves as shown in Figure~\ref{mv} up to isotopy of $\mathbb{R}^2$ with support $W$.
Considering that all crossings in Figure~\ref{mv} are virtual crossings, we regard these moves as V1, V2, V3, V4, and T1 moves.
In this way, we can change $a_1$ into $a_1'$ without changing other arcs/loops of $K|_W$ and $K'|_W$.
Applying this argument inductively, we can change all arcs/loops of $K|_W$ into the corresponding ones of $K'|_W$.
\begin{figure}[h]
\centering
\includegraphics[width=12cm,height=2cm]{movesvir-1.pdf}
\caption{Moves on immersed curves.}
\label{mv}
\end{figure}
(2) Moving $K$ by an isotopy of $\mathbb{R}^2$, we may assume that $K'$ is obtained from $K$ by replacing $K|_W$.
By (1), we obtain the assertion.
\end{proof}
\subsection{Braiding process}
Let $O$ be the origin of $\mathbb{R}^2$ and
identify $\mathbb{R}^2 \setminus \{O\}$ with $\mathbb{R}_+\times S^1$ by polar coordinates, where $\mathbb{R}_+$ is the set of positive numbers. Let $\pi:\mathbb{R}^2 \setminus \{O\}=\mathbb{R}_+\times S^1 \to S^1$ denote the radial projection.
For a twisted link diagram $K$, we denote by $V_R(K)$ the set of real crossings, by $V_B(K)$ the set of points of $K$ where bars intersect $K$, and by $X(K)$ the union of the set of all (real or virtual) crossings and the set $V_B(K)$.
\begin{definition}
A {\it closed twisted virtual braid diagram} is a twisted link diagram $K$
satisfying the following conditions (1) and (2):
\begin{itemize}
\item[(1)] $K$ is contained in $\mathbb{R}^2 \setminus \{O\}$.
\item[(2)] Let $k: \sqcup S^1 \to \mathbb{R}^2 \setminus \{O\}$ be the underlying immersion of $K$, where
$\sqcup S^1$ is a disjoint union of copies of $S^1$.
Then $\pi \circ k: \sqcup S^1 \to S^1$ is a covering map of $S^1$ of degree $n$ which respects the orientations of $\sqcup S^1$ and $S^1$.
\end{itemize}
A closed twisted virtual braid diagram is {\it good} if it satisfies the following condition.
\begin{itemize}
\item[(3)] Let $N_1, N_2, \ldots, N_m $ be regular neighborhoods of the real crossings and bars of $K$. Then $\pi(N_i) \cap \pi(N_j) = \emptyset$ for $i \neq j$.
\end{itemize}
\end{definition}
\begin{prop}\label{prop:AlexanderA}
Every twisted link diagram $K$ is equivalent, as a twisted link, to a good closed twisted virtual braid diagram $K'$ such that $K$ and $K'$ have the same Gauss data.
\end{prop}
\begin{proof}
Let $K$ be a twisted link diagram and let $N_1, N_2, \ldots, N_m $ be regular neighborhoods of the real crossings and bars of $K$.
Moving $K$ by an isotopy of $\mathbb{R}^2$, we may assume that all $N_i$ are in $\mathbb{R}^2 \setminus \{O\}$, $\pi(N_i) \cap \pi(N_j) =\emptyset$ for $i\neq j$ and the restriction of $K$ to $N_i$ satisfies the condition of a closed twisted virtual braid diagram.
Replace the remainder $K|_{W(K)}$ such that the result is a good closed twisted virtual braid diagram $K'$.
Then $K$ and $K'$ have the same Gauss data, and by Lemma~\ref{lemma:Same Gauss Data} they are equivalent as twisted links.
\end{proof}
The procedure in the proof of Proposition~\ref{prop:AlexanderA} turns a given twisted link diagram $K$ into a good closed twisted virtual braid diagram having the same Gauss data as $K$. We call this procedure the {\it braiding process} in this paper.
A point $\theta$ of $S^1$ is called a {\it regular value} for a closed twisted virtual braid diagram $K$ if $X(K)\cap\pi^{-1}(\theta)=\emptyset$.
Cutting $K$ along the half line $\pi^{-1}(\theta)$ for a regular value $\theta$, we obtain a twisted virtual braid diagram whose closure is equivalent to $K$.
Thus, Proposition~\ref{prop:AlexanderA} implies the following.
\begin{theorem}\label{prop:AlexanderB}
Every twisted link is represented by the closure of a twisted virtual braid diagram.
\end{theorem}
\section{The Markov theorem for twisted links}
\label{sect:Markov}
In this section we show a theorem on braid presentation of twisted links which is
analogous to the Markov theorem for classical links.
A {\it twisted Markov move of type $0$} or a {\it TM0-move} is a replacement of a twisted virtual braid diagram $b$ with another $b'$ of the same degree such that $b$ and $b'$ are equivalent as twisted virtual braids, i.e., they represent the same element of the twisted virtual braid group.
A {\it twisted Markov move of type $1$} or a {\it TM1-move} is a replacement of a twisted virtual braid (or its diagram) $b$ with $b_1 b b_1^{-1}$ where $b_1$ is a twisted virtual braid (or its diagram) of the same degree with $b$. We also call this move a {\it conjugation}.
A {\it twisted Markov move of type $1$} or a {\it TM1-move} may also be defined as a replacement of a twisted virtual braid (or its diagram) $b = b_1 b_2$ with $b' = b_2 b_1$, where $b_1$ and $b_2$ are twisted virtual braids (or their diagrams) of the same degree.
See Figure~\ref{tm1}.
\begin{figure}[h]
\centering
\includegraphics[width=8cm,height=3cm]{markov-move1.pdf}
\caption{A twisted Markov move of type 1 or a TM1-move.}
\label{tm1}
\end{figure}
For a twisted virtual braid (or its diagram) $b$ of degree $n$ and non-negative integers $s$ and $t$, we denote by $\iota_s^t(b)$ the twisted virtual braid (or its diagram) of degree $n + s + t$ obtained from $b$ by adding $s$ trivial strands to the left and $t$ trivial strands to the right.
This defines a monomorphism $\iota_s^t: TVB_n \to TVB_{n+s+t}$.
A {\it stabilization of positive, negative or virtual type} is a replacement of a twisted virtual braid (or its diagram) $b$ of degree $n$ with $\iota_0^1(b)\sigma_n$, $\iota_0^1(b)\sigma_n^{-1}$ or $\iota_0^1(b)v_n$, respectively.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=2cm]{markov-move2.pdf}
\caption{A twisted Markov move of type 2 or a TM2-move.}
\label{tm2}
\end{figure}
A {\it twisted Markov move of type $2$} or a {\it TM2-move} is a stabilization of positive, negative or virtual type, or its inverse operation.
See Figure~\ref{tm2}.
A {\it right virtual exchange move} is a replacement
$$ \iota_0^1(b_1) \sigma_n^{-1} \iota_0^1(b_2) \sigma_n \longleftrightarrow
\iota_0^1(b_1) v_n \iota_0^1(b_2) v_n, $$
and a {\it left virtual exchange move} is a replacement
$$ \iota_1^0(b_1) \sigma_1^{-1} \iota_1^0(b_2) \sigma_1 \longleftrightarrow
\iota_1^0(b_1) v_1 \iota_1^0(b_2) v_1, $$
where $b_1$ and $b_2$ are twisted virtual braids (or their diagrams).
A {\it twisted Markov move of type $3$} or a {\it TM3-move} is a right/left virtual exchange move or its inverse operation.
See Figure~\ref{tm3}.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=3.5cm]{markov-move3.pdf}
\caption{A twisted Markov move of type 3 or a TM3-move.}
\label{tm3}
\end{figure}
\begin{definition}
Two twisted virtual braids (or their diagrams) are {\it Markov equivalent} if they are related by a finite sequence of twisted Markov moves TM1--TM3 (or TM0--TM3 when we discuss them as diagrams).
\end{definition}
\begin{theorem}\label{theorem:MarkovA}
Two twisted virtual braids (or their diagrams) have equivalent closures as twisted links if and only if they are Markov equivalent.
\end{theorem}
\begin{remark}
In Section~\ref{sect:exchange},
it turns out that if two twisted virtual braids (or their diagrams) are related by a left virtual exchange move, then they are related by a sequence of TM1-moves (or TM0-moves and TM1-moves when we discuss them as diagrams) and a right virtual exchange move. Thus we may remove left virtual exchange moves from the definition of Markov equivalence.
\end{remark}
Let $K$ and $K'$ be closed twisted virtual braid diagrams and let $b$ and $b'$ be twisted virtual braid diagrams obtained from $K$ and $K'$ by cutting along $\pi^{-1}(\theta)$ and $\pi^{-1}(\theta')$ for some regular values $\theta$ and $\theta'$.
We say that $K'$ is obtained from $K$ by a {\it twisted Markov move of type $0$} or a {\it TM0-move} if they are equivalent as closed twisted virtual braids.
Note that $K'$ is obtained from $K$ by a TM0-move if and only if $b$ and $b'$ are related by a finite sequence of TM0-moves and TM1-moves.
We say that $K'$ is obtained from $K$ by a {\it twisted Markov move of type $2$} or a {\it TM2-move} if $b$ and $b'$ are related by a TM2-move and some TM1-moves.
We say that $K'$ is obtained from $K$ by a {\it twisted Markov move of type $3$} or a {\it TM3-move} if $b$ and $b'$ are related by a TM3-move and some TM1-moves.
\begin{definition}
Two closed twisted virtual braid diagrams $K$ and $K'$ are {\it Markov equivalent} if they are related by a finite sequence of TM0-, TM2- and TM3-moves.
\end{definition}
\begin{prop}\label{prop:closure}
Two closed twisted virtual braid diagrams $K$ and $K'$ are Markov equivalent if and only if twisted virtual braid diagrams $b$ and $b'$ are Markov equivalent, where $b$ and $b'$ are obtained from $K$ and $K'$ by cutting along $\pi^{-1}(\theta)$ and $\pi^{-1}(\theta')$ for some regular values $\theta$ and $\theta'$.
\end{prop}
\begin{proof}
For a given closed twisted virtual braid diagram $K$, $b$ is uniquely determined up to TM1-moves, so the assertion follows immediately from the definitions.
\end{proof}
By Proposition~\ref{prop:closure},
Theorem~\ref{theorem:MarkovA} is equivalent to the following theorem.
\begin{theorem}\label{theorem:MarkovB}
Two closed twisted virtual braid diagrams are equivalent as twisted links if and only if they are Markov equivalent.
\end{theorem}
To prove Theorem~\ref{theorem:MarkovB}, we require the following lemma.
\begin{lemma}\label{lem:unique}
Two closed twisted virtual braid diagrams with the same Gauss data are Markov equivalent.
\end{lemma}
\begin{proof}
Let $K$ and $K'$ be closed twisted virtual braid diagrams with the same Gauss data. Modifying them by isotopies of $\mathbb{R}^2 \setminus \{O\}$, we may assume that they are good.
Let $N_1, N_2, \ldots, N_m$ be regular neighborhoods of the real crossings and bars of $K$, and $N'_1, N'_2, \ldots, N'_m$ be regular neighborhoods of the corresponding real crossings and bars of $K'$.
Case (I). Suppose that $\pi(N_1), \pi(N_2), \ldots, \pi(N_m)$ and $\pi(N'_1), \pi(N'_2), \ldots, \pi(N'_m)$ appear in $S^1$ in the same cyclic order.
Modifying $K$ by an isotopy of $\mathbb{R}^2 \setminus \{O\}$ keeping the condition of a good closed twisted virtual braid,
we may assume that $N_1=N'_1, N_2=N'_2, \ldots, N_m=N'_m$ and the restrictions of $K$ and $K'$ to these disks are identical.
Let $a_1, \dots, a_s$ be the arcs/loops of $K|_W$ and $a'_1, \dots, a'_s$ be the corresponding arcs/loops of $K'|_W$.
Let $\theta \in S^1$ be a regular value for $K$ and $K'$ such that $\pi^{-1}(\theta)$ is disjoint from $N_1 \cup \dots \cup N_m$.
If there exists an arc/loop $a_i$ of $K|_W$ such that $|a_i \cap \pi^{-1}(\theta)|\neq |a'_i \cap \pi^{-1}(\theta)|$, then move a small segment of $a_i$ or $a'_i$ toward the origin $O$ by some V2 moves, which are
TM0-moves, and apply some TM2-moves of virtual type so that $|a_i \cap \pi^{-1}(\theta)|=|a'_i \cap \pi^{-1}(\theta)|$ after the modification.
Thus without loss of generality, we may assume that $|a_i \cap \pi^{-1}(\theta)|=|a'_i \cap \pi^{-1}(\theta)|$ for all $i=1, \dots, s$.
Let $k : \sqcup S^1 \to \mathbb{R}^2 \setminus \{O\}$ and $k' : \sqcup S^1 \to \mathbb{R}^2 \setminus \{O\}$ be the underlying immersions of $K$ and $K'$, respectively, such that they are identical near the preimages of the real crossings and bars.
Let $I_1, \ldots, I_s$ be arcs/loops in $\sqcup S^1$ with $k(I_i)=a_i$ and $k'(I_i)=a'_i$ for $i=1, \dots, s$.
Note that $\pi \circ k|_{I_i}$ and $\pi \circ k'|_{I_i}$ are orientation-preserving immersions into $S^1$ with $\pi \circ k|_{\partial I_i}=\pi \circ k'|_{\partial I_i}$.
Since $a_i$ and $a'_i$ have the same degree, we have a homotopy $k_i^t: I_i \to \mathbb{R}^2 \setminus \{O\}$ $(t \in [0,1])$ of $I_i$ relative to the boundary $\partial I_i$ such that $k^0_i=k|_{I_i}$, $k^1_i=k'|_{I_i}$, and $\pi \circ k^t_i$ is an orientation-preserving immersion for every $t$.
Taking such a homotopy generically with respect to the other arcs/loops of $K|_{W}$ and $K'|_{W}$ and the 2-disks $N_1, N_2, \ldots, N_m$, we see that $a_i$ can be transformed to $a'_i$ by a sequence of TM0-moves.
Applying this procedure inductively, we can change $a_1, \dots, a_s$ to $a'_1, \dots, a'_s$ by a sequence of TM0-moves and TM2-moves.
Thus we see that $K$ is transformed into $K'$ by a finite sequence of TM0-moves and TM2-moves.
Case (II). Suppose that $\pi(N_1), \pi(N_2), \dots, \pi(N_m)$ and $\pi(N'_1), \pi(N'_2), \dots, \pi(N'_m)$ do not appear in $S^1$ in the same cyclic order.
It is sufficient to show that we can interchange the position of two consecutive $\pi(N_i)$'s.
Suppose that we want to interchange $\pi(N_1)$ and $\pi(N_2)$.
(1) Suppose that $N_2$ is a neighborhood of a real crossing. Figure~\ref{c} shows how to interchange $\pi(N_1)$ and $\pi(N_2)$ by TM0-moves and TM2-moves.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=7cm]{cycle-2.pdf}
\caption{Interchange the positions of $N_1$ and $N_2$.}
\label{c}
\end{figure}
(2) Suppose that $N_2$ is a neighborhood of a bar. Figure~\ref{c2} shows how to interchange $\pi(N_1)$ and $\pi(N_2)$ by TM0-moves and TM2-moves.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=4cm]{cycle-1.pdf}
\caption{Interchange the positions of $N_1$ and $N_2$.}
\label{c2}
\end{figure}
\sloppy{
Applying this argument, we can make $\pi(N_1), \pi(N_2), \ldots, \pi(N_m)$ and $\pi(N'_1), \pi(N'_2), \ldots, \pi(N'_m)$ appear in the same cyclic order on $S^1$ using TM0-moves and TM2-moves.
Then we can reduce the case to Case~(I). }
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:MarkovB}]
If two closed twisted virtual braids (or their diagrams) are Markov equivalent then they are equivalent as twisted links.
Conversely, suppose that $K$ and $K'$ are closed twisted virtual braid diagrams which are equivalent as twisted links.
There is a finite sequence of twisted link diagrams, say, $K=K_0, K_1, \ldots, K_n=K'$ such that $K_{i+1}$ is obtained from $K_{i}$ by one of the extended Reidemeister moves.
For each $i = 1, \dots, n-1$, $K_i$ may not be a closed twisted virtual braid diagram.
Let $\widetilde K_i$ be a closed twisted virtual braid diagram obtained from $K_i$ by the braiding process in the previous section.
We assume $K_0=\widetilde K_0$ and $K_n =\widetilde K_n$.
Then for each $i =0,1,\dots, n$, $\widetilde K_i$ and $K_i$ have the same Gauss data.
It is sufficient to prove that $\widetilde K_i$ and $\widetilde K_{i+1}$ are Markov equivalent.
It is shown in \cite{sk} that $\widetilde K_i$ and $\widetilde K_{i+1}$ are Markov equivalent when $K_{i+1}$ is obtained from $K_{i}$ by one of R1, R2, R3, V1, V2, V3, and V4.
(In \cite{sk} virtual links and closed virtual braid diagrams are discussed.
However the argument in \cite{sk} is valid in our current situation.)
Thus, it is sufficient to consider a case that $K_{i+1}$ is obtained from $K_{i}$ by a twisted move T1, T2 or T3.
(1) Let $K_{i+1}$ be obtained from $K_i$ by a T1 move.
Then $K_{i}$ and $K_{i+1}$ have the same Gauss data, and hence $\widetilde K_{i}$ and $\widetilde K_{i+1}$ have the same Gauss data.
By Lemma~\ref{lem:unique}, $\widetilde K_i$ and $\widetilde K_{i+1}$ are Markov equivalent.
(2) Let $K_{i+1}$ be obtained from $K_i$ by a T2 move. Assume that a pair of bars in $K_i$ is removed by the T2 move to obtain $K_{i+1}$.
Let $N$ be a $2$-disk where the T2 move is applied such that $N \cap K_i$ is an arc, say $\alpha$, with two bars and $N \cap K_{i+1}$ is the arc $\alpha$.
Let $N_1$ and $N_2$ be neighborhoods of the two bars such that $N_1 \cup N_2 \subset N$.
By an isotopy of $\mathbb{R}^2$, deform $K_i$, $\alpha$ and $N$ such that $N \cap K_i$ is $\alpha$ with two bars and $\pi|_\alpha: \alpha \to S^1$ is an orientation-preserving embedding.
Let $\widetilde K_i'$ be a closed twisted virtual braid diagram obtained from the deformed $K_i$ by applying the braiding process in the previous section such that $N$ is pointwise fixed, and let $\widetilde K_{i+1}'$ be a closed twisted virtual braid diagram obtained from $\widetilde K_i'$ by removing the two bars intersecting $\alpha$.
Then $\widetilde K_i'$ and $\widetilde K_{i+1}'$ are related by a TM0-move.
Since $\widetilde K_i$ and $\widetilde K_i'$ have the same Gauss data, they are Markov equivalent.
Since $\widetilde K_{i+1}$ and $\widetilde K_{i+1}'$ have the same Gauss data, they are Markov equivalent.
Thus $\widetilde K_i$ and $\widetilde K_{i+1}$ are Markov equivalent.
The case that a pair of bars are introduced to $K_i$ to obtain $K_{i+1}$ is shown similarly.
(3) Let $K_{i+1}$ be obtained from $K_i$ by a T3 move.
There are 4 possible orientations for a T3 move, say T3a, T3b, T3c, and T3d as in Figure~\ref{ot3m}.
\begin{figure}[h]
\centering
\includegraphics[width=7cm,height=5cm]{t3moves.pdf}
\caption{Oriented T3 moves.}
\label{ot3m}
\end{figure}
First consider a case that $K_{i+1}$ is obtained from $K_i$ by a move T3a or T3b.
Assume that $K_i$ is as in the left and $K_{i+1}$ is as in the right of Figure~\ref{ot3m}. Let $N$ be a $2$-disk where the move is applied.
Then $N \cap K_i$ is a pair of arcs, say $\alpha_1$ and $\alpha_2$, intersecting transversely at a real crossing and there are four bars.
Let $N_1$ be a neighborhood of the real crossing of $K_i$ and $N_2, \dots, N_5$ be neighborhoods of the four bars of $K_i$ in $N$ such that $N_1 \cup \dots \cup N_5 \subset N$.
By an isotopy of $\mathbb{R}^2$, deform $K_i$, $\alpha_1$, $\alpha_2$, and $N$ such that $\pi|_{\alpha_1}: \alpha_1 \to S^1$ and $\pi|_{\alpha_2}: \alpha_2 \to S^1$ are orientation-preserving embeddings.
Let $\widetilde K_i'$ be a closed twisted virtual braid diagram obtained from the deformed $K_i$ by applying the braiding process in the previous section such that $N$ is pointwise fixed, and let $\widetilde K_{i+1}'$ be a closed twisted virtual braid diagram obtained from $\widetilde K_i'$ by applying a T3a (or T3b) move.
Then $\widetilde K_i'$ and $\widetilde K_{i+1}'$ are related by a TM0-move.
Since $\widetilde K_i$ and $\widetilde K_i'$ have the same Gauss data, they are Markov equivalent.
Since $\widetilde K_{i+1}$ and $\widetilde K_{i+1}'$ have the same Gauss data, they are Markov equivalent. Thus $\widetilde K_i$ and $\widetilde K_{i+1}$ are Markov equivalent.
The case that $K_i$ is as in the right and $K_{i+1}$ is as in the left of the figure is shown similarly.
Now consider the case that $K_{i+1}$ is obtained from $K_i$ by a move T3c or T3d.
Note that a move T3c (or T3d) is a consequence of a move T3b (or T3a) modulo moves V1, V2, V3, and V4.
One can see this by rotating the two diagrams in T3c (or T3d) by 90 degrees clockwise.
Then the left-hand side becomes the same diagram as the left-hand side of T3b (or T3a).
The right hand side of T3c (or T3d) after the rotation has a real crossing and no bars. One can see that the right hand side of T3b (or T3a) also has a real crossing and no bars.
Considering the Gauss data of the tangle in $N$ and applying the same argument as in the proof of Lemma~\ref{lem:unique}, we see that the right hand side of T3c (or T3d) after the rotation is transformed to the right hand side of T3b (or T3a) by V1, V2, V3, and V4 moves in $N$.
Thus we can reduce the case to T3a (or T3b) and the case of V1, V2, V3, and V4 moves.
\end{proof}
\section{On virtual exchange moves of twisted virtual braids}
\label{sect:exchange}
It turns out that if two twisted virtual braids (or their diagrams) are related by a left virtual exchange move then they are related by a sequence of TM1-moves (or TM0-moves and TM1-moves) and a right virtual exchange move. Thus we may remove left virtual exchange moves from the definition of Markov equivalence.
Let $f_n: TVB_n \to TVB_n$ be an isomorphism determined by
\begin{align*}
\sigma_i & \mapsto \sigma_{n-i}, & \text{for } & i=1, \dots, n-1 \\
v_i & \mapsto v_{n-i}, & \text{for } & i=1, \dots, n-1 \\
\gamma_i & \mapsto \gamma_{n-i+1}, & \text{for } & i=1, \dots, n.
\end{align*}
For a twisted virtual braid diagram $b$ of degree $n$ which is good,
we also denote by $f_n(b)$ a twisted virtual braid diagram obtained from the diagram $b$ by applying the above correspondence to the preferred word of $b$.
Let $\nabla_n$ be a twisted virtual braid (or its diagram) with
\begin{align*}
\nabla_n = \prod_{i=1}^{n-1} (v_i v_{i-1} \dots v_1) \prod_{j=1}^{n} \gamma_j.
\end{align*}
Let $F_n: TVB_n \to TVB_n$ be an isomorphism determined by
\begin{align*}
b & \mapsto \nabla_n b \nabla_n^{-1} & \text{for } & b \in TVB_n.
\end{align*}
Then $\nabla_n^2 = e$ in $TVB_n$ and $F_n(b) = f_n(b)$ for $b \in TVB_n$.
In particular $b$ and $f_n(b)$ are related by a TM1-move (or TM0-moves and TM1-moves when we discuss them as diagrams).
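For instance, when $n=2$ we have $\nabla_2 = v_1 \gamma_1 \gamma_2$, and a direct computation using $\gamma_i^2=e$, $\gamma_1\gamma_2 = \gamma_2\gamma_1$ and $\gamma_2 v_1 = v_1 \gamma_1$ gives
\begin{align*}
F_2(\gamma_1) = \nabla_2 \, \gamma_1 \, \nabla_2^{-1}
= v_1 \gamma_1 \gamma_2 \, \gamma_1 \, \gamma_2 \gamma_1 v_1
= v_1 \gamma_1 v_1 = \gamma_2 = f_2(\gamma_1).
\end{align*}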
\begin{theorem}
If two twisted virtual braids of degree $n$ (or their diagrams) are related by a left virtual exchange move,
then they are related by a sequence of TM1-moves (or TM0-moves and TM1-moves) and a right virtual exchange move.
\end{theorem}
\begin{proof}
Let $b$ and $b'$ be twisted virtual braid diagrams of degree $n$ related by a left virtual exchange move.
Suppose that
$$ b= \iota_1^0(b_1) \sigma_1^{-1} \iota_1^0(b_2) \sigma_1 \quad \mbox{and} \quad
b'= \iota_1^0(b_1) v_1 \iota_1^0(b_2) v_1, $$
where $b_1$ and $b_2$ are good twisted virtual braid diagrams of degree~$n-1$.
Then
$$ f_n(b) = \iota_0^1( f_{n-1}(b_1) ) \sigma_n^{-1} \iota_0^1( f_{n-1}(b_2) ) \sigma_n
\quad \mbox{and} \quad
f_n(b') = \iota_0^1( f_{n-1}(b_1) ) v_n \iota_0^1( f_{n-1}(b_2) ) v_n, $$
and hence $f_n(b)$ and $f_n(b')$ are related by a right virtual exchange move.
Since $b$ is conjugate to $F_n(b) = f_n(b)$ as elements of $TVB_n$, and
$b'$ is conjugate to $F_n(b') = f_n(b')$, we see that $b$ and $b'$ are related by a sequence of TM1-moves
(or TM0-moves and TM1-moves when we discuss them as diagrams) and a right virtual exchange move.
\end{proof}
\section{A reduced presentation of the twisted virtual braid group}
\label{sect:reduced}
L. Kauffman and S. Lambropoulou~\cite{kl} gave a reduced presentation of the virtual braid group. Motivated by their work, we give a reduced presentation of the twisted virtual braid group.
Using the reduced presentation, one can work with the twisted virtual braid group using fewer generators and relations.
In this section, we show that the presentation
of the twisted virtual braid group $TVB_n$
given in Theorem~\ref{thm:StandardPresentation} can be reduced to a presentation with $n+1$ generators and fewer
relations by rewriting $\sigma_i$ $(i=2,\ldots, n-1)$
and $\gamma_i$ $(i=2,\ldots, n)$ in terms of $\sigma_1$, $\gamma_1$ and $v_1, \dots, v_{n-1}$ as follows:
\begin{align}
\sigma_i & =(v_{i-1}\ldots v_1)(v_i \ldots v_2)\sigma_1(v_2 \ldots v_i)(v_1 \ldots v_{i-1}) & \text{ for } & i=2,\ldots, n-1, \label{1st reduction} \\
\gamma_i & =(v_{i-1}\ldots v_1)\gamma_1(v_1 \ldots v_{i-1}) & \text{ for } & i=2,\ldots, n. \label{2nd reduction}
\end{align}
See Figure~\ref{o}. These can be seen geometrically from their diagrams or algebraically from $(\ref{relC-vsv})$ and $(\ref{relC-vb})$.
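For instance, for $i=2$ these reductions read
\begin{align*}
\sigma_2 & = v_1 v_2 \sigma_1 v_2 v_1, & \gamma_2 & = v_1 \gamma_1 v_1,
\end{align*}
and for $i=3$ we have $\gamma_3 = v_2 v_1 \gamma_1 v_1 v_2$.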
\begin{figure}[h]
\centering
\includegraphics[width=6cm,height=10cm]{ktbraids.pdf}
\caption{$\sigma_i \text{ and } \gamma_i$.}
\label{o}
\end{figure}
\begin{theorem}\label{thm:ReducedPresentation}
The twisted virtual braid group $TVB_n$ has a presentation whose generators are
$\sigma_1, \gamma_1, v_1,\dots, v_{n-1}$
and the defining relations are as follows:
\begin{align}
v_i^2 & = e & \text{ for } & i=1,\ldots, n-1; \label{relB-inverse-v}\\
v_iv_j & = v_jv_i & \text{ for } & |i-j| > 1 ; \label{relB-height-vv}\\
v_iv_{i+1}v_i & = v_{i+1}v_iv_{i+1} & \text{ for } & i=1,\ldots, n-2; \label{relB-vvv}\\
\sigma_1(v_2v_3v_1v_2\sigma_1v_2v_1v_3v_2) & = (v_2v_3v_1v_2\sigma_1v_2v_1v_3v_2)\sigma_1, & & \label{relB-height-ss} \\
(v_1\sigma_1v_1)(v_2\sigma_{1}v_2)(v_1\sigma_1v_1) & = (v_2\sigma_1v_2)(v_1\sigma_{1}v_1)(v_2\sigma_1v_2), & & \label{relB-sss} \\
\sigma_1v_j & = v_j\sigma_1 & \text{ for } & j = 3, \ldots, n-1; \label{relB-height-sv}\\
\gamma_1^2 & = e, & & \label{relB-inverse-b} \\
\gamma_1v_j & = v_j\gamma_1 & \text{ for } & j = 2, \ldots, n-1; \label{relB-height-bv}\\
\gamma_1v_1\gamma_1v_1 & =v_1\gamma_1v_1\gamma_1, & & \label{relB-height-bb}\\
\gamma_1v_1v_2\sigma_1v_2v_1 & = v_1v_2\sigma_1v_2v_1\gamma_1, & & \label{relB-height-sb}\\
\gamma_{1}v_1\gamma_1\sigma_{1} \gamma_1v_1\gamma_{1} & = \sigma_1. & & \label{relB-bv}
\end{align}
\end{theorem}
In what follows, we refer to relations $(\ref{rel-inverse-v})$, $(\ref{rel-height-vv})$ and $(\ref{rel-vvv})$ or
equivalently $(\ref{relB-inverse-v})$, $(\ref{relB-height-vv})$ and $(\ref{relB-vvv})$ as the {\it virtual relations}.
\begin{lemma}[cf. \cite{kl}]
Relations $(\ref{rel-vsv})$ follow from relations $(\ref{1st reduction})$ and the virtual relations.
\end{lemma}
This lemma can be seen directly. The following three lemmas are proved in \cite{kl}, so we omit the proofs.
\begin{lemma}[Lemma~1 of \cite{kl}]
Relations $(\ref{rel-height-sv})$ follow from relations $(\ref{1st reduction})$, the virtual relations, and
relations $(\ref{relB-height-sv})$.
\end{lemma}
\begin{lemma}[Lemma~3 of \cite{kl}]
Relations $(\ref{rel-height-ss})$ follow from relations $(\ref{1st reduction})$, the virtual relations, and
relations $(\ref{relB-height-ss})$ and $(\ref{relB-height-sv})$.
\end{lemma}
\begin{lemma}[Lemma~2 of \cite{kl}]
Relations $(\ref{rel-sss})$ follow from relations $(\ref{1st reduction})$, the virtual relations, and
relations $(\ref{relB-sss})$ and $(\ref{relB-height-sv})$.
\end{lemma}
In the following proofs, we underline the expressions which we focus on.
\begin{lemma}
Relations $(\ref{rel-inverse-b})$ follow from relations $(\ref{2nd reduction})$, the virtual relations, and
relation $(\ref{relB-inverse-b})$.
\end{lemma}
\begin{proof}
\begin{align*}
\gamma_i^2
& = (v_{i-1}\ldots v_1)\gamma_1 \underline{(v_1 \ldots v_{i-1})(v_{i-1}\ldots v_1)}\gamma_1(v_1 \ldots v_{i-1})\\
& = (v_{i-1}\ldots v_1)\underline{\gamma_1^2}(v_1 \ldots v_{i-1})\\
& = \underline{(v_{i-1}\ldots v_1)(v_1 \ldots v_{i-1})}\\
& = e.
\end{align*}
\end{proof}
\begin{lemma}
Relations $(\ref{rel-height-bv})$ follow from relations $(\ref{2nd reduction})$, the virtual relations, and relations $(\ref{relB-height-bv})$.
\end{lemma}
\begin{proof}
Since $j \neq i, i + 1$, we consider the following two cases.
Case (i) Suppose $j\leq i-1$. Then $i \geq 2$ and we have
\begin{align*}
v_i\gamma_j & = \underline{v_i}(v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\underline{v_i\gamma_1}(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1\underline{v_i}(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})v_i\\
& = \gamma_jv_i.
\end{align*}
Case (ii) Suppose $j\geq i+2$. Then
\begin{align*}
v_i\gamma_j & = \underline{v_i}(v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i+2}\underline{v_iv_{i+1}v_i}v_{i-1} \ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i+2}v_{i+1}v_i\underline{v_{i+1}}v_{i-1} \ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i+2}v_{i+1}v_iv_{i-1} \ldots v_1)\underline{v_{i+1}\gamma_1}(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1\underline{v_{i+1}}(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{i-1}\underline{v_{i+1}v_iv_{i+1}}v_{i+2} \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{i-1}v_iv_{i+1}\underline{v_{i}}v_{i+2} \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})v_i\\
& = \gamma_jv_i.
\end{align*}
\end{proof}
\begin{lemma}
Relations $(\ref{rel-height-bb})$
follow from relations $(\ref{2nd reduction})$, the virtual relations, and relations $(\ref{relB-height-bv})$
and $(\ref{relB-height-bb})$.
\end{lemma}
\begin{proof} By the previous lemma, we may assume relations $(\ref{rel-height-bv})$.
It is sufficient to consider the case $j > i$.
\begin{align*}
\gamma_i\gamma_j
& = (v_{i-1}\ldots v_1) \gamma_1 (\underline{v_1 \ldots v_{i-1}}) (\underline{v_{j-1} \ldots v_1}) \gamma_1 (v_1 \ldots v_{j-1})\\
& = (v_{i-1}\ldots v_1) \underline{\gamma_1} (v_{j-1} \ldots v_1) (v_2 \ldots v_i) \underline{\gamma_1} (v_1 \ldots v_{j-1})
~~~~~~~~~~~ \text{ (by (\ref{rel-height-bv}))} \\
& = (v_{i-1}\ldots v_1) (v_{j-1} \ldots v_2) \underline{\gamma_1 v_1 \gamma_1} (v_2 \ldots v_i) (v_1 \ldots v_{j-1})
~~~~~~~~~~~ \text{ (by (\ref{relB-height-bb}))} \\
& = (\underline{v_{i-1}\ldots v_1}) \underline{(v_{j-1} \ldots v_2) v_1} \gamma_1 v_1 \gamma_1 v_1 (v_2 \ldots v_i) (v_1 \ldots v_{j-1}) \\
& = (v_{j-1} \ldots v_2 v_1) (v_i \ldots v_2) \gamma_1 v_1 \gamma_1 v_1 (\underline{v_2 \ldots v_i}) (\underline{v_1 \ldots v_{j-1}}) \\
& = (v_{j-1} \ldots v_2 v_1) (v_i \ldots v_2) \gamma_1 v_1 \gamma_1 \underline{v_1} (\underline{v_1} \ldots v_{j-1}) (v_1 \ldots v_{i-1}) \\
& = (v_{j-1} \ldots v_2 v_1) (v_i \ldots v_2) \underline{\gamma_1} v_1 \underline{\gamma_1} (v_2 \ldots v_{j-1}) (v_1 \ldots v_{i-1})
~~~~~~~~~~~ \text{ (by (\ref{rel-height-bv}))} \\
& = (v_{j-1} \ldots v_2 v_1) \gamma_1 (\underline{v_i \ldots v_2}) \underline{v_1 (v_2 \ldots v_{j-1})} \gamma_1 (v_1 \ldots v_{i-1}) \\
& = (v_{j-1} \ldots v_2 v_1) \gamma_1 v_1 (v_2 \ldots v_{j-1}) (v_{i-1} \ldots v_1) \gamma_1 (v_1 \ldots v_{i-1}) \\
& = \gamma_j\gamma_i.
\end{align*}
\end{proof}
\begin{lemma}
Relations $(\ref{rel-height-sb})$
follow from relations $(\ref{1st reduction})$, $(\ref{2nd reduction})$, the virtual relations, relations
$(\ref{relB-height-sv})$,
$(\ref{relB-height-bv})$ and
$(\ref{relB-height-sb})$.
\end{lemma}
\begin{proof}
By previous lemmas, we may assume relations $(\ref{rel-height-sv})$ and $(\ref{rel-vsv})$ or equivalently $(\ref{relC-vsv})$.
First we show $(\ref{rel-height-sb})$ when $j=1$, i.e.,
$\sigma_i\gamma_1=\gamma_1\sigma_i$ for $i \neq 1$.
We apply induction on $i$, with base case $i=2$.
The relation $\sigma_2\gamma_1=\gamma_1\sigma_2$ follows from $(\ref{1st reduction})$ and $(\ref{relB-height-sb})$.
Assuming $\sigma_i\gamma_1=\gamma_1\sigma_i$, we obtain $\sigma_{i+1}\gamma_1=\gamma_1\sigma_{i+1}$ as follows:
\begin{align*}
\sigma_{i+1}\gamma_1 & = v_{i}v_{i+1}\sigma_{i}v_{i+1}\underline{v_{i}\gamma_1}\\
& = v_{i}v_{i+1}\sigma_{i}\underline{v_{i+1}\gamma_1}v_{i}\\
& = v_{i}v_{i+1}\underline{\sigma_{i}\gamma_1}v_{i+1}v_{i}\\
& = v_{i}\underline{v_{i+1}\gamma_1}\sigma_{i}v_{i+1}v_{i}\\
& = \underline{v_{i}\gamma_1}v_{i+1}\sigma_{i}v_{i+1}v_{i}\\
& = \gamma_1 v_{i}v_{i+1}\sigma_{i}v_{i+1}v_{i}\\
& = \gamma_1\sigma_{i+1}.
\end{align*}
Hence,\begin{equation}
\sigma_i\gamma_1=\gamma_1\sigma_i \quad \text{ for } i \neq 1. \label{3rd reduction}
\end{equation}
Now, we show relations $(\ref{rel-height-sb})$: $\sigma_i\gamma_j = \gamma_j\sigma_i$ for $j\neq i, i+1$.
Case (i) Suppose $j\leq i-1$. Then
\begin{align*}
\sigma_i\gamma_j & = \underline{\sigma_i }(v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\underline{\sigma_i \gamma_1}(v_1 \ldots v_{j-1}) ~~~~~~~~~~~ \text{ (by (\ref{3rd reduction}))}\\
& = (v_{j-1}\ldots v_1)\gamma_1\underline{\sigma_i }(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\sigma_i\\
& = \gamma_j\sigma_i.
\end{align*}
Case (ii) Suppose $j\geq i+2$. Then
\begin{align*}
\sigma_i\gamma_j & = \underline{\sigma_i }(v_{j-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i+2}) \underline{\sigma_i} (v_{i+1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i+2}) v_{i+1} v_i \sigma_{i+1} \underline{v_i v_{i+1}} (\underline{v_{i+1} v_i} v_{i-1} \ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i})\underline{\sigma_{i+1} }(v_{i-1}\ldots v_1)\gamma_1(v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{i})(v_{i-1}\ldots v_1)\underline{\sigma_{i+1}\gamma_1 }(v_1 \ldots v_{j-1})~~~~~~~~~~~~~~ \text{ (by (\ref{3rd reduction}))}\\
& = (v_{j-1}\ldots v_{1}) \gamma_1 \underline{\sigma_{i+1}} (v_1 \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{1}) \gamma_1 (v_1 \ldots v_{i-1}) \underline{\sigma_{i+1}} (v_i \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{1}) \gamma_1 (v_1 \ldots v_{i-1}) v_i v_{i+1} \sigma_i \underline{v_{i+1} v_i} ( \underline{v_i v_{i+1}} v_{i+2} \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{1}) \gamma_1 (v_1 \ldots v_{i+1}) \underline{\sigma_{i}} (v_{i+2} \ldots v_{j-1})\\
& = (v_{j-1}\ldots v_{1}) \gamma_1 (v_1 \ldots v_{i+1})(v_{i+2} \ldots v_{j-1})\sigma_{i} \\
& = \gamma_j\sigma_i.
\end{align*}
\end{proof}
\begin{lemma}
Relations $(\ref{rel-bv})$
follow from relations $(\ref{2nd reduction})$ and the virtual relations.
\end{lemma}
\begin{proof}
\begin{align*}
\gamma_{i+1} v_i
& = (v_i \ldots v_1) \gamma_1 (v_1 \ldots \underline{v_i}) \underline{v_i} \\
& = v_i ( v_{i-1} \ldots v_1) \gamma_1 (v_1 \ldots v_{i-1}) \\
& = v_i \gamma_i.
\end{align*}
\end{proof}
\begin{lemma}
Relations $(\ref{rel-twist-III})$
follow from relations $(\ref{1st reduction})$, $(\ref{2nd reduction})$, the virtual relations, and relations
$(\ref{relB-height-bv})$ and $(\ref{relB-bv})$.
\end{lemma}
\begin{proof}
\begin{align*}
\gamma_{i+1}\gamma_{i}\sigma_i\gamma_{i}\gamma_{i+1}
& = (v_i\ldots v_1)\gamma_1(v_1\ldots v_i)(v_{i-1} \ldots v_1)\gamma_1 \underline{(v_1 \ldots v_{i-1})(v_{i-1} \ldots v_1)}(v_{i} \ldots v_2)\\
& \sigma_1(v_2 \ldots v_{i}) \underline{(v_1 \ldots v_{i-1})(v_{i-1} \ldots v_1)} \gamma_1(v_1 \ldots v_{i-1})(v_i \ldots v_1)\gamma_1(v_1 \ldots v_i)\\
& = (v_i\ldots v_1)\gamma_1\underline{(v_1\ldots v_{i-1} v_i v_{i-1} \ldots v_1)} \gamma_1(v_{i} \ldots v_2)\sigma_1(v_2 \ldots v_{i})\gamma_1\\
& \underline{(v_1 \ldots v_{i-1} v_i v_{i-1} \ldots v_1)}\gamma_1(v_1 \ldots v_i)\\
& = (v_i\ldots v_1)\underline{\gamma_1}(v_i\ldots v_2 v_1 v_2 \ldots v_i)\underline{\gamma_1}(v_{i} \ldots v_2)\sigma_1(v_2 \ldots v_{i}) \underline{\gamma_1}\\
& (v_i \ldots v_2 v_1 v_2 \ldots v_i) \underline{\gamma_1}(v_1 \ldots v_i)\\
& = (v_i\ldots v_1)(v_i\ldots v_{2})\gamma_1 v_1\underline{(v_{2} \ldots v_i)(v_{i} \ldots v_2)}\gamma_1\sigma_1\gamma_1\underline{(v_2 \ldots v_{i})(v_i \ldots v_{2})}\\
& v_1 \gamma_1 (v_{2} \ldots v_i)(v_1 \ldots v_i)\\
& = (v_i\ldots v_1)(v_i\ldots v_{2})\underline{\gamma_1 v_1\gamma_1\sigma_1\gamma_1v_1\gamma_1}(v_{2} \ldots v_i)(v_1 \ldots v_i)\\
& = v_i\underline{(v_{i-1}\ldots v_1)(v_i\ldots v_{2})\sigma_1(v_{2} \ldots v_i)(v_1 \ldots v_{i-1})}v_i\\
& = v_i\sigma_iv_i.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:ReducedPresentation}]
In the twisted virtual braid group, one verifies that all relations $(\ref{1st reduction})$--$(\ref{relB-bv})$ are valid, either by a geometric argument using diagrams or by an algebraic argument using relations $(\ref{rel-height-ss})$--$(\ref{rel-twist-III})$. On the other hand, the relations $(\ref{rel-height-ss})$--$(\ref{rel-twist-III})$ follow from the relations $(\ref{1st reduction})$--$(\ref{relB-bv})$ by the previous lemmas.
\end{proof}
\section*{Concluding remarks}
In this paper we study twisted virtual braids and the twisted virtual braid group, and provide theorems for twisted links corresponding to the Alexander theorem and the Markov theorem. We also provide a group presentation and a reduced group presentation of the twisted virtual braid group.
As a future work, it will be interesting to study the pure twisted virtual braid group and construct invariants for twisted virtual braids and twisted links.
For example, biquandles with structures related to twisted links introduced in \cite{ns} may be discussed by using twisted virtual braids.
\section*{Acknowledgements}
K.~Negi would like to thank the University Grants Commission (UGC), India, for a Research Fellowship with NTA Ref.~No.~191620008047. M.~Prabhakar acknowledges the support given by the Science and Engineering Research Board (SERB), Department of Science $\&$ Technology, Government of India, under grant-in-aid Mathematical Research Impact Centric Support (MATRICS) with F.~No.~MTR/2021/00394. This work was partially supported by the FIST program of the Department of Science and Technology, Government of India, Reference No.~SR/FST/MS-I/2018/22(C), and was supported by JSPS KAKENHI Grant Number JP19H01788.
|
{
"arxiv_id": "2302.13210",
"language": "en",
"timestamp": "2023-02-28T02:13:08",
"url": "https://arxiv.org/abs/2302.13210",
"yymm": "2302"
} | \section{Introduction}
The vast design space in neuromorphic computing is one of the challenges that can slow down the deployment of optimized
architectures. This space encompasses multiple aspects including
network architecture, neuron models, synaptic plasticity mechanisms,
and input encoding. This problem is compounded if we also consider potential hardware implementations, which span from digital ASICs to hybrid digital/analog designs to a multiplicity of emergent devices.
Some of that complexity is common to traditional hardware design. Other aspects, however, are central to the nature of neuromorphic computing, and we can find counterparts in biological systems: beyond the simplified neuron models, there are hundreds of neuropeptides, neuromodulators, receptors, and signaling mechanisms that have been selected for their ability to enable complex functionality of the central nervous system. Disruptions in this space often lead to dramatic drops in performance, as showcased in the literature (see, for instance, Ref.~\cite{KOOB20041515}).
In recent years, there has been an increasing interest in the use of machine learning approaches for hardware design, using techniques such as reinforcement learning for automatic placement during the chip design stage.\cite{Zhang_2022} One possibility is to take the automatic machine learning (AutoML) approaches used in deep learning as an inspiration to tackle this issue in the context of neuromorphic computing. Steps such as input encoding, neuron and synapse model selection, synaptic plasticity rule selection, network architecture selection, and hyperparameter optimization can potentially be automated. In the context of neuromorphic architectures, this process would help us quickly identify the subset of the design space that is most promising for a specific application or that would capture the benefits and behavior
of emergent devices. This is a crucial step to
bring neuromorphic computing to the mainstream,
particularly to resource-constrained, SWaP-C scenarios.
The same approach can be used to better understand how to adapt spiking neural networks (SNNs) to specific workflows.
A strategy often used in the literature is to hide that complexity from the user of neuromorphic hardware, for instance by operating at higher levels of
abstraction.\cite{Nengo, Whetstone} While useful in their contexts, this strategy does not work when we are trying to leverage
the unique properties of spiking neural networks. In that case, access to the lower level details
of the SNN is crucial. For instance, the two generations of Intel's Loihi\cite{Loihi} are designed to be flexible and enable a wide range of neuron and synaptic plasticity models. The ability to effectively search for optimal configurations within this vast design space can help accelerate the development of novel algorithms in existing
neuromorphic hardware.
In this work, we explore such an approach: we have demonstrated the ability to carry out massively parallel configuration searches to identify optimal regions in the design space of neuromorphic architectures, using their performance on specific tasks as the optimization target. To this end, we have developed spikelearn, a lightweight framework that can simulate spiking neural networks in a wide range of non-trivial tasks, including on-chip learning with a wide range of synaptic plasticity rules spanning from neuro-inspired to memristor-based. This framework allows us to run non-trivial tasks with millions of synapses using a single core on leadership computing machines, opening up the ability to do configuration searches at the exascale. It can also operate on smaller machines relying on just a few cores. To efficiently search the design space of neuromorphic architectures, we have coupled this framework with a parallel asynchronous model-based search approach. As an exemplar, we have explored two configuration spaces focused on online learning using a spiking analog of a covariance learning rule. The optimization runs are carried out under stringent conditions both in terms of data availability and number of spikes per sample, which allow us to explore the transfer of these configurations to other conditions.
\section{Methodology}
\subsection{Simulation framework: streamnet and spikelearn}
One of the challenges of applying AutoML to neuromorphic architectures is that any simulation framework used needs to satisfy the following requirements: 1) It has to be able to carry out workflows at the scale of the relevant applications, which may involve running hundreds of thousands of samples in some cases. 2) It has to allow the simulation of a wide range of neurons, architectures, and synaptic plasticity rules. 3) It should be able to incorporate behaviors representative of novel devices that go beyond those typically found in spiking simulators. 4) It should be able to run effectively on high-performance computing machines.
To meet these four requirements, we have developed a lightweight tool based on streamnets (Fig. \ref{fig:streamnet}). A \emph{streamnet} is a data structure comprising a graph of \emph{Elements} with two ordered sets of \emph{input nodes} and \emph{output nodes}. A key restriction with respect to what would amount to a netlist is that each input node can be connected to just one output node (indegree of one). This restriction allows us to implement a simple execution model where, at each time step, each element pulls the output from its corresponding output node and uses it to compute the next step. In order to interact with the outer world, a streamnet also defines a list of input ports and output ports.
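As an illustration, the following is a minimal sketch of this execution model in Python; the class and method names are ours and do not refer to the actual spikelearn interface.
\begin{verbatim}
# Minimal sketch of the streamnet execution model (hypothetical names,
# not the actual spikelearn API).
class Element:
    def step(self, *inputs):
        """Advance one time step and return a tuple of outputs."""
        raise NotImplementedError

class StreamNet:
    def __init__(self):
        self.elements = {}  # name -> Element
        self.wiring = {}    # (name, input_index) -> (src_name, output_index)
        self.last = {}      # name -> outputs produced at the previous step

    def add(self, name, element, initial_outputs=(0.0,)):
        self.elements[name] = element
        self.last[name] = initial_outputs

    def connect(self, dst, input_index, src, output_index=0):
        # Indegree of one: each input node is fed by exactly one output node.
        self.wiring[(dst, input_index)] = (src, output_index)

    def advance(self):
        new = {}
        for name, elem in self.elements.items():
            inputs, i = [], 0
            while (name, i) in self.wiring:
                src, k = self.wiring[(name, i)]
                inputs.append(self.last[src][k])
                i += 1
            new[name] = elem.step(*inputs)
        self.last = new  # all elements advance with a common time step
\end{verbatim}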
\begin{figure}[thbp]
\centerline{\includegraphics[width=8cm]{streamnet.png}}
\caption{Streamnet is the data structure at the core of the simulation framework: a streamnet represents a directed graph with labeled nodes where inputs to each node are restricted to an indegree of one. This structure enables the simulation of heterogeneous networks where all components advance at a common time step. Computation within each node is hidden from the rest of the network, which allows us to integrate SPICE, HDL, Tensorflow/Pytorch, SNN, and physics-based components in a modular fashion.}
\label{fig:streamnet}
\end{figure}
The streamnet structure is ideally suited to the simulation of neural networks, since: 1) the exact computation carried out within each element is hidden from the streamnet; consequently, we can integrate multiple simulation tools as long as they can be encapsulated within a single Element; 2) the time step $\Delta t$ can be chosen to represent the minimum propagation delay in the network (axon delay); 3) it allows swapping elements without breaking the simulation as long as they share the same interface; and 4) a streamnet is also an element, which adds a layer of composability that is very similar to that found in machine learning frameworks such as PyTorch.
Spikelearn implements spiking neural networks using the streamnet data structure as the backbone. It provides a series of synaptic plasticity rules that are meant to be a superset of those implemented in Loihi,\cite{Loihi} extending them to incorporate ternary synaptic plasticity rules that use a modulatory signal to regulate learning, potentially in a synapse-by-synapse fashion, as well as synaptic plasticity models based on the VTEAM memristor model.\cite{VTEAM}
Spikelearn has been released and can be found at \url{https://github.com/spikelearn/spikelearn}
\subsection{Massively parallel optimization with DeepHyper}
In this work, we seek to optimize neuromorphic architectures across the choice of neuron models,
synaptic models, and synaptic plasticity models and the tunable parameters
in each of these models. Formally, the parameter space can be written
as $x = (x_{\mathcal{I}},x_{\mathcal{R}},x_{\mathcal{C}})$, where
$\mathcal{I},\mathcal{R},\mathcal{C}$ respectively denote integer,
continuous, and categorical parameters. The number of neurons in a given layer, the leakage time of a spiking neuron, and the type of synaptic plasticity rule are examples of each of these in the context of this work. The resulting optimization problem,
which seeks to maximize the accuracy of the model as a function of
this mixed space, is, in fact, a non-convex black-box mixed-integer
nonlinear optimization~\cite{burer2012non}.
Due to the heterogeneity of the search space and the black-box nature of the simulation framework and objective function, we adopted a parallel asynchronous
model-based search approach (AMBS)~\cite{Balaprakash_DH_2018}
to find the high-performing parameter configurations.
The AMBS approach consists of sampling-based Bayesian
optimization that samples a number of parameter configurations
from this space $x$ and progressively fits a surrogate model
(\emph{random forests} in our case) over the parameter configurations'
accuracy metric space. This surrogate model is updated asynchronously
as new configurations are evaluated by parallel processes, which are
then used to obtain configurations that will be evaluated in the next
iteration.
We adopt \emph{random forests} over other generic choices
such as \emph{Gaussian processes} due to the ability of the former to build
surrogate models effectively in a mixed space. It does so by building
multiple decision trees and using bootstrap aggregation (or bagging)
to combine them to produce a model with better predictive accuracy
and lower variance. In addition, the acquisition function plays an
important role in maintaining the exploration-exploitation balance
during the search. With AMBS we adopt the {\it lower confidence bound}
(LCB) acquisition function.
Since AMBS enables asynchronous search, a large number of hyperparameter
configurations can be evaluated in parallel and inform the surrogate
modeling, which is crucial for search over high-dimensional and complex
mixed parameter spaces. The complexities of running such large numbers of evaluations
on high-performance computing systems are handled by tight integration with
workflow management frameworks such as Ray~\cite{moritz2018ray} and MPI~\cite{gropp1999using}.
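As a simplified illustration of this search loop, the following sequential sketch (our own, not the DeepHyper implementation) uses scikit-learn's random forest as the surrogate and the spread of the per-tree predictions as the uncertainty estimate for the LCB acquisition; in practice the evaluations are carried out asynchronously in parallel.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def lcb_search(sample_config, encode, evaluate, n_init=16, n_iter=64,
               n_candidates=512, kappa=1.96):
    """Sequential sketch of model-based search with an LCB acquisition.

    sample_config() draws a random configuration from the mixed space,
    encode(c) maps it to a numeric vector, and evaluate(c) returns a
    loss (e.g., negative accuracy) to be minimized.
    """
    configs = [sample_config() for _ in range(n_init)]
    losses = [evaluate(c) for c in configs]
    for _ in range(n_iter):
        rf = RandomForestRegressor(n_estimators=100)
        rf.fit(np.array([encode(c) for c in configs]), np.array(losses))
        cands = [sample_config() for _ in range(n_candidates)]
        X = np.array([encode(c) for c in cands])
        per_tree = np.stack([t.predict(X) for t in rf.estimators_])
        mu, sd = per_tree.mean(axis=0), per_tree.std(axis=0)
        best = cands[int(np.argmin(mu - kappa * sd))]  # lower confidence bound
        configs.append(best)
        losses.append(evaluate(best))
    return configs[int(np.argmin(losses))]
\end{verbatim}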
\section{Experiments}
In order to explore the ability to efficiently search and optimize neuromorphic architectures we have used
online, on-chip learning as an exemplar.
\subsection{Task structure and optimization metrics}
The tasks explored in this work are all instances
of stream learning, where the architecture is
learning directly from a stream of inputs:
starting from random synaptic weights, learning takes
place in real time and, after
a predetermined number of samples, we evaluate the
achieved accuracy in a separate stream learning assay where learning is disabled. These tasks can be viewed as
\emph{metalearning experiments}, where we are evaluating
and optimizing the system's ability to learn.
In the supervised case, the neuromorphic architecture receives class information as additional input and
the final classification
accuracy is used as the target for optimization.
In the unsupervised case, we follow the same task structure, except that we use an $l_2$ metric
to compute the distance between the input and the projection in the feature space defined by the synaptic weights.
\subsection{Neuron model and synaptic plasticity rule}
We have considered leaky integrate and fire neurons given by the following equations:
\begin{eqnarray}
v(n+1) & = & \left(1-s(n)\right)\left(v(n) e^{-1/\tau} + x(n)\right) \\
s(n+1) & = & \mathrm{H}\left(v(n)-v_\mathrm{th}\right)
\end{eqnarray}
Here $v(n)$ is the membrane potential, $s(n)$ is the output spike of the neuron, $x(n)$ represents the sum of all synaptic inputs at time step $n$,
$\mathrm{H}\left(\cdot\right)$ is the Heaviside function, $\tau$ is
the characteristic decay time of the membrane potential, and $v_\mathrm{th}$ is the firing threshold voltage of the neuron. This
model is analogous to Loihi's model of a leaky integrate and fire
neuron.\cite{Loihi} All synapses can be either pass-through or low-pass.
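The following is a direct transcription of these update equations as a Python/NumPy sketch (the function and argument names are ours):
\begin{verbatim}
import numpy as np

def lif_step(v, s, x, tau, v_th):
    """One time step of the leaky integrate-and-fire update above.

    v, s, x are arrays over a population of neurons; the membrane
    potential is zeroed on the step following a spike.
    """
    v_next = (1.0 - s) * (v * np.exp(-1.0 / tau) + x)
    s_next = (v >= v_th).astype(float)  # Heaviside of v(n) - v_th
    return v_next, s_next
\end{verbatim}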
In supervised experiments we have used
a ternary, modulated non-Hebbian synaptic plasticity
rule that is inspired by experimental neuroscience.\cite{Aso_MB_2014}
This rule considers presynaptic, postsynaptic, and modulatory traces ($t_e$, $t_o$, $t_m$), where each trace is computed as: $t(n) = a t(n-1) + b s(n)$, so that the change in synaptic weight $\Delta W$ is given by:
\begin{equation}
\label{MSErule}
\Delta W = l_r t_e (R_0 t_m - t_o)
\end{equation}
Here $l_r$ represents the learning rate, and $R_0$ is a proportionality constant relating the modulatory input and the post-synaptic output of the neuron.
Finally, we can further impose upper and lower bounds on the synaptic weights. These can be defined as $[0, W_\mathrm{max}]$ or $[-W_\mathrm{max}, W_\mathrm{max}]$ depending on the nature of the synapse (excitatory/inhibitory vs hybrid).
This model can be viewed in two ways: it resembles some of the covariance rate models,\cite{dayan2005theoretical} where the modulatory signal plays the role of a moving threshold, except that it tracks the anticovariance instead. It can also be viewed as a spiking counterpart of a mean square error loss function. Consequently, in this work we will refer to this rule as the MSE rule.
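As a sketch, the rule can be applied to a fully connected layer as follows (Python/NumPy; the outer-product arrangement of the traces over a weight matrix and the variable names are our own assumptions):
\begin{verbatim}
import numpy as np

def update_trace(t, s, a, b):
    # t(n) = a * t(n-1) + b * s(n)
    return a * t + b * s

def mse_weight_update(W, t_e, t_o, t_m, lr, R0, w_min, w_max):
    """Apply the weight update above to a dense layer.

    W has shape (n_post, n_pre); t_e is the presynaptic trace (n_pre,),
    t_o the postsynaptic trace (n_post,), and t_m the modulatory trace
    (n_post,). The outer product applies the rule synapse by synapse.
    """
    dW = lr * np.outer(R0 * t_m - t_o, t_e)
    # Clip to [0, Wmax] or [-Wmax, Wmax] depending on the synapse type.
    return np.clip(W + dW, w_min, w_max)
\end{verbatim}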
\subsection{Test cases}
\subsubsection{Learning rule optimization in a shallow network}
The shallow model consists simply of a layer of neurons each receiving direct inputs from the dataset. Inputs are encoded as Poisson spike trains. This shallow model will help us focus on the synaptic plasticity rule and evaluate the fraction of the design space that
is conducive to online learning.
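For reference, a minimal sketch of such a Poisson encoding of a normalized input vector into spike trains (our own illustration) is:
\begin{verbatim}
import numpy as np

def poisson_encode(x, n_steps, max_rate=1.0, seed=None):
    """Encode an input vector x in [0, 1] as Poisson spike trains.

    Returns a binary array of shape (n_steps, len(x)); each entry spikes
    with probability proportional to the corresponding input intensity.
    """
    rng = np.random.default_rng(seed)
    p = np.clip(max_rate * np.asarray(x, dtype=float), 0.0, 1.0)
    return (rng.random((n_steps, p.size)) < p).astype(np.uint8)
\end{verbatim}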
The design space for optimization is described in Table \ref{tab:shallow}.
\begin{table}[htbp]
\caption{Design space for the shallow case}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Component & Parameter & Explanation & Range \\
\hline
output neuron & $\tau$ & Leakage time & 1-8 \\
\hline
& $\tau$ & Low-pass filter of input spk. & 0.1-4 \\
\cline{2-4}
& $\tau_m$ & Low-pass filter of mod. spk. & 0.1-4 \\
\cline{2-4}
& $t_e$: $(a,b)$ & Presynaptic trace & 0.05-1 \\
\cline{2-4}
MSE syn. & $t_o$: $(a,b)$ & Postsynaptic trace & 0.05-1 \\
\cline{2-4}
& $t_m$: $(a,b)$ & Modulatory trace & 0.05-1 \\
\cline{2-4}
& $R_0$ & Weight of modulatory trace & 0.1-4 \\
\cline{2-4}
& $l_r$ & Learning rate & $10^{-4}$-0.5 \\
\cline{2-4}
& $W_\mathrm{lim}$ & Weight limit & 0.01-0.5 \\
\hline
\end{tabular}
\label{tab:shallow}
\end{center}
\end{table}
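As an illustration, the design space of Table~\ref{tab:shallow} can be encoded as a simple box-constrained dictionary. In the actual experiments this definition is handed to deepHyper's asynchronous model-based search; the plain dictionary and uniform sampler below are only a schematic stand-in for that problem definition and do not use the deepHyper API:
\begin{verbatim}
import random

# Ranges as listed in the table; each of the three traces has its own
# (a, b) pair drawn from the 0.05-1 range.
SHALLOW_SPACE = {
    "tau_out": (1.0, 8.0),  "tau_in": (0.1, 4.0),  "tau_mod": (0.1, 4.0),
    "te_a": (0.05, 1.0),    "te_b": (0.05, 1.0),
    "to_a": (0.05, 1.0),    "to_b": (0.05, 1.0),
    "tm_a": (0.05, 1.0),    "tm_b": (0.05, 1.0),
    "R0": (0.1, 4.0),       "l_r": (1e-4, 0.5),    "W_lim": (0.01, 0.5),
}

def sample_configuration(space=SHALLOW_SPACE, rng=random):
    """Draw one configuration uniformly from the box-shaped design space."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
\end{verbatim}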
\begin{figure}[thbp]
\centerline{\includegraphics[width=8cm]{network.png}}
\caption{Architecture for the complex case, inspired by the
olfactory system of the insect brain}
\label{fig:mb}
\end{figure}
\subsubsection{Optimization of a complex network}
The complex model implements a deeper architecture inspired by the olfactory system of the insect brain.\cite{Aso_MB_2014,Strausfeld_2012} As shown in Figure \ref{fig:mb}, inputs are passed to the glomeruli layer and to a lateral neuron which inhibits the neurons in the glomeruli. These glomeruli then project into a sparse layer (Kenyon cells) before densely fanning in to a set of output neurons. The glomeruli also project into an inhibitory neuron that inhibits the whole population of Kenyon cells. Finally, a modulatory signal is encoded by a set of modulatory neurons that mimic the dopamine clusters innervating the synapses between Kenyon cells and the output neurons. These modulatory neurons provide the supervised information for each of the categories, as in the shallow case. Learning takes place at the synapses between the mushroom body and the output neurons, for a total of 50,000 active synapses in the case of MNIST.
The resulting design space is shown in Table \ref{tab:complex}, where we include both the parameters and the range considered during the configuration search.
\begin{table}[htbp]
\caption{Design space for the complex case}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Component & Parameter & Explanation & Range \\
\hline
glom. neuron & $\tau$ & Leakage time & 1-8 \\
\hline
lat. neuron & $\tau$ & Leakage time & 1-8 \\
\hline
kcell. neuron & $\tau$ & Leakage time & 1-8 \\
\hline
inhib. neuron & $\tau$ & Leakage time & 1-8 \\
\hline
mbon neuron & $\tau$ & Leakage time & 1-8 \\
\hline
inp2glom syn. & $W_0$ & Synaptic weight & 2-10 \\
\hline
inp2lat syn. & $W_0$ & Synaptic weight & 0-0.5 \\
\hline
lat2glom syn. & $W_0$ & Synaptic weight & 0-2 \\
\hline
glom2inh syn. & $W_0$ & Synaptic weight & 0-0.2 \\
\hline
glom2kc syn. & $W_0$ & Synaptic weight & 0-0.2 \\
\cline{2-4}
& $p_0$ & Connection probability & 0.01-0.05 \\
\hline
& $\tau$ & Low-pass filter of input spk. & 0.1-4 \\
\cline{2-4}
& $\tau_m$ & Low-pass filter of mod. spk. & 0.1-4 \\
\cline{2-4}
& $t_e$: $(a,b)$ & Presynaptic trace & 0.05-1 \\
\cline{2-4}
kc2mbon syn. & $t_o$: $(a,b)$ & Postsynaptic trace & 0.05-1 \\
\cline{2-4}
& $t_m$: $(a,b)$ & Modulatory trace & 0.05-1 \\
\cline{2-4}
& $R_0$ & Weight of modulatory trace & 0.1-4 \\
\cline{2-4}
& $l_r$ & Learning rate & $10^{-4}$-0.5 \\
\cline{2-4}
& $W_\mathrm{lim}$ & Weight limit & 0.01-0.5 \\
\hline
\end{tabular}
\label{tab:complex}
\end{center}
\end{table}
\section{Results}
In this work, we optimized the two architectures against the online learning of the MNIST dataset. In each case we streamed
10,000 images through the architectures, which is 1/6 of an epoch, starting from a random configuration. The goal is therefore to identify configurations leading to fast learning, a desirable attribute for online learning scenarios. We use 8 timesteps per sample: this reduces the number of output spikes to 4 in the case of the shallow network, and 6 in the complex case.
In Figure \ref{fig:shallow} we show the
evolution of task accuracy during a single configuration search using
spikelearn and deepHyper for the shallow case. The search algorithm explored a total of 10,204 configurations. The results obtained indicate that the algorithm could effectively identify a broad region of high accuracy. In this exemplar case, the implication is that the MSE rule described in Section III can be an effective rule for supervised learning in spiking neural networks. More broadly, this validates our methodology for
architecture optimization.
\begin{figure}[thbp]
\centerline{\includegraphics[width=8cm]{accuracy_v_config_shallow.png}}
\caption{Accuracy on the online learning task obtained during the search of the configuration space for the shallow case. The concentration of data points around the top accuracy values is consistent with the MSE synaptic plasticity rule enabling supervised learning under a wide range of parameters.}
\label{fig:shallow}
\end{figure}
Similarly, we show a snapshot of a single search for the complex case in Figure \ref{fig:complex}, comprising 1,059 configurations. In this case, the data points are more evenly spread across the accuracy range. This is consistent with a narrower region of the search space leading to high accuracy in the online learning task, probably due to the higher dimensionality of the configuration space and the fact that in this complex case we are trying to optimize the parameters for all neurons and synapses concurrently. In Figure \ref{fig:hist} we show a comparison between the distributions of task accuracy achieved during the searches in
the shallow and complex case. The distribution for the shallow case is heavily skewed towards the high accuracy values compared to the complex case.
\begin{figure}[thbp]
\centerline{\includegraphics[width=8cm]{accuracy_v_config_mb.png}}
\caption{Accuracy on the online learning task obtained during
a single search of the configuration space for the complex case. The higher dimensional configuration space leads to a more homogeneous distribution of accuracy values, consistent with a smaller subset of the parameter space being capable of achieving high accuracy in the online learning task.}
\label{fig:complex}
\end{figure}
\begin{figure}[thbp]
\centerline{\includegraphics[width=8cm]{histograms.png}}
\caption{Distribution of task accuracies obtained during a single
configuration search using deepHyper for the shallow and complex cases.}
\label{fig:hist}
\end{figure}
Finally, the optimization experiments were carried out using only 10,000 samples and a small number of simulation steps. This may result in accuracy values that are lower than those expected when using longer encodings. In order to understand the robustness of the configurations identified by the search algorithm, we carried out experiments where we explored longer tasks, both in terms of the number of samples and of the number of spikes per sample. In Figure \ref{fig:transfer}
we show the evolution of task accuracy during the online learning of
the MNIST task for three different numbers of steps per sample: 16, 20, and 24. The results show that for 20 steps and higher, the classification accuracy reaches values close to 93\%. These values are consistent with those reported in prior works training spiking neural networks with stochastic gradient descent methods,\cite{ayg_spikes} indicating that the optimization process indeed identifies configurations that are close to optimal. This is a necessary step to explore more complex metalearning experiments involving distributions of tasks.
\begin{figure}[thbp]
\centerline{\includegraphics[width=8cm]{plot_transfer.png}}
\caption{Evolution of classification accuracy during the online learning task for different numbers of steps per sample. When 20 steps are used, the classification accuracy exceeds 93\%, which is consistent with the values obtained using stochastic gradient
descent methods.}
\label{fig:transfer}
\end{figure}
\section{Conclusions}
In this work we have demonstrated the application of AutoML-inspired approaches to the exploration and optimization of neuromorphic architectures. Through the combination of a flexible simulation framework (spikelearn) and a parallel asynchronous model-based search strategy (deepHyper), we have been able to explore the configuration space of shallow and deep spiking neural networks and identify optimal configurations that maximize their performance on a predetermined task. The simulation framework has been designed to be lightweight and to seamlessly integrate different types of simulation approaches for each of the components, from HDL to physics-based models and conventional artificial neural networks. As part of this work, we have demonstrated its ability to run in high performance computing environments and leverage the extreme parallelism afforded by these facilities to explore a large number of configurations, more than 10,000 in the shallow case.
Through the exemplar problem, we have been able to identify a novel modulated synaptic plasticity rule for online supervised learning in spiking architectures. Beyond this specific case, the results presented in this work demonstrate that AMBS approaches provide
a viable pathway towards application-driven optimization of neuromorphic architectures. We are currently working to extend the design space to incorporate emergent devices such as memristors and memtransistors into the spiking architectures for high energy and nuclear physics detectors.
Finally, it is important to point out that a key requirement for the development of neuromorphic architectures capable of on-chip learning is the identification of synaptic plasticity rules that are robust and performant against a broad distribution of tasks. The exemplar case explored in this work, while in itself trivial, provides a starting point for more robust metalearning experiments involving neuromorphic architectures in areas such as signal processing or autonomous agents.
|
{
"arxiv_id": "2302.13249",
"language": "en",
"timestamp": "2023-02-28T02:14:07",
"url": "https://arxiv.org/abs/2302.13249",
"yymm": "2302"
} | \section{Introduction}
Over the past two decades, 3d $\mathcal N=4$ mirror symmetry has attracted a lot
of attention from both physicists and mathematicians (see, for example,
\cite{BFN,BDG,N} and references therein). It is also closely related
to the theory of \textit{symplectic duality} of Braden et al. \cite{BPW,BLPW}.
If two (possibly singular) varieties are symplectic dual to each
other, then there are highly nontrivial identities relating
their geometry and topology. One of the properties predicted
by 3d $\mathcal N=4$ mirror symmetry and
symplectic duality is Hikita's conjecture.
Given a pair of symplectic dual conical symplectic singularities,
Hikita's conjecture relates the coordinate ring of one of them
to the cohomology ring of a symplectic
resolution of the other; it is stated as follows.
\begin{conjecture}[Hikita {\cite[Conjecture 1.3]{Hi}}]
Let $X$ and $X^{!}$ be a pair of symplectic dual conical
symplectic singularities over $\mathbb{C}$.
Suppose $X^{!}$ admits
a conical symplectic resolution $\tilde{X}^{!}\rightarrow X^{!}$,
and $T$ is a maximal torus of the Hamiltonian action
on $X$. Then there is an isomorphism of graded algebras
\begin{eqnarray*}
\mathrm{H}^\bullet(\tilde{X}^{!},\mathbb{C})\cong\mathbb{C}[X^T].
\end{eqnarray*}
\end{conjecture}
Hikita also proved this conjecture in several cases, such as
hypertoric varieties, finite type
A quiver varieties, and the Hilbert schemes of points in
the plane (which are self-dual). He then asked whether this phenomenon holds for
other examples of symplectic duality.
Later, Nakajima generalized Hikita's conjecture
to the equivariant case (see \cite[\S8]{KTW}), and Kamnitzer, McBreen and
Proudfoot further generalized it
to the quantum case in \cite{KMP}.
Recently Shlykov proved in \cite{Sh} that Hikita's
conjecture holds for the case of the minimal
nilpotent orbit closure $\overline{\mathcal O}_{min}$
in a simple Lie algebra
$\mathfrak{g}$ of ADE type and the dual conical
symplectic singularity, i.e., the Slodowy slice to the
subregular nilpotent orbit in the same Lie algebra.
This is closely related to the duality discovered by Spaltenstein
\cite{Spa} and Lusztig \cite{Lus} (see also \cite{CM} for more details).
By Slodowy \cite{S}, the Slodowy slice
is isomorphic to the Kleinian singularity
$\mathbb C^2/\Gamma$ of the same type.
If we denote by $\widetilde{\mathbb C^2/\Gamma}$
the minimal resolution of $\mathbb C^2/\Gamma$
and assume that the fixed point scheme
for $T$ is the same as the one for a generic $\mathbb C^\times$-action, then
Shlykov proved in \cite{Sh} that
$$\mathrm H^\bullet(\widetilde{\mathbb C^2/\Gamma})
\cong\mathbb C[\overline{\mathcal O}_{min}^{\mathbb C^\times}]
$$
as graded algebras.
The purpose of this paper is to
generalize his work to the equivariant case.
\begin{theorem}\label{maintheorem}
Let $\mathfrak g$ be a complex semisimple Lie algebra of ADE type.
Let $\widetilde{{\mathbb C^2}/\Gamma}$ be the minimal resolution
of the singularity of the same type, and let $\overline{\mathcal O}_{min}$
be the closure of the minimal nilpotent orbit in $\mathfrak g$. Then the equivariant
Hikita conjecture
holds for the pair $\widetilde{{\mathbb C^2}/\Gamma}$ and
$\overline{\mathcal O}_{min}$; that is,
we have isomorphisms of graded algebras:
$$
\begin{array}{ll}
\mathrm{H}^\bullet_{(\mathbb C^{\times})^2}
(\widetilde{{\mathbb C^2}/\Gamma})
\cong B(\mathscr A[\overline{\mathcal O}_{min}]),
&\mbox{if $\mathbb C^2/\Gamma$ is an $A_n$ singularity,}\\[2mm]
\mathrm{H}^\bullet_{\mathbb C^{\times}}
(\widetilde{{\mathbb C^2}/\Gamma})
\cong B(\mathscr A[\overline{\mathcal O}_{min}]),
&\mbox{otherwise,}
\end{array}
$$
where $\mathscr A[\overline{\mathcal O}_{min}]$
is the quantization of $\mathbb C[\overline{\mathcal O}_{min}]$
in the sense of Joseph (see \S\ref{sect:quantization} for the details), and
$B(-)$ is the associated $B$-algebra (see \S\ref{sect:B-algebra} for the
definition).
\end{theorem}
In the above theorem, the $A_n$ singularities and their resolutions
are toric varieties,
and hence we naturally consider the
$(\mathbb C^\times)^2$-equivariant cohomology for them.
For singularities of DE type, there is only a natural $\mathbb C^\times$ action
on them, and we can only consider
their $\mathbb C^\times$-equivariant cohomology, which
has been studied by Bryan and Gholampour in \cite{BG}.
(In fact, Bryan and Gholampour obtained more; they computed
the quantum $\mathbb C^\times$-equivariant cohomology of these varieties,
including the $A_n$ case. The quantum version of Hikita's conjecture
for these varieties will be studied elsewhere.)
On the other hand, Joseph gave in \cite{Jo}
the quantizations of the minimal orbit closures
in Lie algebras of ADE type.
They are the quotients of the corresponding universal enveloping
algebras by some specific two-sided ideals, which
are called the Joseph ideals.
Later, Garfinkle in her thesis \cite{Ga} constructed
the Joseph ideals explicitly.
Interestingly enough, the Joseph ideals in the type A case
are not unique, but are parameterized by the complex numbers $\mathbb C$.
For Lie algebras of the other types, the Joseph ideal is uniquely determined.
Thus in the type A case, if we view the number that parameterizes
the Joseph ideals as a formal parameter, then the quantizations
of the minimal orbits are algebras over a polynomial ring in two variables,
which exactly matches the base ring of the $(\mathbb C^\times)^2$-equivariant
cohomology of the dual side. For the minimal orbits of the other types,
all the algebras involved are over a polynomial ring in one variable.
The proof of Theorem \ref{maintheorem} is then based on the explicit
computations of the algebras in the theorem.
The rest of this paper is organized as follows.
In \S\ref{sect:cohomologyofADE} we first recall some
basic facts on Kleinian singularities, and
then compute the equivariant cohomology of
the minimal resolutions of these singularities.
In \S\ref{sect:quantization}
we study the quantizations of the minimal nilpotent orbit closures
in Lie algebras of ADE type, which is due to Joseph \cite{Jo} and Garfinkle \cite{Ga}.
In \S\ref{sect:B-algebra} we study the corresponding $B$-algebra of these quantizations.
In \S\ref{sect:proofofmainthm} we prove Theorem \ref{maintheorem}.
Namely, for each type of Lie algebras, we give an explicit isomorphism
between the two types of algebras.
\begin{ack}
In the spring of 2021, Professor Yongbin Ruan gave a series of lectures
at Zhejiang University
on his project on the mirror symmetry of nilpotent orbits
of semi-simple Lie algebras.
This paper is also motivated
by our study of his lectures. We are extremely grateful to him as well as
IASM, Zhejiang University
for inviting us to attend the lectures and for offering excellent working conditions.
This work is supported by NSFC Nos. 11890660, 11890663 and 12271377.
\end{ack}
\section{Equivariant cohomology of ADE resolutions}\label{sect:cohomologyofADE}
In this section, we study the equivariant
cohomology of the minimal resolutions of Kleinian singularities.
The type A case is discussed
in \S\ref{subsect:typeA}
and the remaining cases are discussed in \S\ref{BG}.
\subsection{Kleinian singularities}\label{subsect:Kleiniansing}
Let $\Gamma$ be a finite subgroup of
$\mathrm{SL}_2(\mathbb C)$.
It naturally acts on $\mathbb C^2$ via the canonical
action of $\mathrm{SL}_2(\mathbb C)$.
The singularity
$\mathbb C^2/\Gamma$ is called a Kleinian singularity,
and has been widely studied. The following table summarizes
the classification of Kleinian singularities:
\begin{center}
\begin{tabular}{p{2cm}p{4cm}p{5cm}}
\hline
Type & $\Gamma$ &Defining equation\\
\hline\hline
$A_n$& Cyclic Group $\mathbb{Z}_{n+1}$ & $x^{n+1}-yz=0$\\
$D_n$& Binary Dihedral &$x(y^2-x^{n-2})+z^2=0$\\
$E_6$& Binary Tetrahedral & $x^4+y^3+z^2=0$\\
$E_7$& Binary Octahedral & $x^3+xy^3+z^2=0$\\
$E_8$& Binary Icosahedral & $x^5+y^3+z^2=0$\\
\hline
\end{tabular}
\end{center}
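For instance, in the $A_n$ case the $\Gamma$-invariant functions $x=z_1z_2$, $y=z_1^{n+1}$ and $z=z_2^{n+1}$ generate $\mathbb C[z_1,z_2]^{\Gamma}$ and satisfy the single relation $x^{n+1}=yz$, recovering the defining equation listed in the table.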
The singularity $\mathbb C^2/\Gamma$ has a unique minimal
resolution, denoted by $\widetilde{\mathbb C^2/\Gamma}$,
whose exceptional fiber is given by a tree of $\mathbb{CP}^1$'s.
The corresponding tree has these $\mathbb{CP}^1$'s as its vertices,
with an edge between two given vertices for each intersection
point of the corresponding $\mathbb{CP}^1$'s.
It turns out that the trees thus constructed are exactly the Dynkin diagrams
of the Lie algebras of the same type.
There is another direct relationship between the Kleinian singularities
and the Lie algebras; namely, the Kleinian singularities
are exactly the Slodowy slices to the subregular nilpotent orbits
in the Lie algebra of the same type.
Let $\mathfrak g$ be a Lie algebra. Recall that
the nilpotent cone of $\mathfrak g$, usually denoted
by $\mathcal N$, is the set
$$\mathcal N:=\left\{x\in\mathfrak g: (\mathrm{ad}_x)^n=0\,\,
\mbox{for some $n\in\mathbb N$}\right\}.$$
\begin{definition}[Slodowy slice \cite{S}]
Let $x \in\mathfrak{g}$ be a nilpotent element. Extend this to a
choice of $\mathfrak{sl}_2(\mathbb{C})$
triple $\langle x, h, y\rangle \subseteq \mathfrak{g}$. The
Slodowy slice associated to $(\mathfrak{g}, x)$ is the
affine sub-variety $S = x + \ker[y, -] \subseteq \mathfrak{g}$.
\end{definition}
It is a transverse slice to the nilpotent orbit $\mathcal{O}$ of the point $x$.
\begin{theorem}[Brieskorn \cite{Br} and Slodowy \cite{S}]
\label{Grothendieck-Brieskorn}
Let $\mathfrak{g}$ be simply-laced, $\mathcal{N} \subseteq \mathfrak{g}$
denote the nilpotent cone, and $S_x$ be a Slodowy slice to a subregular nilpotent
element $x\in\mathcal{O}_{sub}$. The intersection $S_x \cap \mathcal{N}$
is a Kleinian surface singularity with the same Dynkin diagram as
$\mathfrak{g}$. Moreover, the symplectic resolution
$\widetilde{S_x \cap \mathcal{N}}\rightarrow S_x \cap \mathcal{N}$
is the same as the minimal resolution of the Kleinian singularity $\widetilde{\mathbb{C}^2/\Gamma}\rightarrow\mathbb{C}^2/\Gamma$.
\end{theorem}
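As a simple illustration, for $\mathfrak g=\mathfrak{sl}_2(\mathbb C)$ the subregular nilpotent orbit is the zero orbit, so the Slodowy slice is all of $\mathfrak g$, and $S_x\cap\mathcal N=\mathcal N=\left\{\left(\begin{smallmatrix}a&b\\c&-a\end{smallmatrix}\right): a^2+bc=0\right\}$ is exactly the $A_1$ singularity $\mathbb C^2/\{\pm 1\}$.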
\subsection{Equivariant cohomology of resolutions of $A_n$ singularities}\label{subsect:typeA}
Let $\Gamma=\mathbb{Z}_{n+1}$ with $\xi$ being
the generator of $\Gamma$. The finite group $\Gamma$ acts on $\mathbb C^2$ as
$$\xi \cdot (z_1, z_2)=\left(e^{\frac{2\pi i}{n+1}}z_1, e^{-\frac{2\pi i}{n+1}}z_2\right).$$
The associated singularity, denoted by $A_n$, is given by $\mathbb C^2/\Gamma$.
\subsubsection{Toric construction}
Let $\mathcal{A}_n$ be the minimal resolution of $A_n$,
which is also called the
Hirzebruch-Jung resolution. Then $\mathcal{A}_n$ is a toric variety,
given by the fan in ${\mathbb R}^2$ whose $n+2$ rays are generated by the vectors
\begin{equation*}
v_0=(1, 0), v_1=(0, 1), v_2=(-1,2), \cdots, v_{n+1}=(-n, n+1).
\end{equation*}
By Cox et al. \cite{Cox}, we can view $\mathcal{A}_n$ as a GIT quotient, namely,
$$\mathcal{A}_n\cong \big(\mathbb C^{n+2}-Z(\Sigma)\big)/(\mathbb C^\times)^n,$$
where $Z(\Sigma)=\{z_0\cdots\widehat{z_i}\cdots z_{n+1}=0|1\leq i\leq n+1\}$,
and the $(\mathbb C^\times)^n$ action on $\mathbb C^{n+2}$ is as follows:
for any $(\lambda_1,\cdots, \lambda_n)\in (\mathbb C^\times)^n$,
\begin{equation}\label{A_n GIT}
(\lambda_1,\cdots, \lambda_n)\cdot(z_0, \cdots,
z_{n+1}):=(\lambda_1z_0,\lambda_1^{-2}\lambda_2z_1,
\lambda_1\lambda_2^{-2}\lambda_3 z_2,
\cdots, \lambda_{n-1}\lambda_n^{-2}z_n, \lambda_n z_{n+1}).
\end{equation}
Here we use the homogeneous coordinate
$[z_0: z_1:\cdots: z_{n+1}]$ to parametrize the
$({\mathbb C}^\times)^n$-orbit of $(z_0, z_1,\cdots, z_{n+1})$, which is a point in $\mathcal{A}_n$.
The projection $\mathcal{A}_n\rightarrow {\mathbb C}^2/\Gamma$ is
$$[z_0: z_1:\cdots: z_{n+1}]\mapsto
\left((z_0^{n+1} z_1^n z_2^{n-1}\cdots z_n)^{\frac{1}{n+1}},
(z_1z_2^{2}\cdots z_n^n z_{n+1}^{n+1})^{\frac{1}{n+1}}\right).$$
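For example, when $n=1$ the weights in \eqref{A_n GIT} are $(1,-2,1)$, and $\mathcal{A}_1\cong\big(\mathbb C^{3}-Z(\Sigma)\big)/\mathbb C^\times$ is the total space of $\mathcal O_{\mathbb P^1}(-2)\cong T^*\mathbb P^1$, with $[z_0:z_2]$ serving as homogeneous coordinates on the zero section and $z_1$ as the fibre coordinate.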
\subsubsection{Equivariant cohomology of $\mathcal{A}_n$}
There is a natural $(\mathbb C^\times)^2$-action on
$\mathcal{A}_n$: for $\eta\in \mathbb C^*$,
\begin{equation}\label{A_n action}
\eta\cdot [z_0: z_1: \cdots : z_{n+1}]=[\eta^{t_1}z_0: z_1:\cdots : z_n: \eta^{t_2}z_{n+1}],
\end{equation}
which has the following $n+1$ fixed points:
\begin{equation*}
p_0=[\underbrace{1:1:\cdots:1}_{n}:0:0], p_1=[\underbrace{1:1:\cdots:1}_{n-1}:0:0: 1],
\cdots, p_n=[0: 0: \underbrace{1:1:\cdots:1}_{n}].
\end{equation*}
The exceptional fiber of $\mathcal{A}_n$ is given by a
tree of $\mathbb{CP}^1$'s corresponding to the Dynkin diagram of $A_n$.
Using the GIT description, the $i$-th $\mathbb{CP}^1$ is the line connecting
$p_{i-1}$ and $p_i$. More precisely, it is the point set
$$\{[z_0: z_1:\cdots: z_{n+1}]\big|z_{n+1-i}=0\}\subseteq \mathcal{A}_n.$$
Recall the following Atiyah-Bott localization theorem.
\begin{proposition}\label{localization}
Suppose $X$ is a variety with a $T$-action on it, and
$X^T$ is the fixed locus of $T$.
Let $i: X^T\hookrightarrow X$ be the embedding.
Then, after localization with respect to $\mathrm{H}^\bullet_T(pt)$, the pullback
$$i^*: \mathrm{H}^\bullet_T(X)\rightarrow \mathrm{H}^\bullet_T(X^T)$$
is an isomorphism of the $T$-equivariant cohomology groups.
Furthermore, if $X^T$ is proper, then for any $\alpha\in \mathrm{H}^\bullet_T(X)$,
\begin{equation*}
\int_{X}\alpha=\int_{X^T}\frac{i^*\alpha}{e_T(N_{X^T|X})},
\end{equation*}
where $e_T(N_{X^T|X})$ is the equivariant Euler class of the normal bundle $N_{X^T|X}$.
\end{proposition}
For the $({\mathbb C}^\times)^2$-action described above, since
$\mathrm{H}^\bullet_{({\mathbb C}^\times)^2}(pt)={\mathbb C}[t_1, t_2]$, we have:
\begin{corollary}\label{cor:equivofAn}
As a ${\mathbb C}[t_1, t_2]$-module,
$$\mathrm{H}^\bullet_{({\mathbb C}^\times)^2}(\mathcal{A}_n)
={\rm Span}_{{\mathbb C}[t_1, t_2]}\{1, e_1, \cdots, e_n\},$$
where $e_i$ is the cohomology class corresponding to the
$i$-th $\mathbb{CP}^1$ in the exceptional fiber.
\end{corollary}
\subsubsection{Equivariant Poincar\'e pairing}
Notice that $\mathcal{A}_n$ is not proper, but the $({\mathbb C}^\times)^2$-fixed point set is proper,
so we can define Poincar\'e pairing on the equivariant cohomology
$\mathrm H^\bullet_{({\mathbb C}^\times)^2}(\mathcal{A}_n)$ via Proposition \ref{localization}.
\begin{definition}
For $\alpha$, $\beta\in \mathrm{H}^\bullet_{T}(X)$, define the equivariant Poincar\'e pairing to be
\begin{equation}\label{eq pa}
\<\alpha, \beta\>:=\int_{X^T}\frac{i^*(\alpha\cup \beta)}{e_T(N_{X^T|X})}.
\end{equation}
\end{definition}
Now we calculate the equivariant Poincar\'e pairing on
$\mathrm H^\bullet_{({\mathbb C}^\times)^2}(\mathcal{A}_n)$.
\begin{proposition}
On
$\mathrm{H}^\bullet_{({\mathbb C}^\times)^2}(\mathcal{A}_n)$,
with the notion of Corollary \ref{cor:equivofAn},
we have
$$
\<1, 1\>=\frac{1}{(n+1)t_1t_2}, \quad \<1, e_i\>=0,
$$
and
$$
\<e_i, e_j\>=\left\{
\begin{array}{cl}
-2, & \hbox{$i=j,$} \\
1, & \hbox{$|i-j|= 1,$}\\
0, & \hbox{$|i-j|\geq 2.$}
\end{array}
\right.
$$
\end{proposition}
\begin{proof}
Firstly we calculate $e_T(N_{X^T|X})$ in \eqref{eq pa}. In our case, $X^T=\sqcup_{i=0}^{n}p_i$,
and therefore $N_{p_i|\mathcal{A}_n}$ is the tangent space
$T_{p_i}\mathcal{A}_n$, which is a trivial bundle.
Now it suffices to figure out the $({\mathbb C}^\times)^2$-action on $T_{p_i}X$. By \eqref{A_n GIT}
and \eqref{A_n action}, near $p_0$, we have
\begin{align*}
&[\eta^{t_1}z_0:z_1:\cdots:z_n: \eta^{t_2}z_{n+1}]\\
=&[\underbrace{1:1:\cdots:1}_{n}:\eta^{(n+1)t_1}z_0^{n+1}z_1^{n}
\cdots z_{n-1}^2z_n : \eta^{t_2-nt_1}z_0^{-n}z_1^{-(n-1)}\cdots z_{n-1}^{-1}z_{n+1}],
\end{align*}
which means that $y_1=z_0^{n+1}z_1^{n}\cdots z_{n-1}^2z_n$,
$y_2=z_0^{-n}z_1^{-(n-1)}\cdots z_{n-1}^{-1}z_{n+1}$ are local coordinates near $p_0$.
And the $({\mathbb C}^\times)^2$-action on $T_{p_0}\mathcal{A}_n$ is
$$\eta \cdot (y_1, y_2)=(\eta^{(n+1)t_1}y_1, \eta^{t_2-nt_1}y_2).$$
So $$e_T(T_{p_0}\mathcal{A}_n)=(n+1)t_1\cdot (t_2-nt_1)\in \mathrm H^\bullet_T(p_0).$$
Similarly,
\begin{equation}\label{weight on tangent}
e_T(T_{p_k}\mathcal{A}_n)=\big((n+1-k)t_1-kt_2\big)\cdot\big((k+1)t_2-(n-k)t_1\big)\in
\mathrm H^\bullet_T(p_k).
\end{equation}
Secondly, we lift $e_j$ to $\mathrm{H}^\bullet_T(\mathcal{A}_n)$,
and calculate $i_k^*(e_j)\in \mathrm{H}^\bullet_T(p_k)$,
where $i_k: p_k\rightarrow \mathcal{A}_n$ is the embedding.
Suppose $L_i$ is the line bundle on $\mathcal{A}_n$ defined as
$({\mathbb C}^{n+2}\times {\mathbb C}_{\chi_i})/({\mathbb C}^\times)^{n}$,
where $\chi_i$ is the character of $({\mathbb C}^\times)^{n}$ by projecting to the
$i$-th component. We can set the $({\mathbb C}^\times)^2$ action on $L_i$ as
\begin{equation}\label{A_n action on L_i}
\eta\cdot [z_0: z_1: \cdots : z_{n+1}: v]=[\eta^{t_1}z_0: z_1:\cdots : z_n: \eta^{t_2}z_{n+1}: v],
\end{equation}
which makes $L_i$ a $({\mathbb C}^\times)^2$-equivariant line bundle.
By \eqref{A_n GIT}, \eqref{A_n action} and a similar argument as $e_T(T_{p_i}\mathcal{A}_n)$, we have
\begin{align}\label{L_i}
e_T(L_i\big|_{p_k})=\left\{
\begin{array}{ll}
-it_1, & \hbox{$0\leq k\leq n-i$,} \\
-(n+1-i)t_2, & \hbox{$n+1-i \leq k\leq n+1$.}
\end{array}
\right.
\end{align}
Notice that $e_j$ corresponds to the divisor $\{z_{n+1-j}=0\}$,
which is the zero locus of a section of line bundle
$\widetilde{L}_j= L_{n-j}\otimes L_{n-j+1}^{-2}\otimes L_{n-j+2}$
(we set $L_i=\mathcal{O}$ when $i\leq 0$ or $i\geq n+1$).
So $e_T(\widetilde{L}_j)$ is a lift of $e_j$ on $\mathrm H^\bullet_T(\mathcal{A}_n)$. By \eqref{L_i},
\begin{align}\label{e_i}
i_k^*(e_j)=e_T(L_{n-j}\otimes L_{n-j+1}^{-2}\otimes L_{n-j+2}\big|_{p_k})=\left\{
\begin{array}{lll}
(n-j+2)t_1-(j-1)t_2, & \hbox{$k=j-1$;} \\
-(n-j)t_1+(j+1)t_2, & \hbox{$k=j$.}\\
0, & \hbox{otherwise.}
\end{array}
\right.
\end{align}
Finally, plugging \eqref{weight on tangent} and \eqref{e_i} into \eqref{eq pa}, we have
\begin{align*}
\<1, 1\>&=\sum_{k=0}^{n} \frac{1}{\big((n+1-k)t_1-kt_2\big)\cdot \big((k+1)t_2-(n-k)t_1\big)}\\
&=\sum_{k=0}^{n}\frac{1}{t_1+t_2}\cdot \left(\frac{1}{(n+1-k)t_1-kt_2}+\frac{1}{(k+1)t_2-(n-k)t_1}\right)\\
&=\frac{1}{(n+1)t_1t_2}
\end{align*}
and
\begin{align*}
\<e_j, e_j\>=&
\frac{\big((n-j+2)t_1-(j-1)t_2\big)^2}{\big((n+2-j)t_1-(j-1)t_2\big)\cdot \big(jt_2-(n-j+1)t_1\big)}\\
&+\frac{\big(-(n-j)t_1+(j+1)t_2\big)^2}{\big((n-j+1)t_1-jt_2\big)\cdot \big((j+1)t_2-(n-j)t_1\big)}\\
=& -\frac{(-n+j-2)t_1+(j-1)t_2}{jt_2-(n-j+1)t_1}-\frac{(n-j)t_1-(j+1)t_2}{(n-j+1)t_1-jt_2}\\
=&-2.
\end{align*}
The calculation of the other pairings is left to the readers.
\end{proof}
Now we calculate the product structure on
$\mathrm{H}^\bullet_{({\mathbb C}^\times)^2}(\mathcal{A}_n)$.
\begin{proposition}
Denote by $\mathcal{A}_n$ the minimal resolution of
the $A_n$ singularity. Then the equivariant cohomology of
$\mathcal A_n$ is
\begin{eqnarray*}
\mathrm H^{\bullet}_{(\mathbb{C}^\times)^2}(\mathcal{A}_n)
=\mathrm{Span}_{\mathbb{C}[t_1,t_2]}\{1,e_1,e_2,\cdots,e_n\},
\end{eqnarray*}
where
\begin{align}
e_j\cup e_{j+1}=&-t_2(e_1+2e_2+\cdots+je_j)-t_1[(n-j)e_{j+1}+\cdots\notag\\
&+2e_{n-1}+e_n]+(n+1)t_1t_2, \label{ejej+1}\\
e_j\cup e_l=&0 \quad\text{ for $|j-l|\geq2$,} \label{elej} \\
e_j\cup e_j=&2t_2(e_1+2e_2+\cdots+(j-1)e_{j-1})+\big((n+2-j)t_1+(j+1)t_2\big)e_j\notag\\
&+2t_1\big((n-j)e_{j+1}+\cdots+2e_{n-1}+e_n\big)-2(n+1)t_1t_2.\label{ej2}
\end{align}
\end{proposition}
\begin{proof}
Here we only verify \eqref{ej2}; the proof of the other identities is similar and is left to the
readers.
By Proposition \ref{localization}, it suffices to check that, for any $k$,
\begin{align*}
i_k^*(e_j)\cup i_k^*(e_j)=&2t_2\big(i_k^*(e_1)+2i_k^*(e_2)+\cdots+(j-1)i_k^*(e_{j-1})\big)\\
&+\big((n+2-j)t_1+(j+1)t_2\big)i_k^*(e_j)\\
&+2t_1\big((n-j)i_k^*(e_{j+1})+\cdots+2i_k^*(e_{n-1})+i_k^*(e_n)\big)-2(n+1)t_1t_2.
\end{align*}
The above equality is easily checked by
plugging \eqref{e_i} into the two sides of the above equation. For example, for $k=j$,
the left hand side of the above equation equals
$$\big(-(n-j)t_1+(j+1)t_2\big)^2=(n-j)^2t_1^2+(j+1)^2t_2^2-2(n-j)(j+1)t_1t_2$$
while the right hand side equals
$$\big((n+2-j)t_1+(j+1)t_2\big)\cdot \big(-(n-j)t_1+(j+1)t_2\big)
+2(n-j)t_1\cdot \big((n-j+1)t_1-jt_2\big)-2(n+1)t_1t_2,
$$
and they are equal to each other.
\end{proof}
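As a quick illustration, for $n=1$ the only product \eqref{ej2} reads $e_1\cup e_1=(2t_1+2t_2)e_1-4t_1t_2$; restricting to the two fixed points via \eqref{e_i} gives $(2t_1)^2$ and $(2t_2)^2$ on both sides, and \eqref{eq pa} then recovers the pairing $\langle e_1,e_1\rangle=-2$.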
\subsection{Equivariant cohomology of resolutions of DE singularities}\label{BG}
Let $\widetilde{\mathbb C^2/\Gamma}$ be the minimal resolution
of an ADE singularity. Observe that the scalar $\mathbb C^\times$-action
on $\mathbb C^2$ commutes with the action of $\Gamma$,
and thus $\mathbb C^\times$ acts on $\mathbb C^2/\Gamma$.
It lifts to an action on $\widetilde{\mathbb C^2/\Gamma}$.
Let $\{e_1,e_2,\cdots, e_n\}$ be the set of irreducible components
in the exceptional fiber in
$\widetilde{\mathbb C^2/\Gamma}$, which gives a basis
of $\mathrm{H}_2(\widetilde{\mathbb C^2/\Gamma},\mathbb Z)$.
It is direct to see that they are invariant under the $\mathbb C^\times$-action,
and hence lift to a basis of the $\mathbb C^\times$-equivariant homology.
The intersection matrix $e_i\cap e_j$ defines a perfect pairing on
$\mathrm{H}_2(\widetilde{\mathbb C^2/\Gamma},\mathbb Z)$.
With this pairing we may identify
$\mathrm{H}_2(\widetilde{\mathbb C^2/\Gamma},\mathbb Z)$
with
$\mathrm{H}^2(\widetilde{\mathbb C^2/\Gamma},\mathbb Z)$.
By a slight abuse of notation, the image of $e_i$ in the above
isomorphism is also denoted by $e_i$.
Let $\Phi$ be the root system associated
to the Dynkin diagram given by this pairing. We can identify $e_1,\cdots,e_n$
with the simple roots $\alpha_1,\cdots, \alpha_n$
and the intersection matrix with the negative of the Cartan matrix
$$e_i\cup e_j=-\langle \alpha_i,\alpha_j\rangle.$$
Bryan and Gholampour computed the $\mathbb C^\times$-equivariant
cohomology ring
of $\widetilde{\mathbb C^2/\Gamma}$, which is given as follows.
\begin{theorem}[{\cite[Theorem 6]{BG}}]\label{thm:BryanGholam}
The $\mathbb C^\times$-equivariant cohomology ring
$\mathrm H_{\mathbb C^\times}^\bullet(\widetilde{\mathbb C^2/\Gamma})$
of the minimal resolution
of an ADE singularity has the equivariant cup product
\begin{eqnarray*}
e_i\cup e_j=-t^2|\Gamma|\langle\alpha_i,\alpha_j\rangle
+t\sum_{\alpha\in\Delta^+}\langle\alpha_i,\alpha\rangle\langle\alpha_j,\alpha\rangle e_\alpha,
\end{eqnarray*}
where $|\Gamma|$ is the order of the finite subgroup $\Gamma\subset\mathrm{SL}_2(\mathbb{C})$,
$e_\alpha=c_1e_1+\cdots+c_n e_n$ (with $c_1,\cdots, c_n\in\mathbb{N}$)
if the positive root $\alpha=c_1\alpha_1+\cdots+c_n \alpha_n$,
with $\alpha_1,\cdots, \alpha_n$ being the simple roots, and
$\langle -,-\rangle$ is the inner product on the root system, normalized so that $\langle\alpha,\alpha\rangle=2$ for every root $\alpha$.
\end{theorem}
By the root data of Lie algebras of ADE type (see, for example,
Bourbaki \cite[PLATE I-VII]{Bo}), we may explicitly write down the cup product
in all the cases. In what follows, we only give
the $\mathbb C^\times$-equivariant cohomology
of $\widetilde{\mathbb C^2/\Gamma}$ where $\Gamma$ is of DE type.
\begin{corollary}[The $D_n$ case]\label{cohomology of Dn}
Denote by $\mathcal{D}_n$
the minimal resolution of
the $D_n$ singularity. Then the $\mathbb C^\times$-equivariant cohomology of
$\mathcal D_n$
is
\begin{eqnarray*}
\mathrm H^\bullet_{\mathbb{C}^\times}(\mathcal{D}_n)
=\mathrm{Span}_{\mathbb{C}[t]}\{1,e_1,e_2,\cdots,e_n\},
\end{eqnarray*}
where
\begin{align}
e_k\cup e_{k+1}=&-2te_1-\cdots-2kte_k-(2n-4)te_{k+1}-\cdots-(2n-4)te_{n-2}\notag\\
&-(n-2)te_{n-1}-(n-2)te_n+4(n-2)t^2,\quad\quad\text{ for } k\leq n-2, \label{ekek+1Dn}\\
e_k\cup e_l=&0,\quad\quad \text{ whenever $\langle\alpha_k,\alpha_l\rangle=0$; in particular } e_{n-1}\cup e_n=0, \label{eiejDn} \\
e_{n-2}\cup e_n=&e_{n-2}\cup e_{n-1}=-2te_1-4te_2-\cdots-(2n-4)te_{n-2}\notag \\
&-(n-2)te_{n-1}-(n-2)te_n+4(n-2)t^2,\label{en-2en} \\
e_k\cup e_k=&4te_1+8te_2+\cdots+4(k-1)te_{k-1}+(2n+2k-2)te_k
+(4n-8)te_{k+1}+\cdots\notag\\
&+(4n-8)te_{n-2}+(2n-4)te_{n-1}+(2n-4)te_n-8(n-2)t^2,\quad
\text{ for } k\leq n-2,\label{ek2Dn}\\
e_{n-1}\cup e_{n-1}=&4te_1+\cdots+(4n-8)te_{n-2}+2nte_{n-1}+(2n-4)te_n-8(n-2)t^2,\label{en-1^2}\\
e_n\cup e_n=&4te_1+\cdots+(4n-8)te_{n-2}+(2n-4)te_{n-1}+2nte_n-8(n-2)t^2.\label{en^2}
\end{align}
\end{corollary}
\begin{corollary}[The $E_6$ case]
\label{cohomology of E6}
Denote by $\mathcal{E}_6$ the minimal resolution of
the $E_6$ singularity. Then the $\mathbb C^\times$-equivariant cohomology of
$\mathcal E_6$ is
\begin{eqnarray*}
\mathrm H^\bullet_{\mathbb{C}^\times}
(\mathcal{E}_6)=\mathrm{Span}_{\mathbb{C}[t]}\{1,e_1,e_2,\cdots,e_6\},
\end{eqnarray*}
where
\begin{align*}
e_1\cup e_1=&-48t^2+t(14e_1+12e_2+20e_3+24e_4+16e_5+8e_6),\\
e_2\cup e_2=&-48t^2+t(8e_1+16e_2+16e_3+24e_4+16e_5+8e_6),\\
e_3\cup e_3=&-48t^2+t(8e_1+12e_2+20e_3+24e_4+16e_5+8e_6),\\
e_4\cup e_4=&-48t^2+t(8e_1+12e_2+16e_3+26e_4+16e_5+8e_6),\\
e_5\cup e_5=&-48t^2+t(8e_1+12e_2+16e_3+24e_4+20e_5+8e_6),\\
e_6\cup e_6=&-48t^2+t(8e_1+12e_2+16e_3+24e_4+20e_5+14e_6),\\
e_1\cup e_3=&24t^2-t(4e_1+6e_2+10e_3+12e_4+8e_5+4e_6),\\
e_2\cup e_4=&24t^2-t(4e_1+6e_2+8e_3+12e_4+8e_5+4e_6),\\
e_3\cup e_4=&24t^2-t(4e_1+6e_2+8e_3+12e_4+8e_5+4e_6),\\
e_4\cup e_5=&24t^2-t(4e_1+6e_2+8e_3+12e_4+8e_5+4e_6),\\
e_5\cup e_6=&24t^2-t(4e_1+6e_2+8e_3+12e_4+10e_5+4e_6),
\end{align*}
and $e_i\cup e_j=0$ for all the rest $i, j$.
\end{corollary}
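For instance, the coefficient of $e_6$ in $e_1\cup e_1$ above can be read off from Theorem \ref{thm:BryanGholam}: writing $\langle\alpha_1,\alpha\rangle=2c_1(\alpha)-c_3(\alpha)$ for $\alpha=\sum_i c_i(\alpha)\alpha_i$, one checks that among the $16$ positive roots of $E_6$ containing $\alpha_6$ (all of which have $c_6(\alpha)=1$) exactly $8$ satisfy $\langle\alpha_1,\alpha\rangle=\pm1$ and the rest satisfy $\langle\alpha_1,\alpha\rangle=0$, so that $\sum_{\alpha\in\Delta^+}\langle\alpha_1,\alpha\rangle^2c_6(\alpha)=8$; the constant term is $-|\Gamma|\langle\alpha_1,\alpha_1\rangle t^2=-48t^2$, since the binary tetrahedral group has order $24$.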
\begin{corollary}[The $E_7$ case]\label{cohomology of E7}
Denote by $\mathcal{E}_7$ the minimal resolution of
the $E_7$ singularity. Then the $\mathbb C^\times$-equivariant cohomology of
$\mathcal E_7$ is
\begin{eqnarray*}
\mathrm H^\bullet_{\mathbb{C}^\times}(\mathcal{E}_7)
=\mathrm{Span}_{\mathbb{C}[t]}\{1,e_1,e_2,\cdots,e_7\},
\end{eqnarray*}
where
\begin{align*}
e_1\cup e_1=&-96t^2+t(22e_1+24e_2+36e_3+48e_4+36e_5+24e_6+12e_7),\\
e_2\cup e_2=&-96t^2+t(16e_1+28e_2+32e_3+48e_4+36e_5+24e_6+12e_7),\\
e_3\cup e_3=&-96t^2+t(16e_1+24e_2+36e_3+48e_4+36e_5+24e_6+12e_7),\\
e_4\cup e_4=&-96t^2+t(16e_1+24e_2+32e_3+50e_4+36e_5+24e_6+12e_7),\\
e_5\cup e_5=&-96t^2+t(16e_1+24e_2+32e_3+48e_4+40e_5+24e_6+12e_7),\\
e_6\cup e_6=&-96t^2+t(16e_1+24e_2+32e_3+48e_4+40e_5+30e_6+12e_7),\\
e_7\cup e_7=&-96t^2+t(16e_1+24e_2+32e_3+48e_4+40e_5+32e_6+20e_7),\\
e_1\cup e_3=&48t^2-t(8e_1+12e_2+18e_3+24e_4+18e_5+12e_6+6e_7),\\
e_2\cup e_4=&48t^2-t(8e_1+12e_2+16e_3+24e_4+18e_5+12e_6+6e_7),\\
e_3\cup e_4=&48t^2-t(8e_1+12e_2+16e_3+24e_4+18e_5+12e_6+6e_7),\\
e_4\cup e_5=&48t^2-t(8e_1+12e_2+16e_3+24e_4+18e_5+12e_6+6e_7),\\
e_5\cup e_6=&48t^2-t(8e_1+12e_2+16e_3+24e_4+20e_5+12e_6+6e_7),\\
e_6\cup e_7=&48t^2-t(8e_1+12e_2+16e_3+24e_4+20e_5+16e_6+6e_7),
\end{align*}
and $e_i\cup e_j=0$ for all the rest $i, j$.
\end{corollary}
\begin{corollary}[The $E_8$ case]\label{cohomology of E8}
Denote by $\mathcal{E}_8$ the minimal resolution of
the $E_8$ singularity. Then the $\mathbb C^\times$-equivariant cohomology of
$\mathcal E_8$ is
\begin{eqnarray*}
\mathrm H^\bullet_{\mathbb{C}^\times}(\mathcal{E}_8)
=\mathrm{Span}_{\mathbb{C}[t]}\{1,e_1,e_2,\cdots,e_8\},
\end{eqnarray*}
where
\begin{align*}
e_1\cup e_1=&-240t^2+t(46e_1+60e_2+84e_3+120e_4+96e_5+72e_6+48e_7+24e_8),\\
e_2\cup e_2=&-240t^2+t(40e_1+64e_2+80e_3+120e_4+96e_5+72e_6+48e_7+24e_8),\\
e_3\cup e_3=&-240t^2+t(40e_1+60e_2+84e_3+120e_4+96e_5+72e_6+48e_7+24e_8),\\
e_4\cup e_4=&-240t^2+t(40e_1+60e_2+80e_3+122e_4+96e_5+72e_6+48e_7+24e_8),\\
e_5\cup e_5=&-240t^2+t(40e_1+60e_2+80e_3+120e_4+100e_5+72e_6+48e_7+24e_8),\\
e_6\cup e_6=&-240t^2+t(40e_1+60e_2+80e_3+120e_4+100e_5+78e_6+48e_7+24e_8),\\
e_7\cup e_7=&-240t^2+t(40e_1+60e_2+80e_3+120e_4+100e_5+80e_6+56e_7+24e_8),\\
e_8\cup e_8=&-240t^2+t(40e_1+60e_2+80e_3+120e_4+100e_5+80e_6+60e_7+34e_8),\\
e_1\cup e_3=&120t^2-t(20e_1+30e_2+42e_3+60e_4+48e_5+36e_6+24e_7+12e_8),\\
e_2\cup e_4=&120t^2-t(20e_1+30e_2+40e_3+60e_4+48e_5+36e_6+24e_7+12e_8),\\
e_3\cup e_4=&120t^2-t(20e_1+30e_2+40e_3+60e_4+48e_5+36e_6+24e_7+12e_8),\\
e_4\cup e_5=&120t^2-t(20e_1+30e_2+40e_3+60e_4+48e_5+36e_6+24e_7+12e_8),\\
e_5\cup e_6=&120t^2-t(20e_1+30e_2+40e_3+60e_4+50e_5+36e_6+24e_7+12e_8),\\
e_6\cup e_7=&120t^2-t(20e_1+30e_2+40e_3+60e_4+50e_5+40e_6+24e_7+12e_8),\\
e_7\cup e_8=&120t^2-t(20e_1+30e_2+40e_3+60e_4+50e_5+40e_6+30e_7+12e_8),
\end{align*}
and $e_i\cup e_j=0$ for all the rest $i, j$.
\end{corollary}
\section{Quantization of the minimal orbits}\label{sect:quantization}
In this section, we study the quantization of the minimal nilpotent
orbits of Lie algebras of ADE type.
In \S\ref{subsect:coordofminimal} we briefly go over some properties
of the minimal orbits. In \S\ref{subsect:Joseph} we recall Joseph's result on the quantization
of the minimal orbits and then in \S\ref{subsect:Garfinkle} we go over Garfinkle's construction
of Joseph's ideals, with explicit formulas.
\subsection{The coordinate ring of minimal orbits}\label{subsect:coordofminimal}
In this subsection, we assume $\mathfrak g$ is a complex semisimple Lie algebra,
and $\mathcal O_{min}$ is the minimal nilpotent orbit of $\mathfrak g$.
Let us first recall the following.
\begin{proposition}[{c.f. \cite[\S 8.3]{Jan}}]
Let $\mathfrak g$ be a complex semisimple Lie algebra
and $\mathcal{O}$ be a nilpotent orbit of $\mathfrak g$. Then
\[
\mathbb{C}[\mathcal{O}]=\mathbb{C}[\overline{\mathcal{O}}]
\]
if and only if $\overline{\mathcal{O}}$ is normal.
\end{proposition}
In particular, $\overline{\mathcal{O}}_{min}$ is normal with isolated singularity (see \cite{VP}), and hence
\[
\mathbb{C}[\mathcal{O}_{min}]=\mathbb{C}[\overline{\mathcal{O}}_{min}].
\]
Due to this proposition, in what follows we shall not distinguish
$\mathbb{C}[\mathcal{O}_{min}]$ and $\mathbb{C}[\overline{\mathcal{O}}_{min}]$.
The following result is proved by Shlykov in \cite{Sh}.
\begin{theorem}[\cite{Sh}] \label{Sh2}
Let $I$ be the defining ideal of $\overline{\mathcal{O}}_{min}$ in $\mathrm{Sym}(\mathfrak{g})$, i.e.,
\[
I:=\{\mu\in \mathrm{Sym}(\mathfrak{g})|\mu(\overline{\mathcal{O}}_{min})=0\},
\]
then the image of $I$ under the projection
$$
f: \mathrm{Sym}(\mathfrak{g})\rightarrow \mathrm{Sym}(\mathfrak{h})
$$
induced by the inclusion $\mathfrak{h}^*\hookrightarrow \mathfrak{g}^*$
is given by $f(I)=\mathrm{Sym}^{\geq 2}\mathfrak{h}$.
\end{theorem}
The following is obtained by Hikita:
\begin{theorem}[{\cite[Theorem 1.1]{Hi}}]
If we choose a generic action of $\mathbb{C}^\times$ such that the fixed point scheme for it and
for torus $T$ are the same, then
$\overline{\mathcal{O}}_{min}^{\mathbb{C}^\times}$=$\mathfrak{h}\cap\overline{\mathcal{O}}_{min}$
as a scheme.
\end{theorem}
Since
$
\mathbb{C}[\overline{\mathcal{O}}_{min}]= \mathrm{Sym}(\mathfrak{g})/I,
$
combining the above two theorems,
we have (see Shlykov \cite{Sh})
\begin{equation*}
\mathbb{C}[\overline{\mathcal{O}}_{min}^{\ \mathbb{C}^*}]=\mathbb{C}[\mathfrak{h}\cap
\overline{\mathcal{O}}_{min}]=\mathrm{Sym}(\mathfrak{h})/f(I)=\mathrm{Sym}(\mathfrak{h})/\mathrm{Sym}^{\geq 2}\mathfrak{h}.
\end{equation*}
\subsection{Quantization of the minimal nilpotent orbits}\label{subsect:Joseph}
We now study the quantization of the minimal nilpotent orbits in Lie algebras of ADE type.
We start with some basic concepts on the quantization of Poisson algebras;
see, for example, Losev \cite{Lo} for more details.
\begin{definition}[Filtered and graded quantizations]
Suppose $A$ is a commutative $\mathbb Z_{\ge 0}$-graded $k$-algebra,
equipped with a Poisson
bracket whose degree is $-1$, where $k$ is a field of characteristic zero.
\begin{enumerate}
\item[$(1)$]
A {\it filtered quantization} of $A$ is a filtered $k$-algebra
$\mathcal A=\bigcup_{i\ge 0}\mathcal A_{i}$ such that
the associated graded algebra $\mathrm{gr}\,\mathcal A$
is isomorphic to $A$ as graded Poisson algebras.
\item[$(2)$]
A {\it graded quantization} of $A$ is a graded $k[\hbar]$-algebra
$A_\hbar$ ($\mathrm{deg}\,\hbar=1$)
which is free as a $k[\hbar]$-module, equipped with an isomorphism of $k$-algebras:
$f: A_\hbar/\hbar\cdot A_\hbar \to A$
such that for any
$a, b\in A_\hbar$, if we denote their images in
$A_\hbar/\hbar\cdot A_\hbar$ by $\overline a, \overline b$ respectively,
then
$$f\left(\overline{\frac{1}{\hbar}[a, b]}\right)=\{f(\overline a), f(\overline b)\}.$$
\end{enumerate}
\end{definition}
Let $A$ be a filtered associative algebra. Recall that the {\it Rees algebra} of $A$ is the
graded algebra $Rees(A):=\bigoplus_{i\in\mathbb{Z}}A_{\leq i}\cdot\hbar^i$,
equipped with the multiplication $(a\hbar^i)(b\hbar^j)=ab\hbar^{i+j}$ for $a,b\in A$.
Now, suppose $\mathcal A$ is a filtered quantization of $A$, then
the associated Rees algebra $Rees(\mathcal A)$ is a graded quantization of $A$.
\begin{example}
The universal enveloping algebra $\mathcal{U}(\mathfrak{g})$ is the filtered quantization of
$\mathbb{C}[\mathfrak{g}^*]=\mathrm{Sym}(\mathfrak{g})$, and the Rees algebra of
$\mathcal{U}(\mathfrak{g})$,
$Rees(\mathcal{U}(\mathfrak{g})):=
\bigoplus_{i\in\mathbb{Z}}\mathcal{U}(\mathfrak{g})_{\leq i}\cdot\hbar^i$
is the graded quantization of $\mathrm{Sym}(\mathfrak{g})$.
On the other hand, there is an isomorphism of
$\mathfrak{g}$-modules:
\begin{eqnarray*}
\beta: \mathrm{Sym}(\mathfrak{g})&\rightarrow & \mathcal{U}(\mathfrak{g}),\\
x_1\cdots x_k &\mapsto & \dfrac{1}{k!}\sum_{\pi\in S_k} x_{\pi(1)}\cdots x_{\pi(k)},
\end{eqnarray*}
which is called {\it symmetrization}.
\end{example}
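For instance, in degree two one has $\beta(xy)=\tfrac12(xy+yx)$ for $x,y\in\mathfrak g$; in particular, if $x$ and $y$ commute in $\mathfrak g$, then $\beta(xy)=xy$ in $\mathcal U(\mathfrak g)$, which is why some generators of the Joseph ideal below are written by the same expressions in $\mathrm{Sym}(\mathfrak g)$ and in $\mathcal U(\mathfrak g)$ (cf.\ \eqref{vtheta+alpha2}).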
Since the universal enveloping algebra $\mathcal{U}(\mathfrak{g})$ is
the quantization of the symmetric algebra $\mathrm{Sym}(\mathfrak{g})$,
we need to study the quantization of the ideal $I$ of $\mathrm{Sym}(\mathfrak{g})$.
Joseph in \cite{Jo} found a two-sided ideal of $\mathcal{U}(\mathfrak{g})$
which plays the role of the quantization of $I$.
\subsubsection{Joseph's quantization of the minimal orbits}
Let us first recall the result of Joseph \cite{Jo},
which is stated as follows.
\begin{definition-theorem}[Joseph \cite{Jo} and Garfinkle \cite{Ga}]
Let $\mathfrak{g}$ be a complex semisimple Lie algebra.
\begin{enumerate}
\item[$(1)$]
If $\mathfrak{g}$ is of type A, then there exists a family of
completely prime two-sided primitive ideals $J^z$, parametrized
by $z\in\mathbb{C}$, such that
$$
\mathrm{gr}J^z=I(\overline{\mathcal{O}}_{min}).
$$
\item[$(2)$]
If $\mathfrak{g}$ is different from the Lie algebra of type A,
then there exists a unique completely prime two-sided primitive ideal $J$ such that
$$
\mathrm{gr}J=I(\overline{\mathcal{O}}_{min}).
$$
\end{enumerate}
In both cases, the ideals $J^z$ and $J$ are called the Joseph ideals.
\end{definition-theorem}
In the above definition,
a two-sided ideal $J$ of $\mathcal{U}(\mathfrak{g})$ is called {\it primitive} if it is
the kernel of an irreducible representation $(\rho, V)$ of $\mathcal{U}(\mathfrak{g})$, i.e.,
$J$ is the annihilator of $V$,
\[
J=Ann(V)=\{u\in\mathcal{U}(\mathfrak{g})|\rho(u)\cdot V=0\}.
\]
An ideal $J$ of $\mathcal{U}(\mathfrak{g})$ is called
{\it completely prime} if for all $u,v\in\mathcal{U}(\mathfrak{g})$, $uv\in J$
implies $u\in J$ or $v\in J$.
In fact, in the original paper \cite{Jo}, Joseph only proved that the Joseph ideals in type A Lie algebras
are not unique. It is Garfinkle who
gave the explicit constructions of the Joseph ideals in Lie algebras of all types,
and in particular, formulated the Joseph ideals in type A Lie algebras in the form given in the above
theorem.
\begin{theorem}\label{thm:quantizationofminimalorbits}
Let $\mathfrak g$ be the Lie algebra of type ADE. Let $J$ be its Joseph ideal.
Then
$U(\mathfrak g)/J$ gives a filtered quantization of $\overline{\mathcal O}_{min}$.
\end{theorem}
\begin{proof}
We have
\[
\mathrm{gr} \big(\mathcal{U}(\mathfrak{g})/J\big)=\mathrm{gr}\big(\mathcal{U}(\mathfrak{g})\big)/\mathrm{gr}(J)
=\mathrm{Sym}(\mathfrak{g})/I(\overline{\mathcal{O}}_{min})=\mathbb{C}[\overline{\mathcal{O}}_{min}],
\]
that is, the algebra
$\mathcal{U}(\mathfrak{g})/J$ is a filtered quantization of the symplectic singularity $\overline{\mathcal{O}}_{min}$.
\end{proof}
By this theorem, $Rees(\mathcal{U}(\mathfrak{g})/J)$ is the graded quantization of
$\overline{\mathcal{O}}_{min}$, and we sometimes write it
as $\mathscr A[\overline{\mathcal O}_{min}]$; that is,
$\mathscr A[\overline{\mathcal O}_{min}]=Rees\big(\mathcal U(\mathfrak g)/J\big)$.
\subsection{Garfinkle's construction of the Joseph ideals}\label{subsect:Garfinkle}
Garfinkle in her thesis \cite{Ga} gave an explicit construction of the Joseph ideals.
In this subsection, we go over her results with some details.
\begin{notation}
Let us fix some notations in representation theory of Lie algebras.
Let $\mathfrak{g}$ be a complex semisimple Lie algebra, $\mathfrak{h}$ be a
Cartan subalgebra of $\mathfrak{g}$, $\Delta$ be the roots of $\mathfrak{h}$ in
$\mathfrak{g}$ and $\Delta^+$ be a fixed choice of positive roots.
Let $\Phi$ the simple roots of $\Delta$ and $\Phi^+$ be
the simple roots of $\Delta^+$.
The Lie algebra $\mathfrak{g}$ has the root space decomposition
$\mathfrak{g}=\oplus_{\alpha\in\Delta}\mathfrak{g}^\alpha$, and let
\[
\mathfrak{n}^+=\oplus_{\alpha\in\Delta^+}\mathfrak{g}^\alpha,
\mathfrak{n}^-=\oplus_{\alpha\in\Delta^+}\mathfrak{g}^{-\alpha},
\mathfrak{b}=\mathfrak{h}\oplus\mathfrak{n}^+.
\]
denote the associated subalgebras of $\mathfrak{g}$.
Let $(\pi, V)$ be a representation of $\mathfrak{g}$; for any weight
$\lambda\in\mathfrak{h}^*$, let
$V^\lambda=\{v\in V|\pi(h)(v)=\lambda(h) v\;\mbox{for any}\; h\in\mathfrak{h}\}$. Let
$V^{\mathfrak{n}^-}:= \{v\in V|\pi(x)v=0\;\mbox{for any}\; x\in\mathfrak{n}^-\}$.
Denote by $X_{\alpha_i}$ and $Y_{\alpha_i}$ the root vectors in $\mathfrak{g}^{\alpha_i}$ and
$\mathfrak{g}^{-\alpha_i}$ respectively, and denote by $h_i$ the element in
$\mathfrak{h}$ corresponding to $\alpha_i$ such that $\alpha_i(H)=B(H,h_i)$
for all $H\in\mathfrak{h}$, where $B(-,-)$ denotes the Killing form. By the construction of the Chevalley basis,
$h_i=[X_{\alpha_i}, Y_{\alpha_i}]$, and the elements $\{h_i\}$ together with the root vectors $\{X_{\alpha}\}_{\alpha\in\Delta^+}$ and $\{Y_{\alpha}\}_{\alpha\in\Delta^+}$
form a basis of $\mathfrak{g}$. Denote by $h_i^{\vee}$ the dual element of $h_i$
via the Killing form, i.e., $B(h_i^{\vee}, h_j)=\delta_{ij}$.
Let $\alpha_1,\cdots,\alpha_n$ be the simple roots, with subscripts numbered as in \cite[PLATE I-VII]{Bo}; for convenience, we denote by $X_{12\cdots n}$
and $Y_{12\cdots n}$ the root vectors corresponding to the root
$\alpha_1+\alpha_2+\cdots+\alpha_n\in \Delta^+$ and the root
$-(\alpha_1+\alpha_2+\cdots+\alpha_n)$ respectively.
In the type D Lie algebra case, we denote by
$X_{1\cdots\overline{k}\cdots n}$ and $Y_{1\cdots\overline{k}\cdots n}$
the root vectors corresponding to the root
$\alpha_1+\cdots+ 2\alpha_k+\cdots+\alpha_n\in\Delta^+$
and $-(\alpha_1+\cdots+ 2\alpha_k+\cdots+\alpha_n)$ respectively.
Let $I_{\mathfrak{p},\lambda}$ be the left ideal of the universal enveloping algebra
$\mathcal{U}(\mathfrak{g})$ generated by $\{x-\lambda(x)|x\in\mathfrak{p}\}$.
\end{notation}
\subsubsection{Joseph ideal for type A Lie algebras}
As we stated in the previous subsection, Joseph's theorem
is an existence theorem. It was Garfinkle who gave the explicit construction
of the Joseph's ideals.
The following several propositions are taken from Garfinkle \cite{Ga}.
\begin{proposition}[{\cite[Proposition 3.2]{Ga} and
\cite[\S4.4]{BJ}}]\label{structure of J}
For type A Lie algebras $\mathfrak{g}$, we have the following decomposition of
irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})\cong
V(2\theta)\oplus V(\theta+\alpha)\oplus V(\theta)\oplus V(0),
\end{eqnarray*}
where $\alpha$ is the highest root in the sub root system
$\Delta_\theta:=\{\beta\in\Delta\,|\,\beta \text{ is perpendicular to }
\theta\}$, and $\Delta$ is the root system of $\mathfrak{g}$.
The ideal $I(\overline{\mathcal{O}}_{min})$ is generated by the lowest weight
vectors of $V(\theta+\alpha)$ and $V(\theta)$ together with elements in $V(0)$,
and $V(0)$ is spanned by the Casimir element.
\end{proposition}
By this proposition, we next determine the lowest weight
vectors in $V(\theta+\alpha)$ and $V(\theta)$ respectively.
We do this one by one.
Recall that $X_{i\cdots j}$ and $Y_{i\cdots j}$ denote the root vectors of weights
$\alpha_i+\cdots+\alpha_j$ and $-(\alpha_i+\cdots+\alpha_j)$ respectively, viewed as elements of $\mathrm{Sym}(\mathfrak{g})$.
\begin{proposition}[{\cite[\S IV.3 Definition 7]{Ga}}]\label{prop:lwofVthetaplusalpha}
The lowest weight vector of the subrepresentation $V(\theta+\alpha_2+\cdots+\alpha_{n-1})$ in
Proposition \ref{structure of J} is
\begin{eqnarray}\label{eq:lwofVthetaplusalpha}
Y_\theta Y_{2\cdots n-1}-Y_{1\cdots n-1}Y_{2\cdots n},
\end{eqnarray}
which corresponds to $-(\theta+\alpha_2+\cdots+\alpha_{n-1})$,
the lowest weight in $V(\theta+\alpha_2+\cdots+\alpha_{n-1})$.
\end{proposition}
Next, let us recall the following.
\begin{lemma}[{\cite[\S IV.3 Remark 4]{Ga}}]\label{Vtheta2}
In the $A_n$ case,
$v$ is the lowest weight vector of the representation $V(\theta)$ if and only if
$v\in V(\theta)^{\mathfrak{n}^-}$.
\end{lemma}
By this lemma, we have the following.
\begin{proposition}\label{generalterm1}
The lowest weight vector of the subrepresentation $V(\theta)$ in Proposition \ref{structure of J} is
\begin{eqnarray}\label{vtheta}
-(n+1)(Y_1Y_{2\cdots n}+Y_{12}Y_{3\cdots n}+\cdots+Y_{1\cdots n-1}Y_n)+
\sum_{k=1}^n Y_\theta(2k-1-n)h_k.
\end{eqnarray}
\end{proposition}
\begin{proof}
Since $V(\theta)\cong\mathfrak{g}$ as $\mathfrak g$-representations,
let $v$ be the lowest weight vector in $V(\theta)$, then $v$
corresponds to the weight $-\theta$ in $\mathfrak{g}$. By \cite[\S 25.3]{FH},
\[
v=(\Omega-2(\theta,\theta) Id\otimes Id)(v{'}),
\]
where $v'$ is the weight vector corresponding to the weight
$-\theta$ in the representation $Sym^2(\mathfrak{g})$.
On the other hand, by Lemma \ref{Vtheta2}, $v\in V(\theta)^{\mathfrak{n}^-}$, and therefore
\begin{align*}
v&=(\Omega-2(\theta,\theta) Id\otimes Id)(v{'})\\
&=(\Omega-4Id\otimes Id)\left(Y_\theta\sum_{k=1}^n(3-k)h_k\right)\\
&=-2(n+1)(Y_1Y_{2\cdots n}+Y_{12}Y_{3\cdots n}+
\cdots+Y_{1\cdots n-1}Y_n)+2\sum_{k=1}^n Y_\theta(2k-1-n)h_k.
\end{align*}
Without affecting the results, we take a half of the above vector and let
\[
v=-(n+1)(Y_1Y_{2\cdots n}+Y_{12}Y_{3\cdots n}+\cdots+
Y_{1\cdots n-1}Y_n)+\sum_{k=1}^n Y_\theta(2k-1-n)h_k.\qedhere
\]
\end{proof}
Having found the generators in $I(\overline{\mathcal O}_{min})$,
our next goal is to find the corresponding generators in the Joseph ideal.
First, for the subrepresentation $V(\theta+\alpha_2+\cdots+\alpha_{n-1})$,
we have the following proposition:
\begin{proposition}[\cite{Ga} \S IV.3 Theorem 2 and \S 5]\label{Garfinkle lemma1}
Let $v_0$ be the lowest weight vector of the representation $V(\theta+\alpha_2+\cdots+\alpha_{n-1})$. Then
$\beta(v_0)$ is an
element of Joseph ideal $J$ of $\mathcal{U}(\mathfrak{g})$.
\end{proposition}
Thus combining Propositions \ref{prop:lwofVthetaplusalpha}
and \ref{Garfinkle lemma1}, we obtain that
$\beta(v_0)$ is given by
\begin{eqnarray}\label{vtheta+alpha2}
\beta(Y_\theta Y_{2\cdots n-1}-Y_{1\cdots n-1}Y_{2\cdots n})
=Y_\theta Y_{2\cdots n-1}-Y_{1\cdots n-1}Y_{2\cdots n}\in\mathcal{U}(\mathfrak{g}).
\end{eqnarray}
Second, we find the generator of $J$ corresponding to \eqref{vtheta}.
Recall that a subalgebra $\mathfrak{p}\subseteq\mathfrak{g}$ such that
$\mathfrak{p}\supseteq \mathfrak{b}$ is called a parabolic subalgebra.
Given $\Phi'\subset \Phi$, we define a parabolic subalgebra as follows.
Let $\Delta_{\mathfrak{l}}=\{\gamma\in\Delta\,|\,\gamma=\sum_{\alpha\in\Phi'}n_\alpha\alpha,
n_\alpha\in\mathbb{Z}\}$ and $\Delta_{\mathfrak{u}^+}=\{\alpha\in\Delta^+\,|\,\alpha\notin\Delta_{\mathfrak{l}} \}$.
Then, let
$\mathfrak{l}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\Delta_{\mathfrak{l}}}\mathfrak{g}^\alpha$ and $\mathfrak{u}
=\bigoplus_{\alpha\in\Delta_{\mathfrak{u}^+}}\mathfrak{g}^\alpha$. We call $\mathfrak{p}
=\mathfrak{l}\oplus\mathfrak{u}$
the parabolic subalgebra defined by $\Phi'$.
The following lemma is straightforward.
\begin{lemma}\label{prop:lambda}
Let $\mathfrak g$ be a complex semisimple Lie algebra,
$\mathfrak{p}$ is a parabolic subalgebra defined by $\Phi-\{\alpha_n\}$.
Suppose
$\lambda\in\mathfrak{h}^*$. Then the following two conditions are equivalent:
\begin{enumerate}
\item[$(1)$]
$\lambda$ can be extended to a character on $\mathfrak{p}$, i.e.,
$\lambda|_{[\mathfrak{p},\mathfrak{p}]}=0, \lambda|_\mathfrak{h}=\lambda$;
\item[$(2)$]
$\lambda (h_1)=\cdots=\lambda (h_{n-1})=0$; in this case we write
$z:=\lambda(h_n)\in\mathbb C$.
\end{enumerate}
\end{lemma}
\begin{proposition}[{\cite[\S IV.3 Proposition 3, \S IV.6 Theorem 1 and \S V Theorem 1]{Ga}}]\label{Vtheta}
Let $v\in V(\theta)^{\mathfrak{n}^-}$, $\mathfrak{p}$ be the parabolic subalgebra of
$\mathfrak{g}$ defined by $\Phi-\{\alpha_n\}$, and $\lambda\in\mathfrak{h}^*$
satisfying the conditions in
Lemma \ref{prop:lambda}. Then, there exists a
$y\in\mathcal{U}_1(\mathfrak{g})^{\mathfrak{n}^-}$ depending on
$\lambda$ such that $\beta(v)-y\in I_{\mathfrak{p},\lambda}$. In this case, $\beta(v)-y\in J$.
\end{proposition}
Thus by combining Proposition \ref{generalterm1}, Lemma \ref{prop:lambda} and Proposition
\ref{Vtheta}, we have that
\begin{align}\label{vinU}
\beta(v)-y=&-(1+n)(Y_{2\cdots n}Y_1+Y_{3\cdots n}Y_{12}+\cdots +Y_nY_{1\cdots n-1})\notag \\
&+Y_\theta\left(\sum_{k=1}^n (2k-1-n)h_k-\lambda\Big(\sum_{k=1}^n (2k-1-n)h_k\Big)\right)\notag\\
=&-(1+n)(Y_{2\cdots n}Y_1+Y_{3\cdots n}Y_{12}+\cdots +Y_nY_{1\cdots n-1})\notag \\
&+Y_\theta\left(\sum_{k=1}^n (2k-1-n)h_k+(1-n)z\right)
\end{align}
is an element in the Joseph ideal $J$.
Third, we find the generator of the Joseph ideal that corresponds to
the Casimir element of $\mathfrak g$. Let us denote by $C$ the Casimir element.
We have the following.
\begin{proposition}[{\cite[\S IV.3]{Ga}}]\label{prop:vtheta0An}
Let $\mathfrak g$ be the $A_n$ Lie algebra. Then
\begin{align}\label{vtheta0An}
C-c_\lambda =&\sum_{\alpha\in\Phi^+}(X_\alpha Y_\alpha+ Y_\alpha X_\alpha)+
\sum_{i=1}^n h_i\cdot\dfrac{1}{n+1}\Big((n-i+1)\big(h_1+2h_2+\cdots+(i-1)h_{i-1}\big)\notag\\
&+i\big((n-i+1)h_i+(n-i)h_{i+1}+\cdots+h_n\big)\Big)-n\left(\dfrac{z}{n+1}+1\right)z
\end{align}
is a generator of $J$, where $c_\lambda=(\lambda,\lambda)+(\lambda,2\rho)$ and
$\rho$ is the half of the sum of positive roots.
\end{proposition}
\begin{proof}
The Casimir element is $C=\sum_{\alpha\in\Phi^+}X_\alpha Y_\alpha+
Y_\alpha X_\alpha+\sum_{i=1}^n h_ih_i^{\vee}$,
where $n$ is the rank of the corresponding Lie algebra.
For Lie algebra of $A_n$,
$2\rho=n\alpha_1+2(n-1)\alpha_2+\cdots+i(n-i+1)\alpha_i+\cdots+n\alpha_n$.
By Lemma \ref{prop:lambda}, we have $\lambda=z\lambda_n$. Thus
\[
c_\lambda=n\left(\dfrac{z}{n+1}+1\right)z.
\]
By \cite[\S IV.3 \S IV.6 Theorem 1 and \S V Theorem 1]{Ga}, $C-c_\lambda$
is an element of $J$.
\end{proof}
Garfinkle showed that the Joseph ideal $J$ in the type A case is exactly generated by the above
three types of elements. Observe that $J$ depends on an element $z\in\mathbb C$; to specify
its dependence on $z$, in what follows we shall write it as $J^z$.
Summarizing the above propositions, we have the following.
\begin{proposition}[\cite{Ga}]\label{Prop:GaJosephAn}Let $\mathfrak g$ be the type A Lie algebra.
For each $z\in\mathbb C$, there is a Joseph ideal in $U(\mathfrak g)$, denoted by $J^z$, which is
generated by \eqref{vtheta+alpha2},
\eqref{vinU} and \eqref{vtheta0An}.
\end{proposition}
\subsubsection{Joseph ideal for type D Lie algebras}
Let $\alpha$ be the simple root not orthogonal to the highest root $\theta$;
in the case of type D and $E_6$, $E_7$, $E_8$, such an $\alpha$
is unique.
\begin{lemma}[{see \cite{Ga}, \cite[\S 4.4]{BJ} and \cite{GS}}]\label{structureofDE}
Let $\mathfrak g$ be a complex simple Lie algebra of type D or E.
Let the $\alpha_i$ denote the highest roots of the simple components of the semisimple Lie algebra obtained from
$\mathfrak{g}$ by deleting $\alpha$ from the Dynkin diagram of $\mathfrak{g}$.
Then we have the following decomposition into irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})= V(2\theta)\oplus\Big(\bigoplus_i V(\theta+\alpha_i)\Big)\oplus V(0).
\end{eqnarray*}
\end{lemma}
For the type D Lie algebra, the unique simple root which is not
perpendicular to $\theta$ is precisely the simple root $\alpha_2$,
and thus we have the following corollary:
\begin{corollary}\label{structure of Dn}
For the $D_n$ Lie algebra $\mathfrak{g}$, we have the decomposition of
irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})\cong V(2\theta)\oplus V(\theta+\theta{'})\oplus V(\theta+\alpha_1)\oplus V(0),
\end{eqnarray*}
where $\theta{'}=\alpha_3+2\alpha_4+\cdots+2\alpha_{n-2}+\alpha_{n-1}+\alpha_n$ is the
highest root of the Lie algebra corresponding to the sub-Dynkin diagram $D_{n-2}$ of
$D_n$, which consists of the roots $\alpha_3,\cdots, \alpha_n$.
\end{corollary}
By Garfinkle \cite{Ga}, the ideal $I(\overline{\mathcal{O}}_{min})$ is generated by the lowest
weight vectors of
$V(\theta+\theta^{'})$ and $V(\theta+\alpha_1)$, together with elements of $V(0)$.
We have the following.
\begin{proposition}
The lowest weight vector of the subrepresentation $V(\theta+\alpha_1)$ in
Corollary \ref{structure of Dn} is
\begin{align}\label{vtheta+alpha1}
&Y_1Y_\theta+Y_{12}Y_{12\overline{3}\cdots\overline{n-2},n-1,n}+Y_{123}
Y_{123\overline{4}\cdots\overline{n-2},n-1,n}+\notag\\
&\cdots+Y_{123\cdots n-2}Y_{123\cdots n-2,n-1,n}+Y_{123\cdots n-1}Y_{123\cdots n-2,n},
\end{align}
and the lowest weight vector of the subrepresentation
$V(\theta+\alpha_3+2\alpha_4+\cdots+2\alpha_{n-2}+\alpha_{n-1}+\alpha_n )$
in Corollary \ref{structure of Dn} is
\begin{eqnarray}\label{WofDn}
Y_{23\overline{4}\cdots\overline{n-2},n-1,n }Y_{12\overline{3}\cdots\overline{n-2},n-1,n}-
Y_{123\overline{4}\cdots \overline{n-2},n-1,n}Y_{2\overline{3}\cdots \overline{n-2},n-1,n}-
Y_{3\overline{4}\cdots\overline{n-2},n-1,n}Y_\theta.
\end{eqnarray}
\end{proposition}
\begin{proof}
The proof is similar to the proof of Proposition \ref{generalterm1}.
\end{proof}
We then have the following proposition, parallel to
Proposition \ref{Garfinkle lemma1}, Lemma \ref{prop:lambda} and Proposition
\ref{prop:vtheta0An}.
\begin{proposition}[{\cite[\S IV.3 Theorem 2, \S IV.6 Theorem 1 and \S V]{Ga}}]\label{Garfinkle lemma1 Dn}
Let $v$ be the lowest weight vector of the representation $V(\theta+\alpha_1)$ or
$V(\theta+\theta^{'})$. Then $\beta(v)\in J$ if and only if $\lambda(h_1)=-(n-2)$ and
$\lambda(h_2)=\cdots=\lambda(h_n)=0$.
\end{proposition}
From the above proposition, we also see that $J$ is unique (see also Joseph \cite{Jo}).
Next, we consider the generator of $J$ that corresponds to the Casimir element $C$.
We have the following.
\begin{proposition}[{see \cite[\S IV.3]{Ga}}]\label{proposition-CasimirofDn}
Let $\mathfrak g$ be the $D_n$ Lie algebra.
We have
\begin{align}\label{vtheta0Dn}
C-c_\lambda
=&\sum_{\alpha\in\Phi^+}(X_\alpha Y_\alpha+ Y_\alpha X_\alpha)+
h_1\left(h_1+h_2+\cdots+h_{n-2}+\dfrac{1}{2}h_{n-1}+\dfrac{1}{2}h_n\right)\notag \\
&+h_2\big(h_1+2(h_2+\cdots+h_{n-2})+h_{n-1}+h_n\big)\notag \\
&+\cdots+h_{n-2}\left(h_1+2h_2+\cdots+(n-3)h_{n-3}+(n-2)h_{n-2}+
\dfrac{n-2}{2}(h_{n-1}+h_{n})\right)\notag \\
&+\dfrac{1}{2}h_{n-1}\left(h_1+2h_2+\cdots+(n-2)h_{n-2}+\dfrac{n}{2}h_{n-1}+
\dfrac{n-2}{2}h_{n}\right)\notag \\
&+\dfrac{1}{2} h_n\left(h_1+2h_2+\cdots+(n-2)h_{n-2}+\dfrac{n-2}{2}h_{n-1}+
\dfrac{n}{2}h_{n}\right)-2n+n^2
\end{align}
is an element of $J$.
\end{proposition}
\begin{proof}
The Casimir element is $C=\sum_{\alpha\in\Phi^+}X_\alpha Y_\alpha+ Y_\alpha X_\alpha+
\sum_{i=1}^n h_ih_i^{\vee}$, where $n$ is the rank of the corresponding Lie algebra.
By Proposition \ref{Garfinkle lemma1 Dn}, $\lambda=-(n-2)\lambda_1$, and a direct computation gives
$c_\lambda=2n-n^2$.
\end{proof}
\begin{remark}
The symmetrization map sends the lowest weight vectors into $\mathcal{U}(\mathfrak{g})$
without changing the form of (\ref{vtheta+alpha1}) and (\ref{WofDn}).
Thus we identify (\ref{vtheta+alpha1}) and (\ref{WofDn}) with
$\beta(v)$ and $\beta(v_0)$ respectively, which are two elements of the Joseph ideal $J$.
\end{remark}
Similarly to Proposition \ref{Prop:GaJosephAn},
by summarizing the above propositions, we have:
\begin{proposition}[\cite{Ga}]\label{Prop:GaJosephDn}
The Joseph ideal $J$ of the $D_n$ Lie algebra is generated by
\eqref{vtheta+alpha1}, \eqref{WofDn} and \eqref{vtheta0Dn}.
\end{proposition}
\subsubsection{Joseph ideal for type E Lie algebras}
Now we turn to the Joseph ideals of the type E Lie algebras,
which we treat one by one.
\subsubsubsection{The $E_6$ case}
By Lemma \ref{structureofDE}, we have the following.
\begin{corollary}\label{structure of E6}
For the $E_6$ Lie algebra $\mathfrak g$,
we have the following decomposition into irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})\cong V(2\theta)\oplus V(\theta+\alpha_1+\alpha_3+
\alpha_4+\alpha_5+\alpha_6)\oplus V(0),
\end{eqnarray*}
where $\theta$ is the highest root of the Lie algebra of type $E_6$, i.e.,
$\theta=\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6$.
\end{corollary}
For the subrepresentation
$V(\theta+\alpha_1+\alpha_3+\alpha_4+\alpha_5+\alpha_6 )$
in the above corollary, we have the following.
\begin{proposition}
The lowest weight vector of the subrepresentation
$V(\theta+\alpha_1+\alpha_3+\alpha_4+\alpha_5+\alpha_6 )$ in Corollary \ref{structure of E6} is
\begin{eqnarray}\label{highestrootvectorE6}
Y_{13456}Y_\theta-Y_{123456}Y_{12\bar{3}\bar{\bar{4}}\bar{5}6}-Y_{123\bar{4}56}
Y_{12\bar{3}\bar{4}\bar{5}6}-Y_{12\bar{3}\bar{4}56}Y_{123\bar{4}\bar{5}6}.
\end{eqnarray}
\end{proposition}
Similarly to Proposition \ref{proposition-CasimirofDn}, the subrepresentation $V(0)$ in
Corollary \ref{structure of E6} is spanned by $C$, and $C-c_\lambda$ is a generator of
the Joseph ideal $J$ if and only if $\lambda(h_6)=-3$ and $\lambda(h_i)=0$ for $i\neq 6$
(see {\cite[\S IV.3 Theorem 2, \S IV.6 Theorem 1 and \S V]{Ga}}), and thus we have the following:
\begin{align}\label{CasimirE6}
C-c_\lambda =&C-(\lambda,\lambda)-(\lambda,2\rho)\notag\\
=&\sum_{\alpha\in\Phi^+}(X_\alpha Y_\alpha+ Y_\alpha X_\alpha)+\dfrac{1}{3}
h_1(4h_1+3h_2+5h_3+6h_4+4h_5+2h_6)\notag \\
&+h_2(h_1+2h_2+2h_3+3h_4+2h_5+h_6)\notag\\
&+\dfrac{1}{3} h_3(5h_1+6h_2+10h_3+12h_4+8h_5+4h_6)\notag\\
&+h_4(2h_1+3h_2+4h_3+6h_4+4h_5+2h_6)\notag\\
&+\dfrac{1}{3} h_5(4h_1+6h_2+8h_3+12h_4+10h_5+5h_6)\notag\\
&+\dfrac{1}{3}h_6(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6)+36.
\end{align}
Summarizing the above results, we have the following.
\begin{proposition}[\cite{Ga}]\label{Prop:GaJosephE6}
The Joseph ideal $J$ of type $E_6$ Lie algebra is generated by
\eqref{highestrootvectorE6} and \eqref{CasimirE6}.
\end{proposition}
\subsubsubsection{The $E_7$ case}
By Lemma \ref{structureofDE}, we have the following.
\begin{corollary}\label{structure of E7}
For the $E_7$ Lie algebra $\mathfrak g$,
we have the following decomposition into irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})\cong V(2\theta)\oplus V(\theta+\alpha_2+
\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7)\oplus V(0),
\end{eqnarray*}
where $\theta$ is the highest root of the Lie algebra of type $E_7$, i.e.,
$\theta=2\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+3\alpha_5+2\alpha_6+\alpha_7$.
\end{corollary}
For the subrepresentation
$V(\theta+\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7)$
in the above corollary, we have the following.
\begin{proposition}[{see \cite[\S 4.3 Definition 7]{Ga}}]
The lowest weight vector of the subrepresentation
$V(\theta+\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7)$ in
Corollary \ref{structure of E7} is
\begin{eqnarray}\label{highestrootvectorE7}
Y_{23\bar{4}\bar{5}\bar{6}7}Y_\theta-
Y_{123\bar{4}\bar{5}\bar{6}7}Y_{1\bar{2}
\bar{\bar{3}}\bar{\bar{\bar{4}}}\bar{\bar{5}}\bar{6}7}-
Y_{12\bar{3}\bar{4}\bar{5}\bar{6}7}Y_{1\bar{2}\bar{3}\bar{\bar{\bar{4}}}
\bar{\bar{5}}\bar{6}7}-Y_{12\bar{3}\bar{\bar{4}}\bar{5}\bar{6}7}
Y_{1\bar{2}\bar{3}\bar{\bar{4}}\bar{\bar{5}}\bar{6}7}-
Y_{12\bar{3}\bar{\bar{4}}\bar{\bar{5}}\bar{6}7}Y_{1\bar{2}\bar{3}\bar{\bar{4}}\bar{5}\bar{6}7}.
\end{eqnarray}
\end{proposition}
Similarly to Proposition \ref{proposition-CasimirofDn},
the subrepresentation $V(0)$ in Corollary \ref{structure of E7} is spanned by $C$,
and $C-c_\lambda$ is a generator of the Joseph ideal $J$ if and only if $\lambda(h_7)=-4$
and $\lambda(h_i)=0$ for $i\neq 7$
(see {\cite[\S IV.3 Theorem 2, \S IV.6 Theorem 1 and \S V]{Ga}}), and thus we have:
\begin{align}\label{CasimirE7}
C-c_\lambda =& C-(\lambda,\lambda)-(\lambda,2\rho)\notag\\
=&\sum_{\alpha\in\Phi^+}(X_\alpha Y_\alpha+ Y_\alpha X_\alpha) +h_1(2h_1+2h_2+3h_3+4h_4+3h_5+2h_6+h_7)\notag \\
&+\dfrac{1}{2}h_2(4h_1+7h_2+8h_3+12h_4+9h_5+6h_6+3h_7)\notag \\
&+ h_3(3h_1+4h_2+6h_3+8h_4+6h_5+4h_6+2h_7)\notag \\
&+h_4(4h_1+6h_2+8h_3+12h_4+9h_5+6h_6+3h_7)\notag \\
&+\dfrac{1}{2} h_5(6h_1+9h_2+12h_3+18h_4+15h_5+10h_6+5h_7)\notag \\
&+h_6(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+2h_7)\notag \\
&+\dfrac{1}{2}h_7(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+3h_7)+84.
\end{align}
Summarizing the above results, we have the following.
\begin{proposition}[\cite{Ga}]\label{Prop:GaJosephE7}
The Joseph ideal $J$ of type $E_7$ Lie algebra is generated by
\eqref{highestrootvectorE7} and \eqref{CasimirE7}.
\end{proposition}
\subsubsubsection{The $E_8$ case}
By Lemma \ref{structureofDE}, we have the following.
\begin{corollary}\label{structure of E8}
For the $E_8$ Lie algebra,
we have the following decomposition into irreducible representations:
\begin{eqnarray*}
Sym^2(\mathfrak{g})= V(2\theta)\oplus
V(\theta+2\alpha_1+2\alpha_2+3\alpha_3+
4\alpha_4+3\alpha_5+2\alpha_6+\alpha_7)\oplus V(0),
\end{eqnarray*}
where $\theta$ is the highest root of the Lie algebra of type $E_8$, i.e.,
$\theta=2\alpha_1+3\alpha_2+4\alpha_3+6\alpha_4+5\alpha_5+4\alpha_6+
3\alpha_7+2\alpha_8$.
\end{corollary}
For the subrepresentation
$V(\theta+2\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+3\alpha_5+2\alpha_6+\alpha_7)$
in the above corollary, we have the following.
\begin{proposition}[{see \cite[\S 4.3 Definition 7]{Ga}}]
The lowest weight vector of the subrepresentation
$V(\theta+2\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+3\alpha_5+2\alpha_6+\alpha_7)$
in Corollary \ref{structure of E8} is
\begin{align}\label{highestrootvectorE8}
&Y_{1^{2}2^{2}3^{3}4^{4}5^{3}6^{2}7^1}Y_\theta
-Y_{1^{2}2^{2}3^{3}4^{4}5^{3}6^{2}7^{1}8^{1}}
Y_{1^{2}2^{3}3^{4}4^{6}5^{5}6^{4}7^{3}8^1}\notag\\
&-Y_{1^{2}2^{2}3^{3}4^{4}5^{3}6^{2}7^{2}8^1}
Y_{1^{2}2^{3}3^{4}4^{6}5^{5}6^{4}7^{2}8^1}
-Y_{1^{2}2^{2}3^{3}4^{4}5^{3}6^{3}7^{2}8^1}
Y_{1^{2}2^{3}3^{4}4^{6}5^{5}6^{3}7^{2}8^1}\notag\\
&-Y_{1^{2}2^{2}3^{3}4^{4}5^{4}6^{3}7^{2}8^1}
Y_{1^{2}2^{3}3^{4}4^{6}5^{3}6^{3}7^{2}8^1}
-Y_{1^{2}2^{2}3^{3}4^{5}5^{4}6^{3}7^{2}8^1}
Y_{1^{2}2^{3}3^{4}4^{5}5^{3}6^{3}7^{2}8^1}\notag\\
&-Y_{1^{2}2^{2}3^{4}4^{5}5^{4}6^{3}7^{2}8^1}
Y_{1^{2}2^{3}3^{3}4^{5}5^{3}6^{3}7^{2}8^1},
\end{align}
where, for simplicity, $Y_{1^{n_1}\cdots k^{n_k}\cdots 8^{n_8}}$
denotes the root vector corresponding to the root
$n_1\alpha_1+\cdots+n_k\alpha_k+\cdots+n_8\alpha_8$.
\end{proposition}
Similarly to Proposition \ref{proposition-CasimirofDn},
the subrepresentation $V(0)$ in Corollary \ref{structure of E8} is spanned by
$C$, and $C-c_\lambda$ is a generator of the Joseph ideal $J$ if and only if
$\lambda(h_8)=-5$ and $\lambda(h_i)=0$ for $i\neq 8$
(see {\cite[\S IV.4, \S IV.6 Theorem 1 and \S V]{Ga}}), and thus we have:
\begin{align}\label{CasimirE8}
C-c_\lambda =& C-(\lambda,\lambda)-(\lambda,2\rho)\notag \\
=&\sum_{\alpha\in\Phi^+}(X_\alpha Y_\alpha+
Y_\alpha X_\alpha) +h_1(4h_1+5h_2+7h_3+10h_4+8h_5+6h_6+4h_7+2h_8)\notag \\
&+h_2(5h_1+8h_2+10h_3+15h_4+12h_5+9h_6+6h_7+3h_8)\notag \\
&+ h_3(7h_1+10h_2+14h_3+20h_4+16h_5+12h_6+8h_7+4h_8)\notag \\
&+h_4(10h_1+15h_2+20h_3+30h_4+24h_5+18h_6+12h_7+6h_8)\notag \\
&+ h_5(8h_1+12h_2+16h_3+24h_4+20h_5+15h_6+10h_7+5h_8)\notag \\
&+h_6(6h_1+9h_2+12h_3+18h_4+15h_5+12h_6+8h_7+4h_8)\notag \\
&+h_7(4h_1+6h_2+8h_3+12h_4+10h_5+8h_6+6h_7+3h_8)\notag \\
&+h_8(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+3h_7+2h_8)+240.
\end{align}
Summarizing the above results, we have the following.
\begin{proposition}[\cite{Ga}]\label{Prop:GaJosephE8}
The Joseph ideal $J$ of type $E_8$ Lie algebra is generated by
\eqref{highestrootvectorE8} and \eqref{CasimirE8}.
\end{proposition}
\section{The $B$-algebras}\label{sect:B-algebra}
The notion of the $B$-algebra of a graded associative algebra
was introduced by Braden et al.\ in \cite{BLPW}. It plays an essential
role in the equivariant Hikita conjecture (see \cite{KMP,KTW}).
In this section, we study the $B$-algebra of the quantizations
of the minimal orbits given in the previous section.
\subsection{$B$-algebra of the minimal nilpotent orbits}
Suppose ${\mathfrak g}$ is a simple Lie algebra and $Q$ is its root lattice.
Let $\mathcal{U}({\mathfrak g})$ be the universal enveloping algebra of ${\mathfrak g}$ and
$J$ the Joseph ideal. Recall that there is the PBW filtration of $\mathcal{U}({\mathfrak g})$:
$$\mathcal{U}^0\subseteq \mathcal{U}^1 \subseteq \mathcal{U}^2\subseteq \cdots $$
On the other hand, $\mathcal{U}({\mathfrak g})$ can be decomposed as
$$\mathcal{U}({\mathfrak g})=\bigoplus_{\mu\in Q}\mathcal{U}_\mu.$$
Furthermore, the Joseph ideal $J$ satisfies the following
\begin{equation}\label{J decomp}
J=\bigoplus_{\mu\in Q}J_\mu=\bigoplus_{\mu\in Q}J\cap \mathcal{U}_\mu.
\end{equation}
Denote ${\mathscr A}:=Rees(\mathcal{U}({\mathfrak g})/J)$, then there is a weight decomposition induced by that of $\mathcal{U}({\mathfrak g})$,
$${\mathscr A}=\bigoplus_{\mu\in Q}{\mathscr A}_\mu.$$
\begin{definition}
The {\it B-algebra} of ${\mathscr A}$ is defined to be
\begin{equation}\label{B-algebra}
B({\mathscr A}):={\mathscr A}_0\Big/\sum_{\mu>0}\{ab|a\in {\mathscr A}_\mu, b\in {\mathscr A}_{-\mu}\}.
\end{equation}
\end{definition}
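As a toy illustration of this definition (with a one-dimensional weight lattice, and not needed in the sequel), take ${\mathscr A}={\mathbb C}[x,y]$ with $x$ of weight $1$ and $y$ of weight $-1$. Then ${\mathscr A}_0={\mathbb C}[xy]$, the sum $\sum_{\mu>0}\{ab\,|\,a\in{\mathscr A}_\mu,\,b\in{\mathscr A}_{-\mu}\}$ is the ideal $(xy)\subset{\mathbb C}[xy]$, and hence $B({\mathscr A})\simeq{\mathbb C}$.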
The following lemma is immediate.
\begin{lemma}\label{BA}
Let $\mathcal{I}$ be the ideal of $\mathcal{U}_0$ given by
\begin{equation*}
\mathcal{I}:=\sum_{\mu>0}\{ab|a\in \mathcal{U}_\mu, b\in \mathcal{U}_{-\mu}\}.
\end{equation*}
Then
$$B({\mathscr A})=(\mathcal{U}_0/\mathcal{I})/
\big(J_0/(J_0\cap \mathcal{I})
\big).$$
\end{lemma}
Now we describe $\mathcal{U}_0/\mathcal{I}$.
\begin{lemma}\label{UI}
As a (commutative) algebra,
\begin{equation*}
\mathcal{U}_0/\mathcal{I}\simeq \mathcal{U}({\mathfrak h})[\hbar].
\end{equation*}
\end{lemma}
\begin{proof}
Notice that we have the following simple and important decomposition of $\mathcal{U}({\mathfrak g})$ (cf. \cite{Ga}),
\begin{equation}\label{key decomp}
\mathcal{U}({\mathfrak g})=(\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)\oplus \mathcal{U}({\mathfrak h}).
\end{equation}
For any $a\in \mathcal{U}_0$, we denote by $a_{\mathfrak h}\in \mathcal{U}({\mathfrak h})$ the $\mathcal{U}({\mathfrak h})$-summand of $a$ with respect to \eqref{key decomp}. We define a map
\begin{align}
\kappa: \mathcal{U}_0/\mathcal{I}\rightarrow \mathcal{U}({\mathfrak h}), \quad a+\mathcal{I}\mapsto a_{\mathfrak h}.
\end{align}
Firstly, we need to prove that $\kappa$ is well-defined. For any $a\in \mathcal{I}$, by the definition of $\mathcal{I}$ we may write $a=a_+\cdot a_-$, where $a_+\in \mathcal{U}_{>0}$ and $a_-\in \mathcal{U}_{<0}$. Without loss of generality, we assume $a_+\in \mathcal{U}_{>0}^k$ for some $k>0$, and we perform induction on $k$. For $k=1$, $a_+\in \mathfrak{n}^+$, so $a\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$. Suppose that for $k \leq r$ we have $a\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$. Now consider $a_+\in \mathcal{U}_{>0}^{r+1}$; then there exist $v\in \mathfrak{n}^+$ and $b, c\in \mathcal{U}$ such that $a_+=b\cdot v \cdot c$. Then
\begin{equation*}
a_+=b\cdot v \cdot c=v\cdot b \cdot c+\hbar [b, v]\cdot c.
\end{equation*}
Here $v\cdot b \cdot c\cdot a_-\in \mathfrak{n}^+\mathcal{U}({\mathfrak g})$. Since $[b, v]\cdot c\in \mathcal{U}_{>0}^{\leq r}$, by the induction hypothesis
$[b, v]\cdot c\cdot a_-\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$.
So $a\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$. This shows that $\mathcal{I}\subseteq (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$, so $\kappa$ is well-defined.
The surjectivity of $\kappa$ is easy to check. The injectivity of $\kappa$ follows from
$$(\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)\cap \mathcal{U}_0\subseteq \mathcal{I},$$
which in turn follows from the fact that $\mathfrak{n}^+\subseteq \mathcal{U}_{>0}$ and $\mathfrak{n}^-\subseteq \mathcal{U}_{<0}$.
\end{proof}
By Lemmas \ref{BA} and \ref{UI}, we have
\begin{equation}\label{U/J}
B(\mathscr{A})=\mathcal{U}({\mathfrak h})/J_{\mathfrak h},
\end{equation}
where $J_{\mathfrak h}=\kappa(J_0/(J_0\cap \mathcal{I}))$.
By the definition of $\kappa$, $J_{\mathfrak h}$ is just the projection of $J_0$ onto $\mathcal{U}({\mathfrak h})$ in \eqref{key decomp}.
Suppose $h_1, h_2, \cdots, h_n$ is a basis of ${\mathfrak h}$, then $\mathcal{U}({\mathfrak h})$ is a polynomial ring generated by $h_1, \cdots, h_n$. The degree of polynomial gives us a natural filtration of $\mathcal{U}({\mathfrak h})$,
$$\mathcal{U}^0({\mathfrak h})\subseteq \mathcal{U}^1({\mathfrak h})\subseteq \mathcal{U}^2({\mathfrak h})\subseteq \cdots .$$
\begin{lemma}\label{J_h generator}
$J_{\mathfrak h}$ is an ideal of $\mathcal{U}({\mathfrak h})$ generated by the set $S_J$, where $S_J$ is the projection of $J_0\cap \mathcal{U}^2$ onto $\mathcal{U}({\mathfrak h})$ via \eqref{key decomp}.
\end{lemma}
\begin{proof}
Starting from $a_{\mathfrak h}\in J_{\mathfrak h}$, by the surjectivity of $\kappa$, there exists $a\in J_0$ such that $\kappa(a+\mathcal{I})=a_{\mathfrak h}$. By Proposition III.3.2 in \cite{Ga}, we know there exists $w\in J\cap \mathcal{U}^2$, $b, c\in \mathcal{U}({\mathfrak g})$ such that
\begin{equation*}
a=b\cdot w\cdot c.
\end{equation*}
Without loss of generality, we assume $b\in \mathcal{U}^i$, $c\in \mathcal{U}^j$ for some $i, j\geq 0$.
Now we claim that, $a=v+a_1\cdot w_0\cdot a_2$, where $w_0\in S_J$, $a_1$, $a_2\in \mathcal{U}({\mathfrak h})$
and $v\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)$. We will prove this claim by
induction on $k=i+j$.
\begin{enumerate}
\item[(1)] For $k=0$, we have $b, c\in {\mathbb C}$.
Therefore $a\in J_0\cap \mathcal{U}^2$, and it is easy to see the claim holds by \eqref{key decomp}.
\item[(2)] Suppose for $k\leq r$, the claim holds. Now consider $k=r+1$.
\begin{enumerate}
\item[(2i)]
If $b, c\in \mathcal{U}({\mathfrak h})$, then $w\in J_0\cap \mathcal{U}^2$. By \eqref{key decomp},
we have $w=v+w_0$, where
$v\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)\cap
\mathcal{U}^0$ and $w_0\in S_J$. Recall that in the proof of Lemma \ref{UI},
we show that $(\mathfrak{n}^+\mathcal{U}({\mathfrak g})
+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)\cap \mathcal{U}^0=\mathcal{I}$ which is an ideal.
Thus $a=b v c+b w_0 c$ and
$b v c\in (\mathfrak{n}^+\mathcal{U}({\mathfrak g})+\mathcal{U}({\mathfrak g})\mathfrak{n}^-)
\cap \mathcal{U}^0$. The claim holds.
\item[(2ii)] If $b\notin \mathcal{U}({\mathfrak h})$, there exists
$x\in \mathfrak{n}^+$ or $x\in\mathfrak{n}^-$ such that $b=b_1 x b_2$.
If $x\in \mathfrak{n}^+$, then
$$a=b_1 x b_2 w c=x b_1 b_2 w c +\hbar [b_1, x] b_2 w c.$$
Here $x b_1 b_2 w c\in \mathfrak{n}^+\mathcal{U}({\mathfrak g})$
and $[b_1, x] b_2\in \mathcal{U}^{< i}$.\\
If $x\in \mathfrak{n}^-$, then
\begin{align*}
a&=b_1 x b_2 w c= b_1 b_2 w c x+\hbar b_1[x, b_2 w c]\\
&= b_1 b_2 w c x+\hbar b_1([x, b_2]wc+b_2[x, w]c+b_2w[x, c]).
\end{align*}
Here $b_1 b_2 w c x\in \mathcal{U}({\mathfrak g})\mathfrak{n}^-$, and the remaining three terms in the last
expression are of the form $\mathcal{U}^{< i}\cdot (J\cap \mathcal{U}^2)\cdot \mathcal{U}^{j}$.
So the claim holds by the induction hypothesis. Since a similar
argument applies to $c$, the claim holds in the case $k=r+1$.
\end{enumerate}
\end{enumerate}
By the above claim, we have $a_{\mathfrak h}=a_1w_0a_2$, and the proof of the lemma is complete.
\end{proof}
Lemma \ref{J_h generator} and \eqref{U/J} tell us that, to calculate $B({\mathscr A})$, we only need to calculate $J_0\cap \mathcal{U}^2$. First, we determine its dimension.
Suppose $I={\rm gr}J=\bigoplus_{k\in {\mathbb N}}I^k$,
where $I^k=(J\cap \mathcal{U}^k)/(J\cap \mathcal{U}^{k-1})$. Then $I$ is an ideal of the polynomial ring
${\rm gr}(\mathcal{U}({\mathfrak g}))= Sym({\mathfrak g})$, and the degree of elements in $I^k$ is $k$.
For $\mu\in Q$, set $I^k_{\mu}$ to be the component of $I^k$ with weight $\mu$.
\begin{lemma}\label{lemma:dimJU2}With the above notations, we have
\begin{equation}\label{dimJU2}
\dim(J_0\cap \mathcal{U}^2)=\dim I^2_0.
\end{equation}
\end{lemma}
\begin{proof}
Since $J\neq \mathcal{U}({\mathfrak g})$, we have $1\notin J$ and hence $J\cap \mathcal{U}^0=\{0\}$. Then, as vector spaces,
\begin{align*}
J\cap \mathcal{U}^1 \simeq I^1=(J\cap \mathcal{U}^1)/(J\cap \mathcal{U}^0).
\end{align*}
By a theorem of Kostant (see \cite[Theorem III.2.1]{Ga}),
as an ideal of $ Sym({\mathfrak g})$, $I$ is generated by $I^2$,
which consists of the homogeneous elements of degree 2.
Therefore $I^1=\{0\}$, and $ J\cap \mathcal{U}^1 =\{0\}$.
Now we consider $ J\cap \mathcal{U}^2$, as a vector space,
\begin{align*}
J\cap \mathcal{U}^2\simeq I^2=(J\cap \mathcal{U}^2)/(J\cap \mathcal{U}^1).
\end{align*}
Since the projection $J\cap \mathcal{U}^k\rightarrow (J\cap \mathcal{U}^k)/(J\cap \mathcal{U}^{k-1})$ is compatible with the decomposition \eqref{J decomp}, we have $J_0\cap \mathcal{U}^2\simeq I^2_0$.
\end{proof}
\subsubsection{Some calculations on ${\mathfrak g}$-modules}
In this subsection, we calculate $\dim I^2_0$ for ADE Lie algebras.
We have the following decomposition of ${\mathfrak g}$-module (see \cite{Ga} or \cite{Sh}).
\begin{theorem}[Kostant]
Suppose ${\mathfrak g}$ is a semisimple Lie algebra, and $\theta$ is the highest weight of the adjoint
representation of ${\mathfrak g}$. As a ${\mathfrak g}$-module,
\begin{equation*}
{\rm Sym}^2 {\mathfrak g}\simeq V(2\theta)\oplus L_2,
\end{equation*}
where $V(2\theta)$ is the irreducible representation of highest weight $2\theta$,
and $L_2$ is a representation with underlying space $I^2$.
\end{theorem}
For a ${\mathfrak g}$-module $V$, we denote by $V_0$ the subspace of $V$ of weight $0$. Then we have
the following.
\begin{lemma} With the notations as above, we have:
\begin{equation}\label{dimension of I}
\dim(I^2_0)=\dim({\rm Sym}^2 {\mathfrak g})_0- \dim V(2\theta)_0,
\end{equation}
where
\begin{equation}\label{dimSymh}
\dim({\rm Sym}^2 {\mathfrak g})_0=\frac{\dim{\mathfrak g}-\dim{\mathfrak h}}{2}+\dim ({\rm Sym}^2{\mathfrak h}).
\end{equation}
\end{lemma}
\begin{proof}
Just notice that every element of $({\rm Sym}^2 {\mathfrak g})_0$ is a linear combination of elements
$x_\mu x_{-\mu}$, with $x_\mu\in {\mathfrak g}_{\mu}$, and $h_i h_j$, with $h_i, h_j\in {\mathfrak h}$.
\end{proof}
The calculation of $\dim V(2\theta)_0$ is more difficult. The main tool we use is the following formula
(see \cite[Theorem 22.3]{Hum}).
\begin{lemma}[Freudenthal]
Let $V=V(\lambda)$ be an irreducible ${\mathfrak g}$-module of highest weight $\lambda$.
Let $\Lambda$ be the set of weights of $V$. If $\mu\in \Lambda$,
then the multiplicity $m(\mu)$ of $\mu$ in $V$ is given recursively as follows:
\begin{equation}\label{Freudenthal}
\big((\lambda+\delta, \lambda+\delta)-(\mu+\delta, \mu+\delta)
\big)m(\mu)
=2\sum_{\alpha\in \Delta^+}\sum_{i=1}^{+\infty}m(\mu+i\alpha)(\mu+i\alpha, \alpha),
\end{equation}
where $\delta={1\over 2}\sum_{\alpha\in \Delta^+}\alpha$.
\end{lemma}
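As a quick illustration of how \eqref{Freudenthal} will be used below, take $\mathfrak g=\mathfrak{sl}_2$ with $\theta=\alpha$. In $V(2\theta)$ one has $m(2\alpha)=m(\alpha)=1$ and $\delta=\tfrac{1}{2}\alpha$, so \eqref{Freudenthal} with $\mu=0$ gives
\[
\Big(\big(\tfrac{5}{2}\alpha,\tfrac{5}{2}\alpha\big)-\big(\tfrac{1}{2}\alpha,\tfrac{1}{2}\alpha\big)\Big)m(0)
=2\big(m(\alpha)(\alpha,\alpha)+m(2\alpha)(2\alpha,\alpha)\big),
\]
i.e.\ $12\,m(0)=12$, so $m(0)=1=\frac{\dim\mathfrak{sl}_2-\dim\mathfrak h}{2}$, in accordance with \eqref{dimV0} below.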
\begin{lemma}
For the ADE Lie algebra ${\mathfrak g}$,
\begin{equation}\label{dimV0}
\dim V(2\theta)_0=\frac{\dim{\mathfrak g}-\dim{\mathfrak h}}{2}.
\end{equation}
\end{lemma}
\begin{proof}
Notice that $\dim V(2\theta)_0$ is just the multiplicity $m(0)$ in $ V(2\theta)$.
We prove the lemma case by case.
\noindent\textbf{$A_n$ case:}
Firstly we list some data in $A_n$ case (see \cite{Hum} or \cite{Kac}).
\begin{align*}
Q&=\left\{\sum_{i=1}^{n+1}k_iv_i|k_i\in {\mathbb Z}, \sum_i k_i=0\right\},\\
\Delta &=\{v_i-v_j\}, \quad \Delta^+ =\{v_i-v_j|i<j \} ,\\
\Phi^+&=\{\alpha_1=v_1-v_2, \alpha_2=v_2-v_3, \cdots , \alpha_n=v_n-v_{n+1}\},\\
\theta &=v_1-v_{n+1},\quad \delta={1\over 2}(nv_1+(n-2)v_2+\cdots -(n-2)v_n-nv_{n+1}),\\
W &=\{\text{all permutations of the $v_i$}\}.
\end{align*}
Since $2\theta=2(v_1-v_{n+1})$ is the highest weight of $V$, $m(2\theta)=1$. Since $m(\mu)$ is invariant under the $W$-action (see \cite[Theorem 21.2]{Hum}), we have
\begin{equation}\label{117}
m(2(v_i-v_j))=1.
\end{equation}
Now we consider $m(2v_1-v_n-v_{n+1})$. By \eqref{Freudenthal}, we have
\begin{align*}
&\big((2\theta+\delta, 2\theta+\delta)-(2v_1-v_n-v_{n+1}+\delta, 2v_1-v_n-v_{n+1}+\delta)\big)
m(2v_1-v_n-v_{n+1})\\
=&2m(2\theta)(2\theta, v_{n}-v_{n+1}).
\end{align*}
One can check that
\begin{align*}
&(2\theta, v_{n}-v_{n+1})=2,\\
&(2\theta+\delta, 2\theta+\delta)-(2v_1-v_n-v_{n+1}+\delta, 2v_1-v_n-v_{n+1}+\delta)=4.
\end{align*}
Therefore
\begin{equation*}
m(2v_1-v_n-v_{n+1})=1.
\end{equation*}
By the $W$-invariance of $m(\mu)$, and $m(\mu)=m(-\mu)$, we have
\begin{equation}\label{118}
m(\pm(2v_i-v_j-v_k))=1.
\end{equation}
Now we consider $m(v_1+v_2-v_{n}-v_{n+1})$. By \eqref{Freudenthal}, we have
\begin{align*}
&\big(
(2\theta+\delta, 2\theta+\delta)-(v_1+v_2-v_{n}-v_{n+1}
+\delta, v_1+v_2-v_{n}-v_{n+1}+\delta)\big)\\
&\cdot m(v_1+v_2-v_{n}-v_{n+1})\\
=&2\big(m(2v_1-v_{n}-v_{n+1})(2v_1-v_{n}-v_{n+1} , v_1-v_2)\\
&+m(v_1+v_2-2v_{n+1})(v_1+v_2-2v_{n+1}, v_n-v_{n+1})\big).
\end{align*}
By \eqref{118}, we have $m(2v_1-v_{n}-v_{n+1})=m(v_1+v_2-2v_{n+1})=1$. Furthermore,
\begin{align*}
&(2v_1-v_{n}-v_{n+1} , v_1-v_2)=(v_1+v_2-2v_{n+1}, v_n-v_{n+1})=2,\\
&(2\theta+\delta, 2\theta+\delta)-(v_1+v_2-v_{n}-v_{n+1}+\delta, v_1+v_2-v_{n}-v_{n+1}+\delta)=8.
\end{align*}
Thus $m(v_1+v_2-v_{n}-v_{n+1})=1$, and by the $W$-invariance of $m(\mu)$ we have
\begin{equation}\label{119}
m(v_i+v_j-v_k-v_l)=1.
\end{equation}
Now we calculate $m(\theta)$. By \eqref{Freudenthal},
\begin{equation*}
\big((2\theta+\delta, 2\theta+\delta)-(\theta+\delta, \theta+\delta)\big)
m(\theta)=2\sum_{\alpha\in \Phi^+} m(\theta+\alpha)(\theta+\alpha, \alpha).
\end{equation*}
By \eqref{117}, \eqref{118} and \eqref{119}, $m(\theta+\alpha)=1$. Furthermore, we have
\begin{align*}
&(2\theta+\delta, 2\theta+\delta)-(\theta+\delta, \theta+\delta)=6+2n,\\
& \sum_{\alpha\in \Phi^+}(\theta+\alpha, \alpha)=(\theta, 2\delta)+2|\Phi^+|=2n+2\cdot {n(n+1)\over 2} =n(n+3).
\end{align*}
Then $m(\theta)=n$ and by the $W$-invariance of $m(\mu)$,
\begin{equation}\label{120}
m(v_i-v_j)=n.
\end{equation}
Finally, by \eqref{Freudenthal},
\begin{align}\label{A-m(0)}
\big((2\theta+\delta, 2\theta+\delta)-(\delta, \delta)\big)m(0)
=2\sum_{\alpha\in \Phi^+} \big(m(\alpha)(\alpha , \alpha)+m(2\alpha)(2\alpha, \alpha)\big).
\end{align}
By \eqref{117} and \eqref{120}, we have $m(\alpha)=n$ and $m(2\alpha)=1$. Furthermore,
$$(2\theta+\delta, 2\theta+\delta)-(\delta, \delta)=4n+8.$$
Thus \eqref{A-m(0)} is equivalent to
$$(4n+8)m(0)=2(2n+4)|\Phi^+|=4(n+2)\cdot {n(n+1)\over 2},$$
which induces
\begin{equation*}
m(0)={n(n+1)\over 2}.
\end{equation*}
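(As a sanity check, for $n=2$, i.e.\ $\mathfrak g=\mathfrak{sl}_3$, this gives $m(0)=3=\frac{8-2}{2}$, as predicted by \eqref{dimV0}.)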
\noindent\textbf{$D_n$ case:} The data of $D_n$ is as follows:
\begin{align*}
Q&=\left\{\sum_{i=1}^{n}k_iv_i|k_i\in {\mathbb Z}, \sum_i k_i\in 2{\mathbb Z}\right\},\\
\Delta &=\{\pm v_i \pm v_j\},\quad \Delta^+=\{v_i \pm v_j| i<j\}\\
\Phi^+&=\{\alpha_1=v_1-v_2, \alpha_2=v_2-v_3, \cdots , \alpha_{n-1}=v_{n-1}-v_{n}, \alpha_n=v_{n-1}+v_n\},\\
\theta &=v_1+v_{2}, \quad \delta=(n-1)v_1+(n-2)v_2+\cdots+v_{n-1}, \\
W &=\{\text{all permutations and even number of sign changes of the $v_i$}\}.
\end{align*}
The argument is similar to the $A_n$ case, so we just list the results and omit the details:
\begin{align*}
&m(2\theta)=m(2(v_1+v_2))=m(\pm2(v_i\pm v_j))=1,\\
&m(2v_1+v_2+v_3)=m(\pm 2v_i\pm v_j \pm v_k)=1,\\
&m(v_1+v_2+v_3+v_4)=m(\pm v_i\pm v_j\pm v_k\pm v_l)=2,\\
&m(2v_1)=m(\pm v_i)=n-2,\\
&m(v_1+v_2)=m(\pm v_i\pm v_j)=2n-3,\\
&m(0)=n(n-1).
\end{align*}
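(Again, as a sanity check, for $D_4$ this gives $m(0)=12=\frac{28-4}{2}$.)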
\noindent\textbf{Type E case:} By \cite[\S4]{BM}, we know that
for $E_6$, $m(0)=36$, for $E_7$, $m(0)=63$ and for $E_8$, $m(0)=120$.
They are exactly $\displaystyle\frac{\dim{\mathfrak g}-\dim{\mathfrak h}}{2}$ in these cases.
In summary, in all the ADE cases, we have $\displaystyle m(0)=\frac{\dim{\mathfrak g}-\dim{\mathfrak h}}{2}$.
\end{proof}
Combining \eqref{dimension of I}, \eqref{dimSymh} and \eqref{dimV0}, we have
the following.
\begin{proposition}\label{Prop:I^2_0}
If ${\mathfrak g}$ is of ADE type, then
\begin{equation}\label{dimI20}
\dim I^2_0=\dim({\rm Sym}^2{\mathfrak h}).
\end{equation}
\end{proposition}
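For example, for $\mathfrak g=\mathfrak{sl}_3$ one has $\dim({\rm Sym}^2{\mathfrak g})_0=3+3=6$ and $\dim V(2\theta)_0=3$, so that $\dim I^2_0=3=\dim({\rm Sym}^2{\mathfrak h})$, in agreement with \eqref{dimI20}.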
\begin{proposition}\label{vectorspace of B-algebra}
If ${\mathfrak g}$ is of ADE type, then as ${\mathbb C}[\hbar]$-vector spaces
$$B({\mathscr A})\simeq{\mathfrak h}.$$
\end{proposition}
\begin{proof}
Suppose $\{h_1, h_2, \cdots, h_n\}$ is a basis of ${\mathfrak h}$. By \eqref{dimJU2} and \eqref{dimI20},
$d=\dim(J_0\cap \mathcal{U}^2)$ equals the number of monomials $h_ih_j$ with $1\leq i\leq j\leq n$.
Furthermore, in \S\ref{subsection type A}-\ref{subsection En case},
we will explicitly find $d$ linearly independent elements in $J_0\cap \mathcal{U}^2$.
Thus the elements we find form a basis of $J_0\cap \mathcal{U}^2$; moreover, they are of the form
$$h_ih_j+\sum_\mu {\mathfrak g}_\mu {\mathfrak g}_{-\mu}+\textup{linear term},\quad 1\leq i\leq j\leq n.$$
The proposition follows from \eqref{U/J} and Lemma \ref{J_h generator}.
\end{proof}
\subsection{$B$-algebra in the type A case}\label{subsection type A}
In this and the following two subsections, we
find the $B$-algebra of the
quantization of the minimal nilpotent
orbits in Lie algebras of type A, D and E respectively.
In this subsection, we study the type A Lie algebra case.
As we recalled in the previous section, the Joseph ideals
$J^z$ for type A Lie algebras are parameterized
by $z\in\mathbb C$.
We shall view $z$ as a formal parameter, and
accordingly, for type A Lie algebras,
the universal enveloping algebra $\mathcal U(\mathfrak g)$
is defined over the base ring $\mathbb C[z]$.
With this convention, we denote the whole family of Joseph
ideals by $J$. Therefore the filtered quantization
of the minimal nilpotent orbit $\overline{\mathcal O}_{min}$
is given by $\mathcal U(\mathfrak g)/J$, which
is an algebra over $\mathbb C[z]$,
and the associated graded quantization
$Rees\big(\mathcal{U}(\mathfrak{g})/J\big)$ is an algebra over $\mathbb C[z,\hbar]$.
It is easy to see that
$B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)
=B\big(Rees(\mathcal{U})\big)/B\big(Rees(J)\big)$.
Now we find the relations in
the two-sided ideal $B\big(Rees(J)\big)$.
Since (\ref{vtheta+alpha2}) is an element in $J$ and $J$ is a two-sided ideal,
for any $X\in \mathcal U^2(\mathfrak{g})$
the element
$ad_X(Y_\theta Y_{2\cdots n-1}-Y_{1\cdots n-1}Y_{2\cdots n})$, where $ad_X:=[X,-]$,
is still an element in $J$. In the following we use this approach
to obtain elements in $B\big(Rees(J)\big)$, i.e., relations in $B\big(Rees(\mathcal{U}/J)\big)$.
{\it Step 1}. We first prove that the relation $h_ih_j=0$ holds
in the $B$-algebra $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$ for $|i-j|\geq 2$.
By applying $ad_{X_2}\circ ad_{X_{3\cdots n}}$ on (\ref{vtheta+alpha2}),
we obtain
\begin{equation}\label{formula:Ytheta}
Y_\theta h_2-Y_1Y_{2\cdots n}+Y_{12}Y_{3\cdots n}.
\end{equation}
Then we have two methods to obtain the corresponding ``weight 0" vector:
\begin{enumerate}
\item[(1)] by applying $ad_{X_\theta}$ on \eqref{formula:Ytheta}, we obtain
\begin{eqnarray}\label{0.1}
(h_1+\cdots+h_n)h_2+X_{2\cdots n}Y_{2\cdots n}-Y_1X_1-X_{3\cdots n}Y_{3\cdots n}+Y_{12}X_{12} \in J,
\end{eqnarray}
which reduces to
\begin{eqnarray}\label{1.1}
(h_1+\cdots+h_n)h_2-h_2\hbar \in B\big(Rees(J)\big).
\end{eqnarray}
\item[(2)] by applying $ad_{X_{1\cdots n-1}}\circ ad_{X_n}$
on \eqref{formula:Ytheta}, we obtain
\begin{eqnarray}\label{0.2}
(h_1+\cdots+h_{n-1})h_2+X_{2\cdots n-1}Y_{2\cdots n-1}-Y_1X_1+Y_{12}X_{12}-X_{3\cdots n-1}Y_{3\cdots n-1} \in J,
\end{eqnarray}
which reduces to
\begin{eqnarray}\label{1.2}
(h_1+\cdots +h_{n-1})h_2-h_2\hbar \in B\big(Rees(J)\big).
\end{eqnarray}
\end{enumerate}
By subtracting \eqref{1.2} from \eqref{1.1},
we obtain $h_2h_n=0$ in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
In general, for $2<i<n-1$, by applying $ad_{ X_i}
\circ ad_{ X_{2\cdots i-1}}\circ ad_{ X_{i+1\cdots n-1}}$ on \eqref{formula:Ytheta}, we obtain
\begin{eqnarray}\label{1.2.1}
-Y_\theta h_i-Y_{1,\cdots,i-1}Y_{i,\cdots,n}+Y_{1,\cdots,i}Y_{i+1,\cdots,n} \in J,
\end{eqnarray}
then applying $ad_{ X_\theta}$ to the above element, we obtain
\begin{eqnarray}\label{1.3}
-(h_1+\cdots+h_n)h_i+h_i\hbar=0
\end{eqnarray} in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
There is another choice of the action, i.e., by applying
$ad_{X_{1\cdots n-1}}\circ ad_{X_n}$
on \eqref{1.2.1}, and we obtain
\begin{eqnarray}\label{1.4}
-(h_1+\cdots+h_{n-1})h_i+h_i\hbar=0
\end{eqnarray}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
Subtracting \eqref{1.4} from \eqref{1.3}, we obtain
\[
h_ih_n=0.
\]
Similarly, applying $ad_{ X_{1\cdots j-1}}\circ ad_{ X_{j,\cdots,n}}$
on (\ref{vtheta+alpha2}), we have
\begin{align}\label{hihj1}
(h_1+\cdots+h_{j-1})h_i=&-X_{i\cdots j-1}Y_{i\cdots j-1}+Y_{1\cdots i-1}X_{1\cdots i-1}\notag\\
&-Y_{1\cdots i}X_{1\cdots i}+X_{i+1\cdots j-1}Y_{i+1\cdots j-1}
\end{align}
in $Rees\big(\mathcal{U}(\mathfrak{g})/J\big)$.
And
applying $ ad_{ X_{1\cdots j}}\circ ad_{ X_{j+1,\cdots,n}}$ on (\ref{vtheta+alpha2}), we have
\begin{align}\label{hihj2}
(h_1+\cdots+h_j)h_i=&-X_{i\cdots j}Y_{i\cdots j}+Y_{1\cdots i-1}X_{1\cdots i-1}\notag\\
&-Y_{1\cdots i}X_{1\cdots i}+X_{i+1\cdots j}Y_{i+1\cdots j}
\end{align}
in $Rees\big(\mathcal{U}(\mathfrak{g})/J\big)$. Subtracting \eqref{hihj1} from \eqref{hihj2},
we have
\begin{eqnarray}\label{hihj result}
h_ih_j=0
\end{eqnarray}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$, for $|i-j|\geq 2$ and $1<i<j<n$.
As a by-product,
by plugging \eqref{hihj result} in \eqref{1.3} or \eqref{1.4},
we then have
\begin{eqnarray}\label{Vtheta+alpha}
h_k^2=-h_{k-1}h_k-h_kh_{k+1}+h_k\hbar,
\end{eqnarray}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
{\it Step 2}. Next, we deal with (\ref{vinU}).
Since $J$ is a two-sided ideal, by applying
$ad_{ X_n}\circ ad_{ X_{1\cdots n-1}}$ on \eqref{vinU}
and then using \eqref{hihj result}, we obtain
\begin{eqnarray}\label{2.1}
(1-n)h_n^2-(2n-2)h_{n-1}h_n+(n+1)(n-1)h_n\hbar+(n-1)zh_n\hbar \in B(Rees(J)).
\end{eqnarray}
Similarly, by applying $ad_{ X_{n-1,n}}\circ ad_{ X_{1\cdots n-2}}$ on \eqref{vinU}, we obtain
\begin{align}\label{2.0}
&(1-n)h_n^2+(3-n)h_{n-1}^2-(2n-4)(h_{n-1}h_n+h_{n-2}h_{n-1})\notag\\
&+(n+1)(n-2)h_{n-1}\hbar+(n+1)(n-1)h_n\hbar+(n-1)z(h_{n-1}+h_n)\hbar.
\end{align}
Subtracting \eqref{2.1} from \eqref{2.0}, we obtain
\begin{eqnarray}\label{2.2}
(3-n)h_{n-1}^2+2h_nh_{n-1}-(2n-4)h_{n-2}h_{n-1}+\big((n+1)(n-2)h_{n-1}+(n-1)zh_{n-1}\big)\hbar
\end{eqnarray}
in $B(Rees(J))$. \eqref{2.1} and \eqref{2.2} being in $B(Rees(J))$
implies that we have the relations
\begin{align*}
(1-n)h_n^2=&(2n-2)h_{n-1}h_n-(n+1)(n-1)h_n\hbar-(n-1)zh_n\hbar,\\
(3-n)h_{n-1}^2=&-2h_nh_{n-1}+(2n-4)h_{n-2}h_{n-1}-\big((n+1)(n-2)h_{n-1}+(n-1)zh_{n-1}\big)\hbar
\end{align*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
In general, by applying $ad_{ X_{n-k\cdots n}}\circ ad_{ X_{1\cdots n-k-1}}$ and
$ad_{ X_{n-k-1\cdots n}}\circ ad_{ X_{1\cdots n-k-2}}$ on \eqref{vinU}
respectively and then subtracting them, we obtain the relation
\begin{eqnarray}\label{Vtheta result}
-(2k-1-n)h_k^2=2(k-n)h_{k+1}h_k+(2k-2)h_kh_{k-1}\notag\\-(n+1)(k-1)h_k\hbar -(n-1)zh_k\hbar
\end{eqnarray}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$, for $k=1,\cdots, n$.
Plugging (\ref{Vtheta+alpha}) in (\ref{Vtheta result}), we have
\begin{eqnarray}\label{replace h_k}
h_kh_{k+1}=h_{k-1}h_k-zh_k\hbar-kh_k\hbar
\end{eqnarray}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$,
for $k=2,\cdots, n-1$.
\begin{proposition}
The generator \eqref{vtheta0An} of $J$, which spans the subrepresentation $V(0)$ in Proposition \ref{structure of J}, reduces to
\begin{eqnarray}\label{Casimir}
\sum_{k=1}^{n-1}\dfrac{2k(n-k)}{n+1}h_kh_{k+1}+\sum_{k=1}^n
\dfrac{k(n-k+1)}{n+1}h_k^2-\big(nh_1+2(n-1)h_2+\cdots \notag\\
+k(n-k+1)h_k+\cdots+nh_n\big)\hbar-nz\left(\dfrac{z}{n+1}+1\right)\hbar^2
\end{eqnarray}
in $B\big(Rees(J)\big)$.
\end{proposition}
\begin{proof}
By plugging \eqref{hihj result} in \eqref{vtheta0An} we get the result.
\end{proof}
Summarizing the above results, we obtain the following relations.
\begin{proposition}Let $J$ be the Joseph ideal in the $A_n$ Lie algebra.
Let $h_1,\cdots, h_n$ be the generators of $B\big(Rees(\mathcal U(\mathfrak g)/J)\big)$.
Then we have
\begin{align}\label{hkhk+1 result}
h_kh_{k+1}=&-\dfrac{z+n+1}{n+1}(h_1+2h_2+\cdots+kh_k)\hbar\notag\\
&-\dfrac{z}{n+1}\big((n-k)h_{k+1}+\cdots+2h_{n-1}+h_n\big)
\hbar+z\left(\dfrac{z}{n+1}+1\right)\hbar^2
\end{align}
for $k=1,\cdots, n-1$, and
\begin{align}\label{hk2 result}
h_k^2=&\dfrac{2(z+n+1)}{n+1}\big(h_1+2h_2+\cdots+(k-1)h_{k-1}\big)\hbar\notag\\
&+\left((n+2-k)\dfrac{z}{n+1}+(k+1)\dfrac{z+n+1}{n+1}\right)h_k\hbar\notag
\\&+\dfrac{2z}{n+1}\big((n-k)h_{k+1}+\cdots+2h_{n-1}+h_n\big)\hbar
-2z\left(\dfrac{z}{n+1}+1\right)\hbar^2
\end{align}
for $k=1,\cdots, n$.
\end{proposition}
\begin{proof}
Plugging (\ref{Vtheta result}) and (\ref{hihj result}) in (\ref{Casimir}), we have
\begin{align}\label{hkhk+1}
&\dfrac{2(n-1)}{(n-3)(n-1)}h_1h_2+\dfrac{4(n-2)}{(n-5)(n-3)}h_2h_3+\cdots
+\dfrac{2k(n-k)}{(n-1-2k)(n+1-2k)}h_kh_{k+1}\notag\\
&+\cdots+\dfrac{2(n-1)}{(1-n)(3-n)}h_{n-1}h_n+\dfrac{k(n-k)(n-k+1)}{2k-1-n}h_k\hbar
+\dfrac{k(n-k+1)(n-1)}{(2k-1-n)(n+1)}zh_k\hbar\notag\\
&-\dfrac{nz(z+n+1)}{n+1}\hbar^2.
\end{align}
Then, plugging \eqref{replace h_k} successively in (\ref{hkhk+1}), we can replace
any $h_ih_{i+1}$ for $i=1,\cdots,n-1$ by a fixed
$h_kh_{k+1}$, and then obtain (\ref{hkhk+1 result}).
Plugging \eqref{Vtheta result} in \eqref{hkhk+1 result}, we obtain \eqref{hk2 result}.
\end{proof}
\subsection{$B$-algebra in the type D case}\label{subsection Dn case}
In this subsection, we compute the $B$-algebra of the quantization
of the minimal orbits for type D Lie algebras.
The task is now to find the generators of $B\big(Rees(J)\big)$.
{\it Step 1}. We first check that
$h_ih_j=0$ and $h_{i-1}h_i+h_i^2+h_ih_{i+1}-h_i\hbar=0$ for $1\leq i<j\leq n-2$.
Similarly to \S\ref{subsection type A}, by applying
$ad_{ X_{23\overline{4}\cdots\overline{n-2},n-1,n }}
\circ ad_{ X_{12\overline{3}\cdots\overline{n-2},n-1,n}}$
on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.3.1}
(h_2+h_3+2h_4+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_1+h_2+2h_3+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\
-h_2\hbar-2h_3\hbar-4h_4\hbar-\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar \in B(Rees(J)).
\end{eqnarray}
Analogously, by applying $ad_{ X_{2\overline{3}\cdots\overline{n-2},n-1,n }}\circ
ad_{ X_{123\overline{4}\cdots\overline{n-2},n-1,n}}$ on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.3.2}
(h_1+h_2+h_3+2h_4+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_2+2h_3+
\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\-h_2\hbar-2h_3\hbar-4h_4\hbar-
\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar \in B(Rees(J)),
\end{eqnarray}
and by applying $ad_{ X_{3\overline{4}\cdots\overline{n-2},n-1,n }}
\circ ad_{ X_\theta}$ on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.3.3}
(h_3+2h_4+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_1+2h_2+2h_3+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\-2h_3\hbar-4h_4\hbar-\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar \in B(Rees(J)).
\end{eqnarray}
By subtracting \eqref{3.3.1} from \eqref{3.3.2}, we obtain
\begin{eqnarray*}
h_1h_3=0
\end{eqnarray*}
and by subtracting \eqref{3.3.1} from \eqref{3.3.3}, we obtain
\begin{eqnarray*}
h_1h_2+h_2^2+h_2h_3-h_2\hbar=0,
\end{eqnarray*}
both in $B\big(Rees(\mathcal{U}/J)\big)$.
By applying $ad_{ X_{1234}}$ on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.4}
Y_{23\overline{4}\cdots\overline{n-2},n-1,n}Y_{34\overline{5}\cdots\overline{n-2},n-1,n}-Y_{4\overline{5}\cdots\overline{n-2},n-1,n}Y_{2\overline{3}\cdots\overline{n-2},n-1,n}\notag\\-Y_{3\overline{4}\cdots\overline{n-2},n-1,n}Y_{234\overline{5}\cdots\overline{n-2},n-1,n} \in J.
\end{eqnarray}
Similarly to the above procedure obtaining \eqref{3.3.1}-\eqref{3.3.3}, we obtain the following
three relations in $B\big(Rees(\mathcal{U}/J)\big)$ from (\ref{3.4}),
\begin{eqnarray}\label{3.4.1}
&&(h_2+h_3+2h_4+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_3+h_4+2h_5+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\
&&-h_3\hbar-2h_4\hbar-4h_5\hbar-\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar,\\
&&(h_2+2h_3+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_4+h_5+2h_6+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\
&&-2h_4\hbar-4h_5\hbar-\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar,\label{3.4.2}
\end{eqnarray}
and
\begin{eqnarray}\label{3.4.3}
&&(h_3+2h_4+\cdots+2h_{n-2}+h_{n-1}+h_n)(h_2+h_3+h_4+2h_5+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\&&-h_3\hbar-2h_4\hbar-4h_5\hbar-\cdots-4h_{n-2}\hbar-2h_{n-1}\hbar-2h_n\hbar.
\end{eqnarray}
By subtracting \eqref{3.4.1} from \eqref{3.4.3}, we obtain
\begin{eqnarray*}
h_2h_4=0
\end{eqnarray*}
and by subtracting \eqref{3.4.1} from \eqref{3.4.2}
we obtain
\begin{eqnarray*}
h_2h_3+h_3h_4+h_3^2-h_3\hbar=0,
\end{eqnarray*}
both in $B\big(Rees(\mathcal{U}/J)\big)$.
Now, for $5\leq i\leq n$, by applying $ad_{ X_{1\cdots i}}$ on \eqref{WofDn}, we obtain
\begin{eqnarray*}
h_2h_i=0 \quad\mbox{and}\quad
h_3h_i=0
\end{eqnarray*}
in $B\big(Rees(\mathcal{U}/J)\big)$.
Next, by applying $ad_{ X_{234}}\circ ad_{ X_{12345}}$ on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.5}
Y_{4\overline{5}\cdots\overline{n-2},n-1,n}Y_{345\overline{6}\cdots\overline{n-2},n-1,n}-Y_{45\overline{6}\cdots\overline{n-2},n-1,n}Y_{34\overline{5}\cdots\overline{n-2},n-1,n}\notag\\-Y_{3\overline{4}\cdots\overline{n-2},n-1,n}Y_{5\overline{6}\cdots\overline{n-2},n-1,n} \in J.
\end{eqnarray}
Using the same method as before, we obtain
\begin{eqnarray*}
h_3h_4+h_4^2+h_4h_5-h_4\hbar=0 \quad\mbox{and}\quad
h_4h_i=0, \text{ for } i\geq 6.
\end{eqnarray*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
In general, for $k\leq n-4$ by appropriate iterated actions of $ad_{ X_{1\cdots k}},\ ad_{ X_{2\cdots k-1 }},\ ad_{ X_{3\cdots k-2 }}$, $\cdots$ on (\ref{WofDn}), we obtain
\begin{eqnarray}\label{3.6.1}\label{hihjDn}
h_ih_j=0, \text{ for } |i-j|\geq 2,i\leq n-3<j,
\end{eqnarray}
and
\begin{eqnarray}\label{3.6.2}
h_{i-1}h_i+h_i^2+h_ih_{i+1}-h_i\hbar=0, \text{ for } 2\leq i\leq n-3
\end{eqnarray}
in $B\big(Rees(\mathcal{U}/J)\big)$.
{\it Step 2}. Now, let us find the relations for
$h_{n-1}h_n$, $h_{n-2}h_n$ and $h_nh_{n-1}$.
By applying $ad_{ X_{4\cdots n-3}}\circ
ad_{ X_{3\cdots n-5}}\circ ad_{ X_{2\cdots n-5 }}\circ ad_{ X_{1\cdots n}}$
on \eqref{WofDn}, we obtain
\begin{eqnarray}\label{3.6.3.0}
Y_{n-4,n-3,\overline{n-2},n-1,n}Y_{n-3,n-2}-Y_{n-2}
Y_{n-4,\overline{n-3},\overline{n-2},n-1,n}-Y_{n-3,\overline{n-2},n-1,n}Y_{n-4,n-3,n-2}
\end{eqnarray}
in $B\big(Rees(J)\big)$.
Then by applying $ad_{X_{n-2}}\circ ad_{X_{n-4,\overline{n-3},\overline{n-2},n-1,n}}$
on the above element, we obtain
\begin{eqnarray}\label{3.6.3}
(h_{n-4}+2h_{n-3}+2h_{n-2}+h_{n-1}+h_n )h_{n-2}-2h_{n-2}\hbar=0
\end{eqnarray}
in $B\big(Rees(\mathcal{U}/J)\big)$.
Similarly, by $ad_{ X_{4\cdots n-3}}\circ ad_{ X_{3\cdots n-5}}\circ ad_{ X_{2\cdots n-5 }}\circ ad_{ X_{1\cdots,n-2, n}}$ on \eqref{WofDn}, then by applying $ad_{X_{n-2,n-1}}\circ ad_{X_{n-4,\overline{n-3},\overline{n-2},n-1,n}}$, we obtain
\begin{eqnarray}\label{3.6.4}
(h_{n-4}+2h_{n-3}+2h_{n-2}+h_{n-1}+h_n )(h_{n-2}+h_{n-1})-2(h_{n-2}+h_{n-1})\hbar=0.
\end{eqnarray}
And by applying $ad_{ X_{4\cdots n-3}}\circ ad_{ X_{3\cdots n-5}}\circ ad_{ X_{2\cdots n-5 }}\circ ad_{ X_{1\cdots n-2,n-1}} $ on \eqref{WofDn}, then by applying $ad_{X_{n-2,n}}\circ ad_{X_{n-4,\overline{n-3},\overline{n-2},n-1,n}}$, we obtain
\begin{eqnarray}\label{3.6.5}
(h_{n-4}+2h_{n-3}+2h_{n-2}+h_{n-1}+h_n )(h_{n-2}+h_n)-2(h_{n-2}+h_n)\hbar=0.
\end{eqnarray}
By subtracting \eqref{3.6.4} from \eqref{3.6.3} and by subtracting \eqref{3.6.5} from \eqref{3.6.3}, we obtain
\begin{eqnarray*}
h_{n-1}^2+2h_{n-2}h_{n-1}-2h_{n-1}\hbar&=&0,
\\h_n^2+2h_{n-2}h_n-2h_n\hbar&=&0
\end{eqnarray*}
in $B\big(Rees(\mathcal{U}/J)\big)$. From (\ref{3.6.3}), we directly obtain
\begin{eqnarray}\label{3.6.3.1}
2h_{n-2}^2+2h_{n-3}h_{n-2}+h_{n-2}h_{n-1}+h_{n-2}h_n-2h_{n-2}\hbar=0
\end{eqnarray}
in $B\big(Rees(\mathcal{U}/J)\big)$.
We continue to consider the action $ad_{ X_{n-2,n}}$ on (\ref{3.6.3.0}), and obtain
\begin{eqnarray}\label{3.6.6}
h_{n-4}h_{n-3}+h_{n-3}^2+2h_{n-3}h_{n-2}+h_{n-2}^2+h_{n-2}h_{n-1}-h_{n-3}\hbar-h_{n-2}\hbar=0
\end{eqnarray}
in $B\big(Rees(\mathcal{U}/J)\big)$. Combining (\ref{3.6.6}) and (\ref{3.6.2}), we obtain
\begin{eqnarray}\label{3.6.7}
h_{n-3}h_{n-2}+h_{n-2}^2+h_{n-2}h_{n-1}-h_{n-2}\hbar=0.
\end{eqnarray}
{\it Step 3}. Now, let us consider the lowest weight vector (\ref{vtheta+alpha1}) in the
subrepresentation $V(\theta+\alpha_1)$. By applying
$ad_{ X_1}\circ ad_{ X_{\theta}}$ on \eqref{vtheta+alpha1}, we obtain
\begin{eqnarray}\label{3.0}
h_1(h_1+2h_2+\cdots+2h_{n-2}+h_{n-1}+h_n)-(n-2)h_1\hbar \in B(Rees(J)).
\end{eqnarray}
And by applying $ad_{ X_{12}}\circ ad_{ X_{12\overline{3}\cdots\overline{n-2},n-1,n}}$
on \eqref{vtheta+alpha1} we obtain
\begin{eqnarray}\label{3.1}
&&(h_1+h_2)(h_1+h_2+2h_3+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag\\
&&-(n-2)h_1\hbar-(n-3)h_2\hbar
\in B(Rees(J)).
\end{eqnarray}
Similarly, for any integer $i\leq n-3$, by applying $ad_{ X_{1\cdots i }}\circ ad_{ X_{1\cdots i \overline{i+1} \cdots\overline{n-2},n-1,n}}$ on \eqref{vtheta+alpha1}, we obtain a family of elements
\begin{eqnarray}\label{3.2}
&&(h_1+h_2+\cdots+h_i)(h_1+\cdots+h_i+2h_{i+1}+\cdots+2h_{n-2}+h_{n-1}+h_n)\notag \\
&&-(n-2)h_1\hbar-(n-3)h_2\hbar-\cdots-(n-i-1)h_i\hbar\in B(Rees(J)).
\end{eqnarray}
By subtracting \eqref{3.0} from \eqref{3.1}, we obtain
\begin{eqnarray*}
h_1^2+2h_1h_2+2h_1h_3+\cdots+2h_1h_{n-2}+h_1h_{n-1}+h_1h_n-(n-2)h_1\hbar \in B(Rees(J)).
\end{eqnarray*}
Similarly, taking $i=k$ and $i=k-1$ in (\ref{3.2}) and subtracting the latter from the former,
as $k$ varies we obtain a family of elements in $B(Rees(J))$:
\begin{eqnarray}
&&h_2^2+2h_2h_3+2h_2h_4+\cdots+2h_2h_{n-2}+h_2h_{n-1}+h_2h_n-(n-3)h_2\hbar,\notag\\
&& \vdots \notag\\
&& h_{n-4}^2+2h_{n-4}h_{n-3}+2h_{n-4}h_{n-2}+h_{n-4}h_{n-1}+h_{n-4}h_n-3h_{n-4}\hbar,\notag\\
&& h_{n-3}^2+2h_{n-3}h_{n-2}+h_{n-3}h_{n-1}+h_{n-3}h_n-3h_{n-3}\hbar,\notag\\
&&h_{n-2}^2+h_{n-2}h_{n-1}+h_{n-2}h_{n}-h_{n-2}\hbar,\notag\\
&&h_{n-1}h_n=0.
\end{eqnarray}
Then by plugging (\ref{3.6.1}) in the above elements, we have
\begin{eqnarray}\label{Vtheta+alpha1result}
&&h_1^2+2h_1h_2-(n-2)h_1\hbar=0,\notag \\
&&h_2^2+2h_2h_3-(n-3)h_2\hbar=0,\notag \\
&&\vdots\notag \\
&&h_k^2+2h_kh_{k+1}-(n-k-1)h_k\hbar=0,\notag \\
&&\vdots\notag \\
&&h_{n-3}^2+2h_{n-3}h_{n-2}-2h_{n-3}\hbar=0,\\
&&h_{n-2}^2+h_{n-2}h_{n-1}+h_{n-2}h_{n}-h_{n-2}\hbar=0\label{h_n-2^2},\\
&&h_{n-1}h_n=0. \label{hn-1hnDn}
\end{eqnarray}
in $B\big(Rees(\mathcal{U}/J)\big)$.
{\it Step 4}. We now plug the above equations into $C-c_\lambda$ in
$B\big(Rees(J)\big)$. First, notice that (\ref{vtheta0Dn}) reduces to
\begin{align*}
&\sum_{i=1}^{n} h_ih_i^{\vee}-2\rho^{*}-c_\lambda\\
=&h_1(h_1+h_2+\cdots+h_{n-2}+\dfrac{1}{2}h_{n-1}+\dfrac{1}{2}h_n)\\
&+h_2\big(h_1+2(h_2+\cdots+h_{n-2})+h_{n-1}+h_n\big)\\
&+\cdots+h_{n-2}\left(
h_1+2h_2+\cdots+(n-3)h_{n-3}+(n-2)h_{n-2}+\dfrac{n-2}{2}(h_{n-1}+h_{n})\right)\\
&+\dfrac{1}{2}h_{n-1}\left(
h_1+2h_2+\cdots+(n-2)h_{n-2}+\dfrac{n}{2}h_{n-1}+\dfrac{n-2}{2}h_{n}\right)\\
&+\dfrac{1}{2} h_n\left(
h_1+2h_2+\cdots+(n-2)h_{n-2}+\dfrac{n-2}{2}h_{n-1}+\dfrac{n}{2}h_{n}\right)\\
&-\left(2(n-1)h_1+2(2n-3)h_2+\cdots+2(kn-\dfrac{k(k+1)}{2})h_k\right.\\
&\left.+\cdots+\dfrac{n(n-1)}{2}(h_{n-1}+h_n)\right)\hbar-c_\lambda
\end{align*}
in $B\big(Rees(J)\big)$.
By (\ref{3.6.1}), the above element gives the following identity
\begin{align}
&h_1^2+2h_1h_2+ 2(h_2^2+2h_2h_3)+\cdots+(n-3)(h_{n-3}^2+2h_{n-3}h_{n-2})\notag\\
&+(n-2)(h_{n-2}^2+h_{n-2}h_{n-1}+h_{n-2}h_n)+\dfrac{n}{4}(h_{n-1}^2+h_n^2)\notag\\
&-\left(2(n-1)h_1 +\cdots
+2(kn-\dfrac{k(k+1)}{2})h_k+\cdots+\dfrac{n(n-1)}{2}(h_{n-1}+h_n)\right)\hbar-c_\lambda\notag\\
=&0
\end{align}
and
\begin{align*}
&(n-2)h_1\hbar-2(n-1)h_1\hbar+2(n-3)h_2\hbar-2(2n-3)h_2\hbar+\cdots\\
&+k(n-k-1)h_k\hbar-2(kn-\dfrac{k(k+1)}{2})h_k\hbar+\cdots+2(n-3)h_{n-3}\hbar\\
&-[2(n-3)n-(n-3)(n-2)]h_{n-3}\hbar+(n-2)(h_{n-2}^2+h_{n-2}h_{n-1}+h_{n-2}h_n)\\
&+\dfrac{n}{4}(h_{n-1}^2+h_n^2)-\dfrac{n(n-1)}{2}(h_{n-1}+h_n)\hbar-(n^2-n-2)h_{n-2}\hbar-c_\lambda \\
=&-nh_1\hbar-2nh_2\hbar-\cdots-knh_k\hbar-\cdots-(n^2-3n)h_{n-3}\hbar\\
&+(n-2)(h_{n-2}^2+h_{n-2}h_{n-1}+h_{n-2}h_n)\\
&+\dfrac{n}{4}(h_{n-1}^2+h_n^2)-\dfrac{n(n-1)}{2}(h_{n-1}+h_n)\hbar
-(n^2-n-2)h_{n-2}\hbar-c_\lambda\\
=&-nh_1\hbar-2nh_2\hbar-\cdots-knh_k\hbar-\cdots-(n^2-3n)h_{n-3}\hbar+(n-2)h_{n-2}^2\\
&+(\dfrac{n}{2}-2)( h_{n-2}h_{n-1}+h_{n-2}h_n)
+(n-\dfrac{n^2}{2})(h_{n-1}+h_n)\hbar-(n^2-n-2)h_{n-2}\hbar-c_\lambda\\
=&0
\end{align*}
in $B\big(Rees(\mathcal{U}/J)\big)$.
By (\ref{h_n-2^2}), the above equation becomes
\begin{align*}
&-nh_1\hbar-2nh_2\hbar-\cdots-knh_k\hbar-\cdots-(n^2-3n)h_{n-3}\hbar+(n-2)h_{n-2}^2\\
&+(\dfrac{n}{2}-2)(h_{n-2}\hbar-h_{n-2}^2)+(n-\dfrac{n^2}{2})(h_{n-1}+h_n)\hbar
-(n^2-n-2)h_{n-2}\hbar-c_\lambda=0,
\end{align*}
and hence
\begin{align}\label{hn-2^2result}
h_{n-2}^2 =&2h_1\hbar +4h_2\hbar+\cdots+2kh_k\hbar+\cdots+ (2n-6)h_{n-3}\hbar+(2n-3)h_{n-2}\hbar\notag \\
&+(n-2)(h_{n-1}+h_n)\hbar-(2n-4)\hbar^2.
\end{align}
In general, plugging (\ref{hn-2^2result}) in (\ref{h_n-2^2}),
then combining it with (\ref{3.6.3.1}), we have
\begin{align}\label{hn-3hn-2result}
h_{n-3}h_{n-2}=&-h_1\hbar-\cdots-kh_k\hbar-(n-2)h_{n-2}\hbar-
\dfrac{n-2}{2} h_{n-1}\hbar\notag\\
&-\dfrac{n-2}{2}h_n\hbar+(n-2)\hbar^2.
\end{align}
Plugging (\ref{hn-2^2result}) and (\ref{hn-3hn-2result}) in (\ref{3.6.7}),
we obtain
\begin{align*}
h_{n-2}h_{n-1}=&-h_1\hbar-\cdots-kh_k\hbar-(n-2)h_{n-2}\hbar-\dfrac{n-2}{2} h_{n-1}\hbar\notag\\
&-\dfrac{n-2}{2}h_n\hbar+(n-2)\hbar^2.
\end{align*}
Similarly, by the equations in \eqref{3.6.2}, \eqref{Vtheta+alpha1result}
together with \eqref{hn-2^2result} and
\eqref{hn-3hn-2result}, we obtain
\begin{align}\label{h_k^2}
h_k^2=&2h_1\hbar+4h_2\hbar+\cdots+2(k-1)h_{k-1}\hbar+(n+k-1)h_k\hbar
+(2n-4)h_{k+1}\hbar+\cdots\notag\\
+(2n-4)h_{k+1}\hbar+\cdots\notag\\
&+(2n-4)h_{n-2}\hbar+(n-2)h_{n-1}\hbar+(n-2)h_{n}\hbar-(2n-4)\hbar^2,
\end{align}
and therefore
\begin{align}
h_k h_{k+1}=&-h_1\hbar-\cdots-kh_k\hbar-(n-2)h_{k+1}\hbar-\cdots-(n-2)h_{n-2}\hbar\notag\\
&-\dfrac{n-2}{2} h_{n-1}\hbar-\dfrac{n-2}{2}h_n\hbar+(n-2)\hbar^2 \label{hkhk+1Dn}
\end{align}
for $k\leq n-2$, and
\begin{align}
h_{n-2}h_n=&h_{n-2}h_{n-1}=-h_1\hbar-\cdots-kh_k\hbar-\cdots-(n-2)h_{n-2}\hbar\notag \\
&-\dfrac{n-2}{2}h_{n-1}\hbar-\dfrac{n-2}{2}h_n\hbar+(n-2)\hbar^2,\label{hn-2hn}\\
h_{n-1}^2=&2h_1\hbar+4h_2\hbar+\cdots+(2n-4)h_{n-2}\hbar+
nh_{n-1}\hbar+(n-2)h_n\hbar-(2n-4)\hbar^2,\label{hn-1^2}
\\
h_n^2=&2h_1\hbar+4h_2\hbar+\cdots+(2n-4)h_{n-2}\hbar
+(n-2)h_{n-1}\hbar+nh_n\hbar-(2n-4)\hbar^2.
\label{hn^2}
\end{align}
In summary, we have the following.
\begin{proposition}\label{prop:relinBDn}
Let $J$ be the Joseph ideal in the $D_n$ Lie algebra.
Let $h_1,\cdots, h_n$ be the generators of $B\big(Rees(\mathcal U(\mathfrak g)/J)\big)$.
Then $h_1,\cdots, h_n$ satisfy the relations \eqref{hihjDn}, \eqref{hn-1hnDn} and \eqref{h_k^2}--\eqref{hn^2}.
\end{proposition}
\subsection{$B$-algebra in the type E case}\label{subsection En case}
In the previous two subsections, we constructed in detail
the $B$-algebras
of the quantizations of the minimal orbits of type A and D.
For the type E case, the computations are analogous to the type D case,
and therefore we only sketch them.
\subsubsection{The Frenkel-Kac construction}
For Lie algebras of type A and D, we have a canonical basis consisting of matrices,
but for the type E Lie algebras we do not have such a basis.
Thus, for the relation $[X_\alpha,X_\beta]=\varepsilon(\alpha,\beta) X_{\alpha+\beta}$
in a Chevalley basis, where
$\alpha, \beta, \alpha+\beta\in\Delta$, we cannot determine the sign of
$\varepsilon(\alpha,\beta)$ via a canonical basis.
In \cite[\S7.8]{Kac}, Kac gave a construction, usually called
the {\it Frenkel-Kac construction} in the literature, to
determine the signs.
Let $\varepsilon:\Delta\times\Delta\rightarrow\pm 1$ be a function satisfying the
bimultiplicativity condition
\begin{align*}
\varepsilon(\alpha+\alpha{'},\beta)&=\varepsilon(\alpha,\beta)\varepsilon(\alpha{'},\beta),\\
\varepsilon(\alpha,\beta+\beta{'})&=\varepsilon(\alpha,\beta)\varepsilon(\alpha,\beta{'})
\end{align*}
for $\alpha,\alpha{'},\beta,\beta{'}\in\Delta$, and the condition
\begin{eqnarray*}
\varepsilon(\alpha,\alpha)=(-1)^{\frac{1}{2}(\alpha|\alpha)},
\end{eqnarray*}
where $(\alpha|\alpha)$ is the normalized invariant form on affine algebra
(see \cite[\S 6.2]{Kac}). We call such a function an {\it asymmetry function}.
An asymmetry function
$\varepsilon$ can be constructed as follows:
Choose an orientation of the Dynkin diagram, let
$$
\begin{array}{ll}
\varepsilon(\alpha_i,\alpha_j)=-1,&\text{ if } i=j \text{ or if }
\overset{\alpha_i}{\circ}\rightarrow \overset{\alpha_j}{\circ} \\
\varepsilon(\alpha_i,\alpha_j)=1,& \text{ otherwise, i.e.,}
\overset{\alpha_i}{\circ} \ \overset{\alpha_j}{\circ} \text{ or }
\overset{\alpha_i}{\circ}\leftarrow \overset{\alpha_j}{\circ}.
\end{array}
$$
and then extend $\varepsilon$ to $\Delta\times\Delta$ by bimultiplicativity.
For example, if we choose the orientation of Lie algebra of type $E_6$ as follows:
\begin{eqnarray*}
\overset{\alpha_1}{\circ}\leftarrow\overset{\alpha_3}{\circ}\leftarrow
&\overset{\alpha_4}{\circ}&\rightarrow\overset{\alpha_5}{\circ}\rightarrow
\overset{\alpha_6}{\circ} \\ &\downarrow &\\
&\overset{\alpha_2}{\circ},
\end{eqnarray*}
then we have
\begin{align*}
&\varepsilon(\alpha_1,\alpha_1)=-1, \quad\varepsilon(\alpha_1,\alpha_j)=1,\ j=2,3,4,5,6 \\
&\varepsilon(\alpha_2,\alpha_2)=-1, \quad \varepsilon(\alpha_2,\alpha_j)=1,\ j=1,3,4,5,6 \\
&\varepsilon(\alpha_3,\alpha_1)=\varepsilon(\alpha_3,\alpha_3)
=-1,\quad \varepsilon(\alpha_3,\alpha_j)=1, j=2,4,5,6\\
&\varepsilon(\alpha_4,\alpha_2)=\varepsilon(\alpha_4,\alpha_3)
=\varepsilon(\alpha_4,\alpha_4)=\varepsilon(\alpha_4,\alpha_5)=-1,
\quad\varepsilon(\alpha_4,\alpha_1)=\varepsilon(\alpha_4,\alpha_6)=1, \\
&\varepsilon(\alpha_5,\alpha_5)=\varepsilon(\alpha_5,\alpha_6)
=-1,\quad \varepsilon(\alpha_5,\alpha_j)=1, j=1,2,3,4,\\
&\varepsilon(\alpha_6,\alpha_6)=-1, \quad\varepsilon(\alpha_6,\alpha_j)=1, j=1,2,3,4,5.
\end{align*}
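For instance, by bimultiplicativity one finds
$\varepsilon(\alpha_1+\alpha_3,\alpha_3)=\varepsilon(\alpha_1,\alpha_3)\,\varepsilon(\alpha_3,\alpha_3)=1\cdot(-1)=-1$
for the orientation chosen above.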
With the Frenkel-Kac construction, we are able to write
the adjoint action of $\mathcal U(\mathfrak g)$ on itself explicitly,
where $\mathfrak g$ is a type E Lie algebra.
For example, by applying $ad_{X_{12345}}$ on the first term of \eqref{highestrootvectorE6}, we obtain
\begin{eqnarray*}
Y_{13456}\cdot \varepsilon(\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5,\theta)Y_{23\bar{4}56}=Y_{13456}Y_{23\bar{4}56}.
\end{eqnarray*}
In this way we can find all the elements of $B\big(Rees(J)\big)$,
i.e., the relations in $B\big(Rees(\mathcal{U}/J)\big)$, completely analogously to the type D case.
In the rest of this section, we only list the necessary results,
and leave the details
to the interested reader.
\subsubsection{The $E_6$ case}
Let us first consider the subrepresentation $V(0)$; the generator (\ref{CasimirE6}) reduces to
\begin{align*}
&\dfrac{1}{3} h_1(4h_1+3h_2+5h_3+6h_4+4h_5+2h_6)\\
&+h_2(h_1+2h_2+2h_3+3h_4+2h_5+h_6)\\
&+\dfrac{1}{3} h_3(5h_1+6h_2+10h_3+12h_4+8h_5+4h_6)\\
&+h_4(2h_1+3h_2+4h_3+6h_4+4h_5+2h_6)\\
&+\dfrac{1}{3} h_5(4h_1+6h_2+8h_3+12h_4+10h_5+5h_6)\\
&+\dfrac{1}{3}h_6(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6)\\
&-16h_1\hbar-22h_2\hbar-30h_3\hbar-42h_4\hbar-30h_5\hbar-16h_6\hbar+36\hbar^2
\end{align*}
in $B\big(Rees(J)\big)$.
By the same method as in the type D case, we obtain the relations in
$B\big(Rees(\mathcal{U}/J)\big)$ induced by $(\ref{highestrootvectorE6})$ as follows:
\begin{align*}
& h_1^2+2h_1h_3=3h_1\hbar,\quad h_6^2+2h_5h_6=3h_6\hbar,\\
& h_5^2+2h_4h_5=2h_5\hbar,\quad h_3^2+2h_3h_4=2h_3\hbar, \\
&h_3^2+h_3h_4+h_1h_3=h_3\hbar,\quad h_5^2+h_4h_5+h_5h_6=h_5\hbar,\\
& h_4^2+h_3h_4+h_4h_5=h_4\hbar,\quad h_4^2+h_2h_4+h_3h_4=h_4\hbar,\quad
h_4^2+h_2h_4+h_4h_5=h_4\hbar,\\
& h_1h_2=h_1h_4=h_1h_5=h_1h_6=0,\\
& h_2h_3=h_2h_5=h_2h_6=0,\quad h_3h_5=h_3h_6=0,\quad h_4h_6=0.
\end{align*}
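For instance, subtracting the relation $h_3^2+h_3h_4+h_1h_3=h_3\hbar$ from $h_3^2+2h_3h_4=2h_3\hbar$ gives
\begin{eqnarray*}
h_3h_4-h_1h_3=h_3\hbar,
\end{eqnarray*}
and combining such differences with the vanishing products and the reduced Casimir element above determines all the squares and mixed products, consistently with the expressions listed below.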
From these equations, we obtain the following.
\begin{proposition}\label{prop:relinBE_6}
Let $J$ be the Joseph ideal of the $E_6$ Lie algebra. Let
$h_1,\cdots, h_6$ be the generators of $B\big(Rees(\mathcal U(\mathfrak g)/J)\big)$.
Then $h_1,\cdots, h_6$ satisfy the following relations:
\begin{align*}
h_4^2&=4h_1\hbar+6h_2\hbar+8h_3\hbar+13h_4\hbar+8h_5\hbar+4h_6\hbar-12\hbar^2,\\
h_3^2&=4h_1\hbar+6h_2\hbar+8h_3\hbar+13h_4\hbar+8h_5\hbar+4h_6\hbar-12\hbar^2,\\
h_1^2&=7h_1\hbar+6h_2\hbar+10h_3\hbar+12h_4\hbar+8h_5\hbar+4h_6\hbar-12\hbar^2,\\
h_2^2&=4h_1\hbar+6h_2\hbar+10h_3\hbar+12h_4\hbar+8h_5\hbar+4h_6\hbar-12\hbar^2,\\
h_5^2&=4h_1\hbar+6h_2\hbar+8h_3\hbar+12h_4\hbar+10h_5\hbar+4h_6\hbar-12\hbar^2,\\
h_6^2&=4h_1\hbar+6h_2\hbar+8h_3\hbar+12h_4\hbar+10h_5\hbar+7h_6\hbar-12\hbar^2,\\
h_1h_3&=-2h_1\hbar-3h_2\hbar-5h_3\hbar-6h_4\hbar-4h_5\hbar-2h_6\hbar+6\hbar^2,\\
h_2h_4&=-2h_1\hbar-3h_2\hbar-4h_3\hbar-6h_4\hbar-4h_5\hbar-2h_6\hbar+6\hbar^2,\\
h_3h_4&=-2h_1\hbar-3h_2\hbar-4h_3\hbar-6h_4\hbar-4h_5\hbar-2h_6\hbar+6\hbar^2,\\
h_4h_5&=-2h_1\hbar-3h_2\hbar-4h_3\hbar-6h_4\hbar-4h_5\hbar-2h_6\hbar+6\hbar^2,\\
h_5h_6&=-2h_1\hbar-3h_2\hbar-4h_3\hbar-6h_4\hbar-5h_5\hbar-2h_6\hbar+6\hbar^2.
\end{align*}
\end{proposition}
\subsubsection{The $E_7$ case}
We obtain the relations in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$ induced by
$(\ref{highestrootvectorE7})$ as follows
\begin{align*}
& h_1^2+2h_1h_3=3h_1\hbar,\quad h_6^2+2h_5h_6=3h_6\hbar,\\
&h_5^2+2h_4h_5=2h_5\hbar,\quad h_3^2+2h_3h_4=2h_3\hbar, \\
&h_3^2+h_3h_4+h_1h_3=h_3\hbar,\quad h_5^2+h_4h_5+h_5h_6=h_5\hbar,\\
& h_4^2+h_3h_4+h_4h_5=h_4\hbar,\quad h_4^2+h_2h_3+h_2h_4=h_4\hbar,\quad
h_4^2+h_2h_4+h_4h_5=h_4\hbar,\\
& h_7^2+2h_6h_7=4h_7\hbar,\quad h_6^2+h_5h_6+h_6h_7=h_6\hbar,\\
& h_1h_2=h_1h_4=h_1h_5=h_1h_6=h_1h_7=0,\\
& h_2h_3=h_2h_5=h_2h_6=h_2h_7=0,\quad h_3h_5=h_3h_6=h_3h_7=0,\\
& h_4h_6=h_4h_7=0,\quad h_5h_7=0
\end{align*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
Let us consider the subrepresentation $V(0)$, on which the generator (\ref{CasimirE7}) reduces to
\begin{align*}
& h_1(2h_1+2h_2+3h_3+4h_4+3h_5+2h_6+h_7)\\
&+\dfrac{1}{2}h_2(4h_1+7h_2+8h_3+12h_4+9h_5+8h_6+3h_7)\\
&+ h_3(3h_1+4h_2+6h_3+8h_4+6h_5+4h_6+2h_7)\\
&+h_4(4h_1+6h_2+8h_3+12h_4+9h_5+6h_6+3h_7)\\
&+\dfrac{1}{2} h_5(6h_1+9h_2+12h_3+18h_4+15h_5+10h_6+5h_7)\\
&+h_6(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+2h_7)\\
&+\dfrac{1}{2}h_7(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+3h_7)\\
&-34h_1\hbar-49h_2\hbar-66h_3\hbar-94h_4\hbar
-75h_5\hbar-52h_6\hbar-27h_7\hbar+84\hbar^2 \\
=&2h_1^2+6h_1h_3+\dfrac{7}{2}h_2^2+12h_2h_4+6h_3^2+16h_3h_4+12h_4^2+18h_4h_5
+\dfrac{15}{2}h_5^2+10h_5h_6\\
&+4h_6h_7+\dfrac{3}{2}h_7^2
-34h_1\hbar-49h_2\hbar-66h_3\hbar-96h_4\hbar-75h_5\hbar-52h_6\hbar-27h_7\hbar+84\hbar^2
\end{align*}
in $B\big(Rees(J)\big)$.
From these equations, we obtain the following.
\begin{proposition}\label{prop:relinBE_7}
Let $J$ be the Joseph ideal of the $E_7$ Lie algebra. Let
$h_1,\cdots, h_7$ be the generators of $B\big(Rees(\mathcal U(\mathfrak g)/J)\big)$.
Then $h_1,\cdots, h_7$ satisfy the following relations:
\begin{align*}
h_2h_4&=4h_1\hbar+6h_2\hbar+8h_3\hbar+12h_4\hbar+9h_5\hbar+6h_6\hbar+3h_7\hbar-12\hbar^2,\\
h_4^2&=-8h_1\hbar-12h_2\hbar-16h_3\hbar-25h_4\hbar-18h_5\hbar-12h_6\hbar-6h_7\hbar+24\hbar^2,\\
h_3^2&=-8h_1\hbar-12h_2\hbar-18h_3\hbar-24h_4\hbar-18h_5\hbar-12h_6\hbar-6h_7\hbar+24\hbar^2,\\
h_2^2&=-8h_1\hbar-14h_2\hbar-16h_3\hbar-24h_4\hbar-18h_5\hbar-12h_6\hbar-6h_7\hbar+24\hbar^2,\\
h_1^2&=-8h_1\hbar-12h_2\hbar-16h_3\hbar-24h_4\hbar-18h_5\hbar-12h_6\hbar-6h_7\hbar+24\hbar^2,\\
h_5^2&=-8h_1\hbar-12h_2\hbar-16h_3\hbar-24h_4\hbar-20h_5\hbar-12h_6\hbar-6h_7\hbar+24\hbar^2,\\
h_6^2&=-8h_1\hbar-12h_2\hbar-16h_3\hbar-24h_4\hbar-20h_5\hbar-15h_6\hbar-6h_7\hbar+24\hbar^2,\\
h_7^2&=-8h_1\hbar-12h_2\hbar-16h_3\hbar-24h_4\hbar-20h_5\hbar-16h_6\hbar-10h_7\hbar+24\hbar^2,\\
h_3h_4&=4h_1\hbar+6h_2\hbar+8h_3\hbar+12h_4\hbar+9h_5\hbar+6h_6\hbar+3h_7\hbar-12\hbar^2,\\
h_4h_5&=4h_1\hbar+6h_2\hbar+8h_3\hbar+12h_4\hbar+9h_5\hbar+6h_6\hbar+3h_7\hbar-12\hbar^2,\\
h_1h_3&=4h_1\hbar+6h_2\hbar+9h_3\hbar+12h_4\hbar+9h_5\hbar+6h_6\hbar+3h_7\hbar-12\hbar^2,\\
h_5h_6&=4h_1\hbar+6h_2\hbar+8h_3\hbar+12h_4\hbar+10h_5\hbar+6h_6\hbar+3h_7\hbar-12\hbar^2,\\
h_6h_7&=4h_1\hbar+6h_2\hbar+8h_3\hbar+12h_4\hbar+10h_5\hbar+8h_6\hbar+3h_7\hbar-12\hbar^2
\end{align*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
\end{proposition}
\subsubsection{The $E_8$ case}
We obtain the relations in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$ induced by
$(\ref{highestrootvectorE8})$ as follows
\begin{align*}
& h_1^2+2h_1h_3=3h_1\hbar,\quad h_6^2+2h_5h_6=3h_6\hbar,\\
& h_5^2+2h_4h_5=2h_5\hbar,\quad h_3^2+2h_3h_4=2h_3\hbar, \\
&h_3^2+h_3h_4+h_1h_3=h_3\hbar,\quad h_5^2+h_4h_5+h_5h_6=h_5\hbar,\\
& h_4^2+h_3h_4+h_4h_5=h_4\hbar,\quad h_4^2+h_2h_3+h_2h_4=h_4\hbar,\quad
h_4^2+h_2h_4+h_4h_5=h_4\hbar,\\
& h_7^2+2h_6h_7=4h_7\hbar,\quad h_6^2+h_5h_6+h_6h_7=h_6\hbar,\\
& h_8^2+2h_7h_8=5h_8\hbar,\quad h_7^2+h_6h_7+h_7h_8=h_7\hbar,\\
& h_1h_2=h_1h_4=h_1h_5=h_1h_6=h_1h_7=h_1h_8=0,\\
& h_2h_3=h_2h_5=h_2h_6=h_2h_7=h_2h_8=0,\quad h_3h_5=h_3h_6=h_3h_7=h_3h_8=0,\\
& h_4h_6=h_4h_7=h_4h_8=0,\quad h_5h_7=h_5h_8=0,\quad h_6h_8=0
\end{align*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$.
Let us consider the subrepresentation $V(0)$, on which the generator (\ref{CasimirE8}) reduces to
\begin{align*}
&h_1(4h_1+5h_2+7h_3+10h_4+8h_5+6h_6+4h_7+2h_8)\notag \\
&+h_2(5h_1+8h_2+10h_3+15h_4+12h_5+9h_6+6h_7+3h_8)\notag \\
&+h_3(7h_1+10h_2+14h_3+20h_4+16h_5+12h_6+8h_7+4h_8)\notag \\
&+h_4(10h_1+15h_2+20h_3+30h_4+24h_5+18h_6+12h_7+6h_8)\notag \\
&+h_5(8h_1+12h_2+16h_3+24h_4+20h_5+15h_6+10h_7+5h_8)\notag \\
&+h_6(6h_1+9h_2+12h_3+18h_4+15h_5+12h_6+8h_7+4h_8)\notag \\
&+h_7(4h_1+6h_2+8h_3+12h_4+10h_5+8h_6+6h_7+3h_8)\notag \\
&+h_8(2h_1+3h_2+4h_3+6h_4+5h_5+4h_6+3h_7+2h_8)\\
&-92h_1\hbar-136h_2\hbar-182h_3\hbar-270h_4\hbar-220h_5\hbar
-168h_6\hbar-114h_7\hbar-58h_8\hbar+240\hbar^2
\end{align*}
in $B\big(Rees(J)\big)$.
From these equations, we obtain the following.
\begin{proposition}\label{prop:relinBE_8}
Let $J$ be the Joseph ideal of the $E_8$ Lie algebra. Let
$h_1,\cdots, h_8$ be the generators of $B\big(Rees(\mathcal U(\mathfrak g)/J)\big)$.
Then $h_1,\cdots, h_8$ satisfy the following relations:
\begin{align*}
h_4^2&=20h_1\hbar+30h_2\hbar+40h_3\hbar+61h_4\hbar
+48h_5\hbar+36h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_2^2&=20h_1\hbar+32h_2\hbar+40h_3\hbar+60h_4\hbar
+48h_5\hbar+36h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_3^2&=20h_1\hbar+30h_2\hbar+42h_3\hbar+61h_4\hbar
+48h_5\hbar+36h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_1^2&=23h_1\hbar+30h_2\hbar+42h_3\hbar+60h_4\hbar
+48h_5\hbar+36h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_5^2&=20h_1\hbar+30h_2\hbar+40h_3\hbar+60h_4\hbar
+50h_5\hbar+36h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_6^2&=20h_1\hbar+30h_2\hbar+40h_3\hbar+60h_4\hbar
+50h_5\hbar+39h_6\hbar+24h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_7^2&=20h_1\hbar+30h_2\hbar+40h_3\hbar+60h_4\hbar
+50h_5\hbar+40h_6\hbar+28h_7\hbar+12h_8\hbar-60\hbar^2,\\
h_8^2&=20h_1\hbar+30h_2\hbar+40h_3\hbar+60h_4\hbar
+50h_5\hbar+40h_6\hbar+30h_7\hbar+17h_8\hbar-60\hbar^2,\\
h_2h_4&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-24h_5\hbar-18h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_3h_4&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-24h_5\hbar-18h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_4h_5&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-24h_5\hbar-18h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_1h_3&=-10h_1\hbar-15h_2\hbar-21h_3\hbar-30h_4\hbar
-24h_5\hbar-18h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_5h_6&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-25h_5\hbar-18h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_6h_7&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-25h_5\hbar-20h_6\hbar-12h_7\hbar-6h_8\hbar+30\hbar^2,\\
h_7h_8&=-10h_1\hbar-15h_2\hbar-20h_3\hbar-30h_4\hbar
-25h_5\hbar-20h_6\hbar-15h_7\hbar-6h_8\hbar+30\hbar^2
\end{align*}
in $B\big(Rees(\mathcal{U}(\mathfrak{g})/J)\big)$. \end{proposition}
\section{Proof of the main theorem}\label{sect:proofofmainthm}
We are now ready to prove the main theorem of our paper.
First, notice that by Lemma \ref{lemma:dimJU2} and
Proposition \ref{Prop:I^2_0}, the relations on $h_i$'s that we found
in the previous section are exactly all the relations of
$B(\mathscr{A})$.
\begin{proof}[Proof of Theorem \ref{maintheorem}]
First, by Proposition \ref{vectorspace of B-algebra},
it is direct to see that in all the cases, the isomorphisms to
be checked are isomorphisms of vector spaces.
We only need to prove that they preserve the algebra structures
on both sides.
In the $A_n$ case, let
\[
t_1\mapsto\dfrac{z\hbar}{n+1},\ t_2\mapsto
\dfrac{z+n+1}{n+1}\hbar,\ h_k\mapsto e_k\ \text{ for } k\in\mathbb{N} \text{ and } 1\leq k\leq n.
\]
By comparing (\ref{hihj result}),
(\ref{hkhk+1 result}), (\ref{hk2 result}) with (\ref{elej}), (\ref{ejej+1}),
(\ref{ej2}) respectively, we get the theorem.
In the $D_n$ case, let
\begin{eqnarray*}
\hbar \mapsto 2t,\ h_k\mapsto e_k,\ k\in\mathbb{N} \text{ and }1\leq k\leq n.
\end{eqnarray*}
By comparing the relations in Proposition \ref{prop:relinBDn}, namely,
\eqref{hihjDn}, \eqref{hkhk+1Dn}, \eqref{h_k^2}, \eqref{hn-2hn}, (\ref{hn-1^2}), (\ref{hn^2}) with
the relations in Corollary \ref{cohomology of Dn}, namely, \eqref{eiejDn},
\eqref{ekek+1Dn}, \eqref{ek2Dn}, \eqref{en-2en}, \eqref{en-1^2}, \eqref{en^2} respectively, we get the theorem.
In the type E case, let
\begin{eqnarray*}
\hbar \mapsto 2t,\ h_k\mapsto e_k,
\end{eqnarray*}
and compare the relations in Propositions \ref{prop:relinBE_6}, \ref{prop:relinBE_7}
and \ref{prop:relinBE_8}
with the relations in Corollaries \ref{cohomology of E6},
\ref{cohomology of E7} and \ref{cohomology of E8} respectively;
we get the desired isomorphisms.
\end{proof}
In this paper, we prove the main theorem by explicitly computing
the product structures of the two types of algebras.
We expect that there exists a closed formula for the $B$-algebras
of the quantizations of the minimal nilpotent orbits,
like the one of Bryan and Gholampour given in Theorem \ref{thm:BryanGholam}.
|
{
"arxiv_id": "2302.13222",
"language": "en",
"timestamp": "2023-02-28T02:13:16",
"url": "https://arxiv.org/abs/2302.13222",
"yymm": "2302"
} | \section{Introduction}
For automatic speech recognition (ASR), speech annotation is expensive and time-consuming work. People can easily collect enormous unsupervised corpora from websites, broadcasts, and podcasts, but only a small fraction of them can be annotated manually.
Moreover, as training data matching is crucial for an ASR system \cite{is_robust,robust-wav2vec}, it is important to select a suitable subset for annotation and training according to the target scenario, such as accented \cite{google-accent}, far-field \cite{ami-pre} or children's \cite{childern} ASR.
Although we can manually select a matching training corpus by accent, speaker, or channel for annotation, it is difficult to describe corpus similarity mathematically and to select the training speech automatically.
In general, most works \cite{es-ivector,es-gmm-u,es-gmm-u2,select-text, select-text2,es-nbest,es-gmm-s} believe that a well-designed training set should have a distribution similar to that of the target corpus, but the distribution of a speech corpus is difficult to measure.
To solve this problem, the most common approach is to measure the speech distribution through the transcription \cite{select-text, select-text2, es-nbest, es-gmm-s}. \cite{select-text} uses word, character or phoneme frequencies to measure the transcription distribution and then samples data uniformly. To sample unlabeled speech, \cite{es-nbest} uses a baseline ASR model to decode the N-best hypotheses and then calculates the term frequency-inverse document frequency (tf-idf) for data selection.
As the text-level distribution is too coarse to measure acoustic differences between speech corpora, \cite{es-gmm-s} counts the distribution over context-dependent HMM states to capture more acoustic detail. However, the HMM states still largely depend on the speech transcription and the lexicon, so they cannot capture differences in sex, accent or other acoustic characteristics.
Besides selection by distribution, contrastive sampling \cite{contrastive-selection,cs-gmm,accent-selection,contrastive-ds2,contrastive-new,google-ds} is another recently popular data selection method. Most of these methods use a universal and a target-domain ASR model to score the utterances by the confidence score or the hypothesis perplexity.
They then sample, one by one, the utterances with the largest gap between the target and the universal score.
Using ASR models from different domains evaluates the acoustic characteristics of the speech well; however, it also tends to choose similar speech and reduces the diversity of the selected set.
In this study, we design a novel target-aware data selection method by proposing the speech corpora divergence (SCD). We use the self-supervised learning (SSL) model, Hubert\cite{hubert}, to discretize the speech and then measure the speech distribution in the discrete space.
We count the N-grams of the discrete corpus and use the Kullback-Leibler divergence (KLD) to calculate the SCD.
Then we can select a subset from the universal unlabeled corpus by minimizing the SCD between the selected corpus and the target corpus, and we further use greedy search to reduce the algorithm complexity.
Compared with previous works, the Hubert discrete labels contain both acoustic and semantic information, so they can represent the speech distribution better than text-related labels such as words, characters and HMM states. And as the SCD selection method considers the relationship between the whole selected corpus and the target set, it can sample more diverse speech than contrastive sampling.
We evaluate the SCD data selection method on differently accented speech from Common Voice.
Experiments show that our proposed method achieves a 14.8\% relative improvement over random selection and reaches or even exceeds the result of human selection with accent labels.
\section{Related Work}
Hubert \cite{hubert} is one of the most successful SSL models and has been applied to different speech tasks. The Hubert model uses a CNN feature extractor to convert the waveform into hidden representations. It then masks a part of the representations and uses a transformer encoder to predict the discrete labels of the masked part. The discrete labels are initialized by K-means clustering on MFCC features and are then refined by the Hubert model iteratively.
Several research works find that the labels produced by Hubert can be used in different tasks.
For example, \cite{ao22_interspeech,arunkumar22_interspeech} use these labels to pre-train the ASR decoder. They find that the Hubert discrete labels can help the decoder learn how to generate text sequences.
\cite{Lakhotia2021OnGS, Lee2021DirectST, Polyak2021hubert_res} find that speech can be resynthesized by combining the discrete labels with pitch and speaker information. There are also works that apply these discrete labels to emotion conversion \cite{kreuk2021textless} and NLP tasks \cite{kharitonov2022textless}.
These studies indicate that, compared to traditional labels such as words or phones, the Hubert discrete labels contain richer acoustic and semantic information. So in this paper, we use the Hubert discrete labels to represent the continuous speech and then design a data selection method according to the distribution of the Hubert labels.
\section{Method}
\begin{figure}[!t]
\centering
\includegraphics[height=2.3in]{pic/SCD.png}
\caption{Calculation of the SCD. We use the Hubert model to convert the speech corpora into label corpora. Then use the N-gram to measure the distribution. The SCD can be defined by the KLD between the N-grams.}
\label{fig_scd}
\end{figure}
\subsection{Speech Corpora Divergence}
In this subsection, we give the definition of the SCD, which measures the similarity between two speech corpora.
A speech corpus $X$ can be represented by a stochastic process with probability distribution $P(X)=P(X_1X_2\dots X_t\dots)$, and each speech utterance $x_i=x_{i1}x_{i2}\dots x_{it}\dots$ is a sample from it, where $i$ and $t$ stand for the utterance index and the time step.
Then we can use the KLD to measure the difference between the two distributions:
\begin{equation}\label{eq_ce}
\mathrm{SCD}(X,Y) = D_{\mathrm{KL}}(X||Y)
\end{equation}
However, it is not easy to calculate the SCD by Eq.~(\ref{eq_ce}), because the corpora $X$ and $Y$ lie in a continuous space.
Inspired by recent SSL works \cite{VQ-vae,vq-w2v,w2v,hubert,hubert2}, we use the hidden-unit discovery system in Hubert to discretize the speech corpora.
For each corpus, every utterance $x_i$ is converted into a label sequence $\widetilde{x}_{i1},\widetilde{x}_{i2}\dots \widetilde{x}_{in}\dots$, with $\widetilde{x}_{in} \in \mathcal{L}$ and $\mathcal{L} = \{1,2,\dots,K\}$, where $K$ is the number of clusters of the hidden-unit discovery system.
After obtaining the discrete $\widetilde{X}$, we can use an N-gram model $P_{\widetilde{X}}(L)$ to represent $P(X)$:
\begin{equation}
P_{\widetilde{X}}(L=l_i) = \frac{\mathrm{cnt}_{\widetilde{X}}(l_i)}{\sum_{l_j} \mathrm{cnt}_{\widetilde{X}}(l_j)}
\end{equation}
where $L \in \mathcal{L}^N$, $\mathcal{L}^N$ is the N-order cartesian power of $\mathcal{L}$.
$\mathrm{cnt}_{\widetilde{X}}$ stands for the count operation in corpora $\widetilde{X}$.
Finally the SCD can be calculated as:
\begin{equation}
\begin{aligned}
\mathrm{SCD}(X,Y) =\sum_{l_i\in \mathcal{L}^N} P_{\widetilde{X}}(L=l_i) \log \frac{P_{\widetilde{X}}(L=l_i)}{P_{\widetilde{Y}}(L=l_i)}
\end{aligned}
\end{equation}
We summarize the calculation of the SCD in Fig.~\ref{fig_scd}.
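For concreteness, the following minimal Python sketch (our reading of the procedure, not the authors' code) computes the SCD between two corpora of discrete labels with unigram statistics ($N=1$); the additive smoothing constant is an implementation assumption used only to keep the KLD finite.
\begin{verbatim}
from collections import Counter
import math

def ngram_dist(label_seqs, n=1, K=500, smooth=1e-6):
    """Estimate an N-gram distribution over label tuples from a list of label sequences."""
    counts = Counter()
    for seq in label_seqs:
        for t in range(len(seq) - n + 1):
            counts[tuple(seq[t:t + n])] += 1
    denom = sum(counts.values()) + smooth * (K ** n)
    return (lambda gram: (counts.get(gram, 0) + smooth) / denom), set(counts)

def scd(X_seqs, Y_seqs, n=1, K=500):
    """SCD(X, Y) = KL(P_X || P_Y), summed over the n-grams observed in X."""
    p_x, grams_x = ngram_dist(X_seqs, n, K)
    p_y, _ = ngram_dist(Y_seqs, n, K)
    return sum(p_x(g) * math.log(p_x(g) / p_y(g)) for g in grams_x)

# Toy usage with two tiny "discretized corpora" of Hubert cluster labels.
X = [[3, 3, 7, 12], [7, 7, 12]]
Y = [[3, 7, 12, 12], [3, 3, 3]]
print(scd(X, Y, n=1, K=500))
\end{verbatim}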
\subsection{Target-aware data selection with SCD}
With the help of the SCD, we can re-define the target-aware data selection as a combinatorial optimization problem.
Given the unlabeled universal speech corpus $U$ and the query speech set $Q$, sample a subset $S$ of size $C$ from $U$ that minimizes $\mathrm{SCD}(Q, S)$:
\begin{equation}
\begin{aligned}
S^* = \argmin_S \mathrm{SCD}(Q,S), \quad \text{where } |S| = C
\end{aligned}
\end{equation}
\begin{algorithm}[!t]
\caption{Target-Aware Data Selection with Speech Corpus Divergence}
\label{alg:data-selection}
\begin{algorithmic}[1]
\REQUIRE ~~\\
Universal unlabeled speech corpus $U$ sorted by length. \\
Query speech corpus $Q$. \\
A pre-trained Hubert model. \\
Hyperparameters $N$, $C$ and $\lambda$.
\ENSURE ~~\\
Subset $S$ from $U$ and $|S|=C$
\STATE Use the Hubert model to discretize $U$ and $Q$.
\STATE Calculate the $N$-gram models $P_{\widetilde{U}}(L)$ and $P_{\widetilde{Q}}(L)$.
\STATE $P_{\widetilde{Q'}}(L) \gets (1- \lambda) P_{\widetilde{U}}(L) + \lambda P_{\widetilde{Q}}(L)$
\STATE $S \gets \varnothing$, $r \gets |U|/C$
\FOR{$i=0$ to $C-1$}
\STATE Select the ($ir+1$)-th to the $(i+1)r$-th utterances in $U$ as $U_i$
\STATE $u^i_{best}=\argmin_{u_j} \mathrm{SCD}(Q', S \cup \{u_j\}),$ where $u_j \in U_i$.
\STATE Add $u^i_{best}$ into $S$.
\ENDFOR
\RETURN $S$.
\end{algorithmic}
\end{algorithm}
In practice, the available query corpus $Q$ is usually small and cannot fully represent the target scenario. Directly using $P_{\widetilde{Q}}(L)$ to calculate the SCD could therefore make $S$ overfit $Q$.
To increase the generalization ability of the selected set $S$, we interpolate between $U$ and $Q$ as follows:
\begin{equation}
\begin{aligned}
P_{\widetilde{Q'}}(L) = \lambda P_{\widetilde{Q}}(L) + (1-\lambda)P_{\widetilde{U}}(L)
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
S^* = \argmin_S \mathrm{SCD}(Q',S), \quad \text{where } |S| = C
\end{aligned}
\end{equation}
However, finding the globally optimal solution $S^*$ is an NP-hard problem, and the size of the solution space is $\binom{|U|}{C}$. We therefore use a greedy-search method to find a locally optimal solution and reduce the algorithm complexity. Details are shown in Algorithm \ref{alg:data-selection}.
As each utterance is visited only once, the overall complexity is $O(|U|K^N)$, where $O(K^N)$ is the cost of one SCD evaluation and $O(|U|)$ is the search cost. When $N$ is large, we can prune rare grams to further reduce the complexity.
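A minimal Python sketch of Algorithm \ref{alg:data-selection} with unigram statistics is given below; the interpolation weight and smoothing constant are illustrative assumptions, and the bucket partition mirrors lines 5--6 of the algorithm.
\begin{verbatim}
from collections import Counter
import math

def kld(p_counts, q_counts, K=500, smooth=1e-6):
    """KL(P || Q) from (possibly fractional) unigram counts with additive smoothing."""
    p_tot = sum(p_counts.values()) + smooth * K
    q_tot = sum(q_counts.values()) + smooth * K
    out = 0.0
    for g in p_counts:
        p = (p_counts[g] + smooth) / p_tot
        q = (q_counts.get(g, 0) + smooth) / q_tot
        out += p * math.log(p / q)
    return out

def greedy_select(U_label_seqs, Q_counts, U_counts, C, lam=0.5, K=500):
    """Return indices of C utterances from U (sorted by length) minimizing SCD(Q', S)."""
    # Interpolated target distribution Q' = lam * Q + (1 - lam) * U, stored as weights.
    q_tot, u_tot = sum(Q_counts.values()), sum(U_counts.values())
    qp = Counter({g: lam * Q_counts[g] / q_tot + (1 - lam) * U_counts[g] / u_tot
                  for g in set(Q_counts) | set(U_counts)})
    selected, s_counts = [], Counter()
    r = len(U_label_seqs) // C
    for i in range(C):
        bucket = U_label_seqs[i * r:(i + 1) * r]      # each utterance is visited once
        best_j, best_div = 0, float("inf")
        for j, seq in enumerate(bucket):
            div = kld(qp, s_counts + Counter(seq))    # SCD(Q', S + {u_j})
            if div < best_div:
                best_j, best_div = j, div
        selected.append(i * r + best_j)
        s_counts += Counter(bucket[best_j])
    return selected
\end{verbatim}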
\begin{table}[!t]
\small
\caption{WER (\%) for models with different selected training corpus.}
\centering
\begin{threeparttable}
\begin{tabular}{|l|l|c c|c c|c c|}
\hline
\multirow{2}{*}{Hours}&Selection& \multicolumn{2}{c|}{cmv} & \multicolumn{2}{c|}{ind} & \multicolumn{2}{c|}{aus} \\
&Method & dev & test & dev & test & dev & test \\
\hline
1 h &random &\textbf{30.1}&\textbf{31.8}&29.2&34.5&24.5&24.8\\
&ind-label &32.0&34.2&\textbf{25.9}&32.5&32.8&32.7\\
&ind-SCD &31.0&32.5&27.5&\textbf{31.5}&27.4&28.0 \\
&aus-label &35.5&38.4&38.9&39.6&\textbf{22.9}&\textbf{23.5}\\
&aus-SCD &30.7&32.7&29.8&35.6&23.1&23.8\\
\hline
10 h &random &\textbf{22.8}&24.1&20.9&24.4&18.0&18.2\\
&ind-label &24.6&25.7&\textbf{17.6}&23.0&25.7&26.3\\
&ind-SCD &22.9&\textbf{23.9}&18.5&\textbf{21.4}&19.7&19.9 \\
&aus-label &26.9&29.4&28.5&33.7&\textbf{14.9}&\textbf{15.6}\\
&aus-SCD &23.3&24.6&21.7&25.0&16.7&17.3\\
\hline
100 h&random & 17.7&17.7&13.9&13.5&12.2&12.8\\
&ind-SCD & \textbf{17.5}&\textbf{17.6}&\textbf{11.2}&\textbf{11.5}&12.2&13.6 \\
&aus-SCD & 17.9&17.8&14.2&13.0&\textbf{11.3}&\textbf{11.9}\\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item cmv stands for the official evaluation set; ind and aus are our split accent evaluation sets.
\end{tablenotes}
\end{threeparttable}
\label{tab:result_main}
\end{table}
\section{Experiment}
\subsection{Dataset}
We conduct the experiments on the English Common Voice (CMV) v5.1 \cite{commonvoice} and take the accent as the basis of data selection.
In CMV, volunteers worldwide contribute the audio by reading sentences provided by the website, and some of the volunteers also report their accents.
We select the training subset from CMV and evaluate on the Indian (ind) and Australian (aus) accents, which account for only 5\% and 4\% of the whole CMV.
For evaluation, besides the official evaluation set, we split a \textit{dev} set (5 hours) and a \textit{test} set (5 hours) for the ind and aus accents. These split parts are excluded during data selection.
\subsection{Data selection}\label{sec:selection}
We use different data selection methods, including random selection, supervised accent-label selection, and the proposed unsupervised SCD selection, to sample the training set. For the random selection, we shuffle the corpus and select the first utterances. For the accent-label selection, we sample only the speech with the ind or aus label. These two methods can be regarded as the lower bound and the upper bound of the experiments.
For the SCD selection, we use the open-source self-supervised Hubert-Large model \footnote{https://dl.fbaipublicfiles.com/hubert/hubert\_large\_ll60k.pt}\cite{hubert2} to discretize the speech into 500 cluster labels.
We estimate the distribution of the discrete corpus with a uni-gram model ($N$=1). During the greedy search, we use the \textit{dev-ind} or \textit{dev-aus} set as $Q$, and $\lambda$ is adjusted from 0 to 1.
Finally, we fine-tune a Hubert as the downstream ASR model with 1 hour, 10 hours, and 100 hours of different selected training speech \footnote{We cannot sample the 100 hours set by the accent-label selection because the audios with the ind-label or aus-label are less than 100 hours.}.
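As an illustration of the discretization step (a sketch under our assumptions, not the released pipeline), frame-level hidden features are assumed to have been extracted beforehand with the Hubert checkpoint, and K-means with 500 clusters then maps every frame to a discrete label:
\begin{verbatim}
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def fit_quantizer(feature_mats, n_clusters=500, seed=0):
    """feature_mats: list of (num_frames, feat_dim) arrays of Hubert hidden features."""
    km = MiniBatchKMeans(n_clusters=n_clusters, batch_size=10000, random_state=seed)
    km.fit(np.concatenate(feature_mats, axis=0))
    return km

def discretize(km, feature_mat):
    """Return the label sequence (one cluster id per frame) for one utterance."""
    return km.predict(feature_mat).tolist()
\end{verbatim}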
\begin{table}[!t]
\small
\caption{Influence of the discretization and the parameter $N$}
\centering
\begin{threeparttable}
\begin{tabular}{|l|l|c c| c c|}
\hline
\multirow{2}{*}{Hours}&\multirow{2}{*}{Selection Method}&\multicolumn{2}{c|}{cmv} & \multicolumn{2}{c|}{ind} \\
&&dev & test & dev & test \\
\hline
1 h&SCD-MFCC-1-gram & 31.1&32.7&27.2&33.5 \\
&SCD-Hubert-B-1-gram & 31.4&33.2&27.9&32.4 \\
&SCD-Hubert-L-1-gram & 31.0&\textbf{32.5}&27.5&\textbf{31.5} \\
&SCD-Hubert-L-2-gram & \textbf{30.9}&32.8&27.3&32.4 \\
&SCD-Hubert-L-3-gram &31.6&33.1&\textbf{27.1}&33.3 \\
&CS-Hubert-L-1-gram &42.8&43.8&38.9&42.1 \\
&CS-Hubert-L-2-gram &38.7&39.7&28.6&36.1 \\
\hline
10 h&SCD-MFCC-1-gram & \textbf{22.4}&23.7&18.5&24.1 \\
&SCD-Hubert-B-1-gram & 22.7&23.7&18.6&22.0 \\
&SCD-Hubert-L-1-gram & 22.8&23.9&18.5&21.4 \\
&SCD-Hubert-L-2-gram & 22.5&\textbf{23.6}&17.0&21.3 \\
&SCD-Hubert-L-3-gram & 22.7&24.0&16.8&21.9 \\
&SCD-Text-Word-1-gram &24.1&25.8&23.0&27.1 \\
&SCD-Text-Char-1-gram &25.4&28.3&24.9&31.1 \\
&CS-Hubert-L-1-gram &27.8&28.2&18.2&21.6 \\
&CS-Hubert-L-2-gram &26.3&27.3&\textbf{16.0}&\textbf{21.2} \\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item SCD-Text means that $P_{\widetilde{Q}}(L)$ and $P_{\widetilde{U}}(L)$ are calculated from the transcription.
\item CS is short for contrastive sampling, which use $P_{\widetilde{Q}}(L)$ and $P_{\widetilde{U}}(L)$ to score and choose the utterances.
\end{tablenotes}
\end{threeparttable}
\label{tab:result_feature_N}
\end{table}
\subsection{Main results}
Table \ref{tab:result_main} lists the main results of the experiments. According to the table, our proposed SCD selection outperforms random selection on both the ind and aus accents. Furthermore, compared to the supervised accent-label selection, the SCD selection achieves better results on the ind accent and comparable results on the aus accent.
We also find that the SCD selection generalizes better than the accent-label selection: the accent-label selection benefits the recognition performance of the target accent but hurts other accents at the same time, whereas the SCD selection improves the target-accent result with little influence on the others.
For example, in the 10-hour experiments, the ind-label selection brings a 5.7\% relative improvement on \textit{test-ind} but also relatively increases the WER by 11.6\% and 44.5\% on \textit{test-cmv} and \textit{test-aus}.
In contrast, the ind-SCD selection reduces the WER on \textit{test-ind} by 12.7\%, while the WER increase on \textit{test-aus} is only 9.3\% and the \textit{test-cmv} result is even better.
Comparing the results across training set sizes, we find that the SCD selection becomes more powerful as the sample size grows.
When the selected training set increases from 1 to 100 hours, the relative improvement of SCD selection over random selection increases from 5.8\% to 14.8\% for the ind accent and from 4.0\% to 7.0\% for the aus accent. Because the SCD selection is based on statistics, a larger sample size provides a more accurate probability distribution.
\subsection{Analysis}
\subsubsection{Influence of the discretization and N-gram}
We analyze the influence of the discretization and the N-gram order and show the results in Table \ref{tab:result_feature_N}. For the discretization, we also use K-means clustering on MFCC features (the original target labels of Hubert) and on Hubert-Base representations to discretize the speech.
We find that all of them exceed random selection, which means that the SCD selection can still be useful even without a self-supervised model.
Although similar performance is observed on \textit{dev-ind}, the Hubert-Large discretization obtains the best results on \textit{test-ind}. This indicates that more informative discrete labels bring better generalization ability.
For the N-gram, we also use a larger $N$ to measure the distribution of the discrete label corpus and find that as $N$ becomes larger, the \textit{dev-ind} WER always decreases but the \textit{test-ind} result can become worse. This shows that the selected training set overfits \textit{dev-ind}.
We believe the reason is that \textit{dev-ind} contains only 5 hours of speech, which is insufficient for a higher-order statistical model.
\subsubsection{Comparison with other methods}
We also use the transcription distribution and contrastive sampling for data selection, as in \cite{select-text} and \cite{google-ds}; the results are also shown in Table \ref{tab:result_feature_N}. We find that selecting data by word or character distribution is useless and even harmful in this task, because the transcription cannot reflect the differences between accents.
Contrastive sampling achieves similar performance on the 10-hour \textit{ind-test} set but is much worse on the \textit{cmv-test} set and on the 1-hour training task, because it tends to select similar utterances,
especially when the selected size is small.
\subsubsection{Influence of the interpolation factor $\lambda$}
\begin{figure}[!t]
\centering
\subfloat[1 hour results on \textit{dev-ind}]{\includegraphics[width=0.47\columnwidth]{pic/dev-lambda-1h-large.png}%
\label{fig_first_case}}
\hfil
\subfloat[1 hour results on \textit{test-ind}]{\includegraphics[width=0.47\columnwidth]{pic/test-lambda-1h-large.png}%
\label{fig_second_case}}
\hfil
\subfloat[10 hours results on \textit{dev-ind}]{\includegraphics[width=0.47\columnwidth]{pic/dev-lambda-10h-large.png}%
\label{fig_third_case}}
\hfil
\subfloat[10 hours results on \textit{test-ind}]{\includegraphics[width=0.47\columnwidth]{pic/test-lambda-10h-large.png}%
\label{fig_forth_case}}
\caption{Influence of the interpolation factor $\lambda$. We also draw the random selection (blue line) and ind-label selection (red line) in the figures as reference.}
\label{fig_lambda}
\end{figure}
We evaluate the influence of the interpolation factor $\lambda$, which is used to prevent the selected set from overfitting the query corpus. The experiments are based on the ind accent with 1-hour or 10-hour training sets, and the results are shown in Fig.~\ref{fig_lambda}.
We find that on \textit{dev-ind} the WER decreases continuously as $\lambda$ grows, which means that the selected subset fits \textit{dev-ind} better.
However, on \textit{test-ind} the WER first decreases and then rises again as $\lambda$ approaches 1.0.
The phenomenon is more evident when the selected training set is small.
This means that without interpolating with the universal corpus, the generalization ability of the SCD selection is hurt.
\subsubsection{Selected training set construction}
We show the composition of three 10-hour training sets selected by random selection, ind-SCD selection, and aus-SCD selection in Fig.~\ref{fig_construct}.
The SCD-selected sets contain more target-accented speech than the random selection. For example, with random selection, the ind-accented and aus-accented speech accounts for only 7.5\%.
With the ind-SCD and aus-SCD selection, the proportion of ind-accented and aus-accented speech increases to 48\% and 21\% respectively. It should be noted that no accent label or transcription is used during SCD selection.
Fig.~\ref{fig_construct} also shows the relationship between different accents. The ind-SCD selection chooses more ind-accented speech and less speech from other accents. The aus-SCD selection, however, also samples more speech with England (eng) and New Zealand (nzl) labels besides the aus-accented speech.
As we know, the aus accent is more closely related to the eng and nzl accents than to the others.
This indicates that the proposed SCD is consistent with human cognition.
\begin{figure}[!t]
\centering
\includegraphics[height=2in]{pic/selection-accent.png}
\caption{Training set composition with different data selection methods. The results are counted from the speech with accent labels in the selected training set.}
\label{fig_construct}
\end{figure}
\section{Conclusion}
This study proposes the SCD to measure the similarity between speech corpora via the Hubert discrete label distribution and then selects the training speech by minimizing the SCD.
Compared to previous works, this SCD selection method considers both acoustic and semantic information in the speech corpus and also preserves the diversity of the selected speech.
Experiments on differently accented speech show that, with the same training set size, the proposed SCD selection achieves up to 14.8\% relative improvement over random selection and comparable or even better performance than supervised selection with accent labels.
As the SCD data selection method is independent of the transcription, we will apply it to other audio tasks that need to sample the best training subset from a large-scale corpus.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\normalsize
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13195",
"language": "en",
"timestamp": "2023-02-28T02:12:26",
"url": "https://arxiv.org/abs/2302.13195",
"yymm": "2302"
} | \section{Introduction}
\label{section:introduction}
Macular edema is an eye condition that occurs when blood vessels leak fluid into the part of the retina called the macula (the central region at the back of the eye where vision is sharpest), severely impairing vision. Many eye diseases can cause it, including Age-related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). A recent study \cite{li:EURETINA2017} indicates a rise of retinal diseases in Europe, with more than 34 and 4 million people in the continent affected by AMD and DME respectively.
AMD is most common among older people (50 years and above). The early stage of AMD is asymptomatic and slow to progress to the late stage, which is more severe and less common.
DME is a thickening of the retina caused by the accumulation of intraretinal fluid in the macula and is most common among diabetic patients.
Currently there is no cure for these diseases, and anti-vascular endothelial growth factor (Anti-VEGF) therapy is the main treatment. This requires repeated injections, which are expensive and hence a socio-economic burden on most patients and the healthcare system.
Early diagnosis and active monitoring of the progression of these diseases are therefore vital, because doctors can give behavioral advice, such as a change of diet or regular exercise, that helps slow down the progression or, in some cases, prevents the diseases from reaching a later stage. Today this is mostly done manually, which is laborious, time-intensive and prone to error. An automatic and reliable tool is therefore crucial for this process and for exploiting the qualitative features of the retinal OCT modality efficiently.
Also, the presence of eye motion artifacts and speckle noise in OCT lowers the signal-to-noise ratio (SNR). To circumvent this problem, device manufacturers have to find a balance between SNR, image resolution and scanning time. Hence the image quality varies among device vendors, and an automated, high-performance tool that generalises across images from all device vendors is needed.
To address the above issues, in this work we propose the nnUNet \cite{isensee:NPG2021} and an enhanced version called nnUnet\_RASPP.
Our main contribution is enhancing the nnUNet by integrating residual blocks and an Atrous Spatial Pyramid Pooling (ASPP) block into the network's architecture.
The rest of the paper is organized as follows. A brief review of the previous studies is provided in Section \ref{section:background}. Section \ref{section:methods} presents the proposed methods. The experiment with results and visualisation are presented in Section \ref{section:experiments}, and finally, the conclusion with our contributions is described in Sections \ref{section:conclusions}.
\section{Background}
\label{section:background}
OCT was first developed in the early 1990s but only became commercially available in 2006, and it rapidly became popular due to its high image resolution. The segmentation of retinal OCT images has been studied for many years, from graph-cut \cite{Salazar:jbhi2014, Salazar:icarcv2010}, Markov Random Field \cite{Salazar:his2012, Wang:jbhi2017} and level-set \cite{Dodo:access2019,Dodo:bioimaging2019} methods to the recent Deep Learning methods that are briefly reviewed below.
Unet, a Deep Learning approach for medical image segmentation, is introduced in \cite{Ronneberger:MICCAI2015} by Olaf Ronneberger et al. As the name suggests, the architecture has a U-shape and consists of an encoder, a bottleneck and a decoder block. It is an end-to-end framework in which the encoder is used to extract features from the input images/maps and the decoder is used for pixel localisation. At the end of the decoder path is a classification layer that assigns each pixel to one of the segmented classes. Between the encoder and the decoder paths is a bottleneck that ensures a smooth transition from the encoder to the decoder. The encoder, decoder and bottleneck are made up of a series of convolutional layers arranged in a specific order.
The Deep-ResUNet++ is presented in \cite{ndipenoch:CISP-BMEI2022} for simultaneous segmentation of layers and fluids in retinal OCT images. The approach incorporated residual connections, ASPP blocks and Squeeze-and-Excitation blocks into the traditional 2D Unet \cite{Ronneberger:MICCAI2015} architecture to simultaneously segment 3 retinal layers, 3 fluids and 2 background classes from 1136 B-scans from 24 patients suffering from wet AMD. The algorithm is validated on the Annotated Retinal OCT Images (AROI) dataset \cite{Melinscak:Automatika2021}, which is publicly available.
A clinical application for diagnosis and referral of retinal diseases is proposed in \cite{Fauw:NatureMedicine2018} in which 14,884 OCT B-Scans collected from 7,621 patients are trained on a framework consisting of two main parts : The segmentation model (3D Unet \cite{Cicek:MICCAI2016}) and the classification model.
An approach to segment fluids from retinal OCT using Graph Theory (GT) and Fully Convolutional Networks (FCN) with a curvature loss is presented in \cite{xing:IEEETMI2022}. The GT is used to delineate the retinal layers, the FCN is used for segmentation, and the loss function further uses a curvature regularization term to smooth boundaries and eliminate unnecessary holes inside the predicted fluid. The algorithm was validated on the RETOUCH dataset \cite{bogunovic:IEEE2019} consisting of 3 fluid types.
A combination of Convolutional Neural Networks (CNN) and a Graph Search (GS) method is presented in \cite{Fang:BOE2017}. The framework aims to delineate nine layer boundaries from 60 retinal OCT volumes (2915 B-scans from 20 human eyes) obtained from patients suffering from dry AMD. The CNN is used for the extraction of layer-boundary features while the GS is used for pixel classification.
In \cite{pekala:CBM2019} another Deep Learning approach for retinal OCT segmentation combining a FCN for segmentation with Gaussian Processes for post processing is proposed. The method is validated on the University of Miami dataset \cite{tian:JOB2016} which consists of 50 volumes from 10 patients suffering from diabetic retinopathy. Their approach is divided into two main steps which are the pixel classification using the FCN and the post processing using Gaussian Processes.
Another CNN-based approach for the simultaneous segmentation of layers and fluid is presented in \cite{Roy:BOE2017}. They presented a 2D Unet like architecture with a reduced depth for the segmentation of 10 classes consisting of 8 layers, 1 background and 1 fluid from 10 patients suffering from Diabetic Macular Edema (DME). The Duke DME dataset \cite{Chiu:BOE2015} is used to validate the algorithm.
A 3-part CNN-based and Random Forest (RF) framework was developed by \cite{Lu:arXiv:2017} to segment and detect fluids in OCT images. The first part of the framework is used for pre-processing of the images, the second part consists of a 2D Unet architecture for the extraction of features and a RF classifier is used at the third part to classify the pixels. The framework is validated on the MICCAI 2017 RETOUCH challenge dataset \cite{bogunovic:IEEE2019}.
A combination of CNN and graph-shortest path (GSP) method is presented in \cite{rashno:MICCAI2017} for the segmentation and detection of fluid in retinal OCT images. The algorithm is validated on the MICCAI 2017 RETOUCH challenge dataset \cite{bogunovic:IEEE2019}. In this method the CNN is used for the segmentation of region of interest (ROI) and the GSP is further used for the segmentation of the layers and fluid from the ROI.
A standard double-Unet architecture for the detection and segmentation of fluids in retinal OCT images is proposed in \cite{kang:MICCAI2017}. The method uses 2 Unet architectures connected in series and is validated on the MICCAI 2017 RETOUCH challenge dataset. The output of the first Unet serves as an input to the second Unet.
nnUNet, a self-configuring framework, is introduced in \cite{isensee:NPG2021}. The framework aims to eliminate the trial-and-error of manual parameter setting by using the dataset's characteristics to determine and automatically set some of the model's key parameters, such as the batch size. The framework uses the standard Unet \cite{Ronneberger:MICCAI2015} and is evaluated on 11 biomedical image segmentation challenges consisting of 23 datasets for 53 segmentation tasks.
An extended version of the nnUNet \cite{isensee:NPG2021} is presented by McConnell et al. in \cite{mcconnell2:CBMS2022}, which integrates residual, dense, and inception blocks into the network for medical image segmentation on multiple datasets. The algorithm is evaluated on eight datasets consisting of 20 target anatomical structures.
ScSE nnU-Net, another extended version of the nnUNet \cite{isensee:NPG2021}, is presented in \cite{xie:MICCAI2020} for the segmentation of head and neck cancer tumors. It extends the original nnUnet by incorporating spatial and channel squeeze-and-excitation blocks into the network's architecture. The algorithm uses nnUNet to extract features from the input images/maps and then uses the squeeze-and-excitation blocks to further suppress the weaker pixels. The method was validated on the HECKTOR 2020 training dataset consisting of 201 cases and a test set of 53 cases.
In the medical image segmentation community the Unet is the most common and widely used architecture, but most of its parameters are set up manually by trial and error. We therefore aim to improve the performance of the Unet by leveraging nnUNet, a self-parameterising pipeline for medical image segmentation, and adapt it to address the problem of data source variance, as explained in the next part of this paper.
\section{Methods}
\label{section:methods}
nnUnet\_RASPP is inspired by and adapted from nnUNet \cite{isensee:NPG2021}, developed by Isensee et al., a self-configuring and automatic pipeline for medical image segmentation that mitigates the trial-and-error of manual parameter setting.
We enhance the nnUNet by incorporating residual and ASPP blocks into the network's architecture.
In this section we give a brief summary of the standard Unet (\ref{subsection:UNet}), nnUNet (\ref{subsection:nnUNet}), residual connections (\ref{subsection:residual_conne}) and the ASPP block (\ref{subsection:aspp}), and finally explain how we integrate these components to build nnUnet\_RASPP (\ref{subsection:nnUnet_RASPP}).
\subsection{Unet}
\label{subsection:UNet}
\begin{figure}[H]
\centerline{\includegraphics[width=11cm]{Unet.png}}
\caption{An illustration of the standard Unet architecture used in nnUnet}
\label{fig:Unet}
\end{figure}
The Unet \cite{ronneberger:springer2015} is an end-to-end architecture for medical image segmentation. It consists of 3 main parts: the encoder, the decoder and a bottleneck between them. The encoder captures contextual information (feature extraction) and halves the size of the feature map after every convolutional block along the encoding path by applying strided convolutions.
Pixel localisation is done in the decoder. Along the decoding path, the size of the feature map is doubled after every convolutional block by applying transposed convolutions, and for the reconstruction process the feature maps are concatenated with the corresponding maps from the encoder path through up-sampling operations.
The bottleneck serves as a bridge linking the encoding and decoding paths; it consists of a convolutional block that ensures a smooth transition from the encoder to the decoder.
In the encoding path, decoding path and bridge layer, each convolutional block consists of a convolutional layer that maps the pixels of the receptive field to a single value, followed by instance normalisation to prevent over-fitting during training, and finally a LeakyReLU activation function to diminish vanishing gradients. A high-level diagram of the standard Unet architecture is shown in Fig.~\ref{fig:Unet} above.
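As a concrete illustration (a minimal PyTorch sketch of the block just described, not the nnUNet source; the use of two convolutions per block and the channel sizes are assumptions), each convolutional block chains a convolution, instance normalisation and a LeakyReLU, with a strided variant halving the feature map in the encoder:
\begin{verbatim}
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    """Conv -> InstanceNorm -> LeakyReLU (twice); stride=2 halves the map in the encoder."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(inplace=True),
    )

# In the decoder, nn.ConvTranspose2d(ch, ch // 2, kernel_size=2, stride=2) doubles the
# spatial size before concatenation with the matching encoder feature map.
\end{verbatim}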
\subsection{nnUNet Overview}
\label{subsection:nnUNet}
The nnUNet \cite{isensee:NPG2021} is a self-configuring and automatic pipeline for medical image segmentation that automatically determines the best model hyper-parameters given the data and the available hardware, thus alleviating the trial-and-error of manual parameter setting.
Given training data, the framework extracts a ``data fingerprint'' (modality, shape, and spacing) and, based on the hardware (GPU memory) constraints, determines the network topology, image re-sampling methods, and input-image patch sizes. After training is complete, the framework determines whether post-processing is needed.
During training some parameters are fixed: the learning rate is set to 0.01, the maximum number of training epochs is 1000, the loss function is Cross Entropy plus Dice loss with ADAM as the optimizer, and data augmentation is done on the fly. The framework uses the standard Unet \cite{ronneberger:springer2015} as the network's architecture.
Please refer to the original publication \cite{isensee:NPG2021} for more information.
\subsection{Residual Connections}
\label{subsection:residual_conne}
A residual connection, developed in \cite{he:IEEECCVPR2016}, is a technique used to combat the vanishing gradient problem. The Unet architecture uses the chain rule for back-propagation during training; this process can sometimes lead to vanishing gradients, and one way to circumvent this is to introduce residual connections into the network's architecture. A residual connection is illustrated in Fig.~\ref{fig:residual_connection} below.
\begin{figure}[h]
\centerline{\includegraphics[width=4cm]{residual_connections.png}}
\caption{An illustration of the residual connection block to combat the vanishing gradient problem where X is an input and F(X) is a function of X.}
\label{fig:residual_connection}
\end{figure}
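A minimal PyTorch sketch of the residual connection $y = F(x) + x$ is shown below; the $1\times1$ projection on the skip path is an assumption used only when the channel count changes.
\begin{verbatim}
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.InstanceNorm2d(out_ch),
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.InstanceNorm2d(out_ch),
        )
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.act = nn.LeakyReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))  # F(x) + x
\end{verbatim}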
\subsection{Atrous Spatial Pyramid Pooling, ASPP}
\label{subsection:aspp}
ASPP is a technique used to capture global contextual features by applying parallel filters with different dilation rates to a given input feature map. The ASPP block is illustrated in Fig.~\ref{fig:ASPP_Structure} below.
\begin{figure}[H]
\centerline{\includegraphics[width=10cm]{ASPP.png}}
\caption{An illustration of multiple parallel filters at different dilating rates or frequencies to capture global information in an ASPP block.}
\label{fig:ASPP_Structure}
\end{figure}
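A minimal PyTorch sketch of an ASPP block is given below; the dilation rates and the $1\times1$ fusion convolution are assumptions for illustration.
\begin{verbatim}
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # Parallel 3x3 convolutions with different dilation rates capture context
        # at several receptive-field sizes; padding=r keeps the spatial size fixed.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
\end{verbatim}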
\subsection{nnUnet\_RASPP}
\label{subsection:nnUnet_RASPP}
\begin{figure}[h]
\centerline{\includegraphics[width=11cm]{nnUnet_RASPP.png}}
\caption{A high level illustration of nnUnet\_RASPP architecture with the ASPP block and the residual connections.}
\label{fig:nnUnet_RASPP}
\end{figure}
Inspired by the success of nnUNet \cite{isensee:NPG2021}, we enhance the framework by incorporating residual connections and an ASPP block into the network's architecture to address the problem of data source variation. Residual connections are added to every convolutional layer in both the encoding and decoding paths to combat vanishing gradients, and the ASPP block is added at the input layer of the encoding path to mitigate the problem of fluid variance.
The nnUnet\_RASPP architecture is shown in Fig.~\ref{fig:nnUnet_RASPP} above.
\section{Experiments}
\label{section:experiments}
\subsection{Dataset}
\begin{figure}[h]
\centerline{\includegraphics[width=13cm]{dataset.png}}
\caption{B-scan examples of raw images (column 1) and their corresponding annotated masks (column 2) from OCT volumes acquired with the 3 device vendors (rows): Cirrus, Spectralis and Topcon. The classes are coloured as follows: black for the background, blue for the Intraretinal Fluid (IRF), yellow for the Subretinal Fluid (SRF) and red for the Pigment Epithelium Detachments (PED). }
\label{fig:dataset_annotation}
\end{figure}
nnUnet\_RASPP was validated on the MICCAI 2017 RETOUCH challenge dataset \cite{bogunovic:IEEE2019}. The dataset is publicly available and consists of 112 OCT volumes of patients suffering from AMD and DME, collected with devices from 3 manufacturers (Cirrus, Spectralis and Topcon) at 3 clinical centers: the Medical University of Vienna (MUV) in Austria, Erasmus University Medical Center (ERASMUS) and Radboud University Medical Center (RUNMC) in The Netherlands.
The dimensions of the OCT volumes per vendor are as follows: each Cirrus volume consists of 128 B-scans of $512\times1024$ pixels, each Spectralis volume consists of 49 B-scans of $512\times496$ pixels, and each Topcon volume consists of 128 B-scans of $512\times885$ (T-2000) or $512\times650$ (T-1000) pixels.
The training set consists of 70 volumes: 24, 24, and 22 acquired with Cirrus, Spectralis, and Topcon, respectively. Both the raw volumes and the annotated masks of the training set are publicly available.
The testing set consists of 42 OCT volumes, 14 per device vendor. The raw input of the testing set is publicly available, but the corresponding annotated masks are held by the organizers of the challenge. Submission and evaluation of predictions on the testing dataset are arranged privately with the organizers, and the results are sent to the participants.
Manual annotation was done by 6 expert graders from 2 medical centers: MUV (4 graders supervised by an ophthalmology resident) and RUNMC (2 graders supervised by a retinal specialist). The dataset is annotated with 4 classes: the background labelled as 0 and 3 fluids, namely Intraretinal Fluid (IRF) labelled as 1, Subretinal Fluid (SRF) labelled as 2 and Pigment Epithelium Detachments (PED) labelled as 3.
The RETOUCH dataset is particularly interesting because of its high level of variability: it was collected using multiple device vendors, the sizes and numbers of B-scans vary per device vendor, and it was collected and annotated in multiple clinical centers.
Also, for fair comparison, the annotated testing set is held by the organizers and submissions are capped at a maximum of 3 per participating team.
\subsection{Training and Testing}
Training was done on the 70 OCT volumes of the training set (both raw and mask volumes). The estimated probabilities and predicted segmentations of the testing set (42 raw volumes) were submitted to the challenge organizers for evaluation against the held-out masks.
Training was done for both the nnUnet (as a baseline) and nnUnet\_RASPP architectures, with the same experimental setup for both.
Also, to further demonstrate the robustness and generalisability of nnUnet\_RASPP, the algorithm was trained on OCT volumes from two vendor devices and tested on the third, so that OCT volumes from the third vendor device were not seen during training. For this experiment, two sets of weights were generated: (1) training on 46 OCT volumes from Spectralis (24 OCT volumes) and Topcon (22 OCT volumes) and evaluating on 14 OCT volumes from the Cirrus testing set, and (2) training on
48 OCT volumes from Cirrus (24 OCT volumes) and Spectralis (24 OCT volumes) and evaluating on 14 OCT volumes from the Topcon testing set. Again, the same experimental settings were used for all experiments.
In the detection task, the estimated probabilities of the presence of each fluid type are plotted using the receiver operating characteristic (ROC) curve, and the area under the curve (AUC), which measures the ability of a binary classifier to distinguish between classes, is used as the evaluation
metric. The AUC gives a score between 0 and 1, with 1 being the perfect score and 0 the worst.
For the segmentation task, two evaluation metrics are used to measure the performance of the algorithms: (1) the Dice Score (DS), which is twice the size of the intersection divided by the sum of the sizes of the predicted and ground-truth sets. It measures the overlap of the pixels in the range from 0 to 1, with 1 being the perfect score and 0 the worst.
(2) The Absolute Volume Difference (AVD), which is the absolute difference between the predicted and the ground-truth fluid volumes. The value ranges from 0 to 1, with 0 being the best result and 1 the worst.
The equation to calculate the DS is shown on Eqn~(\ref{eqn:dsc}) and that for AVD in Eqn~(\ref{eqn:avd}) below.
\begin{equation}
\label{eqn:dsc}
DSC = \frac{ 2 |X \cap Y| } {|X|+|Y|}
\end{equation}
\begin{equation}
\label{eqn:avd}
AVD = \left| \, |X|-|Y| \, \right|
\end{equation}
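A minimal Python sketch of the three metrics for a single fluid class is given below; binary masks are assumed, and the per-voxel volume scaling used by the challenge is omitted.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def dice_score(pred, truth):
    """DSC = 2 * |intersection| / (|X| + |Y|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

def abs_volume_difference(pred, truth):
    """Absolute difference between predicted and ground-truth fluid volumes (in voxels)."""
    return abs(int(pred.astype(bool).sum()) - int(truth.astype(bool).sum()))

def detection_auc(presence_labels, presence_probs):
    """AUC for per-volume fluid detection; labels are 0/1, probs are estimated probabilities."""
    return roc_auc_score(presence_labels, presence_probs)
\end{verbatim}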
The models were trained on a GPU Server with NVIDIA RTX A6000 48GB.
\subsection{Results}
In this section we report the performance for the detection task, measured by the Area Under the Curve (AUC), and for the segmentation task, measured by the Dice Score (DS) and the Absolute Volume Difference (AVD), for the proposed nnUnet and nnUnet\_RASPP, and compare our results to the current state-of-the-art architectures.
The segmentation performance grouped by segment class per algorithm, measured in DS, is shown in Table~\ref{tab:results_ds} with the corresponding diagram in Fig.~\ref{fig:ds_bar_chart}, and that measured in AVD is shown in Table~\ref{tab:results_avd} with the corresponding diagram in Fig.~\ref{fig:avd_bar_chart}.
A detailed breakdown of the DS and AVD per vendor device is shown in
Table~\ref{tab:results_cirrus_ds_avd_per_device}, with the corresponding diagrams of the DS and AVD in Fig.~\ref{fig:chart_ds_per_device} and Fig.~\ref{fig:chart_avd_per_device} respectively.
Table~\ref{tab:results_dc_avd_cirrus_topcon}, with its corresponding diagram in Fig.~\ref{fig:cirrus_topcon_ds}, shows the results, measured in DS, when training on 2 vendor devices from the training set and testing on the third device from the held-out testing set. In this case, because submissions of predicted segmentations on the testing set are capped at 3 per team, results for nnUnet are unavailable.
The detection performance grouped by segment class per algorithm, measured by the AUC, is shown in Table~\ref{tab:results_auc} with the corresponding diagram in Fig.~\ref{fig:auc_bar_chart}.
The visualizations, with orange arrows highlighting the fine details captured by nnUnet\_RASPP when trained on two vendor devices and tested on the third, are shown in Fig.~\ref{fig:result_visualization} and Fig.~\ref{fig:zoom}.
From these results, we notice the following:
\begin{enumerate}
\item Our proposed algorithms, nnUnet\_RASPP and nnUnet, outperform the current state-of-the-art architectures by a clear margin, with mean DS of 0.823 and 0.817 respectively, and mean AVD of 0.036 for nnUnet and 0.041 for nnUnet\_RASPP.
\item The proposed nnUnet obtained a perfect AUC score of 1 for all three fluid classes, and
nnUnet\_RASPP obtained AUC scores of 0.93, 0.97, and 1.0 for IRF, SRF, and PED respectively.
\item A detailed breakdown of the DS and AVD shows that both nnUnet\_RASPP and nnUnet outperform the current state-of-the-art architectures by a clear margin for all 3 data sources.
\item We noticed an increase in performance when training on the training sets from two vendor devices and testing on the third vendor's testing set, scoring a mean DS of 0.84.
\item Both nnUnet\_RASPP and nnUnet demonstrate a high level of robustness and generalisability, with consistently high performance measured in DS and AVD, including when tested on data from a third device vendor not seen during training.
\item nnUnet\_RASPP and nnUnet were the only two algorithms to maintain consistently high performance and generalisability across all 3 data sources.
\item The dataset acquired from Topcon was the most difficult to segment, with nnUnet\_RASPP and nnUnet scoring a mean DS of 0.81 each.
\item Apart from the IRF class, nnUnet\_RASPP has the best DS in every single class when compared to the other models/teams.
\item Further evaluation of the generalisability and performance of the proposed methods, training on data from 2 sources and testing on the testing set of the third source not seen during training, shows that the nnUnet\_RASPP architecture still outperforms the current state-of-the-art architectures by a clear margin, scoring a mean DS of 0.86 and 0.81 for Cirrus (trained on Topcon and Spectralis) and Topcon (trained on Cirrus and Spectralis) respectively. In this case, because the number of submissions of predicted segmentations for evaluation is capped at 3 per team, we are unable to report the performance of nnUnet.
\end{enumerate}
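For readers unfamiliar with the two segmentation metrics, the following minimal sketch shows how DS and AVD can be computed from binary fluid masks with NumPy. It is an illustration under simplifying assumptions (binary masks, a placeholder voxel volume), not the official RETOUCH evaluation code.
\begin{verbatim}
import numpy as np

def dice_score(pred, gt):
    # Dice score between two binary fluid masks
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom

def absolute_volume_difference(pred, gt, voxel_volume_mm3):
    # AVD between the predicted and annotated fluid volumes
    return abs(int(pred.sum()) - int(gt.sum())) * voxel_volume_mm3

# toy example on random volumes (placeholders, not RETOUCH scans)
rng = np.random.default_rng(0)
pred = rng.random((64, 128, 128)) > 0.5
gt = rng.random((64, 128, 128)) > 0.5
print(dice_score(pred, gt), absolute_volume_difference(pred, gt, 0.01))
\end{verbatim}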
\begin{table}[H]
\addtolength{\tabcolsep}{12pt}
\centering
\begin{tabular}{l c c c c}
\toprule\toprule
Teams & IRF & SRF & PED & \thead{Mean} \\
\midrule
nnUnet\_RASPP &0.84 & \textbf{0.80} & \textbf{0.83} & \textbf{0.823} \\
nnUnet & \textbf{0.85} & 0.78 & 0.82 & 0.817 \\
SFU & 0.81 & 0.75 & 0.74 & 0.78 \\
IAUNet\_SPP\_CL \cite{xing:IEEE2022} & 0.79 & 0.74 & 0.77 & 0.77 \\
UMN & 0.69 & 0.70 & 0.77 & 0.72 \\
MABIC & 0.77 & 0.66 & 0.71 & 0.71 \\
RMIT & 0.72 & 0.70 & 0.69 & 0.70 \\
RetinAI & 0.73 & 0.67 & 0.71 & 0.70 \\
Helios & 0.62 & 0.67 & 0.66 & 0.65 \\
NJUST & 0.56 & 0.53 & 0.64 & 0.58 \\
UCF & 0.49 & 0.54 & 0.63 & 0.55 \\
\end{tabular}
\caption{Table of the Dice Scores (DS) by segment classes (columns) and teams (rows) for training on the entire 70 OCT volumes of the training set and testing on the held-out 42 OCT volumes of the testing set. }
\label{tab:results_ds}
\end{table}
\begin{figure}[H]
\centerline{\includegraphics[width=12cm]{ds_visualisation.png}}
\caption{Performance comparison of segmentation, measured in DS, of the proposed methods (nnUnet\_RASPP and nnUnet) together with the current state-of-the-art algorithms, grouped by segment class, when trained on the entire 70 OCT volumes of the training set and tested on the held-out 42 OCT volumes of the testing set.
}
\label{fig:ds_bar_chart}
\end{figure}
\begin{table}[h]
\addtolength{\tabcolsep}{12pt}
\centering
\begin{tabular}{l c c c c}
\toprule\toprule
Teams & IRF & SRF & PED & \thead{Mean} \\
\midrule
nnUnet & \textbf{0.019} & 0.017 & 0.074 & \textbf{0.036} \\
IAUNet\_SPP\_CL \cite{xing:IEEE2022} & 0.021 & 0.026 & \textbf{0.061} & \textbf{0.036} \\
nnUnet\_RASPP &0.023 & \textbf{0.016} & 0.083 & 0.041 \\
SFU & 0.030 & 0.038 & 0.139 & 0.069 \\
UMN & 0.091 & 0.029 & 0.114 & 0.078 \\
MABIC & 0.027 & 0.059 & 0.163 & 0.083 \\
RMIT & 0.040 & 0.072 & 0.1820 & 0.098 \\
RetinAI & 0.077 & 0.0419 & 0.2374 & 0.118 \\
Helios & 0.0517 & 0.055 & 0.288 & 0.132 \\
NJUST & 0.1130 & 0.0963 & 0.248 & 0.153 \\
UCF & 0.2723 & 0.1076 & 0.2762 & 0.219 \\
\end{tabular}
\caption{Table of the Absolute Volume Difference (AVD) by segment classes (columns) and teams (rows) for training on the entire 70 OCT volumes of the training set and testing on the held-out 42 OCT volumes of the testing set. }
\label{tab:results_avd}
\end{table}
\begin{figure}[H]
\centerline{\includegraphics[width=12cm]{avd_visualisation.png}}
\caption{Performance comparison of segmentation, measured in AVD, of the proposed methods (nnUnet\_RASPP and nnUnet) together with the current state-of-the-art algorithms, grouped by segment class, when trained on the entire 70 OCT volumes of the training set and tested on the held-out 42 OCT volumes of the testing set.
}
\label{fig:avd_bar_chart}
\end{figure}
\begin{table}
\addtolength{\tabcolsep}{10pt}
\begin{center}
\begin{tabular}{ c|cc|cc|cc }
\hline
\multicolumn{7}{c}{\thead{Cirrus}} \\
\hline
\hline
Teams & \multicolumn{2}{c|}{IRF}
& \multicolumn{2}{c|}{SRF}
& \multicolumn{2}{c}{PED}
\\
& DS & AVD & DS & AVD & DS & AVD \\
\hline
nnUnet\_RASPP & \textbf{0.91} & \textbf{0.00670} & \textbf{0.80} & \textbf{0.00190} & \textbf{0.89 } & 0.021700 \\
\hline
nnUnet & \textbf{0.91} & 0.00850 & \textbf{0.80} & \textbf{0.00190} & 0.88 & \textbf{0.02060 } \\
\hline
SFU & 0.83 & 0.020388 & 0.72 & 0.008069 & 0.73 & 0.116385 \\
\hline
UMN & 0.73 & 0.076024 & 0.62 & 0.007309 & 0.82 & 0.023110 \\
\hline
MABIC & 0.79 & 0.018695 & 0.67 & 0.008188 & 0.73 & 0.091524 \\
\hline
RMIT & 0.85 & 0.037172 & 0.64 & 0.005207 & 0.76 & 0.079259 \\
\hline
RetinAI & 0.77 & 0.046548 & 0.66 & 0.008857 & 0.82 & 0.040525 \\
\hline
Helios & 0.70 & 0.038073 & 0.66 & 0.008313 & 0.69 & 0.097135 \\
\hline
NJUST & 0.57 & 0.077267 & 0.55 & 0.024092 & 0.69 & 0.144518 \\
\hline
UCF & 0.57 & 0.174140 & 0.54 & 0.028924 & 0.66 & 0.215379 \\
\hline
\hline
\multicolumn{7}{c}{\thead{Spectralis}} \\
\hline
\hline
Teams & \multicolumn{2}{c|}{IRF}
& \multicolumn{2}{c|}{SRF}
& \multicolumn{2}{c}{PED}
\\
& DS & AVD & DS & AVD & DS & AVD \\
\hline
nnUnet\_RASPP & \textbf{0.89} & \textbf{0.030100} & 0.68 & \textbf{0.008400} & \textbf{0.81 } & \textbf{0.068600} \\
\hline
nnUnet & \textbf{0.89} & 0.031400 & 0.62 & 0.012600 & 0.80 & 0.073600 \\
\hline
SFU & 0.87 & 0.033594 & \textbf{0.73} & 0.020017 & 0.76 & 0.135562 \\
\hline
UMN & 0.76 & 0.072541 & 0.72 & 0.013499 & 0.74 & 0.121404 \\
\hline
MABIC & 0.83 & 0.036273 & 0.59 & 0.033384 & 0.75 & 0.181842 \\
\hline
RMIT & 0.69 & 0.121642 & 0.67 & 0.026377 & 0.70 & 0.228323 \\
\hline
RetinAI & 0.77 & 0.026921 & 0.65 & 0.036062 & 0.71 & 0.120528 \\
\hline
Helios & 0.61 & 0.030149 & 0.53 & 0.035625 & 0.63 & 0.330431 \\
\hline
NJUST & 0.60 & 0.080740 & 0.38 & 0.076071 & 0.52 & 0.412231 \\
\hline
UCF & 0.41 & 0.407741 & 0.31 & 0.155769 & 0.52 & 0.414739 \\
\hline
\hline
\multicolumn{7}{c}{\thead{Topcon}} \\
\hline
\hline
Teams & \multicolumn{2}{c|}{IRF}
& \multicolumn{2}{c|}{SRF}
& \multicolumn{2}{c}{PED}
\\
& DS & AVD & DS & AVD & DS & AVD \\
\hline
nnUnet\_RASPP & 0.72 & 0.032500 & \textbf{0.93} & 0.037800 & \textbf{0.78 } & 0.157300 \\
\hline
nnUnet & \textbf{0.74} & \textbf{0.015900} & 0.92 & \textbf{0.036300} & \textbf{0.78 } & \textbf{0.127700} \\
\hline
SFU & 0.72 & 0.039515 & 0.80 & 0.085907 & 0.74 & 0.164926 \\
\hline
UMN & 0.59 & 0.125454 & 0.77 & 0.066680 & 0.76 & 0.197794 \\
\hline
MABIC & 0.68 & 0.025097 & 0.73 & 0.134050 & 0.65 & 0.215687 \\
\hline
RMIT & 0.63 & 0.072609 & 0.78 & 0.094004 & 0.60 & 0.404842 \\
\hline
RetinAI & 0.66 & 0.045674 & 0.70 & 0.171808 & 0.60 & 0.385178 \\
\hline
Helios & 0.56 & 0.086773 & 0.81 & 0.119888 & 0.65 & 0.435057 \\
\hline
NJUST & 0.52 & 0.181237 & 0.66 & 0.188827 & 0.70 & 0.187733 \\
\hline
UCF & 0.48 & 0.235298 & 0.76 & 0.134283 & 0.61 & 0.200602 \\
\hline
\end{tabular}
\end{center}
\caption{Table of the Dice Score (DS) and Absolute Volume Difference (AVD) by segment classes (columns) and teams (rows) for training on the entire 70 OCT volumes of the training set and testing on the held-out 42 OCT volumes of the testing set, reported per device. }
\label{tab:results_cirrus_ds_avd_per_device}
\end{table}
\begin{figure}[H]
\centerline{\includegraphics[width=12cm]{device_ds.png}}
\caption{Performance comparison of segmentation, measured in DS, of the proposed methods (nnUnet\_RASPP and nnUnet) together with the current state-of-the-art algorithms, grouped by segment class, when trained on the entire 70 OCT volumes of the training set and tested on the held-out 42 OCT volumes of the testing set, reported per device.
}
\label{fig:chart_ds_per_device}
\end{figure}
\begin{figure}[H]
\centerline{\includegraphics[width=12cm]{device_avd.png}}
\caption{Performance comparison of segmentation, measured in AVD, of the proposed methods (nnUnet\_RASPP and nnUnet) together with the current state-of-the-art algorithms, grouped by segment class, when trained on the entire 70 OCT volumes of the training set and tested on the held-out 42 OCT volumes of the testing set, reported per device.
}
\label{fig:chart_avd_per_device}
\end{figure}
\begin{table}
\addtolength{\tabcolsep}{2pt}
\begin{center}
\begin{tabular}{ c|cc|cc|cc|cc }
\hline
\multicolumn{9}{c}{\thead{Cirrus}} \\
\hline
\hline
Teams & \multicolumn{2}{c|}{IRF}
& \multicolumn{2}{c|}{SRF}
& \multicolumn{2}{c}{PED}
& \multicolumn{2}{c}{Mean}
\\
& DS & AVD & DS & AVD & DS & AVD & DS & AVD \\
\hline
nnUnet\_RASPP & \textbf{0.90} & \textbf{0.0122} & \textbf{0.78} & \textbf{0.0031} & \textbf{0.89 } & \textbf{0.019} & \textbf{0.86 } & \textbf{0.0114}\\
\hline
SFU & 0.83 & 0.0204 & 0.72 & 0.0081 & 0.73 & 0.1164 & 0.76 & 0.0483 \\
\hline
UMN & 0.73 & 0.0760 & 0.62 & 0.0073 & 0.82 & 0.0231 & 0.72 & 0.0355 \\
\hline
MABIC & 0.79 & 0.0187 & 0.67 & 0.0082 & 0.73 & 0.0915 & 0.73 & 0.0395 \\
\hline
RMIT & 0.85 & 0.0372 & 0.64 & 0.0052 & 0.76 & 0.0793 & 0.75 & 0.0406 \\
\hline
RetinAI & 0.77 & 0.0466 & 0.66 & 0.0089 & 0.82 & 0.0405 & 0.75 & 0.0320 \\
\hline
Helios & 0.70 & 0.0381 & 0.66 & 0.0083 & 0.69 & 0.0971 & 0.68 & 0.0478 \\
\hline
SVDNA \cite{koch:MICCAI2022} & 0.61 & -- & 0.66 & -- & 0.74 & -- & 0.67 & -- \\
\hline
NJUST & 0.57 & 0.0773 & 0.55 & 0.0241 & 0.69 & 0.1446 & 0.60 & 0.0820 \\
\hline
UCF & 0.57 & 0.1741 & 0.54 & 0.0289 & 0.66 & 0.2154 & 0.59 & 0.1395 \\
\hline
\hline
\multicolumn{9}{c}{\thead{Topcon}} \\
\hline
\hline
Teams & \multicolumn{2}{c|}{IRF}
& \multicolumn{2}{c|}{SRF}
& \multicolumn{2}{c}{PED}
& \multicolumn{2}{c}{Mean}
\\
& DS & AVD & DS & AVD & DS & AVD & DS & AVD \\
\hline
nnUnet\_RASPP & 0.72 & \textbf{0.0201} & \textbf{0.93} & \textbf{0.0298} & \textbf{0.78 } & \textbf{0.2119} & \textbf{0.81 } & \textbf{0.0873} \\
\hline
SFU & 0.72 & 0.0395 & 0.80 & 0.0859 & 0.74 & 0.1649 & 0.75 & 0.0968\\
\hline
UMN & 0.59 & 0.1255 & 0.77 & 0.0667 & 0.76 & 0.1978 & 0.71 & 0.1300 \\
\hline
SVDNA \cite{koch:MICCAI2022} & 0.61 & -- & 0.80 & -- & 0.72 & -- & 0.71 & -- \\
\hline
MABIC & 0.68 & 0.0251 & 0.73 & 0.1341 & 0.65 & 0.2157 & 0.69 & 0.1250\\
\hline
RMIT & 0.63 & 0.0726 & 0.78 & 0.0940 & 0.60 & 0.4048 & 0.67 & 0.1905\\
\hline
RetinAI & 0.66 & 0.0457 & 0.70 & 0.1718 & 0.60 & 0.3852 & 0.65 & 0.2009\\
\hline
Helios & 0.56 & 0.0868 & 0.81 & 0.1199 & 0.65 & 0.4351 & 0.67 & 0.2139 \\
\hline
NJUST & 0.52 & 0.1812 & 0.66 & 0.1888 & 0.70 & 0.1877 & 0.63 & 0.1859\\
\hline
UCF & 0.48 & 0.2353 & 0.76 & 0.1343 & 0.61 & 0.2006 & 0.62 & 0.1900 \\
\hline
\end{tabular}
\end{center}
\caption{Table of the DS and AVD by segment classes (columns) and teams (rows) when trained on 48 OCT volumes from two device sources and evaluated on 14 OCT volumes of the testing set from the third device, which was not seen during training. }
\label{tab:results_dc_avd_cirrus_topcon}
\end{table}
\begin{figure}[H]
\centerline{\includegraphics[width=12cm]{cirrus_topcon_ds.png}}
\caption{Performance comparison of segmentation, measured in DS, of the proposed nnUnet\_RASPP together with the current state-of-the-art algorithms, grouped by segment class, when trained on 46 OCT volumes from Spectralis (24 volumes) and Topcon (22 volumes) and evaluated on the held-out testing set (Cirrus at the top and Topcon at the bottom).}
\label{fig:cirrus_topcon_ds}
\end{figure}
\begin{figure}[H]
\centerline{\includegraphics[width=12cm]{cirrus_topcon_avd.png}}
\caption{Performance comparison of segmentation, measured in AVD, of the proposed nnUnet\_RASPP together with the current state-of-the-art algorithms, grouped by segment class, when trained on 46 OCT volumes from Spectralis (24 volumes) and Topcon (22 volumes) and evaluated on the held-out testing set (Cirrus at the top and Topcon at the bottom).}
\label{fig:cirrus_topcon_avd}
\end{figure}
\begin{table}[H]
\addtolength{\tabcolsep}{12pt}
\centering
\begin{tabular}{l c c c c}
\toprule\toprule
Teams & IRF & SRF & PED & \thead{Mean} \\
\midrule
nnUnet &\textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} \\
SFU &\textbf{1.0} & \textbf{1.0} & \textbf{1.0} & \textbf{1.0} \\
nnUnet\_RASPP & 0.93 & 0.97 & \textbf{1.0} & 0.97 \\
Helios & 0.93 & \textbf{1.0} & 0.97 & 0.97 \\
UCF & 0.94 & 0.92 & \textbf{1.0} & 0.95 \\
MABIC & 0.86 & \textbf{1.0} & 0.97 & 0.94 \\
UMN & 0.91 & 0.92 & 0.95 & 0.93 \\
RMIT & 0.71 & 0.92 & \textbf{1.0} & 0.88 \\
RetinAI & 0.99 & 0.78 & 0.82 & 0.86 \\
NJUST & 0.70 & 0.83 & 0.98 & 0.84 \\
\end{tabular}
\caption{Table of the Area Under the Curve (AUC) by segment classes (columns) and teams (rows) for training on the entire 70 OCT volumes of the training set and testing on the held-out 42 OCT volumes of the testing set. }
\label{tab:results_auc}
\end{table}
\begin{figure}[H]
\centerline{\includegraphics[width=12cm]{auc.png}}
\caption{Performance comparison of detection, measured in AUC, of the proposed methods (nnUnet\_RASPP and nnUnet) together with the current state-of-the-art algorithms, grouped by segment class, when trained on the entire 70 OCT volumes of the training set and tested on the held-out 42 OCT volumes of the testing set.
}
\label{fig:auc_bar_chart}
\end{figure}
\begin{figure}[H]
\centering
\centerline{\includegraphics[width=12cm ]{visualisation.png}}
\caption{Examples of B-Scans illustrating the predicted output of nnUnet\_RASPP, showing the raw input, annotation mask, and prediction in columns, when trained on the training set of two vendor devices and tested on the training set of the third vendor device (Cirrus and Topcon in rows 1 and 2, respectively). Fine details captured by the model are indicated with orange arrows. }
\label{fig:result_visualization}
\end{figure}
\begin{figure}[H]
\centering
\centerline{\includegraphics[width=12cm ]{zoom.png}}
\caption{An example B-Scan illustrating the predicted output of nnUnet\_RASPP, showing the raw input, annotation mask, and prediction, magnified to highlight the fine details captured by the model (orange arrows). This example was obtained when training on the training set of the Spectralis and Topcon devices and testing on the training set of the Cirrus device.}
\label{fig:zoom}
\end{figure}
\section{Conclusions}
\label{section:conclusions}
In this work, we have investigated the problem of detecting and segmenting multiple fluids in retinal OCT volumes acquired from multiple device vendors. Inspired by the success of the nnUNet \cite{isensee:NPG2021}, we have enhanced the model's architecture to build a novel algorithm called nnUnet\_RASPP. Both nnUNet and nnUnet\_RASPP were evaluated on the MICCAI 2017 RETOUCH challenge dataset \cite{bogunovic:IEEE2019}. We submitted predictions for both architectures, and the experimental results show that, with respect to the current league table and other known published results, our algorithms outperform the current state-of-the-art architectures by a clear margin, occupying the first and second places for the DS, AVD, and AUC evaluations.
Our main contribution is the enhancement of the nnUNet by incorporating residual blocks and an ASPP block into the network's architecture to solve this particular problem.
The proposed algorithms provide useful information for further diagnosis and for monitoring the progression of retinal diseases such as AMD, DME, and glaucoma. In the future we plan to investigate our algorithms on other medical image datasets once they become publicly available.
\clearpage
\section{Ablation Studies}
\label{sec:ablation}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/ablation/ablation_skin.pdf}
\caption{
Qualitative ablation study for skin tone adjustment. For each identity, the upper and lower rows display the results before and after the skin tone adjustment, respectively.
Left to right: (a) input face images, (b) diffuse albedo, (c) diffuse shading, (d) diffuse reconstruction, (e) specular albedo, (f) specular shading, and (g) specular reconstruction.
}
\label{fig:ablation_skin}
\end{figure}
\begin{table}[t]
\caption{
Quantitative ablation study for makeup face reconstruction results.
}
\label{tab:table_1}
\begin{center}
\begin{tabular}{l|ccc}
\toprule
\ \ Complete data & RMSE & SSIM & LPIPS\\
\hline
\midrule
$FRN$ (w/o specular) & 0.056 & 0.953 & 0.091 \\
$FRN$ (w/ specular) & \ \ \ 0.053 $\downarrow$ & \ \ \ 0.956 $\uparrow$ & \ \ \ 0.088 $\downarrow$ \\
Rendered result & \color{gray}0.060 & \ \ \ \color{gray}0.968 $\Uparrow$ & \ \ \ \color{gray}0.062 $\Downarrow$ \\
\bottomrule
\toprule
\ \ \ \ \ \ \ Test data & RMSE & SSIM & LPIPS\\
\hline
\midrule
$FRN$ (w/o specular) & 0.058 & 0.948 & 0.102 \\
$FRN$ (w/ specular) & \ \ \ 0.055 $\downarrow$ & \ \ \ 0.951 $\uparrow$ & \ \ \ 0.099 $\downarrow$ \\
Rendered result & 0.063 & \ \ \ 0.964 $\Uparrow$ & \ \ \ 0.066 $\Downarrow$ \\
\bottomrule
\end{tabular}
\setlength{\belowcaptionskip}{10pt}
\end{center}
\end{table}
We conducted ablation studies on the 3D face reconstruction step. We performed a comparative experiment with and without skin tone adjustment and assessed the influence on the components of 3DMM. We selected two images from FFHQ~\cite{FFHQ} with strong illumination. As illustrated in Fig.~\ref{fig:ablation_skin}, the skin tone adjustment had only a slight effect, or none at all, on the specular. Although the diffuse reconstruction did not change, the balance of the diffuse albedo and diffuse shading was significantly adjusted. Limited by the 3DMM texture, the diffuse albedos of the two faces before adjusting the skin tones had almost similar colors, resulting in an incorrect estimation of the diffuse shading. This would be misleading for the subsequent refinement and makeup extraction step. To mitigate this error, we adjusted the skin tone so that the average color was the same as that of the original image.
Tab.~\ref{tab:table_1} displays the results of the quantitative evaluation of the 3D makeup face reconstruction. We trained two face reconstruction networks with different illumination models: one used SH lighting without specular estimation, and the other was the full model. The rendered results using the bare skin, makeup, diffuse shading, and specular reconstruction textures are also listed for reference.
For the general evaluation, we used the complete makeup dataset, which was not used to train the 3D face reconstruction network. We marked the rendered results in gray because the generated maps used for rendering were trained on the complete dataset. Furthermore, the test makeup dataset, which was not used to train the makeup extraction network, was separated to evaluate the rendered results. The network containing the specular illumination model improved the accuracy of the reconstruction, thereby demonstrating the effectiveness of our model. Our final rendered makeup face exhibited an improvement in terms of perceptual similarity. Thus, the reconstructed results were more compatible with the visual evaluation, as we believe that makeup significantly influences human perception.
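As a small, hedged illustration of how the image-space errors in Tab.~\ref{tab:table_1} could be computed, the sketch below evaluates RMSE and SSIM between a rendered face and a target image using scikit-image; the arrays are random placeholders rather than our rendered results, and this is not the exact evaluation script used for the table.
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
rendered = rng.random((256, 256, 3))  # placeholder for a rendered makeup face
target = rng.random((256, 256, 3))    # placeholder for the ground-truth photo

rmse = np.sqrt(np.mean((rendered - target) ** 2))
ssim = structural_similarity(rendered, target, channel_axis=-1, data_range=1.0)
print(f"RMSE={rmse:.3f}  SSIM={ssim:.3f}")
\end{verbatim}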
\section*{Acknowledgements}
We thank Prof. Yuki Endo in University of Tsukuba for the suggestions and comments on our research. We also thank the reviewers for their constructive feedback and suggestions, which helped us improve our paper.
\section{Applications}
\label{sec:application}
We explore several makeup-related applications using the results of our method and compare them with state-of-the-art methods.
\subsection{3D Face Reconstruction of Makeup}
First, we demonstrate how the extracted makeup dataset can be used to enhance the 3D face reconstruction of makeup. The same process as the diffuse albedo model construction of FLAME~\cite{FLAME:SiggraphAsia2017} was followed, and the collected makeup textures $(1 - \mathbf{A}) \odot \mathbf{D}_{m}^{m}$ were used to construct a PCA-based statistical model of makeup. The randomly sampled textures from the makeup model are depicted in Fig.~\ref{fig:result_pca_layer}. The makeup model is an extension of the diffuse albedo model, and the new model $\mathbf{D}_{c}^{'}$ can be formulated as:
\begin{align}
\mathbf{D}_{c}^{'} = \mathbf{D}_{c} + \mathbf{D}_{c}^{m} ,
\end{align}
where $\mathbf{D}_{c}^{m}$ is the makeup model.
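As a minimal sketch of how such a PCA-based makeup model could be built from the collected UV makeup textures, the following code fits scikit-learn's PCA on flattened textures. The array names, texture resolution, and number of components are placeholders, not the settings of our actual model.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

# stand-in for the collected makeup maps (1 - A) * D_m^m, shape (N, H, W, 3)
N, H, W = 500, 64, 64
makeup_textures = np.random.rand(N, H, W, 3)

pca = PCA(n_components=50)          # number of makeup basis vectors (assumed)
pca.fit(makeup_textures.reshape(N, -1))
basis = pca.components_             # rows play the role of the makeup basis

def sample_makeup(params):
    # reconstruct a makeup texture D_c^m from PCA coefficients
    return (pca.mean_ + params @ basis).reshape(H, W, 3)

random_makeup = sample_makeup(
    np.random.randn(50) * np.sqrt(pca.explained_variance_))
\end{verbatim}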
We used an optimization-based approach to reconstruct the 3D face from the makeup portraits because no large-scale makeup dataset is available to train a neural network. A comparison of the 3D makeup face reconstruction using the different albedo models $\mathbf{D}_{c}$ and $\mathbf{D}_{c}^{'}$ is presented in Fig.~\ref{fig:result_pca}. $\mathbf{D}_{c}^{'}$ could recover the makeup and improve the accuracy of the entire reconstruction, especially for lipsticks and eye shadows. The shapes of the lips and eyes were also matched more effectively by extending the ability of the diffuse albedo.
We believe that this makeup database can be explored further, for example by using advanced image generation techniques such as StyleGAN/StyleGAN2~\cite{FFHQ, StyleGAN2} or diffusion models~\cite{diffusion}. The accuracy of the reconstruction results also requires further quantitative evaluation, and we consider these as future research topics.
\subsection{Illumination-Aware Makeup Transfer}
We used the extracted makeup for makeup transfer by employing the same method as that for the makeup extraction network. Equation (\ref{eq:alhpa_blend}) was followed to blend a new makeup face, which was subsequently projected onto the original image.
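To make this blending step concrete, the sketch below applies a reference makeup to a source face in UV space while reusing the source illumination, in the spirit of Eq.~(\ref{eq:alhpa_blend}); all texture arrays are random placeholders standing in for the outputs of our pipeline.
\begin{verbatim}
import numpy as np

H, W = 512, 512                          # UV texture resolution (placeholder)
bare_skin_src = np.random.rand(H, W, 3)  # source bare skin D_n
diffuse_shade = np.random.rand(H, W, 3)  # source diffuse shading
specular_src = np.random.rand(H, W, 1)   # source specular reconstruction
makeup_ref = np.random.rand(H, W, 3)     # reference makeup D_m^m
alpha_ref = np.random.rand(H, W, 1)      # reference alpha matte A

# alpha blend: new makeup diffuse albedo with the source identity
albedo = alpha_ref * bare_skin_src + (1.0 - alpha_ref) * makeup_ref

# re-light with the *source* illumination so shadows/highlights are preserved
transferred = np.clip(albedo * diffuse_shade + specular_src, 0.0, 1.0)
\end{verbatim}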
The functionalities of our method and previous methods are summarized in Tab.~\ref{tab:table_function}. Similar to the approach of CPM~\cite{m_Nguyen-etal-CVPR21}, the makeup textures are represented in UV space to resolve face misalignment. The face region that is used for transfer can be specified, and the makeup is editable. In addition to these advantages, our approach extracts makeup and uses a completed UV representation, which can handle occlusion and illumination.
Our makeup transfer results are depicted in
Fig.~\ref{fig:result_complex_transfer}. The reference images contained various complex factors, including occlusion, face misalignment, and lighting conditions. We extracted the makeup from the reference image. Subsequently, we transferred the makeup to the source image while maintaining the original illumination of the source image.
A qualitative comparison with state-of-the-art makeup transfer methods is depicted in Fig.~\ref{fig:result_transfer}, in which the source and reference images have different illumination. As existing makeup transfer methods do not consider illumination, they transfer not only the makeup but also the effect of the illumination. The first two rows present the exchanged makeup results of two identities, one of which was evenly illuminated, whereas the other had shadows on the cheeks. The existing methods could not retain the illumination from the source image, which resulted in a mismatch between the face and the surrounding environment. Note that our method preserved the illumination and shade of the original images on the cheeks.
The final row shows an example in which the source and reference images have contrasting illumination. Our method enabled a natural makeup transfer, whereas the other methods were affected by the illumination, resulting in undesirable results. Note that the pioneering method known as BeautyGAN~\cite{Li:2018:MM} (a) was relatively stable. However, as it is not suitable for transferring eye shadow, the makeup in the source image was not cleaned up.
\subsection{Illumination-Aware Makeup Interpolation and Removal}
As illustrated in Fig.~\ref{fig:result_interpolate}, compared to existing makeup transfer methods, our method could achieve makeup interpolation and removal without a reference image. Moreover, our makeup interpolation maintained constant illumination conditions and achieved natural makeup interpolation. The first two rows present the interpolation and removal results of the face images. The specular and diffuse shading were not changed while the makeup was adjusted from heavy to light. The third row shows how the alpha matte changed in the makeup interpolation process.
The alpha matte $\mathbf{A}$ is adjusted to $\mathbf{A}_{\sigma}$ as follows:
\begin{align}
\mathbf{A}_{\sigma}(p) = clamp(\mathbf{A}(p) + \sigma, 0, 1) , \label{eq:alhpa_lerp}
\end{align}
where $\mathbf{A}_\sigma(p)$ and $\mathbf{A}(p)$ are values of $\mathbf{A}_\sigma$ and $\mathbf{A}$ at pixel $p$, respectively, and $\sigma \in [0, 1]$.
$\mathbf{A}_{\sigma}$ was eventually clipped to between 0 and 1. In this sample the makeup was mainly lipstick and eye shadow; accordingly, the original $\mathbf{A}$ was black around the lips and eyes, meaning that in these regions the skin color was barely used and the makeup dominated. As $\sigma$ increased, $\mathbf{A}_{\sigma}$ became whiter, so less makeup was applied and makeup interpolation was achieved. As the illumination was disentangled from the makeup, it was possible to relight the face while adjusting the makeup. The results are presented in the final row.
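The following minimal sketch implements the alpha adjustment of Eq.~(\ref{eq:alhpa_lerp}) and re-blends the face at several makeup intensities; the texture arrays are placeholders for the outputs of our extraction network.
\begin{verbatim}
import numpy as np

def adjust_alpha(alpha, sigma):
    # shift the alpha matte and clamp it to [0, 1]
    return np.clip(alpha + sigma, 0.0, 1.0)

def blend(alpha, bare_skin, makeup):
    # alpha-blend bare skin and makeup in UV space
    return alpha * bare_skin + (1.0 - alpha) * makeup

H, W = 512, 512                      # placeholder resolution
alpha = np.random.rand(H, W, 1)      # A
bare_skin = np.random.rand(H, W, 3)  # D_m^b
makeup = np.random.rand(H, W, 3)     # D_m^m

# sigma = 0 keeps the original makeup; sigma = 1 removes it completely
frames = [blend(adjust_alpha(alpha, s), bare_skin, makeup)
          for s in np.linspace(0.0, 1.0, 5)]
\end{verbatim}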
In addition to the above applications, the extracted makeup can be used for makeup recommendation; please refer to~\cite{Scherbaum11Makeup, 10.5555/3298239.3298377}. The makeup textures can also facilitate subsequent processing in traditional graphics pipelines, such as physically-based makeup rendering.
\section{Limitations and Conclusions}
\label{sec:conclution}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\linewidth]{fig/conclusion/result_normal}
\caption{Comparison of coarse and refined normal.}
\label{fig:result_normal}
\end{figure}
\noindent\textbf{Limitations.}
Although we integrated a color adjustment to alleviate the inherent skin color bias, the 3DMM skin colors still exhibited a problem. Namely, because the skin colors of the FLAME model~\cite{FLAME:SiggraphAsia2017} were obtained by unwrapping face images of the FFHQ dataset~\cite{FFHQ},
they contained baked-in lighting effects.
Therefore, our coarse albedos that were obtained using the FLAME model contained shading effects, which further caused errors in our refinement step and makeup extraction.
Although our method yielded satisfactory results in most cases, we would like to explore a better albedo model.
Our network erroneously extracts a makeup-like material from a makeup-less face because it is currently not trained with paired data of makeup-less inputs and makeup-less outputs.
Although our refinement step greatly improves the diffuse albedos (see Fig.~\ref{fig:result_albedo}(b)(d)), the difference in the face geometry is subtle before and after refinement. Fig.~\ref{fig:result_normal} shows that the normals around the eyes and mouth were smoothed. We would like to improve the face geometry as well.
At present, we extract makeup from the diffuse albedo, but real-world makeup also has a specular component. We would like to account for makeup BRDFs for a more physically-plausible makeup transfer.
Quantitative evaluation is difficult in makeup-related research because no public ground-truth dataset of accurately aligned face images before and after makeups is available.
Therefore, it is also essential to establish a quantitative evaluation criterion.
\noindent\textbf{Conclusions.}
We have presented the first method for extracting makeup for 3D face models from a single makeup portrait, which consists of the following three steps; 1) the extraction of coarse facial materials such as geometry and diffuse/specular albedos via extended regression-based inverse rendering using 3DMM~\cite{FLAME:SiggraphAsia2017}, 2) a newly designed optimization-based refinement of the coarse materials, and 3) a novel network that is designed for extracting makeup. Thanks to the disentangled outputs, we can achieve novel applications such as illumination-aware (i.e., relightable) makeup transfer, interpolation, and removal. The resultant makeup is well aligned in the UV space, from which we built a large-scale makeup texture dataset and a PCA-based makeup model. In future work, we would like to overcome the current limitations and explore better statistical models for facial makeup.
\section{Experiments}
\label{sec:experiments}
We evaluated the results of our approach. First, we present the intermediate outputs of each step of the framework (see Fig.~\ref{fig:result_process}). Thereafter, we discuss the final outputs (see Fig.~\ref{fig:result_final}). Subsequently, we analyze the albedo texture associated with the makeup and observe how the makeup changes the albedo texture (see Fig.~\ref{fig:result_albedo}). Finally, we provide several examples with complex illumination and examine the decomposition (see Fig.~\ref{fig:result_illumination}). Note that the original images of the specular reconstruction were too dark to display, so we adjusted the contrast for better display. Please refer to the supplemental material to see the original images.
\begin{figure*}[t]
\centering
\includegraphics[width=0.95 \linewidth]{fig/result/result_process.pdf}
\caption{Intermediate outputs for each step. Left to right and top to bottom: (a) input makeup portraits,
(b) facial inverse rendering (diffuse albedo, diffuse shading, specular shading, specular albedo, and specular reconstruction),
(c) unwrapped and completed textures,
(d) final refined facial materials (diffuse shading, bare skin, specular reconstruction, and makeup),
and (e) rendered bare skin face and rendered makeup face using textures from (d).}
\label{fig:result_process}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/result/result_final.pdf}
\caption{
Final outputs of our framework.
(a) Input makeup portraits, and (b) fully reconstructed renderings and corresponding textures.
The columns from (c) to (f) are similar to those of Fig.~\ref{fig:teaser}.
}
\label{fig:result_final}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/result/result_albedo.pdf}
\caption{
Texture outputs relating to makeup.
Left to right: (a) input makeup portraits, (b) coarse diffuse albedo, (c) completed UV textures, (d) refined diffuse albedo, (e) bare skin, and (f) makeup.}
\label{fig:result_albedo}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/result/result_illumination.pdf}
\caption{Outputs of decomposed refined facial materials in complex illumination conditions. Left to right: (a) input makeup portraits, (b) completed UV textures, (c) diffuse albedo, (d) diffuse shading, and (e) specular reconstruction.}
\label{fig:result_illumination}
\end{figure}
\noindent\textbf{Intermediate outputs of each step.}
Fig.~\ref{fig:result_process} depicts the intermediate outputs, which are related to the final textures in (d), to illustrate the effectiveness of each step. Four makeup portraits are presented in (a) with different facial features: a neutral face, a face with expression, a face pose with an angle, and a face with uneven illumination. The results of the 3D face reconstruction are shown in (b). It can be observed that the diffuse albedo of the 3DMM contained only a coarse texture without makeup. The specular shading was affected by the limited light source, and thus, the global illumination could not be recovered. Furthermore, the specular albedo was a coarse texture; therefore, the final specular reconstruction is not detailed. Although it was not possible to obtain a reconstruction of the makeup using only the 3D face reconstruction method, these coarse facial materials were helpful for subsequent steps.
We used the original texture and completed it to obtain the refined facial materials as well as makeup. The UV completion results are presented in (c). The final textures were refined using the outputs of the previous steps, as illustrated in (d). For better visualization, we show the blended makeup, which was calculated by $(1 - \mathbf{A}) \odot \mathbf{D}_{m}^{m}$. It can be observed that the makeup and bare skin were disentangled, the diffuse shading was adjusted, and the specular reconstruction was more detailed.
We combined the final textures to achieve effects such as rendering a bare skin face and a makeup face while the lighting remained invariant. These results demonstrate that our method is robust to different makeup, expressions, poses, and illumination.
\noindent\textbf{The final outputs.}
The final outputs are depicted in Fig.~\ref{fig:result_final}. We rendered the disentangled final UV textures separately for improved visualization: (c) bare skin, (d) bare skin with makeup, (e) bare skin with diffuse shading, and (f) bare skin with the full illumination model. Furthermore, we rendered the makeup faces in (b) using these textures which could be considered as a 3D face reconstruction of makeup portraits. It can be observed that the layers of bare skin, makeup, diffuse, and specular are disentangled. For example, a comparison of (c) and (d) indicates that makeup was added while no illumination was involved, and the makeup around the eyebrows, eyes, and lips was disentangled. The presence of makeup on the cheeks can also be clearly observed in the third identity from the top.
\noindent\textbf{Makeup in diffuse albedo.}
The textures relating to the makeup are depicted in Fig.~\ref{fig:result_albedo} to demonstrate the effect of the makeup changes on the diffuse albedo textures. This process represents the main concept under consideration for extracting the makeup. The reconstructed coarse diffuse albedo could not preserve the makeup effectively; although the reconstruction results exhibited some black makeup around the eyes, it was difficult to preserve the makeup beyond the scope of the 3DMM texture space (see (b)). Thus, we used the original texture directly. The completed UV textures, which contained makeup and involved illumination, are depicted in (c). Subsequently, the illumination was removed. A comparison of (d), (e), and (f) demonstrates the validity of our makeup extraction network.
Moreover, the final row presents an example of a makeup portrait with occlusion, for which our method was still effective in extracting the makeup. As we only used the sampled skin region while excluding the others, the missing regions were filled to become complete.
\noindent\textbf{Decomposition of illumination.}
We evaluate the results of the refined facial materials to demonstrate the capabilities of our illumination-aware makeup extraction. It can be observed from the outputs in Fig.~\ref{fig:result_illumination} that portraits containing uneven lighting and shadows (particularly around the bridge of the nose) were captured by diffuse shading, whereas the highlights were reflected in the specular reconstruction. Thus, the diffuse albedo textures became clean and flat, and our makeup extraction was more precise after removing the illumination.
The entire process of our method is executed in the UV space. UV-represented makeup offers numerous advantages. First, 3D makeup avatar creation becomes accessible when the same UV coordinates are used. Furthermore, such makeup can be extended to scanned 3D faces, in which case makeup without illumination will be helpful. Second, the makeup can be further divided into several parts using a corresponding face segmentation mask, which will enable specification of which makeup region is to be used. Third, the textures can be directly edited and incorporated into a traditional rendering pipeline. Finally, the disentangled makeup maps can be collected to form a makeup dataset, which will be useful for 3D makeup face reconstruction or makeup recommendation. We explore several applications in Sec.~\ref{sec:application}.
\section{Implementation Details}
\label{sec:implementation_detail}
\subsection{Coarse Facial Material Reconstruction}
We followed the training strategy of~\cite{deng2020accurate} and used the same datasets of approximately 260K face images. We used~\cite{facealignment} to detect and crop the faces for alignment. The image size was $256 \times 256$ and the resolution of the FLAME albedo textures was $256 \times 256$. We initialized the network with pre-trained weights~\cite{ILSVRC15} and modified the last layer to estimate our own 3DMM parameters. The batch size was 8, and the learning rate was $1 \times 10^{-4}$ using an Adam optimizer with 20 training epochs. For the shininess parameter $\mathbf{\rho}$, we set the initial value to 200 to achieve highlight effects.
\subsection{UV Completion and Facial Material Refinement}
We sampled approximately 100K images from FFHQ~\cite{FFHQ} and CelebA-HQ~\cite{karras2018progressive} in the UV space to train DSD-GAN for the UV completion, and the resolution of the UV textures was $512 \times 512$. Prior to sampling, the face images were segmented using the method of ~\cite{Yu_2018_ECCV} and only the skin region of the face was used. Subsequently, we performed a manual cleanup to remove the low-quality textures.
Eventually, 60,073 textures remained for training.
The optimization-based refinement process was executed with 500 iterations for each texture. The learning rate of the Adam optimizer was set to $1 \times 10^{-2}$ with a 0.1 learning rate decay. The kernel size of the Gaussian blur filter $K$ was set to 11.
\subsection{Makeup Extraction}
We combined two makeup datasets, namely the MT dataset~\cite{Li:2018:MM} and the LADN dataset~\cite{gu2019ladn}, which consist of 3,070 makeup images and 1,449 non-makeup images. A total of 300 makeup images were randomly selected for testing. By implementing the previous steps, both the makeup and non-makeup images were processed into albedo textures to train our network. We trained the network for 40 epochs with a batch size of 1. The Adam optimizer used a learning rate of $1 \times 10^{-4}$. The makeup extractor ${E}$ had the same architecture as the generator of DSD-GAN, and PatchGAN~\cite{isola2017image} was used for the discriminators $M$ and $B$.
We used Nvdiffrast~\cite{nvidiaffrast} for the differentiable renderer and trained the networks using a single NVIDIA GeForce RTX 2080 Ti GPU. Our training required approximately 3 days for the coarse facial reconstruction network and approximately 1 day for the makeup extraction network. Approximately 1 minute was required to process a texture in the refinement step.
\section{Introduction}
Facial makeup is an art of enhancing human appearance, dating back to ancient times. Currently, it is quite commonly used for beautification purposes to improve the quality of life.
Furthermore, facial makeup enriches the user experience of face-related applications such as VR/AR, video games, online commerce, and social camera apps.
These trends have been driving active research on facial makeup in computer graphics and computer vision. In particular, research on makeup for 3D facial models has been gaining increasing attention in the movie and advertising industries for digital humans.
To obtain facial makeup for 3D characters, the following three approaches are currently available; 1) direct painting, 2) capturing of real-world makeup, and 3) makeup transfer from 2D images.
Direct painting is labor-intensive for makeup artists and quite costly.
Capturing real-world makeup requires special devices~\cite{Scherbaum11Makeup, PBCR} and thus is also costly and not scalable.
2D makeup transfer~\cite{Li:2018:MM, makeuptransferSurvey} is the current mainstream of facial makeup research, exploiting a myriad of facial makeup photos available on the Internet.
However, most existing studies have focused on 2D-to-2D transfer and struggled with physical constraints.
For example, faces of in-the-wild photos frequently contain lighting effects such as specular highlights and shadows and occlusions with hands, possibly with various facial expressions and head poses.
Recent 3D face reconstruction techniques~\cite{sfsnetSengupta18, egger20203d} can handle these physical constraints, but none of the existing techniques focus on facial makeup.
In this paper, we propose the first integrated solution for extracting makeup for 3D facial models from a single portrait image.
Fig.~\ref{fig:teaser} shows example outputs of our method.
Our method exploits the strong facial prior of the 3D morphable model (3DMM)~\cite{FLAME:SiggraphAsia2017} via regression-based inverse rendering and extracts coarse facial materials such as geometry and diffuse/specular albedos as UV textures.
Unlike the existing regression-based techniques for facial inverse rendering~\cite{sfsnetSengupta18, egger20203d}, we further refine the coarse facial materials via optimization for higher fidelity.
To alleviate the inherent skin color bias in the 3DMM, we also integrate skin color adjustment inspired by color transfer.
From the refined diffuse albedo, we extract the bare skin, facial makeup, and an alpha matte.
The alpha matte plays a key role in various applications such as manual tweaking of the makeup intensity and makeup interpolation/removal.
The extracted makeup is well aligned in the UV space, from which we build a large-scale makeup texture dataset and a parametric makeup model using principal component analysis (PCA) for 3D faces.
By overlaying rendered 3D faces onto portrait images, we can achieve novel applications such as illumination-aware (\textit{i.e., relightable}) makeup transfer, interpolation, and removal, working on 2D faces (see Fig.~\ref{fig:introduction_app}).
The key contributions are summarized as follows:
\begin{itemize}
\item We present the first method to achieve illumination-aware makeup extraction for 3D face models from in-the-wild face images.
\item We propose a novel framework that improves each of the following steps;
(1) an extended 3D face reconstruction network that infers not only diffuse shading but also specular shading via regression,
(2) a carefully designed inverse rendering method to generate high-fidelity textures without being restricted by the limited lighting setup, and
(3) a novel procedure that is specially designed for extracting makeup by leveraging the makeup transfer technique.
The UV texture representation effectively integrates these three modules into a single framework.
\item Our extracted illumination-independent makeup of the UV texture representation facilitates many makeup-related applications. The disentangled maps are also editable forms. We employ the extracted makeup to build a PCA-based makeup model that is useful for 3D face reconstruction of makeup portraits.
\end{itemize}
\section{Approach}
\label{sec:approach}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{fig/method/method_0.pdf}
\caption{Overview of our framework.
Given a makeup portrait, we extract an illumination-independent bare skin and makeup in the UV space via the following three steps;
First, we reconstruct a 3D face to estimate the coarse facial materials using a 3D face reconstruction network $FRN$~\cite{deng2020accurate} (Sec.~\ref{sec:module_1}).
Second, we refine the coarse facial materials, which may have missing pixels owing to occlusion. We apply an inpainting network DSD-GAN~\cite{dsd-gan} and then apply optimization (Sec.~\ref{sec:module_2}).
Finally, we extract a bare skin, makeup, and an alpha matte from the refined diffuse albedo using makeup extraction network $E$ (Sec.~\ref{sec:module_3}).
}
\label{fig:method_0}
\end{figure*}
We design a coarse-to-fine texture decomposition process. As shown in Fig.~\ref{fig:method_0}, our framework is composed of three steps.
1) We estimate the coarse 3D facial materials using 3D face reconstruction by 3DMM fitting. In order to handle highlights, we extend a general 3DMM fitting algorithm. The reconstructed coarse facial materials are used for the refinement process in the next step (Sec.~\ref{sec:module_1}).
2) We propose an inverse rendering method to obtain refined 3D facial materials via optimization. First, we use the 3DMM shape obtained in the previous step to sample the image in UV space. Subsequently, a UV completion method is employed to obtain a high-fidelity texture of the entire face. Finally, using the completed texture as the objective and the coarse facial materials as priors, the refined facial materials are optimized (Sec.~\ref{sec:module_2}).
3) The refined diffuse albedo obtained from the previous step is used as the input. Inspired by the makeup transfer technique, we design a network to disentangle bare skin and makeup. The key idea is that the makeup albedo can be modeled as an alpha blend of bare skin and makeup, from which the makeup can be extracted (Sec.~\ref{sec:module_3}).
The details of each step are described in the following subsections.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{fig/method/method_2.pdf}
\caption{
Facial material optimization module for Step 2 (Sec.~\ref{sec:module_2}).
We optimize the coarse facial materials ${\mathbf{D}_c, \mathbf{N}_c, \mathbf{R}^s_c}$ and SH lighting $\mathbf{L}^{sh}_c$ so that the full reconstruction $\mathbf{R}_f$ resembles the completed texture $\mathbf{T}_f$.
The refined diffuse albedo $\mathbf{D}_{f}$, normal $\mathbf{N}_{f}$, and specular reconstruction $\mathbf{R}_{f}^{s}$ are the outputs.
$\oplus$ and $\otimes$ denote the per-pixel addition and multiplication, respectively.
}
\label{fig:method_2}
\end{figure*}
\subsection{Coarse Facial Material Reconstruction}
\label{sec:module_1}
We obtain the coarse facial materials using a 3D face reconstruction method based on regression-based inverse rendering.
We use the FLAME~\cite{FLAME:SiggraphAsia2017} model with specular albedo from AlbedoMM~\cite{Smith_2020_CVPR}. The diffuse and specular albedo of FLAME are defined in the UV texture space. We only use the facial skin region in our study. Compared to existing methods, we extend the capability of the 3D face reconstruction network~\cite{deng2020accurate} ($FRN$) to estimate the shape, diffuse albedo, and diffuse shading, as well as the specular albedo and specular shading. Inspired by~\cite{PracticalFaceReconsCGF},
a simplified virtual light stage with regular icosahedral parallel light sources is set up to infer the specular shading.
The intensity of the 20 light sources is predicted during the reconstruction process, and the direction of the light sources can be adjusted slightly. Furthermore, in order to compensate for the limited color range of the skin tones of FLAME, we estimate skin tone adjustment parameters to ensure a diffuse albedo that is similar to the original image. The skin tone ablation study is depicted in Fig.~\ref{fig:ablation_skin}. This process improves the diversity of the diffuse albedo representation capability of FLAME.
The coarse shape geometry $\mathbf{G}_{c}$, diffuse albedo $\mathbf{D}_{c}$, and specular albedo $\mathbf{S}_{c}$ of the 3DMM are defined as follows:
\begin{align}
&\mathbf{G}_{c} = \bar{\mathbf{G}} + \mathbf{B}_{id}\mathbf{\alpha} + \mathbf{B}_{ex}\mathbf{\beta} ,\\
&\mathbf{D}_{c} = \bar{\mathbf{D}} + \mathbf{B}_{d}\mathbf{\gamma} \, \odot \, \mathbf{C}_{gain} + \mathbf{C}_{bias} , \\
&\mathbf{S}_{c} = \bar{\mathbf{S}} + \mathbf{B}_{s}\mathbf{\delta} ,
\end{align}
where $\bar{\mathbf{G}}$, $\bar{\mathbf{D}}$, and $\bar{\mathbf{S}}$ are the average geometry, diffuse albedo, and specular albedo, respectively. ${\odot}$ denotes the Hadamard product. The subscript $c$ indicates coarse facial materials. Moreover, $\mathbf{B}_{id}$, $\mathbf{B}_{ex}$, $\mathbf{B}_{d}$, and $\mathbf{B}_{s}$ are the PCA basis vectors of the identity, expression, diffuse albedo, and specular albedo, respectively, whereas $\mathbf{\alpha}\in\mathbb{R}^{200}$, $\mathbf{\beta}\in\mathbb{R}^{100}$, $\mathbf{\gamma}\in\mathbb{R}^{100}$, and $\mathbf{\delta}\in\mathbb{R}^{100}$ are the corresponding parameters for controlling the geometry and reflectance of a 3D face. Finally, $\mathbf{C}_{gain}$ and $\mathbf{C}_{bias}$ are the skin tone adjustment parameters.
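For illustration, the sketch below evaluates these linear 3DMM combinations, including the per-channel skin tone gain and bias, on random placeholder bases; the basis shapes follow the dimensions above, but the data and the exact order in which the gain and bias are applied are assumptions rather than the FLAME/AlbedoMM implementation.
\begin{verbatim}
import numpy as np

V, T = 5023, 64 * 64 * 3   # placeholder vertex count and texture size
G_mean, D_mean, S_mean = (np.random.rand(V * 3),
                          np.random.rand(T), np.random.rand(T))
B_id, B_ex = np.random.rand(V * 3, 200), np.random.rand(V * 3, 100)
B_d, B_s = np.random.rand(T, 100), np.random.rand(T, 100)

def coarse_materials(alpha, beta, gamma, delta, c_gain, c_bias):
    # linear 3DMM evaluation with per-channel skin tone gain/bias (assumed order)
    G_c = G_mean + B_id @ alpha + B_ex @ beta
    D_c = (D_mean + B_d @ gamma).reshape(-1, 3) * c_gain + c_bias
    S_c = S_mean + B_s @ delta
    return G_c.reshape(-1, 3), D_c, S_c.reshape(-1, 3)

G_c, D_c, S_c = coarse_materials(np.zeros(200), np.zeros(100), np.zeros(100),
                                 np.zeros(100), np.ones(3), np.zeros(3))
\end{verbatim}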
Our reconstructed texture can be formulated as follows:
\begin{align}
\mathbf{R}_{c} &= \mathbf{R}_{c}^{d} + \mathbf{R}_{c}^{s} \\
& = \mathbf{D}_{c} \odot \, \mathbf{D}_{c}^{S} + \mathbf{S}_{c} \odot \, \mathbf{S}_{c}^{S} ,
\end{align}
where $\mathbf{R}_{c}^{d}$, $\mathbf{R}_{c}^{s}$, $\mathbf{D}_{c}^{S}$, and $\mathbf{S}_{c}^{S}$ are the diffuse reconstruction, specular reconstruction, diffuse shading, and specular shading, respectively.
In this paper, a geometrically-derived shading component is referred to as ``shading,'' whereas a multiplication with reflectance is dubbed ``reconstruction.''
Using the normal $\mathbf{N}_{c}$ and second-order SH lighting coefficients $\mathbf{L}_{c}^{sh}\in\mathbb{R}^{27}$,
the diffuse shading is calculated following the Lambertian reflectance model~\cite{lambertian}. The normal $\mathbf{N}_{c}$ is computed from the geometry $\mathbf{G}_{c}$.
The specular shading is calculated following the Blinn–Phong reflection model~\cite{Blinn-Phong-Model}. We define the 20 light sources of the virtual light stage with the light intensity $\mathbf{L}_{i}\in\mathbb{R}^{20}$ and light direction $\mathbf{L}_{d}\in\mathbb{R}^{60}$. $\mathbf{\rho}\in\mathbb{R}^{20}$ is the exponent that controls the shininess.
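The per-texel shading terms can be sketched as follows: the diffuse shading evaluates the nine second-order SH basis functions at each normal and weights them with the 27 lighting coefficients, while the specular shading sums Blinn--Phong lobes over the 20 virtual light sources. The NumPy code below is a simplified stand-in for the differentiable renderer, with placeholder normals, light directions, and a fixed view direction.
\begin{verbatim}
import numpy as np

def sh_basis(n):
    # second-order (9-term) real SH basis at unit normals n: (..., 3)
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([np.full_like(x, 0.282095),
                     0.488603 * y, 0.488603 * z, 0.488603 * x,
                     1.092548 * x * y, 1.092548 * y * z,
                     0.315392 * (3 * z ** 2 - 1),
                     1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2)], axis=-1)

def diffuse_shading(normals, L_sh):
    # Lambertian shading from 27 SH coefficients (9 per RGB channel)
    return sh_basis(normals) @ L_sh.reshape(9, 3)

def specular_shading(normals, view_dir, light_dirs, light_int, rho):
    # Blinn-Phong specular accumulated over the virtual light-stage sources
    spec = np.zeros(normals.shape[:-1])
    for l, i, p in zip(light_dirs, light_int, rho):
        h = (l + view_dir) / np.linalg.norm(l + view_dir)
        spec += i * np.clip(normals @ h, 0.0, None) ** p
    return spec[..., None]

# placeholder inputs: flat 512x512 normal map and 20 random light directions
normals = np.dstack([np.zeros((512, 512, 2)), np.ones((512, 512, 1))])
dirs = np.random.randn(20, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
D_shade = diffuse_shading(normals, np.random.rand(27))
S_shade = specular_shading(normals, np.array([0.0, 0.0, 1.0]),
                           dirs, np.ones(20), np.full(20, 200.0))
\end{verbatim}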
We employ the 3D face reconstruction network implementation of ~\cite{deng2020accurate}, and extend it to regress the parameters $\mathbf{\chi} = (\mathbf{\alpha}, \mathbf{\beta}, \mathbf{\gamma}, \mathbf{\delta}, \mathbf{C}_{gain}, \mathbf{C}_{bias}, \mathbf{r}, \mathbf{t}, \mathbf{L}_{c}^{sh}, \mathbf{L}_{i}, \mathbf{L}_{d}, \mathbf{\rho})$, where $\mathbf{r}\in\mathbb{R}^{3}$ is the face rotation, and $\mathbf{t}\in\mathbb{R}^{3}$ is the face translation. Using the geometry $\mathbf{G}_c$, reconstructed texture $\mathbf{R}_c$, rotation $\mathbf{r}$, and translation $\mathbf{t}$, the reconstructed image ${I}_{rec}$ can be rendered.
We refer to~\cite{deng2020accurate} to set up the loss functions for the model training as follows:
\begin{equation}
\begin{aligned}
\mathcal{L}_{c}(\mathbf{\chi}) &=
\omega_{photo} \, \mathcal{L}_{photo}(\mathbf{\chi}) + \omega_{lan} \, \mathcal{L}_{lan}(\mathbf{\chi}) \\
&+ \omega_{skin} \, \mathcal{L}_{skin}(\mathbf{C}_{gain}, \mathbf{C}_{bias}) \\
&+ \omega_{reg} \, \mathcal{L}_{reg}(\mathbf{\alpha}, \mathbf{\beta}, \mathbf{\gamma}, \mathbf{\delta}, \mathbf{L}_{i}, \mathbf{L}_{d}) ,
\end{aligned}
\end{equation}
where $\mathcal{L}_{photo}(\mathbf{\chi})$ is the L1 pixel loss between the skin region of the input image ${I}_{in}$ and the reconstructed image ${I}_{rec}$. $\mathcal{L}_{lan}(\mathbf{\chi})$ is the L2 loss between the landmarks detected from ${I}_{in}$ and the projected landmarks of the 3DMM. $\mathcal{L}_{skin}(\mathbf{C}_{gain}, \mathbf{C}_{bias})$ is the L1 loss for computing the mean color error between the skin region of ${I}_{in}$ and the diffuse albedo $\mathbf{D}_{c}$; this is our specially designed loss term to adjust the skin tone. $\mathcal{L}_{reg}(\mathbf{\alpha}, \mathbf{\beta}, \mathbf{\gamma}, \mathbf{\delta}, \mathbf{L}_{i}, \mathbf{L}_{d})$ is the regularization loss for preventing failed face reconstruction results. In contrast to the previous method~\cite{deng2020accurate}, we add constraints on the specular-related coefficients $\mathbf{\delta}$, $\mathbf{L}_i$, and $\mathbf{L}_d$. $\mathcal{L}_{reg}(\mathbf{\alpha}, \mathbf{\beta}, \mathbf{\gamma}, \mathbf{\delta}, \mathbf{L}_{i}, \mathbf{L}_{d})$ is determined as follows:
\begin{equation}
\begin{aligned}
\mathcal{L}_{reg} &=
\omega_{\alpha}\Vert{\mathbf{\alpha}}\Vert^{2}_{2}
+\omega_{\beta}\Vert{\mathbf{\beta}}\Vert^{2}_{2}
+\omega_{\gamma}\Vert{\mathbf{\gamma}}\Vert^{2}_{2} \\
&+\omega_{\delta}\Vert{\mathbf{\delta}}\Vert^{2}_{2}
+\omega_{L}\Vert{\mathbf{L}_i}\Vert^{2}_{2}
+ \omega_{L}\Vert{\mathbf{L}_d}\Vert^{2}_{2} ,
\end{aligned}
\end{equation}
where $\Vert\cdot\Vert_{2}$ denotes the L2 norm. The balance weights are set to
$\omega_{photo}=19.2$,
$\omega_{lan}=5$,
$\omega_{skin}=3$,
$\omega_{reg}=3 \times 10^{-4}$,
$\omega_{\alpha}=1.0$,
$\omega_{\beta}=0.8$,
$\omega_{\gamma}=1.7 \times 10^{-2}$,
$\omega_{\delta}=1.0$, and $\omega_{L}=1.0$ in all experiments.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{fig/method/method_3.pdf}
\caption{Makeup extraction with network $E$ for Step 3 (Sec.~\ref{sec:module_3}).
Considering alpha blending in mind, the network $E$ decomposes the refined diffuse albedo into bare skin, alpha matte, and makeup.
}
\label{fig:method_3}
\end{figure*}
\subsection{UV Completion and Facial Material Refinement}
\label{sec:module_2}
The goal of this step is to obtain the disentangled refined facial materials.
We use the geometry $\mathbf{G}_{c}$ to sample the colors from the input image ${I}_{in}$ and project them to the UV texture space $\mathbf{T}_{u}$. $\mathbf{T}_{u}$ contains the missing area owing to self-occlusion or obstacles. As opposed to the 3D vertex color sampling approach used in related work~\cite{m_Nguyen-etal-CVPR21, Chen_Han_Shan_2022}, we adopt the image-to-UV rendering approach used in DSD-GAN~\cite{dsd-gan} to obtain a high-quality texture. The direct use of incomplete textures will cause many problems, such as noise and error. Thereafter, we fill the missing areas following DSD-GAN to obtain the completed UV texture $\mathbf{T}_{f}$. The subscript $f$ indicates refined facial materials. The difference between our method and DSD-GAN is that we use the FLAME model for implementation, whereas DSD-GAN uses the BFM model~\cite{bfm}. The results of the UV completion are presented in Fig.~\ref{fig:result_process}.
Using the completed UV texture as an objective with facial details, we design the optimization-based refinement module (see Fig.~\ref{fig:method_2}). Given the target texture $\mathbf{T}_{f}$, the coarse priors $\mathbf{D}_{c}$, $\mathbf{N}_{c}$, $\mathbf{L}_{c}^{sh}$, and $\mathbf{R}_{c}^{s}$ are used for initialization, and we optimize the refined materials $\mathbf{D}_{f}$, $\mathbf{N}_{f}$, $\mathbf{L}_{f}^{sh}$, and $\mathbf{R}_{f}^{s}$ to reconstruct the refined texture $\mathbf{R}_{f}$. The diffuse shading is calculated following the Lambertian reflectance model~\cite{lambertian}. For the specular, we directly optimize the specular reconstruction $\mathbf{R}_{f}^{s}$, rather than the specular albedo and specular shading, so that the optimization is not restricted by the light source settings of the virtual light stage in Sec.~\ref{sec:module_1}. Note that the specular reconstruction $\mathbf{R}_{c}^{s}$ is a three-channel image owing to the specular albedo $\mathbf{S}_{c}$ of the 3DMM, whereas $\mathbf{R}_{f}^{s}$ is converted to a one-channel image for stable and efficient optimization.
The loss functions for the optimization are calculated as follows:
\begin{align}
\mathcal{L}_{f}(\Psi) &=
\omega_{recons}\mathcal{L}_{recons}(\mathbf{R}_{f})
+ \omega_{vgg}\mathcal{L}_{vgg}(\mathbf{R}_{f}) \notag \\
&+ \omega_{tv}\mathcal{L}_{tv}(\mathbf{D}_{f}, \mathbf{N}_{f}, \mathbf{R}_{f}^{s}) \\
&+ \omega_{prior}\mathcal{L}_{prior}(\mathbf{D}_{f}, \mathbf{N}_{f}, \mathbf{R}_{f}^{s}, \mathbf{L}_{f}^{sh}) , \notag
\end{align}
where $\Psi = (\mathbf{R}_{f}, \mathbf{D}_{f}, \mathbf{N}_{f}, \mathbf{R}_{f}^{s}, \mathbf{L}_{f}^{sh})$ is a new parameter set for optimization. $\mathcal{L}_{recons}(\mathbf{R}_{f})$ ensures L1 consistency between $\mathbf{R}_{f}$ and $\mathbf{T}_{f}$. $\mathcal{L}_{vgg}(\mathbf{R}_{f})$ is the perceptual loss~\cite{vggloss} that aims to preserve the facial details. $\mathcal{L}_{tv}(\mathbf{D}_{f}, \mathbf{N}_{f}, \mathbf{R}_{f}^{s})$ is the total variation loss~\cite{tvloss} that encourages spatial smoothness in the optimized textures $\mathbf{D}_{f}$, $\mathbf{N}_{f}$, and $\mathbf{R}_{f}^{s}$.
\begin{equation}
\begin{aligned}
\mathcal{L}_{prior}(\mathbf{D}_{f}, \mathbf{N}_{f}, \mathbf{R}_{f}^{s}, \mathbf{L}_{f}^{sh}) &=
\omega_{D} \, \mathcal{L}_{D}(\mathbf{D}_{f}) + \omega_{N} \, \mathcal{L}_{N}(\mathbf{N}_{f}) \\
&+ \omega_{R}^{s} \, \mathcal{L}_{R}^{s}(\mathbf{R}_{f}^{s}) + \omega_{sh} \, \mathcal{L}_{sh}(\mathbf{L}_{f}^{sh}) ,
\end{aligned}
\end{equation}
where $\mathcal{L}_{prior}(\mathbf{D}_{f}, \mathbf{N}_{f}, \mathbf{R}_{f}^{s}, \mathbf{L}_{f}^{sh})$ regulates the optimized texture to be similar to the coarse prior. We compute the L1 loss over the diffuse albedo, normal, and specular reconstruction as $\mathcal{L}_{D}(\mathbf{D}_{f})$, $\mathcal{L}_{N}(\mathbf{N}_{f})$, and $\mathcal{L}_{R}^{s}(\mathbf{R}_{f}^{s})$, respectively. The coarse textures are resized to the same resolution as that of refined textures. We note that the coarse and refined textures are not exactly equal; they differ in clarity, and with or without makeup. Therefore, before calculating the losses between the coarse and refined textures, we simply use a Gaussian blur filter $K$ to blur the refined textures in each iteration. We use the blurred refined texture to calculate the loss, while the original refined texture is not changed. $\mathcal{L}_{sh}(\mathbf{L}_{f}^{sh})$ is the L2 loss over the SH lighting. We normalize $\mathbf{N}_{f}$ to [-1, 1] following each optimization iteration to guarantee the correctness of the normal. The parameters
$\omega_{recons}=40$,
$\omega_{vgg}=5$,
$\omega_{tv}=10$,
$\omega_{prior}=1.0$,
$\omega_{D}=4$,
$\omega_{N}=1.0$,
$\omega_{R}^{s}=1.0$,
and $\omega_{sh}=1.0$
balance the importance of the terms. The refined textures of our coarse-to-fine optimization step are illustrated in Figs.~\ref{fig:result_process}, ~\ref{fig:result_final}, and ~\ref{fig:result_albedo}.
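A condensed PyTorch sketch of this optimization loop is given below. For brevity it keeps only the reconstruction term and a single blurred-prior term, optimizes the diffuse shading directly instead of $\mathbf{N}_{f}$ and $\mathbf{L}_{f}^{sh}$, and uses placeholder textures; it is a schematic stand-in for our refinement, not the full implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

H, W = 512, 512
T_f = torch.rand(1, 3, H, W)              # completed target texture
D_c = torch.rand(1, 3, H, W)              # coarse diffuse albedo prior

D_f = D_c.clone().requires_grad_(True)    # refined diffuse albedo
shade = torch.rand(1, 3, H, W, requires_grad=True)   # stand-in for shading
R_s = torch.zeros(1, 1, H, W, requires_grad=True)    # 1-channel specular

opt = torch.optim.Adam([D_f, shade, R_s], lr=1e-2)
for it in range(500):
    opt.zero_grad()
    R_f = D_f * shade + R_s                                   # reconstruction
    loss = 40 * F.l1_loss(R_f, T_f)                           # L_recons
    loss = loss + 4 * F.l1_loss(gaussian_blur(D_f, [11, 11]), D_c)  # prior L_D
    loss.backward()
    opt.step()
\end{verbatim}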
\subsection{Makeup Extraction}
\label{sec:module_3}
In this step,
we only use the refined diffuse albedo $\mathbf{D}_{f}$, and decompose the texture into the makeup, bare skin, and an alpha matte. To train the network, we create two diffuse albedo datasets with and without makeup following the previous process; the makeup albedo $\mathbf{D}_{m}$ and non-makeup albedo $\mathbf{D}_{n}$, respectively.
As shown in Fig.~\ref{fig:method_3}, we design a makeup extraction network based on alpha blending. Our network consists of a makeup extractor ${E}$, a makeup discriminator ${M}$, and a bare skin discriminator ${B}$. ${M}$ and ${B}$ attempt to distinguish the makeup and non-makeup image. The core idea is that we regard the diffuse albedo as a combination of makeup and bare skin that is achieved by alpha blending. Therefore, ${E}$ extracts the bare skin $\mathbf{D}_{m}^{b}$, makeup $\mathbf{D}_{m}^{m}$, and alpha matte $\mathbf{A}$.
$\mathbf{A}$ is used to blend the extracted makeup $\mathbf{D}_{m}^{m}$ and a non-makeup albedo $\mathbf{D}_{n}$ to generate a new makeup albedo $\mathbf{D}_{n}^{m}$ which has the identity from $\mathbf{D}_{n}$ and makeup from $\mathbf{D}_{m}$.
The reconstructed makeup albedo $\mathbf{D}_{m}^{r}$ and the reconstructed bare skin $\mathbf{D}_{n}^{r}$ should be consistent with their original input.
The reconstructed $\mathbf{D}_{m}^{r}$ and generated $\mathbf{D}_{n}^{m}$ are formulated as follows:
\begin{equation}
\begin{aligned}
\mathbf{D}_{m}^{r} = \mathbf{A} \odot \mathbf{D}_{m}^{b} + (1 - \mathbf{A}) \odot \mathbf{D}_{m}^{m} ,\\
\mathbf{D}_{n}^{m} = \mathbf{A} \odot \mathbf{D}_{n} + (1 - \mathbf{A}) \odot \mathbf{D}_{m}^{m} , \label{eq:alhpa_blend}
\end{aligned}
\end{equation}
where $(1 - \mathbf{A})$ is the inverted alpha matte; the pixel values of $\mathbf{A}$ lie in $[0,1]$.
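A minimal sketch of this alpha-blending recomposition is given below (an illustration only; the tensor names and shapes are assumptions, e.g., an alpha matte of shape $(B,1,H,W)$ broadcast over the RGB channels).
\begin{verbatim}
import torch

def alpha_blend(A, D_m_makeup, D_m_bare, D_n):
    # Recompose textures from the extractor outputs via alpha blending:
    #   D_m^r = A * D_m^b + (1 - A) * D_m^m   (reconstructed makeup albedo)
    #   D_n^m = A * D_n   + (1 - A) * D_m^m   (makeup transferred onto D_n)
    D_m_rec = A * D_m_bare + (1.0 - A) * D_m_makeup
    D_n_mk = A * D_n + (1.0 - A) * D_m_makeup
    return D_m_rec, D_n_mk
\end{verbatim}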
The discriminators ensure that the generated bare skin $\mathbf{D}_{m}^{b}$ and makeup $\mathbf{D}_{n}^{m}$ are reliable.
As the network takes advantage of the uniformity of the UV space, the makeup transfer is straightforward, and the problem of face misalignment does not need to be considered.
The loss function is given by:
\begin{equation}
\begin{aligned}
\mathcal{L}_{m}(\Phi) &=
\omega_{m}^{cyc}\mathcal{L}_{m}^{cyc}(\mathbf{D}_{m}^{r}, \mathbf{D}_{n}^{r})
+ \omega_{m}^{vgg}\mathcal{L}_{m}^{vgg}(\mathbf{D}_{m}^{r}, \mathbf{D}_{n}^{r}) \\
&+ \omega_{m}^{adv}\mathcal{L}_{m}^{adv}(\mathbf{D}_{m}^{b}, \mathbf{D}_{n}^{m})
+ \omega_{m}^{tv}\mathcal{L}_{m}^{tv}(\mathbf{D}_{m}^{b}, \mathbf{D}_{n}^{m}) \\
&+ \omega_{m}^{make}\mathcal{L}_{m}^{make}(\mathbf{D}_{m}^{b}, \mathbf{D}_{n}^{m}) ,
\end{aligned}
\end{equation}
where $\Phi = (\mathbf{D}_{m}^{r}, \mathbf{D}_{n}^{r}, \mathbf{D}_{m}^{b}, \mathbf{D}_{n}^{m})$, $\mathcal{L}_{m}^{cyc}(\mathbf{D}_{m}^{r}, \mathbf{D}_{n}^{r})$ and $\mathcal{L}_{m}^{vgg}(\mathbf{D}_{m}^{r}, \mathbf{D}_{n}^{r})$ are the L1 loss and perceptual loss for the reconstruction, respectively.
$\mathcal{L}_{m}^{adv}(\mathbf{D}_{m}^{b}, \mathbf{D}_{n}^{m})$ is the adversarial loss for the discriminators and generators.
$\mathcal{L}_{m}^{tv}(\mathbf{D}_{m}^{b}, \mathbf{D}_{n}^{m})$ is the total variation loss for $\mathbf{D}_{m}^{b}$ and $\mathbf{D}_{m}^{m}$ to provide smooth texture generation, whereas $\mathcal{L}_{m}^{make}(\mathbf{D}_{m}^{b}, \mathbf{D}_{n}^{m})$ is the makeup loss introduced by BeautyGAN~\cite{Li:2018:MM}.
We define the corresponding face region ${F}$ of the brows, eyes, and lips in the UV texture space to compute the makeup loss; the loss is computed between $\mathbf{D}_{m}$ and $\mathbf{D}_{n}^{m}$, and between $\mathbf{D}_{n}$ and $\mathbf{D}_{m}^{b}$.
Note that, in contrast with previous makeup transfer methods, the skin region is excluded from the makeup loss because we consider differences in skin tone between individuals not to be makeup; this can, however, be a limitation when the foundation changes the overall face color.
This is a trade-off between skin tone and makeup color. In this work, we assume that the makeup will not drastically change the skin tone. Moreover, as indicated in Fig.~\ref{fig:result_final}, even without the makeup loss of the skin region, our network precisely extracts the makeup of the cheeks using the alpha blending approach because we use two discriminators to distinguish the makeup and non-makeup textures. The details are discussed in Sec.~\ref{sec:experiments}.
We use $\omega_{m}^{cyc}=20$, $\omega_{m}^{vgg}=2$, $\omega_{m}^{adv}=5$, $\omega_{m}^{tv}=8$, and $\omega_{m}^{make}=1$ as the balancing terms. Our makeup extraction results are presented in Figs.~\ref{fig:result_process}, \ref{fig:result_final}, and \ref{fig:result_albedo}.
\section{Related Work}
\label{sec:related_work}
Our framework consists of three steps for extracting makeup from a portrait and is related to recent research in three respects.
First, we discuss facial makeup-related research. Subsequently, we review intrinsic image decomposition, which separates an input image into several components. Finally, we discuss 3D face reconstruction methods that generate a 3D face model from a single portrait image.
\subsection{Facial Makeup}
Facial makeup recommendation and makeup transfer methods have been developed in the computer graphics and computer vision research fields.
Makeup recommendation methods can provide appropriate makeup for faces. The method in \cite{Scherbaum11Makeup} captured 56 female faces with and without facial makeup. Suitable makeup was recommended by analyzing the principal components of the makeup and combining them with the facial appearance.
An examples-rules guided deep neural network for makeup recommendation was designed in~\cite{10.5555/3298239.3298377}, in which professional makeup knowledge was combined with corresponding before-and-after makeup data to train the network.
However, collecting a large-scale makeup dataset in the manner of these existing methods remains challenging.
Recently, we proposed BareSkinNet~\cite{bareskinnet}, which can remove makeup and lighting effects from input face images. However, BareSkinNet cannot be used for subsequent makeup applications because it discards the makeup patterns and specular reflection.
In contrast, our method can automatically extract the makeup layer information from portrait image inputs. As a result, a large-scale makeup dataset can be constructed.
The aim of makeup transfer is to transfer the makeup of one person to others. Deep learning technologies have significantly accelerated makeup transfer research. BeautyGAN~\cite{Li:2018:MM} is a method based on CycleGAN~\cite{cyclegan} that does not require before-and-after makeup image pairs. A makeup loss function was also designed to measure the difference between makeups; it is a histogram matching loss that approximates the color distribution of corresponding regions of different faces. Subsequent approaches that use BeautyGAN as the baseline solve more challenging problems. LADN~\cite{gu2019ladn} achieves heavy makeup transfer and removal, whereas PSGAN~\cite{jiang2019psgan} and SCGAN~\cite{Deng_2021_CVPR} handle makeup transfer under different facial expressions and head poses. CPM~\cite{m_Nguyen-etal-CVPR21} is a color and pattern transfer method, and SOGAN~\cite{SOGAN} is a shadow- and occlusion-robust GAN for makeup transfer. EleGANt~\cite{yang2022elegant} is a locally controllable makeup transfer method. Makeup transfer can also be incorporated into virtual try-on applications~\cite{ca-gan, 10.1111:cgf.14456}. However, existing methods do not consider illumination and can only handle 2D images. Our method takes advantage of the makeup transfer technique: the extracted makeup is disentangled into bare skin, facial makeup, and illumination in the UV space, and the extracted makeup is editable.
\subsection{Intrinsic Image Decomposition}
We mainly review the recent intrinsic image decomposition methods relating to the portrait image input.
An intrinsic image decomposition method~\cite{SimulatingCVPR2015} has been employed to enable accurate and realistic makeup simulation from a photo.
SfSNet~\cite{sfsnetSengupta18} is an end-to-end network that decomposes face images into the shape, reflectance, and illuminance. This method uses real and synthetic images to train the network. Inspired by SfSNet, Relighting Humans~\cite{relighting_humans} attempts to infer a light transport map to solve the problem of light occlusion from an input portrait. The aforementioned methods can handle only diffuse reflection using spherical harmonic (SH)~\cite{sh} lighting. Certain methods use a light stage to obtain a large amount of portrait data with illumination~\cite{10.1145/3306346.3323008, Pandey:2021, 10.1145/3414685.3417824}. The global illumination can be inferred by learning these data. As data collection using a light stage requires substantial resources, several more affordable methods have been proposed~\cite{tajimaPG21, Lagunas2021SingleimageFH, ji2022relight, wimbauer2022rendering, Tan2022VoLuxGANAG, yeh2022learning}.
In this study, we focus on makeup with the aim of decomposing a makeup portrait into bare skin, makeup, diffuse, and specular layers without using light stage data.
\subsection{Image-Based 3D Face Reconstruction}
3D face reconstruction from a single-view portrait is challenging because in-the-wild photos always contain invisible areas or complex illumination. Existing 3D face reconstruction methods~\cite{1227983, tran2016regressing, tewari2017mofa, tewari2017self, GuoTPAMI2018, genova2018unsupervised, RingNet:CVPR:2019, deng2020accurate, Shang2020Self, PracticalFaceReconsCGF, 10.1145/3450626.3459936, EMOCA, MICA} generally use a parametric face model (known as a 3DMM)~\cite{Blanz3DMM99, FLAME:SiggraphAsia2017, bfm, 10.1007/s11263-017-1009-7} to overcome this problem. In general, 3D face reconstruction is achieved by fitting the projected 3DMM to the input image. These methods estimate the SH lighting while inferring the shape and texture. We recommend that readers refer to ~\cite{egger20203d, 3dreconssurvey}, as these surveys provide a more comprehensive description of the 3DMM and 3D face reconstruction.
3DMM-based 3D face reconstruction methods estimate only coarse facial materials and cannot achieve a high-fidelity facial appearance.
A detailed 3D face reconstruction method was proposed in~\cite{PracticalFaceReconsCGF}, which, unlike previous methods, also reconstructs roughness and specular reflectance. This method sets up a virtual light stage for illumination and uses a two-stage coarse-to-fine technique to refine the facial materials.
Inspired by~\cite{PracticalFaceReconsCGF}, we employ coarse facial materials and take specular reflectance into consideration. Furthermore, we extend and train a deep learning network using the method in~\cite{deng2020accurate} as the backbone with a large-scale dataset. We optimize the textures in UV space to refine the facial materials and to address self-occlusion and obstructions of the face. The completed UV texture is advantageous because the complete makeup can be extracted to achieve high-fidelity makeup reconstruction.
The generation of a completed facial UV texture is a technique for obtaining refined facial materials. Several methods use UV texture datasets to train an image-translation network that generates the completed texture via supervised learning~\cite{saito2017photorealistic, Yamaguchi:2018:ToG, deng2017uvgan, Gecer_2019_CVPR, 9156897, Lattas_2020_CVPR, Bao:2021:ToG}. Other methods~\cite{Chen_Han_Shan_2022,Gecer_2021_CVPR} use the GAN inversion~\cite{gan_inversion} technique to generate faces in different directions for completion. However, both kinds of methods are limited by their training datasets. We adopt a state-of-the-art UV completion method known as DSD-GAN~\cite{dsd-gan} to obtain a completed high-fidelity face texture, which serves as the objective for the optimization of the refined facial materials. This method employs self-supervised learning to fill the missing areas without the need for paired training data.
|
{
"arxiv_id": "2302.13280",
"language": "en",
"timestamp": "2023-02-28T02:14:59",
"url": "https://arxiv.org/abs/2302.13280",
"yymm": "2302"
} | \section{Introduction}\label{sec:intro}
\subsection{Background and Literature Review}
\IEEEPARstart{T}{he} coordinated optimal dispatch (COD) of power systems in different regions, voltage levels, and communities is vital for improving the operational economy and security. Conventional coordinated optimization methods rely on iterative information exchange among subsystems, which leads to drawbacks including convergence issues, communication burden, poor scalability, and incompatibility with the serial coordination scheme used in practice. In this regard, realizing the COD in a non-iterative fashion, i.e., without iterative information exchange among subsystems, is of great interest to both academia and industry. This two-part paper addresses this issue and proposes the equivalent projection (EP)-based solution. The basic idea is to construct an external equivalent of the technical and economic features of each subsystem for the upper-level coordinated optimization, which requires only a single-round exchange of boundary information. Following the theory and framework introduced in Part I, the calculation method and typical applications of the EP are discussed in this paper.
The EP is mathematically a geometric projection problem, which eliminates internal decision variables from the secure and economic operation constraints of the subsystem and yields a low-dimensional region of the coordination variables to represent the system. Projection calculation is a challenging task even for linear systems due to its exponential complexity in worst-case scenarios. The most classic projection algorithm is the Fourier-Motzkin Elimination (FME). The FME eliminates variables one by one through linear combinations of pairs of inequalities whose coefficients of the variable to be eliminated have opposite signs \cite{ref:fme0}. However, numerous redundant constraints are generated after eliminating each variable, which not only complicates the projection result but also increases the number of constraints to be processed for eliminating the next variable, thus aggravating the computational burden dramatically \cite{ref:loadability1}. To overcome these drawbacks, existing studies propose redundancy identification techniques to accelerate the FME calculation and obtain the minimal representation of the projection result \cite{ref:ifme, ref:loadability2}. Another basic algorithm for polyhedral projection is block elimination \cite{ref:block}, which identifies facets of the projected polytope based on the projection lemma. Although one-by-one elimination of variables is avoided, block elimination may also generate redundant constraints. The polyhedral projection problem is theoretically proven to be equivalent to the multi-parametric programming (MPP) problem \cite{ref:mpp}, which enables the solution of the projection problem via MPP algorithms. This method also leverages the projection lemma to identify irredundant inequalities of the projection.
The application of the above projection methods to the EP calculation in power systems is limited due to features of the coordinated dispatch problem. First, these methods have exponential complexity with respect to the number of variables to be eliminated and thus are inefficient for power system dispatch problems, where the number of internal variables is large. Second, a conservative approximation of the exact EP model is usually desired in engineering applications to reduce the computational effort. However, the aforementioned methods cannot yield an inner approximation because they characterize the projection by generating inequalities and will overestimate the projection result if terminated prematurely, before all the inequalities are found. In this regard, existing studies in the power system literature propose different models to approximate the projection region, e.g., the box model \cite{ref:box}, the Zonotope model \cite{ref:zono}, the ellipse model \cite{ref:ellipse}, and the robust optimization-based model \cite{ref:robust}. These methods approximate the projection region with simple shapes at the cost of accuracy. As the approximation shape is pre-fixed, these methods are only applicable to projection regions with specific geometric structures, and the approximation accuracy cannot be adjusted flexibly.
Another critical issue regarding the EP calculation is the error metric for controlling the calculation accuracy of the algorithm. The error metric for the EP model measures the difference between the approximated region and the exact one. Reference \cite{ref:zono} uses the distances between parallel facets of the region to measure the approximation quality. However, this metric can only be applied to a particular kind of polytope, namely the Zonotope. The volume is a more general metric for comparing two regions. Reference \cite{ref:pq1} measures the calculation accuracy of the active and reactive flexibility region of the distribution network by comparing the areas of regions, and the same metric is used to measure the flexibility region enlargement brought by power flow routers. Based on the volume metric, the Jaccard similarity can be used to measure the relative error of the approximation \cite{ref:jaccard}. However, the evaluation of volume metrics is time-consuming, especially for high-dimensional problems. Furthermore, the evaluation of the volume-based error metric relies on knowledge of the exact projected region. Hence, this metric can only be used to test the approximation quality when the ground truth of the projection is known, but can hardly be used to control the approximation accuracy within the projection algorithm.
\subsection{Contributions and Paper Organization}
To meet the practical requirements of the EP calculation for power system applications, Part II of this paper proposes a novel projection algorithm with an explicit accuracy guarantee. With this algorithm, the EP theory and the non-iterative COD method are instantiated for multi-area system coordination and transmission-distribution coordination. The contributions of Part II are threefold:
1) The progressive vertex enumeration (PVE) algorithm is proposed for the EP calculation in power system applications. Compared to existing methods that solve the projection problem by eliminating internal variables, the proposed method directly identifies the projection region in the lower-dimensional coordination space and thus has lower computational complexity. Compared to existing approximation methods, the proposed algorithm is proven to be exact for the polyhedral projection problem and can also be used to obtain an inner approximation with an explicit and adjustable error tolerance according to practical requirements.
2) The Hausdorff distance is employed to measure the approximation error of the PVE algorithm. Compared to the volume metric and the Jaccard similarity, the proposed error metric is a byproduct of the vertex identification in each step of the algorithm and thus, is computationally efficient and can be used to balance the accuracy and computation effort of the algorithm.
3) The non-iterative COD for the multi-area system and the transmission-distribution system is realized based on the EP calculation, which thoroughly overcomes the disadvantages of conventional coordinated optimization methods caused by iterations. The EP-based COD is verified to yield solutions identical to those of the joint optimization while consuming less computation time in scenarios with numerous subsystems.
The remainder of this paper is organized as follows. Section II introduces the PVE algorithm and its properties. Sections III and IV apply the EP-based COD method to the multi-area coordinated dispatch problem and transmission-distribution coordination problem, respectively. Section V concludes this paper and discusses future research.
\section{Calculation Method for EP}\label{sec:algorithm}
\subsection{Problem Setup}
In power system applications, linear models are mostly utilized for optimal dispatch for the sake of computational efficiency, reliability, and clear economic interpretation. A broad spectrum of literature has investigated the linearized modeling of the transmission network \cite{ref:md_dc, ref:md_liudd}, the distribution network \cite{ref:distflow2, ref:dist_wangyi}, and the multi-energy system \cite{ref:md_hub}. Nonlinear models, in contrast, are computationally intractable and may lead to incentive issues when non-convexity exists \cite{ref:nonconv_price1}. The application of nonlinear optimization is difficult even for centralized dispatch, let alone coordinated dispatch. Part II of this paper focuses on the practice-oriented coordinated dispatch method and thus considers the widely used linear dispatch model.
As introduced in Part I, the EP eliminates internal variables from the operation feasible region of the subsystem and yields an external equivalent of the subsystem model. The operation feasible region of the subsystem is enforced by both the technical constraints and the epigraph of the objective function. Without loss of generality, the operation feasible region of the subsystem is represented in the following compact form,
\begin{equation}
\label{md:ofr}
\Omega := \left\{(x,y) \in \mathbb{R}^{N_x} \times \mathbb{R}^{N_y} : Ax + By \leq c \right\}.
\end{equation}
In the model, $x$ and $y$ denote the coordination variable and the internal variable, respectively. Constants $N_x$ and $N_y$ are the dimensions of $x$ and $y$. The partition of variables is introduced in Part I of this paper. Matrices $A$ and $B$ and vector $c$ are the coefficients defining the operation constraints of the subsystem. Note that all variables are bounded in power system dispatch problems due to the operation limits of devices. Hence, $\Omega$ formulated in \eqref{md:ofr} is a polytope in the space of $\mathbb{R}^{N_x} \times \mathbb{R}^{N_y}$.
As per Definition 2 in Part I of this paper, the EP model of the subsystem is expressed as follows,
\begin{equation}
\label{md:esr}
\Phi := \left\{x \in \mathbb{R}^{N_x}: \exists y, \ \text{s.t.} \ (x,y) \in \Omega \right\}.
\end{equation}
The EP model depicts the technical and economic operation characteristics in the subspace of the coordination variable. The EP model can replace the original model of the subsystem in the coordinated optimization and can ensure the equivalent optimality in a non-iterative and privacy-protected manner, as proven in Part I.
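For intuition, consider a toy feasible region with $N_x = N_y = 1$,
\begin{equation*}
\Omega = \left\{(x,y): 0 \le y \le 1, \; x - y \le 1, \; -x - y \le 0 \right\}.
\end{equation*}
A given $x$ admits a feasible $y$ if and only if $\max\{0,\, x-1,\, -x\} \le 1$, so the EP model is simply $\Phi = \{x: -1 \le x \le 2\}$; the internal variable $y$ has been eliminated while every $x \in \Phi$ remains attainable by some feasible $y$.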
\subsection{Representations of the EP model}
As defined in \eqref{md:esr}, the EP model $\Phi$ is the geometric projection of $\Omega$ onto the space $\mathbb{R}^{N_x}$. Since $\Omega$ is a polytope, its projection $\Phi$ is also a polytope \cite{ref:equal_set}. According to the Minkowski-Weyl Theorem \cite{ref:mw_theorem}, a polytope has the following two equivalent representations, known as the double description.
\begin{itemize}
\item Hyperplane-representation (H-rep)
\end{itemize}
\begin{equation}
\label{md:h_rep}
\Phi = \left\{x \in \mathbb{R}^{N_x}: \tilde{A}x \leq \tilde{c} \right\}.
\end{equation}
\begin{itemize}
\item Vertex-representation (V-rep)
\end{itemize}
\begin{equation}
\label{md:v_rep}
\begin{split}
\Phi = & \text{conv}(V)\\
:= & \left\{\sum_{i=1}^{N_v} \lambda_i \hat{x}_i: \hat{x}_i \in V, \lambda_i \ge 0, \sum_{i=1}^{N_v}\lambda_i=1 \right\}.
\end{split}
\end{equation}
In equation \eqref{md:h_rep}, $\tilde{A}$ and $\tilde{c}$ respectively denote the coefficient matrix and the right-hand side of the constraints that represent the hyperplanes of the polytope. In equation \eqref{md:v_rep}, the function $\text{conv}(\cdot)$ denotes the convex hull of a set of vectors. Set $V$ contains all the vertices of $\Phi$, $N_v$ is the number of vertices, and $\hat{x}_i$ is the $i$th vertex of $\Phi$. The H-rep and V-rep of a polytope are mutually convertible \cite{ref:mw_theorem}.
The EP calculation determines either the H-rep or the V-rep of $\Phi$. Though the two representations are equivalent, they lead to different philosophies for designing the projection algorithm. In power system optimization problems, the H-rep is mostly used to model the operation feasible region. When calculating the projection, however, we find that the V-rep is more suitable due to certain features of power systems. First, the dimension of the coordination variable is low compared with that of the internal variables of subsystems, since the networks among subsystems are weakly connected compared with the networks inside subsystems. Hyperplane-oriented projection methods, e.g., the FME and block elimination, calculate the projection by eliminating internal variables and thus have exponential complexity with respect to the dimension of the internal variable. By identifying vertices, in contrast, the projected polytope is directly characterized in the lower-dimensional coordination space, which lowers the computational difficulty. Second, from the perspective of practical implementation, the projection algorithm is desired to terminate within a given time and yield a conservative approximation of the EP model. If a hyperplane-oriented method is terminated prematurely, some critical hyperplanes enclosing the EP model will be missed and the calculation result may contain infeasible operating points. On the contrary, if the EP is calculated by identifying vertices, the intermediate output of the algorithm is always a subset of the exact result due to the convexity of the projection. In this regard, this paper develops a vertex-oriented projection method, namely the PVE, to calculate the EP efficiently and practically.
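For readers who wish to experiment, the conversion between the two representations is readily available in standard computational-geometry tools. The following brief sketch (ours, assuming SciPy's Qhull bindings; the test points are arbitrary) recovers an H-rep from a V-rep and checks membership.
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

# V-rep -> H-rep: recover facet inequalities A_tilde x <= c_tilde of conv(V).
V = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0], [1.0, 1.5]])
hull = ConvexHull(V)
A_tilde = hull.equations[:, :-1]   # outward facet normals
c_tilde = -hull.equations[:, -1]   # offsets, i.e. A_tilde @ x <= c_tilde inside

# Membership test with the recovered H-rep.
x = np.array([1.0, 0.5])
print(bool(np.all(A_tilde @ x <= c_tilde + 1e-9)))   # True: x lies in conv(V)
\end{verbatim}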
\begin{algorithm}[t]
\caption{PVE Algorithm}
\label{alg:pve}
\begin{algorithmic}[1]
\STATE Initialization: $k=0, \ V = V^{(0)}$
\REPEAT
\STATE $k=k+1$
\STATE Construct convex hull of existing vertices:\\ $\hat{\Phi}^{(k)} = \text{conv}(V) = \left\{x: \tilde{A}_j^{(k)} x \leq \tilde{d}_j^{(k)}, j\in[J^{(k)}]\right\}$
\STATE Initialize set of IRs: $H^{(k)}=\emptyset$
\FOR{$j\in[J^{(k)}]$}
\STATE Vertex identification: solve \eqref{md:vertex} with $\alpha^\top = \tilde{A}^{(k)}_j$, obtain vertex $\hat{x}^{(k,j)}$ and IR of the vertex $\Delta h^{(k,j)}$
\IF{$\Delta h^{(k,j)} > 0$ and $\hat{x}^{(k,j)} \notin V$}
\STATE Save new vertex: $V=\{V, \hat{x}^{(k,j)}\}$
\STATE Save IR: $H^{(k)} = \{H^{(k)}, \Delta h^{(k,j)}\}$
\ENDIF
\ENDFOR
\STATE Evaluate error metric: $D^{(k)} = \max H^{(k)}$
\UNTIL{$D^{(k)} \leq \varepsilon$}
\STATE Output: $\hat{\Phi}=\text{conv}(V)$
\end{algorithmic}
\end{algorithm}
\subsection{PVE Algorithm} \label{sec:pve}
The PVE algorithm calculates the projected polytope by identifying its vertices. The key to the algorithm is twofold. First, the error metric has to be carefully designed to provide a clear interpretation of the calculation results and allow the pre-specification of the error tolerance. Second, the vertices have to be identified in a proper sequence to improve computational efficiency. To this end, the Hausdorff distance is employed to measure the approximation error, and a double-loop framework is developed to identify vertices of the projection along a path with the steepest descent of the approximation error. The basic idea of the algorithm is to first find vertices that are critical to the overall shape of the projection, and then expand the convex hull of existing vertices to find new vertices outside the current approximation. The procedure is summarized in Algorithm \ref{alg:pve}. The PVE algorithm contains two layers of iterative loops; however, the algorithm is run locally by each subsystem to calculate its EP model, so these loops do not contradict the fact that the proposed coordinated optimization method requires no iterative information exchange among subsystems. Details of the PVE algorithm are introduced below along with a numerical example illustrated in Fig. \ref{fig:pve}.
\subsubsection{Vertex identification problem}
Each vertex of the projected polytope $\Phi$ is an extreme point, which can be identified by solving the following problem,
\begin{equation}
\label{md:vertex}
\begin{split}
\max_{x,y} \ & h = \alpha^\top x\\
\text{s.t.} \ & Ax + By \leq c
\end{split}
\end{equation}
The vector $\alpha$ in the objective function represents the search direction along which a vertex is identified.
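For concreteness, the vertex identification problem \eqref{md:vertex} can be sketched as follows (an illustration in Python/SciPy rather than the MATLAB/GUROBI implementation used in the case studies; the helper name is ours).
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def identify_vertex(alpha, A, B, c):
    # Sketch of the vertex identification LP: max_{x,y} alpha^T x  s.t.  A x + B y <= c.
    # Variables are assumed bounded by the constraints themselves, as in Section II-A.
    n_x, n_y = A.shape[1], B.shape[1]
    cost = np.concatenate([-alpha, np.zeros(n_y)])      # linprog minimizes
    res = linprog(cost, A_ub=np.hstack([A, B]), b_ub=c,
                  bounds=[(None, None)] * (n_x + n_y), method="highs")
    assert res.success
    return res.x[:n_x], -res.fun        # x-part taken as the identified vertex, and h
\end{verbatim}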
\begin{figure}[t!]
\centering
\includegraphics[width=3.5in]{Photo/fig_illu_pve.pdf}
\caption{Illustrative example of the PVE algorithm. (a) Vertex identification in outer loop 1. (b) Number of new vertices and the error metric of each outer loop.}
\label{fig:pve}
\end{figure}
\subsubsection{Initialization}
At least $N_x+1$ initial vertices are required to ensure that the convex hull of the vertices is non-degenerate (i.e., full-dimensional). These vertices can be searched along each axis, i.e., by solving problem \eqref{md:vertex} with $\alpha = \pm e_i$, where $e_i$ is the $i$th standard basis vector, whose $i$th component equals 1 while the others equal 0. The set of initial vertices is denoted as $V^{(0)}$. In the illustrative example in Fig. \ref{fig:pve} (a), the blue polygon is the exact projection of a randomly generated polytope. Four initial vertices of the projection, i.e., $\hat{x}^{(0,1)},\hat{x}^{(0,2)},\hat{x}^{(0,3)}$, and $\hat{x}^{(0,4)}$, are identified for initialization.
\subsubsection{Inner loop}
The inner loop of the PVE algorithm constructs the convex hull of existing vertices and identifies new vertices outside the convex hull. The convex hull of existing vertices in the $k$th outer loop is denoted as $\hat{\Phi}^{(k)}$, which is an intermediate approximation of the projection. Assume $\hat{\Phi}^{(k)}$ is enclosed by $J^{(k)}$ facets and the $j$th facet lies on the boundary of the half-space $\tilde{A}_j^{(k)} x \leq \tilde{d}_j^{(k)}$. New vertices are searched along the outer normal directions of the facets of $\hat{\Phi}^{(k)}$. For the $j$th facet, the outer normal vector is $\tilde{A}_j^{(k)}$. Thereby, the $j$th inner iteration identifies a new vertex by solving problem \eqref{md:vertex} with $\alpha^\top = \tilde{A}^{(k)}_j$. The newly identified vertex is the optimal solution of problem \eqref{md:vertex}, denoted as $\hat{x}^{(k,j)}$. The improvement ratio (IR) is calculated as in \eqref{def:ir} to measure the contribution of vertex $\hat{x}^{(k,j)}$ to improving the current approximation.
\begin{equation}
\label{def:ir}
\Delta h^{(k,j)} = \frac{(\tilde{A}^{(k)}_j)^\top \hat{x}^{(k,j)} - \tilde{d}^{(k)}_j}{\| \tilde{A}^{(k)}_j\|}.
\end{equation}
The IR measures the distance from vertex $\hat{x}^{(k,j)}$ to the $j$th facet of $\hat{\Phi}^{(k)}$ and is non-negative. If $\Delta h^{(k,j)}>0$, indicating that $\hat{x}^{(k,j)}$ is outside $\hat{\Phi}^{(k)}$ and contributes to improving the current approximation, then $\hat{x}^{(k,j)}$ is appended to the vertex set $V$ and $\Delta h^{(k,j)}$ is recorded in set $H^{(k)}$. If $\Delta h^{(k,j)} = 0$, indicating that $\hat{x}^{(k,j)}$ is on a facet of $\hat{\Phi}^{(k)}$ and does not improve the current approximation, then the vertex is omitted.
The example in Fig. \ref{fig:pve} (a) exhibits the vertex identification in the first outer loop ($k=1$). The convex hull of existing vertices is $\hat{\Phi}^{(1)}$, as the red polygon shows. Vertices $\hat{x}^{(1,1)},\hat{x}^{(1,2)},\hat{x}^{(1,3)}$, and $\hat{x}^{(1,4)}$ are identified along the outer normal vectors of the four facets of $\hat{\Phi}^{(1)}$, respectively. The distance of each vertex to the corresponding facet of $\hat{\Phi}^{(1)}$ is its IR, as marked by the dotted lines in Fig. \ref{fig:pve} (a).
\subsubsection{Outer loop and termination criterion}
The outer loop evaluates the error of the current approximation and compares it with the pre-specified error tolerance to decide whether to terminate the algorithm. The Hausdorff distance between the real projection $\Phi$ and the current polytope $\hat{\Phi}^{(k)}$ is employed as the error metric, which is defined as follows \cite{ref:hd},
\begin{equation}
D^{(k)} := \max_{x_1 \in \Phi} \min_{x_2 \in \hat{\Phi}^{(k)}} \|x_2-x_1\|_2.
\end{equation}
The interpretation of $D^{(k)}$ is the maximum distance from points in $\Phi$ to polytope $\hat{\Phi}^{(k)}$. Note that $\hat{\Phi}^{(k)}$ is the convex hull of existing vertices of $\Phi$, therefore $\hat{\Phi}^{(k)} \subset \Phi$. Hence, $D^{(k)}$ will be the maximum distance from vertices of $\Phi$ to polytope $\hat{\Phi}^{(k)}$. According to the definition in \eqref{def:ir}, the distance from each identified vertex of $\Phi$ to polytope $\hat{\Phi}^{(k)}$ is actually the corresponding IR. Thereby, the error metric can be evaluated as follows,
\begin{equation}
\label{eq:cal_hd}
\begin{split}
D^{(k)} & = \max_{x_1 \in V} \min_{x_2 \in \hat{\Phi}^{(k)}} \|x_2-x_1\|_2 \\
& =\max_{j} \Delta h^{(k,j)}\\
& = \max H^{(k)}.
\end{split}
\end{equation}
From the above derivation, the Hausdorff distance can be evaluated by selecting the largest value from a set of scalars, which will consume much less computation effort compared to error metrics based on the region volume. If $D^{(k)}$ is no larger than the pre-specified error tolerance $\varepsilon$, the algorithm terminates and outputs the convex hull of existing vertices as the approximation of the real projection. The error metric based on the Hausdorff distance ensures that the maximum deviation of the approximation is within $\varepsilon$. If the termination criterion is not met, the algorithm moves into the next outer loop to identify more vertices outside the current approximation.
In Fig. \ref{fig:pve} (a), the error metric $D^{(1)}$ for outer loop $k=1$ equals the IR of vertex $\hat{x}^{(1,3)}$, which has the farthest distance to $\hat{\Phi}^{(1)}$. Fig. \ref{fig:pve} (b) exhibits the number of new vertices and the error metric in each outer loop of the PVE algorithm. The algorithm terminates at the $4$th outer loop when $D^{(4)}=0$. A total of 22 vertices of the projection are identified. As shown in the figure, the error metric keeps decreasing along the steepest path during the algorithm, indicating that vertices that have significant influences on the projection are identified with higher priority.
With the Hausdorff distance-based error metric, the process of the PVE algorithm can be interpreted as searching for vertices that make the approximation error decline fastest. This error metric also has two further advantages. First, the Hausdorff distance can be evaluated conveniently according to \eqref{eq:cal_hd}, which brings no additional computational complexity. Second, the evaluation of the Hausdorff distance does not rely on the ground truth of the projection result and thus, the proposed error metric can be used to control the accuracy during the projection algorithm. In contrast, other error metrics, e.g., the volume difference metric and the Jaccard similarity, rely on knowledge of the exact projection result and consequently are hard to calculate and cannot be used to control the accuracy of the projection algorithm.
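Putting the pieces together, a compact sketch of the PVE loop is given below (ours, in Python/SciPy, reusing the \texttt{identify\_vertex} helper sketched above; it assumes a bounded, full-dimensional projection and omits the accelerating strategies discussed in the next subsection).
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

def pve(A, B, c, eps=1e-6, max_outer=100):
    # Progressive vertex enumeration (sketch): returns vertices of an inner
    # approximation of the projection of {(x, y): A x + B y <= c} onto x.
    n_x = A.shape[1]
    V = []                                             # initialization along +/- each axis
    for i in range(n_x):
        for sign in (1.0, -1.0):
            alpha = np.zeros(n_x)
            alpha[i] = sign
            v, _ = identify_vertex(alpha, A, B, c)
            V.append(v)
    V = np.unique(np.round(np.array(V), 9), axis=0)

    for _ in range(max_outer):                         # outer loop
        hull = ConvexHull(V)                           # conv(V); each facet: a_j x <= d_j
        irs = []
        for eq in hull.equations:                      # inner loop over facets
            a_j, d_j = eq[:-1], -eq[-1]
            v, h = identify_vertex(a_j, A, B, c)       # search along the outer normal
            ir = (h - d_j) / np.linalg.norm(a_j)       # improvement ratio (IR)
            if ir > 1e-9:                              # strictly outside conv(V): keep it
                V = np.vstack([V, v])
                irs.append(ir)
        D = max(irs) if irs else 0.0                   # Hausdorff-distance error metric
        if D <= eps:
            break
    return V
\end{verbatim}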
\subsection{Discussions}
\subsubsection{Properties of the PVE algorithm}
The convergence of the PVE algorithm and the conservatism of the approximation are guaranteed by the following theorem.
\begin{theorem}
If error tolerance $\varepsilon>0$, the output of the PVE algorithm is a conservative approximation of the exact projection. If $\varepsilon=0$, the exact projection can be obtained via the PVE algorithm within finite calculations.
\end{theorem}
\begin{proof}
In each round of the outer loop of the PVE algorithm, the convex hull of existing vertices is an intermediate approximation of the projection. Since the projected region is a polytope, the convex hull of any subset of its vertices is a subset of the exact projection, i.e., $\hat{\Phi}^{(k)} \subset \Phi$ for any $k$. Hence, if $\varepsilon>0$, a conservative approximation of the exact projection is obtained with a Hausdorff distance no larger than $\varepsilon$. If $\varepsilon$ is set to 0, then no point of $\Phi$ is outside $\hat{\Phi}^{(k)}$ when the algorithm terminates, i.e., $\Phi \subset \hat{\Phi}^{(k)}$. Note that $\hat{\Phi}^{(k)} \subset \Phi$ and thus $\hat{\Phi}^{(k)} = \Phi$. Since the number of vertices of a polytope is finite, the algorithm terminates within finitely many steps.
\end{proof}
\subsubsection{Accelerating strategies}
Three strategies are proposed to further accelerate the PVE algorithm. The first is to accelerate the solution of the vertex identification problem in \eqref{md:vertex}. Note that only the value vector in the objective function varies across vertex identifications; the structure and parameters of the constraints are invariant. Thereby, the redundant constraint elimination technique and the warm simplex basis technique can be used to accelerate the solution of \eqref{md:vertex}. Redundant constraint elimination removes constraints that do not impact the feasible region to reduce the problem scale; typical methods can be found in \cite{ref:redundant1} and \cite{ref:redundant2}. The warm simplex basis can be specified for problem \eqref{md:vertex} using the solution of an adjacent vertex that lies on the facet whose outer normal vector is used as the search direction. Specifying the start basis reduces the number of iterations of the simplex algorithm and thus accelerates the vertex identification. The second strategy is to accelerate the inner loop of the PVE algorithm by parallel computing: in line 7 of the algorithm, the identification of each vertex is independent and can be calculated in parallel to save computation time. The third is to dynamically update the convex hull when adding new vertices in each outer loop (line 4 of Algorithm \ref{alg:pve}), instead of building a new convex hull from all vertices. The update of the convex hull can be realized by the quickhull method \cite{ref:qhull}, which is not only computationally efficient but also capable of constructing high-dimensional convex hulls, as sketched below.
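For the third strategy, Qhull's incremental mode provides exactly this dynamic update; a brief sketch (ours, assuming SciPy, with arbitrary illustrative points) is given below.
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
V0 = rng.random((6, 3))                    # initial vertex set (illustrative points)
hull = ConvexHull(V0, incremental=True)    # keep Qhull alive for later updates

new_vertices = rng.random((4, 3))          # vertices found in the next outer loop
hull.add_points(new_vertices)              # update facets without rebuilding from scratch
facets = hull.equations                    # refreshed H-rep of conv(V)
hull.close()                               # release Qhull resources when done
\end{verbatim}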
\subsubsection{Analysis of adaptability}
First, for problems where the projection region is a general convex set, the proposed PVE algorithm can still be applied. Boundary points of the projection region are identified by the algorithm, and the convex hull of these points is an inner approximation of the projection result whose error, measured by the Hausdorff distance, is less than $\varepsilon$. Second, if the projection region is non-convex, the PVE algorithm cannot be directly applied. A potential solution is to decompose the non-convex projection region into a union of several convex sub-regions and use the PVE algorithm to identify each sub-region. Third, for problems involving multiple time intervals, the coordination variable may have a high dimension and the direct application of the PVE algorithm would be inefficient. There are strategies to decompose the time-coupled projection problem into a series of lower-dimensional subproblems regarding each single time interval and adjacent intervals \cite{ref:lin}. After decomposition, each low-dimensional projection can be calculated efficiently by the PVE algorithm.
\section{Application to Multi-Area Coordinated Optimal Dispatch}\label{sec:use1}
\subsection{Problem Formulation}
Power systems in different areas are interconnected physically through inter-area transmission lines, but different areas may be operated by different system operators. The multi-area coordinated optimal dispatch (MACOD) is thus becoming indispensable for the economic and secure operation of large-scale power systems. The MACOD minimizes the total operation cost of the multi-area system in a decomposed manner.
Use $P$, $\pi$, and $\theta$ to represent variables of active power, operation cost, and voltage phase angle, respectively. Use superscript $G$, $D$, $B$, $RN$, $MA$, and $TL$ to label generation, load, power exchange at boundary node, regional network, multi-area system, and tie-line, respectively. Let $r\in \mathcal{R}^{RN}$ and $s \in \mathcal{S}$ index areas and tie-lines, respectively. Let $n \in \mathcal{N}^{RN}_r$ and $l\in \mathcal{L}^{RN}_r$ index network nodes and internal branches of the $r$th area, respectively. Then the objective function of a basic MACOD is as follows,
\begin{equation}
\label{macd:obj}
\min \quad \pi^{MA} = \sum_{r \in \mathcal{R}^{RN}} \pi^{RN}_r
\end{equation}
where $\pi^{RN}_r$ is the operation cost of area $r$ and $\pi^{MA}$ is the total cost of the multi-area system.
The following constraints need to be satisfied for area $r$.
\begin{subequations}
\label{macd:cst_area}
\begin{align}
& \overline{\pi}^{RN}_r \ge \pi^{RN}_r \ge \sum_{n \in \mathcal{N}^{RN}_r} \pi^{RN,G}_{r,n},\label{macd:cost_1}\\
& \pi^{RN,G}_{r,n} \ge a^{RN,G}_{r,n,i} P^{RN,G}_{r,n} + b^{RN,G}_{r,n,i}, \forall n,i,\label{macd:cost_2}\\
& \sum_{n \in \mathcal{N}^{RN}_r} P^{RN,G}_{r,n} = \sum_{n \in \mathcal{N}^{RN}_r} P^{RN,D}_{r,n} + \sum_{n \in \mathcal{N}^{RN,B}_r} P^{RN,B}_{r,n}, \label{macd:bal}\\
& \begin{aligned}
-\overline{F}^{RN}_{r,l} \leq & \sum_{n \in \mathcal{N}^{RN}_r} T_{r,l,n}P^{RN,G}_{r,n} - \sum_{n \in \mathcal{N}^{RN}_r} T_{r,l,n}P^{RN,D}_{r,n} \\
& - \sum_{n \in \mathcal{N}^{RN,B}_r} T_{r,l,n}P^{RN,B}_{r,n} \leq \overline{F}^{RN}_{r,l}, \forall l,
\end{aligned} \label{macd:branch}\\
& \underline{P}^{RN,G}_{r,n} \leq P^{RN,G}_{r,n} \leq \overline{P}^{RN,G}_{r,n}, \forall n.\label{macd:gen}
\end{align}
\end{subequations}
In the above model, equation \eqref{macd:cost_1} models the operation cost of area $r$ in the epigraph form, where $\pi^{RN,G}_{r,n}$ is the cost of generator $n$ and $\overline{\pi}^{RN}_r$ is a sufficiently large constant that bounds $\pi^{RN}_r$. Equation \eqref{macd:cost_2} is the epigraph of the piece-wise linear cost function of generator $n$, where $a^{RN,G}_{r,n,i}$ and $b^{RN,G}_{r,n,i}$ are the slope and intercept of the $i$th bidding segment, respectively. Equations \eqref{macd:bal}, \eqref{macd:branch}, and \eqref{macd:gen} enforce the power balance, transmission limits, and generation limits of area $r$. In these equations, $T_{r,l,n}$ is the power transfer distribution factor of node $n$ to branch $l$, $\overline{F}^{RN}_{r,l}$ is the capacity of branch $l$, and $\underline{P}^{RN,G}_{r,n}$ and $\overline{P}^{RN,G}_{r,n}$ are the lower and upper bounds of the generation output, respectively.
The operating constraints of tie-lines are as follows,
\begin{subequations}
\label{macd:cst_tie}
\begin{align}
& P^{TL}_{s} = \frac{1}{x_s}\left(\theta_{r^F_s} - \theta_{r^T_s} \right), \forall s,\label{macd:tie1}\\
& P^{RN,B}_{r,n} = \sum_{\substack{r^F_s = r, \\ n^F_s = n}} P^{TL}_s - \sum_{\substack{r^T_s = r,\\ n^T_s = n}} P^{TL}_s, \forall n, r,\label{macd:tie2}\\
& \underline{F}^{TL}_s \leq P^{TL}_{s} \leq \overline{F}^{TL}_s, \forall s. \label{macd:tie3}
\end{align}
\end{subequations}
Equation \eqref{macd:tie1} is the direct-current power flow model for tie-lines, where $x_s$ is the reactance of tie-line $s$, and $r^F_s$ and $r^T_s$ denote the sending and receiving regions of the tie-line, respectively. In this work, each area is aggregated as a single node when evaluating power flows on tie-lines, which is widely accepted in the literature \cite{ref:node1} and is also implemented in the flow-based market integration in Europe \cite{ref:node2}. Equation \eqref{macd:tie2} maps tie-line power flows to the boundary power injections of each area, where $n^F_s$ and $n^T_s$ denote the sending and receiving nodes of tie-line $s$, respectively. Equation \eqref{macd:tie3} enforces the transmission limits of tie-lines.
\subsection{EP-Based Solution for MACOD}\label{cp:esr_macd}
In the MACOD problem, variable $P^{RN,B}_{r,n}$ appears in constraints of both the regional system and tie-line system, which restricts the independent optimization of each area. Based on the EP theory proposed in Part I of this paper, the MACOD problem can be solved in a decomposed manner without iterative information exchange.
From the perspective of primal decomposition, $P^{RN,B}_{r}=(P^{RN,B}_{r,1},\cdots,P^{RN,B}_{r,|\mathcal{N}^{RN,B}_r|})$ and $\pi^{RN}_r$ are seen as the coordination variable corresponding to area $r$, while $P^{RN,G}_{r}=(P^{RN,G}_{r,1},\cdots,P^{RN,G}_{r,|\mathcal{N}^{RN,G}_r|})$ is the internal variable. According to \eqref{md:ofr}, the operation feasible region of each area is as follows,
\begin{equation}
\Omega^{RN}_r = \left\{(P^{RN,B}_r, \pi^{RN}_r, P^{RN,G}_{r}): \text{Eq.} \ \eqref{macd:cst_area}\right\}.
\end{equation}
According to \eqref{md:esr}, the EP model of the area is the projection of $\Omega^{RN}_r$ onto the subspace of coordination variable, i.e.,
\begin{equation}
\label{md_area:esr}
\begin{split}
\Phi^{RN}_r = \{& (P^{RN,B}_r, \pi^{RN}_r): \exists P^{RN,G}_{r}, \\
& \text{s.t. } (P^{RN,B}_r, \pi^{RN}_r, P^{RN,G}_{r}) \in \Omega^{RN}_r \}.
\end{split}
\end{equation}
The EP model of the regional system contains all values of $(P^{RN,B}_r, \pi^{RN}_r)$ that can be realized by at least one generation schedule satisfying the regional operation constraints with a generation cost no larger than $\pi^{RN}_r$. Note that the objective of the MACOD is to minimize the sum of $\pi^{RN}_r$. Using the EP model as a substitute for the regional system model \eqref{macd:cst_area} in the MACOD, the cost variable $\pi^{RN}_r$ is minimized and the decision result for $P^{RN,B}_r$ is guaranteed to be feasible for the regional system, which leads to the coordinated optimality of multiple areas. A theoretical proof of optimality is given in Theorem 3 of Part I of this paper.
Then the EP-based MACOD is realized following three steps.
\begin{enumerate}[1)]
\item Equivalent projection. Each area calculates the EP model $\Phi^{RN}_r$ of the local system according to the definition in \eqref{md_area:esr}, which can be realized by the PVE algorithm introduced in Section \ref{sec:pve}. Whereafter, $\Phi^{RN}_r$ is submitted to the multi-area coordinator.
\item Coordinated optimization. The coordinator solves the EP-based MACOD problem formulated as $\min \{\sum_{r} \pi^{RN}_r : \text{Eq. } \eqref{macd:cst_tie}, (P^{RN,B}_r, \pi^{RN}_r) \in \Phi^{RN}_r , \forall r \in \mathcal{R}^{RN} \}$. The optimal value $(\hat{P}^{RN,B}_r, \hat{\pi}^{RN}_r)$ is then published to each area.
\item Regional system operation. Each area fixes $\hat{P}^{RN,B}_r$ as the boundary condition and solves the local optimal dispatch problem formulated as $\min \{\pi^{RN}_r : \text{Eq.} \eqref{macd:cst_area}, P^{RN,B}_r = \hat{P}^{RN,B}_r\}$.
\end{enumerate}
The above process is executed sequentially, which naturally overcomes drawbacks caused by repetitive iterations of conventional coordinated optimization methods.
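To make step 2 concrete, the coordinator's problem can be written directly over the V-reps of the areas' EP models, i.e., by expressing $(P^{RN,B}_r, \pi^{RN}_r)$ as a convex combination of the vertices reported by area $r$. The following sketch is ours, using the CVXPY modeling package rather than the MATLAB/GUROBI implementation of the case studies; the vertex data, the single tie-line, and its sign convention are illustrative assumptions.
\begin{verbatim}
import numpy as np
import cvxpy as cp

# EP vertices of two areas (illustrative numbers); each row is (P_B, pi).
V1 = np.array([[-100.0, 800.0], [0.0, 500.0], [120.0, 900.0]])
V2 = np.array([[-150.0, 700.0], [0.0, 400.0], [100.0, 650.0]])

lam1 = cp.Variable(V1.shape[0], nonneg=True)     # convex-combination weights, area 1
lam2 = cp.Variable(V2.shape[0], nonneg=True)     # convex-combination weights, area 2
P_tie = cp.Variable()                            # single tie-line flow, area 1 -> area 2

x1 = V1.T @ lam1                                 # (P_B_1, pi_1) inside Phi_1
x2 = V2.T @ lam2                                 # (P_B_2, pi_2) inside Phi_2
constraints = [
    cp.sum(lam1) == 1, cp.sum(lam2) == 1,        # membership in conv(V_r)
    x1[0] == P_tie, x2[0] == -P_tie,             # tie-line flow to boundary injections
    P_tie >= -80.0, P_tie <= 80.0,               # tie-line capacity
]
prob = cp.Problem(cp.Minimize(x1[1] + x2[1]), constraints)
prob.solve()
print(P_tie.value, x1.value, x2.value)           # published back to the areas (step 3)
\end{verbatim}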
\subsection{Case Study}
The EP of the regional transmission system is visualized, and the computational performance of the EP calculation and the EP-based MACOD is tested. Case studies in this paper are simulated on a Lenovo ThinkPad X13 laptop with an Intel Core i7-10510U 1.80-GHz CPU. Algorithms are programmed in MATLAB R2020a with GUROBI V9.0.0.
\subsubsection{EP of regional system}
\begin{figure}[t!]
\centering
\includegraphics[width=3.5in]{Photo/fig_region_cfr.pdf}
\caption{EP models of the IEEE-24 system at (a) valley hour and (b) peak hour.}
\label{fig:reg_esr}
\end{figure}
We employ the IEEE-24 system to visualize the EP model of the regional transmission system. Two tie-lines are connected to node 1 and node 3 of the system, each with a capacity of $\SI{510}{MW}$ (15\% of the total generation capacity). The results are shown in Fig.~\ref{fig:reg_esr}, where subplots (a) and (b) exhibit the EP models at the valley-load hour (77\% of the peak load) and the peak-load hour, respectively. The red regions are the EP models of the system, which are 3-dimensional polytopes. As can be seen, the volume of the EP model at the peak hour is smaller than that at the valley hour. This is because the increased load consumes export capacity and raises the power supply cost. The projection of the EP model onto the subspace of $(P^{RN,B}_{r,1}, P^{RN,B}_{r,2})$, denoted as $\Xi^{RN}_r$, is also plotted; it characterizes the admissible set of tie-line flows that can be executed by the regional system.
\subsubsection{EP calculation}
\begin{table}[t!]
\centering
\begin{threeparttable}[c]
\caption{Computational Efficiency of EP Calculation}
\label{tab:esr_eff}
\renewcommand\arraystretch{1.3}
\setlength{\tabcolsep}{2.5mm}{
\begin{tabular}{ccccc}
\toprule
System & \makecell{$|\mathcal{N}^B_r|$} & \makecell{Time of \\ FME (s)} & \makecell{Time of \\PVE (s)} & \makecell{Rate of \\ model reduction}\\
\midrule
\multirow{2}{*}{IEEE-24}& 3 & \textgreater1200 & 0.34 & 95.4\% \\
& 6 & \textgreater1200 & 3.54 & 92.7\% \\
\multirow{2}{*}{SG-200} & 3 & \textgreater1200 & 0.46 & 99.8\% \\
& 6 & \textgreater1200 & 2.69 & 99.6\% \\
\multirow{2}{*}{SG-500} & 3 & \textgreater1200 & 0.47 & 99.9\% \\
& 6 & \textgreater1200 & 2.99 & 99.6\% \\
\bottomrule
\end{tabular}
}
\end{threeparttable}
\end{table}
We test the computational performance of the proposed PVE algorithm for the EP calculation on the IEEE-24 test system, the 200-node synthetic grid (SG-200), and the 500-node synthetic grid (SG-500) \cite{ref:synthetic}. Cases with 3 and 6 tie-lines are considered for each test system. The FME is employed as the benchmark method. As summarized in TABLE \ref{tab:esr_eff}, the FME fails to obtain the EP model within 1200 s even for the small-scale IEEE-24 system. The proposed PVE algorithm, in contrast, yields the EP model within 4 s for all six cases, which is acceptable for practical application. We measure the scale of the regional system model \eqref{macd:cst_area} by the product of the numbers of variables and constraints. With the EP, the model scale of the regional system is reduced by 92.7\%$\sim$99.9\%, which greatly alleviates the communication and computation burden of the coordinated dispatch of multiple areas while also protecting the private information of regional systems.
\subsubsection{EP-based MACOD}
\begin{table}[t]
\centering
\caption{Computational Efficiency of EP-based MACOD}
\label{tab:macd_eff}
\renewcommand\arraystretch{1.3}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{System} & \multirow{2}{*}{\makecell{Time of joint\\ optimization (s)}} & \multicolumn{3}{c}{Time of EP-based coordination (s)} \\
\cmidrule{3-5}
& & \makecell{System \\ reduction} & \makecell{Coordinated \\ optimization} & Total \\
\midrule
$20\times \text{SG-200}$ & 0.42 & 0.38 & 0.11 & 0.49\\
$40\times \text{SG-200}$ & 0.61 & 0.32 & 0.10 & 0.42\\
$20\times \text{SG-500}$ & 0.51 & 0.49 & 0.09 & 0.58\\
$40\times \text{SG-500}$ & 0.78 & 0.51 & 0.10 & 0.61\\
\bottomrule
\end{tabular}
}
\end{table}
Four test systems composed of 20 and 40 synthetic grids of different scales are constructed according to the tie-line topology in \cite{ref:tan_macd}. The MACOD is solved by the joint optimization and the EP-based decomposed solution, respectively. The optimization results from the two methods are verified to be identical, which validates the accuracy of the proposed coordination method. According to the process introduced in Section \ref{cp:esr_macd}, the time for system reduction depends on the region that takes the longest computation time, since EP models of different regions are calculated in parallel. The total time to obtain the optimal tie-line scheduling is the sum of the system reduction time and the coordinated optimization time. Results are summarized in TABLE \ref{tab:macd_eff}. As is shown, the total computation time of the EP-based MACOD is comparable with that of the joint optimization. With the EP, the time consumed by the coordinated optimization is reduced by more than 73.8\% compared with the joint optimization. This is because the scale of the coordination problem is significantly reduced with the EP of each region. In addition to the computation efficiency, the primary advantages of the proposed coordination method are avoiding private data disclosure compared with the joint optimization and avoiding iterative information exchange compared with conventional coordinated optimization methods.
\section{Application to Transmission-Distribution Coordinated Optimal Dispatch}\label{sec:use2}
\subsection{Problem Formulation}
Distribution networks installed with DERs can provide flexibility to the transmission system by adjusting the power exchange at the substation, which calls for the transmission-distribution coordinated optimal dispatch (TDCOD). In this study, optimal active power dispatch is considered for the transmission system, while the co-optimization of active and reactive power with voltage limits is considered for the distribution system. Let $P$, $Q$, $V$, and $\pi$ denote variables of active power, reactive power, voltage magnitude, and operation cost, respectively. Similar to the notation in Section \ref{sec:use1}, superscripts $G$, $D$, and $F$ label generation, load, and power flow, respectively, and superscripts $TN$ and $DN$ label variables of the transmission and distribution systems, respectively. Let $r\in\mathcal{R}^{DN}$ index the distribution networks participating in the coordinated dispatch. Let $n\in\mathcal{N}^{DN}_r$ and $l\in\mathcal{L}^{DN}_r$ index the nodes and branches of distribution network $r$, and let $n\in\mathcal{N}^{TN}$ and $l\in\mathcal{L}^{TN}$ index the nodes and branches of the transmission network.
The objective of the TDCOD is minimizing the total operation cost of the transmission and distribution systems,
\begin{equation}
\label{tdcd:obj}
\min \ \pi^{TD} = \sum_{n\in\mathcal{N}^{TN}}C_n(P^{TN,G}_n) + \sum_{r\in\mathcal{R}^{DN}} \pi^{DN}_r
\end{equation}
The power balance constraint, transmission limits, and generation limits are considered for the transmission network,
\begin{subequations}
\label{tdcd:tn}
\begin{align}
& \sum_{n\in\mathcal{N}^{TN}}P^{TN,G}_n + \sum_{r\in\mathcal{R}^{DN}}P^{DN}_{r,0} = \sum_{n\in\mathcal{N}^{TN}}P^{TN,D}_n, \label{tdcd:tn1}\\
& \begin{aligned}
-\overline{F}^{TN}_l \leq & \sum_{n\in\mathcal{N}^{TN}} T^{TN}_{l,n}\left(P^{TN,G}_n-P^{TN,D}_n \right) \\
& +\sum_{r\in\mathcal{R}^{DN}} T^{TN}_{l,r} P^{DN}_{r,0} \leq \overline{F}^{TN}_l, \forall l,
\end{aligned} \label{tdcd:tn2}\\
& \underline{P}^{TN,G}_n \leq P^{TN,G}_n \leq \overline{P}^{TN,G}_n , \forall n. \label{tdcd:tn3}
\end{align}
\end{subequations}
The operation constraints of the $r$th distribution network is as follows,
\begin{subequations}
\label{tdcd:dn}
\begin{align}
& \overline{\pi}^{DN}_r \ge \pi^{DN}_r \ge \sum_{n\in\mathcal{N}^{DN}_r} \pi_{r,n}^{DN,G},\label{tdcd:dn_cost1}\\
& \pi_{r,n}^{DN,G} \ge a^{DN,G}_{r,n,i} P^{DN,G}_{r,n} + b^{DN,G}_{r,n,i}, \label{tdcd:dn_cost2}\\
& \begin{aligned}
& P^{DN,G}_{r,n} + \sum_{m \in \mathcal{A}_n} P^{DN,F}_{r,mn}= P^{DN,D}_{r,n}, P^{DN}_{r,0} = \sum_{m \in \mathcal{A}_0} P^{DN,F}_{r,m0}, \\
& Q^{DN,G}_{r,n} + \sum_{m \in \mathcal{A}_n} Q^{DN,F}_{r,mn}= Q^{DN,D}_{r,n}, Q^{DN}_{r,0} = \sum_{m \in \mathcal{A}_0} Q^{DN,F}_{r,m0},
\end{aligned} \label{tdcd:dn_bal}\\
& V^2_{r,m} - V^2_{r,n} = 2(r_{r,mn} P^{DN,F}_{r,mn} + x_{r,mn} Q^{DN,F}_{r,mn}), \label{tdcd:dn_pf}\\
& (\cos \frac{2k\pi}{N_s})P^{DN,F}_{r,mn}+(\sin \frac{2k\pi}{N_s})Q^{DN,F}_{r,mn} \leq (\cos \frac{\pi}{N_s})\overline{F}^{DN}_{r,mn}, \forall k, \label{tdcd:dn_cap}\\
& \underline{V}^2_{r,n} \leq V^2_{r,n} \leq \overline{V}^2_{r,n}, \label{tdcd:dn_vol}\\
& \underline{P}^{DN,G}_{r,n} \leq P^{DN,G}_{r,n} \leq \overline{P}^{DN,G}_{r,n},\underline{Q}^{DN,G}_{r,n} \leq Q^{DN,G}_{r,n} \leq \overline{Q}^{DN,G}_{r,n}. \label{tdcd:dn_gen}
\end{align}
\end{subequations}
Equations \eqref{tdcd:dn_cost1} and \eqref{tdcd:dn_cost2} model the operation costs of the distribution system and of DER $n$ in the epigraph form, where $\overline{\pi}^{DN}_r$ is a sufficiently large constant, and $a^{DN,G}_{r,n,i}$ and $b^{DN,G}_{r,n,i}$ are the coefficients of the $i$th segment of the piece-wise linear cost function of DER $n$. Equation \eqref{tdcd:dn_bal} enforces the nodal power balance, where $P^{DN,F}_{r,mn}$ and $Q^{DN,F}_{r,mn}$ respectively denote the active and reactive power flows on the feeder from node $m$ to node $n$, and $\mathcal{A}_n$ is the set of nodes adjacent to node $n$. $P^{DN}_{r,0}$ and $Q^{DN}_{r,0}$ are the net active and reactive power that the distribution network injects into the transmission system. Equation \eqref{tdcd:dn_pf} is the simplified DistFlow model of the distribution network \cite{ref:distflow}, which incorporates reactive power and can be applied to distribution networks with large R/X ratios. Equation \eqref{tdcd:dn_cap} enforces the transmission limit of branch $mn$ with a piece-wise linear inner approximation, where $k=1,\dots,N_s$ indexes the segments and $\overline{F}^{DN}_{r,mn}$ is the capacity of the branch. Equations \eqref{tdcd:dn_vol} and \eqref{tdcd:dn_gen} enforce the limits on nodal voltage magnitudes and the power output of DERs, respectively. Taking $V^2_{r,n}$ as the independent variable, the above constraints are linear.
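To see why \eqref{tdcd:dn_cap} is an inner approximation of the apparent-power limit, note that the unit vectors $(\cos\frac{2k\pi}{N_s},\sin\frac{2k\pi}{N_s})$ define a regular $N_s$-gon whose inradius is $(\cos\frac{\pi}{N_s})\overline{F}^{DN}_{r,mn}$ and whose circumradius is exactly $\overline{F}^{DN}_{r,mn}$, so the polygon is contained in the disc $(P^{DN,F}_{r,mn})^2+(Q^{DN,F}_{r,mn})^2 \le (\overline{F}^{DN}_{r,mn})^2$. For instance, with $N_s = 4$ (chosen here only for illustration) the constraints reduce to
\begin{equation*}
\pm P^{DN,F}_{r,mn} \le \tfrac{\sqrt{2}}{2}\overline{F}^{DN}_{r,mn}, \qquad \pm Q^{DN,F}_{r,mn} \le \tfrac{\sqrt{2}}{2}\overline{F}^{DN}_{r,mn},
\end{equation*}
i.e., a square whose four vertices lie on the circle of radius $\overline{F}^{DN}_{r,mn}$.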
\subsection{EP-based Solution for TDCOD}
In problem \eqref{tdcd:obj}-\eqref{tdcd:dn}, the optimization of transmission and distribution systems are coupled by $P^{DN}_{r,0}$ and $\pi^{DN}_r$. For the $r$th distribution network, take $x^{DN}_r = (P^{DN}_{r,0}, \pi^{DN}_r)$ as the coordination variable and take $y^{DN}_r = (Q^{DN}_{r,0}, P^{DN,G}_{r}, Q^{DN,G}_{r}, P^{DN,F}_{r}, Q^{DN,F}_{r}, V^2_r)$ as the internal variable. The operation feasible region of the distribution network, denoted as $\Omega^{DN}_r$, is the set of $(x^{DN}_r,y^{DN}_r)$ subject to constraints in \eqref{tdcd:dn}. Then the EP model of the distribution network can be formulated according to the definition in \eqref{md:esr}, denoted as $\Phi^{DN}_r$. The EP model is the projection of $\Omega^{DN}_r$ onto the subspace of $x^{DN}_r$. Note that constraints \eqref{tdcd:dn} are linear and thus, both $\Omega^{DN}_r$ and its projection $\Phi^{DN}_r$ are polytopes, and $\Phi^{DN}_r$ can be calculated by the proposed PVE algorithm. The EP model $\Phi^{DN}_r$ contains all possible combinations of $P^{DN}_{r,0}$ and $\pi^{DN}_r$ that can be executed by the distribution network with at least one internal generation schedule satisfying constraints in \eqref{tdcd:dn}. Using EP models of distribution networks to replace their original model in the TDCOD problem, the operation cost $\pi^{DN}_r$ of the distribution network will be minimized as in \eqref{tdcd:obj}, and the active power exchange $P^{DN}_{r,0}$ determined by the transmission system operator will be ensured to be feasible for the distribution network. Hence, the EP-based coordinated optimization of transmission and distribution systems will be identical to the joint optimization.
The process of the EP-based TDCOD contains three successive steps.
\begin{enumerate}[1)]
\item Equivalent projection. Each distribution system calculates the EP model $\Phi^{DN}_r$ and reports it to the transmission system operator.
\item Coordinated optimization. The transmission system operator solves the EP-based TDCOD problem, i.e., minimizes the objective function in \eqref{tdcd:obj} subject to constraint \eqref{tdcd:tn} and $(P^{DN}_{r,0}, \pi^{DN}_r) \in \Phi^{DN}_r$. The optimal solution $(\hat{P}^{DN}_{r,0}, \hat{\pi}^{DN}_r)$ is then published to each distribution system.
\item Distribution system operation. Each DSO fixes $\hat{P}^{DN}_{r,0}$ as the boundary condition and optimally dispatches the local system by solving the problem $\min \{\pi^{DN}_r : \text{Eq.} \eqref{tdcd:dn}, P^{DN}_{r,0}=\hat{P}^{DN}_{r,0} \}$.
\end{enumerate}
The above coordination process does not require iterations between the transmission and distribution systems. This coordination scheme is also compatible with existing transmission-level dispatch practice, in which the outputs of resources are scheduled based on their submitted information.
\subsection{Case Study}
\subsubsection{EP of the distribution network}
\begin{figure}[t!]
\centering
\includegraphics[width=3.5in]{Photo/fig_esr_dist.pdf}
\caption{EP models of the IEEE-13 distribution network at (a) valley load and (b) peak load.}
\label{fig:dist_esr}
\end{figure}
The modified IEEE-13 test feeder with 6 DERs from reference \cite{ref:tan_rcc} is used to demonstrate the EP of the distribution network. The red polygons in Fig. \ref{fig:dist_esr} show the EP models of the test system at the valley and peak hours, which are two-dimensional regions in the space of the net power exchange $P^{DN}_{r,0}$ and the operation cost $\pi^{DN}_r$. The projection of the EP model onto the subspace of $P^{DN}_{r,0}$ is denoted as $\Xi^{DN}_r$ and characterizes the range of flexibility that the distribution network can provide. The green curves with triangular markers are the cumulative cost functions of the DERs. As can be seen, the cumulative cost curve lies beneath the EP model and spans a wider range of net power exchange. This is because network limits are omitted when generating the cumulative cost curve. Hence, merely aggregating the cost curves of DERs is not sufficient for transmission-level dispatch; the EP of the distribution system, which incorporates both cost functions and network constraints, is needed for the optimal coordination between transmission and distribution systems.
\subsubsection{EP calculation}
\begin{table}[t!]
\centering
\caption{Computational Efficiency of EP Calculation}
\label{tab:dist_esr_eff}
\renewcommand\arraystretch{1.3}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{ccccc}
\toprule
System & \makecell{Number\\ of DERs} & \makecell{Time of\\FME (s)} & \makecell{Time of\\PVE (s)} & \makecell{Rate of \\ model reduction}\\
\midrule
DN-13 & 6 & 3.75 & 0.34 & 98.4\%\\
DN-25 & 12 & 41.29 & 0.32 & 99.5\%\\
DN-49 & 24 & \textgreater1200 & 0.37 & \textgreater99.9\% \\
DN-241 & 120 & \textgreater1200 & 0.62 & \textgreater99.9\% \\
DN-2401 & 1200 & \textgreater1200 & 2.37 & \textgreater99.9\%\\
\bottomrule
\end{tabular}
}
\end{table}
Multiple IEEE-13 feeders are connected at the root node to create larger-scale test systems with 25, 49, 241, and 2401 nodes to examine the computational performance of the EP calculation. As shown in TABLE \ref{tab:dist_esr_eff}, the proposed PVE algorithm obtains the EP results of all five test systems within 3 seconds. In contrast, the conventional FME method requires more than 10 and 129 times the computational effort to calculate the EP for the 13-node and 25-node test systems, respectively. For test systems with more than 49 nodes, the FME fails to yield the EP results within 1200 s. This result shows the superiority of the PVE-based EP calculation in terms of computational efficiency and scalability. Regarding the reduction of the model scale, the EP model of the distribution network is described by a small group of two-dimensional linear constraints, which reduces the model scale of the distribution network by more than 98.4\%.
\subsubsection{EP-based TDCOD}
\begin{table}[t]
\centering
\caption{Computational Efficiency of EP-based TDCOD}
\label{tab:tdcd_eff}
\renewcommand\arraystretch{1.3}
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{\makecell{Num of\\DNs}} & \multirow{2}{*}{\makecell{Time of joint\\ optimization (s)}} & \multicolumn{3}{c}{Time of EP-based coordination (s)} \\
\cmidrule{3-5}
& & \makecell{System \\ reduction} & \makecell{Coordinated \\ optimization} & Total \\
\midrule
$50$ & 0.26 & 0.31 & 0.08 & 0.39\\
$100$ & 0.47 & 0.30 & 0.09 & 0.39\\
$200$ & 0.68 & 0.31 & 0.12 & 0.43\\
$400$ & 0.97 & 0.31 & 0.20 & 0.51\\
\bottomrule
\end{tabular}
}
\end{table}
The IEEE-24 system is connected with different numbers of IEEE-13 distribution feeders to simulate the coordinated dispatch between transmission and distribution systems. The coordinated dispatch is solved by the joint optimization and by the EP-based coordination method, respectively. The optimization results of the two methods are verified to be identical for all test cases. The computation times of the joint optimization and the EP-based coordination are summarized in TABLE \ref{tab:tdcd_eff}. The computation time of the EP-based coordination is dominated by the system reduction process. With the EP, the time of the coordinated optimization is reduced by more than 69.2\%, which helps relieve the computational burden of the transmission system. The total computation time of the EP-based coordination is also lower than that of the joint optimization in cases with 100 or more distribution networks. This is because the EP distributes the computational burden of solving a large-scale centralized optimization to the subsystems in parallel and is thus more efficient for the coordinated management of numerous distribution networks.
\section{Conclusion}
This two-part paper proposes the EP theory to construct external equivalents of subsystems and realize the COD of power systems in a non-iterative fashion. To calculate the EP efficiently and accurately, Part II of this paper proposes a novel polyhedral projection algorithm termed the PVE, which characterizes the EP model by identifying its vertices and building their convex hull. The vertex identification process in the PVE algorithm is designed to give higher priority to vertices that are critical to the overall shape of the projection, which makes the approximation error decrease along the steepest path. The Hausdorff distance is employed to measure and control the calculation accuracy of the PVE algorithm, providing flexibility to balance the computational accuracy and the computational effort of the EP calculation in practice. The EP theory and the PVE algorithm are applied to the non-iterative COD of multi-area systems and of transmission-distribution systems. Case studies of different scales demonstrate the superiority of the proposed PVE algorithm in terms of computational efficiency and scalability compared with the conventional FME algorithm. The EP is verified to reduce the subsystem model scale by more than 92.7\% for regional transmission systems and by more than 98.4\% for distribution networks, which alleviates the computation and communication burden in coordinated dispatch. The effectiveness of the EP-based non-iterative coordinated dispatch is also validated by comparison with the joint optimization.
The proposed EP theory and the non-iterative coordinated optimization framework are general and may find a broad spectrum of applications beyond the instances in this paper, e.g., the coordinated dispatch of multi-energy networks, the coupling of power and traffic networks, and the integration of user-side flexible resources. Applications of the PVE algorithm to other projection problems, e.g., flexibility region aggregation, the loadability set \cite{ref:loadability1}, and projection-based robust optimization \cite{ref:ro_prj}, are also worthy of future investigation. Incorporating robust feasibility and chance constraints to deal with uncertainty, as well as inter-temporal constraints, into the projection calculation are natural extensions of the proposed method.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
|
{
"arxiv_id": "2302.13262",
"language": "en",
"timestamp": "2023-02-28T02:14:33",
"url": "https://arxiv.org/abs/2302.13262",
"yymm": "2302"
} |
\section{Introduction}
Differential equations are the \textit{de facto} standard for learning dynamics of biological \citep{hirsch2012differential} and physical \citep{tenenbaum1985ordinary} systems.
When the observed phenomenon is deterministic, the dynamics are typically expressed in terms of ordinary differential equations (ODEs).
Traditionally, ODEs have been built from a mechanistic perspective, in which states of the observed system, as well as the governing differential function and its parameters, are specified by domain experts.
The recent surge of auto-differentiation tools in machine learning has enabled \textit{black-box ODEs}, where the differential function is defined by a neural network, e.g. Neural ODEs \citep{chen2018neural}, or a Gaussian process \citep{heinonen2018learning}.
These approaches have proven successful in learning surrogates for unknown differential functions in a completely data-driven way.
Furthermore, when used in conjunction with encoder-decoder structures, this family of models can accurately capture the dynamics from high-dimensional sequences such as videos \citep{park2021vid, kanaa2021simple}.
Despite their expressive power and impressive in-distribution predictive accuracy, neural ODEs often suffer from worsening performance when extrapolated over time \citep{rubanova2019latent}.
Moreover, in contrast to hand-crafted mechanistic models, latent variable models such as neural ODEs are non-identifiable \citep{wang2021posterior}.
This gets even worse when the dynamics involve discontinuous jumps and additional non-linear modules such as recurrent neural nets \citep{rubanova2019latent,de2019gru}.
An orthogonal line of work incorporates inductive biases into architectures by defining Hamiltonian \citep{zhong2019symplectic}, Lagrangian \citep{lutter2019deep}, second-order \citep{yildiz2019ode2vae}, or graph neural network based dynamics \citep{poli2019graph}. However, these methods do not address the issue that the dynamics information is often entangled with content information within the data observations.
This work aims at disentangling dynamics from static variables in the context of neural ODEs, enabling robust neural ODE models that can generalize across different settings and improve long-term predictions.
More specifically, we propose to equip the latent dynamics space with \textit{time-invariant} variables that control either the differential function or the mapping from latent variables to observations.
Considering an example of bouncing objects, our approach allows us to disentangle the dynamic states (\textit{e.g.} object positions and velocities) from the static parameters modulating the dynamics (\textit{e.g.} object sizes and friction), as well as from the static features irrelevant to the dynamics (\textit{e.g.} colors of the objects).
In our experiments, we show that a disentangled representation is key to achieving better future predictions, as well as generalization across dynamics governed by different static parameters.
Our contributions are as follows:
\begin{itemize}
\item We extend the standard latent neural ODE model by disentangling the physical modeling space from static, time-invariant variables. Inspired by well-established techniques \citep{kondor2008group, van2018learning}, we obtain time-invariant representations by averaging over the embeddings of individual time points or subsequences (Section~\ref{sec:inv-var}).
\item Our framework paves the way to use the recent advances in deep representation learning \citep{bengio2013representation} for dynamic modeling.
To demonstrate this, we introduce a simple contrastive objective based on self-supervised representation learning \citep{grill2020bootstrap} (Section~\ref{sec:ssl}).
\item Our model consistently leads to more accurate long-term predictions as empirically shown on simulated sequences (Sections~\ref{sec:sin}~and~\ref{sec:lv}), rotating images (Section~\ref{sec:rot-mnist}), and bouncing ball image sequences (Section~\ref{sec:bb}).
\end{itemize}
While introduced for latent neural ODEs, the same construction can also benefit discrete and/or stochastic dynamical systems.
We discuss the implications of our method on learning object-centric representations and its potential Bayesian extensions (Section~\ref{sec:disc}).
Our implementation will be made available on GitHub upon acceptance.
\section{Background}
\label{sec:backgr}
\subsection{Ordinary differential equations}
\label{sec:odes}
Multivariate ordinary differential equations are defined as
\begin{align}
\dot{\mathbf{x}}(t) := \hb{}{\mathbf{x}(t)}{t} = \mathbf{f}(t,\mathbf{x}(t)), \label{eq:ode1}
\end{align}
where $t \in \mathbb{R}_+$ denotes \emph{time}, the vector $\mathbf{x}(t) \in \mathbb{R}^d$ captures the \emph{state} of the system at time $t$, and $\dot{\mathbf{x}}(t) \in \mathbb{R}^d$ is the \emph{time derivative} of the state $\mathbf{x}(t)$.
In this work, we focus on ODE systems that do not explicitly depend on time, implying a vector-valued \emph{(time) differential function} $\mathbf{f}: \mathbb{R}^{d} \mapsto \mathbb{R}^d$.
The ODE state solution $\mathbf{x}(t_1)$ is computed by integrating the differential function starting from an initial value $\mathbf{x}(t_0)$:
\begin{align}
\mathbf{x}(t_1) = \mathbf{x}(t_0) + \int_{t_0}^{t_1} \mathbf{f}(\mathbf{x}(\tau))\d \tau. \label{eq:int}
\end{align}
Due to the deterministic nature of the differential function, the ODE state solution $\mathbf{x}(t_1)$ is completely determined by the corresponding initial value $\mathbf{x}(t_0)$.
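In practice, the integral in Eq.~\eqref{eq:int} is evaluated with a numerical solver. The snippet below is a minimal sketch using the \texttt{torchdiffeq} package (which we also use in our implementation); the linear differential function is a hypothetical stand-in chosen only for illustration.
\begin{verbatim}
# Minimal sketch: computing the ODE state solution of Eq. (2) numerically.
# The differential function is a hypothetical linear system.
import torch
from torchdiffeq import odeint

class LinearODE(torch.nn.Module):
    def forward(self, t, x):                       # f(t, x); time is unused here
        A = torch.tensor([[0.0, 1.0], [-1.0, 0.0]])
        return x @ A.T

x0 = torch.tensor([1.0, 0.0])                      # initial value x(t_0)
t = torch.linspace(0.0, 5.0, 51)                   # query time points
xt = odeint(LinearODE(), x0, t, method="dopri5")   # solution, shape (51, 2)
\end{verbatim}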
\subsection{Latent neural ODEs}
\label{sec:lnodes}
\citet{chen2018neural} proposed a sequential generative model, where each observed sequence $\mathbf{y}_{1:N_t} \in \mathbb{R}^{N_t\times D}$ at given time points $t_{1:N_t}$ is mapped into a \textit{latent trajectory} $\mathbf{x}_{1:N_t} \in \mathbb{R}^{N_t\times q_x}$ with the notation $\mathbf{x}_i \equiv \mathbf{x}(t_i)$.
The generative model relies on random initial values,
their continuous-time transformation, and finally an observation mapping from latent to data space, as described below:
\begin{align}
\mathbf{x}_1 &\sim p(\mathbf{x}_1) \\
\mathbf{x}_i &= \mathbf{x}_1 + \int_{t_1}^{t_i} \mathbf{f}_{{\boldsymbol{\theta}}}(\mathbf{x}(\tau))\d \tau \label{eq:node2} \\
\mathbf{y}_i &\sim p_{\boldsymbol{\xi}}(\mathbf{y}_i \mid \mathbf{x}_i).
\end{align}
Here, the time differential $\mathbf{f}_{{\boldsymbol{\theta}}}$ is a neural network with parameters ${\boldsymbol{\theta}}$ (hence the name ``neural ODEs'').
Similar to variational auto-encoders \citep{kingma2013auto,rezende2014stochastic}, the ``decoding'' of the observations is performed by another non-linear neural network with a suitable architecture and parameters $\boldsymbol{\xi}$.
\subsection{Contrastive self-supervised learning}
\label{sec:bg-ssl}
Contrastive representation learning aims to learn a feature extractor, as well as a representation space, where given a prescribed notion of similarity, similar data points are embedded nearby and dissimilar points are far apart \citep{chopra2005learning}.
Given a data point $\mathbf{x}$ and a pair $(\mathbf{x}^+,\mathbf{x}^-)$ of positive (similar) and negative (dissimilar) samples, contrastive learning in its most general form learns a function $ {\mathbf{g}}$ that minimizes
\begin{align}
\min ~~ \| {\mathbf{g}}(\mathbf{x})- {\mathbf{g}}(\mathbf{x}^+) \|_2^2 - \| {\mathbf{g}}(\mathbf{x})- {\mathbf{g}}(\mathbf{x}^-) \|_2^2,
\end{align}
where the similarity is defined by the Euclidean distance.
While earlier works \citep{chopra2005learning, schroff2015facenet} define similarity using class labels and additional side information, more recent approaches rely on data augmentation \citep{cubuk2018autoaugment, chen2020simple, zbontar2021barlow, grill2020bootstrap}.
In our work, data points or subsequences are considered similar if they are from the same data trajectory.
Furthermore, as in \citet{grill2020bootstrap}, our learning scheme utilizes neither negative pairs nor the corresponding optimization objective. We view different time frames as different instances of the same object, so there are no true negative pairs.
\section{INODE: Invariant Neural ODEs}
\label{sec:inodes}
Most physical systems involve parameters that remain constant throughout a sequence\footnote{We use sequence to refer to a single data trajectory generated by a differential equation.}.
Such constant parameters could govern the dynamics, such as a pendulum system with different pendulum lengths/masses, or characterize the mapping from the physical modeling space to the observation space, \textit{e.g.,} colors of objects following the same physical rules \citep{li2018disentangled}.
The generative model of the standard latent neural ODEs does not explicitly represent the above-mentioned static variables.
NODEs learn the dynamical system by simply fitting the data (sequences), hence they do not enforce the constraint that certain parts of the latent state representation $\mathbf{x}(t)$ should remain constant.
Hence, we propose to extend the generative model as follows:
\begin{align}
\mathbf{c} &\sim p(\mathbf{c}) ~\qquad \texttt{// content variable} \\
\mathbf{m} &\sim p(\mathbf{m}) \qquad \texttt{// dynamics modulator}\\
\mathbf{x}_1 &\sim p(\mathbf{x}_1) \\
\mathbf{x}_i &= \mathbf{x}_1 + \int_{t_1}^{t_i} \mathbf{f}_{{\boldsymbol{\theta}}}(\mathbf{x}(\tau);\mathbf{m})~\d \tau \\
\mathbf{y}_i &\sim p_{\boldsymbol{\xi}}(\mathbf{y}_i \mid \mathbf{x}_i ~;~ \mathbf{c}).
\end{align}
We broadly refer to $\mathbf{c}$ and $\mathbf{m}$ as \emph{time-invariant} variables, and thus dub our method \emph{invariant neural ODE} (\textsc{INODE}).
Currently, standard latent neural ODEs have no mechanism that explicitly allows capturing either type of time-invariant variable.
In the following, we describe how our framework, \textsc{INODE}, explicitly models such time-invariant variables, resulting in better extrapolation and generalization abilities.
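As a minimal sketch of the generative model above, the differential function below takes the concatenation of the state and the dynamics modulator $\mathbf{m}$ and returns only the state derivative, so that $\mathbf{m}$ remains constant along the trajectory; layer sizes and the module structure are hypothetical.
\begin{verbatim}
# Minimal sketch of a modulated differential function f_theta(x; m).
# Layer sizes are hypothetical.
import torch
import torch.nn as nn

class ModulatedODEFunc(nn.Module):
    def __init__(self, q_x=8, q_c=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(q_x + q_c, hidden), nn.Tanh(), nn.Linear(hidden, q_x)
        )
        self.m = None                      # set per sequence before solving the ODE

    def forward(self, t, x):
        # m is concatenated to the state but has no derivative of its own
        return self.net(torch.cat([x, self.m], dim=-1))

func = ModulatedODEFunc()
func.m = torch.zeros(8)                    # dynamics modulator for one sequence
dx = func(0.0, torch.zeros(8))             # state derivative, shape (8,)
\end{verbatim}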
\subsection{Learning latent time-invariant variables}
\label{sec:inv-var}
We consider a set of observed sequences, denoted as $\mathbf{y}^{(n)}_{1:N_t}$, where $n \in \{1, \dots, N_s\}$ is the sequence index, $N_s$ is the total number of sequences and $N_t$ the number of time points.
Since we are interested in modeling per-sequence, time-invariant variables, a straightforward modeling approach would be learning a \emph{global} set of variables $\mathbf{C} = \{\mathbf{c}^{(1)},\ldots,\mathbf{c}^{(N_s)} \} \in \mathbb{R}^{N_s \times q_c}$ by mean-field inference \citep{blei2017variational} (similarly for $\mathbf{m}^{(n)}$).
While mean-field inference usually works well in practice, it does not specify how to obtain time-invariant variables for an unobserved sequence.
As we want our method to generalize to unseen dynamics parametrizations, we instead opt for amortized inference, which relies on an encoder network $\mathbf{g}_{\bpsi}$. Next, we discuss how to learn these time-invariant variables.
\paragraph{(i) Content variables}
In order to learn a latent content variable $\mathbf{c}^{(n)}$ that captures the time-invariant characteristics of the observed sequence, we average over the observation embeddings provided by a feature extractor $\mathbf{g}_{\bpsi}(\cdot)$ (\textit{e.g., }a convolutional neural network) with parameters ${\boldsymbol{\psi}}$:
\begin{align}
\mathbf{c}^{(n)} &= \frac{1}{N_t} \sum_{i=1}^{N_t} \mathbf{c}^{(n)}_i, \quad \mathrm{where~~} \ \mathbf{c}^{(n)}_i = \mathbf{g}_{\bpsi}\big(\mathbf{y}^{(n)}_i\big).
\end{align}
By construction, $\mathbf{c}^{(n)}$ is invariant to time (or more rigorously, invariant to time-dependent effects).
In turn, we map back to data space via a decoder that jointly maps from the latent dynamic state and latent content variable to the observed space (similarly to \citet{franceschi2020stochastic}):
\begin{align}
\mathbf{y}^{(n)}_i &\sim p_{\boldsymbol{\xi}}\Big(\mathbf{y}^{(n)}_i \mid \big[\mathbf{x}^{(n)}_i,\mathbf{c}^{(n)}\big]\Big) ~~ \forall i \in [1,\ldots,N_t].
\end{align}
Note that the same content variable $\mathbf{c}^{(n)}$ is fed as input for all observations within a sequence.
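A minimal sketch of this construction is given below: per-frame embeddings are averaged into $\mathbf{c}^{(n)}$, which is then concatenated to every latent state before decoding. The encoder and decoder are hypothetical stand-ins for the architectures used in the experiments.
\begin{verbatim}
# Minimal sketch: per-sequence content variable c^(n) by averaging per-frame
# embeddings, then conditioning the decoder on it. Architectures are stand-ins.
import torch
import torch.nn as nn

g_psi = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 16))     # feature extractor
decoder = nn.Sequential(nn.Linear(10 + 16, 28 * 28), nn.Sigmoid())

y = torch.rand(4, 16, 1, 28, 28)          # (N_s, N_t, C, H, W) mini-batch
N_s, N_t = y.shape[:2]

c_i = g_psi(y.reshape(N_s * N_t, -1)).reshape(N_s, N_t, -1)     # per-frame embeddings
c = c_i.mean(dim=1)                                             # time-invariant c^(n)

x_t = torch.randn(N_s, N_t, 10)           # latent ODE states (placeholder)
c_rep = c.unsqueeze(1).expand(-1, N_t, -1)                      # same c for all frames
y_mean = decoder(torch.cat([x_t, c_rep], dim=-1))               # likelihood mean
\end{verbatim}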
\paragraph{(ii) Dynamics modulators}
Unlike content variables, dynamics modulators cannot be inferred from individual time frames as such variables manifest themselves over time.
Therefore, we propose to extract these latent variables from multiple subsequent time frames of a sequence (\textit{i.e.,} a subsequence) instead of individual observations. Accordingly, the averaging now is over all subsequences of length $N_e$:
\begin{align}
\mathbf{m}^{(n)} &= \frac{1}{N_t-N_e} \sum_{i=1}^{N_t-N_e} \mathbf{m}^{(n)}_i, \quad \mathbf{m}^{(n)}_i = \mathbf{g}_{\bpsi}\big(\mathbf{y}^{(n)}_{i:i+N_e}\big),
\end{align}
where $N_e$ is the number of subsequent observations in a subsequence and $\mathbf{g}_{\bpsi}$ is a feature extractor with parameters ${\boldsymbol{\psi}}$.
In practice, the differential function takes as input the concatenation of the dynamic state, $\mathbf{x}^{(n)}(\tau)\in \mathbb{R}^{q_x}$, and the dynamics modulator, $\mathbf{m}^{(n)} \in \mathbb{R}^{q_c}$.
Therefore, we redefine the input space of the differential function: $\mathbf{f}: \mathbb{R}^{q_x+q_c} \mapsto \mathbb{R}^{q_x}$.
The resulting ODE system resembles augmented neural ODEs (ANODEs) \citep{dupont2019augmented}, except that the augmented variables of our interest $\mathbf{m}^{(n)}$ are constant across time.
It is important to note that $N_e$ should be sufficiently large so that the subsequences $\mathbf{y}^{(n)}_{i:i+N_e}$ are informative about the parameter.
For instance, it must hold that $N_e \geq 3$ if $\mathbf{m}$ impacts the acceleration, which is the second-order derivative of position.
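The following sketch illustrates the construction of $\mathbf{m}^{(n)}$: all length-$N_e$ subsequences of a sequence are embedded (here by a hypothetical GRU encoder) and the embeddings are averaged.
\begin{verbatim}
# Minimal sketch: dynamics modulator m^(n) by averaging embeddings of all
# length-N_e subsequences. The subsequence encoder is a hypothetical stand-in.
import torch
import torch.nn as nn

class SubseqEncoder(nn.Module):
    def __init__(self, obs_dim=2, q_m=8):
        super().__init__()
        self.gru = nn.GRU(obs_dim, q_m, batch_first=True)

    def forward(self, y_sub):              # (batch, N_e, obs_dim)
        _, h = self.gru(y_sub)
        return h[-1]                       # (batch, q_m)

def dynamics_modulator(y, N_e, enc):
    """y: (N_t, obs_dim) single sequence -> m: (q_m,)"""
    N_t = y.shape[0]
    subs = torch.stack([y[i:i + N_e] for i in range(N_t - N_e)])   # all subsequences
    return enc(subs).mean(dim=0)           # average over subsequences

enc = SubseqEncoder()
m = dynamics_modulator(torch.randn(50, 2), N_e=5, enc=enc)          # shape (8,)
\end{verbatim}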
\subsection{Optimization objective}
\label{sec:opt}
The marginal likelihood $p(Y)$ for a dataset $Y \equiv \big \{ \mathbf{y}_{1:N_t}^{(1)}, \ldots,\mathbf{y}_{1:N_t}^{(N_s)} \big\} \in \mathbb{R}^{N_s \times N_t\times D}$ is analytically intractable due to the non-linear differential function and the decoder network. Therefore, we resort to variational inference, a common practice for neural ODE models \citep{chen2018neural, rubanova2019latent}.
We approximate the unknown latent initial states $\mathbf{x}_1^{(n)}$ by amortized inference:
\begin{align}
\mathbf{x}_1^{(n)} &\sim q_{\bnu}(\mathbf{x}_1^{(n)}\mid\mathbf{y}^{(n)}_{1:N_t}),
\end{align}
where $q_{\bnu}(\cdot)$ stands for an \emph{encoding} distribution with parameters ${\boldsymbol{\nu}}$, which we choose to be a Gaussian with diagonal covariance.
For simplicity, we choose to maintain point estimates of the content variables $\mathbf{c}^{(n)}$. The maximization objective of \textsc{INODE} then becomes a lower bound ($\ELBO$) of the marginal likelihood \citep{blei2017variational}:
\begin{align}
\log p(Y) & \geq \textup{ELBO} = \sum_n \mathbb{E}_{q_{\bnu}} \left[ \log p\big(\mathbf{y}_{1:N_t}^{(n)} \mid \mathbf{x}_{1:N_t}^{(n)} \big) \right] - \notag \\ &\hspace*{2.5cm} \textup{KL}\left[ q_{\bnu}(\mathbf{x}_1^{(n)}) \| p(\mathbf{x}_1) \right].
\label{eq:ELBO}
\end{align}
We set the prior distribution to a $q_x$-dimensional isotropic Gaussian where $q_x$ is specified in the experiments.
We approximate the first expectation with Monte Carlo sampling of the latent trajectories.
As in variational auto-encoders, the decoder outputs the parameters of the likelihood model.
Note that the optimization w.r.t. the extractor network $\mathbf{g}_{\bpsi}(\cdot)$ is implicit. More specifically, the extractor network $\mathbf{g}_{\bpsi}(\cdot)$ is trained jointly with the other modules while maximizing the ELBO objective (Eq.~\ref{eq:ELBO}), similarly to \citet{franceschi2020stochastic}.
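For completeness, a minimal sketch of a single-sample Monte-Carlo estimate of Eq.~\eqref{eq:ELBO} is given below, assuming a Gaussian encoder, a Gaussian likelihood with fixed noise scale, and an isotropic Gaussian prior; \texttt{decode\_fn} is a hypothetical placeholder for the ODE solve followed by the decoder.
\begin{verbatim}
# Minimal sketch of the Monte-Carlo ELBO of Eq. (13) for one mini-batch.
import torch
import torch.distributions as dist

def elbo(y, mu_x1, logvar_x1, decode_fn, sigma_y=0.1):
    """y: (N_s, N_t, D); mu_x1, logvar_x1: (N_s, q_x); decode_fn: x1 -> y_hat."""
    q = dist.Normal(mu_x1, (0.5 * logvar_x1).exp())
    x1 = q.rsample()                               # one MC sample of the initial state
    y_hat = decode_fn(x1)                          # ODE solve + decoder, (N_s, N_t, D)
    log_lik = dist.Normal(y_hat, sigma_y).log_prob(y).sum(dim=(1, 2))
    prior = dist.Normal(torch.zeros_like(mu_x1), torch.ones_like(mu_x1))
    kl = dist.kl_divergence(q, prior).sum(dim=-1)
    return (log_lik - kl).sum()                    # sum over sequences
\end{verbatim}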
\subsection{\textsc{SINODE} : Self-supervised learning of invariant variables}
\label{sec:ssl}
So far our framework does not prescribe an explicit training objective for the time-invariant variables.
Yet, we could exploit the fact that the content variables $\mathbf{c}^{(n)}_i, \mathbf{c}^{(n)}_j$ of two different time points $t_i \neq t_j$ from the same $n$'th data trajectory should match
due to the definition of time-invariance.
Inspired by the recent advances in the self-supervised learning (SSL) literature \citep{grill2020bootstrap}, we propose an SSL objective that aims to maximize the cosine similarity between \emph{positive pairs}\footnote{We call \emph{positive pairs} the content variables computed from the same sequence.} of content variables
\begin{align}
\max ~~ \mathcal{L}_c = \sum_{n,i,j} \frac{{\mathbf{c}^{(n)}_i}^\top \mathbf{c}^{(n)}_j}{\|\mathbf{c}^{(n)}_i\| ~ \|\mathbf{c}^{(n)}_j\|} \label{eq:contr}
\end{align}
A similar objective can be also used for the dynamics modulators $\mathbf{m}^{(n)}$.
More specifically, the above learning objective computes the cosine of the angle between two vectors. Therefore, when maximized, the objective enforces the vectors to point in the same direction, i.e., the time-invariant variables to be similar. The benefit of this additional loss term is to prevent \emph{temporal information leaking} into the time-invariant variables. We empirically found that minimizing the similarity between \emph{negative pairs} (content variables that belong to different sequences) deteriorates the overall performance. This can be explained by the observation that the sequences are not different enough for negative pairs to exist: all sequences share the underlying differential function and the dynamics vary only through its parametrization, hence there are no \emph{true} negative pairs. Fig.~\ref{fig:cossim} shows an example of the content variable embeddings $\big\{\mathbf{c}_i^{(n)}\big\}_{i,n=1,1}^{N_t,N_s}$, where each $16\times16$ block depicts the content-variable similarities within a sequence; the proposed loss term aims to maximize these values.
Finally, we augment our optimization objective from Section~\ref{sec:opt} with the SSL objective (Eq.~\ref{eq:contr}) and maximize it with respect to all parameters:
\begin{align}
\argmax_{ {\boldsymbol{\theta}}, {\boldsymbol{\nu}}, {\boldsymbol{\psi}}, \boldsymbol{\xi}} ~~ \textup{ELBO} + \lambda \mathcal{L}_c,
\end{align}
where $\lambda$ is a weighting term that accounts for the different scales of the ELBO and $\mathcal{L}_c$ terms. We empirically validate that the model is robust to $\lambda$; hence, for experiments with the SSL objective we set $\lambda=1$ and call the resulting model \textsc{SINODE}.
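A minimal sketch of the similarity term in Eq.~\eqref{eq:contr} is given below; it returns the negative average cosine similarity between per-frame content variables of the same sequence (including the trivial $i=j$ terms), so that it can be added to a loss that is minimized.
\begin{verbatim}
# Minimal sketch of the self-supervised objective in Eq. (15): average
# pairwise cosine similarity between content variables of the same sequence.
import torch
import torch.nn.functional as F

def content_similarity_loss(c_i):
    """c_i: (N_s, N_t, q_c) per-frame content variables. Returns -L_c."""
    c = F.normalize(c_i, dim=-1)                   # unit-norm embeddings
    sim = torch.einsum('nid,njd->nij', c, c)       # (N_s, N_t, N_t) cosine similarities
    return -sim.mean()                             # maximize similarity of positive pairs

# usage sketch: total_loss = -elbo + lambda_ * content_similarity_loss(c_i)
\end{verbatim}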
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{cossim.png}
\vspace{-.5cm}
\caption{The cosine similarity between the content variables obtained from $N_s=25$ data sequences with a fixed sequence length $N_t=16$.
Since each observation is mapped to a unique content variable, we obtain $N_s\times N_t=400$ embeddings (hence, the matrix is $400 \times 400$).
As indicated by the $16\times 16$ block-matrix structure along the diagonal, the content variables $\mathbf{c}_i^{(n)}$ and $\mathbf{c}_j^{(n)}$ that belong to the same sequence have higher similarity (lighter colors) compared to pairs $\mathbf{c}_i^{(n)}$ and $\mathbf{c}_j^{(n)}$ that belong to different sequences.
}
\label{fig:cossim}
\end{figure}
\section{Related work}
\paragraph{Neural ODEs}
Since the neural ODE breakthrough \citep{chen2018neural}, there has been a growing interest in continuous-time dynamics modeling.
Such attempts include combining recurrent neural nets with neural ODE dynamics \citep{rubanova2019latent, de2019gru} where latent trajectories are updated upon observations, as well as Hamiltonian \citep{zhong2019symplectic}, Lagrangian \citep{lutter2019deep}, second-order \citep{yildiz2019ode2vae}, or graph neural network based dynamics \citep{poli2019graph}.
While our method \textsc{INODE} (as well as the extension \textsc{SINODE}) has been introduced in the context of latent neural ODEs, it can be directly utilized within these frameworks as well.
\paragraph{Augmented dynamics}
\citet{dupont2019augmented} augment data-space neural ODEs with additional latent variables and test their method only on classification problems. \citet{norcliffe2021neural} extend neural ODEs to stochastic processes by means of a stochastic, latent variable modulating the dynamics. In a related study, \citet{yildiz2022learning} propose to infer latent variables modulating data-space for Gaussian process based ODEs, with applications to low-dimensional problems. To the best of our knowledge, we are the first to explicitly enforce time-invariant variables.
\paragraph{Disentanglement and learning invariances}
Our learning scheme for time-invariant variables is closest to \citet{grill2020bootstrap} in that neither formulation involves negative pairs.
The idea of averaging for invariant function estimation was used in \citep{kondor2008group, van2018learning, franceschi2020stochastic}.
The latter proposes using such variables in the context of discrete-time stochastic video prediction.
Although relevant, their model involves two sets of dynamic latent variables, coupled with an LSTM.
\section{Experiments}
\label{sec:exp}
\paragraph{Datasets}
We benchmark on four datasets with varying complexity of the dynamics, as well as data dimensionality: a sinusoidal dataset (Section~\ref{sec:sin}), a Lotka-Volterra dataset (Section~\ref{sec:lv}), Rotating MNIST (\citet{casale2018gaussian}, Section~\ref{sec:rot-mnist}), and bouncing balls (\citet{sutskever2008recurrent}, Section~\ref{sec:bb}).
For the details of the datasets, please see App.~\ref{ap:exp-details}.
\paragraph{Implementation details}
We implement our model in PyTorch \cite{paszke2017automatic}.
The encoder, decoder, differential function, and invariant encoder network are all jointly optimized with the Adam optimizer \citep{kingma2014adam} with learning rate $0.002$.
For solving the ODE systems, we use the \texttt{torchdiffeq} package \cite{torchdiffeq}.
For more details on hyperparameters, ODE solvers and architecture details, see App.~\ref{app:exp_details}.
\paragraph{Compared methods}
In all experiments, we compare standard neural ODE (NODE, \citet{chen2018neural}) with our frameworks:
\textit{(a)} \textsc{INODE} - adding time-invariant variables, and \textit{(b)} \textsc{SINODE} - adding
time-invariant variables and an additional SSL training objective.
We use the same network architectures for all compared models to isolate the effects of time-invariant representation and SSL objective.
We do not compare against augmented neural ODEs \citep{dupont2019augmented}, as they present their methodology in the context of density estimation and classification.
Furthermore, the predictions of another similar method \citep{norcliffe2021neural} collapsed to the prior in our experiments; hence, we do not include its results in this version.
\paragraph{Comparison goals}
In all experiments, we aim to demonstrate whether time-invariant variables improve \textit{(i)} generalization to new test sequences that follow different content variables $\mathbf{c}$ and dynamics modulators $\mathbf{m}$, and \textit{(ii)} extrapolation to longer time horizons.
In particular, the sinusoidal, Lotka-Volterra, and bouncing balls datasets are used to confirm the utility of \emph{dynamics modulators}, while the rotating MNIST dataset is used to verify the advantage of the \emph{content variables}.
Note that during training, all models use the first $T_{in}$ observations as input to encode the latent initial states $\mathbf{x}^{(n)}_1$.
In addition, our framework uses $T_{inv}$ observations to extract the time-invariant variables $\mathbf{c}$ and $\mathbf{m}$.
For a fair comparison with NODE, we ensure that all models use the same number of data points to extract latent variables, and also the latent dimensionalities match.
We test the extrapolation performance by comparing the mean squared errors (MSE) over time horizons that exceed the training length $N_t$.
\paragraph{Reported metrics} In order to evaluate the performance of our framework, we report the test MSE at different sequence lengths (time-horizons), empirically compare the reconstructed sequences, as well as investigate the latent space learned by the models.
\subsection{Sinusoidal data}
\label{sec:sin}
We experiment with sinusoidal sequences \citep{rubanova2019latent, norcliffe2021neural}, where the generated sequences have different initial positions, frequencies, and amplitudes (for the equations, see Eq.~\ref{eq:sin} in App.~\ref{ap:exp-details}). The empirical results are presented in Fig.~\ref{fig:sin} (for additional sequences see Fig.~\ref{fig:sin_add} in App.~\ref{ap:add_results}); they show that our framework generalizes across unseen dynamics parametrizations and improves forecasting accuracy. More specifically, the standard NODE model fails to extrapolate beyond the training regime, especially for sine waves with low frequency. In contrast, \textsc{INODE} and \textsc{SINODE} produce consistent predictions also far into the future. The latent-space trajectories plotted in Fig.~\ref{fig:sin_latent} in App.~\ref{ap:add_results} confirm the benefit of disentangling the dynamics modulators from the underlying dynamics.
Moreover, the higher predictive accuracy of our methods is also shown in Fig.~\ref{fig:mse_all} (\textbf{a}), where our framework obtains a lower MSE than standard NODE across both medium and long sequence lengths. Lastly, the addition of the SSL objective brings minor improvements, as seen by the more consistent amplitude across different time steps in Fig.~\ref{fig:sin} and the higher explained variance in Fig.~\ref{fig:sin_latent} in App.~\ref{ap:add_results}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth,height=4cm]{figs/lv/lv_latent_all.pdf}
\vspace{-0.4cm}
\caption{
PCA embeddings of six latent Lotka-Volterra trajectories, each with a different color.
In total six samples are plotted for a horizon of $200$.
We argue that the true underlying two-dimensional dynamics are better captured by our INODE and SINODE.
}
\label{fig:lv_latent}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\textwidth,height=2.5cm]{figs/rot_mnist/rec_rot_id0_ap.pdf}
\caption{Model predictions on a rotating MNIST test sequence. (\textbf{a}) NODE \citet{chen2018neural}; (\textbf{b}) INODE (ours); (\textbf{c}) SINODE (ours).
At training time, NODE is conditioned on $T_{in}=5$, while \textsc{INODE} and \textsc{SINODE} condition on $T_{in}=1$. Reconstructions are performed for $N_{t}=16$. The time-invariant content variable is learned from $T_{inv}=16$ time frames.
At test time, we roll out the model for a longer prediction horizon ($N_t=30$).
The red colored box indicates where our model succeeds, but standard NODE fails.}
\label{fig:rec_rot}
\end{figure*}
\subsection{Lotka-Volterra (LV) benchmark}
\label{sec:lv}
Next, we test our methodology on the Lotka-Volterra
benchmark \citep{rubanova2019latent,norcliffe2021neural}, governed by a pair of first-order nonlinear differential equations describing the predator-prey dynamics (Eq.~\ref{eq:lv} in App.~\ref{ap:exp-details}). The generated trajectories have different initial positions and parameters that describe the interaction between the two populations across data sequences. Similar to sinusoidal data, the time-invariant variable of interest is the one that governs the dynamics, i.e. the interaction parameters.
The empirical results confirm the initial observations from the sinusoidal dataset: \emph{dynamics modulators} improve the predictive abilities of a NODE model, as seen in Fig.~\ref{fig:lv} in App.~\ref{ap:add_results}. In particular, even though NODE fits the training sequences well ($N_t=200$), its performance drops sharply outside this training regime, while our \textsc{INODE} and \textsc{SINODE} circumvent this. The test MSE confirms these observations, see Fig.~\ref{fig:mse_all} (\textbf{b}). For sequences shorter than $N_t$, the performance of NODE is on par with our framework (see Fig.~\ref{fig:lv_phase_ap} in App.~\ref{ap:add_results}), but on longer sequences our approach is superior, as indicated by a lower MSE.
Furthermore, we also visualise the generated latent space of the test sequences in Fig.~\ref{fig:lv_latent}.
The latent plots of our framework are better aligned with the true dynamics, as they more closely match the commonly observed phase diagrams of LV dynamics (see Fig.~\ref{fig:lv_phase_ap} in App.~\ref{ap:add_results}). Lastly, the addition of the SSL loss term brings minor improvements. In particular, it is beneficial when the observation frequency is low, see Fig.~\ref{fig:lv} in App.~\ref{ap:add_results}.
In such cases, when the data is limited, having an additional constraint might be beneficial for the model.
\subsection{Rotating MNIST}
\label{sec:rot-mnist}
We show that time-invariant variables are also useful for high-dimensional data, e.g. video sequences, by generating a dataset of rotating images of handwritten ``3'' digits taken from the MNIST dataset \cite{lecun1998mnist}.
We consider the same number of rotation angles ($N_t=16$) as \citet{casale2018gaussian, solin2021scalable}.
We randomize the initial rotational angle, but the rotation speed is kept fixed across all sequences.
In this set-up the content variable is the style of the digit. Hence, it does not affect the underlying dynamics.
The empirical results in Fig.~\ref{fig:rec_rot} confirm that our framework is beneficial for learning dynamics also from high-dimensional data sequences (for more sequences see Fig.~\ref{fig:rec_rot_ap} in App.~\ref{ap:add_results}).
The reconstruction quality is good for all models within the sequence length seen during training ($N_t=16$). However, already within a single forward pass the NODE model fails to reconstruct the true digit (see the red frame in Fig.~\ref{fig:rec_rot}), while \textsc{INODE} and \textsc{SINODE} not only reconstruct the original style of the digit, but also capture the correct frequency of the periodic movement.
The test MSE scores align with this observation. Our framework outperforms NODE with increasing sequence length, see Fig.~\ref{fig:mse_all} (\textbf{c}).
The latent plots of the test trajectories explain why NODE fails on longer sequences (Fig.~\ref{fig:rot_latent} in App.~\ref{ap:add_results}). Even though the NODE model has learned a rotation, its latent space is noisier, as the latent variable also contains information about the style of the digit, while for our framework the latent space forms almost perfect circles.
Lastly, the additional SSL loss objective does not seem to bring any benefits for this use case.
As an ablation study, we test our model on the same dataset with a sequence length of $N_t=6$ and twice the number of training sequences.
This constitutes a more challenging dataset since sequences do not even form half of a loop.
We replicated all the results above with MSE worsened by $30\%$.
\begin{figure*}[t]
\centering
\includegraphics[width=0.98\textwidth]{figs/bb/bb_rec_top.pdf}
\caption{Bouncing ball test reconstructions for every other observation.
The rows contrast the ground truth predictions (red circles) against the three models (note that the dataset is greyscale; colors here are only used for illustration). At training time, NODE, \textsc{INODE}, and \textsc{SINODE} condition on $T_{in}=10$. The time-invariant content variable is learned from $T_{inv}=10$ time frames. At test time, we roll out the model for a longer time interval ($N_t=50$). The red colored box indicates where discrepancies across models can be observed.}
\label{fig:rec_bb}
\end{figure*}
\subsection{Bouncing ball with friction}
\label{sec:bb}
To investigate our model's performance on video sequences we test it on the bouncing balls dataset, a benchmark often used in temporal generative modeling \cite{sutskever2008recurrent, gan2015deep, yildiz2019ode2vae}.
For data generation, we modify the original implementation of \citet{sutskever2008recurrent} by adding friction to the system.
We assume a per-sequence friction constant, which slows down the ball by a constant factor.
Our framework improves predictive capability for video sequences as confirmed by empirical observations, see Fig.~\ref{fig:rec_bb} (for additional sequences see Fig.~\ref{fig:bb-pred} in App.~\ref{ap:add_results} ) and test MSE (Fig.~\ref{fig:mse_all} (\textbf{c})).
More specifically, the standard NODE model fails to predict the correct position of the object at longer time horizons, while our framework \textsc{INODE} corrects this error (see the red box in Fig.~\ref{fig:rec_bb}).
Furthermore, the dynamics modulating variable is beneficial even at short prediction horizons, as visible in Fig.~\ref{fig:mse_all} (\textbf{c}).
Nonetheless, the addition of the SSL loss term does not bring any benefits.
\subsection{Ablation studies}
Lastly, we perform ablation studies on Lotka-Volterra dataset.
In particular, we demonstrate the effect of \textit{(i)} the number $N_s$ of training sequences, \textit{(ii)} the length $T_{inv}$ of the subsequence from which the models extract the time-invariant variables, \textit{(iii)} the numerical ODE solver, \textit{(iv)} the dimensionality $q_x,q_c$ of the latent dynamic and time-invariant variables, and \textit{(v)} the loss weighting term $\lambda$.
We repeat each experiment 4 times, and report the mean and standard deviation across repetitions.
Here we summarize the main findings and refer to Table~\ref{tab:supp:abl1} in the Appendix for further details.
First, we observe that the error decreases with more training sequences and longer inputs from which latent variables are extracted.
Next, the fixed-step ODE solvers (\texttt{Euler} and \texttt{RK4}) give slightly better results than an adaptive-step solver (\texttt{dopri5}).
The model seems to be somewhat robust to the latent dimensionality, while very small/large values impair learning as usual.
Finally, we observe that a moderate value for the self-supervised learning weight $\lambda$ leads to a better MSE than having very strong or no self-supervision.
Although the variance of the results makes comparisons difficult, the results indicate that $\lambda$ should be carefully chosen with cross-validation.
\section{Discussion}
\label{sec:disc}
We have presented an extension to NODE models for invariant dynamical modeling.
We introduce a time-invariant variable $\mathbf{c}^{(n)}$ that can model the content of the observed system and time-invariant variables $\mathbf{m}^{(n)}$ that can modulate the dynamics, resulting in an \textsc{INODE} model.
Furthermore, we investigate an additional constraint on these time-invariant variables via a self-supervised loss \eqref{eq:contr} that enforces similarity across time points, hence we call the subsequent model \textsc{SINODE}.
Our empirical results confirm that our framework leads to more accurate long-term predictions on simulated and video sequences, is able to generalise to new dynamic parametrisations and, as such, improves the existing continuous latent space models.
The main focus of the presented work is on deterministic periodic systems.
This could be extended to stochastic dynamics via an auxiliary variable that models the noise, similarly to \citet{franceschi2020stochastic}.
Furthermore, our framework can easily be used also in other temporal models.
Specifically, it could be combined with Gaussian process-based ODEs \citep{hegde2022variational}, leading to a Bayesian extension of the present framework.
Likewise, time invariances can be inferred via marginal likelihood as in \cite{van2018learning, schwobel2022last}, which would lead to a more holistic Bayesian framework.
Lastly, our framework could be extended to object-centric dynamical modeling, which would learn per-object representations coupled with a single dynamical model.
\section{Experiment details}
\label{ap:exp-details}
Below, we describe the data simulation.
Please refer to Table~\ref{tab:supp:datasets} for other properties of the datasets.
\paragraph{Sin data}
In the same vein as \cite{rubanova2019latent, norcliffe2021neural}, we first define a set of time points $T=\{0,0.1,0.2,\ldots,t_i,\ldots,4.9\}$.
Then each sequence $\mathbf{y}^{(n)}$ is generated as follows:
\begin{align}
a^{(n)} &\sim \mathbb{U}[1,3] \\
f^{(n)} &\sim \mathbb{U}[0.5,1.0] \\
\phi^{(n)} &\sim \mathbb{U}[0,1.0] \\
\label{eq:sin}
x^{(n)}_i &= a^{(n)} \sin \left( f^{(n)}t_i + \phi^{(n)} \right) \\
y^{(n)}_i &\sim \mathcal{N}( x^{(n)}_i , 0.1)
\end{align}
where $\mathbb{U}$ denotes the uniform distribution.
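A minimal sketch of this generator (with the noise scale interpreted as a standard deviation) is given below.
\begin{verbatim}
# Minimal sketch of the sinusoidal data generator in Eq. (sin).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 5.0, 0.1)                       # time grid T = {0, 0.1, ..., 4.9}

def sample_sequence():
    a = rng.uniform(1.0, 3.0)                      # amplitude
    f = rng.uniform(0.5, 1.0)                      # frequency
    phi = rng.uniform(0.0, 1.0)                    # phase
    x = a * np.sin(f * t + phi)
    return x + rng.normal(0.0, 0.1, size=t.shape)  # observation noise

Y = np.stack([sample_sequence() for _ in range(80)])   # (N_s, N_t)
\end{verbatim}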
\paragraph{Lotka-Volterra data}
To generate Lotka-Volterra sequences, we follow a similar generative model as above.
\begin{align}
\alpha^{(n)} &\sim \mathbb{U}[.1,.4] \\
\gamma^{(n)} &\sim \mathbb{U}[.1,.4] \\
\mathbf{x}_0^{(n)} &\sim \mathbb{U}[2,10] \\
\label{eq:lv}
\mathbf{x}_i^{(n)} &= \mathbf{x}_0^{(n)}
+ \int_0^{t_i}
\begin{bmatrix}
\alpha^{(n)} x_1^{(n)}(\tau) - x_1^{(n)}(\tau) x_2^{(n)}(\tau) /2 \\ x_1^{(n)}(\tau) x_2^{(n)}(\tau)/5 - \gamma^{(n)}x_2^{(n)}(\tau)
\end{bmatrix}
\d \tau \\
\mathbf{y}^{(n)}_i &\sim \mathcal{N}( \mathbf{x}^{(n)}_i , 0.1)
\end{align}
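The following sketch integrates these equations with an off-the-shelf solver; the integration settings are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the Lotka-Volterra generator in Eq. (lv).
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
t = np.arange(0.0, 20.0, 0.1)                      # 200 points with dt = 0.1

def sample_sequence():
    alpha = rng.uniform(0.1, 0.4)
    gamma = rng.uniform(0.1, 0.4)
    x0 = rng.uniform(2.0, 10.0, size=2)

    def f(_, x):                                   # predator-prey dynamics
        return [alpha * x[0] - x[0] * x[1] / 2.0,
                x[0] * x[1] / 5.0 - gamma * x[1]]

    sol = solve_ivp(f, (t[0], t[-1]), x0, t_eval=t)
    return sol.y.T + rng.normal(0.0, 0.1, size=(len(t), 2))

Y = np.stack([sample_sequence() for _ in range(500)])   # (N_s, N_t, 2)
\end{verbatim}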
\paragraph{Bouncing ball with friction}
For data generation, we use the script provided by \citet{sutskever2008recurrent}.
As a minor modification, each observed sequence has a friction constant drawn from a uniform distribution $\mathbb{U}[0,0.02]$.
We set the initial velocities to be unit vectors with random directions.
\begin{table*}[h]
\begin{center}
\caption{Details of the datasets. Below, $N$ denotes the number of sequences (for training, validation, and test), $N_t$ is the sequence length, $\Delta t$ denotes the time between two observations, $\sigma$ is the standard deviation of the added noise, $T_{in}$ denotes the number of initial time points the initial value encoder takes as input, $T_{inv}$ is the number of time points used to extract the invariant representations $\mathbf{c}^{(n)}$.
Further, $q_{x}$ and $q_{c}$ denote, respectively, the dimensionality of the dynamic state $\mathbf{x}(t)$ and the invariant representation for our model variant. For a fair comparison, the latent dimensionality of the neural ODE model is always set to $q_{x}+q_{c}$.}
\vspace{.2cm}
\label{tab:supp:datasets}
\begin{tabular}{c | c | c | c | c | c | c | c | c | c | c | c }
\toprule
\textsc{Dataset} & $N_{tr}$ & $N_{val}$ & $N_{test}$ & $N_t$ & $\Delta t$ & $\sigma$ & $T_{in}$ (NODE) & $T_{in}$ & $T_{inv}$ & $q_{x}$ & $q_{c}$ \tabularnewline \midrule
\textsc{Sinusoidal Data} & 80 & 25 & 25 & 50 & 0.1 & 0.1 & 10 & 3 & 10 & 4 & 4 \tabularnewline \midrule
\textsc{Lotka-Volterra} & 500 & 100 & 100 & 200 & 0.1 & 0.1 & 40 & 8 & 40 & 8 & 8 \tabularnewline \midrule
\textsc{Rotating MNIST} & 500 & 60 & 60 & 16 & 0.1 & 0.0 & 5 & 1 & 16 & 10 & 16 \tabularnewline \midrule
\textsc{Bouncing ball with friction} & 2000 & 100 & 100 & 25 & 1.0 & 0 & 10 & 10 & 10 & 10 & 2 \tabularnewline \midrule
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[!h]
\begin{center}
\caption{
Ablation studies performed on Lotka-Volterra dataset.
We repeat each experiment 4 times, and report the mean and standard deviation across repetitions.
In each table, we vary one factor and compare the results against the standard setup marked by $\star$ in each table ($N_{tr}=500, ~T_{inv}=40, \textsc{solver} = \textsc{euler}, ~q_x=8, ~q_c=8, ~\lambda=1.0)$.
\textbf{(a-b)} As expected, the error decreases with more training sequences and with longer inputs from which the latent variables are extracted.
\textbf{(c)} The dopri5 solver leads to slightly worse predictions than the fixed-step solvers Euler and RK4.
\textbf{(d)} The model seems to be somewhat robust to the dimensionality of the dynamic states $q_x$ and time-invariant representations $q_c$.
\textbf{(e)} Finally, we observe that a moderate value for the self-supervised learning weight $\lambda$ leads to a better MSE than having very strong or no self-supervision. Although the variance of the results makes comparisons difficult, the results indicate that $\lambda$ should be carefully chosen with cross-validation.
\\}
\label{tab:supp:abl1}
\begin{minipage}{.32\linewidth}
\centering
\textbf{(a)} \\
\begin{tabular}{ | c | c | }
\toprule
$N_{tr}$ & \textsc{MSE} \tabularnewline \midrule
100 & $9.750\pm1.296$ \tabularnewline \midrule
250 & $4.999\pm0.671$ \tabularnewline \midrule
$\mathbf{500^\star}$ & $\mathbf{3.643\pm0.406}$ \tabularnewline \midrule
\end{tabular}
\end{minipage}
\begin{minipage}{.32\linewidth}
\centering
\textbf{(b)} \\
\begin{tabular}{| c | c |}
\toprule
$T_{inv}$ & \textsc{MSE} \tabularnewline \midrule
10 & $5.283\pm1.344$ \tabularnewline \midrule
20 & $4.490\pm0.562$ \tabularnewline \midrule
$40^\star$ & $3.643\pm0.406$ \tabularnewline \midrule
$\mathbf{80}$ & $\mathbf{2.412\pm0.177}$ \tabularnewline \midrule
\end{tabular}
\end{minipage}
\begin{minipage}{.32\linewidth}
\centering
\textbf{(c)} \\
\begin{tabular}{ | c | c | }
\toprule
\textsc{ODE solver} & \textsc{MSE} \tabularnewline \midrule
$\textsc{\textbf{euler}}^\star$ & $\mathbf{3.643\pm0.406}$ \tabularnewline \midrule
\textbf{\textsc{RK4}} & $\mathbf{3.646\pm0.777}$ \tabularnewline \midrule
\textsc{dopri5} & $4.147\pm0.391$ \tabularnewline \midrule
\end{tabular}
\end{minipage}
\begin{minipage}{.32\linewidth}
\centering
\textbf{(d)} \\
\begin{tabular}{ | c | c | c | }
\toprule
$q_x$ & $q_c$ & \textsc{MSE} \tabularnewline \midrule
2 & 2 & $4.617\pm1.632$ \tabularnewline \midrule
2 & 8 & $3.818\pm0.679$ \tabularnewline \midrule
8 & 2 & $3.503\pm0.389$ \tabularnewline \midrule
4 & 4 & $3.762\pm0.262$ \tabularnewline \midrule
$8^\star$ & $8^\star$ & $3.643\pm0.406$ \tabularnewline \midrule
16 & 16 & $3.915\pm0.601$ \tabularnewline \midrule
\end{tabular}
\end{minipage}
\begin{minipage}{.32\linewidth}
\centering
\textbf{(e)} \\
\begin{tabular}{ | c | c | }
\toprule
$\lambda$ & \textsc{MSE} \tabularnewline \midrule
0 & $3.452\pm0.182$ \tabularnewline \midrule
$1.0^\star$ & $3.643\pm0.406$ \tabularnewline \midrule
\textbf{10.0} & $\mathbf{3.103\pm0.457}$ \tabularnewline \midrule
100.0 & $3.665\pm0.322$ \tabularnewline \midrule
1000.0 & $4.405\pm0.338$ \tabularnewline \midrule
\end{tabular}
\end{minipage}
\end{center}
\end{table*}
\clearpage
\section{Architecture and Hyperparameter Details}
\label{app:exp_details}
\begin{table*}[h!]
\begin{center}
\caption{Brief overview of the model architecture per dataset. Overall, the architecture of our model consists of 4 main parts: (a) Position Encoder; (b) Invariant Encoder; (c) Differential Function; and (d) Decoder.}
\vspace{.2cm}
\label{tab:supp:architecture}
\begin{tabular}{c | c | c | c | c | c }
\toprule
\textsc{Dataset} & Position Encoder & Invariant Encoder & Differential Function & Solver (dt) & Decoder \tabularnewline \midrule
\textsc{Sinusoidal Data} & RNN & RNN & MLP & euler (0.1) & MLP \tabularnewline \midrule
\textsc{Lotka-Volterra} & RNN & RNN & MLP & euler (0.1)& MLP \tabularnewline \midrule
\textsc{Rotating MNIST} & CNN & CNN & MLP & dopri5 (0.1) & CNN\tabularnewline \midrule
\textsc{Bouncing ball with friction} & CNN & RNN & MLP & dopri5 (0.1) & CNN \tabularnewline \midrule
\end{tabular}
\end{center}
\end{table*}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.7\textwidth, height=8cm]{figs/sin/sin_arch.pdf}
\caption{\textsc{INODE} or \textsc{SINODE} architecture for sinusoidal data. The NODE architecture is similar, but without the `Invariant Encoder'.}
\label{fig:sin_arch}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.7\textwidth, height=8cm]{figs/lv/lv_arch.pdf}
\caption{\textsc{INODE} or \textsc{SINODE} architecture for Lotka-Volterra data. The NODE architecture is similar, but without the `Invariant Encoder'.}
\label{fig:lv_arch}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.7\textwidth, height=8cm]{figs/rot_mnist/rot_mnist_arch.pdf}
\caption{\textsc{INODE} or \textsc{SINODE} architecture for Rotating MNIST. The NODE architecture is similar, but without the `Invariant Encoder'.}
\label{fig:rot_arch}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.7\textwidth, height=8cm]{figs/bb/bb_arch.pdf}
\caption{\textsc{INODE} or \textsc{SINODE} architecture for Bouncing Balls. The NODE architecture is similar, but without the `Invariant Encoder'.}
\label{fig:bb_arch}
\end{figure*}
\clearpage
\section{Additional Results}
\label{ap:add_results}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.98\textwidth,height=8cm]{figs/sin/sin_all6.pdf}
\caption{Model forecasting accuracy on test sinusoidal sequences. (\textbf{a}) NODE \citet{chen2018neural}; (\textbf{b}) \textsc{INODE} (ours); (\textbf{c}) \textsc{SINODE} (ours). The black line is the ground truth, the thick colored line is the mean across 20 samples, and the lighter lines represent individual samples (for \textsc{INODE} and \textsc{SINODE} not visible, as the predictions match almost perfectly across samples). At training time, the model conditions on points in the red area and reconstructs points in the blue area. For \textsc{INODE} and \textsc{SINODE}, the dynamics modulating variable is learned from points in the pink area (including the preceding red area). At test time, the model is rolled out for a longer time interval ($N_t=150$).}
\label{fig:sin_add}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.8\textwidth,height=4cm]{figs/sin/sin_latent6_all_ap.pdf}
\caption{Latent space of sampled sinusoidal sequences. (\textbf{a}) NODE as in \citet{chen2018neural}. (\textbf{b}) INODE (ours). (\textbf{c}) SINODE (ours). A star indicates the beginning of a trajectory, and color indicates a data sample. In total, six samples are plotted for $N_{t}=150$. Faint lines correspond to samples from the model (20 in total), while the thicker line is the mean value.}
\label{fig:sin_latent_ap}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.98\textwidth,height=4cm]{figs/lv/lv_phase_ap.pdf}
\caption{Lotka-Volterra phase diagram. The black line indicates the underlying ground truth; green: \textsc{SINODE}; purple: \textsc{INODE}; blue: NODE \cite{chen2018neural}. Dashed lines correspond to samples from the corresponding model ($L=20$), while the thicker colored line is the mean.}
\label{fig:lv_phase_ap}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=0.98\textwidth,height=5cm]{figs/lv/lv_all4.pdf}
\caption{Model forecasting accuracy on test Lotka-Volterra sequences. (\textbf{a}) NODE \citet{chen2018neural}; (\textbf{b}) \textsc{INODE} (ours); (\textbf{c}) \textsc{SINODE} (ours). The black line is the ground truth, the thick colored line is the mean across 20 samples, and the lighter lines represent individual samples (for \textsc{INODE} and \textsc{SINODE} not visible, as the predictions match almost perfectly across samples). At training time, the model conditions on points in the red area and reconstructs points in the blue area. For \textsc{INODE} and \textsc{SINODE}, the dynamics modulating variable is learned from points in the pink area (including the preceding red area). At test time, the model is rolled out for a longer time interval ($N_t=600$).}
\label{fig:lv}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.8\textwidth,height=4cm]{figs/lv/lv_latent6_app.pdf}
\vspace{-0.4cm}
\caption{Latent space of sampled LV sequences. (\textbf{a}) NODE \citet{chen2018neural}. (\textbf{b}) \textsc{INODE} (ours). (\textbf{c}) \textsc{SINODE} (ours). A star indicates the beginning of a trajectory, and color indicates a data sample. In total, six samples are plotted for $N_{t}=200$. Faint lines correspond to samples from the model (20 in total), while the thicker line is the mean value.}
\label{fig:lv_latent_ap}
\end{figure*}
\begin{figure*}[h!]
\centering
\includegraphics[width=1.0\textwidth,height=2.5cm]{figs/rot_mnist/rec_rot_id0_ap.pdf}
\includegraphics[width=1.0\textwidth,height=2.5cm]{figs/rot_mnist/rec_rot_id3_ap.pdf}
\includegraphics[width=1.0\textwidth,height=2.5cm]{figs/rot_mnist/rec_rot_id9_ap.pdf}
\includegraphics[width=1.0\textwidth,height=2.5cm]{figs/rot_mnist/rec_rot_id17_ap.pdf}
\caption{Model prediction accuracy on test Rotating MNIST trajectories. (\textbf{a}) NODE \citet{chen2018neural}; (\textbf{b}) INODE (ours); (\textbf{c}) SINODE (ours). At training time, NODE conditions on $T_{in}=5$, while \textsc{INODE} and \textsc{SINODE} condition on $T_{in}=1$. Reconstructions are performed for $N_{t}=16$. The time-invariant content variable is learned from $T_{inv}=16$ time frames. At test time, we roll out the model for a longer time interval ($N_t=30$). The red colored box indicates where our model succeeds but standard NODE fails.}
\label{fig:rec_rot_ap}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.8\textwidth,height=4cm]{figs/rot_mnist/rot_latent_all.pdf}
\caption{Sampled Rotating MNIST sequence latent space. (\textbf{a}) NODE \citet{chen2018neural}. (\textbf{b}) \textsc{INODE} (ours). (\textbf{c}) \textsc{SINODE} (ours). A star indicates the beginning of a trajectory, while color indicates a data sample. In total six samples are plotted for $N_{t}=30$. Faint lines correspond to samples from the model (50 in total), while the thicker line is the mean value.}
\label{fig:rot_latent}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.98\textwidth]{figs/bb/bb_rec_all.pdf}
\caption{Bouncing ball test reconstructions for every even time frame. (\textbf{a}) NODE \citet{chen2018neural}; (\textbf{b}) INODE (ours); (\textbf{c}) SINODE (ours). The rows contrast the ground-truth predictions (red) against the three models. At training time, NODE, \textsc{INODE}, and \textsc{SINODE} condition on $T_{in}=10$ frames. The time-invariant content variable is learned from $T_{inv}=10$ time frames. At test time, we roll out the model for a larger time interval ($N_t=50$).}
\label{fig:bb-pred}
\end{figure*}
\section{Experiments}
Main story line: invariance + contrastive learning for latent dynamical modelling.
Key phrases: contrastive learning helps with learning invariances; latent models are not identifiable, therefore a more compact/structured latent space is needed (put prior knowledge in as lightly as possible, i.e., generically) via invariance (aka some structure is better than none).
\textbf{Add all experiments already to this file}
\paragraph{Model list (Benchmarks)}
\begin{itemize}
\item NODE, GPODE (standard)
\item INODE, IGPODE (invariant, not contrastive)
\item ICNODE, ICGPODE (invariant and contrastive)
\item \textcolor{gray}{BICNODE, BICGPODE (invariant and contrastive, with last-layer GP) (perhaps an overkill for this paper)}
\item \textcolor{gray}{RNN-ODE (Rubanova)}
\end{itemize}
\subsection{Sin data (augmented dynamics)}
\begin{enumerate}
\item Amplitude and frequency drawn from a uniform distribution
\begin{itemize}
\item \textbf{Show:} Invariance learning helps
\item \textcolor{gray}{\textbf{Show:} NODE can overfit but not GP-ODE on small data}
\end{itemize}
\item \textcolor{gray} {Amplitude and frequency drawn from a mixture of Gaussians. One mixture component is reserved for testing (for BICNODE or BICGPODE).}
\begin{itemize}
\item \textcolor{gray} {\textbf{Show:} Bayesian modeling helps avoid overfitting}
\end{itemize}
Code works, put on:
\begin{itemize}
\item standard Neural ODE
\item invariant Neural ODE
\item contrastive + invariant Neural ODE
\item \textcolor{gray}{Rubanova ODE}
\end{itemize}
\end{enumerate}
\subsection{Rotating MNIST}
\begin{itemize}
\item Rotating digit (3s) with random initial angle
\begin{itemize}
\item \textbf{Show:} Invariance learning helps
\end{itemize}
\item Few-shot learning (fix the dynamics and only adjust the content) using only a few sequences
\begin{itemize}
\item \textbf{Show:} Few-shot learning on a new digit (5s)
\end{itemize}
\item Dynamics governed by an actual underlying function (in simulation) rather than by a fixed constant (argument here being that we say we are learning a function)
\begin{itemize}
\item \textbf{Show:}
$\frac{dr(t)}{dt} = f(r(t)) \overset{?}{=} \sin(r(t)) + b$ \\
here; then the sin data experiment would not even be needed
\end{itemize}
\end{itemize}
\subsection{Moving MNIST}
\begin{enumerate}
\item Multiple digits moving in a frame
\begin{itemize}
\item \textbf{Show:} Invariant dynamics learning is better than non-invariant learning (currently we show this via a faster learning rate, in Cagatay's experiments)
\item \textbf{Show:} As a byproduct of invariant representation learning the dynamics function can be shared across multiple objects. (we show that it is better than invariant learning alone)
\end{itemize}
\begin{itemize}
\item \color{orange}{IN PROGRESS}: Increase data volume to 10'000 sequences in total (the maximum we have): train 9000, validate 1000; train on the whole sequence length $T=15$ (then we can check extrapolation)
\begin{itemize}
\item NeuralODE with their encoder/decoder DCGAN64 + our 10 lat and 16 cont (their latent encoding 128, content 256)
\item NeuralODE with their encoder/decoder DCGAN64 + adjust to 128 lat and 256 cont
\item GP-ODE with their encoder/decoder DCGAN64 + our 10 lat and 16 cont
\end{itemize}
\end{itemize}
\end{enumerate}
\subsection{Object-centric idea}
\begin{align}
[\mathbf{s}_0^1,\ldots,\mathbf{s}_0^A] &\sim q(\mathbf{y}_{1:N}) \quad \text{where the superscript $i$ denotes the object id} \\
[\mathbf{c}_0^1,\ldots,\mathbf{c}_0^A] &\sim q_\text{inv}(\mathbf{y}_{1:N}) \\
\mathbf{s}^i(t) &= \mathbf{s}_0^i + \int_0^t \mathbf{f}(\mathbf{s}^i(\tau))\, d\tau \\
\hat{\mathbf{y}}(n) &= \text{dec}(\mathbf{s}^1(t_n),\ldots,\mathbf{s}^A(t_n),\mathbf{c}^1(t_n),\ldots,\mathbf{c}^A(t_n))
\end{align}
\begin{itemize}
\item \textbf{Show:} Can generalize dynamics to multiple objects
\end{itemize}
Takeaway: we introduce a modular add-on that helps learn dynamics from an invariance perspective
\subsection{Experiments from \cite{franceschi2020stochastic}}
Main task: stochastic video prediction. They use the following metrics for evaluation: Peak Signal-to-Noise Ratio (PSNR), Structured Similarity (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). They compare their model to variational models (SV2P, SVG, SAVP, StructVRNN), with SVG having the closest resemblance.
\begin{enumerate}
\item Stochastic Moving MNIST (and deterministic)
\item KTH Action dataset
\item Human3.6M
\item BAIR
\item Varying frame rate in testing
\item Disentangling dynamics and content (achieves what we want); see Fig.~8 of their paper
\item Interpolation of dynamics
\end{enumerate}
\section{Introduction}
In the present work we examine fully stochastic latent temporal models which untangle temporal dynamics from frame content.
\textit{Note on deterministic models}: in contrast, deterministic models fail to generate sharp long-term video frames (see Babaeizadeh et al., 2018; Denton \& Fergus, 2018).
A known advantage of latent temporal models is that they leverage the computational advantages of State-Space Models (SSMs), as both operate in a low-dimensional latent space (Ref). However, where SSMs are .. and .., latent temporal models are efficient to train \cite{franceschi2020stochastic}.
\section{Related Work}
The focus of this paper is prediction for high-dimensional data.
\paragraph{Latent-Space Models}
State-Space Models (SSMs), Temporal Latent Space models (Discrete RNN, Continuous Neural ODEs)
\citet{franceschi2020stochastic} introduce a stochastic temporal model for stochastic video prediction whose dynamics are governed in a latent space by a residual update rule. The temporal evolution of the system is governed through residual updates of the latent state, conditioned on learned stochastic variables. They point to the limitations of neural ODEs: (i) restricted to low-data settings, (ii) prone to overfitting, (iii) unable to handle stochasticity.
The goal is to learn the distribution of possible future frames (for us maybe related to GP-ODE distribution of functions)
Temporal model: a stochastic residual network (the stochasticity comes from an auxiliary random variable $z$ sampled from a Gaussian prior conditioned on the previous state). In their work, the residual formulation brings strong benefits.
In their formulation of equation (1), $z$ represents stochastic discrete-time variables; they change at every time step.
They do not have the explicit conditioning (i.e., time-invariant dynamic variables) that we have.
\paragraph{Content Variable} Despite an image sequence being a temporal process, certain components of it are static, such as the background or the shape of the moving object. As such, they can be modeled separately as content variables \cite{franceschi2020stochastic} (in a similar spirit to Denton \& Birodkar (2017) and Yingzhen \& Mandt (2018)), with the shown benefit that the resulting model is lighter and more stable. However, \citet{franceschi2020stochastic} introduce the content variable only in the architecture.
\section{Model}
\subsection{Invariant Latent Dynamical Model}
\paragraph{Content Variable}
Similar to earlier works (ref franceschi 2020, check denton 2017, yingzhen 2018), we introduce a content variable, computed as follows:
\begin{align}
\mathbf{c} = \frac{1}{N}\sum_{i=1}^{N} q_{\text{inv-enc}}(\mathbf{x}_{i})
\end{align}
where $N \leq T$ is the number of input frames and $q_{\text{inv-enc}}$ can be either a neural network or a deep kernel \cite{schwobel2022last}.
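For concreteness, a minimal NumPy sketch of this computation (illustrative only; the encoder \texttt{inv\_enc} below is a hypothetical stand-in for $q_{\text{inv-enc}}$, not our actual architecture):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))       # hypothetical linear "encoder" weights

def inv_enc(frame):
    # stand-in for q_inv-enc: maps one 64-dim frame to a 16-dim embedding
    return np.tanh(W @ frame)

def content_variable(frames, N):
    # c = (1/N) * sum_i inv_enc(x_i) over the first N <= T frames
    return np.mean([inv_enc(x) for x in frames[:N]], axis=0)

frames = rng.standard_normal((15, 64))  # a toy sequence with T = 15 frames
c = content_variable(frames, N=10)
print(c.shape)                          # (16,)
\end{verbatim}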
\subsection{Variational Inference}
\paragraph{Evidence Lower Bound}
\begin{align}
\log p(X)
&\geq \underbrace{\mathbb{E}_{q(\mathbf{f},\mathbf{U},\mathbf{z}_{0}|X)}\left[ \log p(X|\mathbf{z}_{0},\mathbf{f}) \right]}_\text{$L_{x}$} \\
&+ \underbrace{\mathbb{E}_{q(\mathbf{z}_{0}|X)}\left[ \log \frac{p(\mathbf{z}_{0})}{q(\mathbf{z}_{0}|X)} \right]}_\text{$L_{z}$} \\
&+ \underbrace{\mathbb{E}_{q(\mathbf{f},\mathbf{U})}\left[ \log\frac{p(\mathbf{f}|\mathbf{U})p(\mathbf{U})}{p(\mathbf{f}|\mathbf{U})q(\mathbf{U})} \right]}_\text{$L_{u}$}
\end{align}
where $L_{x}$ is the likelihood term, $L_{z}$ is the regularizing KL term, and $L_{u}$ is the inducing-point KL term.
\paragraph{Contrastive Loss}
To enforce similarity across embeddings we add a contrastive loss term to our ELBO. This prevents dynamic information from leaking into the content variable. We compute this term as follows:
\begin{align}
\mathbf{Z}_{ij} &= \frac{\mathbf{c}_i}{\|\mathbf{c}_i\|} \cdot \frac{\mathbf{c}_j}{\|\mathbf{c}_j\|} \\
L_{c} &= \sum_{i}^{T}\sum_{j}^{T} \mathbf{Z}_{ij},
\end{align}
where $\mathbf{c}_i$ denotes the content vector computed at time point $i$ (the rows of the matrix $\mathbf{C}$ of content vectors per data point), $\mathbf{Z}$ is the matrix of dot products between the normalized content vectors, and $L_{c}$ is the sum of $\mathbf{Z}$ over the time dimensions. We want to maximise this term, as it enforces that the content variables across different time points have to be similar.
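A minimal NumPy sketch of this term (illustrative only; the per-time-point content vectors $\mathbf{c}_i$ are assumed to be the rows of \texttt{C}):
\begin{verbatim}
import numpy as np

def contrastive_term(C):
    # C: (T, d) matrix of per-time-point content vectors.
    # Returns L_c, the sum of the cosine-similarity matrix Z over both time indices.
    C_norm = C / np.linalg.norm(C, axis=1, keepdims=True)  # normalize each row
    Z = C_norm @ C_norm.T                                  # Z_ij = cosine similarity
    return Z.sum()

rng = np.random.default_rng(0)
C = rng.standard_normal((16, 8))   # T = 16 time points, 8-dim content vectors
print(contrastive_term(C))         # maximal (= T**2) when all rows are aligned
\end{verbatim}
The term is maximal when all content vectors point in the same direction, which is exactly the similarity we want to enforce.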
\paragraph{Optimization Objective}
\begin{align}
L = L_{x} + L_{z} + L_{u} - L_{c}
\end{align}
|
{
"arxiv_id": "2302.13260",
"language": "en",
"timestamp": "2023-02-28T02:14:29",
"url": "https://arxiv.org/abs/2302.13260",
"yymm": "2302"
} | \section{Introduction}
\subsection{Background}
In \cite{HNY}, He, Nie, and Yu study the affine Deligne-Lusztig varieties with finite Coxeter parts. They study such varieties using the Deligne-Lusztig reduction method from \cite{DL76} and by carefully investigating the reduction paths. In this approach, they establish the ``multiplicity one'' result, which says, roughly speaking, that for any $\sigma$-conjugacy class $[b]\in B(G)$, there is at most one path in the reduction tree that corresponds to $[b]$. The proof of the ``multiplicity one'' result is obtained by showing that a certain combinatorial identity (of two $q$-polynomials, or more precisely, of the class polynomials) of the following form holds:
\[\sum_{[b]\in B(G,\mu)_\text{indec}} (q-1)^?q^{-??}=1.\]
They first reduce this to the case when $G$ is split and simply-laced and $\mu$ is a fundamental coweight (\cite{HNY} 6.5 and 6.6). Then, for type $A$, they check the identity using some geometric properties of affine Deligne-Lusztig varieties, such as dimension formulae and the injectivity of the projection map from the affine flag variety to the affine Grassmannian (\cite{HNY} 5.4). At the end (\cite{HNY} 6.9), they ask if a combinatorial proof of this identity exists. Our goal is to give such a proof.
\subsection{The identity and the sketch of proof}
The following identity is of interest to us.
\begin{prop}\label{mainthm}
Fix natural numbers $i< n$ and, for the sake of simplicity, let $j\colonequals n-i>0$. Then
\[\sum_{\substack{k\ge1\\ ((a_l,b_l))_{l\le k}\in D_k}} (q-1)^{k-1}q^{1-k+\frac{\sum_{1\le l_1<l_2\le k}(a_{l_1}b_{l_2}-a_{l_2}b_{l_1}) +\sum_{1\le l\le k}\gcd(a_l,b_l)}{2}}=q^{\frac{ij-n}{2}+1},\]where $D_k=\{((a_l,b_l))_{l\le k}: a_l,b_l\text{ are natural numbers, } a_1+\cdots+ a_k=i,~b_1+\cdots+b_k=n,\text{ and } 1>\frac{a_1}{b_1}>\cdots>\frac{a_k}{b_k}>0\}$.
\end{prop}
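Before turning to the proof, \Cref{mainthm} can be checked by brute force for small $i$ and $n$. The following Python/SymPy sketch (ours, purely illustrative) does so; it substitutes $q=s^2$ so that all exponents become integers:
\begin{verbatim}
import sympy as sp
from math import gcd

s = sp.Symbol('s', positive=True)   # q = s**2, so half-integer powers of q become integers

def compositions(total, parts):
    # ordered ways to write `total` as a sum of `parts` positive integers
    if parts == 1:
        yield (total,)
        return
    for first in range(1, total - parts + 2):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def lhs(i, n):
    total = sp.Integer(0)
    for k in range(1, i + 1):
        for a in compositions(i, k):
            for b in compositions(n, k):
                if a[0] >= b[0]:     # enforce 1 > a_1/b_1
                    continue
                if any(a[l]*b[l+1] <= a[l+1]*b[l] for l in range(k - 1)):
                    continue         # enforce a_1/b_1 > ... > a_k/b_k
                cross = sum(a[l1]*b[l2] - a[l2]*b[l1]
                            for l1 in range(k) for l2 in range(l1 + 1, k))
                g = sum(gcd(a[l], b[l]) for l in range(k))
                total += (s**2 - 1)**(k - 1) * s**(2 - 2*k + cross + g)
    return total

for i, n in [(1, 2), (1, 3), (2, 3), (2, 5), (3, 5), (3, 7), (4, 7)]:
    assert sp.simplify(lhs(i, n) - s**(i*(n - i) - n + 2)) == 0, (i, n)
print("identity verified for the sampled (i, n) pairs")
\end{verbatim}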
\subsubsection{Notations}
Our strategy is to use a coordinate plane to understand the (index set of the) identity. A polygon always means a polygon \textit{whose vertices are all lattice points (i.e., the coordinates are integers)} and we count segments as $2$-gons. We denote by $\area(P)$ the area of $P$, by $i(P)$ the number of lattice points interior to $P$, and by $b(P)$ the number of lattice points on the boundary of $P$.
We will use $\mathbb{N}$ to mean the set of natural numbers.
\subsubsection{A few lemmas and the proof of the identity}
We first observe that changing $b_l$ into $b_l-a_l$ does not affect the summation and lets us remove the condition that the slopes are bounded by $1$.
\begin{lem}\label{btob-a}
Let $C_k$ be the set of $(x_l,y_l)\in\mathbb{N}^2$'s for $l\le k$ such that $x_1+\cdots+x_k=i$, $y_1+\cdots+y_k=j$, and $\frac{y_1}{x_1}<\cdots<\frac{y_k}{x_k}$. Then,
\[\sum_{\substack{k\ge1\\ ((a_l,b_l))_{l\le k}\in ?_k}} (q-1)^{k-1}q^{1-k+\frac{\sum_{1\le l_1<l_2\le k}(a_{l_1}b_{l_2}-a_{l_2}b_{l_1}) +\sum_{1\le l\le k}\gcd(a_l,b_l)}{2}}\] for $?=C$ and $?=D$ are equal.
\end{lem}
Next, the idea for interpreting this summation is to set up a one-to-one correspondence between elements $((x_l,y_l))_{l\le k}\in C_k$ and convex polygons satisfying certain conditions. Let us denote by $\Delta$ the triangle whose vertices are $(0,0)$, $(i,0)$, and $(i,j)$. For simplicity, let $L$ denote the segment ($=2$-gon) connecting $(0,0)$ and $(i,j)$.
\begin{lem}\label{pick}There is a one-to-one correspondence between $C_k$ and the set of convex $(k+1)$-gons lying in $\Delta$ but not touching the horizontal and vertical edges of $\Delta$ such that $L$ is an edge. Under this correspondence, we have\begin{align*}
&\frac{1}{2}\left(\sum_{1\le l_1<l_2\le k }(x_{l_1}y_{l_2}-x_{l_2}y_{l_1})+\sum_{1\le l\le k}\gcd(x_l,y_l)\right)\\&\quad=i(P)+b(P)-1-\frac{1}{2}\gcd(i,j),
\end{align*}where $P$ is the corresponding $(k+1)$-gon.
\end{lem}
We will denote the set of such $(k+1)$-gons by $C_k$ abusing notation.\newpage
The main lemma in our proof is the following:
\begin{lem}\label{mainlem}
Let $C$ be the set of all convex polygons which lie in $\Delta$, do not touch the horizontal and vertical edges, and contain $L$ as an edge. Then, the following identity holds:
\[\sum_{P\in C}x^{u(P)} (1-x)^{v(P)-2}=1,\]where $u(P)$ is the number of lattice points interior to $\Delta\setminus P$ and $v(P)$ is the number of vertices of $P$.
\end{lem}
Finally, let us recall the well-known Pick's Theorem.
\begin{namedtheorem}[Theorem]
Let $P$ be a polygon in a coordinate plane. Then,\[\area(P)=i(P)+\frac{b(P)}{2}-1.\]
\end{namedtheorem}
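As an aside (not needed for the argument), Pick's formula is easy to confirm by brute force for small convex lattice polygons; a short Python sketch (ours), whose strict-interior test assumes the vertices are listed counterclockwise and the polygon is convex:
\begin{verbatim}
from math import gcd
from itertools import product

def pick_check(vertices):
    # verify 2*area(P) = 2*i(P) + b(P) - 2 for a convex lattice polygon (CCW vertices)
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    A2 = abs(sum(x1*y2 - x2*y1 for (x1, y1), (x2, y2) in edges))   # shoelace, twice the area
    B = sum(gcd(abs(x2 - x1), abs(y2 - y1)) for (x1, y1), (x2, y2) in edges)
    xs = [v[0] for v in vertices]; ys = [v[1] for v in vertices]
    I = sum(1 for px, py in product(range(min(xs), max(xs) + 1),
                                    range(min(ys), max(ys) + 1))
            if all((x2 - x1)*(py - y1) - (y2 - y1)*(px - x1) > 0
                   for (x1, y1), (x2, y2) in edges))
    assert A2 == 2*I + B - 2, (A2, I, B)
    return A2/2, I, B

print(pick_check([(0, 0), (5, 0), (5, 3)]))        # the triangle Delta with i=5, j=3
print(pick_check([(0, 0), (2, 0), (2, 2), (0, 2)]))
\end{verbatim}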
We are now ready to prove \Cref{mainthm}.
\begin{proof}[Proof of \Cref{mainthm}]
By \Cref{btob-a,pick}, it is enough to show that
\[\sum_{k\ge1,~ P\in C_k} (q-1)^{k-1}q^{-(k-1)+i(P)+b(P)}=q^{\frac{ij-n+\gcd(i,j)}{2}+2}.\]
Now, we observe that the exponent part on the right-hand side can be written as $i(\Delta)+b(\Delta)-(n-1)$ simply using the facts that $\area(\Delta)=\frac{ij}{2}$ and $b(\Delta)=n+\gcd(i,j)$ and applying Pick's Theorem.
As $C=\cup_{k\ge1} C_k$, the index set of the summation is $C$. Now, observing that $u(P)=i(\Delta)+b(\Delta)-(n-1)-(i(P)+b(P))$, we are reduced to showing
\[\sum_C \left(\frac{q-1}{q}\right)^{k-1}q^{-u(P)}=1.\]
This is nothing but the resulting identity of \Cref{mainlem} by letting $x=\frac{1}{q}$ because, for any $P\in C_k$, we have $v(P)-2=k+1-2=k-1$.
\end{proof}
\begin{rem}
We do not know if the identity of \Cref{mainlem} is well-known. It looks interesting to us because the left-hand side is not homogeneous in the sense that $u(P)+v(P)-2$ is not constant but this gives a way to generate $1$ using polynomials of the form $x^a(1-x)^b$.
We wonder if $\{(u(P),v(P)-2):P\in C\}$ parametrizes all such pairs $\{(a_i,b_i)\in\mathbb{N}^2:i\in I\}$ such that $\sum_{i\in I}x^{a_i}(1-x)^{b_i}=1$. More precisely, let $S=\{(a_i,b_i)\in\mathbb{N}^2: 1\le i\le k\}$ be a set satisfying \[\sum_{i=1}^k x^{a_i}(1-x)^{b_i}=1,\]and $(0,1)\in S$. Then we would like to ask if there exist $m,n\in\mathbb{N}$ such that $S=\{(u(P),v(P)-2):P\in C_{m,n}\}$ where $C_{m,n}$ is the set defined in \Cref{mainlem} corresponding to the triangle $\Delta_{m,n}$ whose vertices are $(0,0)$, $(m,0)$, and $(m,n)$.
\end{rem}
\section{Proofs of lemmas}
\begin{proof}[Proof of \Cref{btob-a}]
The one-to-one correspondence from $C_k$ to $D_k$ is given by $(x_l,y_l)\mapsto (x_l,x_l+y_l)$. It is easy to check that the conditions on the sums and the slopes are all equivalent. The one that needs justification is the exponent part. However, $x_{l_1}(x_{l_2}+y_{l_2})-x_{l_2}(x_{l_1}+y_{l_1})=x_{l_1}y_{l_2}-x_{l_2}y_{l_1}$ and $\gcd(x_l,x_l+y_l)=\gcd(x_l,y_l)$ obviously.
\end{proof}
\begin{proof}[Proof of \Cref{pick}]
Given $((x_l,y_l))_{l\le k}\in C_k$, consider the convex polygon $P$ with vertices $(0,0)$, $(x_1,y_1)$, $(x_1+x_2,y_1+y_2)$, $\cdots$, $(x_1+\cdots+x_k, y_1+\cdots+y_k)=(i,j)$. As the slopes $(\frac{y_l}{x_l})_{l}$ are increasing, the polygon $P$ is convex. Now, $y_1,x_k>0$ implies that $P$ does not touch the horizontal and vertical edges of $\Delta$. The inverse map from the set of polygons to the set of tuples is the obvious one.
Regarding the formula, it is easy to see that, using induction,
\[\frac{1}{2}\sum_{1\le l_1<l_2\le k}(x_{l_1}y_{l_2}-x_{l_2}y_{l_1})=\area(P).\]
Noting that $\gcd(a,b)+1$ is the number of lattice points on the segment connecting $(u,v)$ and $(u+a,v+b)$ for any integers $u$ and $v$, we get\[\sum_{1\le l\le k}\gcd(x_l,y_l)=b(P)-\gcd(i,n-i).\]
Applying Pick's Theorem, we get the conclusion.
\end{proof}
\begin{proof}[Proof of \Cref{mainlem}]\label{mainlemproof}
Both sides are polynomials in $x$, so we only need to prove it for all $0<x<1$. Let us consider the following probabilistic process:
For each point interior to $\Delta$, choose it with the probability $x$ and abandon it with the probability $1-x$. Then, we form the convex hull containing $(0,0)$, $(i,j)$, and the chosen points. It is easy to see that the resulting convex hull is an element of $C$. For example, if all interior points are abandoned, we end up getting $L$ and so the probability of obtaining $L$ is $(1-x)^{u(L)}$.
For $P\in C$, let $\prob(P)$ be the probability of obtaining $P$ as a result of the aforementioned process. Obviously, $\sum_{P\in C}\prob(P)=1$. So, it is enough to show that $\prob(P)=(1-x)^{u(P)}x^{v(P)-2}$ for all $P\in C$. However, this holds because the case when the resulting convex hull is $P$ is exactly when
1) the vertices of $P$ (except $(0,0)$ and $(i,j)$) are chosen ($=x^{v(P)-2}$) and
2) the lattice points interior to $\Delta$ lying outside of $P$ are abandoned ($=(1-x)^{u(P)}$)
\noindent with no conditions on the other remaining points.
\end{proof}
\bibliographystyle{alpha}
|
{
"arxiv_id": "2302.13257",
"language": "en",
"timestamp": "2023-02-28T02:14:23",
"url": "https://arxiv.org/abs/2302.13257",
"yymm": "2302"
} | \section{Introduction}
The Carrollian limit, described as the speed of light $(c)$ going to zero, was first introduced in \cite{Levy1965}\cite{1966ND} as a nontrivial contraction, as opposed to the well-known Galilean limit $(c \to \infty)$ of the Poincar\'{e} transformations. Owing to the deviation from the Lorentzian character, these two limits are also called non-Lorentzian limits. An illustrative way of understanding the Carrollian limit is the closing of the light cone onto the time axis, as depicted in \autoref{fig:clight}. A peculiar consequence of taking the Carrollian limit of the Poincar\'{e} transformations is that it renders space absolute, i.e., not affected by boosts. In such a setting causality almost disappears and the only way for two events to interact causally is if they happen at the same space and time point. For this very reason, the Carrollian limit is sometimes referred to as the ultra-local limit. \\[5pt]
The last decade has seen a flurry of research activity in constructing field theories that are consistent with Carrollian symmetry (see \cite{Bagchi:2019clu}\cite{Banerjee:2020qjj}\cite{Baiguera:2022lsw}\cite{Chen:2023pqf} and references therein). Carrollian symmetry is described by a set of symmetry generators viz. spatial and temporal translations, homogeneous rotations and Carrollian boosts. These symmetry generators can be obtained by taking $c \to 0$ limit of Poincar\'{e} symmetry generators. Equivalently, one may also wish to work in the natural system of units where $c$ is set to unity and rescale the space $(x_i)$ and time $(t)$ instead. The Carrollian limit is then defined as
\begin{equation*}
t \to \epsilon t, \qquad x_i \to x_i, \qquad \epsilon \to 0
\end{equation*}
which also leads to the Carrollian symmetry generators \cite{Bagchi:2019clu}\cite{Banerjee:2020qjj}.\\[5pt]
Over the years Carrollian symmetry has paved its way into many physical systems, ranging from condensed matter\cite{Marsot:2022imf}\cite{Bagchi:2022eui} to black holes\cite{Donnay:2019jiz}. For example, it has been realized recently that a Carroll particle subjected to an external electromagnetic field mimics a Hall-type scenario\cite{Marsot:2022imf}. Furthermore, the emergence of Carrollian physics in the study of bilayer graphene\cite{Bagchi:2022eui}, the relation of Carrollian symmetry with plane gravitational waves\cite{Duval:2017els}, and the motion of particles on a black hole horizon\cite{Gray:2022svz}\ further fuel the need for Carrollian physics.\\[5pt]
However, much of the work carried out in the Carrollian sector has largely remained classical so far and not much heed has been paid to the quantization. As a matter of fact, the whole program of quantization of non-Lorentzian theories is fairly recent. For example, quantum studies on the Galilean field theories have surfaced in the last few years only (see \cite{Banerjee:2022uqj}\cite{Sharma:2023chs}\cite{Chapman:2020vtn}\cite{Baiguera:2022cbp}). This paper is the first attempt to understand the quantum `nature' of Carrollian field theories.\begin{figure}[h]
\centering
\includegraphics[scale=0.77]{clight}
\caption{\small{The figure in panel (a) is the light cone in Minkowski spacetime. Light travels along the path $x = ct$. In panel (b), we can see the light rays start to collapse onto the $t$ axis as we approach the Carrollian limit. Finally, the light cone collapses onto $\displaystyle x=\lim_{c \to 0} ct \to 0$ in panel (c).}}
\label{fig:clight}
\end{figure}\\[5pt]
Understanding the quantum nature of Carrollian field theories is important on many levels. Firstly, as mentioned in the beginning, the Carrollian limit causes the light cone to close onto the time axis and thus time ordering is preserved only along the time axis. This results in the two-point correlation functions of a Carrollian field theory exhibiting ultra-local behaviour at the tree level (see \autoref{section:prop}).
It becomes intriguing to ask how Carrollian fields interact at the quantum level. Secondly, in the massless regime, certain Carrollian field theories at the classical level admit invariance under infinite conformal symmetries (for example \cite{Bagchi:2019clu}\cite{Banerjee:2020qjj}). It is then natural to ask whether these symmetries survive quantization or not. Finally, it has been well established that the black hole horizon is a natural Carroll surface\cite{Donnay:2019jiz}. Thus, a quantum field theory living on the black hole horizon could be a Carrollian quantum field theory. \\[5pt]
In this paper, we have attempted to probe into the quantum field description of Carrollian electrodynamics\footnotemark\cite{Bagchi:2019clu} minimally coupled to a massive Carrollian scalar. \footnotetext{Here, by Carrollian electrodynamics we mean the electric sector of Carrollian electrodynamics\cite{Bagchi:2019clu}. In actuality, Carrollian electrodynamics also admits another sector known as the magnetic sector. For more details on the magnetic sector of Carrollian electrodynamics the reader is referred to \cite{Banerjee:2020qjj}.} At the classical level, the Lagrangian for the theory is obtained by Carroll limiting massless Lorentzian scalar electrodynamics. The resulting theory consists of a gauge couplet $(B, A_i)$ minimally coupled to a complex scalar field $\phi$ through the coupling $e$. We then incorporate a mass term in the theory strictly constrained by the Carrollian symmetry. Owing to an interaction between gauge fields and a scalar field, we name the theory scalar Carrollian electrodynamics (sCED). To explore the quantum field description, we have made use of path integral techniques. We strictly restrict the renormalization scheme to first order in perturbation theory, i.e., 1 loop. Although the theory is renormalizable, there are serious unphysical ramifications, especially regarding the notion of mass and coupling in the Carrollian setting. The renormalization scheme leads to the notion of gauge-dependent mass and coupling, which invalidates the conventional arguments of gauge independence for mass and coupling. \\[5pt]
This paper is organized as follows: We have a total of 4 sections including the introduction. In \autoref{section:classical} we present the classical field description of sCED. A brief discussion on the Carrollian symmetry is presented followed by the Lagrangian formulation of sCED. Relevant Noether charges are constructed and it is shown that Carrollian algebra is satisfied at the level of charges. In \autoref{section:quantum} we proceed with the quantum field description of sCED. We propose path integral quantization and study renormalization of the theory up to 1 loop. Relevant results on the quantization are then discussed and concluded in \autoref{section:conclusion}.
\section{Classical analysis of scalar Carrollian electrodynamics}
\label{section:classical}
\subsection{Carrollian Symmetry: A cursory visit!}
Carrollian symmetry of a $(d+1)$-dimensional spacetime is described by time translations $(H)$, space translations $(P_i)$, homogeneous rotations $(J_{ij})$, and Carrollian boosts $(B_i)$. In an adapted coordinate chart $x^I =(t,x^i)$ we can express them as
\begin{equation}
\label{eqn:generators}
H =\partial_t \quad,\quad P_i =\partial_i \quad, \quad J_{ij}=(x_i \partial_j-x_j \partial_i) \quad,\quad B_i=x_i \partial_t,
\end{equation}
The symmetry generators (\ref{eqn:generators}) can be obtained by Carroll limiting the Poincar\'{e} symmetry generators \cite{Bagchi:2019clu}\cite{Banerjee:2020qjj}. However, there also exists another, geometric way to arrive at the Carrollian symmetry generators (see for example \cite{Duval:2014uoa} or \autoref{section:geometry}). The symmetry generators (\ref{eqn:generators}) form a closed Lie algebra, called the Carrollian algebra, given by
\begin{eqnarray}
&[J_{ij}, B_k ]=\delta_{k[j}B_{i]} \quad, \quad [J_{ij}, P_k ]=\delta_{k[j}P_{i]} \quad, \quad [B_i,P_j]=-\delta_{ij}H \nonumber \\
\label{eqn:algebra}
&[J_{ij}, H ]=0 \quad,\quad [P_i,H]=0 \quad,\quad [P_i,P_j]=0 \quad,\quad [B_i,H]=0
\end{eqnarray}
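As an aside (not part of the original discussion), the brackets in (\ref{eqn:algebra}) that do not involve an antisymmetrization convention can be checked directly by realizing (\ref{eqn:generators}) as differential operators acting on a test function; a small Python/SymPy sketch:
\begin{verbatim}
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3')
xs = [x1, x2, x3]
f = sp.Function('f')(t, x1, x2, x3)            # generic test function

H = lambda g: sp.diff(g, t)                    # time translation
P = lambda i: (lambda g: sp.diff(g, xs[i]))    # space translations
B = lambda i: (lambda g: xs[i]*sp.diff(g, t))  # Carrollian boosts
J = lambda i, j: (lambda g: xs[i]*sp.diff(g, xs[j]) - xs[j]*sp.diff(g, xs[i]))

def comm(X, Y, g):                             # commutator [X, Y] acting on g
    return sp.simplify(X(Y(g)) - Y(X(g)))

for i in range(3):
    assert comm(B(i), H, f) == 0                                 # [B_i, H] = 0
    assert comm(P(i), H, f) == 0                                 # [P_i, H] = 0
    for j in range(3):
        expected = -H(f) if i == j else 0
        assert sp.simplify(comm(B(i), P(j), f) - expected) == 0  # [B_i, P_j] = -delta_ij H
        assert comm(P(i), P(j), f) == 0                          # [P_i, P_j] = 0
        assert comm(J(i, j), H, f) == 0                          # [J_ij, H] = 0
print("Carrollian brackets verified")
\end{verbatim}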
The generators $\{H,P_i,J_{ij},B_i\}$ can be used to study the action of the symmetry generators on the fields at a general spacetime point, i.e., for a generic scalar field $\varphi$ and a generic vector field $V_i$, we can write (see \cite{Bagchi:2019clu} and references therein for complete details)
\begin{equation}
\label{eqn:fieldaction}
\begin{split}
&\text{Spatial rotations: }\delta_{\omega} \varphi(t,x) =\omega^{ij} (x_{[i} \partial_{j]}) \varphi(t,x) \\
&\qquad \qquad \qquad \qquad \delta_{\omega} V_l(t,x)=\omega^{ij} \big[(x_{[i} \partial_{j]}) V_l(t,x)+\delta_{l[i}V_{j]}\big]\\[5pt]
&\text{Carrollian boosts: }\delta_{\scriptscriptstyle B} \varphi(t,x) =b^j[x_j\partial_t \varphi(t,x)]\\
&\qquad \quad \qquad \qquad \quad \delta_{B}V_l(t,x)=b^j \big[x_j\partial_t V_l(t,x)+ \delta_{lj} \varphi(t,x) \big]\\[5pt]
&\text{Space translation: }\delta_{p} \varphi(t,x) =p^j \partial_j \varphi(t,x)\\
&\qquad \qquad \qquad \qquad \delta_{p} V_i(t,x) =p^j \partial_j V_i(t,x)\\[5pt]
&\text{Time translation: }\delta_{H} \varphi(t,x) =\partial_t \varphi(t,x)\\
&\qquad \qquad \qquad \qquad \delta_{H} V_i(t,x) =\partial_t V_i(t,x)\\[5pt]
\end{split}
\end{equation}
where $\omega^{ij}$ is an antisymmetric matrix and $b^i$ and $p^i$ are the boost and spatial translation parameters. We shall employ (\ref{eqn:fieldaction}) to demonstrate the invariance of sCED and, later, to construct the conserved charges associated with these symmetry generators for sCED.
\subsection{Lagrangian and conserved charges for sCED}
We begin our discussion by proposing the Lagrangian for massive sCED. It must be noted that the Lagrangian for massless scalar Carrollian electrodynamics was proposed in \cite{Bagchi:2019clu}. Their technique relied on Helmholtz integrability conditions\footnotemark.\footnotetext{Helmholtz conditions are the necessary and sufficient conditions which, when satisfied by a set of second order partial differential equations, guarantee an \emph{action}. We refer the reader to \cite{Bagchi:2019clu}\cite{Banerjee:2020qjj}\cite{10.2307/1989912} for more details on the method and its applications.} In a coordinate chart $x^I=(t,x^i)$ the Carroll invariant Lagrangian $\mathcal{\tilde{L}}$ for massless sCED is given by
\begin{equation}
\mathcal{\tilde{L}}=\frac{1}{2} \Big \{(\partial_i B)^2+(\partial_t A_i)^2 -2 (\partial_t B) (\partial_i A_i)\Big\}-(D_t \phi)^* (D_t \phi)
\end{equation}
where
$D_t \phi= \partial_t\phi+ie B \phi$ and $(D_t \phi)^*=\partial_t {\phi}^*-ie B \phi^*$. We add a mass term, strictly constrained by the Carrollian symmetry (\ref{eqn:fieldaction}), to the above Lagrangian such that the Lagrangian $\mathcal{L}$ for massive sCED\footnotemark \;is given by \footnotetext{From here onwards, we shall simply refer to massive scalar Carrollian electrodynamics as sCED.}
\begin{equation}\label{lag}
\mathcal{L}=\frac{1}{2} \Big \{(\partial_i B)^2+(\partial_t A_i)^2 -2 (\partial_t B) (\partial_i A_i)\Big\}-(D_t \phi)^* (D_t \phi) + m^2 \phi^* \phi
\end{equation}
Equivalently, we can write
\begin{equation}
\label{eqn:lag}
\begin{split}
\mathcal{L}&=\frac{1}{2} \Big \{(\partial_i B)^2+(\partial_t A_i)^2 -2 (\partial_t B) (\partial_i A_i)\Big\}-(\partial_t \phi^*)(\partial_t \phi) + m^2 \phi^* \phi\\
&\qquad \qquad \qquad \qquad \qquad \qquad \quad -i e B \Big[\phi \partial_t \phi^*-\phi^* \partial_t \phi\Big]-e^2 B^2 \phi^* \phi
\end{split}
\end{equation}
The equations of motion for sCED can be obtained by varying (\ref{eqn:lag}) with respect to the fields $B,A_i$ and $\phi$, resulting in:
\begin{eqnarray}
&\partial_t\partial_t A_i-\partial_t\partial_i B&=0 \nonumber\\
\label{eqn:eom}
&D_t D_t \phi+m^2\phi&=0\\
&\partial_i\partial_t A_i-\partial_i\partial_i B&-ie\(\phi D_t^* \phi^* - \phi^* D_t \phi\)=0 \nonumber.
\end{eqnarray}
which agrees with \cite{Bagchi:2019clu} if we set $m=0$ in (\ref{eqn:eom}).\\[5pt]
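As a quick cross-check (ours, not from \cite{Bagchi:2019clu}), the pure gauge-sector part of (\ref{eqn:eom}), i.e., the first and third equations with the scalar current dropped ($e=0$), can be reproduced symbolically from the Lagrangian (\ref{eqn:lag}) with SymPy's Euler-Lagrange routine:
\begin{verbatim}
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x, y, z = sp.symbols('t x y z')
xs = [x, y, z]
B = sp.Function('B')(t, x, y, z)
A = [sp.Function('A%d' % i)(t, x, y, z) for i in (1, 2, 3)]
divA = sum(sp.diff(A[i], xs[i]) for i in range(3))

# free gauge-sector Lagrangian density (the B and A_i terms only)
L = (sum(sp.diff(B, xi)**2 for xi in xs)
     + sum(sp.diff(Ai, t)**2 for Ai in A)
     - 2*sp.diff(B, t)*divA) / 2

eqs = euler_equations(L, [B] + A, [t, x, y, z])   # one equation per field, same order

expected = [sp.diff(divA, t) - sum(sp.diff(B, xi, 2) for xi in xs)]         # B equation
expected += [sp.diff(B, xs[i], t) - sp.diff(A[i], t, 2) for i in range(3)]  # A_i equations
for eq, target in zip(eqs, expected):
    assert sp.simplify(eq.lhs - target) == 0
print("free gauge-sector equations of motion reproduced")
\end{verbatim}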
Noether's theorem states that associated with every continuous symmetry of the Lagrangian there exists a corresponding global conserved charge. Since the Lagrangian (\ref{eqn:lag}) is invariant under the Carrollian symmetry \eqref{eqn:fieldaction}, the associated Noether charges for sCED are given by
\begin{subequations}{}
\label{char1}
\bea{}
&&\text{Spatial rotation:}~Q(\omega)=\int d^{d-1}x ~\omega^{ij} \Big[ \dot{A}_k\(x_{[i}\partial_{j]}A_k+\delta_{k[i}A_{j]}\)-(\partial \cdot A)\(x_{[i}\partial_{j]}B\) \nonumber\\
&&\hspace{6cm}\nonumber~-(D_t \phi)^*(x_{[i}\partial_{j]}\phi)- (x_{[i}\partial_{j]}\phi)^* (D_t \phi) \Big] \nonumber\\
&&\text{Space translation:}~Q(p)=\int d^{d-1}x ~ p^l\Big[ \dot{A}_i \partial_l A_i- \partial_l B \, \partial \cdot A - (D_t \phi)^* \partial_l \phi-\partial_l \phi^* D_t\phi \Big] \hspace{1.5cm} \nonumber\\
&&\text{Time translation:}~Q(h)=\int d^{d-1}x ~ \Big[ \frac{1}{2}(\dot{A}_i^2-(\partial_i B)^2)+ (D_t \phi)^* (D_t \phi)-(D_t \phi)^* (\partial_t \phi) \nonumber \\&&\hspace{8.5cm}-(\partial_t \phi)^* (D_t \phi)-m^2\phi\phi^{*} \Big] \nonumber\\
&&\text{Boost:}~Q(b)=\int d^{d-1}x ~b^l x_l \Big[ \frac{1}{2}(\dot{A}_i^2-(\partial_i B)^2)+ (D_t \phi)^* (D_t \phi)-(D_t \phi)^* (\partial_t \phi) \nonumber\\&&\hspace{6.5cm}-(\partial_t \phi)^* (D_t \phi)-m^2\phi\phi^{*} \Big]+b^l (\dot{A}_l B)\nonumber
\end{eqnarray}\end{subequations}
Correspondingly, after a lengthy but straightforward calculation we can arrive at the charge algebra. The non-vanishing Poisson brackets for sCED are
\begin{eqnarray*}
\{Q(\omega), Q(p)\} = Q(\tilde{p})\\
\{Q(\omega), Q(b)\} =Q(\tilde{b}) \\
\{Q(p), Q(b)\} = Q(h)
\end{eqnarray*}
where $\tilde{p} \equiv \tilde{p}^{k}\partial_k=\omega^{ij}p_{[j}\partial_{i]}$ and $\tilde{b} \equiv \tilde{b}^{k}\partial_k=\omega^{ij}b_{[j}\partial_{i]}$. Clearly the Carrollian algebra is realized at the level of Noether charge algebra. We now proceed to probe into the quantum field description for sCED.
\section{Quantum field description of sCED}
\label{section:quantum}
In the previous section we studied the classical field description of scalar Carrollian electrodynamics. In this section, we propose a quantization prescription. We shall put functional techniques to use to explore the quantum field description of scalar Carrollian electrodynamics. The \emph{action} $S$ for sCED, using \eqref{eqn:lag}, takes the following form
\begin{equation}
\label{eqn:action}
\begin{split}
S&=\bigintss dt d^3x\;\; \Bigg[\frac{1}{2} \Big \{(\partial_i B)^2+(\partial_t A_i)^2 -2 (\partial_t B) (\partial_i A_i)\Big\}-(\partial_t \phi^*)(\partial_t \phi)\\
&\qquad \qquad \qquad -i e B \Big[\phi \partial_t \phi^*-\phi^* \partial_t \phi\Big]-e^2 B^2 \phi^* \phi + m^2 \phi^* \phi \Bigg]
\end{split}
\end{equation}
The gauge field couplet $\varphi^I \equiv (B,A^i)$ and the complex scalar field $\phi$ carry the mass dimensions $[B]=[A_i] =[\phi]=[\phi^{*}]=1$, leaving us with a marginally renormalizable theory with $[e]=0$. An instructive thing to note in (\ref{eqn:action}) is that the gauge field $A_i$ does not participate in any interaction with $\phi$ or $\phi^*$. As a consequence, the propagators and vertices admit loop corrections only due to the interaction between the gauge field $B$ and the complex scalar $\phi$. For the rest of the paper, we shall focus only on the 1-loop corrections in the theory.
\subsection{Feynman Rules}
Since sCED is a gauge theory, it is important that we gauge fix the theory. We shall employ the gauge fixing technique developed by Faddeev and Popov \cite{ryder_1996}\cite{Peskin:1995ev}\cite{FADDEEV196729}, i.e., the gauge fixed action is
\begin{equation}
\label{eqn:gact1}
\begin{split}
S&=\bigintss dt d^3x\;\; \Bigg[\frac{1}{2} \Big \{(\partial_i B)^2+(\partial_t A_i)^2 -2 (\partial_t B) (\partial_i A_i)\Big\}-(\partial_t \phi^*)(\partial_t \phi)\\
&\qquad \qquad -i e B \Big[\phi \partial_t \phi^*-\phi^* \partial_t \phi\Big]-e^2 B^2 \phi^* \phi + m^2 \phi^* \phi \Bigg] +\bigintss dt d^3 x\; \mathcal{L}_{\text{gauge fixed}}
\end{split}
\end{equation}
with $\mathcal{L}_{\text{gauge fixed}}$ given by
\begin{equation*}
\mathcal{L}_{\text{gauge fixed}} = -\frac{1}{2 \xi} \bigg(G[B(t,x^i), A^i(t,x^i)]\bigg)^2
\end{equation*}
\linebreak
where $G[B,A^i]$ is the gauge fixing condition and $\xi$ is the gauge fixing parameter. \\[5pt]
We choose $G[B,A^i] = (\partial_t B)$ such that the gauge fixed action (\ref{eqn:gact1}) becomes
\begin{equation}
\label{eqn:gact2}
\begin{split}
S&=\bigintss dt d^3x\;\; \Bigg[\frac{1}{2} \Big \{(\partial_i B)^2+(\partial_t A_i)^2 -2 (\partial_t B) (\partial_i A_i)\Big\}-(\partial_t \phi^*)(\partial_t \phi)\\
&\qquad \qquad -i e B \Big[\phi \partial_t \phi^*-\phi^* \partial_t \phi\Big]-e^2 B^2 \phi^* \phi + m^2 \phi^* \phi -\frac{1}{2 \xi} (\partial_t B)^2 \Bigg]
\end{split}
\end{equation}
Observe that we can arrive at the same gauge fixed action by Carroll limiting the Lorentz gauge fixing condition for Lorentzian scalar electrodynamics. Also, notice that we have omitted the Faddeev-Popov ghost term in (\ref{eqn:gact2}). This is because the Faddeev-Popov ghosts do not interact with the gauge field couplet $(B,A_i)$ and hence do not contribute to any of the loop corrections.\\[5pt]
Now, with the gauge fixed action (\ref{eqn:gact2}) at our disposal, we can evaluate the propagator for the gauge couplet $\varphi^I$. For the sake of brevity, we introduce $\bm{p}$=$(\omega, p_i)$ such that the gauge field propagator $D_{IJ} =\big< \varphi_I, \varphi_J \big>$ reads,
\begin{equation}
\label{eqn:matprop}
D_{IJ}=-i\begin{pmatrix}
\dfrac{\xi}{\omega^2}\qquad & \dfrac{\xi}{\omega^3} p_i\\[20pt]
\;\dfrac{\xi}{\omega^3} p_i\qquad & \;\;\;-\dfrac{\delta_{ij}}{\omega^2}+ \dfrac{p_i p_j}{\omega^4} \xi\;
\end{pmatrix}
\end{equation}\\
and the propagator for the complex scalar field $\phi$ takes the following form
\begin{equation}
\label{eqn:sprop}
\big<\phi,\phi^*\big> = \frac{i}{-\omega^2+m^2}
\end{equation}
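As a consistency check (ours, not part of the derivation), one can verify that (\ref{eqn:matprop}) is $i$ times the inverse of the momentum-space kernel of the quadratic, gauge-fixed action (\ref{eqn:gact2}); with a Fourier convention in which the $B$--$A_i$ cross term enters the kernel as $-\omega p_i$, a short SymPy sketch reads:
\begin{verbatim}
import sympy as sp

w, xi = sp.symbols('omega xi', positive=True)
p = sp.Matrix(sp.symbols('p1 p2 p3'))
psq = (p.T * p)[0]

# momentum-space kernel K of the quadratic gauge-fixed Lagrangian for (B, A_1, A_2, A_3),
# assuming a Fourier convention that puts the B-A_i cross term at -omega*p_i
K = sp.zeros(4, 4)
K[0, 0] = psq - w**2/xi
for i in range(3):
    K[0, i+1] = K[i+1, 0] = -w*p[i]
    K[i+1, i+1] = w**2

D = sp.I * K.inv()                      # candidate propagator

# the propagator quoted in the text
D_expected = sp.zeros(4, 4)
D_expected[0, 0] = -sp.I*xi/w**2
for i in range(3):
    D_expected[0, i+1] = D_expected[i+1, 0] = -sp.I*xi*p[i]/w**3
    for j in range(3):
        delta = 1 if i == j else 0
        D_expected[i+1, j+1] = -sp.I*(-delta/w**2 + xi*p[i]*p[j]/w**4)

assert all(sp.simplify(e) == 0 for e in (D - D_expected))
print("D_IJ = i K^{-1} reproduced")
\end{verbatim}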
Before we proceed further, notice that the gauge field propagator (\ref{eqn:matprop}) admits a pole at $\omega =0$, which essentially captures the ultra-local behaviour of Carrollian field theories, i.e., two events are causally related to each other only if they happen at the same spacetime point. This can be confirmed further by Fourier transforming the propagator to position space (see \autoref{section:prop} for more details). A similar feature can be observed for the complex scalar field propagator (\ref{eqn:sprop}). However, it must be noted that \eqref{eqn:sprop} admits a pole at $\omega^2=m^2$, which is precisely how mass is defined for a free theory in the quantum field theory setting \cite{ryder_1996}\cite{Peskin:1995ev}.\\[5pt]
The Feynman rules for sCED are then given by
\begin{equation}
\label{eqn:feynrule}
\begin{split}
&1. \qquad \text{Gauge scalar propagator,} \qquad \Big\<B\;\;,\;\;B \Big\>\;\;=\;\;\dfrac{-i}{\omega^2} \xi\\[4pt]
& 2. \qquad \text{Scalar propagator,} \qquad \qquad \quad \Big\<\phi^*\;\;,\;\;\phi\Big\>\;\;=\;\; \dfrac{i}{-\omega^2+m^2}\\[4pt]
&3. \qquad \text{Three point vertex,} \qquad \quad \;\; \quad V_{\tiny{B \phi^* \phi}}= \;\;i e (\omega_p-\omega_q)\\[4pt]
&4. \qquad \text{Four point vertex,} \qquad \qquad \quad V_{\tiny{B^2 \phi^* \phi}}= \;\; -2 i e^2\\[4pt]
\end{split}
\end{equation}
The diagrammatic representation of (\ref{eqn:feynrule}) is given in \autoref{fig:feynt}.
\begin{table}
\begin{center}
\begin{tabular} { | m{1cm} | m{4cm} | m{5cm} | }
\hline
1. & $\qquad \;\; \Big<B, B\Big>$ &\qquad \includegraphics[scale=0.6]{prop1}\qquad \;\\[7pt]
\hline
2. & \qquad \; $\Big<\phi, \phi^*\Big>$ &\qquad \includegraphics[scale=0.65]{prop2}\qquad \;\\[7pt]
\hline
3. & \qquad \quad $V_{\tiny{B \phi^* \phi}}$ &\qquad \includegraphics[scale=0.65]{vert1}\qquad\\[7pt]
\hline
4. & \qquad \quad $V_{\tiny{B^2 \phi^* \phi}}$ & \quad \includegraphics[scale=0.65]{vert2}\qquad\\
\hline
\end{tabular}
\caption{\label{fig:feynt}Feynman rules for sCED}
\end{center}
\end{table}
Notice that we have purposefully omitted the propagators $\big< B,A_i\big>$ and $\big<A_i, A_j\big>$ while writing down \eqref{eqn:feynrule}. This is because the only allowed interaction in the theory is between the fields $B$ and $\phi$ (and its complex conjugate), and thus $\big< B,A_i\big>$ and $\big<A_i, A_j\big>$ will not contribute to any loop corrections in the theory. In what follows, we shall evaluate the necessary 1-loop corrections to the propagators and vertices.
\subsection{Renormalization}
Owing to the 3-point and 4-point interactions between the gauge field $B$ and the complex scalar $\phi^*$ and $\phi$, the theory of sCED admits 1 loop corrections to the propagators and the vertices. Generally, these loop integrals diverge at large values of energy ($\omega$) and momentum ($|p|$) and lead to what is known as UV divergences. In order to make sense of these divergent integrals, we employ the technique of cut-off regularization where we set an upper cutoff, $\Omega$ in the energy sector and $\Lambda$ in the momentum sector. In addition to UV divergences, the loop integrals may also diverge at low energy (or momentum) scales. This is called IR divergence. Most often, such divergences are encountered in massless theories where the pole of the propagator admits a mass-shell singularity. It is important to realize that the gauge propagator for sCED (\ref{eqn:feynrule}) showcases a similar pole structure. Thus some of the loop corrections shall admit IR divergences. However, physical observables such as correlation functions shall not depend on IR divergences. This essentially means that renormalized gauge propagator should not contain any IR divergence. Interestingly, we shall see later that under the renormalization scheme, the gauge field propagator $\big<B,B\big>$ does not admit any IR divergence\footnotemark.\footnotetext{It must be pointed out that the propagator for the complex scalar field is not gauge invariant and hence, its renormalization may depend on the gauge fixing parameter $\xi$.}\\[5pt]
For the present discussion, we are concerned with the renormalization of sCED; hence we shall only retain UV divergent terms and ignore IR divergences. But before we proceed any further, we shall comment on the issue of summarily ignoring IR divergences. Recall that in Lorentzian quantum electrodynamics (QED), IR divergences are handled by the inclusion of soft photons of mass $\mu$ such that, in the limit $\mu \to 0$, the IR divergences neatly cancel. This technique does not carry over to the case of sCED.
A similar problem of IR divergences occurs in the study of scattering amplitude for non-relativistic QED, where ignoring the IR divergences at the first few orders of the perturbation leads to the correct results\cite{Caswell:1985ui}\cite{Labelle:1996en}. Lastly, the problem of IR divergences has also been observed for the case of scalar Galilean electrodynamics (sGED)\cite{Chapman:2020vtn} where ignoring IR divergences leads to a renormalized theory of sGED. For the rest of the discussion, we shall abide by this approach and plan to examine the resolution of IR divergences in the Carrollian setting in the future.
\subsubsection{Loop corrections and renormalization conditions}
The two propagators we are interested in are gauge field propagator $\big< B,B\>$ and the complex scalar propagator $\big< \phi, \phi^*\big>$. We shall begin our discussion with the gauge field propagator $\big< B, B\big>$. The relevant 1 loop corrections are drawn in \autoref{fig:corrB}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{propcb}
\caption{In this panel, diagram (a) is the three point correction to the $\big<B,B\big>$ propagator and diagram (b) is the four point correction to the $\big<B,B\big>$ propagator.}
\label{fig:corrB}
\end{figure}\\
The loop correction $(\Sigma_1)$ to the $\big< B, B\big>$ propagator due to $V_{B\phi^* \phi}$ can be evaluated by integrating over the unconstrained loop variables $(\omega_q, q)$ of diagram (a) in \autoref{fig:corrB}, i.e.,
\begin{equation}
\Sigma_1= \bigintsss d\omega_q d^3 q\; \frac{e^2 (2 \omega_q+\omega_p)^2}{(-\omega_q^2+m^2)(m^2-(\omega_q+\omega_p)^2)}
\end{equation}
The superficial degree of divergence suggests that the integral converges in the energy sector but diverges cubically at large values of $q$. To this end, we put a UV cut-off $\Lambda$ in the momentum sector. Also, it must be observed that the integral does not contain any IR divergence since the integrand is well defined at $\omega_q \to 0$. A straightforward calculation then gives
\begin{equation}
\label{eqn:gcor3}
\Sigma_1 =i\frac{8\pi^2 e^2 \Lambda^3}{3 m}
\end{equation}
Notice that the degree of divergence of $\Sigma_1$ is cubic which agrees with the predicted degree of divergence. Next, we shall evaluate the correction $(\Sigma_2)$ offered due to $V_{B^2 \phi^* \phi}$. The Feynman diagram is given in diagram (b) of \autoref{fig:corrB}. The integral $\Sigma_2$ reads
\begin{equation}
\Sigma_2= \bigintsss d\omega_q d^3q\; \frac{2 e^2}{(m^2-\omega_q^2)}
\end{equation}
As before, the integral diverges cubically at large values of $q$ but remains convergent in $\omega_q$. The integral evaluates to
\begin{equation}
\label{eqn:gcor4}
\Sigma_2= -i\frac{8\pi^2 e^2 \Lambda^3}{3 m}
\end{equation}
With (\ref{eqn:gcor3}) and (\ref{eqn:gcor4}) at our disposal, the propagator $\big<B, B\big>$ up to first order in the perturbation, i.e., $\mathcal{O}(e^2)$, is given by
\begin{equation*}
\includegraphics[scale=0.7]{propbrenor}
\end{equation*}
Mathematically, we can write
\begin{equation*}
-\frac{i \xi}{\omega^2}+ \bigg(-\frac{i \xi}{\omega^2}\bigg) \Bigg[i\frac{8\pi^2 e^2 \Lambda^3}{3 m}\Bigg]\bigg(-\frac{i \xi}{\omega^2}\bigg)+\bigg(-\frac{i \xi}{\omega^2}\bigg)\Bigg[-i\frac{8\pi^2 e^2 \Lambda^3}{3 m}\Bigg] \bigg(-\frac{i \xi}{\omega^2}\bigg) = \text{finite}
\end{equation*}\\
Since the contribution from the three-point correction exactly cancels the contribution from the four-point correction, we end up with a finite value, which essentially means that to order $\mathcal{O}(e^2)$ in the perturbation the gauge field propagator $\Big< B, B\Big>$ remains finite and does not require any counter term. This allows us to make the redefinition $B_{(b)} =B$, where the subscript $(b)$ represents the bare field. Also recall that there is no interaction allowed for the vector field $A^i$ in the theory \eqref{eqn:gact2}, which essentially means that the gauge fields $B$ and $A_i$ follow the field redefinitions:
\begin{eqnarray}
\label{eqn:gaugerenor}
&B_{(b)}& = B\\
& A^i_{(b)}&= A^i
\end{eqnarray}
We now turn our attention to 1 loop corrections to the $\big< \phi^*, \phi\big>$. The allowed Feynman diagrams are given in \autoref{fig:corrS}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.93]{propcs}
\caption{In this panel, diagram (a) is the three point correction to the $\big<\phi^*,\phi\big>$ propagator and diagram (b) is the four point correction to the $\big<\phi^*,\phi\big>$ propagator.}
\label{fig:corrS}
\end{figure}\\
The expression for the loop integral ($\Pi_1$) in diagram (a) of \autoref{fig:corrS} takes the following form
\begin{equation}
\Pi_1= \bigintsss d\omega_q d^3q\; \frac{2 e^2 \xi (\omega_q+2 \omega_p)^2}{\omega_q^2 (m^2-(\omega_q+ \omega_p)^2)}
\end{equation}
As before, the integral diverges cubically at large values of $q$ and thus we put a UV cut-off $\Lambda$ in the momentum sector. In addition, the integral also admits an IR divergence. The source of the IR divergence is the mass-shell singularity present in the pole structure of the gauge field propagator, due to which the integrand diverges at $\omega_q \to 0$. As already discussed, we shall ignore the IR divergent piece and retain only the UV divergent part of the integral. The integral evaluates to
\begin{equation}
\label{eqn:phicorr3}
\Pi_1= -i\frac{8 \pi^2 e^2 \xi \Lambda^3}{3} \Bigg[\frac{1}{m}+\frac{8 m \omega_p^2}{(m^2-\omega_p^2)^2} \Bigg]
\end{equation}
Finally, the loop integral $(\Pi_2)$ in diagram (b) of \autoref{fig:corrS} reads
\begin{equation}
\Pi_2 =-2 e^2 \xi \bigintsss d\omega_q d^3 q\; \frac{1}{\omega_q^2}
\end{equation}
Clearly, the integrand diverges at $\omega_q \to 0$, leading to an IR divergent piece which, along the lines of the previous discussion, shall be ignored. Thus, the only UV divergent piece we have is $\Pi_1$, which we shall absorb by introducing a counter term, i.e.,
\begin{equation*}
\includegraphics[scale=0.9]{propphirenor}
\end{equation*}
where the last term is the counter term that we have added with $D$ as its coefficient. Mathematically, we can then write down\footnotemark \footnotetext{Note that for notational agreement, $\omega_p$ is now denoted by $\omega$.}
\begin{equation}
\label{eqn:renorcondmass}
\frac{i}{-\omega^2+m^2-i(D-i e^2 f(\xi, m,\omega, \Lambda))} =\text{finite}
\end{equation}
where
\begin{equation}
\label{eqn:call}
f(\xi, m,\omega, \Lambda) =\frac{8 \pi^2 \xi \Lambda^3}{3} \Bigg[\frac{1}{m}+\frac{8 m \omega^2}{(m^2-\omega^2)^2} \Bigg]
\end{equation}
such that (\ref{eqn:renorcondmass}) leads to a finite value for
\begin{equation*}
D= i e^2 f(\xi, m,\omega, \Lambda) = ie^2 \frac{8 \pi^2 \xi \Lambda^3}{3} \Bigg[\frac{1}{m}+\frac{8 m \omega^2}{(m^2-\omega^2)^2} \Bigg]
\end{equation*}
Notice that the mass dimension of $D$ is 2, i.e., $[D]=2$, which essentially means that the pole of (\ref{eqn:renorcondmass}) defines the mass renormalization condition for sCED. We can then write
\begin{equation*}
D= i \delta m^2
\end{equation*}
where,
\begin{equation}
\label{eqn:renormass}
\delta m^2 =e^2 \frac{8 \pi^2 \xi \Lambda^3}{3} \Bigg[\frac{1}{m}+\frac{8 m \omega^2}{(m^2-\omega^2)^2} \Bigg]
\end{equation}
Clearly, the corresponding counter term in the Lagrangian is
\begin{equation}
\label{eqn:mLagcounter}
(\mathcal{L}_{ct})_1 = \delta m^2 \phi^* \phi
\end{equation}
Although we managed to absorb the divergences via the counter term (\ref{eqn:mLagcounter}), there is something very unsettling about it. Notice that $\delta m^2$ depends upon the gauge parameter $\xi$. This is unphysical, for the mass should remain independent of the choice of gauge parameter. In fact, it is not just the mass: even the coupling turns out to depend on $\xi$ upon renormalization. This can be demonstrated by carrying out the renormalization of the three-point vertex $V_{B \phi^* \phi}$. The only possible correction to the vertex is given in \autoref{fig:corrV3}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.55]{verc1}
\caption{Correction to the three point vertex $V_{B \phi^* \phi}$}
\label{fig:corrV3}
\end{figure}\\
Following the renormalization scheme we can check that the counter term needed to absorb the divergences for the three point vertex is
\begin{equation}
\label{eqn:eLagcounter}
(\mathcal{L}_{ct})_2= -G \;i e B (\phi \partial_t \phi^*-\phi^* \partial_t \phi)
\end{equation}
where
\begin{equation}
\label{eqn:renormcoup}
G=-\frac{8 \pi^2 e^2 \xi \Lambda^3}{3 (m^2-\omega^2)} \Bigg[\frac{1}{m}+\frac{8 m \omega^2}{(m^2-\omega^2)^2}\Bigg]
\end{equation}
is the renormalization coefficient and evidently depends on the gauge fixing parameter $\xi$. The procedure of absorbing the UV divergent terms is not unique in quantum field theory. It is instructive to note here that, in the absence of counter terms, the role of the correction (\ref{eqn:phicorr3}) is to shift the mass $m$ (appearing in the Lagrangian) to the physical (renormalized) mass $m_{\tiny{\text{phy}}}$. Physically, this is interpreted as follows: the mass $m$ is infinite and it takes an infinite shift to bring it down to $m_{\tiny{\text{phy}}}$, i.e., without the counter term, the mass renormalization condition from (\ref{eqn:renorcondmass}) is given by
\begin{equation*}
\Big(-\omega^2+m^2-e^2 f(\xi, m,\omega, \Lambda)\Big)\Bigg|_{\omega^2 =m^2_{\tiny{\text{phy}}}} =0
\end{equation*}
which implies
\begin{equation}
\label{eqn:phymass}
m^2_{\tiny{\text{phy}}}= m^2-e^2 f
\end{equation}
where $f$ is given by (\ref{eqn:call}). However, an interesting thing to note here is that $m_{\tiny{\text{phy}}}$ is heavily gauge dependent. For any physical theory, $m^2_{\tiny{\text{phy}}}$ should remain independent of the gauge fixing parameter. A similar calculation carried out for the coupling leads to the same conclusion. This invalidates the conventional arguments of gauge independence of mass and coupling. For any physical theory, physical observables such as the mass or the coupling should not depend upon the choice of gauge parameter $\xi$. For example, in Lorentzian QED, it does not matter whether we work in the Feynman gauge $(\xi =1)$ or the Landau gauge $(\xi =0)$: the coupling of the theory, which is intimately related to the fine structure constant, remains independent of the gauge choice.\\[5pt]
The occurrence of $\xi$ in the renormalization coefficients $(\delta m^2, G)$ renders an ambiguity in the definitions of mass and coupling strength. However, this ambiguity is not new in the quantum field theory arena. As a matter of fact, such behaviour has been observed in Lorentz invariant quantum field theories as well. For example, in the massive Schwinger model in $(1+1)$ dimensions, the presence of mass shell singularities is known to invalidate the standard requirement of gauge independence of the renormalized mass \cite{Das:2012qz}\cite{Das:2013vua}. In the case of the Schwinger model, these ambiguities are resolved by using Nielsen identities, which require one to formulate the Lagrangian in a `physical' gauge\footnotemark\; \cite{Das:2012qz} \cite{Das:2013iha}. Nielsen identities provide a useful way to construct a notion of gauge independent renormalized mass \cite{Nielsen:1975fs} \cite{Breckenridge:1994gs}.\footnotetext{A physical gauge refers to a gauge choice where unphysical degrees of freedom, such as the Faddeev-Popov ghosts, decouple from the theory. Examples are the axial gauge and the Coulomb gauge. An advantage of working in physical gauges is that IR divergences are often softer and neatly separated.} It should be noted that the pole structure of the propagator for sCED shares a strong similarity with the Schwinger model in $1+1$ dimensions. However, to resolve the issue of the gauge dependence of mass in sCED, we first need to formulate the Lagrangian in a physical gauge. One shortcoming of working with physical gauges such as axial gauges is that they do not fix the gauge completely and thus leave a residual gauge degree of freedom. However, as far as gauge invariant quantities are concerned, it should not matter which gauge we work in. Obviously, mass and coupling strength are the physical observables in a theory and should thus remain independent of the gauge choice. With the aim of resolving these ambiguities, it shall be interesting to study the quantization of sCED in this framework. We plan to address this problem in detail in the future.\\
\subsection{Counter term and bare Lagrangian}
In the preceding section we realized that the renormalized mass and coupling admit ambiguities, for they turn out to depend upon the gauge parameter $\xi$. However, as already mentioned, the renormalization scheme is not unique. One way to remove the UV divergences in the theory is to adhere to the method of counter terms. We conclude from the renormalization of the three-point vertex and the propagators that the complex scalar field enjoys the following field redefinitions
\begin{equation}
\label{eqn:comrenor}
\phi_{(b)} =\phi \qquad \implies \quad \phi^*_{(b)} =\phi^*
\end{equation}
It then follows from (\ref{eqn:mLagcounter}) and (\ref{eqn:comrenor}) that the bare mass term in the Lagrangian, i.e., $\mathcal{L}_{(b)}^{(mass)}$, is given by
\begin{eqnarray}
&\mathcal{L}_{(b)}^{(mass)}& = m^2 \phi^* \phi + \delta m^2 \phi^* \phi \nonumber \\[5pt]
\implies &\mathcal{L}_{(b)}^{(mass)}& =\big(m^2 +\delta m^2 \big) \phi^*_{(b)} \phi_{(b)} \nonumber \\[5pt]
\label{eqn:baremass}
\implies &\mathcal{L}_{(b)}^{(mass)}& = m^2_{(b)} \phi^*_{(b)} \phi_{(b)}
\end{eqnarray}
where $m^2_{(b)}= \big(m^2 +\delta m^2\big)$ defines the bare mass of the theory. Similarly using \eqref{eqn:gaugerenor}, (\ref{eqn:eLagcounter}) and \eqref{eqn:comrenor} we can write the bare coupling term $\mathcal{L}_{(b)}^{(coupling)}$ as
\begin{eqnarray}
&\mathcal{L}_{(b)}^{(coupling)}& = -i e B \big(\phi \partial_t \phi^*-\phi^* \partial_t \phi \big) -G \;i e B \big(\phi \partial_t \phi^*-\phi^* \partial_t \phi \big) \nonumber \\[5pt]
\implies &\mathcal{L}_{(b)}^{(coupling)}& =-(1+G) i e B_{(b)} \big (\phi_{(b)} \partial_t \phi^*_{(b)}-\phi^*_{(b)} \partial_t \phi_{(b)} \big )\nonumber \\[5pt]
\label{eqn:barecoup}
\implies &\mathcal{L}_{(b)}^{(coupling)}& =- i e_{(b)} B_{(b)} \big (\phi_{(b)} \partial_t \phi^*_{(b)}-\phi^*_{(b)} \partial_t \phi_{(b)} \big)
\end{eqnarray}
where $e_{(b)}=e (1+G)$ defines the bare coupling in the theory. Lastly, demanding the consistency of the field and coupling redefinitions, we can write down the bare term involving the four-point interaction, i.e., $\mathcal{L}_{(b)}^{quartic}$
\begin{eqnarray}
&\mathcal{L}_{(b)}^{quartic}& = -e^2 B^2 \phi^* \phi-\alpha^2e^2 B^2 \phi^* \phi \nonumber\\[5pt]
\label{eqn:barecoup1}
\implies &\mathcal{L}_{(b)}^{quartic}& = -e^2_{(b)} B^2_{(b)} \phi^*_{(b)} \phi_{(b)}
\end{eqnarray}
where $\alpha^2 = G(2+G)$. Finally, the bare Lagrangian $\mathcal{L}_{(b)}$ follows from (\ref{eqn:gaugerenor}), \eqref{eqn:comrenor}, \eqref{eqn:baremass}, \eqref{eqn:barecoup} and (\ref{eqn:barecoup1}) i.e,
\begin{equation}
\label{eqn:bareLag}
\begin{split}
\mathcal{L}_{(b)} = \Bigg[\frac{1}{2} \Big \{(\partial_i B_{(b)})^2+(\partial_t A^i_{(b)})^2 -2 (\partial_t B_{(b)}) (\partial_i A^i_{(b)})\Big\}-(\partial_t \phi^*_{(b)})(\partial_t \phi_{(b)}) \Bigg]\\
+m^2_{(b)} \phi^*_{(b)} \phi_{(b)}- i e_{(b)} B_{(b)} \Big (\phi_{(b)} \partial_t \phi^*_{(b)}-\phi^*_{(b)} \partial_t \phi_{(b)} \Big) -e^2_{(b)} B^2_{(b)} \phi^*_{(b)} \phi_{(b)}
\end{split}
\end{equation}
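As a simple consistency check (spelled out here for clarity), the quartic coefficient $\alpha^2$ is fixed by the coupling redefinition alone: since $e_{(b)}=e(1+G)$,
\begin{equation*}
e^2_{(b)} = e^2 (1+G)^2 = e^2 \big(1+G(2+G)\big) = e^2 \big(1+\alpha^2\big),
\end{equation*}
which is precisely the combination needed to match $-e^2 B^2 \phi^* \phi - \alpha^2 e^2 B^2 \phi^* \phi$ with $-e^2_{(b)} B^2_{(b)} \phi^*_{(b)} \phi_{(b)}$ above.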
This completes the renormalization process for sCED. However, there are several things to note here. First, the mass and coupling redefinitions have turned out to be heavily gauge dependent. Second, the leading divergent terms in the bare mass and bare coupling, i.e., (\ref{eqn:renormass}) and (\ref{eqn:renormcoup}), admit a mass shell singularity as $m^2 \to\omega^2$. Of course, one might be tempted to take the limit $m \to 0$ so that the mass shell singularity drops out. However, this complicates the situation even more, as the bare mass and bare coupling then become infrared divergent. Recall that the massless limit of sCED is actually a conformal theory at the classical level \cite{Bagchi:2019clu}. The emergence of IR divergences at the quantum level further complicates the matter. An important thing to observe here is that IR divergences are present even in the massless scalar Carrollian theory. For example, consider the Lagrangian for a massless Carrollian $\varphi^4$ theory
\begin{equation*}
\mathcal{L}= \frac{1}{2} (\partial_t \varphi)^2 -\lambda \varphi^4
\end{equation*}
where $\varphi$ is the scalar field and $\lambda$ is the coupling constant. The propagator $\< \varphi, \varphi\>$ is given by
\begin{equation*}
\< \varphi, \varphi\> = \frac{i}{\omega^2}
\end{equation*}
It is then evident that the first order loop correction requires one to evaluate integrals of the type $\sim \bigintsss d\omega \frac{i}{\omega^2}$, which clearly lead to IR divergences as $\omega \to 0$. The source of these IR divergences is the mass shell singularity, and it appears to be a generic feature of the presently known conformal Carrollian theories. \\[5pt]
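To make the IR behaviour explicit, one can regulate the integral with an infrared cutoff $\epsilon$ and an ultraviolet cutoff $\Lambda$ (written here purely for illustration),
\begin{equation*}
\int_{\epsilon}^{\Lambda} d\omega \, \frac{i}{\omega^2} = i\left(\frac{1}{\epsilon}-\frac{1}{\Lambda}\right),
\end{equation*}
which blows up as $\epsilon \to 0$, i.e., precisely in the infrared. \\[5pt]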
We refrain from expanding further on renormalization aspects, such as the beta function and the renormalization group flow for sCED, until the issues of gauge dependence and IR divergences are settled. In this work, we have demonstrated that the standard procedure of quantizing Carrollian field theories leads to a violation of the conventional arguments for gauge independence of mass and coupling. Further, the bare quantities defined above diverge severely on the mass shell. Lastly, the massless limit renders an IR divergent notion of bare mass and bare coupling, which further complicates the renormalization structure.
Clearly, the quantization of Carrollian gauge theories is not well understood at the moment and the potential issues mentioned above seem unavoidable as of now. More work in the Carrollian quantum sector is therefore needed. This paper should thus be viewed as a first step towards understanding the quantum ``properties'' of Carrollian field theories.
\section{Conclusion}
\label{section:conclusion}
Let us now summarize our findings. In this paper, we explored the quantum properties of massive sCED in $3+1$ dimensions via functional techniques. We highlighted the potential issues that crop up while quantizing a Carrollian abelian gauge theory such as sCED (at first order in perturbation theory) via standard functional techniques.\\[5pt]
To begin with, we propose an action for massive sCED consistent with Carrollian symmetries. Owing to the symmetries of the action, we construct the associated Noether charges and confirm that the Carrollian algebra is realized at the level of charges. We then implement path integral techniques to explore the quantum field description of the theory. Since sCED is a gauge theory, we gauge fix the action by implementing the Faddeev-Popov trick. A simple dimensional analysis suggests that the theory falls into the category of marginally renormalizable theories. We state the Feynman rules for the theory and study the renormalization up to first order in perturbation theory. To this end, we evaluate the allowed 1-loop corrections to the propagators and the vertices. However, the renormalization condition renders an unphysical notion of mass and coupling, in that they turn out to be gauge dependent. This behaviour bears a stark resemblance to the massive Schwinger model in $1+1$ dimensions, where the fermion mass turns out to be gauge dependent. Since mass and coupling strength are physical observables of a theory, the issue of their gauge dependence has to be settled, which brings us to the list of open questions that we shall address in upcoming work. \\[5pt]
The first and most prominent question to address is how to obtain a gauge-independent notion of mass and coupling for renormalized sCED. Our first guess is to draw on the wisdom from the Lorentzian case. Generally, in Lorentz invariant field theories, we employ Nielsen identities to redefine the mass renormalization conditions, which then provides a gauge-independent notion of renormalized mass. It will be interesting to see if we can carry out a similar procedure for sCED and establish the gauge independence of mass and coupling. Another possible way is to study the renormalization under the quenched rainbow approximation~\cite{10.1143/PTP.52.1326}. This approximation has also been used to establish gauge independence of the fermion mass for the massive Schwinger model \cite{Das:2012qz}\cite{Das:2013vua}\cite{Das:2013iha}. However, one serious limitation of this approach is that higher loop corrections become computationally difficult, making it harder to establish renormalizability at higher orders in perturbation theory.\\[5pt]
A natural question then follows: is gauge dependence (of mass and coupling) a generic feature of all Carrollian gauge quantum field theories? To this end, it would be interesting to study how the renormalization conditions are modified if we replace the massive Carrollian scalar with a massive Carrollian fermion.
One line of research that we are currently pursuing is the canonical quantization of Carrollian theories. Some work in this direction is already in progress and shall be reported in the near future. Extending the quantization program to conformal Carrollian theories is another direction for future work.
\section*{Acknowledgements}
We would like to thank Kinjal Banerjee and Rudranil Basu for a careful reading of the manuscript and several useful discussions. We would also like to thank the developers of JaxoDraw~\cite{Binosi:2003yf} for their free Java program for drawing Feynman diagrams. AM is supported by the Royal Society URF of Jelle Hartong through the Enhanced Research Expenses 2021 Award vide grant number: RF\texttt{\symbol{92}}ERE\texttt{\symbol{92}}210139.
|
{
"arxiv_id": "2302.13274",
"language": "en",
"timestamp": "2023-02-28T02:14:52",
"url": "https://arxiv.org/abs/2302.13274",
"yymm": "2302"
} | \section{Conclusion}
Third-party recursive DNS services offer a number of valuable features, from high resolution performance to
improved privacy through encrypted and Oblivious DNS. At the same time, these DNS variants are operated by
a handful of providers such as Google, Amazon and Cloudflare, strengthening a concerning trend toward DNS
centralization and its implications, ranging from user privacy and QoE, to system resilience and market
competition. We presented {\it \'Onoma}, an end-system DNS resolver that lets users take advantage of third-party
DNS services without sacrificing privacy or performance. Our evaluation shows the benefits of \'Onoma~across
locales, with different DNS services, content providers, and CDNs.
\begin{comment}
\section*{Acknowledgment}
Shucheng (Alex) Liu contributed to an early version of \'Onoma. We thank Marcel Flores and Esteban Carisimo for their insightful comments on early draft of the paper. This work is partially supported by a grant from the Comcast Innovation Fund.
\end{comment}
\clearpage
\SuspendCounters{totalpages}
\bibliographystyle{ACM-Reference-Format}
\section{Result}
\label{sec:result}
\section{System Design}
\label{sec:system_design}
In the following paragraphs, we present the key features of \'Onoma's design,
experimentally motivating and illustrating the benefits of each of its features. We
close the section with an overview of \'Onoma's system design.
\subsection{Privacy and Third-party DNS}
A recursive DNS resolver is privy to much information about a user's taste, preferences and habits.
Prior studies~\cite{olejnik:browsinghist,bird:browsinghist} have analyzed the uniqueness of users'
web browsing histories. Olejnik et al.~\cite{olejnik:browsinghist}, among other findings, show that
out of the 382,269 users that completed their popular site test, 94\% had unique browsing histories.
They found that with just the 50 most popular sites in their users' histories -- a subset of their
DNS request streams -- they were able to get a very similar distribution of distinctive histories
compared to complete knowledge of the site list. The replication study by Bird et al.~\cite{bird:browsinghist}
confirms this and shows that such histories are also highly stable and thus a viable tracking vector.
Beyond having access to detailed DNS request traces of millions of users~\cite{google:publicdns},
large third-party services are also an easier target for court orders to release data in
bulk~\cite{valentino:subpoenas}, and no privacy policy agreement can escape that.
\'Onoma{} improves privacy by avoiding DNS-based user identification through $(i)$ domain specific sharding~\cite{hoang:sharding,hounsel:ddns,jari:sharding} and $(ii)$ DNS request insertion.
\subsection{Evaluating Privacy Gains}
\label{subsection:privacy}
Since DNS resolvers are, by definition of their functionality, privy to the unencrypted version of the
requested resources, the best privacy a user could expect is in terms of k-anonymity. A dataset is said to
be k-anonymous if the information for each person contained in the release cannot be distinguished from at
least $k - 1$ individuals whose information also appears in the release. To measure the k-anonymity value,
we group users with the same profile (computed using the Jaccard distance between profiles) in the same
cluster. Users in clusters of size 1 (k-value = 1) can be re-identified on the basis of their unique profiles.
We evaluate the privacy gains of the different techniques used by \'Onoma~in terms of their impact on k-anonymity.
For this we use two weeks of users' DNS logs, separated by one week, collected (using CLI~\cite{oracle:cli} and
Splunk~\cite{splunk:www}) and made available (anonymized) by the Information Technology Center (IT) team at our
institution. This two-week log corresponds to 85,967 users. For every user, the DNS request logs consist of a
browsing history, over a window of time, including all URLs resolved by the user during that time.
From the DNS request logs of every user, we construct a boolean vector of distinct domains, thereby
removing all frequency measures. This boolean vector comprises the union of the 50 most popular domains
visited by each user in our set and ranks the domains based on their popularity across all users. The
length of our data collection window, and the format and size of the vector of popular domains we used,
follow the approaches of Olejnik et al.~\cite{olejnik:browsinghist} and Bird et al.~\cite{bird:browsinghist}.
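The following snippet is our own illustration (not the study's code) of one simple reading of this procedure: each user is reduced to a boolean profile over the popular-domain slots, users with identical profiles (Jaccard distance of zero) fall in the same cluster, and a user's k-value is the size of that cluster.
\begin{verbatim}
from collections import Counter

def k_anonymity(user_domains, popular_domains):
    """user_domains: dict user_id -> set of domains in the user's DNS log.
    popular_domains: ordered list of domains used as profile-vector slots."""
    profiles = {
        uid: tuple(d in visited for d in popular_domains)
        for uid, visited in user_domains.items()
    }
    cluster_sizes = Counter(profiles.values())   # identical profiles -> one cluster
    return {uid: cluster_sizes[p] for uid, p in profiles.items()}

# A user with a unique profile has k = 1 and can be re-identified.
users = {"u1": {"a.com", "b.com"}, "u2": {"a.com", "b.com"}, "u3": {"c.com"}}
print(k_anonymity(users, ["a.com", "b.com", "c.com"]))
# -> {'u1': 2, 'u2': 2, 'u3': 1}
\end{verbatim}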
\paragraph{Ethics.} Since DNS requests are by their nature sensitive, some ethical questions arise that
we address here.
To preserve the privacy of our users, the data we receive from the IT team consists of the sites visited by
each user stored against random non-persistent ids so our data does not give us access to any personal
identifiers of users. These ids are one-way hashes and cannot be mapped back to the users. They are, however,
unique for each user allowing us to check if we can correctly re-identify users (anonymized) based on their
DNS request logs. Since we are not able to identify individual users, we cannot
associate the specific queries of any sensitive nature with any particular user. At most we can only identify
the queries that are being made.
In addition, we submitted our methodology to our institution’s ethical review board and were given a
determination of ``not human research''.
\subsubsection{Domain-specific Sharding}
\label{subsubsection:domainsharding}
Previous work has suggested \textit{sharding} -- scattering DNS requests, randomly, across different resolvers -- as a way to address centralization concerns and to improve privacy~\cite{hoang:sharding,hounsel:ddns,jari:sharding}. Sharding prevents any single resolver from acquiring the complete user's request stream. However, if all DNS requests were distributed randomly across the available DNS servers, after a sufficiently long period of time, each DNS provider will have been queried for most DNS questions. Thus, a basic random distribution of requests across resolvers eventually lets each DNS service provider obtain a full view of the client's request stream and construct their profile.
\'Onoma{} avoids this by using domain-specific sharding~\cite{arko:centraliseddns}. After selecting a DNS provider for a query, \'Onoma{} stores this mapping in a domain-to-DNS map. All subsequent queries of the user under the same second-level domain (SLD) will be directed to the same DNS server, as sketched below. This form of sharding~\cite{jari:sharding} limits the view of DNS service providers to a subset of the users' queries, preventing any provider from constructing a full user profile.
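The snippet below is our own minimal sketch of this mapping (not \'Onoma's actual code); the SLD extraction is simplified and a real implementation would consult the Public Suffix List.
\begin{verbatim}
import random

class DomainSharder:
    def __init__(self, resolvers):
        self.resolvers = list(resolvers)  # e.g. ["8.8.8.8", "1.1.1.1", "9.9.9.9"]
        self.domain_map = {}              # SLD -> resolver chosen for it

    @staticmethod
    def sld(qname):
        # Simplified: the last two labels stand in for the second-level domain.
        return ".".join(qname.rstrip(".").split(".")[-2:])

    def resolver_for(self, qname):
        key = self.sld(qname)
        if key not in self.domain_map:        # first query under this SLD
            self.domain_map[key] = random.choice(self.resolvers)
        return self.domain_map[key]           # later queries reuse the choice

sharder = DomainSharder(["8.8.8.8", "1.1.1.1", "9.9.9.9"])
assert sharder.resolver_for("www.example.com") == sharder.resolver_for("cdn.example.com")
\end{verbatim}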
To evaluate the privacy benefits of domain specific sharding, we select users from the two-week data set with DNS request streams including more than 50 unique SLDs.\footnote{We ran our analysis over a window of different days in the two-week period and obtained similar results.} We apply the domain specific sharding algorithm to the request stream of each user and distribute the requests across a set of $R$ resolvers; we refer to $R$ as the sharding value.
Each resolver stores a bit vector for each user (a union of the 50 most popular domains visited by all users, initially set to zero) and sets the bit to one for the user if the corresponding domain is found in the user's request stream. Our privacy goal is to challenge re-identification, ensuring that the vector maps at each resolver for different users are similar for several ($k > 1$) users.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{figs/sharding.png}
\caption{K-anonymity values of the set of users with different sharding values.}
\label{fig:sharding}
\end{figure}
Figure~\ref{fig:sharding} shows the cumulative distribution function (CDF) of k-anonymity, averaged across resolvers, for different sharding values. \textit{No sharding} provides the baseline and has the lowest level of k-anonymity, with k-values of 1, i.e., unique profiles, for over 88\% of users. This is consistent with previous findings on the uniqueness of browsing histories~\cite{olejnik:browsinghist,bird:browsinghist}. A sharding value of 2, that is, when a user's request stream is sharded across two resolvers, drops the set of users that can be re-identified to 79\%; the maximum sharding value of 8 in this analysis reduces this to 49\%. A concern with high sharding values is that, while relying on more resolvers may improve k-anonymity, it also increases the chance of finding an under-performing resolver, which can impact the user's average resolution time. We illustrate this and discuss \'Onoma's response in Sec.~\ref{subsec:performance}.
\subsubsection{Popular Request Insertion}
\label{subsubsection:insertion}
Domain specific sharding, while clearly useful, has limited impact on k-anonymity. \'Onoma~further improves the k-anonymity of users by adaptively inserting popular requests into a user's DNS request stream based on its uniqueness. The more distinctive the request stream,
the higher the fraction of popular DNS requests needed to make it similar (higher $k$) to other users' request streams.\footnote{\'Onoma~lets users direct truly distinctive or otherwise problematic domains to an Oblivious DNS~\cite{odns:cloudflare,schmitt:odns}, thus limiting its performance impact.}
Our current implementation inserts a number, an ``insertion factor'', of popular domains randomly sampled from the top50 most popular sites,\footnote{We exclude a configurable blacklist of domains.} for every \textit{unique} domain in the user's request stream.
We consider a domain as \textit{unique} if it is not part of the top500 most popular sites. The approach is clearly independent of both the specific
list and the subsets (top50 and top500) we use.
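A minimal sketch of this insertion logic (our own illustration; the top50 and top500 lists are placeholders and the real implementation may differ) is shown below.
\begin{verbatim}
import random

def with_insertions(request_stream, top50, top500, insertion_factor):
    """Yield the user's requests, adding `insertion_factor` popular domains
    (sampled from the top50) after every request for a *unique* domain,
    i.e., a domain outside the top500."""
    top50 = list(top50)
    top500 = set(top500)
    for domain in request_stream:
        yield domain
        if domain not in top500:   # distinctive request: add cover queries
            for cover in random.sample(top50, insertion_factor):
                yield cover
\end{verbatim}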
To analyze the privacy benefits of popular request insertion, we follow the same approach as in our evaluation of sharding, and apply our insertion algorithm with different values of the insertion factor $f$. As before, each recursive resolver stores a bit vector per user, recomputed for each value of $f$ as insertion causes an addition of requests to the users' request streams. Each resolver, as before, sets the bit to one for a given user if it sees the corresponding domain in the user's request stream.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{figs/sharding_2.png}
\caption{K-anonymity values for different insertion factors and a sharding value of 2.}
\label{fig:insertion_sharding1}
\end{figure}
Figure~\ref{fig:insertion_sharding1} shows the CDF of k-anonymity values for different insertion factors (no insertion and insertion factors of 1, 2 and 3) and a sharding value of 2. The figure shows that, even for the lowest sharding value, an insertion factor of 2 results in k-anonymity larger than 2 for over 98\% of users. Insertion factors of 1, 2 and 3 yield k-anonymity larger than two for 79\%, 98\% and 100\% of users, respectively. With a sharding value of 8 (not shown), the same insertion factors of 1, 2 and 3 yield 2-anonymity for 86\%, 99\% and 100\% of users, respectively.
Figure~\ref{fig:insertion_sharding2} shows the benefit of higher sharding values for a given insertion factor. For an insertion factor of 3, sharding values of 2 and 8 increase the number of users with 2-anonymity to 100\% (and 4-anonymity for 53\% of users).
The insertion of popular requests comes at the cost of additional requests. Even for a fixed insertion factor, the number of additional requests inserted will vary as this is a function of the \textit{uniqueness} of a user's request stream. We estimated the mean overhead (and standard deviations) of different insertion factors for users in our dataset. Table~\ref{tab:privacy_overhead} summarizes this overhead,
as percentage of bytes added to a user's request stream, for three different insertion factors -- 1, 2 and 3. For these factors, the average overhead ranges from 20.27\% to 42.29\%. Given that DNS is responsible for less than 0.1\% of total traffic volume~\cite{schomp:harmful}, even the highest overhead should minimally impact network load.
\begin{table}[h!]
\footnotesize
\caption{Privacy overhead for insertion factors. }
\centering
\begin{tabular}{| l | c | c |}
\hline
\textbf{Insertion Factor} & Mean (\%) & Std Dev \\
\hline
1 & 20.27 & 3.91\\
2 & 33.37 & 5.33\\
3 & 42.29 & 5.89\\
\hline
\end{tabular}
\label{tab:privacy_overhead}
\end{table}
\textit{\'Onoma{} uses a combination of popular request insertion and domain specific sharding} to achieve k-anonymity for a large number of users, with relatively low overhead, while avoiding other potential costs of centralization. Figure~\ref{fig:insertion_sharding2} shows the combined effect of sharding and insertion in the distribution of k-anonymity values across all users in our dataset for an insertion factor of 3. Sharding over 4 resolvers with an insertion factor of 3 yields 2-anonymity for 100\% of users (and 4-anonymity for $\approx$60\% of users).
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\columnwidth]{figs/insertion_3.png}
\caption{K-anonymity values of users in our dataset with insertion factor of 3.}
\label{fig:insertion_sharding2}
\end{figure}
\subsection{Performance and Third-party DNS}
\label{subsec:performance}
Despite the global scale of many of these DNS services' infrastructures~\cite{calder:mapping}, not all regions are equally covered by all services. For specific locales, a particular third-party DNS service may not have nearby servers to offer users, or have fewer or farther away servers than a different service. This can negatively impact both DNS resolution times and web QoE~\cite{otto2012content} for users of the many CDNs that continue to rely on DNS for replica selection, translating into lower engagement~\cite{kirshnan:quasi} and potential loss of revenue~\cite{whitenton:needspeed}.
To illustrate the extent to which the \textit{best} DNS service is location specific, we measure the DNS resolution times with different DNS services in different countries. We select Argentina, Germany, India and the US; one country in each of the top four continents in terms of Internet users (Asia, North America, Europe, South America). In each locale, we determine the performance of three popular services -- Google, Quad9 and Cloudflare -- as well as two regional DNS services. For each country, we select the top50 sites and issue multiple DNS requests (three) for each of the resources included in those pages. We record the minimum resolution time from the three repeated runs to ignore any transient spikes in latency. To ensure we are assigned to the correct resolver for clients in each specific country, we use a VPN (NordVPN) with servers in the selected countries.
Figure~\ref{fig:resolutiontime} plots the CDFs of the resolution time for popular resources in two of these countries, India and the US (full set of graphs in \S\ref{sec:appendix}). The resolution times of all resources, for each DNS provider in a country, are plotted relative to the DNS service performing best in the average case. Service resolution times are relative to Quad9 in the US and to Cloudflare in India. We can see that, in the median case, Google performs as well as Cloudflare, but Regional2 performs $\sim$40ms worse than Cloudflare and Regional1 performs 87ms worse than Cloudflare in India. In the US, Regional1, Regional2 and Cloudflare have a difference of less than 10ms from the best resolver (Quad9), whereas Google performs $\sim$25ms worse than Quad9 (at the 50th percentile).
\begin{figure}[ht]
\centering
\subfigure[IN\label{fig:INResolution}]{\includegraphics[width=0.49\columnwidth]{figs/IN_Resolution.png}}
\subfigure[US\label{fig:USResolution}]{\includegraphics[width=0.49\columnwidth]{figs/US_Resolution.png}}
\caption{DNS resolution times of individual, public DoH and DNS providers in IN and the US relative to the best performing DNS service in each country (Cloudflare for IN and Quad9 for the US).}
\label{fig:resolutiontime}
\end{figure}
Figure~\ref{fig:resolutiontimeranking} is a bump graph that summarizes the rankings of the evaluated services, based on DNS resolution times, across countries. Each service is associated with a differently colored line. The different lines change positions as the associated service changes rank positions between countries. The plot clearly illustrates that the ranking among DNS services varies widely across locales.\footnote{We repeated this measurement over a period of a week and noticed that the ranking changed over time as well.} Figure~\ref{fig:resolutiontime} illustrates the differences in relative resolution times using India and the US as examples (full set of graphs in \S\ref{sec:appendix}). While Quad9 has the best resolution times in the US, Google ranks the lowest. On the other hand, Cloudflare has the best resolution times in India while Google performs second best. The maximum (average) Kendall tau distance\footnote{The Kendall tau rank distance is a metric, ranging from 0 to 1, that counts the number of pairwise disagreements between two ranking lists.} between rankings is 0.6 (0.37), clearly showing that there is no \textit{best} DNS service for every location.
\begin{table}[h!]
\footnotesize
\centering
\caption{Regional DNS resolvers IP references. }
\begin{tabular}{| l | c | c |}
\hline
\textbf{Countries} & Regional 1 & Regional 2 \\
\hline
AR & 190.151.144.21 & 200.110.130.194 \\
DE & 46.182.19.48 & 159.69.114.157 \\
IN & 182.71.213.139 & 111.93.163.56 \\
US & 208.67.222.222 & 208.67.220.220 \\
\hline
\end{tabular}
\label{tab:regional_ip}
\end{table}
\begin{figure}[htb!]
\includegraphics[width=0.9\columnwidth]{figs/DNSResolutionTimesRanking.png}
\caption{A bump graph of DNS resolution times ranking across countries (Regional DNS services in Table~\ref{tab:regional_ip}). The maximum (avg) Kendall tau distance between rankings is 0.6 (0.37) showing that there is no \textit{best} DNS service for every location.}
\label{fig:resolutiontimeranking}
\end{figure}
The differences in performance across DNS services mean that sharding requests will result in high performance variability. To illustrate this, we plot CDFs of the resolution time for popular resources in different countries using different DNS services and basic sharding. Figure~\ref{fig:resolutiontime_sharding} shows the resolution times for each service relative to the best performing resolver in Germany (full set of graphs in \S\ref{sec:appendix}). The figure clearly shows that the resolution times of sharding in each locale are affected by the worst performing DNS service in that locale (the Regional1 service in Germany), thereby increasing the resolution times of sharding in comparison to the better performing DNS services.
Domain specific sharding, while improving privacy, will most likely not reduce this performance variability, as all queries belonging to a given domain could be assigned to a randomly selected (potentially the slowest or fastest for the locale) DNS service, and the association persists for all resolution requests of the associated domain.
\begin{figure}[ht]
\centering
\subfigure[DE\label{fig:DEResolution_sharding}]{\includegraphics[width=0.8\columnwidth]{figs/DE_Resolution_sharding.png}}
\caption{DNS resolution times of individual, public DoH and DNS providers and sharding in Germany relative to the best performing DNS service (Quad9).}
\label{fig:resolutiontime_sharding}
\end{figure}
\subsubsection{Racing}
\label{subsubsection:racing}
\'Onoma{} addresses this potential problem by using resolution races~\cite{vulimiri:racing} that place DNS services in competition with each other. For each DNS request, \'Onoma{}~uses a configurable number of DNS resolvers and issues the same request to each of them simultaneously, returning to the client the response from the first resolver that successfully resolves the query, thus addressing the problem of high latency caused by latency variance. This redundancy comes at a negligible bandwidth cost for potentially significant improvements in resolution times.
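A minimal sketch of the racing step is shown below (our own illustration; \texttt{resolve\_with} is a hypothetical helper that queries a single resolver, e.g., via dnspython).
\begin{verbatim}
from concurrent.futures import ThreadPoolExecutor, as_completed

def race(qname, resolvers, resolve_with, timeout=2.0):
    """Send the same query to every resolver concurrently and return the
    answer from the first resolver that responds successfully."""
    with ThreadPoolExecutor(max_workers=len(resolvers)) as pool:
        futures = {pool.submit(resolve_with, r, qname, timeout): r
                   for r in resolvers}
        for fut in as_completed(futures):
            try:
                return futures[fut], fut.result()  # (winning resolver, answer)
            except Exception:
                continue                           # this resolver failed; keep waiting
    raise RuntimeError("no resolver answered for " + qname)
\end{verbatim}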
\begin{comment}
content...
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\linewidth]{figs/shardingvsracing}
\caption{Distribution of requests across DNS services (top) and their average DNS resolution times with different racing configurations (bottom).}
\label{fig:shardingvsracing}
\end{figure}
\end{comment}
A potential issue of racing DNS services is the associated tradeoff between privacy and performance. Racing a larger number of DNS services increases the chances of finding the \textit{best} resolver for that request and locale, but also increases the fraction of the user's DNS request stream seen by any particular service.
To illustrate this tradeoff, we conduct an experiment measuring the performance and request stream exposure of different racing configurations. We shard 100 resources, in the top50 sites in the US, across different DNS providers. We issue resolution requests for these resources to a randomly selected resolver, and run races between two and three resolvers for each resolution request. Table~\ref{table:fraction_of_requests} presents a summary of these results.
\begin{table}[htb!]
\centering
\caption{Resolution times with \'Onoma{} running no resolution races, racing 2 and racing 3 resolvers, and the associated percentage of requests each resolver receives.
}
\begin{tabular}{|c|c|c|c|c|}
\hline
& \multicolumn{2}{|c|}{Resol Time} & \multicolumn{2}{|c|}{Request \%}\\
\hline
& \textbf{Mean} & \textbf{Std Dev} &\textbf{Max} &\textbf{Avg} \\
\hline
No Racing & 42.22 & 38.13 &20.34&12.5\\
Racing 2& 25.83 & 12.42&28.10&25.0 \\
Racing 3& 21.34 & 6.53&42.31&37.5 \\
\hline
\end{tabular}
\label{table:fraction_of_requests}
\end{table}
The table shows the performance advantages of racing. Just racing two resolvers offers a percentage improvement of 38.8\%, while racing three provides an additional 17.88\% improvement in resolution time.\footnote{A reminder of \textit{the power of two random choices}~\cite{mitzenmacher:2choice}.}
Table~\ref{table:fraction_of_requests} also shows the tradeoff posed by increasing the number of raced resolvers, listing the percentage (maximum and average) of the DNS request stream exposed by each of these strategies to the different resolvers. The maximum percentage of requests seen by each resolver increases from 20.34\%, with \textit{no racing}, to 28.10\% when racing 2 resolvers and 42.31\% when racing 3 resolvers. Racing 2 resolvers achieves the best balance between privacy and performance in our analysis and we set this as the default configuration of \'Onoma.
Finally, we also show the tradeoff between privacy and performance across different DNS variants, such as DNS over Tor, our ISP DNS, third-party DNS services, sharding and \'Onoma. Figure~\ref{fig:perfvsprivacy} shows box plots of resolution times (x-axis) for each DNS variant, while the y-axis shows the privacy gain of each variant in terms of the percentage of users that have at least 2-anonymity. The figure shows that DNS over Tor gives the maximum privacy but significantly worse resolution times, whereas ISP DNS, while providing the best performance, compromises privacy significantly. Sharding improves 2-anonymity but, by randomly distributing requests across popular DNS providers, it can also significantly hurt performance. \'Onoma, on the other hand, gives users the best combination of privacy and performance.
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\linewidth]{figs/perfvsprivacy}
\caption{Privacy and/or Performance. Average DNS resolution times and corresponding 2-anonymity values of different DNS variants and \'Onoma.}
\label{fig:perfvsprivacy}
\end{figure}
\subsubsection{Direct Resolution}
\label{subsubsection:directresolution}
Even if certain third-party DNS services result in faster resolution times for their users, the same users can still experience worsening Web QoE, depending on the specific deployments of the DNS services and popular CDN infrastructures in their region, and on the mapping policies of these CDNs. Differences in relative distance between users and their recursive DNS resolvers can result in a poor mapping of a user to a CDN replica server and higher delays in getting objects hosted by those replicas.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{figs/CDNDistribution}
\caption{Relative popularity of CDNs in top50 Regional sites in Argentina, Germany, India and the US.}
\label{fig:CDNDistribution}
\end{figure}
\textbf{The popularity of CDNs varies across locales.} Figure~\ref{fig:CDNDistribution} shows
the relative popularity of different CDNs per country. Each cluster represents a country and each bar indicates the number of resources, from the top50 regional sites, hosted on a particular CDN. Clearly, the relative ranking of CDNs as hosts of the most popular resources varies across locales. For instance, Germany's popular sites and resources seem to be preferentially hosted on Fastly (44\% of resources), whereas the other three countries rely primarily -- although to different degrees -- on Akamai (ranging from 44\% to 50\%). Additionally, Amazon has a relatively better presence in the US, but not in the other three countries.
\begin{figure*}[ht]
\centering
\subfigure[AR,Akamai\label{fig:AR_akamai_ttfb}]{\includegraphics[width=0.24\textwidth]{figs/AR_Akamai_ttfb.png}}
\subfigure[DE,Fastly\label{fig:DE_fastly_ttfb}]{\includegraphics[width=0.24\textwidth]{figs/DE_Fastly_ttfb.png}}
\subfigure[IN,Akamai\label{fig:IN_akamai_ttfb}]{\includegraphics[width=0.24\textwidth]{figs/IN_Akamai_ttfb.png}}
\subfigure[US,Akamai\label{fig:US_akamai_ttfb}]{\includegraphics[width=0.24\textwidth]{figs/US_Akamai_ttfb.png}}
\newline
\caption{Percentage Difference in TTFB of individual, public DoH and DNS resolvers per country, relative to the best DNS service in that locale, across different countries. Cloudflare, which performs worst in Germany for content hosted on Fastly, shows 10\% greater TTFB in the median case compared to the best service (Regional1).}
\label{fig:ttfb}
\end{figure*}
\begin{figure}[ht]
\centering
\subfigure[Akamai\label{fig:ttfbRankingAkamai}]{\includegraphics[width=0.8\columnwidth]{figs/DNSRankingTTFBAkamai.png}}
\subfigure[Amazon\label{fig:ttfbRankingAmazon}]{\includegraphics[width=0.8\columnwidth]{figs/DNSRankingTTFBAmazon.png}}
\caption{DNS resolvers ranking by TTFB for resources hosted on different CDN across different countries.
The maximum (avg) Kendall tau distance between rankings is 0.4 (0.27) for Akamai and 0.7 (0.53) for Amazon.}
\label{fig:ttfbRanking}
\end{figure}
\textbf{User's QoE is impacted by the choice of DNS service depending on the locale.} To evaluate how users' QoE in different locales for content hosted on different CDNs varies with the choice of DNS service, we then conduct an experiment measuring time-to-first-byte (TTFB) as a proxy for Web QoE. TTFB gives the minimum response time for the content obtained from the CDN replica and thus measures the effectiveness of the CDN mapping. For this experiment, we use resources from the top50 regional sites hosted on the most popular CDN in each country (Fig.~\ref{fig:CDNDistribution}).
We load each resource three times using Google's lighthouse~\cite{lighthouse} for different (CDN, country) combinations using the same VPN service and record the minimum TTFB for each resource. We use the minimum value to ignore any transient spikes in TTFB measurements.
From the recorded TTFB measurements with each resolver for the (CDN, country) combination, we find the best performing resolver, in the median case, and calculate the percentage difference in TTFB of each DNS service from this baseline for the given CDN and locale.
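One plausible reading of this normalization step, written out as a short sketch (our own illustration; the data layout is hypothetical), is the following.
\begin{verbatim}
from statistics import median

def relative_to_best(ttfb_by_resolver):
    """ttfb_by_resolver: dict resolver -> list of minimum TTFBs (ms), one
    per resource. Returns the baseline resolver and each measurement as a
    percentage difference from the baseline's median TTFB."""
    best = min(ttfb_by_resolver, key=lambda r: median(ttfb_by_resolver[r]))
    base = median(ttfb_by_resolver[best])
    diffs = {r: [100.0 * (t - base) / base for t in ts]
             for r, ts in ttfb_by_resolver.items()}
    return best, diffs
\end{verbatim}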
Figure~\ref{fig:ttfb} plots a CDF of this difference in TTFB. The figure shows that different DNS providers result in different TTFB for resources hosted on different CDNs, across different locales. Cloudflare, which performs worst in Germany for content hosted on Fastly, shows 10\% higher TTFB in the median case compared to the best service (Regional1), while Google (with the worst TTFB for IN, Akamai) shows 1\% higher TTFB in the median case compared to the best service (Regional2).
Figure~\ref{fig:ttfbRanking} is a bump graph further highlighting this point (full set of graphs in \S\ref{sec:appendix}). The figure ranks DNS resolvers by TTFB for resources hosted on different CDNs across different countries and highlights their relative changes in rank. For instance, Google gives the worst QoE for content hosted on Akamai in the US and India (even though Google gives the second best DNS resolution times in India, as shown in Fig.~\ref{fig:resolutiontimeranking}), but results in the best QoE for content hosted on Amazon in Argentina (where it has the second worst resolution time, as seen in Fig.~\ref{fig:resolutiontimeranking}). This reiterates that the user's web QoE in fact depends on the specific deployments of the DNS services and popular CDN infrastructures in their region, and on the mapping policies of these CDNs.
\textit{To address the QoE impact of using different third-party DNS resolvers and CDNs in different locales, \'Onoma~combines racing with direct resolution~\cite{otto2012content} when resolving content that is hosted by a CDN.} It uses the response from the fastest DNS resolver for direct resolution. Direct resolution combines the best aspects of iterative and recursive DNS resolution to obtain improved CDN redirections for each resource. It leverages the cache of a recursive
resolver to efficiently map a Canonical Name (CNAME) to an authoritative name server, but directly contacts the CDN's
authoritative server to obtain a precise redirection for the client. This gives the CDN's authoritative server better information about the client's actual network location and allows for a better replica assignment~\cite{otto2012content}.
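The sketch below illustrates the idea (our own simplified version assuming dnspython 2.x; \'Onoma's implementation may differ, e.g., in how it walks the name hierarchy and handles failures).
\begin{verbatim}
import dns.message, dns.name, dns.query, dns.rdatatype, dns.resolver

def direct_resolve(hostname, recursive_ip, timeout=2.0):
    rec = dns.resolver.Resolver(configure=False)
    rec.nameservers = [recursive_ip]
    # 1. Follow the CNAME chain via the (likely cached) recursive resolver.
    target = dns.name.from_text(hostname)
    try:
        target = rec.resolve(target, "CNAME")[0].target
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        pass                         # name is not CNAME'd; query it directly
    # 2. Find an authoritative name server for the target's zone, walking up
    #    the name until an NS record is found.
    zone, ns_name = target, None
    while ns_name is None and zone != dns.name.root:
        try:
            ns_name = rec.resolve(zone, "NS")[0].target
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            zone = zone.parent()
    ns_ip = rec.resolve(ns_name, "A")[0].address
    # 3. Query the authoritative server directly, exposing the client's own
    #    network location to the CDN's mapping system.
    query = dns.message.make_query(target, dns.rdatatype.A)
    resp = dns.query.udp(query, ns_ip, timeout=timeout)
    return [rr.address for rrset in resp.answer
            if rrset.rdtype == dns.rdatatype.A for rr in rrset]
\end{verbatim}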
To understand the aggregated benefits of the combined approach, we compare the performance of \'Onoma{} with that of sharding. Since sharding, by itself, randomly distributes DNS requests across a set of public DNS servers, the resulting performance matches the average performance of public DNS services in each region and can serve as a baseline for comparison. We load each resource of the regional top50 sites three times and compute the minimum TTFB.
\begin{figure}[htb!]
\centering
\subfigure[AR,Akamai\label{fig:AR_OnomavsProxyAkamai}]{\includegraphics[width=0.4\textwidth]{figs/AR_OnomavsProxyAkamai.png}}
\caption{Percentage Difference in TTFB of Sharding and \'Onoma{} relative to the best DNS service across the US and Argentina.}
\label{fig:OnomavsProxy1}
\end{figure}
Figure~\ref{fig:OnomavsProxy1} presents a CDF of the percentage difference in TTFB for both \'Onoma~and sharding, relative to the best performing DNS service for a particular (CDN, country) combination (complete graphs in \S\ref{sec:appendix}). The figure includes the plot for Argentina for content hosted on Akamai. The graphs show the clear dominance of \'Onoma~over sharding across CDNs in both locales, with a median improvement of 9.34\% in Argentina with the Akamai CDN.
\subsection{Overview}
\'Onoma~lets users leverage the benefits of third-party DNS services while $(i)$ providing improved k-anonymity, via a combination of sharding and popular domain insertion, $(ii)$ achieving consistently better resolution times than individual DNS resolvers, independently of the locale, by running resolution races, and $(iii)$ avoiding any impact on users' web experience via direct resolution.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figs/OnomaFlowChart.png}
\caption{\'Onoma~interaction with DNS resolvers.}
\label{fig:model}
\end{figure}
Pseudocode~\ref{algo:code} provides a high-level description of \'Onoma's DNS request resolution process, while Fig.~\ref{fig:model} illustrates its interaction with the different DNS resolvers.
\begin{algorithm}[htb!]
\caption{Resolution process followed by \'Onoma.} \label{algo:code}
\begin{algorithmic}\scriptsize
\For{DNS resolution}
\If{domain $\notin$ DomainMap}
\State Run races among resolvers
\State Add domain and winner resolver to DomainMap
\EndIf
\If{domain $\notin$ popular domains}
\For{insertion-factor}
\State Sample a popular domain
\State Insert the domain in request stream
\EndFor
\EndIf
\If{CDN-ized domain}
\State Direct resolution for domain
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\section{Background}
\label{sec:background}
In the following paragraphs, we provide background on DNS, the relation between DNS and CDNs and recent trends in the DNS market.
The process of DNS resolution involves at least a stub resolver that asks questions of a recursive resolver which, in turn, finds the
answer by asking one or more authoritative DNS servers. Traditionally, the user's recursive resolver has been provided by the user's ISP
and is found, in networking terms, generally close to its users.
ISP DNS recursive resolvers are, however, not always the best in terms of resolution time~\cite{ager:dns,rula:behind}, and slower DNS
resolutions can significantly impact users' QoE. Besides resolution performance, there are several other potential drawbacks of using
an ISP provided resolver in terms of reliability, privacy and censorship~\cite{comcast:outage,ripe:censorship,liu:ispPrivacy}.
Partially in response to these issues, a third-party ecosystem has evolved around DNS over the years. Services such as OpenDNS, UltraDNS
and Google Public DNS~\cite{google:publicdns} offer a number of valuable features, from load balancing, nonce prepending and shared caching
for performance, to phishing and botnet detection for security. More recently, some of these same providers have begun to offer DNS over
HTTPS (DoH), DNS over TLS (DoT) and DNSCrypt~\cite{dnscrypt:www}, using encryption between the DNS stub and the provided recursive resolver
to prevent on-path eavesdropping, spoofing, and blocking~\cite{cloudflaredoh,quad9doh}. These DNS services typically host servers on multiple
sites around the globe and use anycast to guide DNS clients to the nearest resolver. Note, of course, that the resolution times experienced
by any particular client will depend in part on their location and proximity to these ``nearest'' resolvers for their chosen DNS service.
The proximity, or lack thereof, between clients and their DNS resolvers can impact not just resolution times. This observation has been at
the core of CDNs mapping systems, sometimes referred to as \textit{redirection}, the process by which the CDN decides the replica server
where to redirect a client's request for content to improve user-perceived performance and balance the load across servers~\cite{hao:sec18}.
How much this proximity matters depends on the CDN redirection
method. While most of the time DNS-based redirection performs well, the ``client-LDNS mismatch'' problem hurts performance
by directing users to distant servers. EDNS Client Subnet (ECS) aims to address this problem by allowing a client's resolver to forward a
portion of the client’s IP address to an authoritative DNS server. While a number of large-scale DNS services and some CDNs have adopted ECS,
prior work~\cite{colder:edns-adoption} shows that adoption of ECS across the Internet is minuscule and even large providers, such as Cloudflare,
have stopped supporting it due to privacy concerns~\cite{cloudflare:ecs}.
To illustrate the potential impact of client-LDNS mismatch, we select top50 Alexa sites\footnote{We discuss the use of Alexa regional rankings in
Sec.\ref{sec:limitation}.} in the US and resolve the resources of these sites, hosted on the two most popular CDNs -- Akamai and Amazon CloudFront,
using the RIPE Atlas DNS API. We use probes in the US and the RIPE probes' resolvers as local resolvers, and use distant regional resolvers in
Argentina and India. We measure the median round-trip time (RTT) of five runs to the replicas assigned using the local and distant resolvers. If
a CDN does not rely on DNS for redirection, one would expect no RTT differences between the replicas assigned when using the different resolvers.
If, on the other hand, a CDN does rely on DNS-based redirection, there should be larger RTTs to the replicas assigned with increasingly distant
resolvers, even when the client location does not change.
\begin{figure}
\centering
\subfigure[Akamai CDN\label{fig:Akamai_DNS_localization}]{\includegraphics[width=0.8\columnwidth]{figs/Akamai_DNS_localization.png}}
\caption{\textit{Client-LDNS mismatch problem and CDN redirection.} Latency to assigned replica servers for Akamai and Amazon CloudFront for resources in top50 US websites. The client in the US is using local DNS resolvers (cyan) and distant resolvers in Argentina and India.}
\label{fig:DNS_localization}
\end{figure}
Figure~\ref{fig:DNS_localization} clearly illustrates the client-LDNS mismatch problem with the two most popular CDNs (full set of graphs in \S\ref{sec:appendix}). The RTTs to replicas assigned when using a local resolver are consistently lower -- up to an order of magnitude lower at the median -- than those to replicas assigned when using \textit{any} of the distant resolvers (e.g., 9ms for the local resolver and 160ms with a resolver in India in the case of Akamai), and the overall distributions are ordered matching the relative geographic distances of the replicas.
\section{End-to-end Evaluation}
\label{sec:evaluation}
In this section, we present results from an end-to-end evaluation of \'Onoma, combining all the design features discussed. In our evaluation we compare \'Onoma's performance with that of different third-party DNS services, across locales and CDNs. We use TTFB as the performance metric for comparison and the resources hosted on the top50 regional sites as a proxy for users' QoE in those regions.
The reported results were collected by the measurement module that is part of \'Onoma{}, which runs all the experiments in the background (\S\ref{sec:implementation}). We set up the evaluation measurements in the selected locations using a VPN service, with different DNS settings including public DoH resolvers, regional resolvers and \'Onoma{}. We focus on three CDN services -- Akamai, Amazon and Fastly -- and the resources they host.
For each DNS setting and (CDN, country) combination, the measurement module loads the set of resources with Google Lighthouse. It records the minimum TTFB from three repeated runs, finds the best performing resolver, in the median case, and calculates the percentage difference in TTFB between each DNS service and the identified best DNS service for a given CDN and locale.
Figure~\ref{fig:ttfb_wOnoma1} presents boxplots of the performance of different DNS services and \'Onoma{} (full set of graphs in \S\ref{sec:appendix}). The plots are relative to the best performing resolver for a given CDN and locale (e.g., Cloudflare for Akamai in the US). Boxplots clearly show the median and the first and third quartiles, as well as the minimum and maximum values (upper and lower whiskers) of the collected TTFB data. The outliers of the data are shown by circles above and below the whiskers.
The figure shows that the performance of \'Onoma{} is either the best, or comparable to the average performance of the public DNS services. \'Onoma{} gives a consistently lower inter-quartile range (third quartile minus first quartile), indicated by the colored boxes in the plot (blue boxes for other providers and green for \'Onoma{}), for most cases. The median performance of \'Onoma{} is better than the best service for most (CDN, country) combinations, or at least comparable to all the DNS services tested for that locale. We see significant performance improvement with content hosted on Amazon in Germany and with content hosted on Akamai in India and the US. For instance, the inter-quartile range (IQR) of \'Onoma{} is 18.4\% in the US, for Akamai's content, but for other DNS services it ranges between 18.7\% and 24.8\%. For Akamai's content in India, \'Onoma{} gives an IQR of 1.7\% and for other DNS services the IQR ranges between 1.7\% and 2.5\%. Note that \'Onoma{} attains this for any combination of CDN and locale and with high values of k-anonymity (\S\ref{subsection:privacy}).
\begin{figure*}[ht]
\centering
\subfigure[AR,Akamai\label{fig:AR_akamai_ttfb}]{\includegraphics[width=0.27\textwidth]{figs/AR_Akamai_ttfb_onoma_boxplots.png}}
\subfigure[IN,Akamai\label{fig:IN_akamai_ttfb}]{\includegraphics[width=0.27\textwidth]{figs/IN_Akamai_ttfb_onoma_boxplots.png}}
\subfigure[US,Akamai\label{fig:US_akamai_ttfb}]{\includegraphics[width=0.27\textwidth]{figs/US_Akamai_ttfb_onoma_boxplots.png}}
\newline
\subfigure[DE,Akamai\label{fig:DE_akamai_ttfb}]{\includegraphics[width=0.27\textwidth]{figs/DE_Akamai_ttfb_onoma_boxplots.png}}
\subfigure[DE,Amazon\label{fig:DE_amazon_ttfb}]{\includegraphics[width=0.27\textwidth]{figs/DE_Amazon_ttfb_onoma_boxplots.png}}
\subfigure[DE,Fastly\label{fig:DE_fastly_ttfb}]{\includegraphics[width=0.27\textwidth]{figs/DE_Fastly_ttfb_onoma_boxplots.png}}
\newline
\caption{Performance of \'Onoma~and of individual, public DoH and DNS resolvers per country, relative to the best DNS service across CDNs and countries. The top set focuses on one CDN, Akamai, across countries while the bottom set focuses on different CDNs in Germany. The graph for Akamai in Germany is included only once. \'Onoma~offers consistently better performance across CDN and country combinations.}
\label{fig:ttfb_wOnoma1}
\end{figure*}
\section{\'Onoma's Motivations and Goals}
\label{sec:goals}
Concerns about user privacy and QoE have served as motivation for a number of alternative DNS techniques and third-party services~\cite{schmitt:odns,odns:cloudflare,kinnear:adaptivedns,google:publicdns,dnscrypt:www}. While
offering some valuable features, these DNS variants are operated by a handful of providers such as Google, Cloudflare or
IBM, strengthening a concerning trend toward DNS centralization and its implications~\cite{ietf:consolidation,huston:centralization,wang:consolidation}. We briefly discuss these implications, as they motivate the design of \'Onoma,
and present its derived goals.
For starters, DNS-over-Encryption services are not a privacy panacea. While they protect user data from ISPs and
man-in-the-middle attacks, they expose all information to the recursive DNS providers. Prior studies~\cite{olejnik:browsinghist, bird:browsinghist} have analyzed the uniqueness of users' web browsing histories, a subset of their DNS request streams,
and shown that users' browsing profiles are both highly distinctive and stable (and thus a viable tracking vector).
Centralized DNS is also an easier target for court orders to release data in bulk~\cite{valentino:subpoenas}, and no
privacy policy agreement by a DNS provider can escape that. Recent work on Oblivious DNS addresses some of these concerns
by preventing queried domains from being associated with the client identity~\cite{schmitt:odns,odns:cloudflare}. However, Oblivious DNS also increases the cost of DNS resolutions and contributes to the client-LDNS mismatch problem impacting users' web QoE.
In addition, consolidation may also result in lower infrastructure resilience and increases the risk of a captive market~\cite{ietf:consolidation}. Using measurements from 100,000 users of the OONI app, Radu et al.~\cite{radu:centralization}
show that Google and Cloudflare control 49.7\% of the resolver market. Part of the Internet's inherent resilience lies
in its diversity (i.e., different networks, peerings, code base and servers), and centralization leads to a clustering
of multiple such risks, at once increasing the probability of failures \textit{and} the potential impact of any single failure~\cite{cloudflare:down,abhishta:dyn}.
Finally, while many of these infrastructures are pervasive~\cite{calder:mapping}, for specific locales a particular
third-party DNS service may not have nearby servers to offer~\cite{otto2012content}, negatively impacting both DNS
resolution times and web QoE for users of the many CDNs that continue to rely on DNS for replica selection. Poor
user-mappings can lead to increased delays and worsening QoE, lower engagement~\cite{kirshnan:quasi} and revenue
loss~\cite{whitenton:needspeed}.
\textbf{\'Onoma's Goals.} The emergence of a handful of third-party recursive DNS services is a natural result of a maturing Internet market and the benefits of economies of scale. \'Onoma's design aims to let users take advantage of third-party DNS services while avoiding the potential costs. Specifically, \'Onoma~aims to improve privacy by avoiding re-identification of users based on their DNS request streams, and achieve performance comparable to that of the best performing services in the user's locale, independently of any deployment of any specific DNS service, while avoiding reliance on any single DNS service. In addition, \'Onoma{} should be an easy-to-install, readily-deployable solution that bypasses the need for agreements and coordination between providers (CDNs, DNS or ISPs).
\section{Introduction}
\label{sec:introduction}
The Domain Name System (DNS) is a key component of the Internet. Originally designed to translate domain names to
IP addresses~\cite{mockapetris:rfc1034}, it has evolved into a complex infrastructure providing a platform for a
wide range of applications.
Today DNS is a key determinant, directly and indirectly, of users' quality of experience (QoE) and privy to their
tastes, preferences, and even the devices they own. It directly determines user performance as, for instance,
accessing any website requires tens of DNS resolutions~\cite{butkiweicz:complexity, bottger:dohqoe,bozkurt:slow}. %
Indirectly, a user's specific DNS resolver determines their QoE as many content delivery networks
(CDNs) continue to rely on DNS for replica selection. %
As we show in Sec.~\ref{sec:background}, despite some adoption of anycast~\cite{calder:anycastcdn,alzoubi:ayncastcdn}
and EDNS Client Subnet (ECS)~\cite{chen:akamai}, and the promise of CDN-ISP collaborations~\cite{pujol:steering,poese:padis,xie:p4p,alimi:alto},
many CDNs continue to rely on the assumption that the location of a client's resolver provides a good proxy for the client's own location~\cite{hounsel:ddns,colder:edns-adoption}.
Beyond its critical role in the user's experience, clear text DNS requests can have a significant impact on privacy and security
and, in certain parts of the world, on human rights~\cite{dnscensor:venezuela, dnscensor:iran, dnscensor:ccr12}. %
The set of DNS requests issued by a user reveals much about their tastes, preferences, or the devices they own and how they used them~\cite{bortzmeyer:dnsprivacy,gchq,morecowbell,ghuston:dnsprivacy}, even when using VPN or Tor~\cite{greschbach:dnstor}. %
Prior work~\cite{olejnik:browsinghist,bird:browsinghist} shows that browsing profiles, a subset of users' DNS request streams,
are both highly distinctive (e.g., 98-99\% of browsing profiles are unique) and stable (e.g., re-identifiability rate as high
as $\sim$80\%), suggesting the viability of browsing profiles as user identifiers.
Concerns about user privacy and QoE have served as motivation for a number of alternative DNS techniques and third-party services,
from public DNS to encrypted and Oblivious DNS~\cite{schmitt:odns,odns:cloudflare,kinnear:adaptivedns}. %
Google Public DNS was announced in late 2009 promising better performance and higher reliability~\cite{google:publicdns},
while DNS over HTTPS (DoH), DNS over TLS (DoT), DNSCrypt, and Oblivious DNS are some of the latest proposals to make DNS more secure~\cite{liu:doe,odns:cloudflare,dnscrypt:www}.
While offering some valuable features, these DNS variants are operated by a handful of providers such as Google, Cloudflare or
IBM, strengthening a concerning trend toward DNS centralization and its implications~\cite{ietf:consolidation,huston:centralization, wang:consolidation}.
First, the use of encrypted DNS, while avoiding man-in-the-middle attacks, does not necessarily improve client's privacy as the DNS provider has access to the highly-unique, unencrypted request streams of millions of clients and is a clear target for a court order to release data in bulk~\cite{valentino:subpoenas}.
Second, while centralized DNS could offer better DNS response times, it may result in worse Web QoE as it breaks CDNs' assumption
about the proximity between users and their resolvers~\cite{otto2012content}. Our evaluation (\S\ref{sec:system_design}) shows that, despite the large-scale deployment of many third-party DNS services, the proximity of their resolvers to users varies widely across locales~\cite{colder:edns-adoption}. While the EDNS Client Subnet extension is meant to address this problem, prior work~\cite{colder:edns-adoption} shows that adoption of ECS across the Internet is minuscule and even large providers, such as Cloudflare, have stopped supporting it due to privacy concerns~\cite{cloudflare:ecs}.\footnote{Cloudflare's CEO commented that his company is instead exploring an alternative to the EDNS Client Subnet extension for user geolocation~\cite{cloudflare:ecs}.}
Last, part of the Internet's inherent resilience lies in its diversity; centralization leads to a clustering of multiple risks, from technical to economic ones~\cite{huston:centralization}, while increasing the potential impact of any single failure~\cite{cloudflare:down}.
\textit{The goal of our work is to let users take advantage of third-party DNS services without sacrificing privacy or performance. Our approach follows Wheeler's advice, adding another level of indirection with an end-system DNS resolver.}\footnote{``All problems in computer science can be solved by another level of indirection'', David Wheeler}
We present {\it \'Onoma}, an end-system resolver that:
\begin{enumerate}
\item enhances privacy by sharding users' requests across \textit{multiple} resolvers~\cite{hoang:sharding,hounsel:ddns,jari:sharding} (\S\ref{subsubsection:domainsharding}),
\item reduces re-identifiability by adaptively inserting popular requests into a user's DNS request stream based on the uniqueness of the requested domains (\S\ref{subsubsection:insertion}),
\item improves DNS performance, independently of a user's location and the specific DNS or CDN deployment, by running resolution races~\cite{vulimiri:racing} among resolvers (\S\ref{subsubsection:racing}; a minimal sketch of sharding and racing follows this list), and
\item addresses the QoE impact of the client-LDNS mismatch problem~\cite{otto2012content} (\S\ref{subsubsection:directresolution}).
\end{enumerate}
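To make the sharding and racing mechanisms concrete, the following minimal Python sketch illustrates the idea; it is not \'Onoma's Go implementation, and the resolver list and the \texttt{resolve\_with} stub are placeholder assumptions.
\begin{verbatim}
# Illustrative sketch only (not Onoma's Go implementation).
# RESOLVERS and resolve_with() are placeholders.
import hashlib
from concurrent.futures import ThreadPoolExecutor, as_completed

RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]  # assumed resolver set

def shard_resolver(domain):
    # Sharding: queries for the same domain always go to the same
    # resolver, so no single resolver sees the full request stream.
    h = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
    return RESOLVERS[h % len(RESOLVERS)]

def resolve_with(resolver, domain):
    # Placeholder: send a DNS query for `domain` to `resolver`
    # and return the answer.
    raise NotImplementedError

def race(domain, candidates):
    # Racing: query several resolvers in parallel and keep the
    # first answer that arrives.
    with ThreadPoolExecutor(max_workers=len(candidates)) as pool:
        futures = [pool.submit(resolve_with, r, domain)
                   for r in candidates]
        for fut in as_completed(futures):
            return fut.result()
\end{verbatim}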
We evaluate \'Onoma~across geographic locales and with different DNS services, content providers, and CDNs and a range of settings for sharding, insertion and racing. %
Our evaluation results show that \'Onoma~can take advantage of third-party DNS while avoiding user re-identification -- 100\% of users cannot be re-identified (are k-anonymous) with low insertion and with requests sharded across 8 resolvers (\S\ref{subsection:privacy}). Our results also demonstrate that, across a number of DNS resolvers and CDNs, \'Onoma~can improve average resolution time by 38.8\% by racing just two resolvers, and can yield up to $\sim$10\% shorter Time-To-First-Byte (\S\ref{subsec:performance}). More importantly, the average performance of \'Onoma~is better than or equal to that of the best service for most CDNs and individual DNS services at any locale (\S\ref{sec:evaluation}).
While there may not be an ideal service for all clients in \textit{all} places, \'Onoma~dynamically selects the \textit{best} service for \textit{any} given location.
\section{Limitations and Future Work}
\label{sec:limitation}
In this section we discuss limitations of our approach and its evaluation. For starters, we focus on challenging attempts at user re-identification by third-party DNS resolvers, motivated by the findings of Olejnik et al.~\cite{olejnik:browsinghist} and Bird et al.~\cite{bird:browsinghist}.
Due to the privileged position of DNS resolvers in the resolution path, the best privacy a user could expect is in terms of k-anonymity. We adopt the user identification approach and follow the evaluation model and settings of prior work~\cite{olejnik:browsinghist,bird:browsinghist}. We do not claim this to be the only approach to (re)identify users, but we posit that the combination of popular domain insertion and sharding will complicate the task of user identification by most third-party resolvers. The opportunistic use of Oblivious DNS~\cite{odns:cloudflare,schmitt:odns} for resolving truly distinctive or otherwise problematic domains further challenges re-identification while limiting the performance impact of the technique.
In this work we opted for Alexa's regional ranking, as it offers country-specific rankings available to the community and includes regional domain aliases in its listings, for instance yahoo.com.jp and yahoo.com. Other popular rankings, such as Tranco, compile their regional rankings by intersecting a global ranking with the domains appearing in the country-specific Chrome User Experience Report list. These rankings, therefore, miss all region-specific domain aliases. While we are exploring the use of similarweb's~\cite{similarweb} regional ranking for future work, we note that our analysis and evaluation of \'Onoma~are largely independent of specific website rankings.
Finally, we also note that the dataset used for our evaluation, collected from our university network for a two-week period, can be biased both in terms of the population and the time window over which it was collected. In future work, we are exploring ways to expand our evaluation dataset.
\section{Implementation}
\label{sec:implementation}
We have implemented \'Onoma{} as an end-system resolver running on users' machines. \'Onoma{} is implemented in 3,510 lines of Go code, excluding comments, and is designed as a background service. It is readily deployable and easy to install on any operating system, and it requires neither modification of the user's operating system nor coordination with existing CDN or DNS services. The implementation includes a light-weight, client-side application (under 500MB) hosting a simple user interface that lets users install, start and stop the service and modify default configuration settings, such as the set of DNS resolvers used for sharding and racing and the request insertion factor. \'Onoma{} maintains a set of DNS services which includes popular public encrypted services -- Google, Cloudflare and Quad9 -- as well as a set of regional public DNS services available at the client's location.
\'Onoma{} also implements a measurement module to perform experimentation on the client's side. \'Onoma{} uses the collected results to identify optimal default values for sharding, racing and insertion, as well as to dynamically select the best DNS resolvers for the user's locale.
\section{Related Work}
\label{sec:related_work}
Our work builds on an extensive body of research on characterizing and improving DNS performance, privacy and its impact on users' QoE. For over two decades since Mockapetris~\cite{mockapetris:rfc1034}, DNS has been the subject of several measurement studies in the wild~\cite{ager:dns, huitema:dnsperf, wills:dnsweb, jung:dnscaching, liston:dnsperf, mao:proximity, shaikh:selection, otto2012content}. Much of this work has focused on caching, performance and scalability across locales (e.g., \cite{jung:dnscaching, liston:dnsperf}), and on the interaction between DNS and content distribution (e.g., \cite{mao:proximity,shaikh:selection,otto2012content}).
The appearance of open DNS resolvers, offered by companies such as Google and OpenDNS, and the follow-up EDNS-client-subnet DNS extension (ECS)~\cite{contavalli:rfc7871}, motivated prior work to understand these services' relative performance~\cite{ager:dns} and their potential impact on QoE~\cite{otto2012content,pujol:steering,fan:ecs}. These and other related efforts have shown that user-mapping accuracy is worsening, leading to increased delays, lower engagement~\cite{kirshnan:quasi} and revenue loss~\cite{whitenton:needspeed}.
Despite some adoption of anycast~\cite{calder:anycastcdn,alzoubi:ayncastcdn} and the promise of CDN-ISP collaborations~\cite{pujol:steering,poese:padis,xie:p4p,alimi:alto}, many CDNs continue to rely on DNS-based server redirection. Otto et al.~\cite{otto2012content} proposed \textit{direct resolution} as an alternative end-user approach to address the impact of remote DNS resolvers on Web QoE despite the limited adoption of ECS. Direct resolution leverages the recursive DNS server to obtain the authoritative server for CDN-hosted content, but has the client itself do the final resolution by directly contacting the authoritative server operated by the CDN. This yields better localization and, thus, better QoE, without additional privacy costs~\cite{otto2012namehelp}. \'Onoma~adopts direct resolution, including caching of the CDN-run authoritative name server to avoid additional resolution delay (Sec.~\ref{sec:system_design}).
Beyond performance, increasing concerns about user privacy have served as motivation for new DNS techniques and services. Recent work evaluates the relative performance and QoE impact of DNS-over-Encryption variants~\cite{bottger:dohqoe,hounsel:enccost}, and assesses their implementations, deployment configurations~\cite{liu:doe,deccio:dnsprivacy} and policy implications~\cite{dns:dprive,edns:policy}. Deccio et al.~\cite{deccio:dnsprivacy} also show that only 0.15\% of open resolvers and a negligible number of authoritative servers support DNS privacy, and that those that do come from a handful of popular DNS providers such as Cloudflare, Facebook, Google, and Quad9.
Although the benefits provided by DNS-over-Encryption against certain threats are clear, there are significant privacy concerns associated with the need to trust the DNS operator with users' entire request traces. Schmitt et al.~\cite{schmitt:odns} propose \textit{Oblivious DNS} to address centralization by obfuscating the queries that the recursive resolver sees, adding their own authoritative ODNS resolver. A few closely related efforts~\cite{hoang:sharding,hounsel:ddns,jari:sharding} have proposed and evaluated the use of DNS request sharding to mitigate the privacy concerns of exposing all DNS resolutions to a single DNS-over-Encryption service, potentially grouping requests per domain to avoid distributing similar names to different resolvers~\cite{jari:sharding}. \'Onoma{} builds on this idea (Sec.~\ref{sec:goals}) and shows the degree of additional privacy it provides by avoiding DNS-based user re-identification through domain-specific sharding and by adaptively inserting popular requests into a user's DNS request stream based on the uniqueness of the requested domains. \'Onoma{} also avoids the potential performance costs of sharding by dynamically selecting the DNS services to use, depending on the user's location, and by racing resolvers~\cite{vulimiri:racing}. Other recent works have questioned the actual privacy benefits of encrypted DNS and proposed solutions to address some of these concerns (e.g.,~\cite{siby:encrypted,bushart:padding}), which could be easily adopted by \'Onoma{}.
The design of \'Onoma~leverages this extensive body of work to introduce, to the best of our knowledge, the first end-user DNS resolver that takes advantage of the privacy and performance benefits of centralized DNS services while avoiding the associated costs.
\section{Appendix}
\label{sec:appendix}
In this section we include the remaining graphs from our motivation, system design and evaluation of \'Onoma.
\subsection{The importance of DNS for CDN replica selection}
Figure~\ref{fig:DNS_localization_appendix} shows the remaining graphs to illustrate the importance of DNS for redirection. The client in the US is using a local DNS resolver (cyan) and distant ones in Argentina and India.
\begin{figure}[ht!]
\centering
\subfigure[Amazon \label{fig:Amazon_DNS_localization_a}]{\includegraphics[width=0.48\columnwidth]{figs/Amazon_DNS_localization.png}}
\subfigure[Fastly\label{fig:Fastly_DNS_localization_a}]{\includegraphics[width=0.48\columnwidth]{figs/Fastly_DNS_localization.png}}
\caption{Latency to assigned replica servers (/24) for Amazon and Fastly for resources of the top 50 US websites.}
\label{fig:DNS_localization_appendix}
\end{figure}
\begin{figure}
\centering
\subfigure[Akamai\label{fig:Akamai_DNS_localization_DE}]{\includegraphics[width=0.2\textwidth]{figs/Akamai_DNS_localization_DE.png}}
\subfigure[Amazon\label{fig:Amazon_DNS_localization_DE}]{\includegraphics[width=0.2\textwidth]{figs/Amazon_DNS_localization_DE.png}}
\subfigure[Fastly\label{fig:Fastly_DNS_localization_DE}]{\includegraphics[width=0.2\textwidth]{figs/Fastly_DNS_localization_DE.png}}
\caption{Latency to assigned replica servers (/24) for Akamai, Amazon CloudFront and Fastly for resources of the top 50 German websites. The client in Germany uses a local recursive DNS resolver (cyan) and distant ones in Sweden and India.}
\label{fig:DNS_localization_DE}
\end{figure}
Figure~\ref{fig:DNS_localization_DE} shows the full set of the graphs to illustrate the importance of DNS for replica selection for the client in Germany using a local DNS resolver (cyan) and distant ones in Sweden and India.
\subsection{Performance and Third-party DNS}
Figure~\ref{fig:resolutiontime_a} shows the remaining graphs illustrating how the resolution times of different resolvers vary across locales.
\begin{figure}
\centering
\subfigure[AR\label{fig:ARResolution}]{\includegraphics[width=0.48\columnwidth]{figs/AR_Resolution.png}}
\subfigure[DE\label{fig:DEResolution}]{\includegraphics[width=0.48\columnwidth]{figs/DE_Resolution.png}}
\caption{DNS resolution times of individual, public DoH and DNS providers in Argentina and Germany, relative to the best performing DNS service in each country.}
\label{fig:resolutiontime_a}
\end{figure}
\subsection{Sharding Performance and Third-party DNS}
Figure~\ref{fig:resolutiontime_sharding_a} shows the remaining graphs illustrating how the resolution times of different resolvers and sharding configurations vary across locales.
\begin{figure*}
\centering
\subfigure[US\label{fig:USResolution_sharding_a}]{\includegraphics[width=0.48\columnwidth]{figs/US_Resolution_sharding.png}}
\subfigure[AR\label{fig:ARResolution_sharding_a}]{\includegraphics[width=0.48\columnwidth]{figs/AR_Resolution_sharding.png}}
\subfigure[IN\label{fig:INResolution_sharding_a}]{\includegraphics[width=0.48\columnwidth]{figs/IN_Resolution_sharding.png}}
\caption{DNS resolution times of individual, public DoH and DNS providers, and of sharding, in Argentina, India and the US, relative to the best performing DNS service in each country.}
\label{fig:resolutiontime_sharding_a}
\end{figure*}
\subsection{The Impact of CDN and DNS on Web QoE across locales}
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{figs/DNSRankingTTFBFastly.png}
\caption{DNS resolvers ranking by Time-to-First-Byte for resources hosted on Fastly across different countries. }
\label{fig:ttfbRankinga}
\end{figure}
Figure~\ref{fig:ttfbRankinga} shows the graph for the ranking of public and regional DNS resolvers by TTFB for resources hosted on Fastly in different locales.
\begin{figure}
\centering
\subfigure[US,Amazon\label{fig:US_OnomavsProxyAmazon}]{\includegraphics[width=0.2\textwidth]{figs/US_OnomavsProxyAmazon.png}}
\subfigure[US,Fastly\label{fig:US_OnomavsProxyFastly}]{\includegraphics[width=0.2\textwidth]{figs/US_OnomavsProxyFastly.png}}
\subfigure[AR,Amazon\label{fig:AR_OnomavsProxyAmazon}]{\includegraphics[width=0.2\textwidth]{figs/AR_OnomavsProxyAmazon.png}}
\subfigure[AR,Fastly\label{fig:AR_OnomavsProxyFastly}]{\includegraphics[width=0.2\textwidth]{figs/AR_OnomavsProxyFastly.png}}
\caption{Percentage difference in Time-to-First-Byte of sharding and \'Onoma{} relative to the best DNS service across the US and Argentina. For every combination of CDN and country shown in the graph, \'Onoma{} performs better than sharding.}
\label{fig:OnomavsProxy}
\end{figure}
\subsection{End-to-end Evaluation of \'Onoma{}}
Figure~\ref{fig:ttfb_wOnomaappendix} shows the remaining graphs for the end-to-end evaluation of \'Onoma{}, comparing \'Onoma{} to other DNS providers for content hosted on different CDNs across locales.
\begin{figure}
\centering
\subfigure[AR,Amazon\label{fig:AR_amazon_ttfb}]{\includegraphics[width=0.2\textwidth]{figs/AR_Amazon_ttfb_onoma_boxplots.png}}
\subfigure[AR,Fastly\label{fig:AR_fastly_ttfb}]{\includegraphics[width=0.2\textwidth]{figs/AR_Fastly_ttfb_onoma_boxplots.png}}
\subfigure[IN,Amazon\label{fig:IN_amazon_ttfb}]{\includegraphics[width=0.2\textwidth]{figs/IN_Amazon_ttfb_onoma_boxplots.png}}
\subfigure[IN,Fastly\label{fig:IN_fastly_ttfb}]{\includegraphics[width=0.2\textwidth]{figs/IN_Fastly_ttfb_onoma_boxplots.png}}
\subfigure[US,Amazon\label{fig:US_amazon_ttfb}]{\includegraphics[width=0.2\textwidth]{figs/US_Amazon_ttfb_onoma_boxplots.png}}
\subfigure[US,Fastly\label{fig:US_fastly_ttfb}]{\includegraphics[width=0.2\textwidth]{figs/US_Fastly_ttfb_onoma_boxplots.png}}
\caption{Performance of individual, public DoH and DNS resolvers in each country and using \'Onoma{}, relative to the best DNS service across different CDNs and countries.}
\label{fig:ttfb_wOnomaappendix}
\end{figure}
\section{Methodology}\label{sec:Methodology}
\subsection{Preliminaries}
\subsubsection{Problem statement}
The goal is to develop an RL-based approach that provides a high rate of successful UAV landings on a horizontally moving platform. A landing trial is considered successful if the UAV touches down on the surface of the moving platform. The approach should require less training time than the baseline method and generalize well enough to be deployed on real hardware. Furthermore, the required hyperparameters should be determined in an interpretable way.
\subsubsection{Base Notations} \label{sec:prelim_notations}
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.35]{./pics/multi_rotor_drawing_small.pdf}
\caption{Coordinate frames (blue, $e$: earth frame, $mp$: body-fixed frame of moving platform, $mr$: body-fixed frame of multi-rotor UAV, $s$: stability frame), forces (red) and attitude angle (green) associated with 1D motion in longitudinal direction.}
\label{fig:model}
\end{figure}
We first define a fly zone $\mathcal{F}=[-x_{max},x_{max}]\times [-y_{max},y_{max}] \times [0,z_{max}]\subset\mathbb{R}^3[\si{m}]$ in which the motion of the multi-rotor UAV $mr$ and of the moving platform $mp$ is considered, as illustrated by Fig.~\ref{fig:model}. Furthermore, for the goal of landing on the moving platform, the RL agent controlling the multi-rotor vehicle has to consider only the relative translational and rotational motion. We therefore express the relative dynamics and kinematics in the stability frame $s$, which allows us to formulate the RL problem from the UAV's, and thus the agent's, point of view.
For this purpose we first denote the Euler angles roll, pitch and yaw in the stability frame with $_s\boldsymbol\varphi= (\phi,\theta,\psi)^T$. Each landing trial begins with the UAV in hover state, leading to the following initial conditions for the rotational movement: $ {}_s\dot{\boldsymbol\varphi}_{mr,0} = {}_s\dot{\boldsymbol\varphi}_{mp,0} = \mathbf{0} \si{rad/s} $, $ {}_s\boldsymbol\varphi_{mr,0} = {}_s\boldsymbol\varphi_{mp,0} = \mathbf{0} \si{rad}$.
The initial conditions for the multi-rotor vehicle's translational motion are expressed in the earth-fixed frame $e$: $_e\dot{\mathbf{r}}_{mr,0} =\mathbf{0}\si{m/s}$ and $_e\mathbf{r}_{mr,0} \in \mathcal{F}$. Transforming the translational initial conditions of both the multi-rotor vehicle and the moving platform into the stability frame allows us to fully express the relative motion as the second-order differential equations
\begin{small}
\begin{equation}
_{s}\ddot{\boldsymbol\varphi}_{rel} = \mathbf{0} - {}_s\ddot{\boldsymbol\varphi}_{mr} ~~~ \textrm{and} ~~~ {}_{s}\ddot{\mathbf{r}}_{rel} = {}_s\ddot{\mathbf{r}}_{mp} - {}_s\ddot{\mathbf{r}}_{mr}.
\label{eq:non_linear_rel_motion}
\end{equation}
\end{small}
\vspace{-11pt}
For the remainder of this work we specify the following values that will serve as continuous observations of the environment (see Sec.~\ref{sec:discrete_state_space} and Fig.~\ref{fig:sim_framework})
\vspace{-5pt}
\begin{small}
\begin{align}
\mathbf{p}_c &= [p_{c,x},p_{c,y},p_{c,z}]^T = {}_s\mathbf{r}_{rel} \label{eq:cont_obs_p_cx}\\
\mathbf{v}_c &= [v_{c,x},v_{c,y},v_{c,z}]^T = {}_s\dot{\mathbf{r}}_{rel}\\
\mathbf{a}_c &= [a_{c,x},a_{c,y},a_{c,z}]^T = {}_s\ddot{\mathbf{r}}_{rel}\\
\boldsymbol\varphi_c &= [\phi_{rel},\theta_{rel},\psi_{rel}]^T = {}_{s}\boldsymbol\varphi_{rel}.\label{eq:cont_obs_phi_rel}
\end{align}
\end{small}
\vspace{-11pt}
$\mathbf{p}_c$ denotes the relative position, $\mathbf{v}_c$ the relative velocity, $\mathbf{a}_c$ the relative acceleration and $\boldsymbol\varphi_c$ the relative orientation between vehicle and platform, expressed in the stability frame.
\subsection{Motion of the Landing Platform}
We introduce the rectilinear periodic movement (RPM) of the platform that is applied during training. With the initial conditions $_e\dot{\mathbf{r}}_{mp,0} = \mathbf{0}\si{m/s}$ and $_e\mathbf{r}_{mp,0} = \mathbf{0}\si{m}$, the translational motion of the platform is expressed as a second order differential equation in the earth-fixed frame \textit{e}.
\begin{small}
\begin{equation}
\begin{split}
_e\ddot{\mathbf{r}}_{mp} &=-\left(v_{mp}^2/r_{mp}\right)\left[\sin(\omega_{mp} t),0,0\right]^T , \text{ } \omega_{mp} = v_{mp}/r_{mp}\\
\end{split}
\label{eq:platform_acceleration_rpm}
\end{equation}
\end{small}
\vspace{-10pt}
\noindent where $r_{mp}$ denotes the maximum amplitude of the trajectory, $v_{mp}$ the maximum platform velocity and $\omega_{mp}$ the angular frequency of the rectilinear movement. The platform is not subjected to any rotational movement. As a consequence, the maximum acceleration of the platform that is required as part of the hyperparameter determination in Sec.~\ref{sec:hyperparemter_estimation} is
\begin{small}
\begin{equation}
a_{mp,max} = v_{mp}^2/r_{mp}.
\label{eq:a_mpmax}
\end{equation}
\end{small}
\vspace{-15pt}
\subsection{Basis Learning Task} \label{sec:basis_learning_task}
\subsubsection{Dynamic Decoupling} \label{sec:basis_learning_task_preliminaries}
The nonlinear dynamics of a multi-rotor vehicle such as a quadcopter can be greatly simplified under the following assumptions: i) linearization around hover flight, ii) small angles, iii) a rigid, symmetric vehicle body, and iv) no aerodynamic effects \cite{Wang2016}.
Under these conditions, the axes of motion are decoupled. Thus, four individual controllers can be used to independently control the movement in longitudinal, lateral, vertical direction and around the vertical axis. For this purpose, low-level controllers track a setpoint $\theta_{ref}$ for the pitch angle (longitudinal motion), $\phi_{ref}$ for the roll angle (lateral motion), $T_{ref}$ for the total thrust (vertical motion) and $\psi_{rel}$ for the yaw angle (rotation around vertical axis). We leverage this fact in our controller structure as is further illustrated in Sec.~\ref{sec:Implementation}.
Furthermore, this enables us to introduce a learning task for longitudinal 1D motion only. Its purpose is to learn a policy for the pitch angle setpoint $\theta_{ref}$ that induces a 1D movement in the longitudinal direction, allowing the vehicle to center over the platform, which also moves in the longitudinal direction. After training, we then apply a second instance of the trained agent to control the lateral motion, where it produces setpoint values for the roll angle $\phi_{ref}$. This way, full 2D motion capability is achieved. We use the basis learning task to compose a curriculum later in Sec.~\ref{sec:curriculum_discretization}. PID controllers ensure that $\phi_{rel} =0\si{rad}$, $\psi_{rel} = 0\si{rad}$ and $ v_{z_{rel}} = \text{const} < 0\si{m/s}$ during training.
\subsubsection{Markov Decision Process}
For the RL task formulation we consider the finite, discrete Markov Decision Process \cite{Sutton2015}.
The goal of the agent is to learn the optimal action value function $Q^*$ to obtain the optimal policy $\pi^* = \argmax_a Q^*(s,a)$, maximizing the sum of discounted rewards $\sum^{T-t-1}_{k = 0}\gamma^{k}r_{t+k+1}$.
\subsubsection{Discrete Action Space}
We choose a discrete action space comprising three actions, namely \textit{increasing pitch}, \textit{decreasing pitch} and \textit{do nothing}, denoted by
\begin{small}
\begin{align}
\mathbb{A_d} = \left[ \Delta\theta^{+},\Delta\theta^{-}, -\right].
\label{eq:discrete_action_space}
\end{align}
\end{small}
\vspace{-10pt}
The pitch angle increment is defined by
\begin{small}
\begin{equation}
\Delta\theta = \frac{\theta_{max}}{n_{\theta}}, n_{\theta}\in \mathbb{N}^+,
\label{eq:normalized_pitch_set}
\end{equation}
\end{small}
\noindent where $\theta_{max}$ denotes the maximum pitch angle and $n_{\theta}$ the number of intervals intersecting the range $[0,\theta_{max}]$.
The set of possible pitch angles that can be taken by the multi-rotor vehicle is then
\begin{small}
\begin{align}
\begin{split}
\Theta &= \left\lbrace -\theta_{max} + i_{\theta}\Delta\theta\lvert i_{\theta} \in \left\lbrace 0,\ldots,2n_{\theta}\right\rbrace\right\rbrace,
\end{split}
\end{align}
\end{small}
\vspace{-10pt}
\noindent where $i_{\theta}$ is used to select a specific element.
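For illustration, the set $\Theta$ can be enumerated directly from $\theta_{max}$ and $n_{\theta}$; the following short Python sketch uses illustrative values for both.
\begin{verbatim}
import numpy as np

theta_max = np.deg2rad(21.4)  # illustrative value
n_theta = 3                   # illustrative value

delta_theta = theta_max / n_theta            # pitch increment
Theta = np.array([-theta_max + i * delta_theta
                  for i in range(2 * n_theta + 1)])
# Theta holds 2*n_theta + 1 angles from -theta_max to +theta_max
\end{verbatim}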
\subsubsection{Discrete State Space} \label{sec:discrete_state_space}
For the discrete state space we first scale and clip the continuous observations of the environment that are associated with motion in the longitudinal direction to a value range of $[-1,1]$. For this purpose, we use the function $\text{clip}(x,x_{min},x_{max})$ that clips the value of $x$ to the range of $[x_{min},x_{max}]$.
\begin{small}
\begin{align}
p_x &= \text{clip}(p_{c,x}/p_{max},-1,1) ~~~ v_x = \text{clip}(v_{c,x}/v_{max},-1,1) \nonumber \\
a_x &= \text{clip}(a_{c,x}/a_{max},-1,1),
\label{eq:xyz_c}
\end{align}
\end{small}
\vspace{-10pt}
\noindent where $p_{max},v_{max},a_{max}$ are the values used for the normalization of the observations. The reason for the clipping is that our state space discretization technique, a crucial part of the sequential curriculum, is derived assuming a worst case scenario for the platform movement in which the multi-rotor vehicle is hovering (see Sec.~\ref{sec:curriculum_discretization}) and where scaling an observation with its maximum values, $p_{max}, v_{max}$ and $a_{max}$, would constitute a normalization. However, once the multi-rotor vehicle starts moving too, the scaled observation values could exceed the value range of $[-1,1]$. Clipping allows the application of the discretization technique also for a moving UAV.
Next, we define a general discretization function $d(x,x_1,x_2)$ that can be used to map a continuous observation of the environment to a discrete state value.
\begin{small}
\begin{equation}
d(x,x_1,x_2) =
\begin{cases}
& 0 \text{ if } x\in [-x_2,-x_1)\\
& 1 \text{ if } x\in [-x_1,x_1] \\
& 2 \text{ if } x\in (x_1,x_2]
\end{cases}
\label{eq:mapping}
\end{equation}
\end{small}
We apply \eqref{eq:mapping} to the normalized observations \eqref{eq:xyz_c} to determine the discrete state $s =(p_d, v_d,a_d,i_{\theta}) \in \mathbb{S}$, where $i_{\theta} \in \left\lbrace 0,\ldots,2n_{\theta}\right\rbrace$, $\mathbb{S}= \mathbb{N}_0^{3\times 3\times 3\times (2n_{\theta}+1)}$ and
\begin{small}
\begin{align}
p_d &= d\left(p_x,p_{goal},p_{lim}\right), ~~~~ v_d = d\left(v_x,v_{goal},v_{lim}\right) \label{eq:p_d}\\
\textrm{and} ~~~ a_d &= d\left(a_x,a_{goal},a_{lim}\right).\label{eq:a_d}
\end{align}
\end{small}
\vspace{-10pt}
In \eqref{eq:p_d}-\eqref{eq:a_d}, the normalized values $\pm p_{goal}, \pm v_{goal}, \pm a_{goal}$ define the boundaries of the discrete states the agent should learn to reach, whereas the normalized values $\pm p_{lim}, \pm v_{lim}, \pm a_{lim}$ denote the limits in the observations the agent should learn not to exceed when controlling the multi-rotor vehicle.
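A minimal Python sketch of the clipping, normalization and mapping for the longitudinal observations follows; the normalization constants, goal and limit values, and example observations are assumed placeholders, and observations are taken to lie within $[-x_2,x_2]$ after clipping.
\begin{verbatim}
import numpy as np

def d(x, x1, x2):
    # Mapping of Eq. (eq:mapping); x is assumed to lie in [-x2, x2].
    if -x2 <= x < -x1:
        return 0
    if -x1 <= x <= x1:
        return 1
    return 2   # x in (x1, x2]

# Assumed placeholder constants for one curriculum step.
p_max, v_max, a_max = 4.5, 3.0, 1.28
p_goal, v_goal, a_goal = 0.21, 0.27, 0.14
p_lim, v_lim, a_lim = 0.64, 0.80, 1.0

# Example continuous observations (placeholders).
p_cx, v_cx, a_cx, i_theta = 0.9, -0.3, 0.1, 3

p_x = float(np.clip(p_cx / p_max, -1, 1))
v_x = float(np.clip(v_cx / v_max, -1, 1))
a_x = float(np.clip(a_cx / a_max, -1, 1))

s = (d(p_x, p_goal, p_lim), d(v_x, v_goal, v_lim),
     d(a_x, a_goal, a_lim), i_theta)
\end{verbatim}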
\subsubsection{Goal State}
We define the goal state $s^* \in \mathbb{S}$
\begin{small}
\begin{equation}
s^* = \left\lbrace 1,1,1,*\right\rbrace.
\label{eq:goal_state}
\end{equation}
\end{small}
\vspace{-10pt}
This means the goal state is reached if $-p_{goal}\leq p_x\leq p_{goal}$, $-v_{goal}\leq v_x\leq v_{goal}$ and $-a_{goal}\leq a_x\leq a_{goal}$ regardless of the value of $i_{\theta}$.
\subsubsection{Reward Function}
Our reward function $r_t$, inspired by a shaping approach \cite{Rodriguez-Ramos2018}, is given as
\begin{small}
\begin{align}
r_t = r_p + r_v + r_{\theta} + r_{dur} + r_{term},
\label{eq:reward}
\end{align}
\end{small}
\vspace{-10pt}
\noindent where $r_{term} = r_{suc}$ if $s=s^*$ and $r_{term} = r_{fail}$ if $|p_x|>p_{lim}$. In all other cases, $r_{term} = 0$. $r_{suc}$ and $r_{fail}$ will be defined at the end of this section. Positive rewards $r_p$, $r_v$ and $r_{\theta}$ are given for a reduction of the relative position, relative velocity and the pitch angle, respectively. The negative reward $r_{dur}$ gives the agent an incentive to start moving.
Considering the negative weights $w_{p},w_{v},w_{\theta}, w_{dur}$ and the agent frequency $f_{ag}$, derived in Sec.~\ref{sec:hyperparemter_estimation}, with which actions are drawn from $\mathbb{A_d}$, we define one time step as $\Delta t=1/f_{ag}$ and $r_p,r_v,r_{\theta},r_{dur}$ as
\begin{small}
\begin{align}
&r_p =\text{clip}(w_p(|p_{x,t}|-|p_{x,t-1}|),-r_{p,max},r_{p,max}) \label{eq:clip_r_p}\\
&r_v = \text{clip}(w_v(|v_{x,t}|-|v_{x,t-1}|),-r_{v,max},r_{v,max})\label{eq:clip_r_v}\\
&r_{\theta} = w_{\theta} v_{lim}( |\theta_{d,t}|-|\theta_{d,t-1}|)/\theta_{max}\label{eq:scale_r_theta}\\
&r_{dur} = w_{dur} v_{lim}\Delta t.\label{eq:scale_r_dur}
\end{align}
\end{small}
\vspace{-10pt}
The weights are negative so that decreasing the relative position, relative velocity or the pitch angle yields positive reward values. Clipping $r_p$ and $r_v$ to $\pm r_{p,max}$ and $\pm r_{v,max}$, as well as scaling $r_{\theta}$ and $r_{dur}$ with $v_{lim}$, is necessary due to their influence on the value of $r_{max}$. $r_{max}$ denotes the maximum achievable reward in a non-terminal timestep if $v_x \leq v_{lim}$ and $a_x \leq a_{lim}$, i.e., if the motion complies with the limits set by the worst-case motion scenario described in Sec.~\ref{sec:curriculum_discretization} during the derivation of the sequential curriculum. The clipping in \eqref{eq:clip_r_p} and \eqref{eq:clip_r_v} ensures the applicability of the curriculum also in situations in which that compliance is not given. $r_{max}$ plays a role in the scaling of Q-values that is performed as part of the knowledge transfer between curriculum steps. It is defined as follows
\begin{small}
\begin{align}
r_{p,max} &= |w_p| v_{lim} \Delta t ~~~~~~~~~~~~~ r_{v,max} = |w_v| a_{lim} \Delta t\\
r_{\theta,max} &= |w_{\theta}| v_{lim} \Delta\theta/\theta_{max} ~~~~ r_{dur,max} = w_{dur} v_{lim}\Delta t\\
r_{max} &= r_{p,max}+r_{v,max}+r_{\theta,max}+r_{dur,max}.
\end{align}
\end{small}
\vspace{-12pt}
With $r_{max}$ derived and the weights $w_{suc}$ and $w_{fail}$, we can finally define the success and failure rewards as
\begin{small}
\begin{equation}
r_{suc} = w_{suc} r_{max} ~~ \textrm{and}~~~ r_{fail} = w_{fail} r_{max}.\label{eq:r_suc_r_fail}
\end{equation}
\end{small}
\vspace{-12pt}
All weights of the reward function were determined experimentally.
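For concreteness, a small Python sketch of the shaped reward follows; the weights are those of Tab.~\ref{tab:common_training_params}, while $\Delta t$, $\theta_{max}$, $\Delta\theta$, $v_{lim}$ and $a_{lim}$ are assumed example values.
\begin{verbatim}
def step_reward(p_x, p_x_prev, v_x, v_x_prev,
                theta_d, theta_d_prev,
                reached_goal, out_of_bounds,
                w_p=-100.0, w_v=-10.0, w_theta=-1.55, w_dur=-6.0,
                w_suc=2.6, w_fail=-2.6,
                v_lim=1.0, a_lim=1.0, dt=1.0 / 22.92,
                theta_max=0.373, d_theta=0.373 / 3):
    # Per-step bounds r_{p,max}, r_{v,max}, r_{theta,max}, r_{dur,max}.
    r_p_max = abs(w_p) * v_lim * dt
    r_v_max = abs(w_v) * a_lim * dt
    r_theta_max = abs(w_theta) * v_lim * d_theta / theta_max
    r_dur = w_dur * v_lim * dt
    r_max = r_p_max + r_v_max + r_theta_max + r_dur
    # Shaping terms, clipped/scaled as in the equations above.
    r_p = max(-r_p_max, min(r_p_max, w_p * (abs(p_x) - abs(p_x_prev))))
    r_v = max(-r_v_max, min(r_v_max, w_v * (abs(v_x) - abs(v_x_prev))))
    r_theta = w_theta * (abs(theta_d) - abs(theta_d_prev)) \
              * v_lim / theta_max
    # Terminal reward.
    if reached_goal:
        r_term = w_suc * r_max
    elif out_of_bounds:
        r_term = w_fail * r_max
    else:
        r_term = 0.0
    return r_p + r_v + r_theta + r_dur + r_term
\end{verbatim}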
\subsubsection{Double Q-Learning}
We use the Double Q-Learning algorithm \cite{Hasselt2010} to address the problem of overestimating action values and to increase training stability. Inspired by \cite{Even-Dar2003}, we apply a state-action-pair-specific learning rate that decreases the more often the state-action pair has been visited:
\begin{small}
\begin{align}
\alpha(s_t,a_t) = \max\left\lbrace \left( n_c(s_t,a_t)+1\right)^{-\omega},\alpha_{min}\right\rbrace,
\end{align}
\end{small}
\vspace{-12pt}
\noindent where $ n_c(s_t,a_t)$ denotes the number of times the state-action pair $\left(s_t,a_t \right)$ has been visited up to timestep $t$ and $\omega$ is a decay factor. To keep a certain minimal learning ability, the learning rate is held constant once the value of $\alpha_{min}$ has been reached. Sufficient exploration of the state space is ensured by applying an $\epsilon$-greedy policy for action selection during training. The exploration rate $\epsilon$ is varied according to a schedule over the episode number.
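A compact Python sketch of one tabular Double Q-learning update with this learning rate follows; the table layout (NumPy arrays indexed by the discrete state tuple) is an assumption.
\begin{verbatim}
import numpy as np

def double_q_update(QA, QB, n_visits, s, a, r, s_next, done,
                    gamma=0.99, omega=0.51, alpha_min=0.02949):
    # State-action specific learning rate.
    alpha = max((n_visits[s][a] + 1) ** (-omega), alpha_min)
    n_visits[s][a] += 1
    # Randomly choose which table to update; the other evaluates.
    if np.random.random() < 0.5:
        Q_upd, Q_eval = QA, QB
    else:
        Q_upd, Q_eval = QB, QA
    a_star = int(np.argmax(Q_upd[s_next]))
    target = r if done else r + gamma * Q_eval[s_next][a_star]
    Q_upd[s][a] += alpha * (target - Q_upd[s][a])
\end{verbatim}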
\subsection{Curriculum and Discretization}\label{sec:curriculum_discretization}
Our curriculum and discretization approach builds on \cite{Lampton2009}, where a multiresolution method for state space discretization is presented. There, starting with a coarse discretization, a region around a pre-known learning goal is defined. Once the agent has learned to reach that region, the region is further refined by discretizing it with additional states. Then, training starts anew, but only in the refined region, and the process is repeated until a pre-defined resolution around the learning goal has been reached. Consequently, a complex task is reduced to a series of smaller, quickly learnable sub-tasks.
\begin{figure}
\centering
\includegraphics[scale = 0.35]{./pics/curriculum.pdf}
\caption{Illustration of the sequential curriculum for $n_{cs}+1$ curriculum steps. Each curriculum step consists of an instance of the basis learning task.}
\label{fig:seq_cur}
\end{figure}
In our novel approach, we introduce a similar multiresolution technique combined with transfer learning as part of a sequential curriculum \cite{Narvekar2020} to accelerate and stabilize training even further.
Each new round of training in the multiresolution setting constitutes a different curriculum step. A curriculum step is an instance of the basis learning task introduced in Sec.~\ref{sec:basis_learning_task}. In particular, this means that each curriculum step has its own Q-table. Since the size of the state and action space is the same throughout all steps, knowledge can be easily transferred to the subsequent curriculum step by copying the Q-table, which then serves as the starting point for its training. Figure~\ref{fig:seq_cur} illustrates the procedure of the sequential curriculum.\\
However, throughout the curriculum the goal regions become smaller, leading to less spacious maneuvers of the multi-rotor. Thus, the achievable reward, which depends on relative change in position and velocity, is reduced. In order to achieve a consistent adaptability of the Q-values over all curriculum steps, the initial Q-values need to be scaled to match the maximum achievable reward per timestep. Therefore, we define
\begin{small}
\begin{equation}
Q_{init, i+1} = \left(r_{max,i+1}/r_{max,i}\right) Q_{result,i},
\end{equation}
\end{small}
\vspace{-10pt}
\noindent where $i\geq 1$ is the current curriculum step. Furthermore, adding more curriculum steps with smaller goal regions possibly leads to previously unconsidered effects, such as overshooting the goal state. This is why in our work, training is not only restricted to the latest curriculum step but performed also on the states of the previous curriculum steps when they are visited by the agent.
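In code, the transfer between curriculum steps then reduces to copying and rescaling the Q-tables (a sketch under the same array-layout assumption as above):
\begin{verbatim}
import numpy as np

def init_next_step(Q_result_i, r_max_i, r_max_i1):
    # Q_{init,i+1} = (r_{max,i+1} / r_{max,i}) * Q_{result,i}
    return (r_max_i1 / r_max_i) * np.copy(Q_result_i)
\end{verbatim}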
\begin{figure}
\centering
\includegraphics[scale = 0.3]{./pics/discrete.pdf}
\caption{Illustration of the mapping of the normalized observations $p_x$ and $v_x$ to the discrete states (red - $0$, yellow - $1$, blue - $2$) for a curriculum with three steps $(i=0,i=1,i=2)$. The yellow chequered regions illustrate the size of the goal state when the curriculum step is the last of the sequence.}
\label{fig:discretization}
\end{figure}
Furthermore, the discretization introduced in Sec.~\ref{sec:discrete_state_space}, which is now associated with a curriculum step, covers only a small part of the continuous observation space with few discrete states. In order to completely capture the predominant environment characteristics of the subspace in which the agent is expected to learn the task, it is necessary for it to have the knowledge of the state space topology \cite{Anderson1994}. Against the background of the multiresolution approach, this can be rephrased as the question of how to define suitable goal regions, i.e., the size of the goal states. To this end, we again leverage the insight that accurate knowledge about the relative position and velocity is only required when landing is imminent. We begin by choosing an exponential contraction of the size of the unnormalized goal state $p_{c,x,goal,i}$ associated with the $i^{th}$ curriculum step such that
\begin{small}
\begin{align}
&p_{c,x,goal,0} = \sigma^2 x_{max} , ~~~ p_{c,x,goal,i} = \sigma^{2(i+1)}x_{max},\\
&p_{c,x,goal,n_{cs}} = l_{mp}/2 = \sigma^{2(n_{cs}+1)}x_{max} \label{eq:exp_contraction}
\end{align}
\end{small}
\vspace{-12pt}
\noindent where $0<\sigma<1$ is a contraction factor, $l_{mp} \in \mathbb{R}[\si{m}]$ is the edge length of the squared moving platform, $x_{max}$ is the boundary of the fly zone $\mathcal{F}$ (see Sec.~\ref{sec:prelim_notations}) and $n_{cs}$ is the number of curriculum steps following the initial step. The contraction factor $\sigma$ plays a role in how easy it is for the agent to reach the goal state. If it is set too low, the agent will receive success rewards only rarely, thus leading to an increased training time. \eqref{eq:exp_contraction} can be solved for $n_{cs}$ because $\sigma$, $l_{mp}$ and $x_{max}$ are known parameters.
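Explicitly, rearranging \eqref{eq:exp_contraction} gives (with the result rounded to an integer in practice)
\begin{small}
\begin{equation}
n_{cs} = \frac{\ln\bigl(l_{mp}/(2x_{max})\bigr)}{2\ln\sigma}-1.
\end{equation}
\end{small}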
To determine the relationship between the contraction of the positional goal state and the contraction of the velocity goal state, we need a kinematics model that relates these variables. For this purpose, we envision the following worst case scenario for the relative motion \eqref{eq:non_linear_rel_motion}.
Assume that the vehicle hovers right above the platform which is located on the ground in the center of the fly zone $\mathcal{F}$, with $v_x = 0\si{m/s}$ and $\psi_{rel} = 0$. Now, the platform starts to constantly accelerate with the maximum possible acceleration $a_{mp,max}$ defined by \eqref{eq:a_mpmax} until it has reached $x_{max}$ after a time $t_{0} = \sqrt{2 x_{max}/a_{mp,max}}$.
This is considered to be the worst case since the platform never slows down, unlike the rectilinear periodic movement \eqref{eq:platform_acceleration_rpm} used for training.
The evolution of the continuous observations over time is then expressed by
\begin{small}
\begin{align}
a_{x,c,wc}(t) = a_{mp,max}, &~~~~ v_{x,c,wc}(t) =a_{mp,max} t \label{eq:a_p_s_rel_max}\\
\textrm{and} ~~ p_{x,c,wc}(t) &=0.5 a_{mp,max}t^2,\label{eq:a_v_s_rel_max}
\end{align}
\end{small}
\vspace{-12pt}
\noindent which constitute a simple kinematics model that relates the relative position and relative velocity via the time $t$. Next, we consider $p_{max}=x_{max}$, $v_{max}= a_{mp,max}t_{0}$ and $a_{max} = a_{mp,max}$ for the normalization \eqref{eq:xyz_c} and discretize \eqref{eq:a_p_s_rel_max} and \eqref{eq:a_v_s_rel_max} via the time $t_i = \sigma^i t_0$. This allows us to determine the values of $p_{lim,i}$, $v_{lim,i}$, $a_{lim,i}$ and $p_{goal,i}$, $v_{goal,i}$, $a_{goal,i}$. They are required for the basis learning task presented in Sec.~\ref{sec:basis_learning_task} that now constitutes the $i^{th}$ curriculum step.
For this purpose, we define with $i \in \left\lbrace 0,1,\ldots,n_{cs}\right\rbrace$
\begin{small}
\begin{align}
p_{lim,i}&= \sigma^{2i}= 0.5 a_{mp,max} t_i^2/p_{max}, \\
v_{lim,i}&=\sigma^{i}= a_{mp,max} t_i /v_{max}, \\
a_{lim,i} &= 1 , \\
p_{goal,i}&=\beta_p \sigma^{2i}=\beta_p p_{lim,i}, \\
v_{goal,i}&=\beta_v\sigma^i=\beta_v v_{lim,i}, \\
a_{goal,i} &= \beta_{a}\sigma_{a}=\beta_{a}\sigma_{a} a_{lim,i} ,
\end{align}
\end{small}
\vspace{-12pt}
\noindent where $\beta_p=\beta_v =\beta_{a} = 1/3$ if the $i$th curriculum step is the curriculum step that was most recently added to the curriculum sequence, and $\beta_p=\sigma^2,\beta_v = \sigma,\beta_{a}=1$ otherwise. Thus, for the latest curriculum step the goal values result from scaling the associated limit value with a factor of $1/3$, a value that has been found empirically, and for all previous steps from the discretized time applied to \eqref{eq:a_p_s_rel_max} and \eqref{eq:a_v_s_rel_max}. The entire discretization procedure is illustrated by Fig.~\ref{fig:discretization}.
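A short Python sketch of the resulting per-step bounds follows; $\sigma$ and $\sigma_a$ are taken from Tab.~\ref{tab:common_training_params}.
\begin{verbatim}
def step_bounds(i, latest, sigma=0.8, sigma_a=0.416):
    # Normalized limits and goals of the i-th curriculum step.
    p_lim, v_lim, a_lim = sigma ** (2 * i), sigma ** i, 1.0
    if latest:   # most recently added curriculum step
        beta_p = beta_v = beta_a = 1.0 / 3.0
    else:
        beta_p, beta_v, beta_a = sigma ** 2, sigma, 1.0
    p_goal, v_goal = beta_p * p_lim, beta_v * v_lim
    a_goal = beta_a * sigma_a * a_lim
    return (p_lim, v_lim, a_lim), (p_goal, v_goal, a_goal)
\end{verbatim}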
Introducing a different scaling value only for the latest curriculum step has been found empirically to improve the agent's ability to follow the moving platform. The discretization of the acceleration is the same for all curriculum steps and is defined by a contraction factor $\sigma_a$ which has been set empirically. Finding a suitable value for $\sigma_a$ has been driven by the notion that if this value is chosen too high, the agent will have difficulties reacting to changes in the relative acceleration, because the goal state would cover an excessively large range in the discretization of the relative acceleration. However, if it is chosen too low, the opposite is the case: the agent would only rarely be able to visit the discrete goal state $a_{goal,i}$, thus unnecessarily forgoing one of only three discrete states. \\
During the training of the different curriculum steps, the following episodic terminal criteria are applied. On the one hand, an episode terminates with success if the goal state $s^*$ of the latest curriculum step is reached and the agent has remained in that curriculum step's discrete states for at least one second without interruption. On the other hand, an episode is terminated with a failure if the UAV leaves the fly zone $\mathcal{F}$ or if the success criterion has not been met after the maximum episode duration $t_{max}$. The reward received by the agent in the terminal time step for a successful or failed termination of the episode is defined by \eqref{eq:r_suc_r_fail}.
Once the trained agent is deployed, the agent selects the actions based on the Q-table of the latest curriculum step to which the continuous observations can be mapped (see Fig.~\ref{fig:seq_cur}).
\subsection{Hyperparameter Determination} \label{sec:hyperparemter_estimation}
We leverage the discrete action space \eqref{eq:discrete_action_space} to determine the hyperparameters agent frequency $f_{ag}$ and maximum pitch angle $\theta_{max}$ in an interpretable way. The purpose is to ensure sufficient maneuverability of the UAV so that it can follow the platform. For sufficient maneuverability, the UAV needs two core abilities: i) producing a maximum acceleration larger than the one the platform is capable of, and ii) changing its direction of acceleration more quickly than the moving platform. Against the background of the assumptions explained in Sec.~\ref{sec:basis_learning_task_preliminaries}, complemented with thrust compensation, we address the first aspect through the maximum pitch angle $\theta_{max}$
\begin{small}
\begin{equation}
\theta_{max} = \tan^{-1}\left( \frac{k_a a_{mp,max}}{g}\right).\label{eq:theta_max}
\end{equation}
\end{small}
\vspace{-10pt}
$k_{a}$ denotes the multiple of the platform's maximum acceleration of which the UAV needs to be capable. For the second aspect, we leverage the platform's known frequency of the rectilinear periodic movement. According to \eqref{eq:platform_acceleration_rpm}, the moving platform requires the duration of one period to traverse the entire available range of acceleration. Leveraging \eqref{eq:normalized_pitch_set}, we can calculate the time required by the copter to do the same as $4n_{\theta}\Delta t$, where $\Delta t = 1/f_{ag}$. Next, we introduce a factor $k_{man}$ which specifies how many times faster the UAV should be able to traverse the entire range of acceleration than the moving platform. Against this background, we obtain the agent frequency as
\begin{small}
\begin{align}
f_{ag} = 4 n_{\theta} k_{man} \frac{\omega_{mp}}{2\pi}= 2n_{\theta} k_{man} \frac{\omega_{mp}}{\pi}. \label{eq:f_ag}
\end{align}
\end{small}
Both hyperparameters $\theta_{max}$ and $f_{ag}$ are eventually based on the maximum acceleration of the moving platform $a_{mp,max}$ as is the discretization of the state space. Therefore, we argue that these hyperparameters pose a matching set of values that is suitable to prevent excessive jittering in the agent's actions.
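As a concrete check, the following short Python computation reproduces the values of Tab.~\ref{tab:training_differences} for the RPM 1.6 training case ($g=9.81\,\si{m/s^2}$ is assumed):
\begin{verbatim}
import math

g = 9.81                        # assumed gravitational acceleration
k_a, k_man, n_theta = 3, 15, 3  # values of Tab. of training parameters
v_mp, r_mp = 1.6, 2.0           # RPM 1.6 training case

a_mp_max = v_mp ** 2 / r_mp     # 1.28 m/s^2, Eq. (eq:a_mpmax)
omega_mp = v_mp / r_mp          # 0.8 rad/s

theta_max = math.atan(k_a * a_mp_max / g)        # ~0.373 rad (~21.4 deg)
f_ag = 2 * n_theta * k_man * omega_mp / math.pi  # ~22.92 Hz
\end{verbatim}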
\section{Implementation}\label{sec:Implementation}
\subsection{General}
We set up the experiments to showcase the following.
\begin{itemize}[leftmargin=*]
\item Empirically show that our method is able to outperform the approach presented in \cite{Rodriguez-Ramos2019} with regard to the rate of successful landings while requiring a shorter training time.
\item Empirically show that our method is able to perform successful landings for more complex platform trajectories such as an 8-shape.
\item Demonstrate our method on real hardware.
\end{itemize}
\subsection{Simulation Environment}
The environment is built within the physics simulator Gazebo 11, in which the moving platform and the UAV are realized. We use the RotorS simulator \cite{Furrer2016} to simulate the latter. Furthermore, all required tasks, such as computing the observations of the environment, the state space discretization, and the Double Q-learning algorithm, are implemented as nodes in the ROS Noetic framework using Python 3.\\
The setup and data flow are illustrated by Fig.~\ref{fig:sim_framework}. The associated parameters are given in Tab.~\ref{tab:environment_params}. We obtain $\mathbf{a}_c$ by taking the first order derivative of $\mathbf{v}_c$ and applying a first order Butterworth-Filter to it. The cut-off frequency is set to $0.3 \si{hz}$.
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.7]{./pics/framework.pdf}
\caption{Overview of the framework's structure. Red are components relying on Gazebo. Green are components designed for converting and extracting / merging data. Yellow are components dealing with the control of the UAV. }
\label{fig:sim_framework}
\end{figure}
\begin{table}[]
\centering
\fontsize{7}{10}\selectfont
\begin{tabular}{@{}ccccccccc@{}}
\toprule
Group & \multicolumn{2}{c}{Gazebo} & \thead{Rel. St.} & \thead{Observ.} & \thead{1D RL } & \multicolumn{2}{c}{\thead{PID }} \\ \midrule
Name & \thead{Max. step \\size\\ $[\si{s}]$} & \thead{Real time \\ factor\\$[-]$} & \thead{{ }\\Freq. \\$[\si{hz}]$} & \thead{{ }\\Freq.\\ $[\si{hz}]$ } &\thead{Pitch / Roll\\ Freq. \\$[\si{hz}]$} & \thead{Yaw\\Freq. $[\si{hz}]$ \\ $k_p,k_i,k_d$} & \thead{$v_z$\\Freq. $[\si{hz}]$\\ $k_p,k_i,k_d$} \\ \midrule
Value &$ 0.002$& $1$& $100$ & $100$ &\thead{see\\ \eqref{eq:f_ag}} &\thead{$\sim 110$\\ $8,1,0$} & \thead{$\sim 110$\\ $5,10,0$} \\ \bottomrule
\end{tabular}
\caption{Parameters of the training environment.}
\label{tab:environment_params}
\end{table}
\subsection{Initialization}
In \cite{Kooi2021}, it is indicated that training can be accelerated when the agent is initialized close to the goal state at the beginning of each episode. For this reason, we use the following normal distribution to determine the UAV's initial position within the fly zone $\mathcal{F}$ during the first curriculum step.
\begin{small}
\begin{equation}
(x_{init},y_{init}) = \left(\text{clip}\left(N(\mu,\sigma_{\mathcal{F}}),-x_{max},x_{max}\right),0\right)
\end{equation}
\end{small}
We set $\sigma_{\mathcal{F}} = p_{max}/3$, which ensures that the UAV is initialized close to the center of the fly zone and thus in proximity to the moving platform more frequently. All subsequent curriculum steps, as well as the testing of a fully trained agent, are then conducted using a uniform distribution over the entire fly zone.
\subsection{Training Hardware}
Each training run is executed on a desktop computer with the following specifications: Ubuntu 20, AMD Ryzen Threadripper 3960X 24-core processor, 128 GB RAM, 6 TB SSD. This allows us to run up to four individual trainings in parallel. Note that, being a tabular method, the Double Q-learning algorithm does not depend on a powerful GPU for training since it does not use a neural network.
\subsection{Training}
We design two training cases, \textit{simulation} and \textit{hardware}. Case \textit{simulation} is similar to the training conditions in the baseline method \cite{Rodriguez-Ramos2019} so that we can compare our approach in simulation. Case \textit{hardware} is created to match the spatial limitations of our real flying environment, so that we can evaluate our approach on real hardware.
For both cases, we apply the rectilinear periodic movement (RPM) of the platform specified by \eqref{eq:platform_acceleration_rpm} during training. We consider different training scenarios regarding the maximum velocity of the platform, denoted by ``RPM $v_{mp,train}$''. For each velocity $v_{mp,train}$, we train four agents using the same parameters. The purpose is to provide evidence of the reproducibility of the training results instead of presenting only a manually selected best result. We choose the same UAV (Hummingbird) from the RotorS package that is also used in the baseline method. Other notable differences between the baseline and the training cases \textit{simulation} and \textit{hardware} are summarized in Tab.~\ref{tab:training_differences}.
\begin{table}
\fontsize{7}{10}\selectfont
\centering
\begin{tabular}{ccccccc}
\toprule
Method & \thead{Fly zone\\size $[m]$} & \thead{Platform\\size $[m]$} & \makecell{$r_{mp}$\\$[\si{m}]$} & \thead{$v_{mp}$\\$[\si{m/s}]$} & \thead{$a_{mp,max}$\\$[\si{m/s^2}]$} & \thead{$f_{ag}$\\$[hz]$}\\ \hline
Baseline & $3\times 6$ & $1\times1\times\sim 0.3$ & $\sim 2.5$ & $1$ &$\sim 0.4$ & $20$ \\ \hline
\multirow{3}{*}{Case \textit{simulation}} &\multirow{3}{*}{$9\times9$}&\multirow{3}{*}{$1\times1\times0.3$}& \multirow{3}{*}{2} & $0.8$ & $0.32$ & $11.46$ \\
& & & & $1.2$ & $0.72$ & $17.19$ \\
& & & & $1.6$ & $1.28$ & $22.92$ \\\hline
Case \textit{hardware}&$2\times2$ &$0.5\times0.5\times0.3$& $0.5$& $0.4$& $0.32$ & $22.92$\\
\bottomrule
\end{tabular}
\caption{Differences between the training parameters of our method and the baseline \cite{Rodriguez-Ramos2019}.
}
\label{tab:training_differences}
\end{table}
Note that some of our training cases deal with a higher maximum acceleration of the moving platform than the training case used for the baseline method and are therefore considered more challenging. For all trainings, we use an initial altitude for the UAV of $z_{init} = 4\si{m}$ and a vertical velocity of $v_z = -0.1\si{m/s}$, so that the UAV is descending during the entire episode. The values used for these variables in the baseline \cite{Rodriguez-Ramos2019} cannot be inferred from the paper. For the first curriculum step, the exploration rate schedule is empirically set to $\varepsilon= 1$ (episodes $0 - 800$) before it is linearly reduced to $\varepsilon=0.01$ (episodes $800-2000$). For all later curriculum steps, it is $\varepsilon=0$. For choosing these values, we could exploit the discrete state-action space, in which the number of visits to each state-action pair can be tracked. The exploration rate schedule presented above is chosen so that most state-action pairs have received at least one visit.
Other training parameters are presented in Tab.~\ref{tab:common_training_params}.
The training is ended as soon as the agent manages to reach the goal state \eqref{eq:goal_state} associated with the latest step in the sequential curriculum in $96\%$ of the last $100$ episodes. This value has been empirically set. For all trainings, we use noiseless data to compute the observations fed into the agent. However, we test selected agents for their robustness against noisy observations as part of the evaluation.
\begin{table}[]
\fontsize{7}{10}\selectfont
\centering
\begin{tabular}{@{}ccccccccc@{}}
\toprule
Group & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Double Q-Learning\end{tabular}} & \multicolumn{3}{c}{Reward function} & \multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}Discretization\end{tabular}} \\ \midrule
Name & \begin{tabular}[c]{@{}c@{}}$\gamma$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\alpha_{min}$ \\$\omega$\end{tabular} &\begin{tabular}[c]{@{}c@{}}$w_p$\\ $w_v$ \\ $w_{\theta}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$w_{dur}$\\ $w_{suc}$ \\ $w_{fail}$\end{tabular} &$t_{max}$& \begin{tabular}[c]{@{}c@{}}$\sigma_{a}$\\$\sigma $\end{tabular} & \begin{tabular}[c]{@{}c@{}}$n_{\theta}$\\$n_{cs,sim.}$\\$n_{cs,hardw.}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$k_{a}$\\$k_{man}$\end{tabular} \\ \midrule
Value & $0.99$ & \begin{tabular}[c]{@{}c@{}} $0.02949$ \\$0.51$\end{tabular} &\begin{tabular}[c]{@{}c@{}}$-100$\\ $-10$ \\ $-1.55$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$-6$\\$ 2.6$\\ $-2.6$\end{tabular} &$20\si{s}$ &\begin{tabular}[c]{@{}c@{}}$0.416$\\$0.8 $\end{tabular} & \begin{tabular}[c]{@{}c@{}}$3$\\$4$\\$3$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$3$\\$15$\end{tabular} \\ \bottomrule
\end{tabular}
\caption{Other training parameters of our approach.}
\label{tab:common_training_params}
\end{table}
\vspace{-10pt}
\subsection{Initiation of Motion} \label{sec:state_lock}
During training, the yaw controller ensures $\psi_{rel} = 0$. However, during evaluation in simulation this occasionally led to the situation that the agent commanding the lateral motion of the UAV was not able to leave its initial state (see Fig.~\ref{fig:state_lock}). We hypothesize that the reason for this behavior is that the agent is trained on a platform that \textit{moves} while considering \textit{relative} motion in its observations. As a consequence, the agent learns a policy that, in certain states, relies exclusively on the platform movement to achieve a desired state change. However, for evaluation with $\psi_{rel} = 0$, the agent controlling the lateral motion observes no relative movement if the UAV is hovering and the platform follows a rectilinear trajectory in the longitudinal direction.
The issue can be addressed by setting initial $\psi_{rel} \neq 0$.
In this case, the platform's movement shows a component in the values ${p}_{c,y},{v}_{c,y}$ and ${a}_{c,y}$ that are used as observations for the lateral motion's agent. A change in the states is therefore much more likely, allowing the agent to enter states in which the policy selects an action other than ``do nothing''. For this reason, we apply $\psi_{rel} = \pi/4$ for all experiments.
\begin{figure}
\centering
\includegraphics[scale = 0.25]{./pics/relative_trajectories_combined.pdf}
\caption{Illustration of the problem of motion initiation. After initialization, commanding a yaw angle of $\psi_{rel} = \pi/4$ allows the lateral agent to enter a state associated with another action than "do nothing" due to the state change induced by the longitudinal platform movement. The platform movement is now reflected in $p_y,v_y,a_y$ that are the observations fed into the lateral agent.}
\label{fig:state_lock}
\end{figure}
\section{Results}\label{sec:Experiments}
\subsubsection{Evaluation in Simulation without Noise}
In this scenario, all agents trained for case \textit{simulation} and \textit{hardware} are evaluated in simulation using noiseless observations of the environment, just as in training. Besides a static platform, we use two types of platform trajectories. The first is the rectilinear periodic movement (RPM) specified by \eqref{eq:platform_acceleration_rpm} and the second is an eight-shaped trajectory defined by
\begin{small}
\begin{equation}
_e\mathbf{r}_{mp} =r_{mp}\left[\sin(\omega_{mp} t),\sin(0.5\omega_{mp} t),0\right]^T, ~ \omega_{mp} = v_{mp}/r_{mp}.\\
\label{eq:platform_acceleration}
\end{equation}
\end{small}
\vspace{-10pt}
For all landing attempts, we specify an initial altitude of $z_{init} = 2.5\si{m}$. A landing attempt is ended once the UAV touches the surface of the moving platform or reaches an altitude lower than the platform surface, i.e., misses the platform. If the center of the UAV is located above the moving platform at the moment of touchdown, the landing trial is considered successful. The value of $z_{init}$ leads to a duration of a landing attempt which corresponds roughly to the time $t_{max}$ used as the maximum episode length during training, see reward function \eqref{eq:reward}. The information regarding the training duration of the agents is summarized in Tab.~\ref{tab:duration_results} for case \textit{simulation} and case \textit{hardware}. \\
The training durations of the different curriculum steps suggest that the majority of the required knowledge is learned during the first curriculum step. The later curriculum steps require significantly fewer episodes to reach the end condition. This is because the exploration rate is $\varepsilon=0$, so the agent is only exploiting previously acquired knowledge, which is also supported by the accumulated sum of rewards (Fig.~\ref{fig:rewards}). This also implies that the decomposition of the landing procedure into several similar sub-tasks is a suitable approach to solve the problem.
The achieved success rates in simulation with noiseless observations of the environment are presented
in Fig.~\ref{fig:success_rates} for training case \textit{simulation} and in Tab.~\ref{tab:success_hardware_case} for training case \textit{hardware}.
\aboverulesep = 0.0mm
\belowrulesep = 0.0mm
\begin{table}[]
\fontsize{7}{10}\selectfont
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\thead{Case $\rightarrow$} & \multicolumn{3}{c|}{Case \textit{simulation}} & Case \textit{hardware} \\
\cmidrule{2-5}
\thead{Curricul.\\ step$\downarrow$} & RPM 0.8 & RPM 1.2 & RPM 1.6 & RPM 0.4 \\ \hline
0 & $112,109,112,119$ & $102,87,87,91$ & $92,83,88,90$ & $ 89,92,107,110$\\
\hline
1 & $7,6,7,6$ & $6,5,5,5$ & $5,4,6,6$ & $5,4,5,6$ \\
\hline
2 & $7,6,7,6$ & $7,5,6,5$ & $6,5,9,6 $ & $5,6,5,7 $ \\
\hline
3 & $9,7,8,8$ & $8,6,7,7$ & $7,6,8,8$ & $6,7,6,8$ \\
\hline
4 & $8,8,10,9$ & $10,8,8,9$ & $8,8,8,9$ &N.A. \\
\hline
\multirow{2}{*}{Total} &$143,136$ & $133,111$ & $118,106$ &$105,109$ \\
&$144,148$ & $113,117$ & $119,119$ &$124,133$ \\
\hline
\end{tabular}
\caption{Training time in minutes required for the different curriculum steps by four agents trained with identical parameters for the different training cases. The values are rounded to the nearest integer. The training scenarios are identified by the platform's rectilinear periodic movement denoted RPM $v_{mp,train}$.}
\label{tab:duration_results}
\end{table}
\begin{figure}[thpb]
\centering
\includegraphics[width=8.5cm]{./pics/success_plot_v5.pdf}
\caption{Success rates of four agents trained with the same parameters on a platform's rectilinear periodic movement with $v_{mp,train} = 0.8\si{m/s}$ (RPM 0.8 - red), $v_{mp,train} = 1.2\si{m/s}$ (RPM 1.2 - blue) and $v_{mp,train} = 1.6\si{m/s}$ (RPM 1.6 - green). The agents are evaluated on different types of platform movement, indicated on the abscissa. Error bars illustrating the mean success rate and standard deviation are depicted in grey. The success rates have been determined over 150 landing trials for each evaluation scenario. }
\label{fig:success_rates}
\end{figure}
The success rates were determined over a larger set of RPM velocities than in the baseline method and indicate a good performance of the approach. The success rates are higher when the platform velocity during evaluation is lower than the one applied during training.
This is to be expected since the equations presented in Sec.~\ref{sec:hyperparemter_estimation} for hyperparameter determination ensure a sufficient maneuverability up to the velocity of the rectilinear periodic movement used in training.
For the training case RPM 0.8, the fourth agent performs comparatively poorly. For the static platform, the agent occasionally suffers from the problem of motion initiation described in Sec.~\ref{sec:state_lock}, despite setting $\psi_{rel} = \pi/4 \si{rad}$. For the evaluation case where the platform moves with RPM 0.4, the reason is an oscillating movement, which causes the agent to occasionally overshoot the platform. A similar problem arises for agent four of training case \textit{hardware}, where the agent occasionally expects a maneuver of the platform during the landing procedure. As a consequence, this agent achieves a success rate of only $82\%$ for a static platform. However, for higher platform velocities, the success rate improves significantly.
\subsubsection{Selection of Agents for further Evaluation}
We select the first agent of training case \textit{simulation}, which performs best over all evaluation scenarios and was trained on a rectilinear periodic movement with a platform velocity of $v_{mp}=1.6\si{m/s}$. It is denoted RPM 1.6/1. Its reward curve is depicted in Fig.~\ref{fig:rewards}.
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.25]{./pics/reward_rpm_1_6_1_rpm_0_4_3.pdf}
\caption{Accumulated reward achieved by agent RPM 1.6/1 of case \textit{simulation} (top) and agent RPM 0.4/3 of case \textit{hardware} (bottom) during training. Red lines indicate the end of a curriculum step. }
\label{fig:rewards}
\end{figure}
We compare its results with the baseline method \cite{Rodriguez-Ramos2019} in Tab.~\ref{tab:comparison}. The comparison shows that our approach is able to outperform the baseline method. For the RPM $0.4$ evaluation scenario with noiseless observations we achieve a success rate of $99\%$, which is $+8\%$ better than the baseline. For the RPM $1.2$ evaluation scenario, our method is successful in $99\%$ of the landing trials, increasing the baseline's success rate by $+26\%$. Note that the maximum radius of the rectilinear periodic movement is $r_{mp} \approx 2.5\si{m}$ in the baseline and $r_{mp}=2\si{m}$ in our approach. However, the value used in our approach poses a more difficult challenge, since the acceleration acting on the platform is higher for the same maximum platform velocity, see Tab.~\ref{tab:training_differences}. Furthermore, our method requires $\sim 80\%$ less time and $53\%$ fewer episodes to train. We select agent 3 of training case \textit{hardware} for further evaluation, denoted RPM 0.4/3. Its reward curve is also depicted in Fig.~\ref{fig:rewards}. It achieves a success rate of $99\%$ for the evaluation scenario with a static platform, $100\%$ in case of a platform movement of RPM $0.2$, $99\%$ for RPM $0.4$ and $97\%$ for the eight-shaped trajectory of the platform. It required $2343$ episodes to train, which took $123\si{min}$.
\aboverulesep = 0.0mm
\belowrulesep = 0.0mm
\begin{table}[thpb]
\fontsize{7}{10}\selectfont
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Sim. eval. $\rightarrow$ & Static $[\%]$ & RPM 0.2 $[\%]$ & RPM 0.4 $[\%]$ & 8-shape $[\%]$ \\
\hline
\thead{Case \\ \textit{hardware}} & $100,100,99,82$ & $99,100,100,90$ & $98,99,99,97$ & $ 99,96,97,94$\\
\hline
\end{tabular}
\caption{Success rates in percent achieved in simulation with noiseless observations by four agents trained with the same parameters for case \textit{hardware}. They are evaluated in four different scenarios of platform movement, as is indicated by the column titles. }
\label{tab:success_hardware_case}
\end{table}
\aboverulesep = 0.0mm
\belowrulesep = 0.0mm
\begin{table}[]
\fontsize{7}{10}\selectfont
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\thead{Category$\rightarrow$\\ Method$\downarrow$ } & \thead{Fly zone \\size} & \thead{Training\\ duration} & \thead{Success rate\\RPM 0.4} & \thead{Success rate\\RPM 1.2} \\ \hline
\thead{Baseline \cite{Rodriguez-Ramos2019}} & $5\si{m}\times9\si{m}$ & \thead{$\sim600\si{min}$\\$4500\si{ep}.$} & $91\%$ & $73\%$\\
\hline
\thead{Our method } & $9\si{m}\times9\si{m}$ & \thead{$118\si{min}$\\$2113\si{ep}.$} & $99\%$ & $99\%$\\
\hline
\thead{Difference } & $-$ &\thead{$\sim -80\%$\\$-53\%$} & $+8\%$ & $ +26\%$\\
\hline
\end{tabular}
\caption{Comparison of our approach with the baseline method. The agent selected for the comparison is agent RPM 1.6/1. Our method significantly improves the success rate in the two reference evaluation scenarios while requiring significantly less time and fewer episodes to train. Furthermore, the fly zone used in our approach covers a larger area. }
\label{tab:comparison}
\end{table}
\subsubsection{Evaluation in Simulation with Noise}
We evaluate the selected agents of case \textit{simulation} and case \textit{hardware} for robustness against noise. For this purpose, we define a set of values $\sigma_{noise}= \left\lbrace \sigma_{p_x},\sigma_{p_y}, \sigma_{p_z},\sigma_{v_x},\sigma_{v_y},\sigma_{v_z}\right\rbrace$ specifying a level of zero-mean Gaussian noise that is added to the noiseless observations in simulation. The noise level corresponds to the noise present in an EKF-based pose estimation of a multi-rotor UAV recorded during real flight experiments.
\begin{small}
\begin{align}
\sigma_{noise} &= \left\lbrace 0.1\si{m},0.1\si{m},0.1\si{m},0.25\si{m/s},0.25\si{m/s},0.25\si{m/s}\right\rbrace
\label{eq:noise_level}
\end{align}
\end{small}
\vspace{-10pt}
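A minimal sketch of how this noise is injected into the simulated observations is shown below; the ordering of the observation vector and the helper name are illustrative assumptions, while the standard deviations follow the values above.
\begin{verbatim}
import numpy as np

# Standard deviations for (p_x, p_y, p_z, v_x, v_y, v_z), values from above.
SIGMA_NOISE = np.array([0.1, 0.1, 0.1, 0.25, 0.25, 0.25])

def add_observation_noise(obs, rng=None):
    """Add zero-mean Gaussian noise to a noiseless observation vector."""
    rng = np.random.default_rng() if rng is None else rng
    return obs + rng.normal(0.0, SIGMA_NOISE, size=obs.shape)
\end{verbatim}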
We evaluate the landing performance again for static, periodic and eight-shaped trajectories of the landing platform. The results are presented in Tab.~\ref{tab:noise_evaluation_case_simulation} for training case \textit{simulation} and in Tab.~\ref{tab:noise_evaluation_case_hardware} for training case \textit{hardware}.
For the selected agent of training case \textit{simulation}, adding the realistic noise $\sigma_{noise}$ leads to a slightly reduced performance. However, the achieved success rates are still higher than the agent's performance in the baseline without noise. For the evaluation scenario with RPM $0.4$, our success rate is $+4\%$ higher. With RPM $1.2$, it is $+20\%$ higher.
For the selected agent of training case \textit{hardware}, the drop in performance is slightly more pronounced. The reason is that the fly zone and platform specified for this training case are significantly smaller than for the case \textit{simulation}. As a consequence, the size of the discrete states is also significantly reduced, whereas the noise level stays the same. Thus, noise affects the UAV more, since it is more likely that the agent takes a suboptimal action due to an observation that was biased by noise.
\aboverulesep = 0.0mm
\belowrulesep = 0.0mm
\begin{table}[]
\fontsize{7}{10}\selectfont
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Simulation eval. $\rightarrow$ & Static & RPM 0.4 & RPM 0.8 &RPM 1.2& RPM 1.6 &8-shape \\
\hline
\thead{Case \textit{simulation} \\Agent RPM 1.6/1} & $97\%$ & $95\%$ & $98\%$ & $93\%$ & $85\%$ &$85\%$\\
\hline
\end{tabular}
\caption{Success rates in percent achieved in simulation with noisy observations by the best performing agent of training case \textit{simulation}. It has been evaluated over 150 landing trials for six different scenarios of platform movement, as is indicated by the column titles. }
\label{tab:noise_evaluation_case_simulation}
\end{table}
\aboverulesep = 0.0mm
\belowrulesep = 0.0mm
\begin{table}[]
\fontsize{7}{10}\selectfont
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Simulation eval. $\rightarrow$ & Static & RPM 0.2 & RPM 0.4 &8-shape \\
\hline
\thead{Case \textit{hardware} \\Agent RPM 0.4/3} & $95\%$ &$91\%$ &$85\%$ & $85\%$\\
\hline
\end{tabular}
\caption{Success rates in percent achieved in simulation with noisy observations by the selected agent of training case \textit{hardware}. It has been evaluated over 150 landing trials for four different scenarios of platform movement, as is indicated by the column titles. }
\label{tab:noise_evaluation_case_hardware}
\end{table}
\subsubsection{Evaluation in Real Flight Experiment}
Unlike the baseline method, we do not evaluate our approach on real hardware in single flights only. Instead, we provide statistics on the agent's performance for different evaluation scenarios to illustrate the sim-to-real gap.
For this purpose, the selected agent of case \textit{hardware} was deployed on a quadcopter, see Figs.~\ref{fig:title_image} and \ref{fig:real_flights_equipment}, that has a mass of $m_{uav} = 0.72 \si{kg}$ and a diameter of $d_{uav}=0.28 \si{m}$, deviating from the UAV used in simulation by $6\%$ in mass and $18\%$ in diameter.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{./pics/copter_platform.png}
\caption{Multi-rotor vehicle and autonomous platform moving on rails that were used for the flight experiments in the real world. The square platform has an edge length of $0.5\si{m}$.}
\label{fig:real_flights_equipment}
\end{figure}
The quadcopter is equipped with a Raspberry Pi 4B providing a ROS interface and a LibrePilot Revolution flight controller to enable the tracking of the attitude angles commanded by the agents via ROS. A motion capture system (Vicon) provides almost noiseless values of the position and velocity of the moving platform and the UAV. However, we do not use this information directly to compute the observations \eqref{eq:cont_obs_p_cx}-\eqref{eq:cont_obs_phi_rel}. The reason is that the motion capture system occasionally produces wrong marker detections. This can result in abrupt jumps in the orientation estimate of the UAV. Since the flight controller would react immediately, this could lead to a dangerous condition in our restricted indoor environment. To avoid any dangerous flight condition, we obtain the position and velocity of the UAV from an EKF-based state estimation method running on the flight controller. For this purpose, we provide the flight controller with a fake GPS signal (using Vicon) at a frequency of $10\si{Hz}$. It is then fused with other noisy sensor data (accelerometer, gyroscope, magnetometer, barometer). The EKF is robust to short periods of wrong orientation estimation. Our algorithm runs off-board, and the setpoint values for the attitude angles generated by the agents are sent to the flight controller. Due to hardware limitations regarding the moving platform, we only evaluate the agent for a static platform and the rectilinear periodic movement.
Generally, the Vicon system allows for a state estimate of the copter that has a lower noise level than the one specified by \eqref{eq:noise_level}. However, the velocity of the platform is determined purely by means of the Vicon system. The rough surface of the ground caused vibrations of the platform that induced an unrealistically high level of noise in the Vicon system's readings of the platform velocity. For this reason, the platform velocity is filtered using a low-pass filter with a cut-off frequency of $10\si{Hz}$. \\
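The velocity filtering can be sketched as follows; the Vicon sampling rate and the first-order filter structure are assumptions for illustration, only the $10\si{Hz}$ cut-off is taken from the description above.
\begin{verbatim}
import numpy as np

def lowpass_first_order(v_raw, fs=100.0, cutoff=10.0):
    """Causal first-order low-pass filter applied to noisy velocity samples."""
    dt = 1.0 / fs
    alpha = dt / (dt + 1.0 / (2.0 * np.pi * cutoff))   # smoothing factor
    v_filt = np.empty_like(v_raw)
    v_filt[0] = v_raw[0]
    for k in range(1, len(v_raw)):
        v_filt[k] = v_filt[k - 1] + alpha * (v_raw[k] - v_filt[k - 1])
    return v_filt
\end{verbatim}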
Figure \ref{fig:landing_trajectories_real} shows the trajectory of the UAV and the moving platform for a landing trial in which the platform was executing a rectilinear periodic movement with a maximum velocity of $0.4\si{m/s}$. The depicted $x$ and $y$ components of the multi-rotor vehicle's trajectory are based on the state estimate calculated by the EKF. All other presented trajectory values are based on the readings of the Vicon system. Table~\ref{tab:real_flights_evaluation_with_ground_effect} contains the success rates achieved in the experiments with real hardware. The starting positions of the UAV were manually selected and distributed as uniformly as possible, at an altitude of about $2.5\si{m}$.
\begin{figure}
\centering
\includegraphics[scale=0.25]{./pics/vicon_trajectory.pdf}
\caption{Example trajectory of the multi-rotor vehicle and moving platform during the real flight experiment. The platform's position is determined using the Vicon system, the multi-rotor vehicle's position results from the state estimate of the flight controller. }
\label{fig:landing_trajectories_real}
\end{figure}
\aboverulesep = 0.0mm
\belowrulesep = 0.0mm
\begin{table}[]
\fontsize{7}{10}\selectfont
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Real flights eval. $\rightarrow$ & Static & RPM 0.2 & RPM 0.4 \\
\hline
\thead{Case \textit{hardware} \\Agent RPM 0.4/3} & \thead{$96\%$\\$23$ trials} &\thead{$72\%$\\$25$ trials} &\thead{$82\%$\\$28$ trials} \\
\hline
\end{tabular}
\caption{Success rates in percent achieved in real flights with ground effect by the selected agent of training case \textit{hardware}. It has been evaluated over at least 20 landing trials for three different scenarios of platform movement, as is indicated by the column titles. }
\label{tab:real_flights_evaluation_with_ground_effect}
\end{table}
Whereas for the static platform a success rate of $96\%$ could be reached, there is a noticeable drop in performance for the evaluation scenarios in which the platform performs the rectilinear periodic movement. We argue that this can be attributed to the following main effects.
\begin{enumerate}
\item The deviation in size and mass of the UAV plays a role. The different mass is especially important since it changes the dynamics of the multi-rotor vehicle. Since the platform is only slightly larger than the copter, the effect of this deviation could be decisive in situations in which the copter overshoots the platform shortly before touchdown.
\item A reason for the drop in performance could also be a slightly different behaviour of the low-level controllers compared to simulation. They have been tuned manually. A possible approach to reduce the sim-to-real gap here could be to vary the controller gains within a small range during training, which should help the agent generalize better.
\item Trials in which occasional glitches in the state estimate occurred were not treated specially, although they could result in extra disturbances the RL controllers had to compensate. Furthermore, we also counted small violations of the boundary conditions as failures, such as leaving the fly zone by a marginal distance, even if the agent was able to complete the landing successfully thereafter.
\item The ground effect can play an important role for multi-rotor vehicles \cite{Sanches-Cuevas2017}. Since it was not considered in the Gazebo based simulation environment used for training, it contributes to the sim-to-real gap.
\end{enumerate}
Furthermore, during the real flight experiments no significant jittering in the agent's actions could be observed. This substantiates the approach of calculating values for the maximum pitch angle and agent frequency presented in Sec.~\ref{sec:hyperparemter_estimation}.
\section{Conclusion and Future Work}
In this work, we presented an RL-based method for autonomous multi-rotor landing. Our method splits the overall task into simpler 1-D tasks and formulates each of those as a sequential curriculum in combination with a knowledge transfer between curriculum steps. Through rigorous experiments, we demonstrate significantly shorter training time ($\sim -80\%$) and higher success rates (up to $+26\%$) than the DRL-based actor-critic method presented in \cite{Rodriguez-Ramos2019}. We present statistics of the performance of our approach on real hardware and show interpretable ways to set hyperparameters. In future work, we plan to extend the approach to control the vertical movement and yaw angle of the UAV, and to address more complex landing problems, such as landing on an inclined or on a flying platform.
\section{Introduction}
Classical control approaches suffer from the fact that accurate models of the plant and the environment are required for the controller design. These models are necessary to consider non-linearities of the plant behavior. However, they are subject to limitations, e.g., with regard to disturbance rejection \cite{Rodriguez-Ramos2019} and parametric uncertainties \cite{Mo2018}.
To overcome these problems, reinforcement learning (RL) has been studied for robot control in the last decades, including in the context of UAV control \cite{Mo2018}.
In model-free, (action-) value based RL methods, an approximation of a value function is learned exclusively through interaction with the environment. On this basis, a policy then selects an optimal action while being in a certain state \cite{Sutton2015}. Classical RL approaches such as Q-Learning \cite{Sutton2015} store a tabular representation of that value function approximation. Deep Reinforcement Learning (DRL) methods\cite{Mnih2013}, on the other hand, leverage a deep neural network (DNN) to learn an approximate value function over a continuous state and action space, thus making it a powerful approach for many complex applications. Nevertheless, in both RL and DRL, the training itself is in general sensitive to the formulation of the learning task and in particular to the selection of hyperparameters. This can result in long training times, the necessity to perform a time-consuming hyperparameter search and an unstable convergence behavior. Furthermore, especially in DRL, the neural network acts like a black box, i.e. it is difficult to trace back which training experience caused certain aspects of the learned behaviour.
\begin{figure}[!t]
\centering
\includegraphics[width=8.5cm, trim={0 0 0 0cm},clip]{./pics/title_image.png}
\caption{Our multi-rotor vehicle landing on a platform moving on rails.}
\label{fig:title_image}
\end{figure}
The purpose of this work is to address the aforementioned issues for the task of landing a multi-rotor UAV on a moving platform. Our RL-based method aims at i) achieving a high rate of successful landing attempts, ii) requiring short training time, and iii) providing interpretable ways to compute the hyperparameters necessary for training.
To achieve these aims, we leverage the tabular off-policy algorithm Double Q-Learning \cite{Hasselt2010}, due to its few required hyperparameters, namely the discount factor $\gamma$ and the learning rate $\alpha$. Using a tabular method does not require defining a DNN architecture or finding values for additional hyperparameters such as minibatch and buffer size or the number of gradient steps. Furthermore, unlike a NN-based deep learning algorithm, it does not require a powerful GPU for training, making it suitable also for less powerful and less power-consuming computers.
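For reference, a minimal sketch of one tabular Double Q-Learning update is given below; the state and action encodings as well as the parameter values are placeholders, not the settings used in our training.
\begin{verbatim}
import numpy as np

def double_q_update(QA, QB, s, a, r, s_next, alpha=0.1, gamma=0.99, rng=None):
    """One tabular Double Q-Learning update on the two action-value tables."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < 0.5:                    # update table A with B's estimate
        a_star = np.argmax(QA[s_next])
        QA[s, a] += alpha * (r + gamma * QB[s_next, a_star] - QA[s, a])
    else:                                     # update table B with A's estimate
        b_star = np.argmax(QB[s_next])
        QB[s, a] += alpha * (r + gamma * QA[s_next, b_star] - QB[s, a])
\end{verbatim}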
However, tabular methods only provide discrete state and action spaces and therefore suffer from the ``curse of dimensionality'' \cite{Lampton2009}. Furthermore, the training performance and control performance are influenced by the sampling rate. When the sampling and discretization do not match, the performance of the agent can be low, e.g., due to jittering, where the agent rapidly alternates between discrete states. To solve these problems, our method's \textbf{novel aspects} are outlined below. The first two allow us to address the ``curse of dimensionality'' by reducing the complexity of the learning task. The third addresses the problem of finding a matching discretization and sampling rate.
\begin{enumerate}[leftmargin=*]
\item Under the assumption of a symmetric UAV and decoupled dynamics, common in literature \cite{Wang2016}, our method is able to control the vehicle's motion in longitudinal and lateral direction with two instances of the \emph{same} RL agent. Thus, the learning task is simplified to 1D movement only. The vertical and yaw movements are controlled by PID controllers. The concept of using independent controllers for different directions of motion is an approach which is often used, e.g. when PID controllers are applied \cite{Wenzel2011}, \cite{Araar2017}.
\item We introduce a novel state-space discretization approach, motivated by the insight that precise knowledge of the relative position and relative velocity between the UAV and the moving platform is required only when touchdown is imminent. To this end, we leverage a multiresolution technique \cite{Lampton2009} and augment it with information about the state space topology, derived from a simple kinematics model of the moving platform. This allows us to restructure the learning task of the 1D movement as a sequence of even simpler problems by means of a sequential curriculum, in which learned action values are transferred to the subsequent curriculum step. Furthermore, the discrete state space allows us to accurately track how often a state has been visited during training.
\item We leverage the discrete action space to ensure sufficient maneuverability of the UAV to follow the platform. To this end, we derive equations that compute the values of hyperparameters, such as the agent frequency and the maximum value of the roll/pitch angle of the UAV. The intention of these equations is twofold. First, they link the derived values to the maneuverability of the UAV in an interpretable way. Second, they ensure that the discretization of the state space matches the agent frequency. The aim is to reduce unwanted side effects resulting from the application of a discrete state space, such as jittering.
\end{enumerate}
Section~\ref{sec:related_work} presents related work, followed by Sec.~\ref{sec:Methodology} describing the proposed approach in detail. Section~\ref{sec:Implementation} presents the implementation and experimental setup. The results are discussed in Sec.~\ref{sec:Experiments}, with comments on future work.
\section{Related Work} \label{sec:related_work}
\subsection{Classical Control Approaches}
So far, the problem of landing a multi-rotor aerial vehicle has been tackled for different levels of complexity regarding platform movement.
One-dimensional platform movement is considered in \cite{Hu2015} in the context of maritime applications, where a platform is oscillating vertically. Two-dimensional platform movement is treated in \cite{Wenzel2011},\cite{Ling2014}, \cite{Gautam2015}, \cite{Vlantis2015}, \cite{Araar2017}, \cite{Borowczyk2017} and \cite{Falanga2017}. Three-dimensional translational movement of the landing platform is covered for docking scenarios involving multi-rotor vehicles in \cite{Zhong2016} and \cite{Miyazaki2018}.
Various control techniques have been applied to enable a multi-rotor UAV to land in one of these scenarios. In \cite{Hu2015} an adaptive robust controller is used to control the vehicle during a descent maneuver onto a vertically oscillating platform while considering the ground effect during tracking of a reference trajectory.
The authors of \cite{Gautam2015} apply guidance laws that are based on missile control principles. \cite{Vlantis2015} uses a model predictive controller to land on an inclined moving platform. \cite{Falanga2017} relies on a non-linear control approach involving LQR-controllers. However, the most used controller type is a PID controller.
Reference \cite{Wenzel2011} uses four independent PID controllers to follow the platform that has been identified with visual tracking methods. In \cite{Ling2014}, the landing maneuver onto a maritime vessel is structured into different phases: rendezvous with the vessel, followed by acquiring a visual fiducial marker and descending onto the ship. During all phases, PID controllers are used to control the UAV.
Perception and relative pose estimation based methods are the focus of \cite{Araar2017}, where again four PID controllers provide the UAV with the required autonomous motion capability. A PID controller is also applied in \cite{Borowczyk2017} to handle the final touchdown on a ground vehicle moving at up to $50\si{km/h}$. The landing on a 3D moving platform can also be solved with PID controllers \cite{Zhong2016}, \cite{Miyazaki2018}. Although automatic methods for tuning the gains of a PID controller exist, tuning is often a manual, time-consuming procedure. Learning-based control methods, however, enable obtaining a suitable control policy exclusively from data through interaction with the environment, making them an attractive and superior approach for controlling a UAV to land on a moving platform. Furthermore, methods such as Q-Learning enable the approximation of an optimal action-value function and thus lead to a (near-) optimal action selection policy \cite{Sutton2015}.
\subsection{Learning-based Control Approaches}
The landing problem has been approached with (Deep) Reinforcement Learning for static platforms \cite{Kooi2021}, \cite{Shi2019}, \cite{Polvara2018}, \cite{Polvara2019} and for moving platforms \cite{Rodriguez-Ramos2019}, \cite{Rodriguez-Ramos2018}, \cite{Lee2018}.
The authors of \cite{Kooi2021} accelerate training by means of a curriculum, where a policy is learned using Proximal Policy Optimization (PPO) to land on an inclined, static platform. Their curriculum is hand-tailored and involves several hyperparameters, whereas in this work we present a structured approach for deriving a curriculum for different scenarios.
In \cite{Polvara2018} and \cite{Polvara2019}, the authors assign different control tasks to different agents. A DQN-agent aligns a quadcopter with a marker on the ground, whereas a second agent commands a descending maneuver before a closed-loop controller handles the touchdown. Our approach also leverages separate RL agents, but for controlling the longitudinal and lateral motion.
The authors of \cite{Shi2019} present a deep-learning-based robust non-linear controller to increase the precision of landing and close ground trajectory following. A nominal dynamics model is combined with a DNN to learn the ground effect on aerodynamics and UAV dynamics.
\cite{Lee2018} presents an actor-critic approach for landing that uses continuous state and action spaces to control the roll and pitch angles of the drone, but provides no statistics on its performance.
In \cite{Rodriguez-Ramos2019}, a DRL-framework is introduced for training in simulation and evaluation in real world. The Deep Deterministic Policy Gradient (DDPG) algorithm, an actor-critic approach, is used to command the movement of a multi-rotor UAV in longitudinal and lateral direction with continuous states and actions. Detailed data about the agent's success rate in landing in simulation is provided. We use this work as a baseline method and show how we outperform it, providing also statistics about our agent's performance in the real world. However, the baseline method does not provide any systematic and explainable way for deriving hyperparameters used for the learning problem. For our method, we present equations that link values of hyperparameters to problem properties such as the state space discretization and maneuverability of the UAV in an intuitively understandable way.
|
{
"arxiv_id": "2302.13213",
"language": "en",
"timestamp": "2023-02-28T02:13:09",
"url": "https://arxiv.org/abs/2302.13213",
"yymm": "2302"
} | \section{The extended Hermitian Hamiltonian formalism}\label{sec.1}
In this section we provide details for the extended Hermitian Hamiltonian formalism of a non-Hermitian system with PHS. The non-Hermitian single-particle Hamiltonian $H$ respects the PHS $S=UP$ as $U H^T U^{-1}=-H$, where $U$ is a unitary local operation satisfying $UU^*=1$ and $P$ is the transpose operation. The spectrum of this system is centrosymmetric due to the PHS, {\it i.e.}, if $|E\rangle_{\rm skin}$ is a right eigenstate of $H$ with eigenvalue $E$, $H|E\rangle_{\rm skin}=E|E\rangle_{\rm skin}$, then $\langle\langle-E|_{\rm skin}=(U^{\dagger}|E\rangle_{\rm skin})^{T}$ is a left eigenstate with eigenvalue $-E$, $\langle\langle-E|_{\rm skin}H=-E\langle\langle-E|_{\rm skin}$. To study the localization behavior, topological origin and relations of the skin modes, we map $H$ to the following extended Hermitian Hamiltonian
\begin{equation}
\tilde{H}_{E}=\left(\begin{array}{cccc}
& & & H^\dagger+E^*\\
& & H-E\\
& H^{\dagger}-E^{*}\\
H+E
\end{array}\right),
\end{equation}
which is defined on the quadruplicated Hilbert space and could be decomposed into two copies with a proper unitary transformation
\begin{equation}
\tilde{H}_{E}\rightarrow\left(\begin{array}{cc}
& H-E\\
H^{\dagger}-E^{*}
\end{array}\right)\oplus\left(\begin{array}{cc}
& H+E\\
H^\dagger+E^*
\end{array}\right)=H_{E}\oplus H_{-E},
\end{equation}
where $H_{\pm E}$ are the standard forms of the doubled Hamiltonian. Note that here we focus on the $E\neq 0$ case, since the degeneracy of $|E=0\rangle_{\rm skin}$ and $\langle\langle E=0|_{\rm skin}$ forces them to be extended in general.
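The construction can be checked numerically; the following sketch builds $\tilde{H}_{E}$ from an arbitrary matrix $H$ with all unspecified blocks set to zero (Hermiticity of the result follows from the block structure).
\begin{verbatim}
import numpy as np

def extended_hamiltonian(H, E):
    """Hermitian extension of a non-Hermitian H at reference energy E."""
    n = H.shape[0]
    Z = np.zeros((n, n), dtype=complex)
    I = np.eye(n)
    Hd = H.conj().T
    return np.block([
        [Z, Z, Z, Hd + np.conj(E) * I],
        [Z, Z, H - E * I, Z],
        [Z, Hd - np.conj(E) * I, Z, Z],
        [H + E * I, Z, Z, Z],
    ])
\end{verbatim}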
The extended Hamiltonian has both symmetries inherited from the non-Hermitian Hamiltonian and its own symmetries by construction. From the PHS of the original non-Hermitian Hamiltonian we obtain the PHS of the extended Hamiltonian, $\tilde{S}=\tilde{U}\tilde{P}$, satisfying $\tilde{U}\tilde{H}_{E}^T\tilde{U}^{-1}=-\tilde{H}_{E}$, where $\tilde{P}$ is the transpose operation on the extended Hamiltonian and
\begin{equation}
\tilde{U}=\begin{pmatrix}
&U&&\\
U&&&\\
&&&-U\\
&&-U&
\end{pmatrix}.
\end{equation}
Besides, $\tilde{H}_{E}$ respects the artificial chiral symmetry $\Gamma=\text{diag}(1,1,-1,-1)$ and thus the time-reversal symmetry $T=\Gamma\tilde{U}\hat{K}$, where $\hat{K}$ is the complex conjugation. Another important observation is that the two copies respect their own chiral symmetries $\Gamma_{a(b)}=\text{diag}(1,1,\mp 1,\pm 1)$, as $\Gamma_{a(b)}H_{E(-E)}\Gamma_{a(b)}^{-1}=-H_{E(-E)}$. Although $\Gamma_{a(b)}$ are not symmetries of the first-quantized Hamiltonian $\tilde{H}_E$, they become symmetries of the many-body Hamiltonian corresponding to $\tilde{H}_E$ after second quantization: $\hat{\Gamma}_{a(b)}=\Gamma_{a(b)}\hat{K}_{a(b)}\hat{P}_{a(b)}$, where $\hat{K}_{a(b)}$ and $\hat{P}_{a(b)}=\prod_{js}(c^\dagger_{js,a(b)}+c_{js,a(b)})$ are the complex conjugation and particle-hole conjugation on the $H_{E(-E)}$ degrees of freedom (not affecting the other copy $H_{-E(E)}$), with $j$ and $s$ being the position and (pseudo-)spin indices.
Whether the extended Hamiltonian is gapped or not, an important property, depends on the dimension of the system. For 1D systems, if the eigenstate $|E\rangle_{\rm skin}$ with $E\neq0$ is a skin mode of the non-Hermitian Hamiltonian $H$, then $\tilde{H}_{E}$ possesses a gapped bulk spectrum, for which the proof is straightforward: $\det(\tilde{H}_{E}(\vec{k}))=|\det(H(\vec{k})-E)\det(H(\vec{k})+E)|^2\neq 0,\forall \vec{k}\in 1BZ$, because $E$ is excluded from the PBC spectrum of $H$, indicating that the zero value is not included in the bulk continuum of $\tilde{H}_{E}$. As an aside, due to the chiral symmetry and the gapped bulk spectrum, $H_{\pm E}$ belongs to the AIII class classified by the $Z$ integer topological invariant $n_{\pm E}=\int\frac{dk}{2\pi i}\frac{\partial}{\partial k}\ln\det(H(k)\mp E)$, which is nothing but the winding number of the PBC spectrum of $H$ around the reference point $\pm E$. In higher dimensions, the OBC spectrum of the non-Hermitian system is often covered by the PBC spectrum in the thermodynamic limit (see Fig.~\ref{Fig.2Dspec}(a1, b1) for example), and thus $\exists \vec{k}\in BZ$ s.t. $\det(\tilde{H}_{E}(\vec{k}))=0$, indicating the gapless bulk of $\tilde{H}_E$.
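As a cross-check, the winding number $n_{\pm E}$ can be evaluated numerically by accumulating the phase of $\det(H(k)\mp E)$ around the Brillouin zone; a minimal sketch, assuming a user-supplied Bloch Hamiltonian callable, reads:
\begin{verbatim}
import numpy as np

def winding_number(H_k, E, n_k=400):
    """Winding of det(H(k) - E) around zero as k sweeps the 1D Brillouin zone."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False)
    phases = []
    for k in ks:
        M = H_k(k)
        phases.append(np.angle(np.linalg.det(M - E * np.eye(M.shape[0]))))
    phases = np.array(phases)
    dphi = np.diff(np.concatenate([phases, phases[:1]]))
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi   # wrap increments to (-pi, pi]
    return int(np.rint(dphi.sum() / (2.0 * np.pi)))
\end{verbatim}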
\section{Localization properties of the skin modes and topological zero modes}\label{sec.2}
In this section, we further map the skin modes to topological zero modes of the extended Hamiltonian and by this means discuss the localization properties of the skin modes. From the right and left eigenvectors of the pair of skin modes of $H$ with eigenenergy $\pm E$ we obtain four topological zero modes of $\tilde{H}_{E}$ under OBC
\begin{equation}\label{eq.zeromode}
|\chi_{1,2,3,4}\rangle_{\rm topo}=\left(\begin{array}{c}
|-E\rangle_{\rm skin}\\
\\
\\
\\
\end{array}\right),\left(\begin{array}{c}
\\
|E\rangle\rangle_{\rm skin}\\
\\
\\
\end{array}\right),\left(\begin{array}{c}
\\
\\
|E\rangle_{\rm skin}\\
\\
\end{array}\right),\left(\begin{array}{c}
\\
\\
\\
|-E\rangle\rangle_{\rm skin}
\end{array}\right).
\end{equation}
Since the eigenmodes of $H$ with eigenenergy $E$ have an exact one-to-two correspondence to the zero modes of $\tilde{H}_E$, if no symmetries of $H$ other than the PHS are assumed and thus both skin modes $|\pm E\rangle_{\rm skin}$ are non-degenerate, there are only four zero modes of $\tilde{H}_E$. These zero modes are related by the PHS $\tilde{S}=\tilde{U}\tilde{P}$ as
\begin{eqnarray}
\tilde{U}|\chi_{1}\rangle_{\rm topo}^*=|\chi_{2}\rangle_{\rm topo},&\ &\tilde{U}|\chi_{3}\rangle_{\rm topo}^*=-|\chi_{4}\rangle_{\rm topo},\nonumber\\
\tilde{U}|\chi_{2}\rangle_{\rm topo}^*=|\chi_{1}\rangle_{\rm topo},&\ &\tilde{U}|\chi_{4}\rangle_{\rm topo}^*=-|\chi_{3}\rangle_{\rm topo}.
\end{eqnarray}
The localization properties of the topological zero modes are determined by the symmetries. (i) Since $\tilde{U}$ is a local operation, $|\chi_{1,2}\rangle_{\rm topo}$ localize at the same boundary, and so do $|\chi_{3,4}\rangle_{\rm topo}$. (ii) In the 1D case, the topological zero modes $|\chi_{1}\rangle_{\rm topo}$ and $|\chi_{4}\rangle_{\rm topo}$ have opposite chiralities of $\Gamma_a$ and are thus localized at different ends. Therefore, $|\chi_{1}\rangle_{\rm topo}$ and $|\chi_{2}\rangle_{\rm topo}$ localize at the same end while $|\chi_{3}\rangle_{\rm topo}$ and $|\chi_{4}\rangle_{\rm topo}$ localize at the other end. For higher-dimensional systems, the only difference is that the bulk spectrum of $\tilde{H}_E$ is usually gapless, and the mapped zero modes are essentially the boundary (corner) modes of a higher-order topological semimetal. Without protection by other symmetries, there are no further zero modes, and the eigenstates with opposite chiralities must localize separately. We can then draw the conclusion holding in all dimensions: $|\chi_{1,2}\rangle_{\rm topo}$ localize at the same boundary while $|\chi_{3,4}\rangle_{\rm topo}$ localize at the opposite boundary.
For completeness, we discuss the case with $E=0$, where the extended Hamiltonian has more symmetries and the results above no longer hold. For the topological zero modes of the non-Hermitian Hamiltonian $H$ the analysis can be carried out without the help of the extended Hamiltonian. Applying the PHS of $H$ to a topological zero mode of $H$, $|0\rangle$ and $\langle\langle 0|$, we obtain another pair of right and left eigenvectors $U|0\rangle\rangle^*$ and $\langle 0|^*U$. There are two possibilities: (i) $|0\rangle$ and $U|0\rangle\rangle^{*}$ are two different zero modes of $H$, which means $|0\rangle$ and $\langle0|U$ (as the left eigenvector of $U|0\rangle\rangle^{*}$) are Hermitian orthogonal, and (ii) $|0\rangle\propto U|0\rangle\rangle^{*}$ are the right eigenstates of the same zero mode. If (i) is true, $\langle0|U(|0\rangle^{*})=0$. But since in our case $UU^{*}=1$ (rather than $UU^{*}=-1$, in which case $\langle0|U(|0\rangle^{*})=\langle0|U^{T}(|0\rangle^{*})=-\langle0|U(|0\rangle^{*})=0$), $\langle0|U|0\rangle^{*}$ should generally be nonzero if $|0\rangle$ is a topological zero mode. Thus $\langle0|U$ and $|0\rangle$ should be the left and right eigenvectors of the same topological zero mode of $H$ and are localized at the same end.
\section{Nonlocal correspondence of the skin modes}\label{sec.3}
In this section we establish the nonlocal correspondence of the topological zero modes of the extended Hermitian Hamiltonian and then map this back to obtain the nonlocal correspondence of the non-Hermitian skin modes. We first show that the topological zero modes of $\tilde{H}_{E}$ localized in different boundaries are related by emergent nonlocal symmetry $\tilde{S}'=\hat{\Gamma}_a\tilde{S}$, which is the symmetry of the second-quantized Hamiltonian of $\tilde{H}_E$ and has no corresponding symmetry in the first-quantized form. The nonlocal nature of $\tilde{S}'$ is manifested by its action on the many-body state $\gamma^\dagger_{3}|\Phi_{\rm bulk}\rangle$ with $|\Phi_{\rm bulk}\rangle=\prod_{E'<0}c_{E',a}^{\dagger}\prod_{E''<0}c_{E'',b}^{\dagger}|vac\rangle$, where $\gamma_i^\dagger$ creates the topological zero mode $|\chi_i\rangle_{\rm topo}$ ($|\chi_3\rangle_{\rm topo}=(0,0,|E\rangle_{\rm skin},0)^T$) and the two products create all eigenstates with negative eigenenergies of $H_{E}$ and $H_{-E}$ respectively. By definition of $\hat{\Gamma}_{a}=\Gamma_a\hat{K}_a\prod_{js}(c^\dag_{js,a}+c_{js,a})$, we have
\begin{equation}
\hat{\Gamma}_{a}\gamma_{2,3}^{\dagger}\hat{\Gamma}_{a}^{-1}=\gamma_{2,3},\ \hat{\Gamma}_{a}c_{E',a}^{\dagger}\hat{\Gamma}_{a}^{-1}=c_{-E',a}.
\end{equation}
$\hat{\Gamma}_{a}$ doesn't act on the other block $H_{-E}$ and creates all the eigenstates of $H_{E}$ from the vacuum as $\hat{\Gamma}_{a}|vac\rangle=\gamma_2^\dagger\gamma^\dagger_3\prod_{E'}c_{E',a}^{\dagger}|vac\rangle$. The PHS $\tilde{S}$ acts on the eigen-operators as
\begin{equation}
\tilde{S}\gamma_{1,2,3,4}^\dag\tilde{S}^{-1}=\gamma_{2,1,4,3},\tilde{S}c^\dagger_{E',a(b)}\tilde{S}^{-1}=c_{-E',b(a)},
\end{equation}
and creates all the eigenstates of $\tilde{H}_E$ from the vacuum as $\tilde{S}|vac\rangle=\gamma_1^\dagger\gamma_2^\dagger\gamma_3^\dagger\gamma^\dagger_4\prod_{E'}c_{E',a}^{\dagger}c_{E',b}^{\dagger}|vac\rangle$.
Collecting the facts above,
\begin{align}
\tilde{S}' \gamma_3^{\dagger}|\Phi_{\rm bulk}\rangle &=\hat{\Gamma}_{a}(\tilde{S}\gamma_3^\dagger\tilde{S}^{-1})\prod_{E'<0}(\tilde{S}c_{E',a}^{\dagger}\tilde{S}^{-1})(\tilde{S}c_{E',b}^{\dagger}\tilde{S}^{-1})\tilde{S}|vac\rangle\nonumber\\
&=\hat{\Gamma}_{a}\gamma_4\prod_{E'>0}c_{E',b}c_{E',a}\gamma_1^\dagger\gamma_2^\dagger\gamma_3^\dagger\gamma_4^\dagger\prod_{E''}c_{E'',a}^{\dagger}c_{E'',b}^{\dagger}|vac\rangle\nonumber\\
&=\hat{\Gamma}_{a}\gamma_1^{\dagger}\gamma_2^{\dagger}\gamma_3^{\dagger}\prod_{E'<0}c_{E',a}^{\dagger}c_{E',b}^{\dagger}|vac\rangle\nonumber\\
&=\gamma_1^{\dagger}\gamma_2\gamma_3\prod_{E'<0}c_{-E',a}^{\dagger}c_{E',b}^{\dagger}\gamma_2^\dag\gamma_3^\dag\prod_{E''}c_{E',a}^{\dagger}|vac\rangle\nonumber\\
&=\gamma_1^{\dagger}|\Phi_{\rm bulk}\rangle.
\end{align}
Noting that $\tilde{S}'$ leaves the bulk states invariant, to obtain the direct relation between the zero modes it is natural to project the symmetry onto the boundary, $S_{\rm proj}=\Pi\tilde{S}'\Pi$, where $\Pi$ is the projection operator onto the subspace of boundary modes. In this way it is clear that $|\chi_1\rangle_{\rm topo}$ and $|\chi_3\rangle_{\rm topo}$ are nonlocally related by
\begin{equation}
S_{\rm proj}|\chi_3\rangle_{\rm topo}=|\chi_1\rangle_{\rm topo}.
\end{equation}
Finally, we map this nonlocal correspondence back to the non-Hermitian system and uncover the nonlocal correspondence of the skin modes. Remembering that the wavefunction of a skin mode is identical to that of the corresponding topological zero mode by Eq.~\eqref{eq.zeromode}, we know that the pair of skin modes $|E\rangle_{\rm skin}$ and $|-E\rangle_{\rm skin}$ are localized at opposite positions and are related to each other by
\begin{equation}
S_{\rm proj}|E\rangle_{\rm skin}=|-E\rangle_{\rm skin}.
\end{equation}
Similarly, $S_{\rm proj}$ relates the other two zero modes directly, $S_{\rm proj}|\chi_2\rangle_{\rm topo}=|\chi_4\rangle_{\rm topo}$, and likewise the left eigenvectors of the pair of skin modes, $\langle\langle E|_{\rm skin}S_{\rm proj}^\dag=\langle\langle -E|_{\rm skin}$. Since $S_{\rm proj}$ originates from the PHS of $H$, we have established the emergent nonlocal correspondence of the skin modes of non-Hermitian systems with local PHS. As a remark, since our whole proof exploits only the basic properties of the local symmetries, this should be a universal result for non-Hermitian systems with PHS, including quasicrystals, amorphous systems and fractals. For example, one can see this symmetric NHSE in a quasiperiodic system as shown in Fig.~\ref{Fig.1Dspec}(b3). The corresponding Hermitian system, {\em i.e.}, with loss rate $\gamma_\downarrow=0$, is in the localized phase, and its total wavefunction distribution is uniform in real space.
\section{Optical Lattice models with particle-hole symmetry in 1D, 2D and 3D}\label{sec.4}
In this section we propose non-Hermitian lattice models with the PHS in different dimensions that are readily realizable in Raman optical lattice experiments. For each model we show numerically the spectrum and the wavefunction distribution in real space and give the experimental scheme in the Raman optical lattice.
\subsection{1D non-Hermitian model with PHS}
Based on previous studies of 1D Raman optical lattices~\cite{Liu2013SM, Jo2018SM}, here we propose a 1D non-Hermitian lattice model with the PHS which exhibits the nonlocal symmetric skin effect as we predicted:
\begin{align}
H_{1D} = & \sum_{j}(m_{z}+i\gamma_{\downarrow}/2)\bigr(|j\uparrow\rangle\langle j\uparrow|-|j\downarrow\rangle\langle j\downarrow|\bigr)-t_{0}\bigr(|j\uparrow\rangle\langle j+1,\uparrow|-e^{-iK}|j\downarrow\rangle\langle j+1\downarrow|+h.c.\bigr)\nonumber\\
&+t_{so}\bigr(|j\downarrow\rangle\langle j+1\uparrow|-e^{iK}|j+1\downarrow\rangle\langle j\uparrow|+h.c.\bigr), \label{eq.1Dmodel}
\end{align}
where the non-Hermiticity is given by the $\gamma_\downarrow$ term, the Zeeman term $m_z$ and the spin-conserved (spin-flip) hopping coefficients $t_{0(so)}$ are controllable constants, and $K$ is the projection of the Raman beam's wave vector on the axis of the optical lattice. The PHS is $S=UP=e^{-iKj}\sigma_x P$, where the Pauli matrix acts on the spin degree of freedom at the $j$-th site and $P$ is the transpose operation. As we can see in Fig.~\ref{Fig.1Dspec}(b1-b3), the total wavefunction distribution of this model is symmetric because the skin modes related by the PHS show up symmetrically at the two ends. This also agrees with the fact that the spectral winding numbers around the interior points of the band are $\pm1$ for the two PBC bands on the right and left of the complex plane, respectively, in Fig.~\ref{Fig.1Dspec}(a1, a2). Moreover, the symmetric skin effect is robust against perturbations whether they break the PHS or not, because the PHS itself does not bring degeneracy in general and thus the skin modes at different ends are not easily mixed by the perturbations. We can see this robustness in Fig.~\ref{Fig.1Dspec}(a3, b3). Besides, a 1D non-Hermitian system with a point gap and respecting the PHS is topologically classified by a $Z_2$ index~\cite{Satotopo2019SM}, and the model here can be either trivial or topological, as shown in Fig.~\ref{Fig.1Dspec}(a1, a2), corresponding to the index $\nu=0,1$ respectively. In the topological case there is a pair of topological zero modes similar to the Majorana zero modes in topological superconductors, whose right and left eigenvectors are localized at the same end, just as predicted, while a skin mode's right and left eigenvectors are always localized at different ends, as shown in Fig.~\ref{Fig.1Dspec}(e1-e4).
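The OBC spectra and skin-mode profiles of Fig.~\ref{Fig.1Dspec} can be reproduced by a direct real-space diagonalization. A minimal sketch consistent with Eq.~\eqref{eq.1Dmodel} is given below; the basis ordering and boundary handling are our own illustrative choices, with parameter values as in Fig.~\ref{Fig.1Dspec}(a2).
\begin{verbatim}
import numpy as np

def h_1d(L=60, t0=1.0, tso=1.0, mz=0.3, gamma=0.6, K=2 * np.pi / 3):
    """Real-space (OBC) matrix of the 1D non-Hermitian model with PHS."""
    def up(j): return 2 * j          # spin-up index of site j
    def dn(j): return 2 * j + 1      # spin-down index of site j
    H = np.zeros((2 * L, 2 * L), dtype=complex)
    for j in range(L):
        H[up(j), up(j)] = mz + 0.5j * gamma
        H[dn(j), dn(j)] = -(mz + 0.5j * gamma)
        if j < L - 1:
            # spin-conserved hopping and its Hermitian conjugate
            H[up(j), up(j + 1)] = -t0
            H[up(j + 1), up(j)] = -t0
            H[dn(j), dn(j + 1)] = t0 * np.exp(-1j * K)
            H[dn(j + 1), dn(j)] = t0 * np.exp(1j * K)
            # spin-flip hopping and its Hermitian conjugate
            H[dn(j), up(j + 1)] = tso
            H[up(j + 1), dn(j)] = tso
            H[dn(j + 1), up(j)] = -tso * np.exp(1j * K)
            H[up(j), dn(j + 1)] = -tso * np.exp(-1j * K)
    return H

evals, evecs = np.linalg.eig(h_1d())
density = (np.abs(evecs) ** 2).sum(axis=1)   # summed right-eigenvector weight
\end{verbatim}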
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{1Dspec.pdf}
\caption{\label{Fig.1Dspec}Spectrum, wavefunction distributions and the experiment scheme of the 1D model. (a1, a2) The OBC (blue) and PBC (red) spectra of the model in Eq.~\eqref{eq.1Dmodel} with $t_0=1,m_z=0.3,\gamma_\downarrow=0.6,K=2\pi/3$ and $t_{so}=0.7,1.0$ respectively, corresponding to the trivial and topological cases. (a3) The OBC spectrum when $t_{so}=0.7$, $t_\downarrow=2t_\uparrow=2$. (b1, b2) The real space wavefunction distribution (summed over all right eigenstates and spin degree of freedom) for the trivial and topological cases. (b3) The wavefunction distribution when the symmetry is broken (blue dots) corresponding to (a3), and when the Zeeman term in Eq.~\eqref{eq.1Dmodel} is in a quasiperiodic form $(m_z\cos(2\pi j\alpha)+\rm I\gamma_\downarrow/2)\sigma_z$ (green line), where $m_z=0.3$ and $\alpha=(\sqrt{5}-1)/2$. (c) The scheme of Raman optical lattice experiment to realize the 1D model in Eq.\eqref{eq.1Dmodel}. The $^{\rm{87}}\rm{Rb}$ atoms are confined in the 1D optical lattice along $y$ direction formed by the standing wave (red beam) and the spin states are defined to be the atomic states $|\uparrow\rangle=|1,-1\rangle,|\downarrow\rangle=|1,0\rangle$ of the $F=1$ manifold in the hyperfine structure. (d) All relevant transitions for the atoms, including the two photon process driven by the Raman beam (black running wave in (c)) and the standing wave inducing spin-flipped couplings (SOC). The loss beam (green) is applied to drive the transition between the spin down state and some excited state with a short lifetime, giving the damping of the spin down state effectively. (e1-e4) The wavefunction distributions of the right and left eigenvectors of a pair of skin modes $\psi_{\pm E,R/L}$ with eigenenergy $\pm E=\pm(0.27-0.05\rm{I})$ (denoted by red and blue respectively) and the topological zero modes $\psi_{\rm topo1/2,R/L}$ of the model in Eq.~\eqref{eq.1Dmodel} when $t_{so}=1.0$.}
\end{figure}
The experiment scheme illustrated by Fig.~\ref{Fig.1Dspec}(c, d) prepares a $^{\rm{87}} \rm{Rb}$ cold-atom system described by
\begin{align}
H_{1D}' = &\int dy \Big[ \sum_{s=\uparrow,\downarrow} |ys\rangle\bigr(-\frac{\hbar^2\partial_y^2}{2m}+V(y)+\frac{\delta}{2}(\sigma_z)_{ss}\bigr)\langle ys|\nonumber\\
&-i\gamma_{\downarrow}|y\downarrow\rangle\langle y\downarrow|+\bigr(M_R(y)|y\uparrow\rangle\langle y\downarrow|+h.c.\bigr)\Big].
\end{align}
where $\delta$ is the two photon detuning simulating the Zeeman field, the optical lattice potential $V(y)\propto |E_y|^2\cos^2(k_0y)$ comes from the AC stark shift by the standing wave, and the Raman potential $M_R(y)\propto E_{R,z}E_y\cos(k_0y)e^{iK_y y}$ is given by the two photon process formed by the standing waves and the Raman beam. The non-Hermiticity is in the damping of the spin down component, {\it i.e.} the $\gamma_\downarrow$ term, which could be achieved effectively by the transition between the spin down state and an excited atomic state with a short lifetime~\cite{Jo2022SM}, driven by the green beam in Fig.~\ref{Fig.1Dspec}(d). Under the s-band tight-binding approximation, the effective model is captured by
\begin{align}
H_{1D} =& \sum_{j}(m_{z}+i\gamma_{\downarrow}/2)\bigr(|j\uparrow\rangle\langle j\uparrow|-|j\downarrow\rangle\langle j\downarrow|\bigr)-t_{0}\bigr(|j\uparrow\rangle\langle j+1\uparrow|-|j\downarrow\rangle\langle j+1\downarrow|+h.c.\bigr)\nonumber\\
&+t_{so}\Big[e^{-iKj}\bigr(|j\downarrow\rangle\langle j+1\uparrow|-|j+1\downarrow\rangle\langle j,\uparrow|\bigr)+h.c.\Big],\label{eq.1Dmodel2}
\end{align}
where a staggered gauge transformation on the spin-down states $|j\downarrow\rangle\rightarrow (-1)^j |j\downarrow\rangle$ and an overall energy shift $-i\gamma_\downarrow/2$ have been adopted. The coefficients are determined by
\begin{align}
&t_0=\int dy \phi_{s}(y)[\frac{\hbar^2\partial_y^2}{2m}-V(y)]\phi_{s}(y-a),\nonumber\\
&t_{so}=\int dy \phi_{s}(y)M_R(y)\phi_{s}(y-a),\ m_z=\delta.
\end{align}
With a gauge transformation $|j\downarrow\rangle\rightarrow e^{-iKj}|j\downarrow\rangle$ we can rewrite the Hamiltonian exactly as in Eq.~\eqref{eq.1Dmodel}.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{2Dspec.pdf}
\caption{\label{Fig.2Dspec}Spectrum, wavefunction distribution and experiment scheme of the 2D models. (a1, a2) The OBC (blue) and PBC (red) spectra and the wavefunction distribution (summed over all eigenstates and spin) of the model in Eq.~\eqref{eq.2Dmodel} when $t_x=t_y=1, t_{so}^y=-it_{so}^x=t_{so}=0.4, m_z=1.3,\gamma_\downarrow=0.6,\vec{K}=\pi/a(\cos\theta,\sin\theta), \theta=50^\circ$ and (b1, b2) The results of Eq.~\eqref{eq.Dirac} when $t_x=t_y=1, t_{so}^y=t_{so}=0.4, m_z=0.3,\gamma_\downarrow=0.6,\theta=50^\circ$. (c1, c2) The right eigenvectors' distribution of a pair of skin modes of Eq.~\eqref{eq.2Dmodel} with eigenenergies $\pm E=\mp(1.63+0.17 \rm{I})$. (d1) The scheme of Raman optical lattice experiment to realize the 2D model in Eq.\eqref{eq.2Dmodel} (when $t_{so}^y=-it_{so}^x$). The $^{\rm{87}}\rm{Rb}$ atoms are confined in the 2D optical lattice formed by the standing waves (red and blue beams) and the spin states are defined to be the atomic states $|\uparrow\rangle=|1,-1\rangle,|\downarrow\rangle=|1,0\rangle$ in the $F=1$ manifold. (d2) All relevant transitions for the atoms, including the two photon process driven by the Raman beam (black running wave in (d1)) and the standing waves, inducing spin-flipped couplings (SOC). The loss beam (green) is applied to drive the transition between the spin down state and excited states to give the non-Hermiticity just as in the 1D model.}
\end{figure}
\subsection{2D non-Hermitian models with PHS}
Based on previous studies of 2D Raman optical lattices~\cite{Liu2018SM, LiuPan2018SM}, we extend the 1D model to a 2D non-Hermitian lattice model:
\begin{align}
H_{2D} =& \sum_{\vec{j}}\Big\{(m_{z}+i\gamma_{\downarrow}/2)\bigr(|\vec{j}\uparrow\rangle\langle\vec{j}\uparrow|-|\vec{j}\downarrow\rangle\langle \vec{j}\downarrow|\bigr)
-\sum_{k=x,y}\Big[t_0^k\bigr(|\vec{j}\uparrow\rangle\langle\vec{j}+\vec{e}_k\uparrow|-e^{-i\vec{K}\cdot \vec{e}_k}|\vec{j}\downarrow\rangle\langle\vec{j}+\vec{e}_k\downarrow|\bigr)\nonumber\\
&+t_{\rm so}^k\bigr(|\vec{j}\downarrow\rangle\langle\vec{j}+\vec{e}_k\uparrow|-e^{i\vec{K}\cdot \vec{e}_k}|\vec{j}+\vec{e}_k\downarrow\rangle\langle\vec{j}\uparrow|\bigr)+h.c.\Big]\Big\},\label{eq.2Dmodel}
\end{align}
respecting the PHS $S=UP=e^{-i\vec{K}\cdot\vec{j}}\sigma_x P$, where $\vec{K}$ is the wave vector of the Raman beam and $t_{0(\rm so)}^k$, $k=x,y$, are the spin-conserved (spin-flip) hopping coefficients in the different directions. This model hosts a symmetric corner skin effect due to the PHS, as shown in Fig.~\ref{Fig.2Dspec}(a1, a2, c1, c2). Similar to the 1D case, to see the experimental feasibility we apply a gauge transformation $|\vec{j}\downarrow\rangle\rightarrow e^{-i\vec{K}\cdot\vec{j}}|\vec{j}\downarrow\rangle$ and rewrite the Hamiltonian as:
\begin{align}
H_{2D} =& \sum_{\vec{j}}\Big\{(m_{z}+i\gamma_{\downarrow}/2)\bigr(|\vec{j}\uparrow\rangle\langle\vec{j}\uparrow|-|\vec{j}\downarrow\rangle\langle \vec{j}\downarrow|\bigr)
-\sum_{k=x,y}\Big[t_0^k\bigr(|\vec{j}\uparrow\rangle\langle\vec{j}+\vec{e}_k\uparrow|-|\vec{j}\downarrow\rangle\langle\vec{j}+\vec{e}_k\downarrow|\bigr)\nonumber\\
&+t_{\rm so}^ke^{-i\vec{K}\cdot \vec{j}}\bigr(|\vec{j}\downarrow\rangle\langle\vec{j}+\vec{e}_k\uparrow|-|\vec{j}+\vec{e}_k\downarrow\rangle\langle\vec{j}\uparrow|\bigr)+h.c.\Big]\Big\},\label{eq.2Dmodel2}
\end{align}
which is the low energy effective model under the s-band tight-binding approximation of the cold-atom system shown in Fig.~\ref{Fig.2Dspec}(d1, d2) described by
\begin{align}
H_{2D}' = &\int d^2\vec{r} \Big[ \sum_{s=\uparrow,\downarrow} |\vec{r}s\rangle\bigr(-\frac{\hbar^2\vec{\nabla}^2}{2m}+V(\vec{r})+\frac{\delta}{2}(\sigma_z)_{ss}\bigr)\langle\vec{r}s|\nonumber\\
&-i\gamma_{\downarrow}|\vec{r}\downarrow\rangle\langle\vec{r}\downarrow|+\bigr(M_R(\vec{r})|\vec{r}\uparrow\rangle\langle\vec{r}\downarrow|+h.c.\bigr)\Big].
\label{eq.2Dcontinous}
\end{align}
The optical lattice potential $V(\vec{r})= V_{0x}\cos^2(k_0x)+V_{0y}\cos^2(k_0y)$ (with $V_{0x/y}\propto |E_{x/y}|^2$) and the Raman potential $M_R(\vec{r})=(M_{0x}\cos(k_0x)+M_{0y}\cos(k_0y))e^{i\vec{K}\cdot\vec{r}}$ (with $(M_{0x},M_{0y})\propto (E_{R}E_x,E_y E_{R})$) are induced by the standing waves and the Raman beam. The damping term is achieved by transitions to excited states, just as in the 1D model. The coefficients are determined by:
\begin{align}
&t_0^k=\int d^2\vec{r} \phi_{s}(\vec{r})[\frac{\hbar^2\vec{\nabla}^2}{2m}-V(\vec{r})]\phi_{s}(\vec{r}-a\vec{e}_k),\nonumber\\
&t_{so}^k=\int d^2\vec{r} \phi_{s}(\vec{r})M_R(\vec{r})\phi_{s}(\vec{r}-a\vec{e}_k),\ m_z=\delta,
\end{align}
where $\phi_s(\vec{r})$ is the s-band Wannier function, so that the hopping coefficients can be controlled by tuning the beams independently. For example, if we want $t_{so}^x=-it_{so}^y$ (so that a 2D Chern insulator is recovered without the loss term), the Raman beam can be circularly polarized so that its two linear components couple with the two optical beams respectively, driving the spin-flipped hoppings in the two directions in Eq.~\eqref{eq.2Dmodel2}. For the best observation of the symmetric NHSE, the effective loss rate $\gamma_\downarrow$ should be neither too small nor too large compared with $t_{0,so}^{x/y}$, because the system recovers Hermiticity in the two extremes and the localization reaches its maximum in the intermediate regime, as numerically illustrated by Fig.~\ref{Fig.varyingloss}. According to previous studies~\cite{LiuPan2018SM, Jo2022SM} and the calculation with the tight-binding approximation, appropriate parameters are $V_{0x/y}= 4.0,M_{0x/y}= 1.0,\delta=0.1,\gamma_\downarrow=0.1$ in units of the recoil energy $E_r=\frac{\hbar^2k_r^2}{2m}$, which give $t_{0}\approx 0.17,t_{so}^{x/y}\approx 0.07$ and $m_z=0.05$. The numerical results of the model with similar parameters given in Figs.~\ref{Fig.2Dspec} and~\ref{Fig.varyingloss} show the clear existence of the NHSE in this regime.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{varyingloss.pdf}
\caption{\label{Fig.varyingloss} The symmetric NHSE with varying loss rate $\gamma_\downarrow$. The localization of the skin modes, characterized by the maximum of the wavefunction distribution on the lattice (circled in the figure), is strongest when $\gamma_\downarrow$ is comparable to the coefficients of the Hermitian part, $t_{0}^{x/y}=1.0$, $t_{so}^{x/y}=0.4$.}
\end{figure}
If the Raman beam's electric field is linearly polarized along the $z$ direction, we realize a 2D Dirac semimetal with non-Hermiticity described by:
\begin{align}
H_{Dirac} =& \sum_{\vec{j}}\Big\{(m_{z}+i\gamma_{\downarrow}/2)\bigr(|\vec{j}\uparrow\rangle\langle\vec{j}\uparrow|-|\vec{j}\downarrow\rangle\langle \vec{j}\downarrow|\bigr)
-\sum_{k=x,y}\Big[t_0^k\bigr(|\vec{j}\uparrow\rangle\langle\vec{j}+\vec{e}_k\uparrow|-e^{-i\vec{K}\cdot \vec{e}_k}|\vec{j}\downarrow\rangle\langle\vec{j}+\vec{e}_k\downarrow|\bigr)\Big]\nonumber\\
&+t_{\rm so}\bigr(|\vec{j}\downarrow\rangle\langle\vec{j}+\vec{e}_y\uparrow|-e^{i\vec{K}\cdot \vec{e}_y}|\vec{j}+\vec{e}_y\downarrow\rangle\langle\vec{j}\uparrow|\bigr)+h.c.\Big\},\label{eq.Dirac}
\end{align}
which respects the PHS $S=U\hat{P}=e^{-i\vec{K}\cdot\vec{j}}\sigma_x\hat{P}$ and hosts the symmetric NHSE, as shown in Fig.~\ref{Fig.2Dspec}(b1, b2).
\begin{figure}[h]
\centering
\includegraphics[width=0.7\columnwidth]{3D.pdf}
\caption{\label{Fig.3D} OBC spectrum and wavefunction distribution of the 3D non-Hermitian Weyl semimetal in Eq.~\eqref{eq.3D} where $t_x=t_y=t_z=1, t_{so}=0.4, m_z=0.3,\gamma_\downarrow=0.6,\vec{K}=\pi/a(\cos\theta,\sin\theta,0), \theta=50^\circ$ with the size of the system $L_x=L_y=15,L_z=7$. (a) The OBC spectrum on the complex plane. (b) The wavefunction distribution on the 3D lattice summed over all eigenstates and the spin degree of freedom, in which larger blue dots indicate larger values than the small white dots.}
\end{figure}
\subsection{3D non-Hermitian Weyl semimetal with PHS}
A 3D non-Hermitian Weyl semimetal lattice model is given by
\begin{align}
H_{3D} =& \sum_{\vec{j}}\Big\{(m_{z}+i\gamma_{\downarrow}/2)\bigr(|\vec{j}\uparrow\rangle\langle\vec{j}\uparrow|-|\vec{j}\downarrow\rangle\langle \vec{j}\downarrow|\bigr)
-\sum_{k=x,y,z}\Big[t_0^k\bigr(|\vec{j}\uparrow\rangle\langle\vec{j}+\vec{e}_k\uparrow|-e^{-i\vec{K}\cdot \vec{e}_k}|\vec{j}\downarrow\rangle\langle\vec{j}+\vec{e}_k\downarrow|\bigr)\Big]\nonumber\\
&+\sum_{k=x,y}t_{\rm so}^k\bigr(|\vec{j}\downarrow\rangle\langle\vec{j}+\vec{e}_k\uparrow|-e^{i\vec{K}\cdot \vec{e}_k}|\vec{j}+\vec{e}_k\downarrow\rangle\langle\vec{j}\uparrow|\bigr)+h.c.\Big\},\label{eq.3D}
\end{align}
whose OBC spectrum and symmetric skin effect are shown in Fig.~\ref{Fig.3D}. This model can be realized by replacing the trap along the $z$ direction with an optical lattice beam in the 2D setup of Fig.~\ref{Fig.2Dspec}(d1), with the frequency of this standing wave detuned from that of the $x,y$ optical beams to prevent spin-flipped hoppings along $z$~\cite{Liu2020SM, LiuPan2021SM}.
|
{
"arxiv_id": "2302.13277",
"language": "en",
"timestamp": "2023-03-02T02:09:17",
"url": "https://arxiv.org/abs/2302.13277",
"yymm": "2302"
} | \section{Introduction}
Speech emotion recognition (SER) refers to recognizing human emotion and affective states from audio. Benefiting from different emotional datasets annotated with categorical or dimensional labels, deep neural networks (DNNs) have shown great capability of recognizing emotion from speech efficiently \cite{wu2022neural}. The standard practice is training from scratch with conventional features \cite{rajamani2021novel}. However, the relatively small SER datasets \cite{busso2008iemocap}, containing only a few speakers and utterances, and conventional hand-crafted acoustic features restrict the performance \cite{chen2022wavlm}.\blfootnote{\ This paper is funded by Special Project on Support for Regulative Intelligence Technology under the Shanghai Science and Technology Innovation Plan 2022 on "Research on the Theory, Assessment System and Prototype System for Enhancing Emotional Cognition in the General Population" (Grant No. 22511105901).\\
$^{\star}$ Corresponding author. amzhou@cs.ecnu.edu.cn}
Inspired by the success of pre-trained features like wav2vec 2.0 \cite{baevski2020wav2vec} and HuBERT \cite{hsu2021hubert} in other speech tasks, researchers have begun to validate their superiority over hand-engineered features in SER \cite{xia2021temporal}. Some works present fusion methods of pre-trained and traditional features \cite{pepino2021emotion}, while others explore task-adaptive pre-training strategies for SER \cite{chen2021exploring}. However, most previous works focus mainly on exploiting pre-trained representations and adopt just a linear head on top of the pre-trained model, neglecting the design of the downstream network. Building specialist downstream models for speech tasks is also necessary to make full use of pre-trained representations.
\begin{figure}[!t]
\centering
\subfigure[Origin.]{
\includegraphics[width=0.33\linewidth]{Figures/Figure1a.pdf}
\label{fig1:a}
}
\hfil
\subfigure[Unidirection.]{
\includegraphics[width=0.25\linewidth]{Figures/Figure1b.pdf}
\label{fig1:b}
}
\hfil
\subfigure[Bidirection.]{
\includegraphics[width=0.25\linewidth]{Figures/Figure1c.pdf}
\label{fig1:c}
}
\caption{The original speech representation and temporal shift.}
\label{fig1}
\end{figure}
Therefore, we take the initiative to explore the architecture of downstream networks for SER on pre-trained representations and propose a parameter-free, FLOP-free temporal shift module to promote channel mingling. We first draw inspiration from the advancement of Transformers \cite{vaswani2017attention} as well as modern CNNs \cite{liu2022convnet} and investigate common network configurations to set strong baselines. Then, as shown in Figure \ref{fig1}, we introduce an efficient channel-wise network module in which part of the channels are shifted along the temporal dimension by one frame to mingle the past feature with the present one. The core idea of our temporal shift is to introduce channel-wise information mingling. It is noteworthy that the features are shifted only partially, which endows the whole model with a partial receptive-field enlargement, different from the full frame-level receptive-field enlargement obtained by stacking pre-trained CNN or Transformer blocks. Moreover, our temporal shift serves as a network module, similar to the shift operations in computer vision \cite{lin2019tsm}, rather than as the common augmentation used in audio processing \cite{wang22interspeech}. However, such a channel-wise partial shift seems to go against the alignment characteristic of many speech tasks \cite{graves2012sequence}: from the perspective of alignment, the information contained in the shifted channels becomes inaccessible to the original frame, so mingling comes with misalignment. To balance the trade-off between mingling and misalignment, we propose two strategies for applying temporal shift, namely proportion and placement. The proportion of shift adjusts the ratio of shifted channels, and the placement of shift can be unified into two manners (Figure \ref{fig3}). To the best of our knowledge, we are the first to propose a channel-wise shift operation in audio processing and apply temporal shift to three mainstream building blocks. The family of shift models, including ShiftCNN, Shiftformer and ShiftLSTM, all outperform state-of-the-art methods on the benchmark IEMOCAP dataset under both finetuning and feature extraction settings.
\section{Methodology}
In this section, we first illustrate the property of temporal shift and its application strategy. Then we specify the design of basic building blocks and the shift variants.
\subsection{Temporal Shift}
As motivated, our temporal shift introduces channel-wise information mingling into the standard building block. Part of the channels are shifted to the neighbouring time stamp and the remaining channels stay in place, as shown in Figure \ref{fig1}. The unidirectional shift mingles the past feature with the current one, while the bidirectional shift additionally combines the future feature with the current one.
Nonetheless, the partially shifted data leads to frame misalignment, and the original information contained in the shifted channels becomes inaccessible at that time stamp. Specifically, for each time stamp, the un-shifted features are forced to adapt to the neighboring shifted channels, while the shifted channels are moved away from the positions they used to occupy. To balance the trade-off between mingling and misalignment, we provide two strategies for temporal shift.
\textbf{Proportion of shift.} The proportion of shifted channels can be defined as the hyperparameter $\alpha$, which adjusts the trade-off between mingling and misalignment. Intuitively, if $\alpha$ is kept small enough, the negative effect can be negligible. If $\alpha$ is set large, the information mingling is promoted but the underlying misalignment may cause performance degradation. We evaluate variants with shift proportions in $\left \{ 1/2,1/4,1/8,1/16 \right \}$.
\textbf{Placement of shift.} Exploring the proper location of temporal shift in the neural network is essential. As shown in Figure \ref{fig3}, the manner of applying temporal shift to the model can be unified into two ways: residual shift and in-place shift. In-place shift (Figure \ref{fig3:a}) forces the model to fit the shifted data, so both the mingling and the misalignment effects are strong. Residual shift (Figure \ref{fig3:b}) is placed on a branch of the network, maintaining both the aligned and the misaligned data. The shifted representations from the residual connection can serve as a complement to the original representations, behaving like an \emph{ensemble} \cite{veit2016residual}.
In addition to these technical strategies, the underlying misalignment is also mitigated by the nature of the SER task on pre-trained representations itself. Specifically, SER focuses on invariant and discriminative representations rather than on the alignment between input and target sequences required in other speech tasks \cite{graves2012sequence}.
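As a concrete illustration, a minimal PyTorch-style sketch of the shift described above is given below. The zero-padding at the sequence boundary and the convention that $\alpha$ counts the shifted fraction per direction are our assumptions rather than a prescribed implementation.
\begin{verbatim}
import torch

def temporal_shift(x, alpha=0.25, bidirectional=False):
    # x: (batch, time, channels). Shift a fraction alpha of the channels
    # by one frame along time; boundary frames are zero-padded.
    _, _, C = x.shape
    n = int(C * alpha)
    out = x.clone()
    out[:, 1:, :n] = x[:, :-1, :n]           # past -> present
    out[:, 0, :n] = 0
    if bidirectional:                         # future -> present, second block
        out[:, :-1, n:2 * n] = x[:, 1:, n:2 * n]
        out[:, -1, n:2 * n] = 0
    return out
\end{verbatim}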
\begin{figure}[!t]
\centering
\subfigure[In-place Shift.]{
\includegraphics[width=0.55\linewidth]{Figures/Figure3a.pdf}
\label{fig3:a}
}
\subfigure[Residual Shift.]{
\includegraphics[width=0.38\linewidth]{Figures/Figure3b.pdf}
\label{fig3:b}
}
\caption{Two types of the building blocks of our temporal-shift networks, namely in-place shift and residual shift.}
\label{fig3}
\end{figure}
\subsection{Specification of Temporal Shift Models}
We insert the temporal shift module into three types of popular building blocks in SER: the convolutional neural network, the recurrent neural network, and the Transformer. The number of channels $C$ matches the representations of pre-trained wav2vec 2.0 Base and HuBERT Base. The number of blocks $B$ is set to keep the number of parameters roughly the same (9.5M). It is worth noting that our temporal shift can be inserted flexibly and the proportion of shift can be adjusted as a hyperparameter. For uniformity, we adopt the same architecture with a fixed proportion in our experiments to validate its effectiveness. The isotropic architectures are summarized as follows.
\begin{itemize}
\item ShiftCNN: $C=(768,3072,768), B=2, \alpha=1/16$
\item Shiftformer: $C=(768,3072,768), B=2, \alpha=1/4$
\item ShiftLSTM: $C=(768,1536), B=1, \alpha=1/4$
\end{itemize}
\textbf{CNN.} We draw inspiration from the advancement of ConvNeXt blocks \cite{liu2022convnet} and time-channel separable convolution. This pure convolutional architecture, described in Figure \ref{figtrans:a}, serves as a strong baseline. For its shift variant ShiftCNN with kernel size 7, the last block uses residual shift with a 1/16 proportion, which preserves channel-aligned modeling capability while adding subtle channel-wise information fusion.
\textbf{Transformer.} We employ the standard Transformer with relative positional encoding (RPE) as shown in Figure \ref{figtrans:b}. However, the standard Transformer with residual shift brings no further improvement (Table \ref{tableab}, residual$\rightarrow$shift of Transformer). Inspired by the recent literature \cite{yu2022metaformer}, we further sidestep self-attention and view the Transformer as a more general architecture consisting of a token mixer and an MLP. The resulting Shiftformer (Figure \ref{figtrans:c}), closer to an MLP-like architecture, replaces self-attention with temporal shift as the token mixer, adopting a more radical bidirectional shift. Our Shiftformer enjoys the effectiveness of the general Transformer architecture as well as the channel-wise mingling of temporal shift.
\textbf{RNN.} Bidirectional long short-term memory (LSTM) \cite{hochreiter1997long} is used to capture both past and future contexts. ShiftLSTM is constructed with a straightforward in-place shift at a moderate 1/4 shift proportion.
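For concreteness, one possible Shiftformer block built on the \texttt{temporal\_shift} sketch above is shown below, with self-attention replaced by a bidirectional shift as the token mixer. Layer sizes follow the $C=(768,3072,768)$ configuration; the pre-norm placement and the omission of dropout are our assumptions rather than the exact recipe.
\begin{verbatim}
import torch

class ShiftformerBlock(torch.nn.Module):
    def __init__(self, dim=768, hidden=3072, alpha=0.25):
        super().__init__()
        self.alpha = alpha
        self.norm1 = torch.nn.LayerNorm(dim)
        self.norm2 = torch.nn.LayerNorm(dim)
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.GELU(),
            torch.nn.Linear(hidden, dim))

    def forward(self, x):            # x: (batch, time, channels)
        # token mixer: bidirectional temporal shift on a residual branch
        x = x + temporal_shift(self.norm1(x), self.alpha, bidirectional=True)
        return x + self.mlp(self.norm2(x))
\end{verbatim}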
\begin{figure}[t]
\centering
\subfigure[CNN]{
\includegraphics[width=0.25\linewidth]{Figures/Figure_CNN.pdf}
\label{figtrans:a}
}
\hspace{0.03\linewidth}
\subfigure[Transformer]{
\includegraphics[width=0.245\linewidth]{Figures/Figure_transformer.pdf}
\label{figtrans:b}
}
\hspace{0.03\linewidth}
\subfigure[Shiftformer]{
\includegraphics[width=0.235\linewidth]{Figures/Figure_shiftformer.pdf}
\label{figtrans:c}
}
\caption{Block design of CNN, Transformer, and Shiftformer. $T$ and $C$ denote temporal and channel dimensions of feature maps respectively. Temporal shift can be inserted flexibly and our choice of residual shift is colored blue.}
\label{figtrans}
\end{figure}
\section{Experiments}
In this section, we first observe the main properties of the temporal shift module. Next we demonstrate the superiority of the family of shift models on the benchmark dataset IEMOCAP under both finetuning and feature extraction settings. Finally, we ablate key components of our proposed models.
\subsection{Experimental Setup}
\textbf{Dataset.} The interactive emotional dyadic motion capture database (IEMOCAP) \cite{busso2008iemocap} is a widely-used benchmark SER dataset with a total length of about 12 hours, containing five sessions, each with one male and one female speaker. We adopt leave-one-session-out 5-fold cross-validation and four emotion categories: 1636 happy, 1084 sad, 1103 angry, and 1708 neutral utterances. Each sample is clipped to 7.5 seconds and the sampling rate is 16 kHz.
\textbf{Evaluation.} We compare methods under two configurations, feature extraction and finetuning. Feature extraction \cite{pepino2021emotion} evaluates the downstream model alone: the frozen pre-trained model is taken as the feature extractor and the downstream models are trained in a supervised manner. Under the finetuning configuration, the parameters of both the upstream pre-trained model (only its Transformer layers) and the downstream model are updated during training.
The unweighted accuracy (UA) and weighted accuracy (WA) are used as evaluation metrics. UA is computed as the average over the individual class accuracies, while WA is the fraction of correct predictions over all samples. The reported accuracy is the average over the 5 folds.
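For clarity, a direct reading of these definitions is sketched below (not a reference implementation):
\begin{verbatim}
import numpy as np

def ua_wa(y_true, y_pred, num_classes=4):
    # WA: fraction of correct predictions; UA: mean of per-class recalls.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    wa = float((y_true == y_pred).mean())
    ua = float(np.mean([(y_pred[y_true == c] == c).mean()
                        for c in range(num_classes)]))
    return ua, wa
\end{verbatim}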
\textbf{Implementation Details.} We conduct experiments with two representative self-supervised pre-trained models, wav2vec 2.0 Base \cite{baevski2020wav2vec} and HuBERT Base \cite{hsu2021hubert}. We finetune the models using the Adam optimizer with a learning rate of $5\times10^{-4}$. For feature extraction, we use the AdamW optimizer with a cosine scheduler and a decay rate of 0.1. We train with a batch size of 32 for 100 epochs, including 5 epochs of linear warm-up.
\begin{figure}[!t]
\centering
\subfigure[]{
\includegraphics[width=0.47\linewidth]{Figures/Figure_CNN_kernel.pdf}
\label{figcrnn:a}
}
\hfil
\subfigure[]{
\includegraphics[width=0.47\linewidth]{Figures/Figure_RNN.pdf}
\label{figcrnn:b}
}
\caption{(a) The impact of temporal shift on ShiftCNN as kernel size varies. (b) The impact of shift proportion on ShiftLSTM and Shiftformer.}
\label{figcrnn}
\end{figure}
\subsection{Properties of shift variants}
We conduct experiments on HuBERT feature extraction and observe the behavior of temporal shift. Since our temporal shift module provides a channel-wise enlargement of the receptive field, its effect is closely related to the receptive field of the baseline model. Figure \ref{figcrnn:a} varies the CNN kernel size to probe the potential of temporal shift: residual shift brings more gain with small kernels (from 71.3\% to 72.1\% for size 7) than with large ones. Residual shift also achieves better performance than in-place shift for all relatively small kernels. From a similar perspective to \cite{veit2016residual}, residual shift makes the model an implicit \emph{ensemble} composed of shift and original modules, thus benefiting from both channel-aligned and channel-mingled information at no extra computational cost. However, the projection shortcut added to the bidirectional LSTM in Figure \ref{figcrnn:b} introduces extra parameters and puts residual shift at a disadvantage, leading to nearly 1\% degradation. The accuracy of Shiftformer increases steadily with the proportion of bidirectional shift in Figure \ref{figcrnn:b}, surpassing the standard Transformer by about 1\% with 27\% fewer parameters. This suggests that a high proportion of bidirectional shift improves temporal modeling for the MLP-like architecture while avoiding the high complexity of the self-attention operation.
\subsection{Comparison with State-of-the-Arts}
\textbf{Baselines.} We compare our proposed models with existing state-of-the-art methods under both finetuning and feature extraction settings. Finetuning methods include vanilla finetuning, P-TAPT, TAPT \cite{chen2021exploring}, CNN-BiLSTM \cite{xia2021temporal} and SNP, TAP \cite{gat2022speaker}. Feature extraction methods include conventional features (AR-GRU \cite{rajamani2021novel}, SeqCap with NAS \cite{wu2022neural}), feature fusion (eGeMAPS+wav2vec 2.0 \cite{pepino2021emotion}, MFCC+Spectrogram+W2E \cite{zou2022speech}) and pre-trained features (wav2vec 2.0 \cite{baevski2020wav2vec}, HuBERT \cite{hsu2021hubert} and WavLM \cite{chen2022wavlm}). Moreover, for comparison we also adopt temporal shift as data augmentation on hidden states, i.e., applied randomly only during training.
\begin{table}[t]
\centering
\small
\begin{tabular}{@{}l|ll@{}}
\toprule
\textbf{Type} & \textbf{Method} & \textbf{UA (\%)} \\ \midrule
\multirow{4}{*}{Baseline} & CNN-BiLSTM & 66.9 \\
& Vanilla Finetuning & 69.9 \\
& P-TAPT & 74.3 \\
& TAP+HuBERT Large & 74.2 \\ \midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Temporal\\ Shift\\ Augment\end{tabular}} & LSTM+Augment & 74.1 \\
& Transformer+Augment & \textbf{74.8} \\
& CNN+Augment & 74.3 \\ \midrule
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Temporal\\ Shift\end{tabular}} & ShiftLSTM (ours) & 74.5 \\
& Shiftformer (ours) & 74.7 \\
& ShiftCNN (ours) & \textbf{74.8} \\ \bottomrule
\end{tabular}
\caption{Comparison of UA with finetuning methods on IEMOCAP. Baseline results are taken from the original papers.}
\label{tableFinetune}
\end{table}
\begin{table}[t]
\centering
\small
\begin{tabular}{@{}l|l|cc@{}}
\toprule
\textbf{Feature} & \textbf{Method} & \textbf{WA (\%)} & \textbf{UA (\%)} \\ \midrule
\multirow{2}{*}{Convention} & AR-GRU & 66.9 & 68.3 \\
& SeqCap with NAS & 70.5 & 56.9 \\ \midrule
\multirow{2}{*}{Fusion} & eGeMAPS+wav2vec 2.0 & - & 66.3 \\
& MFCC+Spec.+w2E & 69.8 & 71.1 \\ \midrule
\multirow{6}{*}{Pre-trained} & wav2vec 2.0 Large & 65.6 & - \\
& HuBERT Large & 67.6 & - \\
& WavLM Large & 70.6 & - \\
& ShiftCNN (ours) & 71.9 & \textbf{72.8} \\
& ShiftLSTM (ours) & 69.8 & 70.6 \\
& Shiftformer (ours) & \textbf{72.1} & 72.7 \\ \bottomrule
\end{tabular}
\caption{Comparison with feature extraction on IEMOCAP. The results of pre-trained models are cited from \cite{chen2022wavlm} and the others are taken from the original papers. To be consistent with prior works, we adopt wav2vec 2.0 Base features.}
\label{tableFeatureEX}
\end{table}
\begin{table}[t]
\centering
\small
\begin{tabular}{@{}l|l|cc@{}}
\toprule
\textbf{Ablation} & \textbf{Variant} & \textbf{FeatEx.} & \textbf{Finetune} \\ \midrule
\multirow{4}{*}{CNN} & baseline & 71.3 & 74.1 \\
& conv$\rightarrow$depthwise & 68.7 & 74.2 \\
& LN$\rightarrow$BN & 70.2 & 73.7 \\
& \textbf{residual$\rightarrow$shift} & \textbf{71.7} & \textbf{74.8} \\ \midrule
\multirow{4}{*}{Transformer} & baseline & 69.5 & 74.5 \\
& RPE$\rightarrow$APE & 61.6 & 73.7 \\
& residual$\rightarrow$shift & 69.3 & 74.5 \\
& MHSA$\rightarrow$pooling & 69.5 & 73.7 \\
& \textbf{MHSA$\rightarrow$shift} & \textbf{70.3} & \textbf{74.7} \\ \midrule
\multirow{2}{*}{LSTM} & baseline & 71.8 & 74.3 \\
& \textbf{identity$\rightarrow$shift} & \textbf{71.9} & \textbf{74.5} \\ \bottomrule
\end{tabular}
\caption{Ablation study for components of building blocks.}
\label{tableab}
\end{table}
\textbf{Comparison on finetuning.} As shown in Table \ref{tableFinetune}, the family of temporal shift models outperforms the state-of-the-art finetuning methods. Among the adaptive finetuning methods, TAPT requires another HuBERT-like pre-training stage on IEMOCAP, while TAP utilizes a more advanced pre-trained model, HuBERT Large, but still falls behind ours trained solely on wav2vec 2.0 Base by 0.6\%. Notably, temporal shift used as data augmentation also shows competitive performance, highlighting the channel-wise mingling mechanism and our strong baseline building blocks.
\textbf{Comparison on feature extraction.} Table \ref{tableFeatureEX} is split into multiple parts to include methods adopting different types of features. We follow the evaluation protocol of \cite{pepino2021emotion}, namely adopting a trainable weighted sum of embeddings. Taking wav2vec 2.0 Base as the feature extractor, our ShiftCNN and Shiftformer outperform all the other methods, even surpassing WavLM Large, one of the latest advanced pre-trained models, by 1.5\%.
\subsection{Ablation Studies}
We conduct ablation studies on the CNN, Transformer, and LSTM respectively in Table \ref{tableab} under both wav2vec 2.0 finetuning and HuBERT feature extraction (dubbed FeatEx.). UA is reported in the table. The advantage of the overall architecture design is verified by the ablation of key components, covering the types of normalization, convolution, and position encoding. Interestingly, the Transformer with residual shift fails to catch up with the standard one, while our Shiftformer, which replaces multi-head self-attention (MHSA) with the shift operation, outperforms the others with 3.6M fewer parameters. This indicates the benefit of reducing complexity and introducing channel-wise information mingling.
\section{Conclusion}
In this paper, we propose the temporal shift module to introduce channel-wise mingling into downstream models. We hope our shift variants inspire future research on downstream model design, and we look forward to further applications of temporal shift.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
|
{
"arxiv_id": "2302.13211",
"language": "en",
"timestamp": "2023-02-28T02:13:08",
"url": "https://arxiv.org/abs/2302.13211",
"yymm": "2302"
} | \section{Introduction}
Here, we consider the general, constrained problem of finding a root of a
function $y$ that we cannot measure directly, but for which we can measure its
derivatives. Specifically, we imagine that we are given a particular point
$(x_0, y_0)$ on the curve -- i.e., an initialization point, where the function
value is provided -- and we want to use this and available derivatives to find
a nearby root $x_*$ of $y$. Standard methods for root finding -- e.g.,
Newton's method and the bisection method -- cannot be applied directly in this
case because each requires a function evaluation with each iterative search
step. We suggest two methods for root finding that avoid this, both of which
involve making use of derivatives to estimate how much $y$ has changed as we
adjust $x$ to new points, hopefully closer to a root of $y$. The first of
these, we call the local inversion method -- this proceeds by ``inching"
towards the root in $N$ steps, at each step adjusting $x$ by the amount needed
to approximately decrease $y$ by $y_0 / N$. The second method, an approximate
Newton's method, makes use of approximate integrals of $y'$ to estimate the
function value at subsequent root estimates.
In general, we expect the methods we discuss here to provide competitive
strategies whenever the target function $y$ -- whose root we seek -- is hard to
evaluate, relative to its derivatives. This can of course occur in various
one-off situations, but there are also certain broad classes of problem where
this situation can arise. For example, many physical systems are well-modeled
via differential equations that relate one or more derivatives of a target
function to a specified driving term -- Newton's laws of motion take this form.
Given an equation like this, we can often more readily solve for the
derivatives of the target than for the target itself -- whose evaluation may
require numerical integration. One simple example: Consider a flying, EV drone
whose battery drain rate is a given function of its instantaneous height $h$,
horizontal speed $\partial_t r$, and rate of climb $\partial_t h$. If the
drone moves along a planned trajectory beginning with a full battery, root
finding techniques can be applied to the total charge function -- the integral
of the instantaneous rate of discharge -- to forecast when in the future the
battery will reach a particular level. With the methods we describe here, this
can be done without having to repeatedly evaluate the precise charge at
candidate times.
Whereas standard methods for root finding -- e.g., Newton's method and the
bisection method -- generally converge exponentially quickly in the number of
function evaluations applied, the error in the methods we discuss here converges
to zero only algebraically. However, the power at which we get convergence
increases with the number of derivatives supplied, and so we can often obtain
accurate estimates quite quickly. Another virtue of the methods we discuss
here is that the simplest forms of the equations we consider are quite easy to
implement in code. Given these points, these methods may be of general
interest for those who work with applied mathematics.
The remainder of our paper proceeds as follows: In the next two sections, we
provide overviews of the two principal methods we discuss as well as a hybrid
strategy. In the following section, we cover some example applications. We
then conclude with a quick summary discussion.
Note: We have written up the main strategies we discuss here in an open source
python package, \texttt{inchwormrf}. This is available for download on pypi,
and on github at \texttt{github.com/efavdb/inchwormrf}. The code can be
applied directly or used as demo code from which to base other implementations.
\section{Local inversion}
\subsection{Algorithm strategy}
The first strategy we will discuss is the local inversion method. We suppose we
measure $(x_0,y_0)$ and seek a root of $y$. Our strategy will be to
iteratively work towards the root in $N$ steps, with $x_k$ chosen so that
$y(x_k) \approx y_0 (1 - k /N)$, so that $x_N$ is an approximate root. At a
given step, then, the goal of the algorithm is to inch its way ``downhill" by
an amount $y_0 / N$, towards $y=0$. This process will terminate near a root,
provided one does indeed sit downhill of the initial $(x_0, y_0)$ point --
i.e., provided one can get to a root without having to cross any zero
derivative points of $y$. Fig.\ \ref{fig:downhill} illustrates the idea.
\begin{figure}[h]\scalebox{.4}
{\includegraphics{downhill.pdf}}
\caption{\label{fig:downhill}
The local inversion method works its way ``downhill" and will find a root,
provided we can get to one without first encountering a zero derivative point
of $y$. E.g., in this cartoon, a root will be found if we initialize at the
left-most black dot, but not if we initialize at either of the other two points
indicated.
}
\end{figure}
In order to accomplish the above, we need an approximate local inverse of $y$
that we can employ to identify how far to move in $x$ to drop $y$ by $y_0 /N$
at each step. Here, we will focus on use of a Taylor series to approximate the
inverse, but other methods can also be applied. To begin, we posit an
available and locally convergent Taylor series for $y$ at each $x_k$,
\begin{eqnarray} \label{series}
dy \equiv\sum_{j=1}^m a_j d x^j + O \left( d x^{m+1} \right)
\end{eqnarray}
We can invert this series to obtain an estimate for the change $dx$ needed to
have $y$ adjust by $dy$, as
\begin{eqnarray}\label{series_inversion}
dx = \sum_{j=1}^m A_j dy^j + O \left( dy^{m+1} \right)
\end{eqnarray}
where the coefficient $A_n$ is given by \cite{morsemethods},
\begin{equation}\label{inversion_coefs}
A_n = \frac{1}{na_1^ n}\sum_{s_1, s_2, s_3,\cdots}(-1)^{s_1+s_2+s_3+\cdots}
\frac{n(n+1)\cdots(n-1+s_1+s_2+s_3+\cdots)}{s_1!s_2!s_3!\cdots}
\left(\frac{a_2}{a_1}\right)^{s_1}\left(\frac{a_3}{a_1}\right)^{s_2}\cdots.
\end{equation}
Here, the sum over $s$ values is restricted to partitions of $n-1$,
\begin{equation}
s_1+2s_2+3s_3+\cdots = n-1.
\end{equation}
The first few $A_i$ given by (\ref{inversion_coefs}) are,
\begin{eqnarray}
A_1 &=& \frac{1}{a_1} \\
A_2 &=& -\frac{1}{a_1^3} a_2 \\
A_3 &=& \frac{1}{a_1^5} \left(2 a_2^2 - a_1 a_3 \right).
\end{eqnarray}
In a first order approximation, we would use only the expression for $A_1$
above in (\ref{series_inversion}). This would correspond to approximating the
function $y$ as linear about each $x_k$, using just the local slope to estimate
how far to move to get to $x_{k+1}$. We can obtain a more refined estimate by
using the first two terms, which would correspond to approximating $y$ using a
local quadratic, and so on. In practice, it is convenient to ``hard-code" the
expressions for the first few coefficients, but the number of terms $p(k)$
present in $A_k$ increases quickly with $k$, scaling as $\log p(k) \sim
\sqrt{n}$ \cite{andrews}. For general code, then, it's useful to also develop
a method that can directly evaluate the sums (\ref{inversion_coefs}) for large
$n$.
This gives our first strategy: Begin at $(x_0, y_0)$, then write
\begin{eqnarray}
x_k = x_{k-1} + d x_{k-1}, \text{\ \ for $k \in \{1, \ldots, N \}$}
\end{eqnarray}
with $x_N$ being the final root estimate and $d x_{k-1}$ being the value
returned by (\ref{series_inversion}), in an expansion about $x_{k-1}$, with the
shift in $y$ set to $dy = -y_0 / N$. Example code illustrating this approach
with only one derivative is shown in the boxed Algorithm 1.
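A sketch of the analogous $m=2$ variant is shown below; here $a_1 = y'(x_k)$ and $a_2 = y''(x_k)/2$ are the local Taylor coefficients, and the final check repeats the $y = x^5 - 3$ experiment of Fig.\ \ref{fig:convergence}.
\begin{verbatim}
def local_inversion_m2(x0, y0, dfunc, d2func, N):
    # m = 2 local inversion: each step applies dx = A_1*dy + A_2*dy^2,
    # with A_1 = 1/a_1, A_2 = -a_2/a_1^3 and dy = -y0/N.
    x_val = float(x0)
    dy = -y0 / float(N)
    for _ in range(N):
        a1 = dfunc(x_val)
        a2 = 0.5 * d2func(x_val)
        x_val += dy / a1 - a2 * dy ** 2 / a1 ** 3
    return x_val

est = local_inversion_m2(2.0, 29.0, lambda x: 5 * x ** 4,
                         lambda x: 20 * x ** 3, 1000)
print(est, abs(est - 3 ** 0.2))
\end{verbatim}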
\subsection{Asymptotic error analysis}
\begin{tcolorbox}[float, floatplacement=t]
\textbf{Algorithm 1: Local inversion (with m = 1 derivative only)}
\begin{verbatim}
def local_inversion_simple(x0, y0, dfunc, N):
x_val = x0 * 1.0
dy = y0 / float(N)
for _ in range(N):
x_val -= dy / dfunc(x_val)
return x_val
\end{verbatim}
\end{tcolorbox}
Upon termination, the local inversion algorithm produces one root estimate,
$x_N \approx x_*$. As noted above, one requirement for this to be a valid root
estimate is that there be a root downhill from $x_0$, not separated from it by
any zero derivative points of $y$. A second condition is that the inverse
series (\ref{series_inversion}) provide reasonable estimates for the local
inverse at each $x_k$. This should hold provided the inverse series
converges to the local inverse at each point, within some finite radius
sufficiently large to cover the distance between adjacent sample points.
We can identify a sufficient set of conditions for the above to hold as
follows: If at the point $x$, $y$ is analytic, with (\ref{series}) converging
within a radius $R$ about $x$ and $\vert y(\tilde{x}) - y(x) \vert \leq M$ for
$\tilde{x}$ within this radius, and $y'(x) = s$ is non-zero at $x$, then the
inverse of $y$ will also be analytic at $x$, and its Taylor series
(\ref{series_inversion}) will converge within a radius of at least
\cite{redheffer}
\begin{eqnarray} \label{redhf}
r = \frac{1}{4} \frac{(s R)^2}{M}.
\end{eqnarray}
If the minimum of (\ref{redhf}) between $x_0$ and $x_*$ is some positive value
$\tilde{r} > 0$, it follows that each of the inverse expansions applied in our
algorithm will converge between adjacent sample points, provided $N \geq \vert
y_0 \vert / \tilde{r}$.
Under the conditions noted above, we can readily characterize the $N$
dependence in the error in the root estimate $x_N$. Setting $dy =
-\frac{y_0}{N}$ and plugging into (\ref{series_inversion}), Taylor's theorem
tells us that the change in $x$ as $y$ moves by $dy$ will be given by
\begin{eqnarray}
d x_{k} = \sum_{j=1}^m A_j \left (-\frac{y_0}{N} \right )^j + O
\left(N^{-(m+1)} \right)
\end{eqnarray}
That is, when using the truncated series expansion to approximate how far we
must move to drop $y$ by $y_0 / N$, we'll be off by an amount that scales like
$N^{-(m+1)}$. After taking $N$ steps like this, the errors at each step will
accumulate to give a net error in the root estimate that scales like
\begin{eqnarray}\label{inverse_method_error}
x_N \sim x_* + O(N^{-m}).
\end{eqnarray}
That is, the local inversion root estimate's error converges to zero
algebraically with $N$, with a power that increases with $m$, the number of
derivatives used in the expansions.
The left plot of Fig.\ \ref{fig:convergence} illustrates this scaling in a
simple numerical experiment. Here, we've applied the local inversion algorithm
to find the single real root of $y=x^5 - 3$. Plots of the error versus $N$ are
consistent with (\ref{inverse_method_error}) for each $m$.
\subsection{Formal expression for root}
\begin{figure}[h]\scalebox{.5}
{\includegraphics{convergence.pdf}}
\caption{\label{fig:convergence}
(left) Here, we plot the error in the root estimates obtained from the local
inversion method when applied to the function $y(x) = x^5 - 3$, the real root
of which is $3^{1/5} \approx 1.245731$. We ran the algorithm initializing it
at the point $(x_0, y_0) = (2, 29)$, passing it $m=1$, $2$, or $4$ derivatives,
and obtained algebraic convergence to the correct root with errors scaling as
in (\ref{inverse_method_error}). E.g., we get about four digits of accuracy if
we use just one derivative ($m=1$) and take $N=10^4$ steps, but six digits of
accuracy if we use four derivatives and only $N=10^2$ steps. (right) Here, we
apply local inversion with a final, approximate Newton hop to the same problem
as above, again passing it $m=1$, $2$, or $4$, derivatives. The errors
converge to zero more quickly now, in a manner consistent with
(\ref{Newton_error}). Now, with four derivatives, we get around $12$ digits of
accuracy using $N=10^2$ steps. }
\end{figure}
If we take the limit $N \to \infty$ of the $m=1$ version of the local
inversion algorithm, we obtain
\begin{eqnarray}\label{formal_expression}
x_* = x_0 - \int_0^{y_0} \frac{dy}{y'(x)},
\end{eqnarray}
a formal, implicit expression for the root we're estimating. Note that this
depends only on the initial point $(x_0, y_0)$ and the derivative function,
$y'$.
Although we won't pursue this here in detail, we note that
(\ref{formal_expression}) provides a convenient starting point for identifying
many methods for estimating roots. For example, we can recover Newton's method
by using a linear approximation to the integrand, expanding about $(x_0, y_0)$,
integrating this, then iterating. Using a discrete approximation to the
integral, we can recover the local inversion algorithm above. We can also
combine these ideas to obtain an iterated version of the local inversion
method.
A special class of functions for which (\ref{formal_expression}) can be quite
useful are those for which we can obtain an explicit expression for $y'$ in
terms of $y$. In this case, (\ref{formal_expression}) becomes an explicit
integral which can be approximated in various ways, sometimes giving results
that converge much more quickly than does the general local inversion method.
For example, with $y = \cos(x)$ and $y' =- \sqrt{1-y^2}$,
(\ref{formal_expression}) returns a familiar integral for $\arccos$. In a
numerical experiment, we applied Gaussian quadrature to obtain an estimate for
the integral from $y = 1/\sqrt{2}$ to $0$ -- giving estimates for $\pi / 4$.
The errors in these estimates converged exponentially to zero with the number
of samples $N$ used to estimate the integral -- beating the algebraic
convergence rate of local inversion.
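A sketch of that experiment follows; the node counts and the use of \texttt{numpy}'s Gauss--Legendre routine are implementation choices of ours rather than part of the method.
\begin{verbatim}
import numpy as np

def gauss_legendre_root(x0, y0, n):
    # Estimate x_* = x_0 + int_0^{y_0} dy / sqrt(1 - y^2) with n nodes.
    nodes, weights = np.polynomial.legendre.leggauss(n)
    y = 0.5 * y0 * (nodes + 1.0)          # map [-1, 1] to [0, y0]
    w = 0.5 * y0 * weights
    return x0 + np.sum(w / np.sqrt(1.0 - y ** 2))

x0 = np.pi / 4.0
y0 = np.cos(x0)                            # = 1/sqrt(2)
for n in (2, 4, 8):
    print(n, abs(gauss_legendre_root(x0, y0, n) - np.pi / 2.0))
\end{verbatim}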
\subsection{Behavior near a zero derivative}
When a zero derivative point separates $x_0$ and $x_*$, the local inversion
method will not return a valid root estimate. However, we can still apply the
local inversion strategy if either $x_0$ or $x_*$ is itself a zero derivative
point. The challenge in this case is that the inverse Taylor series
(\ref{series_inversion}) will not converge at the zero derivative point, and
some other method of approximating the inverse will be required near this
point.
To sketch how this can sometimes be done, we consider here the case where the
first derivative of $y$ is zero at $x_0$, but the second derivative is not.
This situation occurs in some physical applications. E.g., it can arise when
seeking turning points of an oscillator, moving about a local minimum at $x_0$.
To estimate the change in $x$ needed for $y$ to move by $dy$ in this case, we
can apply the quadratic formula, which gives
\begin{eqnarray} \label{quadratic_formula}
d x \sim \frac{-a_1 + \sqrt{a_1^2 + 4 a_2 dy}}{2 a_2} + O(d x^3).
\end{eqnarray}
If we then run the local inversion algorithm with (\ref{quadratic_formula})
used in place of the inverse Taylor expansion, we can again obtain good root
estimates. In this case, applying an error analysis like that we applied
above, one can see that the error here will be dominated by the region near the
zero derivative, giving a net root estimate error that scales like $N^{-3/2}$.
If there is no zero derivative point, we can still use
(\ref{quadratic_formula}) as an alternative to (\ref{series_inversion}). In
this case, the root estimates that result again have errors going to zero as
$N^{-2}$ -- the rate of convergence that we get using series inversion with two
derivatives.
To improve on this approach using higher derivative information, one can simply
carry out a full asymptotic expansion of (\ref{series}) near the zero
derivative point. This can be applied to obtain more refined estimates for the
$dx$ needed at each step in this region. Away from the zero derivative point
this local asymptotic expansion will not converge, and one must switch over to
the inverse Taylor expansion (\ref{series_inversion}). The resulting algorithm
is a little more complicated, but does allow one to obtain root estimates whose
errors go to zero more quickly with $N$.
\newline
This completes our overview of the local inversion root finding strategy. We
now turn to our second approach, an approximate Newton hop strategy.
\section{Approximate Newton hops}
\subsection{Basic strategy}
Here, we discuss our second method for root finding -- approximate Newton hops.
Recall that in the standard Newton's method, one iteratively improves upon a
root estimate by fitting a line to the function $y$ about the current position,
then moving to the root of that line. That is, we take
\begin{eqnarray}\label{newtonsmethod}
x_{k+1} = x_k - \frac{y(x_k)}{y'(x_k)}
\end{eqnarray}
Iteratively applying (\ref{newtonsmethod}), the $x_k$ values often quickly
approach a root of the function $y$, with the error in the root estimate
converging exponentially quickly to zero.
We can't apply Newton's method directly under the condition that we consider
here, that $y$ is difficult to measure. However, we assume that $y^{\prime}$
can be evaluated relatively quickly, and so we can apply Newton's method if we
replace the $y$ value in (\ref{newtonsmethod}) above by an approximate integral
of $y^{\prime}$, writing
\begin{eqnarray}\label{yprimeint}
y(x_k) = y(x_0) + \int_{x_0}^{x_k} y'(x) dx
\end{eqnarray}
A simple strategy we can apply to evaluate the above is to sample $y^{\prime}$
at the $N$ equally-spaced points
\begin{eqnarray}
\tilde{x}_i = x_0 + \frac{i}{N-1} (x_{k} - x_0), \text{\ \ $i \in 0, \ldots, N
- 1$}.
\end{eqnarray}
Given samples of $y^{\prime}$ at these points, we can then apply the trapezoid
rule to estimate the integral in (\ref{yprimeint}). This gives an estimate for
$y(x_k)$ that is accurate to $O(N^{-2})$. Plugging this into
(\ref{newtonsmethod}), we can then obtain an approximate, Newton hop root
update estimate. If we iterate, we then obtain an approximate Newton's method.
We find that this approach often brings the system to within the $O(N^{-2})$
``noise floor" of the root within $5$ to $10$ iterations. If $y^{\prime} \not
= 0$ at the root, this then gives an estimate for the root that is also
accurate to $O(N^{-2})$. A simple python implementation of this strategy is
given in the boxed Algorithm 2.
\begin{tcolorbox}[float, floatplacement=t]
\textbf{Algorithm 2: Approximate Newton hops (with m = 1 derivative only)}
\begin{verbatim}
def approximate_newton_simple(x0, y0, dfunc, N, iterations):
x_val = x0 * 1.0
for _ in range(iterations):
# Step 1: Estimate y using approximate integral of y'
y_val = y0 * 1.0
dx = (x_val - x0) / float(N)
for i in range(N + 1):
x = x0 + i * dx
y_val += dfunc(x) * dx
y_val -= 0.5 * dfunc(x0) * dx
y_val -= 0.5 * dfunc(x_val) * dx
# Step 2: Newton hop
x_val -= y_val / dfunc(x_val)
return x_val
\end{verbatim}
\end{tcolorbox}
\subsection{Refinements}
We can obtain more quickly converging versions of the approximate Newton hop
strategy if we make use of higher order derivatives. The Euler-Maclaurin
formula provides a convenient method for incorporating this information
\cite{apostol}. We quote the formula below in a special limit, but first
discuss the convergence rates that result from its application: If $m$
derivatives are supplied and we use $N$ points to estimate the integral at
right in (\ref{yprimeint}), this approach will give root estimates that
converge to the correct values as
\begin{eqnarray}\label{Newton_error}
x_N \sim x_* + O(N^{-2 \lfloor \frac{m}{2} \rfloor - 2})
\end{eqnarray}
For $m=1$, the error term goes as $O(N^{-2})$, matching that of the trapezoid
rule we discussed just above. For even $m \geq 2$, we get $O(N^{-m - 2})$.
That is, the error in this case goes to zero with two extra powers of $N$,
relative to that of the local inversion method, (\ref{inverse_method_error}).
This can give us a very significant speed up.
In our applications of the Euler-Maclaurin formula, we are often interested in a
slightly more general situation than that noted above, where the samples are
always equally spaced. To that end, we'll posit now that we have samples of
$y^{\prime}$ at an ordered set of values $\tilde{x}_i$, with $i \in (0, \ldots,
N)$. Direct application of the formula requires evenly spaced samples. To
apply it in this case, then, we consider a change of variables, writing
\begin{eqnarray} \label{change_of_variables}
\int_{\tilde{x}_0}^{\tilde{x}_N} y^{\prime}(x) dx = \int_0^N
y^{\prime}(\tilde{x}(g)) \frac{d\tilde{x}}{dg} dg
\end{eqnarray}
Here, $\tilde{x}(g)$ is now some interpolating function that maps the domain
$[0, N]$ to $[\tilde{x}_0, \tilde{x}_N]$, satisfying
\begin{eqnarray}
\tilde{x}(g) = \tilde{x}_g, \text{ \ \ \ \ \ \ for $g \in \{0, 1, \ldots N\}$}
\end{eqnarray}
With this change of variables, the integral at right in
(\ref{change_of_variables}) is now sampled at an evenly spaced set of points,
and we can use the Euler-Maclaurin formula to estimate its value. If we plan
to use $m$ derivatives in our analysis, the interpolating function $\tilde{x}$
should also have at least $m$ continuous derivatives. In our open source
implementation, we have used a polynomial spline of degree $2 \lfloor m / 2
\rfloor + 1$ for $m> 1$ and a cubic spline for $m = 1$.
We now quote the Euler-Maclaurin formula, as applied to the right side of
(\ref{change_of_variables}). This reads,
\begin{eqnarray} \label{em}
\int_{\tilde{x}_0}^{\tilde{x}_N} y^{\prime}(x) dx \sim \sum_{g=0}^N
y^{\prime}(\tilde{x}_g) \left .
\frac{d\tilde{x}}{dg} \right \vert_g - \frac{y^{\prime}(\tilde{x}_0) \left .
\frac{d\tilde{x}}{dg} \right \vert_{g=0} + y^{\prime}(\tilde{x}_N) \left .
\frac{d\tilde{x}}{dg} \right \vert_{g=N}}{2} - \sum_{k=1}^{\lfloor \frac{m}{2}
\rfloor} \frac{B_{2k}}{(2k)!} \left ( G^{(2k-1)}(N) -
G^{(2k-1)}(0)\right)
\end{eqnarray}
Here $B_k$ is the $k$-th Bernoulli number and $G$ is the function of $g$ given
by
\begin{eqnarray}
G(g) = y^{\prime}(\tilde{x}(g)) \frac{d\tilde{x}}{dg}
\end{eqnarray}
Notice from (\ref{em}) that the higher order derivatives of $G$ are only needed
at the boundaries of integration, where $g=0, N$. We can evaluate these
higher-order derivatives using the Faa di Bruno theorem \cite{roman}, which
generalizes the first derivative chain rule for derivatives of composite
functions. Each extra derivative with respect to $g$ gives another factor
scaling like $O(N^{-1})$, resulting in the scaling of the error quoted in
(\ref{Newton_error}) at fixed $m$.
With the formulas noted above, we can implement more refined versions of the
approximate Newton hop method: With each iteration, we apply (\ref{em}) to
approximate the current $y$ value, then plug this into (\ref{newtonsmethod}) to
obtain a new root estimate. The samples used to estimate the integral of
$y^{\prime}$ can be equally spaced, or not. Again, the former situation is
more convenient to code up, but the latter might be used to allow for more
efficient sampling: E.g., one might re-use prior samples of $y^{\prime}$ in
each iteration, perhaps adding only a single new sample at the current root
estimate. Doing this, the samples will no longer each be evenly spaced, which
will force use of an interpolation function. However, this can allow for a
significant speed up when $y^{\prime}$ evaluations are also expensive.
A hybrid local-inversion, final Newton hop approach can also be taken:
First carrying out the local inversion process with $m$ derivatives will get
us to a root estimate that is within $O(N^{-m})$ of the true root. If we save
the $y^{\prime}$ values observed throughout this process, we can then carry out
a single, final Newton hop using the estimate (\ref{em}) for the $y$ value at
termination of the local inversion process. This will then bring us to a root
estimate accurate to $O(N^{-2 \lfloor \frac{m}{2} \rfloor - 2})$, as in
(\ref{Newton_error}).
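The $m=1$ version of this hybrid is easy to sketch: run the local inversion while recording the $y^{\prime}$ samples, then perform one Newton hop using a trapezoid estimate of (\ref{yprimeint}) over the (unequally spaced) recorded points. The code below is such a sketch, not the implementation in the package.
\begin{verbatim}
def hybrid_local_inversion_newton(x0, y0, dfunc, N):
    xs, ds = [float(x0)], [dfunc(x0)]
    dy = y0 / float(N)
    for _ in range(N):                     # local inversion, m = 1
        xs.append(xs[-1] - dy / ds[-1])
        ds.append(dfunc(xs[-1]))
    y_end = y0                             # trapezoid estimate of y(x_N)
    for i in range(N):
        y_end += 0.5 * (ds[i] + ds[i + 1]) * (xs[i + 1] - xs[i])
    return xs[-1] - y_end / ds[-1]         # final Newton hop
\end{verbatim}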
In our open source package, we include a higher order Newton's method, but for
simplicity have only coded up the case where $N$ new samples of $y^{\prime}$
are evaluated with each iteration, always evenly spaced. When $y^{\prime}$ is
easy to evaluate, this approach is much faster than that where a new
interpolation is required with each iteration. However, to support the
situation where $y^{\prime}$ is also expensive to evaluate, we have also
implemented the hybrid local inversion-Newton method. This requires only one
interpolation to be carried out in the final step, makes efficient use of
$y^{\prime}$ evaluations, and results in convergence rates consistent with our
analysis here. The right plot of Fig.\ \ref{fig:convergence} illustrates this
rate of convergence in a simple application.
\section{Example applications}
\subsection{Functions with high curvature}
Here, we review the point that Newton's method will sometimes not converge for
functions that are highly curved, but that in cases like this the local
inversion method generally will. Consider the function
\begin{eqnarray}
y = \vert x \vert^{\gamma}
\end{eqnarray}
where $\gamma > 0$ is a fixed parameter. If we initialize Newton's method at
the point $(x_0, x_0 ^{\gamma})$, with $x_0
> 0$, a bit of algebra shows that Newton's method will move us to the point
\begin{eqnarray}
x_1 = x_0 \left( 1 - \frac{1}{\gamma} \right)
\end{eqnarray}
The distance of $x_1$ from the origin is a factor of $\vert 1- \gamma^{-1}
\vert$ times that of $x_0$. Repeated applications of Newton's method will
result in $\vert x_k \vert = x_0 \vert 1 - \gamma^{-1} \vert ^ k$. If $\gamma
> 1/2$, the method converges exponentially quickly to the root. However, if
$\gamma < 1/2$, each iteration drives the estimate further away from the root
of $y$, due to ``overshooting".
High curvature is not an issue for the local inversion method: By inching
along, it can quickly recalibrate to changes in slope, preventing overshooting.
Note, however, that applying a final, approximate Newton hop after the local
inversion process will no longer improve convergence in cases of high curvature
such as this.
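A quick numerical check of this behavior, reusing \texttt{local\_inversion\_simple} from Algorithm 1 with $\gamma = 0.4$ (an assumed value), is sketched below; Newton's iterates grow by the factor $1.5$ per step, while the local inversion estimate stays in a tiny neighborhood of the root at $0$.
\begin{verbatim}
gamma = 0.4
y = lambda x: abs(x) ** gamma
yprime = lambda x: gamma * abs(x) ** (gamma - 1.0) * (1.0 if x >= 0.0 else -1.0)

x = 1.0
for _ in range(5):
    x -= y(x) / yprime(x)                  # plain Newton steps
print("Newton after 5 steps:", x)          # |x| has grown to about 7.6

print("local inversion:", local_inversion_simple(1.0, y(1.0), yprime, 10000))
\end{verbatim}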
\subsection{Smoothstep}
The smoothstep function \cite{bailey2009graphics}
\begin{eqnarray}
s_1(x) = 3 x^2 - 2 x^3, \ \ x \in (0, 1)
\end{eqnarray}
is an ``s-shaped" curve that is commonly used in computer graphics to generate
smooth transitions, both in shading applications and for specifying motions.
This function passes through $(0,0)$ and $(1,1)$ and has zero derivatives at
both of these points. More generally, one can consider the degree-$(2n + 1)$
polynomial $s_n$ that passes through these points and has $n\geq 1$ zero
derivatives at both end points. Simple inverse functions are not available for
$n>1$, so to determine where $s_n(x)$ takes on a specific value, one must often
resort to root finding.
The following identity makes our present approach a tidy one for identifying
inverse values for general $n$: If we define,
\begin{eqnarray} \label{smoothstep_r}
r_n(x) = \frac{2 \Gamma \left(n+\frac{3}{2}\right)}{\sqrt{\pi } \Gamma (n+1)}
\int_0^x (1 - u^2)^n du, \ \ x \in (-1, 1)
\end{eqnarray}
then
\begin{eqnarray}\label{smoothstep_rs}
s_n(x) = \frac{1}{2} r_n(2 x - 1) + \frac{1}{2}.
\end{eqnarray}
Whereas evaluation of $r_n(x)$ requires us to evaluate the integral above,
the first derivative of $r_n$ is simply
\begin{eqnarray}
r_n^{(1)}(x) &=& \frac{2 \Gamma \left(n+\frac{3}{2}\right)}{\sqrt{\pi } \Gamma
(n+1)} (1 - x^2)^n
\end{eqnarray}
Higher order derivatives of $r_n$ are easy to obtain from this last line as
well. Given these expressions, we can easily invert $r_n$ for any value of $n$
using our methods. Fig.\ \ref{fig:smoothstep} provides an illustration.
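A sketch of the computation behind Fig.\ \ref{fig:smoothstep} is shown below; it reuses the \texttt{approximate\_newton\_simple} routine of Algorithm 2, and the values of $N$ and the iteration count are assumptions of ours.
\begin{verbatim}
import math

def rn_prime(x, n):
    c = 2.0 * math.gamma(n + 1.5) / (math.sqrt(math.pi) * math.gamma(n + 1))
    return c * (1.0 - x * x) ** n

# find x with r_n(x) = 0.9, i.e. a root of y(x) = r_n(x) - 0.9, from (0, -0.9)
for n in (1, 2, 10, 100):
    root = approximate_newton_simple(0.0, -0.9, lambda x, n=n: rn_prime(x, n),
                                     N=200, iterations=8)
    print(n, root)
\end{verbatim}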
\begin{figure}[h]\scalebox{.45}
{\includegraphics{smoothstep.pdf}}
\caption{\label{fig:smoothstep}
Here, we plot the $r_n(x)$ functions defined in (\ref{smoothstep_r}) for
$n = 1, 2, 10, 100$, and mark the points where each of these curves are
equal to $0.9$. These values were found using the first and second derivatives
of $r$, initializing the approximate Newton method at the point $(0,-0.9)$.
}
\end{figure}
\subsection{Multiple dimensions}
Solutions to coupled sets of equations can also be treated using the local
inversion method.  We illustrate this here by example.  For simplicity, we'll
consider a cost function that can be written out by hand, but understand that
the method we describe will be most useful in cases where the derivatives of a
cost function are easier to evaluate than the cost function itself.  The
function we'll consider is
\begin{eqnarray} \label{cost_function}
f(x_1, x_2) = x_1^4 + x_1^2 + 3 x_1 x_2 + x_2^2 + 7 x_1 + 9 x_2.
\end{eqnarray}
If we wish to find a local extremum of this function, we can set the gradient
of the above to zero, which gives
\begin{eqnarray} \label{gradient} \nonumber
\nabla f &=& (4 x_1^3 + 2 x_1 + 3 x_2 + 7, 3 x_1 + 2 x_2 + 9) \\
&\equiv& (0, 0),
\end{eqnarray}
a coupled set of two equations for $x_1$ and $x_2$.
To apply the local inversion technique over $N$ steps, we need to expand the
gradient about a given position. To first order, this gives
\begin{eqnarray} \label{2d_expansion}
d \nabla f =
H \cdot (d x_1, d x_2)^T + O(d x^2),
\end{eqnarray}
where
\begin{eqnarray}
H = \left( \begin{array}{cc}
12 x_1^2 + 2 & 3 \\
3 & 2 \end{array} \right)
\end{eqnarray}
is the Hessian of (\ref{cost_function}), and
\begin{eqnarray} \label{target_grad_shift}
d \nabla f = - \frac{1}{N} \nabla f(x_{1,0}, x_{2,0}),
\end{eqnarray}
is the target gradient shift per step. Inverting (\ref{2d_expansion}), we
obtain
\begin{eqnarray}\label{2d_inversion}
(d x_1, d x_2)^T = H^{-1} \cdot d \nabla f + O( d \nabla f^2).
\end{eqnarray}
There is a single real solution to (\ref{gradient}) at $(x_1, x_2) \approx (
1.35172698, -6.52759047)$.  Running the local inversion algorithm
(\ref{2d_inversion}) finds this solution with error decreasing like
$O(N^{-1})$.
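The run just described can be sketched as follows. The starting point is an assumption of ours, chosen so that the Hessian stays nonsingular along the downhill path (the multidimensional analogue of the zero-derivative caveat above).
\begin{verbatim}
import numpy as np

def grad(x):
    x1, x2 = x
    return np.array([4*x1**3 + 2*x1 + 3*x2 + 7, 3*x1 + 2*x2 + 9])

def hess(x):
    x1, _ = x
    return np.array([[12*x1**2 + 2, 3.0], [3.0, 2.0]])

def local_inversion_2d(x_start, N):
    x = np.array(x_start, dtype=float)
    d_grad = -grad(x) / N                  # target gradient shift per step
    for _ in range(N):
        x += np.linalg.solve(hess(x), d_grad)
    return x

print(local_inversion_2d([2.0, 0.0], 10000))   # approx [ 1.35173 -6.52759]
\end{verbatim}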
To improve upon the above solution, we can either include more terms in the
expansion (\ref{2d_expansion}), or we can apply a final, approximate Newton
hop to refine the solution after the inching process. To carry that out, we
need to consider how to estimate the final values of our gradient function.
To that end, we note that the integral we are concerned with is now a path
integral
\begin{eqnarray}\label{em_target_integral}
\nabla f = \nabla f(x_{1,0}, x_{2,0}) + \int_s H \cdot d \textbf{s}.
\end{eqnarray}
Taking a discrete approximation to this integral, and then applying a final
Newton hop, we obtain estimates with errors that converge to zero like
$O(N^{-2})$.
\section{Discussion}
Here, we have introduced two methods for evaluating roots of a function that
require only the ability to evaluate the derivative(s) of the function in
question, not the function itself. This approach might be computationally
more convenient than standard methods whenever evaluation of the function
itself is relatively inconvenient for some reason. The main cost of applying
these methods is that convergence is algebraic in the number of steps taken,
rather than exponential -- the typical convergence form of more familiar
methods for root finding, such as Newton's method and the bisection method.
However, one virtue of these approaches is that higher order derivative
information is very easily incorporated, with each extra derivative provided
generally offering an increase to the convergence rate.  This property can
help mitigate the concern about the relatively slow rate of convergence.
The three main approaches that we have detailed here are as follows: (1)
the local inversion method, (2) the approximate Newton's method, and (3) the
hybrid local inversion-Newton method. The first of these should be preferred
when looking at a function that is highly curved, since Newton's method is
subject to overshooting in this case. The second will converge more quickly
for functions that are not too highly curved, and so is preferable in this
case. Finally, the third method provides a convenient choice when working with
functions where Newton's method applies, but whose derivatives are also
somewhat costly to evaluate: Our implementation here gives the convergence
rate of Newton's method and also minimizes the number of derivative calls
required.
|
{
"arxiv_id": "2302.13228",
"language": "en",
"timestamp": "2023-02-28T02:13:21",
"url": "https://arxiv.org/abs/2302.13228",
"yymm": "2302"
} | \section{Introduction}
A neural network utilizes data to find a function consistent
with the data and with further ``conceptual'' data such as desired smoothness,
boundedness, or integrability.
The weights for a neural net and the functions embodied in the hidden units
can be thought of as determining a finite sum that approximates some
function. This finite sum is a kind of quadrature for an integral formula
that would represent the function exactly.
This chapter uses abstract analysis to investigate neural networks.
Our approach is one of {\it enrichment}: not only is summation replaced
by integration, but also numbers are
replaced by real-valued functions on an input set $\Omega$,
the functions lying in a function space ${\mathcal X}$. The functions, in turn,
are replaced
by ${\mathcal X}$-valued measurable functions $\Phi$ on a measure space $Y$ of
parameters.
The goal is to understand approximation of functions by neural
networks so that one can make effective choices of the parameters
to produce a good approximation.
To achieve this, we utilize Bochner integration. The idea of applying this
tool to neural nets is in Girosi and Anzellotti \cite{gian93} and we developed
it further in Kainen and K\r{u}rkov\'{a} \cite{kavk09}.
Bochner integrals are now being used in the theory of support vector machines
and reproducing kernel Hilbert spaces; see the recent book by Steinwart
and Christmann \cite{sc08}, which has an appendix of
more than 80 pages of material on operator theory and Banach-space-valued integrals. Bochner integrals are also widely used in probability theory in
connection with stochastic processes of martingale-type;
see, e.g., \cite{ab82,cpmr07}. The
corresponding functional
analytic theory may help to bridge the gap between probabilistic questions
and deterministic ones, and may be well-suited for issues that
arise in approximation via neural nets.
Training to replicate given numerical data
does not give a useful neural network for the same
reason that parrots make poor conversationalists. The phenomenon of
overfitting shows that achieving fidelity to data at all
costs is not desirable;
see, e.g., the discussion on
interpolation in our other chapter in this book (Kainen, K\r{u}rkov\'{a},
and Sanguineti \cite{kks-h}). In approximation, we try to find a function close to
the data that achieves desired criteria
such as sufficient smoothness, decay at infinity, etc. Thus, a method
of integration which produces functions {\it in toto}
rather than numbers could be quite useful.
Enrichment has lately been utilized by
applied mathematicians to perform image analysis and even
to deduce global properties of sensor networks from local information. For
instance, the Euler characteristic, ordinarily thought of as a discrete
invariant, can be made into a variable of
integration \cite{bg09}.
In the case of sensor networks, such an analysis can lead to effective
computations in which theory determines a minimal set of sensors \cite{dsgh07}.
By modifying
the traditional neural net focus on training sets of data so that we get
to families of functions in a natural way,
we aim to achieve methodological insight. Such a framework may lead to
artificial neural networks capable of performing more sophisticated tasks.
The main result of this chapter is Theorem \ref{th:main}
which characterizes functions to be approximated in terms of
pointwise integrals and Bochner integrals, and provides inequalities
that relate corresponding norms. The relationship between integral
formulas and neural networks has long been noted; e.g.,
\cite{it91,ba93,mhmi94,fg95-paris,ma96,vkkakr97}.
We examine integral formulas in depth and extend their significance
to a broader context.
An earlier version of the Main Theorem, including the bounds
on variational norm by the $L^1$-norm of the weight function
in a corresponding integral formula, was given in \cite{kavk09}
and it also utilized {\it functional} (i.e., Bochner) integration.
However, the version here is more general and further shows that
if $\phi$ is a real-valued function on $\Omega \times Y$ (the cartesian
product of input and parameter spaces), then the associated map
$\Phi$ which maps the measure space to the Banach space defined by
$\Phi(y)(x) = \phi(x,y)$ is measurable; cf. \cite[Lemma 4.25, p. 125]{sc08}
where $\Phi$ is the ``feature map.''
Other proof techniques are available for parts of the Main Theorem.
In particular, K\r{u}rkov\'{a} \cite{vk09}
gave a different argument for part (iv) of the theorem, using a characterization of
variation via peak functionals \cite{vksh98} as well as
the theorem of Mazur (Theorem \ref{th:mazur})
used in the proof of Lemma \ref{pr:mvt}.
But the Bochner integral approach reveals some unexpected aspects of functional approximation which may be relevant for neural network applications.
Furthermore, the treatment of analysis and topology utilizes a number of
basic theorems from the literature and provides an introduction
to functional analysis motivated by its applicability. This is a case where
neural nets provide a fresh perspective on classical mathematics.
Indeed, theoretical results proved here were obtained
in an attempt to better understand neural networks.
An outline of the paper is as follows: In section 2 we discuss variational
norms; sections 3 and 4 present needed material on Bochner integrals. The
Main Theorem (Theorem \ref{th:main}) on integral formulas is given in Section
5. In section 6 we show how to apply the Main
Theorem to an integral formula for the Bessel
potential function in terms of Gaussians. In section 7 we show how this leads
to an inequality involving Gamma functions and provide an alternative
proof by classical means. Section 8 interprets and extends the Main Theorem
in the language of tensor products. Using tensor products, we replace
individual ${\mathcal X}$-valued
$\Phi$'s by families $\{\Phi_j: j \in J\}$ of such functions.
This allows more nuanced representation of the function to be approximated.
In section 9 we give a detailed example
of concepts related to $G$-variation, while section 10 considers the
relationship between pointwise integrals and evaluation
of the corresponding Bochner integrals. Remarks on future
directions are in section 11, and the chapter concludes
with two appendices and references.
\section{Variational norms and completeness}
We assume that the reader has a reasonable acquaintance with functional
analysis but have attempted to keep this chapter self-contained.
Notations and basic definitions are given in Appendix I, while
Appendix II has the precise statement of several important theorems
from the literature which will be needed in our development.
Throughout this chapter, all linear spaces are over the reals ${\mathbb R}$.
For $A$ any subset of a linear
space $X$, $b \in X$, and $r \in {\mathbb R}$,
$$b + rA := \{b + ra \,|\, a \in A \} = \{y \in X : y = b + ra, a \in A\}.$$
Also, we sometimes use the abbreviated notation
\begin{equation}
\|\cdot\|_1 = \|\cdot\|_{L^1(Y,\mu)} \;\;\mbox{and}
\;\; \|\cdot\|_\infty = \|\cdot\|_{L^\infty(Y,\mu;{\mathcal X})};
\label{eq:notation}
\end{equation}
\noindent the standard notations on the right are explained in sections 3 and 4,
resp. The symbol ``$\ni$'' stands for ``such that.''
A set $G$ in a normed linear space ${\mathcal X}$ is {\it fundamental} (with
respect to ${\mathcal X}$) if ${\rm cl}_{{\mathcal X}} \; ({\rm span} \; G) = {\mathcal X}$,
where closure depends only on the topology induced by the norm.
We call $G$ {\it bounded} with respect to ${\mathcal X}$ if
$$s_{G,{\mathcal X}} := \sup_{g \in G} \|g\|_{{\mathcal X}} < \infty.$$
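For instance, if every element of $G$ has unit norm, as with the normalized
Gaussians $\gamma_t^o$ used in section 6 below, then $s_{G,{\mathcal X}} = 1$.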
We now review $G$-{\it variation} norms.
These norms, which arise in connection with approximation of functions,
were first considered by Barron \cite{ba92}, \cite{ba93}. He treated a
case where $G$ is a family of characteristic functions of sets satisfying a
special condition. The general concept, formulated by
K\r{u}rkov\'{a} \cite{vk97}, has been developed in such papers as
\cite{vksa02,vk03,vk08,JOTA09,JOTA10a,gs11}.
Consider the set
\begin{equation}
B_{G,{\mathcal X}} := {\rm cl}_{{\mathcal X}} \; ({\rm conv}( \; \pm G)), \mbox{where} \; \pm G
:= G \cup -G. \label{eq:BGX}
\end{equation}
\noindent
This is a symmetric, closed, convex subset of ${\mathcal X}$, with
Minkowski functional
$$\|f\|_{G,{\mathcal X}} := \inf \{\lambda > 0: f/\lambda \in B_{G,{\mathcal X}} \}.$$
The subset ${\mathcal X}_G$ of ${\mathcal X}$ on which this functional
is finite is given by
$${\mathcal X}_G := \{f \in {\mathcal X}: \exists \lambda > 0 \, \ni \,
f/\lambda \in B_{G,{\mathcal X}} \}.$$
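For example, if $g_1, g_2 \in G$ and $f = \frac12 g_1 - \frac13 g_2$, then
$f/(5/6) = \frac35 g_1 - \frac25 g_2$ belongs to ${\rm conv}(\pm G)$, so $f \in {\mathcal X}_G$
and $\|f\|_{G,{\mathcal X}} \leq 5/6$.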
If $G$ is bounded, then $\|\cdot\|_{G,{\mathcal X}}$ is a norm on
${\mathcal X}_G$. In general ${\mathcal X}_G$ may be a proper subset of ${\mathcal X}$ even if $G$
is bounded and fundamental w.r.t. ${\mathcal X}$. See the example at the end of
this section. The inclusion
$\iota: {\mathcal X}_G \subseteq {\mathcal X}$ is linear and for
every $f \in {\mathcal X}_G$
\begin{equation}
\|f\|_{{\mathcal X}} \leq \|f\|_{G,{\mathcal X}} \,s_{G,{\mathcal X}} \label{eq:basicIneq}
\end{equation}
\noindent Indeed, if $f/\lambda \in B_{G,{\mathcal X}}$, then
$f/\lambda$ lies in the ${\mathcal X}$-closure of the set of convex combinations of
elements of ${\mathcal X}$-norm at most $s_{G,{\mathcal X}}$, so
$\|f/\lambda\|_{{\mathcal X}} \leq s_{G,{\mathcal X}}$ and
$\|f\|_{{\mathcal X}} \leq \lambda \, s_{G,{\mathcal X}}$;
taking the infimum over such $\lambda$ establishes (\ref{eq:basicIneq}).
Hence, if $G$ is bounded in ${\mathcal X}$, the operator $\iota$ is
bounded with operator norm not exceeding $s_{G,{\mathcal X}}$.
\begin{proposition}
Let $G$ be a nonempty subset of a normed linear space ${\mathcal X}$. Then\\
\\
(i) ${\rm span} \,G \subseteq {\mathcal X}_G \subseteq {\rm cl}_{\mathcal X}\, {\rm span} \,G$;\\
\\
(ii) $G$ is fundamental if and only if ${\mathcal X}_G$ is dense in ${\mathcal X}$;\\
\\
(iii) For $G$ bounded and ${\mathcal X}$ complete,
$({\mathcal X}_G, \|\cdot\|_{G,{\mathcal X}})$ is a Banach space.
\label{pr:varban}
\end{proposition}
\noindent {\bf Proof.}$ \;$ (i) Let $f \in {\rm span} \,G$, then $f = \sum_{i=1}^n a_i g_i$,
for real numbers $a_i$ and $g_i \in G$. We assume the $a_i$ are
not all zero since $0$ is in ${\mathcal X}_G$. Then
$f = \lambda \sum_{i=1}^n (|a_i|/\lambda)\,({\rm sgn}(a_i)\, g_i)$, where
$\lambda = \sum_{i=1}^n |a_i|$. Thus, $f$ is in
$\lambda {\rm conv} (\pm G) \subseteq \lambda B_{G,{\mathcal X}}$.
So $\|f\|_{G,{\mathcal X}} \leq \lambda$ and $f$ is in ${\mathcal X}_G$.
Likewise if $f$ is in ${\mathcal X}_G$, then for some $\lambda > 0$,
$f/\lambda$ is in
$$B_{G,{\mathcal X}} = {\rm cl}_{\mathcal X}({\rm conv}(\pm G)) \subseteq {\rm cl}_{\mathcal X}({\rm span}(G)),$$
so $f$ is in ${\rm cl}_{\mathcal X}({\rm span}(G))$.
(ii) Suppose $G$ is fundamental. Then
${\mathcal X} = {\rm cl}_{\mathcal X}({\rm span} \,G) = {\rm cl}_{\mathcal X}({\mathcal X}_G)$ by part (i). Conversely,
if ${\mathcal X}_G$ is dense in ${\mathcal X}$, then
${\mathcal X} = {\rm cl}_{\mathcal X}({\mathcal X}_G) \subseteq {\rm cl}_{\mathcal X}({\rm span} \,G) \subseteq {\mathcal X}$,
and $G$ is fundamental.
(iii)
Let $\{f_n\}$ be a Cauchy sequence in ${\mathcal X}_G$. By
(\ref{eq:basicIneq}) $\{f_n\}$ is a Cauchy sequence in ${\mathcal X}$ and
has a limit $f$ in ${\mathcal X}$, since ${\mathcal X}$ is complete. The sequence
$\{\|f_n\|_{G,{\mathcal X}}\}$ is bounded, that is, there is a positive number $M$ such that for all
$n$, $f_n/M \in B_{G,{\mathcal X}}$. Since $B_{G,{\mathcal X}}$ is closed in ${\mathcal X}$, $f/M$
is also in $B_{G,{\mathcal X}}$. Hence $\|f\|_{G,{\mathcal X}} \leq M$ and $f$ is in
${\mathcal X}_G$. Now given $\epsilon > 0$ choose a positive integer $N$ such
that $\|f_n - f_k\|_{G,{\mathcal X}} < \epsilon$ for $n, k \geq N$. In
particular fix $n \geq N$, and consider a variable integer $k \geq
N$. Then $\|f_k - f_n\|_{G,{\mathcal X}} < \epsilon$. So $(f_k -
f_n)/\epsilon \in B_{G,{\mathcal X}}$, and $f_k \in f_n + \epsilon B_{G,{\mathcal X}}$
for all $k \geq N$. But $f_n + \epsilon B_{G,{\mathcal X}}$ is closed in
${\mathcal X}$. Hence $f \in f_n + \epsilon B_{G,{\mathcal X}}$, and $\|f -
f_n\|_{G,{\mathcal X}} \leq \epsilon$. So the sequence converges to
$f$ in ${\mathcal X}_G$.
\hfill $\Box$ \vspace{0.5cm}
The following example illustrates several of the above concepts.
Take ${\mathcal X}$ to be a real
separable Hilbert space with orthonormal basis $\{e_n : n = 1, 2,
\ldots\}$. Let $G = \{e_n : n = 1, 2, \ldots\}$. Then
$$B_{G,{\mathcal X}} =
\left \{ \sum_{n \geq 1}
c_ne_n - \sum_{n \geq 1} d_ne_n\; : \forall n,\; c_n \geq 0, d_n \geq 0,
\sum_{n \geq 1}(c_n + d_n) = 1 \right \}.$$
\noindent Now $f \in {\mathcal X}$ is of the form $\sum_{n \geq 1} a_ne_n$
where $\|f\|_{{\mathcal X}} = \sqrt{\sum_{n \geq 1} a_n^2}$, and if $f \in
{\mathcal X}_G$, then $a_n = \lambda (c_n - d_n)$ for all $n$ and suitable
$c_n, d_n$. The minimal $\lambda$ can
be obtained by taking $a_n = \lambda c_n$ when $a_n \geq 0$, and
$a_n = -\lambda d_n$ when $a_n < 0$. It then follows that
$\|f\|_{G,{\mathcal X}} = \sum_{n \geq 1} |a_n|$. Hence when ${\mathcal X}$ is
isomorphic to $\ell_2$, ${\mathcal X}_G$ is isomorphic to
$\ell_1$. As $G$ is fundamental, by part (ii) above, the
closure of $\ell_1$ in $\ell_2$ is $\ell_2$.
This provides an example where ${\mathcal X}_G$ is not
a closed subspace of ${\mathcal X}$ and so, while it is a Banach
space w.r.t. the variational norm, it is not complete in
the ambient-space norm.\\
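To make the proper inclusion concrete, the element $f = \sum_{n \geq 1} n^{-1} e_n$
belongs to ${\mathcal X}$, since $\sum_{n \geq 1} n^{-2} < \infty$, but not to ${\mathcal X}_G$,
since $\sum_{n \geq 1} n^{-1} = \infty$.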
\section{Bochner integrals}
The Bochner integral replaces numbers with functions and represents
a broad-ranging extension,
generalizing the Lebesgue integral from real-valued
functions to functions with values in an arbitrary Banach space.
Key definitions and theorems are summarized here for convenience,
following the treatment in \cite{za61} (cf. \cite{lc85}). Bochner
integrals are used here (as in \cite{kavk09}) in order to prove a
bound on variational norm.
Let $(Y, \mu)$ be a measure space. Let ${\mathcal X}$ be a Banach space
with norm $\|\cdot\|_{{\mathcal X}}$. A function $s: Y \to {\mathcal X}$ is {\it simple} if it
has a finite set of nonzero values $f_j \in {\mathcal X}$,
each on a measurable subset $P_j$ of $Y$ with $\mu(P_j) < \infty$,
$1 \leq j \leq m$, and the $P_j$ are pairwise-disjoint.
Equivalently, a function $s$ is simple if it can be written in the
following form:
\begin{equation}
\hspace{1 in} s = \sum_{j=1}^m \kappa(f_j) \chi_{P_j} ,
\label{eq:sf}\end{equation}
where $\kappa(f_j):Y \to {\mathcal X}$ denotes the
constant function with value $f_j$ and $\chi_{P}$ denotes the
characteristic function of a subset $P$ of $Y$.
This decomposition is nonunique and we identify two
functions if they agree $\mu$-almost everywhere, i.e., the
subset of $Y$ on which they disagree has $\mu$-measure zero.
Define an ${\mathcal X}$-valued function $I$ on the simple functions
by setting for $s$ of form (\ref{eq:sf})
$$I(s,\mu) := \sum_{j=1}^m \mu(P_j) f_j \in {\mathcal X}.$$
This is independent of the decomposition of $s$ \cite[pp.130--132]{za61}.
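For example, if $Y = [0,2]$ with Lebesgue measure $\mu$ and $f_1, f_2 \in {\mathcal X}$, then
$s = \kappa(f_1) \chi_{[0,1]} + \kappa(f_2) \chi_{(1,2]}$ is simple and
$I(s,\mu) = f_1 + f_2$; the value of $I$ is again an element of ${\mathcal X}$ rather than a number.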
A function $h:Y \to {\mathcal X}$ is {\it strongly measurable} (w.r.t. $\mu$) if there exists a sequence $\{ s_k\}$ of
simple functions such that for $\mu$-a.e. $y \in Y$
$$\lim_{k \to \infty} \|s_k(y) - h(y)\|_{{\mathcal X}} = 0.$$
A function
$h:Y \to {\mathcal X}$ is {\it Bochner integrable} (with respect to $\mu$)
if it is strongly measurable and there exists a sequence $\{s_k\}$
of simple functions $s_k: Y \to {\mathcal X}$ such that
\begin{equation}
\hspace{1 in} \lim_{k \to \infty} \int_Y \|s_k(y) - h(y)\|_{{\mathcal X}} d\mu(y) = 0.
\label{eq:bis} \end{equation}
If $h$ is strongly measurable and (\ref{eq:bis}) holds, then
the sequence $\{I(s_k,\mu)\}$ is Cauchy and by completeness converges to an
element in ${\mathcal X}$. This element, which is independent of the
sequence of simple functions satisfying (\ref{eq:bis}), is called
the {\it Bochner integral of} $h$ (w.r.t. $\mu$) and denoted
$$I(h,\mu) \;\;\; \mbox{or} \;\;\; {\mathcal B}-\int_Y h(y) d\mu(y).$$
Let ${\mathcal L}^1(Y,\mu;{\mathcal X})$ denote the linear space of all
strongly measurable functions from $Y$
to ${\mathcal X}$ which are Bochner integrable w.r.t. $\mu$;
let $L^1(Y,\mu;{\mathcal X})$ be the corresponding set of equivalence
classes (modulo $\mu$-a.e. equality). It is easily shown that
equivalent functions have the same Bochner integral.
Then the following elegant characterization holds.
\begin{theorem}[Bochner]
Let $({\mathcal X},\|\cdot\|_{{\mathcal X}})$ be a Banach space and $(Y, \mu)$ a
measure space. Let $h:Y \to {\mathcal X}$ be strongly measurable. Then
$$h \in {\mathcal L}^1(Y,\mu;{\mathcal X}) \mbox{ if and only
if } \int_Y \|h(y)\|_{{\mathcal X}} d\mu(y) < \infty.$$\label{th:bo}
\end{theorem}
\noindent A consequence of this theorem is that
$I: L^1(Y,\mu;{\mathcal X}) \to {\mathcal X}$ is a continuous linear operator
and
\begin{equation}
\hspace{-.3 in}\|I(h,\mu)\|_{\mathcal X} = \left \| {\mathcal B}-\int_Y h(y) \,d\mu(y) \right \|_{\mathcal X}
\leq \|h\|_{L^1(Y,\mu;{\mathcal X})} := \int_Y \|h(y)\|_{\mathcal X} d\mu(y).
\label{ineq:boch}
\end{equation}
In particular, the Bochner norm of $s$, $\|s\|_{L^1(Y,\mu;{\mathcal X})}$,
is $\sum_{j=1}^m \mu(P_j) \|f_j\|_{\mathcal X}$, where $s$ is a simple function
of the form (\ref{eq:sf}).
For $Y$ a measure space and ${\mathcal X}$ a Banach space,
$h:Y \to {\mathcal X}$ is {\it weakly measurable} if for every
continuous linear
functional $F$ on ${\mathcal X}$ the composite real-valued function
$F \circ h$ is measurable \cite[pp. 130--134]{yo65}.
If $h$ is measurable, then it is weakly measurable
since measurable followed by continuous is measurable:
for $U$ open in ${\mathbb R}$,
$(F \circ h)^{-1}(U) = h^{-1}(F^{-1}(U))$.
Recall that a topological space
is {\it separable} if it has a countable dense subset.
Let $\lambda$ denote Lebesgue measure on ${\mathbb R}^d$ and let $\Omega \subseteq {\mathbb R}^d$
be $\lambda$-measurable, $d \geq 1$.
Then $L^q(\Omega,\lambda)$ is separable when $1 \leq q < \infty$; e.g.,
\cite[p. 208]{mb59}.
A function $h:Y \to {\mathcal X}$ is {\it $\mu$-almost separably valued} ($\mu$-a.s.v.) if there exists a $\mu$-measurable subset
$Y_0 \subset Y$ with $\mu(Y_0)=0$ such that $h(Y \setminus Y_0)$ is
a separable subset of ${\mathcal X}$.
\begin{theorem}[Pettis]
Let $({\mathcal X},\|\cdot\|_{{\mathcal X}})$ be a Banach space and $(Y, \mu)$ a
measure space. Suppose $h:Y \to {\mathcal X}$. Then $h$ is strongly measurable
if and only if $h$ is weakly measurable and $\mu$-a.s.v.
\label{th:pe}
\end{theorem}
The following basic result (see, e.g., \cite{sb33}) was later extended
by Hille to the more general class of closed operators. But we only
need the result for bounded linear functionals, in which case the
Bochner integral coincides with ordinary integration.
\begin{theorem}
Let $(Y,\nu)$ be a measure space, let ${\mathcal X}$, ${\mathcal X}'$ be Banach
spaces, and let $h \in {\mathcal L}^1(Y,\nu;{\mathcal X})$. If $T: {\mathcal X} \to {\mathcal X}'$ is a
bounded linear operator, then $T \circ h \in {\mathcal L}^1(Y,\nu;{\mathcal X}')$ and
$$T\left( {\mathcal B}-\int_Y h(y) \,d\nu(y) \right) = {\mathcal B}-\int_Y (T \circ h)(y) \,d\nu(y).$$
\label{pr:blf}\end{theorem}
There is a mean-value
theorem for Bochner integrals (Diestel and Uhl
\cite[Lemma 8, p. 48]{du77}). We give their
argument with a slightly clarified reference to the Hahn-Banach theorem.
\begin{lemma}
Let $(Y,\nu)$ be a finite measure space, let ${\mathcal X}$ be a Banach space, and
let $h:Y \to {\mathcal X}$ be Bochner integrable w.r.t. $\nu$.
Then $${\mathcal B}-\int_Y h(y) \,d\nu(y) \;\in\; \nu(Y) \;{\rm cl}_{\mathcal X}({\rm conv}(\{\pm h(y): y \in Y\})).$$
\label{pr:mvt}
\end{lemma}
\noindent {\bf Proof.}$ \;$
Without loss of generality, $\nu(Y) = 1$.
Suppose $f := I(h,\nu) \notin {\rm cl}_{\mathcal X}({\rm conv}(\{\pm h(y): y \in Y\}))$.
By a consequence of the
Hahn-Banach theorem
(given as Theorem \ref{th:mazur} in Appendix II below),
there is a continuous linear functional $F$ on ${\mathcal X}$ such that $F(f) >
\sup_{y \in Y} F(h(y))$. Hence, using $\nu(Y) = 1$ and Theorem \ref{pr:blf},
$$\sup_{y \in Y} F(h(y)) \geq
\int_Y F(h(y)) d\nu(y) = F(f) > \sup_{y \in Y} F(h(y)),$$
which is absurd. \hfill $\Box$ \vspace{0.5cm}
\section{Spaces of Bochner integrable functions}
In this section, we derive a few consequences of the results from
the previous section which we shall need below.
A measurable function $h$ from
a measure space $(Y,\nu)$ to a normed linear space ${\mathcal X}$ is called
{\it essentially bounded} (w.r.t. $\nu$) if there exists a $\nu$-null set
$N$ for which $$\sup_{y \in Y \setminus N} \|h(y)\|_{\mathcal X} < \infty.$$ Let
${\mathcal L}^{\infty}(Y,\nu; {\mathcal X})$ denote the linear space of all
strongly measurable, essentially
bounded functions from $(Y,\nu)$ to ${\mathcal X}$. Let $L^\infty(Y, \nu; {\mathcal X})$ be
its quotient space mod the relation of equality $\nu$-a.e.
This is a Banach space with norm
$$\|h\|_{L^{\infty}(Y,\nu;{\mathcal X})} := \inf \{B \geq 0: \exists \; \nu \mbox{-null }
N \subset Y \; \ni \;\|h(y)\|_{\mathcal X} \leq B, \; \forall y \in Y \setminus N \}.$$
To simplify notation, we sometimes write $\|h\|_\infty$ for
$\|h\|_{L^{\infty}(Y,\nu;{\mathcal X})}$.
Note that if $\|h\|_\infty = c$, then $\|h(y)\|_{\mathcal X} \leq c$
for $\nu$-a.e. $y$. Indeed, for each positive integer $k$,
$\|h(y)\|_{\mathcal X} \leq c + (1/k)$ for $y$ not in a set $N_k$ of measure zero,
so $\|h(y)\|_{\mathcal X} \leq c$ for $y$ not in the union $\bigcup_{k\geq 1} N_k$,
which is also a set of measure zero.
We also have a useful fact whose proof is immediate.
\begin{lemma}
For every measure space $(Y,\mu)$ and Banach space ${\mathcal X}$,
the natural map $\kappa_{\mathcal X}: {\mathcal X} \to L^\infty(Y,\mu;{\mathcal X})$
associating to each element $g \in {\mathcal X}$ the constant function
from $Y$ to ${\mathcal X}$ given by $(\kappa_{\mathcal X}(g))(y) \equiv g$ for all $y$ in $Y$
is an isometric linear embedding.
\end{lemma}
\begin{lemma}
Let ${\mathcal X}$ be a separable Banach space, let $(Y,\mu)$ be a measure space,
and let $w:Y \to {\mathbb R}$ and $\Psi:Y \to {\mathcal X}$ be $\mu$-measurable functions.
Then $w \Psi$ is strongly measurable.
\label{lm:sm}
\end{lemma}
\noindent {\bf Proof.}$ \;$
By definition, $w \Psi$ is the function from $Y$ to ${\mathcal X}$ defined by
$$w \Psi: \; y \mapsto w(y) \Psi(y),$$
where the multiplication is that of a Banach space element by a real number.
Then $w \Psi$ is measurable because it is obtained from a pair of
measurable functions by applying scalar multiplication which is continuous.
Hence, by separability, Pettis' Theorem \ref{th:pe}, and the
fact that measurable implies weakly measurable, we have
strong measurability for $w \Psi$ (cf. \cite[Lemma 10.3]{lc85}).
\hfill $\Box$ \vspace{0.5cm}
If $(Y,\nu)$ is a finite measure space, ${\mathcal X}$ is a Banach space,
and $h:Y \to {\mathcal X}$ is
strongly measurable and essentially bounded, then $h$ is Bochner
integrable by Theorem \ref{th:bo}. The following lemma, which follows from
Lemma \ref{lm:sm}, allows us to
weaken the hypothesis on the function by further constraining the space ${\mathcal X}$.
\begin{lemma}
Let $(Y,\nu)$ be a finite measure space, ${\mathcal X}$ a separable Banach space,
and $h:Y \to {\mathcal X}$ be $\nu$-measurable and essentially bounded w.r.t. $\nu$.
Then $h \in {\mathcal L}^1(Y,\nu; {\mathcal X})$ and
$$\int_Y \|h(y)\|_{\mathcal X} d\nu(y) \leq \nu(Y) \|h\|_{L^\infty(Y,\nu; {\mathcal X})}.$$
\label{lm:bi}
\end{lemma}
Let $w \in {\mathcal L}^1(Y,\mu)$, and
let $\mu_w$ be defined for $\mu$-measurable
$S \subseteq Y$ by $\mu_w(S) := \int_S |w(y)| d\mu(y)$.
For $t \neq 0$, ${\rm sgn}(t) := t/|t|$.
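For example, if $Y = (0,\infty)$ with $\mu$ Lebesgue measure and $w(t) = e^{-t}$, then
$\mu_w$ has density $e^{-t}$ with respect to $\mu$ and
$\mu_w(Y) = \|w\|_{L^1(Y,\mu)} = 1$, so $\mu_w$ is a probability measure.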
\begin{theorem}
Let $(Y,\mu)$ be a measure space,
${\mathcal X}$ a separable Banach space; let $w \in {\mathcal L}^1(Y,\mu)$ be nonzero
$\mu$-a.e., let $\mu_w$ be the measure defined above, and let
$\Phi:Y \to {\mathcal X}$ be $\mu$-measurable. If one of
the Bochner integrals
$${\mathcal B}-\int_Y w(y) \Phi(y) d\mu(y), \; \;\;
{\mathcal B}-\int_Y {\rm sgn}(w(y)) \Phi(y) d\mu_w(y)$$
exists, then both exist and are equal.
\label{th:boeq}
\end{theorem}
\noindent {\bf Proof.}$ \;$
By Lemma \ref{lm:sm}, both $w \Phi$ and $({\rm sgn} \circ w) \Phi$ are strongly measurable.
Hence, by Theorem \ref{th:bo}, the respective Bochner integrals
exist if and only
if the ${\mathcal X}$-norms of the respective integrands have finite
ordinary integral. But
\begin{equation}
\int_Y \| [({\rm sgn} \circ w)\Phi ](y)\|_{{\mathcal X}} d\mu_w(y)
= \int_Y \|w(y) \Phi(y)\|_{{\mathcal X}} d\mu(y),
\label{eq:same}
\end{equation}
so the Bochner integral $I(({\rm sgn} \circ w)\Phi, \mu_w)$ exists exactly
when $I(w \Phi, \mu)$ does.
Further, the respective Bochner integrals are equal since for
any continuous linear functional $F$ in ${\mathcal X}^*$, by Theorem \ref{pr:blf}
\begin{eqnarray*}
F\left({\mathcal B}-\int_Y w(y) \Phi(y) d\mu(y) \right)
= \int_Y F(w(y) \Phi(y)) d\mu(y) \\
= \int_Y w(y) F(\Phi(y)) d\mu(y)
= \int_Y {\rm sgn}(w(y))|w(y)| F(\Phi(y)) d\mu(y)\\
= \int_Y {\rm sgn}(w(y)) F(\Phi(y)) d\mu_w(y)
= \int_Y F({\rm sgn}(w(y)) \Phi(y)) d\mu_w(y) \\
= F\left({\mathcal B}-\int_Y {\rm sgn}(w(y)) \Phi(y) d\mu_w(y)\right).
\label{eq:fin-boch}
\end{eqnarray*}
\hfill $\Box$ \vspace{0.5cm}
\begin{corollary}
Let $(Y,\mu)$ be a $\sigma$-finite measure space,
${\mathcal X}$ a separable Banach space, $w: Y \to {\mathbb R}$ be in ${\mathcal L}^1(Y,\mu)$
and $\Phi:Y \to {\mathcal X}$ be in ${\mathcal L}^\infty(Y,\mu; {\mathcal X})$. Then $w \Phi$ is
Bochner integrable w.r.t. $\mu$.
\label{co:bi}
\end{corollary}
\noindent {\bf Proof.}$ \;$
By Lemma \ref{lm:sm}, $({\rm sgn} \circ w) \Phi$ is strongly measurable,
and Lemma \ref{lm:bi} then implies that the Bochner integral
$I(({\rm sgn} \circ w) \Phi, \mu_w)$ exists since $\mu_w(Y) = \|w\|_{L^1(Y,\mu)} < \infty$.
So $w \Phi$ is Bochner integrable by Theorem \ref{th:boeq}.
\hfill $\Box$ \vspace{0.5cm}
\section{Main theorem}
In the next result, we show that certain types of integrands yield
integral formulas for functions $f$ in a Banach space of $L^p$-type both
pointwise and at the level of Bochner integrals. Furthermore, the variational norm of $f$ is shown to be bounded
by the $L^1$-norm of the weight function from the integral formula.
Equations (\ref{eq:th2}) and (\ref{eq:fbi}) and part (iv) of this theorem were
derived in a similar fashion by one of us with K\r{u}rkov\'{a} in \cite{kavk09}
under more stringent hypotheses; see also \cite[eq. (12)]{fg95-paris}.
\begin{theorem}
Let $(\Omega,\rho)$, $(Y,\mu)$ be $\sigma$-finite measure spaces,
let $w$ be in ${\mathcal L}^1(Y,\mu)$, let
${\mathcal X} = L^q(\Omega,\rho)$, $q \in [1,\infty)$, be separable, let
$\phi: \Omega \times Y \to {\mathbb R}$ be $\rho \times \mu$-measurable, let
$\Phi: Y \to {\mathcal X}$ be defined for each $y$ in $Y$ by
$\Phi(y)(x) := \phi(x,y)$ for $\rho$-a.e. $x \in \Omega$
and suppose that for some $M < \infty$,
$\|\Phi(y)\|_{\mathcal X} \leq M$ for $\mu$-a.e. $y$.
Then the following hold:\\
(i) For $\rho$-a.e. $x \in \Omega$, the integral
$\int_Y w(y) \phi(x,y)d\mu(y)$ exists and is finite.\\
(ii) The function $f$ defined by
\begin{equation}
\;\;\; f(x) = \int_Y w(y) \phi(x,y)d\mu(y)
\label{eq:th}
\end{equation}
is in ${\mathcal L}^q(\Omega, \rho)$ and its equivalence class, also denoted
by $f$, is in $L^q(\Omega, \rho) = {\mathcal X}$ and satisfies
\begin{equation}
\;\;\; \|f\|_{\mathcal X} \leq \|w\|_{L^1(Y,\mu)}\;M.
\label{eq:th2}
\end{equation}
(iii) The function $\Phi$ is measurable and hence in ${\mathcal L}^\infty(Y,\mu;{\mathcal X})$,
and $f$ is the Bochner integral of $w \Phi$ w.r.t. $\mu$, i.e.,
\begin{equation}
f = {\mathcal B}-\int_Y (w \Phi)(y) d\mu(y).
\label{eq:fbi}
\end{equation}
(iv) For $G = \{\Phi(y) : y \in Y, \; \|\Phi(y)\|_{\mathcal X} \leq
\|\Phi\|_{L^\infty(Y,\mu; {\mathcal X})} \}$, $f$ is in ${\mathcal X}_G$, and
\begin{equation}
\|f\|_{G,{\mathcal X}} \leq \|w\|_{L^1(Y,\mu)}
\label{ineq:varL1}
\end{equation}
and as in (\ref{eq:notation})
\begin{equation}
\|f\|_{\mathcal X} \leq
\|f\|_{G,{\mathcal X}} s_{G,{\mathcal X}} \leq \|w\|_1 \|\Phi\|_\infty.\;\;\;
\label{ineq:thm}
\end{equation}
\label{th:main}
\end{theorem}
\noindent {\bf Proof.}$ \;$
(i) Consider the function $(x, y) \longmapsto |w(y)||\phi(x,y)|^q$. This
is a well-defined $\rho \times \mu$-measurable function on $\Omega
\times Y$. Furthermore its repeated integral
$$\int_Y \int_{\Omega} \, |w(y)||\phi(x,y)|^q d\rho(x) d\mu(y)$$
\noindent exists and is bounded by $\|w\|_1 M^q$ since $\Phi(y) \in
L^q(\Omega,\rho)$ with $\|\Phi(y)\|_q^q \leq M^q$ for $\mu$-a.e. $y$, and $w
\in L^1(Y,\mu)$. By Fubini's Theorem \ref{th:fubini} the
function $y \longmapsto |w(y)||\phi(x,y)|^q$ is in $L^1(Y,\mu)$ for
$\rho$-a.e. $x$. But the inequality
$$|w(y)||\phi(x,y)| \leq \max\{|w(y)||\phi(x,y)|^q,|w(y)|\} \leq
(|w(y)||\phi(x,y)|^q + |w(y)|) $$
\noindent shows that the function $y \longmapsto |w(y)||\phi(x,y)|$ is
dominated by the sum of two integrable functions. Hence the
integrand in the definition of $f(x)$ is integrable for $\rho$-a.e. $x$, and
$f$ is well-defined almost everywhere.
(ii) The function $\theta(u) = u^q$ is a convex function for $u \geq 0$.
Accordingly by Jensen's inequality (Theorem \ref{th:jensen} below),
$$ \theta \left (\int_Y |\phi(x,y)| d\sigma(y) \right ) \leq \int_Y
\theta(|\phi(x,y)|)d\sigma(y)$$
\noindent provided both integrals exist and $\sigma$ is a probability
measure on the measurable space $Y$. We take $\sigma$ to be defined
by the familiar formula
$$ \sigma(A) = \frac{\int_A |w(y)| d\mu(y)}{\int_Y |w(y)|
d\mu(y)}$$
\noindent for $\mu$-measurable sets $A$ in $Y$, so that integration with
respect to $\sigma$ reduces to a scale factor times integration of
$|w(y)|\,d\mu(y)$.
Since we have established that both $|w(y)||\phi(x,y)|$ and
$|w(y)||\phi(x,y)|^q$ are integrable with respect to $\mu$ for $\rho$-a.e.
$x$, we obtain
$$ |f(x)|^q \leq \|w\|_1^q\, \theta\left(\int_Y |\phi(x,y)| d\sigma(y)\right) \leq
\|w\|_1^q \int_Y \theta(|\phi(x,y)|)d\sigma(y)$$
$$\;\;\; = \|w\|_1^{q-1}\int_Y
|w(y)||\phi(x,y)|^q d\mu(y) $$
\noindent for $\rho$-a.e. $x$. But we can now integrate both sides with respect to
$d\rho(x) $ over $\Omega$ because of the integrability noted above in
connection with Fubini's Theorem. Thus $f \in {\mathcal X} =
L^q(\Omega,\rho)$ and $\|f\|^q_{\mathcal X} \leq \|w\|_1^q \, M^q$, again
interchanging order.
(iii) First we show that the preimage under $\Phi$ of the open ball centered at $g$ of radius ${\varepsilon}$,
namely $B(g,{\varepsilon}) := \{y: \|\Phi(y) - g\|_{\mathcal X} < {\varepsilon} \}$, is a
$\mu$-measurable subset
of $Y$ for each $g$ in ${\mathcal X}$ and ${\varepsilon} > 0$. Note that
$$\|\Phi(y) - g\|_{\mathcal X}^q = \int_{\Omega} |\phi(x,y) - g(x)|^q d\rho(x)$$
\noindent for all $y$ in $Y$ where $x \mapsto \phi(x,y)$ and $x \mapsto g(x)$
are $\rho$-measurable functions representing the elements $\Phi(y)$ and $g$
belonging to ${\mathcal X} = L^q(\Omega,\rho)$. Since $(Y,\mu)$ is $\sigma$-finite, we
can find a strictly positive function $w_0$ in ${\mathcal L}^1(Y,\mu)$. (For example,
let $w_0 = \sum_{n \geq 1} (1/n^2) \chi_{Y_n}$, where $\{Y_n: n \geq 1 \}$
is a countable disjoint partition of $Y$ into $\mu$-measurable sets of finite
measure.) Then $w_0(y) |\phi(x,y) - g(x)|^q$ is a $\rho \times \mu$-measurable
function on $\Omega \times Y$, and
$$\int_Y \int_{\Omega} w_0(y) |\phi(x,y) - g(x)|^q d\rho(x) d\mu(y) \leq
\|w_0\|_{L^1(Y,\mu)} \,(M + \|g\|_{\mathcal X})^q.$$
By Fubini's Theorem \ref{th:fubini},
$y \mapsto w_0(y) \|\Phi(y) - g\|_{\mathcal X}^q$ is $\mu$-measurable. Since $w_0$ is $\mu$-measurable and strictly positive,
$y \mapsto \|\Phi(y) - g\|_{\mathcal X}^q$ is also $\mu$-measurable and so
$B(g,{\varepsilon})$ is measurable. Hence, $\Phi: Y \to {\mathcal X}$ is measurable.
Thus, $\Phi$ is essentially bounded, with essential sup
$\|\Phi\|_{L^\infty(Y,\mu;{\mathcal X})} \leq M$. (In (\ref{eq:th2}), $M$ can be
replaced by this essential sup.)
By Corollary \ref{co:bi}, $w \Phi$ is Bochner integrable.
To prove that $f$ is the Bochner integral, using Theorem \ref{th:boeq},
we show that for each bounded linear functional
$F \in {\mathcal X}^*$, $F(I(({\rm sgn} \circ w) \Phi,\mu_w)) = F(f)$. By
the Riesz representation theorem \cite[p. 316]{masa01}, for any such $F$
there exists a (unique) $g_F \in {\mathcal L}^{p}(\Omega,\rho)$, $p = 1/(1-q^{-1})$,
such that for
all $g \in {\mathcal L}^q(\Omega,\rho)$, $F(g) = \int_{\Omega} g_F(x) g(x)
d\rho(x)$.
By Theorem \ref{pr:blf},
$$F(I(({\rm sgn} \circ w) \Phi,\mu_w)) = \int_Y F\left({\rm sgn}(w(y)) \Phi(y)\right) d\mu_w(y).$$
But for $y \in Y$, $F({\rm sgn}(w(y)) \Phi(y)) = {\rm sgn}(w(y)) F(\Phi(y))$, so
$$F(I(({\rm sgn} \circ w) \Phi,\mu_w)) =
\int_Y \int_{\Omega} w(y) g_F(x) \phi(x,y) d\rho(x) d\mu(y).$$
Also, using (\ref{eq:th}),
$$F(f) = \int_{\Omega} g_F(x) f(x) d\rho(x) = \int_{\Omega} \int_Y w(y)
g_F(x) \phi(x,y) d\mu(y) d\rho(x).$$
The integrand of the iterated integrals is measurable with
respect to the product measure $\rho \times \mu$, so by
Fubini's Theorem the iterated integrals are equal
provided that one of the corresponding absolute integrals is finite. Indeed,
\begin{equation}
\int_Y \int_{\Omega} |w(y) g_F(x) \phi(x,y)| d\rho(x) d\mu(y)
= \int_Y \|g_F \Phi(y) \|_{L^1(\Omega,\rho)} d\mu_w(y).\;\;
\label{eq:fub}
\end{equation}
By H\"{o}lder's inequality, for every $y$,
$$\|g_F \Phi(y) \|_{L^1(\Omega,\rho)} \leq \|g_F\|_{L^p(\Omega,\rho)}
\|\Phi(y)\|_{L^q(\Omega,\rho)},$$ using the fact that ${\mathcal X} = L^q(\Omega,\rho)$.
Therefore, by the essential boundedness of $\Phi$ w.r.t. $\mu$,
the integrals in (\ref{eq:fub}) are at most
$$\|g_F\|_{L^p(\Omega,\rho)}\, \|\Phi\|_{{\mathcal L}^{\infty}(Y,\mu;{\mathcal X})}\,
\|w\|_{L^1(Y,\mu)} < \infty.$$
Hence, $f$ is the Bochner integral of $w \Phi$ w.r.t. $\mu$.
(iv) We again use Lemma \ref{pr:mvt}. Let $Y_0$ be a measurable
subset of $Y$ with $\mu(Y_0) = 0$ such that, for $Y' = Y \setminus Y_0$,
$\Phi(Y') = G$; see the remark following the definition of
essential supremum. But restricting ${\rm sgn} \circ w$ and $\Phi$ to $Y'$,
one has $$f = {\mathcal B}-\int_{Y'} {\rm sgn}(w(y)) \Phi(y) d\mu_w(y);$$
hence, $f \in \mu_w(Y) {\rm cl}_{\mathcal X} {\rm conv} (\pm G)$. Thus,
$\|w\|_{L^1(Y,\mu)} = \mu_w(Y) \geq \|f\|_{G,{\mathcal X}}$.
\hfill $\Box$ \vspace{0.5cm}
\section{An example involving the Bessel potential}
Here we review an example related to the Bessel functions
which was considered in \cite{kks09} for $q=2$.
In the following section, this Bessel-potential example is used to find
an inequality related to the Gamma function.
Let ${\mathcal F}$ denote the Fourier transform, given for $f \in L^1({\mathbb R}^d,\lambda)$
and $s \in {\mathbb R}^d$ by
$$\hat{f}(s) = {\mathcal F}(f)(s) = (2 \pi)^{-d/2} \int_{{\mathbb R}^d} f(x) \exp(-i s \cdot x) \; dx, $$
where $\lambda$ is Lebesgue measure and $dx$ means $d\lambda(x)$. For $r>0$, let
$${\hat {\beta_r}}(s)= (1+\|s\|^2)^{-r/2}\,.$$
Since the Fourier transform is an isometry of ${\mathcal L}^2$ onto itself (Parseval's
identity), and ${\hat {\beta_r}}$ is in ${\mathcal L}^2({\mathbb R}^d)$ for $r > d/2$ (which
we now assume),
there is a unique function $\beta_r$, called the {\em Bessel potential}
of order $r$, having
${\hat {\beta_r}}$ as its Fourier transform. See, e.g., \cite[p. 252]{ada75}.
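For instance, in dimension $d=1$ with $r=2$ one finds $\beta_2(x) = \sqrt{\pi/2}\, e^{-|x|}$,
since under the above convention the Fourier transform of $x \mapsto e^{-|x|}$ is
$s \mapsto \sqrt{2/\pi}\,(1+s^2)^{-1} = \sqrt{2/\pi}\,\hat{\beta_2}(s)$.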
If $1 \leq q <
\infty$ and $r > d/q$, then ${\hat {\beta_r}} \in {\mathcal L}^q({\mathbb R}^d)$
and
\begin{equation}
\hspace{1 in} \|\hat{\beta_r}\|_{{\mathcal L}^q} = \pi^{d/2q} \left
(\frac{\Gamma(qr/2 - d/2)}{\Gamma(qr/2)} \right )^{1/q}.
\label{eq:lbeta}
\end{equation} \label{lm:Lbes}
\noindent
Indeed, by radial symmetry,
$ (\|{\hat {\beta_r}}\|_{{\mathcal L}^q})^q =
\int_{{\mathbb R}^d} (1 + \| x \|^2)^{-qr/2} dx =\omega_d I$, where
$I = \int_0^{\infty}
(1+\rho^2)^{-qr/2} \rho^{d-1}d\rho$ and
$\omega_d := 2 \pi^{d/2} /\Gamma(d/2)$ is the area of the unit
sphere in ${\mathbb R}^d$ \cite[p. 303]{co60}. Substituting $\sigma
=\rho^2$ and $d\rho = (1/2) \sigma^{-1/2} d\sigma$, and using
\cite[ p. 60]{ca77}, we find that
$$I = (1/2) \int_0^{\infty}
\frac{\sigma^{d/2 -1}}{(1+ \sigma)^{qr/2}} d\sigma =
\frac{\Gamma(d/2) \Gamma(qr/2 - d/2)}{2\Gamma(qr/2)},$$
establishing (\ref{eq:lbeta}).
For $b>0$, let $\gamma_b: {\mathbb R}^d \to {\mathbb R}$ denote the scaled Gaussian
$\gamma_b(x) = e^{-b\| x\|^2}$.
A simple calculation gives the $L^q$-norm of $\gamma_b$:
\begin{equation}
\hspace{1.4 in} \|\gamma_b\|_{{\mathcal L}^q} = (\pi/qb)^{d/2q}.
\label{eq:normg} \end{equation}
Indeed, using $\int_{-\infty}^\infty \exp(-t^2) dt = \pi^{1/2}$, we obtain:
$$\|\gamma_b\|_{{\mathcal L}^q}^q = \int_{{\mathbb R}^d} \exp(-b \|x\|^2)^q dx
= \left (\int_{{\mathbb R}} \exp(-qb\,t^2) dt \right )^d = (\pi/qb)^{d/2}.$$
\\
\\
\noindent We now express the Bessel potential as an integral combination of
Gaussians. The Gaussians are normalized in $L^q$ and the corresponding
weight function $w$ is explicitly given. The
integral formula is similar to one in Stein \cite{st70}. By our main
theorem, this is an example of
(\ref{eq:th}) and can be interpreted either as a pointwise integral
or as a Bochner integral.
\begin{proposition}
For $d$ a positive integer, $q \in [1, \infty)$, $r > d/q$, and $s
\in {\mathbb R}^d$
\begin{displaymath}
\hspace{1 in} \hat{\beta}_{r}(s) = \int_0^{\infty}w_r(t) \gamma_t^o(s) \, dt \,,
\end{displaymath}
where $$\gamma_t^o(s) = \gamma_t(s)/\|\gamma_t\|_{{\mathcal L}^q}$$
and
$$w_r(t) = (\pi/qt)^{d/2q} \, t^{r/2 - 1}\,
e^{-t}/\Gamma(r/2).$$
\label{pr:calpha}
\end{proposition}
\noindent {\bf Proof.}$ \;$ Let
$$I = \int_0^{\infty}t^{r/2-1}\,e^{-t} \, e^{-t\|s\|^2}\,dt.$$
Putting $u = t(1 +\|s\|^2)$ and $dt = du (1
+\|s\|^2)^{-1}$, we obtain
$$I = (1 + \|s\|^2)^{-r/2}
\int_0^{\infty}u ^{r/2-1}\,e^{-u}\,du = \hat{\beta}_{r}(s)
\Gamma(r/2).$$
\noindent Using the norm of the Gaussian
(\ref{eq:normg}), we arrive at
$$\hat{\beta}_{r}(s) = I / \Gamma(r/2) = \left (\int_0^{\infty} (\pi/qt)^{d/2q}\,
t^{r/2-1}\,e^{-t}\, \gamma^o_t(s)dt \right ) {\Large /\; }
\Gamma(r/2),$$
which is the result desired. \hfill $\Box$ \vspace{0.5cm}
Now we apply Theorem \ref{th:main} with $Y = (0, \infty)$
and $\phi(s,t) = \gamma_t^o(s) = \gamma_t(s)/\|\gamma_t\|_{L^q({\mathbb R}^d)}$
to bound the variational norm of $\hat{\beta_r}$ by the $L^1$-norm of
the weight function.
\begin{proposition} For $d$ a positive integer, $q \in [1, \infty)$,
and $r > d/q$,
$$\|\hat{\beta_r}\|_{G, {\mathcal X}} \leq (\pi/q)^{d/2q}
\frac{\Gamma(r/2 - d/2q)}{\Gamma(r/2)},$$ where $G =
\{\gamma_t^o : 0 < t < \infty \}$ and ${\mathcal X} = {\mathcal L}^q({\mathbb R}^d)$.
\label{co:brvar}
\end{proposition}
\noindent {\bf Proof.}$ \;$ By (\ref{ineq:varL1}) and Proposition \ref{pr:calpha}, we
have
$$\|\hat{\beta_r}\|_{G, {\mathcal X}} \leq \|w_r\|_{{\mathcal L}^1(Y)}
= k \int_0^{\infty} e^{-t} t^{r/2 - d/2q -1} dt,$$ where $k =
(\pi/q)^{d/2q} /\Gamma(r/2)$, and by definition, the integral is $\Gamma(r/2 -
d/2q)$. \hfill $\Box$ \vspace{0.5cm}
\section{Application: A Gamma function inequality}
The inequalities among the variational norm $\|\cdot\|_{G,{\mathcal X}}$,
the Banach space norm $\|\cdot\|_{\mathcal X}$, and the $L^1$-norm of the
weight function, established in the Main Theorem,
allow us to derive other inequalities. The Bessel
potential $\beta_r$ of order $r$ considered above provides an example.
Let $d$ be a positive integer, $q \in [1, \infty)$, and $r > d/q$.
By Proposition \ref{co:brvar} and (\ref{eq:lbeta}) of the last section, and
by (\ref{ineq:thm}) of the Main Theorem,
we have
\begin{equation} \hspace{.5 in}
\pi^{d/2q}\left (\frac{\Gamma(qr/2 - d/2)}{\Gamma(qr/2)}
\right )^{1/q} \leq (\pi/q)^{d/2q} \;\frac{\Gamma(r/2 -
d/2q)}{\Gamma(r/2)}.
\label{eq:gfineq}
\end{equation}
Hence, with $a = r/2 - d/2q\,$ and $s = r/2$, this becomes
\begin{equation}
\hspace{1 in} q^{d/2q} \;\left(\frac{\Gamma(qa)}{\Gamma(qs)}\right)^{1/q} \leq
\frac{\Gamma(a)}{\Gamma(s)}.
\label{eq:g2id}
\end{equation}
In fact, (\ref{eq:g2id}) holds if $s,a,d,q$ satisfy (i) $s > a > 0$ and
(ii) $s - a = d/2q$ for some $d \in {\mathbb Z}^+$ and $q \in [1,\infty)$. As $a>0$,
$r > d/q$. If
$T = \{t > 0 : t = d/2q \;$ for some $\; d \in {\mathbb Z}^+, q \in [1,\infty)\}$,
then $T = (0,\frac 1 2] \cup (0,1] \cup (0,\frac 3 2] \cup \ldots = (0,\infty)$,
so there always exist $d$, $q$ satisfying (ii);
the smallest such $d$ is $ \lceil 2 (s - a)\rceil$.
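As a quick check, take $q=2$, $a=1$, and $s = 3/2$, so that $s - a = 1/2 = d/2q$ with
$d = q = 2$: the left-hand side of (\ref{eq:g2id}) is
$2^{1/2}\,\left(\Gamma(2)/\Gamma(3)\right)^{1/2} = 1$, while the right-hand side is
$\Gamma(1)/\Gamma(3/2) = 2/\sqrt{\pi} \approx 1.13$, so the inequality holds strictly
in this instance.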
The inequality (\ref{eq:g2id}) suggests that the Main Theorem can be
used to establish other inequalities of interest among classical functions.
We now give a direct argument for the inequality.
Its independent proof confirms our
function-theoretic methods and provides additional generalization.
We begin by noting that in (\ref{eq:g2id}) it suffices to take
$d = 2q(s-a)$. If the inequality is true in that case, it is true
for all real numbers $d \leq 2q(s-a)$. Thus, we wish to establish that
\begin{eqnarray*} s \longmapsto \frac{\Gamma(qs)}{\Gamma(s)^q
q^{sq}}\end{eqnarray*}
\noindent is a strictly increasing function of $s$ for $q > 1$ and
$s > 0$. (For $q = 1$ this function is constant.)
Equivalently, we show that
\begin{eqnarray*} H_q(s) := \log{\Gamma(qs)} - q\log{\Gamma(s)}
- sq \log{q} \end{eqnarray*}
\noindent is a strictly increasing function of $s$ for $q > 1$ and
$s > 0$.
Differentiating with respect to $s$, we obtain:
\begin{eqnarray*} \frac{dH_q(s)}{ds} & = & q
\frac{\Gamma'(qs)}{\Gamma(qs)} - q\frac{\Gamma'(s)}{\Gamma(s)} - q
\log{q} \\
& = & q (\psi(qs) - \psi(s) - \log{q}) \\
& =: & q A_s(q)
\end{eqnarray*}
\noindent where $\psi$ is the digamma function. It suffices to
establish that $A_s(q) > 0$ for $q > 1$, $s > 0$. Note that $A_s(1)
= 0$. Now consider
\begin{eqnarray*} \frac{dA_s(q)}{dq} = s \psi'(qs) - \frac{1}{q}.
\end{eqnarray*}
\noindent This derivative is positive if and only if $\psi'(qs) >
\frac{1}{qs}$ for $q > 1$, $s >0 $.
It remains to show that $\psi'(x) > \frac{1}{x}$ for $x > 0$. Using
the power series for $\psi'$ \cite[6.4.10]{as}, we have
for $x > 0$,
\begin{eqnarray*} \psi'(x)& = & \sum_{n = 0}^{\infty} \frac{1}{(x +
n)^2} = \frac{1}{x^2} + \frac{1}{(x + 1)^2} + \frac{1}{(x + 2)^2} +
\ldots \\ & > & \frac{1}{x(x + 1)} + \frac{1}{(x + 1)( x + 2)} +
\frac{1}{(x + 2)(x + 3)} + \dots\\
& = & \frac{1}{x} - \frac{1}{x + 1} + \frac{1}{x + 1} - \frac{1}{x +
2} +
\ldots \;= \;\frac{1}{x}.
\end{eqnarray*}
\section{Tensor-product interpretation}
The basic paradigm of feedforward neural nets is to select a single type of computational
unit and then build a network based on this single type through a choice of
controlling internal and external parameters so that the resulting network function
approximates the target function; see \cite{kks-h}.
However, a single type of hidden unit may not be as effective as one
based on a {\it plurality} of hidden-unit types. Here we explore a
tensor-product interpretation which may facilitate such a change in
perspective.
Long ago Hille and Phillips \cite[p. 86]{hp57} observed that
the Banach space of Bochner integrable functions
from a measure space $(Y,\mu)$ into a Banach space ${\mathcal X}$ has a fundamental set
consisting of two-valued functions, achieving a single non-zero value on
a measurable set of finite measure. Indeed, every Bochner integrable function
is a limit of simple functions, and each simple function (with a finite
set of values achieved on disjoint parts $P_j$ of the partition) can be written as
a sum of characteristic functions, weighted by members of the Banach space.
If $s$ is such a simple function, then $$s = \sum_{j=1}^n \chi_j g_j,$$
where the $\chi_j$ are the characteristic functions of the $P_j$ and the $g_j$
are in ${\mathcal X}$.
(If, for example, $Y$ is embedded in a finite-dimensional
Euclidean space, the partition could consist of generalized rectangles.)
Hence, if $f = {\mathcal B}-\int_Y h(y) d\mu(y)$ is the Bochner integral of $h$ with respect to some measure $\mu$,
then $f$ can be approximated as closely as desired by elements in ${\mathcal X}$ of
the form
$$\sum_{i=1}^n \mu(P_i) g_i,$$
where $Y = \bigcup_{i=1}^n P_i$ is a $\mu$-measurable partition of $Y$.
Note that given a $\sigma$-finite measure space $(Y,\mu)$ and a separable
Banach space ${\mathcal X}$, every element $f$ in ${\mathcal X}$ is
(trivially) the Bochner integral of any
integrand $w \cdot \kappa(f)$, where $w$ is a nonnegative function on $Y$ with
$\|w\|_{L^1(Y,\mu)} = 1$ (see part (iii) of Theorem \ref{th:main})
and $\kappa(f)$ denotes the
constant function on $Y$ with value $f$.
In effect, $f$ is in ${\mathcal X}_G$ when $G = \{f\}$.
When $\Phi$ is chosen first (or more precisely $\phi$ as in our Main
Theorem), then $f$ may or may not be in ${\mathcal X}_G$. According to the
Main Theorem, $f$ is in ${\mathcal X}_G$ when it is given by an integral formula
involving $\Phi$ and some $L^1$ weight function. In this case,
$G = \Phi(Y) \cap B$ where $B$ is the ball in ${\mathcal X}$ of radius
$\|\Phi\|_{L^\infty(Y,\mu;{\mathcal X})}$.
In general, the elements $\Phi(y),\; y \in Y$ of the Banach space
involved in some particular approximation for $f$ will be distinct functions
of some general type obtained by varying the parameter $y$.
For instance,
kernels, radial basis functions, perceptrons, or various other classes
of computational units can be used, and when these computational-unit-classes determine fundamental sets, by Proposition \ref{pr:varban}, it is
possible to obtain arbitrarily good approximations. However,
Theorem \ref{th:tensor} below suggests that having a finite set of
distinct types $\Phi_i: Y \to {\mathcal X}$ may allow a smaller ``cost'' for
approximation, if we regard
$$\sum_{i=1}^n\|w_i\|_1 \|\Phi_i\|_\infty$$
as the cost of the approximation
$$f = {\mathcal B}-\int_Y \left (\sum_{i=1}^n w_i \Phi_i \right )(y) d\mu(y).$$
We give a brief sketch of the ideas, following Light and Cheney \cite{lc85}.
Let ${\mathcal X}$ and ${\mathcal Z}$ be Banach spaces.
Let ${\mathcal X} \otimes {\mathcal Z}$ denote the linear space of equivalence classes of
formal expressions
$$
\sum_{i=1}^n f_i \otimes h_i, \qquad f_i \in {\mathcal X},\; h_i \in {\mathcal Z}, \; n \in {\mathbb N},
$$
where two such expressions $\sum_{i=1}^n f_i \otimes h_i$ and
$\sum_{i=1}^m f'_i \otimes h'_i$ are equivalent if for every $F \in {\mathcal X}^*$
$$\sum_{i=1}^n F(f_i)h_i = \sum_{i=1}^m F(f'_i)h'_i,$$
that is, if the associated operators from ${\mathcal X}^* \to {\mathcal Z}$ are identical,
where ${\mathcal X}^*$ is the algebraic dual of ${\mathcal X}$.
The resulting linear space ${\mathcal X} \otimes {\mathcal Z}$ is called the {\it algebraic
tensor product} of ${\mathcal X}$ and ${\mathcal Z}$.
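For example, for $f_1, f_2 \in {\mathcal X}$ and $h \in {\mathcal Z}$, the expressions
$f_1 \otimes h + f_2 \otimes h$ and $(f_1 + f_2) \otimes h$ are equivalent, since
$F(f_1)h + F(f_2)h = F(f_1 + f_2)h$ for every $F \in {\mathcal X}^*$; the equivalence classes
thus encode the expected bilinearity of $\otimes$.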
We can extend ${\mathcal X} \otimes {\mathcal Z}$ to a Banach space by completing it with
respect to a suitable norm.
Consider the norm defined for $t \in {\mathcal X} \otimes {\mathcal Z}$,
\begin{equation}
\gamma(t) = \inf \left \{ \sum_{i=1}^n \|f_i\|_{{\mathcal X}} \|h_i\|_{\mathcal Z} \;:
\; t = \sum_{i=1}^n f_i \otimes h_i \right \},
\label{eq:gamma}
\end{equation}
and complete the algebraic tensor product with respect
to this norm; the result is denoted ${\mathcal X} \otimes_\gamma {\mathcal Z}$.
In \cite[Thm. 1.15, p. 11]{lc85}, Light and Cheney showed
that for any measure space $(Y,\mu)$ and any Banach space
${\mathcal X}$ the linear map
$$\Lambda_{\mathcal X}: L^1(Y,\mu) \otimes {\mathcal X} \to {\mathcal L}^1(Y,\mu;{\mathcal X})$$ given by
$$\sum_{i=1}^r w_i \otimes g_i \mapsto \sum_{i=1}^r w_i g_i$$
is well-defined and extends to a map
$$\Lambda_{\mathcal X}^\gamma: L^1(Y,\mu) \otimes_\gamma {\mathcal X} \to L^1(Y,\mu;{\mathcal X}),$$
which is an isometric isomorphism
of the completed tensor product onto the
space $L^1(Y,\mu;{\mathcal X})$ of Bochner-integrable functions.
The following theorem extends the function $\Lambda_{\mathcal X}$ via the natural
embedding $\kappa_{\mathcal X}$ of ${\mathcal X}$ into the space of essentially bounded
${\mathcal X}$-valued functions defined in section 4.
\begin{theorem}
Let ${\mathcal X}$ be a separable Banach space and let $(Y,\mu)$ be a $\sigma$-finite
measure space. Then there exists a continuous linear surjection
$$e = \Lambda^{\infty,\gamma}_{\mathcal X}: L^1(Y,\mu) \otimes_\gamma L^\infty(Y,\mu;{\mathcal X})
\to L^1(Y,\mu;{\mathcal X}).$$
Furthermore, $e$ makes the following diagram commutative:
\begin{equation}
\begin{array}[c]{ccc}
L^1(Y,\mu) \otimes_\gamma {\mathcal X} \;&\stackrel{a}{\longrightarrow}\;&
L^1(Y,\mu;{\mathcal X})\\
\downarrow\scriptstyle{b}&\nearrow\scriptstyle{e}&\downarrow\scriptstyle{c}\\
L^1(Y,\mu) \otimes_\gamma L^\infty(Y,\mu;{\mathcal X}) \;
&\stackrel{d}{\longrightarrow}&
L^1(Y,\mu;L^\infty(Y,\mu;{\mathcal X}))
\end{array}
\label{eq:diag}
\end{equation}
where the two horizontal arrows $a$ and $d$ are the isometric isomorphisms
$\Lambda_{\mathcal X}^\gamma$ and $\Lambda_{L^\infty(Y,\mu;{\mathcal X})}^\gamma$;
the left-hand vertical arrow $b$ is induced by $1 \otimes \kappa_{\mathcal X}$, while
the right-hand vertical arrow $c$ is induced by post-composition with
$\kappa_{\mathcal X}$, i.e., for any $h$ in $L^1(Y,\mu;{\mathcal X})$,
$$c(h) = \kappa_{\mathcal X} \circ h: Y \to L^{\infty}(Y,\mu; {\mathcal X}).$$
\label{th:tensor}
\end{theorem}
\noindent {\bf Proof.}$ \;$
The map $$e':\sum_{i=1}^n w_i \otimes \Phi_i \mapsto \sum_{i=1}^n w_i \Phi_i$$
defines a linear function $L^1(Y,\mu) \otimes L^\infty(Y,\mu;{\mathcal X})
\to L^1(Y,\mu;{\mathcal X})$; indeed, it
takes values in the Bochner integrable functions since, by Corollary \ref{co:bi},
each summand $w_i \Phi_i$ is Bochner integrable.
To see that $e'$ extends to a continuous map $e$ on the
$\gamma$-completion, note that for $t = \sum_i w_i \otimes \Phi_i$,
\begin{eqnarray*}
\|e'(t)\|_{L^1(Y,\mu;{\mathcal X})} & = & \int_Y \left\|\sum_i w_i(y) \Phi_i(y)\right\|_{\mathcal X} d\mu(y) \\
& \leq & \int_Y \sum_i |w_i(y)| \,\|\Phi_i(y)\|_{\mathcal X}\, d\mu(y)
\;\leq\; \sum_i \|w_i\|_1 \|\Phi_i\|_\infty.
\end{eqnarray*}
Taking the infimum over representations of $t$ gives
$\|e'(t)\|_{L^1(Y,\mu;{\mathcal X})} \leq \gamma(t)$, so $e'$ is continuous with respect to the
$\gamma$-norm and extends to a continuous map $e$ on the completion.
\hfill $\Box$ \vspace{0.5cm}
\section{An example involving bounded variation on an interval}
The following example, more elaborate than the one following
Proposition \ref{pr:varban},
is treated in part by Barron \cite{ba93}
and K\accent23urkov\'a \cite{vk02}.
Let ${\mathcal X}$ be the set of equivalence classes of
(essentially) bounded Lebesgue-measurable
functions on $[a,b]$,
$a,b < \infty$, i.e., ${\mathcal X} = L^\infty([a,b])$, with norm
$\|f\|_{\mathcal X} := \inf \{M : |f(x)| \leq M \;\mbox{for almost every}\; x \in [a,b] \}$. Let $G$ be the set of equivalence classes of all
characteristic functions of closed intervals
of the forms $[a,b]$, or $[a,c]$ or $[c,b]$ with $a < c < b$. These functions
are the restrictions of
characteristic functions of closed half-lines to $[a,b]$.
The equivalence relation is $f \sim g$ if and only if $f(x) = g(x)$
for almost every $x$ in $[a,b]$ (with respect to Lebesgue measure).
Let $BV([a,b])$ be the set of all equivalence classes of functions
on $[a,b]$ with bounded variation; that is, each equivalence class contains
a function $f$ such that the {\it total variation} $V(f,[a,b])$ is finite,
where total variation is the largest possible total movement of a discrete
point which makes a finite number of stops as $x$ varies from $a$ to $b$,
maximized over all possible ways to choose a finite list of intermediate
points, that is,
\begin{eqnarray*}
\hspace{-.85 cm}
V(f,[a,b]) &:=&
\sup \{\sum_{i=1}^{n-1} |f(x_{i+1}) - f(x_i)| :
n \geq 1, a \leq x_1 < x_2 < \cdots < x_n \leq b \}.
\end{eqnarray*}
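For instance, if $f$ is nondecreasing on $[a,b]$, then $V(f,[a,b]) = f(b) - f(a)$;
in particular, for $a < c \leq b$ the characteristic function $\chi_{[c,b]}$ has
$V(\chi_{[c,b]},[a,b]) = 1$.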
In fact, each equivalence class $[f]$ contains exactly one function $f^*$ of
bounded variation that satisfies the {\it continuity conditions}:\\
\\
(i) $f^*$ is right-continuous at $c$ for $ c \in [a,b)$, and\\
\\
(ii) $f^*$ is left-continuous at $b$.\\
\\
\noindent Moreover, $V(f^*,[a,b]) \leq V(f,[a,b])$ for
all $f \sim f^*$.
To see this,
recall that every function $f$ of bounded variation is the difference
of two nondecreasing functions $f = f_1 - f_2$, and $f_1, f_2$ are necessarily right-continuous except at a countable set. We can take
$f_1(x) := V(f,[a,x]) + K$, where $K$ is an arbitrary constant,
and $f_2(x) := V(f,[a,x]) + K - f(x)$ for $x \in [a,b]$. Now
redefine both $f_1$ and $f_2$
at countable sets to form $f_1^*$ and $f_2^*$ which satisfy the continuity
conditions and are still nondecreasing on $[a,b]$.
Then $f^* := f_1^* - f_2^*$ also satisfies the continuity conditions.
It is easily shown that $V(f^*,[a,b]) \leq V(f,[a,b]).$
Since any equivalence class in ${\mathcal X}$ can contain at most one function satisfying
(i) and (ii) above, it follows that $f^*$ is unique and that $V(f^*,[a,b])$
minimizes the total variation for all functions in the equivalence class.
Recall that $\chi_{[a,b]} = \chi([a,b])$ denotes the characteristic function
of the interval $[a,b]$, etc.
\begin{proposition}
Let ${\mathcal X} = L^\infty([a,b])$ and let $G$ be the subset
of characteristic functions\\ $\chi([a,b]), \chi([a,c]), \chi([c,b]), a < c < b$
(up to sets of Lebesgue-measure zero).\\
Then ${\mathcal X}_G = BV([a,b])$, and
$$\|[f]\|_{\mathcal X} \leq \|[f]\|_{G,{\mathcal X}} \leq 2 V(f^*,[a,b]) + |f^*(a)|,$$
where $f^*$ is the member of $[f]$ satisfying the continuity conditions
(i) and (ii).
\label{pr:pr}
\end{proposition}
\noindent {\bf Proof.}$ \;$ Let
$C_{G,{\mathcal X}}$ be the set of equivalence classes of functions of the form
\begin{equation}
(q-r) \chi_{[a,b]} + \sum_{n=1}^k
(s_n - t_n) \chi_{[a,c_n]} + \sum_{n=1}^k (u_n - v_n) \chi_{[c_n,b]},
\label{eq:convex}
\end{equation}
where $k$ is a positive integer, $q, r \geq 0$, for $1 \leq n \leq k$,
$ s_n , t_n, u_n, v_n \geq 0,\;$ and
$$(q + r) + \sum_{n=1}^k (s_n + t_n + u_n + v_n) = 1.$$
All of the functions so exhibited have bounded variation $\leq 1$
and hence $C_{G,{\mathcal X}} \subseteq BV([a,b])$.
We will prove that if a sequence in $C_{G,{\mathcal X}}$ converges
in the ${\mathcal X}$-norm, then its limit is a member of $BV([a,b])$; this will establish
that $B_{G,{\mathcal X}}$ is a subset of $BV([a,b])$ and hence that ${\mathcal X}_G$ is
a subset of $BV([a,b])$.
Let $\{[f_k]\}$ be a sequence in $C_{G,{\mathcal X}}$
that is Cauchy in the ${\mathcal X}$-norm. Without loss of generality, we pass
to the sequence $\{f^*_k\}$, which is Cauchy in the sup-norm
since $x \mapsto |f^*_k(x) - f^*_j(x)|$ satisfies the continuity
conditions (i) and (ii).
Thus, $\{f^*_k\}$ converges uniformly (hence in the sup-norm)
to a function $f$ on $[a,b]$ also satisfying (i) and (ii),
with finite sup-norm, and whose equivalence class has finite ${\mathcal X}$-norm.
Let $\{x_1, \ldots, x_n \}$ satisfy $a \leq x_1 < x_2 < \cdots < x_n \leq b$.
Then
$$\sum_{i=1}^{n-1} |f^*_k(x_{i+1}) - f^*_k(x_i)| \leq V(f_k^*,[a,b])
\leq V(f_k,[a,b]) \leq 1$$
for every $k$, where {\it par abus de notation} $f_k$ denotes the member of $[f_k]$
satisfying (\ref{eq:convex}).
Letting $k$ tend to infinity and then varying $n$ and $x_1, \ldots, x_n$,
we obtain $V(f,[a,b]) \leq 1$ and so $[f] \in BV([a,b])$.
It remains to show that everything in $BV([a,b])$ is actually in ${\mathcal X}_G$.
Let $g$ be a nonnegative
nondecreasing function on $[a,b]$ satisfying the continuity
conditions (i) and (ii) above. Given a positive integer $n$, there exists
a positive integer $m \geq 2$ and $a = a_1 < a_2 < \cdots < a_m = b$ such that
$g(a_{i+1}^-) - g(a_i) \leq 1/n$ for $i = 1, \ldots, m-1$. Indeed,
for $2 \leq i \leq m-1$, let
$a_i := \min \{x | g(a) + \frac{(i - 1)}{n} \leq g(x) \}$. (Moreover,
it follows that the
set of $a_i$'s include all points of left-discontinuity of $g$ such that
the jump $g(a_i) - g(a_i^-)$ is greater than $1/n$.)
Let $g_n:[a,b] \to {\mathbb R}$ be defined as follows:
$$g_n := g(a_1) \chi_{[a_1,a_2)} + g(a_2) \chi_{[a_2,a_3)} + \cdots
+ g(a_{m-1}) \chi_{[a_{m-1},a_m]}$$
$$ = g(a_1) (\chi_{[a_1,b]} - \chi_{[a_2,b]})
+ g(a_2) (\chi_{[a_2,b]} - \chi_{[a_3,b]}) + \cdots
+ g(a_{m-1}) (\chi_{[a_{m-1},b]})$$
$$ = g(a_1) \chi_{[a_1,b]} + (g(a_2) - g(a_1))\chi_{[a_2,b]} + \cdots
+ (g(a_{m-1}) - g(a_{m-2}))\chi_{[a_{m-1},b]}.$$
Then $[g_n]$ belongs to $g(a_{m-1}) C_{G,{\mathcal X}}$, and
{\it a fortiori} to $g(b) C_{G,{\mathcal X}}$, as well as to ${\mathcal X}_G$, and
$\|[g_n]\|_{G,{\mathcal X}} \leq g(b)$.
Moreover, $\|[g_n] - [g]\|_{\mathcal X} \leq 1/n$. Therefore, since
$B_{G,{\mathcal X}} = {\rm cl}_{\mathcal X}(C_{G,{\mathcal X}})$, $[g]$ is in $g(b) B_{G,{\mathcal X}}$
and accordingly $[g]$ is in ${\mathcal X}_G$ and $$\|\,[g]\,\|_{G,{\mathcal X}} \leq g(b).$$
Let $[f]$ be in $BV([a,b])$ and let $f^* = f^*_1 - f^*_2$,
as defined above; for this purpose we take $K = |f^*(a)|$.
This guarantees that both $f^*_1$ and $f^*_2$ are nonnegative.
Accordingly, $[f] = [f^*_1] - [f^*_2]$, and is in $ {\mathcal X}_G$.
Furthermore,
$\|[f]\|_{G,{\mathcal X}} \leq \|[f^*_1]\|_{G,{\mathcal X}} + \|[f^*_2]\|_{G,{\mathcal X}} \leq f^*_1(b) + f^*_2(b) = V(f^*,[a,b]) + |f^*(a)|
+ V(f^*,[a,b]) + |f^*(a)| - f^*(b) \leq 2V(f^*,[a,b]) + |f^*(a)|$.
The last inequality follows from the fact that
$V(f^*,[a,b]) + |f^*(a)| - f^*(b) \geq |f^*(b) - f^*(a)| + |f^*(a)| - f^*(b)
\geq 0$. \hfill $\Box$ \vspace{0.5cm}
An argument similar to the above shows that $BV([a,b])$ is a Banach space
under the norm $2 V(f^*,[a,b]) + |f^*(a)|$ (with or without the $2$).
The identity map from $BV([a,b])$
(with this norm) to $({\mathcal X}_G, \|\cdot\|_{G,{\mathcal X}})$
is continuous (by Proposition \ref{pr:pr}) and it is also onto.
Accordingly, by the Open Mapping Theorem (e.g., Yosida \cite[p. 75]{yo65})
the map is open, hence a homeomorphism, so the norms are equivalent.
Thus, in this example, ${\mathcal X}_G$ is a Banach space under these two equivalent
norms.
Note however that the ${\mathcal X}$-norm restricted to ${\mathcal X}_G$ does not
give a Banach space structure; i.e., ${\mathcal X}_G$ is not complete in the ${\mathcal X}$-norm.
Indeed, take ${\mathcal X} = L^\infty([0,1])$ and let $f_n$ be $1/n$ times the
characteristic function of the disjoint union of $n^2$ closed intervals
contained within the unit interval. Then $\|[f_n]\|_{\mathcal X} = 1/n$
but $\|[f_n]\|_{G,{\mathcal X}} \geq C n$ for some $C > 0$, since $\|\cdot\|_{G,{\mathcal X}}$
is equivalent to the total-variation norm. Thus $\{f_n\}$ converges to zero
in one norm while it blows up in the other. If ${\mathcal X}_G$ were a Banach space under
$\|\cdot\|_{\mathcal X}$, the two norms would be equivalent (again by the Open Mapping
Theorem), contradicting this behavior.
\section{Pointwise-integrals vs. Bochner integrals}
\subsection*{Evaluation of Bochner integrals}
A natural conjecture is that the Bochner integral, evaluated pointwise,
is the pointwise integral; that is, if
$h \in {\mathcal L}^1(Y,\mu,{\mathcal X})$, where ${\mathcal X}$ is any Banach space of functions
defined on a measure space $\Omega$, then
\begin{equation}
\left ({\mathcal B}-\int_Y h(y)\, d\mu(y) \right )(x) = \int_Y h(y)(x)\, d\mu(y)
\label{eq:nat}
\end{equation}
for all $x \in \Omega$. Usually, however, one is dealing with equivalence
classes of functions and thus can expect the equation (\ref{eq:nat}) to
hold only for almost every $x$ in $\Omega$. Furthermore, to specify
$h(y)(x)$, it is necessary to take a particular function representing
$h(y) \in {\mathcal X}$.
The Main Theorem implies that (\ref{eq:nat}) holds for $\rho$-a.e.
$x \in \Omega$
when ${\mathcal X} = L^q(\Omega, \rho)$, for $1 \leq q < \infty$, is separable
provided that $h = w \Phi$, where $w: Y \to {\mathbb R}$ is a
weight function with finite $L^1$-norm and $\Phi: Y \to {\mathcal X}$ is essentially bounded, where for each $y \in Y$
$\Phi(y)(x) = \phi(x,y)$ for $\rho$-a.e. $x \in \Omega$
and $\phi: \Omega \times Y \to {\mathbb R}$
is $\rho \times \mu$-measurable. More generally, we can show the following.
\begin{theorem}
Let $(\Omega,\rho)$, $(Y,\mu)$ be $\sigma$-finite measure spaces, let
${\mathcal X} = L^q(\Omega,\rho)$, $q \in [1,\infty]$, and
let $h \in {\mathcal L}^1(Y,\mu;{\mathcal X})$ so that for each $y$ in $Y$,
$h(y)(x) = H(x,y)$ for $\rho$-a.e. $x$, where $H$ is a
$\rho \times \mu$-measurable real-valued function on $\Omega \times Y$. Then\\
(i) $y \mapsto H(x,y)$ is integrable for $\rho$-a.e. $x \in \Omega$,\\
(ii) the equivalence class of $x \mapsto \int_Y H(x,y)\, d\mu(y) \mbox{ is in }{\mathcal X}$,
and\\
(iii) for $\rho$-a.e. $x \in \Omega$
$$\left ({\mathcal B}-\int_Y h(y)\, d\mu(y) \right )(x) = \int_Y H(x,y)\, d\mu(y).$$
\label{th:eval}
\end{theorem}
\noindent {\bf Proof.}$ \;$ We first consider the case $1 \leq q < \infty$.
Let $g$ be in ${\mathcal L}^p(\Omega,\rho)$, where $1/p + 1/q = 1$. Then
\begin{eqnarray}
\int_Y \int_\Omega |g(x) H(x,y)| d\rho(x) d\mu(y)
\leq \int_Y \|g\|_p \|h(y)\|_q \,d\mu(y)\\
= \|g\|_p \int_Y \|h(y)\|_{\mathcal X} \,d\mu(y) < \infty.
\label{eq:hs}
\end{eqnarray}
Here we have used H\"{o}lder's inequality and Bochner's theorem.
By Fubini's theorem, (i) follows. In addition, the map
$g \mapsto \int_\Omega g(x)\left (\int_Y H(x,y) \,d\mu(y) \right) d\rho(x)$
is a continuous linear functional $F$ on $L^p$ with
$\|F\|_{{\mathcal X}^*} \leq \int_Y \|h(y)\|_{\mathcal X} \,d\mu(y)$.
Since $(L^p)^*$ is $L^q$ for $1 < q < \infty$, the function
$x \mapsto \int_Y H(x,y) \,d\mu(y)$ is in ${\mathcal X} = L^q$ and has norm
$\leq \int_Y \|h(y)\|_{\mathcal X} \,d\mu(y)$.
The case $q=1$ is covered by taking $g \equiv 1$, a member of $L^\infty$,
and noting that $\|\int_Y H(x,y) \,d\mu(y)\|_{L^1}
= \int_\Omega | \int_Y H(x,y) \,d\mu(y) | d\rho(x)
\leq \int_\Omega \int_Y |H(x,y)| \,d\mu(y) d\rho(x)
= \int_Y \|h(y)\|_{\mathcal X} \,d\mu(y)$.
Thus, (ii) holds for $1 \leq q < \infty$.
Also by Fubini's theorem and Theorem \ref{pr:blf}, for all $g \in {\mathcal X}^*$,
$$ \int_\Omega g(x) \left ({\mathcal B}-\int_Y h(y) \,d\mu(y) \right )(x) d\rho(x) =
\int_Y \left (\int_\Omega g(x) H(x,y) d\rho(x) \right ) d\mu(y) $$
$$=
\int_\Omega g(x) \left (\int_Y H(x,y) d\mu(y)\right ) d\rho(x).$$
Hence (iii) holds for all $q < \infty$, including $q = 1$.
Now consider the case $q = \infty$. For
$g \in L^1(\Omega,\rho)$, regarded as a bounded linear functional on $L^\infty(\Omega,\rho)$,
the inequality (\ref{eq:hs}) holds, and by \cite[pp. 348--9]{hs65},
(i) and (ii) hold and
$\|\int_Y H(x,y) \,d\mu(y)\|_\infty \leq \int_Y \|h(y)\|_\infty \,d\mu(y) < \infty$.
For $g \in {\mathcal L}^1(\Omega, \rho)$,
$$\int_\Omega g(x) \left ({\mathcal B}-\int_Y h(y) \,d\mu(y) \right )(x) d\rho(x) =
\int_Y \left ( \int_\Omega g(x) h(y)(x) d\rho(x) \right ) d\mu(y)$$
$$= \int_Y \left ( \int_\Omega g(x) H(x,y) d\rho(x) \right ) d\mu(y)
= \int_\Omega g(x) \left (\int_Y H(x,y) \,d\mu(y) \right ) d\rho(x).$$
The two functions integrated against $g$ are in $L^\infty(\Omega, \rho)$
and agree, so the functions must be the same $\rho$-a.e. \hfill $\Box$ \vspace{0.5cm}
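As a quick numerical sanity check of Theorem \ref{th:eval} (not part of the proof), the following Python snippet compares, on a grid, a Riemann-sum approximation of the ${\mathcal X}$-valued integral with an adaptive quadrature of $y \mapsto H(x,y)$ at a few points $x$; the kernel and grid sizes are illustrative only.
\begin{verbatim}
# Sanity check with X = L^2((0,1)), Y = (0,1), h(y)(x) = H(x,y) = exp(-x*y):
# a Riemann sum of the X-valued map y -> h(y) (a sum of whole grid functions)
# is compared pointwise with a quadrature of y -> H(x,y).
import numpy as np
from scipy.integrate import quad

nx, ny = 1000, 1000
x = (np.arange(nx) + 0.5) / nx                 # grid on Omega = (0,1)
y = (np.arange(ny) + 0.5) / ny                 # grid on Y = (0,1)
H = np.exp(-np.outer(x, y))                    # H[i, j] = H(x_i, y_j)

bochner_sum = H.sum(axis=1) / ny               # vector-valued Riemann sum

for i in [99, 499, 899]:                       # a few grid points x_i
    pointwise, _ = quad(lambda t: np.exp(-x[i] * t), 0.0, 1.0)
    print(x[i], bochner_sum[i], pointwise)     # should agree to ~1e-7
\end{verbatim}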
There are cases where ${\mathcal X}$ consists of pointwise-defined functions
and (\ref{eq:nat}) can be taken literally.
If ${\mathcal X}$ is a separable Banach space of pointwise-defined functions from
$\Omega$ to ${\mathbb R}$ in which the evaluation functionals are bounded
(and so in particular if ${\mathcal X}$ is a reproducing kernel Hilbert space \cite{ar50}),
then (\ref{eq:nat}) holds for all $x$ (not just $\rho$-a.e.). Indeed, for each
$x \in \Omega$, the evaluation functional $E_x: f \mapsto f(x)$ is bounded and linear,
so by Theorem \ref{pr:blf}, $E_x$ commutes with the Bochner integral operator.
As non-separable reproducing kernel Hilbert spaces exist \cite[p.26]{da01}, one still needs the hypothesis of separability.
In a special case involving Bochner integrals with values in
Marcinkiewicz spaces, Nelson \cite{rn82} showed that
(\ref{eq:nat}) holds. His result involves going from equivalence
classes to functions, and uses a ``measurable selection.''
Reproducing kernel Hilbert spaces were studied by Le Page in \cite{rl72}
who showed that (\ref{eq:nat}) holds when $\mu$ is a probability measure on $Y$
under a Gaussian distribution assumption on variables in the dual
space. Another special case of (\ref{eq:nat}) is derived in
Hille and Phillips \cite[Theorem 3.3.4, p. 66]{hp57}, where
the parameter space is an interval of the real line and the
Banach space is a space of bounded linear transformations (i.e., the
Bochner integrals are operator-valued).
\subsection*{Essential boundedness is needed for the Main Theorem}
The following is an example of a function $h:Y \to {\mathcal X}$ which is not
Bochner integrable.
Let $Y = (0,1) = \Omega$ with $\rho = \mu =\;$ Lebesgue measure
and $q=1=d$ so ${\mathcal X}=L^1((0,1))$. Put $h(y)(x) = y^{-x}$. Then for all $y \in (0,1)$
$$\|h(y)\|_{\mathcal X} = \int_0^1 y^{-x} dx = \frac{1 - \frac 1 y}{\log y}.$$
By l'Hospital's rule
$$\lim_{y \to 0^+} \|h(y)\|_{\mathcal X} = +\infty.$$
Thus, the function $y \mapsto \|h(y)\|_X$ is not essentially bounded
on $(0,1)$ and Theorem \ref{th:main} does not apply. Furthermore,
for $y \leq 1/2$, $$\|h(y)\|_{\mathcal X} \geq \frac {1} {-2y \log y}$$
and $$\int_0^1 \|h(y)\|_{\mathcal X} \,dy \geq \int_0^{1/2} \frac {1} {-2y \log y} \,dy
= -\tfrac{1}{2} \log{(-\log y)}\big|_0^{1/2} = \infty.$$
Hence, by Theorem \ref{th:bo}, $h$ is not Bochner integrable.
Note however that
$$f(x) = \int_Y h(y)(x) d\mu(y) = \int_0^1 y^{-x} dy = \frac {1} {1-x}$$ for every $x \in \Omega$.
Thus $h(y)(x)$ has a pointwise integral $f(x)$ for all $x \in (0,1)$,
but $f$ is not in ${\mathcal X} = L^1((0,1))$.
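The following short computation illustrates the example numerically: the partial integrals of $y \mapsto \|h(y)\|_{\mathcal X}$ grow without bound (like $\log\log(1/\varepsilon)$), while the pointwise integral matches $1/(1-x)$; the substitution $y = e^u$ is used only to make the quadrature well behaved.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# ||h(y)||_{L^1} integrated over (eps, 1/2) after substituting y = e^u,
# which turns the integrand into the smooth function (e^u - 1)/u.
integrand = lambda u: (np.exp(u) - 1.0) / u
for eps in [1e-2, 1e-4, 1e-8, 1e-16]:
    val, _ = quad(integrand, np.log(eps), np.log(0.5))
    print(eps, val)               # grows without bound as eps -> 0

# Pointwise integral in y: f(x) = int_0^1 y**(-x) dy = 1/(1-x).
for x in [0.25, 0.5, 0.75]:
    val, _ = quad(lambda y: y ** (-x), 0.0, 1.0)
    print(x, val, 1.0 / (1.0 - x))
\end{verbatim}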
\subsection*{Connection with sup norm}
In \cite{kavkvo07}, we take ${\mathcal X}$ to be the space of bounded measurable
functions on ${\mathbb R}^d$, $Y$ equal to the product $S^{d-1} \times {\mathbb R}$ with
measure $\nu$ which is the (completion of the) product measure determined by
the standard (unnormalized) measure $d(e)$ on the sphere and ordinary Lebesgue measure on ${\mathbb R}$. We take $\phi(x,y) := \phi(x,e,b) = \vartheta(e \cdot x + b)$,
so $x \mapsto \vartheta(e \cdot x + b)$ is the characteristic function of the
closed half-space $\{x: e \cdot x + b \geq 0 \}$.
We showed that if a function $f$ on ${\mathbb R}^d$
decays, along with its partial derivatives of order $\leq d$, at a sufficient rate,
then there is an integral formula expressing $f(x)$ as an integral combination
of the characteristic functions of closed half-spaces, weighted by iterated
Laplacians integrated over half-spaces. The characteristic functions all have
sup-norm $1$ and the weight function is in $L^1(Y,\nu)$, with $Y$ and $\nu$ as above.
For example, when $d$ is odd,
$$f(x) = \int_{S^{d-1} \times {\mathbb R}} w_f(e,b) \vartheta(e \cdot x + b) d\nu(e,b),$$
\noindent where
$$w_f(e,b) := a_d \int_{H_{e,b}} D_e^{(d)}f(y) d_H(y),$$
\noindent with $a_d$ a scalar exponentially decreasing with $d$.
The integral is of the iterated directional derivative over the
hyperplane with normal vector $e$ and offset $b$,
$$H_{e,b} := \{y \in {\mathbb R}^d : e \cdot y + b = 0 \}.$$
For ${\mathcal X} = {\mathcal M}({\mathbb R}^d)$, the space of bounded Lebesgue-measurable functions on
${\mathbb R}^d$ (a Banach space with respect to the sup-norm), and $G$ the family $H_d$ of all characteristic functions
of closed half-spaces in ${\mathbb R}^d$, it follows from Theorem \ref{th:main}
that $f \in {\mathcal X}_G$.
Hence, from the Main Theorem,
$$f = {\mathcal B}-\int_{S^{d-1} \times {\mathbb R}} w_f(e,b) \Theta(e,b)\, d\nu(e,b)$$
is a Bochner integral, where $\Theta(e,b) \in {\mathcal M}({\mathbb R}^d)$ is given by
$$\Theta(e,b)(x) := \vartheta(e \cdot x + b).$$
Application of the Main Theorem requires only that $w_f$ be in $L^1$,
but \cite{kavkvo07} gives explicit formulas for $w_f$ (in both
even and odd dimensions) provided that $f$ satisfies the decay conditions
described above and in our paper; see also the other chapter in this book
referenced earlier.
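As a hedged numerical check of this representation in the simplest case $d=1$ (where $S^0 = \{-1,+1\}$ carries counting measure and $H_{e,b}$ is the single point $-eb$), the snippet below reconstructs a rapidly decaying test function from characteristic functions of half-lines (Heavisides), assuming the one-dimensional normalization $a_1 = 1/2$, which can be verified by direct integration; the cutoff $B$ and the test function are illustrative only.
\begin{verbatim}
# 1-D check: w_f(e, b) = 0.5 * e * f'(-e*b) (assumed normalization a_1 = 1/2),
# f(x) = sum over e of the integral of w_f(e, b) over {b : e*x + b >= 0}.
import numpy as np
from scipy.integrate import quad

f  = lambda x: np.exp(-x**2)                 # rapidly decaying test function
fp = lambda x: -2.0 * x * np.exp(-x**2)      # its derivative

def f_reconstructed(x, B=12.0):              # B truncates the b-integral
    total = 0.0
    for e in (-1.0, 1.0):
        w = lambda b: 0.5 * e * fp(-e * b)   # weight on the half-space family
        val, _ = quad(w, -e * x, B)          # b >= -e*x, i.e. e*x + b >= 0
        total += val
    return total

for x in [-1.0, 0.0, 0.7, 2.0]:
    print(x, f(x), f_reconstructed(x))       # the two columns should agree
\end{verbatim}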
\section{Some concluding remarks}
Neural networks express a function $f$
in terms of a combination of members of a given family $G$ of functions.
It is reasonable to expect that a function $f$ can be so represented
if $f$ is in ${\mathcal X}_G$. The choice of $G$ thus dictates the $f$'s
that can be represented (if we leave aside what combinations are permissible).
Here we have focused on the case $G = \{\Phi(y) : y \in Y \}$.
The form $\Phi(y)$ is usually associated with a specific family such as Gaussians
or Heavisides. The tensor-product interpretation suggests the
possibility of using multiple families $\{\Phi_j: j \in J \}$ or multiple $G$'s to
represent a larger class of $f$'s. Alternatively, one may replace $Y$ by
$Y \times J$ with a suitable extension of the measure.
The Bochner integral approach also permits ${\mathcal X}$ to be an arbitrary Banach space
(not necessarily an $L^p$-space). For example,
if ${\mathcal X}$ is a space of bounded linear transformations and $\Phi(Y)$ is
a family of such transformations, we can approximate other members
$f$ of this Banach space ${\mathcal X}$ in a neural-network-like manner. Even more
abstractly, we can approximate an {\it evolving} function $f_t$, where
$t$ is time, using weights that evolve over time and/or a family $\Phi_t(y)$
whose members evolve in a prescribed fashion. Such an approach would require
some axiomatics about permissible evolutions of $f_t$, perhaps similar to methods
used in time-series analysis and stochastic calculus. See, e.g., \cite{ab82}.
Many of the restrictions we have imposed in earlier sections are not truly
essential. For example, the separability constraints can be weakened. Moreover,
$\sigma$-finiteness of $Y$ need not be required
since an integrable function $w$ on $Y$ must vanish outside a $\sigma$-finite
subset. More drastically, the integrable function $w$ can be replaced by a
distribution or a measure. Indeed,
we believe that both finite combinations and integrals can be
subsumed in generalized combinations derived from Choquet's theorem.
The abstract transformations of the concept of neural network discussed here
provide an ``enrichment'' that may have practical consequences.
\section{Appendix I: Some Banach space background}
The following is a brief account of the machinery of functional analysis
used in this chapter. See, e.g., \cite{yo65}.
For $G \subseteq {\mathcal X}$, with ${\mathcal X}$ any linear space, let
$${\rm span}_n(G) := \left \{x \in {\mathcal X} : \exists w_i \in {\mathbb R},
g_i \in G, \; 1 \leq i \leq n, \; \ni \;
x = \sum_{i=1}^n w_i g_i \; \right \}$$
denote the set of all $n$-fold linear combinations from $G$. If the
$w_i$ are non-negative with sum $1$, then the combination is called
a {\it convex} combination; ${\rm conv}_n(G)$ denotes the set of all
$n$-fold convex combinations from $G$. Let
$${\rm span}(G) := \bigcup_{n=1}^\infty \, {\rm span}_n(G) \;\;\mbox{and} \;\;
{\rm conv}(G) := \bigcup_{n=1}^\infty \, {\rm conv}_n(G).$$
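For illustration, the following small Python sketch forms elements of ${\rm span}_3(G)$ and ${\rm conv}_3(G)$ for a family $G$ of Gaussian bumps; the centers and weights are arbitrary.
\begin{verbatim}
import numpy as np

def gaussian(center):                      # a member of the family G
    return lambda x: np.exp(-(x - center) ** 2)

def span_combination(weights, gs):
    """An element of span_n(G): sum_i w_i g_i with arbitrary real weights."""
    return lambda x: sum(w * g(x) for w, g in zip(weights, gs))

def conv_combination(weights, gs):
    """An element of conv_n(G): weights must be nonnegative and sum to 1."""
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return span_combination(w, gs)

gs = [gaussian(c) for c in (-1.0, 0.0, 2.0)]
h1 = span_combination([2.0, -0.5, 1.3], gs)   # in span_3(G)
h2 = conv_combination([0.2, 0.3, 0.5], gs)    # in conv_3(G)
print(h1(0.4), h2(0.4))
\end{verbatim}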
A {\it norm} on a linear space ${\mathcal X}$ is a function
which associates to each element $f$ of ${\mathcal X}$ a real number $\|f\| \geq 0$
such that\\
\\
(1) $\|f\| = 0$ $\iff$ $f = 0$;\\
(2) $\|r f\| = |r| \|f\|$ for all $r \in {\mathbb R};\; \mbox{and}$\\
(3) the triangle inequality holds:
$\|f + g\| \leq \|f\| + \|g\|,\; \forall f,g \in {\mathcal X}$.\\
\noindent A metric $d(x,y) := \|x - y\|$ is defined by the norm, and both
addition and scalar multiplication become continuous functions
with respect to the topology induced by the norm-metric.
A metric space is {\it complete} if
every sequence in the space that satisfies the Cauchy criterion is convergent.
In particular, if a normed linear space is complete in the metric
induced by its norm, then it is called a {\it Banach} space.
Let $(Y,\mu)$ be a measure space; it is called $\sigma$-finite provided
that there exists a countable family $Y_1, Y_2, \ldots$ of pairwise-disjoint,
measurable subsets of $Y$, each with finite $\mu$-measure, such that $Y = \bigcup_i Y_i$.
The condition of $\sigma$-finiteness is required for Fubini's theorem.
A set $N$ is called a $\mu$-null set if it is measurable with $\mu(N) = 0$.
A function from a measure space to another measure space
is called {\it measurable} if the pre-image of each measurable
subset is measurable. When the range space is merely a topological
space, then functions are measurable if the pre-image of each open
set is measurable.
Let $(\Omega,\rho)$ be a measure space.
If $q \in [1,\infty)$, we write $L^q(\Omega, \rho)$ for the
Banach space consisting of all equivalence classes of the set
${\mathcal L}^q(\Omega, \rho)$ of all $\rho$-measurable functions from $\Omega$
to ${\mathbb R}$ with absolutely integrable $q$-th powers,
where $f$ and $g$ are equivalent if they agree $\rho$-almost everywhere
($\rho$-a.e.), that is, if the set of points where $f$ and $g$ disagree
has $\rho$-measure zero, and
$\|f\|_{L^q(\Omega, \rho)} := (\int_{\Omega} |f(x)|^q d\rho(x))^{1/q}$,
or $\|f\|_q$ for short.
\section{Appendix II: Some key theorems}
We include, for the reader's convenience, the statements of some crucial
theorems cited in the text.
The following consequence of the Hahn-Banach
Theorem, due to Mazur, is given by Yosida \cite[Theorem 3', p. 109]{yo65}.
The hypotheses on ${\mathcal X}$ are satisfied by any Banach space, but the
theorem holds much more generally. See \cite{yo65} for examples where
${\mathcal X}$ is not a Banach space.
\begin{theorem}
Let $X$ be a real locally convex linear topological space, $M$ a closed convex
subset, and $x_0 \in X \setminus M$.
Then $\exists$ continuous linear functional
$$F: X \to {\mathbb R} \; \ni \; F(x_0) > 1,\; F(x) \leq 1 \;\; \forall x \in M.$$
\label{th:mazur}
\end{theorem}
\smallskip
Fubini's Theorem relates iterated integrals to product integrals. Let
$Y, Z$ be sets and ${\mathcal M}$ be a $\sigma$-algebra of subsets of $Y$ and ${\mathcal N}$
a $\sigma$-algebra of subsets of $Z$. If $M \in {\mathcal M}$ and $N \in {\mathcal N}$, then
$M \times N \subseteq Y \times Z$ is called a {\it measurable rectangle}.
We denote the smallest $\sigma$-algebra on $Y \times Z$ which contains all
the measurable rectangles by ${\mathcal M} \times {\mathcal N}$. Now let $(Y,{\mathcal M},\mu)$ and
$(Z,{\mathcal N},\nu)$ be $\sigma$-finite measure spaces, and for $E \in {\mathcal M} \times {\mathcal N}$,
define
$$(\mu \times \nu)(E) := \int_Y \nu(E_y) d\mu(y) = \int_Z \mu(E^z) d\nu(z),$$
where $E_y := \{z \in Z : (y,z) \in E \}$ and
$E^z := \{y \in Y : (y,z) \in E \}$. Also, $\mu \times \nu$
is a $\sigma$-finite measure on $Y \times Z$ with ${\mathcal M} \times {\mathcal N}$ as the
family of measurable sets. For the following, see
Hewitt and Stromberg \cite[p. 386]{hs65}.
\begin{theorem}
Let $(Y,{\mathcal M},\mu)$ and $(Z,{\mathcal N},\nu)$ be $\sigma$-finite measure spaces.
Let $f$ be a complex-valued ${\mathcal M} \times {\mathcal N}$-measurable function on $Y \times Z$,
and suppose that at least one of the following three absolute integrals is finite:
$\int_{Y \times Z} |f(y,z)| d(\mu \times \nu)(y,z)$,
$\int_Z \int_Y |f(y,z)| d\mu(y) d\nu(z)$,
$\int_Y \int_Z |f(y,z)| d\nu(z) d\mu(y)$.
Then the following statements hold:\\
(i) $y \mapsto f(y,z)$ is in ${\mathcal L}^1(Y,{\mathcal M},\mu)$ for $\nu$-a.e. $z \in Z$;\\
(ii) $z \mapsto f(y,z)$ is in ${\mathcal L}^1(Z,{\mathcal N},\nu)$ for $\mu$-a.e. $y \in Y$;\\
(iii) $z \mapsto \int_Y f(y,z) d\mu(y)$ is in ${\mathcal L}^1(Z,{\mathcal N},\nu)$;\\
(iv) $y \mapsto \int_Z f(y,z) d\nu(z)$ is in ${\mathcal L}^1(Y,{\mathcal M},\mu)$;\\
(v) all three of the following integrals are equal:
$$\int_{Y \times Z} f(y,z) d(\mu \times \nu)(y,z) = $$
$$\int_Z \int_Y f(y,z) d\mu(y) d\nu(z) = $$
$$\int_Y \int_Z f(y,z) d\nu(z) d\mu(y).$$
\label{th:fubini}
\end{theorem}
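A quick numerical illustration of Fubini's theorem (with Lebesgue measure on the unit square and a smooth integrable $f$) is the following; the three printed numbers should agree up to quadrature error.
\begin{verbatim}
import numpy as np
from scipy.integrate import dblquad, quad

f = lambda y, z: np.exp(-(y + 2.0 * z)) * np.cos(3.0 * y * z)

# Double integral over the product space (inner variable listed first).
double, _ = dblquad(lambda z, y: f(y, z), 0.0, 1.0, 0.0, 1.0)
# The two iterated integrals.
iter_yz, _ = quad(lambda y: quad(lambda z: f(y, z), 0.0, 1.0)[0], 0.0, 1.0)
iter_zy, _ = quad(lambda z: quad(lambda y: f(y, z), 0.0, 1.0)[0], 0.0, 1.0)
print(double, iter_yz, iter_zy)   # three nearly identical numbers
\end{verbatim}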
\smallskip
A function $G:I \to {\mathbb R}$, $I$ any subinterval of ${\mathbb R}$,
is called {\it convex} if
$$\forall x_1, x_2 \in I, 0 \leq t \leq 1, \;\;
G(t x_1 + (1-t) x_2) \leq t G(x_1) + (1-t) G(x_2).$$
The following formulation is from Hewitt and Stromberg \cite[p. 202]{hs65}.
\begin{theorem}[Jensen's inequality]
Let $(Y,\sigma)$ be a probability measure space. Let $G$ be
a convex function from an interval $I$ into ${\mathbb R}$ and let
$f$ be in ${\mathcal L}^1(Y,\sigma)$ with $f(Y) \subseteq I$ such that
$G \circ f$ is also in ${\mathcal L}^1(Y,\sigma)$. Then $\int_Y f(y) d\sigma(y)$
is in $I$ and
$$G \left (\int_Y f(y) d\sigma(y) \right ) \leq \int_Y (G \circ f)(y)
d\sigma(y).$$
\label{th:jensen}
\end{theorem}
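For illustration, a short Monte-Carlo check of Jensen's inequality with $G(t) = t^2$ and an exponential sample standing in for $f$ and $\sigma$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
f_samples = rng.exponential(scale=1.0, size=200_000)   # f(y) under sigma
lhs = np.mean(f_samples) ** 2            # G( int f dsigma )
rhs = np.mean(f_samples ** 2)            # int G(f) dsigma
print(lhs, rhs, lhs <= rhs)              # roughly 1.0, 2.0, True
\end{verbatim}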
\section*{Acknowledgements}
We thank Victor Bogdan for helpful comments on earlier versions.
|
{
"arxiv_id": "2302.13212",
"language": "en",
"timestamp": "2023-02-28T02:13:09",
"url": "https://arxiv.org/abs/2302.13212",
"yymm": "2302"
} | \section{Introduction}
This paper studies using a mobile manipulator to manipulate objects beyond the robot's maximum payload. Fig. \ref{fig:ts} exemplifies how a human manipulates a large, heavy object beyond the weight the person can bear alone. The human took advantage of environmental support to share part of the object's weight and thus unloaded the object from the table onto the ground. This paper develops a planning and optimization approach for a mobile manipulator to handle objects like the human in the figure. The approach uses an expanded object mesh model to examine contact and randomly explores object motion while maintaining contact and keeping the required grasping force affordable. After the object motion is obtained, it generates robotic motion trajectories using an optimization-based algorithm. With the proposed method, we can plan contact-rich manipulation without explicitly analyzing an object's contact modes and their transitions, and thus allow a mobile manipulator to move heavy objects while leveraging supporting forces from environmental obstacles.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/teaser_ria.jpg}
\caption{A human moved a large and heavy shelf down from a table. The human took advantage of environmental support to (a, b) push, (c) slide, and (d) pivot the shelf. It is a sequence of contact-rich manipulation actions. This paper aims to develop a generalized planning and optimization method for a mobile manipulator to handle objects like the human.}
\label{fig:ts}
\end{center}
\end{figure}
Handling large and heavy objects is challenging for robotic manipulators, as high-quality motor and reducer modules and careful mechanical design are needed to increase a manipulator's payload. The module and design costs make heavy-duty robots unaffordable for small and medium manufacturers. Collaborative manipulators, on the other hand, are safe and can be deployed quickly and flexibly to manufacturing sites, but they have a limited payload and cannot lift and move heavy objects that exceed it. Previously, researchers have developed methods for collaborative manipulators to handle large and heavy objects. These methods take advantage of environmental support to share part of the object's weight: the manipulator performs non-prehensile actions to push, flip, or pivot large and heavy objects \cite{fakhari2021}\cite{zhang2021}\cite{hayakawa2021dual}\cite{fan19}. Using collaborative manipulators while considering environmental support allows the manipulation of large and heavy objects at affordable hardware costs.
However, the previous methods used uniform sampling and search graphs to model the object-environment and object-robot contact states, and thus inherently have completeness problems: whether a contact and grasping state is considered for exploration and optimization depends on whether the state was encoded in the graph, and the methods cannot exploit an unmodeled state to achieve given goals.
Unlike previous methods, we propose a single-shot probabilistic contact-rich planning method to explore contact states that meet force constraints automatically. The method is based on the Rapidly-exploring Random Tree (RRT) routine. It randomly explores object motion while maintaining object-environment contact. The contact is not defined as a limited number of points, edges, or faces. It is instead maintained using an expanded contact mesh and may change freely as long as the ``crust'' formed by the expanded object mesh is in touch with the environment. The expanded mesh is determined functionally by considering the mechanical compliance of the manipulator. Robotic motions are generated using an optimization-based algorithm based on the probabilistically planned object motion. With the proposed method's help, we can find contact-rich manipulation without explicitly analyzing an object's contact states and their transitions. The planner and optimizer determine them automatically.
We conducted experiments using a mobile manipulator to analyze the proposed method's performance. The method could help find contact-rich manipulation trajectories allowing the mobile manipulator to move large, heavy objects with environmental support. The optimizer ensured the manipulator's joints bore affordable forces not to damage themselves. The proposed method is expected to be an important contribution to contact-rich manipulation and starts a new view for solving related problems.
The organization of this paper is as follows. Section II reviews related work. Section III presents an overview of the proposed method using a schematic diagram. Sections IV and V present algorithmic details, including the expanded collision models, RRT-based planning, and robotic trajectory optimization. Section VI presents experiments and analysis. Conclusions and future work are drawn and discussed in Section VII.
\section{Related Work}
\label{sec-related}
\subsection{Manipulating Heavy Objects}
Using robots to manipulate heavy objects is a challenging topic \cite{ohashi2016realization}. In contemporary studies, people developed special-purpose mechatronic systems, manipulators with a large payload \cite{du2022}, robot-human collaboration \cite{kim18}\cite{stouraitis2020online}, robot-machine collaboration \cite{hayakawa2021adr}\cite{recker21}\cite{ikeda18}\cite{balatti20}, non-prehensile manipulation policies \cite{yoshida2010pivoting}\cite{specian18}\cite{nazir21}, etc., to solve the problems.
Particularly, O'Neill et al. \cite{oneill21} developed an autonomous method for safely tumbling a heavy block sitting on a surface using a two-cable crane. Kayhani et al. \cite{kayhani21} developed an automated lift path planning method for heavy crawler cranes in no-walk scenarios employing a robotics approach. Du et al. \cite{du2022} developed a high-duty manipulator to replace heavy disc cutters installed on the head of a tunnel boring machine. Kim et al. \cite{kim18} proposed a novel human–robot collaboration (HRC) control approach to alert and reduce a human partner's static joint torque overloading while executing shared tasks with a robot.
In this work, we employ environmental contact as essential support \cite{patankar2020} to manipulate heavy objects. In order to realize the manipulation of objects that exceed the payload, we optimized the robot motion by considering contacted object trajectories, supporting forces, manipulator poses, and the load on each joint. One advantage of using the environment is that no additional equipment or facilities are required. While additional equipment can be very effective in accomplishing specific tasks, issues such as loss of versatility still need to be addressed. For example, using a cart to load and move objects makes it very difficult to maneuver in an environment with steps \cite{ohashi16}. Also, when robots are integrated with additional equipment, it is necessary to solve a highly constrained closed-chain problem that occurs when robots work together, which results in a considerable restriction on the IK range and requires planning for re-grasping. In contrast, taking advantage of support from environmental obstacles provides more freedom for maintaining contact. With the proposed object trajectory planning and robot motion optimization methods in this work, a robot can find a contact-rich manipulation motion without analyzing an object's contact modes and transitions and thus move heavy objects while leveraging supporting forces from environmental obstacles.
\subsection{Manipulation Considering Changing Contact Modes}
Contact modes have been an old topic in robotic manipulation \cite{hirukawa94}\cite{yu96}\cite{ji01}\cite{aiyama01}\cite{maeda2005planning}. Modern studies used contact modes and mode transition graphs to guide probabilistic planning or optimization. For example, Raessa et al. \cite{raessa2021planning} developed a hierarchical motion planner for planning the manipulation motion to repose long and heavy objects considering external support surfaces. Cheng et al. \cite{cheng2022contact} proposed Contact Mode Guided Manipulation Planning (CMGMP) for 3D quasistatic and quasi-dynamic rigid body motion planning in dexterous manipulation. Murooka et al. \cite{murooka2017global} proposed a planning method of whole-body manipulation for various manipulation strategies. Hou et al. \cite{hou2020} proposed a method to select the best-shared grasp and robot actions for a desired object motion. Some others do not perform detailed planning and leave contact mode selection to optimization. For instance, Sleiman et al. \cite{sleiman2019contact} presented a reformulation of a Contact-Implicit Optimization (CIO) approach that computes optimal trajectories for rigid-body systems in contact-rich settings. Aceituno-Cabezas et al. \cite{aceituno2020global} proposed a global optimization model (CTO, Contact-Trajectory Optimization) for this problem with alternated-sticking contact.
In this work, we develop a planner that implicitly takes advantage of environmental contact and switches among contact modes to manipulate heavy objects. To enable this contact mode switching, we introduce an expanded contact model. It allows the contact to change freely, rather than being fixed to a limited number of edges and faces, and enables contact-rich path planning without detailed analysis of object contact modes and their transitions.
\section{Schematic View of the Proposed Method}
Fig. \ref{fig:wf} shows the schematic diagram of the proposed method. The red box represents the user-defined input. The green box is the output. The proposed method accepts ``Object Mesh Model'', ``Initial and Goal Object Poses'', ``Environmental Mesh Models'', ``Robot Parameters'', and ``Object Weight'' as the input. It first computes a set of pre-annotated grasp poses and creates an expanded mesh using the ``Object Mesh Model'' input data. The pre-annotated grasp poses will be used for incremental planning and optimization. The method incrementally selects a grasp pose, determines if it is accessible at the initial and goal object poses, and plans the object motion and optimizes the robot trajectory. When planning the object motion, the method uses the ``crust'' formed by the expanded object mesh model and the original one to ensure continuous contact. The planner also examines the force born by the selected grasp pose at each randomly sampled roadmap node to make sure the robot can hold and move the object.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/workflow_ria.jpg}
\caption{Schematic diagram of the proposed method.}
\label{fig:wf}
\end{center}
\end{figure}
The planning stage does not always succeed because of the strict constraints. When it fails, the workflow returns to the ``Select a Grasp Pose'' stage and tries different grasp poses until a feasible object motion is obtained. Once a feasible object motion is found, the workflow switches to the ``Optimize the Robot Trajectory'' stage to generate robotic motion trajectories. The final robotic motion trajectories may not involve the IK solutions obtained in the ``Plan the Object Motion'' stage; instead, they are formulated as an optimization problem that minimizes the robotic movement while considering the selected grasp pose and its motion in association with the object. Like the planning stage, the optimization does not always succeed, and it returns to the ``Select a Grasp Pose'' stage in case of failure. If all stages are performed successfully, the method produces an optimized robotic motion and the predicted object trajectory under that motion as the output.
\section{Planning Object Motion}
\subsection{Expanding Models Considering Mechanical Compliance}
The concept behind expanding object mesh models is to build a ``crust'' and judge if the object is in contact with the environment by examining if the ``crust'' overlaps. Fig. \ref{fig:concept} illustrates the concept. Here, $\textnormal{M}_\textnormal{o}$ indicates the original object mesh model. $\textnormal{M}_\textnormal{e}$ indicates the ``crust''. We judge the relationship between the object and the environment by recognizing the overlap between $\textnormal{M}_\textnormal{o}$ and the environment and $\textnormal{M}_\textnormal{e}$ and the environment. The object is considered to be in contact with the environment when $\textnormal{M}_\textnormal{e}$ overlaps with it, while $\textnormal{M}_\textnormal{o}$ does not. If both $\textnormal{M}_\textnormal{o}$ and $\textnormal{M}_\textnormal{e}$ overlap with the environment, we consider the object is in a collision.
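The decision rule can be summarized by the following minimal sketch; the predicate \texttt{overlaps} is assumed to be supplied by any mesh collision checker, and the boxes standing in for meshes are placeholders.
\begin{verbatim}
from enum import Enum

class ContactState(Enum):
    FREE = 0         # neither M_o nor M_e touches the environment
    CONTACT = 1      # only the crust M_e overlaps: supported, desired state
    COLLISION = 2    # the original mesh M_o overlaps: penetration, reject

def classify(M_o, M_e, environment, overlaps):
    if any(overlaps(M_o, obs) for obs in environment):
        return ContactState.COLLISION
    if any(overlaps(M_e, obs) for obs in environment):
        return ContactState.CONTACT
    return ContactState.FREE

# Tiny demo with axis-aligned boxes ((xmin,ymin,zmin),(xmax,ymax,zmax))
# standing in for the meshes.
def aabb_overlaps(a, b):
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

table = ((0.0, 0.0, 0.0), (1.0, 1.0, 0.7))           # environment obstacle
M_o   = ((0.2, 0.2, 0.71), (0.6, 0.6, 0.9))          # object 1 cm above table
M_e   = ((0.17, 0.17, 0.68), (0.63, 0.63, 0.93))     # 3 cm expanded crust
print(classify(M_o, M_e, [table], aabb_overlaps))    # ContactState.CONTACT
\end{verbatim}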
\begin{figure*}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/concept.jpg}
\caption{Concept of the expanded contact detection model.}
\label{fig:concept}
\end{center}
\end{figure*}
The thickness of the ``crust'' is crucial for the assumption to be valid. We define it by considering the mechanical compliance of a mobile manipulator. Since a mobile manipulator is not fixed and the mobile base is not entirely stiff due to springs installed at the wheels, there is slight mechanical compliance at the Tool Center Point (TCP) of the arm component. Fig. \ref{fig:compliance}(a) illustrates the compliance. We compute the compliance by assuming two orthogonal virtual rotation joints at the base of the mobile manipulator. The two virtual rotation joints account for the overall compliance, and the ``crust'' thickness can be computed from the rotation ranges of the virtual joints. Fig. \ref{fig:compliance}(b) and (c) illustrate the two orthogonal virtual rotation joints.
\begin{figure*}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/compliance.jpg}
\caption{(a) Mechanical compliance of a mobile manipulator. (b, c) Two virtual joints. (d) Trigonometric relation about the virtual joint around the local $x$ direction. (e) Changes of compliance at different robot poses.}
\label{fig:compliance}
\end{center}
\end{figure*}
We set the mobile manipulator in a vertical pose, applied forces to its TCP, and measured the TCP's displacements to obtain the rotation ranges. Particularly, we computed the rotation ranges of the virtual joints using the trigonometric relations between the arm length and the displacements. Fig. \ref{fig:compliance}(d) shows the trigonometric relation about the local $x$ direction. Here, $l_{arm}$ is the distance between the assumed joint position and the TCP. $\delta d$ is the measured hand displacement. The rotation range around the local $x$ direction can be computed using
\begin{equation}\label{eq:dt}
\delta\theta=\tan^{-1}\left(\frac{\delta d}{l_{arm}}\right).
\end{equation}
After obtaining the rotation ranges, we can compute the compliance of the robot hand at an arbitrary arm pose by updating $l_{arm}$ via forward kinematics, as shown in Fig. \ref{fig:compliance}(e). The robot has smaller compliance when folded and larger compliance when extended. The thickness of the ``crust'' is therefore determined dynamically as $l_{arm}$ changes.
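A hedged sketch of this dynamic thickness computation is given below; the rule for combining the two axes (a root-sum-square here) is only an illustrative choice, and the joint ranges are the values identified later in the experiment section.
\begin{verbatim}
import numpy as np

DELTA_THETA_X = np.deg2rad(0.500)   # virtual joint range about local x
DELTA_THETA_Y = np.deg2rad(0.859)   # virtual joint range about local y

def crust_thickness(l_arm_m, within_reach=True):
    """Thickness of M_e [m] for the current base-to-TCP distance l_arm_m."""
    if not within_reach:            # object farther than the arm can reach
        return 0.0
    dx = l_arm_m * np.tan(DELTA_THETA_X)
    dy = l_arm_m * np.tan(DELTA_THETA_Y)
    return float(np.hypot(dx, dy))  # illustrative combination of both axes

print(crust_thickness(1.2), crust_thickness(2.0, within_reach=False))
\end{verbatim}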
\subsection{RRT-Based Object Motion Planning}
The object motion is planned using RRT as the backbone. When sampling new nodes using the RRT backbone, we will examine the following constraints to secure continuous support forces from the environment as well as affordable ensuing robot manipulation.
\begin{itemize}
\item Contact Constraint: The object at a newly sampled pose is in contact with environmental obstacles. The contact is assured by the collision and non-collision relationship of $\textnormal{M}_\textnormal{o}$, $\textnormal{M}_\textnormal{e}$, and the environmental obstacles' mesh models, as discussed in the previous subsection.
\item Force Constraint: The minimum force required for the robot hand must be smaller than the mobile manipulator's maximum payload.
\end{itemize}
Especially for the force constraint, the thickness of $\textnormal{M}_\textnormal{e}$ is determined dynamically at each object pose. When a new pose is sampled during RRT planning, we will update $\textnormal{M}_\textnormal{e}$ following the robot's TCP position at the pose and determine if the object meets the contact constraint using the updated values.
For a better understanding, the force constraint is illustrated in Fig. \ref{fig:forces}(a). During contact-rich manipulation, an object is affected by the gravitational force, the environmental supporting forces, and the robotic holding force, labeled $\mathbf{G}$, $\mathbf{F_s}$, and $\mathbf{F_h}$ in the figure. We assume the environment is stationary and strong enough to provide sufficient supporting force for the target objects; there is thus no limitation on $\mathbf{F_s}$. Under this assumption, $\mathbf{F_h}$ can be obtained by solving the following optimization problem.
\begin{subequations}\allowdisplaybreaks
\begin{align}
& {\small\underset{\mathbf{F_h}}{\text{min}}} & \small\mathbf{F_h}^T\mathbf{F_h} \label{eq:obj_opgoal} \\
& {\text{s.t.}} & \small \sum\mathbf{F_s}+\mathbf{F_h}+\mathbf{G} = 0 \label{eq:balance_sub1} \\
&& \sum\mathbf{r_s}\times\mathbf{F_s}+\mathbf{r_h}\times\mathbf{F_h}+\mathbf{r_g}\times\mathbf{G} = 0 \label{eq:balance_sub2} \\
&& \mathbf{F_s}\in\mathbf{FC_s}, ~\mathbf{F_h}^T\mathbf{F_h} \leqslant F_{max} \label{eq:cone_contact}
\end{align}
\label{eq:handforce}
\end{subequations}
The optimization finds the minimum force the robotic hand needs to balance the object while considering the supporting forces from the contact. Equation \eqref{eq:obj_opgoal} is the optimization goal. Equations \eqref{eq:balance_sub1} and \eqref{eq:balance_sub2} are the force and torque balance constraints. Equation \eqref{eq:cone_contact} represents the friction-cone constraints and the maximum robot payload. A grasping pose is judged invalid if this optimization problem has no solution; the system then switches to a different candidate grasping pose and restarts the object motion planning, considering the force constraints at the new grasping pose. The candidate grasping poses are planned using a grasp planner developed by the same authors \cite{wan2021tro}, as shown in Fig. \ref{fig:forces}(b.i). The manipulator's kinematic constraints are considered at the starting and goal object poses, as shown in Fig. \ref{fig:forces}(b.ii) and (b.iii). Here, the red hands indicate grasping poses in collision, the yellow hands indicate unreachable grasping poses, and the hands in their original colors are feasible poses. The candidate grasping poses are the intersection of the feasible ones at the starting and goal object poses.
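A minimal sketch of this feasibility test is given below, using SLSQP instead of a dedicated QP solver and a pyramid approximation of each friction cone; the contact geometry, friction coefficient, and force limit are illustrative values only.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def min_hand_force(contacts, r_h, r_g, G, mu=0.5, F_max=35.0):
    """contacts: list of (r_s, n, t1, t2): position, unit normal, tangents."""
    m = len(contacts)
    def unpack(x):
        return x[:3], x[3:].reshape(m, 3)          # F_h and the stacked F_s
    def force_balance(x):                          # balance of forces
        F_h, F_s = unpack(x)
        return F_s.sum(axis=0) + F_h + G
    def torque_balance(x):                         # balance of torques
        F_h, F_s = unpack(x)
        tq = np.cross(r_h, F_h) + np.cross(r_g, G)
        for (r_s, _, _, _), F in zip(contacts, F_s):
            tq = tq + np.cross(r_s, F)
        return tq
    def cone(x):                                   # linearized friction cones
        _, F_s = unpack(x)
        vals = []
        for (_, n, t1, t2), F in zip(contacts, F_s):
            fn = np.dot(F, n)
            vals += [fn,
                     mu * fn - np.dot(F, t1), mu * fn + np.dot(F, t1),
                     mu * fn - np.dot(F, t2), mu * fn + np.dot(F, t2)]
        return np.array(vals)
    cons = [{"type": "eq",   "fun": force_balance},
            {"type": "eq",   "fun": torque_balance},
            {"type": "ineq", "fun": cone},
            {"type": "ineq", "fun": lambda x: F_max**2 - x[:3] @ x[:3]}]
    x0 = np.zeros(3 + 3 * m)
    x0[3:] = np.tile(-G / m, m)                    # contacts initially carry G
    res = minimize(lambda x: x[:3] @ x[:3], x0, constraints=cons,
                   method="SLSQP")
    return res.x[:3], res.success

# Illustrative example: a 25 kg board resting on two contact points.
n, t1, t2 = np.eye(3)[2], np.eye(3)[0], np.eye(3)[1]
contacts = [(np.array([0.0,  0.1, 0.0]), n, t1, t2),
            (np.array([0.0, -0.1, 0.0]), n, t1, t2)]
F_h, ok = min_hand_force(contacts,
                         r_h=np.array([0.4, 0.0, 0.3]),
                         r_g=np.array([0.05, 0.0, 0.1]),
                         G=np.array([0.0, 0.0, -25.0 * 9.8]))
print(ok, F_h, np.linalg.norm(F_h))
\end{verbatim}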
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/forces.jpg}
\caption{(a) Force constraints. (b) Determination of candidate grasping poses.}
\label{fig:forces}
\end{center}
\end{figure}
The result of RRT-based object motion planning is an object motion path and the associated grasp poses for each element on the path. We represent them by $\{(\mathbf{p_o}(t), \mathbf{R_o}(t))\}$ and $\{(\mathbf{p_h}(t), \mathbf{R_h}(t))\}$, where the subscripts $\mathbf{o}$ and $\mathbf{h}$ denote the object and the hand, respectively, and $t=1,2,\ldots$ is a time-sequence identifier. The result is used to optimize the motion of the mobile manipulator, as presented in the next section.
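The overall loop can be illustrated with the following self-contained planar toy, in which a board pose $(x,z,\theta)$ must keep its lowest corner inside a fixed-thickness ``crust'' above the floor plane $z=0$; the real planner works on poses in $SE(3)$, updates the crust thickness dynamically, and additionally checks the force constraint at each new node. All dimensions and step sizes here are illustrative.
\begin{verbatim}
import math, random
random.seed(0)

CRUST, HX, HZ = 0.03, 0.40, 0.05        # crust thickness, board half extents

def lowest_corner_height(pose):         # height of the lowest of 4 corners
    x, z, th = pose
    s, c = abs(math.sin(th)), abs(math.cos(th))
    return z - (HX * s + HZ * c)

def in_contact(pose):                   # touching the crust, not penetrating
    h = lowest_corner_height(pose)
    return 0.0 <= h <= CRUST

def dist(a, b):                         # mixed position/rotation metric
    return math.hypot(a[0] - b[0], a[1] - b[1]) + 0.2 * abs(a[2] - b[2])

def steer(a, b, step=0.05):             # small step from a towards b
    d = dist(a, b)
    t = 1.0 if d < step else step / d
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

def rrt(start, goal, iters=20000, goal_bias=0.2):
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        target = goal if random.random() < goal_bias else (
            random.uniform(-1.0, 1.0), random.uniform(0.0, 0.5),
            random.uniform(-0.5, 0.5))
        i = min(range(len(nodes)), key=lambda k: dist(nodes[k], target))
        new = steer(nodes[i], target)
        if not in_contact(new):         # reject nodes that leave the crust
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if dist(new, goal) < 0.05:      # backtrack the pose sequence
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k]); k = parent[k]
            return path[::-1]
    return None

start, goal = (0.0, 0.06, 0.0), (0.6, 0.06, 0.0)   # slide the board 60 cm
path = rrt(start, goal)
print(len(path) if path else "no path under the contact constraint")
\end{verbatim}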
\section{Optimizing Robot Motion}
We use optimization to produce the joint-level robot motion for moving the object following the planned object motion and hand motion. The optimization problem is formulated as follows.
\begin{subequations}\allowdisplaybreaks
\begin{align}
& {\underset{\mathbf{q}(t)}{\text{min}}} & ||\mathbf{q}(t+1)-\mathbf{q}(t)||_2 \label{eq:rbt_opgoal} \\
& {\text{s.t.}}
&
\begin{cases}
{\mathbf{q}(t+1) = [q^0(t+1), q^1(t+1), \ldots, q^d(t+1)]^T} \\
{q^k(t+1) \in [\theta_k^-, \theta_k^+], k=JointID} \\
\end{cases} \label{eq:joints} \\
&&
\begin{cases}
\mathbf{p_r}(t+1), \mathbf{R_r}(t+1) = FK(\mathbf{q}(t+1)) \\
||\mathbf{p_r}(t+1)-\mathbf{p_h}(t+1)||_2\leqslant\varepsilon_p\\
\angle(\mathbf{R_r}(t+1), \mathbf{R_h}(t+1)) \leqslant\varepsilon_R
\end{cases} \label{eq:ik} \\
&&
\begin{cases}
\boldsymbol{\tau}(t+1)=\boldsymbol{J}(\mathbf{q}(t+1))^T\mathbf{F}(t+1)\\
\tau^k(t+1)\in[\tau_k^-, \tau_k^+], k=JointID
\end{cases} \label{eq:joint_tq} \\
&&
\begin{cases}
||q^0(t+1)-q^0(t)||\leqslant d_x\\
||q^1(t+1)-q^1(t)||=0\\
||q^2(t+1)-q^2(t)||=0
\end{cases} \label{eq:base}
\end{align}
\label{eq:optproblem}
\end{subequations}
This optimization aims to find a robot configuration such that the load borne by each joint of the robot does not exceed that joint's torque limit. The optimization is carried out iteratively between every two sequentially adjacent object poses and their associated grasp poses. Equation \eqref{eq:rbt_opgoal} is the optimization goal. Equation \eqref{eq:joints} is the constraint on joint ranges. Equation \eqref{eq:ik} is the constraint on the grasping pose, where $FK$ denotes the forward kinematics. Equation \eqref{eq:joint_tq} is the constraint on joint torques; the torque borne by each joint is computed using the robot Jacobian $\boldsymbol{J}$ and the force $\mathbf{F_h}$ found when solving the object motion planning problem (equation \eqref{eq:handforce}). Equation \eqref{eq:base} is the constraint on the mobile base: we allow the base to translate along its local $x$ direction while keeping the other degrees of freedom fixed. This constraint has no particular physical motivation; it is included only to accelerate the optimization. Like object motion planning, the optimization problem may have no solution. If a feasible robot motion is not found, the grasp pose is judged invalid, and the system switches to a different candidate grasping pose and returns to the motion planning part to start a new optimization routine.
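A hedged, self-contained sketch of the per-step optimization is shown below using a toy planar two-link arm in place of the real mobile manipulator; link lengths, joint and torque limits, and the hand force are illustrative values only.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.5, 0.4                       # link lengths [m]
TAU_MAX = np.array([30.0, 15.0])        # per-joint torque limits [Nm]
Q_MIN, Q_MAX = np.array([-np.pi, -2.5]), np.array([np.pi, 2.5])

def fk(q):                              # planar forward kinematics
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):                        # planar positional Jacobian
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def next_config(q_prev, p_hand, F_hand, eps_p=1e-3):
    cons = [  # hand position, joint-torque, and joint-range constraints
        {"type": "ineq",
         "fun": lambda q: eps_p - np.linalg.norm(fk(q) - p_hand)},
        {"type": "ineq",
         "fun": lambda q: TAU_MAX - np.abs(jacobian(q).T @ F_hand)},
        {"type": "ineq", "fun": lambda q: q - Q_MIN},
        {"type": "ineq", "fun": lambda q: Q_MAX - q},
    ]
    res = minimize(lambda q: np.sum((q - q_prev)**2), q_prev,
                   constraints=cons, method="SLSQP")
    return res.x if res.success else None

q = np.array([0.6, 0.8])                             # current configuration
p_next = fk(q) + np.array([0.03, -0.02])             # next planned hand point
F_hand = np.array([0.0, -20.0])                      # planned holding force [N]
print(next_config(q, p_next, F_hand))
\end{verbatim}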
\section{Experiments and Analysis}
\label{sec:experiments}
We carried out experiments and analyses to examine the performance of the method. Specifically, we used an xArm7 (UFACTORY) as the collaborative manipulator, a Water2 (YUNJI TECHNOLOGY) as the mobile base, and mounted an xArmGripper (UFACTORY) on the xArm7 as the robot hand. The payload of the xArm7 manipulator is 3.5 kg without any end-effector and 1.87 kg after mounting the hand. The environmental obstacles and objects used in the experiments included a work table (length$\times$width$\times$height$=180$ cm$\times74$ cm$\times75.5$ cm, weight$=12.2$ kg), a Japanese tatami block (length$\times$width$\times$height$=$164 cm$\times$82 cm$\times$3 cm, weight$=2.5$ kg), and a plywood cabinet (length$\times$width$\times$height$=$29 cm$\times$41 cm$\times$29 cm, weight$=5.5$ kg). Fig. \ref{fig:robots} illustrates the robots, environmental obstacles, and objects.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/robots.jpg}
\caption{(a) Robot and environmental obstacle (work table). (b) Object 1: Japanese tatami block. (c) Object 2: Cabinet.}
\label{fig:robots}
\end{center}
\end{figure}
\subsection{Mechanical Compliance}
We measured the mechanical compliance of the mobile manipulator following the method presented in Section IV.A. Specifically, we set the manipulator part of the robot to a vertical pose, attached optical markers to its TCP, and measured the TCP displacements by recording the changes in marker positions. The setup is shown in Fig. \ref{fig:exp_com}(a) and (b). As illustrated in the figure, we moved the TCP of the vertically posed manipulator around the mobile base's local $x$ and $y$ axes to obtain the ranges of the two assumed orthogonal joints.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/exp_com.jpg}
\caption{(a) Move the TCP of the vertically posed manipulator around the mobile base's $x$ axis. (b) Move the TCP of the vertically posed manipulator around the mobile base's $y$ axis. (c) Measured displacements. Upper: $x$; Lower: $y$.}
\label{fig:exp_com}
\end{center}
\end{figure}
Fig. \ref{fig:exp_com}(c) shows the recorded data. From the data, we concluded that $\delta d$ in the two directions was 15 mm and 25 mm, respectively. The rotation ranges of the two virtual joints are thus $0.5^\circ$ and $0.859^\circ$ following equation \eqref{eq:dt}.
\subsection{Performance of Object Motion Planning}
After obtaining the rotation ranges, we used them to determine the thickness of $\textnormal{M}_\textnormal{e}$ dynamically and carried out RRT-based object motion planning. First, we ran the experiments in simulation using the Japanese tatami block and planned the object's motion for the two tasks shown in Fig. \ref{fig:tatami_tasks}. In the first task, the robot planned its motion to flip the tatami on the table. In the second task, the robot planned its motion to unload the tatami block from the table to the ground. Since the tatami is heavier than the robot's maximum affordable payload, the robot must ensure that the environment partially supports it during manipulation. The RRT parameters were configured as follows: maximum iterations $=$ 10000, smoothing iterations $=$ 350, and maximum planning time $=$ 100 s.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/tatami_tasks.jpg}
\caption{Two tasks used for analyzing object motion planning. Task 1: Flipping the tatami block on the table. Task 2: Unloading the tatami block from the table.}
\label{fig:tatami_tasks}
\end{center}
\end{figure}
The left part of Table \ref{table:tasks} shows the results of the first task. To understand the dynamic method's performance, we also planned using fixed $\textnormal{M}_\textnormal{e}$ thicknesses and compared them with our dynamic method. Columns 1, 2, and 3 of the table are the results with fixed thicknesses of 0.5 cm, 1.5 cm, and 3 cm, respectively. The last column shows the results using the dynamically changing thickness computed from the robot's TCP position. The planning time and success rate are averages over ten attempts. If the distance between the object and the robot exceeded the length of the arm, the thickness was set to 0, which constrained the trajectory planning to stay within the manipulator's operating range. The results show that different fixed thicknesses led to different performances: thicker ``crust''s led to higher success rates and smaller time costs. When the thickness was 0.5 cm, the success rate was 0\%; maintaining contact during probabilistic planning with such a thin ``crust'' was very difficult. In contrast, the success rate reached 90\% when the thickness was changed to 3 cm. The dynamically changing thickness achieved performance competitive with the 3 cm results. Its range was from 2.6 cm to 3.35 cm: it grew to a value larger than 3 cm when the robot was most extended, and reduced to 2.6 cm when the robot was most folded. We can thus expect the dynamically changing thickness to be more flexible in the former case and safer when the robot is folded.
The right part of Table \ref{table:tasks} shows the results of the second task. The planner could solve the problem even with the thickness fixed at 0.5 cm, but the success rate was low: only two of the ten attempts succeeded. As in the first task, the planning time decreased and the success rate grew as the thickness of $\textnormal{M}_\textnormal{e}$ increased. The proposed dynamic thickness had performance competitive with the 3 cm column. Its range differs from the previous task because the robot's TCP positions shifted significantly.
\begin{table}[hbtp]
\caption{Performance of object trajectory planning with different thickness of $\textnormal{M}_\textnormal{e}$}
\label{table:tasks}
\centering
\begin{tabular}{l|cccc|cccc}
\toprule
& \multicolumn{4}{c|}{Task 1} & \multicolumn{4}{c}{Task 2}\\
\cmidrule(lr){2-5} \cmidrule(lr){6-9}
Thickness (cm) & 0.5 & 1.5 & 3 & 2.60$\sim$3.35 (average 3.00) & 0.5 & 1.5 & 3 & 1.91$\sim$2.38 (average 2.09)\\
\midrule
Planning time (s) & - & 6.34 & 5.82 & 6.29 & 8.39 & 5.81 & 2.40 & 2.92\\
\midrule
Success rate (\%) & 0 & 80 & 90 & 90 & 20 & 40 & 60 & 50\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Real-World Experiments and Torque analysis}
Then, we optimized the robot motion for the two tasks based on the object trajectories obtained using the dynamic thickness. The simulation and real-world execution results are shown in Fig. \ref{fig:flip}. The results indicate that the robot maintained contact between the tatami and the environment during manipulation, letting the environment partially share the load. For task 1, the robot maintained contact between a long edge of the tatami and the table surface; part of the tatami's weight was supported by the work table, as seen in (a.ii), (a.iii), and (a.iv). For task 2, the robot maintained contact between the tatami surface and the short edge of the table; the table partially supported the tatami until it touched the ground, as seen in (b.iii), (b.iv), and (b.v).
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/flip.jpg}
\caption{(a) Simulation and real-world results for task 1. (b) Simulation and real-world results for task 2.}
\label{fig:flip}
\end{center}
\end{figure}
We measured the force borne by the robot gripper in the two tasks using the force sensor installed at the end flange of the robot (see Fig. \ref{fig:robots}(a) for the position of the force sensor). The force curves are shown in Fig. \ref{fig:fr}(a.i) and (b.i). The maximum force borne by the robot hand in each task was smaller than 35 N; the robot was thus bearing a load within its 3.5 kg payload when performing the tasks. The executions were successful, and the proposed method is thus considered effective. Fig. \ref{fig:fr}(a.ii) and (b.ii) show the torque at each joint of the robot. The torque curves are separated into three groups and plotted in three different diagrams since joints 1 and 2, joints 3, 4, and 5, and joints 6 and 7 have the same maximum torques, respectively. The dashed lines in the diagrams show the maximum torque that can be borne by the joints. The torque curves were coherent with the force at the hand, and all of them were smaller than the joints' maximum torques.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/forces_result.jpg}
\caption{Force and torque borne by the hand and joints of the mobile manipulator when performing the two tasks. (a) Task 1. (b) Task 2.}
\label{fig:fr}
\end{center}
\end{figure}
For comparison, we simulated the torques borne by the joints after removing the supporting table. During the simulation, the robot followed the same motion to manipulate the tatami block. Fig. \ref{fig:fr}(a.iii) and (b.iii) show the results. Without supporting forces from the table, the 5th and 6th joints of the robot in the first task and the 3rd and 7th joints in the second task would need to provide torques larger than their maximum bearable values. This observation confirmed that environmental support was helpful for the robot to manipulate the tatami block.
We also examined the necessity of the robot motion optimization. Fig. \ref{fig:fr}(a.iv) and (b.iv) show the torque curves obtained without solving the optimization problem in equation \eqref{eq:optproblem}; the robot motion was instead generated by directly solving the inverse kinematics of the robot along the trajectory of the object and the holding hand. The curves differ from the ones measured with motion optimization. For task 1, the robot could still finish the execution, although the torques borne by the robot joints became larger than the optimized results. For task 2, the robot failed to carry out the execution; the curves in (b.iv) are simulation results, and they show that the 6th joint would need to provide a much larger torque to perform the motion successfully. Optimizing the robot motion helped find a proper robot pose sequence and thus avoid excessive joint loads. For the readers' convenience, we visualize the optimized and non-optimized robot motions in Fig. \ref{fig:copt}; interested readers are referred to the figure for detailed examination.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/compare_opt.jpg}
\caption{Optimized robot motion (*.i$\sim$iii) $vs.$ motion generated by solving IK sequentially (*.i${}^*\sim$iii${}^*$). (a) Task 1. (b) Task 2. The robots moved following pose sequences that led to smaller joint torque.}
\label{fig:copt}
\end{center}
\end{figure}
In addition to the tatami block, we also conducted experiments using the cabinet shown in Fig. \ref{fig:robots}. As with the tatami block, we defined the two tasks shown in Fig. \ref{fig:cab}(a) and (b) and asked the system to find a manipulation sequence for the mobile manipulator to move the cabinet from the initial poses to the goal poses. Since the cabinet is much shorter than the tatami block, we included a smaller and lower table beside the original one and let the robot unload the cabinet from the original table onto the lower one. With the help of the proposed method, the robot successfully solved both problems. Readers may also find execution details in the supplementary video published with this manuscript.
\begin{figure}[!htpb]
\begin{center}
\includegraphics[width=\linewidth]{imgs/cab.jpg}
\caption{Results with the cabinet. (a) Task 1: Unload the cabinet to a lower table. (b) Task 2: Put the cabinet upright.}
\label{fig:cab}
\end{center}
\end{figure}
\section{Conclusions and Future Work}
\label{sec:conclusions}
We proposed an expanded contact model and used it to plan the trajectory of an object while maintaining contact. The robot trajectory was then obtained from the planned object trajectory by solving an optimization problem that incorporates joint torque limits and other constraints. The experimental results show that the method can be applied to various objects and different target states. It helped secure the manipulation of heavy objects by sharing the load between the robot and environmental obstacles, and the robot can avoid overloaded joint torques by following the generated manipulation motion. In the future, we plan to verify whether the simulated behavior can be correctly realized in real environments while addressing the remaining issues discussed above.
\bibliographystyle{IEEEtran}
|